html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/891#issuecomment-229762102,https://api.github.com/repos/pydata/xarray/issues/891,229762102,MDEyOklzc3VlQ29tbWVudDIyOTc2MjEwMg==,17951292,2016-06-30T19:22:43Z,2016-06-30T19:22:43Z,NONE,"Thanks, Stephan! Enjoy your vacation! For now I am processing multiple files over an OPeNDAP connection using netCDF4.MFDataset and then building a DataArray or Dataset from the variables stored within the resulting MFDataset object. This appears much faster than the process recommended here when a lot of big files are involved: http://xarray.pydata.org/en/stable/io.html (i.e., using read_netcdfs). Once a bug fix is implemented, I'll try using the dask-optimized open_mfdataset().","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,162726984
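
A minimal sketch of the workaround the comment describes (aggregating remote files with netCDF4.MFDataset and then wrapping the result in an xarray.DataArray). The OPeNDAP URLs and the variable names ("tas", "time", "lat", "lon") are hypothetical placeholders, not from the original comment; adjust them to the dataset at hand.

```python
import netCDF4
import xarray as xr

# Hypothetical OPeNDAP endpoints; replace with real dataset URLs.
urls = [
    "http://example.com/thredds/dodsC/file1.nc",
    "http://example.com/thredds/dodsC/file2.nc",
]

# Aggregate the remote files along their shared unlimited (record) dimension.
mf = netCDF4.MFDataset(urls)

var = mf.variables["tas"]

# Build an xarray.DataArray from the aggregated variable and its coordinates.
da = xr.DataArray(
    var[:],
    dims=var.dimensions,
    coords={
        "time": mf.variables["time"][:],
        "lat": mf.variables["lat"][:],
        "lon": mf.variables["lon"][:],
    },
    name="tas",
)

mf.close()
```

Note that this eagerly reads the aggregated variable into memory, whereas open_mfdataset builds a lazy dask-backed Dataset; the comment's speed comparison concerns the multi-file combine step, not lazy versus eager access.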