html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1981#issuecomment-373806224,https://api.github.com/repos/pydata/xarray/issues/1981,373806224,MDEyOklzc3VlQ29tbWVudDM3MzgwNjIyNA==,6181563,2018-03-16T18:34:19Z,2018-03-16T18:34:19Z,CONTRIBUTOR,distributed,"{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,304201107
https://github.com/pydata/xarray/issues/1981#issuecomment-373794415,https://api.github.com/repos/pydata/xarray/issues/1981,373794415,MDEyOklzc3VlQ29tbWVudDM3Mzc5NDQxNQ==,6181563,2018-03-16T17:53:44Z,2018-03-16T17:53:44Z,CONTRIBUTOR,"For what it's worth, this is exactly the workflow I use (https://github.com/OceansAus/cosima-cookbook) when opening a large number of netCDF files:
import dask.bag
import xarray as xr

bag = dask.bag.from_sequence(ncfiles)
load_variable = lambda ncfile: xr.open_dataset(ncfile,
                                               chunks=chunks,
                                               decode_times=False)[variables]
bag = bag.map(load_variable)
dataarrays = bag.compute()
and then
dataarray = xr.concat(dataarrays,
                      dim='time', coords='all')
and it appears to work well.
Code snippets from cosima-cookbook/cosima_cookbook/netcdf_index.py
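Putting the pieces together, here is a minimal self-contained sketch of the same workflow; the glob pattern, variable list, and chunk sizes are hypothetical placeholders, not values from the cookbook:

import glob
import dask.bag
import xarray as xr

# Hypothetical inputs: a set of netCDF files sharing a 'time' dimension.
ncfiles = sorted(glob.glob('output/*.nc'))
variables = ['temp']   # variables to extract from each file (placeholder)
chunks = {'time': 1}   # dask chunking applied when opening (placeholder)

# Open each file lazily, in parallel, via a dask bag.
bag = dask.bag.from_sequence(ncfiles)
bag = bag.map(lambda f: xr.open_dataset(f, chunks=chunks,
                                        decode_times=False)[variables])
datasets = bag.compute()   # returns a list, one Dataset per file

# Stitch the per-file datasets back together along the time dimension.
combined = xr.concat(datasets, dim='time', coords='all')

Note that selecting with a list of variable names returns a Dataset rather than a DataArray, so the objects being concatenated here are Datasets.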
","{""total_count"": 3, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,304201107