issue_comments: 485497398
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/2912#issuecomment-485497398 | https://api.github.com/repos/pydata/xarray/issues/2912 | 485497398 | MDEyOklzc3VlQ29tbWVudDQ4NTQ5NzM5OA== | 2443309 | 2019-04-22T18:06:56Z | 2019-04-22T18:06:56Z | MEMBER | Since the final dataset size is quite manageable, I would start by forcing computation before the write step: While writing xarray datasets backed by dask is possible, it's a poorly optimized operation. Most of this comes from constraints in netCDF4/HDF5. There are ways to side step some of these challenges ( | {"total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 435535284 |
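The comment's advice to force computation before the write step can be sketched as below. The dataset construction and variable names here are hypothetical stand-ins for the user's dask-backed dataset; the key step is calling `compute()` (or `load()`) so the write happens against in-memory numpy arrays rather than lazy dask chunks.

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the user's dask-backed dataset.
ds = xr.Dataset(
    {"t": (("x", "y"), np.arange(12.0).reshape(4, 3))}
).chunk({"x": 2})  # dask-backed: values are not computed yet

# Force computation into memory before the write step:
ds = ds.compute()  # or ds.load() to load in place

# The subsequent write is now a plain in-memory operation, e.g.:
# ds.to_netcdf("out.nc")
```

This trades peak memory for a simpler, faster write, which is reasonable here since the comment notes the final dataset size is manageable.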