issue_comments: 408928221
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2329#issuecomment-408928221 | https://api.github.com/repos/pydata/xarray/issues/2329 | 408928221 | MDEyOklzc3VlQ29tbWVudDQwODkyODIyMQ== | 1197350 | 2018-07-30T16:37:05Z | 2018-07-30T16:37:23Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 345715825 |

body:

Can you forget about zarr for a moment and just do a reduction on your dataset? For example:
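A minimal sketch of such a reduction is shown below; the file pattern, the chunk sizes, and the variable name `ssh` are placeholders rather than values from the original report, so substitute whatever you are actually using:

```python
import xarray as xr

# Open the same files with the same chunk arguments you use for the zarr write.
# The glob pattern and chunk sizes here are placeholders.
ds = xr.open_mfdataset("/path/to/files/*.nc", chunks={"time": 1})

# A simple reduction over the whole dataset, forced to compute eagerly.
# If this is already slow, the bottleneck is reading/rechunking the netCDF
# files rather than writing to the zarr store.
result = ds["ssh"].mean().load()  # "ssh" is a placeholder variable name
print(result)
```

Calling `.load()` (or `.compute()`) forces the whole dask graph to execute without involving zarr at all, which isolates the read side of the pipeline.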
Keep the same chunk arguments you are currently using. This will help us understand if the problem is with reading the files. Is it your intention to chunk the files contiguously in time? Depending on the underlying structure of the data within the netCDF file, this could amount to a complete transposition of the data, which could be very slow / expensive. This could have some parallels with #2004.