issue_comments: 406705740
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2300#issuecomment-406705740 | https://api.github.com/repos/pydata/xarray/issues/2300 | 406705740 | MDEyOklzc3VlQ29tbWVudDQwNjcwNTc0MA== | 1530840 | 2018-07-20T19:36:08Z | 2018-07-20T19:38:03Z | NONE | Ah, that's great. I do see some improvement. Specifically, I can now set chunks using xarray, successfully write to zarr, and reopen it. However, when reopening it I do find that the chunks have been inconsistently applied (some fields have the expected chunksize whereas some small fields have the entire variable in one chunk). Furthermore, trying to write a second time with … I also tried loading my entire dataset into memory, allowing the initial … Curious: is there any downside in xarray to using datasets with inconsistent chunks? I take it that it is a supported configuration, because xarray allows it to happen but just outputs that error when calling … One other thing to add: it might be nice to have an option to allow zarr auto-chunking even when … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 342531772 |
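
A minimal sketch of the round trip the comment describes: chunk a dataset through xarray, write it to zarr, reopen it, and inspect chunking per variable versus at the dataset level. This is not the commenter's code; the file name `example.zarr`, the variable names, and the array shapes are made-up assumptions, and whether `Dataset.chunks` actually raises depends on how the individual variables end up chunked after the round trip.

```python
import numpy as np
import xarray as xr

# Build a toy dataset with one large and one small variable
# (names and shapes are illustrative only).
ds = xr.Dataset(
    {
        "big": (("time", "x"), np.zeros((1000, 1000))),
        "small": (("x",), np.zeros(1000)),
    }
)

# Chunk through xarray (backed by dask), write to zarr, and reopen.
ds = ds.chunk({"time": 100, "x": 100})
ds.to_zarr("example.zarr", mode="w")
reopened = xr.open_zarr("example.zarr")

# Chunking can always be inspected per variable; after a round trip
# the stored chunks may differ between large and small variables.
for name, var in reopened.data_vars.items():
    print(name, var.chunks)

# Dataset.chunks raises ValueError ("inconsistent chunks") when the
# same dimension has different chunk sizes across variables, which is
# the error referred to in the comment.
try:
    print(reopened.chunks)
except ValueError as err:
    print("Dataset-level chunks unavailable:", err)
```

Per-variable chunking remains accessible through `DataArray.chunks` even when the dataset-level property refuses to summarize it, which is why an "inconsistently chunked" dataset is still usable in practice.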