issue_comments: 906023525
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/3486#issuecomment-906023525 | https://api.github.com/repos/pydata/xarray/issues/3486 | 906023525 | IC_kwDOAMm_X842ANJl | 2656596 | 2021-08-26T02:19:50Z | 2021-08-26T02:19:50Z | NONE | This seems to be an ongoing problem (Unexpected behaviour when chunking with multiple netcdf files in xarray/dask; Performance of chunking in xarray/dask when opening and re-chunking a dataset) that has not been resolved, nor has feedback been provided. I've been running into this problem while trying to handle netCDF files that are larger than my RAM. From my testing, chunks must be passed to open_mfdataset to be of any use; the chunk method on the dataset after opening seems to do nothing in this use case. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 517799069 |
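The workaround the commenter describes (passing `chunks=` to `open_mfdataset` at open time, rather than calling `.chunk()` afterwards) can be sketched as below. This is a minimal illustration, not the commenter's actual code: the file names, variable names, and dimension sizes are invented, and it assumes xarray with dask and a netCDF backend installed.

```python
import os
import tempfile

import numpy as np
import xarray as xr

# Write two tiny netCDF files to stand in for a multi-file dataset
# (paths and sizes are illustrative only).
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(2):
    part = xr.Dataset(
        {"t": (("time", "x"), np.random.rand(4, 8))},
        coords={"time": np.arange(i * 4, (i + 1) * 4), "x": np.arange(8)},
    )
    path = os.path.join(tmpdir, f"part{i}.nc")
    part.to_netcdf(path)
    paths.append(path)

# Passing chunks= here makes each variable a lazy dask array from the
# start, so data larger than RAM is never loaded into memory at once.
ds = xr.open_mfdataset(paths, chunks={"time": 2, "x": 4})

# The variable is backed by dask, and computation stays lazy until
# .compute()/.load() is called.
print(type(ds["t"].data).__module__)
print(ds["t"].chunks)
```

By contrast, per the comment, opening the files first and then calling `ds.chunk(...)` did not help in this larger-than-RAM scenario, since the initial open may already have forced eager loading.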