issues: 617476316
| field | value |
|---|---|
| id | 617476316 |
| node_id | MDU6SXNzdWU2MTc0NzYzMTY= |
| number | 4055 |
| title | Automatic chunking of arrays ? |
| user | 56925856 |
| state | closed |
| locked | 0 |
| comments | 11 |
| created_at | 2020-05-13T14:02:41Z |
| updated_at | 2020-05-25T19:23:45Z |
| closed_at | 2020-05-25T19:23:45Z |
| author_association | CONTRIBUTOR |
| reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/4055/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| state_reason | completed |
| repo | 13221727 |
| type | issue |

body:

Hi there,

Hopefully this turns out to be a basic issue, but I was wondering why the […]

I get the impression that the dask method automatically tries to prevent the problems of "too many chunks" or "too few chunks" that can sometimes happen when choosing chunk sizes by hand. If so, it would maybe be a useful thing to include in future versions?

Happy to be corrected if I've misunderstood something here, though; I'm still getting my head around how the dask/xarray compatibility really works...

Cheers!
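The "too many chunks / too few chunks" trade-off the issue asks about can be illustrated with a minimal, self-contained sketch. This is *not* dask's actual implementation of `chunks="auto"` — the function name `auto_chunks` and the 128 MiB target are assumptions for illustration — but it shows the spirit of the heuristic: aim each chunk at a fixed byte budget so chunks are neither tiny and numerous nor huge and few.

```python
# Hypothetical sketch of an "auto" chunking heuristic, loosely inspired by
# dask's chunks="auto" behavior. All names and the default 128 MiB target
# are assumptions for illustration, not dask's real API or internals.

def auto_chunks(shape, itemsize, target_bytes=128 * 2**20):
    """Return a chunk shape whose per-chunk size is near target_bytes.

    shape      -- full array shape, e.g. (10000, 10000)
    itemsize   -- bytes per element, e.g. 8 for float64
    """
    ndim = len(shape)
    # Edge length of a hypercube-shaped chunk holding ~target_bytes.
    edge = int((target_bytes / itemsize) ** (1 / ndim))
    # Clamp each dimension's chunk length to the array's actual extent,
    # and never go below 1.
    return tuple(min(max(edge, 1), dim) for dim in shape)

# A 10000 x 10000 float64 array gets 4096 x 4096 chunks (exactly 128 MiB each).
print(auto_chunks((10000, 10000), itemsize=8))  # → (4096, 4096)
```

With real dask arrays the equivalent would simply be `da.ones((10000, 10000), chunks="auto")`; the sketch above only mimics the size-targeting idea in plain Python.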