issue_comments: 1005162696
html_url: https://github.com/pydata/xarray/issues/6036#issuecomment-1005162696
issue_url: https://api.github.com/repos/pydata/xarray/issues/6036
id: 1005162696
node_id: IC_kwDOAMm_X8476ZDI
user: 3698640
created_at: 2022-01-04T20:53:36Z
updated_at: 2022-01-04T20:54:13Z
author_association: CONTRIBUTOR
body:

This isn't a fix for the overhead required to manage an arbitrarily large graph, but note that creating chunks this small (size 1 in this case) is explicitly not recommended. See the dask docs on Array Best Practices ("Select a good chunk size"): they recommend chunks no smaller than 100 MB. Your chunks are 8 bytes.

This creates 1 billion tasks, which does result in an enormous overhead - there's no way around this. Note that storing this on disk would not help - the problem results from the fact that 1 billion tasks will almost certainly overwhelm any dask scheduler. The general dask best practices guide recommends keeping the number of tasks below 1 million if possible.

Also, I don't think the issue here is in specifying the universe of tasks that need to be created, but rather in creating and managing the Python task objects themselves, so pre-computing or storing them wouldn't help.

For me, changing to (1000, 1000, 100) chunks (~750 MB for a float64 array) reduces the time to a couple of ms:
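The snippet the comment refers to isn't preserved in this export. As a rough sketch of the kind of comparison being described (the array shape, dimension names, and timed operation below are assumptions, not the author's code):

```python
# Illustrative sketch only -- shape, dims, and the timed operation are assumed.
import time

import dask.array as da
import xarray as xr

shape = (1000, 1000, 1000)  # ~8 GB of float64 in total (assumed size)

t0 = time.perf_counter()
# (1000, 1000, 100) chunks -> blocks of ~750 MB each, so only a handful of tasks
data = da.zeros(shape, chunks=(1000, 1000, 100), dtype="float64")
arr = xr.DataArray(data, dims=("x", "y", "z"))
print(f"graph construction took {time.perf_counter() - t0:.4f} s")
print("number of blocks:", data.numblocks)  # (1, 1, 10)
```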
With this chunking scheme, you could store and work with much, much more data. In fact, scaling the size of your example by 3 orders of magnitude only increases the runtime by ~5x:
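Again as a sketch under the same assumptions: scaling the array up by three orders of magnitude only multiplies the number of chunk objects, not their size, so the task graph stays small enough to build quickly.

```python
# Same chunking, ~1000x more data (the shape is again an assumption).
import time

import dask.array as da
import xarray as xr

t0 = time.perf_counter()
big = da.zeros((10_000, 10_000, 10_000), chunks=(1000, 1000, 100), dtype="float64")
big_arr = xr.DataArray(big, dims=("x", "y", "z"))
print(f"graph construction took {time.perf_counter() - t0:.4f} s")
print("number of blocks:", big.numblocks)  # (10, 10, 100) -> 10,000 chunks
```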
reactions: { "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 1068225524