issue_comments: 1165001097
html_url: https://github.com/pydata/xarray/issues/6709#issuecomment-1165001097
issue_url: https://api.github.com/repos/pydata/xarray/issues/6709
id: 1165001097
node_id: IC_kwDOAMm_X85FcIGJ
user: 3309802
created_at: 2022-06-23T23:15:19Z
updated_at: 2022-06-23T23:15:19Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 1277437106
body:

I took a bit more of a look at this, and I don't think root task overproduction is the (only) problem here. Intuitively, I also feel like this operation shouldn't require holding so many root tasks around at once. But the graph dask is making, or how it's ordering it, doesn't seem to work that way.

We can see the ordering is pretty bad:

[task-ordering visualization not preserved in this export]

When we actually run it (on https://github.com/dask/distributed/pull/6614, with overproduction fixed), you can see that dask has to keep tons of the input chunks in memory, because they're going to be needed by a future task that isn't able to run yet (because not all of its inputs have been computed):

[dashboard screenshot not preserved in this export]

I feel like it's possible that the order in which dask is executing the input tasks is bad? But I think it's more likely that I haven't thought about the problem enough, and there's an obvious reason why the graph is structured like this.
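The ordering the comment refers to can be inspected for any dask graph. Below is a minimal sketch of that inspection; the array workload, shapes, chunking, and output filename are illustrative assumptions, not the reproducer from this issue. It uses dask's documented `dask.order.order` function and the `color="order"` option of `.visualize()` (which requires graphviz).

```python
# Minimal sketch: inspect dask's static task ordering for a graph.
# NOTE: this workload is an illustrative assumption, not the actual
# computation from pydata/xarray#6709.
import dask.array as da
from dask.order import order

# An arbitrary chunked computation in which many root chunks feed
# tasks that cannot run until most of their inputs exist.
x = da.random.random((1000, 1000), chunks=(100, 100))
y = (x - x.mean(axis=0)).sum()

# dask.order.order assigns each task an integer priority; lower runs
# earlier. Comparing a root task's priority with its consumers' shows
# how long that input chunk must be held in memory.
priorities = order(dict(y.__dask_graph__()))

# Color nodes by execution order to spot root tasks that are scheduled
# early but whose consumers run much later ("bad" ordering).
y.visualize(color="order", cmap="autumn", filename="order.svg")
```

If the resulting picture shows early-ordered root tasks whose downstream consumers carry much higher order numbers, the scheduler must keep those root chunks resident in the interim, which matches the memory behavior described above.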