issue_comments: 358123564
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/1832#issuecomment-358123564 | https://api.github.com/repos/pydata/xarray/issues/1832 | 358123564 | MDEyOklzc3VlQ29tbWVudDM1ODEyMzU2NA== | 306380 | 2018-01-16T22:07:11Z | 2018-01-31T17:31:49Z | MEMBER | Looking at the worker diagnostic page during execution is informative. The worker has a ton of work it can do and a ton of communication it needs to do (sharing results with other workers to compute the reductions). In this example it is able to start new work much faster than it is able to communicate results to peers, leading to significant buildup. These two processes happen asynchronously without any back-pressure between them, so most of the input is produced before it can be reduced and processed. That's my current guess anyway. I could imagine pausing worker threads when there is a heavy communication buildup, though I'm not sure how generally valuable that would be. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 288785270
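
The pausing idea in the comment amounts to adding back-pressure between the compute path and the communication path. Here is a minimal sketch of that idea in plain Python, assuming nothing about Dask's internals: a bounded queue sits between a "compute" thread and a "communicate" thread, so the producer blocks (pausing computation) whenever the outbound buffer is full. The names (`outbox`, `compute`, `communicate`) and the queue bound are illustrative, not Dask's actual API or mechanism.

```python
import queue
import threading
import time

# Bounded queue between compute and communication. With a bound, put()
# blocks when the queue is full, pausing computation until communication
# catches up. With no bound (maxsize=0), results pile up in memory,
# which is the buildup described in the comment above.
outbox = queue.Queue(maxsize=8)  # bound chosen arbitrarily for the demo


def compute(n_tasks):
    for i in range(n_tasks):
        result = i * i    # stand-in for real work (fast)
        outbox.put(result)  # blocks when the outbox is full: back-pressure
    outbox.put(None)        # sentinel: no more results


def communicate():
    while True:
        result = outbox.get()
        if result is None:
            break
        time.sleep(0.01)    # stand-in for slow transfer to a peer worker


threads = [threading.Thread(target=compute, args=(100,)),
           threading.Thread(target=communicate)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the two loops run asynchronously, the queue bound is the only thing coupling their rates; removing it reproduces the failure mode where nearly all input is materialized before any of it is communicated and reduced.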