issue_comments: 398586226
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/2237#issuecomment-398586226 | https://api.github.com/repos/pydata/xarray/issues/2237 | 398586226 | MDEyOklzc3VlQ29tbWVudDM5ODU4NjIyNg== | 306380 | 2018-06-20T00:26:39Z | 2018-06-20T00:26:39Z | MEMBER | Thanks. This example helps.<br><br>I'm not sure I understand this, though the situation on the whole does seem sensible to me. This starts to look a little like a proper shuffle (using dataframe terminology). Each of your 365 output partitions would presumably touch 1/12th of your input partitions, leading to a quadratic number of tasks. If, after doing something, you then wanted to rearrange your data back, presumably that would require an equivalent number of extra tasks. Am I understanding the situation correctly? | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |  | 333312849 |
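
A minimal dask sketch of the quadratic-task argument in the comment may help. Everything here is an illustrative assumption rather than anything taken from the issue: the array shape, the simplified 30-day "months", and the variable names are made up to keep the arithmetic clean.

```python
import dask.array as da

# Hypothetical setup: ten years of daily data stored in one chunk
# per month, with 30-day "months" so the arithmetic stays simple
# (12 chunks per year, 120 input chunks total).
years = 10
x = da.ones((years * 12 * 30,), chunks=(30,))

# Selecting a single day-of-year reads one chunk per year,
# i.e. 1/12th of the 120 input chunks.
day = x[5::360]             # day 5 of every year
print(len(day.chunks[0]))   # 10 output pieces, one per year

# Building all 360 such day-of-year groups would therefore touch
# 360 groups * 10 chunks = 3600 (output, input) pairs: the number
# of graph edges grows quadratically with the partition counts,
# which is the shuffle-like blowup described in the comment.
```

Rearranging the grouped results back into the original monthly chunking would traverse an equivalent set of edges in the other direction, roughly doubling the task count, which matches the comment's point about the extra tasks needed for the round trip.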