issue_comments
4 rows where author_association = "MEMBER", issue = 333312849, and user = 2448579, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1126847735 | https://github.com/pydata/xarray/issues/2237#issuecomment-1126847735 | https://api.github.com/repos/pydata/xarray/issues/2237 | IC_kwDOAMm_X85DKlT3 | dcherian 2448579 | 2022-05-15T02:44:06Z | 2022-05-15T02:44:06Z | MEMBER | Fixed on main with | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | why time grouping doesn't preserve chunks 333312849 |
789078512 | https://github.com/pydata/xarray/issues/2237#issuecomment-789078512 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDc4OTA3ODUxMg== | dcherian 2448579 | 2021-03-02T17:29:51Z | 2021-03-02T18:03:17Z | MEMBER | I think the behaviour in Ryan's most recent comment is a consequence of groupby.mean being I think the fundamental question is: Is it really possible for dask to recognize that the chunk structure after the We can explicitly ask for consolidation of chunks by saying the output should be chunked Then if we set Can we make dask recognize that the 5 getitem tasks from input-chunk-0, at the bottom of each tower, can be fused to a single task? In that case, fuse the 5 getitem tasks and "propagate" that fusion up the tower. I guess another failure here is that when | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | why time grouping doesn't preserve chunks 333312849 |
789090356 | https://github.com/pydata/xarray/issues/2237#issuecomment-789090356 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDc4OTA5MDM1Ng== | dcherian 2448579 | 2021-03-02T17:48:01Z | 2021-03-02T17:48:47Z | MEMBER | Reading up on fusion, the docstring says So we need the opposite: fuse "single input, multiple output" to a single task when some appropriate heuristic is satisfied. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | why time grouping doesn't preserve chunks 333312849 |
482241098 | https://github.com/pydata/xarray/issues/2237#issuecomment-482241098 | https://api.github.com/repos/pydata/xarray/issues/2237 | MDEyOklzc3VlQ29tbWVudDQ4MjI0MTA5OA== | dcherian 2448579 | 2019-04-11T18:22:41Z | 2019-04-11T18:22:41Z | MEMBER | Can this be closed or is there something to do on the xarray side now that dask/dask#3648 has been merged? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | why time grouping doesn't preserve chunks 333312849 |
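The chunk behaviour these comments discuss can be reproduced with a small xarray + dask sketch (the data, chunk sizes, and dimension names are made up for illustration, and the exact fragmentation depends on the xarray/dask versions — per the first comment, this was fixed on main): subtracting a monthly climatology computed via groupby indexes each month's members out of every input chunk, and consolidation has to be requested explicitly.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Ten years of monthly data, chunked one year (12 steps) per chunk.
time = pd.date_range("2000-01-01", periods=120, freq="MS")
da = xr.DataArray(
    np.random.default_rng(0).random((120, 4)),
    dims=("time", "x"),
    coords={"time": time},
).chunk({"time": 12})

# Each monthly anomaly is assembled by indexing one element per month out of
# every input chunk, which (in the affected versions) fragments the time axis
# into many small chunks instead of preserving the original ten.
anom = da.groupby("time.month") - da.groupby("time.month").mean()
print(anom.data.chunks[0])

# Consolidation of chunks must be asked for explicitly:
consolidated = anom.chunk({"time": 12})
```

The point of the comments above is that dask cannot currently infer this consolidation on its own; the user has to rechunk after the groupby arithmetic.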
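For context on the fusion comment: the kind of fusion dask's optimizer performs today can be seen with `dask.optimization.fuse`, which collapses linear chains into a single task; the comment asks for the reverse ("single input, multiple output") direction. A minimal sketch, with a made-up graph and `inc` helper:

```python
from dask.core import get
from dask.optimization import fuse


def inc(x):
    # Toy task used only for illustration.
    return x + 1


# A linear chain: one data key "a" feeding two dependent tasks.
dsk = {"a": 1, "b": (inc, "a"), "c": (inc, "b")}

# fuse() collapses the b -> c chain into a single task producing "c".
fused, dependencies = fuse(dsk, keys=["c"], rename_keys=False)
print(fused)          # the fused graph has fewer tasks than dsk
print(get(fused, "c"))
```

Fusing in the opposite direction — merging several tasks that all read from the same input chunk, as the comment proposes — has no equivalent here.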
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
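The filter shown at the top of this page ("4 rows where author_association = "MEMBER" …") corresponds to a straightforward query against this schema. A sketch using Python's built-in sqlite3 (an in-memory stand-in, not the actual database behind this page; only one sample row from the table above is inserted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One sample row taken from the table on this page.
conn.execute(
    "INSERT INTO issue_comments (id, [user], issue, author_association, updated_at)"
    " VALUES (?, ?, ?, ?, ?)",
    (482241098, 2448579, 333312849, "MEMBER", "2019-04-11T18:22:41Z"),
)

# The query behind this page's row filter and sort order.
rows = conn.execute(
    """
    SELECT id, updated_at FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (333312849, 2448579),
).fetchall()
print(rows)
```

The two indexes in the schema cover the `issue` and `user` equality filters, which is why this per-issue, per-user listing stays cheap even on a large comments table.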