issue_comments
6 rows where issue = 984555353 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
1479640004 | https://github.com/pydata/xarray/issues/5754#issuecomment-1479640004 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X85YMYPE | yucsong 92543657 | 2023-03-22T14:13:06Z | 2023-03-22T14:51:44Z | NONE | I simply tested var.stack(new=("x", "y")) and I got the above message. I don't understand why lines 1521 to 1527 of xarray/core/variable.py do a reshape. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
1479711087 | https://github.com/pydata/xarray/issues/5754#issuecomment-1479711087 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X85YMplv | dcherian 2448579 | 2023-03-22T14:51:42Z | 2023-03-22T14:51:42Z | MEMBER | This is fine. That warning says they're fixing the issue reported here. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
1479626289 | https://github.com/pydata/xarray/issues/5754#issuecomment-1479626289 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X85YMU4x | dcherian 2448579 | 2023-03-22T14:04:40Z | 2023-03-22T14:04:40Z | MEMBER | It was fixed in dask, but we're still sub-optimal. Do you have an example of a problem? Please open a new issue with a reproducible example if you do. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
1479578109 | https://github.com/pydata/xarray/issues/5754#issuecomment-1479578109 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X85YMJH9 | yucsong 92543657 | 2023-03-22T13:34:58Z | 2023-03-22T13:34:58Z | NONE | Sorry, is this fixed? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
993814525 | https://github.com/pydata/xarray/issues/5754#issuecomment-993814525 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X847PGf9 | dcherian 2448579 | 2021-12-14T17:31:45Z | 2021-12-14T17:31:45Z | MEMBER | Fixed upstream | { "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
909841259 | https://github.com/pydata/xarray/issues/5754#issuecomment-909841259 | https://api.github.com/repos/pydata/xarray/issues/5754 | IC_kwDOAMm_X842OxNr | dcherian 2448579 | 2021-09-01T03:20:15Z | 2021-09-01T03:22:43Z | MEMBER | Ah, this is https://github.com/dask/dask/issues/5544 again. It looks like dask needs to break up the potentially-very-large intermediate chunks. That said, our strategy of transposing first means that the optimization implemented in https://github.com/dask/dask/issues/5544#issuecomment-712280433 doesn't kick in in this case. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Variable.stack constructs extremely large chunks 984555353
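The comments above discuss how Variable.stack on dask-backed data goes through a reshape that can build very large intermediate chunks. A minimal sketch of that behaviour is below, using the public DataArray.stack (which delegates to Variable.stack); the array sizes and chunk sizes are illustrative assumptions, not taken from the original report.

```python
import dask.array as da
import xarray as xr

# Hypothetical sizes and chunking, chosen only to illustrate the shape of the problem.
arr = xr.DataArray(
    da.zeros((4000, 4000), chunks=(400, 400)),
    dims=("x", "y"),
    name="var",
)

# Stacking merges "x" and "y" into one dimension. On dask-backed data this is
# implemented as a reshape, and the reshape may need to build intermediate
# chunks much larger than the input chunks (dask/dask#5544). Depending on the
# dask version, it may emit a PerformanceWarning about increased chunk sizes;
# per the comments above, that warning indicates dask is itself handling the
# large-chunk problem.
stacked = arr.stack(new=("x", "y"))
print(stacked.chunks)
```

With larger arrays the intermediate chunks produced by the reshape grow accordingly, which is the "extremely large chunks" symptom named in the issue title.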
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
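A query reproducing the view above (6 rows where issue = 984555353, ordered by updated_at descending) might look like the following sketch using Python's sqlite3; the database filename is a placeholder, not part of the original page.

```python
import sqlite3

# "github.db" is a hypothetical filename; point this at the SQLite file
# behind the Datasette instance.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 984555353        -- the issue shown on this page
    ORDER BY updated_at DESC       -- same ordering as the table above
    """
).fetchall()

for comment_id, user, created, updated, association, body in rows:
    print(comment_id, user, updated, body[:60])
```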