issue_comments
6 rows where author_association = "MEMBER" and issue = 473692721 ("rolling: bottleneck still not working properly with dask arrays"), sorted by updated_at descending
id | user | created_at | updated_at | body
---|---|---|---|---
[516195053](https://github.com/pydata/xarray/issues/3165#issuecomment-516195053) | shoyer 1217238 | 2019-07-29T23:05:57Z | 2019-07-29T23:05:57Z | I think this triggers a case that dask's scheduler doesn't handle well, related to this issue: https://github.com/dask/dask/issues/874
[516193739](https://github.com/pydata/xarray/issues/3165#issuecomment-516193739) | shoyer 1217238 | 2019-07-29T23:00:37Z | 2019-07-29T23:00:37Z | Actually, there does seem to be something fishy going on here. I find that I'm able to execute
[516193582](https://github.com/pydata/xarray/issues/3165#issuecomment-516193582) | shoyer 1217238 | 2019-07-29T22:59:48Z | 2019-07-29T22:59:48Z | For context, xarray's rolling window code creates a "virtual dimension" for the rolling window. So if your chunks are size (5000, 100) before the rolling window, they are size (5000, 100, 100) within the rolling window computation. So it's not entirely surprising that there are more issues with memory usage -- these are much bigger arrays, e.g., see
[516187643](https://github.com/pydata/xarray/issues/3165#issuecomment-516187643) | shoyer 1217238 | 2019-07-29T22:33:56Z | 2019-07-29T22:33:56Z | You want to use the `chunks` argument inside `da.zeros`, e.g., `da.zeros((5000, 50000), chunks=100)`. On Mon, Jul 29, 2019 at 3:30 PM peterhob notifications@github.com wrote:
[516060323](https://github.com/pydata/xarray/issues/3165#issuecomment-516060323) | shoyer 1217238 | 2019-07-29T16:20:07Z | 2019-07-29T16:20:07Z | Did you try converting
[515738254](https://github.com/pydata/xarray/issues/3165#issuecomment-515738254) | shoyer 1217238 | 2019-07-28T06:55:43Z | 2019-07-28T06:55:43Z | Have you tried adding more chunking, e.g., along the x dimension? That's the usual recommendation if you're running out of memory.
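The "virtual dimension" that shoyer describes in comment 516193582 can be sketched with a NumPy analogue. `sliding_window_view` is an illustrative stand-in, not necessarily xarray's internal code path, but it shows the same effect: a rolling window adds a window-length axis to the array.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# A small (500, 100) array stands in for one (5000, 100) chunk.
a = np.zeros((500, 100))

# Viewing a rolling window of size 10 along axis 1 adds a third,
# "virtual" window dimension; the result has shape (500, 91, 10).
w = sliding_window_view(a, window_shape=10, axis=1)
print(w.shape)
```

With a window of 100 on a (5000, 100) chunk, as in the comment, that third axis makes the in-window arrays roughly 100 times larger than the originals, which is why memory pressure grows so quickly.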
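The `chunks` argument advice in comment 516187643 can be written out as a short sketch (the shapes follow the comment's example; the byte figures assume float64):

```python
import dask.array as da

# chunks=100 applies to every dimension, giving 100x100 blocks.
# Each block is 100 * 100 * 8 bytes = 80 kB, instead of one
# monolithic 5000 * 50000 * 8 bytes = 2 GB array.
x = da.zeros((5000, 50000), chunks=100)
print(x.chunksize)   # per-block shape
print(x.numblocks)   # number of blocks along each dimension
```

This is also the shape of the advice in the earliest comment (515738254): adding chunking along the x dimension keeps each block, and therefore each rolling-window computation, small enough to fit in memory.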
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```