issue_comments
2 rows where author_association = "CONTRIBUTOR" and issue = 326533369 sorted by updated_at descending
id: 1046665303
html_url: https://github.com/pydata/xarray/issues/2186#issuecomment-1046665303
issue_url: https://api.github.com/repos/pydata/xarray/issues/2186
node_id: IC_kwDOAMm_X84-YthX
user: lumbric (691772)
created_at: 2022-02-21T09:41:00Z
updated_at: 2022-02-21T09:41:00Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Memory leak while looping through a Dataset (326533369)
body:

I just stumbled across the same issue and created a minimal example similar to @lkilcher. I am using

What seems to work: do not use the

If I understand things correctly, this indicates that the issue is a consequence of dask/dask#3530. Not sure if there is anything to be fixed on the xarray side or what the best workaround would be. I will try to use the processes scheduler.

I can create a new (xarray) ticket with all details about the minimal example, if anyone thinks that this might be helpful (to collect workarounds or discuss fixes on the xarray side).
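The processes-scheduler workaround this comment mentions can be sketched as follows. This is a minimal illustration, not code from the issue: the file pattern and the per-variable reduction are placeholders, since the comment's inline code was lost in this export.

```python
import dask
import xarray as xr

# Hypothetical multi-file dataset; the comment's actual minimal example
# is not preserved here.
ds = xr.open_mfdataset("data/*.nc")

# Workaround the comment points to: avoid the default threaded scheduler.
# Worker processes return memory to the OS when they exit, which the
# threaded scheduler does not do reliably (cf. dask/dask#3530).
with dask.config.set(scheduler="processes"):
    for name, var in ds.data_vars.items():
        print(name, float(var.mean().compute()))
```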
id: 393846595
html_url: https://github.com/pydata/xarray/issues/2186#issuecomment-393846595
issue_url: https://api.github.com/repos/pydata/xarray/issues/2186
node_id: MDEyOklzc3VlQ29tbWVudDM5Mzg0NjU5NQ==
user: Karel-van-de-Plassche (6404167)
created_at: 2018-06-01T10:57:09Z
updated_at: 2018-06-01T10:57:09Z
author_association: CONTRIBUTOR
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Memory leak while looping through a Dataset (326533369)
body:

@meridionaljet I might've run into the same issue, but I'm not 100% sure. In my case I'm looping over a Dataset containing variables from 3 different files, all of them with a

Can you see what happens when using the distributed client? Put

Also, for me the memory behaviour looks very different between the threaded and multi-process scheduler, although they both leak. (I'm not sure if leaking is the right term here.) Maybe you can try

I've tried without success:
- explicitly deleting

For my messy and very much work-in-progress code, look here: https://github.com/Karel-van-de-Plassche/QLKNN-develop/blob/master/qlknn/dataset/hypercube_to_pandas.py
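The distributed-client experiment this comment suggests can be sketched like so; a minimal sketch assuming a local cluster and a placeholder file pattern, since the original inline code was stripped from this export.

```python
import xarray as xr
from dask.distributed import Client

# A no-argument Client starts a local cluster and serves a dashboard where
# per-worker memory can be watched while the loop runs -- the behaviour
# the comment asks the reporter to check.
client = Client()
print(client.dashboard_link)

# Hypothetical dataset spanning several files, as in the comment.
ds = xr.open_mfdataset("data/*.nc")

for name, var in ds.data_vars.items():
    var.sum().compute()  # watch the dashboard for memory growth per iteration
```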
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
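The query this page renders (CONTRIBUTOR comments on issue 326533369, newest first) can be reproduced against that schema; a sketch using Python's sqlite3, where the database filename is an assumption:

```python
import sqlite3

conn = sqlite3.connect("github.db")  # database filename is an assumption
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR'
      AND issue = ?
    ORDER BY updated_at DESC
    """,
    (326533369,),
).fetchall()
for comment_id, user_id, created, updated, body in rows:
    print(comment_id, updated)
```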