issue_comments
1 row where user = 17830036 sorted by updated_at descending
id: 1061602285
html_url: https://github.com/pydata/xarray/issues/2186#issuecomment-1061602285
issue_url: https://api.github.com/repos/pydata/xarray/issues/2186
node_id: IC_kwDOAMm_X84_RsPt
user: hmkhatri 17830036
created_at: 2022-03-08T10:00:07Z
updated_at: 2022-03-08T10:00:07Z
author_association: NONE
body:

Hello, I am facing the same memory leak issue. I am using

```python
import xarray as xr
from dask.distributed import Client

client = Client()

# main code goes here
ds = xr.open_mfdataset("*nc")

for i in range(0, len(ds.time)):
    ds1 = ds.isel(time=i)
    # perform some computations here

ds.close()
```

I have tried the following:

- explicit ds.close() calls on datasets
- gc.collect()
- client.cancel(vars)

None of the solutions worked for me. I have also tried increasing RAM, but that didn't help either. I was wondering if anyone has found a workaround for this problem. @lumbric @shoyer @lkilcher

I am using
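The comment lists three cleanup techniques that were tried (explicit close, Python garbage collection, and cancelling distributed work). A minimal runnable sketch combining them follows; the `*.nc` file pattern and the `mean()` computation are placeholder assumptions, not taken from the issue, and this illustrates the attempted mitigations rather than a confirmed fix:

```python
# Sketch of the mitigations listed in the comment above, combined in one loop.
# The "*.nc" pattern and the mean() computation are placeholder assumptions.
import gc

import xarray as xr
from dask.distributed import Client

client = Client()
ds = xr.open_mfdataset("*.nc")

for i in range(len(ds.time)):
    ds1 = ds.isel(time=i)
    future = client.compute(ds1.mean())  # run this step's computation on the cluster
    value = client.gather(future)        # pull the result back to the local process
    client.cancel(future)                # drop the future so the scheduler can free it
    ds1.close()                          # explicit close on the per-step dataset
    gc.collect()                         # force Python-level garbage collection

ds.close()
client.close()
```

As the comment reports, none of these steps released the leaked memory for the reporter.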
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Memory leak while looping through a Dataset 326533369 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
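The page's own query (comments where user = 17830036, newest first, as stated in the header) can be reproduced directly against this schema. A minimal sketch using Python's sqlite3 module follows; the database filename github.db is an assumption, not something stated on this page:

```python
# Sketch: reproduce this page's query against the schema above.
# The database filename "github.db" is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE user = ?
    ORDER BY updated_at DESC
    """,
    (17830036,),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["html_url"])

conn.close()
```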