issue_comments
5 rows where user = 7348840 sorted by updated_at descending

id: 1381839977
html_url: https://github.com/pydata/xarray/issues/7429#issuecomment-1381839977
issue_url: https://api.github.com/repos/pydata/xarray/issues/7429
node_id: IC_kwDOAMm_X85SXTRp
user: marcosrdac 7348840
created_at: 2023-01-13T13:17:42Z
updated_at: 2023-01-13T13:17:42Z
author_association: NONE
body: I've managed to test this bug in a virtualenv and could not see any leaks; the code ran nicely, also in my real case. So it seems to be a Singularity problem. Closing the issue.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Training on xarray files leads to CPU memory leak (PyTorch) 1525546857

id: 1375630008
html_url: https://github.com/pydata/xarray/issues/7429#issuecomment-1375630008
issue_url: https://api.github.com/repos/pydata/xarray/issues/7429
node_id: IC_kwDOAMm_X85R_nK4
user: marcosrdac 7348840
created_at: 2023-01-09T13:30:16Z
updated_at: 2023-01-09T13:31:02Z
author_association: NONE
body: Update: when using a Colab notebook rather than my cluster and its Docker image, I could not reproduce the leak. The benchmark below uses XarrayDataset and

| epoch | memory (GB) |
|-------|-------------|
| 0     | 0.357       |
| 1     | 11.144      |
| 2     | 11.117      |
| 3     | 10.965      |
| 4     | 10.965      |
| 5     | 10.965      |
| 6     | 10.965      |
| 7     | 10.965      |
| 8     | 10.965      |
| 9     | 10.965      |
| 10    | 10.965      |

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Training on xarray files leads to CPU memory leak (PyTorch) 1525546857

id: 1225823013
html_url: https://github.com/pydata/xarray/issues/865#issuecomment-1225823013
issue_url: https://api.github.com/repos/pydata/xarray/issues/865
node_id: IC_kwDOAMm_X85JEJMl
user: marcosrdac 7348840
created_at: 2022-08-24T14:42:47Z
updated_at: 2022-08-24T14:42:47Z
author_association: NONE
body: Just making it clear: those would configure lossless compression in the netCDF4 library, not lossy compression.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: How to reduce the output size with to_netcdf? 158078410

id: 1220093929
html_url: https://github.com/pydata/xarray/issues/865#issuecomment-1220093929
issue_url: https://api.github.com/repos/pydata/xarray/issues/865
node_id: IC_kwDOAMm_X85IuSfp
user: marcosrdac 7348840
created_at: 2022-08-19T00:04:21Z
updated_at: 2022-08-19T00:04:21Z
author_association: NONE
body: Thanks, I thought there were some methods to choose from or something like that. For future readers,
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: How to reduce the output size with to_netcdf? 158078410

id: 1219669758
html_url: https://github.com/pydata/xarray/issues/865#issuecomment-1219669758
issue_url: https://api.github.com/repos/pydata/xarray/issues/865
node_id: IC_kwDOAMm_X85Isq7-
user: marcosrdac 7348840
created_at: 2022-08-18T16:01:58Z
updated_at: 2022-08-18T16:01:58Z
author_association: NONE
body: How do I get lossy compression? I could not find it in the documentation :(
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: How to reduce the output size with to_netcdf? 158078410
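
The three issue-865 comments above revolve around shrinking to_netcdf output. As a hedged illustration of the lossless compression they refer to (presumably the netCDF4 deflate settings exposed through the encoding argument of Dataset.to_netcdf; the file names below are placeholders):

```python
import xarray as xr

# Placeholder file names for illustration only.
ds = xr.open_dataset("input.nc")

# Lossless deflate compression via the netCDF4 engine: zlib=True plus a
# compression level from 1 (fastest) to 9 (smallest). This shrinks the
# file on disk without changing any stored values.
encoding = {name: {"zlib": True, "complevel": 4} for name in ds.data_vars}

ds.to_netcdf("compressed.nc", encoding=encoding)
```

Lossy reductions, the other thing asked about in the thread, typically come from reducing precision before writing (for example, casting float64 variables to float32, or packing values with scale_factor/add_offset encoding) rather than from a compression flag.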

CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
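
Per the header, this page is the result of filtering issue_comments on user = 7348840 and sorting by updated_at descending. A minimal sketch of the equivalent query against the schema above, assuming the github-to-sqlite export lives in a hypothetical file named github.db:

```python
import sqlite3

# "github.db" is an assumed file name; point this at your own export.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Reproduce this page's view: comments by user 7348840, newest update first.
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, body
    FROM issue_comments
    WHERE [user] = 7348840
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["html_url"])
```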