issue_comments
1 row where author_association = "NONE" and issue = 1340994913 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1219672919 | https://github.com/pydata/xarray/issues/6924#issuecomment-1219672919 | https://api.github.com/repos/pydata/xarray/issues/6924 | IC_kwDOAMm_X85IsrtX | lassiterdc 64621312 | 2022-08-18T16:05:02Z | 2022-08-18T16:05:02Z | NONE | I cross posted this as a dask issue and on stack overflow. I learned that "dask will often have as many chunks in memory as twice the number of active threads" (best practices with dask arrays) and including | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Memory Leakage Issue When Running to_netcdf 1340994913 |
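As the row above shows, the reactions column holds a serialized JSON object rather than one column per reaction type. A minimal sketch of unpacking it, assuming SQLite's JSON1 functions (json_extract) are available in this database:

-- Pull individual reaction counts out of the JSON stored in [reactions]
SELECT id,
       json_extract(reactions, '$.total_count') AS total_reactions,
       json_extract(reactions, '$."+1"') AS plus_one
FROM issue_comments
WHERE issue = 1340994913;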
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
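The filtered view at the top of this page can be reproduced directly against this schema. A sketch, assuming the standard Datasette SQL interface over this table:

-- Reproduce the "1 row where author_association = 'NONE' and issue = 1340994913" view
SELECT id, user, created_at, updated_at, author_association, body
FROM issue_comments
WHERE author_association = 'NONE'
  AND issue = 1340994913
ORDER BY updated_at DESC;

The indexes on [issue] and [user] defined above mean a lookup like this filters by issue id without scanning the whole table.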