issue_comments
1 row where author_association = "MEMBER", issue = 995207525 and user = 5635139 sorted by updated_at descending
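Expressed directly in SQL, the filter above corresponds roughly to the query below. This is a sketch assuming Datasette's usual translation of column filters into a WHERE clause; the table and column names are taken from the schema listed at the end of this page.

select *
from [issue_comments]
where [author_association] = 'MEMBER'
  and [issue] = 995207525
  and [user] = 5635139
order by [updated_at] desc;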
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
918583565 | https://github.com/pydata/xarray/issues/5790#issuecomment-918583565 | https://api.github.com/repos/pydata/xarray/issues/5790 | IC_kwDOAMm_X842wHkN | max-sixty 5635139 | 2021-09-13T21:15:56Z | 2021-09-13T21:15:56Z | MEMBER | I'll let others respond, but temporary memory usage of 3X sounds within expectations, albeit towards the higher end. If we can reduce it that would be great, but probably needs someone to work on this fairly methodically. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | combining 2 arrays with xr.merge() causes temporary spike in memory usage ~3x the combined size of the arrays 995207525
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
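The filtered view above can make use of the two indexes declared at the end of the schema. As a rough illustration (SQLite syntax; the actual plan output depends on the SQLite version and is omitted here), the chosen index can be checked with EXPLAIN QUERY PLAN:

explain query plan
select [id], [user], [created_at]
from [issue_comments]
where [issue] = 995207525;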