issue_comments
1 row where issue = 995207525 and user = 1217238 sorted by updated_at descending
One comment row:

- id: 918697549
- html_url: https://github.com/pydata/xarray/issues/5790#issuecomment-918697549
- issue_url: https://api.github.com/repos/pydata/xarray/issues/5790
- node_id: IC_kwDOAMm_X842wjZN
- user: shoyer (1217238)
- created_at: 2021-09-14T00:39:03Z
- updated_at: 2021-09-14T00:39:03Z
- author_association: MEMBER
- body:
  > Yes, I'm pretty sure this is the case.
  >
  > Yes, I imagine this could work. But on the other hand, the implementation would get more complex. For example, it's nice to be able to use […]
  >
  > By the way, if you haven't tried Dask already, I would recommend it for this use case. It can do streaming operations that can result in significant memory savings.
- reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
- performed_via_github_app: (none)
- issue: combining 2 arrays with xr.merge() causes temporary spike in memory usage ~3x the combined size of the arrays (995207525)
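The comment above recommends Dask for this workload. Below is a minimal, hypothetical sketch of that suggestion, not code from the issue itself: the file names and chunk sizes are assumptions. The point is only that dask-backed arrays let xr.merge() build a lazy task graph instead of holding both inputs plus the merged result in memory at once.

```python
# Hypothetical sketch of the Dask suggestion above; file names and chunk
# sizes are assumptions, not taken from the original issue.
import xarray as xr

# chunks=... returns dask-backed (lazy) arrays, so merging does not
# materialise both inputs plus the merged result in memory at the same time.
a = xr.open_dataset("a.nc", chunks={"time": 1000})
b = xr.open_dataset("b.nc", chunks={"time": 1000})

merged = xr.merge([a, b])

# Writing the result streams the computation chunk by chunk, keeping peak
# memory well below the ~3x spike described in the issue title.
merged.to_netcdf("merged.nc")
```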
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
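The page header describes the query behind this view (one row where issue = 995207525 and user = 1217238, sorted by updated_at descending). As a rough illustration only, the same query could be run against a local copy of the database with Python's sqlite3 module; the database file name below is an assumption.

```python
# Rough illustration of the query this page describes, run against a local
# copy of the database. The file name "github.db" is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (995207525, 1217238),
).fetchall()

for row in rows:
    print(dict(row))
```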