issue_comments
1 row where author_association = "MEMBER", issue = 293293632, and user = 5635139, sorted by updated_at descending
| field | value |
|---|---|
| id | 362072512 |
| html_url | https://github.com/pydata/xarray/issues/1874#issuecomment-362072512 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/1874 |
| node_id | MDEyOklzc3VlQ29tbWVudDM2MjA3MjUxMg== |
| user | max-sixty 5635139 |
| created_at | 2018-01-31T21:13:49Z |
| updated_at | 2018-01-31T21:13:49Z |
| author_association | MEMBER |
| body | There's no xarray->SQL connector, unfortunately. I don't have that much experience here, so I'll let others chime in. You could try chunking to pandas and then to Postgres (but you'll always be limited by memory with pandas). If there's a NetCDF -> tabular connector, that would allow you to operate beyond memory. |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | running out of memory trying to write SQL 293293632 |
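The comment's suggestion is to convert the data in chunks via pandas and append each chunk to Postgres. Below is a minimal sketch of that approach, assuming a NetCDF file with a `time` dimension; the file name (`data.nc`), the chunking dimension, the target table name (`xarray_data`), and the connection string are all hypothetical.

```python
# Sketch of the "chunk to pandas, then write to Postgres" approach from
# the comment above. File name, dimension, table name, and connection
# string are assumptions; substitute your own.
import xarray as xr
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost/mydb")

# Open lazily so the whole dataset is never loaded into memory at once.
ds = xr.open_dataset("data.nc")

# Convert and write one slice at a time; only the current slice is
# materialized as an in-memory pandas DataFrame.
for t in range(ds.sizes["time"]):
    df = ds.isel(time=slice(t, t + 1)).to_dataframe().reset_index()
    df.to_sql("xarray_data", engine, if_exists="append", index=False)
```

Peak memory then scales with the size of one slice rather than the whole dataset, which is the point of the suggestion.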
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
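As a usage sketch, the filtered view above corresponds to a query like the following, run here with Python's built-in sqlite3 module; the database file name (`github.db`) is an assumption.

```python
# Reproduce the filtered, sorted view shown above. The [issue] and [user]
# indexes defined on the table make the WHERE clause efficient.
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical database file
rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [updated_at], [body]
    FROM [issue_comments]
    WHERE [author_association] = 'MEMBER'
      AND [issue] = 293293632
      AND [user] = 5635139
    ORDER BY [updated_at] DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```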