issue_comments
1 row where user = 56707780 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 544312894 | https://github.com/pydata/xarray/issues/3416#issuecomment-544312894 | https://api.github.com/repos/pydata/xarray/issues/3416 | MDEyOklzc3VlQ29tbWVudDU0NDMxMjg5NA== | angelolab 56707780 | 2019-10-21T01:04:48Z | 2019-10-21T01:04:48Z | NONE | Got it, thanks everyone. I'll open this issue there. We'll try and work on getting our NetCDF4 compatibility issues addressed to avoid this space issue, as we are working with large imaging datasets. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Cannot write array larger than 4GB with SciPy netCDF backend 509285415 |
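The linked issue concerns the SciPy netCDF backend, which writes netCDF3 files whose classic format records variable sizes in 32-bit fields, capping a single variable at roughly 4 GiB. A minimal sketch of that size check; `fits_scipy_netcdf` is a hypothetical helper for illustration, not code from xarray or SciPy:

```python
def fits_scipy_netcdf(shape, itemsize):
    """Return True if an array of `shape` with `itemsize`-byte elements
    stays under the ~4 GiB (2**32 bytes) netCDF3 variable limit."""
    nbytes = itemsize
    for dim in shape:
        nbytes *= dim
    return nbytes < 2**32

# A 1024 x 1024 x 1024 float64 stack is 8 GiB -- over the limit:
fits_scipy_netcdf((1024, 1024, 1024), 8)  # False
# A 1024 x 1024 x 256 float64 stack is 2 GiB -- fits:
fits_scipy_netcdf((1024, 1024, 256), 8)   # True
```

Arrays over the limit need a netCDF4-capable backend (e.g. the `netcdf4` or `h5netcdf` engines in xarray), which is the compatibility work the comment refers to.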
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
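The query behind this page can be reproduced against the schema above with Python's built-in `sqlite3` module. A minimal sketch, using an in-memory database and only the columns needed for the filter (the `REFERENCES` clauses are dropped since the `users` and `issues` tables are not created here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Recreate the table and the user index from the schema above.
conn.executescript("""
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
    [author_association] TEXT, [body] TEXT, [reactions] TEXT,
    [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")
# Insert the one comment shown in the table.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association) "
    "VALUES (?, ?, ?, ?)",
    (544312894, 56707780, "2019-10-21T01:04:48Z", "NONE"),
)
# "1 row where user = 56707780 sorted by updated_at descending":
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE user = ? ORDER BY updated_at DESC",
    (56707780,),
).fetchall()
# rows -> [(544312894,)]
```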