issue_comments
1 row where author_association = "MEMBER" and issue = 427768540 sorted by updated_at descending
id: 478765910
html_url: https://github.com/pydata/xarray/issues/2862#issuecomment-478765910
issue_url: https://api.github.com/repos/pydata/xarray/issues/2862
node_id: MDEyOklzc3VlQ29tbWVudDQ3ODc2NTkxMA==
user: shoyer 1217238
created_at: 2019-04-01T22:10:02Z
updated_at: 2019-04-01T22:10:02Z
author_association: MEMBER
body:
> Something like this should definitely work:
>
> Deep copying maintains dask arrays, so they are still linked to the original file on disk. If you close that file, then dask is definitely going to error when you attempt to use it. I agree that there is an opportunity for better error messages here, though.
reactions: { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: cannot properly .close() a dataset opened with `chunks` argument? 427768540
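The behavior the comment describes can be reproduced along these lines. This is a minimal sketch, not shoyer's original snippet: the file path `example.nc` is a hypothetical stand-in, and whether computing the copy after `close()` raises an error or transparently reopens the file depends on the xarray and backend versions in use.

```python
import numpy as np
import xarray as xr

# Hypothetical setup: write a small netCDF file so it can be reopened lazily.
xr.Dataset({"x": ("t", np.arange(10))}).to_netcdf("example.nc")

# Opening with `chunks` yields dask-backed (lazy) arrays tied to the open file.
ds = xr.open_dataset("example.nc", chunks={"t": 5})

# A deep copy keeps the dask arrays, whose tasks still read from that file.
copied = ds.copy(deep=True)

ds.close()  # closes the underlying file handle

# The copy's dask graph still points at the closed file, so computing it is
# expected to fail -- the "opportunity for better error messages" noted above.
try:
    copied["x"].compute()
except Exception as err:
    print(type(err).__name__, err)

# Loading into memory before closing sidesteps the problem entirely.
with xr.open_dataset("example.nc", chunks={"t": 5}) as ds2:
    loaded = ds2.load()
print(loaded["x"].values)
```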
Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
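The filter described at the top of the page (one row where `author_association = "MEMBER"` and `issue = 427768540`, sorted by `updated_at` descending) maps onto this schema as a straightforward query. A sketch using Python's standard `sqlite3` module; the database filename `github.db` is an assumption:

```python
import sqlite3

# Hypothetical path to the SQLite database behind this page.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = ?
    ORDER BY updated_at DESC
    """,
    (427768540,),
).fetchall()

for row in rows:
    print(row["id"], row["created_at"], row["body"][:60])
```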