issue_comments
1 row where author_association = "MEMBER" and issue = 218013400 sorted by updated_at descending
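This listing corresponds to a plain SQL query against the table. A minimal sketch, using only the table and column names from the schema shown at the bottom of this page:

```sql
-- Fetch MEMBER comments on issue 218013400, most recently updated first.
SELECT *
FROM issue_comments
WHERE author_association = 'MEMBER'
  AND issue = 218013400
ORDER BY updated_at DESC;
```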
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
290235797 | https://github.com/pydata/xarray/issues/1338#issuecomment-290235797 | https://api.github.com/repos/pydata/xarray/issues/1338 | MDEyOklzc3VlQ29tbWVudDI5MDIzNTc5Nw== | shoyer 1217238 | 2017-03-29T21:44:46Z | 2017-03-29T21:44:46Z | MEMBER | Currently it's the user's responsibility to choose appropriate chunk sizes for their workflow and available memory. Dask doesn't provide any functionality to help with this, though perhaps it could. You might raise an issue on the Dask issue tracker. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Chunking and dask memory errors 218013400
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
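The `user` and `issue` columns are declared as foreign keys and both are indexed, so resolving a comment to its author and parent issue is a pair of index lookups. A hedged sketch of such a join; it assumes the referenced `users` table has a `login` column and the `issues` table a `title` column, neither of which is shown on this page:

```sql
-- Resolve the comment's author and issue title via the declared foreign keys.
-- Assumes users.login and issues.title exist (their schemas are not shown here).
SELECT c.id,
       u.login AS author,
       i.title AS issue_title,
       c.created_at,
       c.body
FROM issue_comments AS c
JOIN users  AS u ON u.id = c.user
JOIN issues AS i ON i.id = c.issue
WHERE c.id = 290235797;
```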