issue_comments
1 row where issue = 99026442 and user = 6063709 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
133992153 | https://github.com/pydata/xarray/issues/516#issuecomment-133992153 | https://api.github.com/repos/pydata/xarray/issues/516 | MDEyOklzc3VlQ29tbWVudDEzMzk5MjE1Mw== | aidanheerdegen 6063709 | 2015-08-24T02:21:43Z | 2015-08-24T02:21:43Z | CONTRIBUTOR | What is the netCDF4 chunking scheme for your compressed data? (use 'ncdump -hs' to reveal the per variable chunking scheme). Very large datasets can have very long load times depending on the access pattern. This can be overcome with an appropriately chosen chunking scheme, but if the chunk sizes are not well chosen (and the default library chunking is pretty terrible) then certain access patterns might still be very slow. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Wall time much greater than CPU time 99026442 |
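The comment above recommends inspecting the per-variable chunking with `ncdump -hs`. As a minimal sketch of the same check from Python (the file name `ocean.nc`, the dimension names, and the chunk sizes below are illustrative assumptions, not values taken from the issue):

```python
# Sketch only: "ocean.nc" and the chunk sizes are assumptions for illustration.
import netCDF4
import xarray as xr

# Equivalent of `ncdump -hs`: report each variable's on-disk chunk shape.
with netCDF4.Dataset("ocean.nc") as nc:
    for name, var in nc.variables.items():
        # chunking() returns "contiguous" or a list of chunk lengths per dimension
        print(name, var.dimensions, var.chunking())

# Opening with dask chunks that line up with (or are multiples of) the on-disk
# chunks avoids decompressing the same HDF5 chunk repeatedly for a given access pattern.
ds = xr.open_dataset("ocean.nc", chunks={"time": 1, "lat": 180, "lon": 360})
```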
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
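The filtered view at the top of this page ("1 row where issue = 99026442 and user = 6063709 sorted by updated_at descending") corresponds to a simple query against this schema. A minimal sketch using Python's built-in sqlite3 module, assuming the table lives in a local SQLite file whose name (`github.db`) is an assumption:

```python
# Sketch only: "github.db" is an assumed filename for the SQLite database
# that contains the issue_comments table defined above.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 99026442 AND user = 6063709
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])

conn.close()
```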