issue_comments
1 row where issue = 257400162 and user = 10050469 sorted by updated_at descending
| field | value |
|---|---|
| id | 329232225 |
| html_url | https://github.com/pydata/xarray/issues/1572#issuecomment-329232225 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/1572 |
| node_id | MDEyOklzc3VlQ29tbWVudDMyOTIzMjIyNQ== |
| user | fmaussion (10050469) |
| created_at | 2017-09-13T17:01:09Z |
| updated_at | 2017-09-13T17:04:12Z |
| author_association | MEMBER |
| body | Yes, your original file uses lossy compression, which is lost when the data is converted to double. To reduce the output file size, you can either apply lossy compression again or store your data as float instead of double. (http://xarray.pydata.org/en/latest/io.html#writing-encoded-data) |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | Modifying data set resulting in much larger file size (257400162) |
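The comment points at xarray's encoding options for controlling on-disk size when writing netCDF. A minimal sketch of both suggestions, assuming a dataset with a hypothetical float64 variable named `temp`; the file names and scale/offset values are illustrative, not from the original thread:

```python
import numpy as np
import xarray as xr

# Hypothetical dataset whose variable ended up as float64 after modification.
ds = xr.Dataset({"temp": ("time", np.random.rand(1000))})

# Option 1: store as float instead of double (roughly halves the size).
ds.to_netcdf("as_float.nc", encoding={"temp": {"dtype": "float32"}})

# Option 2: re-apply lossy compression by packing into int16 with
# scale_factor/add_offset, as described in the linked encoding docs.
ds.to_netcdf(
    "packed.nc",
    encoding={
        "temp": {
            "dtype": "int16",
            "scale_factor": 0.001,
            "add_offset": 0.0,
            "_FillValue": -9999,
        }
    },
)
```

Option 2 is what the original file likely did: the values are stored as small integers and unpacked to floats on read, which is why converting them to double and writing without encoding inflates the file.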
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
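A sketch of the query behind this page, using Python's sqlite3 module against a local copy of the database; the filename `github.db` is an assumption:

```python
import sqlite3

# Hypothetical local copy of the database this page is served from.
conn = sqlite3.connect("github.db")

# Reproduce this page's filter: one issue, one user, newest update first.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 257400162 AND [user] = 10050469
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
```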