issue_comments
1 row where issue = 28445412 and user = 514053, sorted by updated_at descending
id: 36279915
html_url: https://github.com/pydata/xarray/issues/26#issuecomment-36279915
issue_url: https://api.github.com/repos/pydata/xarray/issues/26
node_id: MDEyOklzc3VlQ29tbWVudDM2Mjc5OTE1
user: akleeman (514053)
created_at: 2014-02-27T19:20:25Z
updated_at: 2014-02-27T19:20:25Z
author_association: CONTRIBUTOR
body: Yeah I think keeping them transparent to the user except when reading/writing is the way to go. Two datasets with the same data but different encodings should still be equal when compared, and operations beyond slicing should probably destroy encodings. Not sure how to handle the various file formats, like you said it could be all part of the store, or we could just throw warnings/fail if encodings aren't feasible.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Allow the ability to add/persist details of how a dataset is stored. (28445412)
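The comment body describes the intended behavior of encodings in xarray: transparent to the user and to comparisons, consulted only when reading from or writing to a store. A minimal sketch of that behavior using the present-day xarray API, which postdates this 2014 discussion; the variable name and the output file name are illustrative:

```python
import numpy as np
import xarray as xr

# Two datasets with identical data...
ds_a = xr.Dataset({"t": ("x", np.arange(4, dtype="float64"))})
ds_b = ds_a.copy(deep=True)

# ...but different on-disk encodings (only consulted at read/write time).
ds_a["t"].encoding = {"dtype": "int16", "scale_factor": 0.1, "_FillValue": -9999}
ds_b["t"].encoding = {"dtype": "float32"}

# Encoding is transparent to comparisons: the datasets still compare equal.
assert ds_a.equals(ds_b)

# The encoding is applied only when serializing to a store, e.g. netCDF.
ds_a.to_netcdf("with_encoding.nc")
```

In modern xarray, `Dataset.equals` compares dimensions, coordinates, and values while ignoring `attrs` and `encoding`, which matches the behavior proposed in this comment.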
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
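For reference, the filtered view at the top of this page (1 row where issue = 28445412 and user = 514053, sorted by updated_at descending) corresponds to a query like the following sketch against this schema; the database filename github.db is hypothetical:

```python
import sqlite3

# Hypothetical SQLite database file containing the schema above.
conn = sqlite3.connect("github.db")

# Reproduce the page's filter: comments on issue 28445412 by user 514053,
# newest update first.
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (28445412, 514053),
).fetchall()

for row in rows:
    print(row)
```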