issue_comments
1 row where issue = 374460958 and user = 1217238 sorted by updated_at descending
id: 435532247
html_url: https://github.com/pydata/xarray/issues/2517#issuecomment-435532247
issue_url: https://api.github.com/repos/pydata/xarray/issues/2517
node_id: MDEyOklzc3VlQ29tbWVudDQzNTUzMjI0Nw==
user: shoyer (1217238)
created_at: 2018-11-02T22:54:19Z
updated_at: 2018-11-02T22:54:19Z
author_association: MEMBER
body: I think the cleanest way to do this in the long term would be to combine some sort of "lazy array" object with caching, e.g., along the lines of what's described in https://github.com/pydata/xarray/issues/2298. I'm not sure what the best solution in the short-term is, though.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: Treat accessor dataarrays as members of parent dataset (374460958)
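The comment body points toward combining a "lazy array" object with caching so that a DataArray produced by an accessor can be used much like a member of the parent Dataset. The sketch below is a minimal, hypothetical illustration of that idea using xarray's public register_dataset_accessor API together with functools.cached_property; the accessor name "geo" and the variables "u" and "v" are invented for this example, and this is not the mechanism discussed in issue #2298.

```python
import functools

import numpy as np
import xarray as xr


@xr.register_dataset_accessor("geo")  # "geo" is a made-up accessor name for illustration
class GeoAccessor:
    """Accessor whose derived DataArray is computed lazily and then cached."""

    def __init__(self, dataset: xr.Dataset):
        self._ds = dataset

    @functools.cached_property
    def magnitude(self) -> xr.DataArray:
        # Computed on first access, reused afterwards: a stand-in for the
        # "lazy array + caching" idea mentioned in the comment.
        return np.hypot(self._ds["u"], self._ds["v"]).rename("magnitude")


ds = xr.Dataset({"u": ("x", [3.0, 0.0]), "v": ("x", [4.0, 1.0])})
print(ds.geo.magnitude)  # first access computes and caches the DataArray
```

Because xarray caches accessor instances per Dataset object, the cached_property effectively gives one cached DataArray per dataset; a full solution would also need to invalidate that cache when the parent data changes, which is part of what makes the long-term design question hard.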
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
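As a sketch of how this schema can be queried locally, the snippet below reproduces the filter shown at the top of the page (one issue, one commenter, newest update first) with Python's standard sqlite3 module. The database filename github.db is an assumption, not something specified on this page.

```python
import sqlite3

# Local copy of the github-to-sqlite export; the filename is an assumption.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter as the page: issue = 374460958, user = 1217238,
# sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (374460958, 1217238),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])

conn.close()
```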