issue_comments
1 row where issue = 401874795 and user = 1872600 sorted by updated_at descending
| column | value |
| --- | --- |
| id | 832761716 |
| html_url | https://github.com/pydata/xarray/issues/2697#issuecomment-832761716 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/2697 |
| node_id | MDEyOklzc3VlQ29tbWVudDgzMjc2MTcxNg== |
| user | rsignell-usgs (1872600) |
| created_at | 2021-05-05T15:02:55Z |
| updated_at | 2021-05-05T15:04:59Z |
| author_association | NONE |
| body | It's worth pointing out that you can create FileReferenceSystem JSON to accomplish many of the tasks we used to use NcML for: * create a single virtual dataset that points to a collection of files * modify dataset and variable attributes. It also has the nice feature that it makes your dataset faster to work with on the cloud because the map to the data is loaded in one shot! |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | read ncml files to create multifile datasets (401874795) |
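The comment body above refers to the ReferenceFileSystem JSON approach (fsspec's `ReferenceFileSystem`, with references typically generated today by the kerchunk package). A minimal sketch of building a single virtual dataset from several NetCDF files might look like the following; the S3 paths, the `concat_dims=["time"]` choice, and the anonymous-access options are illustrative assumptions, not details from the comment itself.

```python
import fsspec
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr
from kerchunk.combine import MultiZarrToZarr

# Hypothetical input files; replace with real paths or URLs.
urls = ["s3://example-bucket/data_2021_01.nc", "s3://example-bucket/data_2021_02.nc"]
so = {"anon": True}  # storage options for reading the source files

# Scan each NetCDF/HDF5 file and build a reference dict describing its chunks.
refs = []
for u in urls:
    with fsspec.open(u, mode="rb", **so) as f:
        refs.append(SingleHdf5ToZarr(f, u).translate())

# Combine the per-file references into one virtual dataset, concatenating along time.
combined = MultiZarrToZarr(
    refs,
    concat_dims=["time"],
    remote_protocol="s3",
    remote_options=so,
).translate()

# Attributes live in the reference dict as .zattrs entries, so dataset or variable
# attributes can be modified by editing that JSON before opening the data.

# Open the virtual dataset; the whole "map to the data" is loaded in one shot.
ds = xr.open_dataset(
    "reference://",
    engine="zarr",
    backend_kwargs={
        "consolidated": False,
        "storage_options": {
            "fo": combined,
            "remote_protocol": "s3",
            "remote_options": so,
        },
    },
)
```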
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
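The filtered view shown above ("1 row where issue = 401874795 and user = 1872600 sorted by updated_at descending") corresponds to a query like the following sketch against a local copy of the database; the `github.db` filename is an assumption.

```python
import sqlite3

# Hypothetical local copy of the github-to-sqlite database file.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 401874795 AND [user] = 1872600
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
```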