issue_comments
1 row where issue = 202260275 and user = 1217238 sorted by updated_at descending
id: 274230041
html_url: https://github.com/pydata/xarray/issues/1223#issuecomment-274230041
issue_url: https://api.github.com/repos/pydata/xarray/issues/1223
node_id: MDEyOklzc3VlQ29tbWVudDI3NDIzMDA0MQ==
user: shoyer (1217238)
created_at: 2017-01-21T03:18:38Z
updated_at: 2017-01-21T03:21:19Z
author_association: MEMBER

body:

@martindurant thanks for posting this as an issue -- I didn't get a notification from your ping in the gist.

I agree that serializing xarray objects to zarr should be pretty straightforward and seems quite useful. To properly handle edge cases like strange data types (e.g., datetime64 or object), we would need some conventions layered on top of zarr.

So we could either directly write a DataStore, or write a separate "znetcdf" or "netzdf" module that implements an interface similar to h5netcdf (which itself is a thin wrapper on top of h5py). All things being equal, I would prefer the latter approach, because people seem to find these intermediate interfaces useful, and it would help clarify the specification of the file format versus the details of how xarray uses it.

As far as the spec goes, I agree that JSON is the sensible file format. Really, all we need on top of zarr is:

- specified dimension sizes, stored at the group level

This could make sense either as part of zarr or as a separate library. I would lean towards putting it in zarr, only because that would be slightly more convenient: we could safely make use of subclassing to add the extra functionality. zarr already handles hierarchies, arrays and metadata, which is most of the hard work.

I'm certainly quite open to integrating experimental data formats like this one into xarray, but ultimately of course it depends on interest from the community. This wouldn't even necessarily need to live in xarray proper (though that would be fine, too). For example, @rabernat wrote a DataStore for loading MITgcm outputs (https://github.com/xgcm/xmitgcm).

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (none)
issue: zarr as persistent store for xarray (202260275)
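The concrete proposal in the comment above (dimension sizes recorded as JSON-serializable group-level metadata on top of zarr) can be illustrated with a short sketch. This is only an illustration of the idea as discussed in 2017, written against the zarr 2.x Python API: the "dims" attribute names, the per-array dimension attribute, and both helper functions are hypothetical, not the convention xarray's zarr backend actually adopted.

```python
import numpy as np
import xarray as xr
import zarr


def dataset_to_zarr(ds: xr.Dataset, store) -> zarr.Group:
    """Write a Dataset's variables to a zarr group, recording dimension
    metadata in JSON-serializable attributes (hypothetical convention)."""
    grp = zarr.open_group(store, mode="w")
    # Group-level attribute: mapping of dimension name -> size
    # (the "specified dimension sizes, stored at the group level" idea).
    grp.attrs["dims"] = {name: int(size) for name, size in ds.sizes.items()}
    for name, var in ds.variables.items():
        arr = grp.create_dataset(name, data=np.asarray(var.values))
        # Per-array attribute: ordered dimension names for this array
        # (an extra assumption, needed to round-trip the data).
        arr.attrs["dims"] = list(var.dims)
    return grp


def zarr_to_dataset(store) -> xr.Dataset:
    """Rebuild a Dataset from a group written by dataset_to_zarr."""
    grp = zarr.open_group(store, mode="r")
    variables = {
        name: (tuple(grp[name].attrs["dims"]), grp[name][:])
        for name in grp.array_keys()
    }
    # This sketch ignores the coordinate/data-variable distinction and
    # the dtype edge cases (datetime64, object) mentioned in the comment.
    return xr.Dataset(variables)


# Round-trip a small example dataset through a directory store.
ds = xr.Dataset({"t": (("x", "y"), np.random.rand(3, 4))},
                coords={"x": [10, 20, 30]})
dataset_to_zarr(ds, "example.zarr")
print(zarr_to_dataset("example.zarr"))
```

A real "znetcdf"-style layer, as suggested in the comment, would put this mapping behind an h5netcdf-like interface and add proper encoding for the awkward dtypes, rather than exposing raw helper functions like these.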
Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
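The filtered view described at the top of this page ("1 row where issue = 202260275 and user = 1217238 sorted by updated_at descending") corresponds to a query like the one below. This is a minimal sketch using Python's sqlite3 module; the database filename is an assumption, so substitute the SQLite file that actually backs this table.

```python
import sqlite3

# Hypothetical filename for the SQLite database containing issue_comments.
conn = sqlite3.connect("xarray-issues.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [updated_at], [author_association], [body]
    FROM [issue_comments]
    WHERE [issue] = ? AND [user] = ?
    ORDER BY [updated_at] DESC
    """,
    (202260275, 1217238),
).fetchall()

for row in rows:
    # Print the comment id, last update time, and the start of the body.
    print(row["id"], row["updated_at"], row["body"][:80])
```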