issue_comments
2 rows where issue = 1536004355, sorted by updated_at descending
Comment 1396560033 · gauteh (user 56827) · author_association: NONE · created/updated 2023-01-19T07:44:30Z
html_url: https://github.com/pydata/xarray/issues/7446#issuecomment-1396560033 · node_id: IC_kwDOAMm_X85TPdCh
issue: Parallel + multi-threaded reading of NetCDF4 + HDF5: Hidefix! (1536004355) · reactions: none

> On Tue, Jan 17, 2023 at 5:23 PM Ryan Abernathey @.***> wrote:

Great, the package should already register itself with xarray.
Comment 1385683582 · rabernat (user 1197350) · author_association: MEMBER · created/updated 2023-01-17T16:23:01Z
html_url: https://github.com/pydata/xarray/issues/7446#issuecomment-1385683582 · node_id: IC_kwDOAMm_X85Sl9p-
issue: Parallel + multi-threaded reading of NetCDF4 + HDF5: Hidefix! (1536004355) · reactions: none

Hi @gauteh! This is very cool! Thanks for sharing. I'm really excited about the way Rust can be used to optimize different parts of our stack. A couple of questions:

- Can your reader read over the HTTP / S3 protocol, or is it just local files?
- Do you know about kerchunk? The approach you described ...is identical to the approach taken by kerchunk (although the implementation is different). I'm curious what specification you use to store your indexes. Could we make your implementation interoperable with kerchunk, such that a kerchunk reference specification could be read by your reader? It would be great to reach some degree of alignment here.
- Do you know about hdf5-coro (http://icesat2sliderule.org/h5coro/)? They have similar goals, but are focused on cloud-based access.

This is definitely of general interest! However, it is not necessary to add a new backend directly into xarray. We support entry points, which allow packages to implement their own readers, as you have apparently already discovered: https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html. Installing your package should be enough to enable the new engine. We would, however, welcome a documentation PR describing how to use this package on the I/O page.
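For context on the kerchunk interoperability question above: a kerchunk reference file maps each chunk key to either inline metadata or a (file, byte offset, length) triple, which is how a generic reader can fetch individual chunks without parsing the HDF5 structure. A simplified sketch of the version-1 format (the variable name, path, and offsets here are illustrative, not taken from this issue):

```json
{
  "version": 1,
  "refs": {
    ".zgroup": "{\"zarr_format\": 2}",
    "temperature/.zarray": "…zarr array metadata (shape, chunks, dtype, compressor)…",
    "temperature/0.0": ["data.h5", 8192, 4096]
  }
}
```

An index in a compatible shape is what would let a kerchunk reference specification be consumed directly by another reader.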
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
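The schema can be exercised directly with Python's built-in sqlite3 module; this sketch recreates a reduced version of the table, inserts the two comments shown on this page (bodies omitted), and runs the query behind "2 rows where issue = 1536004355 sorted by updated_at descending":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Reduced schema: only the columns needed for the query.
conn.execute(
    "CREATE TABLE issue_comments ("
    " [id] INTEGER PRIMARY KEY, [user] INTEGER,"
    " [created_at] TEXT, [updated_at] TEXT, [issue] INTEGER)"
)
conn.execute("CREATE INDEX idx_issue_comments_issue ON issue_comments ([issue])")
conn.executemany(
    "INSERT INTO issue_comments ([id], [user], [created_at], [updated_at], [issue])"
    " VALUES (?, ?, ?, ?, ?)",
    [
        (1396560033, 56827, "2023-01-19T07:44:30Z", "2023-01-19T07:44:30Z", 1536004355),
        (1385683582, 1197350, "2023-01-17T16:23:01Z", "2023-01-17T16:23:01Z", 1536004355),
    ],
)
# ISO-8601 timestamps stored as TEXT sort correctly with plain string comparison.
rows = conn.execute(
    "SELECT [id] FROM issue_comments WHERE [issue] = 1536004355"
    " ORDER BY [updated_at] DESC"
).fetchall()
print(rows)  # [(1396560033,), (1385683582,)]
```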