issue_comments
5 rows where author_association = "CONTRIBUTOR" and issue = 1108138101, sorted by updated_at descending
Issue: [FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation (5 comments)
Comment 1028730657, tovogt, CONTRIBUTOR, created 2022-02-03T08:39:45Z, updated 2022-02-03T08:41:16Z
https://github.com/pydata/xarray/issues/6174#issuecomment-1028730657

Thanks for the hint! Unfortunately, the docstring already says that "it is no different than calling to_netcdf repeatedly". And I explained in my OP that this would cause repeated file open/close operations, which is the whole point of this issue. Furthermore, when using … However, it might still be the way to go API-wise. So, when talking about the solution of this issue, we could aim at fixing …
Comment 1019879801, tovogt, CONTRIBUTOR, created 2022-01-24T09:16:40Z
https://github.com/pydata/xarray/issues/6174#issuecomment-1019879801

Here is my PR for the docstring improvements: https://github.com/pydata/xarray/pull/6187
Comment 1019849836, tovogt, CONTRIBUTOR, created 2022-01-24T08:43:36Z
https://github.com/pydata/xarray/issues/6174#issuecomment-1019849836

It's not at all tricky to implement the listing of groups in a NETCDF4 file, at least not for the "netcdf4" engine. The code for that is in my OP above:

```python
def _xr_nc4_groups_from_store(store):
    """List all groups contained in the given NetCDF4 data store"""
    ...
```
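The body of the function above is cut off by the scrape. A minimal sketch of what such a listing can look like, assuming only that a netCDF4 `Dataset` or `Group` exposes a `groups` mapping of name to subgroup (the names `walk_groups` and `FakeGroup` are illustrative, not from the original comment):

```python
class FakeGroup:
    """Minimal stand-in for netCDF4.Dataset/Group, used only for illustration."""

    def __init__(self, groups=None):
        self.groups = groups or {}


def walk_groups(node, prefix=""):
    """Recursively yield the slash-separated paths of all groups below *node*.

    Works with any object whose ``groups`` attribute maps group names to
    subgroup objects, e.g. a ``netCDF4.Dataset`` or ``netCDF4.Group``.
    """
    for name, group in node.groups.items():
        path = f"{prefix}/{name}"
        yield path
        yield from walk_groups(group, path)


# A file with groups /a, /a/b and /c:
root = FakeGroup({"a": FakeGroup({"b": FakeGroup()}), "c": FakeGroup()})
paths = list(walk_groups(root))   # ["/a", "/a/b", "/c"]
```

With a real file, `walk_groups(netCDF4.Dataset(path))` would yield the same style of paths, which is the listing the comment says is easy to provide for the "netcdf4" engine.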
Comment 1018257806, tovogt, CONTRIBUTOR, created 2022-01-21T07:40:55Z, updated 2022-01-21T07:46:06Z
https://github.com/pydata/xarray/issues/6174#issuecomment-1018257806

When I first posted this issue, I thought the best solution was to just implement my proposed helper functions as part of the official xarray API. I don't think our project would add DataTree as a new dependency just for this, as long as we have a very easy and viable solution of our own. But now I have a new idea. At first, I noticed that …
Comment 1017298572, tovogt, CONTRIBUTOR, created 2022-01-20T09:53:16Z, updated 2022-01-20T09:53:32Z
https://github.com/pydata/xarray/issues/6174#issuecomment-1017298572

Thanks for your quick response, Tom! I'm sure that DataTree is a really neat solution for most people working with hierarchically structured data. In my case, we are talking about a very unusual application of the NetCDF4 groups feature: we store literally thousands of very small NetCDF datasets in a single file. A file containing 3000 datasets is typically not larger than 100 MB. With that setup, the I/O performance is critical. Opening and closing the file on each group read/write is very, very bad. On our cluster, writing that 100 MB file takes 10 hours with your DataTree implementation and 30 minutes with my helper functions. For reading, the effect is smaller but still noticeable. So my request is really about I/O performance, and I don't need a full-fledged hierarchical data management API in xarray for that.