issue_comments
10 rows where issue = 1108138101 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1028730657 | https://github.com/pydata/xarray/issues/6174#issuecomment-1028730657 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X849US8h | tovogt 57705593 | 2022-02-03T08:39:45Z | 2022-02-03T08:41:16Z | CONTRIBUTOR |
Thanks for the hint! Unfortunately, the docstring already says that "it is no different than calling to_netcdf repeatedly". And I explained in my OP that this would cause repeated file open/close operations - which is the whole point of this issue. However, it might still be the way to go API-wise. So, when talking about the solution of this issue, we could aim at fixing `save_mfdataset`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1028136906 | https://github.com/pydata/xarray/issues/6174#issuecomment-1028136906 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X849SB_K | shoyer 1217238 | 2022-02-02T16:46:24Z | 2022-02-02T17:20:50Z | MEMBER | Have you seen `save_mfdataset`? In principle, it was designed for exactly this sort of thing. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1019879801 | https://github.com/pydata/xarray/issues/6174#issuecomment-1019879801 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848yiF5 | tovogt 57705593 | 2022-01-24T09:16:40Z | 2022-01-24T09:16:40Z | CONTRIBUTOR |
Here is my PR for the docstring improvements: https://github.com/pydata/xarray/pull/6187 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1019849836 | https://github.com/pydata/xarray/issues/6174#issuecomment-1019849836 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848yaxs | tovogt 57705593 | 2022-01-24T08:43:36Z | 2022-01-24T08:43:36Z | CONTRIBUTOR | It's not at all tricky to implement the listing of groups in a NETCDF4 file, at least not for the "netcdf4" engine. The code for that is in my OP above:

```python
def _xr_nc4_groups_from_store(store):
    """List all groups contained in the given NetCDF4 data store."""
    # NOTE: the function body was truncated in this export; what follows is
    # a reconstruction sketch that recurses through the nested `.groups`
    # mapping of the underlying netCDF4.Dataset.
    def iter_groups(ds, prefix=""):
        groups = []
        for name, group in ds.groups.items():
            path = f"{prefix}/{name}"
            groups.append(path)
            groups.extend(iter_groups(group, prefix=path))
        return groups

    with store._manager.acquire_context(needs_lock=False) as root:
        return [""] + iter_groups(root)
``` |
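The recursion at the heart of a helper like `_xr_nc4_groups_from_store` can be exercised without any file at all: a nested dict stands in for the nested `.groups` mapping that `netCDF4.Dataset` exposes (a toy sketch for illustration, not the actual backend code):

```python
# Toy stand-in for netCDF4's nested `.groups` mapping: each group is a
# dict of subgroup-name -> subgroup.
def list_groups(node, prefix=""):
    """Return all group paths below `node`, depth-first."""
    paths = []
    for name, child in node.items():
        path = f"{prefix}/{name}"
        paths.append(path)
        paths.extend(list_groups(child, prefix=path))
    return paths

tree = {"a": {"b": {}, "c": {}}, "d": {}}
print(list_groups(tree))  # ['/a', '/a/b', '/a/c', '/d']
```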
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1019311097 | https://github.com/pydata/xarray/issues/6174#issuecomment-1019311097 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848wXP5 | Illviljan 14371165 | 2022-01-22T17:09:30Z | 2022-01-22T17:09:30Z | MEMBER | Is it that difficult to get a list of groups though? I've been testing a backend engine that merges many groups into 1 dataset (dims/coords/variables renamed slightly to avoid duplicate names until they've been interpolated together). Getting the groups is about the first thing you have to do; the code would look something like this:

```python
# NOTE: the original snippet was truncated in this export; this is a
# sketch of the idea using h5py, which most of these backends build on.
import h5py

def get_groups(filename):
    with h5py.File(filename, "r") as f:
        paths = []
        f.visit(paths.append)  # visit() calls the function once per object path
        return [p for p in paths if isinstance(f[p], h5py.Group)]
```

Sure, it can be quite tiresome to navigate the backend engines and 3rd party modules in xarray to add this. But most of them use h5py or something quite similar at its core, so it shouldn't be THAT bad. For example, one could add another method here that retrieves them in a quick and easy way: https://github.com/pydata/xarray/blob/c54123772817875678ec7ad769e6d4d6612aeb92/xarray/backends/common.py#L356-L360 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1018681263 | https://github.com/pydata/xarray/issues/6174#issuecomment-1018681263 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848t9ev | TomNicholas 35968931 | 2022-01-21T16:48:42Z | 2022-01-21T16:48:42Z | MEMBER |
FYI the plan with DataTree is to eventually integrate the work upstream into xarray, so no new dependency would be required at that point. That might take a while however.
That's good at least! Do you have any suggestions for where the docs should be improved? PRs are of course always welcome too :grin:
I agree, and would be open to a function like this (even if eventually DataTree renders it redundant). It's definitely an omission on our part that xarray still doesn't provide an easy way to do this - I've found myself wanting to easily see all the groups multiple times. However, my understanding is that it's slightly tricky to implement, though suggestions/corrections are welcome! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1018257806 | https://github.com/pydata/xarray/issues/6174#issuecomment-1018257806 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848sWGO | tovogt 57705593 | 2022-01-21T07:40:55Z | 2022-01-21T07:46:06Z | CONTRIBUTOR | When I first posted this issue, I thought the best solution was to just implement my proposed helper functions as part of the official xarray API. I don't think our project would add DataTree as a new dependency just for this, as long as we have a very easy and viable solution of our own. But now I have a new idea. At first, I noticed that … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1017782089 | https://github.com/pydata/xarray/issues/6174#issuecomment-1017782089 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848qh9J | TomNicholas 35968931 | 2022-01-20T18:11:26Z | 2022-01-20T18:12:32Z | MEMBER |
Ah - thanks for the clarification as to the context @tovogt !
That's fair enough.
So are you asking if:

a) We should add a function to xarray which uses the same trick your helper functions do, for when people have a similar problem to you?

b) We should use the same trick your helper functions do to rewrite the I/O implementation of DataTree to only require one open/close? (It seems to me that this could be the best of both worlds, once implemented.)

c) There is some other way to do this even faster than your helper functions?

EDIT: Tagging @alexamici / @aurghs for their backends expertise + interest in DataTree |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1017298572 | https://github.com/pydata/xarray/issues/6174#issuecomment-1017298572 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848or6M | tovogt 57705593 | 2022-01-20T09:53:16Z | 2022-01-20T09:53:32Z | CONTRIBUTOR | Thanks for your quick response, Tom! I'm sure that DataTree is a really neat solution for most people working with hierarchically structured data. In my case, we are talking about a very unusual application of the NetCDF4 groups feature: We store literally thousands of very small NetCDF datasets in a single file. A file containing 3000 datasets is typically not larger than 100 MB. With that setup, the I/O performance is critical. Opening and closing the file on each group read/write is very, very bad. On our cluster this means that writing that 100 MB file takes 10 hours with your DataTree implementation, and 30 minutes with my helper functions. For reading, the effect is smaller, but still noticeable. So, my request is really about the I/O performance, and I don't need a full-fledged hierarchical data management API in xarray for that. |
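A back-of-the-envelope cost model is enough to see why the number of open/close operations dominates here (the per-operation costs below are hypothetical, chosen only so the totals match the reported 10 hours vs. 30 minutes):

```python
def write_time(n_groups, open_cost, write_cost, reopen_per_group):
    """Total seconds to write n_groups groups to a single file.

    open_cost: seconds per file open/close cycle (hypothetical)
    write_cost: seconds to write one group's data (hypothetical)
    reopen_per_group: True = naive per-group open/close, False = single open
    """
    n_opens = n_groups if reopen_per_group else 1
    return n_opens * open_cost + n_groups * write_cost

n = 3000
naive = write_time(n, open_cost=11.4, write_cost=0.6, reopen_per_group=True)
single = write_time(n, open_cost=11.4, write_cost=0.6, reopen_per_group=False)
print(f"{naive / 3600:.1f} h vs {single / 60:.0f} min")  # 10.0 h vs 30 min
```

With thousands of tiny groups, the fixed open/close cost swamps the actual write time, which is exactly the effect described above.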
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 | |
1016705107 | https://github.com/pydata/xarray/issues/6174#issuecomment-1016705107 | https://api.github.com/repos/pydata/xarray/issues/6174 | IC_kwDOAMm_X848mbBT | TomNicholas 35968931 | 2022-01-19T17:37:12Z | 2022-01-19T18:05:07Z | MEMBER |
If you've read through all of #4118 you will have seen that there is a prototype package providing a nested data structure which can handle groups. Using `DataTree`, it's as simple as:

```python
from datatree import DataTree

dt = DataTree.from_dict(ds_dict)
dt.to_netcdf('filepath.nc')
```

(Here if you want groups within groups then the keys in the dictionary should be specified like filepaths, e.g. …)
Again
To extract all the groups as individual datasets you can do this to recreate the dictionary of datasets:
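The two directions described above - building a tree from filepath-like keys and flattening it back to a dictionary of datasets - can be illustrated with plain dicts and toy payloads (hypothetical `nest`/`flatten` helpers for illustration, not the actual datatree API):

```python
def nest(flat):
    """Turn {"/g1/g2": payload, ...} into nested dicts, one level per path part."""
    root = {}
    for path, payload in flat.items():
        node = root
        parts = [p for p in path.split("/") if p]
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = payload
    return root

def flatten(tree, prefix=""):
    """Inverse of nest: map each non-dict leaf back to its "/a/b" path."""
    flat = {}
    for name, child in tree.items():
        path = f"{prefix}/{name}"
        if isinstance(child, dict):
            flat.update(flatten(child, prefix=path))
        else:
            flat[path] = child
    return flat

ds_dict = {"/a/b": 1, "/a/c": 2, "/d": 3}  # toy payloads instead of Datasets
assert flatten(nest(ds_dict)) == ds_dict
```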
Is your solution noticeably faster? We (@jhamman and I) haven't really thought about the speed of DataTree I/O yet, preferring to just make something simple which works for now. The current I/O code for DataTree is here. Despite that project only being a prototype, it is still probably the best solution to your problem that we currently have (at least the neatest). If you are interested in trying it out and reporting any problems, that would be greatly appreciated! EDIT: The idea discussed here might also be of interest to you. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
[FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation 1108138101 |