html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/6174#issuecomment-1028730657,https://api.github.com/repos/pydata/xarray/issues/6174,1028730657,IC_kwDOAMm_X849US8h,57705593,2022-02-03T08:39:45Z,2022-02-03T08:41:16Z,CONTRIBUTOR,"> Have you seen [`xarray.save_mfdataset`](https://xarray.pydata.org/en/stable/generated/xarray.save_mfdataset.html)?
>
> In principle, it was designed for exactly this sort of thing.

Thanks for the hint! Unfortunately, the docstring already says that ""it is no different than calling to_netcdf repeatedly"". And as I explained in my OP, this causes repeated file open/close operations - which is the whole point of this issue.

Furthermore, when using `save_mfdataset` with my setup, it complains:

```
ValueError: cannot use mode='w' when writing multiple datasets to the same path
```

But when using `mode='a'` instead, it complains that the file doesn't exist.

However, it might still be the way to go API-wise. So, when talking about a solution to this issue, we could aim at fixing `save_mfdataset`:

1) Writing to the same file should use a single open/close operation.
2) Support `mode='w'` (or `mode='w+'`) when writing several datasets to the same path.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1028136906,https://api.github.com/repos/pydata/xarray/issues/6174,1028136906,IC_kwDOAMm_X849SB_K,1217238,2022-02-02T16:46:24Z,2022-02-02T17:20:50Z,MEMBER,"Have you seen [`xarray.save_mfdataset`](https://xarray.pydata.org/en/stable/generated/xarray.save_mfdataset.html)?
In principle, it was designed for exactly this sort of thing.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1019879801,https://api.github.com/repos/pydata/xarray/issues/6174,1019879801,IC_kwDOAMm_X848yiF5,57705593,2022-01-24T09:16:40Z,2022-01-24T09:16:40Z,CONTRIBUTOR,"> That's good at least! Do you have any suggestions for where the docs should be improved? PRs are of course always welcome too

Here is my PR for the docstring improvements: https://github.com/pydata/xarray/pull/6187","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1019849836,https://api.github.com/repos/pydata/xarray/issues/6174,1019849836,IC_kwDOAMm_X848yaxs,57705593,2022-01-24T08:43:36Z,2022-01-24T08:43:36Z,CONTRIBUTOR,"It's not at all tricky to implement the listing of groups in a NETCDF4 file, at least not for the ""netcdf4"" engine.
The code for that is in my OP above:

```python
def _xr_nc4_groups_from_store(store):
    """"""List all groups contained in the given NetCDF4 data store

    Parameters
    ----------
    store : xarray.backend.NetCDF4DataStore

    Returns
    -------
    list of str
    """"""
    def iter_groups(ds, prefix=""""):
        groups = [""""]
        for group_name, group_ds in ds.groups.items():
            groups.extend([f""{prefix}{group_name}{subgroup}"" for subgroup in iter_groups(group_ds, prefix=""/"")])
        return groups
    with store._manager.acquire_context(False) as root:
        return iter_groups(root)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1019311097,https://api.github.com/repos/pydata/xarray/issues/6174,1019311097,IC_kwDOAMm_X848wXP5,14371165,2022-01-22T17:09:30Z,2022-01-22T17:09:30Z,MEMBER,"Is it that difficult to get a list of groups though?

I've been testing a backend engine that merges many groups into 1 dataset (dims/coords/variables renamed slightly to avoid duplicate names until they've been interpolated together) using `h5py`. Getting the groups is like the first thing you have to do; the code would look something like this:

```python
>>> f = h5py.File('foo.hdf5','w')
>>> f.name
'/'
>>> list(f.keys())
[]
```

https://docs.h5py.org/en/stable/high/group.html

Sure, it can be quite tiresome to navigate the backend engines and 3rd party modules in xarray to add this. But most of them use h5py or something quite similar at their core, so it shouldn't be THAT bad.
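To make that concrete: recursively collecting all group paths with plain `h5py` might look something like this (a rough sketch using h5py's `visititems` traversal; `list_groups` is a hypothetical name, not an existing xarray or h5py API):

```python
import h5py

def list_groups(path):
    # Collect the full paths of all groups in an HDF5/NetCDF4 file.
    # visititems() walks every object below the root group, passing
    # each object's path (relative to the root) and the object itself.
    groups = []
    def collect(name, obj):
        if isinstance(obj, h5py.Group):
            groups.append('/' + name)
    with h5py.File(path, 'r') as f:
        f.visititems(collect)
    return groups
```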
For example, one could add another method here that retrieves them in a quick and easy way: https://github.com/pydata/xarray/blob/c54123772817875678ec7ad769e6d4d6612aeb92/xarray/backends/common.py#L356-L360","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1018681263,https://api.github.com/repos/pydata/xarray/issues/6174,1018681263,IC_kwDOAMm_X848t9ev,35968931,2022-01-21T16:48:42Z,2022-01-21T16:48:42Z,MEMBER,"> I don't think our project would add DataTree as a new dependency just for this as long as we have a very easy and viable solution of ourselves.

FYI the plan with DataTree is to eventually integrate the work upstream into xarray, so no new dependency would be required at that point. That might take a while, however.

> If this were communicated more transparently in the docstrings, it would bring us a big step closer to the solution of this issue

That's good at least! Do you have any suggestions for where the docs should be improved? PRs are of course always welcome too :grin:

> one problem left: Getting a full list of all groups contained in a NetCDF4 file so that we can read them all in.
>
> I would insist that xarray should be able to do this. Maybe we need an open_datasets_from_groups function for that, or rather a function list_datasets. But it should somehow be solvable within the xarray API without requiring a two-year debate about the management and representation of hierarchical data structures.

I agree, and would be open to a function like this (even if DataTree eventually renders it redundant). It's definitely an omission on our part that xarray still doesn't provide an easy way to do this - I've found myself wanting to easily see all the groups multiple times.
However, my understanding is that [it's slightly tricky to implement](https://github.com/pydata/xarray/issues/4840#issuecomment-766198081), though suggestions/corrections are welcome!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1018257806,https://api.github.com/repos/pydata/xarray/issues/6174,1018257806,IC_kwDOAMm_X848sWGO,57705593,2022-01-21T07:40:55Z,2022-01-21T07:46:06Z,CONTRIBUTOR,"When I first posted this issue, I thought the best solution would be to just implement my proposed helper functions as part of the official xarray API. I don't think our project would add DataTree as a new dependency just for this as long as we have a very easy and viable solution of our own. But now I have a new idea.

First, I noticed that `open_dataset` won't actually close the file handle, but will reuse it later if needed. So, at least there is no performance problem with the current *read* setup.

For writing, there should be an option in `to_netcdf` that ensures that xarray does not close the file handle. xarray already uses a `CachingFileManager` to open NetCDF4 files:

https://github.com/pydata/xarray/blob/0ffb0f42282a1b67c4950e90e1e4ecd146307aa8/xarray/backends/netCDF4_.py#L379-L381

That means the manager already ensures that the same file handle is re-used in subsequent `to_netcdf` operations on the same file, unless it's closed in the meantime. Closing is managed here:

https://github.com/pydata/xarray/blob/0ffb0f42282a1b67c4950e90e1e4ecd146307aa8/xarray/backends/api.py#L1072-L1094

It's a bit opaque when closing is actually triggered in practice - especially if you only look at the current docstrings.
I found that, in fact, setting `compute=False` in `to_netcdf` will prevent the closing until you explicitly call compute on the returned object:

```python
for name, ds in zip(ds_names, ds_list):
    delayed = ds.to_netcdf(path, group=name, compute=False)
    delayed.compute()
```

If this were communicated more transparently in the docstrings, it would bring us a big step closer to the solution of this issue :slightly_smiling_face:

Apart from that, there is only one problem left: **Getting a full list of all groups contained in a NetCDF4 file so that we can read them all in.**

In DataTree, you fall back to using the NetCDF4 (or h5netcdf) API directly for that purpose: `_get_nc_dataset_class` and `_iter_nc_groups`. That's not the worst solution. However, I would insist that xarray should be able to do this. Maybe we need an `open_datasets_from_groups` function for that, or rather a function `list_datasets`. But it should somehow be solvable within the `xarray` API without requiring a two-year debate about the management and representation of hierarchical data structures.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1017782089,https://api.github.com/repos/pydata/xarray/issues/6174,1017782089,IC_kwDOAMm_X848qh9J,35968931,2022-01-20T18:11:26Z,2022-01-20T18:12:32Z,MEMBER,"> In my case, we are talking about a very unusual application of the NetCDF4 groups feature: We store literally thousands of very small NetCDF datasets in a single file. A file containing 3000 datasets is typically not larger than 100 MB.

Ah - thanks for the clarification as to the context @tovogt !

> So, my request is really about the I/O performance, and I don't need a full-fledged hierarchical data management API in xarray for that.

That's fair enough.
> On our cluster this means that writing that 100 MB file takes 10 hours with your DataTree implementation, and 30 minutes with my helper functions. For reading, the effect is smaller, but still noticeable.

So are you asking if:

a) We should add a function to xarray which uses the same trick your helper functions do, for when people have a similar problem to you?

b) We should use the same trick your helper functions do to rewrite the I/O implementation of DataTree to only require one open/close? (It seems to me that this could be the best of both worlds, once implemented.)

c) Whether there is some other way to do this even faster than your helper functions?

EDIT: Tagging @alexamici / @aurghs for their backends expertise + interest in DataTree","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1017298572,https://api.github.com/repos/pydata/xarray/issues/6174,1017298572,IC_kwDOAMm_X848or6M,57705593,2022-01-20T09:53:16Z,2022-01-20T09:53:32Z,CONTRIBUTOR,"Thanks for your quick response, Tom!

I'm sure that DataTree is a really neat solution for most people working with hierarchically structured data. In my case, we are talking about a very unusual application of the NetCDF4 groups feature: We store literally thousands of very small NetCDF datasets in a single file. A file containing 3000 datasets is typically not larger than 100 MB.

With that setup, the I/O performance is critical. Opening and closing the file on each group read/write is very, very bad. On our cluster this means that writing that 100 MB file takes 10 hours with your DataTree implementation, and 30 minutes with my helper functions. For reading, the effect is smaller, but still noticeable.
So, my request is really about the I/O performance, and I don't need a full-fledged hierarchical data management API in xarray for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101
https://github.com/pydata/xarray/issues/6174#issuecomment-1016705107,https://api.github.com/repos/pydata/xarray/issues/6174,1016705107,IC_kwDOAMm_X848mbBT,35968931,2022-01-19T17:37:12Z,2022-01-19T18:05:07Z,MEMBER,"> I would like to have a function xr.to_netcdf that writes a list (or a dictionary) of datasets to a single NetCDF4 file.

If you've read through all of #4118 you will have seen that there is a [prototype package](https://github.com/TomNicholas/datatree) providing a nested data structure which can handle groups. Using `DataTree` we can easily write a dictionary of datasets to a single netCDF file as groups:

```python
from datatree import DataTree

dt = DataTree.from_dict(ds_dict)
dt.to_netcdf('filepath.nc')
```

(Here if you want groups within groups then the keys in the dictionary should be specified like filepaths, e.g. `/group1/group2/ds_name`.)

> Ideally there should also be a way to read many datasets at once from a single NetCDF4 file using xr.open_dataset.

Again `DataTree` allows you to open all the groups at once, returning a tree-like structure which contains all the groups:

```python
dt = open_datatree('filepath.nc')
```

To extract all the groups as individual datasets you can do this to recreate the dictionary of datasets:

```python
ds_dict = {node.pathstr: node.ds for node in dt.subtree}
```

> However, this is really slow when you have many (hundreds or thousands of) small datasets because the file is opened and closed in every iteration.
>
> Currently, I'm using the following read/write functions to achieve the same:

Is your solution noticeably faster?
We (@jhamman and I) haven't really thought about the speed of DataTree I/O yet, preferring to just make something simple which works for now. The current [I/O code for DataTree is here](https://github.com/TomNicholas/datatree/blob/main/datatree/io.py).

Despite that project only being a prototype, it is still probably the best solution to your problem that we currently have (at least the neatest). If you are interested in trying it out and reporting any problems, that would be greatly appreciated!

EDIT: The [idea discussed here](https://github.com/TomNicholas/datatree/issues/51) might also be of interest to you.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1108138101