
issues


3 rows where type = "issue" and user = 57705593 sorted by updated_at descending

#6310: Only auxiliary coordinates are listed in nc variable attribute
opened by tovogt (57705593) · state: closed (completed) · 9 comments · created 2022-02-28T12:08:33Z · closed 2022-04-10T14:45:38Z · CONTRIBUTOR · repo: xarray

What happened?

When saving a dataset to a NetCDF file with both dimension and auxiliary coordinates, the data variables' coordinates attribute only lists the coordinates that are not related to any dimension (so-called "auxiliary" coordinates).

What did you expect to happen?

I expect xarray to adhere to the CF conventions, which say:

Every element of every feature must be unambiguously associated with its space and time coordinates and with the feature that contains it. The coordinates attribute must be attached to every data variable to indicate the spatiotemporal coordinate variables that are needed to geo-locate the data. [from https://cfconventions.org/cf-conventions/cf-conventions.html#coordinates-metadata]

See also this example: https://cfconventions.org/cf-conventions/cf-conventions.html#_single_trajectory

Minimal Complete Verifiable Example

```python
import xarray as xr

xr.Dataset(
    {"values": ("time", [0.0, 0.1])},
    coords={
        "time": ("time", [0, 1]),
        "lat": ("time", [5, 4]),
        "lon": ("time", [10, 12]),
    },
).to_netcdf("test.nc")
```

```bash
$ ncdump -h test.nc
netcdf test {
dimensions:
	time = 2 ;
variables:
	double values(time) ;
		values:_FillValue = NaN ;
		values:coordinates = "lon lat" ;
	int64 time(time) ;
	int64 lat(time) ;
	int64 lon(time) ;
}
```

Note that values:coordinates doesn't list the time coordinate (unlike the example in the CF conventions, which does list it).
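To spell out the terminology: a *dimension coordinate* is a coordinate variable whose name equals its own dimension (`time` above), while an *auxiliary coordinate* varies along a dimension with a different name (`lat` and `lon`). A minimal stand-alone sketch of that split, reproducing the reported attribute contents (an illustration only, not xarray's actual implementation):

```python
# Coordinate variables of the example dataset, mapped to their dimensions.
coords = {"time": ("time",), "lat": ("time",), "lon": ("time",)}

# A coordinate counts as a "dimension coordinate" when its name equals its
# sole dimension; everything else is an "auxiliary" coordinate.
auxiliary = [name for name, dims in coords.items() if dims != (name,)]

# The reported behavior: only the auxiliary coordinates end up in
# values:coordinates, so `time` is missing.
print(" ".join(auxiliary))  # lat lon
```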

Relevant log output

No response

Anything else we need to know?

I don't mean to offend anyone; please correct me if I've read the CF conventions in the wrong way, if this is a special case, or if the CF conventions in general recommend including only auxiliary coordinates in the coordinates attribute. I couldn't find any such recommendation in the conventions text, but it's a very long document. What I did find (see my second post) is that it is "permissible, but optional" to list non-auxiliary variables in general, and that it is required for the special case of "Discrete Sampling Geometries".

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-100-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.19.0
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.0
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: 3.4.0
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: 0.9.9.0
iris: None
bottleneck: 1.3.2
dask: 2021.07.1
distributed: 2021.07.1
matplotlib: 3.4.2
cartopy: 0.20.1
seaborn: None
numbagg: None
pint: 0.17
setuptools: 59.2.0
pip: 21.2.1
conda: None
pytest: None
IPython: 7.29.0
sphinx: None
```

#6174: [FEATURE]: Read from/write to several NetCDF4 groups with a single file open/close operation
opened by tovogt (57705593) · state: open · 10 comments · created 2022-01-19T14:02:36Z · updated 2022-02-03T08:41:16Z · CONTRIBUTOR · repo: xarray

Is your feature request related to a problem?

I know that there is a big discussion going on in #4118 about organizing hierarchies of datasets within xarray's data structures. But this issue is supposed to address only a comparatively simple aspect of it.

Suppose that you have a list ds_list of xarray.Dataset objects with different dimensions etc. and you want to store them all in one NetCDF4 file by using the group feature introduced in NetCDF4. The group name of each dataset is stored in ds_names. Obviously, you can do something like this:

```python
for name, ds in zip(ds_names, ds_list):
    ds.to_netcdf(path, group=name)
```

However, this is really slow when you have many (hundreds or thousands of) small datasets, because the file is opened and closed in every iteration.

Describe the solution you'd like

I would like to have a function xr.to_netcdf that writes a list (or a dictionary) of datasets to a single NetCDF4 file with a single open/close operation. Ideally there should also be a way to read many datasets at once from a single NetCDF4 file using xr.open_dataset.

Describe alternatives you've considered

Currently, I'm using the following read/write functions to achieve the same:

```python
import pathlib

from xarray.backends import NetCDF4DataStore
from xarray.backends.api import dump_to_store
from xarray.backends.common import ArrayWriter
from xarray.backends.store import StoreBackendEntrypoint


def _xr_to_netcdf_multi(path, ds_dict, encoding=None):
    """Write multiple xarray Datasets to separate groups in a single NetCDF4 file

    Parameters
    ----------
    path : str or Path
        Path of the target NetCDF file.
    ds_dict : dict whose keys are group names and values are xr.Dataset
        Each xr.Dataset in the dict is stored in the group identified by its key in the dict.
        Note that an empty string ("") is a valid group name and refers to the root group.
    encoding : dict whose keys are group names and values are dict, optional
        For each dataset/group, one dict that is compliant with the format of the `encoding`
        keyword parameter in `xr.Dataset.to_netcdf`. Default: None
    """
    path = str(pathlib.Path(path).expanduser().absolute())
    store = NetCDF4DataStore.open(path, "w", "NETCDF4", None)
    try:
        writer = ArrayWriter()
        for group, dataset in ds_dict.items():
            store._group = group
            unlimited_dims = dataset.encoding.get("unlimited_dims", None)
            # Don't overwrite `encoding` itself: later groups still need the full dict.
            group_encoding = None if encoding is None else encoding.get(group)
            dump_to_store(dataset, store, writer,
                          encoding=group_encoding, unlimited_dims=unlimited_dims)
    finally:
        store.close()


def _xr_open_dataset_multi(path, prefix=""):
    """Read multiple xarray Datasets from groups contained in a single NetCDF4 file

    Warning: The data is loaded into memory!

    Parameters
    ----------
    path : str or Path
        Path of the NetCDF file to read.
    prefix : str, optional
        If given, only read groups whose name starts with this prefix. Default: ""

    Returns
    -------
    ds_dict : dict whose keys are group names and values are xr.Dataset
        Each xr.Dataset in the dict is taken from the group identified by its key in the dict.
        Note that an empty string ("") is a valid group name and refers to the root group.
    """
    path = str(pathlib.Path(path).expanduser().absolute())
    store = NetCDF4DataStore.open(path, "r", "NETCDF4", None)
    ds_dict = {}
    try:
        groups = [g for g in _xr_nc4_groups_from_store(store) if g.startswith(prefix)]
        store_entrypoint = StoreBackendEntrypoint()
        for group in groups:
            store._group = group
            ds = store_entrypoint.open_dataset(store)
            ds.load()
            ds_dict[group] = ds
    finally:
        store.close()
    return ds_dict


def _xr_nc4_groups_from_store(store):
    """List all groups contained in the given NetCDF4 data store

    Parameters
    ----------
    store : xarray.backend.NetCDF4DataStore

    Returns
    -------
    list of str
    """
    def iter_groups(ds, prefix=""):
        groups = [""]
        for group_name, group_ds in ds.groups.items():
            groups.extend([f"{prefix}{group_name}{subgroup}"
                           for subgroup in iter_groups(group_ds, prefix="/")])
        return groups

    with store._manager.acquire_context(False) as root:
        return iter_groups(root)
```
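The group-discovery helper above flattens the nested `groups` mapping of a netCDF4 node into slash-separated paths. A minimal stand-alone sketch of that traversal, using a stub in place of a real netCDF4 group so it runs without netCDF4 installed:

```python
class FakeGroup:
    """Stub standing in for a netCDF4 group node, which exposes its
    subgroups via a `.groups` mapping of name -> child node."""

    def __init__(self, groups=None):
        self.groups = groups or {}


def iter_groups(ds, prefix=""):
    # Every node contributes its own path first (the root group is "");
    # children are prefixed with "/" and concatenated recursively.
    groups = [""]
    for group_name, group_ds in ds.groups.items():
        groups.extend(f"{prefix}{group_name}{subgroup}"
                      for subgroup in iter_groups(group_ds, prefix="/"))
    return groups


root = FakeGroup({"a": FakeGroup({"b": FakeGroup()}), "c": FakeGroup()})
print(iter_groups(root))  # ['', 'a', 'a/b', 'c']
```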

Additional context

No response

#6181: [FEATURE]: Support reading/writing NetCDF4 from/to buffer (and .nc.gz)
opened by tovogt (57705593) · state: open · 0 comments · created 2022-01-21T07:59:53Z · updated 2022-01-21T16:07:46Z · CONTRIBUTOR · repo: xarray

Is your feature request related to a problem?

There is a very old PR from 2016 about adding gzip support to NetCDF4 when used with OpenDAP: https://github.com/pydata/xarray/pull/817. The last comment, from 2018, claimed that "netCDF4-Python does not support Python file objects". However, that limitation was already resolved in 2017 in netCDF4-python (https://github.com/Unidata/netcdf4-python/pull/652).

Describe the solution you'd like

It would be great if xr.Dataset.to_netcdf and xr.open_dataset supported file-like objects when used with engine="netcdf4". Once that is done, it should be easy to also support .nc.gz files for the NETCDF4 format (currently this is only supported for NETCDF3).
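In the meantime, one possible workaround is to decompress the .nc.gz into an in-memory buffer first. A sketch, where the commented xarray call is an assumption: it relies on the h5netcdf engine accepting file-like objects (h5py does), requires h5netcdf to be installed, and only applies to NETCDF4/HDF5 files:

```python
import gzip
import io


def read_gz_to_buffer(path):
    # Decompress a .nc.gz file fully into memory and wrap the bytes in a
    # seekable file-like object.
    with gzip.open(path, "rb") as f:
        return io.BytesIO(f.read())


# Hypothetical usage, sidestepping the netcdf4 engine entirely:
# import xarray as xr
# ds = xr.open_dataset(read_gz_to_buffer("data.nc.gz"), engine="h5netcdf")
```

The trade-off is that the whole decompressed file lives in memory, which is fine for small files but not for multi-gigabyte datasets.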

Describe alternatives you've considered

No response

Additional context

No response



CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);