issues

8 rows where repo = 13221727, type = "issue" and user = 5821660 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1881254873 I_kwDOAMm_X85wIavZ 8146 Add proper tests with fill_value for test_interpolate_pd_compat-tests kmuehlbauer 5821660 closed 0     2 2023-09-05T06:02:10Z 2023-09-24T15:28:32Z 2023-09-24T15:28:32Z MEMBER      

What is your issue?

In #8125 breaking tests in upstream CI were reported. The root cause is the upstream pandas issue https://github.com/pandas-dev/pandas/issues/54920, which is tackled in https://github.com/pandas-dev/pandas/pull/54927.

To get the tests working, the fill_value kwarg was removed in #8139. The tests didn't completely cover the use of fill_value for scipy-based interpolators (the np.nan that was used is the default value).

This issue is to keep track of adding proper tests for fill_value.
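
A minimal sketch of what such a test could look like; the test name and parameter values are hypothetical, and method="slinear" is used because it dispatches to the scipy-based interpolator, which accepts fill_value:

```python
# hypothetical sketch, not the actual xarray test-suite code
import numpy as np
import pytest
import xarray as xr

@pytest.mark.parametrize("fill_value", [np.nan, -9999.0, "extrapolate"])
def test_interpolate_na_scipy_fill_value(fill_value):
    da = xr.DataArray([np.nan, 1.0, np.nan, 3.0, np.nan], dims="x")
    # "slinear" routes through scipy.interpolate.interp1d, which honors fill_value
    actual = da.interpolate_na(dim="x", method="slinear", fill_value=fill_value)
    # the interior gap must be filled regardless of fill_value
    assert actual.isel(x=slice(1, 4)).notnull().all()
```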

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8146/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1696097756 I_kwDOAMm_X85lGGXc 7817 nanosecond precision lost when reading time data kmuehlbauer 5821660 closed 0     3 2023-05-04T14:06:17Z 2023-09-17T08:15:27Z 2023-09-17T08:15:27Z MEMBER      

What happened?

When reading nanosecond-precision time data from netCDF, the precision is lost. This happens because CFMaskCoder converts the variable to floating point and inserts NaN. In CFDatetimeCoder the floating-point values are cast back to int64 to transform into datetime64. This casting is sometimes undefined, hence #7098.
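
The core of the loss can be demonstrated with plain NumPy; a minimal sketch (the value below is illustrative): float64 has a 53-bit mantissa, so nanosecond counts beyond 2**53 do not survive a round trip through floating point.

```python
import numpy as np

t = np.int64(1_000_000_000_000_000_003)  # a nanosecond count beyond 2**53
roundtripped = np.float64(t).astype(np.int64)
# the float64 detour rounds to the nearest representable value
print(t, roundtripped, bool(t == roundtripped))  # values differ: precision lost
```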

What did you expect to happen?

Precision should be preserved. The transformation to floating point should be omitted.

Minimal Complete Verifiable Example

```Python
import xarray as xr
import numpy as np
import netCDF4 as nc
import matplotlib.pyplot as plt

# create time array and fillvalue
min_ns = -9223372036854775808
max_ns = 9223372036854775807
cnt = 2000
time_arr = np.arange(min_ns, min_ns + cnt, dtype=np.int64).astype("M8[ns]")
fill_value = np.datetime64("1900-01-01", "ns")

# create ncfile with time with attached _FillValue
with nc.Dataset("test.nc", mode="w") as ds:
    ds.createDimension("x", cnt)
    time = ds.createVariable("time", "<i8", ("x",), fill_value=fill_value)
    time[:] = time_arr
    time.units = "nanoseconds since 1970-01-01"

# normal decoding
with xr.open_dataset("test.nc").load() as xr_ds:
    print("--- normal decoding ----------------------")
    print(xr_ds["time"])
    plt.plot(xr_ds["time"].values.astype(np.int64) + max_ns, color="g", label="normal")

# no decoding
with xr.open_dataset("test.nc", decode_cf=False).load() as xr_ds:
    print("--- no decoding ----------------------")
    print(xr_ds["time"])
    plt.plot(xr_ds["time"].values + max_ns, lw=5, color="b", label="raw")

# do not decode times, this shows how the CFMaskCoder converts
# the array to floating point before it would run CFDatetimeCoder
with xr.open_dataset("test.nc", decode_times=False).load() as xr_ds:
    print("--- no time decoding ----------------------")
    print(xr_ds["time"])

# do not run CFMaskCoder to show that times will be converted nicely
# with CFDatetimeCoder
with xr.open_dataset("test.nc", mask_and_scale=False).load() as xr_ds:
    print("--- no masking ------------------------------")
    print(xr_ds["time"])
    plt.plot(xr_ds["time"].values.astype(np.int64) + max_ns, lw=2, color="r", label="nomask")

plt.legend()
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
--- normal decoding ----------------------
<xarray.DataArray 'time' (x: 2000)>
array(['NaT', 'NaT', 'NaT', ...,
       '1677-09-21T00:12:43.145226240', '1677-09-21T00:12:43.145226240',
       '1677-09-21T00:12:43.145226240'], dtype='datetime64[ns]')
Dimensions without coordinates: x
--- no decoding ----------------------
<xarray.DataArray 'time' (x: 2000)>
array([-9223372036854775808, -9223372036854775807, -9223372036854775806,
       ..., -9223372036854773811, -9223372036854773810,
       -9223372036854773809])
Dimensions without coordinates: x
Attributes:
    _FillValue:  -2208988800000000000
    units:       nanoseconds since 1970-01-01
--- no time decoding ----------------------
<xarray.DataArray 'time' (x: 2000)>
array([-9.22337204e+18, -9.22337204e+18, -9.22337204e+18, ...,
       -9.22337204e+18, -9.22337204e+18, -9.22337204e+18])
Dimensions without coordinates: x
Attributes:
    units:  nanoseconds since 1970-01-01
--- no masking ------------------------------
<xarray.DataArray 'time' (x: 2000)>
array(['NaT', '1677-09-21T00:12:43.145224193',
       '1677-09-21T00:12:43.145224194', ...,
       '1677-09-21T00:12:43.145226189', '1677-09-21T00:12:43.145226190',
       '1677-09-21T00:12:43.145226191'], dtype='datetime64[ns]')
Dimensions without coordinates: x
Attributes:
    _FillValue:  -2208988800000000000
```

Anything else we need to know?

Plot from above code: (figure not preserved in this export)

Xref: #7098, https://github.com/pydata/xarray/issues/7790#issuecomment-1531050846

Environment

INSTALLED VERSIONS ------------------ commit: None python: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:27:40) [GCC 11.3.0] python-bits: 64 OS: Linux OS-release: 5.14.21-150400.24.60-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: ('de_DE', 'UTF-8') libhdf5: 1.14.0 libnetcdf: 4.9.2 xarray: 2023.4.2 pandas: 2.0.1 numpy: 1.24.2 scipy: 1.10.1 netCDF4: 1.6.3 pydap: None h5netcdf: 1.1.0 h5py: 3.8.0 Nio: None zarr: 2.14.2 cftime: 1.6.2 nc_time_axis: None PseudoNetCDF: None iris: None bottleneck: 1.3.7 dask: 2023.3.1 distributed: 2023.3.1 matplotlib: 3.7.1 cartopy: 0.21.1 seaborn: None numbagg: None fsspec: 2023.3.0 cupy: 11.6.0 pint: 0.20.1 sparse: None flox: None numpy_groupies: None setuptools: 67.6.0 pip: 23.0.1 conda: None pytest: 7.2.2 mypy: None IPython: 8.11.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7817/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1655569401 I_kwDOAMm_X85irfv5 7723 default fill_value not masked when read from file kmuehlbauer 5821660 closed 0     5 2023-04-05T12:54:53Z 2023-09-13T12:44:54Z 2023-09-13T12:44:54Z MEMBER      

What happened?

When reading a netCDF file which has been created with fill_value=None (the default), that data is not masked. This manifests when writing back to disk.

What did you expect to happen?

Values should be masked.

There seems to be a simple solution:

On read, apply the netCDF default fill_value to the variable's attributes before decoding if no _FillValue attribute is set. After decoding we could change that to np.nan for floating-point types.
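
A minimal sketch of that read-side step, assuming netCDF4's per-dtype defaults (netCDF4.default_fillvals) as the lookup table; ensure_fill_value is a hypothetical helper name:

```python
import netCDF4
import numpy as np

def ensure_fill_value(attrs, dtype):
    # hypothetical helper: inject the netCDF default _FillValue
    # when the variable on disk defines none
    if "_FillValue" not in attrs:
        key = f"{dtype.kind}{dtype.itemsize}"
        attrs["_FillValue"] = netCDF4.default_fillvals[key]
    return attrs

print(ensure_fill_value({}, np.dtype("f4")))  # {'_FillValue': 9.96921e+36}
```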

Minimal Complete Verifiable Example

```Python
import numpy as np
import netCDF4 as nc
import xarray as xr

with nc.Dataset("test-no-missing-01.nc", mode="w") as ds:
    x = ds.createDimension("x", 5)
    test = ds.createVariable("test", "f4", ("x",), fill_value=None)
    test[:4] = np.array([0.0, np.nan, 1.0, 8.0], dtype="f4")

with nc.Dataset("test-no-missing-01.nc") as ds:
    print(ds["test"])
    print(ds["test"][:])

with xr.open_dataset("test-no-missing-01.nc").load() as roundtrip:
    print(roundtrip)
    print(roundtrip["test"].attrs)
    print(roundtrip["test"].encoding)
    roundtrip.to_netcdf("test-no-missing-02.nc")

with nc.Dataset("test-no-missing-02.nc") as ds:
    print(ds["test"])
    print(ds["test"][:])
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
<class 'netCDF4._netCDF4.Variable'>
float32 test(x)
unlimited dimensions:
current shape = (5,)
filling on, default _FillValue of 9.969209968386869e+36 used
[0.0 nan 1.0 8.0 --]

<xarray.Dataset>
Dimensions:  (x: 5)
Dimensions without coordinates: x
Data variables:
    test     (x) float32 0.0 nan 1.0 8.0 9.969e+36
{}
{'zlib': False, 'szip': False, 'zstd': False, 'bzip2': False, 'blosc': False,
 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': True,
 'chunksizes': None, 'source': 'test-no-missing-01.nc', 'original_shape': (5,),
 'dtype': dtype('float32')}

<class 'netCDF4._netCDF4.Variable'>
float32 test(x)
    _FillValue: nan
unlimited dimensions:
current shape = (5,)
filling on
[0.0 -- 1.0 8.0 9.969209968386869e+36]
```

Anything else we need to know?

The issue is similar to #7722 but more intricate, as now the status of certain data values changes from masked to some netCDF-specific default value.

This happens when only parts of the source dataset have been written to. The default fill_value then gets delivered to the user, but it is not backed by a _FillValue attribute.

Environment

INSTALLED VERSIONS ------------------ commit: None python: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:27:40) [GCC 11.3.0] python-bits: 64 OS: Linux OS-release: 5.14.21-150400.24.55-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: ('de_DE', 'UTF-8') libhdf5: 1.14.0 libnetcdf: 4.9.2 xarray: 2023.3.0 pandas: 1.5.3 numpy: 1.24.2 scipy: 1.10.1 netCDF4: 1.6.3 pydap: None h5netcdf: 1.1.0 h5py: 3.8.0 Nio: None zarr: 2.14.2 cftime: 1.6.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2023.3.1 distributed: 2023.3.1 matplotlib: None cartopy: None seaborn: None numbagg: None fsspec: 2023.3.0 cupy: 11.6.0 pint: 0.20.1 sparse: None flox: None numpy_groupies: None setuptools: 67.6.0 pip: 23.0.1 conda: None pytest: 7.2.2 mypy: None IPython: 8.11.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7723/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1655483374 I_kwDOAMm_X85irKvu 7722 Conflicting _FillValue and missing_value on write kmuehlbauer 5821660 open 0     3 2023-04-05T11:56:46Z 2023-06-21T13:23:16Z   MEMBER      

What happened?

see also #7191

If both missing_value and _FillValue are attributes of a DataArray, it can't be written out to file if the two contradict:

```python
ValueError: Variable 'test' has conflicting _FillValue (nan) and missing_value (1.0). Cannot encode data.
```

This happens if missing_value is an attribute of a variable in an existing netCDF file. On read, the missing_value will be masked with np.nan in the data and preserved within encoding. On write, _FillValue will also be added as an attribute by xarray (if not available, at least for floating-point types). So far so good.

The error first manifests when you read back this file and try to write it again. There is no warning on the second read that _FillValue and missing_value differ; the failure only appears on the second write.

What did you expect to happen?

The file should be written on the second roundtrip.

There are at least two solutions to this:

  1. Mask missing_value on read and purge missing_value completely in favor of _FillValue (see the sketch after this list).
  2. Do not handle missing_value at all, but let the user take action.
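
A minimal sketch of a user-side workaround along the lines of option 1, assuming one simply drops the stale missing_value from each variable's encoding before the second write (purge_missing_value is a hypothetical helper; file names follow the example below):

```python
import xarray as xr

def purge_missing_value(ds):
    # hypothetical helper: keep only _FillValue in encoding so that
    # CFMaskCoder no longer sees two conflicting fill attributes
    for var in ds.variables.values():
        var.encoding.pop("missing_value", None)
    return ds

with xr.open_dataset("test-no-fillval-02.nc") as roundtrip:
    purge_missing_value(roundtrip).to_netcdf("test-no-fillval-03.nc")
```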

Minimal Complete Verifiable Example

```Python
import numpy as np
import netCDF4 as nc
import xarray as xr

with nc.Dataset("test-no-fillval-01.nc", mode="w") as ds:
    x = ds.createDimension("x", 4)
    test = ds.createVariable("test", "f4", ("x",), fill_value=None)
    test.missing_value = 1.
    test.valid_min = 2.
    test.valid_max = 10.
    test[:] = np.array([0.0, np.nan, 1.0, 8.0], dtype="f4")

with nc.Dataset("test-no-fillval-01.nc") as ds:
    print(ds["test"])
    print(ds["test"][:])

with xr.open_dataset("test-no-fillval-01.nc").load() as roundtrip:
    print(roundtrip)
    print(roundtrip["test"].attrs)
    print(roundtrip["test"].encoding)
    roundtrip.to_netcdf("test-no-fillval-02.nc")

with xr.open_dataset("test-no-fillval-02.nc").load() as roundtrip:
    print(roundtrip)
    print(roundtrip["test"].attrs)
    print(roundtrip["test"].encoding)
    roundtrip.to_netcdf("test-no-fillval-03.nc")
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
<class 'netCDF4._netCDF4.Variable'>
float32 test(x)
    missing_value: 1.0
    valid_min: 2.0
    valid_max: 10.0
unlimited dimensions:
current shape = (4,)
filling on, default _FillValue of 9.969209968386869e+36 used

<xarray.Dataset>
Dimensions:  (x: 4)
Dimensions without coordinates: x
Data variables:
    test     (x) float32 0.0 nan nan 8.0
{'valid_min': 2.0, 'valid_max': 10.0}
{'zlib': False, 'szip': False, 'zstd': False, 'bzip2': False, 'blosc': False,
 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': True,
 'chunksizes': None, 'source': 'test-no-fillval-01.nc', 'original_shape': (4,),
 'dtype': dtype('float32'), 'missing_value': 1.0}

<xarray.Dataset>
Dimensions:  (x: 4)
Dimensions without coordinates: x
Data variables:
    test     (x) float32 0.0 nan nan 8.0
{'valid_min': 2.0, 'valid_max': 10.0}
{'zlib': False, 'szip': False, 'zstd': False, 'bzip2': False, 'blosc': False,
 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': True,
 'chunksizes': None, 'source': 'test-no-fillval-02.nc', 'original_shape': (4,),
 'dtype': dtype('float32'), 'missing_value': 1.0, '_FillValue': nan}

File /home/kai/miniconda/envs/xarray_311/lib/python3.11/site-packages/xarray/coding/variables.py:167, in CFMaskCoder.encode(self, variable, name)
    160 mv = encoding.get("missing_value")
    162 if (
    163     fv is not None
    164     and mv is not None
    165     and not duck_array_ops.allclose_or_equiv(fv, mv)
    166 ):
--> 167     raise ValueError(
    168         f"Variable {name!r} has conflicting _FillValue ({fv}) and missing_value ({mv}). Cannot encode data."
    169     )
    171 if fv is not None:
    172     # Ensure _FillValue is cast to same dtype as data's
    173     encoding["_FillValue"] = dtype.type(fv)

ValueError: Variable 'test' has conflicting _FillValue (nan) and missing_value (1.0). Cannot encode data.
```

Anything else we need to know?

The addition of _FillValue on write happens here:

https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/conventions.py#L300

https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/conventions.py#L144-L152

Environment

INSTALLED VERSIONS ------------------ commit: None python: 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:27:40) [GCC 11.3.0] python-bits: 64 OS: Linux OS-release: 5.14.21-150400.24.55-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: ('de_DE', 'UTF-8') libhdf5: 1.14.0 libnetcdf: 4.9.2 xarray: 2023.3.0 pandas: 1.5.3 numpy: 1.24.2 scipy: 1.10.1 netCDF4: 1.6.3 pydap: None h5netcdf: 1.1.0 h5py: 3.8.0 Nio: None zarr: 2.14.2 cftime: 1.6.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2023.3.1 distributed: 2023.3.1 matplotlib: None cartopy: None seaborn: None numbagg: None fsspec: 2023.3.0 cupy: 11.6.0 pint: 0.20.1 sparse: None flox: None numpy_groupies: None setuptools: 67.6.0 pip: 23.0.1 conda: None pytest: 7.2.2 mypy: None IPython: 8.11.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7722/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
547925778 MDU6SXNzdWU1NDc5MjU3Nzg= 3680 broken output of `find_root_and_group` for h5netcdf kmuehlbauer 5821660 closed 0     4 2020-01-10T08:02:06Z 2022-01-12T08:23:37Z 2020-01-23T06:36:25Z MEMBER      

MCVE Code Sample

```python
import netCDF4
import h5netcdf
import xarray
from xarray.backends.common import find_root_and_group

root = netCDF4.Dataset("test.nc", "w", format="NETCDF4")
test1 = root.createGroup("test1")
test1.createGroup("test2")
root.close()

h5n = h5netcdf.File('test.nc')
print(find_root_and_group(h5n['test1'])[1])
h5n.close()
```

This will output: `//test1//test1/test2`

Expected Output

`/test1/test2`

Problem Description

I stumbled over this while working on #3618. Although the function claims to retrieve the root and group name of netCDF4/h5netcdf datasets, the output for h5netcdf is broken.
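
For comparison, a minimal sketch of where the correct path is already available, assuming h5netcdf groups expose their full h5py-style path via .name (netCDF4 groups, by contrast, store only the last path component there):

```python
import h5netcdf

# assumes the test.nc created by the code sample above
with h5netcdf.File("test.nc", "r") as h5n:
    grp = h5n["test1"]["test2"]
    print(grp.name)  # '/test1/test2', the value the helper should return
```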

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 05:03:59) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp151.28.36-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: de_DE.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.7.3 xarray: 0.14.1+43.g3e9d9046.dirty pandas: 0.25.3 numpy: 1.17.3 scipy: 1.3.3 netCDF4: 1.5.3 pydap: None h5netcdf: 0.7.4 h5py: 2.10.0 Nio: None zarr: None cftime: 1.0.4.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2.8.0 distributed: 2.9.0 matplotlib: 3.1.2 cartopy: 0.17.0 seaborn: None numbagg: None setuptools: 42.0.2.post20191201 pip: 19.3.1 conda: None pytest: 5.3.1 IPython: 7.10.2 sphinx: 2.3.1
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3680/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
811933067 MDU6SXNzdWU4MTE5MzMwNjc= 4927 Times not decoding due to time_reference being in another variable kmuehlbauer 5821660 closed 0     12 2021-02-19T11:11:47Z 2021-02-21T08:34:15Z 2021-02-21T08:34:15Z MEMBER      

What happened:

Decoding of times fails for a netCDF4 CfRadial2-style file because the time_reference string is stored in another variable.

What you expected to happen:

time_reference should be extracted from the variable and time should be decoded properly.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

# create file with such feature
time = np.arange(10)
time_attrs = dict(standard_name="time", units="seconds since time_reference",
                  calendar="gregorian")
time_ref_attrs = dict(comments="UTC reference date. Format follows ISO 8601 standard.")

ds = xr.Dataset(
    data_vars=dict(
        time=(["time"], time, time_attrs),
        time_reference=([], '1970-01-01T00:00:00Z', time_ref_attrs),
    ))

ds.to_netcdf("test.nc")

# breaks with the error below
with xr.open_dataset("test.nc") as ds:
    print(ds)

# ad-hoc fix
with xr.open_dataset("test.nc", decode_times=False) as ds:
    tr = ds.time.attrs["units"].split(" ")
    nt = ds[tr[-1]].item()
    ds.time.attrs["units"] = " ".join([*tr[:-1], nt])
    ds = xr.decode_cf(ds)
    print(ds)
```

```python

AttributeError Traceback (most recent call last) ~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in _decode_cf_datetime_dtype(data, units, calendar, use_cftime) 142 try: --> 143 result = decode_cf_datetime(example_value, units, calendar, use_cftime) 144 except Exception:

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in decode_cf_datetime(num_dates, units, calendar, use_cftime) 224 try: --> 225 dates = _decode_datetime_with_pandas(flat_num_dates, units, calendar) 226 except (KeyError, OutOfBoundsDatetime, OverflowError):

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in _decode_datetime_with_pandas(flat_num_dates, units, calendar) 174 --> 175 delta, ref_date = _unpack_netcdf_time_units(units) 176 delta = _netcdf_to_numpy_timeunit(delta)

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in _unpack_netcdf_time_units(units) 127 delta_units, ref_date = [s.strip() for s in matches.groups()] --> 128 ref_date = _ensure_padded_year(ref_date) 129

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in _ensure_padded_year(ref_date) 102 matches_start_digits = re.match(r"(\d+)(.*)", ref_date) --> 103 ref_year, everything_else = [s for s in matches_start_digits.groups()] 104 ref_date_padded = "{:04d}{}".format(int(ref_year), everything_else)

AttributeError: 'NoneType' object has no attribute 'groups'

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last) <ipython-input-76-da468c6317bb> in <module> 13 ds.to_netcdf("test.nc") 14 ---> 15 with xr.open_dataset("test.nc") as ds: 16 print(ds) 17

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, engine, chunks, lock, cache, drop_variables, backend_kwargs, use_cftime, decode_timedelta) 432 from . import apiv2 433 --> 434 return apiv2.open_dataset(**kwargs) 435 436 if mask_and_scale is None:

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/backends/apiv2.py in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, **kwargs) 266 267 overwrite_encoded_chunks = kwargs.pop("overwrite_encoded_chunks", None) --> 268 backend_ds = backend.open_dataset( 269 filename_or_obj, 270 drop_variables=drop_variables,

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in open_dataset(self, filename_or_obj, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, use_cftime, decode_timedelta, group, mode, format, clobber, diskless, persist, lock, autoclose) 557 store_entrypoint = StoreBackendEntrypoint() 558 with close_on_error(store): --> 559 ds = store_entrypoint.open_dataset( 560 store, 561 mask_and_scale=mask_and_scale,

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/backends/store.py in open_dataset(self, store, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, use_cftime, decode_timedelta) 23 encoding = store.get_encoding() 24 ---> 25 vars, attrs, coord_names = conventions.decode_cf_variables( 26 vars, 27 attrs,

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/conventions.py in decode_cf_variables(variables, attributes, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables, use_cftime, decode_timedelta) 510 and stackable(v.dims[-1]) 511 ) --> 512 new_vars[k] = decode_cf_variable( 513 k, 514 v,

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/conventions.py in decode_cf_variable(name, var, concat_characters, mask_and_scale, decode_times, decode_endianness, stack_char_dim, use_cftime, decode_timedelta) 358 var = times.CFTimedeltaCoder().decode(var, name=name) 359 if decode_times: --> 360 var = times.CFDatetimeCoder(use_cftime=use_cftime).decode(var, name=name) 361 362 dimensions, data, attributes, encoding = variables.unpack_for_decoding(var)

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in decode(self, variable, name) 515 units = pop_to(attrs, encoding, "units") 516 calendar = pop_to(attrs, encoding, "calendar") --> 517 dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime) 518 transform = partial( 519 decode_cf_datetime,

~/miniconda3/envs/h5py3/lib/python3.9/site-packages/xarray/coding/times.py in _decode_cf_datetime_dtype(data, units, calendar, use_cftime) 151 "if it is not installed." 152 ) --> 153 raise ValueError(msg) 154 else: 155 dtype = getattr(result, "dtype", np.dtype("object"))

ValueError: unable to decode time units 'seconds since time_reference' with "calendar 'gregorian'". Try opening your dataset with decode_times=False or installing cftime if it is not installed. ```

Anything else we need to know?:

I've searched the issues to no avail; an internet search wasn't successful so far either. Any pointers welcome.

Environment:

Output of <tt>xr.show_versions()</tt> INSTALLED VERSIONS ------------------ commit: None python: 3.9.0 | packaged by conda-forge | (default, Oct 14 2020, 22:59:50) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 5.8.0-43-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: de_DE.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.16.3.dev124+g0a309e07.d20210218 pandas: 1.2.1 numpy: 1.20.0 scipy: 1.6.0 netCDF4: 1.5.4 pydap: None h5netcdf: 0.10.0 h5py: 3.1.0 Nio: None zarr: None cftime: 1.4.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.30.0 distributed: 2.30.1 matplotlib: 3.3.2 cartopy: 0.18.0 seaborn: None numbagg: None pint: None setuptools: 49.6.0.post20201009 pip: 20.2.4 conda: None pytest: 6.1.2 IPython: 7.19.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4927/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
420930870 MDU6SXNzdWU0MjA5MzA4NzA= 2811 concat changes variable order kmuehlbauer 5821660 closed 0     18 2019-03-14T10:11:28Z 2020-09-19T03:01:28Z 2020-09-19T03:01:28Z MEMBER      

Code Sample, a copy-pastable example if possible

A "Minimal, Complete and Verifiable Example" will make it much easier for maintainers to help you: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

  • Case 1: Creation of Dataset without Coordinates

    ```python
    import numpy as np
    import xarray as xr

    data = np.zeros((2,3))
    ds = xr.Dataset({'test': (['c', 'b'], data)})
    print(ds.dims)
    ds2 = xr.concat([ds, ds], dim='c')
    print(ds2.dims)
    ```

    yields (assumed correct) output of:

    ```
    Frozen(SortedKeysDict({'c': 2, 'b': 3}))
    Frozen(SortedKeysDict({'c': 4, 'b': 3}))
    ```

  • Case 2: Creation of Dataset with Coordinates

    ```python
    data = np.zeros((2,3))
    ds = xr.Dataset({'test': (['c', 'b'], data)},
                    coords={'c': (['c'], np.arange(data.shape[0])),
                            'b': (['b'], np.arange(data.shape[1]))})
    print(ds.dims)
    ds2 = xr.concat([ds, ds], dim='c')
    print(ds2.dims)
    ```

    yields (assumed false) output of:

    ```
    Frozen(SortedKeysDict({'c': 2, 'b': 3}))
    Frozen(SortedKeysDict({'b': 3, 'c': 4}))
    ```

Problem description

xr.concat changes the dimension order for .dims as well as .sizes to an alphanumerically sorted representation.

Expected Output

xr.concat should not change the dimension order in any case.

```
Frozen(SortedKeysDict({'c': 2, 'b': 3}))
Frozen(SortedKeysDict({'c': 4, 'b': 3}))
```
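
A minimal sketch separating the two orderings, assuming the per-variable dimension order is what matters downstream; the sorted mapping only affects the repr of .dims:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({'test': (['c', 'b'], np.zeros((2, 3)))},
                coords={'c': np.arange(2), 'b': np.arange(3)})
ds2 = xr.concat([ds, ds], dim='c')
print(ds2['test'].dims)  # ('c', 'b'): the variable's own dimension order is kept
```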

Output of xr.show_versions()

INSTALLED VERSIONS ------------------ commit: None python: 3.7.1 | packaged by conda-forge | (default, Nov 13 2018, 18:33:04) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp150.12.48-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: de_DE.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.2 xarray: 0.11.3 pandas: 0.24.1 numpy: 1.16.1 scipy: 1.2.0 netCDF4: 1.4.2 pydap: None h5netcdf: 0.6.2 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.3.4 PseudonetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.2.1 cyordereddict: None dask: None distributed: None matplotlib: 3.0.2 cartopy: 0.17.0 seaborn: None setuptools: 40.8.0 pip: 19.0.2 conda: None pytest: 4.2.0 IPython: 7.2.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2811/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
701764313 MDU6SXNzdWU3MDE3NjQzMTM= 4425 deepcopying variable raises `TypeError: h5py objects cannot be pickled` (Dataset.sortby) kmuehlbauer 5821660 closed 0     5 2020-09-15T09:23:20Z 2020-09-18T22:31:09Z 2020-09-18T22:31:09Z MEMBER      

What happened:

When using xr.open_dataset with an H5NetCDFStore and an opened h5py.File handle, the deepcopy in Dataset.sortby/align leads to TypeError: h5py objects cannot be pickled.

What you expected to happen:

While applying Dataset.sortby no error should be raised.

Minimal Complete Verifiable Example:

```python
# create hdf5 file
import h5py
f = h5py.File('myfile.h5', 'w')
dset = f.create_dataset("data", (360, 1000))
f.close()

import h5netcdf
import xarray as xr
import numpy as np

f = h5netcdf.File("myfile.h5", "r", phony_dims="access")
s0 = xr.backends.H5NetCDFStore(f)
ds = xr.open_dataset(s0, engine="h5netcdf", chunks=None)
ds = ds.assign_coords({"phony_dim_0": np.arange(ds.dims['phony_dim_0'], 0, -1)})
ds.sortby('phony_dim_0')
ds.close()
```

Error Traceback ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-0e1377169cb3> in <module> 5 ds = xr.open_dataset(s0, engine="h5netcdf", chunks=None) 6 ds = ds.assign_coords({"phony_dim_0": np.arange(ds.dims['phony_dim_0'], 0, -1)}) ----> 7 ds.sortby('phony_dim_0') 8 ds.close() /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/xarray-0.16.1.dev86+g264fdb29-py3.8.egg/xarray/core/dataset.py in sortby(self, variables, ascending) 5293 variables = variables 5294 variables = [v if isinstance(v, DataArray) else self[v] for v in variables] -> 5295 aligned_vars = align(self, *variables, join="left") 5296 aligned_self = aligned_vars[0] 5297 aligned_other_vars = aligned_vars[1:] /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/xarray-0.16.1.dev86+g264fdb29-py3.8.egg/xarray/core/alignment.py in align(join, copy, indexes, exclude, fill_value, *objects) 336 if not valid_indexers: 337 # fast path for no reindexing necessary --> 338 new_obj = obj.copy(deep=copy) 339 else: 340 new_obj = obj.reindex(copy=copy, fill_value=fill_value, **valid_indexers) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/xarray-0.16.1.dev86+g264fdb29-py3.8.egg/xarray/core/dataset.py in copy(self, deep, data) 1076 """ 1077 if data is None: -> 1078 variables = {k: v.copy(deep=deep) for k, v in self._variables.items()} 1079 elif not utils.is_dict_like(data): 1080 raise ValueError("Data must be dict-like") /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/xarray-0.16.1.dev86+g264fdb29-py3.8.egg/xarray/core/dataset.py in <dictcomp>(.0) 1076 """ 1077 if data is None: -> 1078 variables = {k: v.copy(deep=deep) for k, v in self._variables.items()} 1079 elif not utils.is_dict_like(data): 1080 raise ValueError("Data must be dict-like") /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/xarray-0.16.1.dev86+g264fdb29-py3.8.egg/xarray/core/variable.py in copy(self, deep, data) 938 939 if deep: --> 940 data = copy.deepcopy(data) 941 942 else: /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_tuple(x, memo, deepcopy) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in <listcomp>(.0) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. 
/home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_tuple(x, memo, deepcopy) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in <listcomp>(.0) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_tuple(x, memo, deepcopy) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. 
/home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in <listcomp>(.0) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_tuple(x, memo, deepcopy) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in <listcomp>(.0) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. 
/home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_tuple(x, memo, deepcopy) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in <listcomp>(.0) 208 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 210 y = [deepcopy(a, memo) for a in x] 211 # We're not going to put the tuple in the memo, but it's still important we 212 # check for it, in case the tuple contains recursive mutable structures. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 173 174 # If is its own copy, don't memoize. 
/home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 268 if state is not None: 269 if deep: --> 270 state = deepcopy(state, memo) 271 if hasattr(y, '__setstate__'): 272 y.__setstate__(state) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in _deepcopy_dict(x, memo, deepcopy) 228 memo[id(x)] = y 229 for key, value in x.items(): --> 230 y[deepcopy(key, memo)] = deepcopy(value, memo) 231 return y 232 d[dict] = _deepcopy_dict /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/copy.py in deepcopy(x, memo, _nil) 159 reductor = getattr(x, "__reduce_ex__", None) 160 if reductor is not None: --> 161 rv = reductor(4) 162 else: 163 reductor = getattr(x, "__reduce__", None) /home/kai/miniconda/envs/wradlib_devel/lib/python3.8/site-packages/h5py/_hl/base.py in __getnewargs__(self) 306 limitations, look at the h5pickle project on PyPI. 307 """ --> 308 raise TypeError("h5py objects cannot be pickled") 309 310 def __getstate__(self): TypeError: h5py objects cannot be pickled ```

Anything else we need to know?:

When invoked with chunks={} it works, as it does when the following code is used:

```python
ds = xr.open_dataset('myfile.h5', group='/', engine='h5netcdf',
                     backend_kwargs=dict(phony_dims='access'))
ds = ds.assign_coords({"phony_dim_0": np.arange(ds.dims['phony_dim_0'], 0, -1)})
ds.sortby('phony_dim_0')
ds.close()
```

This was introduced by #4221, see https://github.com/pydata/xarray/blob/66ab0ae4f3aa3c461357a5a895405e81357796b1/xarray/core/variable.py#L939-L941

Before:

```python
if deep and (
    hasattr(data, "__array_function__")
    or isinstance(data, dask_array_type)
    or (not IS_NEP18_ACTIVE and isinstance(data, np.ndarray))
):
    data = copy.deepcopy(data)
```

All three of the above tests return False in my case, so deepcopy should never be used here.

Environment:

Output of <tt>xr.show_versions()</tt> INSTALLED VERSIONS ------------------ commit: None python: 3.8.5 | packaged by conda-forge | (default, Aug 29 2020, 01:22:49) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp151.28.67-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: de_DE.UTF-8 LOCALE: de_DE.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.16.1.dev86+g264fdb29 pandas: 1.1.1 numpy: 1.19.1 scipy: 1.5.0 netCDF4: 1.5.4 pydap: None h5netcdf: 0.8.0 h5py: 2.10.0 Nio: None zarr: 2.4.0 cftime: 1.2.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: 0.9.8.4 iris: None bottleneck: 1.3.2 dask: 2.19.0 distributed: 2.25.0 matplotlib: 3.3.1 cartopy: 0.18.0 seaborn: None numbagg: None pint: None setuptools: 49.6.0.post20200814 pip: 20.2.2 conda: 4.8.3 pytest: 5.4.3 IPython: 7.18.1 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4425/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
