
issues


4 rows where repo = 13221727 and user = 1991007 sorted by updated_at descending

Issue #7068 · xr.where overrides coordinate attributes with global attributes
id 1382661146 · node_id I_kwDOAMm_X85Sabwa · opened 2022-09-22T15:44:21Z by sfinkens (1991007, author_association NONE) · updated 2023-02-14T11:05:03Z · closed 2023-02-14T11:05:03Z · state closed (completed) · 1 comment · 0 reactions · repo xarray (13221727) · type issue

What happened?

xr.where(..., keep_attrs=True) overrides coordinate attributes of the result with global attributes from one of its inputs.

What did you expect to happen?

The coordinate attributes to remain unchanged.

Minimal Complete Verifiable Example

```python
import xarray as xr

x_coord = xr.DataArray([1, 2, 3], dims="x", attrs={"units": "m"})
a = xr.DataArray([1, 2, 3], dims="x", coords={"x": x_coord}, attrs={"units": "K"})
res = xr.where(a > 1, a, 0, keep_attrs=True)
assert res.coords["x"].attrs["units"] == "m"  # Fails, overridden with "K"
```
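
A possible workaround until this is fixed (my sketch, not part of the original report; it assumes coordinate attributes can be reassigned in place after the call, which may vary across xarray versions):

```python
import xarray as xr

x_coord = xr.DataArray([1, 2, 3], dims="x", attrs={"units": "m"})
a = xr.DataArray([1, 2, 3], dims="x", coords={"x": x_coord}, attrs={"units": "K"})
res = xr.where(a > 1, a, 0, keep_attrs=True)

# Workaround sketch: copy the coordinate attributes back from the input.
for name, coord in a.coords.items():
    res.coords[name].attrs = dict(coord.attrs)

assert res.coords["x"].attrs["units"] == "m"  # passes with the fix-up applied
```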

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

Not sure if this is a duplicate of https://github.com/pydata/xarray/issues/2245. If so, feel free to close :)

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.53.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: de_DE.UTF-8
LOCALE: ('de_DE', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 2022.6.0
pandas: 1.5.0
numpy: 1.23.3
scipy: 1.9.1
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: 3.3.0
Nio: None
zarr: 2.12.0
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.1
cfgrib: None
iris: None
bottleneck: None
dask: 2022.9.1
distributed: 2022.9.1
matplotlib: 3.6.0
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.8.2
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.3.0
pip: 22.2.2
conda: None
pytest: 7.1.3
IPython: None
sphinx: None
```

Issue #4190 · Polyfit fails with few non-NaN values
id 648981227 · node_id MDU6SXNzdWU2NDg5ODEyMjc= · opened 2020-07-01T13:26:53Z by sfinkens (1991007, author_association NONE) · updated 2020-08-20T08:34:45Z · closed 2020-08-20T08:34:45Z · state closed (completed) · 1 comment · 0 reactions · repo xarray (13221727) · type issue

What happened: A linear `DataArray.polyfit` seems to fail if there are fewer than 3 non-NaN elements along the fitting dimension.

Traceback:

```
TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "polyfit.py", line 6, in <module>
    out = arr.polyfit(dim='x', deg=1)
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/xarray/core/dataarray.py", line 3455, in polyfit
    return self._to_temp_dataset().polyfit(
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/xarray/core/dataset.py", line 5962, in polyfit
    coeffs, residuals = duck_array_ops.least_squares(
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/xarray/core/duck_array_ops.py", line 625, in least_squares
    return nputils.least_squares(lhs, rhs, rcond=rcond, skipna=skipna)
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/xarray/core/nputils.py", line 239, in least_squares
    out[:, nan_cols] = np.apply_along_axis(
  File "<__array_function__ internals>", line 5, in apply_along_axis
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/numpy/lib/shape_base.py", line 379, in apply_along_axis
    res = asanyarray(func1d(inarr_view[ind0], *args, **kwargs))
  File "/home/stephan/venv/variogram/lib/python3.8/site-packages/xarray/core/nputils.py", line 227, in _nanpolyfit_1d
    out[:-1], out[-1], _, _ = np.linalg.lstsq(x[~mask, :], arr[~mask], rcond=rcond)
ValueError: setting an array element with a sequence.
```

I've played around with the degree a bit, and the error seems to occur as soon as (number of non-NaN values - degree) < 2.

What you expected to happen:

The fit to succeed: I think two non-NaN values should be enough for a linear fit. I also noticed that no `RankWarning: Polyfit may be poorly conditioned` is raised if the degree is larger than the number of non-NaN values.
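
For reference, plain NumPy fits a line through two points without error, which supports the expectation above; a minimal check (mine, not part of the report):

```python
import numpy as np

# Two points determine a line exactly: slope 1, intercept 0.
coeffs = np.polyfit([1.0, 2.0], [1.0, 2.0], deg=1)
print(coeffs)  # approximately [1. 0.]
```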

Minimal Complete Verifiable Example:

```python
import xarray as xr
import numpy as np

arr = xr.DataArray([np.nan, 1, 2], dims='x', coords={'x': [0, 1, 2]})
arr.polyfit(dim='x', deg=1)
```
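
A workaround sketch for this 1-D case (mine, not from the report): drop the NaNs along the fit dimension before fitting, which sidesteps the failing NaN-handling path:

```python
import numpy as np
import xarray as xr

arr = xr.DataArray([np.nan, 1, 2], dims='x', coords={'x': [0, 1, 2]})

# Workaround sketch: fit only the non-NaN samples.
fit = arr.dropna('x').polyfit(dim='x', deg=1)
print(fit.polyfit_coefficients.values)
```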

Anything else we need to know?:

Environment:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-39-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.3
xarray: 0.15.2.dev112+g54b9450b
pandas: 1.0.5
numpy: 1.19.0
scipy: 1.5.0
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.4.0
cftime: 1.1.3
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.19.0
distributed: 2.19.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 44.0.0
pip: 20.0.2
conda: None
pytest: None
IPython: 7.16.1
sphinx: None
```

Issue #3675 · Dataset.expand_dims expands dimensions on coordinate bounds
id 547373923 · node_id MDU6SXNzdWU1NDczNzM5MjM= · opened 2020-01-09T10:03:08Z by sfinkens (1991007, author_association NONE) · updated 2020-01-09T11:03:22Z · closed 2020-01-09T11:03:22Z · state closed (completed) · 2 comments · 0 reactions · repo xarray (13221727) · type issue

MCVE Code Sample

```python
import xarray as xr

ds = xr.Dataset({'data': ('x', [1, 2]),
                 'x': ('x', [1, 2]),
                 'x_bnds': (('x', 'bnds'), [[0.5, 1.5], [1.5, 2.5]])})
ds['x'].attrs['bounds'] = 'x_bnds'
ds = ds.expand_dims({'time': [0]})
```

Output:

```
<xarray.Dataset>
Dimensions:  (bnds: 2, time: 1, x: 2)
Coordinates:
  * time     (time) int64 0
  * x        (x) int64 1 2
Dimensions without coordinates: bnds
Data variables:
    data     (time, x) int64 1 2
    x_bnds   (time, x, bnds) float64 0.5 1.5 1.5 2.5
```

Expected Output

```
<xarray.Dataset>
Dimensions:  (bnds: 2, time: 1, x: 2)
Coordinates:
  * time     (time) int64 0
  * x        (x) int64 1 2
Dimensions without coordinates: bnds
Data variables:
    data     (time, x) int64 1 2
    x_bnds   (x, bnds) float64 0.5 1.5 1.5 2.5
```

Problem Description

`Dataset.expand_dims` expands dimensions on coordinate bounds (referenced via the `bounds` attribute of a coordinate variable). Since coordinates are not expanded, I would expect their bounds to remain unchanged, too. At the moment, you'd have to expand dimensions before adding bounds to achieve that; a possible fix-up is sketched below.
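
A possible fix-up (my sketch, not from the report; it assumes the new dimension has length 1): select the bounds variable back out of the expanded dimension after the call.

```python
import xarray as xr

ds = xr.Dataset({'data': ('x', [1, 2]),
                 'x': ('x', [1, 2]),
                 'x_bnds': (('x', 'bnds'), [[0.5, 1.5], [1.5, 2.5]])})
ds['x'].attrs['bounds'] = 'x_bnds'
ds = ds.expand_dims({'time': [0]})

# Workaround sketch: squeeze the spurious time dimension back out of the
# bounds variable (assumes time has length 1).
ds['x_bnds'] = ds['x_bnds'].isel(time=0, drop=True)
```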

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.2 (default, Mar 15 2019, 15:45:45) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
python-bits: 64
OS: Linux
OS-release: 3.10.0-957.12.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.3
xarray: 0.14.1
pandas: 0.25.3
numpy: 1.17.4
scipy: None
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.7.4
h5py: 2.10.0
Nio: None
zarr: 2.3.2
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.9.0
distributed: None
matplotlib: 3.1.2
cartopy: None
seaborn: None
numbagg: None
setuptools: 40.6.2
pip: 18.1
conda: None
pytest: 5.3.2
IPython: 7.10.2
sphinx: None
```

Issue #2974 · Problems reading grouped netCDF file generated with h5netcdf engine
id 446016536 · node_id MDU6SXNzdWU0NDYwMTY1MzY= · opened 2019-05-20T09:35:54Z by sfinkens (1991007, author_association NONE) · updated 2019-05-21T07:13:01Z · closed 2019-05-21T03:08:11Z · state closed (completed) · 3 comments · 0 reactions · repo xarray (13221727) · type issue

Code Sample, a copy-pastable example if possible

```python
import xarray as xr

data1 = [[1, 2], [3, 4]]
y1 = [1, 2]
x1 = [1, 2]

data2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
y2 = [1, 2, 3]
x2 = [1, 2, 3]

dataset1 = xr.Dataset({'data1': xr.DataArray(data1, dims=('y', 'x'), coords={'y': y1, 'x': x1})})
dataset2 = xr.Dataset({'data2': xr.DataArray(data2, dims=('y', 'x'), coords={'y': y2, 'x': x2})})
dataset1.to_netcdf('test.nc', mode='w', group='grp1', engine='h5netcdf')
dataset2.to_netcdf('test.nc', mode='a', group='grp2', engine='h5netcdf')

xr.open_dataset('test.nc', group='grp1', engine='h5netcdf')  # works
xr.open_dataset('test.nc', group='grp1', engine='netcdf4')   # fails
```

Traceback:

```
Traceback (most recent call last):
  File "test_xr.py", line 21, in <module>
    print(xr.open_dataset('test.nc', group='grp1', engine='netcdf4'))
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/backends/api.py", line 394, in open_dataset
    ds = maybe_decode_store(store)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/backends/api.py", line 324, in maybe_decode_store
    drop_variables=drop_variables, use_cftime=use_cftime)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/conventions.py", line 480, in decode_cf
    ds = Dataset(vars, attrs=attrs)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/dataset.py", line 383, in __init__
    self._set_init_vars_and_dims(data_vars, coords, compat)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/dataset.py", line 405, in _set_init_vars_and_dims
    data_vars, coords, compat=compat)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/merge.py", line 377, in merge_data_and_coords
    indexes=indexes)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/merge.py", line 446, in merge_core
    expanded = expand_variable_dicts(aligned)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/merge.py", line 222, in expand_variable_dicts
    var = as_variable(var, name=name)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/variable.py", line 117, in as_variable
    obj = obj.to_index_variable()
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/variable.py", line 408, in to_index_variable
    encoding=self._encoding, fastpath=True)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/variable.py", line 1825, in __init__
    self._data = PandasIndexAdapter(self._data)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/indexing.py", line 1219, in __init__
    self.array = utils.safe_cast_to_index(array)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/utils.py", line 78, in safe_cast_to_index
    index = pd.Index(np.asarray(array), **kwargs)
  File "/cmsaf/nfshome/routcm/Modules_CentOS/python/3.7.2/lib/python3.7/site-packages/numpy/core/numeric.py", line 501, in asarray
    return array(a, dtype, copy=False, order=order)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/indexing.py", line 510, in __array__
    return np.asarray(array[self.key], dtype=None)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/backends/netCDF4_.py", line 64, in __getitem__
    self._getitem)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/core/indexing.py", line 778, in explicit_indexing_adapter
    result = raw_indexing_method(raw_key.tuple)
  File "/cmsaf/cmsaf-ops3/sfinkens/virtualenvs/h5netcdf/lib/python3.7/site-packages/xarray/backends/netCDF4_.py", line 75, in _getitem
    array = getitem(original_array, key)
  File "netCDF4/_netCDF4.pyx", line 4317, in netCDF4._netCDF4.Variable.__getitem__
  File "netCDF4/_netCDF4.pyx", line 5249, in netCDF4._netCDF4.Variable._get
  File "netCDF4/_netCDF4.pyx", line 1842, in netCDF4._netCDF4._ensure_nc_success
RuntimeError: NetCDF: Start+count exceeds dimension bound
```

Output of `ncdump test.nc`:

```
ncdump test.nc
netcdf test {

group: grp1 {
  dimensions:
    y = 3 ;
    x = 3 ;
  variables:
    int64 data1(y, x) ;
    int64 x(x) ;
    int64 y(y) ;
  data:

  data1 = NetCDF: Start+count exceeds dimension bound
Location: file vardata.c; line 478
```

Problem description

If datasets with different coordinates have been written to different netCDF groups using the h5netcdf engine, the generated file cannot be read by ncdump or the netcdf4 engine. The file is readable by the h5netcdf engine, though.

If the file is generated using the netcdf4 engine, it is readable by both ncdump and the netcdf4/h5netcdf engines. Also, if the datasets have identical coordinates, there is no such problem.

Expected Output

Since the file is readable by the h5netcdf engine, I'm not sure whether this is actually a bug. But at the very least I would expect the output file to be readable by ncdump, regardless of the engine used to write it.
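
Given the observation above that the netcdf4 engine produces a readable file, a pragmatic workaround (my sketch, not from the report) is to write both groups with `engine='netcdf4'`:

```python
import xarray as xr

# Same datasets as in the code sample above.
dataset1 = xr.Dataset({'data1': xr.DataArray([[1, 2], [3, 4]], dims=('y', 'x'),
                                             coords={'y': [1, 2], 'x': [1, 2]})})
dataset2 = xr.Dataset({'data2': xr.DataArray([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                                             dims=('y', 'x'),
                                             coords={'y': [1, 2, 3], 'x': [1, 2, 3]})})

# Workaround sketch: write both groups with the netcdf4 engine.
dataset1.to_netcdf('test.nc', mode='w', group='grp1', engine='netcdf4')
dataset2.to_netcdf('test.nc', mode='a', group='grp2', engine='netcdf4')

xr.open_dataset('test.nc', group='grp1', engine='netcdf4')  # now readable
```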

Output of xr.show_versions()

```
commit: None
python: 3.7.2 (default, Mar 15 2019, 15:45:45) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
python-bits: 64
OS: Linux
OS-release: 3.10.0-957.12.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.15.4
scipy: 1.2.1
netCDF4: 1.4.3.2
pydap: None
h5netcdf: 0.7.1
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: 1.0.22
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.1.4
distributed: 1.26.0
matplotlib: 3.0.3
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 40.6.2
pip: 18.1
conda: None
pytest: 4.3.1
IPython: 7.3.0
sphinx: 1.8.5
```

Table schema

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
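The filtered view above ("4 rows where repo = 13221727 and user = 1991007 sorted by updated_at descending") corresponds to a query along these lines (reconstructed from the page description, not copied from the site):

select * from [issues]
where [repo] = 13221727
  and [user] = 1991007
order by [updated_at] desc;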