
issues

6 rows where state = "open" and user = 22245117 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1706179211 I_kwDOAMm_X85lsjqL 7837 Weighted reductions inconsistency when variables have missing dimensions malmans2 22245117 open 0     0 2023-05-11T16:40:58Z 2023-11-06T06:01:35Z   CONTRIBUTOR      

What happened?

There are some inconsistencies in the errors raised by weighted reductions when the dimensions over which to apply the reduction are not present in all variables.

What did you expect to happen?

I think all reduction methods should have the same behaviour. I'm not sure what the best behaviour is, although I would probably prefer the mean behaviour (do not raise an error when some variables are missing the dimensions).

Minimal Complete Verifiable Example

```python
import xarray as xr

ds = xr.Dataset({"foo": xr.DataArray([[1]] * 2), "bar": 1})
weighted = ds.weighted(ds["dim_0"])

weighted.mean(ds.dims)  # OK
weighted.std(ds.dims)   # ValueError
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```python
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[1], line 7
      4 weighted = ds.weighted(ds["dim_0"])
      6 weighted.mean(ds.dims)  # OK
----> 7 weighted.std(ds.dims)  # ValueError

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:503, in Weighted.std(self, dim, skipna, keep_attrs)
    497 def std(
    498     self,
    499     dim: Dims = None,
    500     skipna: bool | None = None,
    501     keep_attrs: bool | None = None,
    502 ) -> T_Xarray:
--> 503     return self._implementation(
    504         self._weighted_std, dim=dim, skipna=skipna, keep_attrs=keep_attrs
    505     )

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:540, in DatasetWeighted._implementation(self, func, dim, **kwargs)
    537 def _implementation(self, func, dim, **kwargs) -> Dataset:
    538     self._check_dim(dim)
--> 540     return self.obj.map(func, dim=dim, **kwargs)

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/dataset.py:5964, in Dataset.map(self, func, keep_attrs, args, **kwargs)
   5962 if keep_attrs is None:
   5963     keep_attrs = _get_keep_attrs(default=False)
-> 5964 variables = {
   5965     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   5966     for k, v in self.data_vars.items()
   5967 }
   5968 if keep_attrs:
   5969     for k, v in variables.items():

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/dataset.py:5965, in <dictcomp>(.0)
   5962 if keep_attrs is None:
   5963     keep_attrs = _get_keep_attrs(default=False)
   5964 variables = {
-> 5965     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   5966     for k, v in self.data_vars.items()
   5967 }
   5968 if keep_attrs:
   5969     for k, v in variables.items():

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:309, in Weighted._weighted_std(self, da, dim, skipna)
    301 def _weighted_std(
    302     self,
    303     da: DataArray,
    304     dim: Dims = None,
    305     skipna: bool | None = None,
    306 ) -> DataArray:
    307     """Reduce a DataArray by a weighted std along some dimension(s)."""
--> 309     return cast("DataArray", np.sqrt(self._weighted_var(da, dim, skipna)))

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:295, in Weighted._weighted_var(self, da, dim, skipna)
    287 def _weighted_var(
    288     self,
    289     da: DataArray,
    290     dim: Dims = None,
    291     skipna: bool | None = None,
    292 ) -> DataArray:
    293     """Reduce a DataArray by a weighted var along some dimension(s)."""
--> 295     sum_of_squares = self._sum_of_squares(da, dim=dim, skipna=skipna)
    297     sum_of_weights = self._sum_of_weights(da, dim=dim)
    299     return sum_of_squares / sum_of_weights

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:259, in Weighted._sum_of_squares(self, da, dim, skipna)
    251 def _sum_of_squares(
    252     self,
    253     da: DataArray,
    254     dim: Dims = None,
    255     skipna: bool | None = None,
    256 ) -> DataArray:
    257     """Reduce a DataArray by a weighted sum_of_squares along some dimension(s)."""
--> 259     demeaned = da - da.weighted(self.weights).mean(dim=dim)
    261     return self._reduce((demeaned**2), self.weights, dim=dim, skipna=skipna)

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:483, in Weighted.mean(self, dim, skipna, keep_attrs)
    477 def mean(
    478     self,
    479     dim: Dims = None,
    480     skipna: bool | None = None,
    481     keep_attrs: bool | None = None,
    482 ) -> T_Xarray:
--> 483     return self._implementation(
    484         self._weighted_mean, dim=dim, skipna=skipna, keep_attrs=keep_attrs
    485     )

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:529, in DataArrayWeighted._implementation(self, func, dim, **kwargs)
    528 def _implementation(self, func, dim, **kwargs) -> DataArray:
--> 529     self._check_dim(dim)
    531     dataset = self.obj._to_temp_dataset()
    532     dataset = dataset.map(func, dim=dim, **kwargs)

File ~/mambaforge/envs/xarray/lib/python3.11/site-packages/xarray/core/weighted.py:203, in Weighted._check_dim(self, dim)
    201 missing_dims = set(dims) - set(self.obj.dims) - set(self.weights.dims)
    202 if missing_dims:
--> 203     raise ValueError(
    204         f"{self.__class__.__name__} does not contain the dimensions: {missing_dims}"
    205     )

ValueError: DataArrayWeighted does not contain the dimensions: {'dim_1'}
```
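For what it's worth, a possible interim workaround (an illustrative sketch, not part of the original report) is to reduce only the variables that carry every requested dimension, which mirrors what `mean` tolerates:

```python
import xarray as xr

ds = xr.Dataset({"foo": xr.DataArray([[1]] * 2), "bar": 1})

# Keep only the variables that have all of the requested dimensions,
# then apply the weighted reduction to that subset.
dims = list(ds.dims)
subset = ds[[name for name in ds.data_vars if set(dims) <= set(ds[name].dims)]]
subset.weighted(ds["dim_0"]).std(dims)  # no ValueError
```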

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 09:05:00) [Clang 14.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 22.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: None
LOCALE: (None, 'UTF-8')
libhdf5: None
libnetcdf: None

xarray: 2023.4.2
pandas: 2.0.1
numpy: 1.24.3
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 67.7.2
pip: 23.1.2
conda: None
pytest: None
mypy: None
IPython: 8.13.2
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7837/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1717209758 I_kwDOAMm_X85mWoqe 7851 Add pop methods malmans2 22245117 open 0     1 2023-05-19T12:58:17Z 2023-05-19T14:01:42Z   CONTRIBUTOR      

Is your feature request related to a problem?

It's not related to a problem. I would find it useful to have pop methods.

Describe the solution you'd like

Is it feasible to add pop methods?

For example, instead of doing this:

```python
import xarray as xr

ds = xr.Dataset({"foo": None})
foo = ds["foo"]
ds = ds.drop_vars("foo")
```

I would like to be able to do this:

```python
foo = ds.pop_vars("foo")
```

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7851/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
620514214 MDU6SXNzdWU2MjA1MTQyMTQ= 4077 open_mfdataset overwrites variables with different values but overlapping coordinates malmans2 22245117 open 0     12 2020-05-18T21:22:09Z 2022-04-28T15:08:53Z   CONTRIBUTOR      

In the example below I'm opening and concatenating two datasets using open_mfdataset. These datasets have variables with different values but overlapping coordinates. I'm concatenating along y, which is 0...4 in one dataset and 0...5 in the other. The y dimension of the resulting dataset is 0...5 which means that open_mfdataset has overwritten some values without showing any error/warning.

Is this the expected default behavior? I would expect to get at least a warning, but maybe I'm misunderstanding the default arguments.

I tried to play with the arguments, but I couldn't figure out which argument I should change to get an error in these scenarios.

MCVE Code Sample

```python
import xarray as xr
import numpy as np
```

```python
for i in range(2):
    ds = xr.Dataset(
        {"foo": (("x", "y"), np.random.rand(4, 5 + i))},
        coords={"x": np.arange(4), "y": np.arange(5 + i)},
    )
    print(ds)
    ds.to_netcdf(f"tmp{i}.nc")
```

<xarray.Dataset>
Dimensions:  (x: 4, y: 5)
Coordinates:
  * x        (x) int64 0 1 2 3
  * y        (y) int64 0 1 2 3 4
Data variables:
    foo      (x, y) float64 0.1271 0.6117 0.3769 0.1884 ... 0.853 0.5026 0.3762
<xarray.Dataset>
Dimensions:  (x: 4, y: 6)
Coordinates:
  * x        (x) int64 0 1 2 3
  * y        (y) int64 0 1 2 3 4 5
Data variables:
    foo      (x, y) float64 0.2841 0.6098 0.7761 0.0673 ... 0.2954 0.7212 0.3954

```python
DS = xr.open_mfdataset("tmp*.nc", concat_dim="y", combine="by_coords")
print(DS)
```

<xarray.Dataset>
Dimensions:  (x: 4, y: 6)
Coordinates:
  * x        (x) int64 0 1 2 3
  * y        (y) int64 0 1 2 3 4 5
Data variables:
    foo      (x, y) float64 dask.array<chunksize=(4, 6), meta=np.ndarray>
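One way to detect the silent overwrite after the fact (an editor's sketch, not an xarray-provided check) is to verify that the combined dataset reproduces each source file on that file's own coordinates:

```python
import xarray as xr

combined = xr.open_mfdataset("tmp*.nc", concat_dim="y", combine="by_coords")
for path in ["tmp0.nc", "tmp1.nc"]:
    single = xr.open_dataset(path)
    # Select the combined data back onto this file's coordinates and compare.
    if not combined.sel(x=single["x"], y=single["y"]).equals(single):
        print(f"{path}: values differ, so the merge overwrote data")
```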

Versions

Output of `xr.show_versions()`

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 08:20:52) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-29-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4

xarray: 0.15.1
pandas: 1.0.3
numpy: 1.18.4
scipy: None
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.1.3
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2.16.0
distributed: 2.16.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
setuptools: 46.4.0.post20200518
pip: 20.1
conda: None
pytest: None
IPython: 7.13.0
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4077/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
559283550 MDU6SXNzdWU1NTkyODM1NTA= 3745 groupby drops the variable used to group malmans2 22245117 open 0     0 2020-02-03T19:25:06Z 2022-04-09T02:25:17Z   CONTRIBUTOR      

MCVE Code Sample

```python
import xarray as xr

ds = xr.tutorial.load_dataset('rasm')

# Seasonal mean
ds_season = ds.groupby('time.season').mean()
ds_season
```

<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'DJF' 'JJA' 'MAM' 'SON'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 23.13 22.06 21.72 21.94

```python
# The seasons are ordered in alphabetical order.
# I want to sort them based on time.
# But time was dropped, so I have to do this:
time_season = ds['time'].groupby('time.season').mean()
ds_season.sortby(time_season)
```

<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'SON' 'DJF' 'MAM' 'JJA'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 29.27 28.39 27.94 28.05

Expected Output

```python
# Why does groupby drop time?
# I would expect a dataset that looks like this:
ds_season['time'] = time_season
ds_season
```

<xarray.Dataset>
Dimensions:  (season: 4, x: 275, y: 205)
Coordinates:
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
  * season   (season) object 'DJF' 'JJA' 'MAM' 'SON'
Dimensions without coordinates: x, y
Data variables:
    Tair     (season, y, x) float64 nan nan nan nan ... 23.13 22.06 21.72 21.94
    time     (season) object 1982-01-16 12:00:00 ... 1981-10-17 00:00:00

Problem Description

I often use groupby on time variables. When I do that, the time variable is dropped and replaced (e.g., time is replaced by season, month, year, ...). Most of the time I also want to sort the new dataset based on the original time. The example above shows why this is useful for seasons. Another example would be to sort monthly averages of a dataset that originally had daily data from Sep-2000 to Aug-2001. Why is time dropped? Does it make sense to keep it in the grouped dataset?
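A small helper along these lines (an illustrative sketch, not an existing xarray API) captures the pattern from the example above:

```python
import xarray as xr

def groupby_mean_sorted(ds: xr.Dataset, group: str = "time.season") -> xr.Dataset:
    """Group and reduce, re-attach the reduced time variable, sort chronologically."""
    out = ds.groupby(group).mean()
    # Reduce the original time variable with the same grouping, then
    # use it both as a record of the original times and as a sort key.
    out["time"] = ds["time"].groupby(group).mean()
    return out.sortby("time")

ds = xr.tutorial.load_dataset("rasm")
ds_season = groupby_mean_sorted(ds)
```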

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.15.0-1067-oem
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.1

xarray: 0.14.1
pandas: 1.0.0
numpy: 1.18.1
scipy: None
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.1
dask: 2.10.1
distributed: 2.10.0
matplotlib: 3.1.2
cartopy: None
seaborn: None
numbagg: None
setuptools: 45.1.0.post20200127
pip: 20.0.2
conda: None
pytest: None
IPython: 7.11.1
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3745/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
323839238 MDU6SXNzdWUzMjM4MzkyMzg= 2145 Dataset.resample() adds time dimension to independant variables malmans2 22245117 open 0     5 2018-05-17T01:15:01Z 2022-03-21T05:15:52Z   CONTRIBUTOR      

Code Sample, a copy-pastable example if possible

```python
ds = ds.resample(time='1D', keep_attrs=True).mean()
```

Problem description

I'm downsampling in time a dataset which also contains timeless variables. I've noticed that resample adds the time dimension to the timeless variables. One workaround (sketched below) is:

1. Split the dataset into a timeless and a time-dependent dataset
2. Resample the time-dependent dataset
3. Merge the two datasets
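A minimal sketch of that workaround, assuming the timeless variables are exactly those without a `time` dimension (`ds` is the dataset from the snippet above):

```python
import xarray as xr

# Split into time-dependent and timeless variables, resample only the
# time-dependent part, then merge the two datasets back together.
timeless = ds[[v for v in ds.data_vars if "time" not in ds[v].dims]]
timed = ds[[v for v in ds.data_vars if "time" in ds[v].dims]]
ds_daily = xr.merge([timed.resample(time="1D").mean(), timeless])
```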

This is not a big deal, but I was wondering if I'm missing some flag that avoids this behavior. If not, is it something that can be easily implemented in resample? It would be very useful for datasets with variables on staggered grids.

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-693.17.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None

xarray: 0.10.3
pandas: 0.20.2
numpy: 1.12.1
scipy: 0.19.1
netCDF4: 1.2.4
h5netcdf: 0.5.1
h5py: 2.7.0
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.4
distributed: 1.21.8
matplotlib: 2.0.2
cartopy: 0.16.0
seaborn: 0.7.1
setuptools: 39.1.0
pip: 9.0.1
conda: 4.5.3
pytest: 3.1.2
IPython: 6.1.0
sphinx: 1.6.2
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2145/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
911393744 MDU6SXNzdWU5MTEzOTM3NDQ= 5435 Broadcast does not return Datasets with unified chunks malmans2 22245117 open 0     3 2021-06-04T11:09:29Z 2021-06-16T17:41:12Z   CONTRIBUTOR      

What happened: If I broadcast a Dataset with chunked DataArrays, the resulting DataArrays are chunked differently.

What you expected to happen: If I broadcast a dataset with 2 vectors of chunk size 1, I'd expect to get 2D arrays with chunksize (1, 1), rather than (1, N) and (M, 1).

Minimal Complete Verifiable Example:

```python
import xarray as xr

ds = xr.Dataset(
    {i: xr.DataArray(range(i * 10), dims=f"dim_{i}").chunk(1) for i in range(1, 3)}
)
ds.broadcast_like(ds)
```

```
<xarray.Dataset>
Dimensions:  (dim_1: 10, dim_2: 20)
Dimensions without coordinates: dim_1, dim_2
Data variables:
    1        (dim_1, dim_2) int64 dask.array<chunksize=(1, 20), meta=np.ndarray>
    2        (dim_1, dim_2) int64 dask.array<chunksize=(10, 1), meta=np.ndarray>
```
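A possible workaround (an editor's sketch, assuming the goal is a common (1, 1) chunking) is to re-unify the chunks after broadcasting:

```python
import xarray as xr

ds = xr.Dataset(
    {i: xr.DataArray(range(i * 10), dims=f"dim_{i}").chunk(1) for i in range(1, 3)}
)
# unify_chunks aligns chunk boundaries across variables that share
# dimensions, which here should split both 2D arrays into (1, 1) chunks.
broadcasted = ds.broadcast_like(ds).unify_chunks()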

Anything else we need to know?:

Environment:

Output of `xr.show_versions()`

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.5 (default, May 18 2021, 19:34:48) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 5.8.0-53-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None

xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.3
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.05.1
distributed: 2021.05.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 52.0.0.post20210125
pip: 21.1.2
conda: None
pytest: None
IPython: 7.24.1
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5435/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);