id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
2034528244,I_kwDOAMm_X855RG_0,8537,Doctests failing,43316012,closed,0,,,1,2023-12-10T20:49:43Z,2023-12-11T21:00:03Z,2023-12-11T21:00:03Z,COLLABORATOR,,,,"### What is your issue?

The doctest is currently failing with

> E UserWarning: h5py is running against HDF5 1.14.3 when it was built against 1.14.2, this may cause problems","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8537/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1936080078,I_kwDOAMm_X85zZjzO,8291,`NamedArray.shape` does not support unknown dimensions,43316012,closed,0,,,1,2023-10-10T19:36:42Z,2023-10-18T06:22:54Z,2023-10-18T06:22:54Z,COLLABORATOR,,,,"### What is your issue?

According to the array API standard, the `shape` property returns `tuple[int | None, ...]`. Currently we only support `tuple[int, ...]`.

This will raise some errors if a duckarray actually returns None for some dimensions. E.g. `NamedArray.size` will fail.

(On a side note: dask arrays actually use NaN instead of None for some reason... The only advantage of this is that `NamedArray.size` will then also return NaN instead of raising...)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8291/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1361246796,I_kwDOAMm_X85RIvpM,6985,FutureWarning for pandas date_range,43316012,closed,0,,,1,2022-09-04T20:35:17Z,2023-02-06T17:51:48Z,2023-02-06T17:51:48Z,COLLABORATOR,,,,"### What is your issue?

Xarray raises a FutureWarning in its date_range, also observable in your tests. The precise warning is:

> xarray/coding/cftime_offsets.py:1130: FutureWarning: Argument `closed` is deprecated in favor of `inclusive`.

You should discuss whether you will adopt the new `inclusive` argument or add a workaround.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6985/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
1275747776,I_kwDOAMm_X85MCl3A,6703,"Add coarsen, rolling and weighted to generate_reductions",43316012,open,0,,,1,2022-06-18T09:49:22Z,2022-06-18T16:04:15Z,,COLLABORATOR,,,,"### Is your feature request related to a problem?

Coarsen reductions are currently added dynamically, which is not very useful for typing. This is a follow-up to @Illviljan in https://github.com/pydata/xarray/pull/6702#discussion_r900700532_

The same goes for Weighted, and similarly for Rolling (not sure if it is exactly the same though).

### Describe the solution you'd like

Extend the generate_reductions script to include `DataArrayCoarsen` and `DatasetCoarsen`. Once finished: use type checking in all test_coarsen tests.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6703/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue
1150618439,I_kwDOAMm_X85ElQtH,6306,Assigning to dataset with missing dim raises ValueError,43316012,open,0,,,1,2022-02-25T16:08:04Z,2022-05-21T20:35:52Z,,COLLABORATOR,,,,"### What happened?
I tried to assign values to a dataset with a selector-dict where a variable is missing the dim from the selector-dict. This raises a ValueError.

### What did you expect to happen?

I expect that assigning works the same as selecting and ignores the missing dims.

### Minimal Complete Verifiable Example

```Python
import xarray as xr

ds = xr.Dataset({""a"": (""x"", [1, 2, 3]), ""b"": (""y"", [4, 5])})

ds[{""x"": 1}]
# this works and returns:
#
# Dimensions: (y: 2)
# Dimensions without coordinates: y
# Data variables:
# a int64 2
# b (y) int64 4 5

ds[{""x"": 1}] = 1
# this fails and raises a ValueError
# ValueError: Variable 'b': indexer {'x': 1} not available
```

### Relevant log output

```Python
Traceback (most recent call last):
  File ""xarray/core/dataset.py"", line 1591, in _setitem_check
    var_k = var[key]
  File ""xarray/core/dataarray.py"", line 740, in __getitem__
    return self.isel(indexers=self._item_key_to_dict(key))
  File ""xarray/core/dataarray.py"", line 1204, in isel
    variable = self._variable.isel(indexers, missing_dims=missing_dims)
  File ""xarray/core/variable.py"", line 1181, in isel
    indexers = drop_dims_from_indexers(indexers, self.dims, missing_dims)
  File ""xarray/core/utils.py"", line 834, in drop_dims_from_indexers
    raise ValueError(
ValueError: Dimensions {'x'} do not exist. Expected one or more of ('y',)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File """", line 1, in
  File ""xarray/core/dataset.py"", line 1521, in __setitem__
    value = self._setitem_check(key, value)
  File ""xarray/core/dataset.py"", line 1593, in _setitem_check
    raise ValueError(
ValueError: Variable 'b': indexer {'x': 1} not available
```

### Anything else we need to know?

_No response_

### Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.49.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.21.1
pandas: 1.4.0
numpy: 1.21.5
scipy: 1.7.3
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 49.2.1
pip: 22.0.3
conda: None
pytest: 6.2.5
IPython: 8.0.0
sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6306/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue
1155321209,I_kwDOAMm_X85E3M15,6313,groupby on array with multiindex renames indices,43316012,closed,0,,,1,2022-03-01T13:08:30Z,2022-03-17T17:11:44Z,2022-03-17T17:11:44Z,COLLABORATOR,,,,"### What happened?

When grouping and reducing an array or dataset over a multi-index, the coordinates that make up the multi-index get renamed to ""{name_of_multiindex}\_level\_{i}"". It only works correctly when the multi-index is a ""homogeneous grid"", i.e. as obtained by stacking.

### What did you expect to happen?

I expect that all coordinates keep their initial names.
### Minimal Complete Verifiable Example

```Python
import xarray as xr

# this works:
d = xr.DataArray(range(4), dims=""t"", coords={""x"": (""t"", [0, 0, 1, 1]), ""y"": (""t"", [0, 1, 0, 1])})
dd = d.set_index({""t"": [""x"", ""y""]})
# returns
#
# array([0, 1, 2, 3])
# Coordinates:
# * t (t) MultiIndex
# - x (t) int64 0 0 1 1
# - y (t) int64 0 1 0 1

dd.groupby(""t"").mean(...)
# returns
#
# array([0., 1., 2., 3.])
# Coordinates:
# * t (t) MultiIndex
# - x (t) int64 0 0 1 1
# - y (t) int64 0 1 0 1

# this does not work
d2 = xr.DataArray(range(6), dims=""t"", coords={""x"": (""t"", [0, 0, 1, 1, 0, 1]), ""y"": (""t"", [0, 1, 0, 1, 0, 0])})
dd2 = d2.set_index({""t"": [""x"", ""y""]})
# returns
#
# array([0, 1, 2, 3, 4, 5])
# Coordinates:
# * t (t) MultiIndex
# - x (t) int64 0 0 1 1 0 1
# - y (t) int64 0 1 0 1 0 0

dd2.groupby(""t"").mean(...)
# returns
#
# array([2. , 1. , 3.5, 3. ])
# Coordinates:
# * t (t) MultiIndex
# - t_level_0 (t) int64 0 0 1 1
# - t_level_1 (t) int64 0 1 0 1
```

### Relevant log output

_No response_

### Anything else we need to know?

_No response_

### Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.49.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.21.1
pandas: 1.4.0
numpy: 1.21.5
scipy: 1.7.3
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 49.2.1
pip: 22.0.3
conda: None
pytest: 6.2.5
IPython: 8.0.0
sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6313/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue