
issues


219 rows where user = 10194086 sorted by updated_at descending


type 2

  • pull 131
  • issue 88

state 2

  • closed 201
  • open 18

repo 1

  • xarray 219
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2163675672 PR_kwDOAMm_X85obI_8 8803 missing chunkmanager: update error message mathause 10194086 open 0     4 2024-03-01T15:48:00Z 2024-03-15T11:02:45Z   MEMBER   0 pydata/xarray/pulls/8803

When dask is missing we get the following error message:

```python-traceback
ValueError: unrecognized chunk manager dask - must be one of: []
```

This could be confusing: the error message seems geared towards a typo in the requested manager. However, I think it's much more likely that a chunk manager is just not installed. I tried to update the error message - happy to get feedback.
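The distinction could look something like this (a minimal sketch, not the actual PR diff; `chunkmanagers` here stands in for xarray's registry of chunk manager entrypoints):

```python
# Sketch: an empty registry means "nothing is installed", which deserves
# a different message than an unknown name among installed managers.
def guess_chunkmanager(manager, chunkmanagers):
    if manager not in chunkmanagers:
        if not chunkmanagers:
            # likely cause: the backend (e.g. dask) is simply not installed
            raise ImportError(
                f"no chunk managers available - try installing {manager!r}"
            )
        # likely cause: a typo in the requested manager name
        raise ValueError(
            f"unrecognized chunk manager {manager!r} - "
            f"must be one of: {list(chunkmanagers)}"
        )
    return chunkmanagers[manager]
```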

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8803/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2137065741 PR_kwDOAMm_X85nAXC5 8756 suppress base & loffset deprecation warnings mathause 10194086 closed 0     2 2024-02-15T17:23:27Z 2024-02-16T09:44:32Z 2024-02-15T19:11:10Z MEMBER   0 pydata/xarray/pulls/8756

Suppress some more internal warnings in the test suite.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8756/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2090265314 PR_kwDOAMm_X85kiCi8 8627 unify freq strings (independent of pd version) mathause 10194086 closed 0     4 2024-01-19T10:57:04Z 2024-02-15T17:53:42Z 2024-02-15T16:53:36Z MEMBER   0 pydata/xarray/pulls/8627
  • [ ] Addresses points 2 and 3 and closes #8612
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

Probably not ready for review yet.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8627/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2083501344 I_kwDOAMm_X858L7Ug 8612 more frequency string updates? mathause 10194086 closed 0     5 2024-01-16T09:56:48Z 2024-02-15T16:53:37Z 2024-02-15T16:53:37Z MEMBER      

What is your issue?

I looked a bit into the frequency string update & found 3 issues we could improve upon.

  1. Apart from "M", pandas also deprecated "Y" and "Q", in favor of "YE" and "QE". (And they are discussing renaming "MS" to "MB".) Should we do the same?

  2. Should we translate the new freq strings to the old ones if pandas < 2.2 is installed? Otherwise we get the following situation:

     ```python
     import xarray as xr

     xr.date_range("1600-02-01", periods=3, freq="M")   # deprecation warning
     xr.date_range("1600-02-01", periods=3, freq="ME")  # ValueError: Invalid frequency: ME
     ```

  3. date_range_like can emit deprecation warnings without a way to mitigate them if pandas < 2.2 is installed (when a DatetimeIndex is passed). It could be nice to translate the old freq string to the new one without a warning.

I have played around with 2. and 3. and can open a PR if you are on board.

@spencerkclark @aulemahal

  • pandas-dev/pandas#55792
  • pandas-dev/pandas#55553
  • pandas-dev/pandas#56840
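A sketch of what the translation in point 2 could look like (illustrative names only; the pandas version is passed explicitly here just to keep the example self-contained):

```python
# Map the new pandas 2.2 frequency aliases back to their pre-2.2
# spellings when an older pandas is installed, so user code can write
# "ME"/"YE"/"QE" regardless of the installed pandas version.
_NEW_TO_OLD = {"ME": "M", "YE": "Y", "QE": "Q"}

def translate_freq(freq: str, pandas_version: tuple[int, int]) -> str:
    if pandas_version < (2, 2):
        return _NEW_TO_OLD.get(freq, freq)
    return freq

translate_freq("ME", (2, 1))  # "M"  - old pandas needs the old alias
translate_freq("ME", (2, 2))  # "ME" - new pandas accepts it directly
```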
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8612/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2130313810 PR_kwDOAMm_X85mpS8i 8737 unstack: require unique MultiIndex mathause 10194086 closed 0     2 2024-02-12T14:58:06Z 2024-02-13T09:48:51Z 2024-02-13T09:48:36Z MEMBER   0 pydata/xarray/pulls/8737
  • [x] Closes #7104
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

Unstacking non-unique MultiIndex can lead to silent data loss, so we raise an error.
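A plain-Python illustration (no xarray) of why the non-unique case is lossy: two stacked entries share the label `("a", 0)`, so writing them into a keyed layout silently keeps only the last one.

```python
# Three stacked values keyed by (y, x) labels, with a duplicate label.
stacked = [(("a", 0), 1.0), (("a", 0), 2.0), (("b", 1), 3.0)]

unstacked = {}
for label, value in stacked:
    unstacked[label] = value  # the second ("a", 0) overwrites the first

len(stacked)    # 3 values went in
len(unstacked)  # 2 values came out - one was dropped without any warning
```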

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8737/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1918089795 I_kwDOAMm_X85yU7pD 8252 cannot use negative step to sel from zarr (without dask) mathause 10194086 closed 0     0 2023-09-28T18:52:07Z 2024-02-10T02:57:33Z 2024-02-10T02:57:33Z MEMBER      

What happened?

As per: https://github.com/pydata/xarray/pull/8246#discussion_r1340357405

Passing a negative step in a slice when selecting from a non-chunked zarr-backed dataset raises an error.

What did you expect to happen?

zarr should allow negative step (probably?)

Minimal Complete Verifiable Example

```Python
import xarray as xr

# create a zarr dataset
air = xr.tutorial.open_dataset("air_temperature")
air.to_zarr("test.zarr")

ds = xr.open_dataset("test.zarr", engine="zarr")
ds.air[::-1, ].load()

# note that this works if the dataset is backed by dask
ds_dask = xr.open_dataset("test.zarr", engine="zarr", chunks="auto")
ds_dask.air[::-1, ].load()
```

MVCE confirmation

  • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
File ~/code/xarray/xarray/core/parallelcompat.py:93, in guess_chunkmanager(manager)
     91 if isinstance(manager, str):
     92     if manager not in chunkmanagers:
---> 93         raise ValueError(
     94             f"unrecognized chunk manager {manager} - must be one of: {list(chunkmanagers)}"
     95         )
     97     return chunkmanagers[manager]
     98 elif isinstance(manager, ChunkManagerEntrypoint):
     99     # already a valid ChunkManager so just pass through

ValueError: unrecognized chunk manager dask - must be one of: []
```

Anything else we need to know?

The error comes from https://github.com/zarr-developers/zarr-python/blob/6ec746ef1242dd9fec26b128cc0b3455d28ad6f0/zarr/indexing.py#L174 so it would need an upstream fix first.

cc @dcherian is this what you had in mind?

Environment

INSTALLED VERSIONS ------------------ commit: f6d69a1f6d952dcd67609c97f3fb3069abdda586 python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] python-bits: 64 OS: Linux OS-release: 6.2.0-33-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.2 libnetcdf: 4.9.2 xarray: 2023.9.1.dev8+gf6d69a1f pandas: 2.1.1 numpy: 1.24.4 scipy: 1.11.3 netCDF4: 1.6.4 pydap: installed h5netcdf: 1.2.0 h5py: 3.9.0 Nio: None zarr: 2.16.1 cftime: 1.6.2 nc_time_axis: 1.4.1 PseudoNetCDF: 3.2.2 iris: 3.7.0 bottleneck: 1.3.7 dask: 2023.9.2 distributed: None matplotlib: 3.8.0 cartopy: 0.22.0 seaborn: 0.12.2 numbagg: 0.2.2 fsspec: 2023.9.2 cupy: None pint: 0.20.1 sparse: 0.14.0 flox: 0.7.2 numpy_groupies: 0.10.1 setuptools: 68.2.2 pip: 23.2.1 conda: None pytest: 7.4.2 mypy: None IPython: 8.15.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8252/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2126795367 PR_kwDOAMm_X85mdn7J 8727 ruff: move some config to lint section mathause 10194086 closed 0     0 2024-02-09T09:48:17Z 2024-02-09T15:49:03Z 2024-02-09T15:49:03Z MEMBER   0 pydata/xarray/pulls/8727

Fix a warning from ruff concerning the config:

```
warning: The top-level linter settings are deprecated in favour of their counterparts in the `lint` section.
Please update the following options in pyproject.toml:
  - 'extend-safe-fixes' -> 'lint.extend-safe-fixes'
  - 'per-file-ignores' -> 'lint.per-file-ignores'
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8727/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2105703882 I_kwDOAMm_X859gn3K 8679 Dataset.weighted along a dimension not on weights errors mathause 10194086 open 0     2 2024-01-29T15:03:39Z 2024-02-04T11:24:54Z   MEMBER      

What happened?

ds.weighted(weights).mean(dims) errors when reducing over a dimension that is neither on the weights nor on the variable.

What did you expect to happen?

This used to work and was "broken" by #8606. However, we may want to fix this by ignoring (?) those data vars instead (#7027).
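One possible shape of the "ignore" fix, sketched with a hypothetical helper (`present_dims` is illustrative, not an xarray function): before reducing each variable, keep only the requested dimensions it actually has, so a scalar variable reduces over nothing instead of raising.

```python
# Keep only the requested dims that exist on a given variable.
def present_dims(var_dims, requested):
    return [d for d in requested if d in var_dims]

present_dims(("y", "x"), ["y"])  # ["y"] - reduce "a" over y as usual
present_dims((), ["y"])          # []    - "scalar" has no y; skip it
```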

Minimal Complete Verifiable Example

```Python
import xarray as xr

ds = xr.Dataset({"a": (("y", "x"), [[1, 2]]), "scalar": 1})
weights = xr.DataArray([1, 2], dims="x")

ds.weighted(weights).mean("y")
```

MVCE confirmation

  • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [ ] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

```Python
ValueError                                Traceback (most recent call last)
Cell In[1], line 6
      3 ds = xr.Dataset({"a": (("y", "x"), [[1, 2]]), "scalar": 1})
      4 weights = xr.DataArray([1, 2], dims="x")
----> 6 ds.weighted(weights).mean("y")

File ~/code/xarray/xarray/util/deprecation_helpers.py:115, in _deprecate_positional_args.<locals>._decorator.<locals>.inner(*args, **kwargs)
    111     kwargs.update({name: arg for name, arg in zip_args})
    113     return func(*args[:-n_extra_args], **kwargs)
--> 115 return func(*args, **kwargs)

File ~/code/xarray/xarray/core/weighted.py:497, in Weighted.mean(self, dim, skipna, keep_attrs)
    489 @_deprecate_positional_args("v2023.10.0")
    490 def mean(
    491     self,
   (...)
    495     keep_attrs: bool | None = None,
    496 ) -> T_Xarray:
--> 497     return self._implementation(
    498         self._weighted_mean, dim=dim, skipna=skipna, keep_attrs=keep_attrs
    499     )

File ~/code/xarray/xarray/core/weighted.py:558, in DatasetWeighted._implementation(self, func, dim, **kwargs)
    555 def _implementation(self, func, dim, **kwargs) -> Dataset:
    556     self._check_dim(dim)
--> 558     return self.obj.map(func, dim=dim, **kwargs)

File ~/code/xarray/xarray/core/dataset.py:6924, in Dataset.map(self, func, keep_attrs, args, **kwargs)
   6922 if keep_attrs is None:
   6923     keep_attrs = _get_keep_attrs(default=False)
-> 6924 variables = {
   6925     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   6926     for k, v in self.data_vars.items()
   6927 }
   6928 if keep_attrs:
   6929     for k, v in variables.items():

File ~/code/xarray/xarray/core/dataset.py:6925, in <dictcomp>(.0)
   6922 if keep_attrs is None:
   6923     keep_attrs = _get_keep_attrs(default=False)
   6924 variables = {
-> 6925     k: maybe_wrap_array(v, func(v, *args, **kwargs))
   6926     for k, v in self.data_vars.items()
   6927 }
   6928 if keep_attrs:
   6929     for k, v in variables.items():

File ~/code/xarray/xarray/core/weighted.py:286, in Weighted._weighted_mean(self, da, dim, skipna)
    278 def _weighted_mean(
    279     self,
    280     da: T_DataArray,
    281     dim: Dims = None,
    282     skipna: bool | None = None,
    283 ) -> T_DataArray:
    284     """Reduce a DataArray by a weighted mean along some dimension(s)."""
--> 286     weighted_sum = self._weighted_sum(da, dim=dim, skipna=skipna)
    288     sum_of_weights = self._sum_of_weights(da, dim=dim)
    290     return weighted_sum / sum_of_weights

File ~/code/xarray/xarray/core/weighted.py:276, in Weighted._weighted_sum(self, da, dim, skipna)
    268 def _weighted_sum(
    269     self,
    270     da: T_DataArray,
    271     dim: Dims = None,
    272     skipna: bool | None = None,
    273 ) -> T_DataArray:
    274     """Reduce a DataArray by a weighted sum along some dimension(s)."""
--> 276     return self._reduce(da, self.weights, dim=dim, skipna=skipna)

File ~/code/xarray/xarray/core/weighted.py:231, in Weighted._reduce(da, weights, dim, skipna)
    227 da = da.fillna(0.0)
    229 # dot does not broadcast arrays, so this avoids creating a large
    230 # DataArray (if weights has additional dimensions)
--> 231 return dot(da, weights, dim=dim)

File ~/code/xarray/xarray/util/deprecation_helpers.py:140, in deprecate_dims.<locals>.wrapper(*args, **kwargs)
    132 emit_user_level_warning(
    133     "The `dims` argument has been renamed to `dim`, and will be removed "
    134     "in the future. This renaming is taking place throughout xarray over the "
   (...)
    137     PendingDeprecationWarning,
    138 )
    139 kwargs["dim"] = kwargs.pop("dims")
--> 140 return func(*args, **kwargs)

File ~/code/xarray/xarray/core/computation.py:1885, in dot(*arrays, dim, **kwargs)
   1883     dim = tuple(d for d, c in dim_counts.items() if c > 1)
   1884 else:
-> 1885     dim = parse_dims(dim, all_dims=tuple(all_dims))
   1887 dot_dims: set[Hashable] = set(dim)
   1889 # dimensions to be parallelized

File ~/code/xarray/xarray/core/utils.py:1046, in parse_dims(dim, all_dims, check_exists, replace_none)
   1044     dim = (dim,)
   1045 if check_exists:
-> 1046     _check_dims(set(dim), set(all_dims))
   1047 return tuple(dim)

File ~/code/xarray/xarray/core/utils.py:1131, in _check_dims(dim, all_dims)
   1129 if wrong_dims:
   1130     wrong_dims_str = ", ".join(f"'{d!s}'" for d in wrong_dims)
-> 1131     raise ValueError(
   1132         f"Dimension(s) {wrong_dims_str} do not exist. Expected one or more of {all_dims}"
   1133     )

ValueError: Dimension(s) 'y' do not exist. Expected one or more of {'x'}
```

Anything else we need to know?

No response

Environment

Newest main (i.e. 2024.01)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8679/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
2098122391 PR_kwDOAMm_X85k8eI1 8651 allow negative freq strings mathause 10194086 closed 0     2 2024-01-24T12:04:39Z 2024-02-01T09:17:11Z 2024-02-01T09:01:44Z MEMBER   0 pydata/xarray/pulls/8651
  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

This allows negative freq strings as discussed in https://github.com/pydata/xarray/pull/8627#issuecomment-1905981660 Deciding which tests to update was not easy.

The pandas _generate_range function was moved to https://github.com/pandas-dev/pandas/blob/3c96b8ff6d399fbec8d4d533e8e8618c592bb64b/pandas/core/arrays/datetimes.py#L2725. They no longer roll back the end. I had to remove this as well so that the following are equivalent:

```python
xr.date_range("2001", "2000", freq="-1YE", calendar="noleap")
pd.date_range("2001", "2000", freq="-1YE")
```

I am slightly nervous about this but all the tests still pass...

Once again cc @spencerkclark

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8651/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2105738254 PR_kwDOAMm_X85lVsgz 8680 use ruff.flake8-tidy-imports to enforce absolute imports mathause 10194086 closed 0     1 2024-01-29T15:19:34Z 2024-01-30T16:42:46Z 2024-01-30T16:38:48Z MEMBER   0 pydata/xarray/pulls/8680

use ruff.flake8-tidy-imports to enforce absolute imports

  • https://github.com/MarcoGorelli/absolufy-imports has been archived (no reason given)
  • removes a pre-commit hook which should make it faster locally
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8680/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2098131640 PR_kwDOAMm_X85k8gJe 8652 new whats-new section mathause 10194086 closed 0     2 2024-01-24T12:10:07Z 2024-01-26T10:07:39Z 2024-01-24T12:59:49Z MEMBER   0 pydata/xarray/pulls/8652
  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8652/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2097971637 PR_kwDOAMm_X85k789- 8649 ruff: use extend-exclude mathause 10194086 closed 0     1 2024-01-24T10:39:46Z 2024-01-24T18:32:20Z 2024-01-24T15:59:11Z MEMBER   0 pydata/xarray/pulls/8649

I think we should use extend-exclude instead of exclude for ruff. We can then also remove ".eggs" as this is in the default.

From https://docs.astral.sh/ruff/settings/#exclude:

Note that you'll typically want to use extend-exclude to modify the excluded paths.

Default value: [".bzr", ".direnv", ".eggs", ".git", ".git-rewrite", ".hg", ".mypy_cache", ".nox", ".pants.d", ".pytype", ".ruff_cache", ".svn", ".tox", ".venv", "__pypackages__", "_build", "buck-out", "build", "dist", "node_modules", "venv"]

(I really dislike how github formats toml files... What would be the correct syntax, then?)
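The difference can be illustrated with a hypothetical pyproject.toml fragment (the excluded path below is made up for illustration, not xarray's actual config):

```toml
[tool.ruff]
# `exclude` would *replace* ruff's built-in defaults (which already cover
# ".eggs", ".git", "build", ...); `extend-exclude` adds to them, so the
# defaults need not be repeated here.
extend-exclude = [
    "doc",
]
```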

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8649/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2094542307 PR_kwDOAMm_X85kwUlb 8642 infer_freq: return 'YE' (#8629 follow-up) mathause 10194086 closed 0     0 2024-01-22T18:53:52Z 2024-01-23T12:44:14Z 2024-01-23T12:44:14Z MEMBER   0 pydata/xarray/pulls/8642

I realized that the return value of infer_freq was not updated. #8627 will try to suppress all warnings in the test suite, so this is just the minimal PR.

Sorry for all the spam @spencerkclark

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8642/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2090340727 PR_kwDOAMm_X85kiTjg 8629 rename "Y" freq string to "YE" (pandas parity) mathause 10194086 closed 0     10 2024-01-19T11:31:58Z 2024-01-22T18:38:06Z 2024-01-22T08:01:24Z MEMBER   0 pydata/xarray/pulls/8629
  • [x] Addresses point 1 of #8612
  • [x] Fixes one of the failures in #8623
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

This renames the frequency string "Y" (formerly "A") to "YE" to achieve pandas parity. It could be better to wait for the conclusion of pandas-dev/pandas#56840 before doing this (but fixing the related failure in #8623 seemed as good a reason as any to do it now).

Let me know what you think @spencerkclark @aulemahal

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8629/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2070895451 PR_kwDOAMm_X85jf-2J 8600 fix and test empty CFTimeIndex mathause 10194086 closed 0     1 2024-01-08T17:11:43Z 2024-01-17T12:29:11Z 2024-01-15T21:49:34Z MEMBER   0 pydata/xarray/pulls/8600
  • [x] Closes #7298
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

Otherwise da.indexes and the html repr raise a ValueError. I first had "<undefined>" but I think None is better. cc @spencerkclark @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8600/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1455395909 I_kwDOAMm_X85Wv5RF 7298 html repr fails for empty cftime arrays mathause 10194086 closed 0     1 2022-11-18T16:09:00Z 2024-01-15T21:49:36Z 2024-01-15T21:49:35Z MEMBER      

What happened?

The html repr of a cftime array wants to display the "calendar", which it cannot do if the array is empty.

What did you expect to happen?

No error.

Minimal Complete Verifiable Example

```Python
import numpy as np
import xarray as xr

data_obs = np.random.randn(3)
time_obs = xr.date_range("2000-01-01", periods=3, freq="YS", calendar="noleap")

obs = xr.DataArray(data_obs, coords={"time": time_obs})

o = obs[:0]

xr.core.formatting_html.array_repr(o)
```

MVCE confirmation

  • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
ValueError                                Traceback (most recent call last)
Input In [1], in <cell line: 12>()
      8 obs = xr.DataArray(data_obs, coords={"time": time_obs})
     10 o = obs[:0]
---> 12 xr.core.formatting_html.array_repr(o)

File ~/code/xarray/xarray/core/formatting_html.py:318, in array_repr(arr)
    316 if hasattr(arr, "xindexes"):
    317     indexes = _get_indexes_dict(arr.xindexes)
--> 318     sections.append(index_section(indexes))
    320 sections.append(attr_section(arr.attrs))
    322 return _obj_repr(arr, header_components, sections)

File ~/code/xarray/xarray/core/formatting_html.py:195, in _mapping_section(mapping, name, details_func, max_items_collapse, expand_option_name, enabled)
    188 expanded = _get_boolean_with_default(
    189     expand_option_name, n_items < max_items_collapse
    190 )
    191 collapsed = not expanded
    193 return collapsible_section(
    194     name,
--> 195     details=details_func(mapping),
    196     n_items=n_items,
    197     enabled=enabled,
    198     collapsed=collapsed,
    199 )

File ~/code/xarray/xarray/core/formatting_html.py:155, in summarize_indexes(indexes)
    (source lines omitted: the f-strings containing html markup were mangled in extraction)

File ~/code/xarray/xarray/core/formatting_html.py:156, in <genexpr>(.0)
    (source lines omitted: the f-strings containing html markup were mangled in extraction)

File ~/code/xarray/xarray/core/formatting_html.py:140, in summarize_index(coord_names, index)
    138 index_id = f"index-{uuid.uuid4()}"
    139 preview = escape(inline_index_repr(index))
--> 140 details = short_index_repr_html(index)
    142 data_icon = _icon("icon-database")
    144 return (
    (remaining html f-string lost in extraction)

File ~/code/xarray/xarray/core/formatting_html.py:132, in short_index_repr_html(index)
    129 if hasattr(index, "_repr_html_"):
    130     return index._repr_html_()
--> 132 return (html f-string wrapping escape(repr(index)); markup lost in extraction)

File ~/code/xarray/xarray/core/indexes.py:547, in PandasIndex.__repr__(self)
    546 def __repr__(self):
--> 547     return f"PandasIndex({repr(self.index)})"

File ~/code/xarray/xarray/coding/cftimeindex.py:353, in CFTimeIndex.__repr__(self)
    345 end_str = format_times(
    346     self.values[-REPR_ELLIPSIS_SHOW_ITEMS_FRONT_END:],
    347     display_width,
    348     offset=offset,
    349     first_row_offset=offset,
    350 )
    351 datastr = "\n".join([front_str, f"{' '*offset}...", end_str])
--> 353 attrs_str = format_attrs(self)
    354 # oneliner only if smaller than display_width
    355 full_repr_str = f"{klass_name}([{datastr}], {attrs_str})"

File ~/code/xarray/xarray/coding/cftimeindex.py:272, in format_attrs(index, separator)
    267 def format_attrs(index, separator=", "):
    268     """Format attributes of CFTimeIndex for repr."""
    269     attrs = {
    270         "dtype": f"'{index.dtype}'",
    271         "length": f"{len(index)}",
--> 272         "calendar": f"'{index.calendar}'",
    273         "freq": f"'{index.freq}'" if len(index) >= 3 else None,
    274     }
    276     attrs_str = [f"{k}={v}" for k, v in attrs.items()]
    277     attrs_str = f"{separator}".join(attrs_str)

File ~/code/xarray/xarray/coding/cftimeindex.py:698, in CFTimeIndex.calendar(self)
    695 """The calendar used by the datetimes in the index."""
    696 from .times import infer_calendar_name
--> 698 return infer_calendar_name(self)

File ~/code/xarray/xarray/coding/times.py:374, in infer_calendar_name(dates)
    371     return sample.calendar
    373 # Error raise if dtype is neither datetime or "O", if cftime is not importable, and if element of 'O' dtype is not cftime.
--> 374 raise ValueError("Array does not contain datetime objects.")

ValueError: Array does not contain datetime objects.
```

Anything else we need to know?

Bisected to 7379923de756a2bcc59044d548f8ab7a68b91d4e ("use _repr_inline_ for indexes that define it").

Environment

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7298/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    2070231449 PR_kwDOAMm_X85jdtRr 8597 _infer_dtype: remove duplicated code mathause 10194086 closed 0     0 2024-01-08T11:12:18Z 2024-01-08T19:40:06Z 2024-01-08T19:40:06Z MEMBER   0 pydata/xarray/pulls/8597

    By chance I saw that in #4700 the same code block was added twice. I think this can be removed.

    cc @andersy005

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8597/reactions",
        "total_count": 1,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 1,
        "eyes": 0
    }
        xarray 13221727 pull
    2070561434 PR_kwDOAMm_X85je1rK 8598 small string fixes mathause 10194086 closed 0     1 2024-01-08T14:20:56Z 2024-01-08T16:59:27Z 2024-01-08T16:53:00Z MEMBER   0 pydata/xarray/pulls/8598
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8598/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    2025652693 PR_kwDOAMm_X85hJh0D 8521 test and fix empty xindexes repr mathause 10194086 closed 0     4 2023-12-05T08:54:56Z 2024-01-08T10:58:09Z 2023-12-06T17:06:15Z MEMBER   0 pydata/xarray/pulls/8521
    • [x] Closes #8367
    • [x] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

Uses max with a default, which works with empty iterators, in contrast to `if col_items else 0`.
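The fix in a nutshell: `col_items` is a generator (`itertools.chain`), which is always truthy, so the `if col_items else 0` guard never fires and `max()` sees an empty sequence. The `default` keyword handles the empty case directly (a sketch of the fixed helper, mirroring `_calculate_col_width`):

```python
from itertools import chain

def calculate_col_width(col_items) -> int:
    # max() with default=0 returns 0 for an empty iterator instead of
    # raising "max() arg is an empty sequence"
    max_name_length = max((len(str(s)) for s in col_items), default=0)
    return max(max_name_length, 7) + 6

calculate_col_width(chain.from_iterable({}))  # 13, instead of a ValueError
```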

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8521/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1959175248 I_kwDOAMm_X850xqRQ 8367 `da.xindexes` or `da.indexes` raises an error if there are none (in the repr) mathause 10194086 closed 0     1 2023-10-24T12:45:12Z 2023-12-06T17:06:16Z 2023-12-06T17:06:16Z MEMBER      

    What happened?

da.xindexes or da.indexes raises an error when trying to generate the repr if there are no coords (indexes).

    What did you expect to happen?

Displaying an empty Mapping?

    Minimal Complete Verifiable Example

```Python
xr.DataArray([3, 5]).indexes
xr.DataArray([3, 5]).xindexes
```

    MVCE confirmation

    • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [x] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [x] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
    • [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

    Relevant log output

```Python
Out[9]: ---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/.conda/envs/xarray_dev/lib/python3.10/site-packages/IPython/core/formatters.py:708, in PlainTextFormatter.__call__(self, obj)
    701 stream = StringIO()
    702 printer = pretty.RepresentationPrinter(stream, self.verbose,
    703     self.max_width, self.newline,
    704     max_seq_length=self.max_seq_length,
    705     singleton_pprinters=self.singleton_printers,
    706     type_pprinters=self.type_printers,
    707     deferred_pprinters=self.deferred_printers)
--> 708 printer.pretty(obj)
    709 printer.flush()
    710 return stream.getvalue()

File ~/.conda/envs/xarray_dev/lib/python3.10/site-packages/IPython/lib/pretty.py:410, in RepresentationPrinter.pretty(self, obj)
    407     return meth(obj, self, cycle)
    408 if cls is not object \
    409     and callable(cls.__dict__.get('__repr__')):
--> 410     return _repr_pprint(obj, self, cycle)
    412 return _default_pprint(obj, self, cycle)
    413 finally:

File ~/.conda/envs/xarray_dev/lib/python3.10/site-packages/IPython/lib/pretty.py:778, in _repr_pprint(obj, p, cycle)
    776 """A pprint that just redirects to the normal repr function."""
    777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
    779 lines = output.splitlines()
    780 with p.group():

File ~/code/xarray/xarray/core/indexes.py:1659, in Indexes.__repr__(self)
   1657 def __repr__(self):
   1658     indexes = formatting._get_indexes_dict(self)
-> 1659     return formatting.indexes_repr(indexes)

File ~/code/xarray/xarray/core/formatting.py:474, in indexes_repr(indexes, max_rows)
    473 def indexes_repr(indexes, max_rows: int | None = None) -> str:
--> 474     col_width = _calculate_col_width(chain.from_iterable(indexes))
    476     return _mapping_repr(
    477         indexes,
    478         "Indexes",
   (...)
    482         max_rows=max_rows,
    483     )

File ~/code/xarray/xarray/core/formatting.py:341, in _calculate_col_width(col_items)
    340 def _calculate_col_width(col_items):
--> 341     max_name_length = max(len(str(s)) for s in col_items) if col_items else 0
    342     col_width = max(max_name_length, 7) + 6
    343     return col_width

ValueError: max() arg is an empty sequence
```

    Anything else we need to know?

    No response

    Environment

    INSTALLED VERSIONS ------------------ commit: ccc8f9987b553809fb6a40c52fa1a8a8095c8c5f python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] python-bits: 64 OS: Linux OS-release: 6.2.0-35-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.2 libnetcdf: 4.9.2 xarray: 2023.9.1.dev8+gf6d69a1f pandas: 2.1.1 numpy: 1.24.4 scipy: 1.11.3 netCDF4: 1.6.4 pydap: installed h5netcdf: 1.2.0 h5py: 3.9.0 Nio: None zarr: 2.16.1 cftime: 1.6.2 nc_time_axis: 1.4.1 PseudoNetCDF: 3.2.2 iris: 3.7.0 bottleneck: 1.3.7 dask: 2023.9.2 distributed: None matplotlib: 3.8.0 cartopy: 0.22.0 seaborn: 0.12.2 numbagg: 0.2.2 fsspec: 2023.9.2 cupy: None pint: 0.20.1 sparse: 0.14.0 flox: 0.7.2 numpy_groupies: 0.10.1 setuptools: 68.2.2 pip: 23.2.1 conda: None pytest: 7.4.2 mypy: 1.5.1 IPython: 8.15.0 sphinx: None
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8367/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    722168932 MDU6SXNzdWU3MjIxNjg5MzI= 4513 where should keep_attrs be set in groupby, resample, weighted etc.? mathause 10194086 closed 0     2 2020-10-15T09:36:43Z 2023-11-10T16:58:35Z 2023-11-10T16:58:35Z MEMBER      

    I really should not open this can of worms but per https://github.com/pydata/xarray/issues/4450#issuecomment-697507489:

    I'm always confused about whether ds.groupby(..., keep_attrs=True).mean() or ds.groupby(...).mean(keep_attrs=True) is correct. (similarly for rolling, coarsen etc.)

    Also, as I try to fix the keep_attrs behavior in #4510 it would be good to know where they should go. So I tried to figure out how this is currently handled and found the following:

    ds.xxx(keep_attrs=True).yyy()

    - all fixed

    ds.xxx().yyy(keep_attrs=True)

    - coarsen (fixed in #5227)
    - groupby
    - groupby_bins
    - resample
    - rolling (adjusted in #4510)
    - rolling_exp (fixed in #4592)
    - weighted

    So the working consensus seems to be ds.xxx().yyy(keep_attrs=True) - any comments on that?

    (Edit: looking at this it is only half as bad, "only" coarsen, rolling (#4510), and rolling_exp would need to be fixed.)

    Detailed analysis

    ```python
    import numpy as np
    import xarray as xr

    ds = xr.tutorial.open_dataset("air_temperature")
    da = ds.air
    ```

    ### coarsen

    ```python
    ds.coarsen(time=2, keep_attrs=True).mean()  # keeps global attributes
    ds.coarsen(time=2).mean(keep_attrs=True)  # keeps DataArray attributes
    ds.coarsen(time=2, keep_attrs=True).mean(keep_attrs=True)  # keeps both
    da.coarsen(time=2).mean(keep_attrs=True)  # error
    da.coarsen(time=2, keep_attrs=True).mean()  # keeps DataArray attributes
    ```

    ### groupby

    ```python
    ds.groupby("time.month").mean(keep_attrs=True)  # keeps both
    da.groupby("time.month").mean(keep_attrs=True)  # keeps DataArray attributes
    ds.groupby("time.month", keep_attrs=True).mean()  # error
    da.groupby("time.month", keep_attrs=True).mean()  # error
    ```

    ### groupby_bins

    ```python
    ds.groupby_bins(ds.lat, np.arange(0, 90, 10)).mean(keep_attrs=True)  # keeps both
    da.groupby_bins(ds.lat, np.arange(0, 90, 10)).mean(keep_attrs=True)  # keeps DataArray attrs
    ds.groupby_bins(ds.lat, np.arange(0, 90, 10), keep_attrs=True)  # errors
    da.groupby_bins(ds.lat, np.arange(0, 90, 10), keep_attrs=True)  # errors
    ```

    ### resample

    ```python
    ds.resample(time="A").mean(keep_attrs=True)  # keeps both
    da.resample(time="A").mean(keep_attrs=True)  # keeps DataArray attributes
    ds.resample(time="A", keep_attrs=False).mean()  # ignored
    da.resample(time="A", keep_attrs=False).mean()  # ignored
    ```

    ### rolling

    ```python
    ds.rolling(time=2).mean(keep_attrs=True)  # keeps both
    da.rolling(time=2).mean(keep_attrs=True)  # keeps DataArray attributes
    ds.rolling(time=2, keep_attrs=True).mean()  # DeprecationWarning; keeps both
    da.rolling(time=2, keep_attrs=True).mean()  # DeprecationWarning; keeps DataArray attributes
    ```

    see #4510

    ### rolling_exp

    ```python
    ds.rolling_exp(time=5, keep_attrs=True).mean()  # ignored
    da.rolling_exp(time=5, keep_attrs=True).mean()  # ignored
    ds.rolling_exp(time=5).mean(keep_attrs=True)  # keeps both
    da.rolling_exp(time=5).mean(keep_attrs=True)  # keeps DataArray attributes
    ```

    ### weighted

    ```python
    ds.weighted(ds.lat).mean(keep_attrs=True)  # keeps both
    da.weighted(ds.lat).mean(keep_attrs=True)  # keeps DataArray attrs
    ```

    edit: moved rolling after #4510, moved rolling_exp after #4592 and coarsen after #5227

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4513/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1986324822 I_kwDOAMm_X852ZOlW 8436 align fails when more than one xindex is set mathause 10194086 closed 0     2 2023-11-09T20:07:52Z 2023-11-10T12:53:49Z 2023-11-10T12:53:49Z MEMBER      

    What happened?

    I tried a DataArray with more than one index set on a dimension. Unfortunately xr.align fails, which disallows any arithmetic operation - even when the coords are exactly the same.

    What did you expect to happen?

    No response

    Minimal Complete Verifiable Example

    ```python
    import numpy as np
    import xarray as xr

    data = np.arange(12).reshape(3, 4)

    y = [10, 20, 30]
    s = ["a", "b", "c"]

    x = [1, 2, 3, 4]

    da = xr.DataArray(data, dims=("y", "x"), coords={"x": x, "y": y, "s": ("y", s)})
    da = da.set_xindex("s")

    xr.align(da, da.y)  # errors
    da + da  # errors
    da + da.x  # errors
    ```

    MVCE confirmation

    • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
    • [ ] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

    Relevant log output

    ```python-traceback
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    /home/mathause/code/mesmer/devel/prepare_for_surfer.ipynb Cell 28 line 15
         12 da = xr.DataArray(data, dims=("y", "x"), coords={"x": x, "y": y, "s": ("y", s)})
         13 da = da.set_xindex("s")
    ---> 15 xr.align(da, da.y)  # errors
         17 da + da.x  # errors

    File ~/.conda/envs/mesmer_dev/lib/python3.9/site-packages/xarray/core/alignment.py:888, in align(join, copy, indexes, exclude, fill_value, *objects)
        692 """
        693 Given any number of Dataset and/or DataArray objects, returns new
        694 objects with aligned indexes and dimension sizes.
        (...)
        878
        879 """
        880 aligner = Aligner(
        881     objects,
        882     join=join,
        (...)
        886     fill_value=fill_value,
        887 )
    --> 888 aligner.align()
        889 return aligner.results

    File ~/.conda/envs/mesmer_dev/lib/python3.9/site-packages/xarray/core/alignment.py:573, in Aligner.align(self)
        571 self.find_matching_indexes()
        572 self.find_matching_unindexed_dims()
    --> 573 self.assert_no_index_conflict()
        574 self.align_indexes()
        575 self.assert_unindexed_dim_sizes_equal()

    File ~/.conda/envs/mesmer_dev/lib/python3.9/site-packages/xarray/core/alignment.py:318, in Aligner.assert_no_index_conflict(self)
        314 if dup:
        315     items_msg = ", ".join(
        316         f"{k!r} ({v} conflicting indexes)" for k, v in dup.items()
        317     )
    --> 318     raise ValueError(
        319         "cannot re-index or align objects with conflicting indexes found for "
        320         f"the following {msg}: {items_msg}\n"
        321         "Conflicting indexes may occur when\n"
        322         "- they relate to different sets of coordinate and/or dimension names\n"
        323         "- they don't have the same type\n"
        324         "- they may be used to reindex data along common dimensions"
        325     )

    ValueError: cannot re-index or align objects with conflicting indexes found for the following dimensions: 'y' (2 conflicting indexes)
    Conflicting indexes may occur when
    - they relate to different sets of coordinate and/or dimension names
    - they don't have the same type
    - they may be used to reindex data along common dimensions
    ```

    Anything else we need to know?

    No response

    Environment

    INSTALLED VERSIONS ------------------ commit: feba6984aa914327408fee3c286dae15969d2a2f python: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] python-bits: 64 OS: Linux OS-release: 6.2.0-36-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.2 libnetcdf: 4.9.2 xarray: 2023.9.1.dev8+gf6d69a1f pandas: 2.1.1 numpy: 1.24.4 scipy: 1.11.3 netCDF4: 1.6.4 pydap: installed h5netcdf: 1.2.0 h5py: 3.9.0 Nio: None zarr: 2.16.1 cftime: 1.6.2 nc_time_axis: 1.4.1 PseudoNetCDF: 3.2.2 iris: 3.7.0 bottleneck: 1.3.7 dask: 2023.9.2 distributed: None matplotlib: 3.8.0 cartopy: 0.22.0 seaborn: 0.12.2 numbagg: 0.2.2 fsspec: 2023.9.2 cupy: None pint: 0.20.1 sparse: 0.14.0 flox: 0.7.2 numpy_groupies: 0.10.1 setuptools: 68.2.2 pip: 23.2.1 conda: None pytest: 7.4.2 mypy: 1.5.1 IPython: 8.15.0 sphinx: None
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8436/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1657036222 I_kwDOAMm_X85ixF2- 7730 flox performance regression for cftime resampling mathause 10194086 closed 0     8 2023-04-06T09:38:03Z 2023-10-15T03:48:44Z 2023-10-15T03:48:44Z MEMBER      

    What happened?

    Running an in-memory groupby operation took much longer than expected. Turning off flox fixed this - but I don't think that's the idea ;-)

    What did you expect to happen?

    flox to be at least on par with our naive implementation

    Minimal Complete Verifiable Example

    ```python
    import numpy as np
    import xarray as xr

    arr = np.random.randn(10, 10, 365 * 30)
    time = xr.date_range("2000", periods=30 * 365, calendar="noleap")
    da = xr.DataArray(arr, dims=("y", "x", "time"), coords={"time": time})

    # using max
    print("max:")
    xr.set_options(use_flox=True)
    %timeit da.groupby("time.year").max("time")
    %timeit da.groupby("time.year").max("time", engine="flox")

    xr.set_options(use_flox=False)
    %timeit da.groupby("time.year").max("time")

    # as reference
    %timeit [da.sel(time=str(year)).max("time") for year in range(2000, 2030)]

    # using mean
    print("mean:")
    xr.set_options(use_flox=True)
    %timeit da.groupby("time.year").mean("time")
    %timeit da.groupby("time.year").mean("time", engine="flox")

    xr.set_options(use_flox=False)
    %timeit da.groupby("time.year").mean("time")

    # as reference
    %timeit [da.sel(time=str(year)).mean("time") for year in range(2000, 2030)]
    ```

    MVCE confirmation

    • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

    Relevant log output

    ```
    max:
    158 ms ± 4.41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    28.1 ms ± 318 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
    11.5 ms ± 52.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

    mean:
    95.6 ms ± 10.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
    34.8 ms ± 2.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
    15.2 ms ± 232 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
    ```

    Anything else we need to know?

    No response

    Environment

    INSTALLED VERSIONS ------------------ commit: f8127fc9ad24fe8b41cce9f891ab2c98eb2c679a python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] python-bits: 64 OS: Linux OS-release: 5.15.0-69-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.2 libnetcdf: 4.9.1 xarray: main pandas: 1.5.3 numpy: 1.23.5 scipy: 1.10.1 netCDF4: 1.6.3 pydap: installed h5netcdf: 1.1.0 h5py: 3.8.0 Nio: None zarr: 2.14.2 cftime: 1.6.2 nc_time_axis: 1.4.1 PseudoNetCDF: 3.2.2 iris: 3.4.1 bottleneck: 1.3.7 dask: 2023.3.2 distributed: 2023.3.2.1 matplotlib: 3.7.1 cartopy: 0.21.1 seaborn: 0.12.2 numbagg: 0.2.2 fsspec: 2023.3.0 cupy: None pint: 0.20.1 sparse: 0.14.0 flox: 0.6.10 numpy_groupies: 0.9.20 setuptools: 67.6.1 pip: 23.0.1 conda: None pytest: 7.2.2 mypy: None IPython: 8.12.0 sphinx: None
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7730/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1177919687 PR_kwDOAMm_X8403yVS 6403 make more args kw only (except 'dim') mathause 10194086 closed 0     9 2022-03-23T10:28:02Z 2023-10-05T20:38:49Z 2023-10-05T20:38:49Z MEMBER   0 pydata/xarray/pulls/6403
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

    This makes many arguments keyword-only, except for dim to avoid da.weighted(...).mean("lat", "lon") (i.e. da.weighted(...).mean(dim="lat", skipna="lon")) which silently does the wrong thing. I am sure I forgot some and for some I was unsure so I left them as is.

    Question: do we want a deprecation cycle? Currently it just errors for da.weighted(...).mean("dim", True). Might be nice to do it, however, @dcherian if I am not mistaken you did this without a deprecation in #5950, e.g. for da.mean etc.?

    ```python
    import numpy as np
    import xarray as xr

    air = xr.tutorial.open_dataset("air_temperature")
    wgt = np.cos(np.deg2rad(air.lat))
    air.weighted(wgt).mean("lat", "lon")
    ```

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6403/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1917660013 PR_kwDOAMm_X85bc7Pv 8246 update pytest config and un-xfail some tests mathause 10194086 closed 0     1 2023-09-28T14:21:58Z 2023-09-30T01:26:39Z 2023-09-30T01:26:35Z MEMBER   0 pydata/xarray/pulls/8246
    • [ ] Towards #8239
    • [ ] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

    This partly updates the pytest config as suggested in #8239 and un-xfails some tests (or xfails the tests more precisely).

    See https://github.com/pydata/xarray/issues/8239#issuecomment-1739363809 for why we cannot exactly follow the suggestions given in #8239

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8246/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    748684119 MDU6SXNzdWU3NDg2ODQxMTk= 4601 Don't type check __getattr__? mathause 10194086 open 0     8 2020-11-23T10:41:21Z 2023-09-25T05:33:09Z   MEMBER      

    In #4592 I had the issue that mypy did not raise an error on a missing method:

    ```python
    from xarray.core.common import DataWithCoords

    hasattr(xr.core.common.DataWithCoords, "reduce")  # -> False

    def test(x: "DataWithCoords"):
        x.reduce()  # mypy does not error
    ```

    This is because DataWithCoords implements __getattr__:

    ```python
    class A:
        pass

    class B:
        def __getattr__(self, name):
            ...

    def testA(x: "A"):
        x.reduce()  # mypy errors

    def testB(x: "B"):
        x.reduce()  # mypy does not error
    ```

    The solution seems to be to not typecheck __getattr__ (see https://github.com/python/mypy/issues/6251#issuecomment-457287161):

    ```python
    from typing import no_type_check

    class C:
        @no_type_check
        def __getattr__(self, name):
            ...

    def testC(x: "C"):
        x.reduce()  # mypy errors
    ```

    The only __getattr__ within xarray is here:

    https://github.com/pydata/xarray/blob/17358922d480c038e66430735bf4c365a7677df8/xarray/core/common.py#L221

    Using @no_type_check leads to 24 errors and not all of them can be trivially solved. E.g. DataWithCoords wants to use self.isel but does not implement the method. The solution is probably to add isel to DataWithCoords as an ABC or using NotImplemented.

    Thoughts?

    All errors

    ```python-traceback xarray/core/common.py:370: error: "DataWithCoords" has no attribute "isel" xarray/core/common.py:374: error: "DataWithCoords" has no attribute "dims" xarray/core/common.py:378: error: "DataWithCoords" has no attribute "indexes" xarray/core/common.py:381: error: "DataWithCoords" has no attribute "sizes" xarray/core/common.py:698: error: "DataWithCoords" has no attribute "_groupby_cls" xarray/core/common.py:761: error: "DataWithCoords" has no attribute "_groupby_cls" xarray/core/common.py:866: error: "DataWithCoords" has no attribute "_rolling_cls"; maybe "_rolling_exp_cls"? xarray/core/common.py:977: error: "DataWithCoords" has no attribute "_coarsen_cls" xarray/core/common.py:1108: error: "DataWithCoords" has no attribute "dims" xarray/core/common.py:1109: error: "DataWithCoords" has no attribute "dims" xarray/core/common.py:1133: error: "DataWithCoords" has no attribute "indexes" xarray/core/common.py:1144: error: "DataWithCoords" has no attribute "_resample_cls"; maybe "resample"? xarray/core/common.py:1261: error: "DataWithCoords" has no attribute "isel" xarray/core/alignment.py:278: error: "DataAlignable" has no attribute "copy" xarray/core/alignment.py:283: error: "DataAlignable" has no attribute "dims" xarray/core/alignment.py:286: error: "DataAlignable" has no attribute "indexes" xarray/core/alignment.py:288: error: "DataAlignable" has no attribute "sizes" xarray/core/alignment.py:348: error: "DataAlignable" has no attribute "dims" xarray/core/alignment.py:351: error: "DataAlignable" has no attribute "copy" xarray/core/alignment.py:353: error: "DataAlignable" has no attribute "reindex" xarray/core/alignment.py:356: error: "DataAlignable" has no attribute "encoding" xarray/core/weighted.py:157: error: "DataArray" has no attribute "notnull" xarray/core/dataset.py:3792: error: "Dataset" has no attribute "virtual_variables" xarray/core/dataset.py:6135: error: "DataArray" has no attribute "isnull" ```

    Edit: one problem is certainly the method injection, as mypy cannot detect those types.
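    The runtime side of this can be sketched with plain Python (no xarray or mypy needed, class names are hypothetical): a class that defines `__getattr__` answers every attribute lookup, which is the same reason mypy stops flagging unknown attributes on such classes.

    ```python
    class WithGetattr:
        def __getattr__(self, name):
            # answer any attribute lookup instead of raising AttributeError
            return None

    class Plain:
        pass

    print(hasattr(WithGetattr(), "reduce"))  # True: __getattr__ answers everything
    print(hasattr(Plain(), "reduce"))        # False: normal attribute lookup fails
    ```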

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4601/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    235224055 MDU6SXNzdWUyMzUyMjQwNTU= 1449 time.units truncated when saving to_netcdf mathause 10194086 closed 0     6 2017-06-12T12:58:37Z 2023-09-13T13:25:25Z 2023-09-13T13:25:24Z MEMBER      

    When I manually specify the units attribute for time, and then save the Dataset to_netcdf, the string is truncated. See example:

    import pandas as pd
    import xarray as xr
    
    time = pd.date_range('2000-01-01', '2000-01-31', freq='6h')
    ds = xr.Dataset(coords=dict(time=time))
    
    units = 'days since 1975-01-01 00:00:00'
    calendar = 'gregorian'
    encoding=dict(time=dict(units=units, calendar=calendar))
    
    ds.to_netcdf('test.nc', format='NETCDF4_CLASSIC', encoding=encoding)
    
    ! ncdump -h test.nc
    # time:units = "days since 1975-01-01" ;
    

    Some programs seem to require the hours to be present to interpret the time properly (e.g. Panoply). When specifying the hour, a 'T' is added.

    units = 'days since 1975-01-01 01:00:00'
    
    ! ncdump -h test.nc
    # time:units = "days since 1975-01-01T01:00:00" ;
    

    When xarray defines the time.units it works fine.

    ds = xr.Dataset(coords=dict(time=time))
    ds.to_netcdf('test.nc', format='NETCDF4_CLASSIC',)
    
    ! ncdump -h test.nc
    # time:units = "hours since 2000-01-01 00:00:00" ;
    

    xarray version 0.9.6
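    The observed behaviour can be sketched in plain Python (this is a hypothetical reproduction, not the actual netCDF4/cftime serialization code): a reference time of exactly midnight is written without its time-of-day component.

    ```python
    from datetime import datetime

    def format_reference(ref):
        # hypothetical helper mirroring the ncdump output above
        if (ref.hour, ref.minute, ref.second) == (0, 0, 0):
            return ref.strftime("%Y-%m-%d")  # midnight: time-of-day dropped
        return ref.strftime("%Y-%m-%dT%H:%M:%S")  # otherwise a 'T' is added

    print(format_reference(datetime(1975, 1, 1, 0, 0, 0)))  # 1975-01-01
    print(format_reference(datetime(1975, 1, 1, 1, 0, 0)))  # 1975-01-01T01:00:00
    ```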

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/1449/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1876205625 PR_kwDOAMm_X85ZRl7U 8130 to_stacked_array: better error msg & refactor mathause 10194086 closed 0     0 2023-08-31T19:51:08Z 2023-09-10T15:33:41Z 2023-09-10T15:33:37Z MEMBER   0 pydata/xarray/pulls/8130
    • [x] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

    I found the error message in ds.to_stacked_array confusing, so I tried to make it clearer. Also renames some of the internal symbols (so there should be no user-facing change).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/8130/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1371397741 I_kwDOAMm_X85Rvd5t 7027 don't apply `weighted`, `groupby`, etc. to `DataArray` without `dims`? mathause 10194086 open 0     1 2022-09-13T12:44:34Z 2023-08-26T19:13:39Z   MEMBER      

    What is your issue?

    Applying e.g. ds.weighted(weights).mean() applies the operation over all DataArray objects - even those that don't have the dimensions over which it is applied (or are scalar variables). I don't think this is wanted.

    ```python
    import xarray as xr

    air = xr.tutorial.open_dataset("air_temperature")
    air.attrs = {}

    # add variable without dims
    air["foo"] = 5

    print("resample")
    print(air.resample(time="MS").mean(dim="time").foo.dims)

    print("groupby")
    print(air.groupby("time.year").mean(dim="time").foo.dims)

    print("weighted")
    print(air.weighted(weights=air.time.dt.year).mean("lat").foo.dims)

    print("where")
    print(air.where(air.air > 5).foo.dims)
    ```

    Results

    ```
    resample
    ('time',)
    groupby
    ('year',)
    weighted
    ('time',)
    ```

    Related #6952 - I am sure there are other issues, but couldn't find them quickly...

    rolling and coarsen don't seem to do this.
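    The proposed behaviour can be sketched with plain Python (hypothetical names, not the xarray internals): only apply a reduction to variables that actually contain the reduced dimension, and pass the others through untouched.

    ```python
    def reduce_over(dataset, dim, func):
        # dataset: mapping name -> (dims tuple, value)
        out = {}
        for name, (dims, value) in dataset.items():
            if dim in dims:
                out[name] = (tuple(d for d in dims if d != dim), func(value))
            else:
                # variable has no such dim (e.g. the scalar "foo"): leave as is
                out[name] = (dims, value)
        return out

    ds = {"air": (("time",), [1, 2, 3]), "foo": ((), 5)}
    reduced = reduce_over(ds, "time", sum)
    print(reduced["air"])  # ((), 6)
    print(reduced["foo"])  # ((), 5)
    ```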

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7027/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    1719805837 I_kwDOAMm_X85mgieN 7860 diff of cftime.Datetime mathause 10194086 open 0     3 2023-05-22T14:21:06Z 2023-08-04T12:01:33Z   MEMBER      

    What happened?

    A cftime variable returns a timedelta64[ns] when calling diff / + / - and the result can then not be added to or subtracted from the original data.

    What did you expect to happen?

    We can add cftime timedeltas.

    Minimal Complete Verifiable Example

    ```python
    import xarray as xr

    air = xr.tutorial.open_dataset("air_temperature", use_cftime=True)

    air.time + air.time.diff("time") / 2
    ```

    MVCE confirmation

    • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [x] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [x] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.

    Relevant log output

    `air.time.variable.values[1:] - air.time.variable.values[:-1]` returns `array([datetime.timedelta(seconds=21600), ...])` but then `xr.Variable(("time",), np.array([datetime.timedelta(0)]))` returns a `dtype='timedelta64[ns]'` array.

    Anything else we need to know?

    • See upstream PR: xarray-contrib/cf-xarray#441
    • Similar to #7381 (but I don't think it's the same issue, feel free to close if you disagree)
    • That might need a special data type for timedeltas of cftime.Datetime objects, or allowing to add 'timedelta64[ns]' to cftime.Datetime objects
    • The casting comes from

    https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/core/variable.py#L366

    https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/core/variable.py#L272

    Environment

    INSTALLED VERSIONS ------------------ commit: d8ec3a3f6b02a8b941b484b3d254537af84b5fde python: 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:20:04) [GCC 11.3.0] python-bits: 64 OS: Linux OS-release: 5.14.21-150400.24.63-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.2 libnetcdf: 4.9.1 xarray: 2023.2.1.dev20+g06a87062 pandas: 1.5.3 numpy: 1.23.5 scipy: 1.10.1 netCDF4: 1.6.2 pydap: installed h5netcdf: 1.1.0 h5py: 3.8.0 Nio: None zarr: 2.13.6 cftime: 1.6.2 nc_time_axis: 1.4.1 PseudoNetCDF: 3.2.2 iris: 3.4.1 bottleneck: 1.3.6 dask: 2023.2.1 distributed: 2023.2.1 matplotlib: 3.7.0 cartopy: 0.21.1 seaborn: 0.12.2 numbagg: 0.2.2 fsspec: 2023.1.0 cupy: None pint: 0.20.1 sparse: 0.14.0 flox: 0.6.8 numpy_groupies: 0.9.20 setuptools: 67.4.0 pip: 23.0.1 conda: None pytest: 7.2.1 mypy: None IPython: 8.11.0 sphinx: None
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7860/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    594669577 MDU6SXNzdWU1OTQ2Njk1Nzc= 3937 compose weighted with groupby, coarsen, resample, rolling etc. mathause 10194086 open 0     7 2020-04-05T22:00:40Z 2023-07-27T18:10:10Z   MEMBER      

    It would be nice to make weighted work with groupby - e.g. #3935 (comment)

    However, it is not entirely clear to me how that should be done. One way would be to do:

    ```python
    da.groupby(...).weighted(weights).mean()
    ```

    this would require that the groupby operation is applied over the weights (how would this be done?) Or should it be

    ```python
    da.weighted(weights).groupby(...).mean()
    ```

    but this seems less intuitive to me.

    Or

    ```python
    da.groupby(..., weights=weights).mean()
    ```
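    The semantics any of these spellings would need can be sketched with plain Python (a hypothetical stand-in, not xarray code): a weighted mean computed independently within each group, with the weights split along the same grouping.

    ```python
    from collections import defaultdict

    def grouped_weighted_mean(values, groups, weights):
        # accumulate weighted sums and weight totals per group label
        num = defaultdict(float)
        den = defaultdict(float)
        for v, g, w in zip(values, groups, weights):
            num[g] += v * w
            den[g] += w
        return {g: num[g] / den[g] for g in num}

    print(grouped_weighted_mean([1.0, 2.0, 3.0], ["a", "a", "b"], [1.0, 3.0, 2.0]))
    # {'a': 1.75, 'b': 3.0}
    ```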

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3937/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    1596025651 PR_kwDOAMm_X85Kj_KM 7548 supress namespace_package deprecation warning (doctests) mathause 10194086 closed 0     0 2023-02-23T00:15:41Z 2023-02-23T18:38:16Z 2023-02-23T18:38:15Z MEMBER   0 pydata/xarray/pulls/7548

    Suppress the pkg_resources.namespace_package DeprecationWarning to make the doctests pass again (similar to #7322). This is reported upstream: pydap/pydap#277 and matplotlib/matplotlib#25244

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7548/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1086346755 PR_kwDOAMm_X84wKOjC 6096 Replace distutils.version with packaging.version mathause 10194086 closed 0     9 2021-12-22T00:51:21Z 2023-01-20T21:00:42Z 2021-12-24T14:50:48Z MEMBER   0 pydata/xarray/pulls/6096
    • [x] Closes #6092
    • [x] Passes pre-commit run --all-files
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    One change is that it is no longer possible to compare to a string, i.e. version.parse(xr.__version__) < "0.20.0" errors.

    As mentioned in #6092 there are 3 options - if there is a preference I am happy to update this PR.

    ```python
    from distutils.version import LooseVersion
    from packaging import version

    LooseVersion(xr.__version__)
    version.parse(xr.__version__)
    version.Version(xr.__version__)

    # currently:
    if LooseVersion(mod.__version__) < LooseVersion(minversion):
        pass

    # options:
    if version.parse(mod.__version__) < version.parse(minversion):
        pass

    if version.Version(mod.__version__) < version.Version(minversion):
        pass

    if Version(mod.__version__) < Version(minversion):
        pass
    ```
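    A quick stdlib illustration of why comparing versions as plain strings (which the LooseVersion style tolerated) is unsafe, and what a numeric comparison must do instead; packaging.version implements the robust form of this (the `vtuple` helper below is hypothetical).

    ```python
    # lexicographic string comparison gets multi-digit components wrong
    print("0.9.0" > "0.10.0")  # True - wrong for version ordering

    def vtuple(v):
        # hypothetical helper: compare release components numerically
        return tuple(int(part) for part in v.split("."))

    print(vtuple("0.9.0") < vtuple("0.10.0"))  # True - correct ordering
    ```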

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6096/reactions",
        "total_count": 3,
        "+1": 3,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1466191758 PR_kwDOAMm_X85Dylku 7326 fix doctests: supress urllib3 warning mathause 10194086 closed 0     1 2022-11-28T10:40:46Z 2022-12-05T20:11:16Z 2022-11-28T19:31:03Z MEMBER   0 pydata/xarray/pulls/7326
    • [x] Closes #7322
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7326/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1285767883 PR_kwDOAMm_X846ahUs 6730 move da and ds fixtures to conftest.py mathause 10194086 closed 0     9 2022-06-27T12:56:05Z 2022-12-05T20:11:08Z 2022-07-11T12:44:55Z MEMBER   0 pydata/xarray/pulls/6730

    This PR renames the da and ds fixtures (to da_fixture and ds_fixture) and moves them to conftest.py. This allows removing the flake8 error suppression for the tests and seems closer to how fixtures are intended to be used (from the pytest side). I think the name change makes it a bit more obvious what happens, but moving them to conftest.py may make it a bit less obvious (if you don't know where to look).

    Removing the flake8 error ignores also unearthed some unused imports:

    https://github.com/pydata/xarray/blob/787a96c15161c9025182291b672b3d3c5548a6c7/setup.cfg#L155-L156

    (What I actually wanted to do is move the tests for rolling to its own file - but I think it makes sense to do this first.)

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6730/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1344222732 PR_kwDOAMm_X849c2Wu 6934 deprecate_positional_args: remove stray print mathause 10194086 closed 0     0 2022-08-19T09:58:53Z 2022-12-05T20:11:08Z 2022-08-19T10:25:32Z MEMBER   0 pydata/xarray/pulls/6934

    I forgot to remove some debug print statements in #6910 - thanks for noting @shoyer & @dcherian

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6934/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1464824094 PR_kwDOAMm_X85DuSjU 7321 fix flake8 config mathause 10194086 closed 0     2 2022-11-25T18:16:07Z 2022-11-28T10:36:29Z 2022-11-28T10:33:00Z MEMBER   0 pydata/xarray/pulls/7321

    flake8 v6 now errors on inline comments in the config file. I don't like it but oh well...

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7321/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    715730538 MDU6SXNzdWU3MTU3MzA1Mzg= 4491 deprecate pynio backend mathause 10194086 closed 0     21 2020-10-06T14:27:20Z 2022-11-26T15:40:37Z 2022-11-26T15:40:37Z MEMBER      

    We are currently not testing with the newest version of netCDF4 because it is incompatible with pynio (the newest version is 1.5.4, we are at 1.5.3). This is unlikely to be fixed, see conda-forge/pynio-feedstock#90.

    Therefore we need to think about how to set up the tests so we use the newest version of netCDF4. Maybe just remove it from py38.yml?

    And long term what to do with the pynio backend? Deprecate? Move to an external repo?

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4491/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1372729718 I_kwDOAMm_X85R0jF2 7036 index refactor: more `_coord_names` than `_variables` on Dataset mathause 10194086 closed 0     3 2022-09-14T10:19:00Z 2022-09-27T10:35:40Z 2022-09-27T10:35:40Z MEMBER      

    What happened?

    xr.core.dataset.DataVariables assumes that everything that is in ds._dataset._variables and not in self._dataset._coord_names is a "data variable". However, since the index refactor we can end up with more _coord_names than _variables, which breaks a number of things (e.g. the repr).

    What did you expect to happen?

    Well it seems this assumption is now wrong.

    Minimal Complete Verifiable Example

    ```python
    ds = xr.Dataset(coords={"a": ("x", [1, 2, 3]), "b": ("x", ['a', 'b', 'c'])})
    ds.set_index(z=['a', 'b']).reset_index("z", drop=True)
    ```

    MVCE confirmation

    • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
    • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
    • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
    • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

    Relevant log output

    ```python
    ValueError: __len__() should return >= 0
    ```

    Anything else we need to know?

    The error comes from here

    https://github.com/pydata/xarray/blob/63ba862d03c8d0cd8b44d2071bc360e9fed4519d/xarray/core/dataset.py#L368

    Bisected to #5692 - which probably does not help too much.

    Environment

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/7036/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1118802352 PR_kwDOAMm_X84xzhTi 6212 better warning filter for assert_* mathause 10194086 closed 0     1 2022-01-31T00:22:37Z 2022-09-05T07:52:09Z 2022-09-05T07:52:06Z MEMBER   0 pydata/xarray/pulls/6212

    In #4864 I added a decorator for the xarray.testing.assert_* functions to ensure warnings that were turned into errors (pytest.mark.filterwarnings("error")) do not error in assert_* (see https://github.com/pydata/xarray/pull/4760#issuecomment-774101639). As a solution I added

    https://github.com/pydata/xarray/blob/5470d933452d88deb17cc9294a164c4a03f55dec/xarray/testing.py#L32

    However, this is sub-optimal because it removes all ignore filters! As dask computations only get evaluated inside assert_*, filters like warnings.filterwarnings("ignore", "Mean of empty slice") don't work for dask arrays!

    I thought of setting

    python warnings.simplefilter("ignore")

    but this could suppress warnings we want to keep.

    So now I remove all "error" warning filters and keep the rest. Note that the original filters get restored after the with warnings.catch_warnings(): block.


    I am not sure I expressed myself very clearly... let me know and I can try again. @keewis you had a look at #4864 maybe you can review this PR as well?
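The idea can be sketched roughly like this (illustrative only, not the exact helper in xarray.testing — a warnings filter entry is a tuple whose first element is the action):

```python
import warnings

# simulate pytest.mark.filterwarnings("error") plus an ignore filter
with warnings.catch_warnings():
    warnings.simplefilter("error")
    warnings.filterwarnings("ignore", message="Mean of empty slice")

    # keep all filters except the "error" entries
    warnings.filters = [f for f in warnings.filters if f[0] != "error"]

    actions = [f[0] for f in warnings.filters]

# outside the context manager the original filters are restored
assert "error" not in actions
assert "ignore" in actions
```

This way a lazily evaluated dask computation inside the block can still hit the ignore filters, while elevated-to-error filters no longer apply.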

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6212/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1355581692 PR_kwDOAMm_X84-Cbgk 6967 fix _deprecate_positional_args helper mathause 10194086 closed 0     0 2022-08-30T11:02:33Z 2022-09-02T21:54:07Z 2022-09-02T21:54:03Z MEMBER   0 pydata/xarray/pulls/6967

    I tried to use the _deprecate_positional_args decorator from #6934 & it turns out that it still had some errors - passing on the arguments did not work properly in certain cases... I now added tests for this as well (which I should have done in the first place...).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6967/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1355361572 PR_kwDOAMm_X84-Brev 6966 enable pydap in tests again mathause 10194086 closed 0     1 2022-08-30T08:18:07Z 2022-09-01T10:16:05Z 2022-09-01T10:16:03Z MEMBER   0 pydata/xarray/pulls/6966

    #5844 excluded pydap from our tests - but the new version has been released in the meantime (on conda but not on pypi, though, pydap/pydap#268) - so let's see if this still works.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6966/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1355349486 PR_kwDOAMm_X84-Bo54 6965 no longer install pydap for 'io' extras in py3.10 mathause 10194086 closed 0     2 2022-08-30T08:08:12Z 2022-09-01T10:15:30Z 2022-09-01T10:15:27Z MEMBER   0 pydata/xarray/pulls/6965
    • [x] Closes #6960
    • [ ] Tests added - tested manually
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6965/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1331969418 PR_kwDOAMm_X8480cLZ 6890 tests don't use `pytest.warns(None)` mathause 10194086 closed 0     0 2022-08-08T14:36:01Z 2022-08-30T12:15:33Z 2022-08-08T17:27:53Z MEMBER   0 pydata/xarray/pulls/6890

    Get rid of some warnings in the tests.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6890/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1344900323 PR_kwDOAMm_X849fIGC 6937 terminology: fix italics [skip-ci] mathause 10194086 closed 0     0 2022-08-19T21:13:52Z 2022-08-20T07:30:41Z 2022-08-20T07:30:41Z MEMBER   0 pydata/xarray/pulls/6937
    • [x] Closes #6932

    @zmoon - obviously it would be nice if we had a linter for this but this is for another time.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6937/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1337166287 PR_kwDOAMm_X849FuuD 6910 decorator to deprecate positional arguments mathause 10194086 closed 0     7 2022-08-12T12:48:47Z 2022-08-18T18:14:09Z 2022-08-18T15:59:52Z MEMBER   0 pydata/xarray/pulls/6910
    • [x] Supersedes #6403, see also #5531
    • [x] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

    Adds a helper function to deprecate positional arguments. IMHO this offers a good trade-off between magic and complexity. (As mentioned this was adapted from scikit-learn).

    edit: I suggest to actually deprecate positional arguments in another PR.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6910/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1155634014 PR_kwDOAMm_X84zvnTl 6316 fix typos (using codespell) mathause 10194086 closed 0     2 2022-03-01T17:52:24Z 2022-07-18T13:33:02Z 2022-03-02T13:57:29Z MEMBER   0 pydata/xarray/pulls/6316

    fix some typos (using codespell). Called using:

    ```bash
    codespell --skip=".git,.mypy_cache,*.tex" --ignore-words-list coo,nd,inferrable,hist,ND,splitted,soler,slowy,ba,ser,nin,te,fo -w -i 3
    ```

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6316/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1094725752 I_kwDOAMm_X85BQDB4 6142 dimensions: type as `str | Iterable[Hashable]`? mathause 10194086 open 0     14 2022-01-05T20:39:00Z 2022-06-26T11:57:40Z   MEMBER      

    What happened?

    We generally type dimensions as:

    python dims: Hashable | Iterable[Hashable]

    However, this is in conflict with passing a tuple of independent dimensions to a method - e.g. da.mean(("x", "y")) because a tuple is also hashable.

    Also mypy requires an isinstance(dims, Hashable) check when typing a function. We use an isinstance(dims, str) check in many places to wrap a single dimension in a list. Changing this to isinstance(dims, Hashable) will change the behavior for tuples.

    What did you expect to happen?

    In the community call today we discussed to change this to

    python dims: str | Iterable[Hashable]

    i.e. if a single dim is passed it has to be a string and wrapping it in a list is a convenience function. Special use cases with Hashable types should be wrapped in a Iterable by the user. This probably best reflects the current state of the repo (dims = [dims] if isinstance(dims, str) else dims).
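A minimal sketch of what this convention means in practice (normalize_dims is an illustrative helper, not an xarray function):

```python
from collections.abc import Hashable

def normalize_dims(dims):
    # wrap a bare string in a list; everything else is treated as an
    # iterable of hashable dimension names
    if isinstance(dims, str):
        return [dims]
    return list(dims)

# a tuple is itself Hashable, so an isinstance(dims, Hashable) check
# could not distinguish a single tuple-named dimension from several
# dimension names passed as a tuple
assert isinstance(("x", "y"), Hashable)

normalize_dims("x")         # -> ["x"]
normalize_dims(("x", "y"))  # -> ["x", "y"]: interpreted as two dimensions
```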

    The disadvantage could be that it is a bit more difficult to explain in the docstrings?

    @shoyer - did I get this right from the discussion?


    Other options

    1. Require str as dimension names.

    This could be too restrictive. @keewis mentioned that tuple dimension names are already used somewhere in the xarray repo. Also we discussed in another issue or PR (which I cannot find right now) that we want to keep allowing Hashable.

    2. Disallow passing tuples (only allow tuples if a dimension is a tuple), require lists to pass several dimensions.

    This is too restrictive in the other direction and will probably lead to a lot of downstream trouble. Naming a single dimension with a tuple will be a very rare case, in contrast to passing several dimension names as a tuple.

    3. Special case tuples. We could potentially check if dims is a tuple and if there are any dimension names consisting of a tuple. Seems more complicated and potentially brittle for probably small gains (IMO).

    Minimal Complete Verifiable Example

    No response

    Relevant log output

    No response

    Anything else we need to know?

    • We need to check carefully where general Hashable are really allowed. E.g. dims of a DataArray are typed as

    https://github.com/pydata/xarray/blob/e056cacdca55cc9d9118c830ca622ea965ebcdef/xarray/core/dataarray.py#L380

    but tuples are not actually allowed:

    ```python
    import xarray as xr

    xr.DataArray([1], dims=("x", "y"))
    # ValueError: different number of dimensions on data and dims: 1 vs 2

    xr.DataArray([1], dims=[("x", "y")])
    # TypeError: dimension ('x', 'y') is not a string
    ```

    • We need to be careful typing functions where only one dim is allowed, e.g. xr.concat, which should probably set dim: Hashable (and make sure it works).
    • Do you have examples for other real-world hashable types except for str and tuple? (Would be good for testing purposes).

    Environment

    N/A

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6142/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    685739084 MDU6SXNzdWU2ODU3MzkwODQ= 4375 allow using non-dimension coordinates in polyfit mathause 10194086 open 0     1 2020-08-25T19:40:55Z 2022-04-09T02:58:48Z   MEMBER      

    polyfit currently only allows to fit along a dimension and not along a non-dimension coordinate (or a virtual coordinate)

    Example:

    ```python
    da = xr.DataArray(
        [1, 3, 2], dims=["x"], coords=dict(x=["a", "b", "c"], y=("x", [0, 1, 2]))
    )

    print(da)

    da.polyfit("y", 1)
    ```

    Output:

    ```
    <xarray.DataArray (x: 3)>
    array([1, 3, 2])
    Coordinates:
      * x        (x) <U1 'a' 'b' 'c'
        y        (x) int64 0 1 2
    ```

    ```python-traceback
    KeyError                                  Traceback (most recent call last)
    <ipython-input-80-9bb2dacf50f7> in <module>
          5 print(da)
          6
    ----> 7 da.polyfit("y", 1)

    ~/.conda/envs/ipcc_ar6/lib/python3.7/site-packages/xarray/core/dataarray.py in polyfit(self, dim, deg, skipna, rcond, w, full, cov)
       3507         """
       3508         return self._to_temp_dataset().polyfit(
    -> 3509             dim, deg, skipna=skipna, rcond=rcond, w=w, full=full, cov=cov
       3510         )
       3511

    ~/.conda/envs/ipcc_ar6/lib/python3.7/site-packages/xarray/core/dataset.py in polyfit(self, dim, deg, skipna, rcond, w, full, cov)
       6005             skipna_da = skipna
       6006
    -> 6007         x = get_clean_interp_index(self, dim, strict=False)
       6008         xname = "{}_".format(self[dim].name)
       6009         order = int(deg) + 1

    ~/.conda/envs/ipcc_ar6/lib/python3.7/site-packages/xarray/core/missing.py in get_clean_interp_index(arr, dim, use_coordinate, strict)
        246
        247     if use_coordinate is True:
    --> 248         index = arr.get_index(dim)
        249
        250     else:  # string

    ~/.conda/envs/ipcc_ar6/lib/python3.7/site-packages/xarray/core/common.py in get_index(self, key)
        378         """
        379         if key not in self.dims:
    --> 380             raise KeyError(key)
        381
        382         try:

    KeyError: 'y'
    ```

    Describe the solution you'd like

    Would be nice if that worked.

    Describe alternatives you've considered

    One could just set the non-dimension coordinate as index, e.g.: da = da.set_index(x="y")
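For the example above, that workaround looks like this (set_index(x="y") replaces the index of dimension x with the values of the coordinate y, so the fit can then be done along x):

```python
import xarray as xr

da = xr.DataArray(
    [1, 3, 2], dims=["x"], coords=dict(x=["a", "b", "c"], y=("x", [0, 1, 2]))
)

# promote the non-dimension coordinate y to the index of dim x,
# then fit along the (now numeric) dimension coordinate
fit = da.set_index(x="y").polyfit("x", 1)
```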

    Additional context

    Allowing this may be as easy as replacing

    https://github.com/pydata/xarray/blob/9c85dd5f792805bea319f01f08ee51b83bde0f3b/xarray/core/missing.py#L248

    by index = arr[dim] but I might be missing something. Or probably a use_coordinate must be threaded through to get_clean_interp_index (although I am a bit confused by this argument).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4375/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    144630996 MDU6SXNzdWUxNDQ2MzA5OTY= 810 correct DJF mean mathause 10194086 closed 0     4 2016-03-30T15:36:42Z 2022-04-06T16:19:47Z 2016-05-04T12:56:30Z MEMBER      

    This started as a question and I add it as reference. Maybe you have a comment.

    There are several ways to calculate time series of seasonal data (starting from monthly or daily data):

    ```python
    # load libraries
    import pandas as pd
    import matplotlib.pyplot as plt
    import numpy as np
    import xarray as xr

    # Create Example Dataset
    time = pd.date_range('2000.01.01', '2010.12.31', freq='M')
    data = np.random.rand(*time.shape)
    ds = xr.DataArray(data, coords=dict(time=time))

    # (1) using resample
    ds_res = ds.resample('Q-FEB', 'time')
    ds_res = ds_res.sel(time=ds_res['time.month'] == 2)
    ds_res = ds_res.groupby('time.year').mean('time')

    # (2) this is wrong
    ds_season = ds.where(ds['time.season'] == 'DJF').groupby('time.year').mean('time')

    # (3) using where and rolling
    # mask other months with nan
    ds_DJF = ds.where(ds['time.season'] == 'DJF')

    # rolling mean -> only Jan is not nan
    # however, we lose Jan/Feb in the first year and Dec in the last
    ds_DJF = ds_DJF.rolling(min_periods=3, center=True, time=3).mean()

    # make annual mean
    ds_DJF = ds_DJF.groupby('time.year').mean('time')

    ds_res.plot(marker='*')
    ds_season.plot()
    ds_DJF.plot()

    plt.show()
    ```

    (1) The first option is to use resample with 'Q-FEB' as argument. This works fine. It does include Jan/Feb in the first year, and Dec in the last year + 1. Whether this makes sense can be debated. One case where this does not work is when you have, say, two regions in your data set: for one you want to calculate DJF and for the other you want NovDecJan.

    (2) Using 'time.season' is wrong as it combines Jan, Feb and Dec from the same year.

    (3) The third uses where and rolling and you lose 'incomplete' seasons. If you replace ds.where(ds['time.season'] == 'DJF') with ds.groupby('time.month').where(summer_months), where summer_months is a boolean array it works also for non-standard 'summers' (or seasons) across the globe.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/810/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    310833761 MDU6SXNzdWUzMTA4MzM3NjE= 2037 to_netcdf -> _fill_value without NaN mathause 10194086 open 0     8 2018-04-03T13:20:19Z 2022-03-10T10:59:17Z   MEMBER      

    Code Sample, a copy-pastable example if possible

    ```python
    import xarray as xr
    import numpy as np

    x = np.arange(10.)
    da = xr.Dataset(data_vars=dict(data=('dim1', x)), coords=dict(dim1=('dim1', x)))
    da.to_netcdf('tst.nc')
    ```

    Problem description

    Apologies if this was discussed somewhere (and it probably does not matter much), but tst.nc has a _FillValue attribute although it is not really necessary.
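For reference, the attribute can be suppressed explicitly by setting `_FillValue` to `None` in the variable's encoding; a small round-trip check (written to a temporary file, requires a netCDF backend such as scipy or netCDF4):

```python
import os
import tempfile

import numpy as np
import xarray as xr

x = np.arange(10.)
da = xr.Dataset(data_vars=dict(data=('dim1', x)), coords=dict(dim1=('dim1', x)))

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "tst.nc")
    # _FillValue=None suppresses the attribute for this variable
    da.to_netcdf(path, encoding={"data": {"_FillValue": None}})

    with xr.open_dataset(path) as ds:
        has_fill = "_FillValue" in ds["data"].encoding

assert not has_fill
```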

    Output of xr.show_versions()

    # Paste the output here xr.show_versions() here
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/2037/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    1150224882 PR_kwDOAMm_X84zdCrl 6303 quantile: use skipna=None mathause 10194086 closed 0     0 2022-02-25T09:24:05Z 2022-03-03T09:43:38Z 2022-03-03T09:43:35Z MEMBER   0 pydata/xarray/pulls/6303
    • [x] Tests added
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    skipna=None did not skip missing values for quantile, inconsistent with other methods. Discovered while testing #6059.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6303/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1149708477 PR_kwDOAMm_X84zbVnG 6302 from_dict: doctest mathause 10194086 closed 0     0 2022-02-24T20:17:24Z 2022-02-28T09:11:05Z 2022-02-28T09:11:02Z MEMBER   0 pydata/xarray/pulls/6302
    • [x] Closes #6136

    Convert the code block in xr.DataArray.from_dict and xr.Dataset.from_dict to doctest/ examples.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6302/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1150251120 I_kwDOAMm_X85Ej3Bw 6304 add join argument to xr.broadcast? mathause 10194086 open 0     1 2022-02-25T09:52:14Z 2022-02-25T21:50:16Z   MEMBER      

    Is your feature request related to a problem?

    xr.broadcast always does an outer join:

    https://github.com/pydata/xarray/blob/de965f342e1c9c5de92ab135fbc4062e21e72453/xarray/core/alignment.py#L702

    https://github.com/pydata/xarray/blob/de965f342e1c9c5de92ab135fbc4062e21e72453/xarray/core/alignment.py#L768

    This is not how the (default) broadcasting (arithmetic join) works, e.g. the following first does an inner join and then broadcasts:

    ```python
    import xarray as xr

    da1 = xr.DataArray([[0, 1, 2]], dims=("y", "x"), coords={"x": [0, 1, 2]})
    da2 = xr.DataArray([0, 1, 2, 3, 4], dims="x", coords={"x": [0, 1, 2, 3, 4]})
    da1 + da2
    ```

    ```
    <xarray.DataArray (y: 1, x: 3)>
    array([[0, 2, 4]])
    Coordinates:
      * x        (x) int64 0 1 2
    Dimensions without coordinates: y
    ```

    Describe the solution you'd like

    Add a join argument to xr.broadcast. I would propose to leave the default as is

    ```python
    def broadcast(*args, exclude=None, join="outer"):
        args = align(*args, join=join, copy=False, exclude=exclude)
    ```

    Describe alternatives you've considered

    • We could make broadcast respect options -> arithmetic_join but that would be a breaking change and I am not sure how the deprecation should/ would be handled...
    • We could leave it as is.

    Additional context

    • xr.broadcast should not be used often because broadcasting should happen automatically in most cases
    • in #6059 I use broadcast because I couldn't get it to work otherwise (maybe there is a better way?). However, the "outer elements" are immediately discarded again - so it's kind of pointless to do an outer join.

    ```python
    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.arange(6).reshape(3, 2), coords={"dim_0": [0, 1, 2]})
    w = xr.DataArray([1, 1, 1, 1, 1, 1], coords={"dim_0": [0, 1, 2, 4, 5, 6]})
    da.weighted(w).quantile(0.5)
    ```

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6304/reactions",
        "total_count": 4,
        "+1": 4,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    1126086052 PR_kwDOAMm_X84yLQ48 6251 use `warnings.catch_warnings(record=True)` instead of `pytest.warns(None)` mathause 10194086 closed 0     4 2022-02-07T14:42:26Z 2022-02-18T16:51:58Z 2022-02-18T16:51:55Z MEMBER   0 pydata/xarray/pulls/6251

    pytest v7.0.0 no longer wants us to use pytest.warns(None) to test for the absence of a warning, so we use warnings.catch_warnings(record=True) instead.
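The replacement pattern looks roughly like this:

```python
import warnings

# record all warnings instead of using pytest.warns(None)
with warnings.catch_warnings(record=True) as record:
    warnings.simplefilter("always")
    result = sorted([3, 1, 2])  # code under test that should not warn

# no warnings were raised
assert len(record) == 0
```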

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6251/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1088615118 PR_kwDOAMm_X84wRifr 6108 quantile: rename interpolation arg to method mathause 10194086 closed 0     3 2021-12-25T15:06:44Z 2022-02-08T17:09:47Z 2022-02-07T09:40:05Z MEMBER   0 pydata/xarray/pulls/6108

    numpy/numpy#20327 introduces some changes to np.quantile (and related) for the upcoming numpy release (v1.22.0). It renames the interpolation keyword to method and offers some new interpolation methods. This PR does two things

    1. it restores compatibility with numpy 1.22
    2. it renames the interpolation keyword to method in xarray - this change is not strictly necessary but I thought better to be consistent with numpy

    • [x] Tests added

    • [x] Passes pre-commit run --all-files
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    (Side note: in dask.array.percentile the method keyword is used differently from the interpolation keyword (docs). However, xarray does not use the dask function.)


    TODO: need to import ArrayLike from npcompat.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6108/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1125661464 PR_kwDOAMm_X84yJ3Rz 6248 test bottleneck master in upstream CI [test-upstream] [skip-ci] mathause 10194086 closed 0     1 2022-02-07T08:25:35Z 2022-02-07T09:05:28Z 2022-02-07T09:05:24Z MEMBER   0 pydata/xarray/pulls/6248
    • [x] Closes #6186

    pydata/bottleneck#378 was merged - so this should work again.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6248/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1111644832 I_kwDOAMm_X85CQlqg 6186 upstream dev CI: enable bottleneck again mathause 10194086 closed 0     2 2022-01-22T18:11:25Z 2022-02-07T09:05:24Z 2022-02-07T09:05:24Z MEMBER      

    bottleneck cannot be built with python 3.10. See https://github.com/pydata/xarray/actions/runs/1731371015

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6186/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1118836906 PR_kwDOAMm_X84xzojx 6213 fix or suppress test warnings mathause 10194086 closed 0     1 2022-01-31T01:34:20Z 2022-02-01T09:40:15Z 2022-02-01T09:40:11Z MEMBER   0 pydata/xarray/pulls/6213

    Fixes or suppresses a number of warnings that turn up in our upstream CI.

    pd.Index.is_monotonic is an alias for pd.Index.is_monotonic_increasing and does not stand for pd.Index.is_monotonic_increasing or pd.Index.is_monotonic_decreasing.
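A small illustration of the distinction (the deprecated alias itself is omitted since it was later removed from pandas; only the explicit properties are used here):

```python
import pandas as pd

decreasing = pd.Index([3, 2, 1])

# a strictly decreasing index is monotonic in the mathematical sense,
# but the alias followed is_monotonic_increasing and returned False here
assert not decreasing.is_monotonic_increasing
assert decreasing.is_monotonic_decreasing
```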

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6213/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1118168483 PR_kwDOAMm_X84xxms4 6208 Revert "MNT: prepare h5netcdf backend for (coming) change in dimension handling" mathause 10194086 closed 0     8 2022-01-29T10:27:11Z 2022-01-29T13:48:17Z 2022-01-29T13:20:51Z MEMBER   0 pydata/xarray/pulls/6208

    Reverts pydata/xarray#6200

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6208/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1114414215 PR_kwDOAMm_X84xlfet 6194 doc: fix pd datetime parsing warning [skip-ci] mathause 10194086 closed 0     0 2022-01-25T22:12:53Z 2022-01-28T08:37:18Z 2022-01-28T05:41:49Z MEMBER   0 pydata/xarray/pulls/6194

    And another tiny one... The somewhat ambiguous date string triggers a warning in pandas which makes our doc build fail.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6194/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1115026697 PR_kwDOAMm_X84xneFL 6195 MAINT: pandas 1.4: no longer use get_loc with method mathause 10194086 closed 0     5 2022-01-26T13:35:04Z 2022-01-27T22:11:04Z 2022-01-27T21:06:40Z MEMBER   0 pydata/xarray/pulls/6195
    • [x] Closes #5721
    • [ ] Tests added
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

    Fixed as per @shoyer & @spencerkclark suggestion from https://github.com/pydata/xarray/issues/5721#issuecomment-903095007

    Now that pandas 1.4 is out it would be good to get this fixed (there are about 5000 warnings in our tests, mostly because of interp, though). Also leads to a warning in our docs which breaks them (although that can also be fixed with an :okwarning: directive).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6195/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    975385095 MDU6SXNzdWU5NzUzODUwOTU= 5721 pandas deprecates Index.get_loc with method mathause 10194086 closed 0     7 2021-08-20T08:24:16Z 2022-01-27T21:06:40Z 2022-01-27T21:06:40Z MEMBER      

    pandas deprecates the method keyword in Index.get_loc, see pandas-dev/pandas#42269. Therefore we end up with about 5000 warnings in our upstream tests:

    FutureWarning: Passing method to Index.get_loc is deprecated and will raise in a future version. Use index.get_indexer([item], method=...) instead

    We should fix this before pandas releases because the warning will not be silent (FutureWarning) or ask pandas to give us more time and use a DeprecationWarning at the moment.

    We use this here:

    https://github.com/pydata/xarray/blob/4bb9d9c6df77137f05e85c7cc6508fe7a93dc0e4/xarray/core/indexes.py#L233-L235

    Is this only ever called with one item? Then we might be able to use

    ```python
    indexer = self.index.get_indexer(
        [label_value], method=method, tolerance=tolerance
    ).item()
    if indexer == -1:
        raise KeyError(label_value)
    ```

    ---

    https://github.com/pydata/xarray/blob/3956b73a7792f41e4410349f2c40b9a9a80decd2/xarray/core/missing.py#L571-L572

    This one could be easy to fix (replace with `imin = index.get_indexer([minval], method="nearest").item()`)

    ---

    It is also defined in `CFTimeIndex`, which complicates things:

    https://github.com/pydata/xarray/blob/eea76733770be03e78a0834803291659136bca31/xarray/coding/cftimeindex.py#L461-L466

    because `get_indexer` expects an iterable and thus the `if isinstance(key, str)` test no longer works.
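A small demonstration of the behavioral difference between the two pandas methods (get_indexer returns -1 for missing labels instead of raising):

```python
import pandas as pd

index = pd.Index([10, 20, 30])

# get_indexer takes a list of labels and returns positions
indexer = index.get_indexer([24], method="nearest").item()
assert indexer == 1  # 24 is closest to 20

# a missing label yields -1 rather than a KeyError
missing = index.get_indexer([99]).item()
assert missing == -1
```

This is why the replacement code has to raise the KeyError manually when the returned position is -1.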

    @benbovy @spencerkclark

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5721/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1114392372 PR_kwDOAMm_X84xla15 6192 fix cftime doctests mathause 10194086 closed 0     0 2022-01-25T21:43:55Z 2022-01-26T21:45:19Z 2022-01-26T21:45:17Z MEMBER   0 pydata/xarray/pulls/6192

    Fixes the doctests for the newest version of cftime. @spencerkclark

    This of course means that the doctests will fail for environments with older versions of cftime present. I don't think there is anything we can do.

    Thanks for pytest-accept b.t.w @max-sixty

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6192/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1039272725 PR_kwDOAMm_X84t1ecc 5914 #5740 follow up: supress xr.ufunc warnings in tests mathause 10194086 closed 0     2 2021-10-29T07:53:07Z 2022-01-26T08:41:41Z 2021-10-29T15:16:03Z MEMBER   0 pydata/xarray/pulls/5914

    #5740 changed PendingDeprecationWarning to FutureWarning - suppress the warnings again in the test suite.

    https://github.com/pydata/xarray/blob/36f05d70c864ee7c61603c8a43ba721bf7f434b3/xarray/ufuncs.py#L47-L49

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5914/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1083281083 PR_kwDOAMm_X84wATnw 6082 cftime: 'gregorian' -> 'standard' [test-upstream] mathause 10194086 closed 0     3 2021-12-17T13:51:07Z 2022-01-26T08:41:33Z 2021-12-22T11:40:05Z MEMBER   0 pydata/xarray/pulls/6082
    • [x] Closes #6016

    cftime 1.5.2 renames "gregorian" to "standard". AFAIK this only changes the repr of cftime indices and does not seem to influence the creation of cftime indices.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6082/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1088419434 PR_kwDOAMm_X84wQ-nD 6107 is_dask_collection: micro optimization mathause 10194086 closed 0     1 2021-12-24T15:04:42Z 2022-01-26T08:41:28Z 2021-12-29T16:27:55Z MEMBER   0 pydata/xarray/pulls/6107

    In #6096 I realized that `DuckArrayModule("dask")` is called a lot in our tests - 145'835 times. Most of those calls come from `is_dask_collection` (`is_duck_dask_array`). This change avoids re-building the instance on every call.

    ```python
    import xarray as xr

    %timeit xr.core.pycompat.DuckArrayModule("dask").available
    %timeit xr.core.pycompat.dsk.available
    ```

    18.9 µs ± 97.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
    77.1 ns ± 1.22 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

    Which leads to an incredible speed up of our tests of about 2.7 s :grin: ((18.9 - 0.0771) * 145835 / 1e6).
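The pattern behind the change can be sketched as a module-level singleton (hypothetical, simplified names — the real code lives in `xarray.core.pycompat` and differs in detail): the module probe is built once at import time instead of on every call.

```python
import importlib


class DuckArrayModule:
    """Minimal stand-in: importing/probing a module on every call is expensive."""

    def __init__(self, name):
        try:
            self.module = importlib.import_module(name)
            self.available = True
        except ImportError:
            self.module = None
            self.available = False


# module-level singleton: built once, then reused by every call below
dsk = DuckArrayModule("dask")


def is_dask_collection(x):
    # cheap attribute lookups instead of re-building DuckArrayModule each time
    return dsk.available and hasattr(x, "__dask_graph__")
```

Callers pay only an attribute lookup per call, which is what produces the microsecond-to-nanosecond speedup measured above.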

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6107/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    752870062 MDExOlB1bGxSZXF1ZXN0NTI5MDc4NDA0 4616 don't type check __getattr__ mathause 10194086 closed 0     4 2020-11-29T08:53:09Z 2022-01-26T08:41:18Z 2021-10-18T14:06:30Z MEMBER   1 pydata/xarray/pulls/4616
    • [x] Closes #4601
    • [x] Passes isort . && black . && mypy . && flake8
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    It's not pretty as I had to define a number of empty methods... I think this should wait for 0.17

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4616/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    778069594 MDExOlB1bGxSZXF1ZXN0NTQ4MjI1MDQz 4760 WIP: testing.assert_* check dtype mathause 10194086 closed 0     8 2021-01-04T12:45:00Z 2022-01-26T08:41:17Z 2021-10-18T14:06:38Z MEMBER   1 pydata/xarray/pulls/4760
    • [x] Closes #4727
    • [ ] Tests added
    • [ ] Passes isort . && black . && mypy . && flake8
    • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
    • [ ] New functions/methods are listed in api.rst

    This adds a dtype check for equal, identical, broadcast_equal, and the xr.testing.assert_* functions. It is far from complete: tests and documentation are still missing, but I wanted to get it online for feedback.

    When I set check_dtype=True there are around 600 failures. Fixing that is for another PR. #4759 should help a bit.

    • [ ] I added the checks to lazy_array_equiv, however, sometimes dask can get the dtype wrong before the compute (see below). Do you think I need to put it in the non-lazy part?

    ```python
    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.array([0, np.nan], dtype=object)).chunk()

    da.prod().dtype            # -> dtype('O')
    da.prod().compute().dtype  # -> dtype('int64')
    ```

    • [ ] check_dtype is still missing from assert_duckarray_allclose & assert_duckarray_equal - do you think they are required?

    • [ ] The dtypes of array elements are not tested (see below). I don't think I'll implement that here.

    ```python
    da0 = xr.DataArray(np.array([0], dtype=object))
    da1 = xr.DataArray(np.array([0.], dtype=object))

    xr.testing.assert_equal(da0, da1, check_dtype=True)
    ```

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4760/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1107431006 PR_kwDOAMm_X84xOsZX 6171 unpin dask again mathause 10194086 closed 0     1 2022-01-18T22:37:31Z 2022-01-26T08:41:02Z 2022-01-18T23:39:12Z MEMBER   0 pydata/xarray/pulls/6171
    • dask 2022.01 is out, so we can remove the pin
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6171/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1109572055 PR_kwDOAMm_X84xVqMq 6177 remove no longer necessary version checks mathause 10194086 closed 0     2 2022-01-20T17:24:21Z 2022-01-26T08:40:55Z 2022-01-21T18:00:51Z MEMBER   0 pydata/xarray/pulls/6177

    I hunted down some version checks that should no longer be necessary as we have moved beyond the minimum versions.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6177/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1114401347 PR_kwDOAMm_X84xlcvk 6193 don't install bottleneck wheel for upstream CI mathause 10194086 closed 0     3 2022-01-25T21:55:49Z 2022-01-26T08:31:42Z 2022-01-26T08:31:39Z MEMBER   0 pydata/xarray/pulls/6193
    • [x] see #6186

    I think it would be good to re-enable the upstream CI, even if this means we have to stick to py3.9 for the moment. I just subscribed to pydata/bottleneck#378, so I should see when we can switch to 3.10.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6193/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1099288617 PR_kwDOAMm_X84wzh1F 6155 typing fixes for mypy 0.931 and numpy 1.22 mathause 10194086 closed 0     2 2022-01-11T15:19:43Z 2022-01-13T17:13:00Z 2022-01-13T17:12:57Z MEMBER   0 pydata/xarray/pulls/6155

    typing fixes for mypy 0.931 and numpy 1.22. Also tested with numpy 1.20 which probably many still have installed.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6155/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    432074821 MDU6SXNzdWU0MzIwNzQ4MjE= 2889 nansum vs nanmean for all-nan vectors mathause 10194086 closed 0     3 2019-04-11T15:04:39Z 2022-01-05T21:59:48Z 2019-04-11T16:08:02Z MEMBER      

    ```python
    import xarray as xr
    import numpy as np

    ds = xr.DataArray([np.NaN, np.NaN])

    ds.mean()
    ds.sum()
    ```

    Problem description

    ds.mean() returns NaN, ds.sum() returns 0. This comes from numpy (cf. np.nanmean vs. np.nansum), so it might have to be discussed upstream, but I wanted to ask the xarray community first for their opinion. This is also relevant for #422 (what happens if all the weights are NaN or sum up to 0).

    Expected Output

    I would expect both to return np.nan.
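The split originates in numpy itself and can be reproduced directly:

```python
import numpy as np

arr = np.array([np.nan, np.nan])

# the mean of no valid values is undefined -> nan
# (numpy also emits a "Mean of empty slice" RuntimeWarning)
print(np.nanmean(arr))

# the sum over an empty set of values is defined as 0 -> 0.0
print(np.nansum(arr))
```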

    Output of xr.show_versions()

    INSTALLED VERSIONS ------------------ commit: None python: 3.7.3 | packaged by conda-forge | (default, Mar 27 2019, 23:01:00) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.4.176-96-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.4 libnetcdf: 4.6.2 xarray: 0.12.1 pandas: 0.24.2 numpy: 1.16.2 scipy: 1.2.1 netCDF4: 1.5.0.1 pydap: None h5netcdf: 0.7.1 h5py: 2.9.0 Nio: None zarr: None cftime: 1.0.3.4 nc_time_axis: 1.2.0 PseudonetCDF: None rasterio: 1.0.22 cfgrib: None iris: None bottleneck: 1.2.1 dask: 1.1.5 distributed: 1.26.1 matplotlib: 3.0.3 cartopy: 0.17.0 seaborn: 0.9.0 setuptools: 41.0.0 pip: 19.0.3 conda: None pytest: 4.4.0 IPython: 7.4.0 sphinx: 2.0.1
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/2889/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1078998718 PR_kwDOAMm_X84vyLHe 6077 disable pytest-xdist (to check CI failure) mathause 10194086 closed 0     3 2021-12-13T20:43:38Z 2022-01-03T08:30:02Z 2021-12-22T12:55:23Z MEMBER   0 pydata/xarray/pulls/6077

    Our CI fails with some pytest-xdist error. Let's see if we get a clearer picture when disabling parallel tests. (Maybe some interaction between dask and pytest-xdist?).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6077/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1090752550 PR_kwDOAMm_X84wYT5m 6127 Revert "disable pytest-xdist (to check CI failure)" mathause 10194086 closed 0     2 2021-12-29T21:15:36Z 2022-01-03T08:29:52Z 2022-01-03T08:29:49Z MEMBER   0 pydata/xarray/pulls/6127
    • [x] Closes #6101

    Reverts pydata/xarray#6077 (after dask has been pinned in #6111)

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6127/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1086797050 I_kwDOAMm_X85AxzT6 6101 enable pytest-xdist again (after dask release) mathause 10194086 closed 0     0 2021-12-22T12:57:03Z 2022-01-03T08:29:48Z 2022-01-03T08:29:48Z MEMBER      

    I disabled pytest-xdist because a dask issue renders our CI unusable. As soon as dask releases a new version we should revert #6077 again.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6101/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
      completed xarray 13221727 issue
    1086360190 PR_kwDOAMm_X84wKRVp 6097 fix tests for h5netcdf v0.12 mathause 10194086 closed 0     6 2021-12-22T01:22:09Z 2021-12-23T20:29:33Z 2021-12-23T20:29:12Z MEMBER   0 pydata/xarray/pulls/6097

    h5netcdf no longer warns for invalid netCDF (unless passing save_kwargs = {"invalid_netcdf": True}). We need to adapt our tests.

    @kmuehlbauer


    edit: I added h5netcdf to the upstream tests - I can also revert this change if you prefer.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/6097/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1036287825 PR_kwDOAMm_X84tryph 5899 [test-upstream] fix pd skipna=None mathause 10194086 closed 0     2 2021-10-26T13:16:21Z 2021-10-28T11:54:49Z 2021-10-28T11:46:04Z MEMBER   0 pydata/xarray/pulls/5899
    • [x] Closes #5872
    • [x] Passes pre-commit run --all-files

    pandas will disallow skipna=None (pandas-dev/pandas#44178) - this fixes a test which relies on this. I don't think we have any user-facing use of this (AFAIK we don't use pandas for reductions anywhere).

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5899/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    1029142676 PR_kwDOAMm_X84tVCEd 5875 fix test with pseudonetcdf 3.2 mathause 10194086 closed 0     5 2021-10-18T13:49:23Z 2021-10-22T21:24:09Z 2021-10-22T21:23:34Z MEMBER   0 pydata/xarray/pulls/5875

    Fixes one part of #5872

    pseudoNETCDF adds two attrs to ict files, which breaks the following two tests:

    Test 1: https://github.com/pydata/xarray/blob/07de257c5884df49335496ee6347fb633a7c302c/xarray/tests/test_backends.py#L3944 Test 2:

    https://github.com/pydata/xarray/blob/07de257c5884df49335496ee6347fb633a7c302c/xarray/tests/test_backends.py#L4030

    I reproduced the test file so that the tests pass again. To reproduce the file I used the following bit of code:

    ```python
    import xarray as xr
    from xarray.tests import test_backends

    fN = "xarray/tests/data/example.ict"
    fmtkw = {"format": "ffi1001"}

    ds = xr.open_dataset(fN, engine="pseudonetcdf", backend_kwargs={"format": "ffi1001"})

    c = test_backends.TestPseudoNetCDFFormat()
    c.save(ds, fN, **fmtkw)
    ```

    The save method is here:

    https://github.com/pydata/xarray/blob/07de257c5884df49335496ee6347fb633a7c302c/xarray/tests/test_backends.py#L4124

    @barronh I would appreciate your review here - I am not sure if this is the right approach.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5875/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    877166445 MDExOlB1bGxSZXF1ZXN0NjMxMTcwNzI4 5265 Warn ignored keep attrs mathause 10194086 closed 0     1 2021-05-06T07:20:16Z 2021-10-18T14:06:37Z 2021-05-06T16:31:05Z MEMBER   0 pydata/xarray/pulls/5265
    • [x] Part of #4513
    • [x] Tests added
    • [x] Passes pre-commit run --all-files
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    This PR warns when passing keep_attrs to resample(..., keep_attrs=True) and rolling_exp(..., keep_attrs=True) as they have no effect (rightfully). Also removes keep_attrs from the docstring of resample.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5265/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    869763597 MDExOlB1bGxSZXF1ZXN0NjI1MDc0NjA5 5227 coarsen: better keep_attrs mathause 10194086 closed 0     0 2021-04-28T09:56:45Z 2021-10-18T14:06:35Z 2021-04-29T17:40:57Z MEMBER   0 pydata/xarray/pulls/5227
    • [x] Part of #4513 (maybe the last one - need to double check)
    • [x] Tests added
    • [x] Passes pre-commit run --all-files
    • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

    As per https://github.com/pydata/xarray/issues/3891#issuecomment-612522628 I also changed the default to True.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5227/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    758033677 MDExOlB1bGxSZXF1ZXN0NTMzMjc0NDY3 4656 unpin pip 20.2 again mathause 10194086 closed 0     7 2020-12-06T22:00:12Z 2021-10-18T14:06:34Z 2021-04-18T21:42:25Z MEMBER   0 pydata/xarray/pulls/4656

    Another enormous PR from my side ;) unpin pip again. numpy probably fixed the issue regarding the name of the nightly build, but I also need to double-check that scipy is ok.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4656/reactions",
        "total_count": 2,
        "+1": 2,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    802400938 MDExOlB1bGxSZXF1ZXN0NTY4NTUwNDEx 4865 fix da.pad example for numpy 1.20 mathause 10194086 closed 0     4 2021-02-05T19:00:04Z 2021-10-18T14:06:33Z 2021-02-07T21:57:34Z MEMBER   0 pydata/xarray/pulls/4865
    • [x] Closes #4858
    • [x] Passes pre-commit run --all-files
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4865/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    794344392 MDExOlB1bGxSZXF1ZXN0NTYxODc2OTg5 4845 iris update doc url mathause 10194086 closed 0     1 2021-01-26T15:51:18Z 2021-10-18T14:06:31Z 2021-01-26T17:30:20Z MEMBER   0 pydata/xarray/pulls/4845

    iris moved its documentation to https://scitools-iris.readthedocs.io/en/stable/

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4845/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    738958305 MDExOlB1bGxSZXF1ZXN0NTE3NzA0OTI2 4569 pin h5py to v2.10 mathause 10194086 closed 0     0 2020-11-09T11:46:39Z 2021-10-18T14:06:28Z 2020-11-09T12:52:27Z MEMBER   0 pydata/xarray/pulls/4569

    There is a compatibility issue with h5py v3. Pin h5py to version 2 for the moment. I can open an issue shortly.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4569/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    724975973 MDExOlB1bGxSZXF1ZXN0NTA2Mjc3OTk4 4525 unpin eccodes again mathause 10194086 closed 0     2 2020-10-19T21:07:23Z 2021-10-18T14:06:27Z 2020-10-19T22:21:13Z MEMBER   0 pydata/xarray/pulls/4525
    • [x] Closes #4521
    • [x] Passes isort . && black . && mypy . && flake8

    That was fast - eccodes already fixed the issue.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4525/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    684430261 MDExOlB1bGxSZXF1ZXN0NDcyMzE4MzUw 4371 mention all ignored flake8 errors mathause 10194086 closed 0     1 2020-08-24T07:17:03Z 2021-10-18T14:06:18Z 2020-08-24T10:45:05Z MEMBER   0 pydata/xarray/pulls/4371

    and put the comment on the same line

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4371/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    577830239 MDExOlB1bGxSZXF1ZXN0Mzg1NTIyOTEy 3849 update installation instruction mathause 10194086 closed 0     6 2020-03-09T11:14:13Z 2021-10-18T14:06:16Z 2020-03-09T14:07:03Z MEMBER   0 pydata/xarray/pulls/3849
    • [x] Closes #3756
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3849/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    572269093 MDExOlB1bGxSZXF1ZXN0MzgxMDAyMTU2 3805 un-xfail tests that append to netCDF files with scipy mathause 10194086 closed 0     3 2020-02-27T18:23:56Z 2021-10-18T14:06:14Z 2020-03-09T07:18:07Z MEMBER   0 pydata/xarray/pulls/3805
    • [x] Closes #2019
    • [ ] Tests added
    • [x] Passes isort -rc . && black . && mypy . && flake8
    • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
    • [x] reverts #2021

    Let's see if this passes....

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3805/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    539059754 MDExOlB1bGxSZXF1ZXN0MzU0MDk5Mzkz 3635 Fix/quantile wrong errmsg mathause 10194086 closed 0     2 2019-12-17T13:16:40Z 2021-10-18T14:06:13Z 2019-12-17T13:50:06Z MEMBER   0 pydata/xarray/pulls/3635
    • [x] Closes #3634
    • [x] Tests added
    • [x] Passes black . && mypy . && flake8
    • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

    np.nanquantile was added in numpy 1.15.0, the current minimum requirement for numpy is 1.14.0, therefore we have to test this ourselves.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3635/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    928539812 MDExOlB1bGxSZXF1ZXN0Njc2NTI5NjQ4 5522 typing for numpy 1.21 mathause 10194086 closed 0     2 2021-06-23T18:40:28Z 2021-10-18T14:05:47Z 2021-06-24T08:58:07Z MEMBER   0 pydata/xarray/pulls/5522
    • [x] Closes #5517
    • [x] Passes pre-commit run --all-files

    The minimal typing for numpy 1.21. As always I am by no means a typing specialist.

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5522/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    307783090 MDU6SXNzdWUzMDc3ODMwOTA= 2007 rolling: allow control over padding mathause 10194086 open 0     20 2018-03-22T19:27:07Z 2021-07-14T19:10:47Z   MEMBER      

    Code Sample, a copy-pastable example if possible

    ```python
    import numpy as np
    import xarray as xr

    x = np.arange(1, 366)
    y = np.random.randn(365)
    ds = xr.DataArray(y, dims=dict(dayofyear=x))

    ds.rolling(center=True, dayofyear=31).mean()
    ```

    Problem description

    rolling cannot directly handle periodic boundary conditions (lon, dayofyear, ...), but this could be very helpful, e.g. to calculate climate indices. Also, I cannot really think of an easy way to append the first elements to the end of the dataset and then calculate rolling.

    Is there a way to do this? Should xarray support this feature?

    This might also belong to SO...
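A numpy-only sketch of the wrap-around padding asked for above — `np.pad(mode="wrap")` implements periodic boundaries, and the window size 31 is taken from the example (this is a workaround sketch, not xarray API):

```python
import numpy as np

y = np.random.randn(365)
window = 31
half = window // 2

# periodically extend both ends, then take a "valid" moving average
padded = np.pad(y, half, mode="wrap")
smoothed = np.convolve(padded, np.ones(window) / window, mode="valid")

assert smoothed.shape == y.shape
# the first element averages the window wrapped around day 1
assert np.allclose(smoothed[0], np.concatenate([y[-half:], y[: half + 1]]).mean())
```

The same idea translates to xarray by concatenating slices from both ends along the dimension before calling `rolling`, then trimming the padding afterwards.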

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/2007/reactions",
        "total_count": 3,
        "+1": 3,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    788534915 MDU6SXNzdWU3ODg1MzQ5MTU= 4824 combine_by_coords can succed when it shouldn't mathause 10194086 open 0     15 2021-01-18T20:39:29Z 2021-07-08T17:44:38Z   MEMBER      

    What happened:

    combine_by_coords can succeed when it should not - depending on the name of the dimensions (which determines the order of operations in combine_by_coords).

    What you expected to happen:

    • I think it should throw an error in both cases.

    Minimal Complete Verifiable Example:

    ```python
    import numpy as np
    import xarray as xr

    data = np.arange(5).reshape(1, 5)
    x = np.arange(5)
    x_name = "lat"

    da0 = xr.DataArray(data, dims=("t", x_name), coords={"t": [1], x_name: x}).to_dataset(name="a")
    x = x + 1e-6
    da1 = xr.DataArray(data, dims=("t", x_name), coords={"t": [2], x_name: x}).to_dataset(name="a")
    ds = xr.combine_by_coords((da0, da1))

    ds
    ```

    returns:

    ```
    <xarray.Dataset>
    Dimensions:  (lat: 10, t: 2)
    Coordinates:
      * lat      (lat) float64 0.0 1e-06 1.0 1.0 2.0 2.0 3.0 3.0 4.0 4.0
      * t        (t) int64 1 2
    Data variables:
        a        (t, lat) float64 0.0 nan 1.0 nan 2.0 nan ... 2.0 nan 3.0 nan 4.0
    ```

    Thus lat is interlaced - I don't think combine_by_coords should do this. If you set

    ```python
    x_name = "x"
    ```

    and run the example again, it returns:

    ```python-traceback
    ValueError: Resulting object does not have monotonic global indexes along dimension x
    ```
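One way to see why the first case slips through: the interleaved coordinate happens to be monotonic, so the post-combine global-index check passes. Below is a simplified re-implementation of that check (hypothetical helper, not the actual xarray code):

```python
import numpy as np


def has_monotonic_global_index(index):
    """Simplified sketch of the monotonicity check combine_by_coords performs."""
    diffs = np.diff(index)
    return bool((diffs >= 0).all() or (diffs <= 0).all())


# the interleaved "lat" coordinate from the repr above is strictly increasing
interleaved = np.array([0.0, 1e-6, 1.0, 1.0 + 1e-6, 2.0, 2.0 + 1e-6])
assert has_monotonic_global_index(interleaved)  # passes -> no ValueError raised
```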

    Anything else we need to know?:

    • this is vaguely related to #4077 but I think it is separate
    • combine_by_coords concatenates over all dimensions where the coords are different - therefore compat="override" doesn't actually do anything? Or does it?

    https://github.com/pydata/xarray/blob/ba42c08af9afbd9e79d47bda404bf4a92a7314a0/xarray/core/combine.py#L69

    cc @dcherian @TomNicholas

    Environment:

    Output of <tt>xr.show_versions()</tt>
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4824/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    773750763 MDU6SXNzdWU3NzM3NTA3NjM= 4727 xr.testing.assert_equal does not test for dtype mathause 10194086 open 0     5 2020-12-23T13:14:41Z 2021-07-04T04:08:51Z   MEMBER      

    In #4622 @toddrjen points out that xr.testing.assert_equal does not test for the dtype, only for the value. Therefore the following does not raise an error:

    ```python
    import numpy as np
    import xarray as xr
    import pandas as pd

    xr.testing.assert_equal(
        xr.DataArray(np.array(1, dtype=int)), xr.DataArray(np.array(1, dtype=float))
    )
    xr.testing.assert_equal(
        xr.DataArray(np.array(1, dtype=int)), xr.DataArray(np.array(1, dtype=object))
    )
    xr.testing.assert_equal(
        xr.DataArray(np.array("a", dtype=str)), xr.DataArray(np.array("a", dtype=object))
    )
    ```

    This comes back to numpy, i.e. the following is True:

    ```python
    np.array(1, dtype=int) == np.array(1, dtype=float)
    ```

    Depending on the situation, one or the other is desirable. Thus, I would suggest adding a check_dtype argument to xr.testing.assert_equal and also to DataArray.equals (and to Dataset, Variable, and identical). I have not seen such an option in numpy, but pandas has one (e.g. pd.testing.assert_series_equal(left, right, check_dtype=True, ...)). I would not change __eq__.

    • Thoughts?
    • What should the default be? We could try True first and see how many failures this creates?
    • What to do with coords and indexes? pd.testing.assert_series_equal has a check_index_type keyword. Probably we need check_coords_type as well? This makes the whole thing much more complicated... Also #4543
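A minimal sketch of what such a `check_dtype=True` comparison could do (hypothetical helper, not xarray API — the real implementation would live inside the `equals`/`assert_*` machinery):

```python
import numpy as np


def assert_equal_with_dtype(a, b):
    """Hypothetical: value equality plus an explicit dtype check."""
    if a.dtype != b.dtype:
        raise AssertionError(f"dtype mismatch: {a.dtype} vs {b.dtype}")
    np.testing.assert_array_equal(a, b)


a = np.array(1, dtype=int)
b = np.array(1, dtype=float)

assert a == b  # plain comparison ignores the dtype ...
try:
    assert_equal_with_dtype(a, b)  # ... while the dtype-aware check raises
except AssertionError as err:
    print(err)
```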
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/4727/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    559217441 MDU6SXNzdWU1NTkyMTc0NDE= 3744 Contour with vmin/ vmax differs from matplotlib mathause 10194086 open 0     0 2020-02-03T17:11:24Z 2021-07-04T02:03:02Z   MEMBER      

    MCVE Code Sample

    ```python
    import numpy as np
    import xarray as xr
    import matplotlib as mpl
    import matplotlib.pyplot as plt

    data = xr.DataArray(np.arange(24).reshape(4, 6))

    data.plot.contour(vmax=10, add_colorbar=True)
    ```

    Expected Output

    ```python
    h = plt.contour(data.values, vmax=10)
    plt.colorbar(h)
    ```

    Problem Description

    A contour(vmax=vmax) plot differs between xarray and matplotlib. I think the problem is here:

    https://github.com/pydata/xarray/blob/95e4f6c7a636878c94b892ee8d49866823d0748f/xarray/plot/utils.py#L265

    xarray calculates the levels from vmax while matplotlib (probably) calculates the levels from data.max() and uses vmax only for the norm. For contourf and pcolormesh this is not so relevant as the capped values are then drawn with the over color. However, there may also be a good reason for this behavior.
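The suspected difference, in a simplified form — `np.linspace` stands in for the "nice" tick locators both libraries actually use, and the numbers come from the example above (`np.arange(24)`, `vmax=10`):

```python
import numpy as np

data = np.arange(24.0)  # the example data, 0 ... 23
vmax = 10.0

# xarray: levels derived from (data.min(), vmax), so they stop at vmax
levels_from_vmax = np.linspace(data.min(), vmax, 6)

# matplotlib: levels derived from the full data range; vmax only affects the norm
levels_from_data = np.linspace(data.min(), data.max(), 6)
```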

    Output of xr.show_versions()

    INSTALLED VERSIONS ------------------ commit: 4c96d53e6caa78d56b785f4edee49bbd4037a82f python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:33:48) [GCC 7.3.0] python-bits: 64 OS: Linux OS-release: 4.12.14-lp151.28.36-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_US.UTF-8 libhdf5: 1.10.5 libnetcdf: 4.6.2 xarray: 999 (master) pandas: 0.25.3 numpy: 1.17.3 scipy: 1.4.1 netCDF4: 1.5.1.2 pydap: installed h5netcdf: 0.7.4 h5py: 2.10.0 Nio: 1.5.5 zarr: 2.4.0 cftime: 1.0.4.2 nc_time_axis: 1.2.0 PseudoNetCDF: installed rasterio: 1.1.0 cfgrib: 0.9.7.6 iris: 2.2.0 bottleneck: 1.3.1 dask: 2.9.2 distributed: 2.9.2 matplotlib: 3.1.2 cartopy: 0.17.0 seaborn: 0.9.0 numbagg: installed setuptools: 45.0.0.post20200113 pip: 19.3.1 conda: None pytest: 5.3.3 IPython: 7.11.1 sphinx: None
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3744/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    587048587 MDU6SXNzdWU1ODcwNDg1ODc= 3883 weighted operations: performance optimisations mathause 10194086 open 0     3 2020-03-24T15:31:54Z 2021-07-04T02:01:28Z   MEMBER      

    There was a discussion on the performance of the weighted mean/ sum in terms of memory footprint but also speed, and there may indeed be some things that can be optimized. See the posts at the end of the PR. However, the optimal implementation will probably depend on the use case and some profiling will be required.

    I'll just open an issue to keep track of this. @seth-p

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/3883/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 issue
    235542564 MDExOlB1bGxSZXF1ZXN0MTI1MzU1MTI5 1451 inconsistent time.units fmt in encode_cf_datetime mathause 10194086 closed 0     7 2017-06-13T12:49:31Z 2021-06-24T08:45:18Z 2021-06-23T16:14:27Z MEMBER   0 pydata/xarray/pulls/1451
    • do not change user-specified units
    • always format inferred units as 'YYYY-mmmm-ddTHH:MM:SS'

    This is my naïve approach.

    • [ ] Closes #1449
    • [ ] Tests added / passed
    • [ ] Passes git diff upstream/master | flake8 --diff
    • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/1451/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    913958248 MDExOlB1bGxSZXF1ZXN0NjYzOTE2NDQw 5451 Silence some test warnings mathause 10194086 closed 0     1 2021-06-07T21:12:50Z 2021-06-09T17:55:48Z 2021-06-09T17:27:21Z MEMBER   0 pydata/xarray/pulls/5451

    Silences a number of warnings that accumulated in our test suite (cf. #3266). The changes are mostly unrelated to each other but small.
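    The usual pattern for silencing a known, expected warning in a test looks like this (a generic sketch, not the exact code from this PR; `noisy` is a placeholder for the warning-emitting call):

    ```python
    import warnings

    def noisy():
        # stand-in for a call that emits an expected deprecation warning
        warnings.warn("old API", DeprecationWarning)
        return 42

    # Suppress only the expected warning category, only within this block.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        result = noisy()  # no warning reaches the test output
    ```

    Scoping the filter to a `with` block keeps unrelated warnings visible, which is why it is preferred over a global `filterwarnings` entry.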

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5451/reactions",
        "total_count": 0,
        "+1": 0,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull
    913916040 MDExOlB1bGxSZXF1ZXN0NjYzODgwMjI1 5450 plt.gca() no longer accepts kwargs mathause 10194086 closed 0     0 2021-06-07T20:10:57Z 2021-06-09T17:27:02Z 2021-06-09T17:26:58Z MEMBER   0 pydata/xarray/pulls/5450

    matplotlib warns: Calling gca() with keyword arguments was deprecated in Matplotlib 3.4. Starting two minor releases later, gca() will take no keyword arguments. The gca() function should only be used to get the current axes, or if no axes exist, create new axes with default keyword arguments. To create a new axes with non-default arguments, use plt.axes() or plt.subplot().

    This PR only uses plt.gca() if there are active axes; otherwise it calls plt.axes(**kwargs). Note that this can silently ignore some arguments; however, this was already the case before.
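    The fallback logic can be sketched like this (`get_axis` is a hypothetical helper name; the actual xarray helper may differ):

    ```python
    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend for the example
    import matplotlib.pyplot as plt

    def get_axis(**kwargs):
        # Only call plt.gca() when a figure with axes already exists;
        # otherwise create new axes, forwarding kwargs to plt.axes().
        if plt.get_fignums() and plt.gcf().axes:
            return plt.gca()  # kwargs silently ignored, as before
        return plt.axes(**kwargs)

    ax = get_axis()
    ```

    Checking `plt.get_fignums()` first avoids the side effect of `plt.gcf()` creating an empty figure just to inspect it.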

    {
        "url": "https://api.github.com/repos/pydata/xarray/issues/5450/reactions",
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }
        xarray 13221727 pull


    CREATE TABLE [issues] (
       [id] INTEGER PRIMARY KEY,
       [node_id] TEXT,
       [number] INTEGER,
       [title] TEXT,
       [user] INTEGER REFERENCES [users]([id]),
       [state] TEXT,
       [locked] INTEGER,
       [assignee] INTEGER REFERENCES [users]([id]),
       [milestone] INTEGER REFERENCES [milestones]([id]),
       [comments] INTEGER,
       [created_at] TEXT,
       [updated_at] TEXT,
       [closed_at] TEXT,
       [author_association] TEXT,
       [active_lock_reason] TEXT,
       [draft] INTEGER,
       [pull_request] TEXT,
       [body] TEXT,
       [reactions] TEXT,
       [performed_via_github_app] TEXT,
       [state_reason] TEXT,
       [repo] INTEGER REFERENCES [repos]([id]),
       [type] TEXT
    );
    CREATE INDEX [idx_issues_repo]
        ON [issues] ([repo]);
    CREATE INDEX [idx_issues_milestone]
        ON [issues] ([milestone]);
    CREATE INDEX [idx_issues_assignee]
        ON [issues] ([assignee]);
    CREATE INDEX [idx_issues_user]
        ON [issues] ([user]);
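    The filter behind this page (rows where user = 10194086, sorted by updated_at descending) can be exercised directly with the stdlib `sqlite3` module, using a trimmed-down subset of the schema above and a few rows taken from this page (only the columns needed here are kept):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE issues (id INTEGER PRIMARY KEY, title TEXT, "
        "user INTEGER, updated_at TEXT, type TEXT)"
    )
    conn.executemany(
        "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
        [
            (8803, "missing chunkmanager: update error message",
             10194086, "2024-03-15T11:02:45Z", "pull"),
            (8756, "suppress base & loffset deprecation warnings",
             10194086, "2024-02-16T09:44:32Z", "pull"),
            (9999, "unrelated issue by someone else",
             42, "2024-04-01T00:00:00Z", "issue"),
        ],
    )
    # the same shape of query Datasette runs for this view
    rows = conn.execute(
        "SELECT id, title FROM issues WHERE user = ? "
        "ORDER BY updated_at DESC",
        (10194086,),
    ).fetchall()
    # rows -> [(8803, ...), (8756, ...)]
    ```

    The `idx_issues_user` index defined above is what makes this `WHERE user = ?` filter cheap on the full table.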
    Powered by Datasette · Queries took 47.427ms · About: xarray-datasette