issues

134 rows where state = "closed" and user = 35968931 sorted by updated_at descending


Facets: type (pull 99, issue 35) · state (closed 134) · repo (xarray 134)
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2019566184 I_kwDOAMm_X854YCJo 8494 Filter expected warnings in the test suite TomNicholas 35968931 closed 0     1 2023-11-30T21:50:15Z 2024-04-29T16:57:07Z 2024-04-29T16:56:16Z MEMBER      

FWIW one thing I'd be keen to do generally — though maybe this isn't the place to start it — is handle warnings in the test suite when we add a new warning, i.e. filter them out where we expect them.

In this case, that would be loading the netCDF files that have duplicate dims.

Otherwise warnings become a huge block of text without much salience. I mostly see the 350 lines of them and think "meh mostly units & cftime", but then something breaks on a new upstream release that was buried in there, or we have a supported code path that is raising warnings internally.

(I'm not sure whether it's possible to generally enforce that — maybe we could raise on any warnings coming from within xarray? Would be a non-trivial project to get us there though...)

Originally posted by @max-sixty in https://github.com/pydata/xarray/issues/8491#issuecomment-1834615826
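The policy floated in the quote above can be approximated with the stdlib warnings machinery: escalate every warning to an error so nothing new slips through, then explicitly ignore the warnings we expect. A standalone sketch (the helper names and filter pattern are illustrative, not xarray's actual warning text):

```python
import warnings

def run_strict(func):
    """Run func with all warnings escalated to errors, except expected ones."""
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # any unfiltered warning raises
        # explicitly allow the warning we expect (pattern is illustrative)
        warnings.filterwarnings(
            "ignore", message="Duplicate dimension", category=UserWarning
        )
        return func()

def load_with_expected_warning():
    # stands in for loading a netCDF file that has duplicate dims
    warnings.warn("Duplicate dimension names found", UserWarning)
    return "dataset"

print(run_strict(load_with_expected_warning))  # expected warning is filtered
```

Any warning not covered by an explicit `filterwarnings` entry would raise, which is what makes a new upstream warning visible instead of buried.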

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8494/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2021386895 PR_kwDOAMm_X85g7QZD 8500 Deprecate ds.dims returning dict TomNicholas 35968931 closed 0     1 2023-12-01T18:29:28Z 2024-04-28T20:04:00Z 2023-12-06T17:52:24Z MEMBER   0 pydata/xarray/pulls/8500
  • [x] Closes first step of #8496, would require another PR later to actually change the return type. Also really resolves the second half of #921.
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8500/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2224036575 I_kwDOAMm_X86EkBrf 8905 Variable doesn't have an .expand_dims method TomNicholas 35968931 closed 0     4 2024-04-03T22:19:10Z 2024-04-28T19:54:08Z 2024-04-28T19:54:08Z MEMBER      

Is your feature request related to a problem?

DataArray and Dataset have an .expand_dims method, but it looks like Variable doesn't.

Describe the solution you'd like

Variable should also have this method, the only difference being that it wouldn't create any coordinates or indexes.

Describe alternatives you've considered

No response

Additional context

No response
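The requested semantics are simple to state: new dimensions of size 1 are prepended to the variable's (dims, shape) pair, and, unlike DataArray or Dataset, no coordinates or indexes are involved because Variable has neither. A minimal sketch of that metadata transformation (names are illustrative, not xarray's implementation):

```python
# Sketch of Variable.expand_dims semantics on the (dims, shape) metadata:
# each new dimension becomes a leading axis of size 1.
def expand_dims_metadata(dims, shape, new_dims):
    return tuple(new_dims) + tuple(dims), (1,) * len(new_dims) + tuple(shape)

print(expand_dims_metadata(("x", "y"), (2, 3), ["time"]))
# → (('time', 'x', 'y'), (1, 2, 3))
```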

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8905/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2254350395 PR_kwDOAMm_X85tPTua 8960 Option to not auto-create index during expand_dims TomNicholas 35968931 closed 0     2 2024-04-20T03:27:23Z 2024-04-27T16:48:30Z 2024-04-27T16:48:24Z MEMBER   0 pydata/xarray/pulls/8960
  • [x] Solves part of #8871 by pulling out part of https://github.com/pydata/xarray/pull/8872#issuecomment-2027571714
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~

TODO:

  • [x] Add new kwarg to DataArray.expand_dims
  • [ ] Add examples to docstrings?
  • [x] Check it actually solves the problem in #8872

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8960/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2100707586 PR_kwDOAMm_X85lFQn3 8669 Fix automatic broadcasting when wrapping array api class TomNicholas 35968931 closed 0     0 2024-01-25T16:05:19Z 2024-04-20T05:58:05Z 2024-01-26T16:41:30Z MEMBER   0 pydata/xarray/pulls/8669
  • [x] Closes #8665
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8669/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2240895281 PR_kwDOAMm_X85siDno 8934 Correct save_mfdataset docstring TomNicholas 35968931 closed 0     0 2024-04-12T20:51:35Z 2024-04-14T19:58:46Z 2024-04-14T11:14:42Z MEMBER   0 pydata/xarray/pulls/8934

Noticed the `**kwargs` part of the docstring was mangled - see here

  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [ ] ~~User visible changes (including notable bug fixes) are documented in whats-new.rst~~
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8934/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2198196326 I_kwDOAMm_X86DBdBm 8860 Ugly error in constructor when no data passed TomNicholas 35968931 closed 0     2 2024-03-20T17:55:52Z 2024-04-10T22:46:55Z 2024-04-10T22:46:54Z MEMBER      

What happened?

Passing no data to the Dataset constructor can result in a very unhelpful "tuple index out of range" error when this is a clear case of malformed input that we should be able to catch.

What did you expect to happen?

An error more like "tuple must be of form (dims, data[, attrs])"
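A sketch of the kind of early length check that would produce the clearer error suggested above (standalone and illustrative; not xarray's actual as_variable logic):

```python
def validate_variable_tuple(name, obj):
    """Raise a helpful error for malformed (dims, data[, attrs]) tuples."""
    if isinstance(obj, tuple) and len(obj) < 2:
        raise TypeError(
            f"Variable {name!r}: tuple must be of form (dims, data[, attrs]), "
            f"got a tuple of length {len(obj)}"
        )
    return obj

validate_variable_tuple("t", ("x", [1, 2, 3]))  # well-formed: passes through
# validate_variable_tuple("t", ())  # would raise the clearer TypeError
```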

Minimal Complete Verifiable Example

```Python
xr.Dataset({"t": ()})
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

```Python
IndexError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 xr.Dataset({"t": ()})

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:693, in Dataset.__init__(self, data_vars, coords, attrs)
    690 if isinstance(coords, Dataset):
    691     coords = coords._variables
--> 693 variables, coord_names, dims, indexes, _ = merge_data_and_coords(
    694     data_vars, coords
    695 )
    697 self._attrs = dict(attrs) if attrs else None
    698 self._close = None

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:422, in merge_data_and_coords(data_vars, coords)
    418 coords = create_coords_with_default_indexes(coords, data_vars)
    420 # exclude coords from alignment (all variables in a Coordinates object should
    421 # already be aligned together) and use coordinates' indexes to align data_vars
--> 422 return merge_core(
    423     [data_vars, coords],
    424     compat="broadcast_equals",
    425     join="outer",
    426     explicit_coords=tuple(coords),
    427     indexes=coords.xindexes,
    428     priority_arg=1,
    429     skip_align_args=[1],
    430 )

File ~/Documents/Work/Code/xarray/xarray/core/merge.py:718, in merge_core(objects, compat, join, combine_attrs, priority_arg, explicit_coords, indexes, fill_value, skip_align_args)
    715 for pos, obj in skip_align_objs:
    716     aligned.insert(pos, obj)
--> 718 collected = collect_variables_and_indexes(aligned, indexes=indexes)
    719 prioritized = _get_priority_vars_and_indexes(aligned, priority_arg, compat=compat)
    720 variables, out_indexes = merge_collected(
    721     collected, prioritized, compat=compat, combine_attrs=combine_attrs
    722 )

File ~/Documents/Work/Code/xarray/xarray/core/merge.py:358, in collect_variables_and_indexes(list_of_mappings, indexes)
    355     indexes_.pop(name, None)
    356     append_all(coords_, indexes_)
--> 358 variable = as_variable(variable, name=name, auto_convert=False)
    359 if name in indexes:
    360     append(name, variable, indexes[name])

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:126, in as_variable(obj, name, auto_convert)
    124     obj = obj.copy(deep=False)
    125 elif isinstance(obj, tuple):
--> 126     if isinstance(obj[1], DataArray):
    127         raise TypeError(
    128             f"Variable {name!r}: Using a DataArray object to construct a variable is"
    129             " ambiguous, please extract the data using the .data property."
    130         )
    131     try:

IndexError: tuple index out of range
```

Anything else we need to know?

No response

Environment

Xarray main

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8860/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2057651682 PR_kwDOAMm_X85i2Byx 8573 ddof vs correction kwargs in std/var TomNicholas 35968931 closed 0     0 2023-12-27T18:10:52Z 2024-04-04T16:46:55Z 2024-04-04T16:46:55Z MEMBER   0 pydata/xarray/pulls/8573
  • [x] Attempts to close the issue described in https://github.com/pydata/xarray/issues/8566#issuecomment-1870472827
  • [x] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8573/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2218574880 PR_kwDOAMm_X85rVXJC 8899 New empty whatsnew entry TomNicholas 35968931 closed 0     0 2024-04-01T16:04:27Z 2024-04-01T17:49:09Z 2024-04-01T17:49:06Z MEMBER   0 pydata/xarray/pulls/8899

Should have been done as part of the last release https://github.com/pydata/xarray/releases/tag/v2024.03.0

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8899/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2213406564 PR_kwDOAMm_X85rEF-X 8886 Allow multidimensional variable with same name as dim when constructing dataset via coords TomNicholas 35968931 closed 0     2 2024-03-28T14:37:27Z 2024-03-28T17:07:10Z 2024-03-28T16:28:09Z MEMBER   0 pydata/xarray/pulls/8886

Supersedes #8884 as a way to close #8883, in light of my having learnt that this is now allowed! https://github.com/pydata/xarray/issues/8883#issuecomment-2024645815. So this is really a follow-up to #7989.

  • [x] Closes #8883
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8886/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2212186122 I_kwDOAMm_X86D20gK 8883 Coordinates object permits invalid state TomNicholas 35968931 closed 0     2 2024-03-28T01:49:21Z 2024-03-28T16:28:11Z 2024-03-28T16:28:11Z MEMBER      

What happened?

It is currently possible to create a Coordinates object where a variable shares a name with a dimension, but the variable is not 1D. This is explicitly forbidden by the xarray data model.

What did you expect to happen?

If you try to pass the resulting object into the Dataset constructor you get the expected error telling you that this is forbidden, but that error should have been raised by Coordinates.__init__.
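A sketch of the check the issue is asking Coordinates.__init__ to perform, mirroring the condition assert_valid_explicit_coords applies later (standalone, with illustrative names; not xarray's actual code):

```python
def check_dim_coord(name: str, var_dims: tuple) -> None:
    """A coordinate sharing its name with a dimension must be 1D along it."""
    if name in var_dims and var_dims != (name,):
        raise ValueError(
            f"coordinate {name} shares a name with a dimension, but is "
            "not a 1D variable along that dimension"
        )

check_dim_coord("x", ("x",))        # OK: 1D along its own dimension
check_dim_coord("lat", ("x", "y"))  # OK: name doesn't collide with a dim
# check_dim_coord("x", ("x", "y"))  # would raise, as Coordinates should
```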

Minimal Complete Verifiable Example

```Python
In [1]: from xarray.core.coordinates import Coordinates

In [2]: from xarray.core.variable import Variable

In [4]: import numpy as np

In [5]: var = Variable(data=np.arange(6).reshape(2, 3), dims=['x', 'y'])

In [6]: var
Out[6]:
<xarray.Variable (x: 2, y: 3)> Size: 48B
array([[0, 1, 2],
       [3, 4, 5]])

In [7]: coords = Coordinates(coords={'x': var}, indexes={})

In [8]: coords
Out[8]:
Coordinates:
    x        (x, y) int64 48B 0 1 2 3 4 5

In [10]: import xarray as xr

In [11]: ds = xr.Dataset(coords=coords)

MergeError                                Traceback (most recent call last)
Cell In[11], line 1
----> 1 ds = xr.Dataset(coords=coords)

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:693, in Dataset.__init__(self, data_vars, coords, attrs)
    690 if isinstance(coords, Dataset):
    691     coords = coords._variables
--> 693 variables, coord_names, dims, indexes, _ = merge_data_and_coords(
    694     data_vars, coords
    695 )
    697 self._attrs = dict(attrs) if attrs else None
    698 self._close = None

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:422, in merge_data_and_coords(data_vars, coords)
    418 coords = create_coords_with_default_indexes(coords, data_vars)
    420 # exclude coords from alignment (all variables in a Coordinates object should
    421 # already be aligned together) and use coordinates' indexes to align data_vars
--> 422 return merge_core(
    423     [data_vars, coords],
    424     compat="broadcast_equals",
    425     join="outer",
    426     explicit_coords=tuple(coords),
    427     indexes=coords.xindexes,
    428     priority_arg=1,
    429     skip_align_args=[1],
    430 )

File ~/Documents/Work/Code/xarray/xarray/core/merge.py:731, in merge_core(objects, compat, join, combine_attrs, priority_arg, explicit_coords, indexes, fill_value, skip_align_args)
    729 coord_names.intersection_update(variables)
    730 if explicit_coords is not None:
--> 731     assert_valid_explicit_coords(variables, dims, explicit_coords)
    732 coord_names.update(explicit_coords)
    733 for dim, size in dims.items():

File ~/Documents/Work/Code/xarray/xarray/core/merge.py:577, in assert_valid_explicit_coords(variables, dims, explicit_coords)
    575 for coord_name in explicit_coords:
    576     if coord_name in dims and variables[coord_name].dims != (coord_name,):
--> 577         raise MergeError(
    578             f"coordinate {coord_name} shares a name with a dataset dimension, but is "
    579             "not a 1D variable along that dimension. This is disallowed "
    580             "by the xarray data model."
    581         )

MergeError: coordinate x shares a name with a dataset dimension, but is not a 1D variable along that dimension. This is disallowed by the xarray data model.
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

I noticed this whilst working on #8872

Environment

main

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8883/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2212211084 PR_kwDOAMm_X85rABMo 8884 Forbid invalid Coordinates object TomNicholas 35968931 closed 0     2 2024-03-28T02:14:01Z 2024-03-28T14:38:43Z 2024-03-28T14:38:03Z MEMBER   0 pydata/xarray/pulls/8884
  • [x] Closes #8883
  • [x] Tests added
  • [ ] ~~User visible changes (including notable bug fixes) are documented in whats-new.rst~~
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8884/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2119537681 PR_kwDOAMm_X85mE7Im 8711 Opt out of auto creating index variables TomNicholas 35968931 closed 0     11 2024-02-05T22:04:36Z 2024-03-26T13:55:16Z 2024-03-26T13:50:14Z MEMBER   0 pydata/xarray/pulls/8711

Tries fixing #8704 by cherry-picking from #8124 as @benbovy suggested in https://github.com/pydata/xarray/issues/8704#issuecomment-1926868422

  • [x] Closes #8704
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8711/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2117248281 I_kwDOAMm_X85-MqUZ 8704 Currently no way to create a Coordinates object without indexes for 1D variables TomNicholas 35968931 closed 0     4 2024-02-04T18:30:18Z 2024-03-26T13:50:16Z 2024-03-26T13:50:15Z MEMBER      

What happened?

The workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263 does not seem to work on main, meaning that I think there is currently no way to create an xr.Coordinates object without 1D variables being coerced to indexes. This in turn means there is no way to create a Dataset object without its 1D dimension coordinates being coerced to IndexVariables.

What did you expect to happen?

I expected to at least be able to use the workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263, i.e.

```Python
xr.Coordinates({'x': ('x', uarr)}, indexes={})
```

where uarr is an un-indexable array-like.

Minimal Complete Verifiable Example

```Python
class UnindexableArrayAPI:
    ...


class UnindexableArray:
    """
    Presents like an N-dimensional array but doesn't support changes of any
    kind, nor can it be coerced into a np.ndarray or pd.Index.
    """

    _shape: tuple[int, ...]
    _dtype: np.dtype

    def __init__(self, shape: tuple[int, ...], dtype: np.dtype) -> None:
        self._shape = shape
        self._dtype = dtype
        self.__array_namespace__ = UnindexableArrayAPI

    @property
    def dtype(self) -> np.dtype:
        return self._dtype

    @property
    def shape(self) -> tuple[int, ...]:
        return self._shape

    @property
    def ndim(self) -> int:
        return len(self.shape)

    @property
    def size(self) -> int:
        return np.prod(self.shape)

    @property
    def T(self) -> Self:
        raise NotImplementedError()

    def __repr__(self) -> str:
        return f"UnindexableArray(shape={self.shape}, dtype={self.dtype})"

    def _repr_inline_(self, max_width):
        """
        Format to a single line with at most max_width characters. Used by xarray.
        """
        return self.__repr__()

    def __getitem__(self, key, /) -> Self:
        """
        Only supports extremely limited indexing.

        I only added this method because xarray will apparently attempt to
        index into its lazy indexing classes even if the operation would be a
        no-op anyway.
        """
        from xarray.core.indexing import BasicIndexer

        if isinstance(key, BasicIndexer) and key.tuple == ((slice(None),) * self.ndim):
            # no-op
            return self
        else:
            raise NotImplementedError()

    def __array__(self) -> np.ndarray:
        raise NotImplementedError(
            "UnindexableArrays can't be converted into numpy arrays or pandas Index objects"
        )
```

```python
uarr = UnindexableArray(shape=(3,), dtype=np.dtype('int32'))

xr.Variable(data=uarr, dims=['x'])  # works fine

xr.Coordinates({'x': ('x', uarr)}, indexes={})
# works in xarray v2023.08.0 but in versions after that it triggers the
# NotImplementedError in __array__:

NotImplementedError                       Traceback (most recent call last)
Cell In[59], line 1
----> 1 xr.Coordinates({'x': ('x', uarr)}, indexes={})

File ~/Documents/Work/Code/xarray/xarray/core/coordinates.py:301, in Coordinates.__init__(self, coords, indexes)
    299 variables = {}
    300 for name, data in coords.items():
--> 301     var = as_variable(data, name=name)
    302     if var.dims == (name,) and indexes is None:
    303         index, index_vars = create_default_index_implicit(var, list(coords))

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:159, in as_variable(obj, name)
    152     raise TypeError(
    153         f"Variable {name!r}: unable to convert object into a variable without an "
    154         f"explicit list of dimensions: {obj!r}"
    155     )
    157 if name is not None and name in obj.dims and obj.ndim == 1:
    158     # automatically convert the Variable into an Index
--> 159     obj = obj.to_index_variable()
    161 return obj

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:572, in Variable.to_index_variable(self)
    570 def to_index_variable(self) -> IndexVariable:
    571     """Return this variable as an xarray.IndexVariable"""
--> 572     return IndexVariable(
    573         self._dims, self._data, self._attrs, encoding=self._encoding, fastpath=True
    574     )

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2642, in IndexVariable.__init__(self, dims, data, attrs, encoding, fastpath)
    2640 # Unlike in Variable, always eagerly load values into memory
    2641 if not isinstance(self._data, PandasIndexingAdapter):
-> 2642     self._data = PandasIndexingAdapter(self._data)

File ~/Documents/Work/Code/xarray/xarray/core/indexing.py:1481, in PandasIndexingAdapter.__init__(self, array, dtype)
    1478 def __init__(self, array: pd.Index, dtype: DTypeLike = None):
    1479     from xarray.core.indexes import safe_cast_to_index
-> 1481     self.array = safe_cast_to_index(array)
    1483     if dtype is None:
    1484         self._dtype = get_valid_numpy_dtype(array)

File ~/Documents/Work/Code/xarray/xarray/core/indexes.py:469, in safe_cast_to_index(array)
    459     emit_user_level_warning(
    460         (
    461             "pandas.Index does not support the float16 dtype."
    (...)
    465         category=DeprecationWarning,
    466     )
    467     kwargs["dtype"] = "float64"
--> 469 index = pd.Index(np.asarray(array), **kwargs)
    471 return _maybe_cast_to_cftimeindex(index)

Cell In[55], line 63, in UnindexableArray.__array__(self)
     62 def __array__(self) -> np.ndarray:
---> 63     raise NotImplementedError("UnindexableArrays can't be converted into numpy arrays or pandas Index objects")

NotImplementedError: UnindexableArrays can't be converted into numpy arrays or pandas Index objects
```

MVCE confirmation

  • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [x] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [x] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

Context is #8699

Environment

Versions described above

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8704/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1945654275 PR_kwDOAMm_X85c7HL_ 8319 Move parallelcompat and chunkmanagers to NamedArray TomNicholas 35968931 closed 0     9 2023-10-16T16:34:26Z 2024-02-12T22:09:24Z 2024-02-12T22:09:24Z MEMBER   0 pydata/xarray/pulls/8319

@dcherian I got to this point before realizing that simply moving parallelcompat.py over isn't what it says in the design doc, which instead talks about

  • Could this functionality be left in Xarray proper for now? Alternative array types like JAX also have some notion of "chunks" for parallel arrays, but the details differ in a number of ways from Dask/Cubed.
  • Perhaps variable.chunk/load methods should become functions defined in xarray that convert Variable objects. This is easy so long as xarray can reach in and replace .data

I personally think that simply moving parallelcompat makes sense so long as you expect people to use chunked NamedArray objects. I see chunked arrays as special cases of duck arrays, and my understanding is that NamedArray is supposed to have full support for duck arrays.

cc @andersy005

  • [x] As requested in #8238
  • [ ] ~~Tests added~~
  • [ ] ~~User visible changes (including notable bug fixes) are documented in whats-new.rst~~
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8319/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2098882374 I_kwDOAMm_X859GmdG 8660 dtype encoding ignored during IO? TomNicholas 35968931 closed 0     3 2024-01-24T18:50:47Z 2024-02-05T17:35:03Z 2024-02-05T17:35:02Z MEMBER      

What happened?

When I set the .encoding['dtype'] attribute before saving a dataset to disk, the on-disk representation does record the dtype from the encoding, but when I open it back up in xarray I get the same dtype I had before, not the one specified in the encoding. Is that what's supposed to happen? How does this work? (This happens with both zarr and netCDF.)

What did you expect to happen?

I expected that setting .encoding['dtype'] would mean that once I open the data back up, it would be in the new dtype that I set in the encoding.

Minimal Complete Verifiable Example

```Python
air = xr.tutorial.open_dataset('air_temperature')

air['air'].dtype  # returns dtype('float32')

air['air'].encoding['dtype']  # returns dtype('int16'), which already seems weird

air.to_zarr('air.zarr')  # I would assume here that the encoding actually does something during IO

# now if I check the zarr .zarray metadata for the air variable it says
#     "dtype": "<i2"

air2 = xr.open_dataset('air.zarr', engine='zarr')  # open it back up

air2['air'].dtype  # returns dtype('float32'), but I expected dtype('int16')

# (the same thing happens also with saving to netCDF instead of Zarr)
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

I know I didn't explicitly cast with .astype, but I'm still confused as to what the relation between the dtype and the dtype encoding is supposed to be here.

I am probably just misunderstanding how this is supposed to work, but then this is arguably a docs issue, because here it says "[the encoding dtype field] controls the type of the data written on disk", which I would have thought also affects the data you get back when you open it up again?
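A plausible explanation for the round trip above is CF-style packing: when scale_factor/add_offset are present in .encoding (as they typically are for the tutorial air_temperature data), xarray packs float values into the on-disk integer dtype at write time and unpacks them back to floats at read time. So .dtype is float again after opening, while the store itself holds int16. A pure-Python sketch of that pack/unpack arithmetic (the numbers are illustrative, not the dataset's actual attributes):

```python
SCALE_FACTOR = 0.01
ADD_OFFSET = 273.15

def pack(value: float) -> int:
    # float -> integer written to disk (the encoding dtype)
    return round((value - ADD_OFFSET) / SCALE_FACTOR)

def unpack(stored: int) -> float:
    # integer read from disk -> float seen in memory after decoding
    return stored * SCALE_FACTOR + ADD_OFFSET

stored = pack(289.4)
print(stored, unpack(stored))  # the float value survives the integer round trip
```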

Environment

main branch of xarray

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8660/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2099530269 I_kwDOAMm_X859JEod 8665 Error when broadcasting array API compliant class TomNicholas 35968931 closed 0     1 2024-01-25T04:11:14Z 2024-01-26T16:41:31Z 2024-01-26T16:41:31Z MEMBER      

What happened?

Broadcasting fails for array types that strictly follow the array API standard.

What did you expect to happen?

With a normal numpy array this obviously works fine.

Minimal Complete Verifiable Example

```Python
import numpy.array_api as nxp

arr = nxp.asarray([[1, 2, 3], [4, 5, 6]], dtype=np.dtype('float32'))

var = xr.Variable(data=arr, dims=['x', 'y'])

var.isel(x=0)  # this is fine

var * var.isel(x=0)  # this is not

IndexError                                Traceback (most recent call last)
Cell In[31], line 1
----> 1 var * var.isel(x=0)

File ~/Documents/Work/Code/xarray/xarray/core/_typed_ops.py:487, in VariableOpsMixin.__mul__(self, other)
    486 def __mul__(self, other: VarCompatible) -> Self | T_DataArray:
--> 487     return self._binary_op(other, operator.mul)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2406, in Variable._binary_op(self, other, f, reflexive)
    2404     other_data, self_data, dims = _broadcast_compat_data(other, self)
    2405 else:
-> 2406     self_data, other_data, dims = _broadcast_compat_data(self, other)
    2407 keep_attrs = _get_keep_attrs(default=False)
    2408 attrs = self._attrs if keep_attrs else None

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2922, in _broadcast_compat_data(self, other)
    2919 def _broadcast_compat_data(self, other):
    2920     if all(hasattr(other, attr) for attr in ["dims", "data", "shape", "encoding"]):
    2921         # other satisfies the necessary Variable API for broadcast_variables
-> 2922         new_self, new_other = _broadcast_compat_variables(self, other)
    2923         self_data = new_self.data
    2924         other_data = new_other.data

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in _broadcast_compat_variables(*variables)
    2893 """Create broadcast compatible variables, with the same dimensions.
    2894
    2895 Unlike the result of broadcast_variables(), some variables may have
    2896 dimensions of size 1 instead of the size of the broadcast dimension.
    2897 """
    2898 dims = tuple(_unified_dims(variables))
-> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in <genexpr>(.0)
    2898 dims = tuple(_unified_dims(variables))
-> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables)

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1479, in Variable.set_dims(self, dims, shape)
    1477     expanded_data = duck_array_ops.broadcast_to(self.data, tmp_shape)
    1478 else:
-> 1479     expanded_data = self.data[(None,) * (len(expanded_dims) - self.ndim)]
    1481 expanded_var = Variable(
    1482     expanded_dims, expanded_data, self._attrs, self._encoding, fastpath=True
    1483 )
    1484 return expanded_var.transpose(*dims)

File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:555, in Array.__getitem__(self, key)
    550 """
    551 Performs the operation __getitem__.
    552 """
    553 # Note: Only indices required by the spec are allowed. See the
    554 # docstring of _validate_index
--> 555 self._validate_index(key)
    556 if isinstance(key, Array):
    557     # Indexing self._array with array_api arrays can be erroneous
    558     key = key._array

File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:348, in Array._validate_index(self, key)
    344 elif n_ellipsis == 0:
    345     # Note boolean masks must be the sole index, which we check for
    346     # later on.
    347     if not key_has_mask and n_single_axes < self.ndim:
--> 348         raise IndexError(
    349             f"{self.ndim=}, but the multi-axes index only specifies "
    350             f"{n_single_axes} dimensions. If this was intentional, "
    351             "add a trailing ellipsis (...) which expands into as many "
    352             "slices (:) as necessary - this is what np.ndarray arrays "
    353             "implicitly do, but such flat indexing behaviour is not "
    354             "specified in the Array API."
    355         )
    357 if n_ellipsis == 0:
    358     indexed_shape = self.shape

IndexError: self.ndim=1, but the multi-axes index only specifies 0 dimensions. If this was intentional, add a trailing ellipsis (...) which expands into as many slices (:) as necessary - this is what np.ndarray arrays implicitly do, but such flat indexing behaviour is not specified in the Array API.
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

No response

Environment

main branch of xarray, numpy 1.26.0

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8665/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2099622643 PR_kwDOAMm_X85lBkos 8668 Fix unstack method when wrapping array api class TomNicholas 35968931 closed 0     0 2024-01-25T05:54:38Z 2024-01-26T16:06:04Z 2024-01-26T16:06:01Z MEMBER   0 pydata/xarray/pulls/8668
  • [x] Closes #8666
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8668/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2099550299 I_kwDOAMm_X859JJhb 8666 Error unstacking array API compliant class TomNicholas 35968931 closed 0     0 2024-01-25T04:35:09Z 2024-01-26T16:06:02Z 2024-01-26T16:06:02Z MEMBER      

What happened?

Unstacking fails for array types that strictly follow the array API standard.

What did you expect to happen?

This obviously works fine with a normal numpy array.

Minimal Complete Verifiable Example

```Python
import numpy as np
import xarray as xr
import numpy.array_api as nxp

arr = nxp.asarray([[1, 2, 3], [4, 5, 6]], dtype=np.dtype('float32'))

da = xr.DataArray(
    arr,
    coords=[("x", ["a", "b"]), ("y", [0, 1, 2])],
)
da
stacked = da.stack(z=("x", "y"))
stacked.indexes["z"]
stacked.unstack()
```

```
AttributeError                            Traceback (most recent call last)
Cell In[65], line 8
      6 stacked = da.stack(z=("x", "y"))
      7 stacked.indexes["z"]
----> 8 roundtripped = stacked.unstack()
      9 arr.identical(roundtripped)

File ~/Documents/Work/Code/xarray/xarray/util/deprecation_helpers.py:115, in _deprecate_positional_args.<locals>._decorator.<locals>.inner(*args, **kwargs)
    111 kwargs.update({name: arg for name, arg in zip_args})
    113 return func(*args[:-n_extra_args], **kwargs)
--> 115 return func(*args, **kwargs)

File ~/Documents/Work/Code/xarray/xarray/core/dataarray.py:2913, in DataArray.unstack(self, dim, fill_value, sparse)
   2851 @_deprecate_positional_args("v2023.10.0")
   2852 def unstack(
   2853     self,
   (...)
   2857     sparse: bool = False,
   2858 ) -> Self:
   2859     """
   2860     Unstack existing dimensions corresponding to MultiIndexes into
   2861     multiple new dimensions.
   (...)
   2911     DataArray.stack
   2912     """
-> 2913     ds = self._to_temp_dataset().unstack(dim, fill_value=fill_value, sparse=sparse)
   2914     return self._from_temp_dataset(ds)

File ~/Documents/Work/Code/xarray/xarray/util/deprecation_helpers.py:115, in _deprecate_positional_args.<locals>._decorator.<locals>.inner(*args, **kwargs)
    111 kwargs.update({name: arg for name, arg in zip_args})
    113 return func(*args[:-n_extra_args], **kwargs)
--> 115 return func(*args, **kwargs)

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:5581, in Dataset.unstack(self, dim, fill_value, sparse)
   5579 for d in dims:
   5580     if needs_full_reindex:
-> 5581         result = result._unstack_full_reindex(
   5582             d, stacked_indexes[d], fill_value, sparse
   5583         )
   5584     else:
   5585         result = result._unstack_once(d, stacked_indexes[d], fill_value, sparse)

File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:5474, in Dataset._unstack_full_reindex(self, dim, index_and_vars, fill_value, sparse)
   5472 if name not in index_vars:
   5473     if dim in var.dims:
-> 5474         variables[name] = var.unstack({dim: new_dim_sizes})
   5475     else:
   5476         variables[name] = var

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1684, in Variable.unstack(self, dimensions, **dimensions_kwargs)
   1682 result = self
   1683 for old_dim, dims in dimensions.items():
-> 1684     result = result._unstack_once_full(dims, old_dim)
   1685 return result

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1574, in Variable._unstack_once_full(self, dim, old_dim)
   1571 reordered = self.transpose(*dim_order)
   1573 new_shape = reordered.shape[: len(other_dims)] + new_dim_sizes
-> 1574 new_data = reordered.data.reshape(new_shape)
   1575 new_dims = reordered.dims[: len(other_dims)] + new_dim_names
   1577 return type(self)(
   1578     new_dims, new_data, self._attrs, self._encoding, fastpath=True
   1579 )

AttributeError: 'Array' object has no attribute 'reshape'
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

It fails on the arr.reshape call, because the array API standard defines reshape as a function, not a method.

We do in fact have an array API-compatible version of reshape defined in duck_array_ops.py; it just apparently isn't yet used everywhere we call reshape.

https://github.com/pydata/xarray/blob/037a39e249e5387bc15de447c57bfd559fd5a574/xarray/core/duck_array_ops.py#L363
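For illustration, here is a minimal sketch of the kind of dispatch such a helper performs (the function name `duck_reshape` is illustrative; the actual helper lives in `duck_array_ops.py` and may differ in detail):

```python
import numpy as np

def duck_reshape(array, shape):
    # Sketch only: prefer the .reshape method when present (numpy, dask),
    # otherwise fall back to the array API namespace function, which the
    # standard exposes as xp.reshape(x, shape) rather than as a method.
    if hasattr(array, "reshape"):
        return array.reshape(shape)
    xp = array.__array_namespace__()
    return xp.reshape(array, shape)

duck_reshape(np.arange(6), (2, 3))
```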

Environment

main branch of xarray, numpy 1.26.0

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8666/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2098535717 PR_kwDOAMm_X85k94wv 8655 Small improvement to HOW_TO_RELEASE.md TomNicholas 35968931 closed 0     1 2024-01-24T15:35:16Z 2024-01-24T21:46:02Z 2024-01-24T21:46:01Z MEMBER   0 pydata/xarray/pulls/8655

Clarify step 8. by pointing to where the ReadTheDocs build actually is

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8655/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2092346228 PR_kwDOAMm_X85ko-Y2 8632 Pin sphinx-book-theme to 1.0.1 to try to deal with #8619 TomNicholas 35968931 closed 0     2 2024-01-21T02:18:49Z 2024-01-23T20:16:13Z 2024-01-23T18:28:35Z MEMBER   0 pydata/xarray/pulls/8632
  • [x] Hopefully closes #8619
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8632/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2086704542 PR_kwDOAMm_X85kVyF6 8617 Release summary for release v2024.01.0 TomNicholas 35968931 closed 0     1 2024-01-17T18:02:29Z 2024-01-17T21:23:45Z 2024-01-17T19:21:11Z MEMBER   0 pydata/xarray/pulls/8617

Someone give this a thumbs up if it looks good

  • [x] Closes #8616
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8617/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1519552711 PR_kwDOAMm_X85GqAro 7418 Import datatree in xarray? TomNicholas 35968931 closed 0     18 2023-01-04T20:48:09Z 2023-12-22T17:38:04Z 2023-12-22T17:38:04Z MEMBER   0 pydata/xarray/pulls/7418

I want datatree to live in xarray main, as right now it's in a separate package but imports many xarray internals.

This presents a few questions:

1) At what stage is datatree "ready" to be moved in here? At what stage should it become encouraged public API?
2) What's a good way to slowly roll the feature out?
3) How do I decrease the bus factor on datatree's code? Can I get some code reviews during the merging process? :pray:
4) Should I make a new CI environment just for testing datatree stuff?

Today @jhamman and @keewis suggested for now I make it so that you can from xarray import DataTree, using the current xarray-datatree package as an optional dependency. That way I can create a smoother on-ramp, get some more users testing it, but without committing all the code into this repo yet.

@pydata/xarray what do you think? Any other thoughts about best practices when moving a good few thousand lines of code into xarray?

  • [x] First step towards moving solution of #4118 into this repository
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7418/reactions",
    "total_count": 6,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 2
}
    xarray 13221727 pull
1820788594 PR_kwDOAMm_X85WW40r 8019 Generalize cumulative reduction (scan) to non-dask types TomNicholas 35968931 closed 0     2 2023-07-25T17:22:07Z 2023-12-18T19:30:18Z 2023-12-18T19:30:18Z MEMBER   0 pydata/xarray/pulls/8019
  • [x] Needed for https://github.com/tomwhite/cubed/issues/277#issuecomment-1648567431 - should have been added in #7019
  • [ ] ~~Tests added~~ (would go in cubed-xarray)
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst (new ABC method will be documented on chunked array types page automatically)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8019/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1048697792 PR_kwDOAMm_X84uSksS 5961 [Experimental] Refactor Dataset to store variables in a manifest TomNicholas 35968931 closed 0     7 2021-11-09T14:51:03Z 2023-12-06T17:38:53Z 2023-12-06T17:38:52Z MEMBER   0 pydata/xarray/pulls/5961

This PR is part of an experiment to see how to integrate a DataTree into xarray.

What it does is refactor Dataset to store variables in a DataManifest class, which is also capable of maintaining a ledger of child tree nodes. The point of this is to prevent name collisions between stored variables and child datatree nodes, as first mentioned in https://github.com/TomNicholas/datatree/issues/38 and explained further in https://github.com/TomNicholas/datatree/issues/2.

("Manifest" in the old sense, of a noun meaning "a document giving comprehensive details of a ship and its cargo and other contents")
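A toy sketch of the collision-guard behaviour described above (the class and attribute names here are illustrative, not the actual DataManifest API):

```python
class Manifest(dict):
    """Dict-like variable store that refuses names already taken by child nodes."""

    def __init__(self, children=()):
        super().__init__()
        self.children = set(children)

    def __setitem__(self, key, value):
        # Guard against a variable shadowing a child tree node of the same name
        if key in self.children:
            raise KeyError(f"{key!r} is already the name of a child node")
        super().__setitem__(key, value)

m = Manifest(children={"group1"})
m["temperature"] = 1.0  # fine
try:
    m["group1"] = 2.0   # collides with a child node name
except KeyError:
    pass
```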

  • [x] Would eventually close https://github.com/TomNicholas/datatree/issues/38
  • [ ] Tests added
  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5961/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1084220684 PR_kwDOAMm_X84wDPg5 6086 Type protocol for internal variable mapping TomNicholas 35968931 closed 0     9 2021-12-19T23:32:04Z 2023-12-06T17:20:48Z 2023-12-06T17:19:30Z MEMBER   1 pydata/xarray/pulls/6086

In #5961 and #6083 I've been experimenting extending Dataset to store variables in a custom mapping object (instead of always in a dict), so as to eventually fix this mutability problem with DataTree.

I've been writing out new storage class implementations in those PRs, but on Friday @shoyer suggested that I could instead simply alter the allowed type for ._variables in xarray.Dataset's type hints. That would allow me to mess about with storage class implementations outside of xarray, whilst guaranteeing type compatibility with xarray main itself with absolutely minimal changes (hopefully no runtime changes to Dataset at all!).

The idea is to define a protocol in xarray which specifies the structural subtyping behaviour of any custom variable storage class that I might want to set as Dataset._variables. The type hint for the ._variables attribute then refers to this protocol, and will be satisfied as long as whatever object is set as ._variables has compatibly-typed methods. Adding type hints to the ._construct_direct and ._replace constructors is enough to propagate this new type specification all over the codebase.

In practice this means writing a protocol which describes the type behaviour of all the methods on dict that currently get used by ._variable accesses.
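As a cut-down illustration of the structural-subtyping idea (this is a sketch with an illustrative name, not the actual CopyableMutableMapping definition):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class VariableStorage(Protocol):
    # Sketch: only a handful of the dict methods the real protocol
    # would need to declare.
    def __getitem__(self, key): ...
    def __setitem__(self, key, value): ...
    def __iter__(self): ...
    def __len__(self): ...
    def copy(self): ...

# A plain dict satisfies the protocol purely structurally, no inheritance needed:
assert isinstance({}, VariableStorage)
```

Note that `@runtime_checkable` isinstance checks only verify that the named methods exist; they do not check signatures or overloads, which is why the mypy-level overload mismatches below still need fixing separately.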

So far I've written out a CopyableMutableMapping protocol which defines all the methods needed. The issues I'm stuck on at the moment are:

1) The typing behaviour of overloaded methods, specifically update. (setdefault also has similar problems but I think I can safely omit that from the protocol definition because we don't call ._variables.setdefault() anywhere.) Mypy complains that CopyableMutableMapping is not a compatible type when Dict is specified because the type specification of overloaded methods isn't quite right somehow:

```
xarray/core/computation.py:410: error: Argument 1 to "_construct_direct" of "Dataset" has incompatible type "Dict[Hashable, Variable]"; expected "CopyableMutableMapping[Hashable, Variable]"  [arg-type]
xarray/core/computation.py:410: note: Following member(s) of "Dict[Hashable, Variable]" have conflicts:
xarray/core/computation.py:410: note:     Expected:
xarray/core/computation.py:410: note:         @overload
xarray/core/computation.py:410: note:         def update(self, other: Mapping[Hashable, Variable], **kwargs: Variable) -> None
xarray/core/computation.py:410: note:         @overload
xarray/core/computation.py:410: note:         def update(self, other: Iterable[Tuple[Hashable, Variable]], **kwargs: Variable) -> None
xarray/core/computation.py:410: note:         <1 more overload not shown>
xarray/core/computation.py:410: note:     Got:
xarray/core/computation.py:410: note:         @overload
xarray/core/computation.py:410: note:         def update(self, Mapping[Hashable, Variable], **kwargs: Variable) -> None
xarray/core/computation.py:410: note:         @overload
xarray/core/computation.py:410: note:         def update(self, Iterable[Tuple[Hashable, Variable]], **kwargs: Variable) -> None
```
I don't understand what the inconsistency is because I literally looked up the exact way that [the type stubs](https://github.com/python/typeshed/blob/e6911530d4d52db0fbdf05be3aff89e520ee39bc/stdlib/typing.pyi#L490) for `Dict` were written (via `MutableMapping`).

2) Making functions which expect a Mapping accept my CopyableMutableMapping. I would have thought this would just work because I think my protocol defines all the methods which Mapping has, so CopyableMutableMapping should automatically become a subtype of Mapping. But instead I get errors like this with no further information as to what to do about it.

```
xarray/core/dataset.py:785: error: Argument 1 to "Frozen" has incompatible type "CopyableMutableMapping[Hashable, Variable]"; expected "Mapping[Hashable, Variable]"  [arg-type]
```

3) I'm expecting to get a runtime problem whenever we assert isinstance(ds._variables, dict), which happens in a few places. I'm not sure what the best way to deal with that is, but I'm hoping that simply adding @typing.runtime_checkable to the protocol class definition will be enough?

Once that passes mypy I will write a test that checks that if I define my own custom variable storage class I can _construct_direct a Dataset which uses it without any errors. At that point I can be confident that Dataset is general enough to hold whichever exact variable storage class I end up needing for DataTree.

@max-sixty this is entirely a typing challenge, so I'm tagging you in case you're interested :)

  • [ ] Would supersede #5961 and #6083
  • [ ] Tests added
  • [ ] Passes pre-commit run --all-files

EDIT: Also using Protocol at all is only available in Python 3.8+

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6086/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2027528985 PR_kwDOAMm_X85hQBHP 8525 Remove PR labeler bot TomNicholas 35968931 closed 0     3 2023-12-06T02:31:56Z 2023-12-06T02:45:46Z 2023-12-06T02:45:41Z MEMBER   0 pydata/xarray/pulls/8525

RIP

  • [x] Closes #8524
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8525/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1974681146 PR_kwDOAMm_X85edMm- 8404 Hypothesis strategy for generating Variable objects TomNicholas 35968931 closed 0     6 2023-11-02T17:04:03Z 2023-12-05T22:45:57Z 2023-12-05T22:45:57Z MEMBER   0 pydata/xarray/pulls/8404

Breaks out just the part of #6908 needed for generating arbitrary xarray.Variable objects. (so ignore the ginormous number of commits)

EDIT: Check out this test which performs a mean on any subset of any Variable object!

```python
In [36]: from xarray.testing.strategies import variables

In [37]: variables().example()
<xarray.Variable (ĭ: 3)>
array([-2.22507386e-313-6.62447795e+016j,             nan-6.46207519e+185j,
       -2.22507386e-309+3.33333333e-001j])
```

@andersy005 @maxrjones @jhamman I thought this might be useful for the NamedArray testing. (xref #8370 and #8244)

@keewis and @Zac-HD sorry for letting that PR languish for literally a year :sweat_smile: This PR addresses your feedback about accepting a callable that returns a strategy generating arrays. That suggestion makes some things a bit more complex in user code but actually allows me to simplify the internals of the variables strategy significantly. I'm actually really happy with this PR - I think it solves what we were discussing, and is a sensible checkpoint to merge before going back to making strategies for generating composite objects like DataArrays/Datasets work.

  • [x] Closes part of #6911
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8404/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2017285297 PR_kwDOAMm_X85gtObP 8491 Warn on repeated dimension names during construction TomNicholas 35968931 closed 0     13 2023-11-29T19:30:51Z 2023-12-01T01:37:36Z 2023-12-01T00:40:18Z MEMBER   0 pydata/xarray/pulls/8491
  • [x] Closes #2226 and #1499 by forbidding those situations (but we should leave #3731 open as the "official" place to discuss supporting repeated dimensions)
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8491/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
552500673 MDU6SXNzdWU1NTI1MDA2NzM= 3709 Feature Proposal: `xarray.interactive` module TomNicholas 35968931 closed 0     36 2020-01-20T20:42:22Z 2023-10-27T18:24:49Z 2021-07-29T15:37:21Z MEMBER      

Feature proposal: xarray.interactive module

I've been experimenting with ipython widgets in jupyter notebooks, and I've been working on how we might use them to make xarray more interactive.

Motivation:

Most users exploring their data will find themselves rerunning the same cells repeatedly, each time with slightly different values. In xarray's case that will often be in an .isel() or .sel() call, or when selecting variables from a dataset. IPython widgets allow you to interact with your functions in a very intuitive way, which we could exploit. There are lots of tutorials on how to interact with pandas data (e.g. this great one), but I haven't seen any for interacting with xarray objects.

Relationship to other libraries:

Some downstream plotting libraries (such as @hvplot) already use widgets when interactively plotting xarray-derived data structures, but they don't seem to go the full N dimensions. This also isn't something that should be confined to plotting functions - you often choose slices or variables at the start of analysis, not just at the end. I'll come back to this idea later.

The default ipython widgets are pretty good, but we could write an xarray.interactive module in such a way that downstream developers can easily replace them with their own widgets.

Usage examples:

```python
# imports
import ipywidgets as widgets
import xarray as xr
import xarray.plot as xplot
import xarray.interactive as interactive

# Load tutorial data
da = xr.tutorial.open_dataset('air_temperature')['air']
```

Plotting against multiple dimensions interactively:
```python
interactive.isel(da, xplot.plot, lat=10, lon=50)
```

Interactively select a range from a dimension:
```python
def plot_mean_over_time(da):
    return da.mean(dim="time")

interactive.isel(da, plot_mean_over_time, time=slice(100, 500))
```

Animate over one dimension:
```python
from ipywidgets import Play

interactive.isel(da, xplot.plot, time=Play())
```

API ideas:

We can write a function like this

```python
interactive.isel(da, func=xplot.plot, time=10)
```

which could also be used as a decorator, something like this:
```python
@interactive.isel(da, time=10)
def plot(da):
    return xplot.plot(da)
```

It would be nicer to be able to do this:
```python
@Interactive(da).isel(time=10)
def plot(da):
    return xplot.plot(da)
```
but Guido forbade it.

But we can attach these functions to an accessor to get:
```python
da.interactive.isel(xplot.plot, time=10)
```
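As a widget-free sketch of how one helper could support both the plain-function and decorator forms proposed above (`FakeDataArray` is just a stand-in for testing the dispatch, not real xarray; the widget-building step is elided):

```python
def isel(da, func=None, **indexers):
    # Sketch: apply func to da.isel(**indexers); when func is omitted,
    # behave as a decorator factory instead.
    if func is None:
        return lambda f: isel(da, f, **indexers)
    return func(da.isel(**indexers))

class FakeDataArray:
    """Stand-in recording the indexers passed to .isel()."""
    def isel(self, **indexers):
        self.indexers = indexers
        return self

da = FakeDataArray()
isel(da, lambda x: x, time=10)
# da.indexers is now {"time": 10}
```

The real module would build sliders from `indexers` (e.g. an IntSlider per dimension) before calling `func`; only the call-routing is shown here.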

Other ideas

Select variables from datasets:
```python
@interactive.data_vars(da1=ds['n'], da2=ds['T'], ...)
def correlation(da1, da2, ...):
    ...

# Would produce a dropdown list of variables for each dataset
```

Choose dimensions to apply functions over:
```python
@interactive.dims(dim='time')
def mean(da, dim):
    ...

# Would produce a dropdown list of dimensions in the dataarray
```

General interactive.explore() method to see variation over any number of dimensions, the default being all of them.

What do people think about this? Is it something that makes sense to include within xarray itself? (Dependencies aren't a problem because it's fine to have ipywidgets as an optional dependency just for this module.)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3709/reactions",
    "total_count": 6,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1806973709 PR_kwDOAMm_X85VoNVM 7992 Docs page on interoperability TomNicholas 35968931 closed 0     3 2023-07-17T05:02:29Z 2023-10-26T16:08:56Z 2023-10-26T16:04:33Z MEMBER   0 pydata/xarray/pulls/7992

Builds upon #7991 by adding a page to the internals enumerating all the different ways in which xarray is interoperable.

Would be nice if https://github.com/pydata/xarray/pull/6975 were merged so that I could link to it from this new page.

  • [x] Addresses comment in https://github.com/pydata/xarray/pull/6975#issuecomment-1246487152
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7992/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1036473974 PR_kwDOAMm_X84tsaL3 5900 Add .chunksizes property TomNicholas 35968931 closed 0     2 2021-10-26T15:51:09Z 2023-10-20T16:00:15Z 2021-10-29T18:12:22Z MEMBER   0 pydata/xarray/pulls/5900

Adds a new .chunksizes property to Dataset, DataArray and Variable, which returns a mapping from dimensions names to chunk sizes in all cases.

Supersedes #5846 because this PR is backwards-compatible.

  • [x] Closes #5843
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5900/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1083507645 PR_kwDOAMm_X84wBDeq 6083 Manifest as variables attribute TomNicholas 35968931 closed 0     2 2021-12-17T18:14:26Z 2023-09-14T15:37:38Z 2023-09-14T15:37:37Z MEMBER   1 pydata/xarray/pulls/6083

Another attempt like #5961

@shoyer

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6083/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
663235664 MDU6SXNzdWU2NjMyMzU2NjQ= 4243 Manually drop DataArray from memory? TomNicholas 35968931 closed 0     3 2020-07-21T18:54:40Z 2023-09-12T16:17:12Z 2023-09-12T16:17:12Z MEMBER      

Is it possible to deliberately drop data associated with a particular DataArray from memory?

Obviously da.close() exists, but what happens if you did, for example:
```python
ds = open_dataset(file)
da = ds[var]
da.compute()    # something that loads da into memory
da.close()      # is the memory freed up again now?
ds.something()  # what about now?
```

Also does calling python's built-in garbage collector (i.e. gc.collect()) do anything in this instance?

The context of this question is that I'm trying to resave some massive variables (~65GB each) that were loaded from thousands of files into just a few files for each variable. I would love to use @rabernat 's new rechunker package but I'm not sure how easily I can convert my current netCDF data to Zarr, and I'm interested in this question no matter how I end up solving the problem.

I don't currently have a particularly good understanding of file I/O and memory management in xarray, but would like to improve it. Can anyone recommend a tool I can use to answer this kind of question myself on my own machine? I suppose it would need to be able to tell me the current memory usage of specific objects, not just the total memory usage.
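One standard-library option for exactly this kind of question is tracemalloc, which shows whether dropping a reference actually releases the traced allocation (a bytearray stands in for array data here; memory_profiler is a third-party alternative for per-line process memory):

```python
import tracemalloc

tracemalloc.start()
data = bytearray(8_000_000)       # stand-in for ~8 MB of loaded array data
during, _ = tracemalloc.get_traced_memory()
del data                          # drop the last reference
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

assert during - after > 1_000_000  # the ~8 MB was freed on deletion
```

Note that numpy has supported tracemalloc tracking of its array buffers for a long time, so the same pattern can be applied to a loaded DataArray's underlying data.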

(@johnomotani you might be interested)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4243/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1806949831 PR_kwDOAMm_X85VoH2o 7991 Docs page on internal design TomNicholas 35968931 closed 0     1 2023-07-17T04:46:55Z 2023-09-08T15:41:32Z 2023-09-08T15:41:32Z MEMBER   0 pydata/xarray/pulls/7991

Adds a new page to the xarray internals documentation giving an overview of the internal design of xarray.

This should be helpful for xarray contributors and for developers of extensions because nowhere in the docs does it really explain how DataArray and Dataset are constructed from Variable.

  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7991/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1368740629 PR_kwDOAMm_X84-uWtE 7019 Generalize handling of chunked array types TomNicholas 35968931 closed 0     30 2022-09-10T22:02:18Z 2023-07-24T20:40:29Z 2023-05-18T17:34:31Z MEMBER   0 pydata/xarray/pulls/7019

Initial attempt to get cubed working within xarray, as an alternative to dask.

  • [x] Closes #6807, at least for the case of cubed
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
  • [x] Correct type hints

I've added a manager kwarg to the .chunk methods so you can do da.chunk(manager="cubed") to convert to a chunked cubed.CoreArray, with the default still being da.chunk(manager="dask"). (I couldn't think of a better name than "manager", as "backend" and "executor" are already taken.)
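A stripped-down sketch of the registry idea behind such a kwarg (function and class names here are illustrative; the PR's actual resolution mechanism is richer and may differ):

```python
# Sketch: map manager names like "dask" or "cubed" to backend classes,
# so .chunk(manager=...) can look up the right implementation.
_CHUNK_MANAGERS = {}

def register_chunkmanager(name, manager_cls):
    _CHUNK_MANAGERS[name] = manager_cls

def get_chunkmanager(name):
    try:
        return _CHUNK_MANAGERS[name]
    except KeyError:
        raise ValueError(
            f"unrecognized chunk manager {name!r}; "
            f"registered: {sorted(_CHUNK_MANAGERS)}"
        ) from None

class DaskManager:
    """Placeholder for a class wrapping dask-specific chunking operations."""

register_chunkmanager("dask", DaskManager)
get_chunkmanager("dask")
```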

~~At the moment it should work except for an import error that I don't understand, see below.~~

For cubed to work at all with this PR we would also need:
- [x] Cubed to expose the correct array type consistently https://github.com/tomwhite/cubed/issues/123
- [x] A cubed version of apply_gufunc https://github.com/tomwhite/cubed/pull/119 - implemented in https://github.com/tomwhite/cubed/pull/149 :partying_face:

To-dos for me on this PR:
- [x] Re-route xarray.apply_ufunc through cubed.apply_gufunc instead of dask's apply_gufunc when appropriate,
- [x] Add from_array_kwargs to opening functions, e.g. open_zarr, and open_dataset,
- [x] Add from_array_kwargs to creation functions, such as full_like,
- [x] Add store_kwargs as a way to propagate cubed-specific kwargs when saving to_zarr.

To complete this project more generally we should also:
- [ ] Have cubed.apply_gufunc support multiple output arguments https://github.com/tomwhite/cubed/issues/152
- [x] Have a top-level cubed.unify_chunks to match dask.array.core.unify_chunks
- [ ] Write a test suite for wrapping cubed arrays, which would be best done via #6894
- [ ] Generalise xarray.map_blocks to work on cubed arrays, ideally by first rewriting xarray's implementation of map_blocks to use dask.array.map_blocks

cc @tomwhite

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7019/reactions",
    "total_count": 4,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 2,
    "eyes": 0
}
    xarray 13221727 pull
1810167498 PR_kwDOAMm_X85VzHaS 7999 Core team member guide TomNicholas 35968931 closed 0     4 2023-07-18T15:26:01Z 2023-07-21T14:51:57Z 2023-07-21T13:48:26Z MEMBER   0 pydata/xarray/pulls/7999

Adds a guide for core developers of xarray. Mostly adapted from napari's core dev guide, but with some extra sections and ideas from the pandas maintainance guide.

@pydata/xarray please give your feedback on this! If you prefer to give feedback in a non-public channel for whatever reason then please use the private core team email.

  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7999/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1801849622 I_kwDOAMm_X85rZgsW 7982 Use Meilisearch in our docs TomNicholas 35968931 closed 0     1 2023-07-12T22:29:45Z 2023-07-19T19:49:53Z 2023-07-19T19:49:53Z MEMBER      

Is your feature request related to a problem?

Just saw this cool search thing for sphinx in a lightning talk at SciPy called Meilisearch

Cc @dcherian

Describe the solution you'd like

Read about it here

https://sphinxdocs.ansys.com/version/stable/user_guide/options.html

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7982/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1807782455 I_kwDOAMm_X85rwJI3 7996 Stable docs build not showing latest changes after release TomNicholas 35968931 closed 0     3 2023-07-17T13:24:58Z 2023-07-17T20:48:19Z 2023-07-17T20:48:19Z MEMBER      

What happened?

I released xarray version v2023.07.0 last night, but I'm not seeing changes to the documentation reflected in the https://docs.xarray.dev/en/stable/ build. (In particular the Internals section now should have an entire extra page on wrapping chunked arrays.) I can however see the newest additions on https://docs.xarray.dev/en/latest/ build. Is that how it's supposed to work?

What did you expect to happen?

No response

Minimal Complete Verifiable Example

No response

MVCE confirmation

  • [ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

No response

Environment

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7996/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1807044282 PR_kwDOAMm_X85VodDN 7993 Update whats-new.rst for new release TomNicholas 35968931 closed 0     0 2023-07-17T06:03:19Z 2023-07-17T06:03:43Z 2023-07-17T06:03:42Z MEMBER   0 pydata/xarray/pulls/7993

Needed because I started the release process earlier this week by writing a what's-new entry, which got merged, but the release itself was never issued. I'll self-merge this and release now.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7993/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1799476089 PR_kwDOAMm_X85VO0Wz 7979 Release summary for v2023.07.0 TomNicholas 35968931 closed 0     0 2023-07-11T17:59:28Z 2023-07-13T16:33:43Z 2023-07-13T16:33:43Z MEMBER   0 pydata/xarray/pulls/7979  
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7979/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1753401384 PR_kwDOAMm_X85Szs7X 7911 Duck array documentation improvements TomNicholas 35968931 closed 0     0 2023-06-12T19:10:41Z 2023-07-10T09:36:05Z 2023-06-29T14:39:22Z MEMBER   0 pydata/xarray/pulls/7911

Draft improvements to the user guide page on using duck arrays.

Intended as part of the scipy tutorial effort, though I wasn't sure whether to concentrate on content in the main xarray docs or the tutorial repo.

(I wrote this on a train without enough internet to update my conda environment so I will come back and fix anything that doesn't run.)

  • [x] Part of https://github.com/xarray-contrib/xarray-tutorial/issues/170
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

cc @dcherian and @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7911/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1779880070 PR_kwDOAMm_X85UMTE7 7951 Chunked array docs TomNicholas 35968931 closed 0     3 2023-06-28T23:01:42Z 2023-07-05T20:33:33Z 2023-07-05T20:08:19Z MEMBER   0 pydata/xarray/pulls/7951

Builds upon #7911

  • [x] Documentation for #7019
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7951/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1786830423 PR_kwDOAMm_X85Uj4NA 7960 Update minimum version of typing extensions in pre-commit TomNicholas 35968931 closed 0     1 2023-07-03T21:27:40Z 2023-07-05T19:09:04Z 2023-07-05T15:43:40Z MEMBER   0 pydata/xarray/pulls/7960

Attempt to fix the pre-commit build failure I keep seeing in the CI (e.g. this failure from https://github.com/pydata/xarray/pull/7881)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7960/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1773373878 PR_kwDOAMm_X85T2T_2 7941 Allow cubed arrays to be passed to flox groupby TomNicholas 35968931 closed 0     0 2023-06-25T16:48:56Z 2023-06-26T15:28:06Z 2023-06-26T15:28:03Z MEMBER   0 pydata/xarray/pulls/7941

Generalizes a small check for chunked arrays in groupby so it now allows cubed arrays through to flox rather than just dask arrays. Does not actually mean that flox groupby will work with cubed yet though, see https://github.com/tomwhite/cubed/issues/223 and https://github.com/xarray-contrib/flox/issues/224
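The generalized check can be sketched as follows (an illustrative sketch, not xarray's exact internals — the function and class names here are made up): instead of an isinstance check against dask's array type, detect any chunked duck array by duck typing.

```python
def is_chunked_array(x):
    """Treat anything exposing a dask-style ``chunks`` attribute as a
    chunked duck array, which covers both dask and cubed arrays."""
    return hasattr(x, "chunks")


class FakeCubedArray:
    """Stand-in for a cubed (or dask) array with one chunked dimension."""

    chunks = ((2, 2),)
```

With a check like this, cubed arrays pass through to flox just as dask arrays do, while plain sequences and numpy arrays without chunking do not.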

  • [x] Should have been done in #7019
  • [ ] ~~Tests added~~ (the place to test this would be in cubed-xarray)
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7941/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1768095127 PR_kwDOAMm_X85Tkubk 7934 Release summary for v2023.06.0 TomNicholas 35968931 closed 0     4 2023-06-21T17:34:29Z 2023-06-23T03:02:12Z 2023-06-23T03:02:11Z MEMBER   0 pydata/xarray/pulls/7934

Release summary:

This release adds features to curvefit, improves the performance of concatenation, and fixes various bugs.


For some reason when I try to use git log "$(git tag --sort=v:refname | tail -1).." --format=%aN | sort -u | perl -pe 's/\n/$1, /' to return the list of all contributors since last release, it only returns Deepak :laughing: I'm not sure what's going wrong there - I definitely have all the git tags fetched, and other people have definitely contributed since the last version!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7934/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1716200316 PR_kwDOAMm_X85Q1k5D 7847 Array API fixes for astype TomNicholas 35968931 closed 0     0 2023-05-18T20:09:32Z 2023-05-19T15:11:17Z 2023-05-19T15:11:16Z MEMBER   0 pydata/xarray/pulls/7847

Follows on from #7067 and #6804, ensuring that we call xp.astype() on arrays rather than arr.astype(), as the latter is commonly-implemented by array libraries but not part of the array API standard.
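The dispatch pattern can be sketched like this (an illustrative sketch, not xarray's exact duck_array_ops code): prefer the array API namespace's astype function when the array advertises one, and fall back to the duck-typed method otherwise.

```python
import numpy as np


def astype(x, dtype):
    """Cast via the array API namespace when available, since ``astype``
    is a function (``xp.astype``) in the standard, not an array method."""
    if hasattr(x, "__array_namespace__"):
        xp = x.__array_namespace__()
        return xp.astype(x, dtype)
    # numpy-like duck arrays commonly implement .astype even though the
    # array API standard does not require it
    return x.astype(dtype)


out = astype(np.arange(3), np.float64)
```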

A bit of a pain to test in isolation because I made the changes so that xarray's .pad would work with array-API-conforming libraries, but actually np.pad is not part of the array API either, so it's going to coerce to numpy for that reason anyway.

(This PR replaces #7815, as making a new branch was easier than merging/rebasing with all the changes in #7019.)

  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7847/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1716345200 PR_kwDOAMm_X85Q2EmD 7849 Whats new for release of v2023.05.0 TomNicholas 35968931 closed 0     0 2023-05-18T22:30:32Z 2023-05-19T02:18:03Z 2023-05-19T02:17:55Z MEMBER   0 pydata/xarray/pulls/7849

Summary:

This release adds some new methods and operators, updates our deprecation policy for python versions, fixes some bugs with groupby, and introduces experimental support for alternative chunked parallel array computation backends via a new plugin system!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7849/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1695244129 PR_kwDOAMm_X85PvJSS 7815 Array API fixes for astype TomNicholas 35968931 closed 0     2 2023-05-04T04:33:52Z 2023-05-18T20:10:48Z 2023-05-18T20:10:43Z MEMBER   0 pydata/xarray/pulls/7815

While it's common for duck arrays to have a .astype method, this doesn't exist in the new array API standard. We now have duck_array_ops.astype to deal with this, but for some reason changing it in just a couple more places broke practically every pint test in test_units.py :confused: @keewis

Builds on top of #7019 with just one extra commit to separate out this issue.

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7815/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1308715638 I_kwDOAMm_X85OAWp2 6807 Alternative parallel execution frameworks in xarray TomNicholas 35968931 closed 0     12 2022-07-18T21:48:10Z 2023-05-18T17:34:33Z 2023-05-18T17:34:33Z MEMBER      

Is your feature request related to a problem?

Since early on the project xarray has supported wrapping dask.array objects in a first-class manner. However recent work on flexible array wrapping has made it possible to wrap all sorts of array types (and with #6804 we should support wrapping any array that conforms to the array API standard).

Currently though the only way to parallelize array operations with xarray "automatically" is to use dask. (You could use xarray-beam or other options too but they don't "automatically" generate the computation for you like dask does.)

When dask is the only type of parallel framework exposing an array-like API then there is no need for flexibility, but now we have nascent projects like cubed to consider too. @tomwhite

Describe the solution you'd like

Refactor the internals so that dask is one option among many, and that any newer options can plug in in an extensible way.

In particular cubed deliberately uses the same API as dask.array, exposing:

1) the methods needed to conform to the array API standard
2) a .chunk and .compute method, which we could dispatch to
3) dask-like functions to create computation graphs, including blockwise, map_blocks, and rechunk

I would like to see xarray able to wrap any array-like object which offers this set of methods / functions, and call the corresponding version of that method for the correct library (i.e. dask vs cubed) automatically.

That way users could try different parallel execution frameworks simply via a switch like ds.chunk(**chunk_pattern, manager="dask") and see which one works best for their particular problem.
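The proposed dispatch could be sketched roughly like this (a hypothetical sketch — the registry, function names, and toy framework below are illustrative, not xarray's actual API):

```python
from types import SimpleNamespace

# Registry mapping a manager name ("dask", "cubed", ...) to the module
# implementing dask-like chunked-array operations.
_CHUNK_MANAGERS = {}


def register_chunkmanager(name, module):
    """Register a framework exposing a dask-style from_array function."""
    _CHUNK_MANAGERS[name] = module


def chunk(data, chunks, manager="dask"):
    """Dispatch a .chunk() call to the requested parallel framework."""
    try:
        mod = _CHUNK_MANAGERS[manager]
    except KeyError:
        raise ValueError(f"unknown chunk manager {manager!r}") from None
    return mod.from_array(data, chunks=chunks)


# A toy "framework" standing in for dask or cubed:
toy = SimpleNamespace(from_array=lambda data, chunks: ("chunked", data, chunks))
register_chunkmanager("toy", toy)
```

Switching frameworks is then just a matter of passing a different manager name, with no change to the rest of the user's code.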

Describe alternatives you've considered

If we leave it the way it is now then xarray will not be truly flexible in this respect.

Any library can wrap (or subclass if they are really brave) xarray objects to provide parallelism but that's not the same level of flexibility.

Additional context

cubed repo

PR about making xarray able to wrap objects conforming to the new array API standard

cc @shoyer @rabernat @dcherian @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6807/reactions",
    "total_count": 6,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 3,
    "rocket": 2,
    "eyes": 1
}
  completed xarray 13221727 issue
1615570467 PR_kwDOAMm_X85LlkLA 7595 Clarifications in contributors guide TomNicholas 35968931 closed 0     5 2023-03-08T16:35:45Z 2023-03-13T17:55:43Z 2023-03-13T17:51:24Z MEMBER   0 pydata/xarray/pulls/7595

Add suggestions @paigem made in #7439, as well as fix a few small formatting things and broken links.

I would like to merge this so that it can be helpful for the new contributors we will hopefully get through Outreachy.

  • [x] Closes #7439
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7595/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1579829674 PR_kwDOAMm_X85JuG-F 7518 State which variables not present in drop vars error message TomNicholas 35968931 closed 0     0 2023-02-10T15:00:35Z 2023-03-09T20:47:47Z 2023-03-09T20:47:47Z MEMBER   0 pydata/xarray/pulls/7518

Makes the error message more informative
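The shape of the improved check can be sketched like this (a hypothetical sketch — the function name and exact wording are illustrative, not the PR's actual code):

```python
def check_vars_present(names_to_drop, dataset_vars):
    """Raise an error that names the variables which are missing,
    rather than failing with a generic message."""
    missing = [name for name in names_to_drop if name not in dataset_vars]
    if missing:
        raise ValueError(
            f"These variables cannot be found in this dataset: {missing}"
        )
```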

  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7518/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1573538162 PR_kwDOAMm_X85JY_1l 7509 Update apply_ufunc output_sizes error message TomNicholas 35968931 closed 0     0 2023-02-07T01:35:08Z 2023-02-07T15:45:54Z 2023-02-07T05:01:36Z MEMBER   0 pydata/xarray/pulls/7509
  • [x] Closes poor error message reported in https://github.com/pydata/xarray/discussions/7503
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7509/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1470025851 PR_kwDOAMm_X85D_b_W 7338 Docs: add example of writing and reading groups to netcdf TomNicholas 35968931 closed 0     0 2022-11-30T18:01:32Z 2022-12-01T16:24:08Z 2022-12-01T16:24:04Z MEMBER   0 pydata/xarray/pulls/7338
  • [x] Came from https://github.com/pydata/xarray/discussions/7329#discussioncomment-4256845
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~

@dcherian

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7338/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1426383543 I_kwDOAMm_X85VBOK3 7232 ds.Coarsen.construct demotes non-dimensional coordinates to variables TomNicholas 35968931 closed 0     0 2022-10-27T23:39:32Z 2022-10-28T17:46:51Z 2022-10-28T17:46:51Z MEMBER      

What happened?

ds.Coarsen.construct demotes non-dimensional coordinates to variables

What did you expect to happen?

All variables that were coordinates before the coarsen.construct stay as coordinates afterwards.

Minimal Complete Verifiable Example

```Python
In [3]: da = xr.DataArray(np.arange(24), dims=["time"])
   ...: da = da.assign_coords(day=365 * da)
   ...: ds = da.to_dataset(name="T")

In [4]: ds
Out[4]:
<xarray.Dataset>
Dimensions:  (time: 24)
Coordinates:
    day      (time) int64 0 365 730 1095 1460 1825 ... 6935 7300 7665 8030 8395
Dimensions without coordinates: time
Data variables:
    T        (time) int64 0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 18 19 20 21 22 23

In [5]: ds.coarsen(time=12).construct(time=("year", "month"))
Out[5]:
<xarray.Dataset>
Dimensions:  (year: 2, month: 12)
Coordinates:
    day      (year, month) int64 0 365 730 1095 1460 ... 7300 7665 8030 8395
Dimensions without coordinates: year, month
Data variables:
    T        (year, month) int64 0 1 2 3 4 5 6 7 8 ... 16 17 18 19 20 21 22 23
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

No response

Anything else we need to know?

No response

Environment

main

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7232/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1426387580 PR_kwDOAMm_X85BtKwb 7233 Ensure Coarsen.construct keeps all coords TomNicholas 35968931 closed 0     0 2022-10-27T23:46:49Z 2022-10-28T17:46:50Z 2022-10-28T17:46:50Z MEMBER   0 pydata/xarray/pulls/7233
  • [x] Closes #7232
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7233/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1417378270 PR_kwDOAMm_X85BPGqR 7192 Example using Coarsen.construct to split map into regions TomNicholas 35968931 closed 0     3 2022-10-20T22:14:31Z 2022-10-21T18:14:59Z 2022-10-21T18:14:56Z MEMBER   0 pydata/xarray/pulls/7192

I realised there is very little documentation on Coarsen.construct, so I added this example.

Unsure whether it should instead live in the page on reshaping and reorganising data though, as it is essentially a reshape operation. EDIT: Now on the reshape page
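Since Coarsen.construct is essentially a reshape, the core of the documented example corresponds to this plain-numpy operation (an illustrative parallel, not xarray code):

```python
import numpy as np

# Splitting a length-24 "time" dimension into ("year", "month") — what
# ds.coarsen(time=12).construct(time=("year", "month")) does to the data —
# is at heart this reshape:
monthly = np.arange(24)           # two years of monthly values
by_year = monthly.reshape(2, 12)  # axes: (year, month)
```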

  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~

cc @jbusecke @paigem

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7192/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1391319978 PR_kwDOAMm_X84_4UWs 7107 2022.09.0 release summary TomNicholas 35968931 closed 0     0 2022-09-29T18:34:02Z 2022-09-29T21:57:43Z 2022-09-29T21:54:14Z MEMBER   0 pydata/xarray/pulls/7107

Thumbs up if it looks fine to you

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7107/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1386723044 PR_kwDOAMm_X84_pBKj 7090 Fill in missing docstrings for ndarray properties TomNicholas 35968931 closed 0     0 2022-09-26T21:05:37Z 2022-09-26T22:24:13Z 2022-09-26T22:05:34Z MEMBER   0 pydata/xarray/pulls/7090
  • [ ] ~~Closes #xxxx~~
  • [ ] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7090/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1370416843 PR_kwDOAMm_X84-z6DG 7023 Remove dask_array_type checks TomNicholas 35968931 closed 0     3 2022-09-12T19:31:04Z 2022-09-13T00:35:25Z 2022-09-13T00:35:22Z MEMBER   0 pydata/xarray/pulls/7023
  • [ ] From https://github.com/pydata/xarray/pull/7019#discussion_r968606140
  • [ ] ~~Tests added~~
  • [ ] ~~User visible changes (including notable bug fixes) are documented in whats-new.rst~~
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7023/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
592312709 MDExOlB1bGxSZXF1ZXN0Mzk3MzIwNzgx 3925 sel along 1D non-index coordinates TomNicholas 35968931 closed 0     13 2020-04-02T02:23:56Z 2022-09-07T14:31:58Z 2022-09-07T14:31:58Z MEMBER   0 pydata/xarray/pulls/3925

As a user, I find not being able to select along one-dimensional non-dimensional coordinates actually comes up fairly often. I think it's quite common to use multiple coordinates to be able to choose between plotting in different coordinate systems (or units) easily.

I've tried to close #2028 in the simplest (but also least efficient) way which was suggested by @shoyer (suggestion 1 here).

This should be temporary anyway: it will get superseded by the explicit indexes refactor. If there is another approach which would achieve the same functionality as this PR but actually bring us closer to #1603 then I would be happy to take a stab at that instead.
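The simple approach can be sketched with plain numpy (the variable names below are made up for illustration): translate a label match on the 1-D non-dimension coordinate into positional indices, then index the dimension positionally.

```python
import numpy as np

time = np.array([0.0, 0.1, 0.2, 0.3])  # dimension coordinate "time"
step = np.array([0, 10, 20, 30])       # 1-D non-dimension coordinate on "time"
data = np.array([1.0, 2.0, 4.0, 8.0])

# sel(step=20) becomes: find where the label matches on the non-dimension
# coordinate, then isel the shared dimension with those positions.
positions = np.nonzero(step == 20)[0]
selected = data[positions]
```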

I don't really know what to do about the failing test in groupby arithmetic - I think it's caused here but I'm not sure what to replace the triple error type catching (?!) with.

  • [x] Closes #2028
  • [x] Tests added
  • [ ] Passes isort -rc . && black . && mypy . && flake8
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3925/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1338010273 PR_kwDOAMm_X849IeCt 6913 Fix core team page TomNicholas 35968931 closed 0     0 2022-08-13T17:05:51Z 2022-08-15T13:39:47Z 2022-08-15T13:39:43Z MEMBER   0 pydata/xarray/pulls/6913

Adds missing core team members @alexamici and @aurghs to docs, as well as fixing @benbovy 's username.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6913/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1337587854 PR_kwDOAMm_X849HJCV 6912 Automatic PR labeler TomNicholas 35968931 closed 0     2 2022-08-12T18:40:27Z 2022-08-12T19:52:49Z 2022-08-12T19:47:19Z MEMBER   0 pydata/xarray/pulls/6912

GH action to automatically label new PRs according to which files they touch.

Idea stolen from dask, see https://github.com/dask/dask/pull/7506 . Their PR labelling by file/module is specified here.

(My first use of this bot so might well be a mistake.)

@max-sixty you will probably enjoy this extra automation :robot:

  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6912/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1078842125 PR_kwDOAMm_X84vxops 6076 Add labels to dataset diagram TomNicholas 35968931 closed 0     0 2021-12-13T18:21:02Z 2022-07-11T14:49:40Z 2022-01-03T16:58:51Z MEMBER   0 pydata/xarray/pulls/6076

While making a talk I made a version of our data structure diagram but with added labels along the bottom:

I think this helps clarify the relationship between Variables, DataArrays, and Datasets for new users.

I just made it quickly in inkscape by adding to the previous png - I only realised afterwards that the original was made in LaTeX, so maybe it would be better to add labels directly to that code?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6076/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
936313924 MDExOlB1bGxSZXF1ZXN0NjgzMDY3OTU5 5571 Rely on NEP-18 to dispatch to dask in duck_array_ops TomNicholas 35968931 closed 0     20 2021-07-03T19:24:33Z 2022-07-09T18:12:05Z 2021-09-29T17:48:40Z MEMBER   0 pydata/xarray/pulls/5571

Removes special-casing for dask in duck_array_ops.py, instead relying on NEP-18 to call it when the input is a dask array.

Probably actually don't need the _dask_or_eager_func() (now _module_func()) helper function at all, because all remaining instances look like pandas_isnull = _module_func("isnull", module=pd), which could just be pandas_isnull = pd.isnull.
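A minimal illustration of the NEP-18 mechanism being relied on here (a toy class, not xarray code): any duck array implementing __array_function__ intercepts numpy functions called on it instead of being coerced to ndarray, which is exactly how dask arrays are reached without special-casing.

```python
import numpy as np


class Logged:
    """Toy duck array: numpy functions called on it dispatch here via the
    NEP-18 __array_function__ protocol instead of coercing to ndarray."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Unwrap our type, run the numpy implementation, re-wrap the result.
        unwrapped = [a.data if isinstance(a, Logged) else a for a in args]
        return Logged(func(*unwrapped, **kwargs))


result = np.mean(Logged([1.0, 2.0, 3.0]))  # dispatches to Logged, not ndarray
```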

Only problem is that I seem to have broken one (parameterized) test: test_duck_array_ops.py::test_min_count[True-True-None-sum-True-bool_-1] fails with

```python
@pytest.mark.parametrize("dim_num", [1, 2])
@pytest.mark.parametrize("dtype", [float, int, np.float32, np.bool_])
@pytest.mark.parametrize("dask", [False, True])
@pytest.mark.parametrize("func", ["sum", "prod"])
@pytest.mark.parametrize("aggdim", [None, "x"])
@pytest.mark.parametrize("contains_nan", [True, False])
@pytest.mark.parametrize("skipna", [True, False, None])
def test_min_count(dim_num, dtype, dask, func, aggdim, contains_nan, skipna):
    if dask and not has_dask:
        pytest.skip("requires dask")

    da = construct_dataarray(dim_num, dtype, contains_nan=contains_nan, dask=dask)
    min_count = 3

    # If using Dask, the function call should be lazy.
    with raise_if_dask_computes():
      actual = getattr(da, func)(dim=aggdim, skipna=skipna, min_count=min_count)

/home/tegn500/Documents/Work/Code/xarray/xarray/tests/test_duck_array_ops.py:578:


/home/tegn500/Documents/Work/Code/xarray/xarray/core/common.py:56: in wrapped_func
    return self.reduce(func, dim, axis, skipna=skipna, **kwargs)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/dataarray.py:2638: in reduce
    var = self.variable.reduce(func, dim, axis, keep_attrs, keepdims, **kwargs)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/variable.py:1725: in reduce
    data = func(self.data, **kwargs)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:328: in f
    return func(values, axis=axis, **kwargs)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/nanops.py:106: in nansum
    a, mask = _replace_nan(a, 0)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/nanops.py:23: in _replace_nan
    mask = isnull(a)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:83: in isnull
    return pandas_isnull(data)
/home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:40: in f
    return getattr(module, name)(*args, **kwargs)
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/pandas/core/dtypes/missing.py:127: in isna
    return _isna(obj)
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/pandas/core/dtypes/missing.py:166: in _isna
    return _isna_ndarraylike(np.asarray(obj), inf_as_na=inf_as_na)
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/numpy/core/_asarray.py:102: in asarray
    return array(a, dtype, copy=False, order=order)
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:1502: in __array__
    x = self.compute()
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/base.py:285: in compute
    (result,) = compute(self, traverse=False, **kwargs)
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/base.py:567: in compute
    results = schedule(dsk, keys, **kwargs)


self = <xarray.tests.CountingScheduler object at 0x7f0804db2310> dsk = {('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 0): array([False, False, False, False], dtype=object), ('xar...pe=object), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 3): array([nan, False, False, nan], dtype=object)} keys = [[('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 0), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7'...array-<this-array>-29953318277423606f95b509ad1a9aa7', 2), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 3)]] kwargs = {}

def __call__(self, dsk, keys, **kwargs):
    self.total_computes += 1
    if self.total_computes > self.max_computes:
        raise RuntimeError(
            "Too many computes. Total: %d > max: %d."
            % (self.total_computes, self.max_computes)
        )

E RuntimeError: Too many computes. Total: 1 > max: 0.

/home/tegn500/Documents/Work/Code/xarray/xarray/tests/__init__.py:118: RuntimeError ```

  • [x] Closes #5559
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5571/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1223270563 PR_kwDOAMm_X843L_J2 6566 New inline_array kwarg for open_dataset TomNicholas 35968931 closed 0     11 2022-05-02T19:39:07Z 2022-05-11T22:12:24Z 2022-05-11T20:26:43Z MEMBER   0 pydata/xarray/pulls/6566

Exposes the inline_array kwarg of dask.array.from_array in xr.open_dataset, and ds/da/variable.chunk.

What setting this to True does is inline the array into the opening/chunking task, which avoids an extra array object at the start of the task graph. That's useful because the presence of that single common task connecting otherwise independent parts of the graph can confuse the graph optimizer.

With open_dataset(..., inline_array=False):

With open_dataset(..., inline_array=True):

In our case (xGCM) this is important because once inlined the optimizer understands that all the remaining parts of the graph are embarrassingly parallel, and realizes that it can fuse all our chunk-wise padding tasks into one padding task per chunk.
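The effect of inlining can be illustrated with toy task graphs written as plain dicts in dask's {key: task} style (no dask needed; the keys and task tuples below are made up):

```python
source = "open-zarr-store"  # stands in for the shared file/store opener task

# Without inlining, every chunk task references one common "open" task,
# coupling otherwise independent branches of the graph:
graph = {
    "open": source,
    ("x", 0): ("getter", "open", (slice(0, 5),)),
    ("x", 1): ("getter", "open", (slice(5, 10),)),
}

# With inline_array=True the source is embedded into each chunk task, so the
# chunk tasks share no dependency and the optimizer sees them as independent:
inlined = {
    ("x", 0): ("getter", source, (slice(0, 5),)),
    ("x", 1): ("getter", source, (slice(5, 10),)),
}
```

The trade-off is visible here too: the source object is duplicated into every chunk task, which is why inlining can cost memory for large embedded objects.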

I think this option could help in any case where someone is opening data from a Zarr store (the reason we had this opener task) or a netCDF file.

The value of the kwarg should be kept optional because in theory inlining is a tradeoff between fewer tasks and more memory use, but I think there might be a case for setting the default to be True?

Questions:

  1. How should I test this?
  2. Should it default to False or True?
  3. inline_array or inline? (inline_array doesn't really make sense for open_dataset, which creates multiple arrays)

  • [x] Closes #1895
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

@rabernat @jbusecke

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6566/reactions",
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 2,
    "eyes": 0
}
    xarray 13221727 pull
1200309334 PR_kwDOAMm_X842BOIk 6471 Support **kwargs form in `.chunk()` TomNicholas 35968931 closed 0     6 2022-04-11T17:37:38Z 2022-04-12T03:34:49Z 2022-04-11T19:36:40Z MEMBER   0 pydata/xarray/pulls/6471

Also adds some explicit tests (and type hinting) for Variable.chunk(), as I don't think it had dedicated tests before.
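A minimal sketch of the two equivalent spellings (array shape and chunk sizes are made up for illustration):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros((4, 4)), dims=("x", "y"))

# the dict form and the new **kwargs form produce identical chunking
chunked_dict = da.chunk({"x": 2, "y": 2})
chunked_kwargs = da.chunk(x=2, y=2)

assert chunked_dict.chunks == ((2, 2), (2, 2))
assert chunked_kwargs.chunks == chunked_dict.chunks
```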

  • [x] Closes #6459
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6471/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1157289286 PR_kwDOAMm_X84z1Xnf 6319 v2022.03.0 release notes TomNicholas 35968931 closed 0     2 2022-03-02T14:43:34Z 2022-03-02T19:49:25Z 2022-03-02T15:49:23Z MEMBER   0 pydata/xarray/pulls/6319  
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6319/reactions",
    "total_count": 4,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 4,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1150694186 PR_kwDOAMm_X84zenJA 6307 Drop duplicates over multiple dims, and add Dataset.drop_duplicates TomNicholas 35968931 closed 0     0 2022-02-25T17:34:12Z 2022-03-01T23:13:38Z 2022-02-25T21:08:30Z MEMBER   0 pydata/xarray/pulls/6307

Allows for dropping duplicates over multiple dims at once, and adds Dataset.drop_duplicates.
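A small sketch of the method on a single dimension (coordinate and data values chosen arbitrarily):

```python
import xarray as xr

da = xr.DataArray([0, 1, 2, 3], coords={"x": [0, 0, 1, 1]}, dims="x")

# keep the first occurrence of each duplicated index value along "x"
deduped = da.drop_duplicates("x")

assert list(deduped.x.values) == [0, 1]
assert list(deduped.values) == [0, 2]
```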

  • [x] Inspired by this discussion question
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6307/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1039034826 PR_kwDOAMm_X84t0t3V 5912 Remove lock kwarg TomNicholas 35968931 closed 0     4 2021-10-28T23:36:13Z 2021-12-29T16:34:45Z 2021-12-29T16:34:45Z MEMBER   0 pydata/xarray/pulls/5912

These were due to be removed post-0.19.

  • [x] Completes deprecation cycle started in #5256
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5912/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1041675013 PR_kwDOAMm_X84t8yv7 5924 v0.20 Release notes TomNicholas 35968931 closed 0     2 2021-11-01T21:53:29Z 2021-11-02T19:22:46Z 2021-11-02T16:37:45Z MEMBER   0 pydata/xarray/pulls/5924

@pydata/xarray the release notes for your approval

5889

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5924/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1034238626 I_kwDOAMm_X849pTqi 5889 Release v0.20? TomNicholas 35968931 closed 0     13 2021-10-23T19:31:01Z 2021-11-02T18:38:50Z 2021-11-02T18:38:50Z MEMBER      

We should do another release soon. The last one was v0.19 on July 23rd, so it's been 3 months.

(In particular I personally want to get some small pint compatibility fixes released such as https://github.com/pydata/xarray/pull/5571 and https://github.com/pydata/xarray/pull/5886, so that the code in this blog post advertising pint-xarray integration all works.)

There have been plenty of changes since then, and there are more we could merge quite quickly. It's a breaking release because we changed some dependencies, so it should be called v0.20.0.

@benbovy how does the ongoing index refactor stuff affect this release? Do we need to wait so it can all be announced? Can we release with merged index refactor stuff just silently sitting there?

Small additions we could merge, feel free to suggest more @pydata/xarray:

  • https://github.com/pydata/xarray/pull/5834
  • https://github.com/pydata/xarray/pull/5662
  • #5233
  • #5900
  • #5365
  • #5845
  • #5904
  • #5911
  • #5905
  • #5847
  • #5916

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5889/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1039714252 PR_kwDOAMm_X84t25p8 5916 Update open_rasterio deprecation version number TomNicholas 35968931 closed 0     2 2021-10-29T15:56:04Z 2021-11-02T18:03:59Z 2021-11-02T18:03:58Z MEMBER   0 pydata/xarray/pulls/5916  
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5916/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1039833986 PR_kwDOAMm_X84t3SlI 5917 Update minimum dependencies for 0.20 TomNicholas 35968931 closed 0     14 2021-10-29T18:38:37Z 2021-11-01T21:14:03Z 2021-11-01T21:14:02Z MEMBER   0 pydata/xarray/pulls/5917

=============== ====== ====
Package         Old    New
=============== ====== ====
cartopy         0.17   0.18
cftime          1.1    1.2
dask            2.15   2.30
distributed     2.15   2.30
hdf5            1.10   1.12
lxml            4.5    4.6
matplotlib-base 3.2    3.3
numba           0.49   0.51
numpy           1.17   1.18
pandas          1.0    1.1
pint            0.15   0.16
scipy           1.4    1.5
seaborn         0.10   0.11
sparse          0.8    0.11
toolz           0.10   0.11
zarr            2.4    2.5
=============== ====== ====

  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5917/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1012428149 PR_kwDOAMm_X84shL9H 5834 Combine by coords dataarray bugfix TomNicholas 35968931 closed 0     3 2021-09-30T17:17:00Z 2021-10-29T19:57:36Z 2021-10-29T19:57:36Z MEMBER   0 pydata/xarray/pulls/5834

Also reorganised the logic that deals with combining mixed sets of objects (i.e. named dataarrays, unnamed dataarrays, datasets) that was added in #4696.

TODO - same reorganisation / testing but for combine_nested as well as combine_by_coords. EDIT: I'm going to do this in a separate PR, so that this bugfix can be merged without it.

  • [x] Closes #5833
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5834/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1020282789 I_kwDOAMm_X8480Eel 5843 Why are `da.chunks` and `ds.chunks` properties inconsistent? TomNicholas 35968931 closed 0     6 2021-10-07T17:21:01Z 2021-10-29T18:12:22Z 2021-10-29T18:12:22Z MEMBER      

Basically the title, but what I'm referring to is this:

```python
In [2]: da = xr.DataArray([[0, 1], [2, 3]], name='foo').chunk(1)

In [3]: ds = da.to_dataset()

In [4]: da.chunks
Out[4]: ((1, 1), (1, 1))

In [5]: ds.chunks
Out[5]: Frozen({'dim_0': (1, 1), 'dim_1': (1, 1)})
```

Why does DataArray.chunks return a tuple and Dataset.chunks return a frozen dictionary?

This seems a bit silly, for a few reasons:

1) it means that some perfectly reasonable code might fail unnecessarily if passed a DataArray instead of a Dataset or vice versa, such as

```python
def is_core_dim_chunked(obj, core_dim):
    return len(obj.chunks[core_dim]) > 1
```
which will work as intended for a dataset but raises a `TypeError` for a dataarray.

2) it breaks the pattern we use for .sizes, where

```python
In [14]: da.sizes
Out[14]: Frozen({'dim_0': 2, 'dim_1': 2})

In [15]: ds.sizes
Out[15]: Frozen({'dim_0': 2, 'dim_1': 2})
```

3) if you want the chunks as a tuple they are always accessible via da.data.chunks, which is a more sensible place to look to find the chunks without dimension names.

4) It's an undocumented difference, as the docstrings for ds.chunks and da.chunks both only say

`"""Block dimensions for this dataset’s data or None if it’s not a dask array."""`

which doesn't tell me anything about the return type, or warn me that the return types are different.

EDIT: In fact `DataArray.chunk` doesn't even appear to be listed on the API docs page at all.

In our codebase this difference is mostly washed out by us using ._to_temp_dataset() all the time, and also by the way that the .chunk() method accepts both the tuple and dict form, so both of these invariants hold (but in different ways):

```python
ds == ds.chunk(ds.chunks)
da == da.chunk(da.chunks)
```

I'm not sure whether making this consistent is worth the effort of a significant breaking change though :confused:
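
In the meantime, a tiny hypothetical helper (`chunks_as_dict` is not part of xarray) can paper over the inconsistency for code that must accept either type:

```python
import numpy as np
import xarray as xr

def chunks_as_dict(obj):
    """Return chunks as a dim -> chunk-sizes mapping for DataArray or Dataset."""
    if isinstance(obj, xr.DataArray):
        # DataArray.chunks is a tuple-of-tuples; pair it with the dim names
        return dict(zip(obj.dims, obj.chunks))
    # Dataset.chunks is already a (frozen) mapping
    return dict(obj.chunks)

da = xr.DataArray(np.zeros((2, 2)), name="foo").chunk(1)
ds = da.to_dataset()

assert chunks_as_dict(da) == {"dim_0": (1, 1), "dim_1": (1, 1)}
assert chunks_as_dict(ds) == chunks_as_dict(da)
```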

(Sort of related to https://github.com/pydata/xarray/issues/2103)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5843/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1033884661 PR_kwDOAMm_X84tkKtA 5886 Use .to_numpy() for quantified facetgrids TomNicholas 35968931 closed 0     6 2021-10-22T19:25:24Z 2021-10-28T22:42:43Z 2021-10-28T22:41:59Z MEMBER   0 pydata/xarray/pulls/5886

Follows on from https://github.com/pydata/xarray/pull/5561 by replacing .values with .to_numpy() in more places in the plotting code. This allows pint.Quantity arrays to be plotted without issuing a UnitStrippedWarning (and will generalise better to other duck arrays later).

I noticed the need for this when trying out this example (but trying it without the .dequantify() call first).

(@Illviljan in theory .values should be replaced with .to_numpy() everywhere in the plotting code by the way)

  • [ ] Closes #xxxx
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5886/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1020555552 PR_kwDOAMm_X84s6zAH 5846 Change return type of DataArray.chunks and Dataset.chunks to a dict TomNicholas 35968931 closed 0     3 2021-10-08T00:02:20Z 2021-10-26T15:52:00Z 2021-10-26T15:51:59Z MEMBER   1 pydata/xarray/pulls/5846

Rectifies the issue in #5843 by making DataArray.chunks and Variable.chunks consistent with Dataset.chunks. This would obviously need a deprecation cycle before it could be merged.

Currently a WIP - I changed the behaviour but this obviously broke quite a few tests and I haven't looked at them yet.

  • [x] Closes #5843
  • [ ] Tests added
  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5846/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1016576623 PR_kwDOAMm_X84stU8v 5839 Dataset.__setitem__ raise on being passed a Dataset (for single key) TomNicholas 35968931 closed 0     1 2021-10-05T17:18:43Z 2021-10-23T19:01:24Z 2021-10-23T19:01:24Z MEMBER   0 pydata/xarray/pulls/5839

Inspired by confusion in #5833, this PR slightly clarifies the error thrown when the user attempts to do ds['var'] = xr.Dataset. The original error was:

TypeError: cannot directly convert an xarray.Dataset into a numpy array. Instead, create an xarray.DataArray first, either with indexing on the Dataset or by invoking the `to_array()` method.

while the new error is:

TypeError: Cannot assign a Dataset to a single key - only a DataArray or Variable object can be stored under a single key.
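
A quick reproduction of the behaviour this PR targets (variable names and data are arbitrary):

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2])})

# assigning a whole Dataset under a single key is rejected with a TypeError
try:
    ds["var"] = xr.Dataset({"b": ("x", [3, 4])})
    raised = False
except TypeError:
    raised = True

assert raised
```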

  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • ~~New functions/methods are listed in api.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5839/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
957001788 MDExOlB1bGxSZXF1ZXN0NzAwNTEyNjg3 5653 Roll coords deprecation TomNicholas 35968931 closed 0     4 2021-07-30T19:16:59Z 2021-10-01T19:24:02Z 2021-10-01T18:54:22Z MEMBER   0 pydata/xarray/pulls/5653

The default behaviour of da.roll() caught me out whilst trying to hand-write a diff function, so I completed the transition to roll_coords=False as the default. It's been throwing a warning for 3 years so I think it's time!

I also improved the docstrings and added type hints whilst there, although mypy doesn't seem to like some of the type hinting :/
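
A sketch of the new default behaviour on toy data:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], coords={"x": [10, 20, 30]}, dims="x")

# with roll_coords=False (now the default), the data shifts but the
# coordinates stay put
rolled = da.roll(x=1)

assert list(rolled.values) == [3, 1, 2]
assert list(rolled.x.values) == [10, 20, 30]
```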

  • [x] Completes deprecation started in #2360
  • [x] Tests updated
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5653/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
959317311 MDExOlB1bGxSZXF1ZXN0NzAyNDQ5NDg1 5669 Combine='by_coords' and concat dim deprecation in open_mfdataset TomNicholas 35968931 closed 0     2 2021-08-03T17:03:44Z 2021-10-01T18:52:00Z 2021-10-01T18:52:00Z MEMBER   0 pydata/xarray/pulls/5669

Noticed this hadn't been completed in https://github.com/pydata/xarray/discussions/5659

  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5669/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
928490583 MDExOlB1bGxSZXF1ZXN0Njc2NDg2ODM0 5519 Type hints for combine functions TomNicholas 35968931 closed 0     4 2021-06-23T17:33:36Z 2021-09-30T20:16:45Z 2021-09-30T19:52:47Z MEMBER   0 pydata/xarray/pulls/5519

Added type hints to combine_nested and combine_by_coords.

Builds on #4696 because that PR generalised the argument types to include DataArrays, but I couldn't see that branch in the list to base this PR off of.

The "nested list-of-lists" argument to combine_nested opens up a can of worms: the only way I can see to specify the type of a nested list of arbitrary depth is to define the type recursively, but mypy does not currently support recursive type definitions, though some other type checkers (e.g. Microsoft's Pylance) do. We're going to have the same problem when specifying types for open_mfdataset. For now this problem is just ignored by the type checker, meaning that we don't actually check the type of the nested list-of-lists.

  • [x] Passes pre-commit run --all-files
  • [ ] ~~User visible changes (including notable bug fixes) are documented in whats-new.rst~~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5519/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
935062144 MDU6SXNzdWU5MzUwNjIxNDQ= 5559 UserWarning when wrapping pint & dask arrays together TomNicholas 35968931 closed 0     4 2021-07-01T17:25:03Z 2021-09-29T17:48:39Z 2021-09-29T17:48:39Z MEMBER      

With pint-xarray you can create a chunked, unit-aware xarray object, but calling a calculation method and then computing doesn't appear to behave as hoped.

```python
da = xr.DataArray([1, 2, 3], attrs={'units': 'metres'})

chunked = da.chunk(1).pint.quantify()
```

```python
print(chunked.compute())
```

```
<xarray.DataArray (dim_0: 3)>
<Quantity([1 2 3], 'meter')>
Dimensions without coordinates: dim_0
```

So far this is fine, but if we try to take a mean before computing we get

```python
print(chunked.mean().compute())
```

```
<xarray.DataArray ()>
<Quantity(dask.array<true_divide, shape=(), dtype=float64, chunksize=(), chunktype=numpy.ndarray>, 'meter')>
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:3139: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
```

This is not correct: as well as the UserWarning, the return value of compute is a dask array, meaning we need to compute a second time to actually get the answer:

```python
print(chunked.mean().compute().compute())
```

```
<xarray.DataArray ()>
<Quantity(2.0, 'meter')>
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:3139: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
```

If we try chunking the other way (chunked = da.pint.quantify().pint.chunk(1)) then we get all the same results.

xref https://github.com/xarray-contrib/pint-xarray/issues/116 and https://github.com/pydata/xarray/pull/4972 @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5559/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
940054482 MDU6SXNzdWU5NDAwNTQ0ODI= 5588 Release v0.19? TomNicholas 35968931 closed 0     15 2021-07-08T17:00:26Z 2021-07-23T23:15:39Z 2021-07-23T21:12:53Z MEMBER      

Yesterday in the dev call we discussed the need for another release. Not sure if this should be a bugfix release (i.e. v0.18.3) or a full release (i.e. v0.19). Last release (v0.18.2) was 19th May, with v0.18.0 on 6th May.

@pydata/xarray

Bug fixes:

  • #5581 and the fix #5359 (this one needs to be released soon really)

  • #5528

  • Probably various smaller ones

New features:

  • #4696

  • #5514

  • #5476

  • #5464

  • #5445

Internal:

  • master -> main #5520
  • #5506

Nice to merge first?:

  • [x] #5568 and #5561
  • [ ] #5571
  • [x] #5586
  • [ ] #5493
  • [x] #4909
  • [ ] #5580
  • [ ] #4863
  • [ ] #5501
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5588/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
951882363 MDExOlB1bGxSZXF1ZXN0Njk2MTk4NDcx 5632 v0.19.0 release notes TomNicholas 35968931 closed 0     5 2021-07-23T20:38:49Z 2021-07-23T21:39:50Z 2021-07-23T21:12:53Z MEMBER   0 pydata/xarray/pulls/5632

Release notes:

```rst
This release brings improvements to plotting of categorical data, the ability to specify how attributes are combined in xarray operations, a new high-level :py:func:`unify_chunks` function, as well as various deprecations, bug fixes, and minor improvements.
```

  • [x] Closes #5588
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5632/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
935317034 MDExOlB1bGxSZXF1ZXN0NjgyMjU1NDE5 5561 Plots get labels from pint arrays TomNicholas 35968931 closed 0     6 2021-07-02T00:44:28Z 2021-07-21T23:06:21Z 2021-07-21T22:38:34Z MEMBER   0 pydata/xarray/pulls/5561

Stops you needing to call .pint.dequantify() before plotting.

Builds on top of #5568, so that should be merged first.

  • [x] Closes (1) from https://github.com/pydata/xarray/issues/3245#issue-484240082
  • [x] Tests added
  • [x] Tests passing
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5561/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
936045730 MDExOlB1bGxSZXF1ZXN0NjgyODYzMjgz 5568 Add to_numpy() and as_numpy() methods TomNicholas 35968931 closed 0     9 2021-07-02T20:17:40Z 2021-07-21T22:06:47Z 2021-07-21T21:42:48Z MEMBER   0 pydata/xarray/pulls/5568
  • [x] Closes #3245
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5568/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
400678645 MDExOlB1bGxSZXF1ZXN0MjQ1ODA4Nzg3 2690 Add create_test_data to public testing API TomNicholas 35968931 closed 0     11 2019-01-18T11:08:01Z 2021-06-24T08:51:36Z 2021-06-23T16:14:28Z MEMBER   0 pydata/xarray/pulls/2690
  • [x] Closes #2686 and #1839
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2690/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
602579471 MDExOlB1bGxSZXF1ZXN0NDA1NTc4NTA2 3982 Combine by point coords TomNicholas 35968931 closed 0     1 2020-04-19T00:00:30Z 2021-06-24T08:48:51Z 2021-06-23T15:58:30Z MEMBER   0 pydata/xarray/pulls/3982

This PR was based off of #3926, though it probably doesn't need to be and could be rebased if we wanted to merge this first.

  • [x] Closes #3774
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3982/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
404945709 MDExOlB1bGxSZXF1ZXN0MjQ5MDE0MTc3 2729 [WIP] Feature: Animated 1D plots TomNicholas 35968931 closed 0     14 2019-01-30T20:15:52Z 2021-06-24T08:46:31Z 2021-06-23T16:14:28Z MEMBER   0 pydata/xarray/pulls/2729

This is an attempt at a proof-of-principle for making animated plots in the way I suggested in #2355. (Also relevant for #2030.)

This example code: ```python import matplotlib.pyplot as plt import xarray as xr

Load data as done in plotting tutorial

airtemps = xr.tutorial.open_dataset('air_temperature') air = airtemps.air - 273.15 air.attrs = airtemps.air.attrs air.attrs['units'] = 'deg C'

Downsample to make reasonably-sized gif

data = air.isel(lat=10, time=slice(None,None,40))

Create animated plot

anim = data.plot(animate_over='time') anim.save('line1.gif', writer='imagemagick') plt.show() ``` now produces this gif: ~~The units on the timeline are formatted incorrectly because this PR isn't merged yet~~

I think it looks pretty good! It even animates the title properly. The actual animation creation only takes one line to do.

This currently only works for a plot with a single line, which is animated over a coordinate dimension. ~~It also required some minor modifications/bugfixes to animatplot, so it probably isn't reproducible right out of the box yet.~~ If you want to try this out then use the develop branch of my forked version of animatplot.

The reason I've put this up is because I wanted to

  1. show people the level of complexity required, and
  2. get people's opinion on the implementation.

I feel like although it required only ~100 extra lines to do this, the logic is very fragmented and scattered through the plot.line and plot._infer_line_data functions. In 2D this would get even more complicated, but I can't see a good way to abstract out the animation case.

(@t-makaro I expect you will be interested in this)

EDIT: To-Do list:

  • [x] Animate single line
  • [x] Animated line and static line on same axes
  • [x] Animate multiple lines on same axes
  • [x] Multiple animated line plots on same figure
  • [ ] ~~FacetGrids of multiple animated lines~~ (will leave for a later PR)
  • [ ] Complete set of tests
  • [x] Add animatplot as optional dependency
  • [x] Add new CI tests using animatplot
  • [ ] New documentation page
  • [x] Fix issues with formatting of timeline label (fixed by https://github.com/t-makaro/animatplot/pull/46)
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2729/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
911663002 MDU6SXNzdWU5MTE2NjMwMDI= 5438 Add Union Operators for Dataset TomNicholas 35968931 closed 0     2 2021-06-04T16:21:06Z 2021-06-04T16:35:36Z 2021-06-04T16:35:36Z MEMBER      

As of Python 3.9, dictionaries support being merged via

```python
c = a | b
```

and updated in place via

```python
a |= b
```

(see PEP 584).

xarray.Dataset is dict-like, so it would make sense to support the same syntax for merging. The way to achieve that is by adding new dunder methods to xarray.Dataset, something like

```python
def __or__(self, other):
    if not isinstance(other, xr.Dataset):
        return NotImplemented
    return xr.merge([self, other])

def __ror__(self, other):
    if not isinstance(other, xr.Dataset):
        return NotImplemented
    return xr.merge([other, self])

def __ior__(self, other):
    # note: Dataset.merge returns a new object, so an in-place
    # variant would need something like Dataset.update
    self.update(other)
    return self
```

The distinction between the intent of these different operators is whether a new object is returned or the original object is updated.

This would allow things like (ds1 | ds2).to_netcdf()

(This feature doesn't require python 3.9, it merely echoes a feature that is only available in 3.9+)
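
A self-contained sketch of the proposal on a hypothetical subclass (`MergableDataset` is illustrative only, not part of xarray; plain xr.merge gives the same result today):

```python
import xarray as xr

class MergableDataset(xr.Dataset):
    # hypothetical subclass sketching the proposed operator
    __slots__ = ()

    def __or__(self, other):
        if not isinstance(other, xr.Dataset):
            return NotImplemented
        return xr.merge([self, other])

ds1 = MergableDataset({"a": ("x", [1, 2])})
ds2 = xr.Dataset({"b": ("x", [3, 4])})

merged = ds1 | ds2  # equivalent to xr.merge([ds1, ds2])
assert set(merged.data_vars) == {"a", "b"}
```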

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5438/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
905974760 MDExOlB1bGxSZXF1ZXN0NjU3MDE1ODU4 5398 Multi dimensional histogram (see #5400 instead) TomNicholas 35968931 closed 0     0 2021-05-28T19:59:02Z 2021-05-30T15:34:33Z 2021-05-28T20:00:08Z MEMBER   0 pydata/xarray/pulls/5398

Initial work on integrating the multi-dimensional dask-powered histogram functionality from xhistogram into xarray. Just working on the skeleton to fit around the histogram algorithm for now, to be filled in later.

  • [ ] Closes #4610
  • [x] API skeleton
  • [x] Redirect plot.hist
  • [ ] Tests added
  • [ ] Type hinting
  • [ ] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

EDIT: Didn't notice that using git commit --amend has polluted the git history for this branch...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5398/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
902830027 MDExOlB1bGxSZXF1ZXN0NjU0MTU2NTA5 5383 Corrected reference to blockwise to refer to apply_gufunc instead TomNicholas 35968931 closed 0     2 2021-05-26T19:23:53Z 2021-05-26T21:34:06Z 2021-05-26T21:34:06Z MEMBER   0 pydata/xarray/pulls/5383

I noticed that the apply_ufunc tutorial notebook says that xarray.apply_ufunc uses dask.array.blockwise, but that's no longer true as of PR #4060.

  • [x] Passes pre-commit run --all-files
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5383/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
877944829 MDExOlB1bGxSZXF1ZXN0NjMxODI1Nzky 5274 Update release guide TomNicholas 35968931 closed 0     3 2021-05-06T19:50:53Z 2021-05-13T17:44:47Z 2021-05-13T17:44:47Z MEMBER   0 pydata/xarray/pulls/5274

Updated the release guide to account for what is now automated via github actions, and any other bits I felt could be clearer.

Now only 16 easy steps!

  • Motivated by #5232 and #5244
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5274/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
887597884 MDExOlB1bGxSZXF1ZXN0NjQwODE5NTMz 5289 Explained what a deprecation cycle is TomNicholas 35968931 closed 0     2 2021-05-11T15:15:08Z 2021-05-13T16:38:19Z 2021-05-13T16:38:19Z MEMBER   0 pydata/xarray/pulls/5289

Inspired by a question asked in #4696, but does not close that issue - [x] Passes pre-commit run --all-files - [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5289/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
871234354 MDExOlB1bGxSZXF1ZXN0NjI2Mjg2ODQy 5237 Add deprecation warnings for lock kwarg TomNicholas 35968931 closed 0     2 2021-04-29T16:45:45Z 2021-05-04T19:17:31Z 2021-05-04T19:17:31Z MEMBER   0 pydata/xarray/pulls/5237

Does this need a test?

  • [x] Closes #5073
  • [ ] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5237/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
874768820 MDExOlB1bGxSZXF1ZXN0NjI5MjU0ODU0 5255 Warn instead of error on combine='nested' with concat_dim supplied TomNicholas 35968931 closed 0     0 2021-05-03T17:38:10Z 2021-05-04T02:45:52Z 2021-05-04T02:45:52Z MEMBER   0 pydata/xarray/pulls/5255

Changes error introduced in #5231 into a warning, as discussed.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5255/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
870266283 MDExOlB1bGxSZXF1ZXN0NjI1NDg5NTQx 5231 open_mfdataset: Raise if combine='by_coords' and concat_dim=None TomNicholas 35968931 closed 0     1 2021-04-28T19:16:19Z 2021-04-30T12:41:17Z 2021-04-30T12:41:17Z MEMBER   0 pydata/xarray/pulls/5231

Fixes bug which allowed incorrect arguments to be passed to open_mfdataset without complaint.

The combination open_mfdataset(files, combine='by_coords', concat_dim='t') should never have been permitted, and in fact it wasn't permitted until the last part of the deprecation process from the old auto_combine. It makes no sense to pass this combination because the combine_by_coords function does not have a concat_dim argument at all!

The effect was pretty benign - the concat_dim arg wasn't really used for anything in that case, and the result of passing dodgy datasets would just be a less informative error. However there were multiple tests which assumed this behaviour was okay - I had to remove that particular parametrization for a bunch of your join tests @dcherian because they now fail with a different (clearer) error.

I also noticed a related issue which I fixed - internally open_mfdataset was performing a rearrangement of the input datasets that it needs for the case combine='nested', even in the case combine='by_coords'. I hadn't previously realised that we can just skip this rearrangement without issue, so open_mfdataset(combine='by_coords') should be a little bit faster now.

  • [x] Closes #5230
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5231/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
871111282 MDU6SXNzdWU4NzExMTEyODI= 5236 Error collecting tests due to optional pint import TomNicholas 35968931 closed 0     2 2021-04-29T15:01:13Z 2021-04-29T15:32:08Z 2021-04-29T15:32:08Z MEMBER      

When I try to run xarray's test suite locally with pytest I've suddenly started getting this weird error:

```
(xarray-dev) tegn500@fusion192:~/Documents/Work/Code/xarray$ pytest xarray/tests/test_backends.py
==================================== test session starts ====================================
platform linux -- Python 3.9.2, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /home/tegn500/Documents/Work/Code/xarray, configfile: setup.cfg
collected 0 items / 1 error

========================================== ERRORS ===========================================
__________________ ERROR collecting xarray/tests/test_backends.py ___________________
../../../../anaconda3/envs/xarray-dev/lib/python3.9/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1030: in _gcd_import
    ???
<frozen importlib._bootstrap>:1007: in _find_and_load
    ???
<frozen importlib._bootstrap>:972: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:228: in _call_with_frames_removed
    ???
<frozen importlib._bootstrap>:1030: in _gcd_import
    ???
<frozen importlib._bootstrap>:1007: in _find_and_load
    ???
<frozen importlib._bootstrap>:986: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:680: in _load_unlocked
    ???
<frozen importlib._bootstrap_external>:790: in exec_module
    ???
<frozen importlib._bootstrap>:228: in _call_with_frames_removed
    ???
xarray/tests/__init__.py:84: in <module>
    has_pint_0_15, requires_pint_0_15 = _importorskip("pint", minversion="0.15")
xarray/tests/__init__.py:46: in _importorskip
    if LooseVersion(mod.__version__) < LooseVersion(minversion):
E   AttributeError: module 'pint' has no attribute '__version__'
================================== short test summary info ==================================
ERROR xarray/tests/test_backends.py - AttributeError: module 'pint' has no attribute '__version__'
!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!
===================================== 1 error in 0.88s ======================================
```

I'm not sure whether this is my fault or a problem with xarray somehow. @keewis have you seen this happen before? This is with a fresh conda environment, running locally on my laptop, and on python 3.9.2. Pint isn't even in this environment. I can force it to proceed with the tests by also catching the attribute error, i.e.

```python
def _importorskip(modname, minversion=None):
    try:
        mod = importlib.import_module(modname)
        has = True
        if minversion is not None:
            if LooseVersion(mod.__version__) < LooseVersion(minversion):
                raise ImportError("Minimum version not satisfied")
    except (ImportError, AttributeError):
        has = False
```

but I obviously shouldn't need to do that. Any ideas?

Environment:

Output of <tt>xr.show_versions()</tt>

```
INSTALLED VERSIONS
------------------
commit: a5e72c9aacbf26936844840b75dd59fe7d13f1e6
python: 3.9.2 | packaged by conda-forge | (default, Feb 21 2021, 05:02:46) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 4.8.10-040810-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.8.0

xarray: 0.15.2.dev545+ga5e72c9
pandas: 1.2.4
numpy: 1.20.2
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.8.1
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.04.1
distributed: 2021.04.1
matplotlib: 3.4.1
cartopy: installed
seaborn: None
numbagg: None
pint: installed
setuptools: 49.6.0.post20210108
pip: 21.1
conda: None
pytest: 6.2.3
IPython: None
sphinx: None
```

Conda Environment:

Output of <tt>conda list</tt> # packages in environment at /home/tegn500/anaconda3/envs/xarray-dev: # # Name Version Build Channel _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 1_gnu conda-forge alsa-lib 1.2.3 h516909a_0 conda-forge asciitree 0.3.3 py_2 conda-forge attrs 20.3.0 pyhd3deb0d_0 conda-forge bokeh 2.3.1 py39hf3d152e_0 conda-forge bottleneck 1.3.2 py39hce5d2b2_3 conda-forge bzip2 1.0.8 h7f98852_4 conda-forge c-ares 1.17.1 h7f98852_1 conda-forge ca-certificates 2020.12.5 ha878542_0 conda-forge certifi 2020.12.5 py39hf3d152e_1 conda-forge cftime 1.4.1 py39hce5d2b2_0 conda-forge click 7.1.2 pyh9f0ad1d_0 conda-forge cloudpickle 1.6.0 py_0 conda-forge curl 7.76.1 h979ede3_1 conda-forge cycler 0.10.0 py_2 conda-forge cytoolz 0.11.0 py39h3811e60_3 conda-forge dask 2021.4.1 pyhd8ed1ab_0 conda-forge dask-core 2021.4.1 pyhd8ed1ab_0 conda-forge dbus 1.13.6 h48d8840_2 conda-forge distributed 2021.4.1 py39hf3d152e_0 conda-forge expat 2.3.0 h9c3ff4c_0 conda-forge fasteners 0.14.1 py_3 conda-forge fontconfig 2.13.1 hba837de_1005 conda-forge freetype 2.10.4 h0708190_1 conda-forge fsspec 2021.4.0 pyhd8ed1ab_0 conda-forge gettext 0.19.8.1 h0b5b191_1005 conda-forge glib 2.68.1 h9c3ff4c_0 conda-forge glib-tools 2.68.1 h9c3ff4c_0 conda-forge gst-plugins-base 1.18.4 hf529b03_2 conda-forge gstreamer 1.18.4 h76c114f_2 conda-forge hdf4 4.2.13 h10796ff_1005 conda-forge hdf5 1.10.6 nompi_h6a2412b_1114 conda-forge heapdict 1.0.1 py_0 conda-forge icu 68.1 h58526e2_0 conda-forge iniconfig 1.1.1 pyh9f0ad1d_0 conda-forge jinja2 2.11.3 pyh44b312d_0 conda-forge jpeg 9d h36c2ea0_0 conda-forge kiwisolver 1.3.1 py39h1a9c180_1 conda-forge krb5 1.17.2 h926e7f8_0 conda-forge lcms2 2.12 hddcbb42_0 conda-forge ld_impl_linux-64 2.35.1 hea4e1c9_2 conda-forge libblas 3.9.0 8_openblas conda-forge libcblas 3.9.0 8_openblas conda-forge libclang 11.1.0 default_ha53f305_0 conda-forge libcurl 7.76.1 hc4aaa36_1 conda-forge libedit 3.1.20191231 he28a2e2_2 conda-forge libev 4.33 h516909a_1 
conda-forge libevent 2.1.10 hcdb4288_3 conda-forge libffi 3.3 h58526e2_2 conda-forge libgcc-ng 9.3.0 h2828fa1_19 conda-forge libgfortran-ng 9.3.0 hff62375_19 conda-forge libgfortran5 9.3.0 hff62375_19 conda-forge libglib 2.68.1 h3e27bee_0 conda-forge libgomp 9.3.0 h2828fa1_19 conda-forge libiconv 1.16 h516909a_0 conda-forge liblapack 3.9.0 8_openblas conda-forge libllvm11 11.1.0 hf817b99_2 conda-forge libnetcdf 4.8.0 nompi_hfa85936_101 conda-forge libnghttp2 1.43.0 h812cca2_0 conda-forge libogg 1.3.4 h7f98852_1 conda-forge libopenblas 0.3.12 pthreads_h4812303_1 conda-forge libopus 1.3.1 h7f98852_1 conda-forge libpng 1.6.37 h21135ba_2 conda-forge libpq 13.2 hfd2b0eb_2 conda-forge libssh2 1.9.0 ha56f1ee_6 conda-forge libstdcxx-ng 9.3.0 h6de172a_19 conda-forge libtiff 4.2.0 hdc55705_1 conda-forge libuuid 2.32.1 h7f98852_1000 conda-forge libvorbis 1.3.7 h9c3ff4c_0 conda-forge libwebp-base 1.2.0 h7f98852_2 conda-forge libxcb 1.13 h7f98852_1003 conda-forge libxkbcommon 1.0.3 he3ba5ed_0 conda-forge libxml2 2.9.10 h72842e0_4 conda-forge libzip 1.7.3 h4de3113_0 conda-forge locket 0.2.0 py_2 conda-forge lz4-c 1.9.3 h9c3ff4c_0 conda-forge markupsafe 1.1.1 py39h3811e60_3 conda-forge matplotlib 3.4.1 py39hf3d152e_0 conda-forge matplotlib-base 3.4.1 py39h2fa2bec_0 conda-forge monotonic 1.5 py_0 conda-forge more-itertools 8.7.0 pyhd8ed1ab_1 conda-forge msgpack-python 1.0.2 py39h1a9c180_1 conda-forge mysql-common 8.0.23 ha770c72_1 conda-forge mysql-libs 8.0.23 h935591d_1 conda-forge ncurses 6.2 h58526e2_4 conda-forge netcdf4 1.5.6 nompi_py39hc6dca20_103 conda-forge nspr 4.30 h9c3ff4c_0 conda-forge nss 3.64 hb5efdd6_0 conda-forge numcodecs 0.7.3 py39he80948d_0 conda-forge numpy 1.20.2 py39hdbf815f_0 conda-forge olefile 0.46 pyh9f0ad1d_1 conda-forge openjpeg 2.4.0 hf7af979_0 conda-forge openssl 1.1.1k h7f98852_0 conda-forge packaging 20.9 pyh44b312d_0 conda-forge pandas 1.2.4 py39hde0f152_0 conda-forge partd 1.2.0 pyhd8ed1ab_0 conda-forge pcre 8.44 he1b5a44_0 conda-forge pillow 
8.1.2 py39hf95b381_1 conda-forge pip 21.1 pyhd8ed1ab_0 conda-forge pluggy 0.13.1 py39hf3d152e_4 conda-forge psutil 5.8.0 py39h3811e60_1 conda-forge pthread-stubs 0.4 h36c2ea0_1001 conda-forge py 1.10.0 pyhd3deb0d_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyqt 5.12.3 py39hf3d152e_7 conda-forge pyqt-impl 5.12.3 py39h0fcd23e_7 conda-forge pyqt5-sip 4.19.18 py39he80948d_7 conda-forge pyqtchart 5.12 py39h0fcd23e_7 conda-forge pyqtwebengine 5.12.1 py39h0fcd23e_7 conda-forge pytest 6.2.3 py39hf3d152e_0 conda-forge python 3.9.2 hffdb5ce_0_cpython conda-forge python-dateutil 2.8.1 py_0 conda-forge python_abi 3.9 1_cp39 conda-forge pytz 2021.1 pyhd8ed1ab_0 conda-forge pyyaml 5.4.1 py39h3811e60_0 conda-forge qt 5.12.9 hda022c4_4 conda-forge readline 8.1 h46c0cb4_0 conda-forge scipy 1.6.3 py39hee8e79c_0 conda-forge setuptools 49.6.0 py39hf3d152e_3 conda-forge six 1.15.0 pyh9f0ad1d_0 conda-forge sortedcontainers 2.3.0 pyhd8ed1ab_0 conda-forge sqlite 3.35.5 h74cdb3f_0 conda-forge tblib 1.7.0 pyhd8ed1ab_0 conda-forge tk 8.6.10 h21135ba_1 conda-forge toml 0.10.2 pyhd8ed1ab_0 conda-forge toolz 0.11.1 py_0 conda-forge tornado 6.1 py39h3811e60_1 conda-forge typing_extensions 3.7.4.3 py_0 conda-forge tzdata 2021a he74cb21_0 conda-forge wheel 0.36.2 pyhd3deb0d_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xz 5.2.5 h516909a_1 conda-forge yaml 0.2.5 h516909a_0 conda-forge zarr 2.8.1 pyhd8ed1ab_0 conda-forge zict 2.0.0 py_0 conda-forge zlib 1.2.11 h516909a_1010 conda-forge zstd 1.4.9 ha95c52a_0 conda-forge
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5236/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
671609109 MDU6SXNzdWU2NzE2MDkxMDk= 4300 General curve fitting method TomNicholas 35968931 closed 0     9 2020-08-02T12:35:49Z 2021-03-31T16:55:53Z 2021-03-31T16:55:53Z MEMBER      

Xarray should have a general curve-fitting function as part of its main API.

Motivation

Yesterday I wanted to fit a simple decaying exponential function to the data in a DataArray and realised there currently isn't an immediate way to do this in xarray. You have to either pull out the .values (losing the power of dask), or use apply_ufunc (complicated).

This is an incredibly common, domain-agnostic task, so although I don't think we should support various kinds of unusual optimisation procedures (which could always go in an extension package instead), I think a basic fitting method is within scope for the main library. There are SO questions asking how to achieve this.

We already have .polyfit and polyval anyway, which are more specific. (@AndrewWilliams3142 and @aulemahal I expect you will have thoughts on how implement this generally.)

Proposed syntax

I want something like this to work:

```python
def exponential_decay(xdata, A=10, L=5):
    return A * np.exp(-xdata / L)

# returns a dataset containing the optimised values of each parameter
fitted_params = da.fit(exponential_decay)

fitted_line = exponential_decay(da.x, A=fitted_params['A'], L=fitted_params['L'])

# Compare
da.plot(ax)
fitted_line.plot(ax)
```

It would also be nice to be able to fit in multiple dimensions. That means both for example fitting a 2D function to 2D data:

```python
def hat(xdata, ydata, h=2, r0=1):
    r = xdata**2 + ydata**2
    return h * np.exp(-r / r0)

fitted_params = da.fit(hat)

fitted_hat = hat(da.x, da.y, h=fitted_params['h'], r0=fitted_params['r0'])
```

but also repeatedly fitting a 1D function to 2D data:

```python
# da now has a y dimension too
fitted_params = da.fit(exponential_decay, fit_along=['x'])

# As fitted_params now has y-dependence, broadcasting means fitted_lines does too
fitted_lines = exponential_decay(da.x, A=fitted_params.A, L=fitted_params.L)
```

The latter would be useful for fitting the same curve to multiple model runs, but means we need some kind of `fit_along` or `dim` argument, which would default to all dims.
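As an aside, the "fit a 1D curve repeatedly along one dim of 2D data" case can already be emulated with plain numpy for curves that linearise under a log, since `np.polyfit` fits each column of a 2D y-argument independently. This is only an illustrative sketch (no dask, no coords, and not the proposed `da.fit`); all names here are made up for the example:

```python
import numpy as np

# Synthetic 2D data: exponential decay along x, with a different length scale L per y
x = np.linspace(0.0, 4.0, 50)
L_true = np.array([1.0, 2.0, 4.0])
data = 10.0 * np.exp(-x[None, :] / L_true[:, None])  # shape (y, x)

# log(data) = log(A) - x / L, so a degree-1 polyfit along x recovers both
# parameters; passing log(data).T (shape (x, y)) fits each y independently
slope, intercept = np.polyfit(x, np.log(data).T, deg=1)
A_fit = np.exp(intercept)   # one A per y
L_fit = -1.0 / slope        # one L per y
```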

So the method docstring would end up like:

```python
def fit(self, f, fit_along=None, skipna=None, full=False, cov=False):
    """
    Fits the function f to the DataArray.

    Expects the function f to have a signature like
    `result = f(*coords, **params)`
    for example
    `result_da = f(da.xcoord, da.ycoord, da.zcoord, A=5, B=None)`
    The names of the `**params` kwargs will be used to name the output variables.

    Returns
    -------
    fit_results - A single dataset which contains the variables (for each parameter in the fitting function):
    `param1`
        The optimised fit coefficients for parameter one.
    `param1_residuals`
        The residuals of the fit for parameter one.
    ...
    """
```

Questions

1) Should it wrap scipy.optimize.curve_fit, or reimplement it?

    Wrapping it is simpler, but since `curve_fit` just calls `least_squares` [under the hood](https://github.com/scipy/scipy/blob/v1.5.2/scipy/optimize/minpack.py#L532-L834), reimplementing it would mean we could use the dask-powered version of `least_squares` (like [`da.polyfit` does](https://github.com/pydata/xarray/blob/9058114f70d07ef04654d1d60718442d0555b84b/xarray/core/dataset.py#L5987)).

2) What form should we expect the curve-defining function to come in?

`scipy.optimize.curve_fit` expects the curve to act as `ydata = f(xdata, *params) + eps`, but in xarray then `xdata` could be one or multiple coords or dims, not necessarily a single array. Might it work to require a signature like `result_da = f(da.xcoord, da.ycoord, da.zcoord, ..., **params)`? Then the `.fit` method would be work out how many coords to pass to `f` based on the dimension of the `da` and the `fit_along` argument. But then the order of coord arguments in the signature of `f` would matter, which doesn't seem very xarray-like.

3) Is it okay to inspect parameters of the curve-defining function?

If we tell the user the curve-defining function has to have a signature like `da = func(*coords, **params)`, then we could read the names of the parameters by inspecting the function kwargs. Is that a good idea or might it end up being unreliable? Is the `inspect` standard library module the right thing to use for that? This could also be used to provide default guesses for the fitting parameters.
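On question 3 specifically, a minimal stdlib sketch of reading the coordinate arguments and default parameter guesses out of a curve-defining function's signature with `inspect` might look like the following (here `get_fit_params` is a hypothetical helper written for this example, not xarray API):

```python
import inspect
import math


def exponential_decay(xdata, A=10, L=5):
    return A * math.exp(-xdata / L)


def get_fit_params(func, n_coords):
    # Split the signature into coordinate arguments (the first n_coords
    # positional names) and fittable parameters, using each parameter's
    # default value as its initial guess for the optimiser
    sig = inspect.signature(func)
    names = list(sig.parameters)
    coord_args = names[:n_coords]
    params = {
        name: sig.parameters[name].default
        for name in names[n_coords:]
        if sig.parameters[name].default is not inspect.Parameter.empty
    }
    return coord_args, params


coords, params = get_fit_params(exponential_decay, n_coords=1)
# coords == ['xdata'], params == {'A': 10, 'L': 5}
```

Whether relying on signature introspection like this is robust enough (e.g. for functions wrapped in decorators or built in C) is exactly the open question above.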
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4300/reactions",
    "total_count": 4,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);