issues
134 rows where state = "closed" and user = 35968931 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2019566184 | I_kwDOAMm_X854YCJo | 8494 | Filter expected warnings in the test suite | TomNicholas 35968931 | closed | 0 | 1 | 2023-11-30T21:50:15Z | 2024-04-29T16:57:07Z | 2024-04-29T16:56:16Z | MEMBER | FWIW one thing I'd be keen to do generally — though maybe this isn't the place to start it — is handle warnings in the test suite when we add a new warning — i.e. filter them out where we expect them. In this case, that would be loading the netCDF files that have duplicate dims. Otherwise warnings become a huge block of text without much salience. I mostly see the 350 lines of them and think "meh mostly units & cftime", but then something breaks on a new upstream release that was buried in there, or we have a supported code path that is raising warnings internally. (I'm not sure whether it's possible to generally enforce that — maybe we could raise on any warnings coming from within xarray? Would be a non-trivial project to get us there though...) Originally posted by @max-sixty in https://github.com/pydata/xarray/issues/8491#issuecomment-1834615826 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8494/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
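The pattern the quoted comment describes, silencing only the warnings a test expects so anything unexpected still surfaces, can be sketched with the standard library alone. The function and warning message below are made up for illustration; they are not xarray code.

```python
import warnings

def open_legacy_file():
    # hypothetical stand-in for a code path that is known to warn
    warnings.warn("Duplicate dimension names present", UserWarning)
    return "dataset"

with warnings.catch_warnings():
    # filter out only the *expected* warning (matched by message regex);
    # any other warning raised inside this block would still be visible
    warnings.filterwarnings("ignore", message="Duplicate dimension names")
    result = open_legacy_file()

print(result)
```

In a pytest suite the same idea is usually spelled `@pytest.mark.filterwarnings("ignore:Duplicate dimension names")` on the individual test, which keeps the filter scoped to where the warning is expected.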
2021386895 | PR_kwDOAMm_X85g7QZD | 8500 | Deprecate ds.dims returning dict | TomNicholas 35968931 | closed | 0 | 1 | 2023-12-01T18:29:28Z | 2024-04-28T20:04:00Z | 2023-12-06T17:52:24Z | MEMBER | 0 | pydata/xarray/pulls/8500 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2224036575 | I_kwDOAMm_X86EkBrf | 8905 | Variable doesn't have an .expand_dims method | TomNicholas 35968931 | closed | 0 | 4 | 2024-04-03T22:19:10Z | 2024-04-28T19:54:08Z | 2024-04-28T19:54:08Z | MEMBER | Is your feature request related to a problem?
Describe the solution you'd like: Variable should also have this method, the only difference being that it wouldn't create any coordinates or indexes. Describe alternatives you've considered: No response. Additional context: No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8905/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
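The semantics the issue asks for, inserting a new size-1 dimension on a bare variable without creating any coordinates or indexes, can be sketched with numpy alone. The helper name and the (dims, data) pair representation are illustrative assumptions, not xarray's actual API.

```python
import numpy as np

def expand_dims_plain(dims, data, new_dim):
    # hypothetical helper: prepend a new size-1 dimension to a bare
    # (dims, data) pair -- no coordinates or indexes are involved,
    # which is exactly what distinguishes the Variable-level operation
    return (new_dim,) + tuple(dims), data[np.newaxis, ...]

dims, data = expand_dims_plain(("x", "y"), np.zeros((2, 3)), "time")
assert dims == ("time", "x", "y")
assert data.shape == (1, 2, 3)
```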
2254350395 | PR_kwDOAMm_X85tPTua | 8960 | Option to not auto-create index during expand_dims | TomNicholas 35968931 | closed | 0 | 2 | 2024-04-20T03:27:23Z | 2024-04-27T16:48:30Z | 2024-04-27T16:48:24Z | MEMBER | 0 | pydata/xarray/pulls/8960 |
TODO:
- [x] Add new kwarg to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2100707586 | PR_kwDOAMm_X85lFQn3 | 8669 | Fix automatic broadcasting when wrapping array api class | TomNicholas 35968931 | closed | 0 | 0 | 2024-01-25T16:05:19Z | 2024-04-20T05:58:05Z | 2024-01-26T16:41:30Z | MEMBER | 0 | pydata/xarray/pulls/8669 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2240895281 | PR_kwDOAMm_X85siDno | 8934 | Correct save_mfdataset docstring | TomNicholas 35968931 | closed | 0 | 0 | 2024-04-12T20:51:35Z | 2024-04-14T19:58:46Z | 2024-04-14T11:14:42Z | MEMBER | 0 | pydata/xarray/pulls/8934 | Noticed the
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2198196326 | I_kwDOAMm_X86DBdBm | 8860 | Ugly error in constructor when no data passed | TomNicholas 35968931 | closed | 0 | 2 | 2024-03-20T17:55:52Z | 2024-04-10T22:46:55Z | 2024-04-10T22:46:54Z | MEMBER | What happened? Passing no data to the What did you expect to happen? An error more like "tuple must be of form (dims, data[, attrs])" Minimal Complete Verifiable Example
MVCE confirmation
Relevant log output```PythonIndexError Traceback (most recent call last) Cell In[2], line 1 ----> 1 xr.Dataset({"t": ()}) File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:693, in Dataset.init(self, data_vars, coords, attrs) 690 if isinstance(coords, Dataset): 691 coords = coords._variables --> 693 variables, coord_names, dims, indexes, _ = merge_data_and_coords( 694 data_vars, coords 695 ) 697 self._attrs = dict(attrs) if attrs else None 698 self._close = None File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:422, in merge_data_and_coords(data_vars, coords) 418 coords = create_coords_with_default_indexes(coords, data_vars) 420 # exclude coords from alignment (all variables in a Coordinates object should 421 # already be aligned together) and use coordinates' indexes to align data_vars --> 422 return merge_core( 423 [data_vars, coords], 424 compat="broadcast_equals", 425 join="outer", 426 explicit_coords=tuple(coords), 427 indexes=coords.xindexes, 428 priority_arg=1, 429 skip_align_args=[1], 430 ) File ~/Documents/Work/Code/xarray/xarray/core/merge.py:718, in merge_core(objects, compat, join, combine_attrs, priority_arg, explicit_coords, indexes, fill_value, skip_align_args) 715 for pos, obj in skip_align_objs: 716 aligned.insert(pos, obj) --> 718 collected = collect_variables_and_indexes(aligned, indexes=indexes) 719 prioritized = _get_priority_vars_and_indexes(aligned, priority_arg, compat=compat) 720 variables, out_indexes = merge_collected( 721 collected, prioritized, compat=compat, combine_attrs=combine_attrs 722 ) File ~/Documents/Work/Code/xarray/xarray/core/merge.py:358, in collect_variables_and_indexes(list_of_mappings, indexes) 355 indexes_.pop(name, None) 356 append_all(coords_, indexes_) --> 358 variable = as_variable(variable, name=name, auto_convert=False) 359 if name in indexes: 360 append(name, variable, indexes[name]) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:126, in as_variable(obj, name, auto_convert) 124 obj = 
obj.copy(deep=False) 125 elif isinstance(obj, tuple): --> 126 if isinstance(obj[1], DataArray): 127 raise TypeError( 128 f"Variable {name!r}: Using a DataArray object to construct a variable is" 129 " ambiguous, please extract the data using the .data property." 130 ) 131 try: IndexError: tuple index out of range ``` Anything else we need to know?No response EnvironmentXarray |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8860/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
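The traceback above ends in a bare `IndexError` because `obj[1]` is read before the tuple's length is checked. A minimal sketch of the friendlier guard the issue asks for (the function name is hypothetical, not xarray's actual code):

```python
def check_variable_tuple(name, obj):
    # hypothetical guard: validate the tuple length *before* indexing
    # obj[1], so an empty tuple produces a readable TypeError instead
    # of "IndexError: tuple index out of range"
    if not isinstance(obj, tuple) or len(obj) < 2:
        raise TypeError(
            f"Variable {name!r}: tuple must be of the form (dims, data[, attrs])"
        )

message = ""
try:
    check_variable_tuple("t", ())  # the failing case from the issue: xr.Dataset({"t": ()})
except TypeError as e:
    message = str(e)

assert "(dims, data[, attrs])" in message
```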
2057651682 | PR_kwDOAMm_X85i2Byx | 8573 | ddof vs correction kwargs in std/var | TomNicholas 35968931 | closed | 0 | 0 | 2023-12-27T18:10:52Z | 2024-04-04T16:46:55Z | 2024-04-04T16:46:55Z | MEMBER | 0 | pydata/xarray/pulls/8573 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
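For context on the PR title: numpy spells the variance divisor adjustment `ddof`, while the array API standard names the same quantity `correction`; both mean "divide by (n - value)". A numpy-only sketch of the equivalence:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size

# ddof=1 (numpy) and correction=1 (array API standard) both select the
# sample standard deviation, i.e. a divisor of (n - 1) rather than n
manual = np.sqrt(((x - x.mean()) ** 2).sum() / (n - 1))

assert np.isclose(np.std(x, ddof=1), manual)
```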
2218574880 | PR_kwDOAMm_X85rVXJC | 8899 | New empty whatsnew entry | TomNicholas 35968931 | closed | 0 | 0 | 2024-04-01T16:04:27Z | 2024-04-01T17:49:09Z | 2024-04-01T17:49:06Z | MEMBER | 0 | pydata/xarray/pulls/8899 | Should have been done as part of the last release https://github.com/pydata/xarray/releases/tag/v2024.03.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2213406564 | PR_kwDOAMm_X85rEF-X | 8886 | Allow multidimensional variable with same name as dim when constructing dataset via coords | TomNicholas 35968931 | closed | 0 | 2 | 2024-03-28T14:37:27Z | 2024-03-28T17:07:10Z | 2024-03-28T16:28:09Z | MEMBER | 0 | pydata/xarray/pulls/8886 | Supersedes #8884 as a way to close #8883, in light of me having learnt that this is now allowed! https://github.com/pydata/xarray/issues/8883#issuecomment-2024645815. So this is really a follow-up to #7989.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8886/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2212186122 | I_kwDOAMm_X86D20gK | 8883 | Coordinates object permits invalid state | TomNicholas 35968931 | closed | 0 | 2 | 2024-03-28T01:49:21Z | 2024-03-28T16:28:11Z | 2024-03-28T16:28:11Z | MEMBER | What happened?It is currently possible to create a What did you expect to happen?If you try to pass the resulting object into the Minimal Complete Verifiable Example```Python In [1]: from xarray.core.coordinates import Coordinates In [2]: from xarray.core.variable import Variable In [4]: import numpy as np In [5]: var = Variable(data=np.arange(6).reshape(2, 3), dims=['x', 'y']) In [6]: var Out[6]: <xarray.Variable (x: 2, y: 3)> Size: 48B array([[0, 1, 2], [3, 4, 5]]) In [7]: coords = Coordinates(coords={'x': var}, indexes={}) In [8]: coords Out[8]: Coordinates: x (x, y) int64 48B 0 1 2 3 4 5 In [10]: import xarray as xr In [11]: ds = xr.Dataset(coords=coords)MergeError Traceback (most recent call last) Cell In[11], line 1 ----> 1 ds = xr.Dataset(coords=coords) File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:693, in Dataset.init(self, data_vars, coords, attrs) 690 if isinstance(coords, Dataset): 691 coords = coords._variables --> 693 variables, coord_names, dims, indexes, _ = merge_data_and_coords( 694 data_vars, coords 695 ) 697 self._attrs = dict(attrs) if attrs else None 698 self._close = None File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:422, in merge_data_and_coords(data_vars, coords) 418 coords = create_coords_with_default_indexes(coords, data_vars) 420 # exclude coords from alignment (all variables in a Coordinates object should 421 # already be aligned together) and use coordinates' indexes to align data_vars --> 422 return merge_core( 423 [data_vars, coords], 424 compat="broadcast_equals", 425 join="outer", 426 explicit_coords=tuple(coords), 427 indexes=coords.xindexes, 428 priority_arg=1, 429 skip_align_args=[1], 430 ) File ~/Documents/Work/Code/xarray/xarray/core/merge.py:731, in merge_core(objects, compat, join, combine_attrs, 
priority_arg, explicit_coords, indexes, fill_value, skip_align_args) 729 coord_names.intersection_update(variables) 730 if explicit_coords is not None: --> 731 assert_valid_explicit_coords(variables, dims, explicit_coords) 732 coord_names.update(explicit_coords) 733 for dim, size in dims.items(): File ~/Documents/Work/Code/xarray/xarray/core/merge.py:577, in assert_valid_explicit_coords(variables, dims, explicit_coords) 575 for coord_name in explicit_coords: 576 if coord_name in dims and variables[coord_name].dims != (coord_name,): --> 577 raise MergeError( 578 f"coordinate {coord_name} shares a name with a dataset dimension, but is " 579 "not a 1D variable along that dimension. This is disallowed " 580 "by the xarray data model." 581 ) MergeError: coordinate x shares a name with a dataset dimension, but is not a 1D variable along that dimension. This is disallowed by the xarray data model. ``` MVCE confirmation
Relevant log outputNo response Anything else we need to know?I noticed this whilst working on #8872 Environment
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
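The invariant the MergeError in this issue enforces is: a coordinate that shares a name with a dimension must be exactly 1-D along that dimension. A standalone sketch of that check (the function is illustrative, not xarray's `assert_valid_explicit_coords`):

```python
def assert_valid_coord(name, dims):
    # sketch of the data-model invariant from the error message: a coordinate
    # sharing a name with a dimension must be a 1-D variable along that dim
    if name in dims and dims != (name,):
        raise ValueError(
            f"coordinate {name} shares a name with a dimension, but is "
            f"not a 1D variable along that dimension"
        )

assert_valid_coord("x", ("x",))          # valid: 1-D along its own dimension
assert_valid_coord("temp", ("x", "y"))   # valid: name is not a dimension

caught = False
try:
    assert_valid_coord("x", ("x", "y"))  # invalid: 2-D coordinate named 'x'
except ValueError:
    caught = True
assert caught
```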
2212211084 | PR_kwDOAMm_X85rABMo | 8884 | Forbid invalid Coordinates object | TomNicholas 35968931 | closed | 0 | 2 | 2024-03-28T02:14:01Z | 2024-03-28T14:38:43Z | 2024-03-28T14:38:03Z | MEMBER | 0 | pydata/xarray/pulls/8884 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2119537681 | PR_kwDOAMm_X85mE7Im | 8711 | Opt out of auto creating index variables | TomNicholas 35968931 | closed | 0 | 11 | 2024-02-05T22:04:36Z | 2024-03-26T13:55:16Z | 2024-03-26T13:50:14Z | MEMBER | 0 | pydata/xarray/pulls/8711 | Tries fixing #8704 by cherry-picking from #8124 as @benbovy suggested in https://github.com/pydata/xarray/issues/8704#issuecomment-1926868422
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2117248281 | I_kwDOAMm_X85-MqUZ | 8704 | Currently no way to create a Coordinates object without indexes for 1D variables | TomNicholas 35968931 | closed | 0 | 4 | 2024-02-04T18:30:18Z | 2024-03-26T13:50:16Z | 2024-03-26T13:50:15Z | MEMBER | What happened? The workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263 does not seem to work on What did you expect to happen? I expected to at least be able to use the workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263, i.e.
Minimal Complete Verifiable Example```Python class UnindexableArrayAPI: ... class UnindexableArray: """ Presents like an N-dimensional array but doesn't support changes of any kind, nor can it be coerced into a np.ndarray or pd.Index. """
``` ```python uarr = UnindexableArray(shape=(3,), dtype=np.dtype('int32')) xr.Variable(data=uarr, dims=['x']) # works fine xr.Coordinates({'x': ('x', uarr)}, indexes={}) # works in xarray v2023.08.0
NotImplementedError Traceback (most recent call last) Cell In[59], line 1 ----> 1 xr.Coordinates({'x': ('x', uarr)}, indexes={}) File ~/Documents/Work/Code/xarray/xarray/core/coordinates.py:301, in Coordinates.init(self, coords, indexes) 299 variables = {} 300 for name, data in coords.items(): --> 301 var = as_variable(data, name=name) 302 if var.dims == (name,) and indexes is None: 303 index, index_vars = create_default_index_implicit(var, list(coords)) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:159, in as_variable(obj, name) 152 raise TypeError( 153 f"Variable {name!r}: unable to convert object into a variable without an " 154 f"explicit list of dimensions: {obj!r}" 155 ) 157 if name is not None and name in obj.dims and obj.ndim == 1: 158 # automatically convert the Variable into an Index --> 159 obj = obj.to_index_variable() 161 return obj File ~/Documents/Work/Code/xarray/xarray/core/variable.py:572, in Variable.to_index_variable(self) 570 def to_index_variable(self) -> IndexVariable: 571 """Return this variable as an xarray.IndexVariable""" --> 572 return IndexVariable( 573 self._dims, self._data, self._attrs, encoding=self._encoding, fastpath=True 574 ) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2642, in IndexVariable.init(self, dims, data, attrs, encoding, fastpath) 2640 # Unlike in Variable, always eagerly load values into memory 2641 if not isinstance(self._data, PandasIndexingAdapter): -> 2642 self._data = PandasIndexingAdapter(self._data) File ~/Documents/Work/Code/xarray/xarray/core/indexing.py:1481, in PandasIndexingAdapter.init(self, array, dtype) 1478 def init(self, array: pd.Index, dtype: DTypeLike = None): 1479 from xarray.core.indexes import safe_cast_to_index -> 1481 self.array = safe_cast_to_index(array) 1483 if dtype is None: 1484 self._dtype = get_valid_numpy_dtype(array) File ~/Documents/Work/Code/xarray/xarray/core/indexes.py:469, in safe_cast_to_index(array)
459 emit_user_level_warning(
460 (
461 " Cell In[55], line 63, in UnindexableArray.array(self) 62 def array(self) -> np.ndarray: ---> 63 raise NotImplementedError("UnindexableArrays can't be converted into numpy arrays or pandas Index objects") NotImplementedError: UnindexableArrays can't be converted into numpy arrays or pandas Index objects ``` MVCE confirmation
Relevant log outputNo response Anything else we need to know?Context is #8699 EnvironmentVersions described above |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1945654275 | PR_kwDOAMm_X85c7HL_ | 8319 | Move parallelcompat and chunkmanagers to NamedArray | TomNicholas 35968931 | closed | 0 | 9 | 2023-10-16T16:34:26Z | 2024-02-12T22:09:24Z | 2024-02-12T22:09:24Z | MEMBER | 0 | pydata/xarray/pulls/8319 | @dcherian I got to this point before realizing that simply moving
I personally think that simply moving parallelcompat makes sense so long as you expect people to use chunked cc @andersy005
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2098882374 | I_kwDOAMm_X859GmdG | 8660 | dtype encoding ignored during IO? | TomNicholas 35968931 | closed | 0 | 3 | 2024-01-24T18:50:47Z | 2024-02-05T17:35:03Z | 2024-02-05T17:35:02Z | MEMBER | What happened? When I set the What did you expect to happen? I expected that setting Minimal Complete Verifiable Example ```Python air = xr.tutorial.open_dataset('air_temperature') air['air'].dtype # returns dtype('float32') air['air'].encoding['dtype'] # returns dtype('int16'), which already seems weird air.to_zarr('air.zarr') # I would assume here that the encoding actually does something during IO now if I check the zarr
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
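When an `int16` dtype encoding like the one in this issue is honoured, it is normally applied via CF-style scale/offset packing. A numpy-only sketch of that round trip; the scale, offset, and values here are made-up illustrations, not the tutorial dataset's actual encoding:

```python
import numpy as np

# CF-style packing: stored = round((value - add_offset) / scale_factor),
# cast to the encoded dtype; decoding inverts the transformation
scale_factor, add_offset = 0.01, 273.15
values = np.array([273.15, 280.0, 290.5])

packed = np.round((values - add_offset) / scale_factor).astype(np.int16)
unpacked = packed.astype(np.float64) * scale_factor + add_offset

assert packed.dtype == np.int16
# the round trip is lossy only up to half the scale factor
assert np.allclose(unpacked, values, atol=scale_factor / 2)
```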
2099530269 | I_kwDOAMm_X859JEod | 8665 | Error when broadcasting array API compliant class | TomNicholas 35968931 | closed | 0 | 1 | 2024-01-25T04:11:14Z | 2024-01-26T16:41:31Z | 2024-01-26T16:41:31Z | MEMBER | What happened?Broadcasting fails for array types that strictly follow the array API standard. What did you expect to happen?With a normal numpy array this obviously works fine. Minimal Complete Verifiable Example```Python import numpy.array_api as nxp arr = nxp.asarray([[1, 2, 3], [4, 5, 6]], dtype=np.dtype('float32')) var = xr.Variable(data=arr, dims=['x', 'y']) var.isel(x=0) # this is fine var * var.isel(x=0) # this is not IndexError Traceback (most recent call last) Cell In[31], line 1 ----> 1 var * var.isel(x=0) File ~/Documents/Work/Code/xarray/xarray/core/_typed_ops.py:487, in VariableOpsMixin.mul(self, other) 486 def mul(self, other: VarCompatible) -> Self | T_DataArray: --> 487 return self._binary_op(other, operator.mul) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2406, in Variable._binary_op(self, other, f, reflexive) 2404 other_data, self_data, dims = _broadcast_compat_data(other, self) 2405 else: -> 2406 self_data, other_data, dims = _broadcast_compat_data(self, other) 2407 keep_attrs = _get_keep_attrs(default=False) 2408 attrs = self._attrs if keep_attrs else None File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2922, in _broadcast_compat_data(self, other)
2919 def _broadcast_compat_data(self, other):
2920 if all(hasattr(other, attr) for attr in ["dims", "data", "shape", "encoding"]):
2921 # File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in _broadcast_compat_variables(*variables) 2893 """Create broadcast compatible variables, with the same dimensions. 2894 2895 Unlike the result of broadcast_variables(), some variables may have 2896 dimensions of size 1 instead of the size of the broadcast dimension. 2897 """ 2898 dims = tuple(_unified_dims(variables)) -> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2899, in <genexpr>(.0) 2893 """Create broadcast compatible variables, with the same dimensions. 2894 2895 Unlike the result of broadcast_variables(), some variables may have 2896 dimensions of size 1 instead of the size of the broadcast dimension. 2897 """ 2898 dims = tuple(_unified_dims(variables)) -> 2899 return tuple(var.set_dims(dims) if var.dims != dims else var for var in variables) File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1479, in Variable.set_dims(self, dims, shape) 1477 expanded_data = duck_array_ops.broadcast_to(self.data, tmp_shape) 1478 else: -> 1479 expanded_data = self.data[(None,) * (len(expanded_dims) - self.ndim)] 1481 expanded_var = Variable( 1482 expanded_dims, expanded_data, self._attrs, self._encoding, fastpath=True 1483 ) 1484 return expanded_var.transpose(*dims) File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:555, in Array.getitem(self, key) 550 """ 551 Performs the operation getitem. 552 """ 553 # Note: Only indices required by the spec are allowed. 
See the 554 # docstring of _validate_index --> 555 self._validate_index(key) 556 if isinstance(key, Array): 557 # Indexing self._array with array_api arrays can be erroneous 558 key = key._array File ~/miniconda3/envs/dev3.11/lib/python3.12/site-packages/numpy/array_api/_array_object.py:348, in Array._validate_index(self, key) 344 elif n_ellipsis == 0: 345 # Note boolean masks must be the sole index, which we check for 346 # later on. 347 if not key_has_mask and n_single_axes < self.ndim: --> 348 raise IndexError( 349 f"{self.ndim=}, but the multi-axes index only specifies " 350 f"{n_single_axes} dimensions. If this was intentional, " 351 "add a trailing ellipsis (...) which expands into as many " 352 "slices (:) as necessary - this is what np.ndarray arrays " 353 "implicitly do, but such flat indexing behaviour is not " 354 "specified in the Array API." 355 ) 357 if n_ellipsis == 0: 358 indexed_shape = self.shape IndexError: self.ndim=1, but the multi-axes index only specifies 0 dimensions. If this was intentional, add a trailing ellipsis (...) which expands into as many slices (:) as necessary - this is what np.ndarray arrays implicitly do, but such flat indexing behaviour is not specified in the Array API. ``` MVCE confirmation
Relevant log outputNo response Anything else we need to know?No response Environmentmain branch of xarray, numpy 1.26.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2099622643 | PR_kwDOAMm_X85lBkos | 8668 | Fix unstack method when wrapping array api class | TomNicholas 35968931 | closed | 0 | 0 | 2024-01-25T05:54:38Z | 2024-01-26T16:06:04Z | 2024-01-26T16:06:01Z | MEMBER | 0 | pydata/xarray/pulls/8668 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2099550299 | I_kwDOAMm_X859JJhb | 8666 | Error unstacking array API compliant class | TomNicholas 35968931 | closed | 0 | 0 | 2024-01-25T04:35:09Z | 2024-01-26T16:06:02Z | 2024-01-26T16:06:02Z | MEMBER | What happened?Unstacking fails for array types that strictly follow the array API standard. What did you expect to happen?This obviously works fine with a normal numpy array. Minimal Complete Verifiable Example```Python import numpy.array_api as nxp arr = nxp.asarray([[1, 2, 3], [4, 5, 6]], dtype=np.dtype('float32')) da = xr.DataArray( arr, coords=[("x", ["a", "b"]), ("y", [0, 1, 2])], ) da stacked = da.stack(z=("x", "y")) stacked.indexes["z"] stacked.unstack() AttributeError Traceback (most recent call last) Cell In[65], line 8 6 stacked = da.stack(z=("x", "y")) 7 stacked.indexes["z"] ----> 8 roundtripped = stacked.unstack() 9 arr.identical(roundtripped) File ~/Documents/Work/Code/xarray/xarray/util/deprecation_helpers.py:115, in _deprecate_positional_args.<locals>._decorator.<locals>.inner(args, kwargs) 111 kwargs.update({name: arg for name, arg in zip_args}) 113 return func(args[:-n_extra_args], kwargs) --> 115 return func(*args, kwargs) File ~/Documents/Work/Code/xarray/xarray/core/dataarray.py:2913, in DataArray.unstack(self, dim, fill_value, sparse) 2851 @_deprecate_positional_args("v2023.10.0") 2852 def unstack( 2853 self, (...) 2857 sparse: bool = False, 2858 ) -> Self: 2859 """ 2860 Unstack existing dimensions corresponding to MultiIndexes into 2861 multiple new dimensions. (...) 
2911 DataArray.stack 2912 """ -> 2913 ds = self._to_temp_dataset().unstack(dim, fill_value=fill_value, sparse=sparse) 2914 return self._from_temp_dataset(ds) File ~/Documents/Work/Code/xarray/xarray/util/deprecation_helpers.py:115, in _deprecate_positional_args.<locals>._decorator.<locals>.inner(args, kwargs) 111 kwargs.update({name: arg for name, arg in zip_args}) 113 return func(args[:-n_extra_args], kwargs) --> 115 return func(*args, kwargs) File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:5581, in Dataset.unstack(self, dim, fill_value, sparse) 5579 for d in dims: 5580 if needs_full_reindex: -> 5581 result = result._unstack_full_reindex( 5582 d, stacked_indexes[d], fill_value, sparse 5583 ) 5584 else: 5585 result = result._unstack_once(d, stacked_indexes[d], fill_value, sparse) File ~/Documents/Work/Code/xarray/xarray/core/dataset.py:5474, in Dataset._unstack_full_reindex(self, dim, index_and_vars, fill_value, sparse) 5472 if name not in index_vars: 5473 if dim in var.dims: -> 5474 variables[name] = var.unstack({dim: new_dim_sizes}) 5475 else: 5476 variables[name] = var File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1684, in Variable.unstack(self, dimensions, **dimensions_kwargs) 1682 result = self 1683 for old_dim, dims in dimensions.items(): -> 1684 result = result._unstack_once_full(dims, old_dim) 1685 return result File ~/Documents/Work/Code/xarray/xarray/core/variable.py:1574, in Variable._unstack_once_full(self, dim, old_dim) 1571 reordered = self.transpose(*dim_order) 1573 new_shape = reordered.shape[: len(other_dims)] + new_dim_sizes -> 1574 new_data = reordered.data.reshape(new_shape) 1575 new_dims = reordered.dims[: len(other_dims)] + new_dim_names 1577 return type(self)( 1578 new_dims, new_data, self._attrs, self._encoding, fastpath=True 1579 ) AttributeError: 'Array' object has no attribute 'reshape' ``` MVCE confirmation
Relevant log outputNo response Anything else we need to know?It fails on the We do in fact have an array API-compatible version of Environmentmain branch of xarray, numpy 1.26.0 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
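The `AttributeError: 'Array' object has no attribute 'reshape'` arises because the array API standard exposes reshape as a namespace function, `xp.reshape(x, shape)`, rather than guaranteeing a `.reshape` method on the array object. With plain numpy both spellings exist and agree, which is why the bug only surfaces for strictly compliant wrappers:

```python
import numpy as np

arr = np.arange(6.0)

a = arr.reshape((2, 3))      # method form: numpy-specific convenience
b = np.reshape(arr, (2, 3))  # function form: what the array API standard specifies

assert np.array_equal(a, b)
assert b.shape == (2, 3)
```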
2098535717 | PR_kwDOAMm_X85k94wv | 8655 | Small improvement to HOW_TO_RELEASE.md | TomNicholas 35968931 | closed | 0 | 1 | 2024-01-24T15:35:16Z | 2024-01-24T21:46:02Z | 2024-01-24T21:46:01Z | MEMBER | 0 | pydata/xarray/pulls/8655 | Clarify step 8. by pointing to where the ReadTheDocs build actually is |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2092346228 | PR_kwDOAMm_X85ko-Y2 | 8632 | Pin sphinx-book-theme to 1.0.1 to try to deal with #8619 | TomNicholas 35968931 | closed | 0 | 2 | 2024-01-21T02:18:49Z | 2024-01-23T20:16:13Z | 2024-01-23T18:28:35Z | MEMBER | 0 | pydata/xarray/pulls/8632 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2086704542 | PR_kwDOAMm_X85kVyF6 | 8617 | Release summary for release v2024.01.0 | TomNicholas 35968931 | closed | 0 | 1 | 2024-01-17T18:02:29Z | 2024-01-17T21:23:45Z | 2024-01-17T19:21:11Z | MEMBER | 0 | pydata/xarray/pulls/8617 | Someone give this a thumbs up if it looks good
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8617/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1519552711 | PR_kwDOAMm_X85GqAro | 7418 | Import datatree in xarray? | TomNicholas 35968931 | closed | 0 | 18 | 2023-01-04T20:48:09Z | 2023-12-22T17:38:04Z | 2023-12-22T17:38:04Z | MEMBER | 0 | pydata/xarray/pulls/7418 | I want datatree to live in xarray main, as right now it's in a separate package but imports many xarray internals. This presents a few questions: 1) At what stage is datatree "ready" to moved in here? At what stage should it become encouraged public API? 2) What's a good way to slowly roll the feature out? 3) How do I decrease the bus factor on datatree's code? Can I get some code reviews during the merging process? :pray: 4) Should I make a new CI environment just for testing datatree stuff? Today @jhamman and @keewis suggested for now I make it so that you can @pydata/xarray what do you think? Any other thoughts about best practices when moving a good few thousand lines of code into xarray?
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7418/reactions", "total_count": 6, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 } |
xarray 13221727 | pull | |||||
1820788594 | PR_kwDOAMm_X85WW40r | 8019 | Generalize cumulative reduction (scan) to non-dask types | TomNicholas 35968931 | closed | 0 | 2 | 2023-07-25T17:22:07Z | 2023-12-18T19:30:18Z | 2023-12-18T19:30:18Z | MEMBER | 0 | pydata/xarray/pulls/8019 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8019/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1048697792 | PR_kwDOAMm_X84uSksS | 5961 | [Experimental] Refactor Dataset to store variables in a manifest | TomNicholas 35968931 | closed | 0 | 7 | 2021-11-09T14:51:03Z | 2023-12-06T17:38:53Z | 2023-12-06T17:38:52Z | MEMBER | 0 | pydata/xarray/pulls/5961 | This PR is part of an experiment to see how to integrate a What it does is refactor ("Manifest" in the old sense, of a noun meaning "a document giving comprehensive details of a ship and its cargo and other contents")
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1084220684 | PR_kwDOAMm_X84wDPg5 | 6086 | Type protocol for internal variable mapping | TomNicholas 35968931 | closed | 0 | 9 | 2021-12-19T23:32:04Z | 2023-12-06T17:20:48Z | 2023-12-06T17:19:30Z | MEMBER | 1 | pydata/xarray/pulls/6086 | In #5961 and #6083 I've been experimenting extending I've been writing out new storage class implementations in those PRs, but on Friday @shoyer suggested that I could instead simply alter the allowed type for The idea is to define a protocol in xarray which specifies the structural subtyping behaviour of any custom variable storage class that I might want to set as In practice this means writing a protocol which describes the type behaviour of all the methods on So far I've written out a 1) The typing behaviour of overloaded methods, specifically
2) Making functions which expect a
3) I'm expecting to get a runtime problem whenever we Once that passes mypy I will write a test that checks that if I define my own custom variable storage class I can @max-sixty this is entirely a typing challenge, so I'm tagging you in case you're interested :)
EDIT: Also using |
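The protocol approach described in this PR body can be sketched with `typing.Protocol`. This is a hypothetical illustration of structural subtyping for a custom variable mapping, not xarray's actual protocol definition (the names `VariableMapping` and `ManifestVariables` are made up for this sketch):

```python
from collections.abc import Iterator
from typing import Protocol, runtime_checkable

# Hypothetical structural type for a custom variable storage class;
# the real protocol in the PR covers many more Mapping methods.
@runtime_checkable
class VariableMapping(Protocol):
    def __getitem__(self, key: str): ...
    def __iter__(self) -> Iterator[str]: ...
    def __len__(self) -> int: ...

class ManifestVariables:
    """Custom storage class satisfying the protocol purely structurally —
    no inheritance from VariableMapping is required."""
    def __init__(self, data: dict):
        self._data = dict(data)

    def __getitem__(self, key: str):
        return self._data[key]

    def __iter__(self) -> Iterator[str]:
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

mv = ManifestVariables({"temperature": [1.0, 2.0]})
assert isinstance(mv, VariableMapping)  # runtime structural check
```

This is why the PR calls it "entirely a typing challenge": any class with the right methods passes, with no change to the class itself.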
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2027528985 | PR_kwDOAMm_X85hQBHP | 8525 | Remove PR labeler bot | TomNicholas 35968931 | closed | 0 | 3 | 2023-12-06T02:31:56Z | 2023-12-06T02:45:46Z | 2023-12-06T02:45:41Z | MEMBER | 0 | pydata/xarray/pulls/8525 | RIP
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1974681146 | PR_kwDOAMm_X85edMm- | 8404 | Hypothesis strategy for generating Variable objects | TomNicholas 35968931 | closed | 0 | 6 | 2023-11-02T17:04:03Z | 2023-12-05T22:45:57Z | 2023-12-05T22:45:57Z | MEMBER | 0 | pydata/xarray/pulls/8404 | Breaks out just the part of #6908 needed for generating arbitrary EDIT: Check out this test which performs a mean on any subset of any Variable object! ```python In [36]: from xarray.testing.strategies import variables In [37]: variables().example() <xarray.Variable (ĭ: 3)> array([-2.22507386e-313-6.62447795e+016j, nan-6.46207519e+185j, -2.22507386e-309+3.33333333e-001j]) ``` @andersy005 @maxrjones @jhamman I thought this might be useful for the @keewis and @Zac-HD sorry for letting that PR languish for literally a year :sweat_smile: This PR addresses your feedback about accepting a callable that returns a strategy generating arrays. That suggestion makes some things a bit more complex in user code but actually allows me to simplify the internals of the
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8404/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
2017285297 | PR_kwDOAMm_X85gtObP | 8491 | Warn on repeated dimension names during construction | TomNicholas 35968931 | closed | 0 | 13 | 2023-11-29T19:30:51Z | 2023-12-01T01:37:36Z | 2023-12-01T00:40:18Z | MEMBER | 0 | pydata/xarray/pulls/8491 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8491/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
552500673 | MDU6SXNzdWU1NTI1MDA2NzM= | 3709 | Feature Proposal: `xarray.interactive` module | TomNicholas 35968931 | closed | 0 | 36 | 2020-01-20T20:42:22Z | 2023-10-27T18:24:49Z | 2021-07-29T15:37:21Z | MEMBER | Feature proposal:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3709/reactions", "total_count": 6, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1806973709 | PR_kwDOAMm_X85VoNVM | 7992 | Docs page on interoperability | TomNicholas 35968931 | closed | 0 | 3 | 2023-07-17T05:02:29Z | 2023-10-26T16:08:56Z | 2023-10-26T16:04:33Z | MEMBER | 0 | pydata/xarray/pulls/7992 | Builds upon #7991 by adding a page to the internals enumerating all the different ways in which xarray is interoperable. Would be nice if https://github.com/pydata/xarray/pull/6975 were merged so that I could link to it from this new page.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1036473974 | PR_kwDOAMm_X84tsaL3 | 5900 | Add .chunksizes property | TomNicholas 35968931 | closed | 0 | 2 | 2021-10-26T15:51:09Z | 2023-10-20T16:00:15Z | 2021-10-29T18:12:22Z | MEMBER | 0 | pydata/xarray/pulls/5900 | Adds a new Supersedes #5846 because this PR is backwards-compatible.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5900/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1083507645 | PR_kwDOAMm_X84wBDeq | 6083 | Manifest as variables attribute | TomNicholas 35968931 | closed | 0 | 2 | 2021-12-17T18:14:26Z | 2023-09-14T15:37:38Z | 2023-09-14T15:37:37Z | MEMBER | 1 | pydata/xarray/pulls/6083 | Another attempt like #5961 @shoyer
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
663235664 | MDU6SXNzdWU2NjMyMzU2NjQ= | 4243 | Manually drop DataArray from memory? | TomNicholas 35968931 | closed | 0 | 3 | 2020-07-21T18:54:40Z | 2023-09-12T16:17:12Z | 2023-09-12T16:17:12Z | MEMBER | Is it possible to deliberately drop data associated with a particular DataArray from memory? Obviously Also does calling python's built-in garbage collector (i.e. The context of this question is that I'm trying to resave some massive variables (~65GB each) that were loaded from thousands of files into just a few files for each variable. I would love to use @rabernat 's new rechunker package but I'm not sure how easily I can convert my current netCDF data to Zarr, and I'm interested in this question no matter how I end up solving the problem. I don't currently have a particularly good understanding of file I/O and memory management in xarray, but would like to improve it. Can anyone recommend a tool I can use to answer this kind of question myself on my own machine? I suppose it would need to be able to tell me the current memory usage of specific objects, not just the total memory usage. (@johnomotani you might be interested) |
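The mechanics this issue asks about — whether `del` plus the garbage collector actually releases memory — can be illustrated in plain Python, independent of xarray. A minimal sketch using `weakref` to confirm an object is reclaimed (the `BigData` class is a made-up stand-in for a loaded array):

```python
import gc
import weakref

class BigData:
    """Stand-in for a loaded array; once nothing else holds a reference
    to the underlying data, CPython can reclaim it."""
    def __init__(self, n: int):
        self.values = list(range(n))

obj = BigData(1_000)
ref = weakref.ref(obj)   # weak reference does not keep the object alive

del obj                  # drop the only strong reference
gc.collect()             # prompt collection (CPython usually frees on del)

assert ref() is None     # the object — and its data — has been released
```

The caveat for real xarray objects is the same as for any Python object: memory is only freed if no other variable, cache, or open file handle still references the underlying array.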
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1806949831 | PR_kwDOAMm_X85VoH2o | 7991 | Docs page on internal design | TomNicholas 35968931 | closed | 0 | 1 | 2023-07-17T04:46:55Z | 2023-09-08T15:41:32Z | 2023-09-08T15:41:32Z | MEMBER | 0 | pydata/xarray/pulls/7991 | Adds a new page to the xarray internals documentation giving an overview of the internal design of xarray. This should be helpful for xarray contributors and for developers of extensions because nowhere in the docs does it really explain how
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7991/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1368740629 | PR_kwDOAMm_X84-uWtE | 7019 | Generalize handling of chunked array types | TomNicholas 35968931 | closed | 0 | 30 | 2022-09-10T22:02:18Z | 2023-07-24T20:40:29Z | 2023-05-18T17:34:31Z | MEMBER | 0 | pydata/xarray/pulls/7019 | Initial attempt to get cubed working within xarray, as an alternative to dask.
I've added a ~~At the moment it should work except for an import error that I don't understand, see below.~~ For cubed to work at all with this PR we would also need:
- [x] Cubed to expose the correct array type consistently https://github.com/tomwhite/cubed/issues/123
- [x] A cubed version of To-dos for me on this PR:
- [x] Re-route To complete this project more generally we should also:
- [ ] Have cc @tomwhite |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7019/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 } |
xarray 13221727 | pull | |||||
1810167498 | PR_kwDOAMm_X85VzHaS | 7999 | Core team member guide | TomNicholas 35968931 | closed | 0 | 4 | 2023-07-18T15:26:01Z | 2023-07-21T14:51:57Z | 2023-07-21T13:48:26Z | MEMBER | 0 | pydata/xarray/pulls/7999 | Adds a guide for core developers of xarray. Mostly adapted from napari's core dev guide, but with some extra sections and ideas from the pandas maintenance guide. @pydata/xarray please give your feedback on this! If you prefer to give feedback in a non-public channel for whatever reason then please use the private core team email.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7999/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1801849622 | I_kwDOAMm_X85rZgsW | 7982 | Use Meilisearch in our docs | TomNicholas 35968931 | closed | 0 | 1 | 2023-07-12T22:29:45Z | 2023-07-19T19:49:53Z | 2023-07-19T19:49:53Z | MEMBER | Is your feature request related to a problem?Just saw this cool search thing for sphinx in a lightning talk at SciPy called Meilisearch Cc @dcherian Describe the solution you'd likeRead about it here https://sphinxdocs.ansys.com/version/stable/user_guide/options.html Describe alternatives you've consideredNo response Additional contextNo response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1807782455 | I_kwDOAMm_X85rwJI3 | 7996 | Stable docs build not showing latest changes after release | TomNicholas 35968931 | closed | 0 | 3 | 2023-07-17T13:24:58Z | 2023-07-17T20:48:19Z | 2023-07-17T20:48:19Z | MEMBER | What happened?I released xarray version v2023.07.0 last night, but I'm not seeing changes to the documentation reflected in the What did you expect to happen?No response Minimal Complete Verifiable ExampleNo response MVCE confirmation
Relevant log outputNo response Anything else we need to know?No response Environment |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1807044282 | PR_kwDOAMm_X85VodDN | 7993 | Update whats-new.rst for new release | TomNicholas 35968931 | closed | 0 | 0 | 2023-07-17T06:03:19Z | 2023-07-17T06:03:43Z | 2023-07-17T06:03:42Z | MEMBER | 0 | pydata/xarray/pulls/7993 | Needed because I started the release process earlier this week by writing a whatsnew, that apparently got merged, but the release hasn't been issued since. I'll self-merge this and release now. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1799476089 | PR_kwDOAMm_X85VO0Wz | 7979 | Release summary for v2023.07.0 | TomNicholas 35968931 | closed | 0 | 0 | 2023-07-11T17:59:28Z | 2023-07-13T16:33:43Z | 2023-07-13T16:33:43Z | MEMBER | 0 | pydata/xarray/pulls/7979 | { "url": "https://api.github.com/repos/pydata/xarray/issues/7979/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1753401384 | PR_kwDOAMm_X85Szs7X | 7911 | Duck array documentation improvements | TomNicholas 35968931 | closed | 0 | 0 | 2023-06-12T19:10:41Z | 2023-07-10T09:36:05Z | 2023-06-29T14:39:22Z | MEMBER | 0 | pydata/xarray/pulls/7911 | Draft improvements to the user guide page on using duck arrays. Intended as part of the scipy tutorial effort, though I wasn't sure whether to concentrate on content in the main xarray docs or the tutorial repo. (I wrote this on a train without enough internet to update my conda environment so I will come back and fix anything that doesn't run.)
cc @dcherian and @keewis |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1779880070 | PR_kwDOAMm_X85UMTE7 | 7951 | Chunked array docs | TomNicholas 35968931 | closed | 0 | 3 | 2023-06-28T23:01:42Z | 2023-07-05T20:33:33Z | 2023-07-05T20:08:19Z | MEMBER | 0 | pydata/xarray/pulls/7951 | Builds upon #7911
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1786830423 | PR_kwDOAMm_X85Uj4NA | 7960 | Update minimum version of typing extensions in pre-commit | TomNicholas 35968931 | closed | 0 | 1 | 2023-07-03T21:27:40Z | 2023-07-05T19:09:04Z | 2023-07-05T15:43:40Z | MEMBER | 0 | pydata/xarray/pulls/7960 | Attempt to fix the pre-commit build failure I keep seeing in the CI (e.g. this failure from https://github.com/pydata/xarray/pull/7881) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1773373878 | PR_kwDOAMm_X85T2T_2 | 7941 | Allow cubed arrays to be passed to flox groupby | TomNicholas 35968931 | closed | 0 | 0 | 2023-06-25T16:48:56Z | 2023-06-26T15:28:06Z | 2023-06-26T15:28:03Z | MEMBER | 0 | pydata/xarray/pulls/7941 | Generalizes a small check for chunked arrays in groupby so it now allows cubed arrays through to flox rather than just dask arrays. Does not actually mean that flox groupby will work with cubed yet though, see https://github.com/tomwhite/cubed/issues/223 and https://github.com/xarray-contrib/flox/issues/224
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1768095127 | PR_kwDOAMm_X85Tkubk | 7934 | Release summary for v2023.06.0 | TomNicholas 35968931 | closed | 0 | 4 | 2023-06-21T17:34:29Z | 2023-06-23T03:02:12Z | 2023-06-23T03:02:11Z | MEMBER | 0 | pydata/xarray/pulls/7934 | Release summary: This release adds features to For some reason when I try to use
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1716200316 | PR_kwDOAMm_X85Q1k5D | 7847 | Array API fixes for astype | TomNicholas 35968931 | closed | 0 | 0 | 2023-05-18T20:09:32Z | 2023-05-19T15:11:17Z | 2023-05-19T15:11:16Z | MEMBER | 0 | pydata/xarray/pulls/7847 | Follows on from #7067 and #6804, ensuring that we call A bit of a pain to test in isolation because I made the changes so that xarray's .pad would work with array-API-conforming libraries, but actually (This PR replaces #7815, as making a new branch was easier than merging/rebasing with all the changes in #7019.)
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1716345200 | PR_kwDOAMm_X85Q2EmD | 7849 | Whats new for release of v2023.05.0 | TomNicholas 35968931 | closed | 0 | 0 | 2023-05-18T22:30:32Z | 2023-05-19T02:18:03Z | 2023-05-19T02:17:55Z | MEMBER | 0 | pydata/xarray/pulls/7849 | Summary: This release adds some new methods and operators, updates our deprecation policy for python versions, fixes some bugs with groupby, and introduces experimental support for alternative chunked parallel array computation backends via a new plugin system! |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1695244129 | PR_kwDOAMm_X85PvJSS | 7815 | Array API fixes for astype | TomNicholas 35968931 | closed | 0 | 2 | 2023-05-04T04:33:52Z | 2023-05-18T20:10:48Z | 2023-05-18T20:10:43Z | MEMBER | 0 | pydata/xarray/pulls/7815 | While it's common for duck arrays to have a Builds on top of #7019 with just one extra commit to separate out this issue.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1308715638 | I_kwDOAMm_X85OAWp2 | 6807 | Alternative parallel execution frameworks in xarray | TomNicholas 35968931 | closed | 0 | 12 | 2022-07-18T21:48:10Z | 2023-05-18T17:34:33Z | 2023-05-18T17:34:33Z | MEMBER | Is your feature request related to a problem?Since early on in the project, xarray has supported wrapping Currently though the only way to parallelize array operations with xarray "automatically" is to use dask. (You could use xarray-beam or other options too but they don't "automatically" generate the computation for you like dask does.) When dask is the only type of parallel framework exposing an array-like API then there is no need for flexibility, but now we have nascent projects like cubed to consider too. @tomwhite Describe the solution you'd likeRefactor the internals so that dask is one option among many, and that any newer options can plug in in an extensible way. In particular cubed deliberately uses the same API as I would like to see xarray able to wrap any array-like object which offers this set of methods / functions, and call the corresponding version of that method for the correct library (i.e. dask vs cubed) automatically. That way users could try different parallel execution frameworks simply via a switch like
Describe alternatives you've consideredIf we leave it the way it is now then xarray will not be truly flexible in this respect. Any library can wrap (or subclass if they are really brave) xarray objects to provide parallelism but that's not the same level of flexibility. Additional contextPR about making xarray able to wrap objects conforming to the new array API standard cc @shoyer @rabernat @dcherian @keewis |
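The core idea of the issue — calling "the corresponding version of that method for the correct library automatically" — is type-based dispatch. A minimal, hypothetical registry sketch of that pattern (not xarray's actual implementation, which later landed as "chunk managers" in #7019):

```python
# Hypothetical sketch: dispatch chunked-array operations to the right
# parallel framework based on the array's type.
_chunk_managers: dict[type, str] = {}

def register_chunk_manager(array_type: type, manager: str) -> None:
    """Register a parallel framework for a given chunked array type."""
    _chunk_managers[array_type] = manager

def get_chunk_manager(arr) -> str:
    """Look up which framework should handle this array."""
    for typ, manager in _chunk_managers.items():
        if isinstance(arr, typ):
            return manager
    raise TypeError(f"no chunk manager registered for {type(arr)!r}")

class FakeDaskArray(list):
    """Stand-in for dask.array.Array in this sketch."""

register_chunk_manager(FakeDaskArray, "dask")

arr = FakeDaskArray([1, 2, 3])
assert get_chunk_manager(arr) == "dask"
```

With a registry like this, adding support for a new framework such as cubed is just one more `register_chunk_manager` call — no special-casing inside the core code.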
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6807/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 2, "eyes": 1 } |
completed | xarray 13221727 | issue | ||||||
1615570467 | PR_kwDOAMm_X85LlkLA | 7595 | Clarifications in contributors guide | TomNicholas 35968931 | closed | 0 | 5 | 2023-03-08T16:35:45Z | 2023-03-13T17:55:43Z | 2023-03-13T17:51:24Z | MEMBER | 0 | pydata/xarray/pulls/7595 | Add suggestions @paigem made in #7439, as well as fix a few small formatting things and broken links. I would like to merge this so that it can be helpful for the new contributors we will hopefully get through Outreachy.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7595/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1579829674 | PR_kwDOAMm_X85JuG-F | 7518 | State which variables not present in drop vars error message | TomNicholas 35968931 | closed | 0 | 0 | 2023-02-10T15:00:35Z | 2023-03-09T20:47:47Z | 2023-03-09T20:47:47Z | MEMBER | 0 | pydata/xarray/pulls/7518 | Makes the error message more informative
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1573538162 | PR_kwDOAMm_X85JY_1l | 7509 | Update apply_ufunc output_sizes error message | TomNicholas 35968931 | closed | 0 | 0 | 2023-02-07T01:35:08Z | 2023-02-07T15:45:54Z | 2023-02-07T05:01:36Z | MEMBER | 0 | pydata/xarray/pulls/7509 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1470025851 | PR_kwDOAMm_X85D_b_W | 7338 | Docs: add example of writing and reading groups to netcdf | TomNicholas 35968931 | closed | 0 | 0 | 2022-11-30T18:01:32Z | 2022-12-01T16:24:08Z | 2022-12-01T16:24:04Z | MEMBER | 0 | pydata/xarray/pulls/7338 |
@dcherian |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1426383543 | I_kwDOAMm_X85VBOK3 | 7232 | ds.Coarsen.construct demotes non-dimensional coordinates to variables | TomNicholas 35968931 | closed | 0 | 0 | 2022-10-27T23:39:32Z | 2022-10-28T17:46:51Z | 2022-10-28T17:46:51Z | MEMBER | What happened?
What did you expect to happen?All variables that were coordinates before the coarsen.construct stay as coordinates afterwards. Minimal Complete Verifiable Example```Python In [3]: da = xr.DataArray(np.arange(24), dims=["time"]) ...: da = da.assign_coords(day=365 * da) ...: ds = da.to_dataset(name="T") In [4]: ds Out[4]: <xarray.Dataset> Dimensions: (time: 24) Coordinates: day (time) int64 0 365 730 1095 1460 1825 ... 6935 7300 7665 8030 8395 Dimensions without coordinates: time Data variables: T (time) int64 0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 18 19 20 21 22 23 In [5]: ds.coarsen(time=12).construct(time=("year", "month")) Out[5]: <xarray.Dataset> Dimensions: (year: 2, month: 12) Coordinates: day (year, month) int64 0 365 730 1095 1460 ... 7300 7665 8030 8395 Dimensions without coordinates: year, month Data variables: T (year, month) int64 0 1 2 3 4 5 6 7 8 ... 16 17 18 19 20 21 22 23 ``` MVCE confirmation
Relevant log outputNo response Anything else we need to know?No response Environment
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1426387580 | PR_kwDOAMm_X85BtKwb | 7233 | Ensure Coarsen.construct keeps all coords | TomNicholas 35968931 | closed | 0 | 0 | 2022-10-27T23:46:49Z | 2022-10-28T17:46:50Z | 2022-10-28T17:46:50Z | MEMBER | 0 | pydata/xarray/pulls/7233 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1417378270 | PR_kwDOAMm_X85BPGqR | 7192 | Example using Coarsen.construct to split map into regions | TomNicholas 35968931 | closed | 0 | 3 | 2022-10-20T22:14:31Z | 2022-10-21T18:14:59Z | 2022-10-21T18:14:56Z | MEMBER | 0 | pydata/xarray/pulls/7192 | I realised there is very little documentation on Unsure whether it should instead live in the page on reshaping and reorganising data though, as it is essentially a reshape operation. EDIT: Now on the reshape page
cc @jbusecke @paigem |
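The reshape that `Coarsen.construct` performs — for example splitting a `time` dimension into `(year, month)` as in the docs example — can be sketched with plain numpy to show what happens to the underlying data:

```python
import numpy as np

# 24 monthly values -> a (year, month) grid, mirroring
# ds.coarsen(time=12).construct(time=("year", "month"))
time = np.arange(24)
year_month = time.reshape(2, 12)

assert year_month.shape == (2, 12)
assert int(year_month[1, 0]) == 12  # first month of the second year
```

The xarray version does the same reshape but also relabels dimensions and reshapes the coordinates to match, which is why it doubles as a way to split a map into regions.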
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7192/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1391319978 | PR_kwDOAMm_X84_4UWs | 7107 | 2022.09.0 release summary | TomNicholas 35968931 | closed | 0 | 0 | 2022-09-29T18:34:02Z | 2022-09-29T21:57:43Z | 2022-09-29T21:54:14Z | MEMBER | 0 | pydata/xarray/pulls/7107 | Thumbs up if it looks fine to you |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7107/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1386723044 | PR_kwDOAMm_X84_pBKj | 7090 | Fill in missing docstrings for ndarray properties | TomNicholas 35968931 | closed | 0 | 0 | 2022-09-26T21:05:37Z | 2022-09-26T22:24:13Z | 2022-09-26T22:05:34Z | MEMBER | 0 | pydata/xarray/pulls/7090 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1370416843 | PR_kwDOAMm_X84-z6DG | 7023 | Remove dask_array_type checks | TomNicholas 35968931 | closed | 0 | 3 | 2022-09-12T19:31:04Z | 2022-09-13T00:35:25Z | 2022-09-13T00:35:22Z | MEMBER | 0 | pydata/xarray/pulls/7023 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7023/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
592312709 | MDExOlB1bGxSZXF1ZXN0Mzk3MzIwNzgx | 3925 | sel along 1D non-index coordinates | TomNicholas 35968931 | closed | 0 | 13 | 2020-04-02T02:23:56Z | 2022-09-07T14:31:58Z | 2022-09-07T14:31:58Z | MEMBER | 0 | pydata/xarray/pulls/3925 | As a user, I find not being able to select along one-dimensional non-dimensional coordinates actually comes up fairly often. I think it's quite common to use multiple coordinates to be able to choose between plotting in different coordinate systems (or units) easily. I've tried to close #2028 in the simplest (but also least efficient) way which was suggested by @shoyer (suggestion 1 here). This should be temporary anyway: it will get superseded by the explicit indexes refactor. If there is another approach which would achieve the same functionality as this PR but actually bring us closer to #1603 then I would be happy to take a stab at that instead. I don't really know what to do about the failing test in groupby arithmetic - I think it's caused here but I'm not sure what to replace the triple error type catching (?!) with.
|
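The "simplest (but also least efficient)" approach this PR takes amounts to a positional lookup: find the integer positions where the non-dimension coordinate matches the requested label, then index by position. A hedged numpy sketch of that idea (the function name `sel_by_coord` is made up for illustration):

```python
import numpy as np

# data along dimension "x", with a 1-D non-dimension coordinate "lat" on "x"
data = np.array([10.0, 20.0, 30.0, 40.0])
lat = np.array([-45.0, 0.0, 45.0, 90.0])

def sel_by_coord(values: np.ndarray, coord: np.ndarray, label) -> np.ndarray:
    """Label-based selection via a non-index coordinate,
    mimicking what da.sel(lat=45.0) would need to do."""
    (positions,) = np.nonzero(coord == label)  # integer positions matching label
    if positions.size == 0:
        raise KeyError(label)
    return values[positions]

assert sel_by_coord(data, lat, 45.0).tolist() == [30.0]
```

This is O(n) per lookup, which is exactly why the PR notes it should be superseded by the explicit-indexes refactor, where coordinates can carry real (hashed) indexes.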
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3925/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1338010273 | PR_kwDOAMm_X849IeCt | 6913 | Fix core team page | TomNicholas 35968931 | closed | 0 | 0 | 2022-08-13T17:05:51Z | 2022-08-15T13:39:47Z | 2022-08-15T13:39:43Z | MEMBER | 0 | pydata/xarray/pulls/6913 | Adds missing core team members @alexamici and @aurghs to docs, as well as fixing @benbovy 's username. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6913/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1337587854 | PR_kwDOAMm_X849HJCV | 6912 | Automatic PR labeler | TomNicholas 35968931 | closed | 0 | 2 | 2022-08-12T18:40:27Z | 2022-08-12T19:52:49Z | 2022-08-12T19:47:19Z | MEMBER | 0 | pydata/xarray/pulls/6912 | GH action to automatically label new PRs according to which files they touch. Idea stolen from dask, see https://github.com/dask/dask/pull/7506 . Their PR labelling by file/module is specified here. (My first use of this bot, so this might well contain a mistake.) @max-sixty you will probably enjoy this extra automation :robot:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1078842125 | PR_kwDOAMm_X84vxops | 6076 | Add labels to dataset diagram | TomNicholas 35968931 | closed | 0 | 0 | 2021-12-13T18:21:02Z | 2022-07-11T14:49:40Z | 2022-01-03T16:58:51Z | MEMBER | 0 | pydata/xarray/pulls/6076 | While making a talk I made a version of our data structure diagram but with added labels along the bottom: I think this helps clarify the relationship between I just made it quickly in inkscape by adding to the previous png - I only realised afterwards that the original was made in LaTeX, so maybe it would be better to add labels directly to that code? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6076/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
936313924 | MDExOlB1bGxSZXF1ZXN0NjgzMDY3OTU5 | 5571 | Rely on NEP-18 to dispatch to dask in duck_array_ops | TomNicholas 35968931 | closed | 0 | 20 | 2021-07-03T19:24:33Z | 2022-07-09T18:12:05Z | 2021-09-29T17:48:40Z | MEMBER | 0 | pydata/xarray/pulls/5571 | Removes special-casing for dask in Probably actually don't need the Only problem is that I seem to have broken one (parameterized) test: ```python @pytest.mark.parametrize("dim_num", [1, 2]) @pytest.mark.parametrize("dtype", [float, int, np.float32, np.bool_]) @pytest.mark.parametrize("dask", [False, True]) @pytest.mark.parametrize("func", ["sum", "prod"]) @pytest.mark.parametrize("aggdim", [None, "x"]) @pytest.mark.parametrize("contains_nan", [True, False]) @pytest.mark.parametrize("skipna", [True, False, None]) def test_min_count(dim_num, dtype, dask, func, aggdim, contains_nan, skipna): if dask and not has_dask: pytest.skip("requires dask")
/home/tegn500/Documents/Work/Code/xarray/xarray/tests/test_duck_array_ops.py:578: /home/tegn500/Documents/Work/Code/xarray/xarray/core/common.py:56: in wrapped_func return self.reduce(func, dim, axis, skipna=skipna, kwargs) /home/tegn500/Documents/Work/Code/xarray/xarray/core/dataarray.py:2638: in reduce var = self.variable.reduce(func, dim, axis, keep_attrs, keepdims, kwargs) /home/tegn500/Documents/Work/Code/xarray/xarray/core/variable.py:1725: in reduce data = func(self.data, kwargs) /home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:328: in f return func(values, axis=axis, kwargs) /home/tegn500/Documents/Work/Code/xarray/xarray/core/nanops.py:106: in nansum a, mask = _replace_nan(a, 0) /home/tegn500/Documents/Work/Code/xarray/xarray/core/nanops.py:23: in _replace_nan mask = isnull(a) /home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:83: in isnull return pandas_isnull(data) /home/tegn500/Documents/Work/Code/xarray/xarray/core/duck_array_ops.py:40: in f return getattr(module, name)(args, kwargs) /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/pandas/core/dtypes/missing.py:127: in isna return _isna(obj) /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/pandas/core/dtypes/missing.py:166: in _isna return _isna_ndarraylike(np.asarray(obj), inf_as_na=inf_as_na) /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/numpy/core/_asarray.py:102: in asarray return array(a, dtype, copy=False, order=order) /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:1502: in array x = self.compute() /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/base.py:285: in compute (result,) = compute(self, traverse=False, kwargs) /home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/base.py:567: in compute results = schedule(dsk, keys, *kwargs) self = <xarray.tests.CountingScheduler object at 0x7f0804db2310> dsk = 
{('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 0): array([False, False, False, False], dtype=object), ('xar...pe=object), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 3): array([nan, False, False, nan], dtype=object)} keys = [[('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 0), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7'...array-<this-array>-29953318277423606f95b509ad1a9aa7', 2), ('xarray-<this-array>-29953318277423606f95b509ad1a9aa7', 3)]] kwargs = {}
/home/tegn500/Documents/Work/Code/xarray/xarray/tests/init.py:118: RuntimeError ```
|
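The NEP 18 mechanism this PR relies on — `__array_function__`, which lets numpy's public functions dispatch to duck arrays instead of xarray special-casing dask — can be shown with a tiny example. This is an illustrative toy, not xarray's or dask's actual implementation:

```python
import numpy as np

class DuckArray:
    """Tiny NEP-18 duck array: numpy API calls dispatch to our methods."""
    def __init__(self, data):
        self.data = list(data)

    def __array_function__(self, func, types, args, kwargs):
        # numpy hands us the public function object (e.g. np.sum);
        # we route it to our own implementation or decline.
        if func is np.sum:
            return sum(self.data)
        return NotImplemented

duck = DuckArray([1, 2, 3])
assert np.sum(duck) == 6  # np.sum dispatched via __array_function__
```

Because dask arrays implement this same protocol, `duck_array_ops` can call plain `np.sum`, `np.pad`, etc. and get the dask versions automatically — which is what lets the dask-specific branches be deleted.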
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5571/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1223270563 | PR_kwDOAMm_X843L_J2 | 6566 | New inline_array kwarg for open_dataset | TomNicholas 35968931 | closed | 0 | 11 | 2022-05-02T19:39:07Z | 2022-05-11T22:12:24Z | 2022-05-11T20:26:43Z | MEMBER | 0 | pydata/xarray/pulls/6566 | Exposes the What setting this to True does is inline the array into the opening/chunking task, which avoids an extra array object at the start of the task graph. That's useful because the presence of that single common task connecting otherwise independent parts of the graph can confuse the graph optimizer. With With In our case (xGCM) this is important because once inlined the optimizer understands that all the remaining parts of the graph are embarrassingly parallel, and realizes that it can fuse all our chunk-wise padding tasks into one padding task per chunk. I think this option could help in any case where someone is opening data from a Zarr store (the reason we had this opener task) or a netCDF file. The value of the kwarg should be kept optional because in theory inlining is a tradeoff between fewer tasks and more memory use, but I think there might be a case for setting the default to be True? Questions:
1) How should I test this?
2) Should it default to
@rabernat @jbusecke |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6566/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 } |
xarray 13221727 | pull | |||||
1200309334 | PR_kwDOAMm_X842BOIk | 6471 | Support **kwargs form in `.chunk()` | TomNicholas 35968931 | closed | 0 | 6 | 2022-04-11T17:37:38Z | 2022-04-12T03:34:49Z | 2022-04-11T19:36:40Z | MEMBER | 0 | pydata/xarray/pulls/6471 | Also adds some explicit tests (and type hinting) for
|
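A minimal sketch of the two equivalent call forms (illustrative data; chunking assumes dask is installed):

```python
import xarray as xr

da = xr.DataArray(range(6), dims="x")

# the dict form and the new keyword form produce identical chunking
chunked_dict = da.chunk({"x": 2})
chunked_kwargs = da.chunk(x=2)
```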
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1157289286 | PR_kwDOAMm_X84z1Xnf | 6319 | v2022.03.0 release notes | TomNicholas 35968931 | closed | 0 | 2 | 2022-03-02T14:43:34Z | 2022-03-02T19:49:25Z | 2022-03-02T15:49:23Z | MEMBER | 0 | pydata/xarray/pulls/6319 | { "url": "https://api.github.com/repos/pydata/xarray/issues/6319/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1150694186 | PR_kwDOAMm_X84zenJA | 6307 | Drop duplicates over multiple dims, and add Dataset.drop_duplicates | TomNicholas 35968931 | closed | 0 | 0 | 2022-02-25T17:34:12Z | 2022-03-01T23:13:38Z | 2022-02-25T21:08:30Z | MEMBER | 0 | pydata/xarray/pulls/6307 | Allows for dropping duplicates over multiple dims at once, and adds
|
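A small sketch of the single-dimension behaviour (illustrative data; by default `drop_duplicates` keeps the first occurrence of each duplicated index label):

```python
import xarray as xr

da = xr.DataArray([10, 11, 12, 13], coords={"x": [0, 0, 1, 2]}, dims="x")

# keep="first" is the default, so the first value at each duplicated
# label survives
deduped = da.drop_duplicates("x")
```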
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1039034826 | PR_kwDOAMm_X84t0t3V | 5912 | Remove lock kwarg | TomNicholas 35968931 | closed | 0 | 4 | 2021-10-28T23:36:13Z | 2021-12-29T16:34:45Z | 2021-12-29T16:34:45Z | MEMBER | 0 | pydata/xarray/pulls/5912 | These were due to be removed post-0.19.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5912/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1041675013 | PR_kwDOAMm_X84t8yv7 | 5924 | v0.20 Release notes | TomNicholas 35968931 | closed | 0 | 2 | 2021-11-01T21:53:29Z | 2021-11-02T19:22:46Z | 2021-11-02T16:37:45Z | MEMBER | 0 | pydata/xarray/pulls/5924 | @pydata/xarray the release notes for your approval 5889 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1034238626 | I_kwDOAMm_X849pTqi | 5889 | Release v0.20? | TomNicholas 35968931 | closed | 0 | 13 | 2021-10-23T19:31:01Z | 2021-11-02T18:38:50Z | 2021-11-02T18:38:50Z | MEMBER | We should do another release soon. The last one was v0.19 on July 23rd, so it's been 3 months. (In particular I personally want to get some small pint compatibility fixes released such as https://github.com/pydata/xarray/pull/5571 and https://github.com/pydata/xarray/pull/5886, so that the code in this blog post advertising pint-xarray integration all works.) There's been plenty of changes since then, and there are more we could merge quite quickly. It's a breaking release because we changed some dependencies, so should be called @benbovy how does the ongoing index refactor stuff affect this release? Do we need to wait so it can all be announced? Can we release with merged index refactor stuff just silently sitting there? Small additions we could merge, feel free to suggest more @pydata/xarray :
- https://github.com/pydata/xarray/pull/5834
- https://github.com/pydata/xarray/pull/5662
- #5233
- #5900
- #5365
- #5845
- #5904
- #5911
- #5905
- #5847
- #5916 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5889/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1039714252 | PR_kwDOAMm_X84t25p8 | 5916 | Update open_rasterio deprecation version number | TomNicholas 35968931 | closed | 0 | 2 | 2021-10-29T15:56:04Z | 2021-11-02T18:03:59Z | 2021-11-02T18:03:58Z | MEMBER | 0 | pydata/xarray/pulls/5916 | { "url": "https://api.github.com/repos/pydata/xarray/issues/5916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1039833986 | PR_kwDOAMm_X84t3SlI | 5917 | Update minimum dependencies for 0.20 | TomNicholas 35968931 | closed | 0 | 14 | 2021-10-29T18:38:37Z | 2021-11-01T21:14:03Z | 2021-11-01T21:14:02Z | MEMBER | 0 | pydata/xarray/pulls/5917 |
=============== ====== ====
Package         Old    New
=============== ====== ====
cartopy         0.17   0.18
cftime          1.1    1.2
dask            2.15   2.30
distributed     2.15   2.30
hdf5            1.10   1.12
lxml            4.5    4.6
matplotlib-base 3.2    3.3
numba           0.49   0.51
numpy           1.17   1.18
pandas          1.0    1.1
pint            0.15   0.16
scipy           1.4    1.5
seaborn         0.10   0.11
sparse          0.8    0.11
toolz           0.10   0.11
zarr            2.4    2.5
=============== ====== ====
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1012428149 | PR_kwDOAMm_X84shL9H | 5834 | Combine by coords dataarray bugfix | TomNicholas 35968931 | closed | 0 | 3 | 2021-09-30T17:17:00Z | 2021-10-29T19:57:36Z | 2021-10-29T19:57:36Z | MEMBER | 0 | pydata/xarray/pulls/5834 | Also reorganised the logic that deals with combining mixed sets of objects (i.e. named dataarrays, unnamed dataarrays, datasets) that was added in #4696. TODO - same reorganisation / testing but for
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1020282789 | I_kwDOAMm_X8480Eel | 5843 | Why are `da.chunks` and `ds.chunks` properties inconsistent? | TomNicholas 35968931 | closed | 0 | 6 | 2021-10-07T17:21:01Z | 2021-10-29T18:12:22Z | 2021-10-29T18:12:22Z | MEMBER | Basically the title, but what I'm referring to is this:
```python
In [2]: da = xr.DataArray([[0, 1], [2, 3]], name='foo').chunk(1)

In [3]: ds = da.to_dataset()

In [4]: da.chunks
Out[4]: ((1, 1), (1, 1))

In [5]: ds.chunks
Out[5]: Frozen({'dim_0': (1, 1), 'dim_1': (1, 1)})
```
Why does This seems a bit silly, for a few reasons: 1) it means that some perfectly reasonable code might fail unnecessarily if passed a DataArray instead of a Dataset or vice versa, such as
2) it breaks the pattern we use for
3) if you want the chunks as a tuple they are always accessible via 4) It's an undocumented difference, as the docstrings for
In our codebase this difference is mostly washed out by us using
I'm not sure whether making this consistent is worth the effort of a significant breaking change though :confused: (Sort of related to https://github.com/pydata/xarray/issues/2103) |
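The inconsistency, and the failure mode in point 1), can be checked directly (a sketch, assuming dask is installed for `.chunk()`):

```python
import xarray as xr

da = xr.DataArray([[0, 1], [2, 3]], name="foo").chunk(1)
ds = da.to_dataset()

# DataArray.chunks is a tuple-of-tuples...
assert da.chunks == ((1, 1), (1, 1))

# ...while Dataset.chunks is a mapping from dimension name to chunk sizes
assert dict(ds.chunks) == {"dim_0": (1, 1), "dim_1": (1, 1)}

# so indexing the DataArray's chunks by dimension name fails
raised = False
try:
    da.chunks["dim_0"]
except TypeError:
    raised = True
```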
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
1033884661 | PR_kwDOAMm_X84tkKtA | 5886 | Use .to_numpy() for quantified facetgrids | TomNicholas 35968931 | closed | 0 | 6 | 2021-10-22T19:25:24Z | 2021-10-28T22:42:43Z | 2021-10-28T22:41:59Z | MEMBER | 0 | pydata/xarray/pulls/5886 | Follows on from https://github.com/pydata/xarray/pull/5561 by replacing I noticed the need for this when trying out this example (but trying it without the (@Illviljan in theory
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5886/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1020555552 | PR_kwDOAMm_X84s6zAH | 5846 | Change return type of DataArray.chunks and Dataset.chunks to a dict | TomNicholas 35968931 | closed | 0 | 3 | 2021-10-08T00:02:20Z | 2021-10-26T15:52:00Z | 2021-10-26T15:51:59Z | MEMBER | 1 | pydata/xarray/pulls/5846 | Rectifies the the issue in #5843 by making Currently a WIP - I changed the behaviour but this obviously broke quite a few tests and I haven't looked at them yet.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1016576623 | PR_kwDOAMm_X84stU8v | 5839 | Dataset.__setitem__ raise on being passed a Dataset (for single key) | TomNicholas 35968931 | closed | 0 | 1 | 2021-10-05T17:18:43Z | 2021-10-23T19:01:24Z | 2021-10-23T19:01:24Z | MEMBER | 0 | pydata/xarray/pulls/5839 | Inspired by confusion in #5833, this PR slightly clarifies the error thrown when the user attempts to do
|
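A sketch of the clarified failure mode (the exact exception type is an assumption based on the PR description):

```python
import xarray as xr

ds = xr.Dataset({"a": 0})
other = xr.Dataset({"b": 1})

# assigning a whole Dataset under a single key is ambiguous and raises
# (assumed here to be a TypeError with the clarified message)
raised = False
try:
    ds["b"] = other
except TypeError:
    raised = True
```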
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
957001788 | MDExOlB1bGxSZXF1ZXN0NzAwNTEyNjg3 | 5653 | Roll coords deprecation | TomNicholas 35968931 | closed | 0 | 4 | 2021-07-30T19:16:59Z | 2021-10-01T19:24:02Z | 2021-10-01T18:54:22Z | MEMBER | 0 | pydata/xarray/pulls/5653 | The default behaviour of I also improved the docstrings and added type hints whilst there, although mypy doesn't seem to like some of the type hinting :/
|
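The new default can be sketched as follows (illustrative data):

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], coords={"x": [10, 20, 30]}, dims="x")

# with roll_coords=False (the new default) only the data is shifted;
# the coordinate labels stay put
rolled = da.roll(x=1, roll_coords=False)
```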
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
959317311 | MDExOlB1bGxSZXF1ZXN0NzAyNDQ5NDg1 | 5669 | Combine='by_coords' and concat dim deprecation in open_mfdataset | TomNicholas 35968931 | closed | 0 | 2 | 2021-08-03T17:03:44Z | 2021-10-01T18:52:00Z | 2021-10-01T18:52:00Z | MEMBER | 0 | pydata/xarray/pulls/5669 | Noticed this hadn't been completed in https://github.com/pydata/xarray/discussions/5659
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
928490583 | MDExOlB1bGxSZXF1ZXN0Njc2NDg2ODM0 | 5519 | Type hints for combine functions | TomNicholas 35968931 | closed | 0 | 4 | 2021-06-23T17:33:36Z | 2021-09-30T20:16:45Z | 2021-09-30T19:52:47Z | MEMBER | 0 | pydata/xarray/pulls/5519 | Added type hints to Builds on #4696 because that PR generalised the argument types to include DataArrays, but I couldn't see that branch in the list to base this PR off of. The "nested list-of-lists" argument to
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5519/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
935062144 | MDU6SXNzdWU5MzUwNjIxNDQ= | 5559 | UserWarning when wrapping pint & dask arrays together | TomNicholas 35968931 | closed | 0 | 4 | 2021-07-01T17:25:03Z | 2021-09-29T17:48:39Z | 2021-09-29T17:48:39Z | MEMBER | With
```python
da = xr.DataArray([1,2,3], attrs={'units': 'metres'})
chunked = da.chunk(1).pint.quantify()
```
If we try chunking the other way ( xref https://github.com/xarray-contrib/pint-xarray/issues/116 and https://github.com/pydata/xarray/pull/4972 @keewis |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
940054482 | MDU6SXNzdWU5NDAwNTQ0ODI= | 5588 | Release v0.19? | TomNicholas 35968931 | closed | 0 | 15 | 2021-07-08T17:00:26Z | 2021-07-23T23:15:39Z | 2021-07-23T21:12:53Z | MEMBER | Yesterday in the dev call we discussed the need for another release. Not sure if this should be a bugfix release (i.e. v0.18.3) or a full release (i.e. v0.19). Last release (v0.18.2) was 19th May, with v0.18.0 on 6th May. @pydata/xarray Bug fixes:
New features:
Internal:
- Nice to merge first?:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5588/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
951882363 | MDExOlB1bGxSZXF1ZXN0Njk2MTk4NDcx | 5632 | v0.19.0 release notes | TomNicholas 35968931 | closed | 0 | 5 | 2021-07-23T20:38:49Z | 2021-07-23T21:39:50Z | 2021-07-23T21:12:53Z | MEMBER | 0 | pydata/xarray/pulls/5632 | Release notes:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
935317034 | MDExOlB1bGxSZXF1ZXN0NjgyMjU1NDE5 | 5561 | Plots get labels from pint arrays | TomNicholas 35968931 | closed | 0 | 6 | 2021-07-02T00:44:28Z | 2021-07-21T23:06:21Z | 2021-07-21T22:38:34Z | MEMBER | 0 | pydata/xarray/pulls/5561 | Stops you needing to call Builds on top of #5568, so that should be merged first.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
936045730 | MDExOlB1bGxSZXF1ZXN0NjgyODYzMjgz | 5568 | Add to_numpy() and as_numpy() methods | TomNicholas 35968931 | closed | 0 | 9 | 2021-07-02T20:17:40Z | 2021-07-21T22:06:47Z | 2021-07-21T21:42:48Z | MEMBER | 0 | pydata/xarray/pulls/5568 |
|
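A sketch of the two methods this PR adds (assuming dask is installed for `.chunk()`; they also unwrap other duck arrays such as pint quantities):

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0]).chunk(1)

# .to_numpy() extracts the underlying data as a plain numpy array,
# computing it first if it is lazy
arr = da.to_numpy()

# .as_numpy() returns a new DataArray backed by numpy
unwrapped = da.as_numpy()
```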
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
400678645 | MDExOlB1bGxSZXF1ZXN0MjQ1ODA4Nzg3 | 2690 | Add create_test_data to public testing API | TomNicholas 35968931 | closed | 0 | 11 | 2019-01-18T11:08:01Z | 2021-06-24T08:51:36Z | 2021-06-23T16:14:28Z | MEMBER | 0 | pydata/xarray/pulls/2690 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
602579471 | MDExOlB1bGxSZXF1ZXN0NDA1NTc4NTA2 | 3982 | Combine by point coords | TomNicholas 35968931 | closed | 0 | 1 | 2020-04-19T00:00:30Z | 2021-06-24T08:48:51Z | 2021-06-23T15:58:30Z | MEMBER | 0 | pydata/xarray/pulls/3982 | This PR was based off of #3926, though it probably doesn't need to be and could be rebased if we wanted to merge this first.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
404945709 | MDExOlB1bGxSZXF1ZXN0MjQ5MDE0MTc3 | 2729 | [WIP] Feature: Animated 1D plots | TomNicholas 35968931 | closed | 0 | 14 | 2019-01-30T20:15:52Z | 2021-06-24T08:46:31Z | 2021-06-23T16:14:28Z | MEMBER | 0 | pydata/xarray/pulls/2729 | This is an attempt at a proof-of-principle for making animated plots in the way I suggested in #2355. (Also relevant for #2030.) This example code:
```python
import matplotlib.pyplot as plt
import xarray as xr

# Load data as done in plotting tutorial
airtemps = xr.tutorial.open_dataset('air_temperature')
air = airtemps.air - 273.15
air.attrs = airtemps.air.attrs
air.attrs['units'] = 'deg C'

# Downsample to make reasonably-sized gif
data = air.isel(lat=10, time=slice(None,None,40))

# Create animated plot
anim = data.plot(animate_over='time')
anim.save('line1.gif', writer='imagemagick')
plt.show()
```
now produces this gif:
I think it looks pretty good! It even animates the title properly. The actual animation creation only takes one line to do. This currently only works for a plot with a single line, which is animated over a coordinate dimension. ~~It also required some minor modifications/bugfixes to animatplot, so it probably isn't reproducible right out of the box yet.~~ If you want to try this out then use the develop branch of my forked version of animatplot. The reason I've put this up is because I wanted to
I feel like although it required only ~100 extra lines to do this, the logic is very fragmented and scattered through the (@t-makaro I expect you will be interested in this) EDIT: To-Do list:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2729/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
911663002 | MDU6SXNzdWU5MTE2NjMwMDI= | 5438 | Add Union Operators for Dataset | TomNicholas 35968931 | closed | 0 | 2 | 2021-06-04T16:21:06Z | 2021-06-04T16:35:36Z | 2021-06-04T16:35:36Z | MEMBER | As of python 3.9, python dictionaries now support being merged via
```python
def __or__(self, other):
    if not isinstance(other, xr.Dataset):
        return NotImplemented
    new = xr.merge([self, other])
    return new

def __ror__(self, other):
    if not isinstance(other, xr.Dataset):
        return NotImplemented
    new = xr.merge([self, other])
    return new

def __ior__(self, other):
    self.merge(other)
    return self
```
The distinction between the intent of these different operators is whether a new object is returned or the original object is updated. This would allow things like (This feature doesn't require python 3.9, it merely echoes a feature that is only available in 3.9+)
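Under the proposal, `ds1 | ds2` would be equivalent to a plain merge — a sketch of the intended behaviour using the existing API:

```python
import xarray as xr

ds1 = xr.Dataset({"a": 1})
ds2 = xr.Dataset({"b": 2})

# what ds1 | ds2 would return: a new Dataset with the union of variables
merged = xr.merge([ds1, ds2])
```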
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
905974760 | MDExOlB1bGxSZXF1ZXN0NjU3MDE1ODU4 | 5398 | Multi dimensional histogram (see #5400 instead) | TomNicholas 35968931 | closed | 0 | 0 | 2021-05-28T19:59:02Z | 2021-05-30T15:34:33Z | 2021-05-28T20:00:08Z | MEMBER | 0 | pydata/xarray/pulls/5398 | Initial work on integrating the multi-dimensional dask-powered histogram functionality from xhistogram into xarray. Just working on the skeleton to fit around the histogram algorithm for now, to be filled in later.
EDIT: Didn't notice that using |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
902830027 | MDExOlB1bGxSZXF1ZXN0NjU0MTU2NTA5 | 5383 | Corrected reference to blockwise to refer to apply_gufunc instead | TomNicholas 35968931 | closed | 0 | 2 | 2021-05-26T19:23:53Z | 2021-05-26T21:34:06Z | 2021-05-26T21:34:06Z | MEMBER | 0 | pydata/xarray/pulls/5383 | I noticed that the apply_ufunc tutorial notebook says that
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
877944829 | MDExOlB1bGxSZXF1ZXN0NjMxODI1Nzky | 5274 | Update release guide | TomNicholas 35968931 | closed | 0 | 3 | 2021-05-06T19:50:53Z | 2021-05-13T17:44:47Z | 2021-05-13T17:44:47Z | MEMBER | 0 | pydata/xarray/pulls/5274 | Updated the release guide to account for what is now automated via github actions, and any other bits I felt could be clearer. Now only 16 easy steps!
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5274/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
887597884 | MDExOlB1bGxSZXF1ZXN0NjQwODE5NTMz | 5289 | Explained what a deprecation cycle is | TomNicholas 35968931 | closed | 0 | 2 | 2021-05-11T15:15:08Z | 2021-05-13T16:38:19Z | 2021-05-13T16:38:19Z | MEMBER | 0 | pydata/xarray/pulls/5289 | Inspired by a question asked in #4696, but does not close that issue
- [x] Passes |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5289/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
871234354 | MDExOlB1bGxSZXF1ZXN0NjI2Mjg2ODQy | 5237 | Add deprecation warnings for lock kwarg | TomNicholas 35968931 | closed | 0 | 2 | 2021-04-29T16:45:45Z | 2021-05-04T19:17:31Z | 2021-05-04T19:17:31Z | MEMBER | 0 | pydata/xarray/pulls/5237 | Does this need a test?
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
874768820 | MDExOlB1bGxSZXF1ZXN0NjI5MjU0ODU0 | 5255 | Warn instead of error on combine='nested' with concat_dim supplied | TomNicholas 35968931 | closed | 0 | 0 | 2021-05-03T17:38:10Z | 2021-05-04T02:45:52Z | 2021-05-04T02:45:52Z | MEMBER | 0 | pydata/xarray/pulls/5255 | Changes error introduced in #5231 into a warning, as discussed. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
870266283 | MDExOlB1bGxSZXF1ZXN0NjI1NDg5NTQx | 5231 | open_mfdataset: Raise if combine='by_coords' and concat_dim=None | TomNicholas 35968931 | closed | 0 | 1 | 2021-04-28T19:16:19Z | 2021-04-30T12:41:17Z | 2021-04-30T12:41:17Z | MEMBER | 0 | pydata/xarray/pulls/5231 | Fixes bug which allowed incorrect arguments to be passed to The combination The effect was pretty benign - the I also noticed a related issue which I fixed - internally
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
871111282 | MDU6SXNzdWU4NzExMTEyODI= | 5236 | Error collecting tests due to optional pint import | TomNicholas 35968931 | closed | 0 | 2 | 2021-04-29T15:01:13Z | 2021-04-29T15:32:08Z | 2021-04-29T15:32:08Z | MEMBER | When I try to run xarray's test suite locally with pytest I've suddenly started getting this weird error: ``` (xarray-dev) tegn500@fusion192:~/Documents/Work/Code/xarray$ pytest xarray/tests/test_backends.py ==================================================================================== test session starts ===================================================================================== platform linux -- Python 3.9.2, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 rootdir: /home/tegn500/Documents/Work/Code/xarray, configfile: setup.cfg collected 0 items / 1 error =========================================================================================== ERRORS =========================================================================================== __________ ERROR collecting xarray/tests/test_backends.py __________ ../../../../anaconda3/envs/xarray-dev/lib/python3.9/importlib/init.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) <frozen importlib._bootstrap>:1030: in _gcd_import ??? <frozen importlib._bootstrap>:1007: in _find_and_load ??? <frozen importlib._bootstrap>:972: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:228: in _call_with_frames_removed ??? <frozen importlib._bootstrap>:1030: in _gcd_import ??? <frozen importlib._bootstrap>:1007: in _find_and_load ??? <frozen importlib._bootstrap>:986: in _find_and_load_unlocked ??? <frozen importlib._bootstrap>:680: in _load_unlocked ??? <frozen importlib._bootstrap_external>:790: in exec_module ??? <frozen importlib._bootstrap>:228: in _call_with_frames_removed ??? 
xarray/tests/__init__.py:84: in <module> has_pint_0_15, requires_pint_0_15 = _importorskip("pint", minversion="0.15") xarray/tests/__init__.py:46: in _importorskip if LooseVersion(mod.__version__) < LooseVersion(minversion): E AttributeError: module 'pint' has no attribute '__version__' ================================================================================== short test summary info =================================================================================== ERROR xarray/tests/test_backends.py - AttributeError: module 'pint' has no attribute '__version__' !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ====================================================================================== 1 error in 0.88s ====================================================================================== ``` I'm not sure whether this is my fault or a problem with xarray somehow. @keewis have you seen this happen before? This is with a fresh conda environment, running locally on my laptop, and on python 3.9.2. Pint isn't even in this environment. I can force it to proceed with the tests by also catching the attribute error, i.e.
but I obviously shouldn't need to do that. Any ideas? Environment: Output of <tt>xr.show_versions()</tt>INSTALLED VERSIONS ------------------ commit: a5e72c9aacbf26936844840b75dd59fe7d13f1e6 python: 3.9.2 | packaged by conda-forge | (default, Feb 21 2021, 05:02:46) [GCC 9.3.0] python-bits: 64 OS: Linux OS-release: 4.8.10-040810-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_GB.UTF-8 LOCALE: en_GB.UTF-8 libhdf5: 1.10.6 libnetcdf: 4.8.0 xarray: 0.15.2.dev545+ga5e72c9 pandas: 1.2.4 numpy: 1.20.2 scipy: 1.6.3 netCDF4: 1.5.6 pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.8.1 cftime: 1.4.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2021.04.1 distributed: 2021.04.1 matplotlib: 3.4.1 cartopy: installed seaborn: None numbagg: None pint: installed setuptools: 49.6.0.post20210108 pip: 21.1 conda: None pytest: 6.2.3 IPython: None sphinx: NoneConda Environment: Output of <tt>conda list</tt># packages in environment at /home/tegn500/anaconda3/envs/xarray-dev: # # Name Version Build Channel _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 1_gnu conda-forge alsa-lib 1.2.3 h516909a_0 conda-forge asciitree 0.3.3 py_2 conda-forge attrs 20.3.0 pyhd3deb0d_0 conda-forge bokeh 2.3.1 py39hf3d152e_0 conda-forge bottleneck 1.3.2 py39hce5d2b2_3 conda-forge bzip2 1.0.8 h7f98852_4 conda-forge c-ares 1.17.1 h7f98852_1 conda-forge ca-certificates 2020.12.5 ha878542_0 conda-forge certifi 2020.12.5 py39hf3d152e_1 conda-forge cftime 1.4.1 py39hce5d2b2_0 conda-forge click 7.1.2 pyh9f0ad1d_0 conda-forge cloudpickle 1.6.0 py_0 conda-forge curl 7.76.1 h979ede3_1 conda-forge cycler 0.10.0 py_2 conda-forge cytoolz 0.11.0 py39h3811e60_3 conda-forge dask 2021.4.1 pyhd8ed1ab_0 conda-forge dask-core 2021.4.1 pyhd8ed1ab_0 conda-forge dbus 1.13.6 h48d8840_2 conda-forge distributed 2021.4.1 py39hf3d152e_0 conda-forge expat 2.3.0 h9c3ff4c_0 conda-forge fasteners 0.14.1 py_3 conda-forge fontconfig 
2.13.1 hba837de_1005 conda-forge freetype 2.10.4 h0708190_1 conda-forge fsspec 2021.4.0 pyhd8ed1ab_0 conda-forge gettext 0.19.8.1 h0b5b191_1005 conda-forge glib 2.68.1 h9c3ff4c_0 conda-forge glib-tools 2.68.1 h9c3ff4c_0 conda-forge gst-plugins-base 1.18.4 hf529b03_2 conda-forge gstreamer 1.18.4 h76c114f_2 conda-forge hdf4 4.2.13 h10796ff_1005 conda-forge hdf5 1.10.6 nompi_h6a2412b_1114 conda-forge heapdict 1.0.1 py_0 conda-forge icu 68.1 h58526e2_0 conda-forge iniconfig 1.1.1 pyh9f0ad1d_0 conda-forge jinja2 2.11.3 pyh44b312d_0 conda-forge jpeg 9d h36c2ea0_0 conda-forge kiwisolver 1.3.1 py39h1a9c180_1 conda-forge krb5 1.17.2 h926e7f8_0 conda-forge lcms2 2.12 hddcbb42_0 conda-forge ld_impl_linux-64 2.35.1 hea4e1c9_2 conda-forge libblas 3.9.0 8_openblas conda-forge libcblas 3.9.0 8_openblas conda-forge libclang 11.1.0 default_ha53f305_0 conda-forge libcurl 7.76.1 hc4aaa36_1 conda-forge libedit 3.1.20191231 he28a2e2_2 conda-forge libev 4.33 h516909a_1 conda-forge libevent 2.1.10 hcdb4288_3 conda-forge libffi 3.3 h58526e2_2 conda-forge libgcc-ng 9.3.0 h2828fa1_19 conda-forge libgfortran-ng 9.3.0 hff62375_19 conda-forge libgfortran5 9.3.0 hff62375_19 conda-forge libglib 2.68.1 h3e27bee_0 conda-forge libgomp 9.3.0 h2828fa1_19 conda-forge libiconv 1.16 h516909a_0 conda-forge liblapack 3.9.0 8_openblas conda-forge libllvm11 11.1.0 hf817b99_2 conda-forge libnetcdf 4.8.0 nompi_hfa85936_101 conda-forge libnghttp2 1.43.0 h812cca2_0 conda-forge libogg 1.3.4 h7f98852_1 conda-forge libopenblas 0.3.12 pthreads_h4812303_1 conda-forge libopus 1.3.1 h7f98852_1 conda-forge libpng 1.6.37 h21135ba_2 conda-forge libpq 13.2 hfd2b0eb_2 conda-forge libssh2 1.9.0 ha56f1ee_6 conda-forge libstdcxx-ng 9.3.0 h6de172a_19 conda-forge libtiff 4.2.0 hdc55705_1 conda-forge libuuid 2.32.1 h7f98852_1000 conda-forge libvorbis 1.3.7 h9c3ff4c_0 conda-forge libwebp-base 1.2.0 h7f98852_2 conda-forge libxcb 1.13 h7f98852_1003 conda-forge libxkbcommon 1.0.3 he3ba5ed_0 conda-forge libxml2 2.9.10 h72842e0_4 
conda-forge libzip 1.7.3 h4de3113_0 conda-forge locket 0.2.0 py_2 conda-forge lz4-c 1.9.3 h9c3ff4c_0 conda-forge markupsafe 1.1.1 py39h3811e60_3 conda-forge matplotlib 3.4.1 py39hf3d152e_0 conda-forge matplotlib-base 3.4.1 py39h2fa2bec_0 conda-forge monotonic 1.5 py_0 conda-forge more-itertools 8.7.0 pyhd8ed1ab_1 conda-forge msgpack-python 1.0.2 py39h1a9c180_1 conda-forge mysql-common 8.0.23 ha770c72_1 conda-forge mysql-libs 8.0.23 h935591d_1 conda-forge ncurses 6.2 h58526e2_4 conda-forge netcdf4 1.5.6 nompi_py39hc6dca20_103 conda-forge nspr 4.30 h9c3ff4c_0 conda-forge nss 3.64 hb5efdd6_0 conda-forge numcodecs 0.7.3 py39he80948d_0 conda-forge numpy 1.20.2 py39hdbf815f_0 conda-forge olefile 0.46 pyh9f0ad1d_1 conda-forge openjpeg 2.4.0 hf7af979_0 conda-forge openssl 1.1.1k h7f98852_0 conda-forge packaging 20.9 pyh44b312d_0 conda-forge pandas 1.2.4 py39hde0f152_0 conda-forge partd 1.2.0 pyhd8ed1ab_0 conda-forge pcre 8.44 he1b5a44_0 conda-forge pillow 8.1.2 py39hf95b381_1 conda-forge pip 21.1 pyhd8ed1ab_0 conda-forge pluggy 0.13.1 py39hf3d152e_4 conda-forge psutil 5.8.0 py39h3811e60_1 conda-forge pthread-stubs 0.4 h36c2ea0_1001 conda-forge py 1.10.0 pyhd3deb0d_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pyqt 5.12.3 py39hf3d152e_7 conda-forge pyqt-impl 5.12.3 py39h0fcd23e_7 conda-forge pyqt5-sip 4.19.18 py39he80948d_7 conda-forge pyqtchart 5.12 py39h0fcd23e_7 conda-forge pyqtwebengine 5.12.1 py39h0fcd23e_7 conda-forge pytest 6.2.3 py39hf3d152e_0 conda-forge python 3.9.2 hffdb5ce_0_cpython conda-forge python-dateutil 2.8.1 py_0 conda-forge python_abi 3.9 1_cp39 conda-forge pytz 2021.1 pyhd8ed1ab_0 conda-forge pyyaml 5.4.1 py39h3811e60_0 conda-forge qt 5.12.9 hda022c4_4 conda-forge readline 8.1 h46c0cb4_0 conda-forge scipy 1.6.3 py39hee8e79c_0 conda-forge setuptools 49.6.0 py39hf3d152e_3 conda-forge six 1.15.0 pyh9f0ad1d_0 conda-forge sortedcontainers 2.3.0 pyhd8ed1ab_0 conda-forge sqlite 3.35.5 h74cdb3f_0 conda-forge tblib 1.7.0 pyhd8ed1ab_0 conda-forge tk 
8.6.10 h21135ba_1 conda-forge toml 0.10.2 pyhd8ed1ab_0 conda-forge toolz 0.11.1 py_0 conda-forge tornado 6.1 py39h3811e60_1 conda-forge typing_extensions 3.7.4.3 py_0 conda-forge tzdata 2021a he74cb21_0 conda-forge wheel 0.36.2 pyhd3deb0d_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xz 5.2.5 h516909a_1 conda-forge yaml 0.2.5 h516909a_0 conda-forge zarr 2.8.1 pyhd8ed1ab_0 conda-forge zict 2.0.0 py_0 conda-forge zlib 1.2.11 h516909a_1010 conda-forge zstd 1.4.9 ha95c52a_0 conda-forge |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
671609109 | MDU6SXNzdWU2NzE2MDkxMDk= | 4300 | General curve fitting method | TomNicholas 35968931 | closed | 0 | 9 | 2020-08-02T12:35:49Z | 2021-03-31T16:55:53Z | 2021-03-31T16:55:53Z | MEMBER | Xarray should have a general curve-fitting function as part of its main API.

Motivation

Yesterday I wanted to fit a simple decaying exponential function to the data in a DataArray and realised there currently isn't an immediate way to do this in xarray. You have to either pull out the

This is an incredibly common, domain-agnostic task, so although I don't think we should support various kinds of unusual optimisation procedures (which could always go in an extension package instead), I think a basic fitting method is within scope for the main library. There are SO questions asking how to achieve this. We already have

Proposed syntax

I want something like this to work:

```python
def exponential_decay(xdata, A=10, L=5):
    return A * np.exp(-xdata / L)

# returns a dataset containing the optimised values of each parameter
fitted_params = da.fit(exponential_decay)

fitted_line = exponential_decay(da.x, A=fitted_params['A'], L=fitted_params['L'])

# Compare
da.plot(ax)
fitted_line.plot(ax)
```

It would also be nice to be able to fit in multiple dimensions. That means both for example fitting a 2D function to 2D data:

```python
def hat(xdata, ydata, h=2, r0=1):
    r = xdata**2 + ydata**2
    return h * np.exp(-r / r0)

fitted_params = da.fit(hat)
fitted_hat = hat(da.x, da.y, h=fitted_params['h'], r0=fitted_params['r0'])
```

but also repeatedly fitting a 1D function to 2D data:

```python
# da now has a y dimension too
fitted_params = da.fit(exponential_decay, fit_along=['x'])

# As fitted_params now has y-dependence, broadcasting means fitted_lines does too
fitted_lines = exponential_decay(da.x, A=fitted_params.A, L=fitted_params.L)
```
So the method docstring would end up like

```python
def fit(self, f, fit_along=None, skipna=None, full=False, cov=False):
    """
    Fits the function f to the DataArray.
    """
```

Questions

1) Should it wrap `scipy.optimize.curve_fit`?
2) What form should we expect the curve-defining function to come in?
3) Is it okay to inspect parameters of the curve-defining function?
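Until a built-in `da.fit` exists (the method proposed above is hypothetical), the first example can be reproduced today by dropping down to `scipy.optimize.curve_fit` on the raw arrays. A minimal sketch, assuming only numpy, scipy, and xarray:

```python
import numpy as np
import xarray as xr
from scipy.optimize import curve_fit

def exponential_decay(xdata, A=10, L=5):
    return A * np.exp(-xdata / L)

# Synthetic noisy data living on an xarray coordinate
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
da = xr.DataArray(
    exponential_decay(x, A=10, L=5) + 0.1 * rng.standard_normal(x.size),
    dims="x",
    coords={"x": x},
)

# The manual workaround: pull out .values and fit with scipy directly
popt, pcov = curve_fit(exponential_decay, da.x.values, da.values, p0=[1.0, 1.0])
fitted_params = {"A": popt[0], "L": popt[1]}

# Broadcasting against da.x wraps the fitted curve back into a DataArray
fitted_line = exponential_decay(da.x, **fitted_params)
```

The boilerplate here (extracting `.values`, re-packing parameters by hand, no support for fitting along one dimension of a multidimensional array) is exactly what a built-in method would absorb.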
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4300/reactions", "total_count": 4, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 } |
completed | xarray 13221727 | issue |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```