id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 1462173557,I_kwDOAMm_X85XJv91,7316,Support for python 3.11,43316012,closed,0,,,2,2022-11-23T17:52:18Z,2024-03-15T06:07:26Z,2024-03-15T06:07:26Z,COLLABORATOR,,,,"### Is your feature request related to a problem? Now that python 3.11 has been released, we should start to support it officially. ### Describe the solution you'd like I guess the first step would be to replace python 3.10 as the maximum version in the tests and see what crashes (and get lucky). ### Describe alternatives you've considered _No response_ ### Additional context _No response_","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7316/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 2021585639,PR_kwDOAMm_X85g77tr,8503,Add option to define custom format of units in plots,43316012,open,0,,,5,2023-12-01T21:09:18Z,2024-02-02T22:09:11Z,,COLLABORATOR,,0,pydata/xarray/pulls/8503," - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` We ran into a requirement to plot units as `(unit)` instead of `[unit]`. 
This PR enables us to do exactly this; it is easier to change it at the source ;) I think setting this as a global option is the correct approach, but feel free to propose alternatives :)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8503/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2024737017,PR_kwDOAMm_X85hGgaB,8520,Allow configuring open_dataset via backend instances,43316012,open,0,,,9,2023-12-04T21:03:12Z,2024-01-14T21:40:38Z,,COLLABORATOR,,0,pydata/xarray/pulls/8520,"Support passing instances of `BackendEntryPoints` as the `engine` argument. Closes #8447 Then, instead of passing a long list of options to the `open_dataset` method directly, you can also configure the entrypoint in the constructor and pass it as the engine. It would look something like this: ```python engine = NetCDF4BackendEntrypoint(mode=""a"", clobber=False) ds = xr.open_dataset(""some_file.nc"", engine=engine) ``` While this is actually even more lines of code, the main advantage is better discoverability of the options. TODO: - [x] Adapt netcdf4 backend - [x] Adapt h5netcdf backend - [x] Find out if h5netcdf backend should have ""autoclose"" and ""mode"" options (https://github.com/pydata/xarray/pull/8520#pullrequestreview-1769368001) - [x] What to do with ""decode_vlen_strings"" option in h5netcdf (was this deprecated?) - [x] Adapt zarr backend - [x] Adapt scipy backend - [x] Adapt pydap backend - [ ] `output_grid` seems to be always set to `True`? Is this intentional? Why not remove it instead? - [x] ~`verify` and `user_charset` are non-existent in pydap?~ > I still had pydap version 3.2, in 3.4 they exist... - [x] typing is only my first impression. 
Not easy if upstream libs are untyped :/ - [x] ~Adapt pynio backend~ > Won't adapt because deprecated - [x] Fix docstrings to include init options - [x] Check if `lock=True` is allowed > Not allowed, otherwise the scipy backend breaks - [ ] Change default to `lock=True` instead of `None`? Maybe a later PR? - [ ] Rename `XXXBackendEntrypoint` > `XXXBackend` ? - [x] ~The `autoclose` argument seems to do nothing?~ > Actually it is used in `BaseNetCDF4Array`, all good - [x] ~Move `group` to open_dataset instead of backend option?~ > It's not really a decoder either. Not sure, for now leave it in the init... - [ ] Improve `_resolve_decoders_kwargs`, this function has a lot of implicit assumptions? Maybe remove `open_dataset_parameters` altogether? - [x] Add tests for passing backend directly via engine argument - [x] `open_dataset` now has `**kwargs` to support backwards compatibility. Probably we should raise if unsupported stuff is added (e.g. typos), otherwise this could be confusing? (e.g. see test in zarr that checks for deprecated `auto_chunk`)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8520/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2047459696,PR_kwDOAMm_X85iTmr2,8559,Support non-str Hashables in DataArray,43316012,closed,0,,,3,2023-12-18T21:09:13Z,2024-01-14T20:38:59Z,2024-01-14T20:38:59Z,COLLABORATOR,,0,pydata/xarray/pulls/8559," - [x] Closes #8546 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Probably we should add a whole bunch of tests for this. For now only testing the constructor. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8559/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1914212923,PR_kwDOAMm_X85bRN9f,8234,Improved typing of align & broadcast,43316012,closed,0,,,1,2023-09-26T20:02:22Z,2023-12-18T20:28:03Z,2023-10-09T10:21:40Z,COLLABORATOR,,0,pydata/xarray/pulls/8234,"- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` This PR improves the typing of align. Before: the type of the inputs was reduced to the common superclass and the return type was the same. This often required casts or ignores when mixing classes (e.g. `da, ds = xr.align(da, ds)`). Now: the return types are exactly the same as the input types if the number of passed arguments is <=5. Only downside: it requires some ugly overloads with type ignores on align. Maybe someone knows how to type this better?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8234/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2038622503,I_kwDOAMm_X855gukn,8548,Shaping the future of Backends,43316012,open,0,,,3,2023-12-12T22:08:50Z,2023-12-15T17:14:59Z,,COLLABORATOR,,,,"### What is your issue? Backends in xarray are used to read and write files (or, more generally, objects) and transform them into useful xarray Datasets. This issue will collect ideas on how to continuously improve them. # Current state Along the reading and writing process there are many implicit and explicit configuration possibilities. There are many backend-specific options and many en- and decoder-specific options. Most of them are currently difficult or even impossible to discover. There is the infamous `open_dataset` method which can do everything, but there are also some specialized methods like `open_zarr` or `to_netcdf`. 
The only really formalized way to extend xarray's capabilities is via the `BackendEntrypoint`, currently only for reading files. This has proven to work, and things are going so well that people are discussing getting rid of the special reading methods (#7495). A major critique in this thread is again the discoverability of configuration options. ## Problems To name a few: - Discoverability of configuration options is poor - No distinction between backend and encoding options - New options are simply added as another keyword argument to `open_dataset` - No writing support for backends ## What already improved - Adding URL and description attributes to the backends (#7000, #7200) - Add static typing - Allow creating instances of backends with their respective options (#8520) # The future After listing all the problems, let's see how we can improve the situation and make backends an all-round solution for reading and writing all kinds of files. ## What happens behind the scenes In general the reading and writing of Datasets in xarray is a three-step process. ``` [ done by backend.open_dataset] Dataset < chunking < decoding < opening_in_store < file Dataset > validating > encoding > storing_in_store > file ``` Probably you could consider combining the chunking and decoding as well as validation and encoding into a single logical step in the pipeline. This view should help decide how to set up a future architecture of backends. You can see that there is a common middle object in this process, an in-memory representation of the file on disk between en-/decoding and the abstract store. This is actually an `xarray.Dataset` and is internally called a ""backend dataset"". ## `write_dataset` method A quite natural extension of backends would be to implement a `write_dataset` method (name pending). This would allow backends to fulfill the complete right side of the pipeline. 
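To make the idea concrete, such a `write_dataset` hook could be sketched like this. Everything below is hypothetical: a plain ABC stands in for the real `BackendEntrypoint`, and the method name and signature are assumptions, not an existing xarray API:

```python
from abc import ABC, abstractmethod


class BackendEntrypoint(ABC):
    # simplified stand-in for xarray.backends.BackendEntrypoint
    @abstractmethod
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        '''Read side of the pipeline: file -> backend dataset.'''


class WritableBackendEntrypoint(BackendEntrypoint):
    # hypothetical extension covering the right side of the pipeline
    @abstractmethod
    def write_dataset(self, dataset, filename_or_obj, *, mode='w'):
        '''Write side of the pipeline: backend dataset -> file.'''
```

A backend advertising this interface could then be dispatched to symmetrically for both reading and writing.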
## Transformer class Due to the lack of a common word for a class that handles ""encoding"" and ""decoding"", I will call them transformers here. The process of en- and decoding is currently hardcoded in the respective `open_dataset` and `to_netcdf` methods. One could imagine introducing a common class that handles both. This class could handle the implemented CF or netcdf encoding conventions. But it would also allow users to define their own storing conventions (why not create a custom transformer that adds indexes based on variable attributes?). The possibilities are endless, and an interface that fulfills all the requirements still has to be found. This would homogenize the reading and writing process to ``` Dataset <> Transformer <> Backend <> file ``` As a bonus this would increase the discoverability of the decoding options (then transformer arguments). The new interface then could be ```python backend = Netcdf4BackendEntrypoint(group=""data"") decoder = CFTransformer(cftime=True) ds = xr.open_dataset(""file.nc"", engine=backend, decoder=decoder) ``` while of course still allowing all options to be passed simply as kwargs (since this is still the easiest way of telling beginners how to open files). The final improvement here would be to add additional entrypoints for these transformers ;) # Disclaimer Now this issue is just a bunch of random ideas that require quite some refinement, or they might even turn out to be nonsense. So let's have an exciting discussion about these things :) If you have something to add to the above points I will include your ideas as well. 
This is meant as a collection of ideas on how to improve our backends :)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8548/reactions"", ""total_count"": 5, ""+1"": 5, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 2034528244,I_kwDOAMm_X855RG_0,8537,Doctests failing,43316012,closed,0,,,1,2023-12-10T20:49:43Z,2023-12-11T21:00:03Z,2023-12-11T21:00:03Z,COLLABORATOR,,,,"### What is your issue? The doctest is currently failing with > E UserWarning: h5py is running against HDF5 1.14.3 when it was built against 1.14.2, this may cause problems","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8537/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 2021528727,PR_kwDOAMm_X85g7vZA,8502,change type of curvefit's p0 and bounds to mapping,43316012,closed,0,,,0,2023-12-01T20:18:19Z,2023-12-02T13:08:50Z,2023-12-01T22:02:38Z,COLLABORATOR,,0,pydata/xarray/pulls/8502," - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Mini PR to improve the typing of curvefit. 
Using `dict` is problematic since it is invariant, while `Mapping` is covariant.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8502/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2021517557,PR_kwDOAMm_X85g7s9a,8501,Update to mypy 1.7,43316012,closed,0,,,1,2023-12-01T20:08:46Z,2023-12-02T13:08:45Z,2023-12-01T22:02:21Z,COLLABORATOR,,0,pydata/xarray/pulls/8501," - [x] Closes #8448 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I guess we update manually for now?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8501/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1943539215,PR_kwDOAMm_X85c0AkW,8309,Move variable typed ops to NamedArray,43316012,open,0,,,1,2023-10-14T20:22:07Z,2023-10-26T21:55:01Z,,COLLABORATOR,,1,pydata/xarray/pulls/8309," - xref https://github.com/pydata/xarray/issues/8238 This is highly WIP and probably everything is broken right now... Just creating this now, so other people don't work on the same :) Feel free to continue here with me. @pydata/xarray 1. What do we do with commonly used functions, is it ok to copy them? 2. Moving the typed ops requires a lot of functions to be added to NamedArray, is there a consensus on what we want to move? Is it basically everything? 3. 
Slowly the utils module is becoming a graveyard of stuff we don't want to put elsewhere; maybe we should at least move the typing stuff over to a types module.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8309/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1936080078,I_kwDOAMm_X85zZjzO,8291,`NamedArray.shape` does not support unknown dimensions,43316012,closed,0,,,1,2023-10-10T19:36:42Z,2023-10-18T06:22:54Z,2023-10-18T06:22:54Z,COLLABORATOR,,,,"### What is your issue? According to the array api standard, the `shape` property returns `tuple[int | None, ...]`. Currently we only support `tuple[int, ...]`. This will actually raise some errors if a duckarray returns some None. E.g. `NamedArray.size` will fail. (On a side note: dask arrays actually use NaN instead of None for some reason.... The only advantage of this is that `NamedArray.size` will also return NaN instead of raising...)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8291/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1928972239,PR_kwDOAMm_X85cC_Wb,8276,Give NamedArray Generic dimension type,43316012,open,0,,,3,2023-10-05T20:02:56Z,2023-10-16T13:41:45Z,,COLLABORATOR,,1,pydata/xarray/pulls/8276," - [x] Towards #8199 - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` This aims at making the dimension type a generic parameter. 
I thought I would start with NamedArray when testing this out because it is much less interconnected.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8276/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1915876808,I_kwDOAMm_X85yMfXI,8236,DataArray with multiple (Pandas)Indexes on the same dimension is impossible to align,43316012,closed,0,,,3,2023-09-27T15:52:05Z,2023-10-02T06:53:27Z,2023-10-01T07:19:09Z,COLLABORATOR,,,,"### What happened? I have a DataArray with a single dimension and multiple (Pandas)Indexes assigned to various coordinates for efficient indexing using `sel`. Edit: the problem is even worse than originally described below: such a DataArray breaks all alignment and it's basically unusable... ---- When I try to add an additional coordinate without any index (I simply use the tuple[dimension, values] way) I get a ValueError about aligning with conflicting indexes. If the original DataArray only has a single (Pandas)Index everything works as expected. ### What did you expect to happen? I expected that I could simply assign new coordinates without an index. ### Minimal Complete Verifiable Example ```Python import xarray as xr da = xr.DataArray( [1, 2, 3], dims=""t"", coords={ ""a"": (""t"", [3, 4, 5]), ""b"": (""t"", [5, 6, 7]) } ) # set one index da2 = da.set_xindex(""a"") # set second index (same dimension, maybe that's a problem?) da3 = da2.set_xindex(""b"") # this works da2.coords[""c""] = (""t"", [2, 3, 4]) # this does not da3.coords[""c""] = (""t"", [2, 3, 4]) ``` ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. 
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. ### Relevant log output > ValueError: cannot re-index or align objects with conflicting indexes found for the following dimensions: 't' (2 conflicting indexes) Conflicting indexes may occur when - they relate to different sets of coordinate and/or dimension names - they don't have the same type - they may be used to reindex data along common dimensions ### Anything else we need to know? _No response_ ### Environment
INSTALLED VERSIONS ------------------ commit: None python: 3.9.10 (main, Mar 21 2022, 13:08:11) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.66.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.2 libnetcdf: 4.9.0 xarray: 2022.12.0 pandas: 2.0.2 numpy: 1.24.3 scipy: 1.10.0 netCDF4: 1.6.2 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.6.2 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.6.3 cartopy: None seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None flox: None numpy_groupies: None setuptools: 58.1.0 pip: 21.2.4 conda: None pytest: 7.3.2 mypy: 1.0.0 IPython: 8.8.0 sphinx: None
I have not yet tried this with a newer version of xarray....","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8236/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,not_planned,13221727,issue 1827647823,PR_kwDOAMm_X85WuBEl,8030,Fix static typing with Matplotlib 3.8,43316012,closed,0,,,14,2023-07-29T20:32:25Z,2023-09-26T19:01:29Z,2023-09-17T05:02:58Z,COLLABORATOR,,0,pydata/xarray/pulls/8030,"- [x] Closes #7802 - [x] Tests added","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8030/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1897167470,PR_kwDOAMm_X85aX_Ms,8184,Fix several warnings in the tests,43316012,closed,0,,,1,2023-09-14T19:21:37Z,2023-09-26T19:01:13Z,2023-09-15T20:41:03Z,COLLABORATOR,,0,pydata/xarray/pulls/8184,"Mainly deprecated ""closed"" argument in `date_range` and passing `pd.MultiIndex` directly to the constructor.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8184/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1901722232,PR_kwDOAMm_X85anII7,8204,Rewrite typed_ops,43316012,closed,0,,,5,2023-09-18T20:51:22Z,2023-09-26T19:00:49Z,2023-09-25T04:43:54Z,COLLABORATOR,,0,pydata/xarray/pulls/8204," - [x] Related to #7780 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8204/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1899895419,I_kwDOAMm_X85xPhp7,8199,Use Generic Types instead of Hashable or 
Any,43316012,open,0,,,2,2023-09-17T19:41:39Z,2023-09-18T14:16:02Z,,COLLABORATOR,,,,"### Is your feature request related to a problem? Currently, part of the static type of a DataArray or Dataset is a `Mapping[Hashable, DataArray]`. I'm quite sure that 99% of the users will actually use `str` key values (aka. variable names), while some exotic people (me included) want to use e.g. Enums for their keys. Currently, we allow anything to be used as keys as long as it is hashable, but once the DataArray/set is created, the type information of the keys is lost. Consider e.g. ```python for name, da in Dataset({""a"": (""t"", np.arange(5))}).items(): reveal_type(name) # hashable reveal_type(da.dims) # tuple[hashable, ...] ``` Wouldn't it be nice if this actually returned `str`, so you don't have to cast or assert it every time? This could be solved by making these classes generic. Another related issue is the underlying data. This could be introduced as a Generic type as well. Probably, this should reach some common ground on all wrapping array libs that are out there. Each one should use a Generic Array class that keeps track of the type of the wrapped array, e.g. `dask.array.core.Array[np.ndarray]`. In return, we could do `DataArray[np.ndarray]` or then `DataArray[dask.array.core.Array[np.ndarray]]`. 
### Describe the solution you'd like The implementation would be something along the lines of: ```python KeyT = TypeVar(""KeyT"", bound=Hashable) DataT = TypeVar(""DataT"", bound=) class DataArray(Generic[KeyT, DataT]): _coords: dict[KeyT, Variable[DataT]] _indexes: dict[KeyT, Index[DataT]] _name: KeyT | None _variable: Variable[DataT] def __init__( self, data: DataT = dtypes.NA, coords: Sequence[Sequence[DataT] | pd.Index | DataArray[KeyT]] | Mapping[KeyT, DataT] | None = None, dims: str | Sequence[KeyT] | None = None, name: KeyT | None = None, attrs: Mapping[KeyT, Any] | None = None, # internal parameters indexes: Mapping[KeyT, Index] | None = None, fastpath: bool = False, ) -> None: ... ``` Now you could create a ""classical"" DataArray: ```python da = DataArray(np.arange(10), {""t"": np.arange(10)}, dims=[""t""]) # will be of type # DataArray[str, np.ndarray] ``` while you could also create something more fancy: ```python da2 = DataArray(dask.array.array([1, 2, 3]), {}, dims=[(""tup1"", ""tup2""),]) # will be of type # DataArray[tuple[str, str], dask.array.core.Array] ``` And whenever you access the dimensions / coord names / underlying data you will get the correct type. For now I only see three major problems: 1) non-array types (like lists or anything iterable) will get cast to a `np.ndarray` and I have no idea how to tell the type checker that `DataArray([1, 2, 3], {}, ""a"")` should be `DataArray[str, np.ndarray]` and not `DataArray[str, list[int]]`. Depending on the Protocol in the bound TypeVar this might even fail static type analysis or require tons of special casing and overloads. 2) How does the type checker extract the dimension type for Datasets? This is quite convoluted and I am not sure this can be typed correctly... 3) The parallel compute workflows are quite dynamic and I am not sure if static type checking can keep track of the underlying datatype... What does `DataArray([1, 2, 3], dims=""a"").chunk({""a"": 2})` return? 
Is it `DataArray[str, dask.array.core.Array]`? But what about other chunking frameworks? ### Describe alternatives you've considered One could even extend this and add more Generic types. Different types for dimensions and variable names would be a first (and probably quite a nice) feature addition. One could even go so far as to type the keys and values of variables and coords (for Datasets) differently. This came up e.g. in https://github.com/pydata/xarray/issues/3967 However, this would create a ridiculous number of Generic types and is probably more confusing than helpful. ### Additional context Probably this feature should be done in consecutive PRs that each implement one Generic; otherwise this will be a giant task!","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8199/reactions"", ""total_count"": 5, ""+1"": 5, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1620317764,PR_kwDOAMm_X85L1Yr2,7612,Fix `pcolormesh` with str coords,43316012,closed,0,,,4,2023-03-12T10:50:35Z,2023-09-13T18:48:08Z,2023-03-16T18:55:30Z,COLLABORATOR,,0,pydata/xarray/pulls/7612," - [x] Closes #6775 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7612/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1581313830,PR_kwDOAMm_X85Jy8wm,7523,allow refreshing of backends,43316012,closed,0,,,4,2023-02-12T16:07:05Z,2023-09-13T18:46:56Z,2023-03-31T15:14:56Z,COLLABORATOR,,0,pydata/xarray/pulls/7523," - [x] Closes #7478 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst` Don't know yet how to effectively test this, I 
guess there is some tricky solution with some mocking.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7523/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1275752720,I_kwDOAMm_X85MCnEQ,6704,Future of `DataArray.rename`,43316012,open,0,,,11,2022-06-18T10:14:43Z,2023-09-11T00:53:31Z,,COLLABORATOR,,,,"### What is your issue? In https://github.com/pydata/xarray/pull/6665 the question came up what to do with `DataArray.rename` in light of the new index refactor. To be consistent with `Dataset` we should introduce a - `DataArray.rename_dims` - `DataArray.rename_vars` - `DataArray.rename` Several open questions about the behavior (Similar things apply to `Dataset.rename{, _dims, _vars}`): - [ ] Should `rename_dims` also rename indexes (dimension coordinates)? - [ ] Should `rename_vars` also rename the DataArray? - [ ] What to do if the `DataArray` has the same name as one of its coordinates? - [ ] Should `rename` still rename everything (like it is now) or only the name (Possibly with some deprecation cycle)? The current implementation of `DataArray.rename` is a bit inconsistent: As stated by @max-sixty in https://github.com/pydata/xarray/issues/6665#issuecomment-1154368202: - rename operates on DataArray as described in https://github.com/pydata/xarray/pull/6665#issuecomment-1150810485. Generally I'm less keen on ""different types have different semantics"", and here a positional arg would mean a DataArray rename, and kwarg would mean var rename. But it does work locally to DataArray quite well. - rename only exists on DataArrays for the name of the DataArray, and we use rename_vars & rename_dims for both DataArrays & Datasets. So Dataset.rename is soft-deprecated. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6704/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1401132297,PR_kwDOAMm_X85AZG38,7142,Fix Codecov,43316012,closed,0,,,8,2022-10-07T12:55:00Z,2023-08-30T18:58:19Z,2023-08-30T18:47:33Z,COLLABORATOR,,0,pydata/xarray/pulls/7142,"- [x] Closes #7141 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7142/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1401066481,I_kwDOAMm_X85TgpPx,7141,Coverage shows reduced value since mypy flag was added,43316012,closed,0,,,3,2022-10-07T12:01:15Z,2023-08-30T18:47:35Z,2023-08-30T18:47:35Z,COLLABORATOR,,,,"### What is your issue? The coverage was reduced from ~94% to ~68% after merging #7126 See https://app.codecov.io/gh/pydata/xarray or our badge I _think_ this is because the unittests never included the tests directory while mypy does. And codecov uses the sum of both coverage reports to come up with its number. Adding the flag to the badge also does not seem to help? Not sure how or even if that is possible to solve, maybe we need to ask in codecov?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7141/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1648748263,I_kwDOAMm_X85iRebn,7703,Readthedocs build failing,43316012,closed,0,,,3,2023-03-31T06:20:53Z,2023-03-31T15:45:10Z,2023-03-31T15:45:10Z,COLLABORATOR,,,,"### What is your issue? It seems that the readthedocs build is failing since some upstream update. `pydata-sphinx-theme` seems to be incompatible with the `sphinx-book-theme`. 
Maybe we have to pin to a specific or a maximum version for now.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7703/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1419825696,I_kwDOAMm_X85UoNIg,7199,Deprecate cfgrib backend,43316012,closed,0,,,4,2022-10-23T15:09:14Z,2023-03-29T15:19:53Z,2023-03-29T15:19:53Z,COLLABORATOR,,,,"### What is your issue? Since cfgrib 0.9.9 (04/2021) it comes with its own xarray backend plugin (which looks mainly like a copy of our internal version). We should deprecate our internal plugin. The deprecation is complicated since we usually bind the minimum version to a minor step, but cfgrib has been on 0.9 for 4 years already. Maybe an exception like for netCDF4? Anyway, if we decide to leave it as it is for now, this ticket is just a reminder to remove it someday :)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7199/reactions"", ""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1580266844,PR_kwDOAMm_X85JvlXi,7521,use numpy's SupportsDtype,43316012,closed,0,,,11,2023-02-10T20:17:10Z,2023-03-18T14:08:20Z,2023-02-28T23:23:46Z,COLLABORATOR,,0,pydata/xarray/pulls/7521," - [x] Closes #7479 I don't know how I feel about using private numpy classes that might change at any time. Maybe within an `if TYPE_CHECKING` block it is not too bad? 
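For illustration, the guard pattern mentioned above could look roughly like this; the exact private import path is an assumption and may differ between numpy versions:

```python
# Sketch: the private numpy name is only imported while type checking,
# so an upstream rename cannot break anything at runtime.
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from numpy._typing import _SupportsDType  # hypothetical path, never executed


def coerce(dtype: _SupportsDType) -> None:
    # with PEP 563 string annotations the name is only resolved by mypy,
    # never at runtime
    ...
```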
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7521/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1620509524,PR_kwDOAMm_X85L19Mo,7616,add a test for scatter colorbar extend,43316012,closed,0,,,0,2023-03-12T20:58:45Z,2023-03-13T19:47:50Z,2023-03-13T19:47:50Z,COLLABORATOR,,0,pydata/xarray/pulls/7616," - [x] Closes #4975","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7616/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1615980379,PR_kwDOAMm_X85Lm7SK,7600,Enable blacks `skip_magic_trailing_comma` options,43316012,closed,0,,,3,2023-03-08T21:36:46Z,2023-03-09T20:41:21Z,2023-03-09T20:40:25Z,COLLABORATOR,,0,pydata/xarray/pulls/7600,"This little config change will make black remove trailing commas when they are not necessary to fit something into a single line. It is a pure design choice but personally I like the clean up it does when function signatures simplify (although this happens rarely with more and more type hints added). I can understand that some people prefer the manual control over what is multiline and what is not. 
Feel free to vote on it :) For me it adds cheap LOCs so it looks like I am working hard, haha.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7600/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1503621596,PR_kwDOAMm_X85F0ZZm,7392,Support complex arrays in xr.corr,43316012,closed,0,,,2,2022-12-19T21:22:25Z,2023-03-02T20:22:54Z,2023-02-14T16:38:27Z,COLLABORATOR,,0,pydata/xarray/pulls/7392," - [x] Closes #7340 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7392/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1603831809,I_kwDOAMm_X85fmIgB,7572,`test_open_nczarr` failing,43316012,closed,0,,,3,2023-02-28T21:20:22Z,2023-03-02T16:49:25Z,2023-03-02T16:49:25Z,COLLABORATOR,,,,"### What is your issue? In the latest CI runs it seems that `test_backends.py::TestNCZarr::test_open_nczarr` is failing with > KeyError: 'Zarr object is missing the attribute `_ARRAY_DIMENSIONS` and the NCZarr metadata, which are required for xarray to determine variable dimensions.' I don't see an obvious reason for this, especially since the zarr version has not changed compared to some runs that were successful (2.13.6).","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7572/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1361246796,I_kwDOAMm_X85RIvpM,6985,FutureWarning for pandas date_range,43316012,closed,0,,,1,2022-09-04T20:35:17Z,2023-02-06T17:51:48Z,2023-02-06T17:51:48Z,COLLABORATOR,,,,"### What is your issue? 
Xarray raises a FutureWarning in its date_range, also observable in your tests. The precise warning is: > xarray/coding/cftime_offsets.py:1130: FutureWarning: Argument `closed` is deprecated in favor of `inclusive`. You should discuss whether you will adopt the new `inclusive` argument or add a workaround.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6985/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1548948097,I_kwDOAMm_X85cUxKB,7457,Typing of internal datatypes,43316012,open,0,,,5,2023-01-19T11:08:43Z,2023-01-19T19:49:19Z,,COLLABORATOR,,,,"### Is your feature request related to a problem? Currently there is no static typing of the underlying data structures used in `DataArray`s. Simply running `reveal_type(da.data)` returns `Any`. Adding static typing support to that is unfortunately non-trivial since xarray supports a wide variety of duck-types. This also comes with internal typing difficulties. ### Describe the solution you'd like I think the way to go is making the `DataArray` class generic in its underlying data type. Something like `DataArray[np.ndarray]` or `DataArray[dask.array]`. The implementation would require a TypeVar that is bound to some minimal required Protocol for internal consistency (I think at least it needs `dtype` and `shape` attributes). Datasets would have to be typed the same way; this means only one datatype for all variables is possible; when you mix them, it will fall back to the common ancestor, which will be the aforementioned protocol. This is basically the same restriction that a dict has. Now to the main issue that I see with this approach: I don't know how to type coordinates. They have the same problems as mentioned above for Datasets. 
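As a concrete sketch of the TypeVar-bound-to-a-Protocol idea (all names hypothetical, just to illustrate the shape of the solution):

```python
from typing import Generic, Protocol, TypeVar

class _DuckArray(Protocol):
    # the minimal duck-array surface mentioned above: dtype and shape
    @property
    def dtype(self) -> object: ...
    @property
    def shape(self) -> tuple[int, ...]: ...

T = TypeVar('T', bound=_DuckArray)

class DataArraySketch(Generic[T]):
    # stand-in for a generic DataArray[T]; reveal_type(da.data) would be T
    def __init__(self, data: T) -> None:
        self.data = data
```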
I think it is very common to have dask arrays in the variables but simple numpy arrays in the coordinates, so either one excludes them from the typing or in such cases the common generic typing falls back to the protocol again. Not sure what the best approach is here. ### Describe alternatives you've considered Since the most common workflow for beginners and intermediate-advanced users is to stick with the DataArrays themselves and never touch the underlying data, I am not sure if this change is as beneficial as I want it to be. Maybe it just complicates things and leaving it as `Any` is easier to solve for advanced users who then have to cast or ignore this. ### Additional context It came up in this discussion: https://github.com/pydata/xarray/pull/7020#discussion_r972617770_","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7457/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1368900431,PR_kwDOAMm_X84-u2Jv,7020,Typing of abstract base classes,43316012,open,0,,,6,2022-09-11T10:27:01Z,2023-01-19T10:48:20Z,,COLLABORATOR,,0,pydata/xarray/pulls/7020,"This PR adds some typing to several abstract base classes that are used in xarray. Most of it is working; only one major point I could not figure out: What is the type of `NDArrayMixin.array`??? I would appreciate it if someone who has more insight into this would help me. Several minor open points: - What is the return value of `ExplicitlyIndexed.__getitem__` - What is the return value of `ExplicitlyIndexed.transpose` - What is the return value of `AbstractArray.data` - `Variable.values` seems to be able to return scalar values which is incompatible with the `AbstractArray` definition. 
Overall it seems that typing has helped to find some problems again :) Mypy should fail for tests; I did not adapt them yet, I want to solve the outstanding issues first.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7020/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1462057503,PR_kwDOAMm_X85DlALl,7315,Fix polyval overloads,43316012,closed,0,,,1,2022-11-23T16:27:21Z,2022-12-08T20:10:16Z,2022-11-26T15:42:51Z,COLLABORATOR,,0,pydata/xarray/pulls/7315," - [x] Closes #7312 - [ ] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ Turns out the default value of arguments is important for overloads, haha.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7315/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1377097243,PR_kwDOAMm_X84_J8JL,7051,Add parse_dims func,43316012,closed,0,,,6,2022-09-18T15:36:59Z,2022-12-08T20:10:01Z,2022-11-30T23:36:33Z,COLLABORATOR,,0,pydata/xarray/pulls/7051,"This PR adds a `utils.parse_dims` function for parsing one or more dimensions. Currently every function that accepts multiple dimensions does this by itself. 
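A minimal sketch of what such a central helper could look like (hypothetical, not the exact signature of this PR's `parse_dims`):

```python
from collections.abc import Iterable

def parse_dims_sketch(dim, all_dims):
    # normalize the usual 'one or more dims' argument:
    # None or ... mean all dimensions, a single str/Hashable becomes a 1-tuple
    if dim is None or dim is ...:
        return tuple(all_dims)
    if isinstance(dim, str) or not isinstance(dim, Iterable):
        return (dim,)
    return tuple(dim)
```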
I decided to first see if it would be useful to centralize the dimension parsing and collect inputs before adding it to other functions.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7051/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1421441672,PR_kwDOAMm_X85BcmP0,7209,Optimize some copying,43316012,closed,0,,,8,2022-10-24T21:00:21Z,2022-12-08T20:09:49Z,2022-11-30T23:36:56Z,COLLABORATOR,,0,pydata/xarray/pulls/7209,"- [x] Potentially closes #7181 - [x] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I have passed along some more memo dicts, which could prevent some double deep-copying of the same data (don't know how exactly, but who knows :P) Also, I have found some copy calls that did not pass along the deep argument (I am not sure if that breaks things, let's find out). And finally I have found some places where shallow copies are enough. Altogether it should improve the performance a lot when copying things around.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7209/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1468671915,PR_kwDOAMm_X85D65Bg,7335,Enable mypy warn unused ignores,43316012,closed,0,,,1,2022-11-29T20:42:08Z,2022-12-08T20:09:06Z,2022-12-01T16:14:07Z,COLLABORATOR,,0,pydata/xarray/pulls/7335,"This PR adds the mypy option ""warn_unused_ignores"" which will raise an error if a `# type: ignore` is used where it is no longer necessary. Should enable us to keep our types updated. I am not sure if this will lead to many issues whenever e.g. numpy changes/improves their typing, so we might get errors whenever there is a new version. 
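For reference, enabling it is a one-line config change (shown here in ini style; the exact config file in the repo may differ):

```ini
[mypy]
warn_unused_ignores = True
```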
Maybe it is not that bad, or maybe we can also remove the option again and only do it manually from time to time?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7335/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1464905814,I_kwDOAMm_X85XULBW,7322,Doctests failing,43316012,closed,0,,,4,2022-11-25T20:20:29Z,2022-11-28T19:31:04Z,2022-11-28T19:31:04Z,COLLABORATOR,,,,"### What is your issue? It seems that some update in urllib3 causes our doctests to fail. The reason seems to be that botocore uses an interesting construction to import deprecated urllib3 things: ```python try: # pyopenssl will be removed in urllib3 2.0, we'll fall back to ssl_ at that point. # This can be removed once our urllib3 floor is raised to >= 2.0. with warnings.catch_warnings(): warnings.simplefilter(""ignore"", category=DeprecationWarning) # Always import the original SSLContext, even if it has been patched from urllib3.contrib.pyopenssl import ( orig_util_SSLContext as SSLContext, ) except ImportError: from urllib3.util.ssl_ import ``` I assume that this fails because we use `-Werror` which translates the warning into an error which then is not ignored... 
Not sure if this is an issue with botocore or we have to catch this?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7322/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1446613571,PR_kwDOAMm_X85Cw17l,7283,Fix mypy 0.990 types,43316012,closed,0,,,2,2022-11-12T21:34:14Z,2022-11-18T15:42:37Z,2022-11-16T18:41:58Z,COLLABORATOR,,0,pydata/xarray/pulls/7283," - [x] Related to #7270 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7283/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1412019155,PR_kwDOAMm_X85A9Bge,7179,Lazy Imports,43316012,closed,0,,,13,2022-10-17T18:23:09Z,2022-11-16T23:32:21Z,2022-10-28T16:25:40Z,COLLABORATOR,,0,pydata/xarray/pulls/7179," - [x] Hopefully Closes #6726 - [x] ~Tests added~ - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7179/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,,13221727,pull 1424707135,PR_kwDOAMm_X85Bnixp,7228,Raise TypeError if plotting empty data,43316012,closed,0,,,3,2022-10-26T21:19:30Z,2022-11-10T23:00:42Z,2022-10-28T16:44:31Z,COLLABORATOR,,0,pydata/xarray/pulls/7228,"- [x] Closes #7156 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7228/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 
1442702272,PR_kwDOAMm_X85Cjnvl,7276,Import nc_time_axis when needed,43316012,closed,0,,,2,2022-11-09T20:24:45Z,2022-11-10T23:00:15Z,2022-11-10T21:45:27Z,COLLABORATOR,,0,pydata/xarray/pulls/7276," - [x] Closes #7275 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7276/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1419882372,PR_kwDOAMm_X85BXXw0,7200,Backends descriptions,43316012,closed,0,,,3,2022-10-23T18:23:32Z,2022-10-26T19:45:15Z,2022-10-26T16:01:04Z,COLLABORATOR,,0,pydata/xarray/pulls/7200," - [x] Closes #7049 - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7200/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1410498749,PR_kwDOAMm_X85A38a6,7168,Fix broken test that fails CI upstream,43316012,closed,0,,,1,2022-10-16T14:02:42Z,2022-10-17T17:48:07Z,2022-10-16T16:16:51Z,COLLABORATOR,,0,pydata/xarray/pulls/7168,"- [x] Closes #7158 Technically it does not close all failures, but if we close it, the CI will open a new issue anyway and the discussion is not relevant anymore :)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7168/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1411110855,PR_kwDOAMm_X85A59Gg,7176,Add import ASV benchmark,43316012,closed,0,,,0,2022-10-17T08:11:58Z,2022-10-17T17:47:59Z,2022-10-17T15:25:18Z,COLLABORATOR,,0,pydata/xarray/pulls/7176,related to 
https://github.com/pydata/xarray/issues/6726,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7176/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1393482925,PR_kwDOAMm_X84__h6j,7113,Minor tests improvements,43316012,closed,0,,,0,2022-10-01T17:19:42Z,2022-10-16T13:56:28Z,2022-10-02T15:38:16Z,COLLABORATOR,,0,pydata/xarray/pulls/7113,"- silences some user warning from renaming dims. - fix coverage config","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7113/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1377128403,PR_kwDOAMm_X84_KB9v,7052,Add typing to plot methods,43316012,closed,0,,,32,2022-09-18T17:40:36Z,2022-10-16T13:54:26Z,2022-10-16T09:26:55Z,COLLABORATOR,,0,pydata/xarray/pulls/7052,"- [x] Closes #6949 - [x] Tests added (typing) - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7052/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,,13221727,pull 1347715262,I_kwDOAMm_X85QVIC-,6949,Plot accessors miss static typing,43316012,closed,0,,,0,2022-08-23T10:38:56Z,2022-10-16T09:26:55Z,2022-10-16T09:26:55Z,COLLABORATOR,,,,"### What happened? The plot accessors i.e. `dataarray.plot` of type `_PlotMethods` are missing static typing especially of function attributes. See #6947 for an example. The problem is that many plotting methods are added using hooks via decorators, something that mypy does not understand. ### What did you expect to happen? As a quick fix: type the plot accessors as `_PlotMethods | Any` to avoid false positives in mypy. 
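In code, the quick fix amounts to something like this (a sketch with made-up class names, not xarray's actual accessor):

```python
from typing import Any, Union

class _PlotMethodsSketch:
    # only statically known methods appear here; hook-added ones do not
    def line(self) -> None: ...

class DataArraySketch:
    # Union with Any: attribute access on Any is always accepted by mypy,
    # so dynamically added plot methods stop being false positives
    plot: Union[_PlotMethodsSketch, Any]
```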
Better to either restructure the accessor with static methods instead of hooks or figure out another way of telling static type checkers about these methods. Anyway: mypy should not complain. ### Minimal Complete Verifiable Example ```Python import xarray as xr da = xr.DataArray([[1,2,3], [4,5,6]], dims=[""x"", ""y""]) da.plot.contourf(x=""x"", y=""y"") # mypy complains: # error: ""_PlotMethods"" has no attribute ""contourf"" ``` ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. - [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment On mobile, can edit it later if required. Newest xarray should have this problem, before the accessor was Any.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6949/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1388372090,I_kwDOAMm_X85SwOB6,7094,Align typing of dimension inputs,43316012,open,0,,,5,2022-09-27T20:59:17Z,2022-10-13T18:02:16Z,,COLLABORATOR,,,,"### What is your issue? Currently the input type for ""one or more dims"" is changing from function to function. There are some open PRs that move to `str | Iterable[Hashable]` which allows the use of tuples as dimensions. 
Some changes are still required: - [ ] Accept None in all functions that accept dims as default, this would simplify typing a lot (see https://github.com/pydata/xarray/pull/7048#discussion_r973813607) - [ ] Check if we can always include ellipsis ""..."" in dim arguments (see https://github.com/pydata/xarray/pull/7048#pullrequestreview-1111498309) - [ ] `Iterable[Hashable]` includes sets, which do not preserve the ordering (see https://github.com/pydata/xarray/pull/6971#discussion_r981166670). This means we need to distinguish between the cases where the order matters (constructor, transpose etc.) and where it does not (drop_dims, reductions etc.). Probably this needs to be typed as a `str | Sequence[Hashable]` (a numpy.ndarray is not a Sequence, but who uses this for dimensions anyway?).","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7094/reactions"", ""total_count"": 5, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1318808368,PR_kwDOAMm_X848JE7r,6834,Bump minimum numpy version to 1.20,43316012,closed,0,,,18,2022-07-26T22:21:54Z,2022-10-12T21:49:49Z,2022-10-12T17:08:51Z,COLLABORATOR,,0,pydata/xarray/pulls/6834," - [x] Closes #6818 - [ ] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ Alternative to https://github.com/pydata/xarray/pull/6821","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6834/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1401013957,PR_kwDOAMm_X85AYtU2,7140,Fixes deepcopy of Index,43316012,closed,0,,,0,2022-10-07T11:18:10Z,2022-10-07T21:57:12Z,2022-10-07T21:57:12Z,COLLABORATOR,,0,pydata/xarray/pulls/7140,"Forgot to add this to #7112 Also, correct the date of the 2022.09 release in whats-new, see #7135 ","{""url"": 
""https://api.github.com/repos/pydata/xarray/issues/7140/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1393443839,PR_kwDOAMm_X84__aKC,7112,Support of repr and deepcopy of recursive arrays,43316012,closed,0,,,2,2022-10-01T15:24:40Z,2022-10-07T11:10:32Z,2022-10-06T22:04:01Z,COLLABORATOR,,0,pydata/xarray/pulls/7112," - [x] Closes #7111 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` `xarray.testing.assert_identical` and probably more do not work yet.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7112/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1396832809,PR_kwDOAMm_X85AKhqW,7126,Upload mypy coverage report to codecov,43316012,closed,0,,,1,2022-10-04T20:55:02Z,2022-10-06T21:33:51Z,2022-10-06T20:38:14Z,COLLABORATOR,,0,pydata/xarray/pulls/7126,"Not sure if that is the correct approach (to simply use a mypy flag) but lets see what people think about it. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7126/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1393837094,PR_kwDOAMm_X85AAmD9,7114,Fix typing of backends,43316012,closed,0,,,2,2022-10-02T17:20:56Z,2022-10-06T21:33:39Z,2022-10-06T21:30:01Z,COLLABORATOR,,0,pydata/xarray/pulls/7114,"While adding type hints to `test_backends` I noticed that the `open_dataset` method and the abstract `BackendEntrypoint` were missing stream types as inputs. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7114/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1395053809,PR_kwDOAMm_X85AEpA1,7117,Experimental mypy plugin,43316012,open,0,,,2,2022-10-03T17:07:59Z,2022-10-03T18:53:10Z,,COLLABORATOR,,1,pydata/xarray/pulls/7117,"I was playing around a bit with a mypy plugin and this was the best I could come up with. Unfortunately the mypy documentation about plugins is not very detailed... This plugin makes mypy recognize user-defined accessors. There is a quite severe bug in there (due to my lack of understanding of mypy internals, probably) which makes it work only on the first run, but when you change a line in your code and run mypy again it will crash... (you can delete the cache to make it work one more time again :)) Any chance that a mypy expert can figure this out? haha","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7117/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1386709376,PR_kwDOAMm_X84_o-IX,7089,Fix deepcopy of Variables and DataArrays,43316012,closed,0,,,4,2022-09-26T20:54:16Z,2022-09-29T20:46:01Z,2022-09-29T16:36:51Z,COLLABORATOR,,0,pydata/xarray/pulls/7089,"- [x] Closes #2835 (Even though technically it fixes the DataArray version of the issue) - [x] ~~Tests added ~~ will be done via #7086 - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` related to #2839 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7089/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1386664116,PR_kwDOAMm_X84_o0Jy,7087,Add typing to 
FacetGrid,43316012,closed,0,,,0,2022-09-26T20:16:26Z,2022-09-28T20:15:53Z,2022-09-28T18:30:59Z,COLLABORATOR,,0,pydata/xarray/pulls/7087,Mainly to annoy @Illviljan :),"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7087/reactions"", ""total_count"": 4, ""+1"": 1, ""-1"": 0, ""laugh"": 3, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1389764085,PR_kwDOAMm_X84_zMRw,7102,Exclude typechecking stuff from coverage,43316012,closed,0,,,1,2022-09-28T18:12:39Z,2022-09-28T20:15:52Z,2022-09-28T19:18:54Z,COLLABORATOR,,0,pydata/xarray/pulls/7102,tiny PR that disables coverage on `types.py` and all typing-only imports a.la. `if TYPE_CHECKING: ...`,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7102/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1376479521,PR_kwDOAMm_X84_IJrz,7048,Add Ellipsis typehint to reductions,43316012,closed,0,,,8,2022-09-16T21:15:10Z,2022-09-28T18:02:51Z,2022-09-28T17:10:05Z,COLLABORATOR,,0,pydata/xarray/pulls/7048,"This PR adds the ellipsis typehint to reductions (only where they behave differently from None to reduce overhead). Follow up on https://github.com/pydata/xarray/pull/7017#issuecomment-1243927061 Additionally I was changing a lot of ""one or more dimensions"" typehints to `str | Iterable[Hashable]` (See https://github.com/pydata/xarray/issues/6142). Some code changes were necessary to support this fully. Before several things were not working with actual hashable dimensions that are not strings. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7048/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1385143758,PR_kwDOAMm_X84_j6Bn,7080,Fix `utils.get_axis` with kwargs,43316012,closed,0,,,3,2022-09-25T19:50:15Z,2022-09-28T18:02:18Z,2022-09-28T17:11:16Z,COLLABORATOR,,0,pydata/xarray/pulls/7080,"- [x] Closes #7078 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7080/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1120405560,I_kwDOAMm_X85CyAg4,6229,[Bug]: rename_vars to dimension coordinate does not create an index,43316012,closed,0,,,6,2022-02-01T09:09:50Z,2022-09-27T09:33:42Z,2022-09-27T09:33:42Z,COLLABORATOR,,,,"### What happened? We used `Data{set,Array}.rename{_vars}({coord: dim_coord})` to make a coordinate a dimension coordinate (instead of `set_index`). This results in the coordinate correctly being displayed as a dimension coordinate (with the *) but it does not create an index, such that further operations like `sel` fail with a strange `KeyError`. ### What did you expect to happen? I expect one of two things to be true: 1. `rename{_vars}` does not allow setting dimension coordinates (raises Error and tells you to use set_index) 2. 
`rename{_vars}` checks for this occasion and sets the index correctly ### Minimal Complete Verifiable Example ```python import xarray as xr data = xr.DataArray([5, 6, 7], coords={""c"": (""x"", [1, 2, 3])}, dims=""x"") # # array([5, 6, 7]) # Coordinates: # c (x) int64 1 2 3 # Dimensions without coordinates: x data_renamed = data.rename({""c"": ""x""}) # # array([5, 6, 7]) # Coordinates: # * x (x) int64 1 2 3 data_renamed.indexes # Empty data_renamed.sel(x=2) # KeyError: 'no index found for coordinate x' # if we use set_index it works data_indexed = data.set_index({""x"": ""c""}) # looks the same as data_renamed! # # array([1, 2, 3]) # Coordinates: # * x (x) int64 1 2 3 data_indexed.indexes # x: Int64Index([1, 2, 3], dtype='int64', name='x') ``` ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.49.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.0 libnetcdf: 4.7.4 xarray: 0.20.2 pandas: 1.3.5 numpy: 1.21.5 scipy: 1.7.3 netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.5.1.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.5.1 cartopy: None seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None setuptools: 49.2.1 pip: 22.0.2 conda: None pytest: 6.2.5 IPython: 8.0.0 sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6229/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1368690120,PR_kwDOAMm_X84-uNM2,7017,Add Ellipsis 
typehints,43316012,closed,0,,,3,2022-09-10T17:53:26Z,2022-09-12T15:40:08Z,2022-09-11T13:40:07Z,COLLABORATOR,,0,pydata/xarray/pulls/7017,"This PR adds an `Ellipsis` typehint to some functions. Interestingly mypy did not complain at the tests before, I assume it is because ""..."" is Hashable or something like that? I don't know what to do with reductions, since they also support ellipsis, but it is basically the same as using None. Therefore, I assume it is not necessary to expose this feature. Did I miss any functions where ellipsis is supported? It is hard to look for ""...""... xD","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7017/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1345227910,PR_kwDOAMm_X849gI4A,6939,Improve quantile method docstring + error,43316012,closed,0,,,1,2022-08-20T17:17:32Z,2022-09-10T09:03:05Z,2022-09-05T22:40:07Z,COLLABORATOR,,0,pydata/xarray/pulls/6939," - [x] Closes #6875 - [x] ~~Tests added~~ - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6939/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1362485455,PR_kwDOAMm_X84-Zfn0,6994,Even less warnings in tests,43316012,closed,0,,,2,2022-09-05T21:35:50Z,2022-09-10T09:02:46Z,2022-09-09T05:48:19Z,COLLABORATOR,,0,pydata/xarray/pulls/6994,"This PR removes several warnings from the tests and improves their typing on the way. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6994/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1361262641,PR_kwDOAMm_X84-VY9P,6986,Remove some warnings in tests,43316012,closed,0,,,3,2022-09-04T21:58:57Z,2022-09-05T16:06:35Z,2022-09-05T10:52:45Z,COLLABORATOR,,0,pydata/xarray/pulls/6986,"This PR tries to get rid of several warnings in the tests. I could not get rid of `RuntimeWarning: All-NaN slice encountered` for tests with dask. Does anyone know why that is? pytest.mark.filterwarnings does not seem to capture them...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6986/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1345220697,PR_kwDOAMm_X849gHlT,6938,Fix bug where indexes were changed inplace,43316012,closed,0,,,1,2022-08-20T16:45:22Z,2022-08-22T11:07:46Z,2022-08-22T10:39:54Z,COLLABORATOR,,0,pydata/xarray/pulls/6938," - [x] Closes #6931 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ Some typing on the way :)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6938/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1315959612,PR_kwDOAMm_X847_0k6,6821,Fix numpy 1.20 incompatibility,43316012,closed,0,,,8,2022-07-24T17:10:24Z,2022-08-20T17:01:13Z,2022-07-30T21:11:08Z,COLLABORATOR,,0,pydata/xarray/pulls/6821,"This PR removes the `_SupportsDType` dependency from numpy and introduces its own. 
Closes https://github.com/pydata/xarray/issues/6818","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6821/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1292091752,PR_kwDOAMm_X846vljd,6744,Fix `DataArrayRolling.__iter__` with `center=True`,43316012,closed,0,,,8,2022-07-02T16:36:00Z,2022-07-18T15:31:54Z,2022-07-14T17:41:01Z,COLLABORATOR,,0,pydata/xarray/pulls/6744,"- [x] Closes #6739 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ I have taken the freedom to move all rolling related tests into their own testing module. https://github.com/pydata/xarray/pull/6730 should then take care of the (by now) copy-pasted `da` and `ds` fixtures.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6744/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1261153511,PR_kwDOAMm_X845IYF5,6665,Update DataArray.rename + docu,43316012,closed,0,,,16,2022-06-05T20:32:57Z,2022-07-18T15:31:38Z,2022-07-18T14:48:02Z,COLLABORATOR,,0,pydata/xarray/pulls/6665," - [x] Closes #5458 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` On the way, I have added the support for changing the name and dims/coords in the same rename call. 
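For context, the two existing call styles look roughly like this (a sketch of current behavior; the combined call added here builds on them):

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims='x', name='old')

# renaming a dimension via a mapping
assert da.rename({'x': 'lon'}).dims == ('lon',)

# renaming the array itself via a string
assert da.rename('new').name == 'new'
```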
Also took the freedom to fix some unrelated typing problems.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6665/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 917034151,MDU6SXNzdWU5MTcwMzQxNTE=,5458,DataArray.rename docu missing renaming of dimensions,43316012,closed,0,,,0,2021-06-10T07:57:11Z,2022-07-18T14:48:02Z,2022-07-18T14:48:02Z,COLLABORATOR,,,,"**What happened**: http://xarray.pydata.org/en/stable/generated/xarray.DataArray.rename.html#xarray.DataArray.rename states that: > Returns a new DataArray with renamed coordinates or a new name. **What you expected to happen**: It should state: ""Returns a new DataArray with renamed coordinates, dimensions or a new name."" Since it definitely can do that. **Minimal example** ``` xr.DataArray([1, 2, 3]).rename({""dim_0"": ""new""}) ``` **Further** While at it: Dataset.rename als does not mention explicitly that you can rename coordinates.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5458/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1292284929,I_kwDOAMm_X85NBrQB,6749,What should `Dataset.count` return for missing dims?,43316012,open,0,,,5,2022-07-03T11:49:12Z,2022-07-14T17:27:23Z,,COLLABORATOR,,,,"### What is your issue? When using a dataset with multiple variables and using `Dataset.count(""x"")` it will return ones for variables that are missing dimension ""x"", e.g.: ```python import xarray as xr ds = xr.Dataset({""a"": (""x"", [1, 2, 3]), ""b"": (""y"", [4, 5])}) ds.count(""x"") # returns: # # Dimensions: (y: 2) # Dimensions without coordinates: y # Data variables: # a int32 3 # b (y) int32 1 1 ``` I can understand why ""1"" can be a valid answer, but the result is probably a bit philosophical. 
For my usecase I would like it to return an array of `ds.sizes[""x""]` / 0. I think this is also a valid return value, considering the broadcasting rules, where the size of the missing dimension is actually known in the dataset. Maybe one could make this behavior adjustable with a kwarg, e.g. ""missing_dim_value: {int, ""size""}, default 1. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6749/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1302461674,PR_kwDOAMm_X847SR33,6777,Move Rolling tests to their own testing module,43316012,closed,0,,,1,2022-07-12T18:20:58Z,2022-07-12T18:48:38Z,2022-07-12T18:46:32Z,COLLABORATOR,,0,pydata/xarray/pulls/6777,"This PR moves all DataArrayRolling and DatasetRolling tests to their own module. See request https://github.com/pydata/xarray/pull/6744#issuecomment-1182169308 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6777/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1292291988,PR_kwDOAMm_X846wMD8,6750,Add import change to whats-new,43316012,closed,0,,,0,2022-07-03T12:19:01Z,2022-07-12T18:09:40Z,2022-07-06T03:06:31Z,COLLABORATOR,,0,pydata/xarray/pulls/6750,"- [x] Closes #6741 - [x] ~~Tests added~~ - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6750/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1275262097,PR_kwDOAMm_X8453Zo1,6702,Typing of GroupBy & Co.,43316012,closed,0,,,2,2022-06-17T16:50:43Z,2022-07-03T13:32:30Z,2022-06-29T20:06:04Z,COLLABORATOR,,0,pydata/xarray/pulls/6702,"This PR adds typing 
support for groupby, coarsen, rolling, weighted and resample. There are several open points: 1. Coarsen is missing type annotations for reductions like `max`, they get added dynamically. 2. The `Groupby` group-key type is quite wide. Does anyone have any idea on how to type it correctly? For now it is still Any. 3. Several function signatures were inconsistent between the DataArray and Dataset versions (looking at you: `map`). I took the freedom to align them (required for mypy), hopefully this does not break too much. 4. I was moving the generation functions from `DataWithCoords` to `DataArray` and `Dataset`, which adds some copy-paste of code (I tried to keep it minimal) but was unavoidable for typing support. (Adds the bonus that the corresponding modules are now only imported when required).","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6702/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1275993190,PR_kwDOAMm_X8455vDs,6706,Add `Dataset.dtypes` property,43316012,closed,0,,,7,2022-06-19T08:40:25Z,2022-06-26T08:08:21Z,2022-06-22T16:01:45Z,COLLABORATOR,,0,pydata/xarray/pulls/6706,"- [x] Closes #6714 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst` Currently returns a Mapping from variable names to dtypes for ALL variables in the Dataset, including coordinates. Possibly better to only return data_vars dtypes? Give me your thoughts on that, it is easy to change. 
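For illustration, the current behavior can be emulated like this (a sketch; `dtypes` here is a plain dict standing in for the proposed property):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {'a': ('x', np.arange(3, dtype='int64')), 'b': ('x', np.ones(3))},
    coords={'x': np.arange(3)},
)

# mapping from variable name to dtype, covering ALL variables
# (data_vars plus coordinates); restricting it to data_vars would
# simply iterate ds.data_vars instead
dtypes = {name: var.dtype for name, var in ds.variables.items()}
print(dtypes)
```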
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6706/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1278661854,PR_kwDOAMm_X846CnXr,6710,Expanduser (~) for open_dataset with dask,43316012,closed,0,,,2,2022-06-21T15:58:34Z,2022-06-26T08:08:04Z,2022-06-25T23:44:56Z,COLLABORATOR,,0,pydata/xarray/pulls/6710," - [x] Closes #6707 - [x] ~~Tests added~~ - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I don't really know how to test this... Is it ok to leave it untested?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6710/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1275747776,I_kwDOAMm_X85MCl3A,6703,"Add coarsen, rolling and weighted to generate_reductions",43316012,open,0,,,1,2022-06-18T09:49:22Z,2022-06-18T16:04:15Z,,COLLABORATOR,,,,"### Is your feature request related to a problem? Coarsen reductions are currently added dynamically which is not very useful for typing. This is a follow-up to @Illviljan in https://github.com/pydata/xarray/pull/6702#discussion_r900700532_ Same goes for Weighted. And similar for Rolling (not sure if it is exactly the same though?) ### Describe the solution you'd like Extend the generate_reductions script to include `DataArrayCoarsen` and `DatasetCoarsen`. 
Once finished: use type checking in all test_coarsen tests.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6703/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1268697316,PR_kwDOAMm_X845hhHE,6690,Fix Dataset.where with drop=True and mixed dims,43316012,closed,0,,,3,2022-06-12T20:47:05Z,2022-06-13T18:06:44Z,2022-06-12T22:06:51Z,COLLABORATOR,,0,pydata/xarray/pulls/6690,"- [x] Closes #6227 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6690/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1120378011,I_kwDOAMm_X85Cx5yb,6227,"[Bug]: Dataset.where(x, drop=True) behaves inconsistently",43316012,closed,0,,,0,2022-02-01T08:40:30Z,2022-06-12T22:06:51Z,2022-06-12T22:06:51Z,COLLABORATOR,,,,"### What happened? I tried to reduce some dimensions using where (sel did not work in this case) and shorten the dimensions with ""drop=True"". This works fine on DataArrays and Datasets with only a single dimension but fails as soon as you have a Dataset with two dimensions on different variables. The dimensions are left untouched and you have NaNs in the data, just as if you were using ""drop=False"" (see example). I am actually not sure what the expected behavior is, maybe I am wrong and it is correct due to some broadcasting rules? ### What did you expect to happen? I expected that relevant dims are shortened. If, after `ds.where` with ""drop=False"", all variables have some NaNs along a dimension, then with ""drop=True"" I expect these dimensions to be shortened and the NaNs removed. 
### Minimal Complete Verifiable Example ```python import xarray as xr # this works ds = xr.Dataset({""a"": (""x"", [1, 2 ,3])}) ds.where(ds > 2, drop=True) # returns: # # Dimensions: (x: 1) # Dimensions without coordinates: x # Data variables: # a (x) float64 3.0 # this doesn't ds = xr.Dataset({""a"": (""x"", [1, 2 ,3]), ""b"": (""y"", [2, 3, 4])}) ds.where(ds > 2, drop=True) # returns: # # Dimensions: (x: 3, y: 3) # Dimensions without coordinates: x, y # Data variables: # a (x) float64 nan nan 3.0 # b (y) float64 nan 3.0 4.0 ``` ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.49.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.0 libnetcdf: 4.7.4 xarray: 0.20.2 pandas: 1.3.5 numpy: 1.21.5 scipy: 1.7.3 netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.5.1.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.5.1 cartopy: None seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None setuptools: 49.2.1 pip: 22.0.2 conda: None pytest: 6.2.5 IPython: 8.0.0 sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6227/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1243123406,PR_kwDOAMm_X844MH4t,6624,CFTime support for polyval,43316012,closed,0,,,7,2022-05-20T13:04:46Z,2022-06-04T10:18:14Z,2022-05-31T17:16:04Z,COLLABORATOR,,0,pydata/xarray/pulls/6624," - [x] Closes #6623 - [x] Tests added ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6624/reactions"", 
""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1251511305,PR_kwDOAMm_X844nz8j,6651,Typing support for custom backends,43316012,closed,0,,,4,2022-05-28T07:28:41Z,2022-06-04T10:17:55Z,2022-05-28T10:29:16Z,COLLABORATOR,,0,pydata/xarray/pulls/6651," - [x] Closes #6632 So far we have not found a good way of typing dynamically added custom backends (installed via pip). So the only fallback option is to allow str. I have decided to leave the Literal as type, since it gives the user a nice list of supported backends in the editor. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6651/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1260041574,PR_kwDOAMm_X845E39t,6661,Typing of Dataset,43316012,closed,0,,,4,2022-06-03T15:22:24Z,2022-06-04T10:17:48Z,2022-06-04T04:26:03Z,COLLABORATOR,,0,pydata/xarray/pulls/6661,"- [x] Closes #5945 - [x] Tests added Feel free to wait with merging after the 2022.05.0.dev0 release :) - This PR adds typing to all Dataset methods (only the ones in Dataset directly and DataWithCoords). - On the way it fixes several typing issues and adds small code changes for more consistent typing (hopefully does not break anything). - All test_dataset tests are now typed. - As usual there are still some mypy bugs open that prevent all typing changes to be final (especially https://github.com/python/mypy/issues/12846). I choose to remove the TypeVars in these occasions to prevent false positives on the user side. - Sorry for updating so many files at once, mostly these are the changes of adding `from __future__ import annotations` and automatic fixes from pyupdate. If anyone has some typing expertise: I think that the way align is typed is wrong. 
The current implementation only works for aligning a sequence of objects of the same type, but not mixed DataArrays and Datasets. In some places I had to add some ""#type: ignore""s... I should probably open an issue for that. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6661/reactions"", ""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 3, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1248463174,PR_kwDOAMm_X844drcZ,6637,Improved DataArray typing,43316012,closed,0,,,5,2022-05-25T17:54:54Z,2022-05-29T14:02:22Z,2022-05-27T16:03:10Z,COLLABORATOR,,0,pydata/xarray/pulls/6637,"- [x] Tests added This PR improves typing of `DataArray` class methods. Main change is that `T_DataArray` is used whenever possible. I have left everything untouched that would cause problems with mypy (there are some mypy bugs) or require larger typing efforts in other areas. Main problems are `argmin` and `argmax`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6637/reactions"", ""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 3, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1245726154,I_kwDOAMm_X85KQEXK,6632,Literal type of engine argument incompatible with custom backends,43316012,closed,0,,,5,2022-05-23T21:40:14Z,2022-05-28T10:29:16Z,2022-05-28T10:29:16Z,COLLABORATOR,,,,"### What is your issue? In the recent typing improvements the `engine` argument for `open_dataset` was changed from Str to a Literal of xarray's internal engines. This will cause problems for all third party backend plugins. We have several possibilities: 1. I don't know if there is a way to know installed backends at type checking time. Then we could add this support. (I doubt this is possible seeing how dynamic these imports are) 2. Is it possible for these plugins to tell type checkers that their engine is valid, i.e. change the type signature of xarray's function?
Then we should add a how-to in the docu. 3. Else we should probably revert to using Str. Any typing experts here that could help?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6632/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1244978704,PR_kwDOAMm_X844SHU7,6630,"fix {full,zeros,ones}_like overloads",43316012,closed,0,,,0,2022-05-23T10:57:36Z,2022-05-27T06:32:37Z,2022-05-24T04:41:22Z,COLLABORATOR,,0,pydata/xarray/pulls/6630," - [x] Closes #6628 - [x] Tests added forgot to add defaults to the dtype argument, so the typing defaulted back to the Union definition...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6630/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1249902974,PR_kwDOAMm_X844idGc,6641,Typing of `str` and `dt` accessors,43316012,closed,0,,,1,2022-05-26T18:25:44Z,2022-05-27T06:32:33Z,2022-05-26T20:12:23Z,COLLABORATOR,,0,pydata/xarray/pulls/6641,"This is initial try to get type hints for `str` and `dt` accessors. I think there is no way of accessing the class at class scope (or is there?), so I had to use plain ""DataArray"" as the generic type of the accessors. I think that is acceptable for now. The hack of `DatetimeAccessor` vs `TimedeltaAccessor` in the `CombinedDatetimelikeAccessor.__new__` is something static typing can handle, so at type-checking time all properties of both accessors are available. If someone has a better idea? 
Maybe a common interface class for accessors would be also beneficial?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6641/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,,13221727,pull 1244082778,PR_kwDOAMm_X844PPS5,6626,Mypy badge,43316012,closed,0,,,1,2022-05-21T21:12:05Z,2022-05-22T13:56:45Z,2022-05-21T22:59:52Z,COLLABORATOR,,0,pydata/xarray/pulls/6626,"This PR adds a mypy badge to the README. Also, nicer alt texts for all other badges. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6626/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1150618439,I_kwDOAMm_X85ElQtH,6306,Assigning to dataset with missing dim raises ValueError,43316012,open,0,,,1,2022-02-25T16:08:04Z,2022-05-21T20:35:52Z,,COLLABORATOR,,,,"### What happened? I tried to assign values to a dataset with a selector-dict where a variable is missing the dim from the selector-dict. This raises a ValueError. ### What did you expect to happen? I expect that assigning works the same as selecting and it will ignore the missing dims. 
### Minimal Complete Verifiable Example ```Python import xarray as xr ds = xr.Dataset({""a"": (""x"", [1, 2, 3]), ""b"": (""y"", [4, 5])}) ds[{""x"": 1}] # this works and returns: # # Dimensions: (y: 2) # Dimensions without coordinates: y # Data variables: # a int64 2 # b (y) int64 4 5 ds[{""x"": 1}] = 1 # this fails and raises a ValueError # ValueError: Variable 'b': indexer {'x': 1} not available ``` ### Relevant log output ```Python Traceback (most recent call last): File ""xarray/core/dataset.py"", line 1591, in _setitem_check var_k = var[key] File ""xarray/core/dataarray.py"", line 740, in __getitem__ return self.isel(indexers=self._item_key_to_dict(key)) File ""xarray/core/dataarray.py"", line 1204, in isel variable = self._variable.isel(indexers, missing_dims=missing_dims) File ""xarray/core/variable.py"", line 1181, in isel indexers = drop_dims_from_indexers(indexers, self.dims, missing_dims) File ""xarray/core/utils.py"", line 834, in drop_dims_from_indexers raise ValueError( ValueError: Dimensions {'x'} do not exist. Expected one or more of ('y',) The above exception was the direct cause of the following exception: Traceback (most recent call last): File """", line 1, in File ""xarray/core/dataset.py"", line 1521, in __setitem__ value = self._setitem_check(key, value) File ""xarray/core/dataset.py"", line 1593, in _setitem_check raise ValueError( ValueError: Variable 'b': indexer {'x': 1} not available ``` ### Anything else we need to know? 
_No response_ ### Environment INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.49.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.0 libnetcdf: 4.7.4 xarray: 0.21.1 pandas: 1.4.0 numpy: 1.21.5 scipy: 1.7.3 netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.5.1.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.5.1 cartopy: None seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None setuptools: 49.2.1 pip: 22.0.3 conda: None pytest: 6.2.5 IPython: 8.0.0 sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6306/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1236356209,PR_kwDOAMm_X8431neg,6612,Typing for open_dataset/array/mfdataset and to_netcdf/zarr,43316012,closed,0,,,5,2022-05-15T18:09:15Z,2022-05-19T16:08:07Z,2022-05-17T19:32:01Z,COLLABORATOR,,0,pydata/xarray/pulls/6612," - [x] Closes #5382 Mypy is not able to compute the overloads of to_netcdf properly (too many Unions). 
I had to add some `# type: ignore`s. I am not sure about the types of * `concat_dim` * `combine_attrs`: the `Callable` part is still with Anys","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6612/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1236316818,PR_kwDOAMm_X8431gAK,6611,"{full,zeros,ones}_like typing",43316012,closed,0,,,2,2022-05-15T15:18:55Z,2022-05-16T18:10:05Z,2022-05-16T17:42:25Z,COLLABORATOR,,0,pydata/xarray/pulls/6611,"(partial) typing for functions `full_like`, `zeros_like`, `ones_like`. I could not figure out how to properly use TypeVars so many things are ""hardcoded"" with overloads. I have added a `DTypeLikeSave` to `npcompat`, not sure that this file is supposed to be edited. Problem1: `TypeVar[""T"", Dataset, DataArray, Variable]` can only be one of these three, but never with `Union[Dataset, DataArray]` which is used in several other places in xarray. Problem2: The official mypy support says: use `TypeVar[""T"", bound=Union[Dataset, DataArray, Variable]` but then the `isinstance(obj, Dataset)` could not be correctly resolved (is that a mypy issue?). So if anyone can get it to work with TypeVars, feel free to change it. 
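A minimal runtime sketch of the Problem2 variant (hypothetical helper name; this is exactly the pattern mypy fails to resolve):

```python
from typing import TypeVar, Union

import xarray as xr

T_Obj = TypeVar('T_Obj', bound=Union[xr.Dataset, xr.DataArray, xr.Variable])

def zeros_like_sketch(obj: T_Obj) -> T_Obj:
    # the isinstance narrowing below is what mypy cannot resolve
    # against the bound TypeVar
    if isinstance(obj, xr.Dataset):
        return obj.map(lambda v: v * 0)
    return obj * 0  # DataArray and Variable support arithmetic directly

print(zeros_like_sketch(xr.DataArray([1, 2, 3])).values.tolist())  # [0, 0, 0]
```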
:)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6611/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1234229210,PR_kwDOAMm_X843u7hK,6601,change polyval dim ordering,43316012,closed,0,,,1,2022-05-12T16:30:44Z,2022-05-16T18:10:03Z,2022-05-12T19:01:59Z,COLLABORATOR,,0,pydata/xarray/pulls/6601," - [x] Closes #6600 - [x] Tests added","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6601/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1234135124,PR_kwDOAMm_X843unh4,6599,re-add timedelta support for polyval,43316012,closed,0,,,1,2022-05-12T15:12:41Z,2022-05-12T16:27:01Z,2022-05-12T15:43:29Z,COLLABORATOR,,0,pydata/xarray/pulls/6599," - [x] Closes #6597 - [x] Tests added","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6599/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1233058314,PR_kwDOAMm_X843rEkU,6593,Fix polyval overloads,43316012,closed,0,,,2,2022-05-11T18:54:54Z,2022-05-12T14:50:14Z,2022-05-11T19:42:41Z,COLLABORATOR,,0,pydata/xarray/pulls/6593,"Attempt to fix the typing issues in `xr.polyval`. Some problems are still occuring and require a `type: ignore`. 
They seem more like mypy issues to me.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6593/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1228977960,PR_kwDOAMm_X843dwXx,6579,Fix Dataset/DataArray.isel with drop=True and scalar DataArray indexes,43316012,closed,0,,,1,2022-05-08T20:17:04Z,2022-05-11T17:19:53Z,2022-05-10T06:18:19Z,COLLABORATOR,,0,pydata/xarray/pulls/6579," - [x] Closes #6554 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Additionally I have added new literal types for error handling (Only applied to functions related to isel such that mypy stops complaining).","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6579/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1222103599,I_kwDOAMm_X85I19Iv,6554,isel with drop=True does not drop coordinates if using scalar DataArray as indexer,43316012,closed,0,,,2,2022-05-01T10:14:37Z,2022-05-10T06:18:19Z,2022-05-10T06:18:19Z,COLLABORATOR,,,,"### What happened? When using `DataArray/Dataset.isel` with `drop=True` and a scalar DataArray as indexer (see example), resulting scalar coordinates do not get dropped. When using an integer the behavior is as expected. ### What did you expect to happen? I expect that using a scalar DataArray behaves the same as an integer. ### Minimal Complete Verifiable Example ```Python import xarray as xr da = xr.DataArray([1, 2, 3], dims=""x"", coords={""k"": (""x"", [0, 1, 2])}) # # array([1, 2, 3]) # Coordinates: # k (x) int32 0 1 2 da.isel({""x"": 1}, drop=True) # works # # array(2) da.isel({""x"": xr.DataArray(1)}, drop=True) # does not drop ""k"" coordinate # # array(2) # Coordinates: # k int32 1 ``` ### Relevant log output _No response_ ### Anything else we need to know? 
_No response_ ### Environment
INSTALLED VERSIONS ------------------ commit: 4fbca23a9fd8458ec8f917dd0e54656925503e90 python: 3.9.6 | packaged by conda-forge | (default, Jul 6 2021, 08:46:02) [MSC v.1916 64 bit (AMD64)] python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('de_DE', 'cp1252') libhdf5: 1.10.6 libnetcdf: 4.7.4 xarray: 0.18.2.dev76+g3a7e7ca2.d20210706 pandas: 1.3.0 numpy: 1.21.0 scipy: 1.7.0 netCDF4: 1.5.6 pydap: installed h5netcdf: 0.11.0 h5py: 3.3.0 Nio: None zarr: 2.8.3 cftime: 1.5.0 nc_time_axis: 1.3.1 PseudoNetCDF: installed cfgrib: None iris: 2.4.0 bottleneck: 1.3.2 dask: 2021.06.2 distributed: 2021.06.2 matplotlib: 3.4.2 cartopy: 0.19.0.post1 seaborn: 0.11.1 numbagg: 0.2.1 fsspec: 2021.06.1 cupy: None pint: 0.17 sparse: 0.12.0 setuptools: 49.6.0.post20210108 pip: 21.3.1 conda: None pytest: 6.2.4 IPython: None sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6554/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1221848774,PR_kwDOAMm_X843HV4r,6548,polyval: Use Horner's algorithm + support chunked inputs,43316012,closed,0,,,13,2022-04-30T14:50:53Z,2022-05-05T19:33:53Z,2022-05-05T19:15:58Z,COLLABORATOR,,0,pydata/xarray/pulls/6548," - [x] Closes #6526, #6411 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6548/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1217543476,I_kwDOAMm_X85Ikj00,6526,xr.polyval first arg requires name attribute,43316012,closed,0,,,2,2022-04-27T15:47:02Z,2022-05-05T19:15:58Z,2022-05-05T19:15:58Z,COLLABORATOR,,,,"### What happened? I have some polynomial coefficients and want to evaluate them at some values using `xr.polyval`. As described in the docstring/docu I created a 1D coordinate DataArray and pass it to `xr.polyval` but it raises a KeyError (see example). ### What did you expect to happen? I expected that the polynomial would be evaluated at the given points. 
### Minimal Complete Verifiable Example ```Python import xarray as xr coeffs = xr.DataArray([1, 2, 3], dims=""degree"") # With a ""handmade"" coordinate it fails: coord = xr.DataArray([0, 1, 2], dims=""x"") xr.polyval(coord, coeffs) # raises: # Traceback (most recent call last): # File """", line 1, in # File ""xarray/core/computation.py"", line 1847, in polyval # x = get_clean_interp_index(coord, coord.name, strict=False) # File ""xarray/core/missing.py"", line 252, in get_clean_interp_index # index = arr.get_index(dim) # File ""xarray/core/common.py"", line 404, in get_index # raise KeyError(key) # KeyError: None # If one adds a name to the coord that is called like the dimension: coord2 = xr.DataArray([0, 1, 2], dims=""x"", name=""x"") xr.polyval(coord2, coeffs) # works ``` ### Relevant log output _No response_ ### Anything else we need to know? I assume that the ""standard"" workflow is to obtain the `coord` argument from an existing DataArrays coordinate, where the name would be correctly set already. However, that is not clear from the description, and also prevents my ""manual"" workflow. It could be that the problem will be solved by replacing the coord DataArray argument by an explicit Index in the future. ### Environment
INSTALLED VERSIONS ------------------ commit: None python: 3.9.10 (main, Mar 15 2022, 15:56:56) [GCC 7.5.0] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.49.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.0 libnetcdf: 4.7.4 xarray: 2022.3.0 pandas: 1.4.2 numpy: 1.22.3 scipy: None netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.6.0 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.5.1 cartopy: 0.20.2 seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None setuptools: 58.1.0 pip: 22.0.4 conda: None pytest: None IPython: 8.2.0 sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6526/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1222215528,I_kwDOAMm_X85I2Ydo,6555,sortby with ascending=False should create an index,43316012,closed,0,,,4,2022-05-01T16:57:51Z,2022-05-01T22:17:50Z,2022-05-01T22:17:50Z,COLLABORATOR,,,,"### Is your feature request related to a problem? When using `sortby` with `ascending=False` on a DataArray/Dataset **without** an explicit index, the data gets correctly reversed, but it is not possible to tell anymore which ordering the data has. If an explicit index (like [0, 1, 2]) exists, it gets correctly reordered and allowes correct aligning. ### Describe the solution you'd like For consistency with aligning xarray should create a new index that indicates that the data has been reordered, i.e. [2, 1, 0]. Only downside: this will break code that relies on non-existent indexes. ### Describe alternatives you've considered _No response_ ### Additional context _No response_","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6555/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1221885425,I_kwDOAMm_X85I1H3x,6549,Improved Dataset broadcasting,43316012,open,0,,,3,2022-04-30T17:51:37Z,2022-05-01T14:37:43Z,,COLLABORATOR,,,,"### Is your feature request related to a problem? I am a bit puzzled about how xarrays is broadcasting Datasets. It seems to always add all dimensions to all variables. Is this what you want in general? 
See this example: ```python import xarray as xr da = xr.DataArray([[1, 2, 3]], dims=(""x"", ""y"")) # # array([[1, 2, 3]]) ds = xr.Dataset({""a"": (""x"", [1]), ""b"": (""z"", [2, 3])}) # # Dimensions: (x: 1, z: 2) # Dimensions without coordinates: x, z # Data variables: # a (x) int32 1 # b (z) int32 2 3 ds.broadcast_like(da) # returns: # # Dimensions: (x: 1, y: 3, z: 2) # Dimensions without coordinates: x, y, z # Data variables: # a (x, y, z) int32 1 1 1 1 1 1 # b (x, y, z) int32 2 3 2 3 2 3 # I think it should return: # # Dimensions: (x: 1, y: 3, z: 2) # Dimensions without coordinates: x, y, z # Data variables: # a (x, y) int32 1 1 1 # notice here without ""z"" dim # b (x, y, z) int32 2 3 2 3 2 3 ``` ### Describe the solution you'd like I would like broadcasting to behave the same way as e.g. a simple addition. In the example above, `da + ds` produces the dimensions that I want. ### Describe alternatives you've considered `ds + xr.zeros_like(da)` works, but it seems more like a ""dirty hack"". ### Additional context Maybe one can add an option to broadcasting that controls this behavior?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6549/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1155321209,I_kwDOAMm_X85E3M15,6313,groupby on array with multiindex renames indices,43316012,closed,0,,,1,2022-03-01T13:08:30Z,2022-03-17T17:11:44Z,2022-03-17T17:11:44Z,COLLABORATOR,,,,"### What happened? When grouping and reducing an array or dataset over a multi-index, the coordinates that make up the multi-index get renamed to ""{name_of_multiindex}\_level\_{i}"". It only works correctly when the MultiIndex is a ""homogeneous grid"", i.e. as obtained by stacking. ### What did you expect to happen? I expect that all coordinates keep their initial names.
### Minimal Complete Verifiable Example ```Python import xarray as xr # this works: d = xr.DataArray(range(4), dims=""t"", coords={""x"": (""t"", [0, 0, 1, 1]), ""y"": (""t"", [0, 1, 0, 1])}) dd = d.set_index({""t"": [""x"", ""y""]}) # returns # # array([0, 1, 2, 3]) # Coordinates: # * t (t) MultiIndex # - x (t) int64 0 0 1 1 # - y (t) int64 0 1 0 1 dd.groupby(""t"").mean(...) # returns # # array([0., 1., 2., 3.]) # Coordinates: # * t (t) MultiIndex # - x (t) int64 0 0 1 1 # - y (t) int64 0 1 0 1 # this does not work d2 = xr.DataArray(range(6), dims=""t"", coords={""x"": (""t"", [0, 0, 1, 1, 0, 1]), ""y"": (""t"", [0, 1, 0, 1, 0, 0])}) dd2 = d2.set_index({""t"": [""x"", ""y""]}) # returns # # array([0, 1, 2, 3, 4, 5]) # Coordinates: # * t (t) MultiIndex # - x (t) int64 0 0 1 1 0 1 # - y (t) int64 0 1 0 1 0 0 dd2.groupby(""t"").mean(...) # returns # # array([2. , 1. , 3.5, 3. ]) # Coordinates: # * t (t) MultiIndex # - t_level_0 (t) int64 0 0 1 1 # - t_level_1 (t) int64 0 1 0 1 ``` ### Relevant log output _No response_ ### Anything else we need to know? 
_No response_ ### Environment INSTALLED VERSIONS ------------------ commit: None python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] python-bits: 64 OS: Linux OS-release: 3.10.0-1160.49.1.el7.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.0 libnetcdf: 4.7.4 xarray: 0.21.1 pandas: 1.4.0 numpy: 1.21.5 scipy: 1.7.3 netCDF4: 1.5.8 pydap: None h5netcdf: None h5py: None Nio: None zarr: None cftime: 1.5.1.1 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.5.1 cartopy: None seaborn: None numbagg: None fsspec: None cupy: None pint: None sparse: None setuptools: 49.2.1 pip: 22.0.3 conda: None pytest: 6.2.5 IPython: 8.0.0 sphinx: None","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6313/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 932677183,MDU6SXNzdWU5MzI2NzcxODM=,5550,Dataset.transpose support for missing_dims,43316012,closed,0,,,6,2021-06-29T13:32:37Z,2021-07-17T21:02:59Z,2021-07-17T21:02:59Z,COLLABORATOR,,,,"**Is your feature request related to a problem? Please describe.** I have a dataset where I do not know which of two dimensions (let's call them `a` and `b`) exists in this dataset (so it has either dims (""a"", ""other"") or (""b"", ""other"")). I would like to make sure that this dimension comes first using `transpose`, but currently this is only possible using `if` or `try` statements. Just using `ds.transpose(""a"", ""b"", ""other"")` raises a `ValueError arguments to transpose XXX must be permuted dataset dimensions YYY`. **Describe the solution you'd like** It would be nice if I could just use `ds.transpose(""a"", ""b"", ""other"", missing_dims=""ignore"")` similar to how `DataArray.transpose` handles it.
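A minimal sketch of the asymmetry (a hypothetical illustration, not taken from the issue; the toy dataset with only `b` and `other` is made up): `DataArray.transpose` already accepts `missing_dims`, while for a Dataset the `try`/`except` below is the kind of workaround meant above:

```python
import xarray as xr

da = xr.DataArray([[1, 2, 3]], dims=('b', 'other'))

# DataArray.transpose tolerates absent dims:
out = da.transpose('a', 'b', 'other', missing_dims='ignore')
# out.dims == ('b', 'other')

# Dataset.transpose raises on unknown dims by default,
# forcing a workaround:
ds = xr.Dataset({'v': (('b', 'other'), [[1, 2, 3]])})
try:
    ds = ds.transpose('a', 'b', 'other')
except ValueError:
    ds = ds.transpose('b', 'other')
```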
**Describe alternatives you've considered** Currently I'm also using `ds.map(lambda x: x.transpose(""a"", ""b"", ""other"", missing_dims=""ignore""))`, which could (maybe?) replace the current implementation of transpose. While at it, `transpose_coords` could also be exposed in `Dataset.transpose`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5550/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
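The `sortby` behavior described in issue 6555 above can be sketched as follows (a hypothetical illustration; the coordinate name `c` and its values are made up, and index details may differ across xarray versions):

```python
import xarray as xr

# 'x' has no index; 'c' is a plain coordinate we sort by
da = xr.DataArray([1, 2, 3], dims='x', coords={'c': ('x', [30, 20, 10])})

asc = da.sortby('c')                    # data becomes [3, 2, 1]
desc = da.sortby('c', ascending=False)  # data stays [1, 2, 3]

# Nothing on 'x' records that 'asc' was reordered:
print('x' in asc.indexes)  # False
```

Because no index on `x` is created, `asc` and `desc` align as if they were in the same order, which is the inconsistency the issue asks to fix.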