id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 1250939008,I_kwDOAMm_X85Kj9CA,6646,`dim` vs `dims`,5635139,closed,0,,,4,2022-05-27T16:15:02Z,2024-04-29T18:24:56Z,2024-04-29T18:24:56Z,MEMBER,,,,"### What is your issue? I've recently been hit with this when experimenting with `xr.dot` and `xr.corr` — `xr.dot` takes `dims`, and `xr.cov` takes `dim`. Because they each take multiple arrays as positional args, kwargs are more conventional. Should we standardize on one of these?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6646/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 2126375172,I_kwDOAMm_X85-vekE,8726,PRs requiring approval & merging main?,5635139,closed,0,,,4,2024-02-09T02:35:58Z,2024-02-09T18:23:52Z,2024-02-09T18:21:59Z,MEMBER,,,,"### What is your issue? Sorry I haven't been on the calls at all recently (unfortunately the schedule is difficult for me). Maybe this was discussed there?  PRs now seem to require a separate approval prior to merging. Is there an upside to this? Is there any difference between those who can approve and those who can merge? Otherwise it just seems like more clicking. PRs also now seem to require merging the latest main prior to merging? I get there's some theoretical value to this, because changes can semantically conflict with each other. But it's extremely rare that this actually happens (can we point to cases?), and it limits the immediacy & throughput of PRs. If the bad outcome does ever happen, we find out quickly when main tests fail and can revert. (fwiw I wrote a few principles around this down a while ago [here](https://prql-lang.org/book/project/contributing/development.html#merges); those are much stronger than what I'm suggesting in this issue though)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8726/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 1923431725,I_kwDOAMm_X85ypT0t,8264,Improve error messages,5635139,open,0,,,4,2023-10-03T06:42:57Z,2023-10-24T18:40:04Z,,MEMBER,,,,"### Is your feature request related to a problem? Coming back to xarray, and using it based on what I remember from a year ago or so, means I make lots of mistakes. I've also been using it outside of a repl, where error messages are more important, given I can't explore a dataset inline. Some of the error messages could be _much_ more helpful. Take one example: ``` xarray.core.merge.MergeError: conflicting values for variable 'date' on objects to be combined. You can skip this check by specifying compat='override'. ``` The second sentence is nice. But the first could be give us much more information: - Which variables conflict? I'm merging four objects, so would be so helpful to know which are causing the issue. - What is the conflict? Is one a superset and I can `join=...`? Are they off by 1 or are they completely different types? - Our `testing.assert_equal` produces pretty nice errors, as a comparison Having these good is really useful, lets folks stay in the flow while they're working, and it signals that we're a well-built, refined library. 
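For reference, a minimal sketch (data entirely invented) of the kind of call that produces the terse message quoted above: two objects whose non-index variable `date` disagrees where both values are non-null.

```python
import xarray as xr

# Invented data, purely to reproduce the message quoted above
a = xr.Dataset({'date': ('x', [1, 2])})
b = xr.Dataset({'date': ('x', [1, 3])})

# With the default compat='no_conflicts', the unequal non-null values raise:
# MergeError: conflicting values for variable 'date' on objects to be combined.
xr.merge([a, b])
```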
### Describe the solution you'd like I'm not sure the best way to surface the issues — error messages make for less legible contributions than features or bug fixes, and the primary audience for good error messages is often the opposite of those actively developing the library. They're also more difficult to manage as GH issues — there could be scores of marginal issues which would often be out of date. One thing we do in PRQL is have a file that snapshots error messages [`test_bad_error_messages.rs`](https://github.com/PRQL/prql/blob/587aa6ec0e2da0181103bc5045cc5dfa43708827/crates/prql-compiler/src/tests/test_bad_error_messages.rs), which can then be a nice contribution to change those from bad to good. I'm not sure whether that would work here (python doesn't seem to have a great snapshotter, `pytest-regtest` is the best I've found; I wrote `pytest-accept` but requires doctests). Any other ideas? ### Describe alternatives you've considered _No response_ ### Additional context A couple of specific error-message issues: - https://github.com/pydata/xarray/issues/2078 - https://github.com/pydata/xarray/issues/5290","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8264/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1905824568,I_kwDOAMm_X85xmJM4,8221,Frequent doc build timeout / OOM,5635139,open,0,,,4,2023-09-20T23:02:37Z,2023-09-21T03:50:07Z,,MEMBER,,,,"### What is your issue? I'm frequently seeing `Command killed due to timeout or excessive memory consumption` in the doc build. It's after 1552 seconds, so it not being a round number means it might be the memory? It follows `writing output... [ 90%] generated/xarray.core.rolling.DatasetRolling.max`, which I wouldn't have thought as a particularly memory-intensive part of the build? Here's an example: https://readthedocs.org/projects/xray/builds/21983708/ Any thoughts for what might be going on? ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8221/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 1326238990,I_kwDOAMm_X85PDM0O,6870,`rolling_exp` loses coords,5635139,closed,0,,,4,2022-08-02T18:27:44Z,2023-09-19T01:13:23Z,2023-09-19T01:13:23Z,MEMBER,,,,"### What happened? We lose the time coord here — `Dimensions without coordinates: time`: ```python ds = xr.tutorial.load_dataset(""air_temperature"") ds.rolling_exp(time=5).mean() Dimensions: (lat: 25, time: 2920, lon: 53) Coordinates: * lat (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0 * lon (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0 Dimensions without coordinates: time Data variables: air (time, lat, lon) float32 241.2 242.5 243.5 ... 296.4 296.1 295.7 ``` (I realize I wrote this, I didn't think this used to happen, but either it always did or I didn't write good enough tests... mea culpa) ### What did you expect to happen? We keep the time coords, like we do for normal `rolling`: ```python In [2]: ds.rolling(time=5).mean() Out[2]: Dimensions: (lat: 25, lon: 53, time: 2920) Coordinates: * lat (lat) float32 75.0 72.5 70.0 67.5 65.0 ... 25.0 22.5 20.0 17.5 15.0 * lon (lon) float32 200.0 202.5 205.0 207.5 ... 322.5 325.0 327.5 330.0 * time (time) datetime64[ns] 2013-01-01 ... 
2014-12-31T18:00:00 ``` ### Minimal Complete Verifiable Example ```Python (as above) ``` ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. - [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment
INSTALLED VERSIONS ------------------ commit: None python: 3.9.13 (main, May 24 2022, 21:13:51) [Clang 13.1.6 (clang-1316.0.21.2)] python-bits: 64 OS: Darwin OS-release: 21.6.0 machine: arm64 processor: arm byteorder: little LC_ALL: en_US.UTF-8 LANG: None LOCALE: ('en_US', 'UTF-8') libhdf5: None libnetcdf: None xarray: 2022.6.0 pandas: 1.4.3 numpy: 1.21.6 scipy: 1.8.1 netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.12.0 cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: 2021.12.0 distributed: 2021.12.0 matplotlib: 3.5.1 cartopy: None seaborn: None numbagg: 0.2.1 fsspec: 2021.11.1 cupy: None pint: None sparse: None flox: None numpy_groupies: None setuptools: 62.3.2 pip: 22.1.2 conda: None pytest: 7.1.2 IPython: 8.4.0 sphinx: 4.3.2
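A possible interim workaround (just a sketch, assuming the coordinate is dropped rather than modified): copy the coordinate back from the input after the reduction.

```python
import xarray as xr

ds = xr.tutorial.load_dataset('air_temperature')

# rolling_exp currently drops the 'time' coordinate, so re-attach it from the original
smoothed = ds.rolling_exp(time=5).mean().assign_coords(time=ds.time)
```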
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6870/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 907715257,MDU6SXNzdWU5MDc3MTUyNTc=,5409,Split up tests?,5635139,open,0,,,4,2021-05-31T21:07:53Z,2021-06-16T15:51:19Z,,MEMBER,,,,"Currently a large share of our tests are in `test_dataset.py` and `test_dataarray.py` — each of which are around 7k lines. There's a case for splitting these up: - Many of the tests are somewhat duplicated between the files (and `test_variable.py` in some cases) — i.e. we're running the same test over a Dataset & DataArray, but putting them far away from each other in separate files. Should we instead have them split by ""function""; e.g. `test_rolling.py` for all rolling tests? - My editor takes 5-20 seconds to run the linter and save the file. This is a very narrow complaint. - Now that we're all onto pytest, there's no need to have them in the same class. If we do this, we could start on the margin — new tests around some specific functionality — e.g. join / rolling / reindex / stack (just a few from browsing through) — could go into a new respective `test_{}.py` file. Rather than some big copy and paste commit.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5409/reactions"", ""total_count"": 5, ""+1"": 5, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue 326711578,MDU6SXNzdWUzMjY3MTE1Nzg=,2188,Allow all dims-as-kwargs methods to take a dict instead,5635139,closed,0,,,4,2018-05-26T05:22:55Z,2020-08-24T10:21:58Z,2020-08-24T05:24:32Z,MEMBER,,,,"Follow up to https://github.com/pydata/xarray/pull/2174 Pasting from https://github.com/pydata/xarray/pull/2174#issuecomment-392111566 - [x] stack - [x] shift - [x] roll - [x] set_index - [x] reorder_levels - [x] rolling - [ ] resample (not yet, we still support old behavior for the first positional arguments with a warning) ...potentially `rename` (I often trip myself up on that)?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2188/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 305663416,MDU6SXNzdWUzMDU2NjM0MTY=,1992,Canonical approach for new vectorized functions,5635139,closed,0,,,4,2018-03-15T18:09:08Z,2020-02-29T07:22:01Z,2020-02-29T07:22:00Z,MEMBER,,,,"We are moving some code over from pandas to Xarray, and one of the biggest missing features is exponential functions, e.g. `series.ewm(span=20).mean()`. It looks like we can write these as gufuncs without too much trouble in numba. But I also notice that [numbagg](https://github.com/shoyer/numbagg) hasn't changed in a while and that we chose bottleneck for many of the functions in Xarray. - Is numba a good approach for these? - As well as our own internal use, could we add numba functions to Xarray, or are there dependency issues? 
- Tangentially, I'd be interested why we're using bottleneck rather than numbagg for the existing functions","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1992/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 567993968,MDU6SXNzdWU1Njc5OTM5Njg=,3782,Add groupby.pipe?,5635139,closed,0,,,4,2020-02-20T01:33:31Z,2020-02-21T14:37:44Z,2020-02-21T14:37:44Z,MEMBER,,,,"#### MCVE Code Sample ```python In [1]: import xarray as xr In [3]: import numpy as np In [4]: ds = xr.Dataset( ...: {""foo"": ((""x"", ""y""), np.random.rand(4, 3))}, ...: coords={""x"": [10, 20, 30, 40], ""letters"": (""x"", list(""abba""))}, ...: ) In [5]: ds.groupby('letters') Out[5]: DatasetGroupBy, grouped over 'letters' 2 groups with labels 'a', 'b'. In [8]: ds.groupby('letters').sum(...) / ds.groupby('letters').count(...) Out[8]: Dimensions: (letters: 2) Coordinates: * letters (letters) object 'a' 'b' Data variables: foo (letters) float64 0.4182 0.4995 In [9]: ds.groupby('letters').pipe(lambda x: x.sum() / x.count()) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in ----> 1 ds.groupby('letters').pipe(lambda x: x.sum() / x.count()) AttributeError: 'DatasetGroupBy' object has no attribute 'pipe' ``` #### Expected Output I think we could add `groupby.pipe`, as a convenience? #### Output of ``xr.show_versions()``
In [12]: xr.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 3.6.8 (default, Aug 7 2019, 17:28:10) [...] python-bits: 64 OS: Linux OS-release: [...] machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.utf8 LOCALE: en_US.UTF-8 libhdf5: 1.10.4 libnetcdf: None xarray: 0.14.1 pandas: 0.25.3 numpy: 1.18.1 scipy: 1.4.1 netCDF4: None pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: None cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: None dask: None distributed: None matplotlib: 3.1.2 cartopy: None seaborn: 0.10.0 numbagg: None setuptools: 45.0.0 pip: 20.0.2 conda: None pytest: 5.3.2 IPython: 7.11.1 sphinx: 2.3.1
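For concreteness, a hypothetical standalone helper sketching the proposed semantics (the real method would live on `DataArrayGroupBy`/`DatasetGroupBy`); this mirrors the pandas `.pipe` convention and is not an actual implementation:

```python
def pipe(obj, func, *args, **kwargs):
    # Hypothetical sketch: apply func to the GroupBy object itself, so that
    # pipe(ds.groupby('letters'), lambda x: x.sum() / x.count()) matches the
    # manual sum / count division shown above
    if isinstance(func, tuple):
        # (callable, data_keyword) form, as in pandas' .pipe
        func, target = func
        kwargs[target] = obj
        return func(*args, **kwargs)
    return func(obj, *args, **kwargs)
```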
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3782/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 493108860,MDU6SXNzdWU0OTMxMDg4NjA=,3308,NetCDF tests failing,5635139,closed,0,,,4,2019-09-13T02:29:39Z,2019-09-13T15:36:27Z,2019-09-13T15:32:46Z,MEMBER,,,,"(edit: original failure was mistaken) Does anyone know off hand why [this](https://dev.azure.com/xarray/xarray/_build/results?buildId=798) is failing? ``` ResolvePackageNotFound: - pandas=0.19 - python=3.5.0 ``` Worst case we could drop it... https://github.com/pydata/xarray/issues/3293","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3308/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 399549346,MDU6SXNzdWUzOTk1NDkzNDY=,2683,Travis failing on segfault at print_versions,5635139,closed,0,,,4,2019-01-15T21:45:30Z,2019-01-18T21:47:44Z,2019-01-18T21:47:44Z,MEMBER,,,,"master is breaking on both the docs and python3.6 `print_versions`: https://travis-ci.org/pydata/xarray/jobs/479834129 ``` /home/travis/.travis/job_stages: line 104: 3514 Segmentation fault (core dumped) python xarray/util/print_versions.py The command ""python xarray/util/print_versions.py"" failed and exited with 139 during . ``` Has anyone seen this before? I can't replicate locally, but I likely don't have the same dependencies ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/2683/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 304021813,MDU6SXNzdWUzMDQwMjE4MTM=,1978,Efficient rolling 'trick',5635139,closed,0,,,4,2018-03-10T00:29:33Z,2018-03-10T01:23:06Z,2018-03-10T01:23:06Z,MEMBER,,,,"Based off http://www.rigtorp.se/2011/01/01/rolling-statistics-numpy.html, we wrote up a function that 'tricks' numpy into presenting an array that looks rolling, but without the O^2 memory requirements Would people be interested in this going into xarray? It seems to work really well on a few use-cases, but I imagine it's enough trickery that we might not want to support it in xarray. And, to be clear, it's strictly worse where we have rolling algos. But where we don't, you get a rolling `apply` without the python loops. 
```python def rolling_window_numpy(a, window): """""" Make an array appear to be rolling, but using only a view http://www.rigtorp.se/2011/01/01/rolling-statistics-numpy.html """""" shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) def rolling_window(da, span, dim=None, new_dim='dim_0'): """""" Adds a rolling dimension to a DataArray using only a view """""" original_dims = da.dims da = da.transpose(*tuple(d for d in da.dims if d != dim) + (dim,)) result = apply_ufunc( rolling_window_numpy, da, output_core_dims=((new_dim,),), kwargs=(dict(window=span))) return result.transpose(*(original_dims + (new_dim,))) # tests import numpy as np import pandas as pd import pytest import xarray as xr @pytest.fixture def da(dims): return xr.DataArray( np.random.rand(5, 10, 15), dims=(list('abc'))).transpose(*dims) @pytest.fixture(params=[ list('abc'), list('bac'), list('cab'), ]) def dims(request): return request.param def test_iterate_imputation_fills_missing(sample_data): sample_data.iloc[2, 2] = pd.np.nan result = iterate_imputation(sample_data) assert result.shape == sample_data.shape assert result.notnull().values.all() def test_rolling_window(da, dims): result = rolling_window(da, 3, dim='c', new_dim='x') assert result.transpose(*list('abcx')).shape == (5, 10, 13, 3) # should be a view, so doesn't have any larger strides assert np.max(result.values.strides) == 10 * 15 * 8 def test_rolling_window_values(): da = xr.DataArray(np.arange(12).reshape(2, 6), dims=('item', 'date')) rolling = rolling_window(da, 3, dim='date', new_dim='rolling_date') expected = sum([11, 10, 9]) result = rolling.sum('rolling_date').isel(item=1, date=-1) assert result == expected ```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1978/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue 153640301,MDU6SXNzdWUxNTM2NDAzMDE=,846,Inconsistent handling of .item with PeriodIndex,5635139,closed,0,,,4,2016-05-08T06:51:03Z,2016-05-11T05:05:36Z,2016-05-11T05:05:36Z,MEMBER,,,,"Is this an inconsistency? With DatetimeIndex, `.item()` gets the item, and `.data` gets a 0-d array: ``` python In [14]: da=xr.DataArray(pd.DataFrame(pd.np.random.rand(10), index=pd.DatetimeIndex(start='2000', periods=10,freq='A'))) In [15]: p=da['dim_0'][0] In [16]: p.values Out[16]: numpy.datetime64('2000-12-31T00:00:00.000000000') In [17]: p.item() Out[17]: 978220800000000000L ``` But with a PeriodIndex, `values` gets the item, and so `.item()` fails ``` python In [22]: da=xr.DataArray(pd.DataFrame(pd.np.random.rand(10), index=pd.PeriodIndex(start='2000', periods=10))) In [23]: p=da['dim_0'][0] In [24]: p.values Out[24]: Period('2000', 'A-DEC') In [25]: p.item() AttributeError: 'pandas._period.Period' object has no attribute 'item' ``` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/846/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue