id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 2241095068,PR_kwDOAMm_X85sixE5,8935,Use Variable.stack instead of np.ravel,14371165,open,0,,,1,2024-04-12T23:04:35Z,2024-04-13T08:27:13Z,,MEMBER,,1,pydata/xarray/pulls/8935," - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8935/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1410608825,PR_kwDOAMm_X85A4RjC,7173,Add LineCollection plot,14371165,open,0,,,1,2022-10-16T20:16:28Z,2024-04-07T20:26:44Z,,MEMBER,,1,pydata/xarray/pulls/7173,"This adds a line plotter based on `LineCollections`, called `.lines` at the moment. I wanted to replace `darray.plot()` with using LineCollection instead. But unfortunately due to how many cases are supported (and tested in xarray) `darray.plot()` will continue using `plt.plot`. xref: #4820 #5622","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7173/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2215324218,PR_kwDOAMm_X85rKmW7,8890,Add typing to test_groupby.py,14371165,closed,0,,,1,2024-03-29T13:13:59Z,2024-03-29T16:38:17Z,2024-03-29T16:38:16Z,MEMBER,,0,pydata/xarray/pulls/8890,Enforce typing on all tests in `test_groupby.py` and add the remaining type hints.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8890/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2203493958,PR_kwDOAMm_X85qiskT,8868,Try ruff lint for numpy 2.0,14371165,closed,0,,,1,2024-03-22T23:31:04Z,2024-03-22T23:34:11Z,2024-03-22T23:33:03Z,MEMBER,,1,pydata/xarray/pulls/8868,From https://numpy.org/devdocs/numpy_2_0_migration_guide.html#numpy-2-migration-guide,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8868/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1418830429,PR_kwDOAMm_X85BT_a6,7194,Align naming convention with plt.subplots,14371165,closed,0,,,1,2022-10-21T20:31:34Z,2024-03-13T21:44:17Z,2022-10-23T11:58:12Z,MEMBER,,0,pydata/xarray/pulls/7194,"I noticed that the normal notation for `fig, axs = plt.subplots(2, 1)` wasn't used in facetgrid so did a quick find replace to change that. This feels better for my pedantic brain at least but I'm not sure it's worth the effort?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7194/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 2034545604,PR_kwDOAMm_X85hnyZ9,8538,Check that compat is equal to identical only once in dataset concat,14371165,closed,0,,,1,2023-12-10T21:42:20Z,2023-12-13T09:27:11Z,2023-12-13T09:27:11Z,MEMBER,,0,pydata/xarray/pulls/8538,Small change to avoid triggering several if-checks unnecessarily. 
,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8538/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1966567753,PR_kwDOAMm_X85eBydo,8386,Use get_args for duckarray checks,14371165,closed,0,,,1,2023-10-28T11:58:57Z,2023-10-28T12:46:10Z,2023-10-28T12:45:32Z,MEMBER,,1,pydata/xarray/pulls/8386," xref: #8376 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8386/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1944068053,PR_kwDOAMm_X85c1sC5,8314,Align MemoryCachedArray and PandasIndexingAdapter more,14371165,closed,0,,,1,2023-10-15T21:42:27Z,2023-10-16T20:01:21Z,2023-10-16T20:01:20Z,MEMBER,,0,pydata/xarray/pulls/8314,"Seen in #8294. The issue is the IndexVariable, ExplicitlyIndexedNDArrayMixin lacks `.array` which is required for IndexVariable, and therefore we need a new minimal class that are common between the two.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8314/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1944059054,PR_kwDOAMm_X85c1qR0,8312,Fix typing issues in tests,14371165,closed,0,,,1,2023-10-15T21:11:12Z,2023-10-16T15:09:57Z,2023-10-16T15:09:57Z,MEMBER,,0,pydata/xarray/pulls/8312,"Seen in #8294. These tests implicitly made sure the type was correct in a way that type checkers wont understand. Make it explicit instead. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8312/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1944083743,PR_kwDOAMm_X85c1vGB,8315,Handle numpy missing the array api function astype,14371165,closed,0,,,1,2023-10-15T22:32:17Z,2023-10-16T06:25:20Z,2023-10-16T06:25:19Z,MEMBER,,0,pydata/xarray/pulls/8315,"This is how our get_array_namespace works: https://github.com/pydata/xarray/blob/dafd726c36e24ac77427513a4a149a6933353b66/xarray/core/duck_array_ops.py#L44-L48 Which usually works. But not for astype. Using np.array_api doesn't work because you have to use np.array_api.Array instead of np.ndarray: ```python import numpy.array_api as nxp nxp.astype(np.array([1, 2,]), np.dtype(float)) Traceback (most recent call last): File ""C:\Users\J.W\AppData\Local\Temp\ipykernel_8616\23329947.py"", line 1, in nxp.astype(np.array([1, 2,]), np.dtype(float)) File ""C:\Users\J.W\anaconda3\envs\xarray-tests\lib\site-packages\numpy\array_api\_data_type_functions.py"", line 20, in astype return Array._new(x._array.astype(dtype=dtype, copy=copy)) AttributeError: 'numpy.ndarray' object has no attribute '_array' ``` I found it simpler to just change astype here. An alternative solution would be to use: https://github.com/data-apis/array-api-compat https://github.com/tomwhite/cubed/pull/317 Seen in #8294.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8315/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1938790253,PR_kwDOAMm_X85ckfW9,8295,"Remove real, imag, astype methods from NamedArray",14371165,closed,0,,,1,2023-10-11T21:44:26Z,2023-10-13T15:58:07Z,2023-10-13T15:58:06Z,MEMBER,,0,pydata/xarray/pulls/8295,"These methods are not in the Array API. 
Instead convert the methods to functions in similar fashion as the array api. https://data-apis.org/array-api/latest/API_specification/index.html Not sure how to handle array compliant functions with an axis argument (max for example) but that's for a future PR.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8295/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1931329996,PR_kwDOAMm_X85cK3Or,8281,Add high level from_array function in namedarray,14371165,closed,0,,,1,2023-10-07T12:19:23Z,2023-10-10T17:10:37Z,2023-10-10T17:10:37Z,MEMBER,,1,pydata/xarray/pulls/8281,"The Idea is to avoid as much normalization in the NamedArray class as possible. Different types are handled before initializing instead. - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` References: https://github.com/tomwhite/cubed/blob/ea885193dd37d27917a24878b51bb086aaef5fb1/cubed/core/ops.py#L34 https://stackoverflow.com/questions/74633074/how-to-type-hint-a-generic-numpy-array https://numpy.org/doc/stable/reference/arrays.scalars.html#scalars https://github.com/numpy/numpy/blob/040ed2dc9847265c581a342301dd87d2b518a3c2/numpy/__init__.pyi#L1423 https://github.com/numpy/numpy/blob/040ed2dc9847265c581a342301dd87d2b518a3c2/numpy/_typing/_array_like.py#L32 Mypy issues: https://github.com/python/typing/issues/548","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8281/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1916331957,PR_kwDOAMm_X85bYX4z,8240,Bind T_DuckArray to NamedArray,14371165,closed,0,,,1,2023-09-27T21:11:58Z,2023-09-28T16:18:26Z,2023-09-28T16:18:26Z,MEMBER,,0,pydata/xarray/pulls/8240,Binding allows typing the .data property.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/8240/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1549889322,PR_kwDOAMm_X85IKePm,7460,Add abstractmethods to backend classes,14371165,open,0,,,1,2023-01-19T20:19:36Z,2023-07-29T11:42:33Z,,MEMBER,,1,pydata/xarray/pulls/7460,"It's been unclear to me what methods are necessary to implement or not. I think decorating with `@abstractmethod` will help with that. It's a breaking change though and it could be disruptive. - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7460/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1699099029,PR_kwDOAMm_X85P8IoD,7824,Improve concat performance,14371165,closed,0,,,1,2023-05-07T14:54:06Z,2023-06-02T14:36:11Z,2023-06-02T14:36:11Z,MEMBER,,0,pydata/xarray/pulls/7824,"* Don't use python for loops for possibly large coords. Rather create a np array once then filter out bad data. * DuckArrayModule is slightly slow, so cache the first import in a dict instead to speed up later calls. * Add more typing to be more confident that inputs are valid and then remove redundant checks and conversions. 
- [x] Requires #7843 - [x] Requires #7844. - [x] Requires #7858 - [x] Closes #7833 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7824/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1710752209,PR_kwDOAMm_X85QjIMH,7844,Improve to_dask_dataframe performance,14371165,closed,0,,,1,2023-05-15T20:08:24Z,2023-05-25T20:08:54Z,2023-05-25T20:08:54Z,MEMBER,,0,pydata/xarray/pulls/7844,"* ds.chunks loops all the variables, do it once. * Faster to create a meta dataframe once than letting dask guess 2000 times. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7844/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1697987899,PR_kwDOAMm_X85P4g05,7820,Pin pint to 0.20,14371165,closed,0,,,1,2023-05-05T17:59:40Z,2023-05-06T07:27:28Z,2023-05-06T07:27:28Z,MEMBER,,0,pydata/xarray/pulls/7820,"Newest pint crashes our tests for some reason, pin it for now. - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7820/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1638243008,PR_kwDOAMm_X85MxepP,7668,Pull Request Labeler - Use a released version,14371165,closed,0,,,1,2023-03-23T20:18:49Z,2023-03-23T20:29:04Z,2023-03-23T20:29:04Z,MEMBER,,0,pydata/xarray/pulls/7668," - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7668/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1554036799,PR_kwDOAMm_X85IYHUz,7472,Avoid in-memory broadcasting when converting to_dask_dataframe,14371165,closed,0,,,1,2023-01-24T00:15:01Z,2023-01-26T17:00:24Z,2023-01-26T17:00:23Z,MEMBER,,0,pydata/xarray/pulls/7472,"Turns out that there's a call to `.set_dims` that forces a broadcast on the numpy coordinates. - [x] Closes #6811 - [x] Tests added, see #7474. - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` Debugging script:
```python
import dask.array as da
import xarray as xr
import numpy as np

chunks = 5000

# I have to restart the pc if running with this:
# dim1_sz = 100_000
# dim2_sz = 100_000

# Does not crash when using the following constants, >5 gig RAM increase though:
dim1_sz = 40_000
dim2_sz = 40_000

x = da.random.random((dim1_sz, dim2_sz), chunks=chunks)
ds = xr.Dataset(
    {
        ""x"": xr.DataArray(
            data=x,
            dims=[""dim1"", ""dim2""],
            coords={""dim1"": np.arange(0, dim1_sz), ""dim2"": np.arange(0, dim2_sz)},
        )
    }
)

# with dask.config.set(**{""array.slicing.split_large_chunks"": True}):
df = ds.to_dask_dataframe()
print(df)
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7472/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1555497796,PR_kwDOAMm_X85Ic_wm,7474,Add benchmarks for to_dataframe and to_dask_dataframe,14371165,closed,0,,,1,2023-01-24T18:48:26Z,2023-01-24T21:00:39Z,2023-01-24T20:13:30Z,MEMBER,,0,pydata/xarray/pulls/7474,Related to #7472.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7474/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1523232313,PR_kwDOAMm_X85G2q5I,7426,Add lazy backend ASV test,14371165,closed,0,,,1,2023-01-06T22:01:26Z,2023-01-12T16:00:05Z,2023-01-11T18:56:25Z,MEMBER,,0,pydata/xarray/pulls/7426,"This tests xr.open_dataset without any slow file reading that can quickly become the majority of the performance time. Related to #7374. Timings for the new ASV-tests: ``` [ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset ok [ 50.85%] ··· ======== ============ chunks -------- ------------ None 265±4ms {} 1.17±0.02s ======== ============ [ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset ok [ 54.69%] ··· ========= ============= ============= -- chunks --------- --------------------------- engine None {} ========= ============= ============= scipy 4.81±0.1ms 6.65±0.01ms netcdf4 8.41±0.08ms 10.9±0.2ms ========= ============= ============= ``` From the IOReadCustomEngine test we can see that chunking datasets with many variables (2000+) is considerably slower.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7426/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1485474624,PR_kwDOAMm_X85E1fDn,7370,absolufy-imports - Only in xarray folder,14371165,closed,0,,,1,2022-12-08T21:57:58Z,2022-12-10T11:42:32Z,2022-12-09T16:55:12Z,MEMBER,,0,pydata/xarray/pulls/7370,"This reverts some of commit 6e77f5e8942206b3e0ab08c3621ade1499d8235b and #7204. Apparently using it on all folders is not a good idea, follow pandas example. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7370/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1445870847,PR_kwDOAMm_X85CuWCd,7281,Use a default value for constant dimensions,14371165,closed,0,,,1,2022-11-11T18:41:16Z,2022-11-20T09:59:38Z,2022-11-20T09:59:38Z,MEMBER,,0,pydata/xarray/pulls/7281,"* default markersize values of widths are 18 to 72. * plt.scatter default markersize is 36. * plt.plot default linewidth is 6. With main we get 18 for constant arrays, but 36 if markersize was undefined. This seems a bit inconsistent to me. This PR adds a default value instead for constant arrays Follow up to #7272.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7281/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1440711212,PR_kwDOAMm_X85Cc31j,7272,Handle division by zero in _Normalize._calc_widths,14371165,closed,0,,,1,2022-11-08T18:35:55Z,2022-11-11T06:27:50Z,2022-11-11T06:27:50Z,MEMBER,,0,pydata/xarray/pulls/7272,Found an issue when constant values was used. 
Now if constant values are found it'll default to the minimum width value instead.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7272/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1422864082,PR_kwDOAMm_X85BhW94,7218,Rename FacetGrid.axes to FacetGrid.axs in tests,14371165,closed,0,,,1,2022-10-25T17:59:39Z,2022-10-27T17:45:20Z,2022-10-27T17:45:19Z,MEMBER,,0,pydata/xarray/pulls/7218,"Follow up to #7194. This fixes all the warnings related to the change. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7218/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1410526253,PR_kwDOAMm_X85A4Bki,7169,Rework docs about scatter plots,14371165,closed,0,,,1,2022-10-16T15:37:25Z,2022-10-17T13:40:01Z,2022-10-17T13:40:01Z,MEMBER,,0,pydata/xarray/pulls/7169,"Show off some more possibilities with the scatter plot. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7169/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1340745375,PR_kwDOAMm_X849RXXN,6923,Add Self in xarray.core.types,14371165,closed,0,,,1,2022-08-16T18:48:15Z,2022-08-22T12:24:05Z,2022-08-22T12:24:05Z,MEMBER,,0,pydata/xarray/pulls/6923,"Adds `typing_extensions.Self` wrapped in some safety checks. Wont really become useful until https://github.com/python/mypy/issues/11871 is fixed. But it can be used with pyright at least.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6923/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1052952145,PR_kwDOAMm_X84uf1L8,5988,Check for py version instead of try/except when importing entry_points,14371165,closed,0,,,1,2021-11-14T14:23:18Z,2022-08-12T09:08:25Z,2021-11-14T20:16:57Z,MEMBER,,0,pydata/xarray/pulls/5988,This removes the need for the `# type: ignore` to make mypy happy. It is also clearer when this compatibillity code can be removed.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5988/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1099631638,PR_kwDOAMm_X84w0pGe,6159,Import Literal from typing instead of typing_extensions,14371165,closed,0,,,1,2022-01-11T21:26:59Z,2022-08-12T09:06:58Z,2022-01-11T21:59:16Z,MEMBER,,0,pydata/xarray/pulls/6159," Small edit to #6121. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6159/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1171424128,PR_kwDOAMm_X840jZCq,6371,Remove test_rasterio_vrt_network,14371165,closed,0,,,1,2022-03-16T18:49:29Z,2022-08-12T09:06:02Z,2022-03-17T06:25:22Z,MEMBER,,0,pydata/xarray/pulls/6371," This test has been failing with a 404 error for a while. Remove the test because a lot of the functionality is implemented in rioxarray. 
- [x] Closes #6363 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6371/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1167394407,PR_kwDOAMm_X840WQW1,6351,Run pyupgrade on core/groupby,14371165,closed,0,,,1,2022-03-12T20:46:15Z,2022-08-12T09:05:37Z,2022-03-13T04:21:54Z,MEMBER,,0,pydata/xarray/pulls/6351," Minor touch up looking through #5950. - [x] xref #6244 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6351/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1133868003,PR_kwDOAMm_X84yl7lK,6270,Update pyupgrade to py38-plus,14371165,closed,0,,,1,2022-02-12T10:58:00Z,2022-08-12T09:05:31Z,2022-02-12T13:50:31Z,MEMBER,,0,pydata/xarray/pulls/6270,xref: #6244,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6270/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1034382021,PR_kwDOAMm_X84tlqIi,5893,Only run asv benchmark when labeled,14371165,closed,0,,,1,2021-10-24T10:44:17Z,2022-08-12T09:02:27Z,2021-10-24T11:35:42Z,MEMBER,,0,pydata/xarray/pulls/5893,Small fix to #5796. The benchmark was only intended to run when the PR has the label `run-benchmark`. I split the if condition in multiple lines for better readability thinking it didn't change the function but it did.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5893/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1332546810,PR_kwDOAMm_X8482ZvM,6897,Type xr.tutorial,14371165,closed,0,,,1,2022-08-09T00:20:19Z,2022-08-12T08:59:30Z,2022-08-10T07:40:18Z,MEMBER,,0,pydata/xarray/pulls/6897,Add some typing to the open_dataset functions. Was doing some debugging and I only got `Any` when trying to simplify the problem with tutorial data.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6897/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1236115720,PR_kwDOAMm_X84306YV,6609,Add setuptools as dependency in ASV benchmark CI,14371165,closed,0,,,1,2022-05-14T20:33:09Z,2022-05-15T17:14:24Z,2022-05-14T23:06:44Z,MEMBER,,0,pydata/xarray/pulls/6609,"Adding `setuptools_scm[toml]` and `setuptools_scm_git_archive` appears to fix the issue. Not sure why this is needed though. - [x] Closes #6606 ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/6609/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 781168967,MDExOlB1bGxSZXF1ZXN0NTUwOTM4NDAy,4776,Speed up missing._get_interpolator,14371165,closed,0,,,1,2021-01-07T09:32:40Z,2021-05-18T18:17:06Z,2021-01-08T15:55:39Z,MEMBER,,0,pydata/xarray/pulls/4776,"Importing scipy.interpolate is slow and should only be done when necessary. Test case from 200ms to 6ms. - [x] Related to #4739 - [x] Passes `isort . && black . && mypy . && flake8` By default, the upstream dev CI is disabled on pull request and push events. You can override this behavior per commit by adding a `[test-upstream]` tag to the first line of the commit message. 
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4776/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 797453161,MDExOlB1bGxSZXF1ZXN0NTY0NDQ4Mzg3,4850,"Allow ""unit"" in label_from_attrs",14371165,closed,0,,,1,2021-01-30T16:02:34Z,2021-05-18T18:16:58Z,2021-01-30T16:26:04Z,MEMBER,,0,pydata/xarray/pulls/4850," It is also popular to call units `unit`. Allow both keys to be appended to labels in plots. ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4850/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 891165324,MDExOlB1bGxSZXF1ZXN0NjQ0MDc2NDg3,5297,Add whats new for dataset interpolation with non-numerics,14371165,closed,0,,,1,2021-05-13T16:00:51Z,2021-05-18T18:15:35Z,2021-05-13T16:30:21Z,MEMBER,,0,pydata/xarray/pulls/5297,Follow up for #5008,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5297/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 892748525,MDExOlB1bGxSZXF1ZXN0NjQ1MzcyMjE3,5319,Convert new_shape from list to tuple in _unstack_once,14371165,closed,0,,,1,2021-05-16T20:04:05Z,2021-05-18T18:14:00Z,2021-05-16T23:50:10Z,MEMBER,,0,pydata/xarray/pulls/5319,"Having `new_shape` as a `list` broke some checks in sparse. `.shape` is usually a tuple so I changed `new_shape` to be a tuple as well. sparse arrays errors one step further down now instead, at the item assignment... - Related to #5315 - [x] Passes `pre-commit run --all-files` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5319/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 892411608,MDExOlB1bGxSZXF1ZXN0NjQ1MTIxMzQ0,5314,Add version variable for optional imports in pycompat,14371165,closed,0,,,1,2021-05-15T10:43:35Z,2021-05-18T18:13:32Z,2021-05-16T23:50:31Z,MEMBER,,0,pydata/xarray/pulls/5314,It was difficult to do version checks with optional imports so I added variables in pycompat and removed some of the imports I found.,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5314/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull