html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/7888#issuecomment-1577838062,https://api.github.com/repos/pydata/xarray/issues/7888,1577838062,IC_kwDOAMm_X85eC-Xu,2448579,2023-06-06T03:20:39Z,2023-06-06T03:20:39Z,MEMBER,Should we delete the cfgrib example instead?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1736542260
https://github.com/pydata/xarray/issues/7841#issuecomment-1577827466,https://api.github.com/repos/pydata/xarray/issues/7841,1577827466,IC_kwDOAMm_X85eC7yK,2448579,2023-06-06T03:05:47Z,2023-06-06T03:05:47Z,MEMBER,https://github.com/corteva/rioxarray/issues/676,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1709215291
https://github.com/pydata/xarray/issues/7894#issuecomment-1577474914,https://api.github.com/repos/pydata/xarray/issues/7894,1577474914,IC_kwDOAMm_X85eBlti,2448579,2023-06-05T21:05:47Z,2023-06-05T21:05:57Z,MEMBER,">  but is it not possible for it to calculate the integrated values where there were regular values?

@chfite Can you provide an example of what you would want it to do, please?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1742035781
https://github.com/pydata/xarray/issues/7890#issuecomment-1574365471,https://api.github.com/repos/pydata/xarray/issues/7890,1574365471,IC_kwDOAMm_X85d1ukf,2448579,2023-06-02T22:04:33Z,2023-06-02T22:04:33Z,MEMBER,"I think the only other one is dask, which *should* also work.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1738835134
https://github.com/pydata/xarray/issues/7890#issuecomment-1574331034,https://api.github.com/repos/pydata/xarray/issues/7890,1574331034,IC_kwDOAMm_X85d1mKa,2448579,2023-06-02T21:23:25Z,2023-06-02T21:27:06Z,MEMBER,"This seems like a real easy fix?
```
axis = tuple(self.get_axis_num(d) for d in dim)
```

EDIT: the Array API seems to type `axis` as `Optional[Union[int, Tuple[int, ...]]]` pretty consistently, so it seems like we should always pass tuples down to the array library","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1738835134
https://github.com/pydata/xarray/pull/7862#issuecomment-1574264842,https://api.github.com/repos/pydata/xarray/issues/7862,1574264842,IC_kwDOAMm_X85d1WAK,2448579,2023-06-02T20:14:33Z,2023-06-02T20:14:48Z,MEMBER,"> xarray/tests/test_coding_strings.py:36: error: No overload variant of ""dtype"" matches argument types ""str"", ""Dict[str, Type[str]]""  [call-overload]


cc @Illviljan @headtr1ck ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1720045908
https://github.com/pydata/xarray/pull/7883#issuecomment-1572306481,https://api.github.com/repos/pydata/xarray/issues/7883,1572306481,IC_kwDOAMm_X85dt34x,2448579,2023-06-01T15:49:42Z,2023-06-01T15:49:42Z,MEMBER,Hmmm [ndim is in the array api](https://data-apis.org/array-api/latest/API_specification/array_object.html#attributes) so potentially we could just update the test.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1731320789
https://github.com/pydata/xarray/issues/7884#issuecomment-1572276996,https://api.github.com/repos/pydata/xarray/issues/7884,1572276996,IC_kwDOAMm_X85dtwsE,2448579,2023-06-01T15:30:26Z,2023-06-01T15:30:26Z,MEMBER,Please ask over at the cfgrib repo. But it does look like a bad environment / bad install.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1732510720
https://github.com/pydata/xarray/issues/5644#issuecomment-1561173824,https://api.github.com/repos/pydata/xarray/issues/5644,1561173824,IC_kwDOAMm_X85dDZ9A,2448579,2023-05-24T13:39:30Z,2023-05-24T13:39:30Z,MEMBER,Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,955043280
https://github.com/pydata/xarray/pull/7865#issuecomment-1560225308,https://api.github.com/repos/pydata/xarray/issues/7865,1560225308,IC_kwDOAMm_X85c_yYc,2448579,2023-05-23T22:49:14Z,2023-05-23T22:49:14Z,MEMBER,Thanks @martinfleis this is a very valuable contribution to the ecosystem! ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1720850091
https://github.com/pydata/xarray/pull/7019#issuecomment-1553390072,https://api.github.com/repos/pydata/xarray/issues/7019,1553390072,IC_kwDOAMm_X85cltn4,2448579,2023-05-18T17:34:01Z,2023-05-18T17:34:01Z,MEMBER,Thanks @TomNicholas Big change!,"{""total_count"": 3, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 3, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1368740629
https://github.com/pydata/xarray/pull/7788#issuecomment-1551638978,https://api.github.com/repos/pydata/xarray/issues/7788,1551638978,IC_kwDOAMm_X85cfCHC,2448579,2023-05-17T15:45:41Z,2023-05-17T15:45:41Z,MEMBER,"Thanks @maxhollmann I pushed a test for #2377.

I see this is your first contribution to Xarray. Welcome! #1792 would be a nice contribution if you're looking for more to do ;)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1685422501
https://github.com/pydata/xarray/issues/7841#issuecomment-1550594808,https://api.github.com/repos/pydata/xarray/issues/7841,1550594808,IC_kwDOAMm_X85cbDL4,2448579,2023-05-17T02:24:57Z,2023-05-17T02:24:57Z,MEMBER,"We should migrate these to rioxarray if they aren't there already. 

cc @snowman2 @scottyhq @JessicaS11 ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1709215291
https://github.com/pydata/xarray/pull/7788#issuecomment-1550187275,https://api.github.com/repos/pydata/xarray/issues/7788,1550187275,IC_kwDOAMm_X85cZfsL,2448579,2023-05-16T18:47:27Z,2023-05-16T18:47:27Z,MEMBER,Thanks @maxhollmann can you add a note to `whats-new.rst` please?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1685422501
https://github.com/pydata/xarray/issues/7838#issuecomment-1546024198,https://api.github.com/repos/pydata/xarray/issues/7838,1546024198,IC_kwDOAMm_X85cJnUG,2448579,2023-05-12T16:52:29Z,2023-05-12T16:52:29Z,MEMBER,"Thanks! I tracked this down to the difference between reading the file remotely and downloading it first to access a local copy on v0.20.2 (the latter is what I used to produce my figures). Can you reproduce?
```
remote = xr.open_dataset(
    ""http://kage.ldeo.columbia.edu:81/SOURCES/.LOCAL/.sst.mon.mean.nc/.sst/dods""
).sst.sel(lat=20, lon=280, method=""nearest"")
local = xr.open_dataset(""~/Downloads/data.cdf"").sst.sel(lat=20, lon=280, method=""nearest"")
```

```
(remote.groupby(""time.month"") - remote.groupby(""time.month"").mean()).plot()
```
![image](https://github.com/pydata/xarray/assets/2448579/a44e1c34-55b9-4b88-9604-c344cca14744)


```
(local.groupby(""time.month"") - local.groupby(""time.month"").mean()).plot()
```
![image](https://github.com/pydata/xarray/assets/2448579/23f2f4dc-377e-43d1-b296-aab7c34285e1)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1706864252
https://github.com/pydata/xarray/pull/7796#issuecomment-1545974461,https://api.github.com/repos/pydata/xarray/issues/7796,1545974461,IC_kwDOAMm_X85cJbK9,2448579,2023-05-12T16:09:04Z,2023-05-12T16:09:14Z,MEMBER,"Yes, 10-30% on my machine:
```
       before           after         ratio
     [91f14c9b]       [1eb74149]
     <v2023.04.2>       <speedup-dt-accesor>
-     1.17±0.04ms      1.02±0.04ms     0.87  accessors.DateTimeAccessor.time_dayofyear('standard')
-     1.25±0.07ms         976±30μs     0.78  accessors.DateTimeAccessor.time_year('standard')
-      3.90±0.1ms      2.68±0.05ms     0.69  accessors.DateTimeAccessor.time_year('noleap')
-     4.75±0.07ms      3.25±0.04ms     0.68  accessors.DateTimeAccessor.time_dayofyear('noleap')
```","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 0}",,1689364566
https://github.com/pydata/xarray/pull/7788#issuecomment-1545939343,https://api.github.com/repos/pydata/xarray/issues/7788,1545939343,IC_kwDOAMm_X85cJSmP,2448579,2023-05-12T15:39:24Z,2023-05-12T15:39:24Z,MEMBER,"I defer to @shoyer, the solution with `where_method` seems good to me.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1685422501
https://github.com/pydata/xarray/issues/7838#issuecomment-1545930953,https://api.github.com/repos/pydata/xarray/issues/7838,1545930953,IC_kwDOAMm_X85cJQjJ,2448579,2023-05-12T15:32:47Z,2023-05-12T15:35:02Z,MEMBER,"Can you compare `ds_anom` at a point in both versions, please? I get a plot that looks quite similar.
### v0.20.2:
![image](https://github.com/pydata/xarray/assets/2448579/fbea3985-ec66-44c9-b434-7497778b9004)

### v2022.03.0:
![image](https://github.com/pydata/xarray/assets/2448579/a4843606-934f-4a86-831b-31cfaba1e62c)

### v2023.04.2

![image](https://github.com/pydata/xarray/assets/2448579/57560794-d73b-47e0-adee-4343b60692b5)
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1706864252
https://github.com/pydata/xarray/issues/4412#issuecomment-1542465745,https://api.github.com/repos/pydata/xarray/issues/4412,1542465745,IC_kwDOAMm_X85b8CjR,2448579,2023-05-10T16:06:26Z,2023-05-10T16:06:54Z,MEMBER,"Related request for `to_zarr(..., encode_cf=False)`: https://github.com/pydata/xarray/issues/5405 

This came up in the discussion today.

cc @tom-white @kmuehlbauer ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,696047530
https://github.com/pydata/xarray/issues/7831#issuecomment-1540816942,https://api.github.com/repos/pydata/xarray/issues/7831,1540816942,IC_kwDOAMm_X85b1wAu,2448579,2023-05-09T20:02:27Z,2023-05-09T20:02:27Z,MEMBER,"I was suggesting to special-case rioxarray only because we recently deleted the rasterio backend, and that might ease the transition. Can we do it at the top-level `open_dataset` when `engine==""rasterio""` but `rioxarray` is not importable?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1702025553
https://github.com/pydata/xarray/issues/7831#issuecomment-1540435470,https://api.github.com/repos/pydata/xarray/issues/7831,1540435470,IC_kwDOAMm_X85b0S4O,2448579,2023-05-09T15:44:55Z,2023-05-09T15:44:55Z,MEMBER,I think this would be nice since we recently removed the rasterio backend.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1702025553
https://github.com/pydata/xarray/pull/7825#issuecomment-1538808220,https://api.github.com/repos/pydata/xarray/issues/7825,1538808220,IC_kwDOAMm_X85buFmc,2448579,2023-05-08T18:03:58Z,2023-05-08T18:03:58Z,MEMBER,LGTM. Thanks!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1699112787
https://github.com/pydata/xarray/issues/7817#issuecomment-1537031554,https://api.github.com/repos/pydata/xarray/issues/7817,1537031554,IC_kwDOAMm_X85bnT2C,2448579,2023-05-06T03:23:13Z,2023-05-06T03:23:13Z,MEMBER,"> because CFMaskCoder will convert the variable to floating point and insert ""NaN"". In CFDatetimeCoder the floating point is cast back to int64 to transform into datetime64.

Can we reverse the order so that `CFDatetimeCoder` handles `_FillValue` for datetime arrays, and then it will be skipped in `CFMaskCoder`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1696097756
https://github.com/pydata/xarray/issues/7814#issuecomment-1534000660,https://api.github.com/repos/pydata/xarray/issues/7814,1534000660,IC_kwDOAMm_X85bbv4U,2448579,2023-05-04T02:35:39Z,2023-05-04T02:35:39Z,MEMBER,We'll need a reproducible example.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1695028906
https://github.com/pydata/xarray/pull/7795#issuecomment-1531721493,https://api.github.com/repos/pydata/xarray/issues/7795,1531721493,IC_kwDOAMm_X85bTDcV,2448579,2023-05-02T15:56:48Z,2023-05-02T15:56:48Z,MEMBER,Thanks @Illviljan I'm merging so I can run benchmarks on a few other PRs,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7795#issuecomment-1530439461,https://api.github.com/repos/pydata/xarray/issues/7795,1530439461,IC_kwDOAMm_X85bOKcl,2448579,2023-05-01T22:23:53Z,2023-05-01T22:23:53Z,MEMBER,well that was it apparently 🤷🏾‍♂️ ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7795#issuecomment-1530417559,https://api.github.com/repos/pydata/xarray/issues/7795,1530417559,IC_kwDOAMm_X85bOFGX,2448579,2023-05-01T22:13:03Z,2023-05-01T22:13:03Z,MEMBER,"`asv run -v` failed locally and printed out the yaml file. For some reason `channels` is empty
```
·· Error running /Users/dcherian/mambaforge/bin/conda env create -f /var/folders/yp/rzgd0f7n5zbcbf5dg73sfb8ntv3xxm/T/tmpbusmg2a0.yml -p /Users/dcherian/work/python/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force (exit status 1)
·· conda env create/update failed: in /Users/dcherian/work/python/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d with:
   name: conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
   channels:
   dependencies:
      - python=3.10
      - wheel
      - pip
      - setuptools_scm
      - numpy
      - pandas
      - netcdf4
      - scipy
      - bottleneck
      - dask
      - distributed
      - flox
      - numpy_groupies
      - sparse
      - cftime
  ```","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7795#issuecomment-1530367137,https://api.github.com/repos/pydata/xarray/issues/7795,1530367137,IC_kwDOAMm_X85bN4yh,2448579,2023-05-01T21:53:34Z,2023-05-01T21:53:34Z,MEMBER,"Here's a truncated diff:
```
5c5
<  Job is about to start running on the hosted runner: GitHub Actions 2 (hosted)
---
>  Job is about to start running on the hosted runner: GitHub Actions 4 (hosted)
14,16c14,16
<  Version: 20230417.1
<  Included Software: https://github.com/actions/runner-images/blob/ubuntu20/20230417.1/images/linux/Ubuntu2004-Readme.md
<  Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu20%2F20230417.1
---
>  Version: 20230426.1
>  Included Software: https://github.com/actions/runner-images/blob/ubuntu20/20230426.1/images/linux/Ubuntu2004-Readme.md
>  Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu20%2F20230426.1
64c64
<  git version 2.40.0
---
>  git version 2.40.1
66c66
<  Temporarily overriding HOME='/home/runner/work/_temp/f08ac219-4ee6-4fb1-962a-95a90ed2ab7a' before making global git config changes
---
>  Temporarily overriding HOME='/home/runner/work/_temp/ea752478-2e4c-4c11-9a80-64b3b9b03d35' before making global git config changes
575c593
<  0f4e99d036b0d6d76a3271e6191eacbc9922662f
---
>  a220022e7ef8d3df68619643954352fa39394ea8
585c603
<  '0f4e99d036b0d6d76a3271e6191eacbc9922662f'
---
>  'a220022e7ef8d3df68619643954352fa39394ea8'
605,607c623,625
<  Received 5491626 of 5491626 (100.0%), 32.9 MBs/sec
<  Cache Size: ~5 MB (5491626 B)
<  [command]/usr/bin/tar -xf /home/runner/work/_temp/d8162eae-90d3-4277-abdf-d3574b623c16/cache.tgz -P -C /home/runner/work/xarray/xarray -z
---
>  Received 5491632 of 5491632 (100.0%), 8.7 MBs/sec
>  Cache Size: ~5 MB (5491632 B)
>  [command]/usr/bin/tar -xf /home/runner/work/_temp/8bf4827f-8427-494a-86a7-047343a5e85a/cache.tgz -P -C /home/runner/work/xarray/xarray -z
609c627
<  Cache hit for key 'micromamba-bin https://micro.mamba.pm/api/micromamba/linux-64/latest Wed Apr 26 2023 YYY'
---
>  Cache hit for key 'micromamba-bin https://micro.mamba.pm/api/micromamba/linux-64/latest Fri Apr 28 2023 YYY'
634c652
<  Modifying RC file ""/tmp/micromamba-m6EeL6/.bashrc""
---
>  Modifying RC file ""/tmp/micromamba-zQHzJD/.bashrc""
637c655
<  Adding (or replacing) the following in your ""/tmp/micromamba-m6EeL6/.bashrc"" file
---
>  Adding (or replacing) the following in your ""/tmp/micromamba-zQHzJD/.bashrc"" file
772c790
<    + coverage                          7.2.3  py310h1fa729e_0          conda-forge/linux-64     282kB
---
>    + coverage                          7.2.4  py310h2372a71_0          conda-forge/linux-64     280kB
813c831
<    + hypothesis                       6.74.0  pyha770c72_0             conda-forge/noarch       291kB
---
>    + hypothesis                       6.74.1  pyha770c72_0             conda-forge/noarch       292kB
920c938
<    + platformdirs                      3.3.0  pyhd8ed1ab_0             conda-forge/noarch        18kB
---
>    + platformdirs                      3.5.0  pyhd8ed1ab_0             conda-forge/noarch        19kB
954c972
<    + requests                         2.28.2  pyhd8ed1ab_1             conda-forge/noarch        57kB
---
>    + requests                         2.29.0  pyhd8ed1ab_0             conda-forge/noarch        57kB
985c1003
<    + virtualenv                      20.22.0  pyhd8ed1ab_0             conda-forge/noarch         3MB
---
>    + virtualenv                      20.23.0  pyhd8ed1ab_0             conda-forge/noarch         3MB
1029,1030d1046
<  Linking font-ttf-inconsolata-3.000-h77eed37_0
<  Linking font-ttf-source-code-pro-2.038-h77eed37_0
1032a1049,1050
>  Linking font-ttf-inconsolata-3.000-h77eed37_0
>  Linking font-ttf-source-code-pro-2.038-h77eed37_0
1053d1070
<  Linking xorg-xextproto-7.3.0-h0b41bf4_1003
1054a1072,1073
>  Linking xorg-xextproto-7.3.0-h0b41bf4_1003
>  Linking xxhash-0.8.1-h0b41bf4_0
1056a1076
>  Linking libaec-1.0.6-hcb278e6_1
1057a1078,1079
>  Linking libopenblas-0.3.21-pthreads_h78a6416_3
>  Linking lz4-c-1.9.4-hcb278e6_0
1066a1089,1091
>  Linking c-ares-1.18.1-h7f98852_0
>  Linking keyutils-1.6.1-h166bdaf_0
>  Linking openssl-3.1.0-hd590300_2
1069,1070d1093
<  Linking ncurses-6.3-h27087fc_1
<  Linking lz4-c-1.9.4-hcb278e6_0
1072d1094
<  Linking xxhash-0.8.1-h0b41bf4_0
1078,1079c1100
<  Linking icu-70.1-h27087fc_0
<  Linking libopenblas-0.3.21-pthreads_h78a6416_3
---
>  Linking ncurses-6.3-h27087fc_1
1082,1085c1103
<  Linking c-ares-1.18.1-h7f98852_0
<  Linking keyutils-1.6.1-h166bdaf_0
<  Linking openssl-3.1.0-hd590300_2
<  Linking libaec-1.0.6-hcb278e6_1
---
>  Linking icu-70.1-h27087fc_0
1091a1110,1111
>  Linking openblas-0.3.21-pthreads_h320a7e8_3
>  Linking libblas-3.9.0-16_linux64_openblas
1096,1097d1115
<  Linking openblas-0.3.21-pthreads_h320a7e8_3
<  Linking libblas-3.9.0-16_linux64_openblas
1100a1119
>  Linking zstd-1.5.2-h3eb15da_6
1101a1121
>  Linking libnghttp2-1.52.0-h61bc06f_0
1103d1122
<  Linking zstd-1.5.2-h3eb15da_6
1107d1125
<  Linking libnghttp2-1.52.0-h61bc06f_0
1111a1130,1131
>  Linking libcblas-3.9.0-16_linux64_openblas
>  Linking liblapack-3.9.0-16_linux64_openblas
1115,1119d1134
<  Linking libcblas-3.9.0-16_linux64_openblas
<  Linking liblapack-3.9.0-16_linux64_openblas
<  Linking libxslt-1.1.37-h873f0b0_0
<  Linking nss-3.89-he45b914_0
<  Linking sqlite-3.40.0-h4ff8645_1
1122a1138,1140
>  Linking libxslt-1.1.37-h873f0b0_0
>  Linking nss-3.89-he45b914_0
>  Linking sqlite-3.40.0-h4ff8645_1
1131d1148
<  Linking libpq-15.2-hb675445_0
1132a1150
>  Linking libpq-15.2-hb675445_0
1138d1155
<  Linking postgresql-15.2-h3248436_0
1139a1157
>  Linking curl-8.0.1-h588be90_0
1142d1159
<  Linking curl-8.0.1-h588be90_0
1143a1161
>  Linking postgresql-15.2-h3248436_0
1145a1164
>  Linking tiledb-2.13.2-hd532e3d_0
1148,1149d1166
<  Linking tiledb-2.13.2-hd532e3d_0
<  Linking kealib-1.5.0-ha7026e8_0
1150a1168
>  Linking kealib-1.5.0-ha7026e8_0
1166d1183
<  Linking pyshp-2.3.1-pyhd8ed1ab_0
1168,1170d1184
<  Linking munkres-1.1.4-pyh9f0ad1d_0
<  Linking pyparsing-3.0.9-pyhd8ed1ab_0
<  Linking cycler-0.11.0-pyhd8ed1ab_0
1172,1174d1185
<  Linking pytz-2023.3-pyhd8ed1ab_0
<  Linking python-tzdata-2023.3-pyhd8ed1ab_0
<  Linking affine-2.4.0-pyhd8ed1ab_0
1181d1191
<  Linking certifi-2022.12.7-pyhd8ed1ab_0
1182a1193,1200
>  Linking pyshp-2.3.1-pyhd8ed1ab_0
>  Linking munkres-1.1.4-pyh9f0ad1d_0
>  Linking pyparsing-3.0.9-pyhd8ed1ab_0
>  Linking cycler-0.11.0-pyhd8ed1ab_0
>  Linking pytz-2023.3-pyhd8ed1ab_0
>  Linking python-tzdata-2023.3-pyhd8ed1ab_0
>  Linking certifi-2022.12.7-pyhd8ed1ab_0
>  Linking affine-2.4.0-pyhd8ed1ab_0
1226c1244
<  Linking platformdirs-3.3.0-pyhd8ed1ab_0
---
>  Linking platformdirs-3.5.0-pyhd8ed1ab_0
1230c1248
<  Linking virtualenv-20.22.0-pyhd8ed1ab_0
---
>  Linking virtualenv-20.23.0-pyhd8ed1ab_0
1234,1237d1251
<  Linking unicodedata2-15.0.0-py310h5764c6d_0
<  Linking pillow-9.4.0-py310h023d228_1
<  Linking kiwisolver-1.4.4-py310hbf28c38_1
<  Linking llvmlite-0.39.1-py310h58363a5_1
1246d1259
<  Linking libcf-1.0.3-py310h71500c5_116
1247a1261,1264
>  Linking unicodedata2-15.0.0-py310h5764c6d_0
>  Linking pillow-9.4.0-py310h023d228_1
>  Linking kiwisolver-1.4.4-py310hbf28c38_1
>  Linking llvmlite-0.39.1-py310h58363a5_1
1248a1266
>  Linking libcf-1.0.3-py310h71500c5_116
1254c1272
<  Linking coverage-7.2.3-py310h1fa729e_0
---
>  Linking coverage-7.2.4-py310h2372a71_0
1259,1260d1276
<  Linking contourpy-1.0.7-py310hdf3cbec_0
<  Linking shapely-2.0.1-py310h8b84c32_0
1261a1278,1279
>  Linking shapely-2.0.1-py310h8b84c32_0
>  Linking contourpy-1.0.7-py310hdf3cbec_0
1277,1278c1295
<  Linking hypothesis-6.74.0-pyha770c72_0
<  Linking snuggs-1.4.7-py_0
---
>  Linking hypothesis-6.74.1-pyha770c72_0
1279a1297
>  Linking snuggs-1.4.7-py_0
1292c1310
<  Linking requests-2.28.2-pyhd8ed1ab_1
---
>  Linking requests-2.29.0-pyhd8ed1ab_0
1458c1476
<    coverage                   7.2.3         py310h1fa729e_0          conda-forge
---
>    coverage                   7.2.4         py310h2372a71_0          conda-forge
1499c1517
<    hypothesis                 6.74.0        pyha770c72_0             conda-forge
---
>    hypothesis                 6.74.1        pyha770c72_0             conda-forge
1606c1624
<    platformdirs               3.3.0         pyhd8ed1ab_0             conda-forge
---
>    platformdirs               3.5.0         pyhd8ed1ab_0             conda-forge
1640c1658
<    requests                   2.28.2        pyhd8ed1ab_1             conda-forge
---
>    requests                   2.29.0        pyhd8ed1ab_0             conda-forge
1671c1689
<    virtualenv                 20.22.0       pyhd8ed1ab_0             conda-forge
---
>    virtualenv                 20.23.0       pyhd8ed1ab_0             conda-forge
1717c1735
<  echo ""Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f""
---
>  echo ""Contender: a220022e7ef8d3df68619643954352fa39394ea8""
1723c1741
<  asv continuous $ASV_OPTIONS v2023.04.2  0f4e99d036b0d6d76a3271e6191eacbc9922662f \
---
>  asv continuous $ASV_OPTIONS v2023.04.2  a220022e7ef8d3df68619643954352fa39394ea8 \
1744c1762
<  · No information stored about machine 'fv-az133-41'. I know about nothing.
---
>  · No information stored about machine 'fv-az613-176'. I know about nothing.
1754c1772
<  machine [fv-az133-41]: 2. os:  The OS type and version of this machine.  For example,
---
>  machine [fv-az613-176]: 2. os:  The OS type and version of this machine.  For example,
1756c1774
<  os [Linux 5.15.0-1035-azure]: 3. arch:  The generic CPU architecture of this machine.  For example,
---
>  os [Linux 5.15.0-1036-azure]: 3. arch:  The generic CPU architecture of this machine.  For example,
1761c1779
<  cpu [Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz]: 5. num_cpu:  The number of CPUs in the system. For example, '4'.
---
>  cpu [Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz]: 5. num_cpu:  The number of CPUs in the system. For example, '4'.
1765,1767c1783,1785
<  ram [7110636]: Baseline: v2023.04.2 
<  + echo 'Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f'
<  Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f
---
>  + echo 'Contender: a220022e7ef8d3df68619643954352fa39394ea8'
>  ram [7110632]: Baseline: v2023.04.2 
>  Contender: a220022e7ef8d3df68619643954352fa39394ea8
1772d1789
<  + asv continuous --split --show-stderr --factor 1.5 v2023.04.2 0f4e99d036b0d6d76a3271e6191eacbc9922662f
1773a1791
>  + asv continuous --split --show-stderr --factor 1.5 v2023.04.2 a220022e7ef8d3df68619643954352fa39394ea8
1774a1793,1818
>  Traceback (most recent call last):
>    File ""/home/runner/micromamba-root/envs/xarray-tests/bin/asv"", line 10, in <module>
>      sys.exit(main())
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/main.py"", line 38, in main
>      result = args.func(args)
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/__init__.py"", line 49, in run_from_args
>      return cls.run_from_conf_args(conf, args)
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py"", line 75, in run_from_conf_args
>      return cls.run(
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py"", line 114, in run
>      result = Run.run(
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/run.py"", line 294, in run
>      Setup.perform_setup(environments, parallel=parallel)
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py"", line 89, in perform_setup
>      list(map(_create, environments))
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py"", line 21, in _create
>      env.create()
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/environment.py"", line 704, in create
>      self._setup()
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py"", line 174, in _setup
>      self._run_conda(['env', 'create', '-f', env_file_name,
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py"", line 227, in _run_conda
>      return util.check_output([conda] + args, env=env)
>    File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/util.py"", line 754, in check_output
>      raise ProcessError(args, retcode, stdout, stderr)
>  asv.util.ProcessError: Command '/usr/bin/conda env create -f /tmp/tmp4i32rw09.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force' returned non-zero exit status 1
1776,1827c1820,1826
<  · Discovering benchmarks
<  ·· Uninstalling from conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
<  ·· Building 0f4e99d0 <main> for conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
<  ·· Installing 0f4e99d0 <main> into conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
<  · Running 362 total benchmarks (2 commits * 1 environments * 181 benchmarks)
<  [  0.00%] · For xarray commit 91f14c9b <v2023.04.2> (round 1/2):
<  [  0.00%] ·· Building for conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse

```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7795#issuecomment-1530349371,https://api.github.com/repos/pydata/xarray/issues/7795,1530349371,IC_kwDOAMm_X85bN0c7,2448579,2023-05-01T21:44:08Z,2023-05-01T21:44:08Z,MEMBER,Yes very frustrating it broke between [5](https://github.com/pydata/xarray/actions/runs/4810343177/jobs/8562834120) and [3](https://github.com/pydata/xarray/actions/runs/4831637514/jobs/8609338116) days ago,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7795#issuecomment-1529882778,https://api.github.com/repos/pydata/xarray/issues/7795,1529882778,IC_kwDOAMm_X85bMCia,2448579,2023-05-01T16:01:59Z,2023-05-01T16:01:59Z,MEMBER,"I'm not sure why asv can't create the env for benchmarking.


```
+ echo 'Baseline:  25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d (pydata:main)'
ram [7110632]: Baseline:  25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d (pydata:main)
+ echo 'Contender: 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32 (dcherian:bench-cftime-groupby)'
Contender: 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32 (dcherian:bench-cftime-groupby)
++ which conda
+ export CONDA_EXE=/usr/bin/conda
+ CONDA_EXE=/usr/bin/conda
+ ASV_OPTIONS='--split --show-stderr --factor 1.5'
+ asv continuous --split --show-stderr --factor 1.5 25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32
+ tee benchmarks.log
+ sed '/Traceback \|failed$\|PERFORMANCE DECREASED/ s/^/::error::/'
Traceback (most recent call last):
  File ""/home/runner/micromamba-root/envs/xarray-tests/bin/asv"", line 10, in <module>
    sys.exit(main())
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/main.py"", line 38, in main
    result = args.func(args)
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/__init__.py"", line 49, in run_from_args
    return cls.run_from_conf_args(conf, args)
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py"", line 75, in run_from_conf_args
    return cls.run(
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py"", line 114, in run
    result = Run.run(
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/run.py"", line 294, in run
    Setup.perform_setup(environments, parallel=parallel)
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py"", line 89, in perform_setup
    list(map(_create, environments))
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py"", line 21, in _create
    env.create()
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/environment.py"", line 704, in create
    self._setup()
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py"", line 174, in _setup
    self._run_conda(['env', 'create', '-f', env_file_name,
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py"", line 227, in _run_conda
    return util.check_output([conda] + args, env=env)
  File ""/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/util.py"", line 754, in check_output
    raise ProcessError(args, retcode, stdout, stderr)
asv.util.ProcessError: Command '/usr/bin/conda env create -f /tmp/tmphnyugp42.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force' returned non-zero exit status 1
· Creating environments
·· Error running /usr/bin/conda env create -f /tmp/tmphnyugp42.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force (exit status 1)
   STDOUT -------->
   
   STDERR -------->
   
   SpecNotFound: /tmp/tmphnyugp42.yml is not a valid yaml file.
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1688781350
https://github.com/pydata/xarray/pull/7799#issuecomment-1529877407,https://api.github.com/repos/pydata/xarray/issues/7799,1529877407,IC_kwDOAMm_X85bMBOf,2448579,2023-05-01T16:00:25Z,2023-05-01T16:00:25Z,MEMBER,"In general I think it would be fine to merge incremental changes. 

It may be good to schedule a quick 30-minute chat to sync up ideas here.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1690019325
https://github.com/pydata/xarray/issues/7797#issuecomment-1528934982,https://api.github.com/repos/pydata/xarray/issues/7797,1528934982,IC_kwDOAMm_X85bIbJG,2448579,2023-04-30T04:15:09Z,2023-04-30T04:15:29Z,MEMBER,"```
x_slice.groupby(""time.month"") - clim
```

Nice, we test `x.groupby(""time.month"")  - clim.slice(...)` but not `x.sel(...).groupby() - clim`

Is it possible for you to run nightly tests against xarray's main branch? That would help a lot.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1689655334
https://github.com/pydata/xarray/issues/3267#issuecomment-1528597777,https://api.github.com/repos/pydata/xarray/issues/3267,1528597777,IC_kwDOAMm_X85bHI0R,2448579,2023-04-29T03:30:20Z,2023-04-29T03:30:20Z,MEMBER,Should be better now with `flox` installed,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,485508509
https://github.com/pydata/xarray/issues/7790#issuecomment-1528072972,https://api.github.com/repos/pydata/xarray/issues/7790,1528072972,IC_kwDOAMm_X85bFIsM,2448579,2023-04-28T20:43:44Z,2023-04-28T20:43:44Z,MEMBER,https://github.com/pydata/xarray/blob/25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d/xarray/coding/times.py#L717-L735,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,1685803922
https://github.com/pydata/xarray/pull/7635#issuecomment-1527657759,https://api.github.com/repos/pydata/xarray/issues/7635,1527657759,IC_kwDOAMm_X85bDjUf,2448579,2023-04-28T14:29:52Z,2023-04-28T14:31:39Z,MEMBER,Thanks for your patience here @dsgreen2 . This is a nice contribution. Welcome to Xarray!,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,1627298527
https://github.com/pydata/xarray/issues/7713#issuecomment-1527652802,https://api.github.com/repos/pydata/xarray/issues/7713,1527652802,IC_kwDOAMm_X85bDiHC,2448579,2023-04-28T14:26:15Z,2023-04-28T14:26:32Z,MEMBER,This is a duplicate of https://github.com/pydata/xarray/issues/4404. It seems like this is for MultiIndex support. A better error message and documentation would be a great contribution! Let's move the conversation there.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1652227927
https://github.com/pydata/xarray/pull/7739#issuecomment-1527648649,https://api.github.com/repos/pydata/xarray/issues/7739,1527648649,IC_kwDOAMm_X85bDhGJ,2448579,2023-04-28T14:22:58Z,2023-04-28T14:22:58Z,MEMBER,Thanks @jmccreight great  work!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659078413
https://github.com/pydata/xarray/pull/7741#issuecomment-1527647415,https://api.github.com/repos/pydata/xarray/issues/7741,1527647415,IC_kwDOAMm_X85bDgy3,2448579,2023-04-28T14:22:02Z,2023-04-28T14:22:02Z,MEMBER,Thanks @abrammer very nice work!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659654612
https://github.com/pydata/xarray/issues/7764#issuecomment-1526241680,https://api.github.com/repos/pydata/xarray/issues/7764,1526241680,IC_kwDOAMm_X85a-JmQ,2448579,2023-04-27T19:26:13Z,2023-04-27T19:26:13Z,MEMBER,I think I agree with `use_opt_einsum: bool`,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1672288892
https://github.com/pydata/xarray/issues/7764#issuecomment-1526240154,https://api.github.com/repos/pydata/xarray/issues/7764,1526240154,IC_kwDOAMm_X85a-JOa,2448579,2023-04-27T19:25:29Z,2023-04-27T19:25:29Z,MEMBER,"`numpy.einsum` has some version of `opt_einsum` implemented under the `optimize` kwarg. IIUC this is False by default because it adds overhead to small problems ([comment](https://github.com/numpy/numpy/pull/5488#issuecomment-246496342))
> The complete overhead for computing a path (parsing the input, finding the path, and organization that data) with default options is about 150us. Looks like einsum takes a minimum of 5-10us to call as a reference. So the worst case scenario would be that the optimization overhead makes einsum 30x slower. Personally id go for turning optimization off by default and then revisiting if someone tackles the parsing issue to reduce the overhead.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1672288892
https://github.com/pydata/xarray/issues/7790#issuecomment-1526224630,https://api.github.com/repos/pydata/xarray/issues/7790,1526224630,IC_kwDOAMm_X85a-Fb2,2448579,2023-04-27T19:18:12Z,2023-04-27T19:18:12Z,MEMBER,"I think the issue is that we're always running ""CF encoding"" which is more appropriate for netCDF4 than Zarr, since Zarr supports datetime64 natively. And currently there's no way to control whether the datetime encoder is applied or not, we just look at the dtype:
https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/coding/times.py#L697-L704

I think the right way to fix this is to allow the user to run the `encode` and `write` steps separately, with the encoding steps being controllable: https://github.com/pydata/xarray/issues/4412","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1685803922
https://github.com/pydata/xarray/issues/6610#issuecomment-1523666774,https://api.github.com/repos/pydata/xarray/issues/6610,1523666774,IC_kwDOAMm_X85a0U9W,2448579,2023-04-26T15:59:06Z,2023-04-26T16:06:17Z,MEMBER,"We voted to move forward with this API:
```python
data.groupby({
	""x0"": xr.BinGrouper(bins=pd.IntervalIndex.from_breaks(coords[""x_vertices""])),  # binning
    ""y"": xr.UniqueGrouper(labels=[""a"", ""b"", ""c""]),  # categorical, data.y is dask-backed
    ""time"": xr.TimeResampleGrouper(freq=""MS"")
	},
)
```

We won't break backwards-compatibility for `da.groupby(other_data_array)`, but for any complicated use-cases with `Grouper` the user must add the `by` variable to the xarray object, and refer to it by name in the dictionary as above.","{""total_count"": 4, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 1, ""eyes"": 1}",,1236174701
https://github.com/pydata/xarray/pull/7787#issuecomment-1523670253,https://api.github.com/repos/pydata/xarray/issues/7787,1523670253,IC_kwDOAMm_X85a0Vzt,2448579,2023-04-26T16:01:16Z,2023-04-26T16:01:16Z,MEMBER,:+1: from me!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1684281101
https://github.com/pydata/xarray/pull/7561#issuecomment-1523669010,https://api.github.com/repos/pydata/xarray/issues/7561,1523669010,IC_kwDOAMm_X85a0VgS,2448579,2023-04-26T16:00:32Z,2023-04-26T16:00:32Z,MEMBER,"I'd like to merge this soon. It's an internal refactor with no public API changes.

I think we can expose the Grouper objects publicly in a new PR","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1600382587
https://github.com/pydata/xarray/issues/6610#issuecomment-1498463195,https://api.github.com/repos/pydata/xarray/issues/6610,1498463195,IC_kwDOAMm_X85ZULvb,2448579,2023-04-06T04:07:05Z,2023-04-26T15:52:21Z,MEMBER,"Here's a question.

In #7561, I implement `Grouper` objects that don't have any information of the variable we're grouping by. So the future API would be:

``` python
data.groupby({
	""x0"": xr.BinGrouper(bins=pd.IntervalIndex.from_breaks(coords[""x_vertices""])),  # binning
    ""y"": xr.UniqueGrouper(labels=[""a"", ""b"", ""c""]),  # categorical, data.y is dask-backed
    ""time"": xr.TimeResampleGrouper(freq=""MS"")
	},
)
```

Does this look OK or do we want to support passing the DataArray or variable name as a `by` kwarg:  
```python
xr.BinGrouper(by=""x0"", bins=pd.IntervalIndex.from_breaks(coords[""x_vertices""]))
``` 

This syntax would support passing `DataArray` in `by`, so `xr.UniqueGrouper(by=data.y)` for example. Is that an important use case to support? In #7561, I create new `ResolvedGrouper` objects that do contain `by` as a DataArray always, so it's really a question of exposing that to the user.

PS: [Pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html) has a `key` kwarg for a column name. So following that would mean

``` python
data.groupby([
	xr.BinGrouper(""x0"", bins=pd.IntervalIndex.from_breaks(coords[""x_vertices""])),  # binning
    xr.UniqueGrouper(""y"", labels=[""a"", ""b"", ""c""]),  # categorical, data.y is dask-backed
    xr.TimeResampleGrouper(""time"", freq=""MS"")
	],
)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1236174701
https://github.com/pydata/xarray/issues/7782#issuecomment-1523618985,https://api.github.com/repos/pydata/xarray/issues/7782,1523618985,IC_kwDOAMm_X85a0JSp,2448579,2023-04-26T15:29:14Z,2023-04-26T15:29:14Z,MEMBER,"Thanks for the in-depth investigation!

> As we can see from the above output, in netCDF4-python scaling is adapting the dtype to unsigned, not masking. This is also reflected in the docs [unidata.github.io/netcdf4-python/#Variable](https://unidata.github.io/netcdf4-python/#Variable).

Do we know why this is so?

> If Xarray is trying to align with netCDF4-python it should separate mask and scale as netCDF4-python is doing. It does this already by using different coders but it doesn't separate it API-wise.

:+1:","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1681353195
https://github.com/pydata/xarray/pull/7786#issuecomment-1523589353,https://api.github.com/repos/pydata/xarray/issues/7786,1523589353,IC_kwDOAMm_X85a0CDp,2448579,2023-04-26T15:10:44Z,2023-04-26T15:10:44Z,MEMBER,Thanks @ksunden ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1683839855
https://github.com/pydata/xarray/pull/7785#issuecomment-1521951732,https://api.github.com/repos/pydata/xarray/issues/7785,1521951732,IC_kwDOAMm_X85atyP0,2448579,2023-04-25T15:03:39Z,2023-04-25T15:03:39Z,MEMBER,"Thanks, we use this file only for GitHub's dependency graph, which now supports `pyproject.toml`, so we should just migrate and have one less thing to update.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1683335751
https://github.com/pydata/xarray/pull/7650#issuecomment-1521798719,https://api.github.com/repos/pydata/xarray/issues/7650,1521798719,IC_kwDOAMm_X85atM4_,2448579,2023-04-25T13:32:43Z,2023-04-25T13:32:43Z,MEMBER,Yes you should be good on 2023.04.2,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1632422255
https://github.com/pydata/xarray/issues/7782#issuecomment-1520550980,https://api.github.com/repos/pydata/xarray/issues/7782,1520550980,IC_kwDOAMm_X85aocRE,2448579,2023-04-24T17:18:37Z,2023-04-24T19:55:11Z,MEMBER,"> We would want to check the different attributes and apply the coders only as needed.

The current approach seems OK, no? It seems like the bug is that `UnsignedMaskCoder` should be outside `if mask_and_scale`.


> We would want to check the different attributes and apply the coders only as needed. 

EDIT: I mean that each coder checks whether it is applicable, so we already do that","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1681353195
https://github.com/pydata/xarray/issues/7782#issuecomment-1520434316,https://api.github.com/repos/pydata/xarray/issues/7782,1520434316,IC_kwDOAMm_X85an_yM,2448579,2023-04-24T15:55:48Z,2023-04-24T15:55:48Z,MEMBER,">mask_and_scale=False will also deactivate the Unsigned decoding.

Do these two have to be linked? I wonder if we can handle the filling later:
https://github.com/pydata/xarray/blob/2657787f76fffe4395288702403a68212e69234b/xarray/coding/variables.py#L397-L407

It seems like this code is setting fill values to the right type for CFMaskCoder, which is the next step:


https://github.com/pydata/xarray/blob/2657787f76fffe4395288702403a68212e69234b/xarray/conventions.py#L266-L272
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1681353195
https://github.com/pydata/xarray/issues/7772#issuecomment-1518429926,https://api.github.com/repos/pydata/xarray/issues/7772,1518429926,IC_kwDOAMm_X85agWbm,2448579,2023-04-21T23:56:26Z,2023-04-21T23:56:26Z,MEMBER,"I cannot reproduce this on `main`. What version are you running?

```
(xarray-tests) 17:55:11 [cgdm-caguas] {~/python/xarray/devel}
──────> python lazy-nbytes.py
8582842640
Filename: /Users/dcherian/work/python/xarray/devel/lazy-nbytes.py

Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     4    101.5 MiB    101.5 MiB           1   @profile
     5                                         def get_dataset_size() :
     6    175.9 MiB     74.4 MiB           1       dataset =     xa.open_dataset(""test_1.nc"")
     7    175.9 MiB      0.0 MiB           1       print(dataset.nbytes)
```

The BackendArray types define `shape` and `dtype` so we can calculate size without loading the data.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1676561243
https://github.com/pydata/xarray/issues/7758#issuecomment-1516738260,https://api.github.com/repos/pydata/xarray/issues/7758,1516738260,IC_kwDOAMm_X85aZ5bU,2448579,2023-04-20T18:03:48Z,2023-04-20T18:03:48Z,MEMBER,"We already have this:
https://github.com/pydata/xarray/blob/a4c54a3b1085d7d8ab900f9a645439270327d2c3/xarray/backends/netCDF4_.py#L102-L106

https://github.com/pydata/xarray/blob/a4c54a3b1085d7d8ab900f9a645439270327d2c3/xarray/backends/common.py#L61-L68

but you're right, I don't think it's configurable.

```
ds = xr.open_dataset(
    ""https://thredds.met.no/thredds/dodsC/osisaf/met.no/ice/index/v2p1/nh/osisaf_nh_sie_monthly.nc""
)
ds.sie.variable._data.array.array.array.array.array.datastore.is_remote
```
```
True
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1668898601
https://github.com/pydata/xarray/issues/7773#issuecomment-1516642402,https://api.github.com/repos/pydata/xarray/issues/7773,1516642402,IC_kwDOAMm_X85aZiBi,2448579,2023-04-20T16:44:51Z,2023-04-20T16:44:51Z,MEMBER,Can you try with `netcdf4.Dataset` to remove xarray from the equation?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1676792648
https://github.com/pydata/xarray/pull/7769#issuecomment-1516635276,https://api.github.com/repos/pydata/xarray/issues/7769,1516635276,IC_kwDOAMm_X85aZgSM,2448579,2023-04-20T16:38:41Z,2023-04-20T16:38:41Z,MEMBER,"Thanks @gsieros, and apologies for the trouble. Clearly our test suite was insufficient.

 I'll push out a release soon.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1675073096
https://github.com/pydata/xarray/issues/7770#issuecomment-1515134832,https://api.github.com/repos/pydata/xarray/issues/7770,1515134832,IC_kwDOAMm_X85aTx9w,2448579,2023-04-19T17:49:36Z,2023-04-19T17:49:36Z,MEMBER,"You should be using entrypoints:
- https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html
- https://tutorial.xarray.dev/advanced/backends/backends.html","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1675299031
https://github.com/pydata/xarray/pull/7739#issuecomment-1514762927,https://api.github.com/repos/pydata/xarray/issues/7739,1514762927,IC_kwDOAMm_X85aSXKv,2448579,2023-04-19T13:44:10Z,2023-04-19T13:44:10Z,MEMBER,"> what about instead of adding another kwarg, you could use data = True / False / ""numpy""?

Oh yeah, I like this. Only suggestion is `data =  True / False / ""array"" / ""list""` where `True` and `""list""` are synonymous.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659078413
https://github.com/pydata/xarray/pull/7669#issuecomment-1514043756,https://api.github.com/repos/pydata/xarray/issues/7669,1514043756,IC_kwDOAMm_X85aPnls,2448579,2023-04-19T02:26:30Z,2023-04-19T02:26:30Z,MEMBER,Thanks @remigathoni great PR!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1639361476
https://github.com/pydata/xarray/pull/7698#issuecomment-1513871216,https://api.github.com/repos/pydata/xarray/issues/7698,1513871216,IC_kwDOAMm_X85aO9dw,2448579,2023-04-18T22:36:29Z,2023-04-18T22:36:29Z,MEMBER,"> One workaround is to use os.read when passed a filename, and .read() when passed a file object.

Not sure about the details here. I think it would be good to discuss in an issue before proceeding","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1646350377
https://github.com/pydata/xarray/pull/7739#issuecomment-1513865565,https://api.github.com/repos/pydata/xarray/issues/7739,1513865565,IC_kwDOAMm_X85aO8Fd,2448579,2023-04-18T22:28:07Z,2023-04-18T22:28:07Z,MEMBER,"Copying my comment from https://github.com/pydata/xarray/issues/1599#issuecomment-1504276696
>  Perhaps we should have array_to_list: bool instead. If False, we just preserve the underlying array type.
> Then the user could do ds.as_numpy().to_dict(array_to_list=False) to always get numpy arrays as #7739

`array_data` or `data_as_array` could be other options","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659078413
https://github.com/pydata/xarray/pull/7753#issuecomment-1513849818,https://api.github.com/repos/pydata/xarray/issues/7753,1513849818,IC_kwDOAMm_X85aO4Pa,2448579,2023-04-18T22:08:50Z,2023-04-18T22:08:50Z,MEMBER,"> Is it possible to prevent the cancelling of this run when pushing another commit to main? This would be nice to trace the regressions.

I *think* that's the default

>  assume that a decreased performance will result in a CI fail? Maybe we could automate this even more and automatically open an issue? 

Yeah, but it's a little flaky: https://labs.quansight.org/blog/github-actions-benchmarks so the noise might not be worth it.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1666853925
https://github.com/pydata/xarray/issues/7759#issuecomment-1511781870,https://api.github.com/repos/pydata/xarray/issues/7759,1511781870,IC_kwDOAMm_X85aG_Xu,2448579,2023-04-17T17:25:17Z,2023-04-17T17:25:17Z,MEMBER,"Ouch, thanks for finding and reporting this bad bug!

We'll issue a bugfix release soon.","{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1670415238
https://github.com/pydata/xarray/issues/7716#issuecomment-1507391665,https://api.github.com/repos/pydata/xarray/issues/7716,1507391665,IC_kwDOAMm_X85Z2Pix,2448579,2023-04-13T17:56:33Z,2023-04-13T17:56:33Z,MEMBER,Should be fixed with the various repodata patches,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1654022522
https://github.com/pydata/xarray/issues/4325#issuecomment-1507213204,https://api.github.com/repos/pydata/xarray/issues/4325,1507213204,IC_kwDOAMm_X85Z1j-U,2448579,2023-04-13T15:56:51Z,2023-04-13T15:56:51Z,MEMBER,"Over in https://github.com/pydata/xarray/issues/7344#issuecomment-1336299057 @shoyer

> That said -- we could also switch to smarter NumPy based algorithms to implement most moving window calculations, e.g,. using np.nancumsum for moving window means.

After some digging, this would involve using [""summed area tables""](https://en.wikipedia.org/wiki/Summed-area_table) which have been generalized to nD, and can be used to compute all our built-in reductions (except median). Basically we'd store the summed area table (repeated `np.cumsum`) and then calculate reductions using binary ops (mostly subtraction) on those tables. 

This would be an intermediate level project but we could implement it incrementally (start with `sum` for example). One downside is the potential for floating point inaccuracies because we're taking differences of potentially large numbers.

cc @aulemahal ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,675482176
https://github.com/pydata/xarray/pull/4915#issuecomment-1507198725,https://api.github.com/repos/pydata/xarray/issues/4915,1507198725,IC_kwDOAMm_X85Z1gcF,2448579,2023-04-13T15:46:18Z,2023-04-13T15:46:18Z,MEMBER,Can you copy your comment to #4325 please?,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,809366777
https://github.com/pydata/xarray/pull/7681#issuecomment-1507126565,https://api.github.com/repos/pydata/xarray/issues/7681,1507126565,IC_kwDOAMm_X85Z1O0l,2448579,2023-04-13T14:59:41Z,2023-04-13T14:59:41Z,MEMBER,Thanks @harshitha1201 !,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1641188400
https://github.com/pydata/xarray/pull/7731#issuecomment-1507124290,https://api.github.com/repos/pydata/xarray/issues/7731,1507124290,IC_kwDOAMm_X85Z1ORC,2448579,2023-04-13T14:58:20Z,2023-04-13T14:58:20Z,MEMBER,"Thanks for patiently working through this Spencer. I'll merge now, and then we can release tomorrow.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657396474
https://github.com/pydata/xarray/pull/7731#issuecomment-1506285455,https://api.github.com/repos/pydata/xarray/issues/7731,1506285455,IC_kwDOAMm_X85ZyBeP,2448579,2023-04-13T03:35:01Z,2023-04-13T03:35:01Z,MEMBER,"There are a bunch of warnings in the tests that could be silenced:
```
D:\a\xarray\xarray\xarray\tests\test_dataset.py:516: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values. This warning is caused by passing non-nanosecond np.datetime64 or np.timedelta64 values to the DataArray or Variable constructor; it can be silenced by converting the values to nanosecond precision ahead of time.
```
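
A minimal sketch of the conversion the warning itself suggests (hedged: the values and names below are illustrative, not taken from the failing test):

```python
import numpy as np
import xarray as xr

# non-nanosecond datetime64 values trigger the warning when passed directly
times = np.array(['2000-01-01', '2000-01-02'], dtype='datetime64[s]')

# casting to nanosecond precision ahead of time keeps the constructor quiet
da = xr.DataArray(times.astype('datetime64[ns]'), dims='time')
```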

But we can also just merge quickly to get a release out","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657396474
https://github.com/pydata/xarray/pull/7751#issuecomment-1505834246,https://api.github.com/repos/pydata/xarray/issues/7751,1505834246,IC_kwDOAMm_X85ZwTUG,2448579,2023-04-12T19:49:28Z,2023-04-12T19:49:28Z,MEMBER,nice!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1664888324
https://github.com/pydata/xarray/pull/4915#issuecomment-1505584866,https://api.github.com/repos/pydata/xarray/issues/4915,1505584866,IC_kwDOAMm_X85ZvWbi,2448579,2023-04-12T16:34:55Z,2023-04-12T16:34:55Z,MEMBER,"We would welcome a PR. Looking at the implementation of `mean` should help:
https://github.com/pydata/xarray/blob/67ff171367ada960f02b40195249e79deb4ac891/xarray/core/rolling.py#L160","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,809366777
https://github.com/pydata/xarray/issues/7730#issuecomment-1505403088,https://api.github.com/repos/pydata/xarray/issues/7730,1505403088,IC_kwDOAMm_X85ZuqDQ,2448579,2023-04-12T14:43:15Z,2023-04-12T14:43:15Z,MEMBER,Thanks for the report! I think we should add your example as a benchmark.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657036222
https://github.com/pydata/xarray/pull/7731#issuecomment-1505399396,https://api.github.com/repos/pydata/xarray/issues/7731,1505399396,IC_kwDOAMm_X85ZupJk,2448579,2023-04-12T14:41:02Z,2023-04-12T14:41:17Z,MEMBER,"RTD failures are real:
```
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_all_dates.
Possible hints:
* ImportError: 
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_all_dates'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_mixed.
Possible hints:
* ImportError: 
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_mixed'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_monotonic.
Possible hints:
* ImportError: 
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_monotonic'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_type_compatible.
Possible hints:
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_type_compatible'
* ImportError: 
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.set_value.
Possible hints:
* ImportError: 
* AttributeError: type object 'CFTimeIndex' has no attribute 'set_value'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.to_native_types.
Possible hints:
* ImportError: 
* AttributeError: type object 'CFTimeIndex' has no attribute 'to_native_types'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657396474
https://github.com/pydata/xarray/issues/1599#issuecomment-1504276696,https://api.github.com/repos/pydata/xarray/issues/1599,1504276696,IC_kwDOAMm_X85ZqXDY,2448579,2023-04-11T23:43:34Z,2023-04-11T23:43:43Z,MEMBER,"Perhaps we should have `array_to_list: bool` instead. If `False`, we just preserve the underlying array type.

Then the user could do `ds.as_numpy().to_dict(array_to_list=False)` to always get numpy arrays, as in #7739.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,261727170
https://github.com/pydata/xarray/pull/7747#issuecomment-1504098302,https://api.github.com/repos/pydata/xarray/issues/7747,1504098302,IC_kwDOAMm_X85Zprf-,2448579,2023-04-11T21:11:30Z,2023-04-11T21:11:30Z,MEMBER,Welcome to Xarray! Next time you can just update the existing branch/PR. I'll close the other one.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1663213844
https://github.com/pydata/xarray/pull/7724#issuecomment-1504093110,https://api.github.com/repos/pydata/xarray/issues/7724,1504093110,IC_kwDOAMm_X85ZpqO2,2448579,2023-04-11T21:07:32Z,2023-04-11T21:07:32Z,MEMBER,"I think we can double-check that the only failures are the CFTimeIndex ones, restore the pin, then merge, and then remove the pin in #7731.

","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1655782486
https://github.com/pydata/xarray/pull/7461#issuecomment-1503517162,https://api.github.com/repos/pydata/xarray/issues/7461,1503517162,IC_kwDOAMm_X85Zndnq,2448579,2023-04-11T14:50:11Z,2023-04-11T14:50:11Z,MEMBER,"Here is our support policy for versions: https://docs.xarray.dev/en/stable/getting-started-guide/installing.html#minimum-dependency-versions though I think we dropped py38 too early.

For your current issue, I'm surprised this patch didn't fix it: https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/429

cc @hmaarrfk @ocefpaf 

","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1550109629
https://github.com/pydata/xarray/pull/7019#issuecomment-1502378655,https://api.github.com/repos/pydata/xarray/issues/7019,1502378655,IC_kwDOAMm_X85ZjHqf,2448579,2023-04-10T21:57:04Z,2023-04-10T21:57:04Z,MEMBER,"> We could still achieve the goal of running cubed without dask by making normalize_chunks the responsibility of the chunkmanager 

Seems OK to me.
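
A rough sketch of what that could look like (hedged: the class and method names here are illustrative, not xarray's actual chunkmanager interface). Each manager would own chunk normalization, so dask is only imported inside the dask-specific manager:

```python
import dask.array

class DaskManager:
    # illustrative only: delegate normalization to dask for the dask manager
    def normalize_chunks(self, chunks, shape=None, dtype=None, previous_chunks=None):
        return dask.array.core.normalize_chunks(
            chunks, shape=shape, dtype=dtype, previous_chunks=previous_chunks
        )
```

A cubed-backed manager could then supply its own implementation without touching dask at all.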


The other option is to xfail the broken tests on old dask versions","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1368740629
https://github.com/pydata/xarray/pull/7741#issuecomment-1501903273,https://api.github.com/repos/pydata/xarray/issues/7741,1501903273,IC_kwDOAMm_X85ZhTmp,2448579,2023-04-10T14:44:30Z,2023-04-10T14:44:30Z,MEMBER,"I forgot to say, this looks pretty great already. We just need tests.

Thank you!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659654612
https://github.com/pydata/xarray/pull/7741#issuecomment-1501902783,https://api.github.com/repos/pydata/xarray/issues/7741,1501902783,IC_kwDOAMm_X85ZhTe_,2448579,2023-04-10T14:44:02Z,2023-04-10T14:44:02Z,MEMBER,"> I don't see tests for other ops. Are these tested somewhere if so I can add tests when I find them.

Grepping for `unary_op` and `binary_op` shows a bunch in `test_dataset.py`, `test_dataarray.py`, and `test_variable.py`.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1659654612
https://github.com/pydata/xarray/pull/7719#issuecomment-1499904939,https://api.github.com/repos/pydata/xarray/issues/7719,1499904939,IC_kwDOAMm_X85ZZrur,2448579,2023-04-07T03:56:54Z,2023-04-07T03:56:54Z,MEMBER,Thanks @kmuehlbauer ,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 1, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1654988876
https://github.com/pydata/xarray/issues/7730#issuecomment-1499878014,https://api.github.com/repos/pydata/xarray/issues/7730,1499878014,IC_kwDOAMm_X85ZZlJ-,2448579,2023-04-07T02:56:29Z,2023-04-07T02:56:29Z,MEMBER,"Also because your groups are sorted, `engine='flox'` is faster

```python
gb = da.groupby(""time.year"")

# using max
xr.set_options(use_flox=True)
%timeit gb.max(""time"")
%timeit gb.max(""time"", engine=""flox"")

xr.set_options(use_flox=False)
%timeit gb.max(""time"")
```

```
177 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
11.9 ms ± 471 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
18.5 ms ± 629 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657036222
https://github.com/pydata/xarray/issues/7730#issuecomment-1499872700,https://api.github.com/repos/pydata/xarray/issues/7730,1499872700,IC_kwDOAMm_X85ZZj28,2448579,2023-04-07T02:45:24Z,2023-04-07T02:49:41Z,MEMBER,"The slowness is basically a bunch of copies happening in `align`, `broadcast`, and `transpose`. It's made a lot worse for this case, because we take CFTimeIndex and cast it back to CFTimeIndex, repeating all the validity checks.

And then there's https://github.com/xarray-contrib/flox/issues/222","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657036222
https://github.com/pydata/xarray/issues/7733#issuecomment-1499273560,https://api.github.com/repos/pydata/xarray/issues/7733,1499273560,IC_kwDOAMm_X85ZXRlY,2448579,2023-04-06T15:44:48Z,2023-04-06T15:44:48Z,MEMBER,"Hi @alippai it is now possible to write ""external"" backends that register with xarray. See https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html
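
A minimal sketch of what such a backend looks like (hedged: the class body here is a stub; the linked guide has the authoritative signatures and registration details):

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint

class MyBackendEntrypoint(BackendEntrypoint):
    # called for xr.open_dataset(path, engine='my_engine') once the class is
    # registered under the 'xarray.backends' entry-point group
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # a real backend would parse filename_or_obj; this stub returns dummy data
        return xr.Dataset({'var': ('x', np.arange(3))})

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith('.myext')
```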

Feel free to ask questions here while you experiment with it. This [tutorial](https://tutorial.xarray.dev/advanced/backends/backends.html) may help too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657651596
https://github.com/pydata/xarray/issues/7730#issuecomment-1498971031,https://api.github.com/repos/pydata/xarray/issues/7730,1498971031,IC_kwDOAMm_X85ZWHuX,2448579,2023-04-06T12:18:50Z,2023-04-06T12:18:50Z,MEMBER,Thanks! Can you add version info please?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1657036222
https://github.com/pydata/xarray/issues/7723#issuecomment-1498468954,https://api.github.com/repos/pydata/xarray/issues/7723,1498468954,IC_kwDOAMm_X85ZUNJa,2448579,2023-04-06T04:15:06Z,2023-04-06T04:15:06Z,MEMBER,"> Would be a good idea to document this behaviour.

+1

> Maybe yet another keyword switch, use_default_fillvalues?

Adding `mask_default_netcdf_fill_values: bool` is probably a good idea.

> I'm still convinced this could be fixed for floating point data.

Generally it's worse if we obey some default fill values but not others, because it becomes quite confusing to a user.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1655569401
https://github.com/pydata/xarray/pull/7669#issuecomment-1498447641,https://api.github.com/repos/pydata/xarray/issues/7669,1498447641,IC_kwDOAMm_X85ZUH8Z,2448579,2023-04-06T03:40:24Z,2023-04-06T03:40:24Z,MEMBER,"The docs build failure is real, from some rst formatting error
```
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1639361476
https://github.com/pydata/xarray/issues/7722#issuecomment-1498404409,https://api.github.com/repos/pydata/xarray/issues/7722,1498404409,IC_kwDOAMm_X85ZT9Y5,2448579,2023-04-06T02:26:41Z,2023-04-06T02:26:41Z,MEMBER,How about not adding `_FillValue` when `missing_value` is present? Is that a good idea? Is it standards-compliant?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1655483374
https://github.com/pydata/xarray/issues/7723#issuecomment-1498403174,https://api.github.com/repos/pydata/xarray/issues/7723,1498403174,IC_kwDOAMm_X85ZT9Fm,2448579,2023-04-06T02:24:34Z,2023-04-06T02:24:34Z,MEMBER,"See https://github.com/pydata/xarray/pull/5680#issuecomment-895508489

> To follow up, from a practical perspective, there are two problems with assuming that there are always ""truly missing values"" (case 2):

>    It makes it impossible to represent the full range of values in a data type, e.g., 255 for uint8 now means ""missing"".
>    Due to unfortunately limited options for representing missing data in NumPy, Xarray represents truly missing values in its data model with ""NaN"". This is more or less OK for floating point data, but means that integer data gets converted into floats. For example, uint8 would now get automatically converted into float32.

> Both of these issues are problematic for faithful ""round tripping"" of Xarray data into netCDF and back. For this reason, Xarray needs an unambiguous way to know if a netCDF variable could contain semantically missing values. So far, we've used the presence of missing_value and _FillValue attributes for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1655569401
https://github.com/pydata/xarray/pull/7706#issuecomment-1498386138,https://api.github.com/repos/pydata/xarray/issues/7706,1498386138,IC_kwDOAMm_X85ZT47a,2448579,2023-04-06T01:56:08Z,2023-04-06T01:56:08Z,MEMBER,Thanks @nishtha981 this is a great contribution! ,"{""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 1, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1650309361
https://github.com/pydata/xarray/issues/7727#issuecomment-1498375006,https://api.github.com/repos/pydata/xarray/issues/7727,1498375006,IC_kwDOAMm_X85ZT2Ne,2448579,2023-04-06T01:37:06Z,2023-04-06T01:37:06Z,MEMBER,I think we would gladly take a PR for this. ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1656363348
https://github.com/pydata/xarray/issues/7716#issuecomment-1498186349,https://api.github.com/repos/pydata/xarray/issues/7716,1498186349,IC_kwDOAMm_X85ZTIJt,2448579,2023-04-05T21:30:02Z,2023-04-05T21:30:02Z,MEMBER,"> I think they are all expected [pandas.pydata.org/docs/whatsnew/v2.0.0.html](https://pandas.pydata.org/docs/whatsnew/v2.0.0.html) namely

Yes they are. We just haven't had the time to fix things.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1654022522
https://github.com/pydata/xarray/issues/6836#issuecomment-1498119367,https://api.github.com/repos/pydata/xarray/issues/6836,1498119367,IC_kwDOAMm_X85ZS3zH,2448579,2023-04-05T20:35:20Z,2023-04-05T20:35:20Z,MEMBER,"I think we could special-case extracting a multiindex level here:
https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/groupby.py#L469

`group` at that stage should have values
```
['a', 'a', 'b', 'b', 'c', 'c']
```
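
For reference, extracting that level from the underlying pandas MultiIndex gives exactly this sequence (hedged: the level names here are made up):

```python
import pandas as pd

midx = pd.MultiIndex.from_product([['a', 'b', 'c'], [0, 1]], names=['letter', 'num'])
print(list(midx.get_level_values('letter')))  # ['a', 'a', 'b', 'b', 'c', 'c']
```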

@mschrimpf Can you try that and send in a PR?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1318992926
https://github.com/pydata/xarray/issues/7573#issuecomment-1497690966,https://api.github.com/repos/pydata/xarray/issues/7573,1497690966,IC_kwDOAMm_X85ZRPNW,2448579,2023-04-05T15:33:03Z,2023-04-05T15:33:03Z,MEMBER,Does anyone have any thoughts here? Shall we merge https://github.com/conda-forge/xarray-feedstock/pull/84/files and see if someone complains?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1603957501
https://github.com/pydata/xarray/pull/7561#issuecomment-1497686846,https://api.github.com/repos/pydata/xarray/issues/7561,1497686846,IC_kwDOAMm_X85ZROM-,2448579,2023-04-05T15:30:16Z,2023-04-05T15:30:16Z,MEMBER,"Variables don't have coordinates so that won't work.

mypy is correct here; it's a bug, and we don't test for grouping by index variables. A commit reverting to the old `len` check would be great here, if you have the time.

 It's not clear to me why we allow this actually. Seems like `.groupby(""DIMENSION"")` solves that use-case.

","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1600382587
https://github.com/pydata/xarray/issues/7707#issuecomment-1496083800,https://api.github.com/repos/pydata/xarray/issues/7707,1496083800,IC_kwDOAMm_X85ZLG1Y,2448579,2023-04-04T14:34:27Z,2023-04-04T14:34:27Z,MEMBER,"Oh wow, we're down to mostly Zarr failures!

cc @jhamman ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1650481625
https://github.com/pydata/xarray/issues/7378#issuecomment-1493220291,https://api.github.com/repos/pydata/xarray/issues/7378,1493220291,IC_kwDOAMm_X85ZALvD,2448579,2023-04-02T04:26:57Z,2023-04-02T04:26:57Z,MEMBER,"> would one have to create these names for each method?

Yes I think so.

> [xarray.Dataset.var](https://docs.xarray.dev/en/stable/generated/xarray.Dataset.var.html#xarray.Dataset.var) suggests to see [numpy.var](https://numpy.org/doc/stable/reference/generated/numpy.var.html#numpy.var) which is about computing variance but I don't want to guess wrong.

Yes, things like `var`, `std`, etc. are pretty standard, so you should be able to find them. If not, feel free to ask!

","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1497131525
https://github.com/pydata/xarray/pull/6812#issuecomment-1493004758,https://api.github.com/repos/pydata/xarray/issues/6812,1493004758,IC_kwDOAMm_X85Y_XHW,2448579,2023-04-01T15:26:04Z,2023-04-01T15:26:04Z,MEMBER,"We should figure out how to express some of this understanding as tests (some xfailed). That way it's easy to check when something gets fixed, and to prevent regressions.
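
A minimal sketch of the pattern (hedged: the test below is a placeholder, not one of the actual cases discussed here):

```python
import pytest

@pytest.mark.xfail(reason='documents a known limitation; reports XPASS once fixed')
def test_known_limitation():
    # encode the expectation we want eventually; the marker keeps CI green for now
    assert 1 + 1 == 3  # placeholder for the currently-broken behaviour
```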
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1309966595
https://github.com/pydata/xarray/pull/7523#issuecomment-1492098823,https://api.github.com/repos/pydata/xarray/issues/7523,1492098823,IC_kwDOAMm_X85Y758H,2448579,2023-03-31T15:15:02Z,2023-03-31T15:15:02Z,MEMBER,Thanks @headtr1ck great PR!,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1581313830
https://github.com/pydata/xarray/pull/7689#issuecomment-1490966436,https://api.github.com/repos/pydata/xarray/issues/7689,1490966436,IC_kwDOAMm_X85Y3lek,2448579,2023-03-30T21:09:23Z,2023-03-30T21:09:23Z,MEMBER,Sounds good to me! Thanks @jhamman ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1642922680
https://github.com/pydata/xarray/pull/7689#issuecomment-1490885299,https://api.github.com/repos/pydata/xarray/issues/7689,1490885299,IC_kwDOAMm_X85Y3Rqz,2448579,2023-03-30T20:10:08Z,2023-03-30T20:10:08Z,MEMBER,"`ds.reset_encoding(keys=[""dtype"", ""chunks""])` 

I agree that it may not be necessary.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1642922680