
issue_comments


2,724 rows where user = 2448579 sorted by updated_at descending


issue >30

  • merge scipy19 docs 18
  • ENH: Scatter plots of one variable vs another 16
  • Enable `flox` in `GroupBy` and `resample` 13
  • 0.13.0 release 12
  • Add an example of ERA5 and GRIB data & visualization to the gallery 10
  • interpolate_na: Add max_gap support. 10
  • Read grid mapping and bounds as coords 9
  • map_blocks 9
  • release 0.15.0? 9
  • release v0.18.0 9
  • sparse and other duck array issues 8
  • plot.line(): Draw multiple lines for 2D DataArrays. 7
  • We need a fast path for open_mfdataset 7
  • v0.14.1 Release 7
  • Silence sphinx warnings 7
  • Add DatasetGroupBy.quantile 7
  • Pint support for variables 7
  • provide a error summary for assert_allclose 7
  • FIX: correct dask array handling in _calc_idxminmax 7
  • Make xr.corr and xr.map_blocks work without dask 7
  • apply_ufunc: Add meta kwarg + bump dask to 2.2 6
  • Fancy indexing a Dataset with dask DataArray triggers multiple computes 6
  • Polyfit performance on large datasets - Suboptimal dask task graph 6
  • Generator for groupby reductions 6
  • Add sphinx-codeautolink extension to docs build 6
  • [skip-ci] Add cftime groupby, resample benchmarks 6
  • MultiIndex serialization to NetCDF 5
  • concat_dim getting added to *all* variables of multifile datasets 5
  • Add `scales` attributes to Dataset created in open_rasterio (#3012) 5
  • type annotations make docs confusing 5
  • …

user 1

  • dcherian · 2,724

author_association 1

  • MEMBER 2,724
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
1577838062 https://github.com/pydata/xarray/pull/7888#issuecomment-1577838062 https://api.github.com/repos/pydata/xarray/issues/7888 IC_kwDOAMm_X85eC-Xu dcherian 2448579 2023-06-06T03:20:39Z 2023-06-06T03:20:39Z MEMBER

Should we delete the cfgrib example instead?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add cfgrib,ipywidgets to doc env 1736542260
1577827466 https://github.com/pydata/xarray/issues/7841#issuecomment-1577827466 https://api.github.com/repos/pydata/xarray/issues/7841 IC_kwDOAMm_X85eC7yK dcherian 2448579 2023-06-06T03:05:47Z 2023-06-06T03:05:47Z MEMBER

https://github.com/corteva/rioxarray/issues/676

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray docs showing tracebacks instead of plots 1709215291
1577474914 https://github.com/pydata/xarray/issues/7894#issuecomment-1577474914 https://api.github.com/repos/pydata/xarray/issues/7894 IC_kwDOAMm_X85eBlti dcherian 2448579 2023-06-05T21:05:47Z 2023-06-05T21:05:57Z MEMBER

> but is it not possible for it to calculate the integrated values where there were regular values?

@chfite Can you provide an example of what you would want it to do, please?
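For illustration, a skipna-style integration could drop the NaN samples before applying the trapezoidal rule, which bridges each gap linearly. This is only a sketch of the requested behavior, not xarray's API; `integrate_skipna` is a hypothetical helper:

```python
import numpy as np

def integrate_skipna(y, x):
    """Hypothetical skipna trapezoidal integral: drop NaN samples,
    then integrate over the remaining (x, y) points."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    keep = ~np.isnan(y)
    yk, xk = y[keep], x[keep]
    # trapezoidal rule over the surviving points
    return float(np.sum(0.5 * (yk[1:] + yk[:-1]) * np.diff(xk)))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, np.nan, 1.0, 1.0])
print(integrate_skipna(y, x))  # 3.0: the gap at x=1 is bridged linearly
```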

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can a "skipna" argument be added for Dataset.integrate() and DataArray.integrate()? 1742035781
1574365471 https://github.com/pydata/xarray/issues/7890#issuecomment-1574365471 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1ukf dcherian 2448579 2023-06-02T22:04:33Z 2023-06-02T22:04:33Z MEMBER

I think the only other one is dask, which should also work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574331034 https://github.com/pydata/xarray/issues/7890#issuecomment-1574331034 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1mKa dcherian 2448579 2023-06-02T21:23:25Z 2023-06-02T21:27:06Z MEMBER

This seems like a really easy fix: `axis = tuple(self.get_axis_num(d) for d in dim)`

EDIT: the Array API seems to type `axis` as `Optional[Union[int, Tuple[int, ...]]]` pretty consistently, so it seems like we should always pass tuples down to the array library.
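The suggested fix can be sketched outside xarray. Here `get_axis_nums` is a hypothetical stand-in for `self.get_axis_num`; the point is that NumPy-style reductions accept a tuple of axes, while some Array-API-conforming libraries reject a list:

```python
import numpy as np

def get_axis_nums(dims, dim):
    # Hypothetical stand-in for self.get_axis_num: map dimension
    # names to integer axes, returning a tuple (never a list).
    return tuple(dims.index(d) for d in dim)

dims = ("time", "y", "x")
axis = get_axis_nums(dims, ("y", "x"))
print(axis)  # (1, 2)

# NumPy reductions accept a tuple of axes:
arr = np.ones((4, 3, 2))
print(arr.sum(axis=axis).shape)  # (4,)
```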

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1574264842 https://github.com/pydata/xarray/pull/7862#issuecomment-1574264842 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85d1WAK dcherian 2448579 2023-06-02T20:14:33Z 2023-06-02T20:14:48Z MEMBER

```
xarray/tests/test_coding_strings.py:36: error: No overload variant of "dtype" matches argument types "str", "Dict[str, Type[str]]"  [call-overload]
```

cc @Illviljan @headtr1ck

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1572306481 https://github.com/pydata/xarray/pull/7883#issuecomment-1572306481 https://api.github.com/repos/pydata/xarray/issues/7883 IC_kwDOAMm_X85dt34x dcherian 2448579 2023-06-01T15:49:42Z 2023-06-01T15:49:42Z MEMBER

Hmmm, `ndim` is in the Array API, so potentially we could just update the test.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Avoid one call to len when getting ndim of Variables 1731320789
1572276996 https://github.com/pydata/xarray/issues/7884#issuecomment-1572276996 https://api.github.com/repos/pydata/xarray/issues/7884 IC_kwDOAMm_X85dtwsE dcherian 2448579 2023-06-01T15:30:26Z 2023-06-01T15:30:26Z MEMBER

Please ask over at the cfgrib repo. But it does look like a bad environment / bad install.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reading .grib files with xarray 1732510720
1561173824 https://github.com/pydata/xarray/issues/5644#issuecomment-1561173824 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85dDZ9A dcherian 2448579 2023-05-24T13:39:30Z 2023-05-24T13:39:30Z MEMBER

Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1560225308 https://github.com/pydata/xarray/pull/7865#issuecomment-1560225308 https://api.github.com/repos/pydata/xarray/issues/7865 IC_kwDOAMm_X85c_yYc dcherian 2448579 2023-05-23T22:49:14Z 2023-05-23T22:49:14Z MEMBER

Thanks @martinfleis, this is a very valuable contribution to the ecosystem!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Upload nightly wheels to scientific-python-nightly-wheels 1720850091
1553390072 https://github.com/pydata/xarray/pull/7019#issuecomment-1553390072 https://api.github.com/repos/pydata/xarray/issues/7019 IC_kwDOAMm_X85cltn4 dcherian 2448579 2023-05-18T17:34:01Z 2023-05-18T17:34:01Z MEMBER

Thanks @TomNicholas! Big change!

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 3,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Generalize handling of chunked array types 1368740629
1551638978 https://github.com/pydata/xarray/pull/7788#issuecomment-1551638978 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cfCHC dcherian 2448579 2023-05-17T15:45:41Z 2023-05-17T15:45:41Z MEMBER

Thanks @maxhollmann, I pushed a test for #2377.

I see this is your first contribution to Xarray. Welcome! #1792 would be a nice contribution if you're looking for more to do ;)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1550594808 https://github.com/pydata/xarray/issues/7841#issuecomment-1550594808 https://api.github.com/repos/pydata/xarray/issues/7841 IC_kwDOAMm_X85cbDL4 dcherian 2448579 2023-05-17T02:24:57Z 2023-05-17T02:24:57Z MEMBER

We should migrate these to rioxarray if they aren't there already.

cc @snowman2 @scottyhq @JessicaS11

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray docs showing tracebacks instead of plots 1709215291
1550187275 https://github.com/pydata/xarray/pull/7788#issuecomment-1550187275 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cZfsL dcherian 2448579 2023-05-16T18:47:27Z 2023-05-16T18:47:27Z MEMBER

Thanks @maxhollmann! Can you add a note to `whats-new.rst`, please?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1546024198 https://github.com/pydata/xarray/issues/7838#issuecomment-1546024198 https://api.github.com/repos/pydata/xarray/issues/7838 IC_kwDOAMm_X85cJnUG dcherian 2448579 2023-05-12T16:52:29Z 2023-05-12T16:52:29Z MEMBER

Thanks! I tracked this down to the difference between reading the file remotely, or downloading first and accessing a local copy on v0.20.2 (the latter is what I used to produce my figures). Can you reproduce?

```python
remote = xr.open_dataset(
    "http://kage.ldeo.columbia.edu:81/SOURCES/.LOCAL/.sst.mon.mean.nc/.sst/dods"
).sst.sel(lat=20, lon=280, method="nearest")
local = xr.open_dataset("~/Downloads/data.cdf").sst.sel(lat=20, lon=280, method="nearest")

(remote.groupby("time.month") - remote.groupby("time.month").mean()).plot()

(local.groupby("time.month") - local.groupby("time.month").mean()).plot()
```
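The anomaly calculation being compared here can be sketched with synthetic data and plain NumPy (no files involved); the per-calendar-month mean mimics what `groupby("time.month").mean()` computes, and subtracting it should remove the seasonal cycle:

```python
import numpy as np

# 4 years of monthly data: a seasonal cycle plus a small trend
months = np.arange(48) % 12
values = np.sin(2 * np.pi * months / 12) + 0.01 * np.arange(48)

# climatology: mean per calendar month (what groupby("time.month").mean() does)
clim = np.array([values[months == m].mean() for m in range(12)])

# anomaly: subtract each sample's month-mean (groupby broadcasting)
anom = values - clim[months]

# the seasonal cycle is gone: per-month means of the anomaly are ~0
print(np.allclose([anom[months == m].mean() for m in range(12)], 0))  # True
```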

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Anomaly calculation with groupby leaves seasonal cycle 1706864252
1545974461 https://github.com/pydata/xarray/pull/7796#issuecomment-1545974461 https://api.github.com/repos/pydata/xarray/issues/7796 IC_kwDOAMm_X85cJbK9 dcherian 2448579 2023-05-12T16:09:04Z 2023-05-12T16:09:14Z MEMBER

Yes, 10-30% on my machine:

```
       before           after         ratio
     [91f14c9b]       [1eb74149]
     <v2023.04.2>     <speedup-dt-accesor>
-      1.17±0.04ms      1.02±0.04ms     0.87  accessors.DateTimeAccessor.time_dayofyear('standard')
-      1.25±0.07ms      976±30μs        0.78  accessors.DateTimeAccessor.time_year('standard')
-      3.90±0.1ms       2.68±0.05ms     0.69  accessors.DateTimeAccessor.time_year('noleap')
-      4.75±0.07ms      3.25±0.04ms     0.68  accessors.DateTimeAccessor.time_dayofyear('noleap')
```

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
  Speed up .dt accessor by preserving Index objects. 1689364566
1545939343 https://github.com/pydata/xarray/pull/7788#issuecomment-1545939343 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cJSmP dcherian 2448579 2023-05-12T15:39:24Z 2023-05-12T15:39:24Z MEMBER

I defer to @shoyer, the solution with where_method seems good to me.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1545930953 https://github.com/pydata/xarray/issues/7838#issuecomment-1545930953 https://api.github.com/repos/pydata/xarray/issues/7838 IC_kwDOAMm_X85cJQjJ dcherian 2448579 2023-05-12T15:32:47Z 2023-05-12T15:35:02Z MEMBER

Can you compare `ds_anom` at a point in both versions, please? I get a plot that looks quite similar:

v0.20.2: (plot not captured in this export)

v2022.03.0: (plot not captured in this export)

v2023.04.2: (plot not captured in this export)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Anomaly calculation with groupby leaves seasonal cycle 1706864252
1542465745 https://github.com/pydata/xarray/issues/4412#issuecomment-1542465745 https://api.github.com/repos/pydata/xarray/issues/4412 IC_kwDOAMm_X85b8CjR dcherian 2448579 2023-05-10T16:06:26Z 2023-05-10T16:06:54Z MEMBER

Related request for to_zarr(..., encode_cf=False): https://github.com/pydata/xarray/issues/5405

This came up in the discussion today.

cc @tom-white @kmuehlbauer

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset.encode_cf function 696047530
1540816942 https://github.com/pydata/xarray/issues/7831#issuecomment-1540816942 https://api.github.com/repos/pydata/xarray/issues/7831 IC_kwDOAMm_X85b1wAu dcherian 2448579 2023-05-09T20:02:27Z 2023-05-09T20:02:27Z MEMBER

I was suggesting special-casing rioxarray only because we recently deleted the rasterio backend, and that might ease the transition. Can we do it at the top-level `open_dataset` when `engine == "rasterio"` but rioxarray is not importable?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can't open datasets with the `rasterio` engine. 1702025553
1540435470 https://github.com/pydata/xarray/issues/7831#issuecomment-1540435470 https://api.github.com/repos/pydata/xarray/issues/7831 IC_kwDOAMm_X85b0S4O dcherian 2448579 2023-05-09T15:44:55Z 2023-05-09T15:44:55Z MEMBER

I think this would be nice since we recently removed the rasterio backend.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Can't open datasets with the `rasterio` engine. 1702025553
1538808220 https://github.com/pydata/xarray/pull/7825#issuecomment-1538808220 https://api.github.com/repos/pydata/xarray/issues/7825 IC_kwDOAMm_X85buFmc dcherian 2448579 2023-05-08T18:03:58Z 2023-05-08T18:03:58Z MEMBER

LGTM. Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  test: Fix test_write_read_select_write for Zarr V3 1699112787
1537031554 https://github.com/pydata/xarray/issues/7817#issuecomment-1537031554 https://api.github.com/repos/pydata/xarray/issues/7817 IC_kwDOAMm_X85bnT2C dcherian 2448579 2023-05-06T03:23:13Z 2023-05-06T03:23:13Z MEMBER

Because `CFMaskCoder` will convert the variable to floating point and insert NaN. In `CFDatetimeCoder` the floating point is cast back to int64 to transform into datetime64.

Can we reverse the order, so that `CFDatetimeCoder` handles `_FillValue` for datetime arrays and it is then skipped in `CFMaskCoder`?
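The precision loss is easy to demonstrate with plain NumPy: float64 has a 53-bit mantissa, so an int64 nanosecond count above 2**53 need not survive a round-trip through floating point. A minimal illustration of the mechanism (not xarray's coder code):

```python
import numpy as np

# an int64 "nanoseconds since epoch" value just past float64's 53-bit mantissa
t = np.int64(2**60 + 1)

# round-trip through float64, as a mask/fill step would do
roundtrip = np.int64(np.float64(t))

print(roundtrip == t)  # False: the +1 ns is lost in the float representation
```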

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  nanosecond precision lost when reading time data 1696097756
1534000660 https://github.com/pydata/xarray/issues/7814#issuecomment-1534000660 https://api.github.com/repos/pydata/xarray/issues/7814 IC_kwDOAMm_X85bbv4U dcherian 2448579 2023-05-04T02:35:39Z 2023-05-04T02:35:39Z MEMBER

We'll need a reproducible example.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  TypeError: 'NoneType' object is not callable when joining netCDF files. Works when ran interactively. 1695028906
1531721493 https://github.com/pydata/xarray/pull/7795#issuecomment-1531721493 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bTDcV dcherian 2448579 2023-05-02T15:56:48Z 2023-05-02T15:56:48Z MEMBER

Thanks @Illviljan! I'm merging so I can run benchmarks on a few other PRs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1530439461 https://github.com/pydata/xarray/pull/7795#issuecomment-1530439461 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bOKcl dcherian 2448579 2023-05-01T22:23:53Z 2023-05-01T22:23:53Z MEMBER

well that was it apparently 🤷🏾‍♂️

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1530417559 https://github.com/pydata/xarray/pull/7795#issuecomment-1530417559 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bOFGX dcherian 2448579 2023-05-01T22:13:03Z 2023-05-01T22:13:03Z MEMBER

`asv run -v` failed locally and printed out the yaml file. For some reason `channels:` is empty:

```
·· Error running /Users/dcherian/mambaforge/bin/conda env create -f /var/folders/yp/rzgd0f7n5zbcbf5dg73sfb8ntv3xxm/T/tmpbusmg2a0.yml -p /Users/dcherian/work/python/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force (exit status 1)
·· conda env create/update failed: in /Users/dcherian/work/python/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d with:

name: conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
channels:
dependencies:
  - python=3.10
  - wheel
  - pip
  - setuptools_scm
  - numpy
  - pandas
  - netcdf4
  - scipy
  - bottleneck
  - dask
  - distributed
  - flox
  - numpy_groupies
  - sparse
  - cftime
```

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1530367137 https://github.com/pydata/xarray/pull/7795#issuecomment-1530367137 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bN4yh dcherian 2448579 2023-05-01T21:53:34Z 2023-05-01T21:53:34Z MEMBER

Here's a truncated diff:

```diff
5c5
< Job is about to start running on the hosted runner: GitHub Actions 2 (hosted)
---
> Job is about to start running on the hosted runner: GitHub Actions 4 (hosted)
14,16c14,16
< Version: 20230417.1
< Included Software: https://github.com/actions/runner-images/blob/ubuntu20/20230417.1/images/linux/Ubuntu2004-Readme.md
< Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu20%2F20230417.1
---
> Version: 20230426.1
> Included Software: https://github.com/actions/runner-images/blob/ubuntu20/20230426.1/images/linux/Ubuntu2004-Readme.md
> Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu20%2F20230426.1
64c64
< git version 2.40.0
---
> git version 2.40.1
66c66
< Temporarily overriding HOME='/home/runner/work/_temp/f08ac219-4ee6-4fb1-962a-95a90ed2ab7a' before making global git config changes
---
> Temporarily overriding HOME='/home/runner/work/_temp/ea752478-2e4c-4c11-9a80-64b3b9b03d35' before making global git config changes
575c593
< 0f4e99d036b0d6d76a3271e6191eacbc9922662f
---
> a220022e7ef8d3df68619643954352fa39394ea8
585c603
< '0f4e99d036b0d6d76a3271e6191eacbc9922662f'
---
> 'a220022e7ef8d3df68619643954352fa39394ea8'
605,607c623,625
< Received 5491626 of 5491626 (100.0%), 32.9 MBs/sec
< Cache Size: ~5 MB (5491626 B)
< [command]/usr/bin/tar -xf /home/runner/work/_temp/d8162eae-90d3-4277-abdf-d3574b623c16/cache.tgz -P -C /home/runner/work/xarray/xarray -z
---
> Received 5491632 of 5491632 (100.0%), 8.7 MBs/sec
> Cache Size: ~5 MB (5491632 B)
> [command]/usr/bin/tar -xf /home/runner/work/_temp/8bf4827f-8427-494a-86a7-047343a5e85a/cache.tgz -P -C /home/runner/work/xarray/xarray -z
609c627
< Cache hit for key 'micromamba-bin https://micro.mamba.pm/api/micromamba/linux-64/latest Wed Apr 26 2023 YYY'
---
> Cache hit for key 'micromamba-bin https://micro.mamba.pm/api/micromamba/linux-64/latest Fri Apr 28 2023 YYY'
634c652
< Modifying RC file "/tmp/micromamba-m6EeL6/.bashrc"
---
> Modifying RC file "/tmp/micromamba-zQHzJD/.bashrc"
637c655
< Adding (or replacing) the following in your "/tmp/micromamba-m6EeL6/.bashrc" file
---
> Adding (or replacing) the following in your "/tmp/micromamba-zQHzJD/.bashrc" file
772c790
< + coverage 7.2.3 py310h1fa729e_0 conda-forge/linux-64 282kB
---
> + coverage 7.2.4 py310h2372a71_0 conda-forge/linux-64 280kB
813c831
< + hypothesis 6.74.0 pyha770c72_0 conda-forge/noarch 291kB
---
> + hypothesis 6.74.1 pyha770c72_0 conda-forge/noarch 292kB
920c938
< + platformdirs 3.3.0 pyhd8ed1ab_0 conda-forge/noarch 18kB
---
> + platformdirs 3.5.0 pyhd8ed1ab_0 conda-forge/noarch 19kB
954c972
< + requests 2.28.2 pyhd8ed1ab_1 conda-forge/noarch 57kB
---
> + requests 2.29.0 pyhd8ed1ab_0 conda-forge/noarch 57kB
985c1003
< + virtualenv 20.22.0 pyhd8ed1ab_0 conda-forge/noarch 3MB
---
> + virtualenv 20.23.0 pyhd8ed1ab_0 conda-forge/noarch 3MB

( ... numerous "Linking <package>" ordering differences omitted; they reflect install-order churn only ... )

1717c1735
< echo "Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f"
---
> echo "Contender: a220022e7ef8d3df68619643954352fa39394ea8"
1723c1741
< asv continuous $ASV_OPTIONS v2023.04.2 0f4e99d036b0d6d76a3271e6191eacbc9922662f \
---
> asv continuous $ASV_OPTIONS v2023.04.2 a220022e7ef8d3df68619643954352fa39394ea8 \
1744c1762
< · No information stored about machine 'fv-az133-41'. I know about nothing.
---
> · No information stored about machine 'fv-az613-176'. I know about nothing.
1754c1772
< machine [fv-az133-41]: 2. os: The OS type and version of this machine. For example,
---
> machine [fv-az613-176]: 2. os: The OS type and version of this machine. For example,
1756c1774
< os [Linux 5.15.0-1035-azure]: 3. arch: The generic CPU architecture of this machine. For example,
---
> os [Linux 5.15.0-1036-azure]: 3. arch: The generic CPU architecture of this machine. For example,
1761c1779
< cpu [Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz]: 5. num_cpu: The number of CPUs in the system. For example, '4'.
---
> cpu [Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz]: 5. num_cpu: The number of CPUs in the system. For example, '4'.
1765,1767c1783,1785
< ram [7110636]: Baseline: v2023.04.2
< + echo 'Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f'
< Contender: 0f4e99d036b0d6d76a3271e6191eacbc9922662f
---
> + echo 'Contender: a220022e7ef8d3df68619643954352fa39394ea8'
> ram [7110632]: Baseline: v2023.04.2
> Contender: a220022e7ef8d3df68619643954352fa39394ea8
1772d1789
< + asv continuous --split --show-stderr --factor 1.5 v2023.04.2 0f4e99d036b0d6d76a3271e6191eacbc9922662f
1773a1791
> + asv continuous --split --show-stderr --factor 1.5 v2023.04.2 a220022e7ef8d3df68619643954352fa39394ea8
1774a1793,1818
> Traceback (most recent call last):
>   File "/home/runner/micromamba-root/envs/xarray-tests/bin/asv", line 10, in <module>
>     sys.exit(main())
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/main.py", line 38, in main
>     result = args.func(args)
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/__init__.py", line 49, in run_from_args
>     return cls.run_from_conf_args(conf, args)
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py", line 75, in run_from_conf_args
>     return cls.run(
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py", line 114, in run
>     result = Run.run(
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/run.py", line 294, in run
>     Setup.perform_setup(environments, parallel=parallel)
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py", line 89, in perform_setup
>     list(map(_create, environments))
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py", line 21, in _create
>     env.create()
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/environment.py", line 704, in create
>     self._setup()
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py", line 174, in _setup
>     self._run_conda(['env', 'create', '-f', env_file_name,
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py", line 227, in _run_conda
>     return util.check_output([conda] + args, env=env)
>   File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/util.py", line 754, in check_output
>     raise ProcessError(args, retcode, stdout, stderr)
> asv.util.ProcessError: Command '/usr/bin/conda env create -f /tmp/tmp4i32rw09.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force' returned non-zero exit status 1
1776,1827c1820,1826
< · Discovering benchmarks
< ·· Uninstalling from conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
< ·· Building 0f4e99d0 <main> for conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
< ·· Installing 0f4e99d0 <main> into conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
< · Running 362 total benchmarks (2 commits * 1 environments * 181 benchmarks)
< [ 0.00%] · For xarray commit 91f14c9b <v2023.04.2> (round 1/2):
< [ 0.00%] ·· Building for conda-py3.10-bottleneck-cftime-dask-distributed-flox-netcdf4-numpy-numpy_groupies-pandas-scipy-setuptools_scm-sparse
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1530349371 https://github.com/pydata/xarray/pull/7795#issuecomment-1530349371 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bN0c7 dcherian 2448579 2023-05-01T21:44:08Z 2023-05-01T21:44:08Z MEMBER

Yes, very frustrating. It broke sometime between 5 and 3 days ago.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1529882778 https://github.com/pydata/xarray/pull/7795#issuecomment-1529882778 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bMCia dcherian 2448579 2023-05-01T16:01:59Z 2023-05-01T16:01:59Z MEMBER

I'm not sure why asv can't create the env for benchmarking.

```
+ echo 'Baseline: 25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d (pydata:main)'
ram [7110632]: Baseline: 25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d (pydata:main)
+ echo 'Contender: 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32 (dcherian:bench-cftime-groupby)'
Contender: 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32 (dcherian:bench-cftime-groupby)
++ which conda
+ export CONDA_EXE=/usr/bin/conda
+ CONDA_EXE=/usr/bin/conda
+ ASV_OPTIONS='--split --show-stderr --factor 1.5'
+ asv continuous --split --show-stderr --factor 1.5 25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d 4ca69efdf7e5e3fc661e5ec3ae618d102a374f32
+ tee benchmarks.log
+ sed '/Traceback \|failed$\|PERFORMANCE DECREASED/ s/^/::error::/'
Traceback (most recent call last):
  File "/home/runner/micromamba-root/envs/xarray-tests/bin/asv", line 10, in <module>
    sys.exit(main())
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/main.py", line 38, in main
    result = args.func(args)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/__init__.py", line 49, in run_from_args
    return cls.run_from_conf_args(conf, args)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py", line 75, in run_from_conf_args
    return cls.run(
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/continuous.py", line 114, in run
    result = Run.run(
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/run.py", line 294, in run
    Setup.perform_setup(environments, parallel=parallel)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py", line 89, in perform_setup
    list(map(_create, environments))
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/commands/setup.py", line 21, in _create
    env.create()
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/environment.py", line 704, in create
    self._setup()
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py", line 174, in _setup
    self._run_conda(['env', 'create', '-f', env_file_name,
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/plugins/conda.py", line 227, in _run_conda
    return util.check_output([conda] + args, env=env)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/util.py", line 754, in check_output
    raise ProcessError(args, retcode, stdout, stderr)
asv.util.ProcessError: Command '/usr/bin/conda env create -f /tmp/tmphnyugp42.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force' returned non-zero exit status 1
· Creating environments
·· Error running /usr/bin/conda env create -f /tmp/tmphnyugp42.yml -p /home/runner/work/xarray/xarray/asv_bench/.asv/env/df282ba4a530a0853b7f9108ec3ff02d --force (exit status 1)
    STDOUT -------->

    STDERR -------->

    SpecNotFound: /tmp/tmphnyugp42.yml is not a valid yaml file.
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1529877407 https://github.com/pydata/xarray/pull/7799#issuecomment-1529877407 https://api.github.com/repos/pydata/xarray/issues/7799 IC_kwDOAMm_X85bMBOf dcherian 2448579 2023-05-01T16:00:25Z 2023-05-01T16:00:25Z MEMBER

In general I think it would be fine to merge incremental changes.

It may be good to schedule a quick 30 minute chat to sync up ideas here.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Start making unit testing more general 1690019325
1528934982 https://github.com/pydata/xarray/issues/7797#issuecomment-1528934982 https://api.github.com/repos/pydata/xarray/issues/7797 IC_kwDOAMm_X85bIbJG dcherian 2448579 2023-04-30T04:15:09Z 2023-04-30T04:15:29Z MEMBER

x_slice.groupby("time.month") - clim

Nice. We test x.groupby("time.month") - clim.slice(...) but not x.sel(...).groupby() - clim.

Is it possible for you to run nightly tests against xarray's main branch? That would help a lot.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  More `groupby` indexing problems 1689655334
1528597777 https://github.com/pydata/xarray/issues/3267#issuecomment-1528597777 https://api.github.com/repos/pydata/xarray/issues/3267 IC_kwDOAMm_X85bHI0R dcherian 2448579 2023-04-29T03:30:20Z 2023-04-29T03:30:20Z MEMBER

Should be better now with flox installed

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Resample excecution time is significantly longer in version 0.12 than 0.11 485508509
1528072972 https://github.com/pydata/xarray/issues/7790#issuecomment-1528072972 https://api.github.com/repos/pydata/xarray/issues/7790 IC_kwDOAMm_X85bFIsM dcherian 2448579 2023-04-28T20:43:44Z 2023-04-28T20:43:44Z MEMBER

https://github.com/pydata/xarray/blob/25d9a28e12141b9b5e4a79454eb76ddd2ee2bc4d/xarray/coding/times.py#L717-L735

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Fill values in time arrays (numpy.datetime64) are lost in zarr 1685803922
1527657759 https://github.com/pydata/xarray/pull/7635#issuecomment-1527657759 https://api.github.com/repos/pydata/xarray/issues/7635 IC_kwDOAMm_X85bDjUf dcherian 2448579 2023-04-28T14:29:52Z 2023-04-28T14:31:39Z MEMBER

Thanks for your patience here @dsgreen2 . This is a nice contribution. Welcome to Xarray!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Implement DataArray.to_dask_dataframe() 1627298527
1527652802 https://github.com/pydata/xarray/issues/7713#issuecomment-1527652802 https://api.github.com/repos/pydata/xarray/issues/7713 IC_kwDOAMm_X85bDiHC dcherian 2448579 2023-04-28T14:26:15Z 2023-04-28T14:26:32Z MEMBER

This is a duplicate of https://github.com/pydata/xarray/issues/4404. It seems like this is for MultiIndex support. A better error message and documentation would be a great contribution! Let's move the conversation there.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `Variable/IndexVariable` do not accept a tuple for data. 1652227927
1527648649 https://github.com/pydata/xarray/pull/7739#issuecomment-1527648649 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85bDhGJ dcherian 2448579 2023-04-28T14:22:58Z 2023-04-28T14:22:58Z MEMBER

Thanks @jmccreight great work!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1527647415 https://github.com/pydata/xarray/pull/7741#issuecomment-1527647415 https://api.github.com/repos/pydata/xarray/issues/7741 IC_kwDOAMm_X85bDgy3 dcherian 2448579 2023-04-28T14:22:02Z 2023-04-28T14:22:02Z MEMBER

Thanks @abrammer very nice work!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add lshift and rshift operators 1659654612
1526241680 https://github.com/pydata/xarray/issues/7764#issuecomment-1526241680 https://api.github.com/repos/pydata/xarray/issues/7764 IC_kwDOAMm_X85a-JmQ dcherian 2448579 2023-04-27T19:26:13Z 2023-04-27T19:26:13Z MEMBER

I think I agree with use_opt_einsum: bool

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support opt_einsum in xr.dot 1672288892
1526240154 https://github.com/pydata/xarray/issues/7764#issuecomment-1526240154 https://api.github.com/repos/pydata/xarray/issues/7764 IC_kwDOAMm_X85a-JOa dcherian 2448579 2023-04-27T19:25:29Z 2023-04-27T19:25:29Z MEMBER

numpy.einsum has some version of opt_einsum implemented under the optimize kwarg. IIUC this is False by default because it adds overhead to small problems (comment)

The complete overhead for computing a path (parsing the input, finding the path, and organization that data) with default options is about 150us. Looks like einsum takes a minimum of 5-10us to call as a reference. So the worst case scenario would be that the optimization overhead makes einsum 30x slower. Personally id go for turning optimization off by default and then revisiting if someone tackles the parsing issue to reduce the overhead.
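For reference, a minimal sketch of the optimize kwarg under discussion, using plain numpy with two small operands (so here the path-search overhead dominates rather than pays off):

```python
import numpy as np

a = np.random.rand(4, 5)
b = np.random.rand(5, 6)

# optimize=False is numpy's default: no contraction-path search, minimal
# call overhead. optimize=True first computes a contraction path (the
# ~150us overhead quoted above), which only pays off for larger
# multi-operand contractions.
out_default = np.einsum("ij,jk->ik", a, b)
out_opt = np.einsum("ij,jk->ik", a, b, optimize=True)
print(np.allclose(out_default, out_opt))  # True
```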

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support opt_einsum in xr.dot 1672288892
1526224630 https://github.com/pydata/xarray/issues/7790#issuecomment-1526224630 https://api.github.com/repos/pydata/xarray/issues/7790 IC_kwDOAMm_X85a-Fb2 dcherian 2448579 2023-04-27T19:18:12Z 2023-04-27T19:18:12Z MEMBER

I think the issue is that we're always running "CF encoding", which is more appropriate for netCDF4 than Zarr, since Zarr supports datetime64 natively. And currently there's no way to control whether the datetime encoder is applied or not; we just look at the dtype: https://github.com/pydata/xarray/blob/0f4e99d036b0d6d76a3271e6191eacbc9922662f/xarray/coding/times.py#L697-L704

I think the right way to fix this is to allow the user to run the encode and write steps separately, with the encoding steps being controllable: https://github.com/pydata/xarray/issues/4412

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fill values in time arrays (numpy.datetime64) are lost in zarr 1685803922
1523666774 https://github.com/pydata/xarray/issues/6610#issuecomment-1523666774 https://api.github.com/repos/pydata/xarray/issues/6610 IC_kwDOAMm_X85a0U9W dcherian 2448579 2023-04-26T15:59:06Z 2023-04-26T16:06:17Z MEMBER

We voted to move forward with this API:

```python
data.groupby(
    {
        "x0": xr.BinGrouper(bins=pd.IntervalIndex.from_breaks(coords["x_vertices"])),  # binning
        "y": xr.UniqueGrouper(labels=["a", "b", "c"]),  # categorical, data.y is dask-backed
        "time": xr.TimeResampleGrouper(freq="MS"),
    }
)
```

We won't break backwards-compatibility for da.groupby(other_data_array) but for any complicated use-cases with Grouper the user must add the by variable to the xarray object, and refer to it by name in the dictionary as above,

{
    "total_count": 4,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 1
}
  Update GroupBy constructor for grouping by multiple variables, dask arrays 1236174701
1523670253 https://github.com/pydata/xarray/pull/7787#issuecomment-1523670253 https://api.github.com/repos/pydata/xarray/issues/7787 IC_kwDOAMm_X85a0Vzt dcherian 2448579 2023-04-26T16:01:16Z 2023-04-26T16:01:16Z MEMBER

:+1: from me!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow the label run-upstream to run upstream CI 1684281101
1523669010 https://github.com/pydata/xarray/pull/7561#issuecomment-1523669010 https://api.github.com/repos/pydata/xarray/issues/7561 IC_kwDOAMm_X85a0VgS dcherian 2448579 2023-04-26T16:00:32Z 2023-04-26T16:00:32Z MEMBER

I'd like to merge this soon. It's an internal refactor with no public API changes.

I think we can expose the Grouper objects publicly in a new PR

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Introduce Grouper objects internally 1600382587
1498463195 https://github.com/pydata/xarray/issues/6610#issuecomment-1498463195 https://api.github.com/repos/pydata/xarray/issues/6610 IC_kwDOAMm_X85ZULvb dcherian 2448579 2023-04-06T04:07:05Z 2023-04-26T15:52:21Z MEMBER

Here's a question.

In #7561, I implement Grouper objects that don't have any information of the variable we're grouping by. So the future API would be:

```python
data.groupby(
    {
        "x0": xr.BinGrouper(bins=pd.IntervalIndex.from_breaks(coords["x_vertices"])),  # binning
        "y": xr.UniqueGrouper(labels=["a", "b", "c"]),  # categorical, data.y is dask-backed
        "time": xr.TimeResampleGrouper(freq="MS"),
    }
)
```

Does this look OK or do we want to support passing the DataArray or variable name as a by kwarg:
```python
xr.BinGrouper(by="x0", bins=pd.IntervalIndex.from_breaks(coords["x_vertices"]))
```

This syntax would support passing DataArray in by so xr.UniqueGrouper(by=data.y) for example. Is that an important usecase to support? In #7561, I create new ResolvedGrouper objects that do contain by as a DataArray always, so it's really a question of exposing that to the user.

PS: Pandas has a key kwarg for a column name. So following that would mean

```python
data.groupby(
    [
        xr.BinGrouper("x0", bins=pd.IntervalIndex.from_breaks(coords["x_vertices"])),  # binning
        xr.UniqueGrouper("y", labels=["a", "b", "c"]),  # categorical, data.y is dask-backed
        xr.TimeResampleGrouper("time", freq="MS"),
    ]
)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update GroupBy constructor for grouping by multiple variables, dask arrays 1236174701
1523618985 https://github.com/pydata/xarray/issues/7782#issuecomment-1523618985 https://api.github.com/repos/pydata/xarray/issues/7782 IC_kwDOAMm_X85a0JSp dcherian 2448579 2023-04-26T15:29:14Z 2023-04-26T15:29:14Z MEMBER

Thanks for the in-depth investigation!

As we can see from the above output, in netCDF4-python scaling is adapting the dtype to unsigned, not masking. This is also reflected in the docs unidata.github.io/netcdf4-python/#Variable.

Do we know why this is so?

If Xarray is trying to align with netCDF4-python it should separate mask and scale as netCDF4-python is doing. It does this already by using different coders but it doesn't separate it API-wise.

:+1:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset() reading ubyte variables as float32 from DAP server 1681353195
1523589353 https://github.com/pydata/xarray/pull/7786#issuecomment-1523589353 https://api.github.com/repos/pydata/xarray/issues/7786 IC_kwDOAMm_X85a0CDp dcherian 2448579 2023-04-26T15:10:44Z 2023-04-26T15:10:44Z MEMBER

Thanks @ksunden

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use canonical name for set_horizonalalignment over alias set_ha 1683839855
1521951732 https://github.com/pydata/xarray/pull/7785#issuecomment-1521951732 https://api.github.com/repos/pydata/xarray/issues/7785 IC_kwDOAMm_X85atyP0 dcherian 2448579 2023-04-25T15:03:39Z 2023-04-25T15:03:39Z MEMBER

Thanks! We use this file only for GitHub's dependency graph, which now supports pyproject.toml, so we should just migrate and have one less thing to update.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove pandas<2 pin 1683335751
1521798719 https://github.com/pydata/xarray/pull/7650#issuecomment-1521798719 https://api.github.com/repos/pydata/xarray/issues/7650 IC_kwDOAMm_X85atM4_ dcherian 2448579 2023-04-25T13:32:43Z 2023-04-25T13:32:43Z MEMBER

Yes you should be good on 2023.04.2

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pin pandas < 2 1632422255
1520550980 https://github.com/pydata/xarray/issues/7782#issuecomment-1520550980 https://api.github.com/repos/pydata/xarray/issues/7782 IC_kwDOAMm_X85aocRE dcherian 2448579 2023-04-24T17:18:37Z 2023-04-24T19:55:11Z MEMBER

We would want to check the different attributes and apply the coders only as needed.

The current approach seems OK, no? It seems like the bug is that UnsignedMaskCoder should be outside the if mask_and_scale block.

We would want to check the different attributes and apply the coders only as needed.

EDIT: I mean that each coder checks whether it is applicable, so we already do that

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset() reading ubyte variables as float32 from DAP server 1681353195
1520434316 https://github.com/pydata/xarray/issues/7782#issuecomment-1520434316 https://api.github.com/repos/pydata/xarray/issues/7782 IC_kwDOAMm_X85an_yM dcherian 2448579 2023-04-24T15:55:48Z 2023-04-24T15:55:48Z MEMBER

mask_and_scale=False will also deactivate the Unsigned decoding.

Do these two have to be linked? I wonder if we can handle the filling later : https://github.com/pydata/xarray/blob/2657787f76fffe4395288702403a68212e69234b/xarray/coding/variables.py#L397-L407

It seems like this code is setting fill values to the right type for CFMaskCoder which is the next step

https://github.com/pydata/xarray/blob/2657787f76fffe4395288702403a68212e69234b/xarray/conventions.py#L266-L272
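For context, a minimal numpy sketch of what the CF Unsigned decoding amounts to: reinterpreting the stored signed bytes, not a value-preserving cast. (The array values here are illustrative, not taken from the issue.)

```python
import numpy as np

# netCDF3 has no unsigned integer types, so unsigned data is stored as
# the signed type of the same width plus an _Unsigned = "true" attribute.
# Decoding reinterprets the raw bytes via .view(), it does not cast.
stored = np.array([0, 1, -1, -128], dtype="i1")  # signed bytes on disk
decoded = stored.view("u1")                      # reinterpret as uint8
print(decoded)  # [  0   1 255 128]
```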

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset() reading ubyte variables as float32 from DAP server 1681353195
1518429926 https://github.com/pydata/xarray/issues/7772#issuecomment-1518429926 https://api.github.com/repos/pydata/xarray/issues/7772 IC_kwDOAMm_X85agWbm dcherian 2448579 2023-04-21T23:56:26Z 2023-04-21T23:56:26Z MEMBER

I cannot reproduce this on main. What version are you running?

```
(xarray-tests) 17:55:11 [cgdm-caguas] {~/python/xarray/devel}
──────> python lazy-nbytes.py
8582842640
Filename: /Users/dcherian/work/python/xarray/devel/lazy-nbytes.py

Line #    Mem usage    Increment  Occurrences   Line Contents

     4    101.5 MiB    101.5 MiB           1   @profile
     5                                         def get_dataset_size() :
     6    175.9 MiB     74.4 MiB           1       dataset = xa.open_dataset("test_1.nc")
     7    175.9 MiB      0.0 MiB           1       print(dataset.nbytes)
```

The BackendArray types define shape and dtype so we can calculate size without loading the data.
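As a sketch of that metadata-only size computation (the shape and dtype here are illustrative values, not taken from the file in the report):

```python
import numpy as np

# nbytes needs only shape and dtype, never the values themselves:
# the product of the shape times the per-element size.
shape = (1000, 2000)
dtype = np.dtype("float64")
nbytes = int(np.prod(shape)) * dtype.itemsize
print(nbytes)  # 16000000
```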

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Process getting killed due to high memory consumption of xarray's nbytes method 1676561243
1516738260 https://github.com/pydata/xarray/issues/7758#issuecomment-1516738260 https://api.github.com/repos/pydata/xarray/issues/7758 IC_kwDOAMm_X85aZ5bU dcherian 2448579 2023-04-20T18:03:48Z 2023-04-20T18:03:48Z MEMBER

We already have this: https://github.com/pydata/xarray/blob/a4c54a3b1085d7d8ab900f9a645439270327d2c3/xarray/backends/netCDF4_.py#L102-L106

https://github.com/pydata/xarray/blob/a4c54a3b1085d7d8ab900f9a645439270327d2c3/xarray/backends/common.py#L61-L68

but you're right I don't think its configurable.

```python
ds = xr.open_dataset(
    "https://thredds.met.no/thredds/dodsC/osisaf/met.no/ice/index/v2p1/nh/osisaf_nh_sie_monthly.nc"
)
ds.sie.variable._data.array.array.array.array.array.datastore.is_remote  # True
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Provide a way to specify how long open_dataset tries to fetch data before timing out 1668898601
1516642402 https://github.com/pydata/xarray/issues/7773#issuecomment-1516642402 https://api.github.com/repos/pydata/xarray/issues/7773 IC_kwDOAMm_X85aZiBi dcherian 2448579 2023-04-20T16:44:51Z 2023-04-20T16:44:51Z MEMBER

Can you try with netcdf4.Dataset to remove xarray from the equation?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  opendap access fails only in ubuntu machines 1676792648
1516635276 https://github.com/pydata/xarray/pull/7769#issuecomment-1516635276 https://api.github.com/repos/pydata/xarray/issues/7769 IC_kwDOAMm_X85aZgSM dcherian 2448579 2023-04-20T16:38:41Z 2023-04-20T16:38:41Z MEMBER

Thanks @gsieros, and apologies for the trouble. Clearly our test suite was insufficient.

I'll push out a release soon.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix groupby_bins when labels are specified 1675073096
1515134832 https://github.com/pydata/xarray/issues/7770#issuecomment-1515134832 https://api.github.com/repos/pydata/xarray/issues/7770 IC_kwDOAMm_X85aTx9w dcherian 2448579 2023-04-19T17:49:36Z 2023-04-19T17:49:36Z MEMBER

You should be using entrypoints: - https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html - https://tutorial.xarray.dev/advanced/backends/backends.html

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Provide a public API for adding new backends 1675299031
1514762927 https://github.com/pydata/xarray/pull/7739#issuecomment-1514762927 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85aSXKv dcherian 2448579 2023-04-19T13:44:10Z 2023-04-19T13:44:10Z MEMBER

what about instead of adding another kwarg, you could use data = True / False / "numpy"?

Oh yeah, I like this. Only suggestion is data = True / False / "array" / "list" where True and "list" are synonymous.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1514043756 https://github.com/pydata/xarray/pull/7669#issuecomment-1514043756 https://api.github.com/repos/pydata/xarray/issues/7669 IC_kwDOAMm_X85aPnls dcherian 2448579 2023-04-19T02:26:30Z 2023-04-19T02:26:30Z MEMBER

Thanks @remigathoni great PR!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Docstrings examples for string methods 1639361476
1513871216 https://github.com/pydata/xarray/pull/7698#issuecomment-1513871216 https://api.github.com/repos/pydata/xarray/issues/7698 IC_kwDOAMm_X85aO9dw dcherian 2448579 2023-04-18T22:36:29Z 2023-04-18T22:36:29Z MEMBER

One workaround is to use os.read when passed a filename, and .read() when passed a file object.

Not sure about the details here. I think it would be good to discuss in an issue before proceeding
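A rough sketch of that workaround (read_magic and MAGIC_LEN are hypothetical names for illustration, not xarray API): use os.read when given a path, and read()/seek() when given a file-like object so the caller's position is restored.

```python
import io
import os

MAGIC_LEN = 4  # e.g. b"CDF\x01" for netCDF3, b"\x89HDF" for HDF5/netCDF4

def read_magic(filename_or_obj):
    # Hypothetical helper: for a path, open our own descriptor with
    # os.open/os.read so no caller-owned object is disturbed; for a
    # file-like object, read and then seek back to where we started.
    if isinstance(filename_or_obj, (str, os.PathLike)):
        fd = os.open(filename_or_obj, os.O_RDONLY)
        try:
            return os.read(fd, MAGIC_LEN)
        finally:
            os.close(fd)
    pos = filename_or_obj.tell()
    try:
        return filename_or_obj.read(MAGIC_LEN)
    finally:
        filename_or_obj.seek(pos)

read_magic(io.BytesIO(b"\x89HDF\r\n"))  # b'\x89HDF'
```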

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use read1 instead of read to get magic number 1646350377
1513865565 https://github.com/pydata/xarray/pull/7739#issuecomment-1513865565 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85aO8Fd dcherian 2448579 2023-04-18T22:28:07Z 2023-04-18T22:28:07Z MEMBER

Copying my comment from https://github.com/pydata/xarray/issues/1599#issuecomment-1504276696

Perhaps we should have array_to_list: bool instead. If False, we just preserve the underlying array type. Then the user could do ds.as_numpy().to_dict(array_to_list=False) to always get numpy arrays as #7739

array_data or data_as_array could be other options

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1513849818 https://github.com/pydata/xarray/pull/7753#issuecomment-1513849818 https://api.github.com/repos/pydata/xarray/issues/7753 IC_kwDOAMm_X85aO4Pa dcherian 2448579 2023-04-18T22:08:50Z 2023-04-18T22:08:50Z MEMBER

Is it possible to prevent the cancelling of this run when pushing another commit to main? This would be nice to trace the regressions.

I think that's the default

assume that a decreased performance will result in a CI fail? Maybe we could automate this even more and automatically open an issue?

Yeah, but it's a little flaky: https://labs.quansight.org/blog/github-actions-benchmarks so the noise might not be worth it

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add benchmark against latest release on main. 1666853925
1511781870 https://github.com/pydata/xarray/issues/7759#issuecomment-1511781870 https://api.github.com/repos/pydata/xarray/issues/7759 IC_kwDOAMm_X85aG_Xu dcherian 2448579 2023-04-17T17:25:17Z 2023-04-17T17:25:17Z MEMBER

Ouch, thanks for finding and reporting this bad bug!

We'll issue a bugfix release soon.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  groupby_bins returns data in reversed order 1670415238
1507391665 https://github.com/pydata/xarray/issues/7716#issuecomment-1507391665 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85Z2Pix dcherian 2448579 2023-04-13T17:56:33Z 2023-04-13T17:56:33Z MEMBER

Should be fixed with the various repodata patches

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1507213204 https://github.com/pydata/xarray/issues/4325#issuecomment-1507213204 https://api.github.com/repos/pydata/xarray/issues/4325 IC_kwDOAMm_X85Z1j-U dcherian 2448579 2023-04-13T15:56:51Z 2023-04-13T15:56:51Z MEMBER

Over in https://github.com/pydata/xarray/issues/7344#issuecomment-1336299057 @shoyer

That said -- we could also switch to smarter NumPy based algorithms to implement most moving window calculations, e.g,. using np.nancumsum for moving window means.

After some digging, this would involve using "summed area tables" which have been generalized to nD, and can be used to compute all our built-in reductions (except median). Basically we'd store the summed area table (repeated np.cumsum) and then calculate reductions using binary ops (mostly subtraction) on those tables.

This would be an intermediate level project but we could implement it incrementally (start with sum for example). One downside is the potential for floating point inaccuracies because we're taking differences of potentially large numbers.
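A minimal numpy sketch of the 1-D version of that idea (rolling_sum is a hypothetical name): store the cumulative sum once, then every window sum is a single subtraction.

```python
import numpy as np

def rolling_sum(a, window):
    # 1-D summed "area" table: s[i] holds sum(a[:i]), with NaNs treated
    # as zero via nancumsum (one source of the floating point concern
    # mentioned above: differences of large running sums).
    s = np.concatenate(([0.0], np.nancumsum(a)))
    # The sum over a[i - window + 1 : i + 1] is s[i + 1] - s[i + 1 - window].
    return s[window:] - s[:-window]

x = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
print(rolling_sum(x, 2))  # [3. 2. 4. 9.]
```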

cc @aulemahal

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Optimize ndrolling nanreduce 675482176
1507198725 https://github.com/pydata/xarray/pull/4915#issuecomment-1507198725 https://api.github.com/repos/pydata/xarray/issues/4915 IC_kwDOAMm_X85Z1gcF dcherian 2448579 2023-04-13T15:46:18Z 2023-04-13T15:46:18Z MEMBER

Can you copy your comment to #4325 please?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Better rolling reductions 809366777
1507126565 https://github.com/pydata/xarray/pull/7681#issuecomment-1507126565 https://api.github.com/repos/pydata/xarray/issues/7681 IC_kwDOAMm_X85Z1O0l dcherian 2448579 2023-04-13T14:59:41Z 2023-04-13T14:59:41Z MEMBER

Thanks @harshitha1201 !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  restructure the contributing guide 1641188400
1507124290 https://github.com/pydata/xarray/pull/7731#issuecomment-1507124290 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85Z1ORC dcherian 2448579 2023-04-13T14:58:20Z 2023-04-13T14:58:20Z MEMBER

Thanks for patiently working through this Spencer. I'll merge now, and then we can release tomorrow.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1506285455 https://github.com/pydata/xarray/pull/7731#issuecomment-1506285455 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZyBeP dcherian 2448579 2023-04-13T03:35:01Z 2023-04-13T03:35:01Z MEMBER

There are a bunch of warnings in the tests that could be silenced: D:\a\xarray\xarray\xarray\tests\test_dataset.py:516: UserWarning: Converting non-nanosecond precision datetime values to nanosecond precision. This behavior can eventually be relaxed in xarray, as it is an artifact from pandas which is now beginning to support non-nanosecond precision values. This warning is caused by passing non-nanosecond np.datetime64 or np.timedelta64 values to the DataArray or Variable constructor; it can be silenced by converting the values to nanosecond precision ahead of time.

But we can also just merge quickly to get a release out

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1505834246 https://github.com/pydata/xarray/pull/7751#issuecomment-1505834246 https://api.github.com/repos/pydata/xarray/issues/7751 IC_kwDOAMm_X85ZwTUG dcherian 2448579 2023-04-12T19:49:28Z 2023-04-12T19:49:28Z MEMBER

nice!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  minor doc updates to clarify extensions using accessors 1664888324
1505584866 https://github.com/pydata/xarray/pull/4915#issuecomment-1505584866 https://api.github.com/repos/pydata/xarray/issues/4915 IC_kwDOAMm_X85ZvWbi dcherian 2448579 2023-04-12T16:34:55Z 2023-04-12T16:34:55Z MEMBER

We would welcome a PR. Looking at the implementation of mean should help: https://github.com/pydata/xarray/blob/67ff171367ada960f02b40195249e79deb4ac891/xarray/core/rolling.py#L160

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Better rolling reductions 809366777
1505403088 https://github.com/pydata/xarray/issues/7730#issuecomment-1505403088 https://api.github.com/repos/pydata/xarray/issues/7730 IC_kwDOAMm_X85ZuqDQ dcherian 2448579 2023-04-12T14:43:15Z 2023-04-12T14:43:15Z MEMBER

Thanks for the report! I think we should add your example as a benchmark.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  flox performance regression for cftime resampling 1657036222
1505399396 https://github.com/pydata/xarray/pull/7731#issuecomment-1505399396 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZupJk dcherian 2448579 2023-04-12T14:41:02Z 2023-04-12T14:41:17Z MEMBER

RTD failures are real:

```
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_all_dates.
Possible hints:
* ImportError:
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_all_dates'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_mixed.
Possible hints:
* ImportError:
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_mixed'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_monotonic.
Possible hints:
* ImportError:
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_monotonic'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.is_type_compatible.
Possible hints:
* AttributeError: type object 'CFTimeIndex' has no attribute 'is_type_compatible'
* ImportError:
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.set_value.
Possible hints:
* ImportError:
* AttributeError: type object 'CFTimeIndex' has no attribute 'set_value'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
WARNING: [autosummary] failed to import xarray.CFTimeIndex.to_native_types.
Possible hints:
* ImportError:
* AttributeError: type object 'CFTimeIndex' has no attribute 'to_native_types'
* ModuleNotFoundError: No module named 'xarray.CFTimeIndex'
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1504276696 https://github.com/pydata/xarray/issues/1599#issuecomment-1504276696 https://api.github.com/repos/pydata/xarray/issues/1599 IC_kwDOAMm_X85ZqXDY dcherian 2448579 2023-04-11T23:43:34Z 2023-04-11T23:43:43Z MEMBER

Perhaps we should have array_to_list: bool instead. If False, we just preserve the underlying array type.

Then the user could do ds.as_numpy().to_dict(array_to_list=False) to always get numpy arrays as #7739

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray to_dict() without converting with numpy tolist() 261727170
1504098302 https://github.com/pydata/xarray/pull/7747#issuecomment-1504098302 https://api.github.com/repos/pydata/xarray/issues/7747 IC_kwDOAMm_X85Zprf- dcherian 2448579 2023-04-11T21:11:30Z 2023-04-11T21:11:30Z MEMBER

Welcome to Xarray! Next time you can just update the existing branch/PR. I'll close the other one.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Clarify vectorized indexing documentation 1663213844
1504093110 https://github.com/pydata/xarray/pull/7724#issuecomment-1504093110 https://api.github.com/repos/pydata/xarray/issues/7724 IC_kwDOAMm_X85ZpqO2 dcherian 2448579 2023-04-11T21:07:32Z 2023-04-11T21:07:32Z MEMBER

I think we can double check that the only failures are cftimeindex, restore the pin, then merge, and then remove the pin in #7731

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `pandas=2.0` support 1655782486
1503517162 https://github.com/pydata/xarray/pull/7461#issuecomment-1503517162 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Zndnq dcherian 2448579 2023-04-11T14:50:11Z 2023-04-11T14:50:11Z MEMBER

Here is our support policy for versions: https://docs.xarray.dev/en/stable/getting-started-guide/installing.html#minimum-dependency-versions though I think we dropped py38 too early.

For your current issue, I'm surprised this patch didn't fix it: https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/429

cc @hmaarrfk @ocefpaf

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1502378655 https://github.com/pydata/xarray/pull/7019#issuecomment-1502378655 https://api.github.com/repos/pydata/xarray/issues/7019 IC_kwDOAMm_X85ZjHqf dcherian 2448579 2023-04-10T21:57:04Z 2023-04-10T21:57:04Z MEMBER

> We could still achieve the goal of running cubed without dask by making normalize_chunks the responsibility of the chunkmanager

Seems OK to me.

The other option is to xfail the broken tests on old dask versions.
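For reference, a sketch of what `normalize_chunks` does today in dask (assuming dask is installed; a chunkmanager taking over this responsibility would need an equivalent):

```python
from dask.array.core import normalize_chunks

# expand a uniform chunk size over a shape, with a ragged final chunk
chunks = normalize_chunks(2, shape=(5,))
assert chunks == ((2, 2, 1),)

# "auto" chunking needs a dtype and a byte limit to size chunks
auto = normalize_chunks("auto", shape=(100,), dtype="f8", limit="400B")
assert sum(auto[0]) == 100
```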

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Generalize handling of chunked array types 1368740629
1501903273 https://github.com/pydata/xarray/pull/7741#issuecomment-1501903273 https://api.github.com/repos/pydata/xarray/issues/7741 IC_kwDOAMm_X85ZhTmp dcherian 2448579 2023-04-10T14:44:30Z 2023-04-10T14:44:30Z MEMBER

I forgot to say, this looks pretty great already. We just need tests.

Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add lshift and rshift operators 1659654612
1501902783 https://github.com/pydata/xarray/pull/7741#issuecomment-1501902783 https://api.github.com/repos/pydata/xarray/issues/7741 IC_kwDOAMm_X85ZhTe_ dcherian 2448579 2023-04-10T14:44:02Z 2023-04-10T14:44:02Z MEMBER

> I don't see tests for other ops. Are these tested somewhere? If so I can add tests when I find them.

Grepping for unary_op and binary_op shows a bunch in test_dataset.py, test_dataarray.py, test_variable.py.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add lshift and rshift operators 1659654612
1499904939 https://github.com/pydata/xarray/pull/7719#issuecomment-1499904939 https://api.github.com/repos/pydata/xarray/issues/7719 IC_kwDOAMm_X85ZZrur dcherian 2448579 2023-04-07T03:56:54Z 2023-04-07T03:56:54Z MEMBER

Thanks @kmuehlbauer

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement more Variable Coders 1654988876
1499878014 https://github.com/pydata/xarray/issues/7730#issuecomment-1499878014 https://api.github.com/repos/pydata/xarray/issues/7730 IC_kwDOAMm_X85ZZlJ- dcherian 2448579 2023-04-07T02:56:29Z 2023-04-07T02:56:29Z MEMBER

Also because your groups are sorted, engine='flox' is faster

```python
gb = da.groupby("time.year")

# using max
xr.set_options(use_flox=True)
%timeit gb.max("time")
%timeit gb.max("time", engine="flox")

xr.set_options(use_flox=False)
%timeit gb.max("time")
```

177 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
11.9 ms ± 471 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
18.5 ms ± 629 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
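As a self-contained illustration of the `use_flox` switch being timed above (synthetic data, not the original benchmark; xarray falls back to the default path when flox is unavailable):

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.random.default_rng(0).random(365),
    dims="time",
    coords={"time": pd.date_range("2000-01-01", periods=365)},
)

# the flox path and the default path should agree on the result
with xr.set_options(use_flox=False):
    expected = da.groupby("time.month").max()
with xr.set_options(use_flox=True):
    actual = da.groupby("time.month").max()

assert np.allclose(actual, expected)
```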

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  flox performance regression for cftime resampling 1657036222
1499872700 https://github.com/pydata/xarray/issues/7730#issuecomment-1499872700 https://api.github.com/repos/pydata/xarray/issues/7730 IC_kwDOAMm_X85ZZj28 dcherian 2448579 2023-04-07T02:45:24Z 2023-04-07T02:49:41Z MEMBER

The slowness is basically a bunch of copies happening in align, broadcast, and transpose. It's made a lot worse for this case, because we take CFTimeIndex and cast it back to CFTimeIndex, repeating all the validity checks.

And then there's https://github.com/xarray-contrib/flox/issues/222

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  flox performance regression for cftime resampling 1657036222
1499273560 https://github.com/pydata/xarray/issues/7733#issuecomment-1499273560 https://api.github.com/repos/pydata/xarray/issues/7733 IC_kwDOAMm_X85ZXRlY dcherian 2448579 2023-04-06T15:44:48Z 2023-04-06T15:44:48Z MEMBER

Hi @alippai it is now possible to write "external" backends that register with xarray. See https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html

Feel free to ask questions here while you experiment with it. This tutorial may help too.
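A bare-bones sketch of the shape such an external backend takes; the class name and file extension here are hypothetical, and real decoding plus entry-point registration under the `xarray.backends` group are elided:

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint


class Blosc2BackendEntrypoint(BackendEntrypoint):  # hypothetical name
    """Sketch of a third-party backend; actual blosc2 decoding is elided."""

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # a real implementation would read filename_or_obj with blosc2
        data = np.arange(4)
        return xr.Dataset({"x": ("t", data)})

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith(".b2nd")


# the class can be exercised directly, without entry-point registration
ds = Blosc2BackendEntrypoint().open_dataset("example.b2nd")
assert list(ds.data_vars) == ["x"]
```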

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Blosc2 ndarray support 1657651596
1498971031 https://github.com/pydata/xarray/issues/7730#issuecomment-1498971031 https://api.github.com/repos/pydata/xarray/issues/7730 IC_kwDOAMm_X85ZWHuX dcherian 2448579 2023-04-06T12:18:50Z 2023-04-06T12:18:50Z MEMBER

Thanks! Can you add version info please?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  flox performance regression for cftime resampling 1657036222
1498468954 https://github.com/pydata/xarray/issues/7723#issuecomment-1498468954 https://api.github.com/repos/pydata/xarray/issues/7723 IC_kwDOAMm_X85ZUNJa dcherian 2448579 2023-04-06T04:15:06Z 2023-04-06T04:15:06Z MEMBER

> Would be a good idea to document this behaviour.

+1

> Maybe yet another keyword switch, use_default_fillvalues?

Adding `mask_default_netcdf_fill_values: bool` is probably a good idea.

> I'm still convinced this could be fixed for floating point data.

Generally it's worse if we obey some default fill values but not others, because it becomes quite confusing to a user.
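To make the current behaviour concrete: decoding only masks values that an attribute names, so the netCDF default fill value passes through untouched unless `_FillValue` is set (sketch; `9.969209968386869e+36` is netCDF's default fill for float64):

```python
import numpy as np
import xarray as xr

NC_FILL_DOUBLE = 9.969209968386869e36  # netCDF default fill value for float64

raw = xr.Dataset({"x": ("t", np.array([1.0, NC_FILL_DOUBLE]))})

# no _FillValue attribute: the default fill value survives decoding
assert not bool(xr.decode_cf(raw)["x"].isnull().any())

# with the attribute present, the same value is masked to NaN
raw["x"].attrs["_FillValue"] = NC_FILL_DOUBLE
assert bool(xr.decode_cf(raw)["x"].isnull().any())
```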

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  default fill_value not masked when read from file 1655569401
1498447641 https://github.com/pydata/xarray/pull/7669#issuecomment-1498447641 https://api.github.com/repos/pydata/xarray/issues/7669 IC_kwDOAMm_X85ZUH8Z dcherian 2448579 2023-04-06T03:40:24Z 2023-04-06T03:40:24Z MEMBER

The docs build failure is real, from some rst formatting error:

/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Docstrings examples for string methods 1639361476
1498404409 https://github.com/pydata/xarray/issues/7722#issuecomment-1498404409 https://api.github.com/repos/pydata/xarray/issues/7722 IC_kwDOAMm_X85ZT9Y5 dcherian 2448579 2023-04-06T02:26:41Z 2023-04-06T02:26:41Z MEMBER

how about not adding _FillValue when missing_value is present? Is that a good idea? Is it standards compliant?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Conflicting _FillValue and missing_value on write 1655483374
1498403174 https://github.com/pydata/xarray/issues/7723#issuecomment-1498403174 https://api.github.com/repos/pydata/xarray/issues/7723 IC_kwDOAMm_X85ZT9Fm dcherian 2448579 2023-04-06T02:24:34Z 2023-04-06T02:24:34Z MEMBER

See https://github.com/pydata/xarray/pull/5680#issuecomment-895508489

To follow up, from a practical perspective, there are two problems with assuming that there are always "truly missing values" (case 2):

1. It makes it impossible to represent the full range of values in a data type, e.g., 255 for uint8 now means "missing".
2. Due to unfortunately limited options for representing missing data in NumPy, Xarray represents truly missing values in its data model with "NaN". This is more or less OK for floating point data, but means that integer data gets converted into floats. For example, uint8 would now get automatically converted into float32.

Both of these issues are problematic for faithful "round tripping" of Xarray data into netCDF and back. For this reason, Xarray needs an unambiguous way to know if a netCDF variable could contain semantically missing values. So far, we've used the presence of missing_value and _FillValue attributes for that.
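The integer-to-float promotion described above is easy to see directly (sketch):

```python
import numpy as np
import xarray as xr

raw = xr.Dataset(
    {"x": ("t", np.array([1, 2, 255], dtype="uint8"), {"_FillValue": 255})}
)
decoded = xr.decode_cf(raw)

# masking forces a floating dtype so NaN can represent "missing"
assert np.issubdtype(decoded["x"].dtype, np.floating)
assert np.isnan(float(decoded["x"][-1]))
```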

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  default fill_value not masked when read from file 1655569401
1498386138 https://github.com/pydata/xarray/pull/7706#issuecomment-1498386138 https://api.github.com/repos/pydata/xarray/issues/7706 IC_kwDOAMm_X85ZT47a dcherian 2448579 2023-04-06T01:56:08Z 2023-04-06T01:56:08Z MEMBER

Thanks @nishtha981 this is a great contribution!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add: Adds a config.yml file for welcome-bot 1650309361
1498375006 https://github.com/pydata/xarray/issues/7727#issuecomment-1498375006 https://api.github.com/repos/pydata/xarray/issues/7727 IC_kwDOAMm_X85ZT2Ne dcherian 2448579 2023-04-06T01:37:06Z 2023-04-06T01:37:06Z MEMBER

I think we would gladly take a PR for this.
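The requested operators mirror NumPy's elementwise shifts, so the intended semantics can be sketched in plain NumPy (xarray would simply dispatch to these):

```python
import numpy as np

a = np.array([1, 2, 3])

# left shift multiplies by powers of two; right shift floor-divides
assert ((a << 1) == np.array([2, 4, 6])).all()
assert ((a >> 1) == np.array([0, 1, 1])).all()
```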

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement lshift and rshift operators 1656363348
1498186349 https://github.com/pydata/xarray/issues/7716#issuecomment-1498186349 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZTIJt dcherian 2448579 2023-04-05T21:30:02Z 2023-04-05T21:30:02Z MEMBER

> I think they are all expected pandas.pydata.org/docs/whatsnew/v2.0.0.html namely

Yes they are. We just haven't had the time to fix things.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1498119367 https://github.com/pydata/xarray/issues/6836#issuecomment-1498119367 https://api.github.com/repos/pydata/xarray/issues/6836 IC_kwDOAMm_X85ZS3zH dcherian 2448579 2023-04-05T20:35:20Z 2023-04-05T20:35:20Z MEMBER

I think we could special-case extracting a multiindex level here: https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/groupby.py#L469

group at that stage should have values ['a', 'a', 'b', 'b', 'c', 'c']

@mschrimpf Can you try that and send in a PR?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  groupby(multi-index level) not working correctly on a multi-indexed DataArray or DataSet 1318992926
1497690966 https://github.com/pydata/xarray/issues/7573#issuecomment-1497690966 https://api.github.com/repos/pydata/xarray/issues/7573 IC_kwDOAMm_X85ZRPNW dcherian 2448579 2023-04-05T15:33:03Z 2023-04-05T15:33:03Z MEMBER

Does anyone have any thoughts here? Shall we merge https://github.com/conda-forge/xarray-feedstock/pull/84/files and see if someone complains?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add optional min versions to conda-forge recipe (`run_constrained`) 1603957501
1497686846 https://github.com/pydata/xarray/pull/7561#issuecomment-1497686846 https://api.github.com/repos/pydata/xarray/issues/7561 IC_kwDOAMm_X85ZROM- dcherian 2448579 2023-04-05T15:30:16Z 2023-04-05T15:30:16Z MEMBER

Variables don't have coordinates so that won't work.

mypy is correct here, it's a bug and we don't test for grouping by index variables. A commit reverting to the old len check would be great here, if you have the time.

It's not clear to me why we allow this actually. Seems like .groupby("DIMENSION") solves that use-case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Introduce Grouper objects internally 1600382587
1496083800 https://github.com/pydata/xarray/issues/7707#issuecomment-1496083800 https://api.github.com/repos/pydata/xarray/issues/7707 IC_kwDOAMm_X85ZLG1Y dcherian 2448579 2023-04-04T14:34:27Z 2023-04-04T14:34:27Z MEMBER

Oh wow, we're down to mostly Zarr failures!

cc @jhamman

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1650481625
1493220291 https://github.com/pydata/xarray/issues/7378#issuecomment-1493220291 https://api.github.com/repos/pydata/xarray/issues/7378 IC_kwDOAMm_X85ZALvD dcherian 2448579 2023-04-02T04:26:57Z 2023-04-02T04:26:57Z MEMBER

> would one have to create these names for each method?

Yes I think so.

> xarray.Dataset.var suggests to see numpy.var which is about computing variance but I don't want to guess wrong.

Yes, things like var, std etc. are pretty standard so you should be able to find them. If not, feel free to ask!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improve docstrings for better discoverability 1497131525
1493004758 https://github.com/pydata/xarray/pull/6812#issuecomment-1493004758 https://api.github.com/repos/pydata/xarray/issues/6812 IC_kwDOAMm_X85Y_XHW dcherian 2448579 2023-04-01T15:26:04Z 2023-04-01T15:26:04Z MEMBER

We should figure out how to express some of this understanding as tests (some xfailed). That way it's easy to check when something gets fixed, and prevent regressions.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improved CF decoding 1309966595
1492098823 https://github.com/pydata/xarray/pull/7523#issuecomment-1492098823 https://api.github.com/repos/pydata/xarray/issues/7523 IC_kwDOAMm_X85Y758H dcherian 2448579 2023-03-31T15:15:02Z 2023-03-31T15:15:02Z MEMBER

Thanks @headtr1ck great PR!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  allow refreshing of backends 1581313830
1490966436 https://github.com/pydata/xarray/pull/7689#issuecomment-1490966436 https://api.github.com/repos/pydata/xarray/issues/7689 IC_kwDOAMm_X85Y3lek dcherian 2448579 2023-03-30T21:09:23Z 2023-03-30T21:09:23Z MEMBER

Sounds good to me! Thanks @jhamman

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add reset_encoding to dataset/dataarray/variable 1642922680
1490885299 https://github.com/pydata/xarray/pull/7689#issuecomment-1490885299 https://api.github.com/repos/pydata/xarray/issues/7689 IC_kwDOAMm_X85Y3Rqz dcherian 2448579 2023-03-30T20:10:08Z 2023-03-30T20:10:08Z MEMBER

> ds.reset_encoding(keys=["dtype", "chunks"])

I agree that it may not be necessary.
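For reference, the unfiltered form landed as `reset_encoding` (later renamed `drop_encoding`); the `keys=` filter stayed hypothetical. A sketch:

```python
import xarray as xr

ds = xr.Dataset({"x": ("t", [1.0, 2.0])})
ds["x"].encoding["dtype"] = "int16"

# reset_encoding was later renamed drop_encoding; prefer the newer name
reset = getattr(ds, "drop_encoding", None) or ds.reset_encoding
clean = reset()

assert clean["x"].encoding == {}
assert ds["x"].encoding  # the original object is untouched
```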

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add reset_encoding to dataset/dataarray/variable 1642922680

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette