
issue_comments


where author_association = "CONTRIBUTOR" sorted by updated_at descending



user (>30 values)

  • pwolfram 207
  • github-actions[bot] 206
  • dopplershift 123
  • nbren12 116
  • ocefpaf 115
  • jbusecke 114
  • snowman2 106
  • hmaarrfk 104
  • jsignell 101
  • martindurant 92
  • Zac-HD 92
  • djhoese 88
  • WeatherGod 77
  • gerritholl 73
  • jthielen 67
  • alimanfoo 66
  • spencerahill 60
  • raybellwaves 59
  • aaronspring 58
  • chunweiyuan 57
  • DocOtak 51
  • johnomotani 46
  • seth-p 44
  • aulemahal 43
  • slevang 43
  • huard 41
  • AndrewILWilliams 40
  • yohai 38
  • malmans2 38
  • delgadom 37
  • …

issue (>30 values)

  • Fixes OS error arising from too many files open 39
  • WIP: Zarr backend 30
  • Add methods for combining variables of differing dimensionality 29
  • Add CRS/projection information to xarray objects 29
  • Appending to zarr store 27
  • Read grid mapping and bounds as coords 26
  • Html repr 25
  • Implement interp for interpolating between chunks of data (dask) 23
  • support for units 18
  • Vectorized lazy indexing 18
  • CFTimeIndex Resampling 18
  • expand dimension by re-allocating larger arrays with more space "at the end of the corresponding dimension", block copying previously existing data, and autofill newly created entry by a default value (note: alternative to reindex, but much faster for extending large arrays along, for example, the time dimension) 18
  • Added PNC backend to xarray 17
  • Integration with dask/distributed (xarray backend design) 16
  • Sortby 16
  • ENH: Scatter plots of one variable vs another 16
  • enable internal plotting with cftime datetime 16
  • open_mfdataset too many files 15
  • Allow concat() to drop/replace duplicate index labels? 15
  • Add a filter_by_attrs method to Dataset 14
  • Attributes from netCDF4 intialization retained 14
  • xarray / vtk integration 14
  • New deep copy behavior in 2022.9.0 causes maximum recursion error 14
  • dask.async.RuntimeError: NetCDF: HDF error on xarray to_netcdf 13
  • ArviZ Dev Xarray Dev Get Together? 13
  • Support multiple dimensions in DataArray.argmin() and DataArray.argmax() methods 13
  • CFTimeIndex calendar in repr 13
  • Allow fsspec URLs in open_(mf)dataset 13
  • Allow assigning values to a subset of a dataset 13
  • float32 instead of float64 when decoding int16 with scale_factor netcdf var using xarray 12
  • …

author_association (1 value)

  • CONTRIBUTOR · 5,080
Columns: id · html_url · issue_url · node_id · user · created_at · updated_at ▲ · author_association · body · reactions · performed_via_github_app · issue
1406463669 https://github.com/pydata/xarray/issues/7377#issuecomment-1406463669 https://api.github.com/repos/pydata/xarray/issues/7377 IC_kwDOAMm_X85T1O61 maawoo 56583917 2023-01-27T12:45:10Z 2024-01-03T08:41:41Z CONTRIBUTOR

Hi all, I just created a simple workaround, which might be useful for others:
https://gist.github.com/maawoo/0b34d371c3cc1960a1589ccaded868c2

It uses the _nan_quantile method of xclim and works fine for my applications. Here is a quick comparison using the same example data as in my initial post:

EDIT: I've updated the code to use numbagg instead of xclim.
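The gist itself isn't reproduced here, but as a rough sketch of the same kind of workaround (names and approach are mine, not the gist's), the NaN-skipping quantile can be pushed into a single `np.nanquantile` call via `apply_ufunc`:

```python
import numpy as np
import xarray as xr

def nanquantile(da, q, dim):
    # Move `dim` to the last axis and reduce it with numpy's nanquantile,
    # bypassing xarray's slower skipna=True quantile path.
    return xr.apply_ufunc(
        np.nanquantile, da,
        input_core_dims=[[dim]],
        kwargs={"q": q, "axis": -1},
        dask="parallelized",
        output_dtypes=[da.dtype],
    )

da = xr.DataArray(np.random.rand(100, 50), dims=("time", "x"))
result = nanquantile(da, q=0.5, dim="time")
```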

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Aggregating a dimension using the Quantiles method with `skipna=True` is very slow 1497031605
1575677244 https://github.com/pydata/xarray/pull/7891#issuecomment-1575677244 https://api.github.com/repos/pydata/xarray/issues/7891 IC_kwDOAMm_X85d6u08 mgunyho 20118130 2023-06-04T19:08:20Z 2023-06-04T19:11:44Z CONTRIBUTOR

Oh no, the doctest failure is because the test is flaky, this was introduced by me in #7821, see here: https://github.com/pydata/xarray/pull/7821#issuecomment-1537142237 and here: https://github.com/pydata/xarray/pull/7821/commits/a0e6659ca01188378f29a35b418d6f9e2b889d2e. I'll submit another patch to fix it soon, although I'm not sure how. If you have any tips to avoid this problem, let me know.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add errors option to curvefit 1740268634
1575492166 https://github.com/pydata/xarray/pull/6515#issuecomment-1575492166 https://api.github.com/repos/pydata/xarray/issues/6515 IC_kwDOAMm_X85d6BpG mgunyho 20118130 2023-06-04T09:43:50Z 2023-06-04T09:43:50Z CONTRIBUTOR

Hi! I would also like to see this implemented, so I rebased this branch and added a test in #7891.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add allow_failures flag to Dataset.curve_fit 1215946244
1574338418 https://github.com/pydata/xarray/issues/7890#issuecomment-1574338418 https://api.github.com/repos/pydata/xarray/issues/7890 IC_kwDOAMm_X85d1n9y negin513 17344536 2023-06-02T21:30:05Z 2023-06-02T21:30:05Z CONTRIBUTOR

@dcherian: agreed! But I am afraid it might break other components. Although numpy seems to be able to handle both tuple and list in normalize_axis_tuple, and I cannot see any other issues arising from this: https://github.com/numpy/numpy/blob/f67467c21a1797becde3097661996f60df4080ff/numpy/core/numeric.py#L1328
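A quick check of that behavior (a sketch, assuming numpy's private helper remains importable from numpy.core.numeric as in the link above):

```python
from numpy.core.numeric import normalize_axis_tuple

# Tuples, lists, and bare ints all normalize to the same tuple of ints:
assert normalize_axis_tuple((0, 1), ndim=3) == (0, 1)
assert normalize_axis_tuple([0, 1], ndim=3) == (0, 1)
assert normalize_axis_tuple(-1, ndim=3) == (2,)
```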

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `xarray.rolling_window` Converts `dims` Argument from Tuple to List Causing Issues for Cupy-Xarray 1738835134
1573764660 https://github.com/pydata/xarray/pull/7862#issuecomment-1573764660 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dzb40 tomwhite 85085 2023-06-02T13:44:43Z 2023-06-02T13:44:43Z CONTRIBUTOR

@kmuehlbauer thanks for adding tests! I'm not sure what the mypy error is either, I'm afraid...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1572357330 https://github.com/pydata/xarray/pull/7877#issuecomment-1572357330 https://api.github.com/repos/pydata/xarray/issues/7877 IC_kwDOAMm_X85duETS dependabot[bot] 49699333 2023-06-01T16:21:59Z 2023-06-01T16:21:59Z CONTRIBUTOR

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 15 to 16 1730190019
1572174061 https://github.com/pydata/xarray/pull/7670#issuecomment-1572174061 https://api.github.com/repos/pydata/xarray/issues/7670 IC_kwDOAMm_X85dtXjt malmans2 22245117 2023-06-01T14:34:44Z 2023-06-01T14:34:44Z CONTRIBUTOR

The cfgrib notebook in the documentation is broken. I guess it's related to this PR. See: https://docs.xarray.dev/en/stable/examples/ERA5-GRIB-example.html

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Delete built-in cfgrib backend 1639732867
1568704895 https://github.com/pydata/xarray/pull/7876#issuecomment-1568704895 https://api.github.com/repos/pydata/xarray/issues/7876 IC_kwDOAMm_X85dgIl_ tomvothecoder 25624127 2023-05-30T16:09:17Z 2023-05-30T20:59:48Z CONTRIBUTOR

Thank you @keewis and @Illviljan! I made a comment about deprecating cdms2 in xarray in another issue/PR last year and didn't get around to it. I linked this PR in an xCDAT discussion post here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate the `cdms2` conversion methods 1729709527
1567319929 https://github.com/pydata/xarray/issues/7879#issuecomment-1567319929 https://api.github.com/repos/pydata/xarray/issues/7879 IC_kwDOAMm_X85da2d5 huard 81219 2023-05-29T16:15:49Z 2023-05-29T16:15:49Z CONTRIBUTOR

There are similar segfaults in an xncml PR: https://github.com/xarray-contrib/xncml/pull/48

Googling around suggests it is related to netCDF not being thread-safe and recent python-netcdf4 releasing the GIL.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  occasional segfaults on CI 1730451312
1563078509 https://github.com/pydata/xarray/issues/7856#issuecomment-1563078509 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85dKq9t frazane 62377868 2023-05-25T15:10:04Z 2023-05-25T15:19:49Z CONTRIBUTOR

Same issue here. I installed xarray with conda/mamba (not a dev install).

```
INSTALLED VERSIONS

commit: None
python: 3.11.3 | packaged by conda-forge | (main, Apr 6 2023, 08:57:19) [GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.42.2.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.0
libnetcdf: 4.9.2

xarray: 2023.4.2
pandas: 2.0.1
numpy: 1.24.3
scipy: 1.10.1
netCDF4: 1.6.3
h5netcdf: None
h5py: None
zarr: 2.14.2
dask: 2023.4.1
distributed: None
pip: 23.1.2
IPython: 8.13.1
```

Edit: downgrading to 2023.4.0 solved the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1562615805 https://github.com/pydata/xarray/issues/7870#issuecomment-1562615805 https://api.github.com/repos/pydata/xarray/issues/7870 IC_kwDOAMm_X85dI5_9 vhaasteren 3092444 2023-05-25T09:52:06Z 2023-05-25T09:52:06Z CONTRIBUTOR

Thank you @TomNicholas, that is encouraging to hear. I will wait for @keewis to respond before filing a PR.

FWIW, I have tested the modification I suggest in my fork of xarray, and it works well for our purposes. It just generalizes the exception catch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Name collision with Pulsar Timing package 'PINT'  1722614979
1561328867 https://github.com/pydata/xarray/issues/5644#issuecomment-1561328867 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85dD_zj malmans2 22245117 2023-05-24T15:02:44Z 2023-05-24T15:02:44Z CONTRIBUTOR

Do you know where the in-place modification is happening? We could just copy there and fix this particular issue.

Not sure, but I'll take a look!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1561308333 https://github.com/pydata/xarray/pull/7862#issuecomment-1561308333 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dD6yt tomwhite 85085 2023-05-24T14:51:23Z 2023-05-24T14:51:23Z CONTRIBUTOR

> So it looks like the changes here with the fix in my branch will get your issue resolved @tomwhite, right?

Yes - thanks!

> I'm a bit worried that this might break other users' workflows, if they depend on the current conversion to floating point for some reason.

The floating point default is preserved if you do e.g. xr.Dataset({"a": np.array([], dtype=object)}). The change here will only convert to string if there is extra metadata present that says it is a string.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561240314 https://github.com/pydata/xarray/pull/7862#issuecomment-1561240314 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDqL6 tomwhite 85085 2023-05-24T14:12:49Z 2023-05-24T14:12:49Z CONTRIBUTOR

> Could you verify the above example, please?

The code looks fine, and I get the same result when I run it with this PR.

Your fix in https://github.com/kmuehlbauer/xarray/tree/preserve-vlen-string-dtype changes the metadata so it is correctly preserved as metadata: {'element_type': <class 'str'>}.

I feel less qualified to evaluate the impact of the netcdf4 fix.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561143111 https://github.com/pydata/xarray/pull/7862#issuecomment-1561143111 https://api.github.com/repos/pydata/xarray/issues/7862 IC_kwDOAMm_X85dDSdH tomwhite 85085 2023-05-24T13:23:18Z 2023-05-24T13:23:18Z CONTRIBUTOR

Thanks for taking a look @kmuehlbauer and for the useful example code. I hadn't considered the netcdf cases, so thanks for pointing those out.

> Engine netcdf4 does not roundtrip here, losing the dtype metadata information. There is special casing for h5netcdf backend, though.

Could netcdf4 do the same special-casing as h5netcdf?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  CF encoding should preserve vlen dtype for empty arrays 1720045908
1561093283 https://github.com/pydata/xarray/pull/7551#issuecomment-1561093283 https://api.github.com/repos/pydata/xarray/issues/7551 IC_kwDOAMm_X85dDGSj garciampred 99014432 2023-05-24T12:54:46Z 2023-05-24T12:55:08Z CONTRIBUTOR

This is currently stuck waiting until the problems with the latest netcdf-c versions are fixed in a new release. See the issue (https://github.com/pydata/xarray/issues/7388).

When they are fixed I will write the tests if I have time. But of course any help and suggestions are welcome.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support for the new compression arguments. 1596511582
1559813805 https://github.com/pydata/xarray/pull/7865#issuecomment-1559813805 https://api.github.com/repos/pydata/xarray/issues/7865 IC_kwDOAMm_X85c-N6t martinfleis 36797143 2023-05-23T16:49:00Z 2023-05-23T16:49:00Z CONTRIBUTOR

> depends on whether we can put local versions on anaconda

Seems okay -> https://anaconda.org/scientific-python-nightly-wheels/xarray/files

> now I do, the username is the same as on github.

I've added you. You should be able to generate a token at https://anaconda.org/scientific-python-nightly-wheels/settings/access with Allow write access to the API site and Allow uploads to Standard Python repositories permissions and add the token as a ANACONDA_NIGHTLY secret.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Upload nightly wheels to scientific-python-nightly-wheels 1720850091
1559669046 https://github.com/pydata/xarray/pull/7865#issuecomment-1559669046 https://api.github.com/repos/pydata/xarray/issues/7865 IC_kwDOAMm_X85c9qk2 martinfleis 36797143 2023-05-23T15:28:39Z 2023-05-23T15:28:39Z CONTRIBUTOR

Do we need to build the wheel in the same way you currently do in https://github.com/pydata/xarray/blob/main/.github/workflows/testpypi-release.yaml? I used the building workflow from the PyPI release but I just noticed that you do it a bit differently when pushing to TestPyPI.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Upload nightly wheels to scientific-python-nightly-wheels 1720850091
1559665998 https://github.com/pydata/xarray/pull/7865#issuecomment-1559665998 https://api.github.com/repos/pydata/xarray/issues/7865 IC_kwDOAMm_X85c9p1O martinfleis 36797143 2023-05-23T15:26:44Z 2023-05-23T15:26:44Z CONTRIBUTOR

> I'll help setting that up, but I wonder if it is possible to sign up multiple people?

We can add as many people as you'd like. They just need an account on anaconda.org. Do you have an account there so I can add you? You can then add others as you'll need.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Upload nightly wheels to scientific-python-nightly-wheels 1720850091
1558263680 https://github.com/pydata/xarray/issues/7295#issuecomment-1558263680 https://api.github.com/repos/pydata/xarray/issues/7295 IC_kwDOAMm_X85c4TeA OriolAbril 23738400 2023-05-23T00:26:07Z 2023-05-23T00:26:07Z CONTRIBUTOR

Finally had some time to play around with the accessors, I have opened a PR adding them: https://github.com/arviz-devs/xarray-einstats/pull/51

{
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
  einops integration? 1452291042
1557440032 https://github.com/pydata/xarray/issues/5644#issuecomment-1557440032 https://api.github.com/repos/pydata/xarray/issues/5644 IC_kwDOAMm_X85c1KYg malmans2 22245117 2023-05-22T15:35:54Z 2023-05-22T15:35:54Z CONTRIBUTOR

Hi! I was about to open a new issue about this, but looks like it's a known issue and there's a stale PR... Let me know if I can help to get this fixed!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `polyfit` with weights alters the DataArray in place 955043280
1555970164 https://github.com/pydata/xarray/issues/7665#issuecomment-1555970164 https://api.github.com/repos/pydata/xarray/issues/7665 IC_kwDOAMm_X85cvjh0 Ockenfuss 42680748 2023-05-20T18:44:53Z 2023-05-20T18:44:53Z CONTRIBUTOR

> Do you have any thoughts on this?

I think with the following signature, the function will be backward compatible: interpolate_na(dim, method='linear', use_coordinate=True, limit=None, limit_direction='forward', limit_area=None, limit_use_coordinate=False, max_gap=None)
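Laid out as a stub for readability (this just restates the proposed signature; limit_direction, limit_area, and limit_use_coordinate would be the new arguments):

```python
def interpolate_na(
    dim,
    method="linear",
    use_coordinate=True,
    limit=None,
    limit_direction="forward",   # new
    limit_area=None,             # new
    limit_use_coordinate=False,  # new
    max_gap=None,
):
    ...
```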

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Interpolate_na: Rework 'limit' argument documentation/implementation 1637898633
1554332081 https://github.com/pydata/xarray/pull/7019#issuecomment-1554332081 https://api.github.com/repos/pydata/xarray/issues/7019 IC_kwDOAMm_X85cpTmx tomwhite 85085 2023-05-19T10:01:06Z 2023-05-19T10:01:06Z CONTRIBUTOR

Thanks for all your hard work on this @TomNicholas!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Generalize handling of chunked array types 1368740629
1552140734 https://github.com/pydata/xarray/pull/7800#issuecomment-1552140734 https://api.github.com/repos/pydata/xarray/issues/7800 IC_kwDOAMm_X85cg8m- dstansby 6197628 2023-05-17T21:55:36Z 2023-05-17T21:55:36Z CONTRIBUTOR

Is keeping things in a single file a deliberate design choice? Personally, from what I can see, splitting it up into separate files makes sense, given the original file is already so long.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  De-duplicate some unit test paramatrization 1690041959
1551178793 https://github.com/pydata/xarray/pull/7788#issuecomment-1551178793 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cdRwp maxhollmann 454724 2023-05-17T10:57:05Z 2023-05-17T10:57:05Z CONTRIBUTOR

@dcherian Sure, done :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1549984548 https://github.com/pydata/xarray/pull/7788#issuecomment-1549984548 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cYuMk maxhollmann 454724 2023-05-16T16:21:12Z 2023-05-16T16:21:12Z CONTRIBUTOR

I like it, solves the concern in my previous comment as well. Updated the branch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1549151705 https://github.com/pydata/xarray/pull/7821#issuecomment-1549151705 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85cVi3Z mgunyho 20118130 2023-05-16T07:35:19Z 2023-05-16T07:40:46Z CONTRIBUTOR

I updated the type hints now (and also did a rebase just in case).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1546942397 https://github.com/pydata/xarray/issues/5511#issuecomment-1546942397 https://api.github.com/repos/pydata/xarray/issues/5511 IC_kwDOAMm_X85cNHe9 josephnowak 25071375 2023-05-14T16:41:38Z 2023-05-14T17:03:57Z CONTRIBUTOR

Hi @shoyer, sorry for bothering you with this issue again. I know it is old by now, but I have been dealing with it again in the last few days, and I have also noticed the same problem when using the region parameter. Based on this issue I opened on Zarr (https://github.com/zarr-developers/zarr-python/issues/1414), I think it would be good to implement one of these options to solve the problem:

  1. A warning on the docs indicating that it is necessary to add a synchronizer if you want to append or update data in a Zarr store, or that you need to manually align the chunks based on the size of the missing data on the last chunk to be able to get independent writes.

  2. Automatically align the chunks to get independent writes (which I think can produce slower writes due to the modification of the chunks).

  3. Raise an error if there is no synchronizer and the chunks are not properly aligned. I think the error can be controlled using the safe_chunks parameter that you offer on the to_zarr method (see the sketch after this list).
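A minimal sketch of option 3 as it exists today (safe_chunks is already a keyword of to_zarr; the store path and chunk sizes are illustrative, and dask and zarr are assumed installed):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("time", np.arange(10))}).chunk({"time": 4})

# safe_chunks=True (the default) asks to_zarr to raise when dask chunks
# are not cleanly aligned with the target Zarr chunks, instead of
# allowing writes that could touch the same Zarr chunk from two tasks.
ds.to_zarr("store.zarr", mode="w", safe_chunks=True)
```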

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Appending data to a dataset stored in Zarr format produce PermissonError or NaN values in the final result 927617256
1545300010 https://github.com/pydata/xarray/pull/7788#issuecomment-1545300010 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85cG2gq maxhollmann 454724 2023-05-12T07:26:52Z 2023-05-12T07:26:52Z CONTRIBUTOR

@kmuehlbauer Do you need any adjustments to merge this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1544199022 https://github.com/pydata/xarray/issues/7833#issuecomment-1544199022 https://api.github.com/repos/pydata/xarray/issues/7833 IC_kwDOAMm_X85cCptu alimanfoo 703554 2023-05-11T15:26:52Z 2023-05-11T15:26:52Z CONTRIBUTOR

Awesome, thanks @kmuehlbauer and @Illviljan 🙏🏻

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Slow performance of concat() 1704950804
1543534474 https://github.com/pydata/xarray/pull/7834#issuecomment-1543534474 https://api.github.com/repos/pydata/xarray/issues/7834 IC_kwDOAMm_X85cAHeK mx-moth 132147 2023-05-11T08:07:58Z 2023-05-11T08:07:58Z CONTRIBUTOR

I threw this pull request up before leaving the office so it would run all the tests. Turns out that np.can_cast is the wrong option here: it doesn't consider casting floats to integers safe even if all values are correctly representable. Another solution will have to be found.

I can investigate further on Monday, and combine this with the linked issues. Thanks for the extra context.

On Thu, 11 May 2023, at 18:03, Kai Mühlbauer wrote:

> @mx-moth Yes, this casting should be fixed.
>
> I'm adding a bit of context here, as this might need to be solved in combination with #7098 and #7827. #7098 removes undefined casting for decoding. In #7827 there are efforts to do this for encoding, too.
>
> As cast_to_int_if_safe is called for encoding as well as decoding, I'm not sure if all cases have been caught by these two PRs.
>
> One issue on decoding is that at least for datetime64-based times the calculated time deltas are currently converted to float64 in the presence of NaT (although NaT can perfectly be expressed as int64). It would be great if you could try your PR on top of #7827 (which includes #7098) to see if that fixes the errors in this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use `numpy.can_cast` instead of casting and checking 1705163672
1543433057 https://github.com/pydata/xarray/pull/7834#issuecomment-1543433057 https://api.github.com/repos/pydata/xarray/issues/7834 IC_kwDOAMm_X85b_uth mx-moth 132147 2023-05-11T06:52:46Z 2023-05-11T06:52:46Z CONTRIBUTOR

Using latest xarray and numpy >= 1.24.0, the following code generates a warning. This function is called when saving datasets to disk using dataset.to_netcdf:

```python
import numpy as np
from xarray.coding.times import cast_to_int_if_safe

array = np.array([1., 2., np.nan])
cast_to_int_if_safe(array)
```

```
$HOME/projects/xarray/xarray/coding/times.py:619: RuntimeWarning: invalid value encountered in cast
  int_num = np.asarray(num, dtype=np.int64)
array([ 1.,  2., nan])
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use `numpy.can_cast` instead of casting and checking 1705163672
1543417823 https://github.com/pydata/xarray/pull/7834#issuecomment-1543417823 https://api.github.com/repos/pydata/xarray/issues/7834 IC_kwDOAMm_X85b_q_f mx-moth 132147 2023-05-11T06:36:59Z 2023-05-11T06:36:59Z CONTRIBUTOR

Unsure if new tests need to be added, as the intention is that no behaviour changes except for the lack of warnings from numpy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use `numpy.can_cast` instead of casting and checking 1705163672
1539073371 https://github.com/pydata/xarray/issues/7237#issuecomment-1539073371 https://api.github.com/repos/pydata/xarray/issues/7237 IC_kwDOAMm_X85bvGVb djhoese 1828519 2023-05-08T21:23:59Z 2023-05-08T21:23:59Z CONTRIBUTOR

And with new pandas (which I understand as being the thing/library that is changing) and new xarray, what will happen? What happens between nano and non-nano times?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  The new NON_NANOSECOND_WARNING is not very nice to end users 1428549868
1538397945 https://github.com/pydata/xarray/issues/7237#issuecomment-1538397945 https://api.github.com/repos/pydata/xarray/issues/7237 IC_kwDOAMm_X85bshb5 djhoese 1828519 2023-05-08T13:53:19Z 2023-05-08T13:53:19Z CONTRIBUTOR

Sorry for dragging this issue up again, but even with the new warning message I still have some questions. Do I have to switch to nanosecond precision times or will xarray/pandas/numpy just figure it out when I combine/compare times with different precisions?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  The new NON_NANOSECOND_WARNING is not very nice to end users 1428549868
1538068408 https://github.com/pydata/xarray/issues/6854#issuecomment-1538068408 https://api.github.com/repos/pydata/xarray/issues/6854 IC_kwDOAMm_X85brQ-4 QuLogic 302469 2023-05-08T09:42:08Z 2023-05-08T09:42:08Z CONTRIBUTOR

I think this was fixed by https://github.com/Unidata/netcdf-c/issues/2573

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  test_open_nczarr uses too much memory 1323734180
1537368210 https://github.com/pydata/xarray/pull/7821#issuecomment-1537368210 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85bomCS mgunyho 20118130 2023-05-07T09:23:16Z 2023-05-07T09:23:16Z CONTRIBUTOR

I implemented the broadcasting for bounds also. I hope it's not too ugly. Do you think the signature for p0 and bounds should be updated to explicitly allow only (tuples of) floats or DataArrays?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1537162245 https://github.com/pydata/xarray/pull/7821#issuecomment-1537162245 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85bnzwF slevang 39069044 2023-05-06T15:12:57Z 2023-05-06T15:12:57Z CONTRIBUTOR

This looks pretty good to me on first glance! I would vote to do p0 and bounds in one PR. It could surprise a user if one of these can take an array and the other only scalars.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1537159048 https://github.com/pydata/xarray/pull/7821#issuecomment-1537159048 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85bny-I mgunyho 20118130 2023-05-06T14:56:05Z 2023-05-06T14:56:05Z CONTRIBUTOR

I just noticed that the docs for curvefit have some formatting issues, I think it's using single backticks instead of double backticks for code formatting. Should I add those to this PR as well?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1537142237 https://github.com/pydata/xarray/pull/7821#issuecomment-1537142237 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85bnu3d mgunyho 20118130 2023-05-06T13:25:41Z 2023-05-06T13:25:52Z CONTRIBUTOR

Hm, the doctest failed because the result is off in the last decimal place. I can't reproduce it, even though I have the same versions of numpy (1.23.5) and scipy (1.10.1) in my env as what the CI says. Anyway, changed it in 3001eaf.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1537140981 https://github.com/pydata/xarray/issues/7768#issuecomment-1537140981 https://api.github.com/repos/pydata/xarray/issues/7768 IC_kwDOAMm_X85bnuj1 mgunyho 20118130 2023-05-06T13:18:47Z 2023-05-06T13:18:47Z CONTRIBUTOR

I implemented this for p0 in #7821. I used your idea of passing p0 as part of *args. It's maybe a tiny bit hacky to put two things in *args and then reconstruct them based on the lengths, but not too bad.

I can also do this for the bounds, just didn't have time to do it yet. How do you think the multidimensional bounds should be passed? As a tuple of arrays, or as an array of tuples, or something else? To me, it would make most sense to pass them as tuples of "things that can be broadcast", so that e.g. the lower bound can be a scalar 0, but the upper bound could vary.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Supplying multidimensional initial guess to `curvefit` 1674818753
1537132014 https://github.com/pydata/xarray/pull/7799#issuecomment-1537132014 https://api.github.com/repos/pydata/xarray/issues/7799 IC_kwDOAMm_X85bnsXu dstansby 6197628 2023-05-06T12:30:03Z 2023-05-06T12:30:03Z CONTRIBUTOR

I think this is good for review now? There are plenty of tests lower down the file that can be generalised using the new framework I've introduced, but I think it's worth leaving that to another PR to make this one easier to review.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Start making unit testing more general 1690019325
1461573402 https://github.com/pydata/xarray/issues/7593#issuecomment-1461573402 https://api.github.com/repos/pydata/xarray/issues/7593 IC_kwDOAMm_X85XHdca quantsnus 25102059 2023-03-09T08:41:16Z 2023-05-06T03:24:46Z CONTRIBUTOR

@Karimat22

> If you are encountering an error message that says "Plotting with time-zone-aware pd.Timestamp axis not possible",

No, I don't. This is the title of the issue!

> it means that you are trying to plot a Pandas DataFrame or Series

No, I don't. We are in the xarray repository!

> To fix this error....

Everything below does not really make sense with respect to the issue I posted.

Quite frankly, your post reads like it was copied and pasted from ChatGPT or similar.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Plotting with time-zone-aware pd.Timestamp axis not possible 1613054013
1536054499 https://github.com/pydata/xarray/issues/7813#issuecomment-1536054499 https://api.github.com/repos/pydata/xarray/issues/7813 IC_kwDOAMm_X85bjlTj tomwhite 85085 2023-05-05T10:30:38Z 2023-05-05T10:30:38Z CONTRIBUTOR

Ah I understand better now. This makes sense - if ChunkManager has a name then the implementation could use that to name tasks.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Task naming for general chunkmanagers 1694956396
1534281206 https://github.com/pydata/xarray/issues/7813#issuecomment-1534281206 https://api.github.com/repos/pydata/xarray/issues/7813 IC_kwDOAMm_X85bc0X2 tomwhite 85085 2023-05-04T08:22:05Z 2023-05-04T08:22:05Z CONTRIBUTOR

If you hover over a node in the SVG representation you'll get a tooltip that shows the call stack and the line number of the top-level user function that invoked the computation. Does that help at all? (That said, I'm open to changing the way it is displayed, or how tasks are named in general.)

BTW should this be moved to a cubed issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Task naming for general chunkmanagers 1694956396
1532986706 https://github.com/pydata/xarray/pull/7798#issuecomment-1532986706 https://api.github.com/repos/pydata/xarray/issues/7798 IC_kwDOAMm_X85bX4VS slevang 39069044 2023-05-03T12:58:35Z 2023-05-03T12:58:35Z CONTRIBUTOR

Would it be possible to run another bug fix release with this incorporated? Or I guess we're already on to 2023.5.0 according to the date.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix groupby binary ops when grouped array is subset relative to other 1689773381
1532601237 https://github.com/pydata/xarray/issues/7516#issuecomment-1532601237 https://api.github.com/repos/pydata/xarray/issues/7516 IC_kwDOAMm_X85bWaOV Thomas-Z 1492047 2023-05-03T07:58:22Z 2023-05-03T07:58:22Z CONTRIBUTOR

Hello,

I'm not sure the performance problems were fully addressed (we're now forced to fully compute/load the selection expression), but the changes made in the last versions make this issue irrelevant and I think we can close it.

Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset.where performances regression. 1575938277
1530256448 https://github.com/pydata/xarray/issues/7802#issuecomment-1530256448 https://api.github.com/repos/pydata/xarray/issues/7802 IC_kwDOAMm_X85bNdxA ksunden 2501846 2023-05-01T21:00:30Z 2023-05-01T21:00:30Z CONTRIBUTOR

[xy]ticks upstream PR submitted, linked above

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Mypy errors with matplotlib 3.8 1691206894
1529920573 https://github.com/pydata/xarray/pull/7799#issuecomment-1529920573 https://api.github.com/repos/pydata/xarray/issues/7799 IC_kwDOAMm_X85bMLw9 dstansby 6197628 2023-05-01T16:26:31Z 2023-05-01T16:26:31Z CONTRIBUTOR

I was not aware of https://github.com/pydata/xarray/issues/6894, which is definitely my bad for not searching properly before setting off 😄

It looks like the changes I'm proposing here are probably orthogonal to work in https://github.com/pydata/xarray/issues/6894 though? The new tests added in #6894 still use pint as the single unit library and add some new tests with the new hypothesis strategies, but the goal of this PR is to generalise the existing unit testing to make it a bit easier to run tests with different unit libraries. Also definitely agree that keeping the end goal for duck arrays in mind is important, but I think that testing for unit libraries is a bit less general than the duck array testing stuff, because there's a host of extra information you need to be a unit library compared to a general duck array.

Anyway, definitely agree that it would be good to have the end goal in mind here. Not sure if I'll be able to find time for a synchronous discussion, but happy for others to do that and report back, or happy to chat async somewhere that isn't a github issue if that would be helpful.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 1,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Start making unit testing more general 1690019325
1529130077 https://github.com/pydata/xarray/pull/7798#issuecomment-1529130077 https://api.github.com/repos/pydata/xarray/issues/7798 IC_kwDOAMm_X85bJKxd abrammer 6145107 2023-04-30T20:15:24Z 2023-04-30T20:20:21Z CONTRIBUTOR

Apologies, that's my bad. Looks like I introduced a broken test and didn't manually double check the results coming back. The right shift test should have been:

```python
right_expected = Dataset(
    {
        "x": ("index", [0, 0, 2, 2]),
        "y": ("index", [-1, -1, -2, -2]),
        "level": ("index", [0, 0, 4, 4]),
        "index": [0, 1, 2, 3],
    }
)

right_actual = (left_expected.groupby("level") >> shift).reset_coords(names="level")
assert_equal(right_expected, right_actual)
```

I haven't paid attention to this issue, but doing the groupby manually didn't have the bug fwiw.

Probably an overkill test that only fails at the last assert before this fix:

```python
def test_groupby_math_bitshift() -> None:
    # create new dataset of int's only
    ds = Dataset(
        {
            "x": ("index", np.ones(4, dtype=int)),
            "y": ("index", np.ones(4, dtype=int) * -1),
            "level": ("index", [1, 1, 2, 2]),
            "index": [0, 1, 2, 3],
        }
    )
    shift = DataArray([1, 2, 1], [("level", [1, 2, 8])])

    left_expected = Dataset(
        {
            "x": ("index", [2, 2, 4, 4]),
            "y": ("index", [-2, -2, -4, -4]),
            "level": ("index", [2, 2, 8, 8]),
            "index": [0, 1, 2, 3],
        }
    )

    left_manual = []
    for lev, group in ds.groupby("level"):
        shifter = shift.sel(level=lev)
        left_manual.append(group << shifter)
    left_actual = xr.concat(left_manual, dim="index").reset_coords(names="level")
    assert_equal(left_expected, left_actual)

    left_actual = (ds.groupby("level") << shift).reset_coords(names="level")
    assert_equal(left_expected, left_actual)

    right_expected = Dataset(
        {
            "x": ("index", [0, 0, 2, 2]),
            "y": ("index", [-1, -1, -2, -2]),
            "level": ("index", [0, 0, 4, 4]),
            "index": [0, 1, 2, 3],
        }
    )

    right_manual = []
    for lev, group in left_expected.groupby("level"):
        shifter = shift.sel(level=lev)
        right_manual.append(group >> shifter)
    right_actual = xr.concat(right_manual, dim="index").reset_coords(names="level")
    assert_equal(right_expected, right_actual)

    right_actual = (left_expected.groupby("level") >> shift).reset_coords(names="level")
    assert_equal(right_expected, right_actual)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix groupby binary ops when grouped array is subset relative to other 1689773381
1529050645 https://github.com/pydata/xarray/pull/7798#issuecomment-1529050645 https://api.github.com/repos/pydata/xarray/issues/7798 IC_kwDOAMm_X85bI3YV slevang 39069044 2023-04-30T15:20:47Z 2023-04-30T15:20:47Z CONTRIBUTOR

Thanks for the quick fix! Not sure about the bitshift test but I'm assuming @headtr1ck is right.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix groupby binary ops when grouped array is subset relative to other 1689773381
1528091492 https://github.com/pydata/xarray/pull/7787#issuecomment-1528091492 https://api.github.com/repos/pydata/xarray/issues/7787 IC_kwDOAMm_X85bFNNk ksunden 2501846 2023-04-28T21:02:29Z 2023-04-28T21:02:29Z CONTRIBUTOR

The suggestion from mpl (specifically @tacaswell) was to use constrained layout for the purpose for which xarray currently uses get_renderer; this will ensure that the facetgrid works with all mpl backends.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow the label run-upstream to run upstream CI 1684281101
1527322138 https://github.com/pydata/xarray/pull/7788#issuecomment-1527322138 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85bCRYa maxhollmann 454724 2023-04-28T10:10:08Z 2023-04-28T10:10:08Z CONTRIBUTOR

@kmuehlbauer Okay, I got it. It only seems to happen with float arrays. I adjusted the test, and it now fails without the fix.

Only tangentially related to this PR, but I noticed that as_compatible_data will modify the original data in this path, since asarray passes through the original and afterwards the masked values are replaced with fill_value. So masked_array > some_dataarray might modify masked_array. Shouldn't it default to creating a copy to prevent this?
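A sketch of the scenario described above (hedged: whether the original buffer is actually mutated is exactly what is being reported here, not guaranteed behavior):

```python
import numpy as np
import xarray as xr

masked = np.ma.MaskedArray([1.0, 2.0, 3.0], mask=[False, True, False])
da = xr.DataArray(np.zeros(3), dims="x")

before = masked.data.copy()
_ = masked > da  # routes `masked` through as_compatible_data
# Per the report, the masked slot in `masked.data` may now hold the
# fill value instead of its original contents:
print(before, masked.data)
```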

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1527198919 https://github.com/pydata/xarray/pull/7788#issuecomment-1527198919 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85bBzTH maxhollmann 454724 2023-04-28T08:41:32Z 2023-04-28T08:41:32Z CONTRIBUTOR

@kmuehlbauer For some reason I can't reproduce it anymore. I'll monitor whether it occurs again in the original situation and close this otherwise after some time.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1524063361 https://github.com/pydata/xarray/pull/7741#issuecomment-1524063361 https://api.github.com/repos/pydata/xarray/issues/7741 IC_kwDOAMm_X85a11yB abrammer 6145107 2023-04-26T21:24:14Z 2023-04-26T21:24:14Z CONTRIBUTOR

The commits yesterday were to add an entry to whats-new and a couple examples lines to the computation doc page. I didn't find the binary_ops listed in methods anywhere, so this was the best idea I had? In the block just above missing-values: https://xray--7741.org.readthedocs.build/en/7741/user-guide/computation.html#missing-values

Otherwise, I think this is done from my perspective.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add lshift and rshift operators 1659654612
1523870704 https://github.com/pydata/xarray/issues/7789#issuecomment-1523870704 https://api.github.com/repos/pydata/xarray/issues/7789 IC_kwDOAMm_X85a1Gvw jerabaul29 8382834 2023-04-26T18:30:58Z 2023-04-26T18:32:33Z CONTRIBUTOR

Just found the solution (ironically, I had been bumping my head against this for quite a while before writing this issue, but found the solution right after writing it): one needs to provide both account_name and sas_token together; the adlfs exception is actually pointing to the right issue, I was just confused. I.e., this works:

xr.open_mfdataset([filename], engine="zarr", storage_options={'account_name': AZURE_STORAGE_ACCOUNT_NAME, 'sas_token': AZURE_STORAGE_SAS})

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot access zarr data on Azure using shared access signatures (SAS) 1685503657
1523837408 https://github.com/pydata/xarray/pull/7788#issuecomment-1523837408 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85a0-ng maxhollmann 454724 2023-04-26T18:01:45Z 2023-04-26T18:01:45Z CONTRIBUTOR

@kmuehlbauer Sure, I pushed the test as I was hoping it would work.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1523814515 https://github.com/pydata/xarray/pull/7788#issuecomment-1523814515 https://api.github.com/repos/pydata/xarray/issues/7788 IC_kwDOAMm_X85a05Bz maxhollmann 454724 2023-04-26T17:41:47Z 2023-04-26T17:41:47Z CONTRIBUTOR

Hi @kmuehlbauer, no worries! It's in draft because I can't figure out how to reproduce this bug for the tests. data[mask] = fill_value was crashing when I tried to create a DataArray from a non-writeable MaskedArray I got via netCDF4 from a remote source. data = np.asarray(data, dtype=dtype) didn't set writeable to true in that case, but it does when I create a non-writeable MaskedArray in the tests. Any ideas how to test this properly?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix as_compatible_data for read-only np.ma.MaskedArray 1685422501
1523743471 https://github.com/pydata/xarray/pull/7787#issuecomment-1523743471 https://api.github.com/repos/pydata/xarray/issues/7787 IC_kwDOAMm_X85a0nrv ksunden 2501846 2023-04-26T16:46:14Z 2023-04-26T16:46:14Z CONTRIBUTOR

Tackling a few of them (but not all in one go):

  • [xy]ticks in mpl is currently overly narrowly type hinted because I was following the docstring, but I agree that ArrayLike is a better type hint for that, plan on updating (including the docstring) upstream
  • [xy]lim originally neglected the case of passing set_xlim((min, max)) as a tuple, but that has been updated. xarray has that type hinted as array like, but mpl has it hinted as a 2-tuple (I think it is currently still of floats, but may be expanded as we more directly address units/categoricals/etc). Willing to debate here, but my starting position is that the "exactly 2 values" is valuable info here, and I think tuple is the only way to do that.
  • get_renderer is not actually available on all of our backends, we should maybe see if there is a more preferred way of doing what you are doing here that will work for all backends, but haven't looked into it too closely.
  • Module has no attribute <colormap> is another instance of dynamically generated behavior which can't be statically type checked (elegantly, at least); it can probably be replaced by mpl.colormaps["<colormap>"] in many cases, which is statically typecheckable (see the sketch after this list)
  • Anything to do with 3D Axes is not type hinted, perhaps ignore for now (or help us get that type hinted adequately, but it is relatively low priority currently)
  • Module has no attribute "dates" we don't currently type hint dates/units things, but it is on my mind, not sure yet if it will be in first release or not though (may at least put a placeholder that gets rid of this error, but treats everything as "Any").
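A small illustration of the colormap point (a sketch; assumes matplotlib >= 3.6, which introduced the colormaps registry):

```python
import matplotlib as mpl
import matplotlib.cm

# Attribute-style access is generated at runtime, so mypy reports
# 'Module "matplotlib.cm" has no attribute "viridis"':
cmap = matplotlib.cm.viridis

# Registry access is statically type checkable:
cmap = mpl.colormaps["viridis"]
```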
{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
  Allow the label run-upstream to run upstream CI 1684281101
1520409398 https://github.com/pydata/xarray/issues/7782#issuecomment-1520409398 https://api.github.com/repos/pydata/xarray/issues/7782 IC_kwDOAMm_X85an5s2 Articoking 90768774 2023-04-24T15:39:50Z 2023-04-24T15:39:50Z CONTRIBUTOR

Your suggestion worked perfectly, thank you very much! Avoiding using astype() reduced processing time massively

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset() reading ubyte variables as float32 from DAP server 1681353195
1520341470 https://github.com/pydata/xarray/issues/7782#issuecomment-1520341470 https://api.github.com/repos/pydata/xarray/issues/7782 IC_kwDOAMm_X85anpHe Articoking 90768774 2023-04-24T14:58:36Z 2023-04-24T14:58:36Z CONTRIBUTOR

Thank you for your quick reply. Adding the mask_and_scale=False kwarg solves the issue of conversion to float, but the resulting array is of dtype int8 instead of uint8. Is there any way of making open_dataset() directly interpret the values as unsigned?

It would save me quite a lot of processing time since using DataArray.astype(np.uint8) takes a while to run.
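One cheaper alternative (a sketch with hypothetical names: url is the DAP endpoint and "band" the ubyte variable) is to reinterpret the int8 buffer as uint8 with a zero-copy view instead of an element-wise astype():

```python
import numpy as np
import xarray as xr

ds = xr.open_dataset(url, mask_and_scale=False)  # `url`: hypothetical DAP endpoint
var = ds["band"]                                 # hypothetical int8 variable

# int8 and uint8 have the same item size, so .view() reinterprets the
# buffer in place rather than copying and converting every element.
unsigned = var.copy(data=var.values.view(np.uint8))
```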

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xr.open_dataset() reading ubyte variables as float32 from DAP server 1681353195
1519536791 https://github.com/pydata/xarray/issues/7388#issuecomment-1519536791 https://api.github.com/repos/pydata/xarray/issues/7388 IC_kwDOAMm_X85akkqX markelg 6883049 2023-04-24T07:32:26Z 2023-04-24T07:32:26Z CONTRIBUTOR

Good question. Right now ci/requirements/environment.yml is resolving libnetcdf 4.9.1, so pinning 4.9.2 would not work. I am not sure why, or how to change this, as few package versions are pinned.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray does not support full range of netcdf-python compression options 1503046820
1517820737 https://github.com/pydata/xarray/issues/7388#issuecomment-1517820737 https://api.github.com/repos/pydata/xarray/issues/7388 IC_kwDOAMm_X85aeBtB markelg 6883049 2023-04-21T13:15:03Z 2023-04-21T13:15:03Z CONTRIBUTOR

I think it is about these two issues only, so after backporting the fixes it should work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray does not support full range of netcdf-python compression options 1503046820
1516681755 https://github.com/pydata/xarray/issues/7768#issuecomment-1516681755 https://api.github.com/repos/pydata/xarray/issues/7768 IC_kwDOAMm_X85aZrob slevang 39069044 2023-04-20T17:16:39Z 2023-04-20T17:16:39Z CONTRIBUTOR

This should be doable. I think we would have to rework the apply_ufunc wrapper to pass p0 and bounds as DataArrays through *args instead of as simple dictionaries through kwargs, so that apply_ufunc can broadcast them and handle dask chunks.
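A minimal sketch of that broadcasting idea (illustrative names only, not the actual curvefit internals; _fit1d is a placeholder for the per-slice scipy fit):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(3, 4, 10), dims=("x", "y", "time"))
p0 = xr.DataArray(np.linspace(0.0, 1.0, 4), dims="y")  # per-y initial guess

def _fit1d(y, guess):
    # stand-in for scipy.optimize.curve_fit on one 1-D slice
    return y.mean() + 0.0 * guess

fitted = xr.apply_ufunc(
    _fit1d,
    da,
    p0,  # passed positionally, so apply_ufunc broadcasts it like the data
    input_core_dims=[["time"], []],
    output_core_dims=[[]],
    vectorize=True,
)
assert fitted.dims == ("x", "y")
```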

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Supplying multidimensional initial guess to `curvefit` 1674818753
1516345065 https://github.com/pydata/xarray/pull/7424#issuecomment-1516345065 https://api.github.com/repos/pydata/xarray/issues/7424 IC_kwDOAMm_X85aYZbp tomwhite 85085 2023-04-20T13:37:13Z 2023-04-20T13:37:13Z CONTRIBUTOR

Related issue: https://github.com/data-apis/array-api/issues/621

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
  array api - Add tests for aggregations 1522810384
1515539273 https://github.com/pydata/xarray/issues/7770#issuecomment-1515539273 https://api.github.com/repos/pydata/xarray/issues/7770 IC_kwDOAMm_X85aVUtJ hmaarrfk 90008 2023-04-20T00:15:23Z 2023-04-20T00:15:23Z CONTRIBUTOR

Understood. Thank you for your prompt replies.

I'll read up and ask again if I have any questions.

I guess in the past I was trying to accommodate users that were not using our wrappers around to_netcdf.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Provide a public API for adding new backends 1675299031
1515142339 https://github.com/pydata/xarray/pull/7739#issuecomment-1515142339 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85aTzzD jmccreight 12465248 2023-04-19T17:55:16Z 2023-04-19T17:55:16Z CONTRIBUTOR

I followed data = True / False / "array" / "list"

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1514700581 https://github.com/pydata/xarray/pull/7739#issuecomment-1514700581 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85aSH8l jmccreight 12465248 2023-04-19T13:04:19Z 2023-04-19T13:04:19Z CONTRIBUTOR

Making all the requested changes, the above should resolve momentarily.

I like this "trick"/suggestion:

> And a design question/suggestion: what about instead of adding another kwarg, you could use data = True / False / "numpy"?

I will implement this if we are in agreement with @dcherian

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1511596653 https://github.com/pydata/xarray/issues/7388#issuecomment-1511596653 https://api.github.com/repos/pydata/xarray/issues/7388 IC_kwDOAMm_X85aGSJt markelg 6883049 2023-04-17T15:28:21Z 2023-04-17T15:29:43Z CONTRIBUTOR

Thanks. It looks like the errors are related to this bug: https://github.com/Unidata/netcdf-c/issues/2674. The fix has been merged, so I hope they include it in the next netcdf-c release. For the moment I prefer not to merge this, as netcdf 4.9.2 and dask do not seem to play well together.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray does not support full range of netcdf-python compression options 1503046820
1511459288 https://github.com/pydata/xarray/pull/7739#issuecomment-1511459288 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85aFwnY jmccreight 12465248 2023-04-17T14:22:50Z 2023-04-17T14:22:50Z CONTRIBUTOR

I'm happy to "fix" the mypy issues, but it's one that I suspect might be requested for changes (if I recall correctly, it's just in the tests).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1507182085 https://github.com/pydata/xarray/pull/7681#issuecomment-1507182085 https://api.github.com/repos/pydata/xarray/issues/7681 IC_kwDOAMm_X85Z1cYF harshitha1201 97012127 2023-04-13T15:34:29Z 2023-04-13T15:34:29Z CONTRIBUTOR

Thanks @harshitha1201 !

Thank you!!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  restructure the contributing guide 1641188400
1507176030 https://github.com/pydata/xarray/pull/7461#issuecomment-1507176030 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Z1a5e st-bender 28786187 2023-04-13T15:30:28Z 2023-04-13T15:30:28Z CONTRIBUTOR

Hi,

I assume you have given this a lot of thought, but imho the minimum dependency versions should be decided according to features needed, not timing.

It's not based on timing. The policy is there so that, when a developer finds that they have to do extra labour to support an old version of a dependency, they can instead drop the support for the old version without needing to seek approval from the maintainers.

That's not how I interpret the link given by @dcherian, which states "rolling" minimum versions based on age.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1507150495 https://github.com/pydata/xarray/pull/7461#issuecomment-1507150495 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Z1Uqf st-bender 28786187 2023-04-13T15:13:28Z 2023-04-13T15:13:28Z CONTRIBUTOR

Hi @dcherian

Here is our support policy for versions: https://docs.xarray.dev/en/stable/getting-started-guide/installing.html#minimum-dependency-versions though I think we dropped py38 too early.

I assume you have given this a lot of thought, but imho the minimum dependency versions should be decided according to features needed, not timing.

For your current issue, I'm surprised this patch didn't fix it: conda-forge/conda-forge-repodata-patches-feedstock#429

Thanks for the pointer. I am not sure why (maybe I was updating too eagerly before the feedstock was fixed), but `mamba update --all` on py38 pulled pandas 2.0 without updating xarray.

`python3.8 -m pip install xarray` will also result in incompatible versions.

cc @hmaarrfk @ocefpaf

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1505014586 https://github.com/pydata/xarray/pull/7732#issuecomment-1505014586 https://api.github.com/repos/pydata/xarray/issues/7732 IC_kwDOAMm_X85ZtLM6 harshitha1201 97012127 2023-04-12T10:11:20Z 2023-04-12T10:11:20Z CONTRIBUTOR

@headtr1ck I have made the required changes; please review.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Extending the glossary 1657534038
1504309371 https://github.com/pydata/xarray/pull/7739#issuecomment-1504309371 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85ZqfB7 jmccreight 12465248 2023-04-12T00:13:03Z 2023-04-12T00:13:03Z CONTRIBUTOR

I kind of implied this above, but I'll state it plainly: the extra code to test equality of encodings is not handsome.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1504297701 https://github.com/pydata/xarray/pull/7739#issuecomment-1504297701 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85ZqcLl jmccreight 12465248 2023-04-12T00:03:23Z 2023-04-12T00:03:23Z CONTRIBUTOR

@dcherian thanks! I didn't incorporate any suggestions yet. Regarding the encodings: inequality of dataset encodings is obscured by `assert_identical(a, b)` not evaluating encodings. It seems like it should have an option to also check encodings (or not).
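
A hypothetical helper sketching the kind of option described here; `xarray.testing.assert_identical` is real API, while `assert_identical_with_encoding` is only an illustration for `Dataset` inputs:

```python
import xarray.testing as xrt

def assert_identical_with_encoding(a, b):
    # assert_identical ignores .encoding, so compare encodings explicitly,
    # both at the dataset level and per variable
    xrt.assert_identical(a, b)
    assert a.encoding == b.encoding
    for name in a.variables:
        assert a[name].encoding == b[name].encoding
```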

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1504241169 https://github.com/pydata/xarray/pull/7739#issuecomment-1504241169 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85ZqOYR jmccreight 12465248 2023-04-11T23:09:57Z 2023-04-11T23:09:57Z CONTRIBUTOR

On the off chance this is reviewed before I push again: do not merge. I have a fix for encodings not getting properly round-tripped in `Dataset.from_dict(ds.to_dict())`. It was minor to fix, but making sure it's tested will take a minute.
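
A rough sketch of the round trip being tested, assuming the `encoding=True` flag this PR adds to `to_dict`:

```python
import numpy as np
import xarray as xr

# build a dataset whose variable carries an encoding entry
ds = xr.Dataset(
    {"a": xr.Variable("x", np.arange(3), encoding={"dtype": "int16"})}
)

# encodings should survive to_dict/from_dict when encoding=True is passed
d = ds.to_dict(encoding=True)
roundtripped = xr.Dataset.from_dict(d)
assert roundtripped["a"].encoding == ds["a"].encoding
```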

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1503827455 https://github.com/pydata/xarray/issues/7388#issuecomment-1503827455 https://api.github.com/repos/pydata/xarray/issues/7388 IC_kwDOAMm_X85ZopX_ markelg 6883049 2023-04-11T17:38:51Z 2023-04-11T17:38:51Z CONTRIBUTOR

Hi. I updated the branch and created a fresh python environment with the idea of writing another, final test for this. However, before doing that I ran the test suite and got some bad HDF5 errors in `test_backends.py::test_open_mfdataset_manyfiles[netcdf4-20-True-None-5]`:

#000: H5A.c line 528 in H5Aopen_by_name(): can't open attribute
  major: Attribute
  minor: Can't open object
#001: H5VLcallback.c line 1091 in H5VL_attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#002: H5VLcallback.c line 1058 in H5VL__attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#003: H5VLnative_attr.c line 130 in H5VL__native_attr_open(): can't open attribute
  major: Attribute
  minor: Can't open object
#004: H5Aint.c line 545 in H5A__open_by_name(): unable to load attribute info from object header
  major: Attribute
  minor: Unable to initialize object
#005: H5Oattribute.c line 494 in H5O__attr_open_by_name(): can't locate attribute: '_QuantizeBitGroomNumberOfSignificantDigits'
  major: Attribute
  minor: Object not found
HDF5-DIAG: Error detected in HDF5 (1.12.2) thread 1:
#000: H5A.c line 528 in H5Aopen_by_name(): can't open attribute
  major: Attribute
  minor: Can't open object
#001: H5VLcallback.c line 1091 in H5VL_attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#002: H5VLcallback.c line 1058 in H5VL__attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#003: H5VLnative_attr.c line 130 in H5VL__native_attr_open(): can't open attribute
  major: Attribute
  minor: Can't open object
#004: H5Aint.c line 545 in H5A__open_by_name(): unable to load attribute info from object header
  major: Attribute
  minor: Unable to initialize object
#005: H5Oattribute.c line 494 in H5O__attr_open_by_name(): can't locate attribute: '_QuantizeGranularBitRoundNumberOfSignificantDigits'
  major: Attribute
  minor: Object not found
HDF5-DIAG: Error detected in HDF5 (1.12.2) thread 1:
#000: H5A.c line 528 in H5Aopen_by_name(): can't open attribute
  major: Attribute
  minor: Can't open object
#001: H5VLcallback.c line 1091 in H5VL_attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#002: H5VLcallback.c line 1058 in H5VL__attr_open(): attribute open failed
  major: Virtual Object Layer
  minor: Can't open object
#003: H5VLnative_attr.c line 130 in H5VL__native_attr_open(): can't open attribute
  major: Attribute
  minor: Can't open object
#004: H5Aint.c line 545 in H5A__open_by_name(): unable to load attribute info from object header
  major: Attribute
  minor: Unable to initialize object
#005: H5Oattribute.c line 494 in H5O__attr_open_by_name(): can't locate attribute: '_QuantizeBitRoundNumberOfSignificantBits'
  major: Attribute
  minor: Object not found

I am not sure what is going on. It seems that the currently resolved netcdf4-hdf5 versions do not like the default parameters we are supplying. My environment is:

```

Name Version Build Channel

_libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge affine 2.4.0 pyhd8ed1ab_0 conda-forge aiobotocore 2.5.0 pyhd8ed1ab_0 conda-forge aiohttp 3.8.4 py310h1fa729e_0 conda-forge aioitertools 0.11.0 pyhd8ed1ab_0 conda-forge aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge antlr-python-runtime 4.7.2 py310hff52083_1003 conda-forge asciitree 0.3.3 py_2 conda-forge async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge backports.zoneinfo 0.2.1 py310hff52083_7 conda-forge beautifulsoup4 4.12.2 pyha770c72_0 conda-forge blosc 1.21.3 hafa529b_0 conda-forge boost-cpp 1.78.0 h5adbc97_2 conda-forge boto3 1.26.76 pyhd8ed1ab_0 conda-forge botocore 1.29.76 pyhd8ed1ab_0 conda-forge bottleneck 1.3.7 py310h0a54255_0 conda-forge brotli 1.0.9 h166bdaf_8 conda-forge brotli-bin 1.0.9 h166bdaf_8 conda-forge brotlipy 0.7.0 py310h5764c6d_1005 conda-forge bzip2 1.0.8 h7f98852_4 conda-forge c-ares 1.18.1 h7f98852_0 conda-forge ca-certificates 2022.12.7 ha878542_0 conda-forge cached-property 1.5.2 hd8ed1ab_1 conda-forge cached_property 1.5.2 pyha770c72_1 conda-forge cairo 1.16.0 ha61ee94_1014 conda-forge cartopy 0.21.1 py310hcb7e713_0 conda-forge cdat_info 8.2.1 pyhd8ed1ab_2 conda-forge cdms2 3.1.5 py310hb9168da_16 conda-forge cdtime 3.1.4 py310h87e304a_8 conda-forge certifi 2022.12.7 pyhd8ed1ab_0 conda-forge cf-units 3.1.1 py310hde88566_2 conda-forge cffi 1.15.1 py310h255011f_3 conda-forge cfgrib 0.9.10.3 pyhd8ed1ab_0 conda-forge cfgv 3.3.1 pyhd8ed1ab_0 conda-forge cfitsio 4.2.0 hd9d235c_0 conda-forge cftime 1.6.2 py310hde88566_1 conda-forge charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge click 8.1.3 unix_pyhd8ed1ab_2 conda-forge click-plugins 1.1.1 py_0 conda-forge cligj 0.7.2 pyhd8ed1ab_1 conda-forge cloudpickle 2.2.1 pyhd8ed1ab_0 conda-forge colorama 0.4.6 pyhd8ed1ab_0 conda-forge contourpy 1.0.7 py310hdf3cbec_0 conda-forge coverage 7.2.3 py310h1fa729e_0 conda-forge cryptography 40.0.1 py310h34c0648_0 conda-forge curl 7.88.1 hdc1c0ab_1 conda-forge cycler 0.11.0 pyhd8ed1ab_0 conda-forge cytoolz 0.12.0 py310h5764c6d_1 conda-forge dask-core 2023.3.2 pyhd8ed1ab_0 conda-forge distarray 2.12.2 pyh050c7b8_4 conda-forge distlib 0.3.6 pyhd8ed1ab_0 conda-forge distributed 2023.3.2.1 pyhd8ed1ab_0 conda-forge docopt 0.6.2 py_1 conda-forge eccodes 2.29.0 h54fcba4_0 conda-forge entrypoints 0.4 pyhd8ed1ab_0 conda-forge esmf 8.4.1 nompi_he2e5181_0 conda-forge esmpy 8.4.1 pyhc1e730c_0 conda-forge exceptiongroup 1.1.1 pyhd8ed1ab_0 conda-forge execnet 1.9.0 pyhd8ed1ab_0 conda-forge expat 2.5.0 hcb278e6_1 conda-forge fasteners 0.17.3 pyhd8ed1ab_0 conda-forge filelock 3.11.0 pyhd8ed1ab_0 conda-forge findlibs 0.0.2 pyhd8ed1ab_0 conda-forge flox 0.6.10 pyhd8ed1ab_0 conda-forge font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge font-ttf-inconsolata 3.000 h77eed37_0 conda-forge font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge font-ttf-ubuntu 0.83 hab24e00_0 conda-forge fontconfig 2.14.2 h14ed4e7_0 conda-forge fonts-conda-ecosystem 1 0 conda-forge fonts-conda-forge 1 0 conda-forge fonttools 4.39.3 py310h1fa729e_0 conda-forge freeglut 3.2.2 h9c3ff4c_1 conda-forge freetype 2.12.1 hca18f0e_1 conda-forge freexl 1.0.6 h166bdaf_1 conda-forge frozenlist 1.3.3 py310h5764c6d_0 conda-forge fsspec 2023.4.0 pyh1a96a4e_0 conda-forge future 0.18.3 pyhd8ed1ab_0 conda-forge g2clib 1.6.3 hbecde78_1 conda-forge geos 3.11.1 h27087fc_0 conda-forge geotiff 1.7.1 h7a142b4_6 conda-forge gettext 0.21.1 h27087fc_0 conda-forge giflib 5.2.1 h0b41bf4_3 conda-forge h5netcdf 1.1.0 pyhd8ed1ab_1 conda-forge h5py 3.8.0 
nompi_py310h0311031_100 conda-forge hdf4 4.2.15 h9772cbc_5 conda-forge hdf5 1.12.2 nompi_h4df4325_101 conda-forge heapdict 1.0.1 py_0 conda-forge hypothesis 6.71.0 pyha770c72_0 conda-forge icu 70.1 h27087fc_0 conda-forge identify 2.5.22 pyhd8ed1ab_0 conda-forge idna 3.4 pyhd8ed1ab_0 conda-forge importlib-metadata 6.3.0 pyha770c72_0 conda-forge importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge importlib_resources 5.12.0 pyhd8ed1ab_0 conda-forge iniconfig 2.0.0 pyhd8ed1ab_0 conda-forge iris 3.4.1 pyhd8ed1ab_0 conda-forge jasper 2.0.33 h0ff4b12_1 conda-forge jinja2 3.1.2 pyhd8ed1ab_1 conda-forge jmespath 1.0.1 pyhd8ed1ab_0 conda-forge jpeg 9e h0b41bf4_3 conda-forge json-c 0.16 hc379101_0 conda-forge jsonschema 4.17.3 pyhd8ed1ab_0 conda-forge jupyter_core 5.3.0 py310hff52083_0 conda-forge kealib 1.5.0 ha7026e8_0 conda-forge keyutils 1.6.1 h166bdaf_0 conda-forge kiwisolver 1.4.4 py310hbf28c38_1 conda-forge krb5 1.20.1 h81ceb04_0 conda-forge lazy-object-proxy 1.9.0 py310h1fa729e_0 conda-forge lcms2 2.15 hfd0df8a_0 conda-forge ld_impl_linux-64 2.40 h41732ed_0 conda-forge lerc 4.0.0 h27087fc_0 conda-forge libaec 1.0.6 hcb278e6_1 conda-forge libblas 3.9.0 16_linux64_openblas conda-forge libbrotlicommon 1.0.9 h166bdaf_8 conda-forge libbrotlidec 1.0.9 h166bdaf_8 conda-forge libbrotlienc 1.0.9 h166bdaf_8 conda-forge libcblas 3.9.0 16_linux64_openblas conda-forge libcdms 3.1.2 h9366c0b_120 conda-forge libcf 1.0.3 py310h71500c5_116 conda-forge libcurl 7.88.1 hdc1c0ab_1 conda-forge libdeflate 1.17 h0b41bf4_0 conda-forge libdrs 3.1.2 h01ed8d5_119 conda-forge libdrs_f 3.1.2 h059c5b8_115 conda-forge libedit 3.1.20191231 he28a2e2_2 conda-forge libev 4.33 h516909a_1 conda-forge libexpat 2.5.0 hcb278e6_1 conda-forge libffi 3.4.2 h7f98852_5 conda-forge libgcc-ng 12.2.0 h65d4601_19 conda-forge libgdal 3.6.2 h6c674c2_9 conda-forge libgfortran-ng 12.2.0 h69a702a_19 conda-forge libgfortran5 12.2.0 h337968e_19 conda-forge libglib 2.74.1 h606061b_1 conda-forge libglu 9.0.0 he1b5a44_1001 conda-forge libgomp 12.2.0 h65d4601_19 conda-forge libiconv 1.17 h166bdaf_0 conda-forge libkml 1.3.0 h37653c0_1015 conda-forge liblapack 3.9.0 16_linux64_openblas conda-forge libllvm11 11.1.0 he0ac6c6_5 conda-forge libnetcdf 4.9.1 nompi_h34a3ff0_101 conda-forge libnghttp2 1.52.0 h61bc06f_0 conda-forge libnsl 2.0.0 h7f98852_0 conda-forge libopenblas 0.3.21 pthreads_h78a6416_3 conda-forge libpng 1.6.39 h753d276_0 conda-forge libpq 15.2 hb675445_0 conda-forge librttopo 1.1.0 ha49c73b_12 conda-forge libspatialite 5.0.1 h221c8f1_23 conda-forge libsqlite 3.40.0 h753d276_0 conda-forge libssh2 1.10.0 hf14f497_3 conda-forge libstdcxx-ng 12.2.0 h46fd767_19 conda-forge libtiff 4.5.0 h6adf6a1_2 conda-forge libuuid 2.38.1 h0b41bf4_0 conda-forge libwebp-base 1.3.0 h0b41bf4_0 conda-forge libxcb 1.13 h7f98852_1004 conda-forge libxml2 2.10.3 hca2bb57_4 conda-forge libxslt 1.1.37 h873f0b0_0 conda-forge libzip 1.9.2 hc929e4a_1 conda-forge libzlib 1.2.13 h166bdaf_4 conda-forge llvmlite 0.39.1 py310h58363a5_1 conda-forge locket 1.0.0 pyhd8ed1ab_0 conda-forge lxml 4.9.2 py310hbdc0903_0 conda-forge lz4-c 1.9.4 hcb278e6_0 conda-forge markupsafe 2.1.2 py310h1fa729e_0 conda-forge matplotlib-base 3.7.1 py310he60537e_0 conda-forge msgpack-python 1.0.5 py310hdf3cbec_0 conda-forge multidict 6.0.4 py310h1fa729e_0 conda-forge munkres 1.1.4 pyh9f0ad1d_0 conda-forge nbformat 5.8.0 pyhd8ed1ab_0 conda-forge nc-time-axis 1.4.1 pyhd8ed1ab_0 conda-forge ncurses 6.3 h27087fc_1 conda-forge netcdf-fortran 4.6.0 nompi_heb5813c_103 conda-forge netcdf4 1.6.3 
nompi_py310h0feb132_100 conda-forge nodeenv 1.7.0 pyhd8ed1ab_0 conda-forge nomkl 1.0 h5ca1d4c_0 conda-forge nspr 4.35 h27087fc_0 conda-forge nss 3.89 he45b914_0 conda-forge numba 0.56.4 py310h0e39c9b_1 conda-forge numbagg 0.2.2 pyhd8ed1ab_1 conda-forge numcodecs 0.11.0 py310heca2aa9_1 conda-forge numexpr 2.8.4 py310h690d005_100 conda-forge numpy 1.23.5 py310h53a5b5f_0 conda-forge numpy_groupies 0.9.20 pyhd8ed1ab_0 conda-forge openblas 0.3.21 pthreads_h320a7e8_3 conda-forge openjpeg 2.5.0 hfec8fc6_2 conda-forge openssl 3.1.0 h0b41bf4_0 conda-forge packaging 23.0 pyhd8ed1ab_0 conda-forge pandas 1.5.3 py310h9b08913_1 conda-forge partd 1.3.0 pyhd8ed1ab_0 conda-forge patsy 0.5.3 pyhd8ed1ab_0 conda-forge pcre2 10.40 hc3806b6_0 conda-forge pillow 9.4.0 py310h023d228_1 conda-forge pint 0.20.1 pyhd8ed1ab_0 conda-forge pip 23.0.1 pyhd8ed1ab_0 conda-forge pixman 0.40.0 h36c2ea0_0 conda-forge pkgutil-resolve-name 1.3.10 pyhd8ed1ab_0 conda-forge platformdirs 3.2.0 pyhd8ed1ab_0 conda-forge pluggy 1.0.0 pyhd8ed1ab_5 conda-forge pooch 1.7.0 pyha770c72_3 conda-forge poppler 23.03.0 h091648b_0 conda-forge poppler-data 0.4.12 hd8ed1ab_0 conda-forge postgresql 15.2 h3248436_0 conda-forge pre-commit 3.2.2 pyha770c72_0 conda-forge proj 9.1.1 h8ffa02c_2 conda-forge pseudonetcdf 3.2.2 pyhd8ed1ab_0 conda-forge psutil 5.9.4 py310h5764c6d_0 conda-forge pthread-stubs 0.4 h36c2ea0_1001 conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pydap 3.4.0 pyhd8ed1ab_0 conda-forge pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge pyproj 3.5.0 py310h15e2413_0 conda-forge pyrsistent 0.19.3 py310h1fa729e_0 conda-forge pyshp 2.3.1 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 pyha2e5f31_6 conda-forge pytest 7.3.0 pyhd8ed1ab_0 conda-forge pytest-cov 4.0.0 pyhd8ed1ab_0 conda-forge pytest-env 0.8.1 pyhd8ed1ab_0 conda-forge pytest-xdist 3.2.1 pyhd8ed1ab_0 conda-forge python 3.10.10 he550d4f_0_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-eccodes 1.5.1 py310h0a54255_0 conda-forge python-fastjsonschema 2.16.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py310h1fa729e_0 conda-forge python_abi 3.10 3_cp310 conda-forge pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py310h5764c6d_5 conda-forge rasterio 1.3.6 py310h3e853a9_0 conda-forge readline 8.2 h8228510_1 conda-forge requests 2.28.2 pyhd8ed1ab_1 conda-forge s3transfer 0.6.0 pyhd8ed1ab_0 conda-forge scipy 1.10.1 py310h8deb116_0 conda-forge seaborn 0.12.2 hd8ed1ab_0 conda-forge seaborn-base 0.12.2 pyhd8ed1ab_0 conda-forge setuptools 67.6.1 pyhd8ed1ab_0 conda-forge shapely 2.0.1 py310h8b84c32_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.10 h9fff704_0 conda-forge snuggs 1.4.7 py_0 conda-forge sortedcontainers 2.4.0 pyhd8ed1ab_0 conda-forge soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge sparse 0.14.0 pyhd8ed1ab_0 conda-forge sqlite 3.40.0 h4ff8645_0 conda-forge statsmodels 0.13.5 py310hde88566_2 conda-forge tblib 1.7.0 pyhd8ed1ab_0 conda-forge tiledb 2.13.2 hd532e3d_0 conda-forge tk 8.6.12 h27826a3_0 conda-forge toml 0.10.2 pyhd8ed1ab_0 conda-forge tomli 2.0.1 pyhd8ed1ab_0 conda-forge toolz 0.12.0 pyhd8ed1ab_0 conda-forge tornado 6.2 py310h5764c6d_1 conda-forge traitlets 5.9.0 pyhd8ed1ab_0 conda-forge typing-extensions 4.5.0 hd8ed1ab_0 conda-forge typing_extensions 4.5.0 pyha770c72_0 conda-forge tzcode 2023c h0b41bf4_0 conda-forge tzdata 2023c h71feb2d_0 conda-forge udunits2 2.2.28 hc3e0081_0 conda-forge ukkonen 1.0.1 py310hbf28c38_3 conda-forge unicodedata2 15.0.0 py310h5764c6d_0 conda-forge urllib3 1.26.15 pyhd8ed1ab_0 
conda-forge virtualenv 20.21.0 pyhd8ed1ab_0 conda-forge webob 1.8.7 pyhd8ed1ab_0 conda-forge wheel 0.40.0 pyhd8ed1ab_0 conda-forge wrapt 1.15.0 py310h1fa729e_0 conda-forge xarray 2023.3.0 pyhd8ed1ab_0 conda-forge xerces-c 3.2.4 h55805fa_1 conda-forge xorg-fixesproto 5.0 h7f98852_1002 conda-forge xorg-inputproto 2.3.2 h7f98852_1002 conda-forge xorg-kbproto 1.0.7 h7f98852_1002 conda-forge xorg-libice 1.0.10 h7f98852_0 conda-forge xorg-libsm 1.2.3 hd9c2040_1000 conda-forge xorg-libx11 1.8.4 h0b41bf4_0 conda-forge xorg-libxau 1.0.9 h7f98852_0 conda-forge xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge xorg-libxext 1.3.4 h0b41bf4_2 conda-forge xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge xorg-libxi 1.7.10 h7f98852_0 conda-forge xorg-libxrender 0.9.10 h7f98852_1003 conda-forge xorg-renderproto 0.11.1 h7f98852_1002 conda-forge xorg-xextproto 7.3.0 h0b41bf4_1003 conda-forge xorg-xproto 7.0.31 h7f98852_1007 conda-forge xxhash 0.8.1 h0b41bf4_0 conda-forge xz 5.2.6 h166bdaf_0 conda-forge yaml 0.2.5 h7f98852_2 conda-forge yarl 1.8.2 py310h5764c6d_0 conda-forge zarr 2.14.2 pyhd8ed1ab_0 conda-forge zict 2.2.0 pyhd8ed1ab_0 conda-forge zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 h166bdaf_4 conda-forge zstd 1.5.2 h3eb15da_6 conda-forge
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray does not support full range of netcdf-python compression options 1503046820
1503671262 https://github.com/pydata/xarray/pull/7724#issuecomment-1503671262 https://api.github.com/repos/pydata/xarray/issues/7724 IC_kwDOAMm_X85ZoDPe jsignell 4806877 2023-04-11T15:58:48Z 2023-04-11T15:58:48Z CONTRIBUTOR

Is there anything I can do to help out on this? It sounds like the blocker is mypy?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `pandas=2.0` support 1655782486
1503393910 https://github.com/pydata/xarray/pull/7461#issuecomment-1503393910 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Zm_h2 st-bender 28786187 2023-04-11T13:50:42Z 2023-04-11T13:50:42Z CONTRIBUTOR

Hi, just to let you know that this change breaks Python 3.8 setups with automatic updates, because the pandas version is not restricted, so it will happily be updated to version 2 or higher. That, in turn, is not compatible with xarray < 2023.2, which cannot be installed on Python 3.8 because of this change. I don't know why the minimum Python version was changed; this PR doesn't say why it was necessary. Cheers.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1500720650 https://github.com/pydata/xarray/pull/7739#issuecomment-1500720650 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85Zcy4K jmccreight 12465248 2023-04-07T23:27:25Z 2023-04-07T23:27:25Z CONTRIBUTOR

I solved the mypy errors in a highly dubious way. 👀

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1500558818 https://github.com/pydata/xarray/pull/7739#issuecomment-1500558818 https://api.github.com/repos/pydata/xarray/issues/7739 IC_kwDOAMm_X85ZcLXi jmccreight 12465248 2023-04-07T19:07:35Z 2023-04-07T19:07:35Z CONTRIBUTOR

I would appreciate any edification on the mypy failures. Looking at the indicated lines, I'm 🤷 .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `ds.to_dict` with data as arrays, not lists 1659078413
1500552035 https://github.com/pydata/xarray/issues/1599#issuecomment-1500552035 https://api.github.com/repos/pydata/xarray/issues/1599 IC_kwDOAMm_X85ZcJtj jmccreight 12465248 2023-04-07T18:59:04Z 2023-04-07T18:59:24Z CONTRIBUTOR

The PR #7739 is available for review; @jhamman @dcherian would be my choices. I think this is pretty straightforward. I suppose the name of the kwarg, `numpy_data`, is debatable. I based this on the discussion of numpy vs. tolist above, preferring the former but acknowledging the comment that a package name as an arg is odd. Could do `as_numpy` or something slightly different.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray to_dict() without converting with numpy tolist() 261727170
1500449963 https://github.com/pydata/xarray/issues/1599#issuecomment-1500449963 https://api.github.com/repos/pydata/xarray/issues/1599 IC_kwDOAMm_X85Zbwyr jmccreight 12465248 2023-04-07T16:41:39Z 2023-04-07T16:41:39Z CONTRIBUTOR

I'd be interested in reviving this; it's exactly what I want to achieve. It's not clear whether there was some reason this never went ahead. I looked around but didn't find anything. Let me know if there's some reason not to pursue it. Thanks

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray to_dict() without converting with numpy tolist() 261727170
1499591643 https://github.com/pydata/xarray/issues/3216#issuecomment-1499591643 https://api.github.com/repos/pydata/xarray/issues/3216 IC_kwDOAMm_X85ZYfPb chiaral 8453445 2023-04-06T20:34:19Z 2023-04-06T20:34:47Z CONTRIBUTOR

Hello! Just adding a 👍 to this thread and, since it is an old issue, wondering if this is on the xarray roadmap somewhere. Something like `.rolling(time='5M')` would be really valuable for many applications. Thanks so much for all your work! Chiara
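
For comparison, pandas already offers offset-based windows on a datetime index for fixed frequencies; a sketch of the requested behaviour via a pandas round trip (calendar offsets like "5M" are not fixed-width, so they would still need dedicated support):

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=365)
da = xr.DataArray(np.random.rand(365), coords={"time": times}, dims="time")

# pandas supports fixed-frequency offsets such as "5D" in rolling()
rolled = da.to_series().rolling("5D").mean()
da_rolled = xr.DataArray.from_series(rolled)
```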

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature request: time-based rolling window functionality 480753417
1499079927 https://github.com/pydata/xarray/pull/7731#issuecomment-1499079927 https://api.github.com/repos/pydata/xarray/issues/7731 IC_kwDOAMm_X85ZWiT3 jsignell 4806877 2023-04-06T13:38:38Z 2023-04-06T13:38:38Z CONTRIBUTOR

To properly test this, I guess we'd need to merge #7724 first?

Otherwise the env in the upstream test will never solve, right?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Continue to use nanosecond-precision Timestamps in precision-sensitive areas 1657396474
1498773949 https://github.com/pydata/xarray/issues/7721#issuecomment-1498773949 https://api.github.com/repos/pydata/xarray/issues/7721 IC_kwDOAMm_X85ZVXm9 jacobtomlinson 1610850 2023-04-06T09:39:43Z 2023-04-06T09:39:43Z CONTRIBUTOR

Ping @leofang in case you have thoughts?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `as_shared_dtype` converts scalars to 0d `numpy` arrays if chunked `cupy` is involved 1655290694
1498639992 https://github.com/pydata/xarray/pull/7669#issuecomment-1498639992 https://api.github.com/repos/pydata/xarray/issues/7669 IC_kwDOAMm_X85ZU254 remigathoni 51911758 2023-04-06T07:53:56Z 2023-04-06T07:53:56Z CONTRIBUTOR

The docs build failure is real, from some rst formatting error

/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:53: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:52: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:64: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.pad:63: WARNING: Block quote ends without a blank line; unexpected unindent.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:58: ERROR: Unexpected indentation.
/home/docs/checkouts/readthedocs.org/user_builds/xray/checkouts/7669/xarray/core/accessor_str.py:docstring of xarray.core.accessor_str.StringAccessor.count:56: WARNING: Block quote ends without a blank line; unexpected unindent.

I'll fix it!
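
For reference, a minimal sketch of the docstring pattern that usually triggers these two Sphinx messages, and its fix (hypothetical functions, not the actual `StringAccessor` code):

```python
def bad():
    """Summary line.
    Continued text
        suddenly indented -> "ERROR: Unexpected indentation."
    dedented again -> "WARNING: Block quote ends without a blank line"
    """

def good():
    """Summary line.

    Continued text::

        an intentionally indented literal block

    dedented again, with a blank line on each side of the block.
    """
```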

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Docstrings examples for string methods 1639361476
1498200620 https://github.com/pydata/xarray/issues/7573#issuecomment-1498200620 https://api.github.com/repos/pydata/xarray/issues/7573 IC_kwDOAMm_X85ZTLos ocefpaf 950575 2023-04-05T21:47:19Z 2023-04-05T21:47:19Z CONTRIBUTOR

With the current PR we would end up with two different build numbers with differing behaviour, which might confuse folks.

+1

But I'd rely on @ocefpaf's expertise.

The PR is a good idea, and we at conda-forge even thought about making something like that for all packages. The problem is that optional-package metadata in Python-land is super unreliable. So doing it on a per-package basis, with the original authors as part of it, is super safe and recommended.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add optional min versions to conda-forge recipe (`run_constrained`) 1603957501
1498164858 https://github.com/pydata/xarray/issues/7716#issuecomment-1498164858 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZTC56 jsignell 4806877 2023-04-05T21:09:59Z 2023-04-05T21:09:59Z CONTRIBUTOR

In that case it could be reasonable to mimic the pattern in test_groupby and mark the failing tests with `@pytest.mark.skipif(not has_pandas_version_two, reason="Tests a scenario that only raises when pandas <= 2")`
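
A minimal sketch of that pattern; `has_pandas_version_two` is assumed to be a module-level flag like the one in `test_groupby`:

```python
import pandas as pd
import pytest
from packaging.version import Version

# hypothetical module-level flag mirroring the test_groupby pattern
has_pandas_version_two = Version(pd.__version__).major >= 2

@pytest.mark.skipif(
    not has_pandas_version_two,
    reason="Tests a scenario that only raises when pandas <= 2",
)
def test_pandas_two_only_behaviour():
    ...
```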

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1498160636 https://github.com/pydata/xarray/issues/7716#issuecomment-1498160636 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZTB38 mroeschke 10647082 2023-04-05T21:05:34Z 2023-04-05T21:05:34Z CONTRIBUTOR

CI says these are the tests we'd need to fix:

Chiming in from the pandas side on those failures: I think they are all expected (see https://pandas.pydata.org/docs/whatsnew/v2.0.0.html), namely:

  • Since pandas now supports s, ms and us numpy datetime resolutions, I'm guessing that's why test_to_datetimeindex_out_of_range and test_should_cftime_be_used_source_outside_range are failing
  • For test_sel_float, pd.Index intentionally does not support np.float16 as a dtype anymore (we never had an indexing engine for this dtype)
  • For test_maybe_coerce_to_str, this might be expected too, as Index dtypes can hold np.int32 types now (the first two changes are sketched after this list)
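
A quick sketch of the first two changes, assuming pandas >= 2 (both snippets behave differently, or raise, on pandas 1.x):

```python
import numpy as np
import pandas as pd

# pandas 2 keeps non-nanosecond resolutions instead of coercing to ns,
# so dates far outside the ns range no longer overflow
idx = pd.DatetimeIndex(np.array(["1001-01-01"], dtype="datetime64[s]"))
print(idx.dtype)  # datetime64[s]

# pandas 2 rejects float16 as an Index dtype outright
try:
    pd.Index(np.array([1.0], dtype=np.float16))
except NotImplementedError:
    print("float16 indexes are not supported")
```
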
{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1496186966 https://github.com/pydata/xarray/issues/7716#issuecomment-1496186966 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZLgBW ocefpaf 950575 2023-04-04T15:30:37Z 2023-04-04T15:30:37Z CONTRIBUTOR

@dcherian do you mind taking a look at https://github.com/conda-forge/conda-forge-repodata-patches-feedstock/pull/426? Please check the versions patched and the applied patch! Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1496107675 https://github.com/pydata/xarray/issues/7716#issuecomment-1496107675 https://api.github.com/repos/pydata/xarray/issues/7716 IC_kwDOAMm_X85ZLMqb ocefpaf 950575 2023-04-04T14:48:55Z 2023-04-04T14:48:55Z CONTRIBUTOR

We need to do a repodata patch for the current xarray. I'll get to it soon.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  bad conda solve with pandas 2 1654022522
1494686296 https://github.com/pydata/xarray/pull/7681#issuecomment-1494686296 https://api.github.com/repos/pydata/xarray/issues/7681 IC_kwDOAMm_X85ZFxpY harshitha1201 97012127 2023-04-03T17:09:27Z 2023-04-03T17:09:27Z CONTRIBUTOR

@TomNicholas please review

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  restructure the contributing guide 1641188400
1494684558 https://github.com/pydata/xarray/pull/7694#issuecomment-1494684558 https://api.github.com/repos/pydata/xarray/issues/7694 IC_kwDOAMm_X85ZFxOO harshitha1201 97012127 2023-04-03T17:08:13Z 2023-04-03T17:08:13Z CONTRIBUTOR

Thank you, @TomNicholas and @headtr1ck, for updating the commit!!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  How xarray handles missing values 1644759739
1494560788 https://github.com/pydata/xarray/issues/7079#issuecomment-1494560788 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85ZFTAU ocefpaf 950575 2023-04-03T15:44:18Z 2023-04-03T15:44:18Z CONTRIBUTOR

@kthyng those files are on a remote server, so this may not be the segfault from the original issue here. It may be a server that is not happy with parallel access. Can you try that with local files?

PS: you can also try with `netcdf4<1.6.1`; if that also fails, it is most likely the server rather than the issue here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1491747796 https://github.com/pydata/xarray/issues/7701#issuecomment-1491747796 https://api.github.com/repos/pydata/xarray/issues/7701 IC_kwDOAMm_X85Y6kPU veenstrajelmer 60435591 2023-03-31T11:03:36Z 2023-04-03T07:36:33Z CONTRIBUTOR

@headtr1ck I just discovered that it is not per se a difference between floats and DataArrays; it has to do with the creation of the new dimension (plipoints in this case). I have updated the MCVE in https://github.com/pydata/xarray/issues/7701#issue-1647883619.
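
A small sketch of the behaviour in question: when the interpolation targets are DataArrays sharing a new dimension, `interp` creates that dimension instead of taking the outer product (the coordinate values here are made up; `plipoints` mirrors the issue's naming, and scipy is required for `interp`):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(12.0).reshape(3, 4),
    coords={"x": [0, 1, 2], "y": [0, 1, 2, 3]},
    dims=("x", "y"),
)

x_new = xr.DataArray([0.5, 1.5], dims="plipoints")
y_new = xr.DataArray([0.5, 2.5], dims="plipoints")
out = da.interp(x=x_new, y=y_new)
print(out.dims)  # ('plipoints',)
```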

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Recently introduced different behaviour of da.interp() when using floats vs DataArrays with new dim 1647883619
1493492180 https://github.com/pydata/xarray/pull/7706#issuecomment-1493492180 https://api.github.com/repos/pydata/xarray/issues/7706 IC_kwDOAMm_X85ZBOHU nishtha981 92522516 2023-04-03T00:50:13Z 2023-04-03T00:50:13Z CONTRIBUTOR

Oh, you can add an entry in whats-new if you want!

Hey! @headtr1ck I've added the changes to the what's new file. Please do review it! Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add: Adds a config.yml file for welcome-bot 1650309361
1493382649 https://github.com/pydata/xarray/pull/7706#issuecomment-1493382649 https://api.github.com/repos/pydata/xarray/issues/7706 IC_kwDOAMm_X85ZAzX5 nishtha981 92522516 2023-04-02T16:16:26Z 2023-04-02T16:16:26Z CONTRIBUTOR

@headtr1ck I tried out the bot on a repo of mine. It worked there. I then changed the messages on the bot.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add: Adds a config.yml file for welcome-bot 1650309361
1493380177 https://github.com/pydata/xarray/pull/7681#issuecomment-1493380177 https://api.github.com/repos/pydata/xarray/issues/7681 IC_kwDOAMm_X85ZAyxR harshitha1201 97012127 2023-04-02T16:04:19Z 2023-04-02T16:04:19Z CONTRIBUTOR

@headtr1ck I have made some additions, and some deletions too, to the contributing guide. Please let me know if any changes are needed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  restructure the contributing guide 1641188400

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);