
issue_comments


221 rows where user = 13301940 sorted by updated_at descending


issue >30

  • 📚 New theme & rearrangement of the docs 14
  • Migrate CI from azure pipelines to GitHub Actions 6
  • release v0.18.0 6
  • Add entrypoint for plotting backends 5
  • ENH: Compute hash of xarray objects 5
  • Bump pre-commit/action from v2.0.2 to v2.0.3 5
  • Code cleanup 5
  • Raise an informative error message when object array has mixed types 4
  • Add GitHub action for publishing artifacts to PyPI 4
  • Fix bulleted list indentation in docstrings 4
  • Bump mamba-org/provision-with-micromamba from 12 to 13 4
  • apply_ufunc(dask='parallelized') with multiple outputs 3
  • Documentation improvements 3
  • Harmonize `FillValue` and `missing_value` during encoding and decoding steps 3
  • clean up upstream-dev CI 3
  • Trigger upstream CI on cron schedule (by default) 3
  • Fix lag in Jupyter caused by CSS in `_repr_html_` 3
  • Bump actions/github-script from v3 to v4.0.2 3
  • Parametrize test for __setitem__ (for dask array) 3
  • docs on specifying chunks in to_zarr encoding arg 3
  • Temporarily import `loop_in_thread` fixture from `distributed` 3
  • xarray contrib module 2
  • read ncml files to create multifile datasets 2
  • Expose use_cftime option in open_zarr #2886 2
  • Sync with latest version of cftime (v1.0.4) 2
  • Timedelta dt accessor does not work 2
  • cache rasterio example files 2
  • fix matplotlib errors for single level discrete colormaps 2
  • Add GH action for running tests against upstream dev 2
  • scheduled upstream-dev CI is skipped 2
  • …

user 1

  • andersy005 · 221

author_association 1

  • MEMBER 221
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1575671448 https://github.com/pydata/xarray/pull/7889#issuecomment-1575671448 https://api.github.com/repos/pydata/xarray/issues/7889 IC_kwDOAMm_X85d6taY andersy005 13301940 2023-06-04T18:46:09Z 2023-06-04T18:46:09Z MEMBER

Thank you @keewis

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  retire the TestPyPI workflow 1738586208
1503928134 https://github.com/pydata/xarray/issues/7744#issuecomment-1503928134 https://api.github.com/repos/pydata/xarray/issues/7744 IC_kwDOAMm_X85ZpB9G andersy005 13301940 2023-04-11T18:53:27Z 2023-04-11T18:53:27Z MEMBER

thank you for your patience, @igibek! i've enabled the private vulnerability reporting for the repo.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reporting a vulnerability 1660741072
1320994455 https://github.com/pydata/xarray/issues/2368#issuecomment-1320994455 https://api.github.com/repos/pydata/xarray/issues/2368 IC_kwDOAMm_X85OvMaX andersy005 13301940 2022-11-19T23:53:57Z 2022-11-19T23:54:43Z MEMBER

@maxaragon, i'm curious. what version of xarray/netcdf4 are you using? i'm asking because this appears to be working fine on my end

```python
In [1]: import xarray as xr

In [2]: ds = xr.open_dataset("20200825_hyytiala_icon-iglo-12-23.nc")

In [3]: ds
Out[3]:
<xarray.Dataset>
Dimensions:                        (time: 25, level: 90, flux_level: 91, frequency: 2, soil_level: 9)
Coordinates:
  * time                           (time) datetime64[ns] 2020-08-25 ... 2020-0...
  * level                          (level) float32 90.0 89.0 88.0 ... 3.0 2.0 1.0
  * flux_level                     (flux_level) float32 91.0 90.0 ... 2.0 1.0
  * frequency                      (frequency) float32 34.96 94.0
Dimensions without coordinates: soil_level
Data variables: (12/62)
    latitude                       float32 ...
    longitude                      float32 ...
    altitude                       float32 ...
    horizontal_resolution          float32 ...
    forecast_time                  (time) timedelta64[ns] ...
    height                         (time, level) float32 ...
    ...                             ...
    gas_atten                      (frequency, time, level) float32 ...
    specific_gas_atten             (frequency, time, level) float32 ...
    specific_saturated_gas_atten   (frequency, time, level) float32 ...
    specific_dry_gas_atten         (frequency, time, level) float32 ...
    K2                             (frequency, time, level) float32 ...
    specific_liquid_atten          (frequency, time, level) float32 ...
Attributes: (12/13)
    institution:  Max Planck Institute for Meteorology/Deutscher Wette...
    references:   see MPIM/DWD publications
    source:       svn://xceh.dwd.de/for0adm/SVN_icon/tags/icon-2.6.0-n...
    Conventions:  CF-1.7
    location:     hyytiala
    file_uuid:    ace15f8ba477497c8d1dd0833b5ac674
    ...           ...
    year:         2020
    month:        08
    day:          25
    history:      2021-01-25 08:24:29 - File content harmonized by the...
    title:        Model file from Hyytiala
    pid:          https://hdl.handle.net/21.12132/1.ace15f8ba477497c
```

here are the versions i'm using

```python
In [4]: xr.show_versions()
/Users/andersy005/mambaforge/envs/playground/lib/python3.10/site-packages/_distutils_hack/init.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")

INSTALLED VERSIONS
------------------
commit: None
python: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:41:22) [Clang 13.0.1 ]
python-bits: 64
OS: Darwin
OS-release: 22.1.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1

xarray: 2022.10.0
pandas: 1.5.1
numpy: 1.23.4
scipy: 1.9.3
netCDF4: 1.6.1
pydap: installed
h5netcdf: 1.0.2
h5py: 3.7.0
Nio: None
zarr: 2.13.3
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2022.10.2
distributed: 2022.10.2
matplotlib: 3.6.1
cartopy: None
seaborn: 0.12.0
numbagg: None
fsspec: 2022.10.0
cupy: None
pint: 0.20.1
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.5.0
pip: 22.3
conda: None
pytest: None
IPython: 8.6.0
sphinx: None
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Let's list all the netCDF files that xarray can't open 350899839
1257023353 https://github.com/pydata/xarray/pull/6991#issuecomment-1257023353 https://api.github.com/repos/pydata/xarray/issues/6991 IC_kwDOAMm_X85K7Kd5 andersy005 13301940 2022-09-24T17:25:31Z 2022-09-24T17:25:31Z MEMBER

@dependabot rebase

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 12 to 13 1362048785
1253024262 https://github.com/pydata/xarray/pull/6991#issuecomment-1253024262 https://api.github.com/repos/pydata/xarray/issues/6991 IC_kwDOAMm_X85Kr6IG andersy005 13301940 2022-09-20T23:44:01Z 2022-09-20T23:44:01Z MEMBER

@dependabot squash and merge

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 12 to 13 1362048785
1253024007 https://github.com/pydata/xarray/pull/6991#issuecomment-1253024007 https://api.github.com/repos/pydata/xarray/issues/6991 IC_kwDOAMm_X85Kr6EH andersy005 13301940 2022-09-20T23:43:34Z 2022-09-20T23:43:34Z MEMBER

@dependabot rebase

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 12 to 13 1362048785
1242622466 https://github.com/pydata/xarray/pull/6979#issuecomment-1242622466 https://api.github.com/repos/pydata/xarray/issues/6979 IC_kwDOAMm_X85KEOoC andersy005 13301940 2022-09-10T04:17:31Z 2022-09-10T04:17:31Z MEMBER

thank you for the clarification, @mwtoews

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove unnecessary build dependencies, use build defaults, strict twine check 1359451944
1242203788 https://github.com/pydata/xarray/pull/6979#issuecomment-1242203788 https://api.github.com/repos/pydata/xarray/issues/6979 IC_kwDOAMm_X85KCoaM andersy005 13301940 2022-09-09T16:36:23Z 2022-09-09T16:38:01Z MEMBER

Remove check-manifest, since it isn't used

Could we keep check-manifest and add a proper check instead of removing it entirely? It's worth ensuring we are shipping everything and check-manifest would help catch missing files, etc...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove unnecessary build dependencies, use build defaults, strict twine check 1359451944
1239629310 https://github.com/pydata/xarray/pull/6991#issuecomment-1239629310 https://api.github.com/repos/pydata/xarray/issues/6991 IC_kwDOAMm_X85J4z3- andersy005 13301940 2022-09-07T16:37:52Z 2022-09-07T16:37:52Z MEMBER

@dependabot rebase

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump mamba-org/provision-with-micromamba from 12 to 13 1362048785
1228673587 https://github.com/pydata/xarray/issues/6957#issuecomment-1228673587 https://api.github.com/repos/pydata/xarray/issues/6957 IC_kwDOAMm_X85JPBIz andersy005 13301940 2022-08-26T16:03:14Z 2022-08-26T16:03:14Z MEMBER

Duplicate of:

  • https://github.com/pydata/xarray/issues/6818

which seems to have been addressed (if you use the main branch).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  import xarray fails with numpy 1.20.x 1352445128
1216913006 https://github.com/pydata/xarray/issues/6920#issuecomment-1216913006 https://api.github.com/repos/pydata/xarray/issues/6920 IC_kwDOAMm_X85IiJ5u andersy005 13301940 2022-08-16T17:05:24Z 2022-08-16T17:05:24Z MEMBER

Great... keep us posted once you have a working solution.

I'm going to convert this issue into a discussion instead.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Writing a netCDF file is slow 1340474484
1216820021 https://github.com/pydata/xarray/issues/6920#issuecomment-1216820021 https://api.github.com/repos/pydata/xarray/issues/6920 IC_kwDOAMm_X85IhzM1 andersy005 13301940 2022-08-16T15:46:44Z 2022-08-16T15:46:44Z MEMBER

@lassiterdc, writing a large, chunked xarray dataset to a netCDF file is always a challenge and quite slow, since the write is serial. However, you could take advantage of the xr.save_mfdataset() function to write to multiple netCDF files. Here's a good example that showcases how to achieve this: https://ncar.github.io/esds/posts/2020/writing-multiple-netcdf-files-in-parallel-with-xarray-and-dask
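For reference, here's a minimal sketch of the save_mfdataset() pattern described in that post (the tutorial dataset, the chunking, and the per-year file naming are illustrative assumptions, not from the original thread):

```python
import xarray as xr

# Hypothetical example dataset; substitute your own chunked dataset.
ds = xr.tutorial.open_dataset("air_temperature").chunk({"time": 1000})

# Split the dataset into one group per year and write each group to its own file.
years, datasets = zip(*ds.groupby("time.year"))
paths = [f"air_temperature_{year}.nc" for year in years]
xr.save_mfdataset(datasets, paths)
```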

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Writing a netCDF file is slow 1340474484
1207118357 https://github.com/pydata/xarray/pull/6884#issuecomment-1207118357 https://api.github.com/repos/pydata/xarray/issues/6884 IC_kwDOAMm_X85H8yoV andersy005 13301940 2022-08-06T01:25:38Z 2022-08-06T01:25:38Z MEMBER

Thank you for this fix, @jrbourbeau!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Temporarily import `loop_in_thread` fixture from `distributed` 1330548427
1207101255 https://github.com/pydata/xarray/pull/6884#issuecomment-1207101255 https://api.github.com/repos/pydata/xarray/issues/6884 IC_kwDOAMm_X85H8udH andersy005 13301940 2022-08-06T00:19:08Z 2022-08-06T00:19:08Z MEMBER

I opened an issue just in case this is something that can be fixed upstream.

  • https://github.com/numba/numba.github.com/issues/10
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Temporarily import `loop_in_thread` fixture from `distributed` 1330548427
1207094758 https://github.com/pydata/xarray/pull/6884#issuecomment-1207094758 https://api.github.com/repos/pydata/xarray/issues/6884 IC_kwDOAMm_X85H8s3m andersy005 13301940 2022-08-05T23:58:55Z 2022-08-06T00:04:16Z MEMBER

It's not immediately clear to me why the docs build is failing. Has that been happening elsewhere?

When building docs on RTD, we turn Sphinx warnings into errors. Sphinx is throwing a warning that is then turned into an error after docs are built, as it appears that the SSL certificate for numba.pydata.org has expired or is misconfigured.

```bash
loading intersphinx inventory from https://rasterio.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://sparse.pydata.org/en/latest/objects.inv...
WARNING: failed to reach any of the inventories with the following issues:
intersphinx inventory 'https://numba.pydata.org/numba-doc/latest/objects.inv' not fetchable due to <class 'requests.exceptions.SSLError'>: HTTPSConnectionPool(host='numba.pydata.org', port=443): Max retries exceeded with url: /numba-doc/latest/objects.inv (Caused by SSLError(CertificateError("hostname 'numba.pydata.org' doesn't match either of '*.github.com', 'www.github.com', 'github.io', 'github.com', '*.github.io', 'githubusercontent.com', '*.githubusercontent.com'")))
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Temporarily import `loop_in_thread` fixture from `distributed` 1330548427
1200774011 https://github.com/pydata/xarray/pull/6855#issuecomment-1200774011 https://api.github.com/repos/pydata/xarray/issues/6855 IC_kwDOAMm_X85Hklt7 andersy005 13301940 2022-08-01T06:35:30Z 2022-08-01T06:35:30Z MEMBER

@dependabot squash and merge

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump pypa/gh-action-pypi-publish from 1.5.0 to 1.5.1 1323874145
1188550877 https://github.com/pydata/xarray/issues/6807#issuecomment-1188550877 https://api.github.com/repos/pydata/xarray/issues/6807 IC_kwDOAMm_X85G19jd andersy005 13301940 2022-07-19T03:22:07Z 2022-07-19T03:22:07Z MEMBER

at SciPy i learned of fugue which tries to provide a unified API for distributed DataFrames on top of Spark and Dask. it could be a great source of inspiration.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 1
}
  Alternative parallel execution frameworks in xarray 1308715638
1187922418 https://github.com/pydata/xarray/issues/6805#issuecomment-1187922418 https://api.github.com/repos/pydata/xarray/issues/6805 IC_kwDOAMm_X85GzkHy andersy005 13301940 2022-07-18T17:47:18Z 2022-07-18T17:47:18Z MEMBER

@lassiterdc, in your example above, f_in_ncs is just a string and you are passing this string to xr.open_mfdataset() which i don't think knows what to do with it.

```python
f_in_ncs = "data/"
```

have you tried retrieving the list of all files under "data/" via the glob module?

```python
import glob

f_in_ncs = sorted(glob.glob("data/*.nc"))

mf_ds = xr.open_mfdataset(
    f_in_ncs,
    concat_dim="time",
    chunks={'outlat': 3500, 'outlon': 7000, 'time': 50},
    combine="nested",
    engine='netcdf4',
)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  PermissionError: [Errno 13] Permission denied 1308176241
1186213606 https://github.com/pydata/xarray/pull/6525#issuecomment-1186213606 https://api.github.com/repos/pydata/xarray/issues/6525 IC_kwDOAMm_X85GtC7m andersy005 13301940 2022-07-16T15:03:01Z 2022-07-16T15:03:01Z MEMBER

@dcherian, are you still working on this or can we merge it?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add cumsum to DatasetGroupBy 1217509109
1176866026 https://github.com/pydata/xarray/issues/2697#issuecomment-1176866026 https://api.github.com/repos/pydata/xarray/issues/2697 IC_kwDOAMm_X85GJYzq andersy005 13301940 2022-07-06T23:53:55Z 2022-07-06T23:53:55Z MEMBER

Ok, another option would be to add that to xncml

@andersy005 What do you think ?

@huard, I haven't touched the codebase in that repo for three years 😃... So, I'm happy to transfer the xncml repo to the xarray-contrib org and give access to you and anyone else who wants it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  read ncml files to create multifile datasets 401874795
1170444562 https://github.com/pydata/xarray/pull/6721#issuecomment-1170444562 https://api.github.com/repos/pydata/xarray/issues/6721 IC_kwDOAMm_X85Fw5ES andersy005 13301940 2022-06-29T20:06:30Z 2022-06-29T20:06:30Z MEMBER

Thank you, @dcherian!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix .chunks loading lazy backed array data 1284071791
1170444112 https://github.com/pydata/xarray/pull/6702#issuecomment-1170444112 https://api.github.com/repos/pydata/xarray/issues/6702 IC_kwDOAMm_X85Fw49Q andersy005 13301940 2022-06-29T20:05:58Z 2022-06-29T20:05:58Z MEMBER

Thank you, @headtr1ck!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Typing of GroupBy & Co. 1275262097
1165862649 https://github.com/pydata/xarray/pull/6542#issuecomment-1165862649 https://api.github.com/repos/pydata/xarray/issues/6542 IC_kwDOAMm_X85Ffab5 andersy005 13301940 2022-06-24T19:18:08Z 2022-06-24T19:18:08Z MEMBER

my bad :) thank you for reporting this in #6720. I'm going to look into what's going on

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  docs on specifying chunks in to_zarr encoding arg 1221393104
1164899177 https://github.com/pydata/xarray/pull/6542#issuecomment-1164899177 https://api.github.com/repos/pydata/xarray/issues/6542 IC_kwDOAMm_X85FbvNp andersy005 13301940 2022-06-23T21:31:01Z 2022-06-23T21:31:01Z MEMBER

Thank you, @delgadom!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  docs on specifying chunks in to_zarr encoding arg 1221393104
1155852498 https://github.com/pydata/xarray/issues/6698#issuecomment-1155852498 https://api.github.com/repos/pydata/xarray/issues/6698 IC_kwDOAMm_X85E5OjS andersy005 13301940 2022-06-15T00:41:53Z 2022-06-15T00:41:53Z MEMBER

The failures appear to be caused by recent changes introduced in

  • https://github.com/pandas-dev/pandas/pull/47338

```python
ImportError while importing test module '/home/runner/work/xarray/xarray/xarray/tests/test_backends.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../micromamba/envs/xarray-tests/lib/python3.10/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
xarray/tests/test_backends.py:86: in <module>
    from .test_dataset import (
xarray/tests/test_dataset.py:14: in <module>
    from pandas.core.computation.ops import UndefinedVariableError
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1271501159
1151341511 https://github.com/pydata/xarray/issues/6680#issuecomment-1151341511 https://api.github.com/repos/pydata/xarray/issues/6680 IC_kwDOAMm_X85EoBPH andersy005 13301940 2022-06-09T16:20:49Z 2022-06-09T16:20:49Z MEMBER

@mjwillson, have you looked at https://github.com/carbonplan/xarray-schema? xarray-schema provides some of the functionality you are looking for...

  • Related to https://github.com/pydata/xarray/issues/1900
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Datatype for a 'shape specification' of a Dataset / DataArray 1266308714
1151319409 https://github.com/pydata/xarray/issues/6681#issuecomment-1151319409 https://api.github.com/repos/pydata/xarray/issues/6681 IC_kwDOAMm_X85En71x andersy005 13301940 2022-06-09T15:58:38Z 2022-06-09T15:58:38Z MEMBER

@PovedaGerman, this doesn't appear to be an Xarray issue. Do you mind asking on the xinvert issue tracker instead? https://github.com/miniufo/xinvert

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Trouble with Gill_Matsuno 1266329676
1147776998 https://github.com/pydata/xarray/pull/6660#issuecomment-1147776998 https://api.github.com/repos/pydata/xarray/issues/6660 IC_kwDOAMm_X85Eaa_m andersy005 13301940 2022-06-06T18:52:32Z 2022-06-06T18:52:32Z MEMBER

Maybe we can merge those two in a future PR?

👍🏽

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  upload wheels from `main` to TestPyPI 1259827097
1140121929 https://github.com/pydata/xarray/issues/6649#issuecomment-1140121929 https://api.github.com/repos/pydata/xarray/issues/6649 IC_kwDOAMm_X85D9OFJ andersy005 13301940 2022-05-28T00:48:04Z 2022-05-28T00:48:04Z MEMBER

The CI failures appear to be related to https://github.com/dask/dask/issues/9137

```bash
file /home/runner/work/xarray/xarray/xarray/tests/test_distributed.py, line 82
  @pytest.mark.parametrize("engine,nc_format", ENGINES_AND_FORMATS)
  def test_dask_distributed_netcdf_roundtrip(
file /usr/share/miniconda/envs/xarray-tests/lib/python3.10/site-packages/distributed/utils_test.py, line 138
  @pytest.fixture
  def loop(cleanup):
E       fixture 'cleanup' not found

        available fixtures: add_standard_imports, backend, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, doctest_namespace, loop, monkeypatch, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, testrun_uid, tmp_netcdf_filename, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, worker_id
        use 'pytest --fixtures [testpath]' for help on them.
```

Ccing @jrbourbeau for visibility upstream...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1251382769
1139085703 https://github.com/pydata/xarray/issues/6551#issuecomment-1139085703 https://api.github.com/repos/pydata/xarray/issues/6551 IC_kwDOAMm_X85D5RGH andersy005 13301940 2022-05-26T21:40:00Z 2022-05-26T21:40:00Z MEMBER

It appears to have been fixed in

  • https://github.com/pydata/xarray/pull/6581

Closing this. Feel free to reopen if I missed something....

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Mypy workflow failing 1221918917
1126083413 https://github.com/pydata/xarray/issues/659#issuecomment-1126083413 https://api.github.com/repos/pydata/xarray/issues/659 IC_kwDOAMm_X85DHqtV andersy005 13301940 2022-05-13T13:55:20Z 2022-05-13T13:55:20Z MEMBER

#5734 has greatly improved the performance. Fantastic work @dcherian 👏🏽

```python
In [13]: import xarray as xr, pandas as pd, numpy as np

In [14]: ds = xr.Dataset({"a": xr.DataArray(np.r_[np.arange(500.), np.arange(500.)]),
    ...:                  "b": xr.DataArray(np.arange(1000.))})

In [15]: ds
Out[15]:
<xarray.Dataset>
Dimensions:  (dim_0: 1000)
Dimensions without coordinates: dim_0
Data variables:
    a        (dim_0) float64 0.0 1.0 2.0 3.0 4.0 ... 496.0 497.0 498.0 499.0
    b        (dim_0) float64 0.0 1.0 2.0 3.0 4.0 ... 996.0 997.0 998.0 999.0
```

```python
In [16]: xr.set_options(use_flox=True)
Out[16]: <xarray.core.options.set_options at 0x104de21a0>

In [17]: %%timeit
    ...: ds.groupby("a").mean()
    ...:
    ...:
1.5 ms ± 3.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [18]: xr.set_options(use_flox=False)
Out[18]: <xarray.core.options.set_options at 0x144382350>

In [19]: %%timeit
    ...: ds.groupby("a").mean()
    ...:
    ...:
94 ms ± 715 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

{
    "total_count": 4,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 4,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  groupby very slow compared to pandas 117039129
1119770101 https://github.com/pydata/xarray/issues/1346#issuecomment-1119770101 https://api.github.com/repos/pydata/xarray/issues/1346 IC_kwDOAMm_X85CvlX1 andersy005 13301940 2022-05-06T16:01:44Z 2022-05-06T16:01:44Z MEMBER
  • https://github.com/pydata/xarray/pull/5560 introduced "use_bottleneck" option to disable/enable using bottleneck. can we close this issue or keep it open?
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bottleneck : Wrong mean for float32 array 218459353
1118110207 https://github.com/pydata/xarray/pull/6542#issuecomment-1118110207 https://api.github.com/repos/pydata/xarray/issues/6542 IC_kwDOAMm_X85CpQH_ andersy005 13301940 2022-05-05T02:37:36Z 2022-05-05T02:37:36Z MEMBER

ImportError: Pandas requires version '0.15.1' or newer of 'xarray' (version '0.1.dev1+gfdf7303' currently installed).

looking at the reported xarray version, i'm curious... 🧐 are you using a shallow git clone of xarray? I'm able to reproduce the version issue via these steps:

```bash
git clone --depth 1 git@github.com:pydata/xarray.git
cd xarray
python -m pip install -e .
```

```bash
conda list xarray
# packages in environment at /Users/andersy005/mambaforge/envs/test:
#
# Name      Version             Build    Channel
xarray      0.1.dev1+g126051f   dev_0    <develop>
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  docs on specifying chunks in to_zarr encoding arg 1221393104
1102895190 https://github.com/pydata/xarray/issues/4139#issuecomment-1102895190 https://api.github.com/repos/pydata/xarray/issues/4139 IC_kwDOAMm_X85BvNhW andersy005 13301940 2022-04-19T17:15:20Z 2022-04-19T17:15:20Z MEMBER

this appears to have been fixed in the latest alpha release of rasterio v1.3a3 (which isn't available on conda-forge but is available on PyPI) via fsspec

  • https://github.com/rasterio/rasterio/issues/2360
  • https://github.com/corteva/rioxarray/issues/440

```python
In [13]: import xarray as xr

In [14]: import fsspec

In [15]: import rasterio

In [16]: rasterio.__version__
Out[16]: '1.3a3'

In [17]: with fsspec.open('2d_test.tiff') as f:
    ...:     ds = xr.open_dataset(f, engine='rasterio')
    ...:

In [18]: ds
Out[18]:
<xarray.Dataset>
Dimensions:      (band: 1, y: 10, x: 10)
Coordinates:
  * band         (band) int64 1
    xc           (y, x) float64 ...
    yc           (y, x) float64 ...
    spatial_ref  int64 ...
Dimensions without coordinates: y, x
Data variables:
    band_data    (band, y, x) float64 ...
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [Feature request] Support file-like objects in open_rasterio 636449225
1060674372 https://github.com/pydata/xarray/pull/6337#issuecomment-1060674372 https://api.github.com/repos/pydata/xarray/issues/6337 IC_kwDOAMm_X84_OJtE andersy005 13301940 2022-03-07T13:14:38Z 2022-03-07T13:14:38Z MEMBER

@dependabot squash and merge

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump actions/checkout from 2 to 3 1160948208
1060674108 https://github.com/pydata/xarray/pull/6338#issuecomment-1060674108 https://api.github.com/repos/pydata/xarray/issues/6338 IC_kwDOAMm_X84_OJo8 andersy005 13301940 2022-03-07T13:14:22Z 2022-03-07T13:14:22Z MEMBER

@dependabot squash and merge

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump actions/setup-python from 2 to 3 1160948233
1055595206 https://github.com/pydata/xarray/pull/6237#issuecomment-1055595206 https://api.github.com/repos/pydata/xarray/issues/6237 IC_kwDOAMm_X84-6xrG andersy005 13301940 2022-03-01T16:00:25Z 2022-03-01T16:00:25Z MEMBER

Thank you, @stanwest!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Enable running sphinx-build on Windows 1124431593
1039149253 https://github.com/pydata/xarray/issues/6272#issuecomment-1039149253 https://api.github.com/repos/pydata/xarray/issues/6272 IC_kwDOAMm_X8498CjF andersy005 13301940 2022-02-14T14:26:44Z 2022-02-14T14:26:44Z MEMBER

@ArcticSnow,

The value z is a float32 which varies from 2000 to -2000 along the time dimension. After being saved in the subsample, z is still a float32 but the values that are less than -1000 are being offset by 44500.

You may have scale_factor and add_offset attributes in your dataset.

However, if I do (ds.z.isel(latitude[1,2,3], longitude=[3,4,5])*1).to_netcdf('sub.nc')

There's a chance xarray is discarding the attributes/encoding during the ds.z.isel(latitude=[1,2,3], longitude=[3,4,5])*1 step and, as a result, z ends up not being encoded during the to_netcdf() call.

What's the output of the following?

```python
print(ds.z.encoding)
print(ds.z.attrs)
```
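If the encoding is indeed being dropped, one possible workaround (just a sketch, assuming ds.z carries scale_factor/add_offset in its encoding as suspected above) is to re-attach the original encoding before writing:

```python
# Arithmetic like `* 1` can drop attrs/encoding from the result.
subset = ds.z.isel(latitude=[1, 2, 3], longitude=[3, 4, 5]) * 1

# Copy the original encoding (scale_factor, add_offset, dtype, _FillValue, ...)
# back onto the derived variable so to_netcdf() packs the data as before.
subset.encoding = ds.z.encoding
subset.to_netcdf('sub.nc')
```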

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ds.to_netcdf() changes values of variable 1136315478
1034408563 https://github.com/pydata/xarray/pull/5692#issuecomment-1034408563 https://api.github.com/repos/pydata/xarray/issues/5692 IC_kwDOAMm_X849p9Jz andersy005 13301940 2022-02-10T01:58:14Z 2022-02-10T01:58:35Z MEMBER

We'll also need to keep an eye on the asv benchmarks to ensure that there's no major regression in performance (I haven't checked yet).

I enabled the asv-benchmarks workflow via the run-benchmark label. Feel free to turn this off (by removing the label) if the PR isn't ready :)

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Explicit indexes 966983801
1029124001 https://github.com/pydata/xarray/issues/6222#issuecomment-1029124001 https://api.github.com/repos/pydata/xarray/issues/6222 IC_kwDOAMm_X849Vy-h andersy005 13301940 2022-02-03T15:45:17Z 2022-02-03T15:45:17Z MEMBER

Closed by #6224

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  test packaging & distribution 1119738354
1026050538 https://github.com/pydata/xarray/issues/6222#issuecomment-1026050538 https://api.github.com/repos/pydata/xarray/issues/6222 IC_kwDOAMm_X849KEnq andersy005 13301940 2022-01-31T17:55:52Z 2022-01-31T17:57:26Z MEMBER

after twine check where we pip install and then try to import xarray. Alternatively we could have another test config in our regular CI to build + import.

it's my understanding that we're doing this here:

https://github.com/pydata/xarray/blob/b09de8195a9e22dd35d1b7ed608ea15dad0806ef/.github/workflows/pypi-release.yaml#L74-L80

However, it appears that the three seconds in sleep 3 aren't enough to guarantee that we get the latest metadata from test.pypi.org... For instance, the latest build ended up pulling v0.20.2 instead of v0.21.0 (relevant run)

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  test packaging & distribution 1119738354
1025949400 https://github.com/pydata/xarray/issues/6218#issuecomment-1025949400 https://api.github.com/repos/pydata/xarray/issues/6218 IC_kwDOAMm_X849Jr7Y andersy005 13301940 2022-01-31T16:12:45Z 2022-01-31T16:12:45Z MEMBER

@haritha1022, thank you for the report!

i am unable to remove the color bar. i want to remove the color bar

The issue lies in this line:

```python
z.plot.pcolormesh(cmap="turbo", vmin=7500, vmax=8500, ax=ax1, cbar=False)
```

The right argument name is add_colorbar instead of cbar:
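For instance, reusing the arguments from the snippet above (a sketch; z and ax1 come from the reported code):

```python
# Same call as above, but with the correct keyword to suppress the colorbar.
z.plot.pcolormesh(cmap="turbo", vmin=7500, vmax=8500, ax=ax1, add_colorbar=False)
```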

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  raise AttributeError(f"{type(self).__name__!r} object " AttributeError: 'QuadMesh' object has no property 'cbar'  1119570923
1023887027 https://github.com/pydata/xarray/issues/6202#issuecomment-1023887027 https://api.github.com/repos/pydata/xarray/issues/6202 IC_kwDOAMm_X849B0az andersy005 13301940 2022-01-28T04:57:09Z 2022-01-28T05:02:08Z MEMBER

Under "Creating a Data Array" and next to the subitem "coords", it is written, that you can use a dictionary like {'coo1': [1,2,3], 'coo2': ['a', 'b', 'c'], ...}, so you can later skip the "dims"

Are you referring to this page? Per this documentation page, dims are only inferred from coords when coords is a list of tuples.

dims: a list of dimension names. If omitted and coords is a list of tuples, dimension names are taken from coords.

So, I'd say that the ValueError you get when using coords defined in a dictionary without specifying dims isn't really a bug.
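To illustrate the difference (a minimal sketch, not taken from the original report):

```python
import numpy as np
import xarray as xr

data = np.arange(3)

# coords as a list of tuples: the dimension name is inferred from the tuples.
da1 = xr.DataArray(data, coords=[("x", [10, 20, 30])])

# coords as a dictionary: dims must be passed explicitly,
# otherwise constructing the DataArray raises a ValueError.
da2 = xr.DataArray(data, coords={"x": [10, 20, 30]}, dims="x")
```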

But when I tried this, I get the message, that using a dictionary for coords is outdated.

What do you think it should return?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [Bug]: coords: a list or dictionary of coordinates. , but dictionary is outdated 1116697033
1020572086 https://github.com/pydata/xarray/issues/6188#issuecomment-1020572086 https://api.github.com/repos/pydata/xarray/issues/6188 IC_kwDOAMm_X8481LG2 andersy005 13301940 2022-01-24T21:36:06Z 2022-01-24T21:39:01Z MEMBER

@zxdawn,

Thank you for the thorough, reproducible example.

Per Scipy's documentation, RegularGridInterpolator uses fill_value=None for extrapolation

```python
In [23]: da.interp(x=x, y=y, kwargs={"fill_value": None})
Out[23]:
<xarray.DataArray (z: 3)>
array([-0.26074336,  0.63496063, -0.46643289])
Coordinates:
    x        (z) float64 -0.5 1.5 2.5
    y        (z) float64 0.15 0.25 0.35
Dimensions without coordinates: z
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  extrapolate not working for multi-dimentional data 1112925311
1020519502 https://github.com/pydata/xarray/issues/6176#issuecomment-1020519502 https://api.github.com/repos/pydata/xarray/issues/6176 IC_kwDOAMm_X8480-RO andersy005 13301940 2022-01-24T20:30:05Z 2022-01-24T20:30:05Z MEMBER

Do others have thoughts here? I would support stripping the leading zeros from the MM part of the version string in favor of consistency here.

I'm in favor of a non-zero-padded version for the benefit of having canonical/normalized versions that also match git tags/history

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray versioning to switch to CalVer 1108564253
1013562992 https://github.com/pydata/xarray/pull/5142#issuecomment-1013562992 https://api.github.com/repos/pydata/xarray/issues/5142 IC_kwDOAMm_X848ab5w andersy005 13301940 2022-01-15T00:43:50Z 2022-01-15T00:43:50Z MEMBER

Okay, I believe this is ready. I may have missed some corner cases (I'm happy to address these in separate PRs).

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Trigger CI on push or pull_request but not both 856020853
1008226098 https://github.com/pydata/xarray/issues/2347#issuecomment-1008226098 https://api.github.com/repos/pydata/xarray/issues/2347 IC_kwDOAMm_X848GE8y andersy005 13301940 2022-01-09T04:09:49Z 2022-01-09T04:09:49Z MEMBER

If I'm not mistaken, this appears to have been addressed by https://github.com/pydata/xarray/pull/2659. Should we close this issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Serialization of just coordinates 347962055
1008224592 https://github.com/pydata/xarray/issues/1900#issuecomment-1008224592 https://api.github.com/repos/pydata/xarray/issues/1900 IC_kwDOAMm_X848GElQ andersy005 13301940 2022-01-09T03:56:40Z 2022-01-09T03:56:40Z MEMBER
  • xref the more recent issue: https://github.com/pandera-dev/pandera/issues/705 which aims to implement a pandera.xarray module within pandera
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Representing & checking Dataset schemas  295959111
1008223975 https://github.com/pydata/xarray/issues/5785#issuecomment-1008223975 https://api.github.com/repos/pydata/xarray/issues/5785 IC_kwDOAMm_X848GEbn andersy005 13301940 2022-01-09T03:51:35Z 2022-01-09T03:51:35Z MEMBER

@theobarnhart-USGS, did you find a solution and/or can we close this issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  inconsistent mean computation 992636601
1008223776 https://github.com/pydata/xarray/issues/5790#issuecomment-1008223776 https://api.github.com/repos/pydata/xarray/issues/5790 IC_kwDOAMm_X848GEYg andersy005 13301940 2022-01-09T03:49:32Z 2022-01-09T03:49:32Z MEMBER

@shoyer Tried Dask as you suggested, and it helped significantly! Thanks for the suggestion!

@zachglee, should we close this issue?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  combining 2 arrays with xr.merge() causes temporary spike in memory usage ~3x the combined size of the arrays 995207525
1008222099 https://github.com/pydata/xarray/issues/6091#issuecomment-1008222099 https://api.github.com/repos/pydata/xarray/issues/6091 IC_kwDOAMm_X848GD-T andersy005 13301940 2022-01-09T03:30:55Z 2022-01-09T03:30:55Z MEMBER

Ha, thanks. It makes sense now. Shall we close this?

Great! I'm closing this for the time being...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  uint type data are read as wrong type (float64) 1085619598
1008221879 https://github.com/pydata/xarray/pull/6145#issuecomment-1008221879 https://api.github.com/repos/pydata/xarray/issues/6145 IC_kwDOAMm_X848GD63 andersy005 13301940 2022-01-09T03:27:39Z 2022-01-09T03:27:39Z MEMBER

@mroeschke, could you add an entry to the https://github.com/pydata/xarray/blob/main/doc/whats-new.rst?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove pd.Panel checks 1096802655
998992357 https://github.com/pydata/xarray/issues/4738#issuecomment-998992357 https://api.github.com/repos/pydata/xarray/issues/4738 IC_kwDOAMm_X847i2nl andersy005 13301940 2021-12-21T18:14:15Z 2021-12-21T18:14:15Z MEMBER

Okay... I think the following comment is still valid:

The issue appears to be caused by the coordinates which are used in __dask_tokenize__

It appears that the deterministic behavior of the tokenization process is affected depending on whether the dataset/datarray contains non-dimension coordinates or dimension coordinates

```python
In [2]: ds = xr.tutorial.open_dataset('rasm')
```

```python
In [39]: a = ds.isel(time=0)

In [40]: a
Out[40]:
<xarray.Dataset>
Dimensions:  (y: 205, x: 275)
Coordinates:
    time     object 1980-09-16 12:00:00
    xc       (y, x) float64 189.2 189.4 189.6 189.7 ... 17.65 17.4 17.15 16.91
    yc       (y, x) float64 16.53 16.78 17.02 17.27 ... 28.26 28.01 27.76 27.51
Dimensions without coordinates: y, x
Data variables:
    Tair     (y, x) float64 ...

In [41]: dask.base.tokenize(a) == dask.base.tokenize(a)
Out[41]: True
```

```python
In [42]: b = ds.isel(y=0)

In [43]: b
Out[43]:
<xarray.Dataset>
Dimensions:  (time: 36, x: 275)
Coordinates:
  * time     (time) object 1980-09-16 12:00:00 ... 1983-08-17 00:00:00
    xc       (x) float64 189.2 189.4 189.6 189.7 ... 293.5 293.8 294.0 294.3
    yc       (x) float64 16.53 16.78 17.02 17.27 ... 27.61 27.36 27.12 26.87
Dimensions without coordinates: x
Data variables:
    Tair     (time, x) float64 ...

In [44]: dask.base.tokenize(b) == dask.base.tokenize(b)
Out[44]: False
```

This looks like a bug in my opinion...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: Compute hash of xarray objects 775502974
998948715 https://github.com/pydata/xarray/issues/4738#issuecomment-998948715 https://api.github.com/repos/pydata/xarray/issues/4738 IC_kwDOAMm_X847ir9r andersy005 13301940 2021-12-21T17:06:51Z 2021-12-21T17:11:47Z MEMBER

The issue appears to be caused by the coordinates which are used in __dask_tokenize__

I tried running the reproducer above and things seem to be working fine. I can't for the life of me understand why I got non-deterministic behavior four hours ago :(

```python
In [1]: import dask, xarray as xr

In [2]: ds = xr.tutorial.open_dataset('rasm')

In [3]: dask.base.tokenize(ds) == dask.base.tokenize(ds)
Out[3]: True

In [4]: dask.base.tokenize(ds.Tair._coords) == dask.base.tokenize(ds.Tair._coords)
Out[4]: True
```

```python
In [5]: xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 20:33:18) [Clang 11.1.0 ]
python-bits: 64
OS: Darwin
OS-release: 20.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.20.1
pandas: 1.3.4
numpy: 1.20.3
scipy: 1.7.3
netCDF4: 1.5.8
pydap: None
h5netcdf: 0.11.0
h5py: 3.6.0
Nio: None
zarr: 2.10.3
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.11.2
distributed: 2021.11.2
matplotlib: 3.5.0
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.1
cupy: None
pint: 0.18
sparse: None
setuptools: 59.4.0
pip: 21.3.1
conda: None
pytest: None
IPython: 7.30.0
sphinx: 4.3.1
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: Compute hash of xarray objects 775502974
998789248 https://github.com/pydata/xarray/issues/6091#issuecomment-998789248 https://api.github.com/repos/pydata/xarray/issues/6091 IC_kwDOAMm_X847iFCA andersy005 13301940 2021-12-21T13:41:33Z 2021-12-21T13:42:23Z MEMBER

Note that I can't reproduce it using this example:

I could be wrong but it appears that when you introduce a _FillValue in your dataarray, you end up with the same outcome:

```python
In [53]: import numpy as np
    ...: import xarray as xr
    ...:
    ...: da = xr.DataArray(np.array([1,2,4294967295], dtype='uint')).rename('test_array')

In [56]: da.encoding['_FillValue'] = 4294967295
```

```python
In [62]: da.to_netcdf("test.nc", engine='netcdf4')

In [63]: !ncdump -h test.nc
netcdf test {
dimensions:
        dim_0 = 3 ;
variables:
        uint64 test_array(dim_0) ;
                test_array:_FillValue = 4294967295ULL ;
data:

 test_array = 1, 2, _ ;
}
```

```python
In [64]: d = Dataset("test.nc")

In [65]: d
Out[65]:
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4 data model, file format HDF5):
    dimensions(sizes): dim_0(3)
    variables(dimensions): uint64 test_array(dim_0)
    groups:

In [66]: xr.open_dataset('test.nc')
Out[66]:
<xarray.Dataset>
Dimensions:     (dim_0: 3)
Dimensions without coordinates: dim_0
Data variables:
    test_array  (dim_0) float64 ...
```

```python
In [67]: xr.open_dataset('test.nc').test_array
Out[67]:
<xarray.DataArray 'test_array' (dim_0: 3)>
array([ 1.,  2., nan])
Dimensions without coordinates: dim_0
```

Notice that xarray is using np.NaN as a sentinel value for the missing / fill values. Because np.NaN is a float, this forces the entire array of integers to become floating point numbers...
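If preserving the integer dtype matters more than getting NaN for the fill values, one option (a sketch using xarray's mask_and_scale keyword, not something discussed above) is to disable the masking on read:

```python
import xarray as xr

# Keep the on-disk uint64 dtype; the fill value 4294967295 then shows up
# as an ordinary value instead of being replaced with NaN.
ds = xr.open_dataset("test.nc", mask_and_scale=False)
print(ds.test_array.dtype)  # uint64
```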

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  uint type data are read as wrong type (float64) 1085619598
998764799 https://github.com/pydata/xarray/issues/4738#issuecomment-998764799 https://api.github.com/repos/pydata/xarray/issues/4738 IC_kwDOAMm_X847h_D_ andersy005 13301940 2021-12-21T13:08:21Z 2021-12-21T13:09:01Z MEMBER

@andersy005 if you can rely on dask always being present, dask.base.tokenize(xarray_object) will do what you want.

@dcherian, I just realized that dask.base.tokenize doesn't return a deterministic token for xarray objects:

```python
In [2]: import dask, xarray as xr

In [3]: ds = xr.tutorial.open_dataset('rasm')

In [4]: dask.base.tokenize(ds) == dask.base.tokenize(ds)
Out[4]: False

In [5]: dask.base.tokenize(ds) == dask.base.tokenize(ds)
Out[5]: False
```

The issue appears to be caused by the coordinates which are used in __dask_tokenize__

https://github.com/pydata/xarray/blob/dbc02d4e51fe404e8b61656f2089efadbf99de28/xarray/core/dataarray.py#L870-L873

```python
In [8]: dask.base.tokenize(ds.Tair.data) == dask.base.tokenize(ds.Tair.data)
Out[8]: True
```

```python
In [16]: dask.base.tokenize(ds.Tair._coords) == dask.base.tokenize(ds.Tair._coords)
Out[16]: False
```

Is this the expected behavior or am I missing something?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: Compute hash of xarray objects 775502974
967526961 https://github.com/pydata/xarray/pull/5955#issuecomment-967526961 https://api.github.com/repos/pydata/xarray/issues/5955 IC_kwDOAMm_X845q0ox andersy005 13301940 2021-11-12T20:48:26Z 2021-11-12T20:48:26Z MEMBER

The CI failure is related. It has moved the pytest.importorskip("distributed") below a distributed import. See PyCQA/isort#1840.

Good catch. I missed it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1047689644
966810629 https://github.com/pydata/xarray/pull/5955#issuecomment-966810629 https://api.github.com/repos/pydata/xarray/issues/5955 IC_kwDOAMm_X845oFwF andersy005 13301940 2021-11-12T04:26:07Z 2021-11-12T04:26:07Z MEMBER

The CI failures seem to be unrelated. Are we okay with merging this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1047689644
964539712 https://github.com/pydata/xarray/issues/5940#issuecomment-964539712 https://api.github.com/repos/pydata/xarray/issues/5940 IC_kwDOAMm_X845fbVA andersy005 13301940 2021-11-09T21:01:31Z 2021-11-09T21:01:31Z MEMBER

In any case, I'd vote to remove the pre-commit autoupdate CI as that has some quirks that make it difficult to use for everyone other than me.

Once we're satisfied with pre-commit.ci, we should remove https://github.com/pydata/xarray/blob/main/.github/workflows/ci-pre-commit.yml workflow as well.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Refresh on issue bots 1045038245
963480816 https://github.com/pydata/xarray/issues/5898#issuecomment-963480816 https://api.github.com/repos/pydata/xarray/issues/5898 IC_kwDOAMm_X845bYzw andersy005 13301940 2021-11-08T19:04:04Z 2021-11-08T19:04:04Z MEMBER

Isn't this a bug?

Yes. Part of this is related to #5897

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update docs for Dataset `reduce` methods to indicate that non-numeric data variables are dropped 1035640211
958175626 https://github.com/pydata/xarray/issues/5927#issuecomment-958175626 https://api.github.com/repos/pydata/xarray/issues/5927 IC_kwDOAMm_X845HJmK andersy005 13301940 2021-11-02T21:20:53Z 2021-11-02T21:21:26Z MEMBER

We could potentially still automate

"add new section to the whats-new.rst", "update the stable branch", "update the active version of the docs" (maybe?), and "email various mailing lists".

with more frequent releases, will we still need to maintain the "stable" branch? It's my understanding that the "stable" branch is slightly different from the latest tag due to very few, trivial/doc changes that are added between releases. Without this additional branch, we would have one less thing to automate.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Release frequency 1042652334
958167951 https://github.com/pydata/xarray/pull/5903#issuecomment-958167951 https://api.github.com/repos/pydata/xarray/issues/5903 IC_kwDOAMm_X845HHuP andersy005 13301940 2021-11-02T21:08:46Z 2021-11-02T21:08:46Z MEMBER

This behavior doesn't feel intuitive to me. It does happen on the stable version as well so it isn't related to this PR. Tested on windows 10 and Firefox.

I'll look into this... It appears to be a CSS issue

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Explicitly list all reductions in api.rst 1038376514
958015953 https://github.com/pydata/xarray/issues/5927#issuecomment-958015953 https://api.github.com/repos/pydata/xarray/issues/5927 IC_kwDOAMm_X845GinR andersy005 13301940 2021-11-02T18:24:20Z 2021-11-02T18:24:20Z MEMBER

👍🏽 for frequent releases... With more frequent releases, should we switch to calendar versioning? I remember seeing this discussion in one of the issues but I don't recall what the conclusion was.

{
    "total_count": 2,
    "+1": 1,
    "-1": 1,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Release frequency 1042652334
953008413 https://github.com/pydata/xarray/issues/5897#issuecomment-953008413 https://api.github.com/repos/pydata/xarray/issues/5897 IC_kwDOAMm_X844zcEd andersy005 13301940 2021-10-27T14:51:07Z 2021-10-27T14:51:07Z MEMBER

This applies to pandas datetime objects as well (#5898)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ds.mean bugs with cftime objects 1035607476
948210349 https://github.com/pydata/xarray/pull/5880#issuecomment-948210349 https://api.github.com/repos/pydata/xarray/issues/5880 IC_kwDOAMm_X844hIqt andersy005 13301940 2021-10-21T03:00:10Z 2021-10-21T03:02:51Z MEMBER

Thank you for working on this, @mlhenderson! Seems there have been attempts at fixing this issue before

  • https://github.com/pydata/xarray/pull/4053
  • https://github.com/executablebooks/sphinx-book-theme/issues/238#issuecomment-714784321 which suggests using the .xr-wrap { display: block !important }. Not sure why this fix was implemented in sphinx-book-theme and not in xarray directly...

Ccing @benbovy who wrote the HTML repr in case he has any feedback.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Alternate method using inline css to hide regular html output in an untrusted notebook 1031724989
927918719 https://github.com/pydata/xarray/pull/5826#issuecomment-927918719 https://api.github.com/repos/pydata/xarray/issues/5826 IC_kwDOAMm_X843Tup_ andersy005 13301940 2021-09-27T14:16:33Z 2021-09-27T14:16:33Z MEMBER

github-script action v5 includes breaking changes that will cause the CI to fail.

https://github.com/pydata/xarray/blob/54dba584dcef37ac0713ed94cd99aa2d5237c197/.github/workflows/upstream-dev-ci.yaml#L161

must be updated to github.rest.issues.create

https://github.com/pydata/xarray/blob/54dba584dcef37ac0713ed94cd99aa2d5237c197/.github/workflows/upstream-dev-ci.yaml#L169

and github.rest.issues.update

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Bump actions/github-script from 4.1 to 5 1007733800
927385227 https://github.com/pydata/xarray/pull/3640#issuecomment-927385227 https://api.github.com/repos/pydata/xarray/issues/3640 IC_kwDOAMm_X843RsaL andersy005 13301940 2021-09-26T22:47:10Z 2021-09-26T22:47:10Z MEMBER

Do we not want to discuss this in the bi-weekly dev meetings perhaps?

👍🏽 I'm in favor of discussing this in the dev meetings... Being able to easily switch plotting backends would be a nice feature to have from a user's perspective.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add entrypoint for plotting backends 539394615
927378300 https://github.com/pydata/xarray/pull/3640#issuecomment-927378300 https://api.github.com/repos/pydata/xarray/issues/3640 IC_kwDOAMm_X843Rqt8 andersy005 13301940 2021-09-26T21:53:42Z 2021-09-26T21:53:42Z MEMBER

Closing this because it has gone stale :(

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add entrypoint for plotting backends 539394615
926875948 https://github.com/pydata/xarray/pull/5813#issuecomment-926875948 https://api.github.com/repos/pydata/xarray/issues/5813 IC_kwDOAMm_X843PwEs andersy005 13301940 2021-09-24T19:39:13Z 2021-09-24T19:40:51Z MEMBER

Readthedocs build is failing due to timeout. It appears that our doc builds have been failing for the last few hours, and my hunch is that somehow we are exceeding the 15-minute build time limit for some PRs.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add @Illviljan to core team 1005941605
875136135 https://github.com/pydata/xarray/pull/3131#issuecomment-875136135 https://api.github.com/repos/pydata/xarray/issues/3131 MDEyOklzc3VlQ29tbWVudDg3NTEzNjEzNQ== andersy005 13301940 2021-07-06T22:57:29Z 2021-07-06T22:57:29Z MEMBER

@rabernat, the gentlest of bumps on this :)... How much work (content) is left to bring this to completion? I'm asking because I'd be happy to help if there's still more work and/or follow-up PR needed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  WIP: tutorial on merging datasets 467908830
867767613 https://github.com/pydata/xarray/pull/5520#issuecomment-867767613 https://api.github.com/repos/pydata/xarray/issues/5520 MDEyOklzc3VlQ29tbWVudDg2Nzc2NzYxMw== andersy005 13301940 2021-06-24T16:10:44Z 2021-06-24T16:10:44Z MEMBER

could someone check whether binder still works?

It's broken, unfortunately :(

I will submit a fix later today unless someone beats me to it

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  update references to `master` 928501063
861525819 https://github.com/pydata/xarray/pull/5468#issuecomment-861525819 https://api.github.com/repos/pydata/xarray/issues/5468 MDEyOklzc3VlQ29tbWVudDg2MTUyNTgxOQ== andersy005 13301940 2021-06-15T14:02:08Z 2021-06-15T14:02:08Z MEMBER

Pushing another commit to the same branch should cancel previous runs and start new runs (for the new commit). So, the fact that all runs were cancelled makes me think that it was a glitch somewhere :)

I take this back... I just realized that #5467 is a different PR :). Cancellation of previous runs is currently only triggered for commits in the same branch/PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Run mypy in pre-commit only in CI? 919842557
861522841 https://github.com/pydata/xarray/pull/5468#issuecomment-861522841 https://api.github.com/repos/pydata/xarray/issues/5468 MDEyOklzc3VlQ29tbWVudDg2MTUyMjg0MQ== andersy005 13301940 2021-06-15T13:58:45Z 2021-06-15T13:58:45Z MEMBER

@max-sixty, I skimmed through the GitHub Actions logs, but I couldn't figure out where the "cancel" command came from. I am going to speculate that it was some glitch in GitHub Actions.

Is it something to do with pushing another commit to #5467?

Pushing another commit to the same branch should cancel previous runs and start new runs (for the new commit). So, the fact that all runs were cancelled makes me think that it was a glitch somewhere :)
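
For context, a minimal sketch of how this auto-cancellation behaviour can be configured in GitHub Actions, using the built-in concurrency key as an illustration. This is not necessarily the mechanism xarray's workflows used at the time (a dedicated cancel action was also common), and all names here are illustrative.

```yaml
# Hypothetical minimal workflow; the concurrency block is the relevant part.
name: CI

on:
  push:
    branches: [main]
  pull_request:

# Runs that share a group are cancelled when a newer commit arrives on the
# same branch/PR; runs on other branches or PRs are left untouched.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "run the test suite here"
```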

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Run mypy in pre-commit only in CI? 919842557
851835152 https://github.com/pydata/xarray/issues/5413#issuecomment-851835152 https://api.github.com/repos/pydata/xarray/issues/5413 MDEyOklzc3VlQ29tbWVudDg1MTgzNTE1Mg== andersy005 13301940 2021-06-01T05:49:42Z 2021-06-01T05:49:42Z MEMBER

Do we fire twice for each release? Maybe that's fine though?

Yes, on both push and release events. The publish-to-PyPI step only runs for the release event, though; i.e., if one were to push a tag to GitHub, that tag would never be published to PyPI via the GitHub workflow.

It's my understanding that

  • We could remove the push trigger and keep only the release trigger, and everything would still work fine.
  • Having the two triggers comes in handy when/if one pushes tags via git (for testing purposes). As far as I can remember, @keewis was using the push event (via tags) to test early versions of the workflow; a sketch of this trigger layout follows below.
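A hedged sketch of the trigger layout described above. The workflow name, secret name, and action versions are illustrative; this is not a copy of xarray's actual pypi-release.yaml.

```yaml
name: Build and publish

on:
  push:
    tags:
      - "v*"              # tag pushes build the artifacts (handy for testing)
  release:
    types: [published]    # only this event actually publishes

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - name: Build distributions
        run: |
          python -m pip install build
          python -m build
      - name: Publish to PyPI
        # Skipped for plain tag pushes, so a pushed tag alone never uploads.
        if: github.event_name == 'release'
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_TOKEN }}   # hypothetical secret name
```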
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Does the PyPI release job fire twice for each release? 907845790
846313602 https://github.com/pydata/xarray/pull/5350#issuecomment-846313602 https://api.github.com/repos/pydata/xarray/issues/5350 MDEyOklzc3VlQ29tbWVudDg0NjMxMzYwMg== andersy005 13301940 2021-05-21T23:54:55Z 2021-05-21T23:54:55Z MEMBER

The general proposal probably needs some socializing and refinement before we start applying it more broadly. I'd be keen to pursue it if others agree — it could bring some more robustness to our testing suite while also making it simpler.

👍🏽

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add dask stack tests 895978572
844457262 https://github.com/pydata/xarray/issues/5298#issuecomment-844457262 https://api.github.com/repos/pydata/xarray/issues/5298 MDEyOklzc3VlQ29tbWVudDg0NDQ1NzI2Mg== andersy005 13301940 2021-05-19T20:42:25Z 2021-05-19T20:42:25Z MEMBER

Here's one example that shows github-activity in action: https://jupyterbook.org/reference/_changelog.html

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.18.1 patch release? 891200849
844455537 https://github.com/pydata/xarray/issues/5298#issuecomment-844455537 https://api.github.com/repos/pydata/xarray/issues/5298 MDEyOklzc3VlQ29tbWVudDg0NDQ1NTUzNw== andersy005 13301940 2021-05-19T20:39:43Z 2021-05-19T20:39:43Z MEMBER

Glad to hear that releasing is getting easier/simpler :)

There are some smaller tasks we can put into pipelines if we want to squeeze more out there (e.g. updating stable branch, adding contributor lists). The only really necessary tasks are writing the summary, organizing the whatsnew, sending out communications.

I've been thinking about this, and I'm wondering whether a tool like github-activity could help us (at least with updating whatsnew.rst). github-activity can generate a changelog from the GitHub API, which would eliminate the need to update whatsnew.rst multiple times (once for each PR). With proper labels, it can also categorize PRs and issues during changelog generation. Another nice feature is that it includes contributions beyond the default PR submissions: https://github-activity.readthedocs.io/en/latest/#how-does-this-tool-define-contributions-in-the-reports

There's an extreme end of "constantly release new main branches". I don't see anything functionally wrong with that, but I'm sure there are unexplored consequences, and I'm not sure our role is to forge a new path in OSS release cycles.

I'm curious... how are these "new main branches" different from the git tags created for releases?

So maybe monthly is a reasonable balance?

👍🏽 for frequent/monthly release cadence

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  0.18.1 patch release? 891200849
844108207 https://github.com/pydata/xarray/pull/5343#issuecomment-844108207 https://api.github.com/repos/pydata/xarray/issues/5343 MDEyOklzc3VlQ29tbWVudDg0NDEwODIwNw== andersy005 13301940 2021-05-19T13:30:08Z 2021-05-19T13:30:08Z MEMBER

#5267 seems to have accidentally disabled the scheduled CI.

Oooops! Thanks for catching that.
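
For reference, a scheduled run is driven by a cron entry in the workflow's trigger section. The excerpt below is purely illustrative; the actual cron expression and triggers in xarray's upstream-dev workflow may differ.

```yaml
# Excerpt of a workflow's trigger section; restoring the schedule block
# is what re-enables the nightly upstream-dev run.
on:
  schedule:
    - cron: "0 0 * * *"   # once a day, at midnight UTC
  workflow_dispatch:       # keep a manual trigger as a fallback
```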

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  fix the upstream-dev CI 895421791
843279831 https://github.com/pydata/xarray/pull/5300#issuecomment-843279831 https://api.github.com/repos/pydata/xarray/issues/5300 MDEyOklzc3VlQ29tbWVudDg0MzI3OTgzMQ== andersy005 13301940 2021-05-18T15:40:31Z 2021-05-18T15:40:31Z MEMBER

@alexamici @aurghs any thoughts?

@alexamici @aurghs, the gentlest of bumps on this. Have a few minutes to take a look at this? :smile:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Better error message when no backend engine is found. 891253662
842692873 https://github.com/pydata/xarray/issues/5326#issuecomment-842692873 https://api.github.com/repos/pydata/xarray/issues/5326 MDEyOklzc3VlQ29tbWVudDg0MjY5Mjg3Mw== andersy005 13301940 2021-05-17T22:51:54Z 2021-05-17T22:51:54Z MEMBER

I expected dimension order not to matter, as it is the case in most of xarray. Right now, my code wraps func in a another function that performs this transpose when needed.

Isn't it the case that the order matters here because you are providing a template for map_blocks to follow? Would your code still work if you let map_blocks infer the output dataset on its own, as Deepak pointed out? From the code snippet you provided, it appears that map_blocks works without the template:

```python
In [10]: dac.map_blocks(func).load()
Out[10]:
<xarray.DataArray <this-array> (y: 3, x: 2)>
array([[0, 3],
       [1, 4],
       [2, 5]])
Dimensions without coordinates: y, x
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  map_blocks doesn't handle tranposed arrays 893692903
839910670 https://github.com/pydata/xarray/pull/5234#issuecomment-839910670 https://api.github.com/repos/pydata/xarray/issues/5234 MDEyOklzc3VlQ29tbWVudDgzOTkxMDY3MA== andersy005 13301940 2021-05-12T16:15:13Z 2021-05-12T16:15:13Z MEMBER

I believe I have addressed most, if not all, of the changes requested during the review. If there are still disputable changes that need addressing, please let me know.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Code cleanup 870619014
839903564 https://github.com/pydata/xarray/pull/5234#issuecomment-839903564 https://api.github.com/repos/pydata/xarray/issues/5234 MDEyOklzc3VlQ29tbWVudDgzOTkwMzU2NA== andersy005 13301940 2021-05-12T16:07:29Z 2021-05-12T16:07:29Z MEMBER

Shall we also add parts of Stephan's comment to the "Contributing Guide" in the documentation?

👍🏽 for addressing this in a separate PR

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Code cleanup 870619014
839878862 https://github.com/pydata/xarray/pull/5234#issuecomment-839878862 https://api.github.com/repos/pydata/xarray/issues/5234 MDEyOklzc3VlQ29tbWVudDgzOTg3ODg2Mg== andersy005 13301940 2021-05-12T15:44:51Z 2021-05-12T15:44:51Z MEMBER

@max-sixty, I'm going to address a few issues pointed out during the review, then we can merge afterwards.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Code cleanup 870619014
834820456 https://github.com/pydata/xarray/pull/5234#issuecomment-834820456 https://api.github.com/repos/pydata/xarray/issues/5234 MDEyOklzc3VlQ29tbWVudDgzNDgyMDQ1Ng== andersy005 13301940 2021-05-07T22:20:00Z 2021-05-07T22:20:00Z MEMBER

@shoyer, Thank you for the thorough feedback.

Even code that has been carefully checked can introduce bugs. Style clean-up PRs thus still need to be reviewed carefully.

Fewer lines or tokens of code is not always better. Good examples from this PR include reduced nesting with early return statements, comprehensions and conditional expressions. All of these would be unobjectionable to use in new code (and are often a good idea), but code that doesn't use them even when it could is fine, too.

👍🏽. I reverted to the original style in most places. There are a few comments that I haven't addressed yet; I will look into those sometime this weekend.

Code-base wide clean-up for particular issues may be warranted occasionally, but I would suggest always discussing it with other core developers first.

I agree. I didn't expect the PR to get this large. My original intent when I created the PR was to clean up the backends code, but I ended up straying into other areas :(. In the future, I'll definitely make sure to start a discussion before working on changes that may end up affecting a large chunk of the code base.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Code cleanup 870619014
833782283 https://github.com/pydata/xarray/issues/5232#issuecomment-833782283 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc4MjI4Mw== andersy005 13301940 2021-05-06T18:59:01Z 2021-05-06T18:59:01Z MEMBER

maybe we should make the "upload to TestPyPI" a separate step?

Or we could just set

```yaml
skip_existing: true
```

on the upload to TestPyPI step.
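
A sketch of what that step could look like. The skip_existing and repository_url inputs are options of pypa/gh-action-pypi-publish; the secret name is hypothetical, and this is only a step-level excerpt rather than the actual workflow.

```yaml
# Step-level excerpt; with skip_existing the re-upload of an already
# published version is silently skipped instead of failing the job.
- name: Publish to Test PyPI
  uses: pypa/gh-action-pypi-publish@release/v1
  with:
    password: ${{ secrets.TESTPYPI_TOKEN }}     # hypothetical secret name
    repository_url: https://test.pypi.org/legacy/
    skip_existing: true
```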

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833774532 https://github.com/pydata/xarray/issues/5232#issuecomment-833774532 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc3NDUzMg== andersy005 13301940 2021-05-06T18:50:11Z 2021-05-06T18:50:46Z MEMBER

@keewis, do we need to do anything with the release on test.pypi.org? (since v0.18 is already published there)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833772828 https://github.com/pydata/xarray/issues/5232#issuecomment-833772828 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc3MjgyOA== andersy005 13301940 2021-05-06T18:48:41Z 2021-05-06T18:48:41Z MEMBER

hmm should I also delete the tag?

Yes...

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833768930 https://github.com/pydata/xarray/issues/5232#issuecomment-833768930 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc2ODkzMA== andersy005 13301940 2021-05-06T18:43:04Z 2021-05-06T18:43:04Z MEMBER

I accidentally closed this via GitHub's closing keywords... The bug in the CI workflow should be fixed.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833756957 https://github.com/pydata/xarray/pull/5273#issuecomment-833756957 https://api.github.com/repos/pydata/xarray/issues/5273 MDEyOklzc3VlQ29tbWVudDgzMzc1Njk1Nw== andersy005 13301940 2021-05-06T18:25:11Z 2021-05-06T18:25:11Z MEMBER

Also, feel free to merge this at your earliest convenience

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Release-workflow: Bug fix 877830157
833756161 https://github.com/pydata/xarray/pull/5273#issuecomment-833756161 https://api.github.com/repos/pydata/xarray/issues/5273 MDEyOklzc3VlQ29tbWVudDgzMzc1NjE2MQ== andersy005 13301940 2021-05-06T18:24:09Z 2021-05-06T18:24:09Z MEMBER

@TomNicholas & @alexamici... If you still want to use this, you may want to delete the old release/tag and re-publish it :( Sorry for the extra work

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Release-workflow: Bug fix 877830157
833753256 https://github.com/pydata/xarray/issues/5232#issuecomment-833753256 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc1MzI1Ng== andersy005 13301940 2021-05-06T18:19:41Z 2021-05-06T18:19:41Z MEMBER

Never mind... There's a bug in this if

https://github.com/pydata/xarray/blob/01c1d25578634306b5d3179989a0a9bd867de176/.github/workflows/pypi-release.yaml#L84

It should be github.ref instead of github.event.ref... Opening a PR shortly (not sure if you still want to use the GitHub action)...
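
Roughly, the distinction being fixed (an illustrative step, not the actual file content): github.event.ref is only present in the payload of push events, while github.ref is populated for tag pushes and releases alike.

```yaml
# Broken: github.event.ref is empty on a release event, so this never runs.
# - name: Publish
#   if: startsWith(github.event.ref, 'refs/tags/v')
#   run: echo "publish here"

# Fixed: github.ref holds the tag ref for both tag pushes and releases.
- name: Publish
  if: startsWith(github.ref, 'refs/tags/v')
  run: echo "publish here"
```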

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833741743 https://github.com/pydata/xarray/issues/5232#issuecomment-833741743 https://api.github.com/repos/pydata/xarray/issues/5232 MDEyOklzc3VlQ29tbWVudDgzMzc0MTc0Mw== andersy005 13301940 2021-05-06T18:03:01Z 2021-05-06T18:03:13Z MEMBER

Hmm... Looks like it built but didn't upload: pydata/xarray/runs/2520588073?check_suite_focus=true

cc @keewis @andersy005

Sorry about that. I think the culprits here are the if conditions, which evaluate to false on release events... @keewis what do you think? :)

https://github.com/pydata/xarray/blob/01c1d25578634306b5d3179989a0a9bd867de176/.github/workflows/pypi-release.yaml#L66

https://github.com/pydata/xarray/blob/01c1d25578634306b5d3179989a0a9bd867de176/.github/workflows/pypi-release.yaml#L75

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  release v0.18.0 870292042
833654155 https://github.com/pydata/xarray/pull/5267#issuecomment-833654155 https://api.github.com/repos/pydata/xarray/issues/5267 MDEyOklzc3VlQ29tbWVudDgzMzY1NDE1NQ== andersy005 13301940 2021-05-06T16:17:14Z 2021-05-06T16:17:14Z MEMBER

Thank you both for the feedback! Merging this shortly...

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Disable workflows on forks 877586869
833653685 https://github.com/pydata/xarray/pull/5267#issuecomment-833653685 https://api.github.com/repos/pydata/xarray/issues/5267 MDEyOklzc3VlQ29tbWVudDgzMzY1MzY4NQ== andersy005 13301940 2021-05-06T16:16:45Z 2021-05-06T16:16:45Z MEMBER

LGTM, assuming this worked properly in this PR itself :)

I think it's working properly. I'm not seeing new CI runs for this PR in my fork

looks good to me. I thought we did that already

As far as I can tell, we did it for the upstream-dev CI only

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Disable workflows on forks 877586869
833605051 https://github.com/pydata/xarray/pull/5269#issuecomment-833605051 https://api.github.com/repos/pydata/xarray/issues/5269 MDEyOklzc3VlQ29tbWVudDgzMzYwNTA1MQ== andersy005 13301940 2021-05-06T15:15:28Z 2021-05-06T15:15:28Z MEMBER

should we remove the push:branches trigger?

I think so
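
For concreteness, a before/after of the kind of trigger change being discussed; both snippets are hypothetical excerpts, not xarray's actual file.

```yaml
# Before (publishes on branch pushes as well as releases):
# on:
#   push:
#     branches: [main]
#   release:
#     types: [published]

# After (push trigger removed; only releases reach the workflow):
on:
  release:
    types: [published]
```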

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pypi upload workflow maintenance 877605485
833604055 https://github.com/pydata/xarray/pull/5268#issuecomment-833604055 https://api.github.com/repos/pydata/xarray/issues/5268 MDEyOklzc3VlQ29tbWVudDgzMzYwNDA1NQ== andersy005 13301940 2021-05-06T15:14:12Z 2021-05-06T15:14:12Z MEMBER

Duplicate of https://github.com/pydata/xarray/pull/5269

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove push event trigger for the release workflow 877605250
833520119 https://github.com/pydata/xarray/pull/5234#issuecomment-833520119 https://api.github.com/repos/pydata/xarray/issues/5234 MDEyOklzc3VlQ29tbWVudDgzMzUyMDExOQ== andersy005 13301940 2021-05-06T13:24:18Z 2021-05-06T13:24:18Z MEMBER

I feel this PR is too big and diverse to accept it as a whole.

For example, I like most of the changes from older-style formatting to f-strings, but I strongly disagree that moving the return statements into functions improves readability.

If folks are in favor of the original style (inline variables that are immediately returned) throughout the codebase, I'm happy to revert :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Code cleanup 870619014
833482609 https://github.com/pydata/xarray/pull/5244#issuecomment-833482609 https://api.github.com/repos/pydata/xarray/issues/5244 MDEyOklzc3VlQ29tbWVudDgzMzQ4MjYwOQ== andersy005 13301940 2021-05-06T12:27:53Z 2021-05-06T12:32:57Z MEMBER

Here's the workflow visualization graph. Let me know if the current job dependencies are okay...

Also, someone with admin permissions on PyPI should make sure to get the necessary tokens from PyPI and TestPyPI and set them on this repo.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add GitHub action for publishing artifacts to PyPI 873842812
831599258 https://github.com/pydata/xarray/pull/5250#issuecomment-831599258 https://api.github.com/repos/pydata/xarray/issues/5250 MDEyOklzc3VlQ29tbWVudDgzMTU5OTI1OA== andersy005 13301940 2021-05-03T23:33:27Z 2021-05-03T23:33:27Z MEMBER

Thanks, @max-sixty

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix bulleted list indentation in docstrings 874231291
831509902 https://github.com/pydata/xarray/pull/5244#issuecomment-831509902 https://api.github.com/repos/pydata/xarray/issues/5244 MDEyOklzc3VlQ29tbWVudDgzMTUwOTkwMg== andersy005 13301940 2021-05-03T20:21:51Z 2021-05-03T20:21:51Z MEMBER

@andersy005 I'm curious, why do you go with multiple jobs within the workflow, and using artifacts to transfer state between them, rather than multiple steps in a single job?

I have a tendency to split a workflow into multiple jobs because it makes reasoning about the workflow easier (at least for me :)). However, I think using a single job here would reduce overhead, since the logic isn't complex enough to warrant multiple jobs...
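
A rough sketch of the two-job pattern being described, with the build output handed to the publish job through an artifact. Names, versions, and the final publish step are illustrative placeholders.

```yaml
name: Release (sketch)

on:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: |
          python -m pip install build
          python -m build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  publish:
    needs: build            # state crosses the job boundary via the artifact
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: echo "upload the contents of dist/ here"
```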

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add GitHub action for publishing artifacts to PyPI 873842812


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);