issues


69 rows where comments = 2, state = "closed" and user = 5635139 sorted by updated_at descending




Facets: type (pull 46 · issue 23) · state (closed 69) · repo (xarray 69)
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2262478932 PR_kwDOAMm_X85tqpUi 8974 Raise errors on new warnings from within xarray max-sixty 5635139 closed 0     2 2024-04-25T01:50:48Z 2024-04-29T12:18:42Z 2024-04-29T02:50:21Z MEMBER   0 pydata/xarray/pulls/8974

Notes are inline.

  • [x] Closes https://github.com/pydata/xarray/issues/8494
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

Done with some help from an LLM — quite good for doing tedious tasks that we otherwise wouldn't want to do — can paste in all the warnings output and get a decent start on rules for exclusions
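The mechanism here is pytest's filterwarnings setting; a sketch of the kind of configuration involved (the specific ignore patterns are illustrative, not the ones merged):

```toml
# Illustrative pyproject.toml fragment: escalate warnings to errors,
# then carve out known-noisy warnings explicitly.
[tool.pytest.ini_options]
filterwarnings = [
    "error",
    # hypothetical exclusions; the real list is built from the suite's warning output
    "ignore:Mean of empty slice:RuntimeWarning",
    "ignore:invalid value encountered:RuntimeWarning",
]
```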

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8974/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
2033367994 PR_kwDOAMm_X85hj9np 8533 Offer a fixture for unifying DataArray & Dataset tests max-sixty 5635139 closed 0     2 2023-12-08T22:06:28Z 2023-12-18T21:30:41Z 2023-12-18T21:30:40Z MEMBER   0 pydata/xarray/pulls/8533

Some tests are literally copied & pasted between the DataArray & Dataset tests. This change allows them to share a single test. Not everything will be able to use this — sometimes we want to check specifics — but some will. I've changed the .cumulative tests to use this fixture.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8533/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
907845790 MDU6SXNzdWU5MDc4NDU3OTA= 5413 Does the PyPI release job fire twice for each release? max-sixty 5635139 closed 0     2 2021-06-01T04:01:17Z 2023-12-04T19:22:32Z 2023-12-04T19:22:32Z MEMBER      

I was attempting to copy the great work here for numbagg and spotted this! Do we fire twice for each release? Maybe that's fine though?

https://github.com/pydata/xarray/actions/workflows/pypi-release.yaml

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5413/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
2021810083 PR_kwDOAMm_X85g8r6c 8508 Implement `np.clip` as `__array_function__` max-sixty 5635139 closed 0     2 2023-12-02T02:20:11Z 2023-12-03T05:27:38Z 2023-12-03T05:27:33Z MEMBER   0 pydata/xarray/pulls/8508

Would close https://github.com/pydata/xarray/issues/2570

Because of https://numpy.org/neps/nep-0018-array-function-protocol.html#partial-implementation-of-numpy-s-api, no option is ideal:

  • Don't do anything — don't implement __array_function__. Any numpy function that's not a ufunc — such as np.clip — will materialize the array into memory.
  • Implement __array_function__ and lose the ability to call any non-ufunc numpy function that we don't explicitly configure here. So np.lexsort(da) wouldn't work, for example; users would have to run np.lexsort(da.values).
  • Implement __array_function__, and attempt to handle the functions we don't explicitly configure by coercing their arguments to numpy arrays. This requires writing code to walk a tree of objects looking for arrays to coerce, and it seems to go against the original numpy proposal.

@shoyer is this summary accurate?
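For context, the second option can be sketched with NEP 18's dispatch protocol. This is a toy wrapper illustrating the trade-off, not xarray's actual implementation:

```python
import numpy as np

HANDLED_FUNCTIONS = {}

def implements(np_function):
    """Register a __array_function__ implementation (standard NEP 18 pattern)."""
    def decorator(func):
        HANDLED_FUNCTIONS[np_function] = func
        return func
    return decorator

class Wrapped:
    """Toy array wrapper illustrating the 'explicitly configured' option."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED_FUNCTIONS:
            # unconfigured non-ufunc functions (e.g. np.lexsort) end up
            # raising TypeError instead of silently materializing
            return NotImplemented
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

@implements(np.clip)
def _clip(a, a_min, a_max, **kwargs):
    # delegate to numpy on the underlying data, re-wrap the result
    return Wrapped(np.clip(a.data, a_min, a_max))
```

With this, np.clip(Wrapped([1, 5, 10]), 2, 8) dispatches to _clip, while any unregistered non-ufunc numpy function raises TypeError, which is exactly the loss of functionality described in the second bullet.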

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8508/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1964877168 PR_kwDOAMm_X85d8EmN 8381 Allow writing to zarr with differently ordered dims max-sixty 5635139 closed 0     2 2023-10-27T06:47:59Z 2023-11-25T21:02:20Z 2023-11-15T18:09:08Z MEMBER   0 pydata/xarray/pulls/8381

Is this reasonable?

  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8381/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
400444797 MDExOlB1bGxSZXF1ZXN0MjQ1NjMwOTUx 2687 Enable resampling on PeriodIndex max-sixty 5635139 closed 0     2 2019-01-17T20:13:25Z 2023-11-17T20:38:44Z 2023-11-17T20:38:44Z MEMBER   0 pydata/xarray/pulls/2687

This allows resampling with PeriodIndex objects by keeping the group as an index rather than coercing to a DataArray (which coerces any non-native types to objects)

I'm still getting one failure around the name of the IndexVariable still being __resample_dim__ after resample, but wanted to socialize the approach of allowing a name argument to IndexVariable - is this reasonable?

  • [x] Closes https://github.com/pydata/xarray/issues/1270
  • [x] Tests added
  • [ ] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2687/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1986758555 PR_kwDOAMm_X85fGE95 8438 Rename `to_array` to `to_dataarray` max-sixty 5635139 closed 0     2 2023-11-10T02:58:21Z 2023-11-10T06:15:03Z 2023-11-10T06:15:02Z MEMBER   0 pydata/xarray/pulls/8438

This is a very minor nit, so I'm not sure it's worth changing.

What do others think?

(I would have opened an issue but it's just as quick to just do the PR)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8438/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1931582554 PR_kwDOAMm_X85cLpap 8285 Add `min_weight` param to `rolling_exp` functions max-sixty 5635139 closed 0     2 2023-10-08T01:09:59Z 2023-10-14T07:24:48Z 2023-10-14T07:24:48Z MEMBER   0 pydata/xarray/pulls/8285  
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8285/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1916391948 PR_kwDOAMm_X85bYlaM 8242 Add modules to `check-untyped` max-sixty 5635139 closed 0     2 2023-09-27T21:56:45Z 2023-09-29T17:43:07Z 2023-09-29T16:39:34Z MEMBER   0 pydata/xarray/pulls/8242

In reviewing https://github.com/pydata/xarray/pull/8241, I realized that we actually want check-untyped-defs, which is a bit less strict but lets us add more modules. I did have to add a couple of ignores; I think that's a reasonable tradeoff for bringing big modules like computation into the list.

Errors with this enabled are actual type errors, not just mypy pedantry, so it would be good to get as much as possible into this list...
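For reference, check_untyped_defs is enabled per-module in mypy configuration along these lines (an illustrative fragment, not the exact module list merged):

```ini
# Illustrative mypy.ini / setup.cfg fragment
[mypy]
# project-wide settings stay as they are

[mypy-xarray.core.computation]
check_untyped_defs = True
```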

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8242/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1459938761 PR_kwDOAMm_X85DdsG- 7311 Add numba to nightly upstream max-sixty 5635139 closed 0     2 2022-11-22T14:04:54Z 2023-09-28T16:48:16Z 2023-01-15T18:07:48Z MEMBER   0 pydata/xarray/pulls/7311

I saw that we didn't have this while investigating https://github.com/numba/numba/issues/8615. We should probably wait until that's resolved before merging this (this doesn't solve that issue).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7311/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
430194200 MDExOlB1bGxSZXF1ZXN0MjY4MTM4OTg1 2875 DOC: More on pandas comparison max-sixty 5635139 closed 0     2 2019-04-07T21:24:44Z 2023-09-28T16:46:14Z 2023-09-28T16:46:14Z MEMBER   1 pydata/xarray/pulls/2875

Follow up from the mailing list:

  • Added some more thoughts on the multi-dimensional comparison. Some of this is opinionated (in concepts, not arguments) so I'd appreciate feedback on both the concepts and language.
  • Removed some of the specific comparison with NDPanel etc., given those are removed.
  • Placeholder for something on the Explicit API (wanted to get this version out, consistent with me leaving less work hanging).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2875/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1895568528 PR_kwDOAMm_X85aSh-V 8181 Set dev version above released version max-sixty 5635139 closed 0     2 2023-09-14T02:56:26Z 2023-09-14T20:54:36Z 2023-09-14T20:47:08Z MEMBER   0 pydata/xarray/pulls/8181

Pandas asserts that the xarray version that's running is above 2022.; locally it was set to 999 which failed this. It only applies when developing locally.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8181/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1203835220 I_kwDOAMm_X85HwRFU 6484 Should we raise a more informative error on no zarr dir? max-sixty 5635139 closed 0     2 2022-04-13T22:05:07Z 2022-09-20T22:38:46Z 2022-09-20T22:38:46Z MEMBER      

What happened?

Currently if someone supplies a path that doesn't exist, we get quite a long stack trace, without really saying that the path doesn't exist.

What did you expect to happen?

Possibly a FileNotFoundError
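A minimal sketch of the fail-fast check being suggested (a hypothetical wrapper; a real fix would live inside xarray's zarr backend):

```python
import os

def open_zarr_checked(path, **kwargs):
    """Raise FileNotFoundError up front instead of a long zarr stack trace.

    Only meaningful for local paths, not fsspec URLs or store objects.
    """
    if isinstance(path, (str, os.PathLike)) and not os.path.exists(path):
        raise FileNotFoundError(f"No such Zarr store: {path!r}")
    import xarray as xr  # deferred so the existence check runs first
    return xr.open_zarr(path, **kwargs)
```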

Minimal Complete Verifiable Example

```python
xr.open_zarr('x.zarr')
```

Relevant log output

```python
In [1]: xr.open_zarr('x.zarr')
<ipython-input-1-8be4b98d9b20>:1: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
  1. Consolidating metadata in this existing store with zarr.consolidate_metadata().
  2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
  3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
  xr.open_zarr('x.zarr')

KeyError                                  Traceback (most recent call last)
~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/xarray/backends/zarr.py in open_group(cls, store, mode, synchronizer, group, consolidated, consolidate_on_close, chunk_store, storage_options, append_dim, write_region, safe_chunks, stacklevel)
    347         try:
--> 348             zarr_group = zarr.open_consolidated(store, **open_kwargs)
    349         except KeyError:

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/zarr/convenience.py in open_consolidated(store, metadata_key, mode, **kwargs)
   1186     # setup metadata store
-> 1187     meta_store = ConsolidatedMetadataStore(store, metadata_key=metadata_key)
   1188

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/zarr/storage.py in __init__(self, store, metadata_key)
   2643     # retrieve consolidated metadata
-> 2644     meta = json_loads(store[metadata_key])
   2645

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/zarr/storage.py in __getitem__(self, key)
    894     else:
--> 895         raise KeyError(key)
    896

KeyError: '.zmetadata'

During handling of the above exception, another exception occurred:

GroupNotFoundError                        Traceback (most recent call last)
<ipython-input-1-8be4b98d9b20> in <cell line: 1>()
----> 1 xr.open_zarr('x.zarr')

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/xarray/backends/zarr.py in open_zarr(store, group, synchronizer, chunks, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, consolidated, overwrite_encoded_chunks, chunk_store, storage_options, decode_timedelta, use_cftime, **kwargs)
    750     }
    751
--> 752     ds = open_dataset(
    753         filename_or_obj=store,
    754         group=group,

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/xarray/backends/api.py in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, *args, **kwargs)
    493
    494     overwrite_encoded_chunks = kwargs.pop("overwrite_encoded_chunks", None)
--> 495     backend_ds = backend.open_dataset(
    496         filename_or_obj,
    497         drop_variables=drop_variables,

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/xarray/backends/zarr.py in open_dataset(self, filename_or_obj, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, use_cftime, decode_timedelta, group, mode, synchronizer, consolidated, chunk_store, storage_options, stacklevel)
    798
    799     filename_or_obj = _normalize_path(filename_or_obj)
--> 800     store = ZarrStore.open_group(
    801         filename_or_obj,
    802         group=group,

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/xarray/backends/zarr.py in open_group(cls, store, mode, synchronizer, group, consolidated, consolidate_on_close, chunk_store, storage_options, append_dim, write_region, safe_chunks, stacklevel)
    363         stacklevel=stacklevel,
    364     )
--> 365     zarr_group = zarr.open_group(store, **open_kwargs)
    366 elif consolidated:
    367     # TODO: an option to pass the metadata_key keyword

~/Library/Caches/pypoetry/virtualenvs/-x204KUJE-py3.9/lib/python3.9/site-packages/zarr/hierarchy.py in open_group(store, mode, cache_attrs, synchronizer, path, chunk_store, storage_options)
   1180     if contains_array(store, path=path):
   1181         raise ContainsArrayError(path)
-> 1182     raise GroupNotFoundError(path)
   1183
   1184 elif mode == 'w':

GroupNotFoundError: group not found at path ''
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS

commit: None
python: 3.9.12 (main, Mar 26 2022, 15:44:31) [Clang 13.1.6 (clang-1316.0.21.2)]
python-bits: 64
OS: Darwin
OS-release: 21.3.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: None
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None

xarray: 2022.3.0
pandas: 1.4.1
numpy: 1.22.3
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.1
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.12.0
distributed: 2021.12.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.1
cupy: None
pint: None
sparse: None
setuptools: 60.9.3
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: 7.32.0
sphinx: None

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6484/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1221395426 PR_kwDOAMm_X843GAxX 6544 Attempt to improve CI caching, v2 max-sixty 5635139 closed 0     2 2022-04-29T18:40:31Z 2022-06-10T14:36:12Z 2022-06-10T11:33:00Z MEMBER   1 pydata/xarray/pulls/6544

Reverts pydata/xarray#6543 as discussed.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6544/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1218276081 PR_kwDOAMm_X8427ENH 6536 Remove duplicate tests max-sixty 5635139 closed 0     2 2022-04-28T06:43:18Z 2022-04-29T17:49:06Z 2022-04-29T17:47:04Z MEMBER   0 pydata/xarray/pulls/6536

Merge this in a few days when we know nothing is missing

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6536/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1218065176 PR_kwDOAMm_X8426XPm 6532 Restrict annotations to a single run in GHA max-sixty 5635139 closed 0     2 2022-04-28T01:30:51Z 2022-04-28T06:38:10Z 2022-04-28T03:17:22Z MEMBER   0 pydata/xarray/pulls/6532

Currently we get a lot of duplicates:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6532/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1200356907 I_kwDOAMm_X85Hi_4r 6473 RTD concurrency limit max-sixty 5635139 closed 0     2 2022-04-11T18:14:05Z 2022-04-19T06:29:24Z 2022-04-19T06:29:24Z MEMBER      

What is your issue?

From https://github.com/pydata/xarray/pull/6472, and some PRs this weekend:

Is anyone familiar with what's going on with RTD? Did our concurrency limit drop?

Are there alternatives (e.g. running the tests on GHA even if the actual docs get built on RTD?). If we have to pay RTD for a subscription for a bit until we make changes then we could do that (I'm happy to given my recently poor contribution track-record!)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6473/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1207211171 I_kwDOAMm_X85H9JSj 6499 Added `automerge` max-sixty 5635139 closed 0     2 2022-04-18T16:24:35Z 2022-04-18T18:21:39Z 2022-04-18T16:24:41Z MEMBER      

What is your issue?

@pydata/xarray

Because our pipeline takes a while, it can be helpful to have an option to "merge when tests pass" — I've now set that up. So you can click here and it'll do just that.

Somewhat annoyingly / confusingly, the "required checks" need to be specified manually, in https://github.com/pydata/xarray/settings/branch_protection_rules/2465574 — there's no option for just "all checks".

So if we change the checks — e.g. add Python 3.11 — that list needs to be updated. If we remove a check from our CI and don't update the list, it won't be possible to merge the PR without clicking the red "Admin Override" box — so we should keep it up to date.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6499/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1207198806 PR_kwDOAMm_X842XpSv 6498 Restrict stalebot on projects & milestones max-sixty 5635139 closed 0     2 2022-04-18T16:11:10Z 2022-04-18T16:25:08Z 2022-04-18T16:11:12Z MEMBER   0 pydata/xarray/pulls/6498

Closes #6497

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6498/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1200531581 PR_kwDOAMm_X842B792 6474 Fix `Number` import max-sixty 5635139 closed 0     2 2022-04-11T21:04:47Z 2022-04-11T22:18:31Z 2022-04-11T21:07:51Z MEMBER   0 pydata/xarray/pulls/6474  
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6474/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1098504712 PR_kwDOAMm_X84ww9To 6152 Add pyupgrade onto pre-commit max-sixty 5635139 closed 0     2 2022-01-10T23:52:30Z 2022-01-19T20:40:16Z 2022-01-19T20:39:39Z MEMBER   0 pydata/xarray/pulls/6152
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

This auto-fixes problems, so the bar to add it on is low...
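For reference, adding pyupgrade to pre-commit is a short config entry of this shape (the rev and target version shown are illustrative, not the exact values merged):

```yaml
# Illustrative .pre-commit-config.yaml entry
- repo: https://github.com/asottile/pyupgrade
  rev: v2.31.0  # pin to whatever is current
  hooks:
    - id: pyupgrade
      args: [--py37-plus]  # target version is a project choice
```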

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6152/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1076340035 PR_kwDOAMm_X84vp0L8 6060 Add release notes for 0.20.2 max-sixty 5635139 closed 0     2 2021-12-10T02:03:24Z 2021-12-10T02:58:58Z 2021-12-10T02:04:29Z MEMBER   0 pydata/xarray/pulls/6060
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6060/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
976210502 MDExOlB1bGxSZXF1ZXN0NzE3MjAxNTUz 5728 Type annotate tests max-sixty 5635139 closed 0     2 2021-08-21T19:56:06Z 2021-08-22T04:00:14Z 2021-08-22T03:32:22Z MEMBER   0 pydata/xarray/pulls/5728
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

Following on from https://github.com/pydata/xarray/pull/5690 & https://github.com/pydata/xarray/pull/5694, this type annotates most test files; though probably not most tests — I haven't done the huge files. They require between zero and a dozen fixes, and just ignoring assignment ignores some but not all the errors (I didn't end up using it much).

It also does useful things — fixes a bunch more annotations!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5728/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
974282338 MDExOlB1bGxSZXF1ZXN0NzE1NjEwMDE3 5714 Whatsnew for float-to-top max-sixty 5635139 closed 0     2 2021-08-19T05:21:00Z 2021-08-19T17:26:14Z 2021-08-19T17:26:11Z MEMBER   0 pydata/xarray/pulls/5714
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5714/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
930516190 MDExOlB1bGxSZXF1ZXN0Njc4MTkyMTUz 5537 Fix junit test results max-sixty 5635139 closed 0     2 2021-06-25T21:32:02Z 2021-06-25T22:51:02Z 2021-06-25T22:21:41Z MEMBER   0 pydata/xarray/pulls/5537
  • [x] Passes pre-commit run --all-files

(may require some iteration)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5537/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
907746039 MDExOlB1bGxSZXF1ZXN0NjU4NTQzMjYx 5412 Add junit test results to CI max-sixty 5635139 closed 0     2 2021-05-31T22:52:11Z 2021-06-03T03:46:00Z 2021-06-03T03:18:28Z MEMBER   0 pydata/xarray/pulls/5412
  • [x] Passes pre-commit run --all-files

I'm not sure this is the best approach; potentially the comment this app adds is too noisy: https://github.com/marketplace/actions/publish-unit-test-results.

But it would be good to get an idea of the test results from the PR page and to understand where the test suite spends its time

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5412/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
907723038 MDExOlB1bGxSZXF1ZXN0NjU4NTI0NTM4 5410 Promote backend test fixture to conftest max-sixty 5635139 closed 0     2 2021-05-31T21:28:39Z 2021-06-02T16:17:02Z 2021-06-02T16:16:59Z MEMBER   0 pydata/xarray/pulls/5410

Also adds an example of parameterizing a test in dataset.py

  • [x] Passes pre-commit run --all-files

This uses the backend fixture (though not the dataset fixture, like https://github.com/pydata/xarray/pull/5350/files did)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5410/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
897112103 MDExOlB1bGxSZXF1ZXN0NjQ5MTQ4MDkx 5353 Use dict in docs Dataset construction max-sixty 5635139 closed 0     2 2021-05-20T16:48:07Z 2021-05-21T22:37:28Z 2021-05-21T22:37:25Z MEMBER   0 pydata/xarray/pulls/5353
  • [x] Passes pre-commit run --all-files

As discussed in other issues — I find this way significantly easier to read, particularly when we're passing tuples of tuples as part of the construction. I did these manually but can write something to do it more broadly if people agree.
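The stylistic difference in question, illustrated on plain data, independent of xarray:

```python
import numpy as np

# brace-literal style, as much of the docs had it
data_vars_braces = {"foo": (["time", "x"], np.zeros((4, 2)))}

# dict(...) style the PR moves to: fewer quotes to scan past,
# particularly once tuples of tuples appear in the construction
data_vars_dict = dict(foo=(["time", "x"], np.zeros((4, 2))))
```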

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5353/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
895918276 MDU6SXNzdWU4OTU5MTgyNzY= 5348 v0.18.2 max-sixty 5635139 closed 0     2 2021-05-19T21:21:18Z 2021-05-20T01:51:12Z 2021-05-19T21:35:47Z MEMBER      

I'm about to release this as v0.18.2: https://github.com/pydata/xarray/compare/v0.18.1...max-sixty:release-0.18.2?expand=1 given https://github.com/pydata/xarray/issues/5346

Let me know any thoughts @pydata/xarray , thanks

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5348/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
728893769 MDU6SXNzdWU3Mjg4OTM3Njk= 4535 Support operations with pandas Offset objects max-sixty 5635139 closed 0     2 2020-10-24T22:49:57Z 2021-03-06T23:02:01Z 2021-03-06T23:02:01Z MEMBER      

Is your feature request related to a problem? Please describe.

Currently xarray objects containing datetimes don't operate with pandas' offset objects:

```python
times = pd.date_range("2000-01-01", freq="6H", periods=10)
ds = xr.Dataset(
    {
        "foo": (["time", "x", "y"], np.random.randn(10, 5, 3)),
        "bar": ("time", np.random.randn(10), {"meta": "data"}),
        "time": times,
    }
)
ds.attrs["dsmeta"] = "dsdata"
ds.resample(time="24H").mean("time").time + to_offset("8H")
```

raises:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-29-f9de46fe6c54> in <module>
----> 1 ds.resample(time="24H").mean("time").time + to_offset("8H")

/usr/local/lib/python3.8/site-packages/xarray/core/dataarray.py in func(self, other)
   2763
   2764     variable = (
-> 2765         f(self.variable, other_variable)
   2766         if not reflexive
   2767         else f(other_variable, self.variable)

/usr/local/lib/python3.8/site-packages/xarray/core/variable.py in func(self, other)
   2128     with np.errstate(all="ignore"):
   2129         new_data = (
-> 2130             f(self_data, other_data)
   2131             if not reflexive
   2132             else f(other_data, self_data)

TypeError: unsupported operand type(s) for +: 'numpy.ndarray' and 'pandas._libs.tslibs.offsets.Hour'
```

This is an issue because pandas resampling has deprecated loffset — from our test suite:

```
xarray/tests/test_dataset.py::TestDataset::test_resample_loffset
  /Users/maximilian/workspace/xarray/xarray/tests/test_dataset.py:3844: FutureWarning: 'loffset' in .resample() and in Grouper() is deprecated.

  df.resample(freq="3s", loffset="8H")

  becomes:

  from pandas.tseries.frequencies import to_offset
  df = df.resample(freq="3s").mean()
  df.index = df.index.to_timestamp() + to_offset("8H")

  ds.bar.to_series().resample("24H", loffset="-12H").mean()
```

...and so we'll need to support something like this in order to maintain existing behavior.
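The replacement recipe from the deprecation message, written out runnable on a plain pandas DataFrame (hypothetical data):

```python
import pandas as pd
from pandas.tseries.frequencies import to_offset

# hourly-ish data: two days of 6-hour observations
idx = pd.date_range("2000-01-01", freq="6h", periods=8)
df = pd.DataFrame({"x": range(8)}, index=idx)

# old (deprecated): df.resample("24h", loffset="-12h").mean()
# new: resample first, then shift the resulting index by the offset
res = df.resample("24h").mean()
res.index = res.index + to_offset("-12h")
```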

Describe the solution you'd like

I'm not completely sure; I think probably supporting the operations between xarray objects containing datetime objects and pandas' offset objects.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4535/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
817091683 MDExOlB1bGxSZXF1ZXN0NTgwNjQyNzYw 4964 Some refinements to How To Release max-sixty 5635139 closed 0     2 2021-02-26T06:47:17Z 2021-02-26T21:53:28Z 2021-02-26T19:11:43Z MEMBER   0 pydata/xarray/pulls/4964
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4964/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
572995385 MDU6SXNzdWU1NzI5OTUzODU= 3811 Don't warn on empty reductions max-sixty 5635139 closed 0     2 2020-02-28T20:45:38Z 2021-02-21T23:05:46Z 2021-02-21T23:05:46Z MEMBER      

Numpy warns when computing over an all-NaN slice. We handle that case reasonably, so we should catch and discard the warning.

MCVE Code Sample

```python
In [1]: import xarray as xr

In [2]: import numpy as np

In [3]: da = xr.DataArray(np.asarray([np.nan] * 3))

In [4]: da
Out[4]:
<xarray.DataArray (dim_0: 3)>
array([nan, nan, nan])
Dimensions without coordinates: dim_0

In [6]: da.mean()
[...]/python3.6/site-packages/xarray/core/nanops.py:142: RuntimeWarning: Mean of empty slice
  return np.nanmean(a, axis=axis, dtype=dtype)
Out[6]:
<xarray.DataArray ()>
array(nan)
```

Expected Output

No warning

Problem Description

Somewhat discussed in https://github.com/pydata/xarray/issues/1164, and https://github.com/pydata/xarray/issues/1652, but starting a separate issue as it's more important than just noise in the test suite, and not covered by the existing work on comparisons & arithmetic
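The shape of the fix being proposed can be sketched with the stdlib warnings machinery (a sketch only, not xarray's actual nanops code):

```python
import warnings

import numpy as np

def nanmean_quiet(a, axis=None):
    # compute the reduction but discard numpy's all-NaN RuntimeWarning;
    # the NaN result itself is still returned unchanged
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", r"Mean of empty slice", RuntimeWarning)
        return np.nanmean(a, axis=axis)
```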

Output of xr.show_versions()

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3811/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
684078209 MDExOlB1bGxSZXF1ZXN0NDcyMDQ3Mzk0 4369 Silencing numpy warnings max-sixty 5635139 closed 0     2 2020-08-22T22:31:11Z 2021-02-21T23:05:00Z 2020-09-02T22:26:32Z MEMBER   0 pydata/xarray/pulls/4369
  • [x] Closes #3811
  • [x] Tests added
  • [x] Passes isort . && black . && mypy . && flake8
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

Is this the right approach? Or should we be using np.errstate machinery?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4369/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
777580305 MDExOlB1bGxSZXF1ZXN0NTQ3ODM3MTU1 4752 Replace bare assert with assert_identical max-sixty 5635139 closed 0     2 2021-01-03T05:59:05Z 2021-01-05T04:18:33Z 2021-01-04T02:13:05Z MEMBER   0 pydata/xarray/pulls/4752
  • [x] closes #3908
  • [x] Passes isort . && black . && mypy . && flake8

IIRC assert_identical shows nice pytest error messages.

This was a quick regex but did a quick check that it looked reasonable.
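The kind of regex involved (a hypothetical example of the rewrite, not the exact pattern used):

```python
import re

# rewrite `assert (a == b).all()`-style checks into assert_identical calls
PATTERN = re.compile(r"assert \((\w+) == (\w+)\)\.all\(\)")

line = "assert (result == expected).all()"
rewritten = PATTERN.sub(r"assert_identical(\1, \2)", line)
```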

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4752/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
777588292 MDExOlB1bGxSZXF1ZXN0NTQ3ODQyOTEy 4754 Write black diff on errors max-sixty 5635139 closed 0     2 2021-01-03T07:17:45Z 2021-01-03T20:57:22Z 2021-01-03T20:57:19Z MEMBER   0 pydata/xarray/pulls/4754
  • [x] Passes isort . && black . && mypy . && flake8
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst

Inspired by @Illviljan in https://github.com/pydata/xarray/pull/4750

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4754/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
298620060 MDExOlB1bGxSZXF1ZXN0MTcwMjEyMDc3 1927 Some backend tests require dask max-sixty 5635139 closed 0     2 2018-02-20T14:46:30Z 2020-11-08T21:08:43Z 2020-11-08T19:40:35Z MEMBER   0 pydata/xarray/pulls/1927
  • [x] Closes https://github.com/pydata/xarray/issues/1923 (remove if there is no corresponding issue, which should only be the case for minor changes)
  • [x] Tests passed (for all non-documentation changes)

LMK if these are the right ones - I basically added the decorator to anything that was failing. Though not sure we need to be 100% accurate here - worst case we could skip the file - either people are writing backend code and have dask installed, or they're not...

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1927/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
576692586 MDU6SXNzdWU1NzY2OTI1ODY= 3837 Should we run tests on docstrings? max-sixty 5635139 closed 0     2 2020-03-06T04:35:16Z 2020-09-11T12:34:34Z 2020-09-11T12:34:34Z MEMBER      

Currently almost none of the docstrings pass running pytest --doctest-modules xarray/core, though mostly for easy reasons.

Should we run these in CI?

I've recently started using docstring tests in another project, and they've worked pretty well.

CC @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3837/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
688638584 MDExOlB1bGxSZXF1ZXN0NDc1ODUyMjQ5 4388 Pin pre-commit versions max-sixty 5635139 closed 0     2 2020-08-30T02:19:55Z 2020-08-31T16:31:54Z 2020-08-31T16:31:49Z MEMBER   0 pydata/xarray/pulls/4388
  • [x] Passes isort . && black . && mypy . && flake8
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

I had some issues with the version changing — this approach seems more explicit and less likely to cause hard-to-debug issues

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4388/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
665834531 MDExOlB1bGxSZXF1ZXN0NDU2Nzg2OTQz 4273 Add requires_scipy to least_squares test max-sixty 5635139 closed 0     2 2020-07-26T18:23:34Z 2020-08-16T18:00:33Z 2020-08-16T18:00:29Z MEMBER   0 pydata/xarray/pulls/4273
  • [x] Passes isort . && black . && mypy . && flake8
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4273/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
655282530 MDExOlB1bGxSZXF1ZXN0NDQ3ODE5MjIx 4217 Add release summary, some touch-ups max-sixty 5635139 closed 0     2 2020-07-11T21:28:51Z 2020-07-26T19:17:02Z 2020-07-23T15:26:40Z MEMBER   0 pydata/xarray/pulls/4217

This PR:

  • Proposes adding a Release Summary with a PR. This is something that maybe more people read than anything else we send, and yet it's the least reviewed text of anything I write; and I'm making guesses on what's important.
  • Proposes pasting the release summary into the GH Release page; to the extent people follow xarray by following our releases, it's nice to have something there.
  • Some touch-ups

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4217/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
132579684 MDU6SXNzdWUxMzI1Nzk2ODQ= 755 count docstring mistakenly includes skipna max-sixty 5635139 closed 0     2 2016-02-10T00:49:34Z 2020-07-24T16:09:25Z 2020-07-24T16:09:25Z MEMBER      

Is this a mistake or am I missing something?

http://xray.readthedocs.org/en/stable/generated/xarray.DataArray.count.html?highlight=count#xarray.DataArray.count

skipna : bool, optional
    If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/755/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
586450690 MDU6SXNzdWU1ODY0NTA2OTA= 3881 Flaky test: test_uamiv_format_write max-sixty 5635139 closed 0     2 2020-03-23T19:13:34Z 2020-03-23T20:32:15Z 2020-03-23T20:32:15Z MEMBER      

I've seen a couple of failures recently on this test. Flaky tests are really annoying and would be great to fix or if impossible, remove it. Does anyone have any ideas what's causing this?

```
__ TestPseudoNetCDFFormat.test_uamiv_format_write __

self = <xarray.tests.test_backends.TestPseudoNetCDFFormat object at 0x7f15352b9d00>

    def test_uamiv_format_write(self):
        fmtkw = {"format": "uamiv"}

        expected = open_example_dataset(
            "example.uamiv", engine="pseudonetcdf", backend_kwargs=fmtkw
        )
        with self.roundtrip(
            expected,
            save_kwargs=fmtkw,
            open_kwargs={"backend_kwargs": fmtkw},
            allow_cleanup_failure=True,
        ) as actual:
>           assert_identical(expected, actual)

E       AssertionError: Left and right Dataset objects are not identical
E
E       Differing attributes:
E       L   WTIME: 190117
E       R   WTIME: 190118

xarray/tests/test_backends.py:3563: AssertionError
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3881/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
571802536 MDExOlB1bGxSZXF1ZXN0MzgwNjIyMzgy 3802 Raise on multiple string args to groupby max-sixty 5635139 closed 0     2 2020-02-27T03:50:03Z 2020-02-29T20:48:07Z 2020-02-29T20:47:12Z MEMBER   0 pydata/xarray/pulls/3802
  • [x] Tests added
  • [x] Passes isort -rc . && black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

From the comment:

While we don't generally check the type of every arg, passing multiple dimensions as multiple arguments is common enough, and the consequences hidden enough (strings evaluate as true) to warrant checking here. A future version could make squeeze kwarg only, but would face backward-compat issues.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3802/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
551727037 MDExOlB1bGxSZXF1ZXN0MzY0Mzk2MTk3 3707 remove PR pre-black instructions max-sixty 5635139 closed 0     2 2020-01-18T06:01:15Z 2020-01-29T21:39:10Z 2020-01-29T17:21:31Z MEMBER   0 pydata/xarray/pulls/3707

I don't think these should be needed any longer

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3707/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
512873086 MDExOlB1bGxSZXF1ZXN0MzMyNzkyOTM2 3451 Remove deprecated behavior from dataset.drop docstring max-sixty 5635139 closed 0     2 2019-10-26T19:06:16Z 2019-10-29T15:03:53Z 2019-10-29T14:49:17Z MEMBER   0 pydata/xarray/pulls/3451

I'm less up to speed on this behavior, but IIUC this part of the docstring refers to deprecated behavior—is that correct or am I missing something?

xref https://github.com/pydata/xarray/issues/3266

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3451/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
508791645 MDU6SXNzdWU1MDg3OTE2NDU= 3414 Allow ellipsis in place of xr.ALL_DIMS? max-sixty 5635139 closed 0     2 2019-10-18T00:44:48Z 2019-10-28T21:14:42Z 2019-10-28T21:14:42Z MEMBER      

@crusaderky had a good idea to allow ellipsis (...) as a placeholder for 'other dims' in transpose.

What about using it as a placeholder for xr.ALL_DIMS in groupby etc operations? I find it nicer than custom sentinel values, and I think should be fairly low-confusion—thoughts?
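As a sketch of the semantics, a single `...` entry would expand to "all remaining dims" (the helper below is hypothetical, just illustrating the idea):

```python
def resolve_dims(requested, all_dims):
    # Hypothetical helper: expand a single Ellipsis entry into all
    # dims not otherwise named, preserving their original order.
    if ... not in requested:
        return list(requested)
    i = requested.index(...)
    rest = [d for d in all_dims if d not in requested]
    return list(requested[:i]) + rest + list(requested[i + 1:])
```

For example, `resolve_dims(("y", ...), ("x", "y", "z"))` gives `["y", "x", "z"]`.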

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3414/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
509490043 MDExOlB1bGxSZXF1ZXN0MzMwMDQ0MDY2 3419 Python3.6 idioms max-sixty 5635139 closed 0     2 2019-10-19T18:15:48Z 2019-10-21T01:32:07Z 2019-10-21T00:16:58Z MEMBER   0 pydata/xarray/pulls/3419
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

Use python 3.6+ idioms; most changes are f-strings

Mostly from pyupgrade --py36-plus **/*.py

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3419/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
490746942 MDExOlB1bGxSZXF1ZXN0MzE1MjY5NjIw 3292 Remove some deprecations max-sixty 5635139 closed 0     2 2019-09-08T12:14:31Z 2019-09-08T22:58:28Z 2019-09-08T22:58:16Z MEMBER   0 pydata/xarray/pulls/3292
  • [x] Closes some of #3280
  • [x] Passes black . && mypy . && flake8
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3292/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
478715330 MDExOlB1bGxSZXF1ZXN0MzA1NzcyMTc4 3195 Update black instructions max-sixty 5635139 closed 0     2 2019-08-08T22:28:59Z 2019-08-09T00:27:28Z 2019-08-09T00:27:09Z MEMBER   0 pydata/xarray/pulls/3195

This needed updating with the relevant commit post-black change

I also added a line to apply the patch of manual changes we made on top of the black changes. It makes the list of steps a bit burdensome. But I've tested them a couple of times and, where people have lots of code changes, it's still going to be much easier than resolving manually.

I generally wouldn't want to suggest people curl data from the internet (even if we trust the individual). I think it's probably OK in this instance: the address contains a hash of the contents, and it's only being fed into git apply -, not executed. But lmk if that's still a security concern.

I'll update the issue I opened on black with the above issue; we should have done one commit with only the black changes, and then another with any manual changes.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3195/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
467015096 MDU6SXNzdWU0NjcwMTUwOTY= 3098 Codecov bot comments? max-sixty 5635139 closed 0     2 2019-07-11T17:21:46Z 2019-07-18T01:12:38Z 2019-07-18T01:12:38Z MEMBER      

ref https://github.com/pydata/xarray/pull/3090#issuecomment-510323490

Do we want the bot commenting on the PR, at least while the early checks are wrong? People can always click on Details in the Codecov check (e.g. https://codecov.io/gh/pydata/xarray/compare/8f0d9e5c9909c93a90306ed7cb5a80c1c2e1c97d...ab6960f623017afdc99c34bcbb69b402aea3f7d4/diff) to see a full report.

Happy to PR to disable, lmk

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3098/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
423511704 MDU6SXNzdWU0MjM1MTE3MDQ= 2833 Integrate has undefined name 'dim' max-sixty 5635139 closed 0     2 2019-03-20T23:09:19Z 2019-07-05T07:10:37Z 2019-07-05T07:10:37Z MEMBER      

https://github.com/pydata/xarray/blob/master/xarray/core/dataset.py#L4085

Should that be called coord or dim? Currently there's a variable that's undefined:

```python
raise ValueError('Coordinate {} does not exist.'.format(dim))
```

I would have made a quick fix, but I'm not sure of the correct name.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2833/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
448340294 MDU6SXNzdWU0NDgzNDAyOTQ= 2990 Some minor errors in repo / flake8 max-sixty 5635139 closed 0     2 2019-05-24T20:24:04Z 2019-06-24T18:18:25Z 2019-06-24T18:18:24Z MEMBER      

Currently we use pycodestyle: https://github.com/pydata/xarray/blob/ccd0b047ea8ca89c68ab6cfa942557e676e7d402/.travis.yml#L63

I think we used to use flake8. I can't find / remember the reason we moved to pycodestyle.

master has some non-trivial issues that flake8 would catch, including a test overwriting another and undefined variables:

```
flake8 xarray --ignore=I,W503,W504,F401,E265,E402

xarray/core/options.py:62:8: F632 use ==/!= to compare str, bytes, and int literals
xarray/core/dataset.py:4148:69: F821 undefined name 'dim'
xarray/backends/netCDF4_.py:177:12: F632 use ==/!= to compare str, bytes, and int literals
xarray/tests/test_dataarray.py:1264:9: F841 local variable 'foo' is assigned to but never used
xarray/tests/test_dataarray.py:1270:18: F821 undefined name 'x'
xarray/tests/test_dataarray.py:1301:5: F811 redefinition of unused 'test_reindex_fill_value' from line 1262
xarray/tests/test_dataarray.py:1647:16: F632 use ==/!= to compare str, bytes, and int literals
xarray/tests/test_dataarray.py:1648:16: F632 use ==/!= to compare str, bytes, and int literals
xarray/tests/test_dataset.py:4759:8: F632 use ==/!= to compare str, bytes, and int literals
xarray/tests/test_dataset.py:4761:10: F632 use ==/!= to compare str, bytes, and int literals
xarray/tests/test_distributed.py:62:9: F811 redefinition of unused 'loop' from line 12
xarray/tests/test_distributed.py:92:9: F811 redefinition of unused 'loop' from line 12
xarray/tests/test_distributed.py:117:49: F811 redefinition of unused 'loop' from line 12
xarray/tests/test_distributed.py:141:53: F811 redefinition of unused 'loop' from line 12
xarray/tests/test_distributed.py:152:51: F811 redefinition of unused 'loop' from line 12
```

Happy to fix these in a PR. For ensuring these don't crop up again, any objection to flake8?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2990/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
455275723 MDExOlB1bGxSZXF1ZXN0Mjg3NTU4MTE3 3019 Update issue templates max-sixty 5635139 closed 0     2 2019-06-12T15:19:04Z 2019-06-15T03:35:23Z 2019-06-15T03:35:17Z MEMBER   0 pydata/xarray/pulls/3019

This:

  • Updates to the newer GitHub format
  • Strengthens the language around MCVE. It doesn't say people have to, but makes it less of a suggestion
  • Only contains 'Bug Report'. Should we have others? I don't immediately see how they'd be different

(I realize because I did this from the GH website, it created from this repo rather than my own fork. That's a mistake.)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3019/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
454313971 MDExOlB1bGxSZXF1ZXN0Mjg2NzkwNDg5 3011 Pytest capture uses match, not message max-sixty 5635139 closed 0     2 2019-06-10T18:46:43Z 2019-06-11T15:01:23Z 2019-06-11T15:01:19Z MEMBER   0 pydata/xarray/pulls/3011
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API

message was always going to be ignored, and with the newer pytest version raises a warning that an unknown kwarg is supplied

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3011/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
393807679 MDExOlB1bGxSZXF1ZXN0MjQwNzE4NDAx 2629 Flake fixed max-sixty 5635139 closed 0     2 2018-12-24T04:23:47Z 2019-06-10T19:09:44Z 2018-12-25T01:21:50Z MEMBER   0 pydata/xarray/pulls/2629

Towards https://github.com/pydata/xarray/issues/2627

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2629/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
365526259 MDU6SXNzdWUzNjU1MjYyNTk= 2451 Shift changes non-float arrays to object, even for shift=0 max-sixty 5635139 closed 0     2 2018-10-01T15:50:38Z 2019-03-04T16:31:57Z 2019-03-04T16:31:57Z MEMBER      

```python
In [15]: xr.DataArray(np.random.randint(2, size=(100, 100)).astype(bool)).shift(dim_0=0)
Out[15]:
<xarray.DataArray (dim_0: 100, dim_1: 100)>
array([[False, True, True, ..., True, True, False],
       [False, True, False, ..., False, True, True],
       [False, True, False, ..., False, True, False],
       ...,
       [False, True, False, ..., False, True, True],
       [True, False, True, ..., False, False, False],
       [False, True, True, ..., True, True, False]], dtype=object)  # <-- could be bool
Dimensions without coordinates: dim_0, dim_1
```

Problem description

This causes memory bloat

Expected Output

As above with dtype=bool

Output of xr.show_versions()

```
In [16]: xr.show_versions()

INSTALLED VERSIONS
------------------
commit: f9c4169150286fa1aac020ab965380ed21fe1148
python: 2.7.15.final.0
python-bits: 64
OS: Darwin
OS-release: 18.0.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None

xarray: 0.10.9+12.gf9c41691
pandas: 0.22.0
numpy: 1.14.2
scipy: 1.0.0
netCDF4: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
PseudonetCDF: None
rasterio: None
iris: None
bottleneck: 1.2.1
cyordereddict: None
dask: None
distributed: None
matplotlib: 2.1.2
cartopy: None
seaborn: 0.8.1
setuptools: 39.2.0
pip: 18.0
conda: None
pytest: 3.6.3
IPython: 5.8.0
sphinx: None
```

The shift=0 is mainly theoretical. To avoid casting to object in practical scenarios, we could add a fill_value argument (e.g. fill_value=False) and fill with that rather than NaN

CC @Ivocrnkovic
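A minimal numpy sketch of the fill-based behaviour proposed above (not xarray's implementation): writing a caller-supplied fill value into the vacated slots keeps the original dtype, where inserting NaN would force a cast to float or object.

```python
import numpy as np

def shift_filled(arr, n, fill_value):
    # Shift along axis 0, writing `fill_value` into the vacated slots
    # instead of NaN, so bool/int arrays keep their dtype.
    out = np.full_like(arr, fill_value)
    if n > 0:
        out[n:] = arr[:-n]
    elif n < 0:
        out[:n] = arr[-n:]
    else:
        out[:] = arr
    return out

mask = np.random.randint(2, size=(100, 100)).astype(bool)
assert shift_filled(mask, 1, False).dtype == bool  # stays bool, not object
```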

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2451/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
402783087 MDExOlB1bGxSZXF1ZXN0MjQ3Mzg4NzMx 2703 deprecate compat & encoding max-sixty 5635139 closed 0     2 2019-01-24T16:14:21Z 2019-02-01T03:16:13Z 2019-02-01T03:16:10Z MEMBER   0 pydata/xarray/pulls/2703

Still need to adjust the tests

  • [x] Closes https://github.com/pydata/xarray/issues/1188
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2703/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
399340074 MDExOlB1bGxSZXF1ZXN0MjQ0NzkwMjQ5 2677 Small typo max-sixty 5635139 closed 0     2 2019-01-15T13:15:31Z 2019-01-15T15:40:19Z 2019-01-15T13:29:54Z MEMBER   0 pydata/xarray/pulls/2677

@fujiisoup is this a small typo?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2677/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
367493434 MDExOlB1bGxSZXF1ZXN0MjIwOTA5OTkx 2470 fill_value in shift max-sixty 5635139 closed 0     2 2018-10-06T20:33:29Z 2018-12-28T01:07:17Z 2018-12-27T22:58:30Z MEMBER   0 pydata/xarray/pulls/2470
  • [x] Closes #https://github.com/pydata/xarray/issues/2451
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

Should we be more defensive around which fill_values can be passed? Currently, if the array and fill value have incompatible dtypes, we don't preemptively warn or cast, apart from the case of np.nan, which then uses the default filler

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2470/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
367424540 MDU6SXNzdWUzNjc0MjQ1NDA= 2468 LooseVersion check on xarray in tests seems unstable max-sixty 5635139 closed 0     2 2018-10-06T05:07:47Z 2018-10-10T13:47:23Z 2018-10-10T13:47:23Z MEMBER      

There's an elegant check against the xarray version to decide whether to run a test, so the test 'comes online' at 0.12: https://github.com/pydata/xarray/blob/638b251c622359b665208276a2cb23b0fbc5141b/xarray/tests/test_dataarray.py#L2029

But unfortunately, this seems very unstable in tests, because without a release, LooseVersion can't interpret the strings correctly (e.g. LooseVersion('0.10.9+29.g33d9391a')):

  • A lot of the time it raises: https://travis-ci.org/max-sixty/xarray/jobs/437913418#L1036
  • Occasionally it runs the test, failing to return that we're prior to 0.12: https://travis-ci.org/max-sixty/xarray/jobs/437914645#L5036

Here's the bug in the python issue tracker: https://bugs.python.org/issue14894

Is that synopsis correct? Should we attempt to take another approach? I'll disable it in my current check so tests can pass, but lmk thoughts.
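A small pure-Python sketch that emulates LooseVersion's parsing (the helper is mine, not distutils code) shows why dev strings are fragile: the parsed components mix ints and strings, and comparing an int with a str raises TypeError on Python 3.

```python
import re

def loose_parts(vstring):
    # Emulates distutils' LooseVersion parsing: split into numeric and
    # alphabetic components, converting digit runs to ints.
    parts = [p for p in re.split(r"(\d+|[a-z]+|\.)", vstring) if p and p != "."]
    return [int(p) if p.isdigit() else p for p in parts]

loose_parts("0.12")                 # [0, 12] — a clean release, all ints
loose_parts("0.10.9+29.g33d9391a")  # mixes ints with '+', 'g', ... strings
```

Comparing two such mixed lists hits an int-vs-str comparison at some position, which is the intermittent raise described above.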

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2468/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
367486218 MDExOlB1bGxSZXF1ZXN0MjIwOTA1MTkw 2469 isort max-sixty 5635139 closed 0     2 2018-10-06T19:05:54Z 2018-10-08T01:38:00Z 2018-10-07T22:39:14Z MEMBER   0 pydata/xarray/pulls/2469

Do we want to keep isort up to date? I think it's a balance between consistency vs. overhead

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2469/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
271131362 MDU6SXNzdWUyNzExMzEzNjI= 1691 Coordinates passed as sets raise with a bad error message max-sixty 5635139 closed 0     2 2017-11-03T22:06:28Z 2018-08-08T15:56:57Z 2018-08-08T15:56:57Z MEMBER      

If a coordinate is passed as a set, xr raises with a bad error message:

```python
In [12]: xr.Dataset(dict(date=[1,2,3], sec={4}))
---------------------------------------------------------------------------
MissingDimensionsError                    Traceback (most recent call last)
<ipython-input-12-40ccdd94e21f> in <module>()
----> 1 xr.Dataset(dict(date=[1,2,3], sec={4}))

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/dataset.py in __init__(self, data_vars, coords, attrs, compat)
    360             coords = {}
    361         if data_vars is not None or coords is not None:
--> 362             self._set_init_vars_and_dims(data_vars, coords, compat)
    363         if attrs is not None:
    364             self.attrs = attrs

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/dataset.py in _set_init_vars_and_dims(self, data_vars, coords, compat)
    378
    379         variables, coord_names, dims = merge_data_and_coords(
--> 380             data_vars, coords, compat=compat)
    381
    382         self._variables = variables

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/merge.py in merge_data_and_coords(data, coords, compat, join)
    363     objs = [data, coords]
    364     explicit_coords = coords.keys()
--> 365     return merge_core(objs, compat, join, explicit_coords=explicit_coords)
    366
    367

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/merge.py in merge_core(objs, compat, join, priority_arg, explicit_coords, indexes)
    425     coerced = coerce_pandas_values(objs)
    426     aligned = deep_align(coerced, join=join, copy=False, indexes=indexes)
--> 427     expanded = expand_variable_dicts(aligned)
    428
    429     coord_names, noncoord_names = determine_coords(coerced)

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/merge.py in expand_variable_dicts(list_of_variable_dicts)
    211             var_dicts.append(coords)
    212
--> 213         var = as_variable(var, name=name)
    214         sanitized_vars[name] = var
    215

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xarray/core/variable.py in as_variable(obj, name)
    103             'cannot set variable %r with %r-dimensional data '
    104             'without explicit dimension names. Pass a tuple of '
--> 105             '(dims, data) instead.' % (name, data.ndim))
    106         obj = Variable(name, obj, fastpath=True)
    107     else:

MissingDimensionsError: cannot set variable 'sec' with 0-dimensional data without explicit dimension names. Pass a tuple of (dims, data) instead.
```

But OK if a list:

```python
In [13]: xr.Dataset(dict(date=[1,2,3], sec=[4]))
Out[13]:
<xarray.Dataset>
Dimensions:  (date: 3, sec: 1)
Coordinates:
  * date     (date) int64 1 2 3
  * sec      (sec) int64 4
Data variables:
    *empty*
```

Problem description

There may be reasons to not allow sets: they're not ordered, so unless you're constructing your data using the coords, you'll get random results

The error message should be better though. And I would vote to handle sets the same as lists

Expected Output

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Darwin
OS-release: 17.0.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

xarray: 0.10.0rc1-2-gf83361c
pandas: 0.21.0
numpy: 1.13.3
scipy: 0.19.1
netCDF4: None
h5netcdf: None
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: None
matplotlib: 2.0.2
cartopy: None
seaborn: 0.8.1
setuptools: 36.5.0
pip: 9.0.1
conda: None
pytest: 3.2.3
IPython: 6.2.1
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1691/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
161991202 MDU6SXNzdWUxNjE5OTEyMDI= 890 BUG: Dataset constructor puts lists in coords rather that data_vars max-sixty 5635139 closed 0     2 2016-06-23T18:28:44Z 2018-07-31T18:28:29Z 2018-07-31T18:28:29Z MEMBER      

I'd expect a to be a data_vars rather than a coord here:

```python
In [9]: xr.Dataset(data_vars={'a': [2,3]}, attrs={'name':'hello'})
Out[9]:
<xarray.Dataset>
Dimensions:  (a: 2)
Coordinates:
  * a        (a) int64 2 3
Data variables:
    *empty*
Attributes:
    name: hello
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/890/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
211860631 MDU6SXNzdWUyMTE4NjA2MzE= 1294 python 3.6 tests break with bottleneck installed max-sixty 5635139 closed 0     2 2017-03-04T06:35:24Z 2017-12-10T01:52:34Z 2017-12-10T01:52:34Z MEMBER      

Installing 3.6 environment (from the ci path in xarray): tests pass on master Then installing bottleneck: 3 tests in test_dataarray.py fail on master

I can debug further unless anyone has a view

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1294/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
194243942 MDExOlB1bGxSZXF1ZXN0OTcwNTcwMzI= 1157 PERF: Use len rather than size max-sixty 5635139 closed 0     2 2016-12-08T04:23:24Z 2016-12-09T18:36:50Z 2016-12-09T18:36:50Z MEMBER   0 pydata/xarray/pulls/1157

Potential mitigation for https://github.com/pandas-dev/pandas/issues/14822

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1157/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
125092434 MDU6SXNzdWUxMjUwOTI0MzQ= 708 'to_array' creates a read-only numpy array max-sixty 5635139 closed 0     2 2016-01-06T01:46:12Z 2016-01-06T02:42:32Z 2016-01-06T02:42:32Z MEMBER      

Is this intended? It's creating some problems downstream with pandas, but maybe that's a pandas issue?

Note the WRITEABLE : False here:

```python
In [126]: ds = xray.Dataset({'a': xray.DataArray(pd.np.random.rand(5,3))}, coords={'b': xray.DataArray(pd.np.random.rand(5))})

In [127]: ds.to_array('d').b.values.flags
Out[127]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : False
  ALIGNED : True
  UPDATEIFCOPY : False
```

Without the to_array, it's fine:

```python
In [128]: ds.b.values.flags
Out[128]:
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
```

xref https://github.com/pydata/pandas/issues/11502
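One plausible source of the flag (an assumption on my part, not confirmed by the issue): numpy marks broadcast views as non-writeable, and stacking variables into one array involves broadcasting coordinates. A minimal numpy illustration, not xarray code:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.broadcast_to(a, (4, 2, 3))  # broadcast views are read-only
assert not b.flags.writeable

c = b.copy()  # an explicit copy is writeable again
assert c.flags.writeable
```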

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/708/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
124176573 MDU6SXNzdWUxMjQxNzY1NzM= 689 Should Dataset enforce some ordering of dims in its variables? max-sixty 5635139 closed 0     2 2015-12-29T07:41:02Z 2015-12-29T21:20:56Z 2015-12-29T21:20:56Z MEMBER      

I'm not sure on this one. I'm currently having a bunch of issues with this sort of Dataset: (notice the dims are (d, c) and (c, d) for different variables)

```python
<xray.Dataset>
Dimensions:  (c: 193, d: 6781)
Coordinates:
  * d        (d) object 5218 5219 5220 5221 5222 5223 5224 ...
  * c        (c) object LDS. ...
    j        (c, d) bool False False False False ...
Data variables:
    r        (d, c) float64 nan -0.05083 nan ...
    s        (d, c) float64 nan -0.05083 nan ...
    n        (c, d) float64 nan nan nan nan nan ...
```

In my case, this is particularly painful when passing the result of ds.r.to_pandas() into a function expecting a DataFrame with a certain orientation when that orientation isn't reliable.

Is this a problem generally?

If it is, I could imagine a few solutions - enforce ordering, offer a method on a DataSet to align the dims, offer a kwarg on .to_pandas() to allow specifying the dims-axis mapping, etc

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/689/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
122171841 MDU6SXNzdWUxMjIxNzE4NDE= 679 Is 'name' an xray supported attribute? max-sixty 5635139 closed 0     2 2015-12-15T01:34:48Z 2015-12-15T03:16:37Z 2015-12-15T03:16:37Z MEMBER      

If it is, should the Dataset constructor take a list of DataArrays and use their names as keys (and likewise anywhere else a dict-like mapping of names is required)?

If it's not, we potentially shouldn't be using it in the internals.

I think it's the first, given it's in the docs (although not throughout the docs).
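The constructor behaviour suggested above amounts to deriving the dict-like mapping from each array's `.name`. A minimal sketch, using a stand-in object since no real DataArray is assumed here (`mapping_from_named` is a hypothetical helper, not an xray API):

``` python
from types import SimpleNamespace

def mapping_from_named(arrays):
    # Hypothetical: build the {name: array} mapping a Dataset
    # constructor could accept from a list of named objects.
    for a in arrays:
        if a.name is None:
            raise ValueError("every array needs a name to be used as a key")
    return {a.name: a for a in arrays}

a = SimpleNamespace(name='a', data=[1, 2, 3])
b = SimpleNamespace(name='b', data=[4, 5])
assert list(mapping_from_named([a, b])) == ['a', 'b']
```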

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/679/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
115979105 MDU6SXNzdWUxMTU5NzkxMDU= 652 ENH: Apply numpy function to named axes max-sixty 5635139 closed 0     2 2015-11-09T22:11:19Z 2015-11-10T16:18:24Z 2015-11-10T16:18:24Z MEMBER      

I'm currently transitioning sequences of pandas Panels over to xray Datasets. Part of our process applies a set of functions to Panels; for example:

``` python
panel = panel.apply(lambda x: x.rank(ascending=False), axis=(1, 2))
df = np.nanpercentile(panel, q=75, axis=2)
```

One of the benefits of xray is the clarity that comes from named axes. Is there a way of applying a function over named axes? For example:

``` python
result = data_array.apply(np.argsort, axis=('Risk', 'Dates'))
result = data_array.apply(np.nanpercentile, axis='Dates')
```
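The requested behaviour boils down to translating dim names into positional axes before delegating to the NumPy function. A hedged sketch (`apply_over_dims` is a hypothetical name, not an xray API, and not every NumPy function accepts a tuple for `axis`):

``` python
import numpy as np

def apply_over_dims(values, dims, func, dim, **kwargs):
    # Hypothetical helper: map named dims to positional axes, then delegate.
    names = dim if isinstance(dim, tuple) else (dim,)
    axes = tuple(dims.index(d) for d in names)
    return func(values, axis=axes if len(axes) > 1 else axes[0], **kwargs)

arr = np.arange(24.0).reshape(2, 3, 4)
out = apply_over_dims(arr, ('Risk', 'Dates', 'Assets'), np.nanpercentile, 'Dates', q=75)
assert out.shape == (2, 4)
```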

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/652/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

``` sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```
Powered by Datasette · Queries took 63.768ms · About: xarray-datasette