issue_comments

1,180 rows where user = 2443309 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1561302572 https://github.com/pydata/xarray/issues/7873#issuecomment-1561302572 https://api.github.com/repos/pydata/xarray/issues/7873 IC_kwDOAMm_X85dD5Ys jhamman 2443309 2023-05-24T14:47:56Z 2023-05-24T14:47:56Z MEMBER

We dropped Python 3.8 support prior to the Pandas 2 release and have no plans to backport support at this time.

xref: #7765

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
  No `Xarray` conda package compatible with pandas>=2 for python 3.8 1724137371
1553453937 https://github.com/pydata/xarray/pull/7019#issuecomment-1553453937 https://api.github.com/repos/pydata/xarray/issues/7019 IC_kwDOAMm_X85cl9Nx jhamman 2443309 2023-05-18T18:27:52Z 2023-05-18T18:27:52Z MEMBER

👏 Congrats @TomNicholas on getting this in! Such an important contribution. 👏

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Generalize handling of chunked array types 1368740629
1537490801 https://github.com/pydata/xarray/issues/7707#issuecomment-1537490801 https://api.github.com/repos/pydata/xarray/issues/7707 IC_kwDOAMm_X85bpD9x jhamman 2443309 2023-05-07T16:50:35Z 2023-05-07T16:50:35Z MEMBER

See https://github.com/pydata/xarray/pull/7825 for a PR fixing the outstanding Zarr V3 failures.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1650481625
1530740203 https://github.com/pydata/xarray/pull/7793#issuecomment-1530740203 https://api.github.com/repos/pydata/xarray/issues/7793 IC_kwDOAMm_X85bPT3r jhamman 2443309 2023-05-02T01:15:48Z 2023-05-02T01:15:48Z MEMBER

Thanks @keewis!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  adjust the deprecation policy for python 1688716198
1523700289 https://github.com/pydata/xarray/issues/7765#issuecomment-1523700289 https://api.github.com/repos/pydata/xarray/issues/7765 IC_kwDOAMm_X85a0dJB jhamman 2443309 2023-04-26T16:20:47Z 2023-04-26T16:20:47Z MEMBER

@keewis - you are probably the best person for this task. Can you take on updating our min_deps_check.py script?

{
    "total_count": 3,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
  Revisiting Xarray's Minimum dependency versions policy 1673579421
1521182209 https://github.com/pydata/xarray/issues/7765#issuecomment-1521182209 https://api.github.com/repos/pydata/xarray/issues/7765 IC_kwDOAMm_X85aq2YB jhamman 2443309 2023-04-25T05:43:31Z 2023-04-25T05:43:31Z MEMBER

> Instead, maybe we should extend the support for python versions by about 6 months, to a total of 30 months? That would effectively align us with NEP-29, which is our upper limit anyways since that's what our dependencies follow (even if their releases don't usually happen at exactly that date).

This seems like a good action item to come from this discussion, and it aligns with the thrust of #7777.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Revisiting Xarray's Minimum dependency versions policy 1673579421
1513752157 https://github.com/pydata/xarray/issues/7765#issuecomment-1513752157 https://api.github.com/repos/pydata/xarray/issues/7765 IC_kwDOAMm_X85aOgZd jhamman 2443309 2023-04-18T20:24:16Z 2023-04-18T20:24:16Z MEMBER

@keewis - thanks for the clarifications on the version policy related to Python 3.8. Very helpful.

> Instead, maybe we should extend the support for python versions by about 6 months, to a total of 30 months? That would effectively align us with NEP-29, which is our upper limit anyways since that's what our dependencies follow (even if their releases don't usually happen at exactly that date).

This is an interesting proposal. Worth considering.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Revisiting Xarray's Minimum dependency versions policy 1673579421
1500454096 https://github.com/pydata/xarray/issues/1599#issuecomment-1500454096 https://api.github.com/repos/pydata/xarray/issues/1599 IC_kwDOAMm_X85ZbxzQ jhamman 2443309 2023-04-07T16:47:09Z 2023-04-07T16:47:09Z MEMBER

@jmccreight - I don't think there is any specific reason this didn't get done. Still open to contributions here if you are interested.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray to_dict() without converting with numpy tolist() 261727170
1494573149 https://github.com/pydata/xarray/issues/7705#issuecomment-1494573149 https://api.github.com/repos/pydata/xarray/issues/7705 IC_kwDOAMm_X85ZFWBd jhamman 2443309 2023-04-03T15:52:04Z 2023-04-03T15:52:04Z MEMBER

Thanks for following up @28raining. Based on your response, I'm going to close this as it doesn't seem that Xarray is the issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Using xarray in Docker on a mac fails with "No such file or directory: 'gdal-config'" 1649994877
1494570926 https://github.com/pydata/xarray/issues/7710#issuecomment-1494570926 https://api.github.com/repos/pydata/xarray/issues/7710 IC_kwDOAMm_X85ZFVeu jhamman 2443309 2023-04-03T15:50:42Z 2023-04-03T15:50:42Z MEMBER

This works today using the region parameter in to_zarr. Docs are here: https://docs.xarray.dev/en/stable/user-guide/io.html#appending-to-existing-zarr-stores
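A minimal sketch of what a region write looks like (store name, variable, and sizes here are illustrative, not from the issue):

```python
import numpy as np
import xarray as xr

# write the full store once with placeholder data
ds = xr.Dataset({"temp": ("time", np.zeros(10))}, coords={"time": np.arange(10)})
ds.to_zarr("store.zarr", mode="w")

# later, overwrite just a slice of it using the region parameter
update = xr.Dataset({"temp": ("time", np.ones(4))})
update.to_zarr("store.zarr", region={"time": slice(3, 7)})
```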

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update a region of a zarr dataset 1651808718
1492739561 https://github.com/pydata/xarray/issues/7079#issuecomment-1492739561 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85Y-WXp jhamman 2443309 2023-04-01T00:00:24Z 2023-04-01T00:00:24Z MEMBER

@kthyng - any difference when running with parallel=True vs parallel=False?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1492630943 https://github.com/pydata/xarray/issues/7705#issuecomment-1492630943 https://api.github.com/repos/pydata/xarray/issues/7705 IC_kwDOAMm_X85Y972f jhamman 2443309 2023-03-31T21:28:14Z 2023-03-31T21:28:14Z MEMBER

This doesn't seem like an issue with Xarray. If you change your dockerfile to:

```Dockerfile
FROM python:3.11.2-slim
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip install xarray
RUN pip install rasterio
```

I suspect you will find that you get the same error on the rasterio line and not on the xarray line.

A second note: Xarray and netcdf4 have not released complete support for python 3.11. For that reason, you may also want to use the python 3.10 image.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Using xarray in Docker on a mac fails with "No such file or directory: 'gdal-config'" 1649994877
1492098719 https://github.com/pydata/xarray/issues/6323#issuecomment-1492098719 https://api.github.com/repos/pydata/xarray/issues/6323 IC_kwDOAMm_X85Y756f jhamman 2443309 2023-03-31T15:14:59Z 2023-03-31T15:14:59Z MEMBER

This issue was discussed at this week's dev meeting. I will summarize what we discussed:

  1. General agreement that propagating encoding through arbitrary operations (e.g. slice, chunk, computation) leads to inconsistent states that are hard to protect against. This often leads to problems when serializing datasets in our backends.
  2. The primary benefit of keeping encoding on Xarray objects is the ability to exactly roundtrip datasets. However, this benefit is less obvious after a dataset has been modified.
  3. We currently have two APIs for setting encoding (e.g. to_netcdf(..., encoding={...}) and ds.encoding = {...}). We should change this by deprecating setting encoding on Xarray objects using the .encoding property.
  4. We can move towards providing utilities that expose a dataset's source encoding (e.g. open_dataset(..., return_encoding=True)).

Specific action items that can happen now:

  • [x] add reset_encoding to Dataset/DataArray api (https://github.com/pydata/xarray/issues/7686)
  • [ ] add a DeprecationWarning to the @property.setter for encoding on Dataset/DataArray/Variable
  • [ ] document the change in a callout in the Xarray user guide

Longer term action items:

  • [ ] add option to backend readers to keep / discard interpreted encoding attributes
  • [ ] disable all encoding propagation by discarding encoding attributes once a Dataset has been modified
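As a rough illustration of the reset_encoding helper named in the first action item (a minimal sketch; the encoding values below are made up):

```python
import xarray as xr

ds = xr.Dataset({"t": ("x", [1.0, 2.0])})
ds["t"].encoding = {"dtype": "float32", "_FillValue": -9999.0}  # e.g. as a backend would set it

clean = ds.reset_encoding()   # returns a copy with all encoding dicts emptied
print(clean["t"].encoding)    # {}
```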

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  propagation of `encoding` 1158378382
1490950943 https://github.com/pydata/xarray/pull/7689#issuecomment-1490950943 https://api.github.com/repos/pydata/xarray/issues/7689 IC_kwDOAMm_X85Y3hsf jhamman 2443309 2023-03-30T20:59:01Z 2023-03-30T20:59:01Z MEMBER

I see how we could do that on the variable but it's a bit messy on the Dataset/DataArray. I suggest we wait until this is a requested feature.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add reset_encoding to dataset/dataarray/variable 1642922680
1490870522 https://github.com/pydata/xarray/pull/7689#issuecomment-1490870522 https://api.github.com/repos/pydata/xarray/issues/7689 IC_kwDOAMm_X85Y3OD6 jhamman 2443309 2023-03-30T20:00:18Z 2023-03-30T20:00:18Z MEMBER

> In before a user asks for it, should we allow deleting only certain keys like dtype for example?

Not opposed to this but do you think we'll really be asked for this? What sort of API do you envision?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  add reset_encoding to dataset/dataarray/variable 1642922680
1487709306 https://github.com/pydata/xarray/issues/7692#issuecomment-1487709306 https://api.github.com/repos/pydata/xarray/issues/7692 IC_kwDOAMm_X85YrKR6 jhamman 2443309 2023-03-28T22:58:30Z 2023-03-28T22:58:30Z MEMBER

Fair enough. I was extrapolating a bit based on the response to #7496. If someone wants to bite off the writable backends task, I'm all for it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature proposal: DataArray.to_zarr() 1644429340
1487525733 https://github.com/pydata/xarray/pull/7670#issuecomment-1487525733 https://api.github.com/repos/pydata/xarray/issues/7670 IC_kwDOAMm_X85Yqddl jhamman 2443309 2023-03-28T20:06:00Z 2023-03-28T20:06:00Z MEMBER

> Should we ping cfgrib to let them know about this change, so they can transfer the removed tests, if they did not do this already?

☝️ @alexamici and @aurghs

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Delete built-in cfgrib backend 1639732867
1487524043 https://github.com/pydata/xarray/issues/7692#issuecomment-1487524043 https://api.github.com/repos/pydata/xarray/issues/7692 IC_kwDOAMm_X85YqdDL jhamman 2443309 2023-03-28T20:04:35Z 2023-03-28T20:04:35Z MEMBER

I was once a fan of the save_dataset route but I think we've opted to stick with the format-specific to_ methods.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Feature proposal: DataArray.to_zarr() 1644429340
1485989169 https://github.com/pydata/xarray/issues/7686#issuecomment-1485989169 https://api.github.com/repos/pydata/xarray/issues/7686 IC_kwDOAMm_X85YkmUx jhamman 2443309 2023-03-27T23:21:27Z 2023-03-27T23:21:27Z MEMBER

As I said in #4817, I think we should pursue the keep_encoding option (in addition to the proposal in this PR).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add reset_encoding to Dataset and DataArray objects 1642635191
1485987859 https://github.com/pydata/xarray/issues/5336#issuecomment-1485987859 https://api.github.com/repos/pydata/xarray/issues/5336 IC_kwDOAMm_X85YkmAT jhamman 2443309 2023-03-27T23:19:41Z 2023-03-27T23:19:41Z MEMBER

This seems like a reasonable thing to do. We can likely reuse a lot of the machinery from keep_attrs to make this happen. @snowman2 - is this something you would be up to work on?
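For context, a minimal sketch of the existing keep_attrs machinery that a hypothetical keep_encoding option could mirror:

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0], dims="x", attrs={"units": "m"})
with xr.set_options(keep_attrs=True):
    result = da * 2
print(result.attrs)  # {'units': 'm'} -- attrs survive the operation; keep_encoding would do the same for .encoding
```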

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ENH: Add keep_encoding to global options 894788930
1485792995 https://github.com/pydata/xarray/issues/6323#issuecomment-1485792995 https://api.github.com/repos/pydata/xarray/issues/6323 IC_kwDOAMm_X85Yj2bj jhamman 2443309 2023-03-27T20:06:07Z 2023-03-27T20:06:24Z MEMBER

See also https://github.com/pydata/xarray/issues/7686. The ideas presented here are also great!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  propagation of `encoding` 1158378382
1485339487 https://github.com/pydata/xarray/issues/7079#issuecomment-1485339487 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85YiHtf jhamman 2443309 2023-03-27T15:28:39Z 2023-03-27T15:28:39Z MEMBER

@cefect, @pnorton-usgs, @kthyng - Is this still an issue for you? If so, could you try to run the xarray test suite in #7079 and report back? We haven't been able to trigger the error reported here so we could use some help running the test suite in an "offending" environment.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1484400508 https://github.com/pydata/xarray/issues/7672#issuecomment-1484400508 https://api.github.com/repos/pydata/xarray/issues/7672 IC_kwDOAMm_X85Yeid8 jhamman 2443309 2023-03-27T02:44:10Z 2023-03-27T02:44:10Z MEMBER

@margocrawf - thanks for opening this bug report and for providing a complete example demonstrating the issue. Unfortunately (or perhaps fortunately), running your example does not reproduce the same error for me. I have a few questions:

  1. Can you add the full output of show_versions? In particular, make sure to include Zarr.
  2. Have you tried swapping Zarr for the netcdf4 backend?
  3. Are you able to try with a separate environment? If so, try your example above with the latest versions of Xarray/Pandas/Numpy/Zarr and report back if the same issue exists.
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  to_zarr writes unexpected NaNs with chunks=-1 1639841581
1484386454 https://github.com/pydata/xarray/issues/7680#issuecomment-1484386454 https://api.github.com/repos/pydata/xarray/issues/7680 IC_kwDOAMm_X85YefCW jhamman 2443309 2023-03-27T02:23:15Z 2023-03-27T02:23:15Z MEMBER

@abunimeh - Thanks for opening this issue. Can you expand on the feature a bit more? What API would you like to see? ds.to_netcdf(..., track_order=False)?

I suspect this will need to be treated like invalid_netcdf as it will only apply to the h5netcdf backend:

https://github.com/pydata/xarray/blob/86f3f21ab3d0dff6fdb4a0bccd27c62f9e4a3238/xarray/core/dataset.py#L1892-L1895

Note: it would be nice if we had backend_kwargs on to_netcdf since the options that scipy/netcdf4/h5netcdf support are increasingly different.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow track_order to be passed to h5netcdf 1641109066
1484380153 https://github.com/pydata/xarray/pull/7551#issuecomment-1484380153 https://api.github.com/repos/pydata/xarray/issues/7551 IC_kwDOAMm_X85Yedf5 jhamman 2443309 2023-03-27T02:15:54Z 2023-03-27T02:15:54Z MEMBER

@djhoese - I think we're just missing tests for this. No reason it can't go out in the next release.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Support for the new compression arguments. 1596511582
1470801895 https://github.com/pydata/xarray/issues/1440#issuecomment-1470801895 https://api.github.com/repos/pydata/xarray/issues/1440 IC_kwDOAMm_X85Xqqfn jhamman 2443309 2023-03-15T20:33:53Z 2023-03-15T20:34:39Z MEMBER

@lskopintseva - This feature has not been implemented in Xarray (yet). In the meantime, you might find something like this helpful:

```python
import xarray as xr

ds = xr.open_dataset("dataset.nc")
for v in ds.data_vars:
    # get variable chunksizes
    chunksizes = ds[v].encoding.get('chunksizes', None)
    if chunksizes is not None:
        chunks = dict(zip(ds[v].dims, chunksizes))
        # chunk the array using the underlying chunksizes
        ds[v] = ds[v].chunk(chunks)
```

FWIW, I think this would be a nice feature to add to the netcdf4 and h5netcdf backends in Xarray. Contributions welcome!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  If a NetCDF file is chunked on disk, open it with compatible dask chunks 233350060
1468854639 https://github.com/pydata/xarray/issues/3374#issuecomment-1468854639 https://api.github.com/repos/pydata/xarray/issues/3374 IC_kwDOAMm_X85XjPFv jhamman 2443309 2023-03-14T21:14:33Z 2023-03-14T21:14:33Z MEMBER

Agreed. I think we can close this now.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Key error in to_netcdf  502720385
1452481865 https://github.com/pydata/xarray/issues/7577#issuecomment-1452481865 https://api.github.com/repos/pydata/xarray/issues/7577 IC_kwDOAMm_X85Wkx1J jhamman 2443309 2023-03-02T20:09:19Z 2023-03-02T20:09:19Z MEMBER

@tomvothecoder - We would welcome a PR to add xCDAT project to the page linked above.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Consider adding xCDAT to list of Xarray related projects 1607416298
1426084810 https://github.com/pydata/xarray/issues/7519#issuecomment-1426084810 https://api.github.com/repos/pydata/xarray/issues/7519 IC_kwDOAMm_X85VAFPK jhamman 2443309 2023-02-10T16:57:54Z 2023-02-10T16:57:54Z MEMBER

Thanks for the report @derhintze. I agree this seems like a bug. I'm a bit confused by this actually. Our __getitem__ implementation is here:

https://github.com/pydata/xarray/blob/7683442774c8036e0b13851df62bda067b2a65d5/xarray/core/dataset.py#L1418-L1441

and the keys view of a dataset is not hashable:

```python
>>> isinstance(d.keys(), typing.Hashable)
False
```

Which should be triggering the second @overload in the code above. So I'm not sure what's going on!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Selecting variables from Dataset with view on dict keys is of type DataArray 1579956621
1423258353 https://github.com/pydata/xarray/issues/7515#issuecomment-1423258353 https://api.github.com/repos/pydata/xarray/issues/7515 IC_kwDOAMm_X85U1TLx jhamman 2443309 2023-02-08T21:24:45Z 2023-02-08T21:24:45Z MEMBER

Here's a small example to get things started:

```python
x = aesara.shared(np.random.standard_normal((3, 4)))
xda = xr.DataArray(x, dims=('x', 'y'))
```

this currently returns the following error:

```python
ValueError                                Traceback (most recent call last)
Cell In[138], line 2
      1 x = aesara.shared(np.random.standard_normal((3, 4)))
----> 2 xda = xr.DataArray(x, dims=('x', 'y'))

File ~/miniforge3/envs/demo-env/lib/python3.10/site-packages/xarray/core/dataarray.py:428, in DataArray.__init__(self, data, coords, dims, name, attrs, indexes, fastpath)
    426 data = _check_data_shape(data, coords, dims)
    427 data = as_compatible_data(data)
--> 428 coords, dims = _infer_coords_and_dims(data.shape, coords, dims)
    429 variable = Variable(dims, data, attrs, fastpath=True)
    430 indexes, coords = _create_indexes_from_coords(coords)

File ~/miniforge3/envs/demo-env/lib/python3.10/site-packages/xarray/core/dataarray.py:142, in _infer_coords_and_dims(shape, coords, dims)
    140     dims = tuple(dims)
    141 elif len(dims) != len(shape):
--> 142     raise ValueError(
    143         "different number of dimensions on data "
    144         f"and dims: {len(shape)} vs {len(dims)}"
    145     )
    146 else:
    147     for d in dims:

ValueError: different number of dimensions on data and dims: 0 vs 2
```

This tells me there is a bit of work to do at the core of Aesara's numpy compatibility. Xarray will make frequent references to attributes like data.shape, data.ndim, etc expecting to get numpy-like results.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Aesara as an array backend in Xarray 1575494367
1422987752 https://github.com/pydata/xarray/issues/7515#issuecomment-1422987752 https://api.github.com/repos/pydata/xarray/issues/7515 IC_kwDOAMm_X85U0RHo jhamman 2443309 2023-02-08T17:29:36Z 2023-02-08T17:29:36Z MEMBER

Thanks all for the discussion. Welcome @brandonwillard, @twiecki, and @rlouf to the Xarray project. And thanks to @rabernat and @TomNicholas for helping orient the conversation.

As a next step, I think it would be fun to try putting an Aesara array into Xarray and see how it goes. In our experience, this process inevitably brings up a few issues where interaction between Xarray and Aesara developers is fruitful.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Aesara as an array backend in Xarray 1575494367
1411224057 https://github.com/pydata/xarray/issues/7324#issuecomment-1411224057 https://api.github.com/repos/pydata/xarray/issues/7324 IC_kwDOAMm_X85UHZH5 jhamman 2443309 2023-01-31T23:42:53Z 2023-01-31T23:42:53Z MEMBER

@adanb13 - thanks for opening this issue. It would be helpful if you can share a sample workflow that produces the memory error that you mention above.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Memory issues arising when trying to make dataArray values JSON serializable 1465230135
1411223051 https://github.com/pydata/xarray/pull/7323#issuecomment-1411223051 https://api.github.com/repos/pydata/xarray/issues/7323 IC_kwDOAMm_X85UHY4L jhamman 2443309 2023-01-31T23:41:29Z 2023-01-31T23:41:29Z MEMBER

@adanb13 - do you have plans to revisit this PR? If not, do you mind if we close it for now? Based on the comments above, I think an issue discussing the use case and potential solutions would be a good next step.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  (Issue #7324) added functions that return data values in memory efficient manner 1465047346
1411179632 https://github.com/pydata/xarray/pull/817#issuecomment-1411179632 https://api.github.com/repos/pydata/xarray/issues/817 IC_kwDOAMm_X85UHORw jhamman 2443309 2023-01-31T22:51:57Z 2023-01-31T22:51:57Z MEMBER

Closing this as stale and out of date with our current backends. @swnesbitt (or others) - feel free to open a new PR if you feel there is more to do here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  modified: xarray/backends/api.py 146079798
1411070038 https://github.com/pydata/xarray/pull/7496#issuecomment-1411070038 https://api.github.com/repos/pydata/xarray/issues/7496 IC_kwDOAMm_X85UGzhW jhamman 2443309 2023-01-31T21:05:47Z 2023-01-31T21:05:47Z MEMBER

@paigem recently added the following to the open_dataset docstring (https://github.com/pydata/xarray/pull/7438):

> In order to reproduce the default behavior of xr.open_zarr(...) use `xr.open_dataset(..., engine='zarr', chunks={})`.
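In other words (a small sketch, assuming an existing Zarr store at "store.zarr"):

```python
import xarray as xr

# these two calls are intended to be equivalent
ds1 = xr.open_zarr("store.zarr")
ds2 = xr.open_dataset("store.zarr", engine="zarr", chunks={})
```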

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate open_zarr 1564661430
1410658476 https://github.com/pydata/xarray/pull/3526#issuecomment-1410658476 https://api.github.com/repos/pydata/xarray/issues/3526 IC_kwDOAMm_X85UFPCs jhamman 2443309 2023-01-31T16:08:36Z 2023-01-31T16:08:36Z MEMBER

Bringing this old issue back since I think there is value in getting something like this in. My suggestion would be to write a zarr-specific _validate_attrs function and use that, rather than stubbing in a special case for zarr in the generic one. @eddienko, I know this has sat for a long time but would you like to try to finish this up?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow nested dictionaries in the Zarr backend (#3517) 522519084
1410626513 https://github.com/pydata/xarray/pull/4395#issuecomment-1410626513 https://api.github.com/repos/pydata/xarray/issues/4395 IC_kwDOAMm_X85UFHPR jhamman 2443309 2023-01-31T15:48:17Z 2023-01-31T15:48:17Z MEMBER

@hmaarrfk - wondering what your current thoughts on this issue are? Perhaps this can be handled better upstream in zarr-python? Should we close this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  WIP: Ensure that zarr.ZipStores are closed 689502005
1410622606 https://github.com/pydata/xarray/pull/6956#issuecomment-1410622606 https://api.github.com/repos/pydata/xarray/issues/6956 IC_kwDOAMm_X85UFGSO jhamman 2443309 2023-01-31T15:46:11Z 2023-01-31T15:46:11Z MEMBER

@ianliu - are you interested in finishing up this PR? From my perspective, this seems like a useful feature that would be nice to get in to xarray.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expose `memory` argument for "netcdf4" engine 1352315409
1409716721 https://github.com/pydata/xarray/issues/7079#issuecomment-1409716721 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85UBpHx jhamman 2443309 2023-01-31T03:57:43Z 2023-01-31T03:57:43Z MEMBER

Update: I pushed two new tests to #7488. They are not failing in our test env. If someone that has reported this issue could try running the test suite, that would be super helpful in terms of confirming where the problem lies.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1409358970 https://github.com/pydata/xarray/issues/7079#issuecomment-1409358970 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85UARx6 jhamman 2443309 2023-01-30T21:22:22Z 2023-01-30T23:33:01Z MEMBER

I've opened #7488 which I think has actually exposed a few other failures. I doubt I'll have much time to put into this issue in the near term so anyone should feel free to jump in here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1409092949 https://github.com/pydata/xarray/pull/7491#issuecomment-1409092949 https://api.github.com/repos/pydata/xarray/issues/7491 IC_kwDOAMm_X85T_Q1V jhamman 2443309 2023-01-30T18:13:59Z 2023-01-30T18:13:59Z MEMBER

Thanks @jrbourbeau! I just merged #7458 which moved our isort config to ruff. I suspect pre-commit will be fixed once you move to main.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix `isort` `pre-commit` install 1562931560
1406984977 https://github.com/pydata/xarray/issues/7478#issuecomment-1406984977 https://api.github.com/repos/pydata/xarray/issues/7478 IC_kwDOAMm_X85T3OMR jhamman 2443309 2023-01-27T19:34:17Z 2023-01-27T19:34:17Z MEMBER

Hi @alimanfoo! I agree that the current behavior is a bit annoying for users. We may need to develop a refresh_engines function that:

  • updates the BACKEND_ENTRYPOINTS dict (it would be nice if entrypoints could help us here)
  • clears the list_engines cache
  • reinstates the list_engines cache
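A minimal sketch of what such a helper might look like, assuming list_engines is the lru_cache-wrapped function in xarray.backends.plugins (hypothetical, not an actual implementation):

```python
from xarray.backends import plugins

def refresh_engines():
    plugins.list_engines.cache_clear()  # drop the cached engine listing
    return plugins.list_engines()       # rebuild it from the currently installed entrypoints
```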
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Refresh list of backends (engines)? 1558347743
1404113750 https://github.com/pydata/xarray/issues/7079#issuecomment-1404113750 https://api.github.com/repos/pydata/xarray/issues/7079 IC_kwDOAMm_X85TsRNW jhamman 2443309 2023-01-25T19:18:37Z 2023-01-25T19:18:37Z MEMBER

It would be great if someone could put together a MCVE that reproduces the issue here. We have multiple tests in our test suite that use open_mfdataset with parallel=True, including one that runs against a distributed scheduler and one that runs against the threaded scheduler, so I'm surprised we're not catching this. In any event, the next step would be to develop a test that triggers the error so we can sort out a fix.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset parallel=True failing with netcdf4 >= 1.6.1 1385031286
1402872179 https://github.com/pydata/xarray/pull/7461#issuecomment-1402872179 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TniFz jhamman 2443309 2023-01-25T00:23:38Z 2023-01-25T00:23:38Z MEMBER

After thinking about this a bit more, I suggest we leave the numpy dtype issue for later. I'd rather not import the private dtypes (at least as part of this PR).

So, the plan going forward: @dcherian has already approved this PR. I think it would be good to get one more reviewer to double check things here. Then, assuming things are looking good, I'd like to merge. I will open an issue about the dtype import to track that separately.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1402493257 https://github.com/pydata/xarray/pull/7461#issuecomment-1402493257 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TmFlJ jhamman 2443309 2023-01-24T19:41:24Z 2023-01-24T19:41:24Z MEMBER

@Illviljan, @shoyer, or @keewis - do any of you have suggestions for how to respond to this comment? https://github.com/pydata/xarray/blob/b21f62ee37eea3650a58e9ffa3a7c9f4ae83006b/xarray/core/types.py#L57-L62

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1399042271 https://github.com/pydata/xarray/pull/7458#issuecomment-1399042271 https://api.github.com/repos/pydata/xarray/issues/7458 IC_kwDOAMm_X85TY7Df jhamman 2443309 2023-01-20T22:49:39Z 2023-01-20T22:49:39Z MEMBER

I plan to leave this as a WIP PR until #7461 is merged. There is a bit of overlap and dropping Python 3.8 will make this much easier to finish up.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Lint with ruff 1549639421
1398677525 https://github.com/pydata/xarray/pull/7461#issuecomment-1398677525 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TXiAV jhamman 2443309 2023-01-20T17:07:02Z 2023-01-20T20:51:26Z MEMBER

I'm doing a bit of an audit on our conditional version logic. A few questions that I don't know how to resolve on my own.

  • [ ] _SupportsDType - @headtr1ck, do you have a suggestion for how to handle this comment: https://github.com/pydata/xarray/blob/b21f62ee37eea3650a58e9ffa3a7c9f4ae83006b/xarray/core/types.py#L57-L62

  • [x] timedeltas - @spencerkclark, do you have a suggestion for how to handle this comment: https://github.com/pydata/xarray/blob/b21f62ee37eea3650a58e9ffa3a7c9f4ae83006b/xarray/coding/times.py#L358-L363

  • [x] GenericAlias - @Illviljan, do you have a suggestion for how to handle this comment: https://github.com/pydata/xarray/blob/b4e3cbcf17374b68477ed3ff7a8a52c82837ad91/xarray/core/coordinates.py#L31-L40

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1398582379 https://github.com/pydata/xarray/pull/7461#issuecomment-1398582379 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TXKxr jhamman 2443309 2023-01-20T15:46:50Z 2023-01-20T15:46:50Z MEMBER

@pydata/xarray - This PR is ready for discussion / review.

Our minimum versions policy says it's time to drop Python 3.8. But do we want to do that?

I'll note that work is already underway to support Python 3.11 (#7316).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1385772394 https://github.com/pydata/xarray/issues/7448#issuecomment-1385772394 https://api.github.com/repos/pydata/xarray/issues/7448 IC_kwDOAMm_X85SmTVq jhamman 2443309 2023-01-17T17:25:33Z 2023-01-17T17:25:33Z MEMBER

Thanks for the report @bmaranville - we can get this fixed asap.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  REPL not working on xarray.dev 1536707912
1380360662 https://github.com/pydata/xarray/pull/7433#issuecomment-1380360662 https://api.github.com/repos/pydata/xarray/issues/7433 IC_kwDOAMm_X85SRqHW jhamman 2443309 2023-01-12T13:30:54Z 2023-01-12T13:30:54Z MEMBER

Thanks @stefank0!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  fix typo 1530593890
1372887496 https://github.com/pydata/xarray/pull/7418#issuecomment-1372887496 https://api.github.com/repos/pydata/xarray/issues/7418 IC_kwDOAMm_X85R1JnI jhamman 2443309 2023-01-05T22:45:17Z 2023-01-05T22:45:17Z MEMBER

> I personally favor just copying the code into Xarray and archiving the old repo.

I also lean in this direction. At this point, I see little downside to making this change. My suggestion to import xarray-datatree into xarray was meant as a low-lift compromise.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Import datatree in xarray? 1519552711
1371755247 https://github.com/pydata/xarray/pull/3858#issuecomment-1371755247 https://api.github.com/repos/pydata/xarray/issues/3858 IC_kwDOAMm_X85Rw1Lv jhamman 2443309 2023-01-05T03:58:54Z 2023-01-05T03:58:54Z MEMBER

I believe this can be closed now. Pynio is on the way out and it seems like we were leaning away from including this anyway.

@pgierz - please feel free to reopen if I have that wrong.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Backend env 579722569
1371540921 https://github.com/pydata/xarray/issues/7403#issuecomment-1371540921 https://api.github.com/repos/pydata/xarray/issues/7403 IC_kwDOAMm_X85RwA25 jhamman 2443309 2023-01-04T23:24:46Z 2023-01-04T23:24:46Z MEMBER

I think the above example can be reduced to just:

```python
ds = xr.Dataset()
ds.to_zarr('test.zarr')
ds.to_zarr('test.zarr')
```

An interesting data point here is that to_netcdf defaults to w mode. And this works fine:

```python
ds = xr.Dataset()
ds.to_netcdf('test.nc')
ds.to_netcdf('test.nc')
```

Personally, I'll make the argument that we should switch (carefully) from w- to w for the default case.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Zarr error when trying to overwrite part of existing store 1512290017
1352342017 https://github.com/pydata/xarray/issues/7354#issuecomment-1352342017 https://api.github.com/repos/pydata/xarray/issues/7354 IC_kwDOAMm_X85QmxoB jhamman 2443309 2022-12-14T23:08:43Z 2022-12-14T23:09:06Z MEMBER

After thinking about this for a bit longer, I think we should be strongly considering dropping source encoding for datasets generated by open_mfdataset. Or, if nothing else, thinking about ways to alert the user that encoding was not consistent across all of the datasets loaded.

Other relevant issues:

  • https://github.com/pydata/xarray/issues/1614
  • https://github.com/pydata/xarray/issues/6323
  • https://github.com/pydata/xarray/issues/7039

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  'open_mfdataset' zarr zip timestamp issue 1474785646
1352337857 https://github.com/pydata/xarray/issues/7354#issuecomment-1352337857 https://api.github.com/repos/pydata/xarray/issues/7354 IC_kwDOAMm_X85QmwnB jhamman 2443309 2022-12-14T23:04:58Z 2022-12-14T23:04:58Z MEMBER

I took a minute to look into this and think I understand what is going on. First, a little debugging:

```python
for name in [files[0], files[1], path]:
    print(name)
    ds = xr.open_zarr(name, decode_cf=False)
    print(' > time.attrs', ds.time.attrs)
    print(' > time.encoding', ds.time.encoding)
```

```
tmp_dir/2022-09-01T03:00:00.zarr.zip
 > time.attrs {'calendar': 'proleptic_gregorian', 'units': 'days since 2022-09-01 03:00:00'}
 > time.encoding {'chunks': (1,), 'preferred_chunks': {'time': 1}, 'compressor': Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0), 'filters': None, 'dtype': dtype('int64')}
tmp_dir/2022-09-01T04:00:00.zarr.zip
 > time.attrs {'calendar': 'proleptic_gregorian', 'units': 'days since 2022-09-01 04:00:00'}
 > time.encoding {'chunks': (1,), 'preferred_chunks': {'time': 1}, 'compressor': Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0), 'filters': None, 'dtype': dtype('int64')}
tmp.zarr.zip
 > time.attrs {'calendar': 'proleptic_gregorian', 'units': 'days since 2022-09-01'}
 > time.encoding {'chunks': (1,), 'preferred_chunks': {'time': 1}, 'compressor': Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0), 'filters': None, 'dtype': dtype('int64')}
```

A few things that I noticed:

  • the dtype of the time variable is int64.
  • the units attr is days since ....

open_mfdataset tends to take the units of the first file and doesn't check if all the others agree. It also does not clear out the dtype encoding.

One quick solution here is that you could add

```python
del dataset['time'].encoding['units']
```

to the line right after your open_mfdataset call. You could also update the dtype of your time variable to be a float64.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  'open_mfdataset' zarr zip timestamp issue 1474785646
1345876149 https://github.com/pydata/xarray/issues/7354#issuecomment-1345876149 https://api.github.com/repos/pydata/xarray/issues/7354 IC_kwDOAMm_X85QOHC1 jhamman 2443309 2022-12-12T04:57:05Z 2022-12-12T04:57:05Z MEMBER

@peterdudfield - have you tried this workflow with the latest version of xarray (2022.12.0)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  'open_mfdataset' zarr zip timestamp issue 1474785646
1339876391 https://github.com/pydata/xarray/pull/7360#issuecomment-1339876391 https://api.github.com/repos/pydata/xarray/issues/7360 IC_kwDOAMm_X85P3OQn jhamman 2443309 2022-12-06T19:25:02Z 2022-12-06T19:25:02Z MEMBER

This is failing with the following error:

/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/distributed/scheduler.py:120: error: disable_error_code: Invalid error code(s): annotation-unchecked [misc]

Possibly related to https://github.com/pydata/xarray/pull/7319

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1477162465
1320965798 https://github.com/pydata/xarray/issues/4491#issuecomment-1320965798 https://api.github.com/repos/pydata/xarray/issues/4491 IC_kwDOAMm_X85OvFam jhamman 2443309 2022-11-19T20:46:10Z 2022-11-19T20:46:10Z MEMBER

I changed the title of this issue to reflect the current situation. PyNIO is unlikely to have another release and its compatibility with Xarray and the rest of the ecosystem is quickly waning. If someone decides to pick up PyNIO maintenance in the future, they can provide a backend entrypoint.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate pynio backend 715730538
1320922387 https://github.com/pydata/xarray/pull/7301#issuecomment-1320922387 https://api.github.com/repos/pydata/xarray/issues/7301 IC_kwDOAMm_X85Ou60T jhamman 2443309 2022-11-19T16:46:55Z 2022-11-19T16:46:55Z MEMBER

> Maybe we should remove the pynio backend tests all together since they will be skipped anyway if no env has pynio enabled anymore?

I think we should keep them in the test suite as long as the backend is in the code. Although we are no longer able to install pynio in our test envs, I suspect it is still possible to construct an environment centered around pynio that would still work.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate pynio backend 1456026667
1320701325 https://github.com/pydata/xarray/pull/6475#issuecomment-1320701325 https://api.github.com/repos/pydata/xarray/issues/6475 IC_kwDOAMm_X85OuE2N jhamman 2443309 2022-11-19T00:31:47Z 2022-11-19T00:31:47Z MEMBER

This is ready to merge once https://github.com/pydata/xarray/pull/7300 is in.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implement Zarr v3 spec support 1200581329
1320669382 https://github.com/pydata/xarray/issues/4491#issuecomment-1320669382 https://api.github.com/repos/pydata/xarray/issues/4491 IC_kwDOAMm_X85Ot9DG jhamman 2443309 2022-11-18T23:54:07Z 2022-11-18T23:54:07Z MEMBER

I suggest we plan to remove the pynio backend in a future release. The project is very likely dead.

https://github.com/NCAR/pynio/issues/53

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate pynio backend 715730538
1304558730 https://github.com/pydata/xarray/pull/6475#issuecomment-1304558730 https://api.github.com/repos/pydata/xarray/issues/6475 IC_kwDOAMm_X85NwfyK jhamman 2443309 2022-11-05T14:38:18Z 2022-11-05T14:38:18Z MEMBER

@grlee77, @rabernat, @joshmoore, and others - I think this is ready to review and/or merge. The Zarr-V3 tests are active in the CI Upstream / upstream-dev GitHub Action. The test failure on readthedocs is unrelated to this PR.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implement Zarr v3 spec support 1200581329
1294086466 https://github.com/pydata/xarray/pull/6475#issuecomment-1294086466 https://api.github.com/repos/pydata/xarray/issues/6475 IC_kwDOAMm_X85NIjFC jhamman 2443309 2022-10-27T21:32:42Z 2022-10-27T21:32:42Z MEMBER

@grlee77 - I'm curious if you are planning to return to this PR or if it would be helpful if someone brought it to completion?

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  implement Zarr v3 spec support 1200581329
1281657040 https://github.com/pydata/xarray/pull/6475#issuecomment-1281657040 https://api.github.com/repos/pydata/xarray/issues/6475 IC_kwDOAMm_X85MZIjQ jhamman 2443309 2022-10-18T00:23:20Z 2022-10-18T00:23:20Z MEMBER

> A separate issue is that consolidated metadata isn't in the core Zarr v3 spec, so we will need to have a Zarr Enhancement Proposal to formally define how the metadata should be stored. In the experimental API, it behaves as for v2 and is stored at /meta/root/consolidated by default.

I think it would be fine to disallow consolidated metadata for v3 until there is a spec in place. This is going to be experimental for some time so I don't see the harm in raising an error when consolidated=True and version=3. I think this is better than guessing what the v3 extension will specify.
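A minimal sketch of the kind of guard being suggested (names are illustrative, not the actual backend code):

```python
# hypothetical check inside the zarr backend write path
if zarr_version == 3 and consolidated:
    raise NotImplementedError(
        "consolidated metadata is not yet defined by the Zarr v3 spec; "
        "pass consolidated=False when targeting a v3 store"
    )
```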

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  implement Zarr v3 spec support 1200581329
1280184096 https://github.com/pydata/xarray/pull/7172#issuecomment-1280184096 https://api.github.com/repos/pydata/xarray/issues/7172 IC_kwDOAMm_X85MTg8g jhamman 2443309 2022-10-17T02:22:00Z 2022-10-17T02:22:00Z MEMBER

Thanks @hmaarrfk - this looks great.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Lazy import dask.distributed to reduce import time of xarray 1410575877
1255626863 https://github.com/pydata/xarray/issues/6894#issuecomment-1255626863 https://api.github.com/repos/pydata/xarray/issues/6894 IC_kwDOAMm_X85K11hv jhamman 2443309 2022-09-22T22:35:58Z 2022-09-22T22:35:58Z MEMBER

@asmeurer recently pointed me to https://data-apis.org/array-api-tests/. Would that be useful here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Public testing framework for duck array integration 1332231863
1197113605 https://github.com/pydata/xarray/issues/6837#issuecomment-1197113605 https://api.github.com/repos/pydata/xarray/issues/6837 IC_kwDOAMm_X85HWoEF jhamman 2443309 2022-07-27T18:02:39Z 2022-07-27T18:02:39Z MEMBER

While there is overlap in the behavior here, I've always thought of these two methods as having distinct applications. .compute() is a Dask collection method. .load(), which predates Xarray's Dask integration, was originally meant to load data from our lazy loading backend arrays (e.g. a netCDF file).

> Edit: up until I read this issue, I somehow assumed compute would only work with dask while load would also load our lazy array implementation into memory. Not sure how I got that impression, but maybe that's another argument to remove / align load?

This was my impression as well but now I understand that the primary difference is that load is inplace while compute is not.
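A small illustration of that difference (a sketch; assumes network access for the tutorial dataset):

```python
import xarray as xr

ds = xr.tutorial.open_dataset("air_temperature").chunk({"time": 100})  # dask-backed

computed = ds.compute()  # returns a new, fully loaded dataset; ds itself still holds dask arrays
ds.load()                # loads the data into ds in place (and also returns ds)
```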

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Clarify difference between `.load()` and `.compute()` 1319621859
1192699113 https://github.com/pydata/xarray/issues/6816#issuecomment-1192699113 https://api.github.com/repos/pydata/xarray/issues/6816 IC_kwDOAMm_X85HFyTp jhamman 2443309 2022-07-22T15:37:04Z 2022-07-22T15:37:04Z MEMBER

Hi @lumbric - thanks for the report. Though its often challenging when coming from complicated workflows, getting to a MCVE is really the only way to sort out what is going wrong here.

A few suggestions that will hopefully help you get there:

  • Try running your calculation using the synchronous dask scheduler. Dask's graph execution order is not always deterministic so sometimes leads to the conclusion that errors are not reproducible.
  • Try reconstructing a similar workflow with dummy data.

FWIW, the error you are getting tells me that you have an Index/Coordinate with duplicate values in it. Are you sure that is not possible?
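For context, a tiny sketch of the kind of duplicate-label situation that can surface this error (whether it raises depends on the pandas version):

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 0, 1]})  # duplicate labels in "x"
da.reindex(x=[0, 1])  # pandas raises InvalidIndexError when reindexing a non-unique index
```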

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  pandas.errors.InvalidIndexError is raised in some runs when using chunks and map_blocks() 1315111684
1071148047 https://github.com/pydata/xarray/pull/5692#issuecomment-1071148047 https://api.github.com/repos/pydata/xarray/issues/5692 IC_kwDOAMm_X84_2GwP jhamman 2443309 2022-03-17T17:52:24Z 2022-03-17T17:52:24Z MEMBER

Huge! Amazing effort here @benbovy!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Explicit indexes 966983801
1071053923 https://github.com/pydata/xarray/issues/6373#issuecomment-1071053923 https://api.github.com/repos/pydata/xarray/issues/6373 IC_kwDOAMm_X84_1vxj jhamman 2443309 2022-03-17T16:30:25Z 2022-03-17T16:30:25Z MEMBER

It would be nice to avoid this process but I think we need to keep it. Without extract_zarr_variable_encoding and checks against a valid set of options, we have no way to filter out encoding keys that do not apply to the backend. By way of an example:

```python
In [1]: import xarray as xr

In [2]: ds = xr.tutorial.open_dataset('rasm').chunk('100mb')

In [3]: ds.Tair.encoding
Out[3]:
{'source': '/Users/jhamman/Library/Caches/xarray_tutorial_data/eee06791cc19e59f074155a82a7ffe90-rasm.nc',
 'original_shape': (36, 205, 275),
 'dtype': dtype('float64'),
 '_FillValue': 9.969209968386869e+36,
 'coordinates': 'yc xc'}
```

Passing most of these to zarr.create would result in a UserWarning. We could inspect zarr.create to pull the list of valid arguments but I think it is probably cleaner to just list the expected arguments as we do.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Zarr backend should avoid checking for invalid encodings 1171932478
1071033351 https://github.com/pydata/xarray/pull/6348#issuecomment-1071033351 https://api.github.com/repos/pydata/xarray/issues/6348 IC_kwDOAMm_X84_1qwH jhamman 2443309 2022-03-17T16:14:36Z 2022-03-17T16:14:36Z MEMBER

Thanks @tomwhite for making this change!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow write_empty_chunks to be set in Zarr encoding 1165278675
1036719578 https://github.com/pydata/xarray/issues/6269#issuecomment-1036719578 https://api.github.com/repos/pydata/xarray/issues/6269 IC_kwDOAMm_X849yxXa jhamman 2443309 2022-02-11T22:58:12Z 2022-02-11T22:58:12Z MEMBER

To be fair, ds.info is not 100% CDL, but it's darn close.

I think making ds.info CDL compliant would be a great feature addition.
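For reference, a minimal sketch of what ds.info prints today:

```python
import xarray as xr

ds = xr.Dataset({"t": ("x", [1.0, 2.0, 3.0])}, attrs={"title": "demo"})
ds.info()  # prints a CDL-like summary of dimensions, variables, and attributes
```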

Describe alternatives you've considered

Some kind of schema object that can be used to validate or generate an xarray Dataset, but does not contain any data.

You may be interested in xarray-schema then. We're actively working on / using this project and would be more than happy to think about how a cdl-like schema fits in there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Adding CDL Parser/`open_cdl`? 1132894350
1028301979 https://github.com/pydata/xarray/pull/6096#issuecomment-1028301979 https://api.github.com/repos/pydata/xarray/issues/6096 IC_kwDOAMm_X849SqSb jhamman 2443309 2022-02-02T19:52:31Z 2022-02-02T19:52:31Z MEMBER

@ax3l - The packaging dependency should be resolved in https://github.com/pydata/xarray/pull/6207.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Replace distutils.version with packaging.version 1086346755
1026306642 https://github.com/pydata/xarray/pull/6207#issuecomment-1026306642 https://api.github.com/repos/pydata/xarray/issues/6207 IC_kwDOAMm_X849LDJS jhamman 2443309 2022-01-31T23:16:11Z 2022-01-31T23:16:11Z MEMBER

This is ready to merge. @max-sixty - whenever you are ready, go ahead and merge and make the release.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix missing dependecy definition of 'packaging' 1118113789
1026000505 https://github.com/pydata/xarray/pull/6217#issuecomment-1026000505 https://api.github.com/repos/pydata/xarray/issues/6217 IC_kwDOAMm_X849J4Z5 jhamman 2443309 2022-01-31T17:02:49Z 2022-01-31T17:02:49Z MEMBER

Thanks @Peder2911 for opening a PR. The discussion and changes in #6207 are a bit further along so I suggest we focus efforts there.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added missing dependency packaging 1119412984
1025992494 https://github.com/pydata/xarray/pull/6207#issuecomment-1025992494 https://api.github.com/repos/pydata/xarray/issues/6207 IC_kwDOAMm_X849J2cu jhamman 2443309 2022-01-31T16:54:46Z 2022-01-31T16:58:13Z MEMBER

@pydata/xarray - we should try to get this merged and a new release up ASAP as bug reports are starting to pile up. I've pushed changes to the CI configs, documentation, and elsewhere. I've also pulled in the recent changes on main which should hopefully fix the failing CI in the first commit.

Also, thanks @s-weigand for opening this PR and raising the issue.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix missing dependecy definition of 'packaging' 1118113789
1025382413 https://github.com/pydata/xarray/issues/6176#issuecomment-1025382413 https://api.github.com/repos/pydata/xarray/issues/6176 IC_kwDOAMm_X849HhgN jhamman 2443309 2022-01-31T05:02:19Z 2022-01-31T05:02:19Z MEMBER

After thinking on this one a bit, I'm back to thinking we should zero-pad the months. I don't think there is a clear right choice here, with both options offering pros/cons (thanks @ksunden in particular for sharing your perspective). I suggest we try this in 2022.02.0. See #6214 for more details.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray versioning to switch to CalVer 1108564253
1020463316 https://github.com/pydata/xarray/issues/6176#issuecomment-1020463316 https://api.github.com/repos/pydata/xarray/issues/6176 IC_kwDOAMm_X8480wjU jhamman 2443309 2022-01-24T19:27:38Z 2022-01-24T19:27:38Z MEMBER

Interesting insights @ksunden, thanks for sharing!

Do others have thoughts here? I would support stripping the leading zeros from the MM part of the version string in favor of consistency here.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray versioning to switch to CalVer 1108564253
1017189009 https://github.com/pydata/xarray/issues/6033#issuecomment-1017189009 https://api.github.com/repos/pydata/xarray/issues/6033 IC_kwDOAMm_X848oRKR jhamman 2443309 2022-01-20T07:25:28Z 2022-01-20T19:59:22Z MEMBER

It is worth mentioning that, specifically when using Zarr with fsspec, you have multiple layers of caching available.

  1. You can ask fsspec to cache locally:

     ```python
     import xarray as xr

     path = 's3://hrrrzarr/sfc/20211124/20211124_00z_fcst.zarr/surface/PRES'
     ds = xr.open_zarr('simplecache::' + path)
     ```

     (more details on configuration: https://filesystem-spec.readthedocs.io/en/latest/features.html#caching-files-locally)

  2. You can ask Zarr to cache chunks as they are read:

     ```python
     import fsspec
     from zarr.storage import LRUStoreCache

     mapper = fsspec.get_mapper(path)
     store = LRUStoreCache(mapper, max_size=1e9)
     ds = xr.open_zarr(store)
     ```

     (more details on configuration here: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.LRUStoreCache)

  3. Configure a more complex mapper/cache using third-party mappers (e.g. Zict); a rough sketch of stacking the first two layers is included below.
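A rough, untested sketch of what stacking those first two layers might look like (the anonymous-access flag and the cache directory below are assumptions for illustration only):

```python
import fsspec
import xarray as xr
from zarr.storage import LRUStoreCache

path = 's3://hrrrzarr/sfc/20211124/20211124_00z_fcst.zarr/surface/PRES'

# layer 1: fsspec's "simplecache" keeps a local on-disk copy of every chunk it reads
mapper = fsspec.get_mapper(
    'simplecache::' + path,
    s3={'anon': True},                                 # assumes the bucket allows anonymous access
    simplecache={'cache_storage': '/tmp/hrrr-cache'},  # hypothetical local cache directory
)

# layer 2: Zarr's LRUStoreCache keeps up to ~1 GB of recently used chunks in memory
store = LRUStoreCache(mapper, max_size=1_000_000_000)
ds = xr.open_zarr(store)
```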

Perhaps @martindurant has more to add here?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Threadlocking in DataArray calculations for zarr data depending on where it's loaded from (S3 vs local) 1064837571
1017171539 https://github.com/pydata/xarray/issues/6167#issuecomment-1017171539 https://api.github.com/repos/pydata/xarray/issues/6167 IC_kwDOAMm_X848oM5T jhamman 2443309 2022-01-20T06:53:37Z 2022-01-20T06:53:37Z MEMBER

@chiaweh2 - a PR to improve the documentation would certainly be welcome. Do you think it makes sense to add this to the docstring or to the user guide page on IO (https://xarray.pydata.org/en/stable/user-guide/io.html#chunk-based-compression)?
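For context, a minimal sketch of the kind of example such documentation could show (variable name and values are arbitrary; note that complevel and shuffle generally only take effect when zlib=True actually enables compression):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"air": (("time", "x"), np.random.rand(10, 100))})

# complevel and shuffle are ignored unless compression is turned on with zlib=True
encoding = {"air": {"zlib": True, "complevel": 4, "shuffle": True}}
ds.to_netcdf("compressed.nc", engine="netcdf4", encoding=encoding)
```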

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray.to_netcdf encoding option not working for complevel and shuffle 1104213904
1016641816 https://github.com/pydata/xarray/pull/6108#issuecomment-1016641816 https://api.github.com/repos/pydata/xarray/issues/6108 IC_kwDOAMm_X848mLkY jhamman 2443309 2022-01-19T16:29:11Z 2022-01-19T16:29:11Z MEMBER

+1 on making this change. However, just to note, Pandas still seems to have the interpolation argument (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.quantile.html), so we will be introducing an inconsistency there.
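For comparison, a small sketch of the resulting keyword difference (assuming the rename proposed here lands as method=):

```python
import numpy as np
import pandas as pd
import xarray as xr

data = np.arange(10.0)

# xarray, after the rename proposed in this PR
xr.DataArray(data, dims="x").quantile(0.5, method="linear")

# pandas, which still exposes the old keyword
pd.Series(data).quantile(0.5, interpolation="linear")
```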

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  quantile: rename interpolation arg to method 1088615118
1008488315 https://github.com/pydata/xarray/issues/1900#issuecomment-1008488315 https://api.github.com/repos/pydata/xarray/issues/1900 IC_kwDOAMm_X848HE97 jhamman 2443309 2022-01-10T02:06:59Z 2022-01-10T02:06:59Z MEMBER

Related to the Pandera integration, we are prototyping the xarray schema validation functionality in the xarray-schema project.

{
    "total_count": 3,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 3,
    "eyes": 0
}
  Representing & checking Dataset schemas  295959111
1007836513 https://github.com/pydata/xarray/issues/6124#issuecomment-1007836513 https://api.github.com/repos/pydata/xarray/issues/6124 IC_kwDOAMm_X848El1h jhamman 2443309 2022-01-08T00:18:34Z 2022-01-08T00:18:34Z MEMBER

I'm also late to the party, but I would say I fall squarely in the "Dataset is dict-like" camp. If we remove __bool__, should we also remove __len__? Basically, everything @dopplershift said aligns with my perspective here.
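As a small sketch of the dict-like reading (this is the behaviour at the time of writing; the proposal in this issue would make bool() raise instead):

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3])})

len(ds)   # 1 -- the number of data variables, as for a dict
list(ds)  # ['a'] -- iterating yields data variable names
bool(ds)  # currently True whenever there is at least one data variable
```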

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bool(ds) should raise a "the truth value of a Dataset is ambiguous" error 1090229430
988994886 https://github.com/pydata/xarray/issues/4001#issuecomment-988994886 https://api.github.com/repos/pydata/xarray/issues/4001 IC_kwDOAMm_X8468t1G jhamman 2443309 2021-12-08T16:58:36Z 2021-12-08T16:58:36Z MEMBER

@pydata/xarray and others - we're going to cancel the next regularly scheduled meeting (Dec. 22, 2021) so everyone can take a break. We'll be back Jan. 5, 2022.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [community] Bi-weekly community developers meeting 606530049
985824018 https://github.com/pydata/xarray/issues/3981#issuecomment-985824018 https://api.github.com/repos/pydata/xarray/issues/3981 IC_kwDOAMm_X846wnsS jhamman 2443309 2021-12-03T20:59:27Z 2021-12-03T20:59:27Z MEMBER

@andy-sweet - please do join the next call. I've added it to the meeting agenda.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [Proposal] Expose Variable without Pandas dependency 602256880
985051449 https://github.com/pydata/xarray/issues/3981#issuecomment-985051449 https://api.github.com/repos/pydata/xarray/issues/3981 IC_kwDOAMm_X846trE5 jhamman 2443309 2021-12-02T22:25:29Z 2021-12-02T22:25:39Z MEMBER

Hi @sofroniewn - This is certainly something we still want to work on (see this section of our current roadmap and a more detailed proposal that included work in this area). I actually think the relevant part of the linked proposal is the best we have for a working plan here (text copied from doc below):

Xvariable: New lightweight Variable API (labeled arrays without coordinates)

This work area is about cleaning up Xarray’s internals, and allowing our low-level “Variable” data structure to be usable by other projects in the scientific Python ecosystem. We have identified the following key tasks and deliverables:

a. Formalize Xarray’s contract for valid data inside “Variable”, and remove/replace some legacy features that would be hard to justify for a generic library:
   1. Move the “encoding” attribute from “Variable” onto a new “duck array” class that can be used inside a “Variable” (#5082).
   2. Expose Xarray’s internal model for “explicit array indexing” as a public API. Xarray uses this feature because supporting the full complexity of NumPy’s indexing API is hard for many array implementations.

b. Separate out, and possibly rename/re-brand as Xvariable, parts of Xarray’s internals into new projects. This will increase their visibility, find new users for these tools, and improve the maintainability of Xarray itself.
   1. The “Variable” class will move into a separate project that only depends upon NumPy, and that Xarray will in turn depend upon. We hope this project will be of interest to users who want simpler tools than Xarray (#3981).
   2. The new package will support indexing and a limited set of other operations lazily on arrays loaded from disk or remote storage, without loading the entire array into memory. This is of interest to other projects such as Napari (#5081).

c. API stabilization, code consolidation, and maintenance of the external project.

We have a bi-weekly developer call on Wednesday mornings (#4001); one idea would be to devote 10-15 minutes of our next meeting to this topic. Is that something you and/or @andy-sweet would be up for joining?

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [Proposal] Expose Variable without Pandas dependency 602256880
969446050 https://github.com/pydata/xarray/issues/5878#issuecomment-969446050 https://api.github.com/repos/pydata/xarray/issues/5878 IC_kwDOAMm_X845yJKi jhamman 2443309 2021-11-15T23:45:20Z 2021-11-15T23:45:20Z MEMBER

Thought I would drop a related note here. Gcsfs just added support for fixed-key metadata: https://github.com/fsspec/gcsfs/pull/429. So if you are testing out different fsspec/gcsfs options for caching, make sure you are using gcsfs==2021.11.0.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  problem appending to zarr on GCS when using json token  1030811490
969073904 https://github.com/pydata/xarray/pull/4140#issuecomment-969073904 https://api.github.com/repos/pydata/xarray/issues/4140 IC_kwDOAMm_X845wuTw jhamman 2443309 2021-11-15T16:17:59Z 2021-11-15T16:17:59Z MEMBER

Closed via https://github.com/rasterio/rasterio/pull/2141

{
    "total_count": 4,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 4,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support file-like objects in xarray.open_rasterio 636451398
963721281 https://github.com/pydata/xarray/pull/5956#issuecomment-963721281 https://api.github.com/repos/pydata/xarray/issues/5956 IC_kwDOAMm_X845cThB jhamman 2443309 2021-11-09T01:11:58Z 2021-11-09T01:11:58Z MEMBER

I plan to merge this tomorrow unless I hear otherwise.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Create CITATION.cff 1047795001
963476936 https://github.com/pydata/xarray/issues/5954#issuecomment-963476936 https://api.github.com/repos/pydata/xarray/issues/5954 IC_kwDOAMm_X845bX3I jhamman 2443309 2021-11-08T18:59:14Z 2021-11-08T18:59:14Z MEMBER

Thanks @rabernat for opening up this issue. I think now that the refactor for read support is completed, it is a great time to discuss the opportunities for adding write support to the plugin interface.

Pinging @aurghs and @alexamici, since I know they have already developed some thoughts here.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Writeable backends via entrypoints 1047608434
915810862 https://github.com/pydata/xarray/issues/5782#issuecomment-915810862 https://api.github.com/repos/pydata/xarray/issues/5782 IC_kwDOAMm_X842liou jhamman 2443309 2021-09-09T06:47:46Z 2021-09-09T06:47:46Z MEMBER

Hi @dimzog. Thank you for taking the time to report this issue. However, it seems to me that this would be better reported on the geopandas and/or rioxarray issue tracker.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  rio.crs seems to misbehave with geopandas.to_crs 991857313
904970588 https://github.com/pydata/xarray/issues/4118#issuecomment-904970588 https://api.github.com/repos/pydata/xarray/issues/4118 IC_kwDOAMm_X8418MFc jhamman 2443309 2021-08-24T21:00:33Z 2021-08-24T21:00:33Z MEMBER

Thanks @TomNicholas! I've just been starting to look into this. I'm going to give it a spin and would be happy to help with your numbers 3 and 4.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Feature Request: Hierarchical storage and processing in xarray 628719058
870952879 https://github.com/pydata/xarray/pull/5526#issuecomment-870952879 https://api.github.com/repos/pydata/xarray/issues/5526 MDEyOklzc3VlQ29tbWVudDg3MDk1Mjg3OQ== jhamman 2443309 2021-06-29T22:10:51Z 2021-06-29T22:10:51Z MEMBER

Thanks @chrisroat - let's see if @rabernat or @shoyer has an opinion here then.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Handle empty containers in zarr chunk checks 929518413
869831502 https://github.com/pydata/xarray/pull/5526#issuecomment-869831502 https://api.github.com/repos/pydata/xarray/issues/5526 MDEyOklzc3VlQ29tbWVudDg2OTgzMTUwMg== jhamman 2443309 2021-06-28T16:32:04Z 2021-06-28T16:32:04Z MEMBER

Thanks @chrisroat for the PR. This looks great. I do think we should add an entry in whats-new.rst though. Once that is done, I'll merge this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Handle empty containers in zarr chunk checks 929518413
867083353 https://github.com/pydata/xarray/pull/5501#issuecomment-867083353 https://api.github.com/repos/pydata/xarray/issues/5501 MDEyOklzc3VlQ29tbWVudDg2NzA4MzM1Mw== jhamman 2443309 2021-06-23T18:58:58Z 2021-06-23T18:58:58Z MEMBER

@grouny - thanks for this PR! And I see that this is your first time contributing to Xarray -- Welcome!

This fix you propose looks great. Apart from the linter complaining about some whitespace, I think this would benefit from a simple test. Would you mind adding a test to xarray/tests/test_backends.py?

Q for @snowman2 - is rioxarray handling complex datatypes in this way?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  fix dtype complex for rasterio backend 925533850
867052541 https://github.com/pydata/xarray/pull/5019#issuecomment-867052541 https://api.github.com/repos/pydata/xarray/issues/5019 MDEyOklzc3VlQ29tbWVudDg2NzA1MjU0MQ== jhamman 2443309 2021-06-23T18:10:43Z 2021-06-23T18:10:43Z MEMBER

@chrisroat - apologies for letting this sit for so long without a review. If you are interested in finishing it up, we may need to open a new PR to account for the closure here. If so, please ping me directly and I'll get on the review asap.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Handle empty containers in zarr chunk checks 827233565
867024800 https://github.com/pydata/xarray/pull/5019#issuecomment-867024800 https://api.github.com/repos/pydata/xarray/issues/5019 MDEyOklzc3VlQ29tbWVudDg2NzAyNDgwMA== jhamman 2443309 2021-06-23T17:27:55Z 2021-06-23T17:27:55Z MEMBER

@keewis - did we mean to close this or was this related to the switch to main (#5516)?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Handle empty containers in zarr chunk checks 827233565
859234848 https://github.com/pydata/xarray/pull/5445#issuecomment-859234848 https://api.github.com/repos/pydata/xarray/issues/5445 MDEyOklzc3VlQ29tbWVudDg1OTIzNDg0OA== jhamman 2443309 2021-06-11T03:26:15Z 2021-06-11T03:26:15Z MEMBER

pinging @dcherian and/or @crusaderky who may have thoughts on this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add `xr.unify_chunks()` top level method 912932344
848873711 https://github.com/pydata/xarray/issues/4001#issuecomment-848873711 https://api.github.com/repos/pydata/xarray/issues/4001 MDEyOklzc3VlQ29tbWVudDg0ODg3MzcxMQ== jhamman 2443309 2021-05-26T15:35:23Z 2021-05-26T16:01:43Z MEMBER

Something seems to have changed in our regular Zoom room. For this week, let's use this Zoom link.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [community] Bi-weekly community developers meeting 606530049
799958657 https://github.com/pydata/xarray/issues/5038#issuecomment-799958657 https://api.github.com/repos/pydata/xarray/issues/5038 MDEyOklzc3VlQ29tbWVudDc5OTk1ODY1Nw== jhamman 2443309 2021-03-16T05:21:12Z 2021-03-16T05:21:12Z MEMBER

Thanks for the report. Can you say more about how you installed Xarray?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [tests] ImportError: Pandas requires version '0.12.3' or newer of 'xarray' (version '0.0.0' currently installed).  832404698

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
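As a quick illustration of querying this table from a local copy of the database (the github.db path is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # placeholder path to a local copy of this database
rows = conn.execute(
    """
    SELECT html_url, created_at, substr(body, 1, 80) AS snippet
    FROM issue_comments
    WHERE user = 2443309               -- served by idx_issue_comments_user
    ORDER BY updated_at DESC
    LIMIT 5
    """
).fetchall()
for html_url, created_at, snippet in rows:
    print(created_at, html_url, snippet)
```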