
issue_comments


20 rows where author_association = "CONTRIBUTOR" and user = 23487320 sorted by updated_at descending


issue 9

  • Xarray open_mfdataset with engine Zarr 6
  • open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 4
  • xarray.open_mzar: open multiple zarr files (in parallel) 2
  • support darkmode 2
  • Reset file pointer to 0 when reading file stream 2
  • Flexible backends - Harmonise zarr chunking with other backends chunking 1
  • Backend / plugin system `remove_duplicates` raises AttributeError on discovering duplicates 1
  • Opening fsspec s3 file twice results in invalid start byte 1
  • deprecate open_zarr 1

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1435869985 https://github.com/pydata/xarray/pull/7496#issuecomment-1435869985 https://api.github.com/repos/pydata/xarray/issues/7496 IC_kwDOAMm_X85VlaMh weiji14 23487320 2023-02-19T04:55:26Z 2023-02-19T04:55:26Z CONTRIBUTOR

The inconsistency in the chunks argument is non-ideal, but that could be handled by a separate deprecation process.

There was some discussion on whether xr.open_dataset should default to chunks=None, or switch to open_zarr's chunks="auto" at https://github.com/pydata/xarray/pull/4187#issuecomment-690885702 (the last attempt at deprecating open_zarr :slightly_smiling_face:). Just bringing it up for debate.

Also, quite a few people were in favour of keeping open_zarr back in 2020 (see e.g. https://github.com/pydata/xarray/pull/4187#issuecomment-652482831), but maybe times have changed, and a consistent API is more desirable than convenience?
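For context, a minimal sketch of the two defaults being compared (hypothetical store path, not a recommendation either way):

```python
import xarray as xr

store = "example.zarr"  # hypothetical path to an existing Zarr store

# open_zarr defaults to chunks="auto": variables come back as dask arrays,
# using the chunking encoded in the Zarr store.
ds_auto = xr.open_zarr(store)

# open_dataset defaults to chunks=None: variables are lazily-loaded
# NumPy-backed arrays unless chunks is passed explicitly.
ds_lazy = xr.open_dataset(store, engine="zarr")
ds_dask = xr.open_dataset(store, engine="zarr", chunks="auto")
```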

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate open_zarr 1564661430
1330782936 https://github.com/pydata/xarray/pull/7304#issuecomment-1330782936 https://api.github.com/repos/pydata/xarray/issues/7304 IC_kwDOAMm_X85PUiLY weiji14 23487320 2022-11-29T15:00:21Z 2022-11-29T15:09:15Z CONTRIBUTOR

It looks reasonable to me. I'm not sure if the warning is needed or not - we don't expect anyone to see it, or if they do, necessarily do anything about it. It's not unusual for code interacting with a file-like object to move the file pointer.

Hmm, in that case, I'm leaning towards removing the warning. The file pointer is reset anyway after reading the magic byte number, and that hasn't caused any issues (as mentioned in https://github.com/pydata/xarray/issues/6813#issuecomment-1205503288), so it should be more or less safe. Let me push another commit. Edit: done at 929cb62977d630a00ace9747bc86066555b83d0d.
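Roughly, the pattern under discussion looks like this (an illustrative sketch, not xarray's actual helper):

```python
def read_magic_number(filelike, count=8):
    """Peek at the magic number of a file-like object, then put the pointer
    back at byte 0 so the backend that eventually reads the stream starts
    from the beginning."""
    if filelike.tell() != 0:
        filelike.seek(0)  # reset silently instead of warning
    magic = filelike.read(count)
    filelike.seek(0)  # leave the stream where the backend expects it
    return magic
```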

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reset file pointer to 0 when reading file stream 1458347938
1322461240 https://github.com/pydata/xarray/pull/7304#issuecomment-1322461240 https://api.github.com/repos/pydata/xarray/issues/7304 IC_kwDOAMm_X85O0yg4 weiji14 23487320 2022-11-21T18:10:20Z 2022-11-21T18:13:07Z CONTRIBUTOR

Traceback from the 2 test failures at https://github.com/pydata/xarray/actions/runs/3516849430/jobs/5893926099#step:9:252

```python-traceback
=================================== FAILURES ===================================
____ TestH5NetCDFFileObject.test_open_twice ______
[gw2] linux -- Python 3.10.7 /home/runner/micromamba-root/envs/xarray-tests/bin/python

self = <xarray.tests.test_backends.TestH5NetCDFFileObject object at 0x7f211de81e40>

def test_open_twice(self) -> None:
    expected = create_test_data()
    expected.attrs["foo"] = "bar"
  with pytest.raises(ValueError, match=r"read/write pointer not at the start"):

E Failed: DID NOT RAISE <class 'ValueError'>

/home/runner/work/xarray/xarray/xarray/tests/test_backends.py:3034: Failed
___ TestH5NetCDFFileObject.test_open_fileobj _____
[gw2] linux -- Python 3.10.7 /home/runner/micromamba-root/envs/xarray-tests/bin/python

self = <xarray.tests.test_backends.TestH5NetCDFFileObject object at 0x7f211de82530>

@requires_scipy
def test_open_fileobj(self) -> None:
    # open in-memory datasets instead of local file paths
    expected = create_test_data().drop_vars("dim3")
    expected.attrs["foo"] = "bar"
    with create_tmp_file() as tmp_file:
        expected.to_netcdf(tmp_file, engine="h5netcdf")

        with open(tmp_file, "rb") as f:
            with open_dataset(f, engine="h5netcdf") as actual:
                assert_identical(expected, actual)

            f.seek(0)
            with open_dataset(f) as actual:
                assert_identical(expected, actual)

            f.seek(0)
            with BytesIO(f.read()) as bio:
                with open_dataset(bio, engine="h5netcdf") as actual:
                    assert_identical(expected, actual)

            f.seek(0)
            with pytest.raises(TypeError, match="not a valid NetCDF 3"):
                open_dataset(f, engine="scipy")

        # TODO: this additional open is required since scipy seems to close the file
        # when it fails on the TypeError (though didn't when we used
        # `raises_regex`?). Ref https://github.com/pydata/xarray/pull/5191
        with open(tmp_file, "rb") as f:
            f.seek(8)
            with pytest.raises(
                ValueError,
                match="match in any of xarray's currently installed IO",
            ):
              with pytest.warns(
                    RuntimeWarning,
                    match=re.escape("'h5netcdf' fails while guessing"),
                ):

E Failed: DID NOT WARN. No warnings of type (<class 'RuntimeWarning'>,) matching the regex were emitted.
E  Regex: 'h5netcdf'\ fails\ while\ guessing
E  Emitted warnings: [
E   UserWarning('cannot guess the engine, file-like object read/write pointer not at the start of the file, so resetting file pointer to zero. If this does not work, please close and reopen, or use a context manager'),
E   RuntimeWarning("deallocating CachingFileManager(<class 'h5netcdf.core.File'>, <_io.BufferedReader name='/tmp/tmpoxdfl12i/temp-720.nc'>, mode='r', kwargs={'invalid_netcdf': None, 'decode_vlen_strings': True}, manager_id='b62ec6c8-b328-409c-bc5d-bbab265bea51'), but file is not already closed. This may indicate a bug.")]

/home/runner/work/xarray/xarray/xarray/tests/test_backends.py:3076: Failed
=========================== short test summary info ============================
FAILED xarray/tests/test_backends.py::TestH5NetCDFFileObject::test_open_twice - Failed: DID NOT RAISE <class 'ValueError'>
FAILED xarray/tests/test_backends.py::TestH5NetCDFFileObject::test_open_fileobj - Failed: DID NOT WARN. No warnings of type (<class 'RuntimeWarning'>,) matching the regex were emitted. Regex: 'h5netcdf'\ fails\ while\ guessing Emitted warnings: [ UserWarning('cannot guess the engine, file-like object read/write pointer not at the start of the file, so resetting file pointer to zero. If this does not work, please close and reopen, or use a context manager'), RuntimeWarning("deallocating CachingFileManager(<class 'h5netcdf.core.File'>, <_io.BufferedReader name='/tmp/tmpoxdfl12i/temp-720.nc'>, mode='r', kwargs={'invalid_netcdf': None, 'decode_vlen_strings': True}, manager_id='b62ec6c8-b328-409c-bc5d-bbab265bea51'), but file is not already closed. This may indicate a bug.")]
= 2 failed, 14608 passed, 1190 skipped, 203 xfailed, 73 xpassed, 54 warnings in 581.98s (0:09:41) =
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Reset file pointer to 0 when reading file stream 1458347938
1322443376 https://github.com/pydata/xarray/issues/6813#issuecomment-1322443376 https://api.github.com/repos/pydata/xarray/issues/6813 IC_kwDOAMm_X85O0uJw weiji14 23487320 2022-11-21T17:55:16Z 2022-11-21T17:56:17Z CONTRIBUTOR

Just hitting into this same issue mentioned downstream at https://github.com/xarray-contrib/datatree/pull/130 while trying to read ICESat-2 HDF5 files from S3, but realized that the fix should happening in xarray, so I've started a PR at #7304 to fix this (thanks @djhoese for the code snippet at https://github.com/pydata/xarray/issues/6813#issuecomment-1204300671)! Will look into the unit test failures and fix them one by one as they pop up on CI.
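For reference, a minimal sketch of the access pattern that triggers this (hypothetical bucket/granule path; assumes s3fs is installed for the "s3" protocol):

```python
import fsspec
import xarray as xr

fs = fsspec.filesystem("s3", anon=True)
with fs.open("s3://hypothetical-bucket/ATL11_granule.h5", mode="rb") as f:
    ds1 = xr.open_dataset(f, engine="h5netcdf")  # works, but leaves the pointer mid-file
    ds2 = xr.open_dataset(f, engine="h5netcdf")  # failed before the fix: pointer not at byte 0
```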

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening fsspec s3 file twice results in invalid start byte 1310058435
962958896 https://github.com/pydata/xarray/issues/5944#issuecomment-962958896 https://api.github.com/repos/pydata/xarray/issues/5944 IC_kwDOAMm_X845ZZYw weiji14 23487320 2021-11-08T09:21:10Z 2021-11-08T09:22:16Z CONTRIBUTOR

I'm getting a similar issue with xarray=0.20.1 when rioxarray and rasterio are both installed; it looks like #5931 didn't fully fix things. Here's the full traceback.

```python-traceback
__ test_open_variable_filter[open_rasterio_engine] ___

open_rasterio = <function open_rasterio_engine at 0x7fa99ca7ec10>

def test_open_variable_filter(open_rasterio):
  with open_rasterio(
        os.path.join(TEST_INPUT_DATA_DIR, "PLANET_SCOPE_3D.nc"), variable=["blue"]
    ) as rds:

test/integration/test_integration__io.py:185:


test/conftest.py:103: in open_rasterio_engine
    return xr.open_dataset(file_name_or_object, engine="rasterio", **kwargs)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/api.py:481: in open_dataset
    backend = plugins.get_backend(engine)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:158: in get_backend
    engines = list_engines()
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:103: in list_engines
    return build_engines(entrypoints)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:92: in build_engines
    entrypoints = remove_duplicates(entrypoints)


entrypoints = [EntryPoint(name='rasterio', value='rioxarray.xarray_plugin:RasterioBackend', group='xarray.backends'), EntryPoint(nam...rray.backends'), EntryPoint(name='rasterio', value='rioxarray.xarray_plugin:RasterioBackend', group='xarray.backends')]

def remove_duplicates(entrypoints):
    # sort and group entrypoints by name
    entrypoints = sorted(entrypoints, key=lambda ep: ep.name)
    entrypoints_grouped = itertools.groupby(entrypoints, key=lambda ep: ep.name)
    # check if there are multiple entrypoints for the same name
    unique_entrypoints = []
    for name, matches in entrypoints_grouped:
        matches = list(matches)
        unique_entrypoints.append(matches[0])
        matches_len = len(matches)
        if matches_len > 1:
          selected_module_name = matches[0].module_name

E AttributeError: 'EntryPoint' object has no attribute 'module_name'

../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:29: AttributeError
================================ warnings summary =================================
test/integration/test_integration__io.py::test_open_variable_filter[open_rasterio]
  /home/username/projects/rioxarray/rioxarray/_io.py:366: DeprecationWarning: string or file could not be read to its end due to unmatched data; this will raise a ValueError in the future.
    new_val = np.fromstring(value.strip("{}"), dtype="float", sep=",")

-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================= short test summary info =============================
FAILED test/integration/test_integration__io.py::test_open_variable_filter[open_rasterio_engine]
!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!

```
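The AttributeError above comes from assuming the pkg_resources-style EntryPoint API. As a hedged sketch of one way to read the module name from either flavour of EntryPoint (illustrative only, not necessarily the fix xarray shipped):

```python
def entrypoint_module(ep):
    """Return the module name for either a pkg_resources.EntryPoint or an
    importlib.metadata.EntryPoint (which has no module_name attribute)."""
    if hasattr(ep, "module_name"):     # pkg_resources.EntryPoint
        return ep.module_name
    return ep.value.split(":")[0]      # importlib.metadata.EntryPoint, value is "module:attr"
```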

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:46) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.10.0-8-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_NZ.UTF-8
LOCALE: ('en_NZ', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 0.20.1
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.1
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2021.11.0
distributed: 2021.11.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.0
cupy: None
pint: None
sparse: None
setuptools: 58.5.3
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: None
sphinx: 1.8.5
```
{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Backend / plugin system `remove_duplicates` raises AttributeError on discovering duplicates 1046454702
721466404 https://github.com/pydata/xarray/issues/4496#issuecomment-721466404 https://api.github.com/repos/pydata/xarray/issues/4496 MDEyOklzc3VlQ29tbWVudDcyMTQ2NjQwNA== weiji14 23487320 2020-11-04T01:47:30Z 2020-11-04T01:49:39Z CONTRIBUTOR

Just a general comment on the xr.open_dataset(engine="zarr") part: I would prefer to keep or reduce the number of chunks= options (i.e. Option 1) rather than add another chunks="encoded" option.

For those who are confused, this is the current state of xr.open_mfdataset (correct me if I'm wrong):

| :arrow_down: engine \ chunks :arrow_right: | None (default) | 'auto' | {} | -1 |
|---|---|---|---|---|
| None (i.e. default for NetCDF) | np.ndarray | dask.Array (produces original chunks as in NetCDF obj??) | dask.Array (rechunked into 1 chunk) | dask.Array (rechunked into 1 chunk) |
| zarr | np.ndarray | dask.Array (original chunks as in Zarr obj) | dask.Array (original chunks as in Zarr obj) | dask.Array (rechunked into 1 chunk + UserWarning) |

Sample code to test (run in jupyter notebook to see the dask chunk visual):

```python
import xarray as xr
import fsspec

# Opening NetCDF
dataset: xr.Dataset = xr.open_dataset(
    "http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/HRRR/CONUS_2p5km/Best", chunks={}
)
dataset.Temperature_height_above_ground.data

# Opening Zarr
zstore = fsspec.get_mapper(
    url="gs://cmip6/CMIP/NCAR/CESM2/historical/r9i1p1f1/Amon/tas/gn/"
)
dataset: xr.Dataset = xr.open_dataset(
    filename_or_obj=zstore,
    engine="zarr",
    chunks={},
    backend_kwargs=dict(consolidated=True),
)
dataset.tas.data
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Flexible backends - Harmonise zarr chunking with other backends chunking 717410970
652702644 https://github.com/pydata/xarray/pull/4187#issuecomment-652702644 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MjcwMjY0NA== weiji14 23487320 2020-07-01T23:59:32Z 2020-07-03T04:23:34Z CONTRIBUTOR

I agree. I think we should keep open_zarr around.

Just wanted to mention that two of the reviewers in the last PR (see https://github.com/pydata/xarray/pull/4003#issuecomment-619644606 and https://github.com/pydata/xarray/pull/4003#issuecomment-620169860) seemed in favour of deprecating open_zarr. If I'm counting the votes correctly (did I miss anyone?), that's 2 for, and 2 against. We'll need a tiebreaker :laughing:

As a reminder (because it took me a while to remember!), one goal with this refactor is to have open_mfdataset work with all backends (including zarr and rasterio) by specifying the engine kwarg.

Yes exactly, time does fly (half a year has gone by already!).

Currently I'm trying to piggyback Zarr into test_open_mfdataset_manyfiles from #1983, ~~and am having trouble finding out why opening Zarr stores via open_mfdataset doesn't return a dask backed array like the other engines (Edit: it only happens when chunks is None, see https://github.com/pydata/xarray/pull/4187#discussion_r448734418). Might need to spend another day digging through the code to see if this is expected behaviour.~~ Edit: got a workaround solution in b3d6a6a46f8ead25b6f7f593f7b46f43a4de650c by using chunks="auto" as was the default in open_zarr.
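The workaround boils down to something like this (a sketch with hypothetical store paths):

```python
import xarray as xr

# hypothetical Zarr stores with compatible coordinates
paths = ["store_0.zarr", "store_1.zarr"]

# chunks="auto" (open_zarr's old default) keeps the returned variables
# dask-backed when going through open_mfdataset with engine="zarr"
ds = xr.open_mfdataset(paths, engine="zarr", chunks="auto")
```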

As a note I'm working on implementing zarr spec v3 in zarr-python, still deciding how we want to handle the new spec/API.

If there are any changes that you would like or dislike in an API, feedback is welcome.

Thanks for chipping in @Carreau! I'm sure the community will have some useful suggestions. Just cross-referencing https://zarr-developers.github.io/zarr/specs/2019/06/19/zarr-v3-update.html so others can get a better feel for where things are at.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
652260859 https://github.com/pydata/xarray/pull/4187#issuecomment-652260859 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MjI2MDg1OQ== weiji14 23487320 2020-07-01T08:02:14Z 2020-07-01T22:23:27Z CONTRIBUTOR

I wonder if it's really worth deprecating open_zarr(). open_dataset(..., engine='zarr') is a bit more verbose, especially with backend_kwargs to pass optional arguments. It seems pretty harmless to keep open_zarr() around, especially if it's just an alias for open_dataset(engine='zarr').

Depends on which line in the Zen of Python you want to follow - "Simple is better than complex", or "There should be one-- and preferably only one --obvious way to do it". From a maintenance perspective, it's a trade-off between the cost of a deprecation cycle and the cost of writing code that tests both entry points, I guess.

We could also automatically detect zarr stores in open_dataset without requiring engine='zarr' if:

  1. the argument inherits from collections.abc.Mapping, and
  2. it contains a key '.zgroup', corresponding to zarr metadata.

As for the annoyance of needing to write backend_kwargs={"consolidated": True}, I wonder if we could detect this automatically by checking for the existence of a .zmetadata key? This would add a small amount of overhead (one file access) but this probably would not be prohibitively expensive.

These are some pretty good ideas. I also wonder if there's a way to mimic the dataset identifiers like in rasterio, something like xr.open_dataset("zarr:some_zarrfile.zarr"). Feels a lot more like fsspec's url chaining too.

Counter-argument would be that the cyclomatic complexity of open_dataset is already too high, and it really should be refactored before adding more 'magic'. Especially if new backend engines come online (e.g. #4142).
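For illustration, a minimal sketch of the detection idea quoted above (a hypothetical helper, not merged behaviour):

```python
from collections.abc import Mapping


def guess_zarr_kwargs(obj):
    """Treat a Mapping carrying Zarr metadata keys as a Zarr store, and use
    consolidated metadata automatically if a '.zmetadata' key is present."""
    if isinstance(obj, Mapping) and ".zgroup" in obj:
        return {"engine": "zarr", "backend_kwargs": {"consolidated": ".zmetadata" in obj}}
    return None  # not recognisably a Zarr store; fall back to other engines
```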

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
652104356 https://github.com/pydata/xarray/pull/4187#issuecomment-652104356 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MjEwNDM1Ng== weiji14 23487320 2020-06-30T23:42:58Z 2020-07-01T03:04:11Z CONTRIBUTOR

Four more failures, something to do with dask? Seems related to #3919 and #3921.

  • [ ] TestZarrDictStore.test_vectorized_indexing - IndexError: only slices with step >= 1 are supported
  • [x] TestZarrDictStore.test_manual_chunk - ZeroDivisionError: integer division or modulo by zero
  • [ ] TestZarrDirectoryStore.test_vectorized_indexing - IndexError: only slices with step >= 1 are supported
  • [x] TestZarrDirectoryStore.test_manual_chunk - ZeroDivisionError: integer division or modulo by zero

Edit: Fixed the ZeroDivisionError in 6fbeadf41a1a547383da0c8f4499c99099dbdf97. The IndexError was fixed in a hacky way though, see https://github.com/pydata/xarray/pull/4187#discussion_r448077275.
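For context, the IndexError comes from zarr's basic indexing not supporting negative-step slices; a minimal sketch (assuming zarr v2, as used at the time):

```python
import numpy as np
import zarr

z = zarr.array(np.arange(10))
z[1:8]                       # forward slices go through zarr's basic indexing fine
try:
    z[slice(-1, 1, -1)]      # negative step is rejected by zarr itself
except IndexError as err:
    print(err)               # "only slices with step >= 1 are supported"
np.asarray(z)[slice(-1, 1, -1)]  # loading into NumPy first sidesteps the limitation
```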

```python-traceback =================================== FAILURES =================================== __________________ TestZarrDictStore.test_vectorized_indexing __________________ self = <xarray.tests.test_backends.TestZarrDictStore object at 0x7f5832433940> @pytest.mark.xfail( not has_dask, reason="the code for indexing without dask handles negative steps in slices incorrectly", ) def test_vectorized_indexing(self): in_memory = create_test_data() with self.roundtrip(in_memory) as on_disk: indexers = { "dim1": DataArray([0, 2, 0], dims="a"), "dim2": DataArray([0, 2, 3], dims="a"), } expected = in_memory.isel(**indexers) actual = on_disk.isel(**indexers) # make sure the array is not yet loaded into memory assert not actual["var1"].variable._in_memory assert_identical(expected, actual.load()) # do it twice, to make sure we're switched from # vectorized -> numpy when we cached the values actual = on_disk.isel(**indexers) assert_identical(expected, actual) def multiple_indexing(indexers): # make sure a sequence of lazy indexings certainly works. with self.roundtrip(in_memory) as on_disk: actual = on_disk["var3"] expected = in_memory["var3"] for ind in indexers: actual = actual.isel(**ind) expected = expected.isel(**ind) # make sure the array is not yet loaded into memory assert not actual.variable._in_memory assert_identical(expected, actual.load()) # two-staged vectorized-indexing indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": DataArray([[0, 4], [1, 3], [2, 2]], dims=["a", "b"]), }, {"a": DataArray([0, 1], dims=["c"]), "b": DataArray([0, 1], dims=["c"])}, ] multiple_indexing(indexers) # vectorized-slice mixed indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": slice(None, 10), } ] multiple_indexing(indexers) # vectorized-integer mixed indexers = [ {"dim3": 0}, {"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])}, {"a": slice(None, None, 2)}, ] multiple_indexing(indexers) # vectorized-integer mixed indexers = [ {"dim3": 0}, {"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])}, {"a": 1, "b": 0}, ] multiple_indexing(indexers) # with negative step slice. 
indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": slice(-1, 1, -1), } ] > multiple_indexing(indexers) xarray/tests/test_backends.py:686: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xarray/tests/test_backends.py:642: in multiple_indexing assert_identical(expected, actual.load()) xarray/core/dataarray.py:814: in load ds = self._to_temp_dataset().load(**kwargs) xarray/core/dataset.py:666: in load v.load() xarray/core/variable.py:381: in load self._data = np.asarray(self._data) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:677: in __array__ self._ensure_cached() xarray/core/indexing.py:674: in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:653: in __array__ return np.asarray(self.array, dtype=dtype) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:557: in __array__ return np.asarray(array[self.key], dtype=None) xarray/backends/zarr.py:57: in __getitem__ return array[key.tuple] /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:572: in __getitem__ return self.get_basic_selection(selection, fields=fields) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:698: in get_basic_selection fields=fields) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:738: in _get_basic_selection_nd indexer = BasicIndexer(selection, self) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:279: in __init__ dim_indexer = SliceDimIndexer(dim_sel, dim_len, dim_chunk_len) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:107: in __init__ err_negative_step() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def err_negative_step(): > raise IndexError('only slices with step >= 1 are supported') E IndexError: only slices with step >= 1 are supported /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/errors.py:55: IndexError _____________________ TestZarrDictStore.test_manual_chunk ______________________ self = <xarray.tests.test_backends.TestZarrDictStore object at 0x7f5832b80cf8> @requires_dask @pytest.mark.filterwarnings("ignore:Specified Dask chunks") def test_manual_chunk(self): original = create_test_data().chunk({"dim1": 3, "dim2": 4, "dim3": 3}) # All of these should return non-chunked arrays NO_CHUNKS = (None, 0, {}) for no_chunk in NO_CHUNKS: open_kwargs = {"chunks": no_chunk} > with self.roundtrip(original, open_kwargs=open_kwargs) as actual: xarray/tests/test_backends.py:1594: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__ return next(self.gen) xarray/tests/test_backends.py:1553: in roundtrip with self.open(store_target, **open_kwargs) as ds: /usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__ return next(self.gen) xarray/tests/test_backends.py:1540: in open with xr.open_dataset(store_target, engine="zarr", **kwargs) as ds: 
xarray/backends/api.py:587: in open_dataset ds = maybe_decode_store(store, chunks) xarray/backends/api.py:511: in maybe_decode_store for k, v in ds.variables.items() xarray/backends/api.py:511: in <dictcomp> for k, v in ds.variables.items() xarray/backends/zarr.py:398: in maybe_chunk var = var.chunk(chunk_spec, name=name2, lock=None) xarray/core/variable.py:1007: in chunk data = da.from_array(data, chunks, name=name, lock=lock, **kwargs) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2712: in from_array chunks, x.shape, dtype=x.dtype, previous_chunks=previous_chunks /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2447: in normalize_chunks (), /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2445: in <genexpr> for s, c in zip(shape, chunks) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: in blockdims_from_blockshape for d, bd in zip(shape, chunks) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .0 = <zip object at 0x7f58332d9d48> ((bd,) * (d // bd) + ((d % bd,) if d % bd else ()) if d else (0,)) > for d, bd in zip(shape, chunks) ) E ZeroDivisionError: integer division or modulo by zero /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: ZeroDivisionError _______________ TestZarrDirectoryStore.test_vectorized_indexing ________________ self = <xarray.tests.test_backends.TestZarrDirectoryStore object at 0x7f5832a08a20> @pytest.mark.xfail( not has_dask, reason="the code for indexing without dask handles negative steps in slices incorrectly", ) def test_vectorized_indexing(self): in_memory = create_test_data() with self.roundtrip(in_memory) as on_disk: indexers = { "dim1": DataArray([0, 2, 0], dims="a"), "dim2": DataArray([0, 2, 3], dims="a"), } expected = in_memory.isel(**indexers) actual = on_disk.isel(**indexers) # make sure the array is not yet loaded into memory assert not actual["var1"].variable._in_memory assert_identical(expected, actual.load()) # do it twice, to make sure we're switched from # vectorized -> numpy when we cached the values actual = on_disk.isel(**indexers) assert_identical(expected, actual) def multiple_indexing(indexers): # make sure a sequence of lazy indexings certainly works. with self.roundtrip(in_memory) as on_disk: actual = on_disk["var3"] expected = in_memory["var3"] for ind in indexers: actual = actual.isel(**ind) expected = expected.isel(**ind) # make sure the array is not yet loaded into memory assert not actual.variable._in_memory assert_identical(expected, actual.load()) # two-staged vectorized-indexing indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": DataArray([[0, 4], [1, 3], [2, 2]], dims=["a", "b"]), }, {"a": DataArray([0, 1], dims=["c"]), "b": DataArray([0, 1], dims=["c"])}, ] multiple_indexing(indexers) # vectorized-slice mixed indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": slice(None, 10), } ] multiple_indexing(indexers) # vectorized-integer mixed indexers = [ {"dim3": 0}, {"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])}, {"a": slice(None, None, 2)}, ] multiple_indexing(indexers) # vectorized-integer mixed indexers = [ {"dim3": 0}, {"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])}, {"a": 1, "b": 0}, ] multiple_indexing(indexers) # with negative step slice. 
indexers = [ { "dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]), "dim3": slice(-1, 1, -1), } ] > multiple_indexing(indexers) xarray/tests/test_backends.py:686: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ xarray/tests/test_backends.py:642: in multiple_indexing assert_identical(expected, actual.load()) xarray/core/dataarray.py:814: in load ds = self._to_temp_dataset().load(**kwargs) xarray/core/dataset.py:666: in load v.load() xarray/core/variable.py:381: in load self._data = np.asarray(self._data) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:677: in __array__ self._ensure_cached() xarray/core/indexing.py:674: in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:653: in __array__ return np.asarray(self.array, dtype=dtype) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray return array(a, dtype, copy=False, order=order) xarray/core/indexing.py:557: in __array__ return np.asarray(array[self.key], dtype=None) xarray/backends/zarr.py:57: in __getitem__ return array[key.tuple] /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:572: in __getitem__ return self.get_basic_selection(selection, fields=fields) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:698: in get_basic_selection fields=fields) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:738: in _get_basic_selection_nd indexer = BasicIndexer(selection, self) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:279: in __init__ dim_indexer = SliceDimIndexer(dim_sel, dim_len, dim_chunk_len) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:107: in __init__ err_negative_step() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def err_negative_step(): > raise IndexError('only slices with step >= 1 are supported') E IndexError: only slices with step >= 1 are supported /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/errors.py:55: IndexError ___________________ TestZarrDirectoryStore.test_manual_chunk ___________________ self = <xarray.tests.test_backends.TestZarrDirectoryStore object at 0x7f5831763ef0> @requires_dask @pytest.mark.filterwarnings("ignore:Specified Dask chunks") def test_manual_chunk(self): original = create_test_data().chunk({"dim1": 3, "dim2": 4, "dim3": 3}) # All of these should return non-chunked arrays NO_CHUNKS = (None, 0, {}) for no_chunk in NO_CHUNKS: open_kwargs = {"chunks": no_chunk} > with self.roundtrip(original, open_kwargs=open_kwargs) as actual: xarray/tests/test_backends.py:1594: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__ return next(self.gen) xarray/tests/test_backends.py:1553: in roundtrip with self.open(store_target, **open_kwargs) as ds: /usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__ return next(self.gen) xarray/tests/test_backends.py:1540: in open with xr.open_dataset(store_target, engine="zarr", **kwargs) as ds: 
xarray/backends/api.py:587: in open_dataset ds = maybe_decode_store(store, chunks) xarray/backends/api.py:511: in maybe_decode_store for k, v in ds.variables.items() xarray/backends/api.py:511: in <dictcomp> for k, v in ds.variables.items() xarray/backends/zarr.py:398: in maybe_chunk var = var.chunk(chunk_spec, name=name2, lock=None) xarray/core/variable.py:1007: in chunk data = da.from_array(data, chunks, name=name, lock=lock, **kwargs) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2712: in from_array chunks, x.shape, dtype=x.dtype, previous_chunks=previous_chunks /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2447: in normalize_chunks (), /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2445: in <genexpr> for s, c in zip(shape, chunks) /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: in blockdims_from_blockshape for d, bd in zip(shape, chunks) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .0 = <zip object at 0x7f58324b7f48> ((bd,) * (d // bd) + ((d % bd,) if d % bd else ()) if d else (0,)) > for d, bd in zip(shape, chunks) ) E ZeroDivisionError: integer division or modulo by zero /usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: ZeroDivisionError ```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
651772649 https://github.com/pydata/xarray/pull/4187#issuecomment-651772649 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MTc3MjY0OQ== weiji14 23487320 2020-06-30T12:56:00Z 2020-06-30T12:56:00Z CONTRIBUTOR

Is it ok to drop the deprecated auto_chunk tests here in this PR (or leave it to another PR)? The deprecation warning was first added in https://github.com/pydata/xarray/pull/2530/commits/ae4cf0ab19b3e563bde90a48b3e6ee615930d4a1, and I see that auto_chunk was used back in v0.12.1 at http://xarray.pydata.org/en/v0.12.1/generated/xarray.open_zarr.html.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
651662701 https://github.com/pydata/xarray/pull/4187#issuecomment-651662701 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MTY2MjcwMQ== weiji14 23487320 2020-06-30T09:03:53Z 2020-06-30T09:24:12Z CONTRIBUTOR

Nevermind, I found it. There was an if that should have been an elif. Onward to the next error - UnboundLocalError. Edit: Also fixed!

```python-traceback
=================================== FAILURES ===================================
__________________________ TestDataset.test_lazy_load __________________________
self = <xarray.tests.test_dataset.TestDataset object at 0x7f4aed5df940>

    def test_lazy_load(self):
        store = InaccessibleVariableDataStore()
        create_test_data().dump_to_store(store)
        for decode_cf in [True, False]:
>           ds = open_dataset(store, decode_cf=decode_cf)

xarray/tests/test_dataset.py:4188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/backends/api.py:587: in open_dataset
    ds = maybe_decode_store(store)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
store = <xarray.tests.test_dataset.InaccessibleVariableDataStore object at 0x7f4aed5dfb38>
lock = False

    def maybe_decode_store(store, lock=False):
        ds = conventions.decode_cf(
            store,
            mask_and_scale=mask_and_scale,
            decode_times=decode_times,
            concat_characters=concat_characters,
            decode_coords=decode_coords,
            drop_variables=drop_variables,
            use_cftime=use_cftime,
            decode_timedelta=decode_timedelta,
        )
        _protect_dataset_variables_inplace(ds, cache)
>       if chunks is not None:
E       UnboundLocalError: local variable 'chunks' referenced before assignment

xarray/backends/api.py:466: UnboundLocalError
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
651624166 https://github.com/pydata/xarray/pull/4187#issuecomment-651624166 https://api.github.com/repos/pydata/xarray/issues/4187 MDEyOklzc3VlQ29tbWVudDY1MTYyNDE2Ng== weiji14 23487320 2020-06-30T08:02:03Z 2020-06-30T09:23:32Z CONTRIBUTOR

This is the one test failure (AttributeError) on Linux py36-bare-minimum:

```python-traceback
=================================== FAILURES ===================================
__________________________ TestDataset.test_lazy_load __________________________
self = <xarray.tests.test_dataset.TestDataset object at 0x7fa80b2b7be0>

    def test_lazy_load(self):
        store = InaccessibleVariableDataStore()
        create_test_data().dump_to_store(store)
        for decode_cf in [True, False]:
>           ds = open_dataset(store, decode_cf=decode_cf)

xarray/tests/test_dataset.py:4188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/backends/api.py:578: in open_dataset
    engine = _get_engine_from_magic_number(filename_or_obj)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
filename_or_obj = <xarray.tests.test_dataset.InaccessibleVariableDataStore object at 0x7fa80b2b7d30>

    def _get_engine_from_magic_number(filename_or_obj):
        # check byte header to determine file type
        if isinstance(filename_or_obj, bytes):
            magic_number = filename_or_obj[:8]
        else:
>           if filename_or_obj.tell() != 0:
E           AttributeError: 'InaccessibleVariableDataStore' object has no attribute 'tell'

xarray/backends/api.py:116: AttributeError
```

Been scratching my head debugging this one. There doesn't seem to be an obvious reason why this test is failing, since 1) this test isn't for Zarr and 2) this test shouldn't be affected by the new if blocks checking if engine=="zarr". Will need to double check the logic here.
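One possible guard for this failure mode, as an illustrative sketch only (not the change that ended up in the PR):

```python
def maybe_magic_number(filename_or_obj, count=8):
    """Only try to read a magic number from objects that actually behave like
    file streams, so store-like objects such as the test's
    InaccessibleVariableDataStore fall through to other engine checks."""
    if isinstance(filename_or_obj, bytes):
        return filename_or_obj[:count]
    if hasattr(filename_or_obj, "tell") and hasattr(filename_or_obj, "read"):
        position = filename_or_obj.tell()
        magic = filename_or_obj.read(count)
        filename_or_obj.seek(position)  # restore the original position
        return magic
    return None  # not file-like; let the caller decide the engine another way
```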

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Xarray open_mfdataset with engine Zarr 647804004
651481343 https://github.com/pydata/xarray/pull/4003#issuecomment-651481343 https://api.github.com/repos/pydata/xarray/issues/4003 MDEyOklzc3VlQ29tbWVudDY1MTQ4MTM0Mw== weiji14 23487320 2020-06-30T02:24:23Z 2020-06-30T02:33:37Z CONTRIBUTOR

Sure, I can move it, but I just wanted to make sure @Mikejmnez gets the credit for this PR. Edit: moved to https://github.com/pydata/xarray/pull/4187.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.open_mzar: open multiple zarr files (in parallel) 606683601
651397892 https://github.com/pydata/xarray/pull/4003#issuecomment-651397892 https://api.github.com/repos/pydata/xarray/issues/4003 MDEyOklzc3VlQ29tbWVudDY1MTM5Nzg5Mg== weiji14 23487320 2020-06-29T22:15:08Z 2020-06-29T23:06:10Z CONTRIBUTOR

@Mikejmnez, do you mind if I pick up working on this branch? I'd be really keen to see it get into xarray 0.16, and then it will be possible to resolve the intake-xarray issue at https://github.com/intake/intake-xarray/issues/70. ~~Not sure if it's possible to get commit access here, or if I should just submit a PR to your fork, or maybe there's a better way?~~ Edit: I've opened up a pull request to the fork.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  xarray.open_mzar: open multiple zarr files (in parallel) 606683601
632345916 https://github.com/pydata/xarray/pull/4036#issuecomment-632345916 https://api.github.com/repos/pydata/xarray/issues/4036 MDEyOklzc3VlQ29tbWVudDYzMjM0NTkxNg== weiji14 23487320 2020-05-21T21:06:15Z 2020-05-21T21:06:15Z CONTRIBUTOR

Cool, there doesn't seem to be an easy [theme=dark] solution (at least from my poor CSS knowledge), so I've posted a question on the Atom forums at https://discuss.atom.io/t/how-to-detect-dark-theme-on-atom/74937 to ask.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support darkmode 613044689
632010808 https://github.com/pydata/xarray/pull/4036#issuecomment-632010808 https://api.github.com/repos/pydata/xarray/issues/4036 MDEyOklzc3VlQ29tbWVudDYzMjAxMDgwOA== weiji14 23487320 2020-05-21T10:29:33Z 2020-05-21T10:29:33Z CONTRIBUTOR

This looks awesome! Is it possible to port this to the Atom editor as well? This is what it looks like currently on the 'One Dark' Atom editor theme:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  support darkmode 613044689
520693149 https://github.com/pydata/xarray/issues/3185#issuecomment-520693149 https://api.github.com/repos/pydata/xarray/issues/3185 MDEyOklzc3VlQ29tbWVudDUyMDY5MzE0OQ== weiji14 23487320 2019-08-13T05:24:06Z 2019-08-13T05:26:03Z CONTRIBUTOR

xr.show_versions() gets its libnetcdf version from netCDF4 (specifically netCDF4.__netcdf4libversion__). So I'm guessing that somehow netCDF4 is picking up libnetcdf from somewhere else -- maybe you pip installed it? It might be worth trying another fresh conda environment...

I'm not sure where it's picking up the libnetcdf 4.6.3 version from, but I found your comment at https://github.com/pydata/xarray/issues/2535#issuecomment-445944261 and think it might indeed be an incompatibility issue with rasterio and netCDF4 binary wheels (do rasterio wheels include netcdf binaries?). Probably somewhat related to https://github.com/mapbox/rasterio/issues/1574 too.

Managed to get things to work by combining the workaround in this Pull Request and StackOverflow post, basically having pip compile the netcdf python package from source instead of using the wheel:

```bash
HDF5_DIR=$CONDA_PREFIX pip install --no-binary netCDF4 netCDF4==1.4.2
```

where $CONDA_PREFIX is the path to the conda environment, e.g. /home/jovyan/.conda/envs/name-of-env. I've tested my MCVE code sample above and it works up to the latest netCDF4==1.5.1.2 version!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946
520658129 https://github.com/pydata/xarray/issues/3185#issuecomment-520658129 https://api.github.com/repos/pydata/xarray/issues/3185 MDEyOklzc3VlQ29tbWVudDUyMDY1ODEyOQ== weiji14 23487320 2019-08-13T01:54:54Z 2019-08-13T01:54:54Z CONTRIBUTOR

Yes, there's https://gdal.org/drivers/raster/netcdf.html :smile: I've done a bit more debugging (having temporarily isolated salem from my script) and am still having issues with my setup.

The clean xarray-tests conda environment that works with netcdf==1.5.1.2 has libnetcdf: 4.6.2, but for some strange reason, running xr.show_versions() on my setup shows libnetcdf: 4.6.3 even though conda list | grep libnetcdf shows that I've installed libnetcdf 4.6.2 h056eaf5_1002 conda-forge.

Not sure if this libnetcdf 4.6.3 version is the problem, but it stands out the most (to me at least) when looking at the diff between my setup and the clean one. Is there a way to check the order in which xarray looks for the netcdf binaries? I feel it might be a PATH-related issue. Also not sure if this issue fits here in xarray or somewhere else now...
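For anyone else debugging this, a quick way to check which libnetcdf the netCDF4 module actually linked against (this is where xr.show_versions() reads it from):

```python
import netCDF4

print(netCDF4.__file__)                # which installed copy got imported
print(netCDF4.__netcdf4libversion__)   # the libnetcdf it was built against
print(netCDF4.__hdf5libversion__)      # and the matching HDF5 version
```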

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946
518930975 https://github.com/pydata/xarray/issues/3185#issuecomment-518930975 https://api.github.com/repos/pydata/xarray/issues/3185 MDEyOklzc3VlQ29tbWVudDUxODkzMDk3NQ== weiji14 23487320 2019-08-07T04:05:19Z 2019-08-07T04:05:19Z CONTRIBUTOR

Hold on, the coordinates seem to be parsed out correctly from the netCDF file (even with netCDF4==1.5.1.2) when I have a clean conda installation created following the instructions at https://xarray.pydata.org/en/latest/contributing.html#creating-a-python-environment.

I've isolated the issue and think the problem arises when I also import salem (an xarray accessor)... Will try to narrow this down before I close this issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946
518860992 https://github.com/pydata/xarray/issues/3185#issuecomment-518860992 https://api.github.com/repos/pydata/xarray/issues/3185 MDEyOklzc3VlQ29tbWVudDUxODg2MDk5Mg== weiji14 23487320 2019-08-06T22:02:44Z 2019-08-06T22:02:44Z CONTRIBUTOR

Well open_rasterio did have an "experimental" warning on it in the docs :laughing:, but it was really nice having it work on GeoTIFFs and NetCDF files. I've forked the repo and will try to debug the situation a bit more. If anyone who's worked on that part of the codebase before has any pointers on what might be the cause of this issue / where to start that would be great.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);