issue_comments
20 rows where author_association = "CONTRIBUTOR" and user = 23487320 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1435869985 | https://github.com/pydata/xarray/pull/7496#issuecomment-1435869985 | https://api.github.com/repos/pydata/xarray/issues/7496 | IC_kwDOAMm_X85VlaMh | weiji14 23487320 | 2023-02-19T04:55:26Z | 2023-02-19T04:55:26Z | CONTRIBUTOR |
There was some discussion on whether `open_zarr` should be deprecated. Also, quite a few people were in favour of keeping `open_zarr`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
deprecate open_zarr 1564661430 | |
1330782936 | https://github.com/pydata/xarray/pull/7304#issuecomment-1330782936 | https://api.github.com/repos/pydata/xarray/issues/7304 | IC_kwDOAMm_X85PUiLY | weiji14 23487320 | 2022-11-29T15:00:21Z | 2022-11-29T15:09:15Z | CONTRIBUTOR |
Hmm, in that case, I'm leaning towards removing the warning. The file pointer is reset anyway after reading the magic byte number, and that hasn't caused any issues (as mentioned in https://github.com/pydata/xarray/issues/6813#issuecomment-1205503288), so it should be more or less safe. Let me push another commit. Edit: done at 929cb62977d630a00ace9747bc86066555b83d0d.
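For readers skimming this thread, a minimal sketch of the pointer-reset idea being discussed; the function name, `MAGIC` table, and signatures below are illustrative assumptions, not xarray's actual implementation:

```python
# Illustrative sketch only -- not xarray's real code. Guess a backend engine
# from a file-like object's leading bytes, then seek back to 0 so the backend
# itself reads from the start of the file.
from typing import BinaryIO, Optional

MAGIC = {
    b"\x89HDF\r\n\x1a\n": "h5netcdf",  # HDF5 signature (netCDF4 files)
    b"CDF": "scipy",                   # classic netCDF3
}

def guess_engine(fileobj: BinaryIO) -> Optional[str]:
    if fileobj.tell() != 0:
        fileobj.seek(0)  # pointer not at the start: reset before sniffing
    magic_number = fileobj.read(8)
    fileobj.seek(0)      # reset again so later reads start at byte 0
    for signature, engine in MAGIC.items():
        if magic_number.startswith(signature):
            return engine
    return None
```
|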
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Reset file pointer to 0 when reading file stream 1458347938 | |
1322461240 | https://github.com/pydata/xarray/pull/7304#issuecomment-1322461240 | https://api.github.com/repos/pydata/xarray/issues/7304 | IC_kwDOAMm_X85O0yg4 | weiji14 23487320 | 2022-11-21T18:10:20Z | 2022-11-21T18:13:07Z | CONTRIBUTOR | Traceback from the 2 test failures at https://github.com/pydata/xarray/actions/runs/3516849430/jobs/5893926099#step:9:252
```python-traceback
=================================== FAILURES ===================================
____________________ TestH5NetCDFFileObject.test_open_twice ____________________
[gw2] linux -- Python 3.10.7 /home/runner/micromamba-root/envs/xarray-tests/bin/python
self = <xarray.tests.test_backends.TestH5NetCDFFileObject object at 0x7f211de81e40>
/home/runner/work/xarray/xarray/xarray/tests/test_backends.py:3034: Failed
___________________ TestH5NetCDFFileObject.test_open_fileobj ___________________
[gw2] linux -- Python 3.10.7 /home/runner/micromamba-root/envs/xarray-tests/bin/python
self = <xarray.tests.test_backends.TestH5NetCDFFileObject object at 0x7f211de82530>
/home/runner/work/xarray/xarray/xarray/tests/test_backends.py:3076: Failed
=========================== short test summary info ============================
FAILED xarray/tests/test_backends.py::TestH5NetCDFFileObject::test_open_twice - Failed: DID NOT RAISE <class 'ValueError'>
FAILED xarray/tests/test_backends.py::TestH5NetCDFFileObject::test_open_fileobj - Failed: DID NOT WARN. No warnings of type (<class 'RuntimeWarning'>,) matching the regex were emitted.
 Regex: 'h5netcdf'\ fails\ while\ guessing
 Emitted warnings: [UserWarning('cannot guess the engine, file-like object read/write pointer not at the start of the file, so resetting file pointer to zero. If this does not work, please close and reopen, or use a context manager'), RuntimeWarning("deallocating CachingFileManager(<class 'h5netcdf.core.File'>, <_io.BufferedReader name='/tmp/tmpoxdfl12i/temp-720.nc'>, mode='r', kwargs={'invalid_netcdf': None, 'decode_vlen_strings': True}, manager_id='b62ec6c8-b328-409c-bc5d-bbab265bea51'), but file is not already closed. This may indicate a bug.")]
= 2 failed, 14608 passed, 1190 skipped, 203 xfailed, 73 xpassed, 54 warnings in 581.98s (0:09:41) =
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Reset file pointer to 0 when reading file stream 1458347938 | |
1322443376 | https://github.com/pydata/xarray/issues/6813#issuecomment-1322443376 | https://api.github.com/repos/pydata/xarray/issues/6813 | IC_kwDOAMm_X85O0uJw | weiji14 23487320 | 2022-11-21T17:55:16Z | 2022-11-21T17:56:17Z | CONTRIBUTOR | Just hit this same issue, mentioned downstream at https://github.com/xarray-contrib/datatree/pull/130, while trying to read ICESat-2 HDF5 files from S3, but realized that the fix should happen in `xarray`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Opening fsspec s3 file twice results in invalid start byte 1310058435 | |
962958896 | https://github.com/pydata/xarray/issues/5944#issuecomment-962958896 | https://api.github.com/repos/pydata/xarray/issues/5944 | IC_kwDOAMm_X845ZZYw | weiji14 23487320 | 2021-11-08T09:21:10Z | 2021-11-08T09:22:16Z | CONTRIBUTOR | I'm getting a similar issue with `rioxarray`'s tests:
```python-traceback
_______________ test_open_variable_filter[open_rasterio_engine] ________________
open_rasterio = <function open_rasterio_engine at 0x7fa99ca7ec10>
test/integration/test_integration__io.py:185:
test/conftest.py:103: in open_rasterio_engine
    return xr.open_dataset(file_name_or_object, engine="rasterio", **kwargs)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/api.py:481: in open_dataset
    backend = plugins.get_backend(engine)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:158: in get_backend
    engines = list_engines()
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:103: in list_engines
    return build_engines(entrypoints)
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:92: in build_engines
    entrypoints = remove_duplicates(entrypoints)
entrypoints = [EntryPoint(name='rasterio', value='rioxarray.xarray_plugin:RasterioBackend', group='xarray.backends'), EntryPoint(nam...rray.backends'), EntryPoint(name='rasterio', value='rioxarray.xarray_plugin:RasterioBackend', group='xarray.backends')]
../../../miniconda3/envs/rioxarray/lib/python3.9/site-packages/xarray/backends/plugins.py:29: AttributeError
================================ warnings summary =================================
test/integration/test_integration__io.py::test_open_variable_filter[open_rasterio]
  /home/username/projects/rioxarray/rioxarray/_io.py:366: DeprecationWarning: string or file could not be read to its end due to unmatched data; this will raise a ValueError in the future.
    new_val = np.fromstring(value.strip("{}"), dtype="float", sep=",")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================= short test summary info =============================
FAILED test/integration/test_integration__io.py::test_open_variable_filter[open_rasterio_engine]
!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!
```

Output of `xr.show_versions()`:
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:46)
[GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.10.0-8-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_NZ.UTF-8
LOCALE: ('en_NZ', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 0.20.1
pandas: 1.3.4
numpy: 1.21.4
scipy: 1.7.1
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2021.11.0
distributed: 2021.11.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2021.11.0
cupy: None
pint: None
sparse: None
setuptools: 58.5.3
pip: 21.3.1
conda: None
pytest: 6.2.5
IPython: None
sphinx: 1.8.5
```
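To make the `AttributeError` in `remove_duplicates` concrete, here is a rough sketch of name-based entrypoint de-duplication; `dedupe_entrypoints` is a hypothetical helper written for illustration, not the code in `xarray/backends/plugins.py`:

```python
# Rough sketch: collapse backend entrypoints that share a name, keeping the
# first of each group. Assumes each entrypoint object exposes a .name
# attribute (as importlib.metadata.EntryPoint does).
import itertools

def dedupe_entrypoints(entrypoints):
    entrypoints = sorted(entrypoints, key=lambda ep: ep.name)
    unique = []
    for name, matches in itertools.groupby(entrypoints, key=lambda ep: ep.name):
        matches = list(matches)
        unique.append(matches[0])
        if len(matches) > 1:
            print(f"found {len(matches)} entrypoints named {name!r}; keeping the first")
    return unique
```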
|
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Backend / plugin system `remove_duplicates` raises AttributeError on discovering duplicates 1046454702 | |
721466404 | https://github.com/pydata/xarray/issues/4496#issuecomment-721466404 | https://api.github.com/repos/pydata/xarray/issues/4496 | MDEyOklzc3VlQ29tbWVudDcyMTQ2NjQwNA== | weiji14 23487320 | 2020-11-04T01:47:30Z | 2020-11-04T01:49:39Z | CONTRIBUTOR | Just a general comment on the `chunks` behaviour here. For those who are confused, this is the current state of `chunks` handling:

| :arrow_down: engine \ chunk :arrow_right: | None (default) | 'auto' | {} | -1 |
|---|---|---|---|---|
| None (i.e. default for NetCDF) | np.ndarray | dask.Array (produces original chunks as in NetCDF obj??) | dask.Array (rechunked into 1 chunk) | dask.Array (rechunked into 1 chunk) |
| zarr | np.ndarray | dask.Array (original chunks as in Zarr obj) | dask.Array (original chunks as in Zarr obj) | dask.Array (rechunked into 1 chunk + UserWarning) |

Sample code to test (run in a jupyter notebook to see the dask chunk visual):
```python
import xarray as xr
import fsspec
# Opening NetCDF
dataset: xr.Dataset = xr.open_dataset(
"http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/HRRR/CONUS_2p5km/Best", chunks={}
)
dataset.Temperature_height_above_ground.data
# Opening Zarr
zstore = fsspec.get_mapper(
url="gs://cmip6/CMIP/NCAR/CESM2/historical/r9i1p1f1/Amon/tas/gn/"
)
dataset: xr.Dataset = xr.open_dataset(
filename_or_obj=zstore,
engine="zarr",
chunks={},
backend_kwargs=dict(consolidated=True),
)
dataset.tas.data
```
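A hedged follow-up to the snippet: looping over the four `chunks` values from the table (reusing the `zstore` mapping defined above) shows the np.ndarray vs dask.Array split directly:

```python
# Assumes xr and the zstore mapping from the snippet above are in scope.
for chunks in (None, "auto", {}, -1):
    ds = xr.open_dataset(
        filename_or_obj=zstore,
        engine="zarr",
        chunks=chunks,
        backend_kwargs=dict(consolidated=True),
    )
    # None -> np.ndarray; the other three -> dask.Array, per the table above
    print(chunks, type(ds.tas.data))
```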
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Flexible backends - Harmonise zarr chunking with other backends chunking 717410970 | |
652702644 | https://github.com/pydata/xarray/pull/4187#issuecomment-652702644 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MjcwMjY0NA== | weiji14 23487320 | 2020-07-01T23:59:32Z | 2020-07-03T04:23:34Z | CONTRIBUTOR |
Just wanted to mention that two of the reviewers in the last PR (see https://github.com/pydata/xarray/pull/4003#issuecomment-619644606 and https://github.com/pydata/xarray/pull/4003#issuecomment-620169860) seemed in favour of deprecating `open_zarr`.
Yes exactly, time does fly (half a year has gone by already!). Currently I'm trying to piggyback Zarr into `open_dataset`.
Thanks for chipping in @Carreau! I'm sure the community will have some useful suggestions. Just cross-referencing https://zarr-developers.github.io/zarr/specs/2019/06/19/zarr-v3-update.html so others can get a better feel for where things are at. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
652260859 | https://github.com/pydata/xarray/pull/4187#issuecomment-652260859 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MjI2MDg1OQ== | weiji14 23487320 | 2020-07-01T08:02:14Z | 2020-07-01T22:23:27Z | CONTRIBUTOR |
Depends on which line in the Zen of Python you want to follow - "Simple is better than complex", or "There should be one-- and preferably only one --obvious way to do it". From a maintenance perspective, it's balancing the cost of a deprecation cycle vs writing code that tests both instances I guess.
These are some pretty good ideas. I also wonder if there's a way to mimic the dataset identifiers like in rasterio, something like... A counter-argument would be that the cyclomatic complexity of... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
652104356 | https://github.com/pydata/xarray/pull/4187#issuecomment-652104356 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MjEwNDM1Ng== | weiji14 23487320 | 2020-06-30T23:42:58Z | 2020-07-01T03:04:11Z | CONTRIBUTOR | Four more failures, something to do with dask? Seems related to #3919 and #3921.
Edit: Fixed the...
```python-traceback
=================================== FAILURES ===================================
__________________ TestZarrDictStore.test_vectorized_indexing __________________
self = <xarray.tests.test_backends.TestZarrDictStore object at 0x7f5832433940>
@pytest.mark.xfail(
not has_dask,
reason="the code for indexing without dask handles negative steps in slices incorrectly",
)
def test_vectorized_indexing(self):
in_memory = create_test_data()
with self.roundtrip(in_memory) as on_disk:
indexers = {
"dim1": DataArray([0, 2, 0], dims="a"),
"dim2": DataArray([0, 2, 3], dims="a"),
}
expected = in_memory.isel(**indexers)
actual = on_disk.isel(**indexers)
# make sure the array is not yet loaded into memory
assert not actual["var1"].variable._in_memory
assert_identical(expected, actual.load())
# do it twice, to make sure we're switched from
# vectorized -> numpy when we cached the values
actual = on_disk.isel(**indexers)
assert_identical(expected, actual)
def multiple_indexing(indexers):
# make sure a sequence of lazy indexings certainly works.
with self.roundtrip(in_memory) as on_disk:
actual = on_disk["var3"]
expected = in_memory["var3"]
for ind in indexers:
actual = actual.isel(**ind)
expected = expected.isel(**ind)
# make sure the array is not yet loaded into memory
assert not actual.variable._in_memory
assert_identical(expected, actual.load())
# two-staged vectorized-indexing
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": DataArray([[0, 4], [1, 3], [2, 2]], dims=["a", "b"]),
},
{"a": DataArray([0, 1], dims=["c"]), "b": DataArray([0, 1], dims=["c"])},
]
multiple_indexing(indexers)
# vectorized-slice mixed
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": slice(None, 10),
}
]
multiple_indexing(indexers)
# vectorized-integer mixed
indexers = [
{"dim3": 0},
{"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])},
{"a": slice(None, None, 2)},
]
multiple_indexing(indexers)
# vectorized-integer mixed
indexers = [
{"dim3": 0},
{"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])},
{"a": 1, "b": 0},
]
multiple_indexing(indexers)
# with negative step slice.
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": slice(-1, 1, -1),
}
]
> multiple_indexing(indexers)
xarray/tests/test_backends.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/tests/test_backends.py:642: in multiple_indexing
assert_identical(expected, actual.load())
xarray/core/dataarray.py:814: in load
ds = self._to_temp_dataset().load(**kwargs)
xarray/core/dataset.py:666: in load
v.load()
xarray/core/variable.py:381: in load
self._data = np.asarray(self._data)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:677: in __array__
self._ensure_cached()
xarray/core/indexing.py:674: in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:653: in __array__
return np.asarray(self.array, dtype=dtype)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:557: in __array__
return np.asarray(array[self.key], dtype=None)
xarray/backends/zarr.py:57: in __getitem__
return array[key.tuple]
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:572: in __getitem__
return self.get_basic_selection(selection, fields=fields)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:698: in get_basic_selection
fields=fields)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:738: in _get_basic_selection_nd
indexer = BasicIndexer(selection, self)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:279: in __init__
dim_indexer = SliceDimIndexer(dim_sel, dim_len, dim_chunk_len)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:107: in __init__
err_negative_step()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def err_negative_step():
> raise IndexError('only slices with step >= 1 are supported')
E IndexError: only slices with step >= 1 are supported
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/errors.py:55: IndexError
_____________________ TestZarrDictStore.test_manual_chunk ______________________
self = <xarray.tests.test_backends.TestZarrDictStore object at 0x7f5832b80cf8>
@requires_dask
@pytest.mark.filterwarnings("ignore:Specified Dask chunks")
def test_manual_chunk(self):
original = create_test_data().chunk({"dim1": 3, "dim2": 4, "dim3": 3})
# All of these should return non-chunked arrays
NO_CHUNKS = (None, 0, {})
for no_chunk in NO_CHUNKS:
open_kwargs = {"chunks": no_chunk}
> with self.roundtrip(original, open_kwargs=open_kwargs) as actual:
xarray/tests/test_backends.py:1594:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__
return next(self.gen)
xarray/tests/test_backends.py:1553: in roundtrip
with self.open(store_target, **open_kwargs) as ds:
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__
return next(self.gen)
xarray/tests/test_backends.py:1540: in open
with xr.open_dataset(store_target, engine="zarr", **kwargs) as ds:
xarray/backends/api.py:587: in open_dataset
ds = maybe_decode_store(store, chunks)
xarray/backends/api.py:511: in maybe_decode_store
for k, v in ds.variables.items()
xarray/backends/api.py:511: in <dictcomp>
for k, v in ds.variables.items()
xarray/backends/zarr.py:398: in maybe_chunk
var = var.chunk(chunk_spec, name=name2, lock=None)
xarray/core/variable.py:1007: in chunk
data = da.from_array(data, chunks, name=name, lock=lock, **kwargs)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2712: in from_array
chunks, x.shape, dtype=x.dtype, previous_chunks=previous_chunks
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2447: in normalize_chunks
(),
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2445: in <genexpr>
for s, c in zip(shape, chunks)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: in blockdims_from_blockshape
for d, bd in zip(shape, chunks)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.0 = <zip object at 0x7f58332d9d48>
((bd,) * (d // bd) + ((d % bd,) if d % bd else ()) if d else (0,))
> for d, bd in zip(shape, chunks)
)
E ZeroDivisionError: integer division or modulo by zero
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: ZeroDivisionError
_______________ TestZarrDirectoryStore.test_vectorized_indexing ________________
self = <xarray.tests.test_backends.TestZarrDirectoryStore object at 0x7f5832a08a20>
@pytest.mark.xfail(
not has_dask,
reason="the code for indexing without dask handles negative steps in slices incorrectly",
)
def test_vectorized_indexing(self):
in_memory = create_test_data()
with self.roundtrip(in_memory) as on_disk:
indexers = {
"dim1": DataArray([0, 2, 0], dims="a"),
"dim2": DataArray([0, 2, 3], dims="a"),
}
expected = in_memory.isel(**indexers)
actual = on_disk.isel(**indexers)
# make sure the array is not yet loaded into memory
assert not actual["var1"].variable._in_memory
assert_identical(expected, actual.load())
# do it twice, to make sure we're switched from
# vectorized -> numpy when we cached the values
actual = on_disk.isel(**indexers)
assert_identical(expected, actual)
def multiple_indexing(indexers):
# make sure a sequence of lazy indexings certainly works.
with self.roundtrip(in_memory) as on_disk:
actual = on_disk["var3"]
expected = in_memory["var3"]
for ind in indexers:
actual = actual.isel(**ind)
expected = expected.isel(**ind)
# make sure the array is not yet loaded into memory
assert not actual.variable._in_memory
assert_identical(expected, actual.load())
# two-staged vectorized-indexing
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": DataArray([[0, 4], [1, 3], [2, 2]], dims=["a", "b"]),
},
{"a": DataArray([0, 1], dims=["c"]), "b": DataArray([0, 1], dims=["c"])},
]
multiple_indexing(indexers)
# vectorized-slice mixed
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": slice(None, 10),
}
]
multiple_indexing(indexers)
# vectorized-integer mixed
indexers = [
{"dim3": 0},
{"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])},
{"a": slice(None, None, 2)},
]
multiple_indexing(indexers)
# vectorized-integer mixed
indexers = [
{"dim3": 0},
{"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"])},
{"a": 1, "b": 0},
]
multiple_indexing(indexers)
# with negative step slice.
indexers = [
{
"dim1": DataArray([[0, 7], [2, 6], [3, 5]], dims=["a", "b"]),
"dim3": slice(-1, 1, -1),
}
]
> multiple_indexing(indexers)
xarray/tests/test_backends.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/tests/test_backends.py:642: in multiple_indexing
assert_identical(expected, actual.load())
xarray/core/dataarray.py:814: in load
ds = self._to_temp_dataset().load(**kwargs)
xarray/core/dataset.py:666: in load
v.load()
xarray/core/variable.py:381: in load
self._data = np.asarray(self._data)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:677: in __array__
self._ensure_cached()
xarray/core/indexing.py:674: in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:653: in __array__
return np.asarray(self.array, dtype=dtype)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/numpy/core/numeric.py:501: in asarray
return array(a, dtype, copy=False, order=order)
xarray/core/indexing.py:557: in __array__
return np.asarray(array[self.key], dtype=None)
xarray/backends/zarr.py:57: in __getitem__
return array[key.tuple]
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:572: in __getitem__
return self.get_basic_selection(selection, fields=fields)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:698: in get_basic_selection
fields=fields)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/core.py:738: in _get_basic_selection_nd
indexer = BasicIndexer(selection, self)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:279: in __init__
dim_indexer = SliceDimIndexer(dim_sel, dim_len, dim_chunk_len)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/indexing.py:107: in __init__
err_negative_step()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def err_negative_step():
> raise IndexError('only slices with step >= 1 are supported')
E IndexError: only slices with step >= 1 are supported
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/zarr/errors.py:55: IndexError
___________________ TestZarrDirectoryStore.test_manual_chunk ___________________
self = <xarray.tests.test_backends.TestZarrDirectoryStore object at 0x7f5831763ef0>
@requires_dask
@pytest.mark.filterwarnings("ignore:Specified Dask chunks")
def test_manual_chunk(self):
original = create_test_data().chunk({"dim1": 3, "dim2": 4, "dim3": 3})
# All of these should return non-chunked arrays
NO_CHUNKS = (None, 0, {})
for no_chunk in NO_CHUNKS:
open_kwargs = {"chunks": no_chunk}
> with self.roundtrip(original, open_kwargs=open_kwargs) as actual:
xarray/tests/test_backends.py:1594:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__
return next(self.gen)
xarray/tests/test_backends.py:1553: in roundtrip
with self.open(store_target, **open_kwargs) as ds:
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/contextlib.py:81: in __enter__
return next(self.gen)
xarray/tests/test_backends.py:1540: in open
with xr.open_dataset(store_target, engine="zarr", **kwargs) as ds:
xarray/backends/api.py:587: in open_dataset
ds = maybe_decode_store(store, chunks)
xarray/backends/api.py:511: in maybe_decode_store
for k, v in ds.variables.items()
xarray/backends/api.py:511: in <dictcomp>
for k, v in ds.variables.items()
xarray/backends/zarr.py:398: in maybe_chunk
var = var.chunk(chunk_spec, name=name2, lock=None)
xarray/core/variable.py:1007: in chunk
data = da.from_array(data, chunks, name=name, lock=lock, **kwargs)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2712: in from_array
chunks, x.shape, dtype=x.dtype, previous_chunks=previous_chunks
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2447: in normalize_chunks
(),
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:2445: in <genexpr>
for s, c in zip(shape, chunks)
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: in blockdims_from_blockshape
for d, bd in zip(shape, chunks)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.0 = <zip object at 0x7f58324b7f48>
((bd,) * (d // bd) + ((d % bd,) if d % bd else ()) if d else (0,))
> for d, bd in zip(shape, chunks)
)
E ZeroDivisionError: integer division or modulo by zero
/usr/share/miniconda/envs/xarray-tests/lib/python3.6/site-packages/dask/array/core.py:954: ZeroDivisionError
```
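Both failure modes can be reproduced standalone. A small sketch, assuming zarr 2.x and a dask version contemporary with this traceback (newer releases may raise different error types):

```python
import dask.array as da
import numpy as np
import zarr

z = zarr.array(np.arange(10))
try:
    z[slice(-1, 1, -1)]  # negative-step slice into a zarr array
except IndexError as err:
    print(err)  # only slices with step >= 1 are supported

try:
    da.from_array(np.arange(10), chunks=0)  # a chunk size of zero
except ZeroDivisionError as err:
    print(err)  # integer division or modulo by zero
```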
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
651772649 | https://github.com/pydata/xarray/pull/4187#issuecomment-651772649 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MTc3MjY0OQ== | weiji14 23487320 | 2020-06-30T12:56:00Z | 2020-06-30T12:56:00Z | CONTRIBUTOR | Is it ok to drop the deprecated...? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
651662701 | https://github.com/pydata/xarray/pull/4187#issuecomment-651662701 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MTY2MjcwMQ== | weiji14 23487320 | 2020-06-30T09:03:53Z | 2020-06-30T09:24:12Z | CONTRIBUTOR | Nevermind, I found it. There was an `UnboundLocalError`:
```python-traceback
=================================== FAILURES ===================================
__________________________ TestDataset.test_lazy_load __________________________
self = <xarray.tests.test_dataset.TestDataset object at 0x7f4aed5df940>
def test_lazy_load(self):
store = InaccessibleVariableDataStore()
create_test_data().dump_to_store(store)
for decode_cf in [True, False]:
> ds = open_dataset(store, decode_cf=decode_cf)
xarray/tests/test_dataset.py:4188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/backends/api.py:587: in open_dataset
ds = maybe_decode_store(store)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
store = <xarray.tests.test_dataset.InaccessibleVariableDataStore object at 0x7f4aed5dfb38>
lock = False
def maybe_decode_store(store, lock=False):
ds = conventions.decode_cf(
store,
mask_and_scale=mask_and_scale,
decode_times=decode_times,
concat_characters=concat_characters,
decode_coords=decode_coords,
drop_variables=drop_variables,
use_cftime=use_cftime,
decode_timedelta=decode_timedelta,
)
_protect_dataset_variables_inplace(ds, cache)
> if chunks is not None:
E UnboundLocalError: local variable 'chunks' referenced before assignment
xarray/backends/api.py:466: UnboundLocalError
```
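A minimal illustration of why Python raises this (a toy function, not xarray's actual `maybe_decode_store`): any assignment to a name anywhere in a function body makes that name local to the function, so reading it before the assignment line fails:

```python
def maybe_decode_store(store):
    if chunks is not None:  # the read happens first...
        pass
    chunks = {}  # ...but this later assignment makes 'chunks' a local variable
    return store

try:
    maybe_decode_store(None)
except UnboundLocalError as err:
    print(err)  # local variable 'chunks' referenced before assignment
```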
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
651624166 | https://github.com/pydata/xarray/pull/4187#issuecomment-651624166 | https://api.github.com/repos/pydata/xarray/issues/4187 | MDEyOklzc3VlQ29tbWVudDY1MTYyNDE2Ng== | weiji14 23487320 | 2020-06-30T08:02:03Z | 2020-06-30T09:23:32Z | CONTRIBUTOR | This is the one test failure (AttributeError) on Linux py36-bare-minimum:
```python-traceback
=================================== FAILURES ===================================
__________________________ TestDataset.test_lazy_load __________________________
self = <xarray.tests.test_dataset.TestDataset object at 0x7fa80b2b7be0>
def test_lazy_load(self):
store = InaccessibleVariableDataStore()
create_test_data().dump_to_store(store)
for decode_cf in [True, False]:
> ds = open_dataset(store, decode_cf=decode_cf)
xarray/tests/test_dataset.py:4188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
xarray/backends/api.py:578: in open_dataset
engine = _get_engine_from_magic_number(filename_or_obj)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
filename_or_obj = <xarray.tests.test_dataset.InaccessibleVariableDataStore object at 0x7fa80b2b7d30>
def _get_engine_from_magic_number(filename_or_obj):
# check byte header to determine file type
if isinstance(filename_or_obj, bytes):
magic_number = filename_or_obj[:8]
else:
> if filename_or_obj.tell() != 0:
E AttributeError: 'InaccessibleVariableDataStore' object has no attribute 'tell'
xarray/backends/api.py:116: AttributeError
```
Been scratching my head debugging this one. There doesn't seem to be an obvious reason why this test is failing, since 1) this test isn't for Zarr, and 2) this test shouldn't be affected by the new...
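One way to sidestep this class of failure (a hypothetical guard sketched for illustration, not the fix that was actually merged) would be to only attempt magic-number sniffing on objects that expose a file-like interface, so that store objects fall through to a clearer error:

```python
# Hypothetical variant -- illustrative only.
def _get_engine_from_magic_number(filename_or_obj):
    if isinstance(filename_or_obj, bytes):
        magic_number = filename_or_obj[:8]
    elif hasattr(filename_or_obj, "tell") and hasattr(filename_or_obj, "read"):
        if filename_or_obj.tell() != 0:
            raise ValueError(
                "cannot guess the engine, file-like object "
                "read/write pointer not at the start of the file"
            )
        magic_number = filename_or_obj.read(8)
        filename_or_obj.seek(0)
    else:
        raise TypeError(
            f"cannot guess the engine for {type(filename_or_obj).__name__!r} objects"
        )
    return magic_number
```
|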
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Xarray open_mfdataset with engine Zarr 647804004 | |
651481343 | https://github.com/pydata/xarray/pull/4003#issuecomment-651481343 | https://api.github.com/repos/pydata/xarray/issues/4003 | MDEyOklzc3VlQ29tbWVudDY1MTQ4MTM0Mw== | weiji14 23487320 | 2020-06-30T02:24:23Z | 2020-06-30T02:33:37Z | CONTRIBUTOR | Sure, I can move it, but I just wanted to make sure @Mikejmnez gets the credit for this PR. Edit: moved to https://github.com/pydata/xarray/pull/4187. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.open_mzar: open multiple zarr files (in parallel) 606683601 | |
651397892 | https://github.com/pydata/xarray/pull/4003#issuecomment-651397892 | https://api.github.com/repos/pydata/xarray/issues/4003 | MDEyOklzc3VlQ29tbWVudDY1MTM5Nzg5Mg== | weiji14 23487320 | 2020-06-29T22:15:08Z | 2020-06-29T23:06:10Z | CONTRIBUTOR | @Mikejmnez, do you mind if I pick up working on this branch? I'd be really keen to see it get into xarray 0.16, and then it will be possible to resolve the intake-xarray issue at https://github.com/intake/intake-xarray/issues/70. ~~Not sure if it's possible to get commit access here, or if I should just submit a PR to your fork, or maybe there's a better way?~~ Edit: I've opened up a pull request to the fork. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray.open_mzar: open multiple zarr files (in parallel) 606683601 | |
632345916 | https://github.com/pydata/xarray/pull/4036#issuecomment-632345916 | https://api.github.com/repos/pydata/xarray/issues/4036 | MDEyOklzc3VlQ29tbWVudDYzMjM0NTkxNg== | weiji14 23487320 | 2020-05-21T21:06:15Z | 2020-05-21T21:06:15Z | CONTRIBUTOR | Cool, there doesn't seem to be an easy... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
support darkmode 613044689 | |
632010808 | https://github.com/pydata/xarray/pull/4036#issuecomment-632010808 | https://api.github.com/repos/pydata/xarray/issues/4036 | MDEyOklzc3VlQ29tbWVudDYzMjAxMDgwOA== | weiji14 23487320 | 2020-05-21T10:29:33Z | 2020-05-21T10:29:33Z | CONTRIBUTOR | This looks awesome! Is it possible to port this to the Atom editor as well? This is what it looks like currently on the 'One Dark' Atom editor theme: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
support darkmode 613044689 | |
520693149 | https://github.com/pydata/xarray/issues/3185#issuecomment-520693149 | https://api.github.com/repos/pydata/xarray/issues/3185 | MDEyOklzc3VlQ29tbWVudDUyMDY5MzE0OQ== | weiji14 23487320 | 2019-08-13T05:24:06Z | 2019-08-13T05:26:03Z | CONTRIBUTOR |
I'm not sure where it's picking up the libnetcdf 4.6.3 version from, but I found your comment at https://github.com/pydata/xarray/issues/2535#issuecomment-445944261 and think it might indeed be an incompatibility issue between the rasterio and netCDF4 binary wheels (do rasterio wheels include netcdf binaries?). Probably somewhat related to https://github.com/mapbox/rasterio/issues/1574 too. Managed to get things to work by combining the workaround in this Pull Request and StackOverflow post, basically having pip compile the `netCDF4` wheel from source.
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946 | |
520658129 | https://github.com/pydata/xarray/issues/3185#issuecomment-520658129 | https://api.github.com/repos/pydata/xarray/issues/3185 | MDEyOklzc3VlQ29tbWVudDUyMDY1ODEyOQ== | weiji14 23487320 | 2019-08-13T01:54:54Z | 2019-08-13T01:54:54Z | CONTRIBUTOR | Yes, there's https://gdal.org/drivers/raster/netcdf.html :smile: I've done a bit more debugging (having temporarily isolated salem from my script) and am still having issues with my setup. The clean xarray-tests conda environment that works with... Not sure if this libnetcdf 4.6.3 version is the problem, but it stands out the most (to me at least) when looking at the diff between my setup and the clean one. Is there a way to check the order in which xarray looks for the netcdf binaries, as I feel it might be a PATH-related issue? Also not sure if this issue fits here in... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946 | |
518930975 | https://github.com/pydata/xarray/issues/3185#issuecomment-518930975 | https://api.github.com/repos/pydata/xarray/issues/3185 | MDEyOklzc3VlQ29tbWVudDUxODkzMDk3NQ== | weiji14 23487320 | 2019-08-07T04:05:19Z | 2019-08-07T04:05:19Z | CONTRIBUTOR | Hold on, the coordinates seem to be parsed out correctly from the netCDF file (even with netCDF4==1.5.1.2) when I have a clean conda installation created following the instructions at https://xarray.pydata.org/en/latest/contributing.html#creating-a-python-environment. I've isolated the issue and think the problem arises when I also import `salem`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946 | |
518860992 | https://github.com/pydata/xarray/issues/3185#issuecomment-518860992 | https://api.github.com/repos/pydata/xarray/issues/3185 | MDEyOklzc3VlQ29tbWVudDUxODg2MDk5Mg== | weiji14 23487320 | 2019-08-06T22:02:44Z | 2019-08-06T22:02:44Z | CONTRIBUTOR | Well |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_rasterio does not read coordinates from netCDF file properly with netCDF4>=1.4.2 477081946 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```