
issues


3 rows where user = 6943441 sorted by updated_at descending

Issue 516306758 (MDU6SXNzdWU1MTYzMDY3NTg=) · #3476 · Error when writing string coordinate variables to zarr · jsadler2 (6943441) · state: open · 16 comments · created 2019-11-01T19:32:29Z · updated 2024-01-25T17:39:42Z · author_association: NONE

I saved an xarray dataset to zarr using `to_zarr`. Later, I tried to read that dataset back from the original zarr store, re-chunk it, and write it to a new zarr store. When I did that I got a strange error. I have attached a zip of a minimal version of the zarr dataset I am using: test_sm_zarr.zip

MCVE Code Sample

```python
import xarray as xr

sm_from_zarr = xr.open_zarr('test_sm_zarr')
sm_from_zarr.to_zarr('test_sm_zarr_from', mode='w')
```

Expected Output

No error

Problem Description

I get this error:

```
C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\core\merge.py:18: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
  PANDAS_TYPES = (pd.Series, pd.DataFrame, pd.Panel)
C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\core\dataarray.py:1829: FutureWarning: The Panel class is removed from pandas. Accessing it from the top-level namespace will also be removed in the next version
  'DataArray', pd.Series, pd.DataFrame, pd.Panel]:
Traceback (most recent call last):
  File "rechunk_test.py", line 38, in <module>
    sm_from_zarr.to_zarr('test_sm_zarr_from', mode='w')
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\core\dataset.py", line 1414, in to_zarr
    consolidated=consolidated, append_dim=append_dim)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\api.py", line 1101, in to_zarr
    dump_to_store(dataset, zstore, writer, encoding=encoding)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\api.py", line 929, in dump_to_store
    unlimited_dims=unlimited_dims)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\zarr.py", line 366, in store
    unlimited_dims=unlimited_dims)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\zarr.py", line 432, in set_variables
    writer.add(v.data, zarr_array)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\common.py", line 173, in add
    target[...] = source
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1115, in __setitem__
    self.set_basic_selection(selection, value, fields=fields)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1210, in set_basic_selection
    return self._set_basic_selection_nd(selection, value, fields=fields)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1501, in _set_basic_selection_nd
    self._set_selection(indexer, value, fields=fields)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1550, in _set_selection
    self._chunk_setitem(chunk_coords, chunk_selection, chunk_value, fields=fields)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1659, in _chunk_setitem
    fields=fields)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1723, in _chunk_setitem_nosync
    cdata = self._encode_chunk(chunk)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\zarr\core.py", line 1769, in _encode_chunk
    chunk = f.encode(chunk)
  File "numcodecs/vlen.pyx", line 108, in numcodecs.vlen.VLenUTF8.encode
TypeError: expected unicode string, found 20
```

BUT I think it has something to do with the datatype of one of my coordinates, `site_code`, because if I do this I get no error:

```python
import xarray as xr

sm_from_zarr = xr.open_zarr('test_sm_zarr')
sm_from_zarr['site_code'] = sm_from_zarr.site_code.astype('str')
sm_from_zarr.to_zarr('test_sm_zarr_from', mode='w')
```

Before converting, the datatype of the `site_code` coordinate is `object`.
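Generalizing the workaround: before writing, coerce every object-dtype coordinate to a string dtype (in xarray, per-coordinate `astype(str)` as above). The helper below is a plain-Python sketch of that coercion; the function name and the dict-of-lists stand-in for object-dtype coordinates are illustrative, not xarray API.

```python
def coerce_object_values(coords):
    """Coerce every non-string value to str, mirroring the
    sm_from_zarr['site_code'].astype('str') workaround above.

    `coords` maps coordinate names to lists of raw values -- an
    illustrative stand-in for object-dtype xarray coordinates."""
    return {
        name: [v if isinstance(v, str) else str(v) for v in values]
        for name, values in coords.items()
    }

# Mixed int/str values: the shape of data that trips numcodecs' VLenUTF8 encoder.
print(coerce_object_values({"site_code": [20, "A1"]}))  # -> {'site_code': ['20', 'A1']}
```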

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.2
pandas: 0.25.1
numpy: 1.17.1
scipy: 1.3.1
netCDF4: 1.5.1.2
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: 2.3.2
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.3.0
distributed: 2.5.1
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 41.2.0
pip: 19.2.3
conda: None
pytest: 5.1.2
IPython: 7.8.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3476/reactions",
    "total_count": 8,
    "+1": 8,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
Issue 502720385 (MDU6SXNzdWU1MDI3MjAzODU=) · #3374 · Key error in to_netcdf · jsadler2 (6943441) · state: closed · 5 comments · created 2019-10-04T16:04:30Z · updated 2023-03-14T21:14:33Z · closed 2023-03-14T21:14:33Z · author_association: NONE

MCVE Code Sample

```python
import xarray as xr

p = range(10)
ds = xr.Dataset({'precip': (['time'], p)}, coords={'time': range(10)})
ds.attrs['Conventions'] = ['COARDS', 'GrADS']
ds.to_netcdf('a.nc')
```

Expected Output

Saves data to a.nc without error

Problem Description

In version 0.12.1 this works without error. In 0.12.2 it throws an error:

```
  File "xarray_error.py", line 6, in <module>
    ds.to_netcdf('a.nc')
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\core\dataset.py", line 1365, in to_netcdf
    compute=compute)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\api.py", line 900, in to_netcdf
    store.close()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\scipy_.py", line 226, in close
    self._manager.close()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\xarray\backends\file_manager.py", line 188, in close
    file.close()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 299, in close
    self.flush()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 409, in flush
    self._write()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 420, in _write
    self._write_gatt_array()
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 442, in _write_gatt_array
    self._write_att_array(self._attributes)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 450, in _write_att_array
    self._write_att_values(values)
  File "C:\Users\jsadler\AppData\Local\Continuum\anaconda3\envs\nwm\lib\site-packages\scipy\io\netcdf.py", line 562, in _write_att_values
    nc_type = REVERSE[values.dtype.char, values.dtype.itemsize]
KeyError: ('U', 24)
```

If instead the attribute is saved as a string rather than a list, it works:

```python
import xarray as xr

p = range(10)
ds = xr.Dataset({'precip': (['time'], p)}, coords={'time': range(10)})
ds.attrs['Conventions'] = "['COARDS', 'GrADS']"
ds.to_netcdf('a.nc')
```
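The same fix can be applied generically: before calling `to_netcdf` through the scipy backend, flatten any list- or tuple-valued attribute into a single string, since scipy's netCDF writer has no type mapping for unicode arrays (the `KeyError: ('U', 24)` above). A small stdlib sketch; the helper name is illustrative, not an xarray API.

```python
def sanitize_attrs(attrs):
    """Join sequence-valued attributes into one comma-separated string
    so every attribute value is a scalar the netCDF writer can map."""
    return {
        key: ", ".join(map(str, value)) if isinstance(value, (list, tuple)) else value
        for key, value in attrs.items()
    }

print(sanitize_attrs({"Conventions": ["COARDS", "GrADS"], "title": "precip"}))
# -> {'Conventions': 'COARDS, GrADS', 'title': 'precip'}
```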

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.None
libhdf5: None
libnetcdf: None
xarray: 0.12.2
pandas: 0.25.1
numpy: 1.17.1
scipy: 1.3.1
netCDF4: None
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: 2.3.2
cftime: None
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.3.0
distributed: 2.5.1
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 41.2.0
pip: 19.2.3
conda: None
pytest: 5.1.2
IPython: 7.8.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3374/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
Issue 861684673 (MDU6SXNzdWU4NjE2ODQ2NzM=) · #5189 · KeyError pulling from Nasa server with Pydap · jsadler2 (6943441) · state: open · 12 comments · created 2021-04-19T18:41:11Z · updated 2021-06-17T09:49:48Z · author_association: NONE

What happened: I'm trying to pull data from this NASA server: https://hydro1.gesdisc.eosdis.nasa.gov/dods/NLDAS_FORA0125_H.002?. Through pydap, I can create a Dataset representing the data, but when I try to get the data I get this error:

```python-traceback

KeyError Traceback (most recent call last) ~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in _getitem_string(self, key) 403 try: --> 404 return self._dict[quote(key)] 405 except KeyError:

KeyError: 'tmp2m%2Etmp2m'

During handling of the above exception, another exception occurred:

IndexError Traceback (most recent call last) <ipython-input-17-3efbb8f7b71f> in <module> ----> 1 ds['tmp2m'].isel(time=0).values

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/dataarray.py in values(self) 632 def values(self) -> np.ndarray: 633 """The array's data as a numpy.ndarray""" --> 634 return self.variable.values 635 636 @values.setter

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/variable.py in values(self) 552 def values(self): 553 """The variable's data as a numpy.ndarray""" --> 554 return _as_array_or_item(self._data) 555 556 @values.setter

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/variable.py in _as_array_or_item(data) 285 data = data.get() 286 else: --> 287 data = np.asarray(data) 288 if data.ndim == 0: 289 if data.dtype.kind == "M":

~/my-conda-envs/nwm/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --> 102 return array(a, dtype, copy=False, order=order) 103 104

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in __array__(self, dtype) 691 692 def __array__(self, dtype=None): --> 693 self._ensure_cached() 694 return np.asarray(self.array, dtype=dtype) 695

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in _ensure_cached(self) 688 def _ensure_cached(self): 689 if not isinstance(self.array, NumpyIndexingAdapter): --> 690 self.array = NumpyIndexingAdapter(np.asarray(self.array)) 691 692 def __array__(self, dtype=None):

~/my-conda-envs/nwm/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --> 102 return array(a, dtype, copy=False, order=order) 103 104

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in __array__(self, dtype) 661 662 def __array__(self, dtype=None): --> 663 return np.asarray(self.array, dtype=dtype) 664 665 def __getitem__(self, key):

~/my-conda-envs/nwm/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --> 102 return array(a, dtype, copy=False, order=order) 103 104

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in __array__(self, dtype) 566 def __array__(self, dtype=None): 567 array = as_indexable(self.array) --> 568 return np.asarray(array[self.key], dtype=None) 569 570 def transpose(self, order):

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/conventions.py in __getitem__(self, key) 60 61 def __getitem__(self, key): ---> 62 return np.asarray(self.array[key], dtype=self.dtype) 63 64

~/my-conda-envs/nwm/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --> 102 return array(a, dtype, copy=False, order=order) 103 104

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/coding/variables.py in __array__(self, dtype) 68 69 def __array__(self, dtype=None): ---> 70 return self.func(self.array) 71 72 def __repr__(self):

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/coding/variables.py in _apply_mask(data, encoded_fill_values, decoded_fill_value, dtype) 136 ) -> np.ndarray: 137 """Mask all matching values in a NumPy arrays.""" --> 138 data = np.asarray(data, dtype=dtype) 139 condition = False 140 for fv in encoded_fill_values:

~/my-conda-envs/nwm/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like) 100 return _asarray_with_like(a, dtype=dtype, order=order, like=like) 101 --> 102 return array(a, dtype, copy=False, order=order) 103 104

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in __array__(self, dtype) 566 def __array__(self, dtype=None): 567 array = as_indexable(self.array) --> 568 return np.asarray(array[self.key], dtype=None) 569 570 def transpose(self, order):

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/backends/pydap_.py in __getitem__(self, key) 36 def __getitem__(self, key): 37 return indexing.explicit_indexing_adapter( ---> 38 key, self.shape, indexing.IndexingSupport.BASIC, self._getitem 39 ) 40

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/core/indexing.py in explicit_indexing_adapter(key, shape, indexing_support, raw_indexing_method) 851 """ 852 raw_key, numpy_indices = decompose_indexer(key, shape, indexing_support) --> 853 result = raw_indexing_method(raw_key.tuple) 854 if numpy_indices.tuple: 855 # index the loaded np.ndarray

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/backends/pydap_.py in _getitem(self, key) 43 # downloading coordinate data twice 44 array = getattr(self.array, "array", self.array) ---> 45 result = robust_getitem(array, key, catch=ValueError) 46 # in some cases, pydap doesn't squeeze axes automatically like numpy 47 axis = tuple(n for n, k in enumerate(key) if isinstance(k, integer_types))

~/my-conda-envs/nwm/lib/python3.7/site-packages/xarray/backends/common.py in robust_getitem(array, key, catch, max_retries, initial_delay) 51 for n in range(max_retries + 1): 52 try: ---> 53 return array[key] 54 except catch: 55 if n == max_retries:

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in __getitem__(self, index) 318 def __getitem__(self, index): 319 out = copy.copy(self) --> 320 out.data = self._get_data_index(index) 321 return out 322

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in _get_data_index(self, index) 347 return np.vectorize(decode_np_strings)(self._data[index]) 348 else: --> 349 return self._data[index] 350 351 def _get_data(self):

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/handlers/dap.py in __getitem__(self, index) 147 dataset = build_dataset(dds) 148 dataset.data = unpack_data(BytesReader(data), dataset) --> 149 return dataset[self.id].data 150 151 def __len__(self):

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in __getitem__(self, key) 423 def __getitem__(self, key): 424 if isinstance(key, string_types): --> 425 return self._getitem_string(key) 426 elif (isinstance(key, tuple) and 427 all(isinstance(name, string_types)

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in _getitem_string(self, key) 407 if len(splitted) > 1: 408 try: --> 409 return self[splitted[0]]['.'.join(splitted[1:])] 410 except KeyError: 411 return self['.'.join(splitted[1:])]

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in __getitem__(self, index) 318 def __getitem__(self, index): 319 out = copy.copy(self) --> 320 out.data = self._get_data_index(index) 321 return out 322

~/my-conda-envs/nwm/lib/python3.7/site-packages/pydap/model.py in _get_data_index(self, index) 347 return np.vectorize(decode_np_strings)(self._data[index]) 348 else: --> 349 return self._data[index] 350 351 def _get_data(self):

IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

```

What you expected to happen:

I should be able to select the data without error.

Minimal Complete Verifiable Example:

(a NASA username and password are required):

```python
from pydap.client import open_url
from pydap.cas.urs import setup_session
import xarray as xr

base_url = "https://hydro1.gesdisc.eosdis.nasa.gov/dods/NLDAS_FORA0125_H.002?"

session = setup_session("USER", "PASSWORD", check_url=base_url)

store = xr.backends.PydapDataStore.open(base_url, session=session)

ds = xr.open_dataset(store)

ds['tmp2m'].isel(time=0, lat=0, lon=0).values
```

Anything else we need to know?:
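For context, the traceback passes through xarray's `robust_getitem` helper, which retries indexing only for the exception type given by `catch` (the pydap backend passes `catch=ValueError`), so the `IndexError` raised inside pydap propagates immediately instead of being retried. A simplified sketch of that retry pattern (not xarray's exact implementation; the `Flaky` class is a made-up stand-in for a flaky OPeNDAP array):

```python
import time

def robust_getitem(array, key, catch=ValueError, max_retries=3, initial_delay=0.1):
    # Retry indexing that can fail transiently (e.g. a flaky OPeNDAP server),
    # doubling the delay between attempts. Any exception not matching `catch`
    # (such as the IndexError above) propagates on the first attempt.
    delay = initial_delay
    for attempt in range(max_retries + 1):
        try:
            return array[key]
        except catch:
            if attempt == max_retries:
                raise
            time.sleep(delay)
            delay *= 2

class Flaky:
    """Fails twice with ValueError, then succeeds."""
    def __init__(self):
        self.calls = 0
    def __getitem__(self, key):
        self.calls += 1
        if self.calls < 3:
            raise ValueError("transient server error")
        return "data"

print(robust_getitem(Flaky(), 0, initial_delay=0.0))  # -> data
```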

Environment:

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 4.14.219-164.354.amzn2.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: en_US.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.3
numpy: 1.20.1
scipy: 1.6.0
netCDF4: 1.5.6
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: 2.6.1
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.1
cfgrib: None
iris: None
bottleneck: None
dask: 2021.02.0
distributed: 2021.02.0
matplotlib: 3.3.4
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: None
IPython: 7.21.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
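The query behind this page ("3 rows where user = 6943441 sorted by updated_at descending") can be reproduced against the schema above with Python's stdlib sqlite3. A sketch using a trimmed version of the [issues] schema (only the columns the query touches) and the three rows shown on this page:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Reduced version of the [issues] schema above -- only the columns used below.
conn.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER, "
    "title TEXT, user INTEGER, state TEXT, updated_at TEXT)"
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (516306758, 3476, "Error when writing string coordinate variables to zarr",
         6943441, "open", "2024-01-25T17:39:42Z"),
        (502720385, 3374, "Key error in to_netcdf",
         6943441, "closed", "2023-03-14T21:14:33Z"),
        (861684673, 5189, "KeyError pulling from Nasa server with Pydap",
         6943441, "open", "2021-06-17T09:49:48Z"),
    ],
)
# ISO-8601 timestamps sort correctly as plain text.
rows = conn.execute(
    "SELECT number, state FROM issues WHERE user = 6943441 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)  # -> [(3476, 'open'), (3374, 'closed'), (5189, 'open')]
```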
Powered by Datasette · Queries took 21.126ms · About: xarray-datasette