issues
10 rows where type = "issue" and user = 206773 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
762323609 | MDU6SXNzdWU3NjIzMjM2MDk= | 4681 | Uncompressed Zarr arrays can no longer be written to Zarr | forman 206773 | open | 0 | 2 | 2020-12-11T13:02:28Z | 2023-10-24T23:08:35Z | NONE | What happened: We create Zarr datasets whose arrays are stored uncompressed. Since xarray 0.16.2 and Zarr 2.6.1 this approach doesn't work anymore: when we write datasets opened from such a store using `to_zarr()`, a `ParamValidationError` is raised.
(Full traceback is below.) It seems that our static numpy arrays are not encoded at all because they are uncompressed. If we use a compressor, it works again; that's our current workaround. What you expected to happen: Before data is written into a Zarr chunk store, it must be encoded from numpy arrays to bytes.
This does not seem to happen if uncompressed data is written, that is, if the Zarr encoding's compressor is `None`. Minimal Complete Verifiable Example: A minimal, self-contained example is the entire test module test_reprod_27.py of the xcube Sentinel Hub plugin. The original issue in the Sentinel Hub xcube plugin is xcube-sh #27. Environment: Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Nov 27 2020, 18:58:29) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: de_DE.cp1252
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.16.2
pandas: 1.1.5
numpy: 1.19.4
scipy: 1.5.3
netCDF4: 1.5.5
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: 2.6.1
cftime: 1.3.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.1.5
cfgrib: None
iris: None
bottleneck: None
dask: 2.30.0
distributed: 2.30.1
matplotlib: 3.3.3
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20201009
pip: 20.3.1
conda: None
pytest: 6.1.2
IPython: 7.19.0
sphinx: 3.3.1
```

Traceback:

```
File "D:\Projects\xcube\xcube\cli\_gen2\write.py", line 47, in write_cube
    data_id = writer.write_data(cube,
File "D:\Projects\xcube\xcube\core\store\stores\s3.py", line 213, in write_data
    self._new_s3_writer(writer_id).write_data(data, data_id=path, replace=replace, **write_params)
File "D:\Projects\xcube\xcube\core\store\accessors\dataset.py", line 313, in write_data
    data.to_zarr(s3fs.S3Map(root=f'{bucket_name}/{data_id}' if bucket_name else data_id,
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\core\dataset.py", line 1745, in to_zarr
    return to_zarr(
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\backends\api.py", line 1481, in to_zarr
    dump_to_store(dataset, zstore, writer, encoding=encoding)
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\backends\api.py", line 1158, in dump_to_store
    store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\backends\zarr.py", line 473, in store
    self.set_variables(
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\backends\zarr.py", line 549, in set_variables
    writer.add(v.data, zarr_array, region)
File "D:\Miniconda3\envs\xcube\lib\site-packages\xarray\backends\common.py", line 143, in add
    target[region] = source
File "D:\Miniconda3\envs\xcube\lib\site-packages\zarr\core.py", line 1122, in __setitem__
    self.set_basic_selection(selection, value, fields=fields)
File "D:\Miniconda3\envs\xcube\lib\site-packages\zarr\core.py", line 1217, in set_basic_selection
    return self._set_basic_selection_nd(selection, value, fields=fields)
File "D:\Miniconda3\envs\xcube\lib\site-packages\zarr\core.py", line 1508, in _set_basic_selection_nd
    self._set_selection(indexer, value, fields=fields)
File "D:\Miniconda3\envs\xcube\lib\site-packages\zarr\core.py", line 1580, in _set_selection
    self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "D:\Miniconda3\envs\xcube\lib\site-packages\zarr\core.py", line 1709, in _chunk_setitems
    self.chunk_store.setitems({k: v for k, v in zip(ckeys, cdatas)})
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\mapping.py", line 110, in setitems
    self.fs.pipe(values)
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\asyn.py", line 121, in wrapper
    return maybe_sync(func, self, *args, **kwargs)
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\asyn.py", line 100, in maybe_sync
    return sync(loop, func, *args, **kwargs)
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\asyn.py", line 71, in sync
    raise exc.with_traceback(tb)
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\asyn.py", line 55, in f
    result[0] = await future
File "D:\Miniconda3\envs\xcube\lib\site-packages\fsspec\asyn.py", line 211, in _pipe
    await asyncio.gather(
File "D:\Miniconda3\envs\xcube\lib\site-packages\s3fs\core.py", line 608, in _pipe_file
    return await self._call_s3(
File "D:\Miniconda3\envs\xcube\lib\site-packages\s3fs\core.py", line 225, in _call_s3
    raise translate_boto_error(err) from err
File "D:\Miniconda3\envs\xcube\lib\site-packages\s3fs\core.py", line 207, in _call_s3
    return await method(**additional_kwargs)
File "D:\Miniconda3\envs\xcube\lib\site-packages\aiobotocore\client.py", line 123, in _make_api_call
    request_dict = await self._convert_to_request_dict(
File "D:\Miniconda3\envs\xcube\lib\site-packages\aiobotocore\client.py", line 171, in _convert_to_request_dict
    request_dict = self._serializer.serialize_to_request(
File "D:\Miniconda3\envs\xcube\lib\site-packages\botocore\validate.py", line 297, in serialize_to_request
    raise ParamValidationError(report=report.generate_report())
Invalid type for parameter Body, value: [55.0475 55.0465 55.0455 ... 53.0025 53.0015 53.0005], type: <class 'numpy.ndarray'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
1226272301 | I_kwDOAMm_X85JF24t | 6573 | 32- vs 64-bit coordinates in where() | forman 206773 | open | 0 | 6 | 2022-05-05T06:57:36Z | 2022-09-28T08:17:09Z | NONE | What happened? I'm struggling to decide whether this is a bug or not; at least I faced very unexpected behaviour. For two given data arrays that share a dimension, `where()` behaves as expected as long as their coordinates are equal. However, if the coordinates of the two arrays have different dtypes (float32 vs. float64), the result is truncated to the few positions whose coordinate values are exactly equal in both arrays. The behaviour is likely caused by the fact that the indexes generated for the coordinates are no longer strictly equal, therefore `where()` aligns the arrays on the intersection of their indexes. What did you expect to happen? In the case described above, the dimensions and coordinates of the result should be those of the array on which `where()` is called. Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

c32 = xr.DataArray(np.linspace(0, 1, 10, dtype=np.float32), dims='x')
c64 = xr.DataArray(np.linspace(0, 1, 10, dtype=np.float64), dims='x')

c3 = c32.where(c64 > 0.5)
assert len(c32) == len(c3)

v32 = xr.DataArray(np.random.random(10), dims='x', coords=dict(x=c32))
v64 = xr.DataArray(np.random.random(10), dims='x', coords=dict(x=c64))

v3 = v32.where(v64 > 0.5)
assert len(v32) == len(v3)
# --> AssertionError, Expected: 10, Actual: 2
```

MVCE confirmation
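A short check, not part of the original report, that illustrates the suspected cause stated above: `where()` aligns its arguments, and inner alignment keeps only the coordinate labels that compare exactly equal across float32 and float64. The arrays reuse the MVCE definitions; the `xr.align` call is my addition:

```python
import numpy as np
import xarray as xr

c32 = xr.DataArray(np.linspace(0, 1, 10, dtype=np.float32), dims='x')
c64 = xr.DataArray(np.linspace(0, 1, 10, dtype=np.float64), dims='x')
v32 = xr.DataArray(np.random.random(10), dims='x', coords=dict(x=c32))
v64 = xr.DataArray(np.random.random(10), dims='x', coords=dict(x=c64))

# Inner alignment keeps only labels that are bit-identical after promotion;
# for linspace(0, 1, 10) that is just 0.0 and 1.0.
a, b = xr.align(v32, v64, join='inner')
print(len(a))  # 2 -- matching the truncated where() result
```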
Relevant log output: No response. Anything else we need to know? No response. Environment:
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:17:03) [MSC v.1929 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: AMD64 Family 25 Model 80 Stepping 0, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('de_DE', 'cp1252')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 2022.3.0
pandas: 1.4.2
numpy: 1.21.6
scipy: 1.8.0
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.11.3
cftime: 1.6.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.10
cfgrib: None
iris: None
bottleneck: None
dask: 2022.04.1
distributed: 2022.4.1
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.3.0
cupy: None
pint: None
sparse: None
setuptools: 62.1.0
pip: 22.0.4
conda: None
pytest: 7.1.2
IPython: 8.2.0
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
906748201 | MDU6SXNzdWU5MDY3NDgyMDE= | 5405 | Control CF-encoding in to_zarr() | forman 206773 | open | 0 | 2 | 2021-05-30T12:57:40Z | 2021-06-23T15:47:32Z | NONE | Is your feature request related to a problem? Please describe. I believe xarray's `to_zarr()` needs a way to control whether CF-encoding is applied. When appending data, xarray will always CF-encode variable data according to the encoding information of existing variables before it appends new data. This is fine if the data to be appended is decoded, but if the data to be appended is already encoded (e.g. because it was previously read without CF-decoding), it will be encoded a second time. See also xarray issue #5263 and my actual problem described in https://github.com/bcdev/nc2zarr/issues/35. Describe the solution you'd like I'd like to control whether encoding of data shall take place when appending: if I already have encoded data, I'd like to call `to_zarr()` in a way that writes it as-is. A possible hack is to redundantly decode the already-encoded data first, so that the encoding step applied on append restores the original values. For example, the inconsistency disappears when I uncomment line 469 in xarray's Zarr backend. Minimal Complete Verifiable Example: Here is a test that explains the observed inconsistency.

```python
import shutil
import unittest

import numpy as np
import xarray as xr
import zarr

SRC_DS_1_PATH = 'src_ds_1.zarr'
SRC_DS_2_PATH = 'src_ds_2.zarr'
DST_DS_PATH = 'dst_ds.zarr'


class XarrayToZarrAppendInconsistencyTest(unittest.TestCase):
    @classmethod
    def del_paths(cls):
        for path in (SRC_DS_1_PATH, SRC_DS_2_PATH, DST_DS_PATH):
            shutil.rmtree(path, ignore_errors=True)
```
Environment: Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 15:50:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 26 Stepping 5, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: de_DE.cp1252
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.2.2
numpy: 1.20.1
scipy: 1.6.0
netCDF4: 1.5.6
pydap: installed
h5netcdf: None
h5py: None
Nio: None
zarr: 2.6.1
cftime: 1.4.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.0
cfgrib: None
iris: None
bottleneck: None
dask: 2021.02.0
distributed: 2021.02.0
matplotlib: 3.3.4
cartopy: 0.19.0.post1
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.0.1
conda: None
pytest: 6.2.2
IPython: 7.21.0
sphinx: 3.5.1
```
|
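A hedged sketch of the situation the request describes; the store paths mirror the constants in the test above, while `append_dim='time'` and the read/decode calls are assumptions, not code from the issue:

```python
import xarray as xr

SRC_DS_2_PATH = 'src_ds_2.zarr'
DST_DS_PATH = 'dst_ds.zarr'

# Data that is already CF-encoded, e.g. read back without decoding:
ds_encoded = xr.open_zarr(SRC_DS_2_PATH, decode_cf=False)

# Appending would CF-encode it a second time. The "redundant" workaround:
# decode first, so the encoding applied by to_zarr() restores the raw values.
ds_decoded = xr.decode_cf(ds_encoded)
ds_decoded.to_zarr(DST_DS_PATH, append_dim='time')
```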
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue
258500654 | MDU6SXNzdWUyNTg1MDA2NTQ= | 1576 | Variable of dtype int8 casted to float64 | forman 206773 | closed | 0 | 11 | 2017-09-18T14:28:32Z | 2020-11-09T07:06:31Z | 2020-11-09T07:06:30Z | NONE | I'm using a CF-compliant dataset from the ESA Land Cover CCI Project that contains a variable of dtype `int8`. After opening the dataset with CF decoding, that variable's dtype is `float64`.
If I switch off CF decoding, I get the original data type.
I'd actually expect it to be converted to another integer type, if at all. The dataset is available here: ftp://anon-ftp.ceda.ac.uk/neodc/esacci/land_cover/data/land_cover_maps/v1.6.1/ESACCI-LC-L4-LCCS-Map-300m-P5Y-2010-v1.6.1.nc. Note the file is ~3 GB. Btw, the attributes of the variable are:
|
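A minimal sketch of the two behaviours described above; the variable name `lccs_class` is an assumption about this Land Cover product, not quoted from the issue:

```python
import xarray as xr

path = 'ESACCI-LC-L4-LCCS-Map-300m-P5Y-2010-v1.6.1.nc'

# Default CF decoding: _FillValue masking with NaN forces a float dtype.
ds = xr.open_dataset(path)
print(ds['lccs_class'].dtype)  # float64, as reported

# With CF decoding switched off, the on-disk dtype survives.
ds_raw = xr.open_dataset(path, decode_cf=False)
print(ds_raw['lccs_class'].dtype)  # int8
```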
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
146287030 | MDU6SXNzdWUxNDYyODcwMzA= | 819 | N-D rolling | forman 206773 | closed | 0 | 5 | 2016-04-06T11:42:42Z | 2019-02-27T17:48:20Z | 2019-02-27T17:48:20Z | NONE | Dear xarray Team, We just discovered xarray and it seems to be a fantastic candidate to serve as a core library for our climate data toolbox, which we are about to implement. While investigating the API we recognized that the `rolling()` operation is limited to a single dimension. Actually, I also asked myself why the operation is limited to one dimension at all. Anyway, thanks for xarray! Regards Norman |
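A sketch of the N-D behaviour being asked for. Multi-dimensional rolling later landed in xarray, so recent versions accept several window dimensions; the array and window sizes here are illustrative:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(10, 10), dims=('x', 'y'))

# Rolling window over two dimensions at once -- the N-D case requested here.
smoothed = da.rolling(x=3, y=3, center=True).mean()
print(smoothed.shape)  # (10, 10)
```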
{ "url": "https://api.github.com/repos/pydata/xarray/issues/819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
165540933 | MDU6SXNzdWUxNjU1NDA5MzM= | 899 | Let open_mfdataset() respect cell boundary variables | forman 206773 | closed | 0 | 5 | 2016-07-14T11:36:49Z | 2019-02-25T19:28:23Z | 2019-02-25T19:28:23Z | NONE | I recently faced a problem with `open_mfdataset()`: cell boundary variables such as `lat_bnds` are plain data variables, so they were concatenated along `concat_dim` although they do not depend on it. We could solve the problem by using the `preprocess` argument and turning these data variables into coordinate variables with `ds.set_coords('lat_bnds', inplace=True)`. However, it would be nice to prevent concatenation of variables that don't have the `concat_dim`, e.g. by a keyword argument `selective_concat` or `respect_cell_bnds_vars` or so. |
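A sketch of the `preprocess` workaround mentioned above, written against the modern API (`set_coords` no longer takes `inplace`); the file pattern, the boundary-variable names, and `concat_dim='time'` are illustrative:

```python
import xarray as xr

def promote_bnds(ds):
    # Turn cell-boundary data variables into coordinates so that
    # open_mfdataset() does not concatenate them along concat_dim.
    bnds = [v for v in ('lat_bnds', 'lon_bnds', 'time_bnds') if v in ds]
    return ds.set_coords(bnds)

ds = xr.open_mfdataset('data/*.nc', preprocess=promote_bnds,
                       combine='nested', concat_dim='time')
```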
{ "url": "https://api.github.com/repos/pydata/xarray/issues/899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
146975644 | MDU6SXNzdWUxNDY5NzU2NDQ= | 822 | value scaling wrong in special cases | forman 206773 | closed | 0 | 13 | 2016-04-08T16:29:33Z | 2019-02-19T02:11:31Z | 2019-02-19T02:11:31Z | NONE | For the same netCDF file used in #821, the value scaling seems to be wrongly applied when computing float64 surface temperature values from a (signed) short variable.
Values are roughly -50 to 600 Kelvin instead of 270 to 310 Kelvin. It seems like the problem arises from misinterpreting the signed short raw values in the netCDF file. Here is a notebook that better explains the issue: https://github.com/CCI-Tools/sandbox/blob/4c7a98a4efd1ba55152d2799b499cb27027c2b45/notebooks/norman/xarray-sst-issues.ipynb |
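A worked sketch, with invented attribute values, of the CF scaling rule at stake (`decoded = raw * scale_factor + add_offset`) and of what happens when signed shorts are misread as unsigned:

```python
import numpy as np

scale_factor = 0.01   # illustrative values, not the file's actual attributes
add_offset = 273.15

raw = np.array([-2800, 2700], dtype=np.int16)  # signed short on disk
print(raw * scale_factor + add_offset)         # [245.15 300.15] -- plausible Kelvin SSTs

# Misreading the same bytes as unsigned shorts inflates negative raw values:
print(raw.view(np.uint16) * scale_factor + add_offset)  # [900.51 300.15]
```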
{ "url": "https://api.github.com/repos/pydata/xarray/issues/822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
321553778 | MDU6SXNzdWUzMjE1NTM3Nzg= | 2109 | Dataset.expand_dims() not lazy | forman 206773 | closed | 0 | 2 | 2018-05-09T12:39:44Z | 2018-05-09T15:45:31Z | 2018-05-09T15:45:31Z | NONE | Problem description: When I call `Dataset.expand_dims('time')` on one of my ~2GB datasets (compressed), the call won't come back for a very long time or will fail with an out-of-memory error: it seems to load all data into memory, and memory consumption goes beyond 12GB before eventually ending in an out-of-memory exception.
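A minimal sketch of the expectation (file name invented): opened lazily, `expand_dims` should only add a size-1 dimension to the metadata rather than pull all data into memory:

```python
import xarray as xr

ds = xr.open_dataset('cube.nc', chunks={})   # dask-backed, lazy
ds2 = ds.expand_dims('time')                 # expected: cheap metadata change
print(ds2.dims)                              # original dims plus time of size 1
```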
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
258744901 | MDU6SXNzdWUyNTg3NDQ5MDE= | 1579 | Support for unsigned data | forman 206773 | closed | 0 | 3 | 2017-09-19T08:57:15Z | 2017-09-21T15:46:30Z | 2017-09-20T13:15:36Z | NONE | The "old" NetCDF 3 format doesn't have explicit support for unsigned integer types, and therefore a recommendation/convention exists to set the variable attribute `_Unsigned = "true"`. Are there any plans to interpret this attribute in xarray? I'd really like to help out, but I fear I still don't know enough about dask to provide an efficient PR for that. My workaround is to manually convert the variables in question, which are of type `int8`, to their unsigned counterparts,
which results in an extra copy of the data. |
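A sketch of the manual workaround described; the file name is invented and the dtype handling only covers the int8 case from the report. (Recent xarray versions interpret `_Unsigned` during CF decoding, which is why the snippet opens the file undecoded.)

```python
import numpy as np
import xarray as xr

# Open without CF decoding so the raw dtypes and attributes are visible.
ds = xr.open_dataset('netcdf3_file.nc', decode_cf=False)

# Manual workaround: reinterpret signed bytes as unsigned wherever the
# NetCDF-3 _Unsigned convention applies.
unsigned = [name for name, var in ds.data_vars.items()
            if var.attrs.get('_Unsigned') == 'true' and var.dtype == np.int8]
for name in unsigned:
    ds[name] = ds[name].astype(np.uint8)
```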
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue
146908323 | MDU6SXNzdWUxNDY5MDgzMjM= | 821 | datetime units interpretation wrong in special cases | forman 206773 | closed | 0 | 3 | 2016-04-08T11:55:44Z | 2016-04-09T16:55:10Z | 2016-04-09T16:54:10Z | NONE | Hi there, I have a datetime issue with a certain type of (CF-compliant!) netCDF files originating from the ESA CCI Sea Surface Temperature project. With other climate data, everything seems fine. When I open such a netCDF file, the datetime value(s) of the time dimension seem to be wrong: the decoded datetime does not match the raw time value and the units declared for the time dimension in the file. Here is the link to the data: ftp://anon-ftp.ceda.ac.uk/neodc/esacci/sst/data/lt/Analysis/L4/v01.1/2010/01/01/20100101120000-ESACCI-L4_GHRSST-SSTdepth-OSTIA-GLOB_LT-v02.0-fv01.1.nc I'm not sure whether this is actually a CF-specific issue with which xarray doesn't want to deal. If so, could you please give some advice on getting around this? I'm sure other xarray lovers will face this issue sooner or later. Thanks! -- Norman |
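A sketch, not from the report, of how to compare the decoded datetimes with the raw time value and its units for the linked file:

```python
import xarray as xr

path = '20100101120000-ESACCI-L4_GHRSST-SSTdepth-OSTIA-GLOB_LT-v02.0-fv01.1.nc'

ds = xr.open_dataset(path)
print(ds.time.values)  # decoded datetimes -- reported wrong for this file

# Bypass datetime decoding to see the raw value and its declared units:
ds_raw = xr.open_dataset(path, decode_times=False)
print(ds_raw.time.values, ds_raw.time.attrs.get('units'))
```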
{ "url": "https://api.github.com/repos/pydata/xarray/issues/821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```