issues
2 rows where state = "open" and user = 25071375 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
927617256 | MDU6SXNzdWU5Mjc2MTcyNTY= | 5511 | Appending data to a dataset stored in Zarr format produces PermissionError or NaN values in the final result | josephnowak 25071375 | open | 0 | 5 | 2021-06-22T20:42:46Z | 2023-05-14T17:17:56Z | CONTRIBUTOR |
What happened: I was trying to append new data to an existing Zarr file holding a time-series dataset (a financial index) and noticed that it sometimes produces a PermissionError or that NaN values randomly appear in the result. After checking, the problem looks related to multiple threads/processes trying to write the same chunk (probably the last chunks, which have a different size).

What you expected to happen: I would like the data to be stored correctly, or at least for a NotImplementedError to be raised if this kind of append is not supported.

Minimal Complete Verifiable Example: You will probably have to run this code several times to reproduce the errors; you will either see the PermissionError or an increase in the number of NaNs (it should always be 0).

```python
import numpy as np
import pandas as pd
import xarray as xr

# Dummy data to recreate the problem; the 308 is because my original data had this number of dates
dates = pd.bdate_range('2017-09-05', '2018-11-27')[:308]
index = xr.DataArray(
    data=np.random.rand(len(dates)),
    dims=['date'],
    coords={'date': np.array(dates, np.datetime64)}
)

# Store a slice of the index in a Zarr file (new_index) using chunks with size 30
start_date = np.datetime64('2017-09-05')
end_date = np.datetime64('2018-03-13')
index.loc[start_date: end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', mode='w'
)

# Append the rest of the data to the new_index Zarr file
start_date = np.datetime64('2018-03-14')
end_date = np.datetime64('2018-11-27')

# Sometimes this section of code produces a PermissionError; probably two or more Dask processes/threads are trying
# to write to the same chunks at the same time, and I suppose the last chunks, which end with a different size
# and need to be 'rewritten', are the ones that cause the problem.
index.loc[start_date: end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', append_dim='date'
)

# The final result can contain many NaN values even though there are none in the original dataset;
# this behaviour is random, so I suppose it is related to the aforementioned error.
print(xr.open_zarr('new_index')['data'].isnull().sum().compute())
print(index.isnull().sum().compute())
```

Anything else we need to know?:

Environment: Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('es_ES', 'cp1252')
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.2
scipy: 1.6.2
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.06.0
distributed: 2021.06.1
matplotlib: 3.3.4
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 52.0.0.post20210125
pip: 21.1.2
conda: 4.10.1
pytest: 6.2.4
IPython: 7.22.0
sphinx: 4.0.1
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
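The race described in issue 5511 happens because the appended Dask chunks do not line up with the Zarr chunks at the append boundary, so more than one task can rewrite the same partial chunk. Below is a minimal sketch of one way to sidestep it, assuming the appended slice fits in memory and reusing the `index` array from the reproducer above; it is an illustration, not the fix adopted upstream.

```python
import numpy as np

# Assumption: 'new_index' already holds the first slice, chunked by 30 along 'date',
# and `index` is the DataArray built in the reproducer above.
start_date = np.datetime64('2018-03-14')
end_date = np.datetime64('2018-11-27')

# Loading the slice into memory means a single writer touches the partial
# boundary chunk, so no two Dask tasks race to rewrite it.
appended = index.loc[start_date:end_date].to_dataset(name='data').load()
appended.to_zarr('new_index', append_dim='date')
```

Alternatively, keeping the length of the initial store a multiple of the chunk size (so its last chunk is full) makes the appended Dask chunks line up with the Zarr chunks and avoids the overlapping writes for this particular example.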
1462295936 | I_kwDOAMm_X85XKN2A | 7317 | Error using to_zarr method with fsspec simplecache | josephnowak 25071375 | open | 0 | 0 | 2022-11-23T19:26:38Z | 2022-12-13T16:38:03Z | CONTRIBUTOR |
What happened? I'm trying to use the fsspec simplecache implementation to read and write a set of Zarr files using Xarray (and some others with Dask), but during the writing process I always get a KeyError, even with mode="w" (the complete error is in the relevant log output). I raised a similar issue on Dask (https://github.com/dask/dask/issues/9680), but when I tried the same solution of adding the "overwrite" parameter (which I think should not be necessary because Xarray already offers the mode="w" option) through the "storage_options" parameter of the "to_zarr" method, it raised the following error: "ValueError: store must be a string to use storage_options. Got <class 'fsspec.mapping.FSMap'>", so apparently I cannot apply the same solution.

What did you expect to happen? The "to_zarr" method should be able to store the array even when using the simplecache filesystem (at least Dask can).

Minimal Complete Verifiable Example

```Python
import fsspec
import xarray as xr

mapper = fsspec.get_mapper('error_cache_write')
cache_mapper = fsspec.filesystem(
    "simplecache",
    fs=mapper.fs,
    cache_storage='cache/files',
    same_names=True
).get_mapper('error_cache_write')

arr = xr.DataArray(
    [1, 2, 3],
    coords={
        "test_coord": [5, 6, 7]
    }
).to_dataset(name="data")

# This erases the cache and the array itself.
cache_mapper.fs.clear_cache()
cache_mapper.clear()

arr.to_zarr(cache_mapper, mode="w")

# Using the storage_options
arr.to_zarr(cache_mapper, mode="w", storage_options={"overwrite": True})
```

MVCE confirmation
Relevant log output

```Python
KeyError                                  Traceback (most recent call last)
Input In [9], in <cell line: 16>()
      8 arr = xr.DataArray(
      9     [1, 2, 3],
     10     coords={
     11         "test_coord": [5,6,7]
     12     }
     13 ).to_dataset(name="data")
     14 # cache_mapper.fs.clear_cache()
     15 # cache_mapper.clear()
---> 16 arr.to_zarr(cache_mapper, mode="w")

File /opt/conda/lib/python3.9/site-packages/xarray/core/dataset.py:2081, in Dataset.to_zarr(self, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options)
   1971 """Write dataset contents to a zarr group.
   1972
   1973 Zarr chunks are determined in the following way:
   (...)
   2077 The I/O user guide, with more details and examples.
   2078 """
   2079 from ..backends.api import to_zarr
-> 2081 return to_zarr(  # type: ignore
   2082     self,
   2083     store=store,
   2084     chunk_store=chunk_store,
   2085     storage_options=storage_options,
   2086     mode=mode,
   2087     synchronizer=synchronizer,
   2088     group=group,
   2089     encoding=encoding,
   2090     compute=compute,
   2091     consolidated=consolidated,
   2092     append_dim=append_dim,
   2093     region=region,
   2094     safe_chunks=safe_chunks,
   2095 )

File /opt/conda/lib/python3.9/site-packages/xarray/backends/api.py:1657, in to_zarr(dataset, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options)
   1655 writer = ArrayWriter()
   1656 # TODO: figure out how to properly handle unlimited_dims
-> 1657 dump_to_store(dataset, zstore, writer, encoding=encoding)
   1658 writes = writer.sync(compute=compute)
   1660 if compute:

File /opt/conda/lib/python3.9/site-packages/xarray/backends/api.py:1277, in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
   1274 if encoder:
   1275     variables, attrs = encoder(variables, attrs)
-> 1277 store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)

File /opt/conda/lib/python3.9/site-packages/xarray/backends/zarr.py:550, in ZarrStore.store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    548 for vn in existing_variable_names:
    549     vars_with_encoding[vn] = variables[vn].copy(deep=False)
--> 550     vars_with_encoding[vn].encoding = existing_vars[vn].encoding
    551 vars_with_encoding, _ = self.encode(vars_with_encoding, {})
    552 variables_encoded.update(vars_with_encoding)

KeyError: 'data'
```

Anything else we need to know? No response

Environment
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:46)
[GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-1065-aws
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: None
xarray: 2022.11.0
pandas: 1.4.0
numpy: 1.22.4
scipy: 1.9.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.6.0
Nio: None
zarr: 2.13.3
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2022.11.1
distributed: None
matplotlib: 3.5.1
cartopy: None
seaborn: 0.11.2
numbagg: None
fsspec: 2022.8.0
cupy: None
pint: None
sparse: None
flox: 0.5.9
numpy_groupies: 0.9.19
setuptools: 59.1.1
pip: 22.0.3
conda: 4.11.0
pytest: 7.1.3
IPython: 8.4.0
sphinx: None
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
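For issue 7317, here is a hedged workaround sketch rather than a confirmed fix: write through the plain (uncached) mapper, so the write never goes through the cached filesystem whose stale listing appears to trigger the KeyError, and keep the simplecache layer for reading only. The names below reuse the objects from the issue's reproducer.

```python
import fsspec
import xarray as xr

# Same objects as in the issue's reproducer.
mapper = fsspec.get_mapper('error_cache_write')
cache_mapper = fsspec.filesystem(
    "simplecache", fs=mapper.fs, cache_storage='cache/files', same_names=True
).get_mapper('error_cache_write')

arr = xr.DataArray([1, 2, 3], coords={"test_coord": [5, 6, 7]}).to_dataset(name="data")

# Write through the uncached mapper; this is the ordinary, supported path.
arr.to_zarr(mapper, mode="w")

# Read back through the simplecache layer, which only needs read access.
ds = xr.open_zarr(cache_mapper)
print(ds)
```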
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);