issues: 927617256
field | value |
---|---|
id | 927617256 |
node_id | MDU6SXNzdWU5Mjc2MTcyNTY= |
number | 5511 |
title | Appending data to a dataset stored in Zarr format produces PermissionError or NaN values in the final result |
user | 25071375 |
state | open |
locked | 0 |
comments | 5 |
created_at | 2021-06-22T20:42:46Z |
updated_at | 2023-05-14T17:17:56Z |
author_association | CONTRIBUTOR |
reactions | total_count: 0 (all reaction types 0; https://api.github.com/repos/pydata/xarray/issues/5511/reactions) |
repo | 13221727 |
type | issue |
body:

**What happened**:

I was trying to append new data to an existing Zarr store holding a time-series dataset (a financial index) and noticed that the operation sometimes raises a `PermissionError` or leaves random `NaN` values in the final result. After checking, the problem appears to be related to multiple Dask threads/processes trying to write to the same chunk (probably the last chunks, which have a different size).

**What you expected to happen**:

The data should be stored correctly; alternatively, a `NotImplementedError` should be raised if this kind of append is not supported.

**Minimal Complete Verifiable Example**:

You may have to run this code several times to reproduce the errors. You will either see a `PermissionError` or a growing count of `NaN`s (it should always be 0). A possible chunk-alignment workaround is sketched at the end of this report.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Dummy data to recreate the problem; the 308 is because my original data
# had this number of dates.
dates = pd.bdate_range('2017-09-05', '2018-11-27')[:308]
index = xr.DataArray(
    data=np.random.rand(len(dates)),
    dims=['date'],
    coords={'date': np.array(dates, np.datetime64)}
)

# Store a slice of the index in a Zarr store (new_index) using chunks of size 30.
start_date = np.datetime64('2017-09-05')
end_date = np.datetime64('2018-03-13')
index.loc[start_date:end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', mode='w'
)

# Append the rest of the data to the new_index Zarr store.
start_date = np.datetime64('2018-03-14')
end_date = np.datetime64('2018-11-27')

# Sometimes this section of code produces a PermissionError: probably two or more
# Dask processes/threads try to write to the same chunks at the same time, and I
# suppose the last chunks, which end up with a different size and need to be
# rewritten, are the ones that cause the problem.
index.loc[start_date:end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', append_dim='date'
)

# The final result can contain many NaNs even though there are none in the original
# dataset; this behaviour is random, so I suppose it is related to the error above.
print(xr.open_zarr('new_index')['data'].isnull().sum().compute())
print(index.isnull().sum().compute())
```

**Anything else we need to know?**:

**Environment**:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('es_ES', 'cp1252')
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.2
scipy: 1.6.2
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.06.0
distributed: 2021.06.1
matplotlib: 3.3.4
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 52.0.0.post20210125
pip: 21.1.2
conda: 4.10.1
pytest: 6.2.4
IPython: 7.22.0
sphinx: 4.0.1
```
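The race described above should go away if every zarr chunk on disk is written by exactly one Dask task, i.e. if the appended block's chunk boundaries line up with the store's chunk grid so the trailing partial chunk has a single writer. Below is a minimal sketch of that idea, not from the original report: `aligned_chunks` is a hypothetical helper name, and it assumes `.chunk()` accepts a per-dimension tuple of chunk sizes (as dask's rechunk does).

```python
import xarray as xr

def aligned_chunks(new_len, existing_len, chunk_size):
    """Chunk sizes for an appended block, aligned to the existing store's chunk grid.

    The first chunk only fills whatever room is left in the store's trailing
    partial chunk, so no zarr chunk is written by more than one Dask task.
    (Hypothetical helper, illustrating one possible workaround.)
    """
    remainder = (-existing_len) % chunk_size  # free slots in the store's last chunk
    sizes = [min(remainder, new_len)] if remainder else []
    rest = new_len - sum(sizes)
    sizes += [chunk_size] * (rest // chunk_size)
    if rest % chunk_size:
        sizes.append(rest % chunk_size)
    return tuple(sizes)

# Usage with the example above (assumes `index`, `start_date`, `end_date` are
# defined as in the reproducer, and that the first slice was already written):
existing_len = xr.open_zarr('new_index').sizes['date']
ds = index.loc[start_date:end_date].to_dataset(name='data')
ds.chunk({'date': aligned_chunks(ds.sizes['date'], existing_len, 30)}).to_zarr(
    'new_index', append_dim='date'
)
```

As an alternative, `to_zarr` accepts a `synchronizer` argument, so passing `zarr.ThreadSynchronizer()` (or `zarr.ProcessSynchronizer(...)` for multi-process writers) should serialize competing writes to the same chunk, at some cost in throughput.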
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
13221727 | issue |