
issues


6 rows where user = 25071375 sorted by updated_at descending


id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2215890029 I_kwDOAMm_X86EE8xt 8894 Rolling reduction with a custom function generates an excesive use of memory that kills the workers josephnowak 25071375 closed 0     8 2024-03-29T19:15:28Z 2024-04-01T20:57:59Z 2024-03-30T01:49:17Z CONTRIBUTOR      

What happened?

Hi, I have been trying to use a custom function with the rolling reduction method. The original function filters out the NaN values (any NumPy function I have used that handles NaNs produces the same problem) and then applies some simple aggregate functions, but it is killing all my workers even though the data is very small (I have 7 workers, each with 3 GB of RAM).

What did you expect to happen?

I would expect lower memory usage, taking into account the size of the rolling window, the simplicity of the function, and the amount of data used in the example.

Minimal Complete Verifiable Example

```Python
import numpy as np
import dask.array as da
import xarray as xr
import dask


def f(x, axis):
    # If I replace np.nansum by np.sum everything works perfectly
    # and the amount of memory used is very small
    return np.nansum(x, axis=axis)


arr = xr.DataArray(
    dask.array.zeros(
        shape=(300, 30000),
        dtype=float,
        chunks=(30, 6000)
    ),
    dims=["a", "b"],
    coords={"a": list(range(300)), "b": list(range(30000))}
)

arr.rolling(a=252).reduce(f).chunk({"a": 252}).to_zarr(
    "/data/test/test_write", mode="w"
)
```
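[Editor's note] The reporter observes that `np.sum` stays cheap while `np.nansum` blows up. One NumPy-only sketch of a possible workaround (my suggestion, not from the issue) is to emulate `nansum` by zero-filling NaNs first and then using a plain `sum`:

```python
import numpy as np

def nansum_via_sum(x, axis):
    # Emulate np.nansum with a plain sum by replacing NaNs with 0 first;
    # per the issue report, plain np.sum avoids the memory blowup.
    return np.sum(np.nan_to_num(x, nan=0.0), axis=axis)

x = np.array([[1.0, np.nan, 3.0], [np.nan, 5.0, 6.0]])
print(nansum_via_sum(x, axis=1))  # matches np.nansum(x, axis=1): [4., 11.]
```

Whether this sidesteps the memory issue inside `rolling(...).reduce(...)` under dask is an assumption; it only demonstrates the numerical equivalence.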

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [ ] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

```Python
KilledWorker: Attempted to run task ('nansum-overlap-sum-aggregate-sum-aggregate-e732de6ad917d5f4084b05192ca671c4', 0, 0) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://172.18.0.2:39937. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS ------------------ commit: None python: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0] python-bits: 64 OS: Linux OS-release: 4.14.275-207.503.amzn2.x86_64 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: en_US.UTF-8 LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.3 libnetcdf: None xarray: 2024.1.0 pandas: 2.2.1 numpy: 1.26.3 scipy: 1.11.4 netCDF4: None pydap: None h5netcdf: None h5py: 3.10.0 Nio: None zarr: 2.16.1 cftime: None nc_time_axis: None iris: None bottleneck: 1.3.7 dask: 2024.1.0 distributed: 2024.1.0 matplotlib: 3.8.2 cartopy: None seaborn: 0.13.1 numbagg: 0.7.0 fsspec: 2023.12.2 cupy: None pint: None sparse: 0.15.1 flox: 0.8.9 numpy_groupies: 0.10.2 setuptools: 69.0.3 pip: 23.3.2 conda: 23.11.0 pytest: 7.4.4 mypy: None IPython: 8.20.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8894/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
865206283 MDU6SXNzdWU4NjUyMDYyODM= 5210 Probable error using zarr process synchronizer josephnowak 25071375 closed 0     1 2021-04-22T17:05:10Z 2023-10-14T20:36:18Z 2023-10-14T20:36:18Z CONTRIBUTOR      

Hi, I was trying to use xarray's open_zarr with the Zarr ProcessSynchronizer class and it produces a set of errors. I don't know whether those errors occur because I don't understand the logic of the ProcessSynchronizer, or whether this is a simple bug. I have a small piece of code that reproduces the problems; basically, if I pass a different path to the Zarr ProcessSynchronizer all the errors disappear, but it creates a new folder.

```python
import xarray
import zarr
import numpy as np

arr = xarray.DataArray(
    data=np.array([
        [1, 2, 7, 4, 5],
        [np.nan, 3, 5, 5, 6],
        [3, 3, np.nan, 5, 6],
        [np.nan, 3, 10, 5, 6],
        [np.nan, 7, 8, 5, 6],
    ], dtype=float),
    dims=['index', 'columns'],
    coords={'index': [0, 1, 2, 3, 4], 'columns': [0, 1, 2, 3, 4]},
)

# If the synchronizer is created using another path the code will work
# without any error, but it creates a new folder.
# Is that the correct way to use the process synchronizer?
synchronizer = zarr.ProcessSynchronizer('dummy_array.sync')

# Using the original path produces a set of weird problems.
synchronizer = zarr.ProcessSynchronizer('dummy_array')

# Executing this line I obtain: PermissionError: [WinError 5].
arr.to_dataset(name='data').to_zarr(
    'dummy_array', mode='w', synchronizer=synchronizer, compute=True
)

arr.to_dataset(name='data').to_zarr('dummy_array', mode='w', compute=True)

# If this section of the code is uncommented, a different error is thrown
# when xarray.open_zarr is executed:
# PermissionError: [Errno 13] Permission denied
a = zarr.open_array('dummy_array/data', synchronizer=synchronizer, mode='r')

xarray.open_zarr('dummy_array', synchronizer=synchronizer)
```

Output of `xr.show_versions()`

INSTALLED VERSIONS ------------------ commit: None python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 165 Stepping 2, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: es_ES.cp1252 libhdf5: 1.10.4 libnetcdf: None xarray: 0.17.0 pandas: 1.1.3 numpy: 1.19.2 scipy: 1.5.2 netCDF4: None pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: 2.7.1 cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2.30.0 distributed: 2.30.1 matplotlib: 3.3.2 cartopy: None seaborn: 0.11.0 numbagg: None pint: None setuptools: 50.3.1.post20201107 pip: 21.0.1 conda: 4.10.0 pytest: 6.2.3 IPython: 7.19.0 sphinx: 3.2.1
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5210/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
927617256 MDU6SXNzdWU5Mjc2MTcyNTY= 5511 Appending data to a dataset stored in Zarr format produce PermissonError or NaN values in the final result josephnowak 25071375 open 0     5 2021-06-22T20:42:46Z 2023-05-14T17:17:56Z   CONTRIBUTOR      

What happened: I was trying to append new data to an existing Zarr file containing a time-series dataset (a financial index) and I started to notice that it sometimes produces a PermissionError, or some NaNs randomly appear. I checked, and the problem looks like something related to multiple threads/processes trying to write to the same chunk (probably the last ones, which have a different size).

What you expected to happen: I would like to be able to store the data correctly, or it should be sufficient to raise a NotImplementedError in case this kind of append is incorrect.

Minimal Complete Verifiable Example: You probably have to run this code many times to reproduce the errors; basically, you will see the PermissionError or an increase in the number of NaNs (it should always be 0).

```python
import numpy as np
import pandas as pd
import xarray as xr

# Dummy data to recreate the problem; the 308 is because my original data
# had this number of dates.
dates = pd.bdate_range('2017-09-05', '2018-11-27')[:308]
index = xr.DataArray(
    data=np.random.rand(len(dates)),
    dims=['date'],
    coords={'date': np.array(dates, np.datetime64)}
)

# Store a slice of the index in a Zarr file (new_index) using chunks with size 30.
start_date = np.datetime64('2017-09-05')
end_date = np.datetime64('2018-03-13')
index.loc[start_date: end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', mode='w'
)

# Append the rest of the data to the new_index Zarr file.
start_date = np.datetime64('2018-03-14')
end_date = np.datetime64('2018-11-27')

# Sometimes this section of code can produce PermissionError; probably two or
# more Dask processes/threads are trying to write at the same time to the same
# chunks, and I suppose the last chunks, which end with a different size and
# need to be 'rewritten', are the ones that cause the problem.
index.loc[start_date: end_date].to_dataset(
    name='data'
).chunk(
    {'date': 30}
).to_zarr(
    'new_index', append_dim='date'
)

# The final result can contain many NaNs even though there are none in the
# original dataset; this behaviour is random, so I suppose it is related to
# the aforementioned error.
print(xr.open_zarr('new_index')['data'].isnull().sum().compute())
print(index.isnull().sum().compute())
```
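[Editor's note] One way to reduce the risk described above (my sketch, not a confirmed fix) is to split an append so that it starts exactly on a chunk boundary, so only one writer ever touches the existing partial chunk. The helper and the numbers (`130` existing elements, `178` new ones) are hypothetical, chosen to mirror the 308-date example:

```python
def append_split(n_existing, n_new, chunk_size):
    """Return (pad, aligned): `pad` elements complete the existing
    partial chunk (and should be written separately), while `aligned`
    elements start on a fresh chunk boundary."""
    remainder = n_existing % chunk_size
    pad = 0 if remainder == 0 else min(n_new, chunk_size - remainder)
    return pad, n_new - pad

# 130 dates already stored with chunks of 30: 130 % 30 = 10, so the first
# 20 new elements fill the partial chunk and the remaining 158 align cleanly.
print(append_split(130, 178, 30))  # (20, 158)
```

Writing the `pad` slice in a single serialized step before appending the aligned remainder is the design idea; whether it fully avoids the race seen in the issue is an assumption.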

Anything else we need to know?:

Environment:

Output of `xr.show_versions()`

INSTALLED VERSIONS ------------------ commit: None python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 165 Stepping 2, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: ('es_ES', 'cp1252') libhdf5: 1.10.4 libnetcdf: None xarray: 0.18.2 pandas: 1.2.4 numpy: 1.20.2 scipy: 1.6.2 netCDF4: None pydap: None h5netcdf: None h5py: 2.10.0 Nio: None zarr: 2.8.3 cftime: 1.5.0 nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2021.06.0 distributed: 2021.06.1 matplotlib: 3.3.4 cartopy: None seaborn: 0.11.1 numbagg: None pint: None setuptools: 52.0.0.post20210125 pip: 21.1.2 conda: 4.10.1 pytest: 6.2.4 IPython: 7.22.0 sphinx: 4.0.1
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5511/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1462295936 I_kwDOAMm_X85XKN2A 7317 Error using to_zarr method with fsspec simplecache josephnowak 25071375 open 0     0 2022-11-23T19:26:38Z 2022-12-13T16:38:03Z   CONTRIBUTOR      

What happened?

I'm trying to use the fsspec simplecache implementation to read and write a set of Zarr files using Xarray (and some others with Dask), but during the writing process I always get a KeyError, even when using mode="w" (the complete error is in the relevant log output).

I raised a similar issue on Dask (https://github.com/dask/dask/issues/9680), but when I tried the same solution of adding the "overwrite" parameter (which I think should not be necessary, because Xarray already offers the mode="w" option) through the "storage_options" parameter of the "to_zarr" method, it raised the following error: "ValueError: store must be a string to use storage_options. Got <class 'fsspec.mapping.FSMap'>". So apparently I cannot apply the same solution.

What did you expect to happen?

The "to_zarr" method should be able to store the array even when using the simplecache filesystem (at least Dask can).

Minimal Complete Verifiable Example

```Python
import fsspec
import xarray as xr

mapper = fsspec.get_mapper('error_cache_write')
cache_mapper = fsspec.filesystem(
    "simplecache",
    fs=mapper.fs,
    cache_storage='cache/files',
    same_names=True
).get_mapper('error_cache_write')

arr = xr.DataArray(
    [1, 2, 3],
    coords={
        "test_coord": [5, 6, 7]
    }
).to_dataset(name="data")

# This erases the cache and the array itself.
cache_mapper.fs.clear_cache()
cache_mapper.clear()

arr.to_zarr(cache_mapper, mode="w")

# Using the storage_options
arr.to_zarr(cache_mapper, mode="w", storage_options={"overwrite": True})
```
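[Editor's note] The ValueError quoted in the issue comes from the fact that `storage_options` can only be forwarded to fsspec when the store is given as a URL string; a pre-built mapper is used as-is. A simplified sketch of such a guard (my illustration, not xarray's actual implementation; `resolve_store` is a hypothetical name):

```python
def resolve_store(store, storage_options=None):
    # storage_options is only meaningful when the caller passes a URL string
    # that still has to be opened via fsspec; a ready-made mapper cannot
    # absorb extra options, so the combination is rejected.
    if storage_options and not isinstance(store, str):
        raise ValueError(
            f"store must be a string to use storage_options. Got {type(store)}"
        )
    return store

print(resolve_store("s3://bucket/data.zarr", {"anon": True}))  # accepted
try:
    resolve_store({}, {"overwrite": True})  # a mapper-like object: rejected
except ValueError as e:
    print(e)
```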

MVCE confirmation

  • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [x] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [x] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
KeyError                                  Traceback (most recent call last)
Input In [9], in <cell line: 16>()
      8 arr = xr.DataArray(
      9     [1, 2, 3],
     10     coords={
     11         "test_coord": [5,6,7]
     12     }
     13 ).to_dataset(name="data")
     14 # cache_mapper.fs.clear_cache()
     15 # cache_mapper.clear()
---> 16 arr.to_zarr(cache_mapper, mode="w")

File /opt/conda/lib/python3.9/site-packages/xarray/core/dataset.py:2081, in Dataset.to_zarr(self, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options)
   1971 """Write dataset contents to a zarr group.
   1972
   1973 Zarr chunks are determined in the following way:
   (...)
   2077 The I/O user guide, with more details and examples.
   2078 """
   2079 from ..backends.api import to_zarr
-> 2081 return to_zarr(  # type: ignore
   2082     self,
   2083     store=store,
   2084     chunk_store=chunk_store,
   2085     storage_options=storage_options,
   2086     mode=mode,
   2087     synchronizer=synchronizer,
   2088     group=group,
   2089     encoding=encoding,
   2090     compute=compute,
   2091     consolidated=consolidated,
   2092     append_dim=append_dim,
   2093     region=region,
   2094     safe_chunks=safe_chunks,
   2095 )

File /opt/conda/lib/python3.9/site-packages/xarray/backends/api.py:1657, in to_zarr(dataset, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options)
   1655 writer = ArrayWriter()
   1656 # TODO: figure out how to properly handle unlimited_dims
-> 1657 dump_to_store(dataset, zstore, writer, encoding=encoding)
   1658 writes = writer.sync(compute=compute)
   1660 if compute:

File /opt/conda/lib/python3.9/site-packages/xarray/backends/api.py:1277, in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
   1274 if encoder:
   1275     variables, attrs = encoder(variables, attrs)
-> 1277 store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)

File /opt/conda/lib/python3.9/site-packages/xarray/backends/zarr.py:550, in ZarrStore.store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    548 for vn in existing_variable_names:
    549     vars_with_encoding[vn] = variables[vn].copy(deep=False)
--> 550 vars_with_encoding[vn].encoding = existing_vars[vn].encoding
    551 vars_with_encoding, _ = self.encode(vars_with_encoding, {})
    552 variables_encoded.update(vars_with_encoding)

KeyError: 'data'
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS ------------------ commit: None python: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:46) [GCC 9.4.0] python-bits: 64 OS: Linux OS-release: 5.4.0-1065-aws machine: x86_64 processor: x86_64 byteorder: little LC_ALL: en_US.UTF-8 LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.12.1 libnetcdf: None xarray: 2022.11.0 pandas: 1.4.0 numpy: 1.22.4 scipy: 1.9.1 netCDF4: None pydap: None h5netcdf: None h5py: 3.6.0 Nio: None zarr: 2.13.3 cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2022.11.1 distributed: None matplotlib: 3.5.1 cartopy: None seaborn: 0.11.2 numbagg: None fsspec: 2022.8.0 cupy: None pint: None sparse: None flox: 0.5.9 numpy_groupies: 0.9.19 setuptools: 59.1.1 pip: 22.0.3 conda: 4.11.0 pytest: 7.1.3 IPython: 8.4.0 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7317/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1089504942 PR_kwDOAMm_X84wUTRX 6118 New algorithm for forward filling josephnowak 25071375 closed 0     3 2021-12-27T22:36:37Z 2022-01-03T23:06:06Z 2022-01-03T17:55:59Z CONTRIBUTOR   0 pydata/xarray/pulls/6118
  • [x] Closes #6112
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6118/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1088893989 I_kwDOAMm_X85A5zQl 6112 Forward Fill not working when there are all-NaN chunks josephnowak 25071375 closed 0     6 2021-12-27T01:27:05Z 2022-01-03T17:55:59Z 2022-01-03T17:55:59Z CONTRIBUTOR      

What happened: I'm working with a report dataset that only has data in some specific periods of time. The problem is that when I use the forward fill method it returns many NaNs, even in the last cells (it's a forward fill without limit).

What you expected to happen: The array should not have NaNs in the last cells if it has data in any other cell, or there should be a warning somewhere.

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

xr.DataArray(
    [1, 2, np.nan, np.nan, np.nan, np.nan],
    dims=['a']
).chunk(
    2
).ffill(
    'a'
).compute()
```

output: array([ 1., 2., 2., 2., nan, nan])

Anything else we need to know?: I checked the internal xarray code for forward filling when it uses Dask, and I think the problem is that the algorithm makes an initial forward fill on all the blocks and then uses map_overlap to forward fill between chunks. When there is an all-NaN chunk this does not work, because it takes the last value of that empty chunk, which is NaN (hope this helps).

Environment:

Output of `xr.show_versions()`

INSTALLED VERSIONS ------------------ commit: None python: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:25:18) [GCC 9.4.0] python-bits: 64 OS: Linux OS-release: 5.4.0-1025-aws machine: x86_64 processor: x86_64 byteorder: little LC_ALL: en_US.UTF-8 LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: None libnetcdf: None xarray: 0.20.2 pandas: 1.3.5 numpy: 1.21.4 scipy: None netCDF4: None pydap: None h5netcdf: None h5py: None Nio: None zarr: 2.10.3 cftime: None nc_time_axis: None PseudoNetCDF: None rasterio: None cfgrib: None iris: None bottleneck: 1.3.2 dask: 2021.12.0 distributed: 2021.12.0 matplotlib: None cartopy: None seaborn: None numbagg: None fsspec: 2021.11.1 cupy: None pint: None sparse: None setuptools: 59.4.0 pip: 21.3.1 conda: None pytest: 6.2.5 IPython: 7.30.1 sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6112/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette