issue #8894: Rolling reduction with a custom function generates an excessive use of memory that kills the workers

id: 2215890029
node_id: I_kwDOAMm_X86EE8xt
user: 25071375
state: closed (completed)
author_association: CONTRIBUTOR
comments: 8
created_at: 2024-03-29T19:15:28Z
updated_at: 2024-04-01T20:57:59Z
closed_at: 2024-03-30T01:49:17Z
repo: 13221727
type: issue

What happened?

Hi, I have been trying to use a custom function with the rolling reduction method. The original function filters out NaN values and then applies some simple aggregations, but it kills all of my workers even though the data is very small (I have 7 workers, each with 3 GB of RAM). Any NumPy function I have used that handles NaNs triggers the same problem.
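
Part of what seems to set the NaN-aware variants apart (my reading of NumPy's nan-reduction behaviour, not a diagnosis confirmed in this report) is that they copy their input before reducing, and when that input is a strided rolling-window view the copy is many times larger than the underlying data. A minimal sketch with made-up sizes:

```Python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

base = np.zeros((300, 3000))                      # ~7 MB of actual float64 data
windows = sliding_window_view(base, 252, axis=0)  # strided view, no copy (~300 MB logical size)

s1 = np.sum(windows, axis=-1)     # reduces the strided view without materializing it
s2 = np.nansum(windows, axis=-1)  # the nan-functions first copy the input (to replace NaNs),
                                  # which materializes the full windowed array plus a boolean mask
```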

What did you expect to happen?

I would expect much lower memory use, given the size of the rolling window, the simplicity of the function, and the small amount of data used in the example.
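
For scale, a rough back-of-envelope estimate for the sizes in the example below, assuming the rolling reduce goes through a windowed (construct-style) intermediate with a length-252 window dimension — an assumption on my part, not something measured for this report:

```Python
# Hypothetical size estimates (bytes), not measurements from this report
base_total     = 300 * 30_000 * 8        # ~72 MB of float64 data overall
base_per_chunk = 30 * 6_000 * 8          # ~1.4 MB per (30, 6000) chunk
win_total      = 300 * 30_000 * 252 * 8  # ~18 GB if every length-252 window is materialized
win_per_chunk  = 30 * 6_000 * 252 * 8    # ~363 MB per chunk once the window dimension is added
print(base_total, base_per_chunk, win_total, win_per_chunk)
```

A handful of per-chunk intermediates of that size (plus a copy and a NaN mask per task, if the reduction copies its input) could plausibly exceed a 3 GB worker.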

Minimal Complete Verifiable Example

```Python
import numpy as np
import dask.array as da
import xarray as xr
import dask


def f(x, axis):
    # If I replace np.nansum with np.sum, everything works perfectly and the
    # amount of memory used is very small
    return np.nansum(x, axis=axis)


arr = xr.DataArray(
    dask.array.zeros(
        shape=(300, 30000),
        dtype=float,
        chunks=(30, 6000),
    ),
    dims=["a", "b"],
    coords={"a": list(range(300)), "b": list(range(30000))},
)

arr.rolling(a=252).reduce(f).chunk({"a": 252}).to_zarr("/data/test/test_write", mode="w")
```
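
For comparison, a NaN-skipping rolling sum can also be written with xarray's built-in aggregations instead of a custom callable. This is only a sketch of an alternative to test, not advice from this report, and whether it avoids the large intermediate depends on which backend (bottleneck/numbagg or the windowed reduce) xarray selects:

```Python
# Hypothetical alternatives, reusing `arr` from the example above

# Built-in rolling sum; min_periods=1 lets windows containing NaNs return the
# sum of their non-NaN entries instead of NaN
summed = arr.rolling(a=252, min_periods=1).sum()

# Or explicitly via the windowed representation, with skipna handling the NaNs
summed = arr.rolling(a=252).construct("window").sum("window", skipna=True)
```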

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [ ] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

```Python
KilledWorker: Attempted to run task ('nansum-overlap-sum-aggregate-sum-aggregate-e732de6ad917d5f4084b05192ca671c4', 0, 0) on 4 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://172.18.0.2:39937. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 4.14.275-207.503.amzn2.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: None

xarray: 2024.1.0
pandas: 2.2.1
numpy: 1.26.3
scipy: 1.11.4
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.10.0
Nio: None
zarr: 2.16.1
cftime: None
nc_time_axis: None
iris: None
bottleneck: 1.3.7
dask: 2024.1.0
distributed: 2024.1.0
matplotlib: 3.8.2
cartopy: None
seaborn: 0.13.1
numbagg: 0.7.0
fsspec: 2023.12.2
cupy: None
pint: None
sparse: 0.15.1
flox: 0.8.9
numpy_groupies: 0.10.2
setuptools: 69.0.3
pip: 23.3.2
conda: 23.11.0
pytest: 7.4.4
mypy: None
IPython: 8.20.0
sphinx: None
