issue_comments: 884197067


html_url: https://github.com/pydata/xarray/issues/5604#issuecomment-884197067
issue_url: https://api.github.com/repos/pydata/xarray/issues/5604
id: 884197067
node_id: IC_kwDOAMm_X840s8bL
user: 25606497
created_at: 2021-07-21T13:38:22Z
updated_at: 2021-07-21T14:33:53Z
author_association: NONE

Hi there, I have a very similar problem; rather than open another issue, I'll share my example here:

Minimal Complete Verifiable Example:

This little computation uses >500 MB of memory even though the file is only 154 MB on disk:

```python
import xarray as xr

# climdata is a path prefix to the data directory, defined elsewhere
with xr.open_dataset(climdata + 'tavg_subset.nc',
                     chunks={"latitude": 300, "longitude": 300}) as ds:
    print(ds)
    annualMean = ds.tavg.resample(time="1Y").mean('time', keep_attrs=True)
    annualMean.to_netcdf("outputMean.nc", format="NETCDF4_CLASSIC", engine="netcdf4")
```

Output of `print(ds)`:

```
<xarray.Dataset>
Dimensions:    (latitude: 168, longitude: 664, time: 731)
Coordinates:
  * time       (time) datetime64[ns] 1971-01-01 1971-01-02 ... 1972-12-31
  * longitude  (longitude) float64 20.27 20.3 20.33 20.36 ... 40.92 40.95 40.98
  * latitude   (latitude) float64 40.23 40.2 40.17 40.14 ... 35.08 35.05 35.02
Data variables:
    tavg       (time, latitude, longitude) float32 dask.array<chunksize=(731, 168, 300), meta=np.ndarray>
```
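Note that with `chunks={"latitude": 300, "longitude": 300}`, the time dimension stays in a single chunk of length 731 (visible in the `chunksize=(731, 168, 300)` above), so each dask chunk holds the full two-year series: 731 × 168 × 300 float32 values, roughly 147 MB per chunk before any intermediates. A minimal sketch, not a verified fix, assuming the same file and the `climdata` path prefix from above, of also chunking along time so each piece stays smaller:

```python
import xarray as xr

# A sketch: chunking the time dimension as well (365 steps per chunk)
# roughly halves the per-chunk footprint; the resample still works, since
# xarray/dask handle the groupby across chunk boundaries.
with xr.open_dataset(climdata + 'tavg_subset.nc',  # climdata: path prefix as above
                     chunks={"time": 365, "latitude": 168, "longitude": 300}) as ds:
    annualMean = ds.tavg.resample(time="1Y").mean("time", keep_attrs=True)
    annualMean.to_netcdf("outputMean.nc", format="NETCDF4_CLASSIC", engine="netcdf4")
```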

My problem is that the original files are each >120 GB in size, and I run into an out-of-memory error on our HPC (asking for 10 CPUs with 16 GB each).

I thought xarray processed everything in chunks precisely to avoid exhausting memory, but something seems really wrong here!?
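A minimal sketch, assuming dask.distributed is available on the cluster: starting a client with an explicit per-worker memory limit (matching the 10 × 16 GB allocation mentioned above) makes dask spill intermediate chunks to disk instead of overrunning the job's memory request.

```python
from dask.distributed import Client

# Match the HPC allocation: 10 workers, one thread each, 16 GB per worker.
# With memory_limit set, workers spill to disk before the batch job is killed.
client = Client(n_workers=10, threads_per_worker=1, memory_limit="16GB")

# The xarray computation above then runs on this client automatically,
# since dask picks up the active distributed scheduler.
```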

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: 944996552