issue_comments


2 rows where user = 25606497 sorted by updated_at descending

id: 912082889
html_url: https://github.com/pydata/xarray/issues/5760#issuecomment-912082889
issue_url: https://api.github.com/repos/pydata/xarray/issues/5760
node_id: IC_kwDOAMm_X842XUfJ
user: areichmuth (25606497)
created_at: 2021-09-02T21:49:57Z
updated_at: 2021-09-02T21:49:57Z
author_association: NONE

Thank you @TomNicholas - strangely, I can't reproduce it anymore on my local machine; it all happened on our Slurm cluster. The result is correct with respect to the input file's index. In my case I calculated annual and seasonal climate variables from the same input files, but the matrix indices i,j differed: one output had its upper-left corner at (0,0) and the other at (0,1167), as shown in ncview. Nevertheless, here is what I did - you can test it with https://www.unidata.ucar.edu/software/netcdf/examples/sresa1b_ncar_ccsm3-example.nc:

```python
import numpy as np
import xarray as xr

chunks = 4

lonrange = 256
latrange = 128

# creating the chunks - our Slurm can't handle dask_jobqueue, and dask chunking wasn't possible either
x = [x.tolist() for x in np.array_split(range(lonrange), chunks)]
xextend = [[sublist[0], sublist[-1]] for sublist in x]

y = [y.tolist() for y in np.array_split(range(latrange), chunks)]
yextend = [[sublist[0], sublist[-1]] for sublist in y]

# concatenating the chunks
allChunks = [[x, y] for x in xextend for y in yextend]

for k in range(0, chunks * chunks):
    inter = str(k)

    tas = xr.open_dataset('~/pathToFile/sresa1b_ncar_ccsm3-example.nc').isel(
        longitude=slice(min(allChunks[k][0]), max(allChunks[k][0])),
        latitude=slice(min(allChunks[k][1]), max(allChunks[k][1])),
    )
    # instead of my climate calculations
    tas.rename({'tas': 'test1'}).to_netcdf('~/pathToFile/climateCalculation1_' + inter + '.nc')
    tas.rename({'tas': 'test2'}).to_netcdf('~/pathToFile/climateCalculation2_' + inter + '.nc')

    # combining the single data arrays per chunk
    # combine using nested
    with xr.open_mfdataset('~/pathToFile/climateCalculation*' + inter + '.nc',
                           chunks=-1, parallel=True, engine='h5netcdf', combine='nested') as ds:
        ds.to_netcdf('~/pathToFile/nestedClimateAnnualCalculations_' + inter + '.nc')
    # combine using default coords
    with xr.open_mfdataset('~/pathToFile/climateCalculation*' + inter + '.nc',
                           chunks=-1, parallel=True, engine='h5netcdf') as ds:
        ds.to_netcdf('~/pathToFile/climateAnnualCalculations_' + inter + '.nc')

# combining all chunks into one final file
# nested input
with xr.open_mfdataset('~/pathToFile/climateCalculations/nestedClimateAnnualCalculations_*',
                       chunks=-1, parallel=True, engine='h5netcdf') as ds:
    ds.to_netcdf('~/pathToFile/climateAnnualCalculationsCombinedNested.nc')

# by-coords input
with xr.open_mfdataset('~/pathToFile/climateCalculations/climateAnnualCalculations_*',
                       chunks=-1, parallel=True, engine='h5netcdf') as ds:
    ds.to_netcdf('~/pathToFile/climateAnnualCalculationsCombined.nc')
```
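
A minimal sketch (not from the original comment) of why the two combine modes can order the output grid differently: `combine_by_coords` arranges datasets by their coordinate values regardless of input order, while `combine_nested` keeps the order in which the datasets are passed, which could account for the shifted corner index observed above.

```python
import numpy as np
import xarray as xr

# Two single-variable datasets covering adjacent latitude bands.
a = xr.Dataset({"t": (("lat",), np.zeros(2))}, coords={"lat": [0, 1]})
b = xr.Dataset({"t": (("lat",), np.ones(2))}, coords={"lat": [2, 3]})

# combine_by_coords ignores input order and sorts by coordinate values...
print(xr.combine_by_coords([b, a]).lat.values)                 # [0 1 2 3]

# ...while combine_nested trusts the order you give it.
print(xr.combine_nested([b, a], concat_dim="lat").lat.values)  # [2 3 0 1]
```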

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Matrix Index is tilted using combine_by_coords (986436135)
id: 884197067
html_url: https://github.com/pydata/xarray/issues/5604#issuecomment-884197067
issue_url: https://api.github.com/repos/pydata/xarray/issues/5604
node_id: IC_kwDOAMm_X840s8bL
user: areichmuth (25606497)
created_at: 2021-07-21T13:38:22Z
updated_at: 2021-07-21T14:33:53Z
author_association: NONE

Hi there, I have a very similar problem, and before opening another issue I would rather share my example here:

Minimal Complete Verifiable Example:

This little computation uses >500 MB of memory even though the file itself is only 154 MB:

```python
with xr.open_dataset(climdata + 'tavg_subset.nc', chunks={"latitude": 300, "longitude": 300}) as ds:
    print(ds)

    annualMean = ds.tavg.resample(time="1Y").mean('time', keep_attrs=True)
    annualMean.to_netcdf("outputMean.nc", format="NETCDF4_CLASSIC", engine="netcdf4")
```

The printed dataset:

```
<xarray.Dataset>
Dimensions:    (latitude: 168, longitude: 664, time: 731)
Coordinates:
  * time       (time) datetime64[ns] 1971-01-01 1971-01-02 ... 1972-12-31
  * longitude  (longitude) float64 20.27 20.3 20.33 20.36 ... 40.92 40.95 40.98
  * latitude   (latitude) float64 40.23 40.2 40.17 40.14 ... 35.08 35.05 35.02
Data variables:
    tavg       (time, latitude, longitude) float32 dask.array<chunksize=(731, 168, 300), meta=np.ndarray>
```

My problem is that the original files are each >120 GB in size, and I run into out-of-memory errors on our HPC (requesting 10 CPUs with 16 GB each).

I thought xarray processes everything in chunks so as not to overuse memory - but something seems really wrong here!?
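
As a side note (not part of the original thread), the resample itself only builds a lazy dask graph; memory is consumed when the result is computed or written, so peak usage depends on the chunk layout at that point. A minimal sketch, assuming a local file named tavg_subset.nc with a tavg variable as printed above:

```python
import xarray as xr

# Hypothetical local file mirroring the dataset printed above; the chunk
# sizes here are illustrative, not a recommendation.
with xr.open_dataset("tavg_subset.nc", chunks={"latitude": 84, "longitude": 166}) as ds:
    annual_mean = ds.tavg.resample(time="1Y").mean("time", keep_attrs=True)
    print(annual_mean.chunks)  # still lazy: no data has been loaded yet
    # Computation is triggered here and streams through the dask chunks.
    annual_mean.to_netcdf("outputMean_lazy.nc")
```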

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Extremely Large Memory usage for a very small variable (944996552)

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
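
For completeness, a sketch of how the row listing above could be reproduced against a local copy of this SQLite database; the file name github.db is an assumption, not part of the page:

```python
import sqlite3

# "github.db" is a hypothetical local copy of the Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, issue, created_at, updated_at, author_association
    FROM issue_comments
    WHERE user = 25606497       -- areichmuth
    ORDER BY updated_at DESC    -- matches the sort order of the view above
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```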