issue_comments: 884197067
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5604#issuecomment-884197067 | https://api.github.com/repos/pydata/xarray/issues/5604 | 884197067 | IC_kwDOAMm_X840s8bL | 25606497 | 2021-07-21T13:38:22Z | 2021-07-21T14:33:53Z | NONE |  | 944996552 |

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

body:

Hi there, I have a very similar problem, and before I open another issue I would rather share my example here.

Minimal Complete Verifiable Example: this little computation uses >500 MB of memory, even though the file is only 154 MB on disk:

```python
import xarray as xr

# climdata is a path prefix to the directory holding the NetCDF file
with xr.open_dataset(climdata + 'tavg_subset.nc',
                     chunks={"latitude": 300, "longitude": 300}) as ds:
    print(ds)
```

My problem is that the original files are each >120 GB in size, and I run into out-of-memory errors on our HPC (asking for 10 CPUs with 16 GB each). I thought xarray processed everything in chunks to avoid overusing memory, but something seems really wrong here!?
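For context, here is a minimal sketch of the chunked, lazy workflow the comment expects from xarray. The file names, the variable name `tavg`, and the `time` dimension are assumptions for illustration only and are not taken from the report above:

```python
import xarray as xr

# Open lazily: with `chunks=...` the variables become dask arrays,
# and no data values are read until a computation is triggered.
ds = xr.open_dataset("tavg_subset.nc",
                     chunks={"latitude": 300, "longitude": 300})

# Build the computation lazily; nothing is loaded into memory yet
# (assumes a variable "tavg" with a "time" dimension, hypothetical here).
time_mean = ds["tavg"].mean("time")

# Writing the result triggers the computation; dask processes the input
# chunk by chunk, so peak memory scales with the chunk size rather than
# with the full file size.
time_mean.to_netcdf("tavg_time_mean.nc")
```

This sketch only illustrates the pattern of deferring computation and streaming results to disk under those assumptions; it does not diagnose the memory growth reported above.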