
issue_comments: 516186795


html_url: https://github.com/pydata/xarray/issues/3165#issuecomment-516186795
issue_url: https://api.github.com/repos/pydata/xarray/issues/3165
id: 516186795
node_id: MDEyOklzc3VlQ29tbWVudDUxNjE4Njc5NQ==
user: 13084427
created_at: 2019-07-29T22:30:37Z
updated_at: 2019-07-29T22:30:37Z
author_association: NONE

> Did you try converting `np.zeros((5000, 50000))` to use `dask.array.zeros` instead? The former will allocate 2 GB of data within each chunk.

Thank you for your suggestion. I tried it as you suggested, but I still get the same error.

```python
import numpy as np
import xarray as xr
import dask.array as da

from dask.distributed import Client

temp = xr.DataArray(da.zeros((5000, 50000)), dims=("x", "y")).chunk({"y": 100})
temp.rolling(x=100).mean()
```

I have also tried saving the array to a netCDF file and reading it back in, but rolling still gives the same error (with or without bottleneck, and with different chunk sizes). Even though it raises a MemoryError, it doesn't actually consume much memory.
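For reference, the save-and-reload attempt described above can be sketched as follows. This is a scaled-down illustration (a 500×500 array instead of 5000×50000, so it runs quickly), and the file path is hypothetical, not from the original report:

```python
import os
import tempfile

import numpy as np
import xarray as xr
import dask.array as da

# Build a small dask-backed DataArray, chunked along "y" as in the report above.
temp = xr.DataArray(da.zeros((500, 500)), dims=("x", "y")).chunk({"y": 100})

# Write it to a netCDF file (this computes and writes the zeros).
path = os.path.join(tempfile.mkdtemp(), "temp.nc")
temp.to_netcdf(path)

# Re-open lazily with the same chunking, then apply the rolling mean.
reloaded = xr.open_dataarray(path, chunks={"y": 100})
result = reloaded.rolling(x=100).mean()
```

With the default `min_periods`, the first 99 rows along `x` come back as NaN and the rest are the window means (here, 0.0).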
