issue_comments: 516187643


html_url: https://github.com/pydata/xarray/issues/3165#issuecomment-516187643
issue_url: https://api.github.com/repos/pydata/xarray/issues/3165
id: 516187643
node_id: MDEyOklzc3VlQ29tbWVudDUxNjE4NzY0Mw==
user: 1217238
created_at: 2019-07-29T22:33:56Z
updated_at: 2019-07-29T22:33:56Z
author_association: MEMBER

You want to use the chunks argument inside da.zeros, e.g., da.zeros((5000, 50000), chunks=100).
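As a minimal sketch of the suggestion above (the chunk sizes here are assumptions based on the shapes discussed in the thread), passing chunks directly to da.zeros creates a chunked lazy array from the start, rather than rechunking afterwards with .chunk():

```python
import dask.array as da

# Build the array with chunking specified up front.
# chunks=(5000, 100) keeps the x dimension in one chunk and splits
# the y dimension into blocks of 100 columns; a single integer
# (e.g. chunks=100) would instead chunk every dimension to size 100.
arr = da.zeros((5000, 50000), chunks=(5000, 100))

# da.zeros is lazy: no 2 GB allocation happens here, only metadata.
print(arr.chunks[0])     # one chunk of 5000 along x
print(arr.chunks[1][0])  # each chunk along y is 100 columns wide
```

The resulting array can then be wrapped in xr.DataArray without a separate .chunk() call.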

On Mon, Jul 29, 2019 at 3:30 PM peterhob notifications@github.com wrote:

Did you try converting np.zeros((5000, 50000)) to use dask.array.zeros instead? The former will allocate 2 GB of data within each chunk.

Thank you for your suggestion. I tried it as you suggested, but I still get the same error.

import numpy as np
import xarray as xr
import dask.array as da
# from dask.distributed import Client

temp = xr.DataArray(da.zeros((5000, 50000)), dims=("x", "y")).chunk({"y": 100})
temp.rolling(x=100).mean()

I have also tried saving the array to a nc file and reading it back afterwards. Rolling still gives the same error (with or without bottleneck, and with different chunk sizes). Even though it reports a memory error, it does not actually consume much memory.


issue: 473692721