issue_comments: 1065536538
html_url: https://github.com/pydata/xarray/issues/4043#issuecomment-1065536538
issue_url: https://api.github.com/repos/pydata/xarray/issues/4043
id: 1065536538
node_id: IC_kwDOAMm_X84_gswa
user: 8419421
created_at: 2022-03-11T21:16:59Z
updated_at: 2022-03-11T21:16:59Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 614144170

body:

I believe I am experiencing a similar issue, although with code that I thought was smart enough to chunk the data request into smaller pieces:

```python
import numpy as np
import xarray as xr
from dask.diagnostics import ProgressBar
import intake

wrf_url = ('https://rda.ucar.edu/thredds/catalog/files/g/ds612.0/'
           'PGW3D/2006/catalog.xml')
catalog_u = intake.open_thredds_merged(wrf_url, path=['_U_2006060'])
catalog_v = intake.open_thredds_merged(wrf_url, path=['_V_2006060'])
ds_u = catalog_u.to_dask()
ds_u['U'] = ds_u.U.chunk("auto")
ds_v = catalog_v.to_dask()
ds_v['V'] = ds_v.V.chunk("auto")
ds = xr.merge((ds_u, ds_v))

def unstagger(ds, var, coord, new_coord):
    # Average adjacent points along the staggered dimension, then
    # rename that dimension to its unstaggered counterpart.
    var1 = ds[var].isel({coord: slice(None, -1)})
    var2 = ds[var].isel({coord: slice(1, None)})
    return ((var1 + var2) / 2).rename({coord: new_coord})

with ProgressBar():
    ds['U_unstaggered'] = unstagger(ds, 'U', 'west_east_stag', 'west_east')
    ds['V_unstaggered'] = unstagger(ds, 'V', 'south_north_stag', 'south_north')
    ds['speed'] = np.hypot(ds.U_unstaggered, ds.V_unstaggered)
    ds.speed.isel(bottom_top=10).sel(Time='2006-06-07T18:00').plot()
```

This throws an error because, according to the RDA help folks, a request for an entire variable is made, which far exceeds their server's 500 MB request limit. Here's the error:
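The `unstagger` helper in the comment above can be checked locally without touching the THREDDS server. The sketch below is a minimal, self-contained reproduction on a tiny synthetic `Dataset` (the values and dimension name are invented for illustration); it shows the two-point average along the staggered dimension and the rename to the mass-grid name:

```python
import numpy as np
import xarray as xr

# Toy staggered field: 4 points along the staggered dimension.
ds = xr.Dataset({'U': xr.DataArray(np.array([0.0, 2.0, 4.0, 6.0]),
                                   dims=['west_east_stag'])})

def unstagger(ds, var, coord, new_coord):
    # Average each pair of adjacent staggered points, then rename
    # the staggered dimension to its unstaggered counterpart.
    var1 = ds[var].isel({coord: slice(None, -1)})
    var2 = ds[var].isel({coord: slice(1, None)})
    return ((var1 + var2) / 2).rename({coord: new_coord})

u = unstagger(ds, 'U', 'west_east_stag', 'west_east')
print(u.dims)    # ('west_east',)
print(u.values)  # [1. 3. 5.]
```

Note that this works here because the toy dimension carries no coordinate labels, so the two shifted slices align positionally; if `west_east_stag` had a coordinate, the addition would align on labels and the overlap would shrink.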