issue_comments: 497420732
html_url: https://github.com/pydata/xarray/issues/2281#issuecomment-497420732
issue_url: https://api.github.com/repos/pydata/xarray/issues/2281
id: 497420732
node_id: MDEyOklzc3VlQ29tbWVudDQ5NzQyMDczMg==
user: 539688
created_at: 2019-05-30T17:50:32Z
updated_at: 2019-05-31T23:43:54Z
author_association: NONE
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 340486433

body:

@shoyer and @crusaderky That's right, that is how I was actually dealing with this problem prior to trying xarray ... by flattening the grid coordinates and performing gridding with scipy's `griddata`.

This is important information. For the record, here is what I have found to perform best so far:

```
import numpy as np
import xarray as xr
from scipy.interpolate import griddata

# Here x/y are dummy 1D coords that won't be used.
da1 = xr.DataArray(cube1, [('t', t_cube1),
                           ('y', range(cube1.shape[1])),
                           ('x', range(cube1.shape[2]))])

# Regrid t_cube1 onto t_cube2 first, since time will always map
# 1 to 1 between cubes. This operation is very fast.
print('regridding in time ...')
cube1 = da1.interp(t=t_cube2).values

# Regrid each 2D field (X_cube1/Y_cube1 onto X_cube2/Y_cube2), one at a time.
# cube2 (and hence cube3) stores time in the last dimension.
print('regridding in space ...')
cube3 = np.full_like(cube2, np.nan)

for k in range(t_cube2.shape[0]):
    print('regridding:', k)
    cube3[:, :, k] = griddata((X_cube1.ravel(), Y_cube1.ravel()),
                              cube1[k, :, :].ravel(),
                              (X_cube2, Y_cube2),
                              fill_value=np.nan,
                              method='linear')
```

Performance is not that bad... for ~150 time steps and ~1500 nodes in x and y it takes less than 10 min. I think this can be sped up by computing the interpolation weights between the grids in the first iteration and caching them (I think xESMF does this); see the sketch below.
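A natural way to cache those weights is to triangulate the source points once and precompute, for every target point, its enclosing triangle and barycentric coordinates; every time step then reduces to a gather plus a weighted sum. Below is a minimal sketch of that idea, assuming the same `cube1`, `cube3`, `X_cube1`/`Y_cube1`, `X_cube2`/`Y_cube2`, and `t_cube2` arrays as the snippet above; the `linear_weights` helper is hypothetical, not part of scipy or xESMF:

```
import numpy as np
from scipy.spatial import Delaunay

def linear_weights(src_x, src_y, dst_x, dst_y):
    """Precompute source-vertex indices and barycentric weights
    for linear interpolation onto the destination points."""
    tri = Delaunay(np.column_stack([src_x.ravel(), src_y.ravel()]))
    dst = np.column_stack([dst_x.ravel(), dst_y.ravel()])
    simplex = tri.find_simplex(dst)            # -1 for points outside the hull
    vertices = tri.simplices[simplex]          # (n_dst, 3) source-point indices
    # Barycentric coordinates: b = T (p - r); the third weight is 1 - sum(b).
    T = tri.transform[simplex]
    b = np.einsum('nij,nj->ni', T[:, :2, :], dst - T[:, 2, :])
    weights = np.column_stack([b, 1.0 - b.sum(axis=1)])
    weights[simplex < 0] = np.nan              # NaN-fill points outside the hull
    return vertices, weights

# The triangulation and weights are computed only once ...
verts, w = linear_weights(X_cube1, Y_cube1, X_cube2, Y_cube2)

# ... so each time step is just a gather and a weighted sum.
for k in range(t_cube2.shape[0]):
    vals = cube1[k, :, :].ravel()[verts]       # (n_dst, 3) vertex values
    cube3[:, :, k] = (vals * w).sum(axis=1).reshape(X_cube2.shape)
```

This reproduces `method='linear'` with `fill_value=np.nan` while paying the Delaunay triangulation cost only once, which is typically where most of the per-step time in `griddata` goes.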