issue_comments: 495871201

Comment on pydata/xarray#2281 by user 6213168 (MEMBER)
https://github.com/pydata/xarray/issues/2281#issuecomment-495871201
Created 2019-05-25T06:55:33Z · updated 2019-05-25T06:59:03Z

@fspaolo I never tried using my algorithm to perform 2D interpolation, but this should work:

```
from xarray_extras.interpolate import splrep, splev

da = splev(x_new, splrep(da, 'x'))
da = splev(y_new, splrep(da, 'y'))
da = splev(t_new, splrep(da, 't'))
```

Add `k=1` to downgrade from cubic to linear interpolation and get a speed boost.
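For instance, the x step with linear splines would look like this (a sketch, assuming `splrep` accepts the spline degree `k` the way the note above implies):

```
from xarray_extras.interpolate import splrep, splev

# k=1 requests a linear spline instead of the default cubic (k=3)
da = splev(x_new, splrep(da, 'x', k=1))
```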

You can play around with dask to increase performance by using all your CPUs (or more, with dask distributed). Just remember that a dimension can't be split across multiple chunks when you apply splrep to it:

```
from xarray_extras.interpolate import splrep, splev

# Keep x and y whole while interpolating along them; parallelise over t
da = da.chunk(t=TCHUNK)
da = splev(x_new, splrep(da, 'x'))
da = splev(y_new, splrep(da, 'y'))
# Re-chunk: split x and y, and merge t back into a single chunk
da = da.chunk(x=SCHUNK, y=SCHUNK).chunk(t=-1)
da = splev(t_new, splrep(da, 't'))
da = da.compute()
```

where TCHUNK and SCHUNK are integers you'll have to play with. The rule of thumb is that each chunk should weigh 5–100 MB.
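As a rough starting point for that rule of thumb, TCHUNK can be derived from the array's shape and dtype. This is a sketch of my own (the comment only gives the 5–100 MB guideline; `pick_chunk` is a hypothetical helper):

```
import numpy as np

def pick_chunk(arr, dim, target_mb=50):
    # Bytes in one step along `dim` = product of the other dims' sizes
    # times the element size
    other = int(np.prod([s for d, s in zip(arr.dims, arr.shape) if d != dim]))
    bytes_per_step = other * arr.dtype.itemsize
    # Number of steps along `dim` that fit in roughly target_mb megabytes
    return max(1, int(target_mb * 2**20 // bytes_per_step))

TCHUNK = pick_chunk(da, 't')
```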

If you end up finding out that chunking along an interpolation dimension is important for you, it is possible to implement with dask's ghosting (overlap) techniques; it's just painfully complicated. A minimal illustration of the idea follows.
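This sketch only illustrates the ghosting mechanism, not the author's implementation: dask's `map_overlap` shares ghost cells between neighbouring chunks, so a spline fitted per block has support across chunk boundaries. It deliberately glosses over the genuinely hard part, which is mapping each block's rows back to global coordinates.

```
import numpy as np
import dask.array as dsk
from scipy.interpolate import CubicSpline

def _interp_block(block):
    # Toy per-block cubic interpolation along axis 0. A real implementation
    # would have to map each block's ghosted rows to global coordinates,
    # which is the painfully complicated part.
    x = np.arange(block.shape[0])
    x_new = np.linspace(0, block.shape[0] - 1, block.shape[0])
    return CubicSpline(x, block, axis=0)(x_new)

arr = dsk.random.random((1000, 100), chunks=(250, 100))
# depth={0: 3} shares 3 ghost rows with each neighbouring chunk along axis 0;
# trim=True (the default) cuts them back off after _interp_block runs
out = arr.map_overlap(_interp_block, depth={0: 3}, boundary="reflect")
result = out.compute()
```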
