issue_comments: 497150401

html_url: https://github.com/pydata/xarray/issues/2281#issuecomment-497150401
issue_url: https://api.github.com/repos/pydata/xarray/issues/2281
id: 497150401
node_id: MDEyOklzc3VlQ29tbWVudDQ5NzE1MDQwMQ==
user: 1217238
created_at: 2019-05-29T23:58:42Z
updated_at: 2019-05-29T23:58:42Z
author_association: MEMBER

> So how to perform this operation... or am I missing something?

Sorry, I don't think there's an easy way to do this directly in xarray right now.

> My concern with scipy.interpolate.griddata is that the performance might be miserable... griddata takes an arbitrary stream of data points in a D-dimensional space. It doesn't know whether those source data points have a gridded/mesh structure. A curvilinear grid mesh needs to be flattened into a stream of points before being passed to griddata(). That might not be too bad for nearest-neighbour search, but it is very inefficient for the linear/bilinear method, where knowing the mesh structure beforehand can save a lot of computation.
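For context, flattening a curvilinear mesh into the point stream that griddata expects might look like this minimal sketch (synthetic lon/lat arrays, not taken from the issue):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical source curvilinear mesh: 2D lon/lat coordinate arrays plus a field.
ny, nx = 50, 60
lat2d, lon2d = np.meshgrid(np.linspace(-5, 5, ny), np.linspace(0, 10, nx), indexing="ij")
lat2d = lat2d + 0.1 * lon2d            # skew the mesh so it is genuinely curvilinear
data2d = np.sin(lon2d) * np.cos(lat2d)

# griddata only sees a flat stream of (lon, lat) points; the mesh structure is lost.
points = np.column_stack([lon2d.ravel(), lat2d.ravel()])
values = data2d.ravel()

# Target: a regular lon/lat grid.
tlon, tlat = np.meshgrid(np.linspace(1, 9, 40), np.linspace(-4, 4, 30))
regridded = griddata(points, values, (tlon, tlat), method="linear")
```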

Thinking a little more about this, I wonder if the performance could actually be OK as long as the spatial grid is not too big, i.e., if we reuse the same grid many times for different variables/times.

In particular, SciPy's griddata makes use of either a scipy.spatial.KDTree (for nearest-neighbor lookups) or a scipy.spatial.Delaunay triangulation (for linear interpolation on a triangular mesh). We could build these data structures once (and potentially even cache them in indexes on xarray objects), and likewise calculate the sparse interpolation coefficients once for repeated use.
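A minimal sketch of that precompute-and-reuse idea, assuming linear interpolation via barycentric weights on a Delaunay triangulation stored as a sparse matrix (build_weights is a hypothetical helper, not an existing xarray or SciPy function):

```python
import numpy as np
import scipy.sparse
from scipy.spatial import Delaunay

def build_weights(source_points, target_points):
    """Sparse linear-interpolation weights mapping (n_source, 2) points to (n_target, 2) points."""
    tri = Delaunay(source_points)              # expensive: built once per grid
    simplex = tri.find_simplex(target_points)  # containing triangle for each target point
    vertices = tri.simplices[simplex]          # (n_target, 3) indices into source_points
    # Barycentric coordinates of each target point within its triangle.
    trans = tri.transform[simplex]             # (n_target, 3, 2) affine transforms
    delta = target_points - trans[:, 2]
    bary = np.einsum("nij,nj->ni", trans[:, :2], delta)
    weights = np.hstack([bary, 1 - bary.sum(axis=1, keepdims=True)])
    weights[simplex < 0] = 0                   # points outside the mesh get zero weight
    rows = np.repeat(np.arange(len(target_points)), 3)
    return scipy.sparse.csr_matrix(
        (weights.ravel(), (rows, vertices.ravel())),
        shape=(len(target_points), len(source_points)),
    )

# The triangulation and weight construction are amortized: for every variable or
# time step on the same pair of grids, interpolation is just a sparse matrix product:
#   weights = build_weights(src_pts, dst_pts)
#   regridded = weights @ field.ravel()
```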

reactions: {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: 340486433