issue_comments: 50401000
html_url: https://github.com/pydata/xarray/issues/191#issuecomment-50401000
issue_url: https://api.github.com/repos/pydata/xarray/issues/191
id: 50401000
node_id: MDEyOklzc3VlQ29tbWVudDUwNDAxMDAw
user: 2835718
created_at: 2014-07-28T21:07:30Z
updated_at: 2014-07-28T21:07:30Z
author_association: NONE
body:

> Stephan, I think that I could contribute some functions to do 'nearest' and linear interpolation in n-dimensions; these should be able to take advantage of the indexing afforded by
>
> As far as I can tell, higher-order interpolation (spline, etc.) requires fitting functions to the entirety of the dataset, which is pretty slow/RAM-intensive with large datasets, and many of the functions require the data to be on a regular grid (I am not sure what the
>
> For the function signature, I was thinking about something simple, like:
>
> This could return a Series or DataFrame. But thinking about this a little more, there are kind of two sides to interpolation: what I think of as 'sampling', where we pull values at points from within a grid or structured array (like in

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 38849807
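The function signature the commenter proposed was lost in the export, along with the code spans it referenced. As a rough illustration of the 'sampling' side of interpolation the comment describes (pulling values at points from within a grid), here is a minimal nearest-neighbour sketch using NumPy; the name `sel_nearest`, the argument layout, and the rectilinear-grid assumption are illustrative choices, not the commenter's actual proposal:

```python
import numpy as np

def sel_nearest(values, coords, points):
    """Nearest-neighbour sampling of an n-D array on a rectilinear grid.

    values : n-D data array
    coords : list of 1-D sorted coordinate arrays, one per dimension
    points : (m, n) array of query points
    Returns the m sampled values, clamped to the grid at the edges.
    """
    points = np.asarray(points, dtype=float)
    idx = []
    for dim, coord in enumerate(coords):
        coord = np.asarray(coord, dtype=float)
        # searchsorted gives the insertion index; compare the two
        # neighbouring grid points and keep whichever is closer.
        j = np.searchsorted(coord, points[:, dim])
        j = np.clip(j, 1, len(coord) - 1)
        left, right = coord[j - 1], coord[j]
        j = np.where(points[:, dim] - left < right - points[:, dim], j - 1, j)
        idx.append(j)
    return values[tuple(idx)]
```

Because the lookup is a per-dimension `searchsorted` rather than a fit over the whole dataset, it stays cheap on large arrays, which is the advantage the comment contrasts against spline-style higher-order interpolation.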