issue_comments

1 row where author_association = "NONE", issue = 38849807, and user = 2835718, sorted by updated_at descending

id: 50401000
html_url: https://github.com/pydata/xarray/issues/191#issuecomment-50401000
issue_url: https://api.github.com/repos/pydata/xarray/issues/191
node_id: MDEyOklzc3VlQ29tbWVudDUwNDAxMDAw
user: cossatot (2835718)
created_at: 2014-07-28T21:07:30Z
updated_at: 2014-07-28T21:07:30Z
author_association: NONE
reactions: none
issue: interpolate/sample array at point (38849807)
body:

Stephan,

I think I could contribute some functions to do 'nearest' and linear interpolation in n dimensions; these should be able to take advantage of the indexing afforded by xray and not have to deal with SciPy's functions, because they only require selecting values in the neighborhood of the query points.
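
A minimal sketch of that idea, using the modern xarray API (the library discussed here as "xray" was later renamed xarray); the data values and coordinates below are made up for illustration:

import numpy as np
import xarray as xr

# A small 2-D field on regular lat/lon coordinates.
da = xr.DataArray(
    np.random.rand(4, 5),
    dims=("lat", "lon"),
    coords={"lat": [10.0, 20.0, 30.0, 40.0],
            "lon": [0.0, 5.0, 10.0, 15.0, 20.0]},
)

# 'Nearest' interpolation is pure label-based indexing; no SciPy needed.
nearest = da.sel(lat=23.0, lon=7.1, method="nearest")

# Linear interpolation: xarray later grew .interp() for exactly this
# (it delegates to scipy.interpolate under the hood).
linear = da.interp(lat=23.0, lon=7.1)

print(float(nearest), float(linear))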

As far as I can tell, higher-order interpolation (spline, etc.) requires fitting functions to the entirety of the dataset, which is pretty slow and RAM-intensive with large datasets, and many of the functions require the data to be on a regular grid (I am not sure what the xray / netCDF requirements are here). For this, it's probably better to use the SciPy functions (at least I personally don't have the knowledge to write anything comparable without more study).
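
For concreteness, a short example of the SciPy route: cubic (order=3) sampling of a regular grid with scipy.ndimage.map_coordinates, which fits a spline to the whole array before evaluating. The array and query points here are invented, and note that map_coordinates works in index space, not coordinate space:

import numpy as np
from scipy.ndimage import map_coordinates

data = np.random.rand(50, 60)          # values on a regular 50x60 grid

# Query points as fractional (row, col) indices, shape (ndim, npoints).
points = np.array([[12.3, 40.7],       # row indices of the two points
                   [15.9, 22.1]])      # column indices of the two points

sampled = map_coordinates(data, points, order=3)  # cubic spline sampling
print(sampled)  # one interpolated value per query point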

For the function signature, I was thinking about something simple, like:

xray.interpolate(point[s], array, field[s], order) or DataArray.interpolate(point[s], order), where points are the points at which to interpolate field values from the array, and order is as in map_coordinates (0 = nearest, 1 = linear, 3 = cubic, etc.).

This could return a Series or DataFrame.
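
A hypothetical sketch of how such a DataArray.interpolate(point[s], order) might behave; the helper below, its argument names, and the map_coordinates-backed implementation are illustrative assumptions rather than a settled design:

import numpy as np
import pandas as pd
import xarray as xr
from scipy.ndimage import map_coordinates


def interpolate(arr: xr.DataArray, points, order=1) -> pd.Series:
    """Sample `arr` at `points`, a sequence of coordinate tuples
    ordered like arr.dims. order is as in map_coordinates:
    0 = nearest, 1 = linear, 3 = cubic."""
    points = np.asarray(points, dtype=float)   # shape (npoints, ndim)
    # Convert coordinate values to fractional array indices, per dimension
    # (np.interp assumes monotonically increasing coordinates).
    index_coords = np.empty_like(points)
    for axis, dim in enumerate(arr.dims):
        coord = arr.coords[dim].values
        index_coords[:, axis] = np.interp(points[:, axis], coord,
                                          np.arange(coord.size))
    values = map_coordinates(arr.values, index_coords.T, order=order)
    return pd.Series(values, name=arr.name)


da = xr.DataArray(
    np.random.rand(4, 5),
    dims=("lat", "lon"),
    coords={"lat": [10.0, 20.0, 30.0, 40.0],
            "lon": [0.0, 5.0, 10.0, 15.0, 20.0]},
)
print(interpolate(da, [(23.0, 7.1), (35.0, 12.5)], order=1))

Converting coordinate labels to fractional indices is the only xray-specific step; map_coordinates does the rest.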

But thinking about this a little more, there are really two sides to interpolation: what I think of as 'sampling', where we pull values at points from within a grid or structured array (as in map_coordinates), and the creation of arrays from unstructured (or just smaller) point sets. Both are equally common in geoscience, and they probably require quite different treatment programmatically, so it might be wise to make room for both early.
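
To illustrate the second side, a sketch of building a regular grid from an unstructured point set with scipy.interpolate.griddata (the sampling direction was sketched above with map_coordinates); the scattered points and target grid are made up:

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))         # scattered sample locations
values = np.sin(xy[:, 0]) * np.cos(xy[:, 1])   # values at those locations

# Target regular grid to interpolate onto.
grid_x, grid_y = np.meshgrid(np.linspace(0, 10, 50),
                             np.linspace(0, 10, 50))
gridded = griddata(xy, values, (grid_x, grid_y), method="linear")
print(gridded.shape)  # (50, 50); NaN outside the convex hull of the points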


Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);