issue_comments


5 rows where author_association = "NONE" and issue = 38849807 ("interpolate/sample array at point"), sorted by updated_at descending

jgerardsimcock commented on 2017-05-10T02:55:49Z · https://github.com/pydata/xarray/issues/191#issuecomment-300359772

I have a dataset that looks like the following:

```
<xarray.Dataset>
Dimensions:  (lat: 720, lon: 1440, time: 365)
Coordinates:
  * time     (time) datetime64[ns] 2006-01-01T12:00:00 2006-01-02T12:00:00 ...
  * lat      (lat) float32 -89.875 -89.625 -89.375 -89.125 -88.875 -88.625 ...
  * lon      (lon) float32 0.125 0.375 0.625 0.875 1.125 1.375 1.625 1.875 ...
Data variables:
    tasmax   (time, lat, lon) float64 272.6 272.6 272.6 272.6 272.6 272.6 ...
Attributes:
    parent_experiment:             historical
    parent_experiment_id:          historical
    parent_experiment_rip:         r1i1p1
    Conventions:                   CF-1.4
    institution:                   NASA Earth Exchange, NASA Ames Research C...
    institute_id:                  NASA-Ames
    realm:                         atmos
    modeling_realm:                atmos
    version:                       1.0
    downscalingModel:              BCSD
    experiment_id:                 rcp85
    frequency:                     day
    realization:                   1
    initialization_method:         1
    physics_version:               1
    tracking_id:                   fd966a07-baec-44e4-8f69-e3cfb2d70dfa
    driving_data_tracking_ids:     N/A
    driving_model_ensemble_member: r1i1p1
    driving_experiment_name:       historical
    driving_experiment:            historical
    model_id:                      BCSD
    references:                    BCSD method: Thrasher et al., 2012, Hydro...
    DOI:                           http://dx.doi.org/10.7292/W0MW2F2G
    experiment:                    RCP8.5
    title:                         ACCESS1-0 global downscaled NEX CMIP5 Cli...
    contact:                       Dr. Rama Nemani: rama.nemani@nasa.gov, Dr...
    disclaimer:                    This data is considered provisional and s...
    resolution_id:                 0.25 degree
    project_id:                    NEXGDDP
    table_id:                      Table day (12 November 2010)
    source:                        BCSD 2014
    creation_date:                 2015-01-07T20:16:06Z
    forcing:                       N/A
    product:                       output
```

I am trying to do a linear interpolation for each day where the temperature is NaN. Is there a straightforward way to do this in xarray?
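(Editor's note: xarray later grew a `DataArray.interpolate_na` method for exactly this kind of gap filling. At the time of the comment, the same idea can be sketched in plain NumPy; `fill_nan_linear` and the toy series below are hypothetical names and data, for illustration only.)

``` py
import numpy as np

def fill_nan_linear(values):
    """Linearly interpolate over NaN entries of a 1-D array.

    Points outside the valid range are held at the nearest valid value
    (np.interp's default behaviour).
    """
    values = np.asarray(values, dtype=float)
    out = values.copy()
    nans = np.isnan(values)
    x = np.arange(values.size)
    # interpolate only at the NaN positions, using the valid points as knots
    out[nans] = np.interp(x[nans], x[~nans], values[~nans])
    return out

# toy daily series with a two-day gap
series = np.array([272.6, np.nan, np.nan, 275.6])
filled = fill_nan_linear(series)  # gap filled as 273.6, 274.6
```

Applied per 1-D slice (e.g. along `time` for each grid cell), this is the same operation `interpolate_na(dim="time")` performs.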

saulomeirelles commented on 2015-10-23T16:00:26Z · https://github.com/pydata/xarray/issues/191#issuecomment-150618114

Hi All,

This is indeed an excellent project with great potential!

I am wondering if there is any progress on the interpolation issue. I am working with an irregular time series which I would pretty much like to upsample using xray.

Thanks for all the effort!

Saulo
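(Editor's note: the simplest form of the upsampling Saulo describes is a 1-D linear interpolation of the irregular samples onto a regular axis; the times and values below are made up, for illustration only.)

``` py
import numpy as np

# irregularly spaced sample times (in hours) and observed values
t_irregular = np.array([0.0, 0.7, 2.4, 3.0])
v_irregular = np.array([10.0, 11.0, 14.0, 15.0])

# regular half-hourly grid to upsample onto
t_regular = np.arange(0.0, 3.5, 0.5)
v_regular = np.interp(t_regular, t_irregular, v_irregular)
```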

den-run-ai commented on 2015-08-18T16:10:29Z · https://github.com/pydata/xarray/issues/191#issuecomment-132262589

+1

nfaggian commented on 2014-10-24T01:19:47Z · https://github.com/pydata/xarray/issues/191#issuecomment-60332922

For what it's worth, I wrote this today. It's a long way from being useful, but I find it works well enough to fill gaps in data after a reindex()

``` py
import numpy as np
from scipy import interpolate, ndimage


def linterp(data, index, interp_index, order=1):
    """
    Parameters
    ----------
    data : nd-array (cube).
    index : index (floats) associated with the cube.
    interp_index : float interpolation point.

    Returns
    -------
    interpolated : nd-array
        An interpolated field.
    """
    # Form a cube of the values, which we will imagine is our function f()
    cube = np.array(data, dtype=float)

    # Form a relationship, and this can be non-linear, between slice indexes
    # and the "index".
    m = interpolate.interp1d(index, range(len(index)))

    # Form a set of coordinates to sample over - x
    y, x = np.mgrid[0:data[0].shape[0], 0:data[0].shape[1]]
    z = np.ones_like(y) * m(interp_index)

    # Perform the sampling f(x); map_coordinates performs a linear
    # interpolation of the coordinates in the cube.
    return ndimage.map_coordinates(
        cube,
        [z, y, x],
        order=order,
        cval=np.nan)
```
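(Editor's note: to make the trick concrete, here is a self-contained run of the same `interp1d` + `map_coordinates` combination on a tiny made-up cube. Slice k holds the constant value k, so sampling halfway between two slices should return the midpoint.)

``` py
import numpy as np
from scipy import interpolate, ndimage

# cube of 3 slices; slice k is constant k, so results are easy to predict
cube = np.stack([np.full((2, 2), float(k)) for k in range(3)])

index = np.array([10.0, 20.0, 30.0])           # physical index of each slice
m = interpolate.interp1d(index, np.arange(3))  # physical index -> slice coord

y, x = np.mgrid[0:2, 0:2]
z = np.ones_like(y, dtype=float) * m(15.0)     # halfway between slices 0 and 1
sampled = ndimage.map_coordinates(cube, [z, y, x], order=1)
# every sampled value is 0.5, the midpoint between slice 0 and slice 1
```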

cossatot commented on 2014-07-28T21:07:30Z · https://github.com/pydata/xarray/issues/191#issuecomment-50401000

Stephan,

I think that I could contribute some functions to do 'nearest' and linear interpolation in n-dimensions; these should be able to take advantage of the indexing afforded by xray and not have to deal with Scipy's functions, because they only require selecting values in the neighborhood of the points.

As far as I can tell, higher-order interpolation (spline, etc.) requires fitting functions to the entirety of the dataset, which is pretty slow/RAM-intensive with large datasets, and many of the functions require the data to be on a regular grid (I am not sure what the xray / netCDF requirements are here). For this, it's probably better to use the scipy functions (at least I personally don't have the knowledge to write anything comparable without more study).

For the function signature, I was thinking about something simple, like:

xray.interpolate(point[s], array, field[s], order), or DataArray.interpolate(point[s], order), where points are the points at which to interpolate field values from the array, and order is as in map_coordinates (0 = nearest, 1 = linear, 3 = cubic, etc.).

This could return a Series or DataFrame.

But thinking about this a little more, there are kind of two sides to interpolation: What I think of as 'sampling', where we pull values at points from within a grid or structured array (like in map_coordinates), and then the creation of arrays from unstructured (or just smaller) point sets. Both are equally common in geoscience, and probably require pretty different treatment programmatically, so it might be wise to make room for both early.
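(Editor's note: as a sketch of the "sampling via indexing" side of the proposal, nearest-neighbour lookup on a regular grid needs nothing beyond `searchsorted`; `sample_nearest` and the toy grid below are hypothetical names and data, not a proposed xray API.)

``` py
import numpy as np

def sample_nearest(values, coords, points):
    """Nearest-neighbour sampling of an nd grid.

    values : nd array of grid values
    coords : one sorted 1-D coordinate array per axis
    points : (npts, ndim) array of query points
    """
    idx = []
    for axis, c in enumerate(coords):
        # midpoints between grid coordinates define the nearest-neighbour bins
        mids = (c[:-1] + c[1:]) / 2.0
        idx.append(np.searchsorted(mids, points[:, axis]))
    return values[tuple(idx)]

# 3x3 grid where values[i, j] = 10*i + j
grid = np.arange(3)[:, None] * 10 + np.arange(3)
picked = sample_nearest(grid,
                        [np.arange(3.0), np.arange(3.0)],
                        np.array([[0.9, 0.1], [2.2, 1.6]]))
# (0.9, 0.1) snaps to grid point (1, 0) -> 10; (2.2, 1.6) to (2, 2) -> 22
```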

