issue_comments

4 rows where author_association = "MEMBER" and issue = 484622545 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
582987147 https://github.com/pydata/xarray/issues/3252#issuecomment-582987147 https://api.github.com/repos/pydata/xarray/issues/3252 MDEyOklzc3VlQ29tbWVudDU4Mjk4NzE0Nw== shoyer 1217238 2020-02-06T16:26:11Z 2020-02-06T16:26:11Z MEMBER

I recently wrote a version of scipy.ndimage.map_coordinates for JAX in pure NumPy that I think could be straightforwardly ported into xarray.

On Thu, Feb 6, 2020 at 7:00 AM David Huard notifications@github.com wrote:

Just got bit by this as well. Computing monthly quantile correction factors, so I have an array with dimensions (month, quantile, lon, lat). I then want to apply these correction factors to a time series (time, lon, lat), so I compute the month and quantile of my time series, and want to interp into the quantile correction factors. This doesn't work because both the factors and the time series have (lat, lon) dimensions.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  interp and reindex should work for 1d -> nd indexing 484622545
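For context on the map_coordinates comment above: with order=1 that function is just multilinear interpolation at fractional array indices, and a pure-NumPy version is short. The sketch below is a hypothetical illustration under that reading, not the JAX port the comment mentions; the function name map_coordinates_linear and the clip-to-edge handling of out-of-range points are assumptions.

import itertools
import numpy as np

def map_coordinates_linear(data, coordinates):
    # Sketch of order-1 (linear) map_coordinates in pure NumPy.
    # `coordinates` has shape (data.ndim, ...), as in scipy.ndimage.map_coordinates;
    # out-of-range points are clipped to the array edge (an assumption, not scipy's default).
    coords = [np.asarray(c, dtype=float) for c in coordinates]
    lowers = [np.floor(c).astype(int) for c in coords]
    fracs = [c - lo for c, lo in zip(coords, lowers)]
    result = np.zeros(coords[0].shape)
    # Sum weighted contributions from the 2**ndim surrounding grid points.
    for corner in itertools.product((0, 1), repeat=data.ndim):
        index = tuple(
            np.clip(lo + off, 0, size - 1)
            for lo, off, size in zip(lowers, corner, data.shape)
        )
        weight = np.ones(coords[0].shape)
        for frac, off in zip(fracs, corner):
            weight = weight * (frac if off else 1.0 - frac)
        result = result + weight * data[index]
    return result

For in-range points this should agree with scipy.ndimage.map_coordinates(data, coordinates, order=1); everything in it is plain indexing and arithmetic, which is what makes a port into xarray plausible.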
524572529 https://github.com/pydata/xarray/issues/3252#issuecomment-524572529 https://api.github.com/repos/pydata/xarray/issues/3252 MDEyOklzc3VlQ29tbWVudDUyNDU3MjUyOQ== shoyer 1217238 2019-08-24T18:45:53Z 2019-08-24T18:45:53Z MEMBER

We could probably use our own version of this for all linear and nearest-neighbor interpolation. Then we wouldn't need scipy installed for that.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  interp and reindex should work for 1d -> nd indexing 484622545
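As a side note to the comment above, the nearest-neighbor case already needs no scipy, since it is plain label-based selection; a minimal sketch with made-up data:

import xarray as xr

arr = xr.DataArray([0.0, 10.0, 20.0], coords={"x": [0.0, 1.0, 4.0]}, dims="x")
# Nearest-neighbor lookup at new points is ordinary label selection.
arr.sel(x=[0.4, 3.0], method="nearest")
# -> values [0.0, 20.0], taken from the nearest source points x=0.0 and x=4.0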
524457928 https://github.com/pydata/xarray/issues/3252#issuecomment-524457928 https://api.github.com/repos/pydata/xarray/issues/3252 MDEyOklzc3VlQ29tbWVudDUyNDQ1NzkyOA== shoyer 1217238 2019-08-23T20:50:29Z 2019-08-23T20:50:29Z MEMBER

Linear interpolation for 1d -> nd is just a matter of averaging two indexing selections. If we leverage xarray's vectorized indexing operations to do the hard work, it should work automatically for dask arrays, sparse arrays and xarray's internal backend array types, all with any number of dimensions.

On Fri, Aug 23, 2019 at 2:17 PM Noah D Brenowitz notifications@github.com wrote:

In my experience, computing w efficiently is the tricky part. The function is slightly different, but metpy https://unidata.github.io/MetPy/latest/api/generated/metpy.interpolate.interpolate_1d.html uses a lot of tricks to make this work efficiently. A manual for-loop is much cleaner for this kind of stencil calculation IMO. What kind of duck arrays were you thinking of?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  interp and reindex should work for 1d -> nd indexing 484622545
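A rough sketch of the "average of two indexing selections" idea in the comment above, using xarray's vectorized (pointwise) indexing so that a 1-D lookup table is interpolated at n-dimensional query points. This is an illustration only, not xarray's interp implementation; the made-up data, the searchsorted bookkeeping, and the drop_vars call are assumptions.

import numpy as np
import xarray as xr

# 1-D lookup table along "q", n-d query points over other dims (made-up data).
table = xr.DataArray(np.linspace(0.0, 1.0, 5) ** 2,
                     coords={"q": np.linspace(0.0, 1.0, 5)}, dims="q")
queries = xr.DataArray(np.random.default_rng(0).uniform(0, 1, size=(3, 4)),
                       dims=("time", "lon"))

# Positions of the bracketing table entries for every query point.
q = table["q"].values
pos = np.clip(np.searchsorted(q, queries.values, side="right") - 1, 0, q.size - 2)
lo = xr.DataArray(pos, dims=queries.dims)
hi = lo + 1

# Vectorized indexing: the two selections take the query dims, for any number of
# them, and the same pattern works when a dask or other duck array sits underneath.
lower = table.isel(q=lo).drop_vars("q")
upper = table.isel(q=hi).drop_vars("q")

# Linear weights from the coordinate spacing, then the weighted average.
w = (q[pos + 1] - queries) / (q[pos + 1] - q[pos])
interped = w * lower + (1 - w) * upper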
524439002 https://github.com/pydata/xarray/issues/3252#issuecomment-524439002 https://api.github.com/repos/pydata/xarray/issues/3252 MDEyOklzc3VlQ29tbWVudDUyNDQzOTAwMg== shoyer 1217238 2019-08-23T19:43:21Z 2019-08-23T19:43:21Z MEMBER

We could implement linear interpolation just as w * array.sel(..., method='ffill') + (1 - w) * array.sel(..., method='bfill'), where w is calculated from the indexes. This is probably about as efficient as scipy. Numba could be slightly faster (by avoiding a few memory allocations) but it will be hard to make it work as generally, especially on duck arrays.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  interp and reindex should work for 1d -> nd indexing 484622545
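A minimal sketch of the formula in the comment above for the plain 1-D case, with w computed from the index values; the example data and the guard against exact matches are assumptions.

import numpy as np
import xarray as xr

arr = xr.DataArray([0.0, 10.0, 20.0], coords={"x": [0.0, 1.0, 4.0]}, dims="x")
targets = np.array([0.5, 2.5])

# Bracketing values via the existing selection machinery.
lower = arr.sel(x=targets, method="ffill")   # value at the index just below
upper = arr.sel(x=targets, method="bfill")   # value at the index just above

# w is the weight of the lower neighbor, computed from the index.
x_lo, x_hi = lower["x"].values, upper["x"].values
denom = np.where(x_hi > x_lo, x_hi - x_lo, 1.0)   # guard exact hits against 0/0
w = (x_hi - targets) / denom

# Work on raw values so the differing x coordinates of lower/upper don't align away.
interped = w * lower.values + (1 - w) * upper.values
# -> array([ 5., 15.])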

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);