
issue_comments


2 rows where issue = 317421267 and user = 6213168 sorted by updated_at descending

Comment 386906920 · crusaderky (6213168) · MEMBER
created_at: 2018-05-06T19:30:30Z · updated_at: 2018-05-06T19:30:30Z
https://github.com/pydata/xarray/issues/2079#issuecomment-386906920
Issue: New feature: interp1d (317421267)

As I was dissatisfied with the prototype, I scrapped it and rewrote it to mimic the splrep/splev API. However, my functions don't wrap scipy.interpolate.splrep/splev, as those don't accept an n-dimensional y; instead they wrap scipy.interpolate.make_interp_spline and scipy.interpolate.BSpline (which is what scipy.interpolate.interp1d does too). Compared to the prototype above (a sketch of the underlying scipy calls follows the list):

  • lost support for Akima, PCHIP, and the non-spline options of interp1d
  • much more memory-efficient than before, particularly on the distributed scheduler
  • no more hacks: splrep produces a plain Dataset, which can be stored to NetCDF, sliced, etc.
  • gained the ability to have chunks on x_new
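
For illustration, here is a minimal sketch of the scipy calls described above, fitting a spline along one axis of an n-dimensional y. The shapes and variable names are assumptions for the example, not the actual xarray_extras API:

import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 10.0, 50)
y = np.random.rand(50, 20, 30)                  # n-dimensional y; interpolate along axis 0

spline = make_interp_spline(x, y, k=3, axis=0)  # fit once; returns a scipy.interpolate.BSpline
x_new = np.linspace(0.0, 10.0, 400)
y_new = spline(x_new)                           # evaluate; y_new has shape (400, 20, 30)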

I built a production-quality version (including documentation, unit tests, and all the trimmings) at https://github.com/crusaderky/xarray_extras. I'm happy to discuss moving it into somebody else's module.

You still can't have a chunked x. It is possible to implement with dask.array.ghost.ghost, although it would be mutually exclusive with a chunked x_new; contributions are welcome.
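
As a hedged illustration of the ghosting mechanism (in later dask releases dask.array.ghost was renamed to dask.array.overlap, exposed via Array.map_overlap): each chunk borrows a halo of points from its neighbours, the function runs per chunk, and the halo is trimmed off again. This generic moving-average example shows only the mechanism, not an interp1d implementation:

import dask.array as da
import numpy as np

arr = da.random.random(1000, chunks=100)

def smooth(block):
    # Any per-chunk computation that needs a few neighbouring points.
    return np.convolve(block, np.ones(5) / 5, mode="same")

# depth=2 shares two points with each neighbouring chunk; the halo is
# trimmed from the result, so chunk sizes are preserved.
smoothed = arr.map_overlap(smooth, depth=2, boundary="reflect")
result = smoothed.compute()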

Closing this ticket as I agree this is beyond the scope of the core xarray package.

Reactions: none
Comment 384366752 · crusaderky (6213168) · MEMBER
created_at: 2018-04-25T17:22:33Z · updated_at: 2018-04-25T17:22:33Z
https://github.com/pydata/xarray/issues/2079#issuecomment-384366752
Issue: New feature: interp1d (317421267)

For my use case splrep caching is critical, as I need to interpolate 20-something curves roughly 4000 times on different points. Changing the application to gather all the points from downstream and do one big interpolation would not be feasible, as it would exhaust my RAM and be very hostile to a distributed environment.
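
A minimal sketch of the caching pattern described here, with made-up sizes matching the numbers above: fit each curve's spline once, then reuse the fitted objects across many evaluations instead of refitting every time:

import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 1.0, 100)
curves = [np.random.rand(100) for _ in range(20)]  # ~20 curves

# Fit once per curve and cache the resulting BSpline objects.
splines = [make_interp_spline(x, y, k=3) for y in curves]

# Evaluate each curve ~4000 times at different points without refitting.
for _ in range(4000):
    x_new = np.random.rand(5)
    values = [spl(x_new) for spl in splines]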

Reactions: none

Table schema

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
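
A hedged sketch of reproducing the query behind this page ("2 rows where issue = 317421267 and user = 6213168 sorted by updated_at descending") with Python's sqlite3 module; the database filename github.db is an assumption:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical filename for this Datasette database
rows = conn.execute(
    """
    select id, created_at, updated_at, body
    from issue_comments
    where issue = ? and [user] = ?
    order by updated_at desc
    """,
    (317421267, 6213168),
).fetchall()
for comment_id, created, updated, body in rows:
    print(comment_id, created, updated)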