issue_comments

3 rows where issue = 756425955 and user = 2448579, sorted by updated_at descending
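
For reference, a minimal Python sketch of the query this page runs, assuming a local SQLite copy of the database (the filename github.db is hypothetical):

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE issue = 756425955 AND [user] = 2448579
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, body in rows:
    print(comment_id, updated, body[:60])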

id: 963563959
html_url: https://github.com/pydata/xarray/issues/4648#issuecomment-963563959
issue_url: https://api.github.com/repos/pydata/xarray/issues/4648
node_id: IC_kwDOAMm_X845btG3
user: dcherian (2448579)
created_at: 2021-11-08T20:54:19Z
updated_at: 2021-11-08T20:54:19Z
author_association: MEMBER
body:

@TomAugspurger are you still in charge of the pydata benchmarking machine? If so, could you please add xarray to the list (https://pandas.pydata.org/speed/)? @Illviljan has made major improvements, so it should be a lot faster now.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Comprehensive benchmarking suite (756425955)

id: 901370189
html_url: https://github.com/pydata/xarray/issues/4648#issuecomment-901370189
issue_url: https://api.github.com/repos/pydata/xarray/issues/4648
node_id: IC_kwDOAMm_X841udFN
user: dcherian (2448579)
created_at: 2021-08-18T19:24:15Z
updated_at: 2021-08-18T19:26:31Z
author_association: MEMBER
body:

Looks like Quansight thinks that GH Actions is a good place to benchmark scikit-learn (https://labs.quansight.org/blog/2021/08/github-actions-benchmarks/), so maybe we can set that up for our existing benchmarks.

Here's the workflow: https://github.com/jaimergp/scikit-image/blob/main/.github/workflows/benchmarks-cron.yml

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Comprehensive benchmarking suite (756425955)
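
The linked post and workflow appear to be about running airspeed velocity (asv) benchmarks on GitHub Actions; asv is also the tool behind https://pandas.pydata.org/speed/. As a minimal sketch of what an asv-style benchmark looks like (the class name, dataset shape, and sizes below are illustrative, not taken from xarray's suite):

import numpy as np
import xarray as xr


class DatasetReduction:
    """Sketch of an asv benchmark; asv calls setup() before timing."""

    def setup(self):
        # Hypothetical workload; the sizes are arbitrary.
        self.ds = xr.Dataset(
            {"a": (("x", "y"), np.random.rand(1000, 1000))}
        )

    def time_mean(self):
        # asv reports the wall time of any method named time_*.
        self.ds["a"].mean(dim="x")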

id: 738969196
html_url: https://github.com/pydata/xarray/issues/4648#issuecomment-738969196
issue_url: https://api.github.com/repos/pydata/xarray/issues/4648
node_id: MDEyOklzc3VlQ29tbWVudDczODk2OTE5Ng==
user: dcherian (2448579)
created_at: 2020-12-04T19:19:55Z
updated_at: 2020-12-04T19:22:04Z
author_association: MEMBER
body:

Thanks @scottyhq

One other thing that often gets neglected in test suites is operating on remote data.

This lines up with the "pangeo integration tests" idea that came up in a Pangeo meeting (cc @rabernat).

Regardless of whether it fits here, I think adding benchmarks and tests for the xarray+zarr+fsspec (or xarray+mfdataset+netCDF) stack is an important and unmet need of the Pangeo community in general that we could address.

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Comprehensive benchmarking suite (756425955)
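
As a sketch of the kind of remote-data exercise this comment proposes, assuming a hypothetical public zarr store (the URL and the anon=True flag are placeholders for a real bucket):

import fsspec
import xarray as xr

# Hypothetical store; substitute a real public zarr dataset.
mapper = fsspec.get_mapper("s3://some-bucket/some-dataset.zarr", anon=True)
ds = xr.open_zarr(mapper, consolidated=True)

# Pull one element of one variable so the read actually hits remote storage.
var = next(iter(ds.data_vars))
ds[var].isel({dim: 0 for dim in ds[var].dims}).load()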

Table schema:

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
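
Since user and issue are foreign keys, resolving them to readable values takes a join. A sketch, assuming the referenced users and issues tables carry login and title columns (as in the github-to-sqlite schema this database appears to follow):

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy, as above
rows = conn.execute(
    """
    SELECT c.id, u.login, i.title, c.created_at
    FROM issue_comments AS c
    JOIN users AS u ON u.id = c.[user]
    JOIN issues AS i ON i.id = c.issue
    WHERE c.issue = 756425955
    ORDER BY c.updated_at DESC
    """
).fetchall()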