
issue_comments


3 rows where author_association = "MEMBER", issue = 503163130, and user = 6213168, sorted by updated_at descending


id: 539237559 · user: crusaderky (6213168) · author_association: MEMBER
created_at: 2019-10-07T22:51:09Z · updated_at: 2019-10-07T22:51:09Z
html_url: https://github.com/pydata/xarray/pull/3375#issuecomment-539237559
issue_url: https://api.github.com/repos/pydata/xarray/issues/3375
node_id: MDEyOklzc3VlQ29tbWVudDUzOTIzNzU1OQ==

> Would you be willing to add a few of these cases to the benchmarks?

Yes, in due time. It's out of scope for this PR though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Speed up isel and __getitem__ (503163130)

id: 538910300 · user: crusaderky (6213168) · author_association: MEMBER
created_at: 2019-10-07T09:13:07Z · updated_at: 2019-10-07T09:13:51Z
html_url: https://github.com/pydata/xarray/pull/3375#issuecomment-538910300
issue_url: https://api.github.com/repos/pydata/xarray/issues/3375
node_id: MDEyOklzc3VlQ29tbWVudDUzODkxMDMwMA==

I see that all tests in benchmarks/indexing.py use arrays with 2-6 million points. While this is important for spotting any case where the underlying numpy functions start being unnecessarily called more than once, it also means that any performance improvement or degradation in the pure-Python code will be completely drowned out.
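The point above can be illustrated with plain NumPy and timeit (a standalone sketch, not xarray's benchmark suite; the array sizes are assumptions): basic slicing returns a view, so its cost is almost entirely fixed Python-level overhead and barely changes with array size, while an operation that touches every element scales with the data and drowns that overhead out.

```python
import timeit

import numpy as np

tiny = np.zeros((100, 100))    # 10 thousand points
big = np.zeros((2000, 3000))   # 6 million points, like the benchmarks

# Basic slicing creates a view: no data is copied, so the runtime is
# essentially the fixed Python-level overhead of the indexing call,
# regardless of array size.
t_slice_tiny = timeit.timeit(lambda: tiny[:50, :50], number=10_000)
t_slice_big = timeit.timeit(lambda: big[:50, :50], number=10_000)

# Summing touches every element, so the runtime scales with the data
# and any constant Python overhead becomes invisible.
t_sum_tiny = timeit.timeit(lambda: tiny.sum(), number=100)
t_sum_big = timeit.timeit(lambda: big.sum(), number=100)

print(f"slice: tiny={t_slice_tiny:.4f}s big={t_slice_big:.4f}s")  # comparable
print(f"sum:   tiny={t_sum_tiny:.4f}s big={t_sum_big:.4f}s")      # big much slower
```

A benchmark built only on the multi-million-point case behaves like the `sum` timings: it measures the data-proportional work, not the per-call overhead that this PR targets.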

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Speed up isel and __getitem__ (503163130)

id: 538906863 · user: crusaderky (6213168) · author_association: MEMBER
created_at: 2019-10-07T09:04:04Z · updated_at: 2019-10-07T09:08:12Z
html_url: https://github.com/pydata/xarray/pull/3375#issuecomment-538906863
issue_url: https://api.github.com/repos/pydata/xarray/issues/3375
node_id: MDEyOklzc3VlQ29tbWVudDUzODkwNjg2Mw==

@jhamman hm, I'm looking at it now for the first time. At first sight it's a good start, but it's missing some important use cases:

  • DataArray slicing
  • slicing when there are no IndexVariables
  • .sel() (no benchmarks for it at all)
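A hypothetical asv-style sketch of benchmarks covering these missing cases (the class, method names, and array sizes are illustrative, not taken from xarray's actual benchmarks/indexing.py):

```python
import numpy as np
import xarray as xr

class MissingIndexingCases:
    """Hypothetical asv benchmark class; asv times each time_* method."""

    def setup(self):
        data = np.random.randn(100, 100)
        # With coordinates: IndexVariables exist, so .sel() can do
        # label-based lookups.
        self.da = xr.DataArray(
            data, dims=("x", "y"),
            coords={"x": np.arange(100), "y": np.arange(100)},
        )
        # Without coordinates: no IndexVariables, only positional indexing.
        self.da_nocoords = xr.DataArray(data, dims=("x", "y"))

    def time_dataarray_isel(self):
        self.da.isel(x=slice(10, 50))            # DataArray slicing

    def time_isel_no_indexvariables(self):
        self.da_nocoords.isel(x=slice(10, 50))   # slicing without IndexVariables

    def time_sel(self):
        self.da.sel(x=slice(10, 50))             # label-based .sel()
```

Note that `.sel` with a slice is inclusive of both label endpoints, unlike positional `.isel`, so the two slicing paths exercise different code.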
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Speed up isel and __getitem__ (503163130)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
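The schema above can be exercised with Python's built-in sqlite3 module. This sketch recreates the table (with the REFERENCES clauses dropped, since the users and issues tables are not defined here), inserts one row taken from the first comment on this page, and runs the same filter and ordering this page uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE [issue_comments] (
       [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
       [node_id] TEXT, [user] INTEGER, [created_at] TEXT,
       [updated_at] TEXT, [author_association] TEXT, [body] TEXT,
       [reactions] TEXT, [performed_via_github_app] TEXT, [issue] INTEGER
    )"""
)

# One illustrative row, using values from the first comment above.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (539237559, 6213168, '2019-10-07T22:51:09Z', 'MEMBER', 503163130)"
)

# The same WHERE clause and sort order as this page's view.
rows = conn.execute(
    """SELECT id, updated_at FROM issue_comments
       WHERE author_association = 'MEMBER'
         AND issue = 503163130
         AND user = 6213168
       ORDER BY updated_at DESC"""
).fetchall()
print(rows)  # -> [(539237559, '2019-10-07T22:51:09Z')]
```

Because `updated_at` is stored as ISO 8601 text, lexicographic `ORDER BY` on it sorts chronologically, which is why the TEXT column type works for date ordering here.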
Powered by Datasette · Queries took 286.129ms · About: xarray-datasette