
issue_comments


5 rows where issue = 735199603 sorted by updated_at descending


user (3 distinct values)

  • max-sixty 3
  • dionhaefner 1
  • pep8speaks 1

author_association (3 distinct values)

  • MEMBER 3
  • CONTRIBUTOR 1
  • NONE 1

issue (1 distinct value)

  • Optimize slice_slice for faster isel of huge datasets 5
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort key, descending), author_association, body, reactions, performed_via_github_app, issue
id: 722580087 · user: max-sixty (5635139) · author_association: MEMBER
created_at: 2020-11-05T19:07:18Z · updated_at: 2020-11-05T19:07:18Z
html_url: https://github.com/pydata/xarray/pull/4560#issuecomment-722580087
issue_url: https://api.github.com/repos/pydata/xarray/issues/4560
node_id: MDEyOklzc3VlQ29tbWVudDcyMjU4MDA4Nw==

Awesome! This is a great PR, thank you v much @dionhaefner

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
id: 721689342 · user: pep8speaks (24736507) · author_association: NONE
created_at: 2020-11-04T11:56:14Z · updated_at: 2020-11-05T14:28:08Z
html_url: https://github.com/pydata/xarray/pull/4560#issuecomment-721689342
issue_url: https://api.github.com/repos/pydata/xarray/issues/4560
node_id: MDEyOklzc3VlQ29tbWVudDcyMTY4OTM0Mg==

Hello @dionhaefner! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers:

Comment last updated at 2020-11-05 14:28:08 UTC

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
id: 721815105 · user: max-sixty (5635139) · author_association: MEMBER
created_at: 2020-11-04T15:54:47Z · updated_at: 2020-11-04T15:54:47Z
html_url: https://github.com/pydata/xarray/pull/4560#issuecomment-721815105
issue_url: https://api.github.com/repos/pydata/xarray/issues/4560
node_id: MDEyOklzc3VlQ29tbWVudDcyMTgxNTEwNQ==

Awesome! Thanks @dionhaefner !

Any thoughts from others? Otherwise I'll merge in a day or so

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
id: 721690545 · user: dionhaefner (11994217) · author_association: CONTRIBUTOR
created_at: 2020-11-04T11:59:15Z · updated_at: 2020-11-04T11:59:15Z
html_url: https://github.com/pydata/xarray/pull/4560#issuecomment-721690545
issue_url: https://api.github.com/repos/pydata/xarray/issues/4560
node_id: MDEyOklzc3VlQ29tbWVudDcyMTY5MDU0NQ==

How's this?

Benchmark before: 15.7±0.1ms

Benchmark after: 112±4μs

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
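
The roughly 140× speedup reported in the comment above (15.7 ms down to 112 µs) comes from combining the outer and inner slices into a single slice instead of materializing index arrays. A minimal pure-Python sketch of that slice-composition idea; `compose_slices` is an illustrative helper, not xarray's actual `slice_slice`:

```python
def compose_slices(old, new, size):
    """Return one slice equivalent to applying `old` to a sequence of
    length `size`, then applying `new` to the result.

    Sketch of the slice-composition idea behind this PR; not xarray's
    implementation.
    """
    # Slicing a range is O(1) and never materializes elements, so let
    # the range type do the composition arithmetic.
    r = range(size)[old][new]
    stop = r.stop
    # A negative stop on a backward range means "run past index 0";
    # as a list slice that must be spelled None.
    if r.step < 0 and stop < 0:
        stop = None
    return slice(r.start, stop, r.step)


data = list(range(100))
combined = compose_slices(slice(10, 90), slice(5, None, 2), len(data))
assert data[combined] == data[10:90][5::2]
print(combined)  # slice(15, 90, 2)
```

Applying one composed slice instead of chaining two keeps isel's indexing lazy and copy-free, which is consistent with the ms-to-µs drop in the benchmark above.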
id: 721276226 · user: max-sixty (5635139) · author_association: MEMBER
created_at: 2020-11-03T17:37:18Z · updated_at: 2020-11-03T17:37:18Z
html_url: https://github.com/pydata/xarray/pull/4560#issuecomment-721276226
issue_url: https://api.github.com/repos/pydata/xarray/issues/4560
node_id: MDEyOklzc3VlQ29tbWVudDcyMTI3NjIyNg==

Hi @dionhaefner ! Thanks for the PR; we're happy to have you as a contributor.

The code looks good. I think it's fairly obviously quicker at least for larger arrays. If you'd be up for writing an asv, that would give us confidence and prevent regressions.

Would you like to add a note to whatsnew?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
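
The schema above can be exercised directly with Python's stdlib sqlite3. A sketch that rebuilds the table in memory (the foreign-key REFERENCES clauses are dropped because the users/issues tables aren't created here), loads the key columns of the five rows shown, and runs the query behind "5 rows where issue = 735199603 sorted by updated_at descending":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")

# Key columns of the five rows shown on this page.
rows = [
    (722580087, 5635139, "2020-11-05T19:07:18Z", "MEMBER", 735199603),
    (721689342, 24736507, "2020-11-05T14:28:08Z", "NONE", 735199603),
    (721815105, 5635139, "2020-11-04T15:54:47Z", "MEMBER", 735199603),
    (721690545, 11994217, "2020-11-04T11:59:15Z", "CONTRIBUTOR", 735199603),
    (721276226, 5635139, "2020-11-03T17:37:18Z", "MEMBER", 735199603),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    rows,
)

# The query behind this page; idx_issue_comments_issue turns the WHERE
# clause into an index lookup rather than a full table scan.
result = conn.execute(
    "SELECT id, author_association, updated_at FROM issue_comments"
    " WHERE issue = 735199603 ORDER BY updated_at DESC"
).fetchall()
for row in result:
    print(row)
```

ISO-8601 timestamps stored as TEXT compare correctly as plain strings, which is why `ORDER BY updated_at DESC` reproduces the page's ordering.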
Powered by Datasette · Queries took 14.875ms · About: xarray-datasette