
issue_comments


4 rows where author_association = "COLLABORATOR", issue = 1421441672, and user = 43316012, sorted by updated_at descending
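Roughly the query behind this view, as a minimal sketch (SQLite syntax; table and column names are taken from the schema at the bottom of the page):

select *
from issue_comments
where author_association = 'COLLABORATOR'
  and issue = 1421441672
  and [user] = 43316012
order by updated_at desc;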




Facets

  • user: headtr1ck · 4
  • issue: Optimize some copying · 4
  • author_association: COLLABORATOR · 4
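Each facet count is a simple aggregate over the filtered rows; a sketch of one such query, assuming Datasette's usual per-column counting:

select author_association, count(*) as n
from issue_comments
where issue = 1421441672
  and [user] = 43316012
group by author_association
order by n desc;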

id: 1331203925
html_url: https://github.com/pydata/xarray/pull/7209#issuecomment-1331203925
issue_url: https://api.github.com/repos/pydata/xarray/issues/7209
node_id: IC_kwDOAMm_X85PWI9V
user: headtr1ck (43316012)
created_at: 2022-11-29T19:47:08Z
updated_at: 2022-11-29T19:47:08Z
author_association: COLLABORATOR
body:

Are we merging this anyway, or should we try harder to find a benchmark that shows some improvement?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Optimize some copying (1421441672)

id: 1304652497
html_url: https://github.com/pydata/xarray/pull/7209#issuecomment-1304652497
issue_url: https://api.github.com/repos/pydata/xarray/issues/7209
node_id: IC_kwDOAMm_X85Nw2rR
user: headtr1ck (43316012)
created_at: 2022-11-05T22:23:41Z
updated_at: 2022-11-05T22:31:27Z
author_association: COLLABORATOR
body:

I added a benchmark for swap_dims, which uses Variable.to_index_variable when swapping into an existing coord, but I cannot see any significant improvement...

Any ideas what else to test? Maybe Indexes.copy_indexes, but I have not found a more high-level method that can take advantage of the memo dict...

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Optimize some copying (1421441672)

id: 1300775683
html_url: https://github.com/pydata/xarray/pull/7209#issuecomment-1300775683
issue_url: https://api.github.com/repos/pydata/xarray/issues/7209
node_id: IC_kwDOAMm_X85NiEMD
user: headtr1ck (43316012)
created_at: 2022-11-02T16:05:13Z
updated_at: 2022-11-02T16:05:13Z
author_association: COLLABORATOR
body:

The change does matter, but deep copies are still much more expensive than they used to be (as is to be expected, I guess).

Do you by any chance know which parts have improved, so we can add them as a benchmark here?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Optimize some copying (1421441672)

id: 1295881068
html_url: https://github.com/pydata/xarray/pull/7209#issuecomment-1295881068
issue_url: https://api.github.com/repos/pydata/xarray/issues/7209
node_id: IC_kwDOAMm_X85NPZNs
user: headtr1ck (43316012)
created_at: 2022-10-29T15:56:23Z
updated_at: 2022-10-29T15:56:23Z
author_association: COLLABORATOR
body:

> Thanks @headtr1ck, do we have a benchmark for this? If not, can we add one please?

Since the benchmark didn't change, we either don't have one or my change doesn't matter much, haha.

I think the most important change is the shallow copy for Variable.to_index_variable, but I will have to check how to test this in a useful way; it is probably better to test some functions that use it indirectly.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Optimize some copying (1421441672)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
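
The two indexes cover the table's common access paths; for example, both of the following lookups should be able to use them rather than a full table scan (a sketch, using only columns defined above):

-- All comments on one issue, newest activity first
-- (idx_issue_comments_issue covers the filter)
select id, created_at, author_association
from issue_comments
where issue = 1421441672
order by updated_at desc;

-- All comments by one user (idx_issue_comments_user)
select id, issue, created_at
from issue_comments
where [user] = 43316012;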