
issue_comments


1 row where issue = 759709924 and user = 6130352 sorted by updated_at descending


id: 740993933
html_url: https://github.com/pydata/xarray/issues/4663#issuecomment-740993933
issue_url: https://api.github.com/repos/pydata/xarray/issues/4663
node_id: MDEyOklzc3VlQ29tbWVudDc0MDk5MzkzMw==
user: eric-czech (6130352)
created_at: 2020-12-08T20:38:44Z
updated_at: 2020-12-08T20:39:23Z
author_association: NONE

body:

> I like using our raise_if_dask_computes context since it points out where the compute is happening

Oo nice, great to know about that.
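For anyone landing on this page: raise_if_dask_computes lives in xarray's test helpers, and the idea behind it is a counting scheduler plugged into dask's configuration, which raises as soon as anything inside the block actually computes. A minimal sketch of that idea (illustrative names, not xarray's exact code):

import dask
import dask.array as da

class CountingScheduler:
    """Scheduler wrapper that counts computes and raises past a limit."""

    def __init__(self, max_computes=0):
        self.total_computes = 0
        self.max_computes = max_computes

    def __call__(self, dsk, keys, **kwargs):
        self.total_computes += 1
        if self.total_computes > self.max_computes:
            raise RuntimeError(
                f"Too many computes: {self.total_computes} > {self.max_computes}"
            )
        # Delegate the actual work to dask's synchronous scheduler.
        return dask.get(dsk, keys, **kwargs)

def raise_if_dask_computes(max_computes=0):
    # dask.config.set(...) doubles as a context manager, so the guard
    # is only active inside the `with` block.
    return dask.config.set(scheduler=CountingScheduler(max_computes))

x = da.arange(10, chunks=5)
with raise_if_dask_computes():
    y = x + 1       # fine: building the graph stays lazy
    # y.compute()   # would raise RuntimeError at the point of compute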

> This looks like a duplicate of #2801. If you agree, can we move the conversation there?

Defining a general strategy for handling unknown chunk sizes seems like a good umbrella for it. I would certainly mention the multiple executions though; that issue seems somewhat orthogonal.

Have there been prior discussions about the fact that dask doesn't handle consecutive slicing operations well (i.e. applying filters one after the other)? I am wondering how far off proper support is in dask versus simply making the current behavior work well. That is, maybe forcing evaluation of indexer arrays is the practical solution for the foreseeable future, as long as xarray only does it once.
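A minimal sketch of that "evaluate the indexer once" workaround; ds and mask here are hypothetical stand-ins for any Dataset with dask-backed variables and a dask-backed boolean indexer:

import dask.array as da
import xarray as xr

ds = xr.Dataset(
    {
        "a": ("x", da.arange(8, chunks=4)),
        "b": ("x", da.ones(8, chunks=4)),
    }
)
mask = xr.DataArray(da.arange(8, chunks=4) % 2 == 0, dims="x")

# Passing the lazy indexer directly can evaluate it repeatedly,
# once per indexed variable (the behavior reported in this issue):
# subset = ds.isel(x=mask)

# Workaround: evaluate the indexer exactly once, then index with
# the in-memory result so the selection triggers no further computes.
subset = ds.isel(x=mask.compute())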

reactions: none (all reaction counts are 0)
issue: Fancy indexing a Dataset with dask DataArray triggers multiple computes (759709924)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
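
For reference, the row shown on this page corresponds to the filter in the summary above ("issue = 759709924 and user = 6130352 sorted by updated_at descending") and could be reproduced against the underlying SQLite database along these lines; the filename github.db is an assumption:

import sqlite3

# "github.db" is an assumed name for the Datasette-backed database file.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT [id], [user], [created_at], [updated_at], [body]
    FROM [issue_comments]
    WHERE [issue] = ? AND [user] = ?
    ORDER BY [updated_at] DESC
    """,
    (759709924, 6130352),
).fetchall()
for row in rows:
    print(row)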