
issue_comments


3 rows where author_association = "MEMBER", issue = 653430454 and user = 2448579 sorted by updated_at descending


663135877 · dcherian (2448579) · MEMBER · created 2020-07-23T17:31:18Z · updated 2020-07-23T17:31:18Z
https://github.com/pydata/xarray/issues/4208#issuecomment-663135877

Re: rechunk, this should be part of the spec I guess. We need this for DataArray.chunk().

xarray does do some automatic rechunking in variable.py. But this comment:

    # chunked data should come out with the same chunks; this makes
    # it feasible to combine shifted and unshifted data
    # TODO: remove this once dask.array automatically aligns chunks

suggests that we could delete that automatic rechunking today.

This will probably be very fast, because you're just returning the name of the underlying dask array as well as the units of the pint array/quantity.

ah yes, we can rely on the underlying array library to optimize this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Support for duck Dask Arrays (653430454)
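The ".rechunk forwarding" idea from the comment above can be sketched in a few lines. This is a hypothetical stand-in, not dask's or pint's real code: FakeDaskArray mimics dask.array.Array's rechunk, and Quantity mimics a units-carrying duck dask array that forwards the call and re-wraps the result.

```python
class FakeDaskArray:
    """Stand-in for dask.array.Array: only records its chunk structure."""

    def __init__(self, chunks):
        self.chunks = chunks

    def rechunk(self, chunks):
        # Like dask, return a *new* array with the requested chunks.
        return FakeDaskArray(chunks)


class Quantity:
    """Stand-in for a duck dask array (e.g. a pint Quantity wrapping dask)."""

    def __init__(self, magnitude, units):
        self.magnitude = magnitude
        self.units = units

    def rechunk(self, chunks):
        # Forward to the wrapped array, then re-wrap to keep the units.
        return Quantity(self.magnitude.rechunk(chunks), self.units)


q = Quantity(FakeDaskArray(((10, 10),)), "metre")
q2 = q.rechunk(((5, 5, 5, 5),))
print(q2.magnitude.chunks, q2.units)  # -> ((5, 5, 5, 5),) metre
```

Formalizing this in a spec would amount to requiring that any duck dask array expose a rechunk method with these forwarding semantics.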
663117842 · dcherian (2448579) · MEMBER · created 2020-07-23T16:55:11Z · updated 2020-07-23T16:55:11Z
https://github.com/pydata/xarray/issues/4208#issuecomment-663117842

A couple of things came up in #4221:

1. How do we ask a duck dask array to rechunk itself? pint seems to forward the .rechunk call, but that isn't formalized anywhere AFAICT.
2. Less important: should duck dask arrays cache their token somewhere? dask.array uses .name to do this, and xarray uses that to check equality cheaply. We can use tokenize of course, but I'm wondering if it's worth asking duck dask arrays to cache their token as an optimization.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Support for duck Dask Arrays (653430454)
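The token-caching optimization mentioned above could look roughly like this. Everything here is illustrative: tokenize() is a hashlib stand-in for dask.base.tokenize, and DuckDaskArray is a hypothetical class, not part of any library. The point is that the token is computed once per instance and reused for cheap equality checks, the way dask.array reuses .name.

```python
import hashlib
from functools import cached_property


def tokenize(*args):
    # Stand-in for dask.base.tokenize: a deterministic content hash.
    return hashlib.sha1(repr(args).encode()).hexdigest()


class DuckDaskArray:
    def __init__(self, data, units):
        self.data = data
        self.units = units

    @cached_property
    def _token(self):
        # Computed lazily on first access, then cached on the instance,
        # so repeated equality checks don't re-hash the data.
        return tokenize(self.data, self.units)


a = DuckDaskArray((1, 2, 3), "m")
b = DuckDaskArray((1, 2, 3), "m")
print(a._token == b._token)  # -> True: equal contents give equal tokens
```

A caveat with any such cache is invalidation: it is only safe if the wrapped data is immutable (or the cache is cleared on mutation), which is part of why asking duck arrays to do this would need to be spelled out in a spec.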
656358078 · dcherian (2448579) · MEMBER · created 2020-07-09T21:22:56Z · updated 2020-07-09T21:22:56Z
https://github.com/pydata/xarray/issues/4208#issuecomment-656358078

We have https://github.com/pydata/xarray/blob/master/xarray/core/pycompat.py, which defines dask_array_type and sparse_array_type; we then use isinstance(da, dask_array_type) in a bunch of places (e.g. duck_array_ops).

Re: duck array check: @keewis added this recently https://github.com/pydata/xarray/blob/f3ca63a4ac5c091a92085b477a0d34c08df88aa6/xarray/core/utils.py#L250-L253

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Support for duck Dask Arrays (653430454)
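The two checks mentioned in the comment above can be sketched as follows. Both are simplified reconstructions, not xarray's exact code: dask_array_type mirrors the optional-import tuple from pycompat.py, and is_duck_array is an attribute-based check in the spirit of the utils.py lines linked above.

```python
try:
    import dask.array
    dask_array_type = (dask.array.Array,)
except ImportError:
    # Mirrors pycompat.py: isinstance(x, ()) is always False,
    # so the check degrades gracefully when dask is absent.
    dask_array_type = ()


def is_duck_array(value):
    # Duck typing: accept anything exposing the core numpy array
    # attributes and protocol hooks, regardless of its actual class.
    return (
        hasattr(value, "ndim")
        and hasattr(value, "shape")
        and hasattr(value, "dtype")
        and hasattr(value, "__array_function__")
        and hasattr(value, "__array_ufunc__")
    )


class MinimalDuckArray:
    """Toy object that quacks like an array (hypothetical)."""

    ndim = 1
    shape = (3,)
    dtype = "float64"

    def __array_function__(self, func, types, args, kwargs): ...
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): ...


print(is_duck_array(MinimalDuckArray()))  # -> True
print(is_duck_array([1, 2, 3]))           # -> False
```

The isinstance check pins behavior to a known class, while the duck check admits any conforming wrapper (pint, sparse, cupy-backed objects), which is exactly the trade-off this issue is about.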


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 165.308ms · About: xarray-datasette