issue_comments


2 rows where author_association = "MEMBER", issue = 274797981 and user = 1197350 sorted by updated_at descending


id: 345434065 · user: rabernat (1197350) · author_association: MEMBER
created_at: 2017-11-18T10:47:48Z · updated_at: 2017-11-18T10:47:48Z
html_url: https://github.com/pydata/xarray/issues/1725#issuecomment-345434065
issue_url: https://api.github.com/repos/pydata/xarray/issues/1725
node_id: MDEyOklzc3VlQ29tbWVudDM0NTQzNDA2NQ==

Since #1532, the repr for dask-backed variables does not show any values (to avoid triggering computations). But numpy-backed, lazily masked-and-scaled data is treated differently: its values are shown.

This highlights an important difference between how LazilyIndexedArray and dask arrays work: with dask, you either compute a whole chunk or you compute nothing. With LazilyIndexedArray, you can slice the array however you want, and mask_and_scale is applied only to the specific items you selected. This small difference has big performance implications, especially for the "medium sized" datasets @benbovy refers to. If we changed decode_cf to use dask, the repr would have to compute a whole chunk just to show a few values.

So on second thought, maybe the system we have now is better than using dask for "everything lazy."
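The element-wise laziness described above can be sketched in plain numpy. This is a hypothetical illustration, not xarray's actual LazilyIndexedArray implementation: the wrapper class, its name, and its parameters are invented for the example.

```python
import numpy as np


class LazyMaskAndScale:
    """Hypothetical sketch: defer mask-and-scale decoding until items are indexed."""

    def __init__(self, raw, scale_factor=1.0, add_offset=0.0, fill_value=None):
        self.raw = raw
        self.scale_factor = scale_factor
        self.add_offset = add_offset
        self.fill_value = fill_value

    def __getitem__(self, key):
        # Only the selected items are decoded, so a repr that previews a few
        # values stays cheap even for a large array. A dask-backed variable
        # would instead have to compute at least one whole chunk.
        values = np.asarray(self.raw[key], dtype="float64")
        if self.fill_value is not None:
            values = np.where(values == self.fill_value, np.nan, values)
        return values * self.scale_factor + self.add_offset


raw = np.arange(1_000_000, dtype="int32")
lazy = LazyMaskAndScale(raw, scale_factor=0.01, add_offset=5.0)
print(lazy[:3])  # decodes just three items, never the full array
```

The key point is that `__getitem__` applies the decoding arithmetic only to the sliced items; nothing is computed up front.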

reactions: none (total_count 0)
issue: Switch our lazy array classes to use Dask instead? (274797981)
id: 345374271 · user: rabernat (1197350) · author_association: MEMBER
created_at: 2017-11-17T21:46:14Z · updated_at: 2017-11-17T21:46:14Z
html_url: https://github.com/pydata/xarray/issues/1725#issuecomment-345374271
issue_url: https://api.github.com/repos/pydata/xarray/issues/1725
node_id: MDEyOklzc3VlQ29tbWVudDM0NTM3NDI3MQ==

I just had to confront and understand how lazy CF decoding works in order to move forward with #1528. In my initial implementation, I applied chunking to variables directly in ZarrStore. However, I learned that decode_cf_variable does not preserve variable chunks, so I moved the chunking to after the call to decode_cf.

My impression after this exercise is that having two different definitions of "lazy" within xarray leads to developer confusion! So I favor making dask more central to xarray's data model.
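The decode-then-chunk ordering described above can be shown on a tiny synthetic dataset. `xr.decode_cf` and `Dataset.chunk` are real xarray APIs; the dataset and its attributes are invented for illustration, and the `try`/`except` guards against dask not being installed.

```python
import numpy as np
import xarray as xr

# A small dataset carrying CF encoding attributes (scale_factor/add_offset),
# invented for this example.
encoded = xr.Dataset(
    {
        "temp": (
            "x",
            np.arange(10, dtype="int16"),
            {"scale_factor": 0.5, "add_offset": -2.0},
        )
    }
)

# Decode first: mask-and-scale is applied (lazily) to the numpy-backed data.
decoded = xr.decode_cf(encoded)

# Then chunk: doing it in the opposite order would not survive decoding,
# since decode_cf_variable does not preserve chunks.
try:
    chunked = decoded.chunk({"x": 5})
except (ImportError, ValueError):
    chunked = decoded  # dask not installed; the ordering point still holds

print(decoded["temp"].values)
```

This mirrors the fix described in the comment: chunking moved to after the call to decode_cf.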

reactions: none (total_count 0)
issue: Switch our lazy array classes to use Dask instead? (274797981)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette