
issue_comments


3 rows where issue = 282178751 and user = 3019665 sorted by updated_at descending




Comment 367164232 · https://github.com/pydata/xarray/issues/1784#issuecomment-367164232
issue_url: https://api.github.com/repos/pydata/xarray/issues/1784 · node_id: MDEyOklzc3VlQ29tbWVudDM2NzE2NDIzMg==
user: jakirkham (3019665) · author_association: NONE
created_at: 2018-02-20T23:58:47Z · updated_at: 2018-02-20T23:58:47Z

What is `store` in this case? Sorry, I'm not very familiar with how xarray does things.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Add compute=False keywords to `to_foo` functions (282178751)
Comment 352036122 · https://github.com/pydata/xarray/issues/1784#issuecomment-352036122
issue_url: https://api.github.com/repos/pydata/xarray/issues/1784 · node_id: MDEyOklzc3VlQ29tbWVudDM1MjAzNjEyMg==
user: jakirkham (3019665) · author_association: NONE
created_at: 2017-12-15T15:38:14Z · updated_at: 2017-12-15T15:38:14Z

In case anyone is curious, PR ( https://github.com/dask/dask/pull/2980 ) contains this work. Feedback welcome.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Add compute=False keywords to `to_foo` functions (282178751)
Comment 351837521 · https://github.com/pydata/xarray/issues/1784#issuecomment-351837521
issue_url: https://api.github.com/repos/pydata/xarray/issues/1784 · node_id: MDEyOklzc3VlQ29tbWVudDM1MTgzNzUyMQ==
user: jakirkham (3019665) · author_association: NONE
created_at: 2017-12-14T21:13:30Z · updated_at: 2017-12-14T21:13:30Z

Just to give a brief synopsis of what we are working on in Dask, in case it is valuable for this or other contexts, an overview of the relevant work is given below.

With Matthew's help, I am trying to add a `keep` argument to `da.store`. By default `keep=False`, which matches the current behavior of `da.store`. If `keep=True`, however, it returns Dask Arrays that can lazily load the data written by `da.store`, allowing the stored result to feed into later computations before it is fully written. The `compute` argument of `da.store` controls whether the storage tasks are submitted immediately (embedding Futures in the resulting Dask Array) or deferred until a later computation step triggers them.

This sort of functionality could be useful in a variety of situations, including the one Matthew described above. It could also be useful for viewing partially computed results before they are fully done, or for more rapid batching of computations with many intermediate values. There is also an opportunity to re-explore caching in this context, revisiting an area many people have previously shown interest in.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Add compute=False keywords to `to_foo` functions (282178751)
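The `compute` half of the behavior described in that comment can be sketched with `dask.array.store`, which already accepts `compute=False`; the proposed `keep`-style lazy reload is not shown, since its final API is not part of this page. A minimal sketch, assuming NumPy and Dask are installed:

```python
import numpy as np
import dask.array as da

# A small Dask array and an in-memory target that supports __setitem__.
x = da.arange(100, chunks=10)
target = np.zeros(100, dtype=x.dtype)

# With compute=False, da.store returns a lazy Delayed object instead of
# writing immediately; the actual write happens only on .compute().
delayed = da.store(x, target, compute=False)
assert target.sum() == 0  # nothing has been written yet

delayed.compute()         # triggers the storage tasks
assert (target == np.arange(100)).all()
```

This is the deferred-write path the comment describes: the store is described up front but only materialized when a later computation step asks for it.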


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
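The query that produced this page ("3 rows where issue = 282178751 and user = 3019665 sorted by updated_at descending") can be reproduced against the schema above with Python's built-in sqlite3 module. A self-contained sketch, with the foreign-key REFERENCES clauses dropped and only two of the rows shown above inserted:

```python
import sqlite3

# Recreate a simplified issue_comments schema in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two of the comments listed above (id, user, issue, updated_at only).
conn.executemany(
    "INSERT INTO issue_comments (id, [user], issue, updated_at) VALUES (?, ?, ?, ?)",
    [
        (367164232, 3019665, 282178751, "2018-02-20T23:58:47Z"),
        (352036122, 3019665, 282178751, "2017-12-15T15:38:14Z"),
    ],
)

# The page's query: filter by issue and user, newest update first.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? AND [user] = ? ORDER BY updated_at DESC",
    (282178751, 3019665),
).fetchall()
# Most recently updated comment comes first.
```

Since `updated_at` is stored as ISO-8601 text, plain lexicographic `ORDER BY` sorts chronologically, which is why the schema can keep timestamps as TEXT.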
Powered by Datasette · About: xarray-datasette