issue_comments


3 rows where author_association = "MEMBER" and issue = 148771214 sorted by updated_at descending


id: 424782418
html_url: https://github.com/pydata/xarray/issues/826#issuecomment-424782418
issue_url: https://api.github.com/repos/pydata/xarray/issues/826
node_id: MDEyOklzc3VlQ29tbWVudDQyNDc4MjQxOA==
user: rabernat (1197350)
created_at: 2018-09-26T16:28:08Z
updated_at: 2018-09-26T16:28:08Z
author_association: MEMBER

This package addresses this issue: https://github.com/recipy/recipy

They are working on xarray support: https://github.com/recipy/recipy/issues/176

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Storing history of xarray operations (148771214)
id: 210694521
html_url: https://github.com/pydata/xarray/issues/826#issuecomment-210694521
issue_url: https://api.github.com/repos/pydata/xarray/issues/826
node_id: MDEyOklzc3VlQ29tbWVudDIxMDY5NDUyMQ==
user: shoyer (1217238)
created_at: 2016-04-16T00:17:46Z
updated_at: 2016-04-16T00:17:46Z
author_association: MEMBER

Yes, my main concern is code bloat. Storing things like the computation graph and the command-line flags used to invoke a script is certainly useful, and I use versions of this stuff all the time. But these features are orthogonal to xarray's labeled-data focus, so they belong in a separate library.

If you want to take this approach, you might start by using something like dask.imperative, and extracting the task dependencies from the resulting task graph. Or you could even try to work with the full dask graphs created by using dask.array with xarray, but these can get pretty big.
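The idea of extracting task dependencies from a graph can be illustrated without dask itself (note that `dask.imperative` is the historical name of what is now `dask.delayed`). The sketch below mirrors dask's graph format — a dict mapping each task key to a `(function, *args)` tuple — but the helper and the task keys are illustrative choices, not a dask API:

```python
def get_dependencies(graph, key):
    """Return the keys that the task at `key` depends on.

    The graph format mirrors dask's: a dict mapping each key to a
    (function, *args) tuple; any argument that is itself a key in the
    graph is a dependency of this task.
    """
    _, *args = graph[key]
    return [a for a in args if a in graph]

# A tiny two-step pipeline: "load" a file, then "process" the result.
graph = {
    "load-1": (lambda path: path, "input.nc"),
    "process-1": (lambda data: data, "load-1"),
}

print(get_dependencies(graph, "process-1"))  # ['load-1']
```

Walking the graph this way recovers the lineage of each result, which is exactly the provenance information this issue asks about.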

Getting parameters from the calling script is even easier -- just inspect sys.argv and set it as an attribute before saving files (or write your own function). The main complexity here is picking a convention, not implementing it.
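A minimal sketch of that convention — recording `sys.argv` as an attribute before saving. The helper name and the `history` attribute key are illustrative choices (loosely following the netCDF convention of a free-form `history` attribute), not an xarray API:

```python
import sys

def record_invocation(attrs):
    """Store the invoking command line under a 'history' key.

    (Helper name and attribute key are illustrative choices, not an
    xarray API; with xarray you would apply this to ds.attrs before
    calling ds.to_netcdf(...).)
    """
    attrs = dict(attrs)  # don't mutate the caller's dict
    attrs["history"] = " ".join(sys.argv)
    return attrs

attrs = record_invocation({"title": "my dataset"})
```

As the comment says, the work here is agreeing on a convention (which attribute name, what format), not the implementation itself.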

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Storing history of xarray operations (148771214)
id: 210677143
html_url: https://github.com/pydata/xarray/issues/826#issuecomment-210677143
issue_url: https://api.github.com/repos/pydata/xarray/issues/826
node_id: MDEyOklzc3VlQ29tbWVudDIxMDY3NzE0Mw==
user: shoyer (1217238)
created_at: 2016-04-15T23:04:21Z
updated_at: 2016-04-15T23:04:21Z
author_association: MEMBER

This might be out of scope for xarray.

It's relatively straightforward to build computation graphs, but summarizing them in a useful, succinct way is hard. There are a lot of judgment calls involved.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Storing history of xarray operations (148771214)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
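Given the schema above, the query behind this page (comments where author_association = "MEMBER" and issue = 148771214, sorted by updated_at descending) can be reproduced with Python's built-in sqlite3. This is a sketch against an in-memory copy containing only the columns the query touches, populated with the three rows shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [id] INTEGER PRIMARY KEY,
   [user] INTEGER,
   [updated_at] TEXT,
   [author_association] TEXT,
   [issue] INTEGER
);
INSERT INTO issue_comments VALUES
  (424782418, 1197350, '2018-09-26T16:28:08Z', 'MEMBER', 148771214),
  (210694521, 1217238, '2016-04-16T00:17:46Z', 'MEMBER', 148771214),
  (210677143, 1217238, '2016-04-15T23:04:21Z', 'MEMBER', 148771214);
""")

# ISO 8601 timestamps sort correctly as text, so ORDER BY on the TEXT
# updated_at column yields true chronological order.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = ? AND issue = ? "
    "ORDER BY updated_at DESC",
    ("MEMBER", 148771214),
).fetchall()
print([r[0] for r in rows])  # [424782418, 210694521, 210677143]
```

The `idx_issue_comments_issue` index declared above is what lets the `issue = 148771214` filter avoid a full table scan on the real database.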
Powered by Datasette · Queries took 12.37ms · About: xarray-datasette