issue_comments


1 row where author_association = "CONTRIBUTOR" and issue = 196541604 sorted by updated_at descending
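For reference, a minimal SQL sketch of this row selection, using the table and column names from the CREATE TABLE statement shown at the end of this page (Datasette's actual generated SQL may differ slightly):

-- Select the single matching comment, newest update first.
select *
from issue_comments
where author_association = 'CONTRIBUTOR'
  and issue = 196541604
order by updated_at desc;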

id: 268359031
html_url: https://github.com/pydata/xarray/issues/1173#issuecomment-268359031
issue_url: https://api.github.com/repos/pydata/xarray/issues/1173
node_id: MDEyOklzc3VlQ29tbWVudDI2ODM1OTAzMQ==
user: pwolfram (4295853)
created_at: 2016-12-20T21:03:31Z
updated_at: 2016-12-20T21:03:31Z
author_association: CONTRIBUTOR
body:

@JoyMonteiro and @shoyer, as I've been thinking about this more, and especially regarding #463, I was planning on building on opener from #1128 to essentially open, read, and then close a file each time a read (get) operation was needed on a netCDF file. My initial view was that output fundamentally would be serial, but as @JoyMonteiro points out, there may be a benefit to making a provision for parallel output. However, we will probably run into the same netCDF limitation on the number of open files. Would we want similar functionality on opener for the set as well as the get methods? I'm not sure how something like sync would work in this context and suspect this could lead to problems. Presumably we would require writing each dimension, attribute, variable, etc. at each call, with its own associated open, write, and close. I obviously need to find the time to dig into this further...

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Some queries (196541604)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
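As a usage note, the [user] and [issue] columns above are foreign keys into the users and issues tables, so a comment can be joined back to its author and its issue. A minimal sketch follows; the [login] column on users and the [title] column on issues are assumed names, not shown on this page:

-- Join each comment to its author and parent issue via the foreign keys.
-- users.login and issues.title are assumed column names.
select
    issue_comments.id,
    users.login  as author,
    issues.title as issue_title,
    issue_comments.created_at
from issue_comments
join users  on users.id  = issue_comments.[user]
join issues on issues.id = issue_comments.issue
order by issue_comments.updated_at desc;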