issue_comments

1 row where issue = 427768540 and user = 1217238 sorted by updated_at descending

id: 478765910
html_url: https://github.com/pydata/xarray/issues/2862#issuecomment-478765910
issue_url: https://api.github.com/repos/pydata/xarray/issues/2862
node_id: MDEyOklzc3VlQ29tbWVudDQ3ODc2NTkxMA==
user: shoyer (1217238)
created_at: 2019-04-01T22:10:02Z
updated_at: 2019-04-01T22:10:02Z
author_association: MEMBER
body:

    Something like this should definitely work:

        f = xr.open_dataset('dataset.nc')
        n = f.compute()
        f.close()
        n.to_netcdf(path='dataset.nc')

    Deep copying maintains dask arrays, so they are still linked to the original file on disk. If you close that file, then dask is definitely going to error when you attempt to use it. I agree that there is an opportunity for better error messages here, though.

reactions:

    {
        "total_count": 1,
        "+1": 1,
        "-1": 0,
        "laugh": 0,
        "hooray": 0,
        "confused": 0,
        "heart": 0,
        "rocket": 0,
        "eyes": 0
    }

performed_via_github_app: (none)
issue: cannot properly .close() a dataset opened with `chunks` argument? (427768540)
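
A minimal sketch of the failure mode shoyer describes, together with the suggested fix from the comment body. The file name 'dataset.nc' is a placeholder, and the exact exception raised after the file is closed depends on the xarray/dask versions in use:

    # Illustrative only: 'dataset.nc' is a placeholder file name, and the
    # exact exception raised after close() varies across xarray/dask versions.
    import xarray as xr

    ds = xr.open_dataset('dataset.nc', chunks={})  # variables become lazy dask arrays
    copied = ds.copy(deep=True)  # the deep copy keeps the dask graphs, which
                                 # still point at the open file on disk
    ds.close()                   # closes the underlying file handle

    try:
        copied.compute()         # dask now tries to read from the closed file
    except Exception as err:
        print('reading after close fails:', err)

    # The pattern from the comment above: load everything into memory first,
    # then the source file can be closed and safely overwritten.
    f = xr.open_dataset('dataset.nc')
    n = f.compute()
    f.close()
    n.to_netcdf(path='dataset.nc')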

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
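
For reference, the row selection shown at the top of the page ("1 row where issue = 427768540 and user = 1217238 sorted by updated_at descending") corresponds to a query along these lines against the schema above; 'github.db' is a hypothetical name for the underlying SQLite database file:

    import sqlite3

    # 'github.db' is a placeholder; the Datasette instance serves an
    # equivalent SQLite database.
    conn = sqlite3.connect('github.db')
    rows = conn.execute(
        '''
        SELECT id, [user], created_at, updated_at, author_association, body
        FROM issue_comments
        WHERE issue = 427768540 AND [user] = 1217238
        ORDER BY updated_at DESC
        '''
    ).fetchall()
    for row in rows:
        print(row)
    conn.close()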