
issue_comments


2 rows where author_association = "MEMBER" and issue = 309949357 sorted by updated_at descending




id: 377728367
html_url: https://github.com/pydata/xarray/issues/2029#issuecomment-377728367
issue_url: https://api.github.com/repos/pydata/xarray/issues/2029
node_id: MDEyOklzc3VlQ29tbWVudDM3NzcyODM2Nw==
user: shoyer (1217238)
created_at: 2018-03-31T22:37:06Z
updated_at: 2018-03-31T22:37:06Z
author_association: MEMBER
body:

That's right, xarray will never modify a file on disk unless you use to_netcdf().

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Can't re-save netCDF after opening it and modifying it? (309949357)
id: 377572708
html_url: https://github.com/pydata/xarray/issues/2029#issuecomment-377572708
issue_url: https://api.github.com/repos/pydata/xarray/issues/2029
node_id: MDEyOklzc3VlQ29tbWVudDM3NzU3MjcwOA==
user: shoyer (1217238)
created_at: 2018-03-30T17:10:12Z
updated_at: 2018-03-30T17:10:12Z
author_association: MEMBER
body:

The problem is that when you open a dataset, xarray lazily loads the data from the source file. This lazy loading breaks when you overwrite the source file. As a user, the workaround is either to load files entirely into memory first, e.g., by calling .load(), or to avoid overwriting existing files.
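The .load() workaround described above can be sketched as follows. The file name is hypothetical, and this assumes a netCDF backend (scipy or netCDF4) is installed:

```python
import os
import tempfile

import numpy as np
import xarray as xr

path = os.path.join(tempfile.mkdtemp(), "data.nc")

# Create an example file to round-trip.
xr.Dataset({"x": ("t", np.arange(3.0))}).to_netcdf(path)

ds = xr.open_dataset(path)  # lazy: values still reference data.nc on disk
ds.load()                   # force everything into memory
ds.close()                  # release the file handle

ds["x"] = ds["x"] + 1       # modify the in-memory data
ds.to_netcdf(path)          # safe: nothing lazily references the file anymore
```

Without the .load() call, the final to_netcdf() would try to overwrite the same file it is still lazily reading from.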

I'm not quite sure how we should improve this, but this does certainly come up with some frequency, especially for new users. A friendlier warning/error would be nice, but I'm not sure how to detect this behavior in general (this information is not currently very accessible).

We could potentially always write to temporary files in to_netcdf() and then rename in a final step after writing the data. As a bonus, this results in atomic writes on most platforms.
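The write-then-rename scheme suggested above can be sketched in plain Python (the helper name is hypothetical, not an xarray API):

```python
import os
import tempfile


def atomic_write(path: str, data: bytes) -> None:
    """Write data to path via a temporary file plus rename, so readers
    never observe a half-written file and the old file survives a crash."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Temp file must live in the same directory so the rename stays
    # on one filesystem (cross-device renames are not atomic).
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure the bytes reach the disk
        os.replace(tmp, path)     # atomic on POSIX; same-volume on Windows
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```

This is what "atomic writes on most platforms" refers to: os.replace swaps the new file into place in a single step.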

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Can't re-save netCDF after opening it and modifying it? (309949357)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
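The schema above can be exercised with Python's standard-library sqlite3 module. The row values are taken from this page, and the query mirrors the page's filter (author_association = "MEMBER", issue = 309949357, sorted by updated_at descending); the REFERENCES clauses are inert here because SQLite's foreign_keys pragma is off by default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The two comments shown on this page.
conn.executemany(
    "INSERT INTO issue_comments (id, [user], updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (377728367, 1217238, "2018-03-31T22:37:06Z", "MEMBER", 309949357),
        (377572708, 1217238, "2018-03-30T17:10:12Z", "MEMBER", 309949357),
    ],
)

# ISO 8601 timestamps sort correctly as text, so ORDER BY works as expected.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = 309949357 "
    "ORDER BY updated_at DESC"
).fetchall()
```

The idx_issue_comments_issue index lets the WHERE clause on [issue] avoid a full table scan.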
Powered by Datasette · About: xarray-datasette