issue_comments


5 rows where issue = 201428093 (to_netcdf() fails to append to an existing file) and user = 10050469 (fmaussion), sorted by updated_at descending

334259359 · fmaussion (MEMBER) · 2017-10-04T19:09:55Z
https://github.com/pydata/xarray/issues/1215#issuecomment-334259359

@jhamman no I haven't looked into this any further (and I also forgot what my workaround at that time actually was).

I also think your example should work, and that we should never check for values on disk: if the dims and coordinates names match, write the variable and assume the coordinates are ok.

If the variable already exists on file, match the behavior of netCDF4 (I actually don't know what netCDF4 does in that case).
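The name-matching rule proposed above could be sketched as follows (a hypothetical helper, not xarray code): appending only checks that the incoming variable's dimensions already exist on disk, by name, and never compares the stored coordinate values.

```python
def can_append(var_dims, disk_dims):
    """Return True when every dimension of the incoming variable is
    already defined on disk, matching by name only (the values stored
    on disk are deliberately never inspected)."""
    return set(var_dims) <= set(disk_dims)
```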

274319422 · fmaussion (MEMBER) · created 2017-01-22T09:22:52Z, updated 2017-01-22T12:50:58Z
https://github.com/pydata/xarray/issues/1215#issuecomment-274319422

I see.

> but perhaps we don't need to fix this for v0.9.

Agreed, but it would be good to get this working some day. For now I can see an easy workaround for my purposes.

Another possibility would be to give the user control on whether existing variables should be ignored, overwritten or raise an error when appending to a file.
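That suggestion could look roughly like the following sketch; the `if_exists` keyword and the helper itself are hypothetical, not part of xarray's API.

```python
def select_writes(new_vars, on_disk, if_exists='error'):
    """Pick which variables to write when appending to an existing file.

    if_exists: 'ignore' keeps the on-disk copy, 'overwrite' rewrites the
    clashing variables, and 'error' raises when any variable already
    exists in the target file.
    """
    clashes = set(new_vars) & set(on_disk)
    if if_exists == 'error' and clashes:
        raise ValueError('variables already on file: %s' % sorted(clashes))
    if if_exists == 'ignore':
        # Keep whatever is on disk; write only the genuinely new variables.
        return {k: v for k, v in new_vars.items() if k not in on_disk}
    return dict(new_vars)  # 'overwrite': write everything, clashes included
```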

273731575 · fmaussion (MEMBER) · 2017-01-19T10:05:59Z
https://github.com/pydata/xarray/issues/1215#issuecomment-273731575

I did a few tests: the regression happened in https://github.com/pydata/xarray/pull/1017

Something in the way coordinate variables have changed means that the writing now happens differently. The question is whether this should be handled downstream (in the netCDF backend) or upstream (at the dataset level).

273441335 · fmaussion (MEMBER) · 2017-01-18T10:36:41Z
https://github.com/pydata/xarray/issues/1215#issuecomment-273441335

Note that the problem occurs because the backend wants to write the 'dim' coordinate each time. On the second call, the coordinate variable already exists and this raises the error.
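A minimal sketch of the backend-side guard implied here (a hypothetical helper, not actual xarray internals): only create the coordinate variables that are not yet present in the target file.

```python
def coords_to_create(dataset_coords, coords_on_disk):
    """Names of coordinate variables that still need to be created on
    disk; 'dim' would be skipped on the second to_netcdf(..., 'a') call."""
    return [name for name in dataset_coords if name not in coords_on_disk]
```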

273333969 · fmaussion (MEMBER) · 2017-01-17T23:25:30Z
https://github.com/pydata/xarray/issues/1215#issuecomment-273333969

An even simpler example:

```python
import os

import xarray as xr

path = 'test.nc'
if os.path.exists(path):
    os.remove(path)

ds = xr.Dataset()
ds['dim'] = ('dim', [0, 1, 2])
ds['var1'] = ('dim', [10, 11, 12])
ds['var2'] = ('dim', [13, 14, 15])

ds[['var1']].to_netcdf(path)
ds[['var2']].to_netcdf(path, 'a')
```



CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · Queries took 53.14ms · About: xarray-datasette