issue_comments


2 rows where issue = 201428093 and user = 2443309 sorted by updated_at descending


id: 336213403
html_url: https://github.com/pydata/xarray/issues/1215#issuecomment-336213403
issue_url: https://api.github.com/repos/pydata/xarray/issues/1215
node_id: MDEyOklzc3VlQ29tbWVudDMzNjIxMzQwMw==
user: jhamman (2443309)
created_at: 2017-10-12T17:45:30Z
updated_at: 2017-10-12T17:45:30Z
author_association: MEMBER

body:

@TWellman - not yet, see #1215.

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: to_netcdf() fails to append to an existing file (201428093)

id: 334251264
html_url: https://github.com/pydata/xarray/issues/1215#issuecomment-334251264
issue_url: https://api.github.com/repos/pydata/xarray/issues/1215
node_id: MDEyOklzc3VlQ29tbWVudDMzNDI1MTI2NA==
user: jhamman (2443309)
created_at: 2017-10-04T18:40:39Z
updated_at: 2017-10-04T18:40:39Z
author_association: MEMBER

body:

@fmaussion and @shoyer - I have a use case that would benefit from this. I'm wondering if either of you has looked at this any further since January?

If not, I'll propose a path forward that fits my use case and we can iterate on the details until we're satisfied:

> Do we load existing variable values to check them for equality with the new values, or alternatively always skip or override them?

I don't think loading variables already written to disk is practical. My preference would be to only append missing variables/coordinates.

> How do we handle cases where dims, attrs, or encoding differ from the existing variable? Do we attempt to delete and replace the existing variable, update it in place, or raise an error?

Differing dims: raise an error.

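A minimal sketch of those two choices (skip variables already on disk, raise on differing dims), assuming plain `xr.open_dataset` / `to_netcdf` round trips; `append_missing_vars` is a hypothetical helper, not part of xarray:

```python
import xarray as xr

def append_missing_vars(ds, fname):
    """Hypothetical helper: append only the variables absent from fname."""
    # Inspect the file first, then close it so the open handle does not
    # block the append writes below.
    with xr.open_dataset(fname) as existing:
        for name in ds.data_vars:
            if name in existing.data_vars and existing[name].dims != ds[name].dims:
                raise ValueError(f"dims of {name!r} differ from the existing variable")
        missing = [n for n in ds.data_vars if n not in existing.data_vars]
    for name in missing:
        ds[[name]].to_netcdf(fname, mode='a')
```
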
I'd like to implement this, keeping it as simple as possible. A trivial use case like this should work:

```python
import numpy as np
import pandas as pd
import xarray as xr

fname = 'out.nc'
dates = pd.date_range('2016-01-01', freq='1D', periods=45)

# Build a dataset with three variables sharing the same dims/coords.
ds = xr.Dataset()
for var in ['A', 'B', 'C']:
    ds[var] = xr.DataArray(np.random.random((len(dates), 4, 5)),
                           dims=('time', 'x', 'y'),
                           coords={'time': dates})

# Append each variable to the same file, one at a time.
for var in ds.data_vars:
    ds[[var]].to_netcdf(fname, mode='a')
```
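
With a helper like the sketch above, that loop could become a create-then-append pair; purely illustrative:

```python
# Hypothetical usage of the append_missing_vars sketch above:
# write the file once, then append whatever is still missing.
ds[['A']].to_netcdf(fname, mode='w')
append_missing_vars(ds, fname)
```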

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: to_netcdf() fails to append to an existing file (201428093)


```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
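
The foreign keys and indexes above make the per-issue lookup on this page cheap. A minimal sketch of the same query from Python, assuming a local SQLite copy named github.db and a login column on the users table (neither is shown above):

```python
import sqlite3

# github.db is a hypothetical local copy of this Datasette database.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT c.id, c.created_at, u.login, c.body
    FROM issue_comments AS c
    JOIN users AS u ON u.id = c.[user]  -- users.login is an assumption
    WHERE c.issue = ?                   -- served by idx_issue_comments_issue
    ORDER BY c.updated_at DESC
    """,
    (201428093,),
).fetchall()
for comment_id, created_at, login, _body in rows:
    print(comment_id, created_at, login)
```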