issue_comments


2 rows where issue = 309227775 and user = 1217238 sorted by updated_at descending

shoyer (user 1217238, MEMBER) · comment 430792980 · created 2018-10-17T21:17:11Z · updated 2018-10-17T21:17:11Z
https://github.com/pydata/xarray/issues/2022#issuecomment-430792980

We are just adding or completely overwriting variables. This works currently (from the docs: "If mode=’a’, existing variables will be overwritten"). But I'm not sure what happens if there is a conflict between coordinates among the new and old variables.

I'm pretty sure the coordinates will just get overwritten, too, at least as long as the coordinate arrays have the same shape. If they have different shapes, you probably will get an error. We certainly don't do any checks for alignment currently.

ds1 has some of the same variables as ds2, possibly with overlapping coordinates. In this case, we want to do some kind of append. If there is no overlap between coordinates, then it's straightforward: put the extra values from ds1 into file2.nc.

This is the only case I would try to solve in the initial implementation. It's probably 20% of the work (adding a keyword argument like extend='time') and covers 80% of the use cases.

If we need alignment, I'm sure we could make that work in a follow-up. Certainly it would be less error prone to use.

Reactions: none
Issue: Enable Append/concat to existing zarr datastore (id 309227775)
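The non-overlapping append that the comment above calls straightforward can be sketched in plain Python. This is a hypothetical helper, not xarray's implementation: the store is modeled as a dict mapping variable names to lists that share a "time" coordinate, and overlapping coordinates raise rather than being aligned.

```python
def append_along_dim(store, new_data, dim="time"):
    """Append new values along `dim`, refusing overlapping coordinates."""
    overlap = set(store[dim]) & set(new_data[dim])
    if overlap:
        # The overlapping case would need alignment logic (a follow-up,
        # as the comment suggests); here we simply refuse.
        raise ValueError(f"coordinates overlap along {dim!r}: {sorted(overlap)}")
    for name, values in new_data.items():
        # Concatenate each variable (including the coordinate) along dim.
        store[name] = store[name] + values
    return store

store = {"time": [0, 1, 2], "temperature": [10.0, 11.0, 12.0]}
append_along_dim(store, {"time": [3, 4], "temperature": [13.0, 14.0]})
# store["time"] is now [0, 1, 2, 3, 4]
```

The point of the sketch is how little is needed for the disjoint case: a membership check and a concatenation, with alignment deferred entirely.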
shoyer (user 1217238, MEMBER) · comment 377088567 · created 2018-03-29T01:10:16Z · updated 2018-03-29T01:10:16Z
https://github.com/pydata/xarray/issues/2022#issuecomment-377088567

This would probably make sense to think about along-side support for appending along an existing dimension in a netCDF file (https://github.com/pydata/xarray/issues/1672).

I can see a few potential ways to write the syntax. Probably supplying a range of indices along a dimension to write to would make the most sense, e.g., to_zarr(..., destination={'time': slice(1000, 2000)}) to indicate writing to positions 1000-1999 (the slice end is exclusive) along the time dimension.

Reactions: none
Issue: Enable Append/concat to existing zarr datastore (id 309227775)
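The destination={'time': slice(...)} syntax proposed in the comment above can be illustrated with a plain Python list standing in for the zarr store. The `write_region` helper is hypothetical, chosen only to show the half-open slice semantics of the proposal.

```python
def write_region(store, values, destination):
    """Write `values` into `store` at the positions given by a slice."""
    sl = destination["time"]
    expected = sl.stop - sl.start
    if len(values) != expected:
        # A region write must exactly fill the target positions.
        raise ValueError(f"expected {expected} values, got {len(values)}")
    store[sl] = values
    return store

store = [0] * 10
write_region(store, [7, 8, 9], {"time": slice(4, 7)})
# positions 4, 5, 6 now hold 7, 8, 9; slice(4, 7) excludes index 7
```

Because Python slices are half-open, writing to slice(1000, 2000) fills positions 1000 through 1999, which is what makes the length check above well-defined.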


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
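The query behind this page ("rows where issue = 309227775 and user = 1217238 sorted by updated_at descending") can be reproduced against the schema above with Python's stdlib sqlite3. The two inserted rows mirror the comment metadata shown on the page (bodies and the REFERENCES clauses are elided for brevity).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same columns as the schema above, minus the foreign-key clauses.
conn.execute("""CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER)""")
rows = [
    (430792980, 1217238, "2018-10-17T21:17:11Z", 309227775),
    (377088567, 1217238, "2018-03-29T01:10:16Z", 309227775),
]
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    rows,
)
# The page's query: filter by issue and user, newest update first.
result = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE issue = 309227775 AND user = 1217238 "
    "ORDER BY updated_at DESC"
).fetchall()
# result: [(430792980,), (377088567,)]
```

ORDER BY works on the TEXT timestamps because ISO 8601 strings sort lexicographically in chronological order.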
Powered by Datasette · About: xarray-datasette