issue_comments


1 row where issue = 613012939 and user = 7799184 sorted by updated_at descending

id: 721504192
html_url: https://github.com/pydata/xarray/pull/4035#issuecomment-721504192
issue_url: https://api.github.com/repos/pydata/xarray/issues/4035
node_id: MDEyOklzc3VlQ29tbWVudDcyMTUwNDE5Mg==
user: rafa-guedes (7799184)
created_at: 2020-11-04T04:23:58Z
updated_at: 2020-11-04T04:23:58Z
author_association: CONTRIBUTOR
body:

@shoyer thanks for implementing this, it is going to be very useful. I am trying to write the dataset below:

dsregion:

```
<xarray.Dataset>
Dimensions:    (latitude: 2041, longitude: 4320, time: 31)
Coordinates:
  * latitude   (latitude) float32 -80.0 -79.916664 -79.833336 ... 89.916664 90.0
  * time       (time) datetime64[ns] 2008-10-01T12:00:00 ... 2008-10-31T12:00:00
  * longitude  (longitude) float32 -180.0 -179.91667 ... 179.83333 179.91667
Data variables:
    vo         (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    uo         (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    sst        (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    ssh        (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
```

As a region of this other dataset:

dset:

```
<xarray.Dataset>
Dimensions:    (latitude: 2041, longitude: 4320, time: 9490)
Coordinates:
  * latitude   (latitude) float32 -80.0 -79.916664 -79.833336 ... 89.916664 90.0
  * longitude  (longitude) float32 -180.0 -179.91667 ... 179.83333 179.91667
  * time       (time) datetime64[ns] 1993-01-01T12:00:00 ... 2018-12-25T12:00:00
Data variables:
    ssh        (time, latitude, longitude) float64 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    sst        (time, latitude, longitude) float64 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    uo         (time, latitude, longitude) float64 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    vo         (time, latitude, longitude) float64 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
```

Using the following call:

```
dsregion.to_zarr(dset_url, region={"time": slice(5752, 5783)})
```
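For reference, a minimal sketch of the two-step region-write pattern this call relies on, assuming the store at `dset_url` was first initialized from the full `dset` with `compute=False` (that initialization step is an assumption, not something stated in the comment):

```
# Step 1 (assumed): write dimensions, coordinates and array metadata for the
# full dataset without computing any dask values, so the zarr store at
# dset_url has its final shape (time: 9490) up front.
dset.to_zarr(dset_url, compute=False)

# Step 2: from any worker, write real values into a bounded slice of the
# time dimension; indices 5752:5783 cover the 31 daily steps of October 2008.
dsregion.to_zarr(dset_url, region={"time": slice(5752, 5783)})
```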

But I got stuck at the conditional below in xarray/backends/api.py:

```
   1347     non_matching_vars = [
   1348         k
   1349         for k, v in ds_to_append.variables.items()
   1350         if not set(region).intersection(v.dims)
   1351     ]
   1352     import ipdb; ipdb.set_trace()
-> 1353     if non_matching_vars:
   1354         raise ValueError(
   1355             f"when setting `region` explicitly in to_zarr(), all "
   1356             f"variables in the dataset to write must have at least "
   1357             f"one dimension in common with the region's dimensions "
   1358             f"{list(region.keys())}, but that is not "
   1359             f"the case for some variables here. To drop these variables "
   1360             f"from this dataset before exporting to zarr, write: "
   1361             f".drop({non_matching_vars!r})"
   1362         )
```

Apparently this is because `time` is not a dimension of the coordinate variables `["longitude", "latitude"]`:

```
ipdb> p non_matching_vars
['latitude', 'longitude']
ipdb> p set(region)
{'time'}
```

Should this check be performed over all variables, or only over data variables (`data_vars`)?
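A minimal sketch of the workaround the error message itself suggests, using the `drop_vars` spelling of `.drop` (the variable names come from the ipdb output above):

```
# Drop the coordinate variables that do not span the region's dimensions,
# then retry the region write with only the time-dependent variables.
dsregion.drop_vars(["latitude", "longitude"]).to_zarr(
    dset_url, region={"time": slice(5752, 5783)}
)
```

If the check instead iterated over `ds_to_append.data_vars.items()` rather than `ds_to_append.variables.items()`, the dimension coordinates would not trip it and the explicit drop would be unnecessary.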

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Support parallel writes to regions of zarr stores (613012939)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
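The row above can be reproduced against a local copy of this database with a query like the following sketch (the `github.db` filename is a hypothetical assumption):

```
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database

# Same filter and ordering as this page: comments on issue 613012939
# by user 7799184, newest updated_at first.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (613012939, 7799184),
).fetchall()

print(rows[0][0])  # 721504192, the comment id shown above
```

Both filters are served by the `idx_issue_comments_issue` and `idx_issue_comments_user` indexes defined above.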