issue_comments
3 rows where author_association = "CONTRIBUTOR" and issue = 613012939 sorted by updated_at descending
id: 721504192
html_url: https://github.com/pydata/xarray/pull/4035#issuecomment-721504192
issue_url: https://api.github.com/repos/pydata/xarray/issues/4035
node_id: MDEyOklzc3VlQ29tbWVudDcyMTUwNDE5Mg==
user: rafa-guedes (7799184)
created_at: 2020-11-04T04:23:58Z
updated_at: 2020-11-04T04:23:58Z
author_association: CONTRIBUTOR
body:
@shoyer thanks for implementing this, it is going to be very useful. I am trying to write this dataset below:

dsregion:

```
<xarray.Dataset>
Dimensions:    (latitude: 2041, longitude: 4320, time: 31)
Coordinates:
  * latitude   (latitude) float32 -80.0 -79.916664 -79.833336 ... 89.916664 90.0
  * time       (time) datetime64[ns] 2008-10-01T12:00:00 ... 2008-10-31T12:00:00
  * longitude  (longitude) float32 -180.0 -179.91667 ... 179.83333 179.91667
Data variables:
    vo         (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    uo         (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    sst        (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
    ssh        (time, latitude, longitude) float32 dask.array<chunksize=(30, 510, 1080), meta=np.ndarray>
```

As a region of this other dataset: dset:

Using the following call:

But I got stuck on the conditional below within

Apparently because

Should this checking be performed for all variables, or only for data_variables?

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Support parallel writes to regions of zarr stores (613012939)
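For context on the API this comment exercises: the PR adds a `region` argument to `Dataset.to_zarr`. Below is a minimal sketch of the write pattern, with a hypothetical store path, shapes, and chunking; the `drop_vars` step reflects the conditional the comment runs into, which rejects any variable that shares no dimension with the region (including non-dimension coordinates such as latitude/longitude above).

```
import dask.array as da
import numpy as np
import xarray as xr

# Hypothetical skeleton dataset covering the full time axis (31 steps).
ds = xr.Dataset(
    {"sst": (("time", "latitude"), da.zeros((31, 2041), chunks=(1, 2041)))},
    coords={
        "time": np.arange(31),
        "latitude": np.linspace(-80.0, 90.0, 2041, dtype="float32"),
    },
)

# Write the metadata and coordinates once, without computing the data.
ds.to_zarr("example.zarr", mode="w", compute=False)

# Each worker then fills its own slab. Variables that share no dimension
# with the region (here: latitude) must be dropped first -- this is the
# check being discussed in the comment above.
slab = ds.isel(time=slice(0, 10)).drop_vars("latitude")
slab.to_zarr("example.zarr", mode="r+", region={"time": slice(0, 10)})
```

Concurrent writes from independent processes are only safe when each region aligns with the underlying zarr chunk boundaries, so that no two writers touch the same chunk.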
id: 627799236
html_url: https://github.com/pydata/xarray/pull/4035#issuecomment-627799236
issue_url: https://api.github.com/repos/pydata/xarray/issues/4035
node_id: MDEyOklzc3VlQ29tbWVudDYyNzc5OTIzNg==
user: nbren12 (1386642)
created_at: 2020-05-13T07:22:40Z
updated_at: 2020-05-13T07:22:40Z
author_association: CONTRIBUTOR
body:
@rabernat I learn something new every day. Sorry for cluttering up this PR with my ignorance, haha.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Support parallel writes to regions of zarr stores (613012939)
id: 627090332
html_url: https://github.com/pydata/xarray/pull/4035#issuecomment-627090332
issue_url: https://api.github.com/repos/pydata/xarray/issues/4035
node_id: MDEyOklzc3VlQ29tbWVudDYyNzA5MDMzMg==
user: nbren12 (1386642)
created_at: 2020-05-12T03:44:14Z
updated_at: 2020-05-12T03:44:14Z
author_association: CONTRIBUTOR
body:
@rabernat pointed this PR out to me, and this is great progress towards allowing more database-like CRUD operations on zarr datasets. A similar neat feature would be to read xarray datasets from regions of zarr groups w/o dask arrays.

reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
performed_via_github_app:
issue: Support parallel writes to regions of zarr stores (613012939)
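The dask-free region read suggested in this comment can be approximated with current xarray. A rough sketch, reusing the hypothetical example.zarr store from above: passing `chunks=None` to `open_zarr` opens the group without dask, backed by xarray's lazy indexing, so subsetting before `.load()` should only read the zarr chunks that overlap the selection.

```
import xarray as xr

# chunks=None returns lazily indexed numpy-backed variables, not dask arrays.
ds = xr.open_zarr("example.zarr", chunks=None)

# Subset first, then load: only the chunks overlapping the region are read.
sub = ds.isel(time=slice(0, 10)).load()
print(sub)
```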
```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
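The filter behind this page (author_association = "CONTRIBUTOR" and issue = 613012939, sorted by updated_at descending) maps directly onto this schema, with idx_issue_comments_issue serving the lookup. A minimal sketch using Python's sqlite3, assuming a hypothetical local copy of the database at github.db:

```
import sqlite3

# Hypothetical database file; the table matches the CREATE TABLE above.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, updated_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR' AND issue = ?
    ORDER BY updated_at DESC
    """,
    (613012939,),
).fetchall()

for comment_id, user_id, updated_at, body in rows:
    print(comment_id, user_id, updated_at, body[:60])
```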