issues
4 rows where repo = 13221727, type = "pull" and user = 31115101 sorted by updated_at descending
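This filter maps onto a plain SQL query over the issues table whose schema appears at the bottom of this page; a minimal sketch using only the column names from that schema:

SELECT *
FROM issues
WHERE repo = 13221727
  AND type = 'pull'
  AND user = 31115101
ORDER BY updated_at DESC;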
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 376167325 | MDExOlB1bGxSZXF1ZXN0MjI3NDQ3NjUz | 2533 | Check shapes of coordinates and data during DataArray construction | lilyminium 31115101 | open | 0 | | | 11 | 2018-10-31T21:28:04Z | 2022-06-09T14:50:17Z | | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2533 | This sets DataArrayGroupBy.reduce(shortcut=False), as the shortcut first constructs a DataArray with the previous coordinates and the new mutated data before updating the coordinates; this order of events now raises a ValueError. | {"url": "https://api.github.com/repos/pydata/xarray/issues/2533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 376116903 | MDExOlB1bGxSZXF1ZXN0MjI3NDA3NTM2 | 2530 | Manually specify chunks in open_zarr | lilyminium 31115101 | closed | 0 | | | 6 | 2018-10-31T19:04:05Z | 2019-04-18T14:35:21Z | 2019-04-18T14:34:29Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2530 | This adds a | {"url": "https://api.github.com/repos/pydata/xarray/issues/2530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 369935730 | MDExOlB1bGxSZXF1ZXN0MjIyNzM3ODA5 | 2487 | Zarr chunking (GH2300) | lilyminium 31115101 | closed | 0 | | | 5 | 2018-10-14T19:35:06Z | 2018-11-02T04:59:19Z | 2018-11-02T04:59:04Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2487 | I don't fully understand the ins-and-outs of Zarr, but it seems that if it can be serialised with a smaller end-chunk to begin with, then saving a Dataset constructed from Zarr should not have an issue with that either. | {"url": "https://api.github.com/repos/pydata/xarray/issues/2487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
| 369934524 | MDExOlB1bGxSZXF1ZXN0MjIyNzM3MDMx | 2486 | Zarr determine chunks | lilyminium 31115101 | closed | 0 | | | 1 | 2018-10-14T19:21:10Z | 2018-10-14T19:26:40Z | 2018-10-14T19:26:40Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2486 | | {"url": "https://api.github.com/repos/pydata/xarray/issues/2486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull |
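Each reactions cell stores the GitHub reaction counts as a JSON string. Assuming the SQLite build includes the JSON1 functions (as the SQLite bundled with Datasette normally does), the counts can be pulled out directly in SQL; a sketch:

SELECT number,
       title,
       json_extract(reactions, '$.total_count') AS total_reactions,
       json_extract(reactions, '$."+1"') AS plus_one
FROM issues
WHERE repo = 13221727
  AND type = 'pull';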
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
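The indexes cover the foreign-key columns (repo, milestone, assignee, user), so per-user or per-repo lookups and aggregates like the sketch below can typically be answered via an index lookup rather than a full table scan:

SELECT state, count(*) AS n
FROM issues
WHERE repo = 13221727
  AND user = 31115101
  AND type = 'pull'
GROUP BY state;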