issues
3 rows where state = "closed", type = "pull", and user = 6574622, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1128485610 | PR_kwDOAMm_X84yTE49 | 6258 | removed check for last dask chunk size in to_zarr | d70-t 6574622 | closed | 0 | | | 4 | 2022-02-09T12:34:43Z | 2022-02-09T15:13:21Z | 2022-02-09T15:12:32Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/6258 | When storing a dask-chunked dataset to zarr, the size of the last chunk in each dimension does not matter, as this single last chunk will be written to any number of zarr chunks, but none of the zarr chunks which are being written to will be accessed by any other dask chunk. cc'ing @rabernat who seems to have worked on this lately. | {"url": "https://api.github.com/repos/pydata/xarray/issues/6258/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull
673513695 | MDExOlB1bGxSZXF1ZXN0NDYzMzYyMTIw | 4312 | allow manual zarr encoding on unchunked dask dimensions | d70-t 6574622 | closed | 0 | | | 3 | 2020-08-05T12:49:04Z | 2022-02-09T09:31:51Z | 2020-08-19T14:58:09Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/4312 | If a dask array is chunked along one dimension but not chunked along another, any manually specified zarr chunk size should be valid, but before this patch, this resulted in an error. | {"url": "https://api.github.com/repos/pydata/xarray/issues/4312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull
817302678 | MDExOlB1bGxSZXF1ZXN0NTgwODE3NDQ5 | 4966 | conventions: decode unsigned integers to signed if _Unsigned=false | d70-t 6574622 | closed | 0 | | | 5 | 2021-02-26T12:05:51Z | 2021-03-12T14:21:12Z | 2021-03-12T14:20:20Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/4966 | netCDF3 doesn't know unsigned while OPeNDAP doesn't know signed (bytes). Depending on which backend source is used, the original data is stored with the wrong signedness and needs to be decoded based on the _Unsigned attribute. While the netCDF3 variant is already implemented, this commit adds the symmetric case covering OPeNDAP. | {"url": "https://api.github.com/repos/pydata/xarray/issues/4966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | | xarray 13221727 | pull
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
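The filter at the top of the page ("state = closed, type = pull, user = 6574622, sorted by updated_at descending") corresponds to a plain SQL query against this schema. A minimal sqlite3 sketch, using a trimmed-down version of the table above and the three rows from this page (only the columns needed for the query are included; the full schema has 23 columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed version of the [issues] schema above, keeping only the
# columns that the filter and sort actually touch.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        state TEXT,
        updated_at TEXT,
        type TEXT
    )"""
)
# The three rows shown on this page (id, number, title, user,
# state, updated_at, type).
rows = [
    (1128485610, 6258, "removed check for last dask chunk size in to_zarr",
     6574622, "closed", "2022-02-09T15:13:21Z", "pull"),
    (673513695, 4312, "allow manual zarr encoding on unchunked dask dimensions",
     6574622, "closed", "2022-02-09T09:31:51Z", "pull"),
    (817302678, 4966, "conventions: decode unsigned integers to signed if _Unsigned=false",
     6574622, "closed", "2021-03-12T14:21:12Z", "pull"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as text, so ORDER BY on the
# TEXT column reproduces the page's "updated_at descending" order.
result = conn.execute(
    "SELECT number FROM issues "
    "WHERE state = 'closed' AND type = 'pull' AND user = 6574622 "
    "ORDER BY updated_at DESC"
).fetchall()
print([n for (n,) in result])  # [6258, 4312, 4966]
```

Storing timestamps as ISO-8601 TEXT (as this schema does) is what makes the lexicographic `ORDER BY` equivalent to a chronological sort.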