issues
8 rows where repo = 13221727, type = "pull", and user = 306380, sorted by updated_at descending
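The filter described above can be reproduced with plain SQL. A minimal sketch against an in-memory SQLite database (this assumes a database in the github-to-sqlite layout; only a column subset and two sample rows from the table below are used):

```python
import sqlite3

# Build a tiny stand-in for the issues table with just the filtered columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER, title TEXT,"
    " [user] INTEGER, repo INTEGER, type TEXT, updated_at TEXT)"
)
rows = [
    (336371511, 2255, "Add automatic chunking to open_rasterio",
     306380, 13221727, "pull", "2022-04-07T20:21:24Z"),
    (390443869, 2603, "Support HighLevelGraphs",
     306380, 13221727, "pull", "2018-12-13T17:13:10Z"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# The page's filter: repo, type, and user equality, newest update first.
# ISO 8601 timestamps sort correctly as text.
result = conn.execute(
    "SELECT number, title FROM issues"
    " WHERE repo = 13221727 AND type = 'pull' AND [user] = 306380"
    " ORDER BY updated_at DESC"
).fetchall()
print(result)  # PR 2255 first (latest updated_at)
```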
All eight rows share the filtered values: user = mrocklin (306380), state = closed, locked = 0, draft = 0, author_association = MEMBER, repo = xarray (13221727), type = pull; the assignee, milestone, active_lock_reason, performed_via_github_app, and state_reason columns are empty. The remaining columns are:

| id | node_id | number | title | comments | created_at | updated_at ▲ | closed_at | pull_request | reactions | body |
|---|---|---|---|---|---|---|---|---|---|---|
| 336371511 | MDExOlB1bGxSZXF1ZXN0MTk3ODQwODgw | 2255 | Add automatic chunking to open_rasterio | 10 | 2018-06-27T20:15:07Z | 2022-04-07T20:21:24Z | 2022-04-07T20:21:24Z | pydata/xarray/pulls/2255 | 0 | This uses the automatic chunking in dask 0.18+ to chunk rasterio datasets in a nicely aligned way. Currently this doesn't implement tests due to a difficulty in creating chunked tiff images. This also uncovered some inefficiencies in how Dask doesn't align rechunking to existing chunk schemes. I could use help on how the following: |
| 390443869 | MDExOlB1bGxSZXF1ZXN0MjM4MjA5ODI1 | 2603 | Support HighLevelGraphs | 2 | 2018-12-12T22:52:28Z | 2018-12-13T17:13:10Z | 2018-12-13T17:13:00Z | pydata/xarray/pulls/2603 | 0 | Fixes https://github.com/dask/dask/issues/4291 |
| 372640063 | MDExOlB1bGxSZXF1ZXN0MjI0NzY5Mjg1 | 2500 | Avoid use of deprecated get= parameter in tests | 7 | 2018-10-22T18:25:58Z | 2018-10-23T10:31:37Z | 2018-10-23T00:22:51Z | pydata/xarray/pulls/2500 | 0 | |
| 296434660 | MDExOlB1bGxSZXF1ZXN0MTY4NjIyOTM3 | 1904 | Replace task_state with tasks in dask test | 4 | 2018-02-12T16:19:14Z | 2018-02-12T21:08:06Z | 2018-02-12T21:08:06Z | pydata/xarray/pulls/1904 | 0 | This internal state was changed in the latest release |
| 276448264 | MDExOlB1bGxSZXF1ZXN0MTU0NDI5ODYz | 1741 | Auto flake | 2 | 2017-11-23T18:00:47Z | 2018-01-14T20:49:20Z | 2018-01-14T20:49:20Z | pydata/xarray/pulls/1741 | 2 (+1 × 2) | I had a free half hour so I decided to run autoflake and autopep8 tools on the codebase. |
| 279198672 | MDExOlB1bGxSZXF1ZXN0MTU2MzQwNDk3 | 1760 | Fix DataArray.__dask_scheduler__ to point to dask.threaded.get | 8 | 2017-12-05T00:12:21Z | 2017-12-07T22:13:42Z | 2017-12-07T22:09:18Z | pydata/xarray/pulls/1760 | 0 | Previously this erroneously pointed to an optimize function, likely a copy-paste error. For testing this also redirects the .compute methods to use the dask.compute function directly if dask.__version__ >= '0.16.0'. Closes #1759 |
| 269886091 | MDExOlB1bGxSZXF1ZXN0MTQ5NzMxMDM5 | 1674 | Support Dask interface | 12 | 2017-10-31T09:15:52Z | 2017-11-07T18:37:06Z | 2017-11-07T18:31:45Z | pydata/xarray/pulls/1674 | 1 (+1 × 1) | This integrates the new dask interface methods into XArray. This will place XArray as a first-class dask collection and help in particular with newer dask.distributed features. Builds on work from @jcrist here: https://github.com/dask/dask/pull/2748 Depends on https://github.com/dask/dask/pull/2847 |
| 218942553 | MDExOlB1bGxSZXF1ZXN0MTEzOTM2OTk3 | 1349 | Add persist method to DataSet | 10 | 2017-04-03T13:59:02Z | 2017-04-04T16:19:20Z | 2017-04-04T16:14:17Z | pydata/xarray/pulls/1349 | 0 | Fixes https://github.com/pydata/xarray/issues/1344 |
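The reactions column stores the raw GitHub reactions payload as a JSON string. A minimal sketch of summarizing it, using the payload shown for the "Auto flake" row (field names taken verbatim from the rows above):

```python
import json

# Reactions payload as stored in the TEXT column for issue 1741.
reactions = json.loads("""{
  "url": "https://api.github.com/repos/pydata/xarray/issues/1741/reactions",
  "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0,
  "confused": 0, "heart": 0, "rocket": 0, "eyes": 0
}""")

# Keep only the reaction kinds that actually occurred.
nonzero = {k: v for k, v in reactions.items()
           if k not in ("url", "total_count") and v > 0}
print(nonzero)  # {'+1': 2}
```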
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
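The single-column indexes above are what let the page's equality filter avoid a full table scan. A sketch with SQLite's EXPLAIN QUERY PLAN, using a trimmed version of the DDL (column subset; index names copied from above):

```python
import sqlite3

# Trimmed schema: just the filtered columns plus the relevant indexes.
ddl = """
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [user] INTEGER,
  [repo] INTEGER,
  [type] TEXT,
  [updated_at] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# Ask SQLite how it would execute the page's query; the detail column of the
# plan should show an index search rather than a bare table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM issues"
    " WHERE repo = 13221727 AND type = 'pull' AND [user] = 306380"
    " ORDER BY updated_at DESC"
).fetchall()
print(plan)
```

Without ANALYZE statistics SQLite is free to pick either idx_issues_repo or idx_issues_user for the equality lookup; the ORDER BY on the unindexed updated_at column is handled with a temporary b-tree.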