pull_requests
10 rows where user = 4295853
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63734185 | MDExOlB1bGxSZXF1ZXN0NjM3MzQxODU= | 800 | closed | 0 | Adds dask lock capability for backend writes | pwolfram 4295853 | This fixes an error on an asynchronous write for `to_netcdf` resulting in an `dask.async.RuntimeError: NetCDF: HDF error` Resolves issue https://github.com/pydata/xarray/issues/793 following dask improvement at https://github.com/dask/dask/pull/1053 following advice of @shoyer. | 2016-03-22T14:59:56Z | 2016-03-22T22:44:50Z | 2016-03-22T22:32:30Z | 2016-03-22T22:32:30Z | bbe969024eddf5b91caf1844efc5b0be8c19e286 | 0 | 759db6e17958b9e5800560cdf5cf46bf16ad5584 | 899d1be2fd1e57b42663360c7b6e17eae4160ecd | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/800 | ||||
64801017 | MDExOlB1bGxSZXF1ZXN0NjQ4MDEwMTc= | 812 | closed | 0 | Adds cummulative operators to API | pwolfram 4295853 | This PR will add cumsum and cumprod as discussed in https://github.com/pydata/xarray/issues/791 as well ensuring `cumprod` works for the API, resolving issues discussed at https://github.com/pydata/xarray/issues/807. TO DO (dependencies) - [x] Add `nancumprod` and `nancumsum` to numpy (https://github.com/numpy/numpy/pull/7421) - [x] Add `nancumprod` and `nancumsum` to dask (https://github.com/dask/dask/pull/1077) This PR extends infrastructure to support `cumsum` and `cumprod` (https://github.com/pydata/xarray/issues/791). References: - https://github.com/numpy/numpy/pull/7421 cc @shoyer, @jhamman | 2016-03-31T14:37:50Z | 2016-10-03T21:11:30Z | 2016-10-03T21:05:33Z | 2016-10-03T21:05:32Z | 9cf107b522188e206bd15f4620bc3b55ca30e616 | 0 | 129c807be4df7f07dce5825af3982ffd8052895b | c10a9df6522eca2c4d0942703f2b6fa21f9c8776 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/812 | ||||
64981337 | MDExOlB1bGxSZXF1ZXN0NjQ5ODEzMzc= | 815 | closed | 0 | Add drop=True option for where on Dataset and DataArray | pwolfram 4295853 | Addresses #811 to provide a Dataset and DataArray `sel_where` which returns a Dataset or DataArray of minimal coordinate size. | 2016-04-01T17:55:55Z | 2016-09-20T14:23:05Z | 2016-04-10T00:33:00Z | 2016-04-10T00:33:00Z | 28ee86afa683ce7c77e339d15d1220503df3d0ee | 0 | 80cc528bff9fabb5e79e9524d4a18bd81ef2e19e | 4fdf6d4001b1c8f7b8054c1f3bdbffa25949406b | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/815 | ||||
69172755 | MDExOlB1bGxSZXF1ZXN0NjkxNzI3NTU= | 845 | closed | 0 | Fixes doc typo | pwolfram 4295853 | To entirely add or removing coordinate arrays, you can use dictionary like to To entirely add or remove coordinate arrays, you can use dictionary like | 2016-05-06T16:04:18Z | 2016-09-20T14:23:14Z | 2016-05-06T16:35:05Z | 2016-05-06T16:35:05Z | 9f5a6ac3feb35feb7202aa00e3666c9310bab42c | 0 | 7d94c516baa65c75b58efce71f30813fda7ad152 | b25d145b092e1f0ae6555e9ac2103a7afb838592 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/845 | ||||
87742962 | MDExOlB1bGxSZXF1ZXN0ODc3NDI5NjI= | 1031 | closed | 0 | Fixes doc formating error | pwolfram 4295853 | This fixes a typo in the what's new documentation. | 2016-10-03T16:00:29Z | 2017-03-22T17:27:10Z | 2016-10-03T16:15:19Z | 2016-10-03T16:15:19Z | c10a9df6522eca2c4d0942703f2b6fa21f9c8776 | 0 | 54f50ba7153ed17283180e9f86cc8aa9acd20aa9 | 0e044ce807fa0ee15703c8b4088bf41ae8e99116 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1031 | ||||
87770959 | MDExOlB1bGxSZXF1ZXN0ODc3NzA5NTk= | 1032 | closed | 0 | Fixes documentation typo | pwolfram 4295853 | This is another really minor typo... | 2016-10-03T18:57:21Z | 2017-03-22T17:27:10Z | 2016-10-03T21:03:07Z | 2016-10-03T21:03:07Z | 573541e30ee3136237642cc30f4004cd4af66281 | 0 | d980c7afb4c575bc55a76a4f147bfc1cac40b96f | c10a9df6522eca2c4d0942703f2b6fa21f9c8776 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1032 | ||||
87995378 | MDExOlB1bGxSZXF1ZXN0ODc5OTUzNzg= | 1038 | closed | 0 | Attributes from netCDF4 intialization retained | pwolfram 4295853 | Ensures that attrs for open_mfdataset are now retained cc @shoyer | 2016-10-04T23:51:48Z | 2017-03-31T03:11:07Z | 2017-03-31T03:11:07Z | 2017-03-31T03:11:07Z | c0178b7b8b385dc65482099b1ff87ec81f7c51c2 | 0 | 0d183c568bac57e569478b47b6c2e3bbd1753495 | 371d034372bc7522098a142a0debf93916c49102 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1038 | ||||
100929563 | MDExOlB1bGxSZXF1ZXN0MTAwOTI5NTYz | 1198 | closed | 0 | Fixes OS error arising from too many files open | pwolfram 4295853 | Previously, DataStore did not judiciously close files, resulting in opening a large number of files that could result in an OSError related to too many files being open. This merge provides a solution for the netCDF, scipy, and h5netcdf backends. | 2017-01-10T18:37:41Z | 2017-03-23T19:21:27Z | 2017-03-23T19:20:03Z | 2017-03-23T19:20:03Z | 371d034372bc7522098a142a0debf93916c49102 | 0 | 20c5c3b660d02d200e414a38654b1ff2bbd1599e | b3fc6c4e4fafdf4f075b791594633970a787ad79 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1198 | ||||
113043425 | MDExOlB1bGxSZXF1ZXN0MTEzMDQzNDI1 | 1336 | closed | 0 | Marks slow, flaky, and failing tests | pwolfram 4295853 | Closes #1309 | 2017-03-28T19:03:20Z | 2017-04-07T04:36:08Z | 2017-04-03T05:30:16Z | 2017-04-03T05:30:16Z | 685ba0671785eb402d4ed8aa390c7421ed80e5c6 | 0 | 57d33249c233afc6cc0d71f88e16398eb0406f70 | d08efaf902cae5e5f28afff7d6f8182e35a53f46 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1336 | ||||
113497745 | MDExOlB1bGxSZXF1ZXN0MTEzNDk3NzQ1 | 1342 | closed | 0 | Ensures drop=True case works with empty mask | pwolfram 4295853 | Resolves error occurring for python 2.7 for the case of `where(mask, drop=True)` where the mask is empty. - [x] closes #1341 - [X] tests added - [x] tests passed - [x] passes ``git diff upstream/master | flake8 --diff`` - [x] whatsnew entry | 2017-03-30T18:45:34Z | 2017-04-02T22:45:01Z | 2017-04-02T22:43:53Z | 2017-04-02T22:43:53Z | 56cec4613715fb77d86fe3b2bd88137adc97e74d | 0 | dcef972ff5b95c3e82cd32fae10bf0a8824ce1c6 | c0178b7b8b385dc65482099b1ff87ec81f7c51c2 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/1342 |
```sql
CREATE TABLE [pull_requests] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [state] TEXT,
    [locked] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [body] TEXT,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [merged_at] TEXT,
    [merge_commit_sha] TEXT,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [draft] INTEGER,
    [head] TEXT,
    [base] TEXT,
    [author_association] TEXT,
    [auto_merge] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [url] TEXT,
    [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
```
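The rows above were produced by filtering this table on the `user` column. A minimal sketch of that query using Python's built-in `sqlite3` module, with the schema trimmed to a few illustrative columns (the full schema includes the foreign keys to `users`, `milestones`, and `repos` shown above) and a single row from the listing inserted by hand:

```python
import sqlite3

# In-memory database with a trimmed-down pull_requests table
# (illustrative subset of the full schema above).
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE pull_requests (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        state TEXT,
        title TEXT,
        user INTEGER,
        merged_at TEXT
    )
    """
)

# One row taken from the listing above (PR #800).
conn.execute(
    "INSERT INTO pull_requests VALUES (?, ?, ?, ?, ?, ?)",
    (
        63734185,
        800,
        "closed",
        "Adds dask lock capability for backend writes",
        4295853,
        "2016-03-22T22:32:30Z",
    ),
)

# The filter that produced this page: rows where user = 4295853.
rows = conn.execute(
    "SELECT number, title FROM pull_requests WHERE user = ?",
    (4295853,),
).fetchall()
print(rows)
```

Against the real database, the same `WHERE user = 4295853` clause returns all ten rows listed above.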