pull_requests
6 rows where user = 221526
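For reference, the query below is roughly the SQL behind this filtered view. It is a sketch only, assuming the pull_requests schema shown at the bottom of this page, and it selects a trimmed column list rather than reproducing the exact query Datasette runs.

-- Sketch of the filter behind this page: pull requests authored by user 221526
-- (dopplershift). Only a few columns are selected here for readability; the
-- table below shows every column.
select id, number, state, title, created_at, merged_at, url
from pull_requests
where [user] = 221526
order by id;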
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
136113206 | MDExOlB1bGxSZXF1ZXN0MTM2MTEzMjA2 | 1508 | closed | 0 | ENH: Support using opened netCDF4.Dataset (Fixes #1459) | dopplershift 221526 | Make the filename argument to `NetCDF4DataStore` polymorphic so that a `Dataset` can be passed in. - [x] Closes #1459 - [x] Tests added / passed - [x] Passes ``git diff upstream/master \| flake8 --diff`` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API #1459 discussed adding an alternate constructor (i.e. a class method) to `NetCDF4DataStore` to allow this, which would be my preferred approach rather than making a `filename` polymorphic (via `isinstance`). Unfortunately, alternate constructors only work by taking one set of parameters (or setting defaults) and then passing them to the original constructor. Given that, there's no way to make an alternate constructor without also making the original constructor somehow aware of this functionality--or breaking backwards-compatibility. I'm open to suggestions to the contrary. | 2017-08-16T20:19:01Z | 2017-08-31T22:24:36Z | 2017-08-31T17:18:51Z | 2017-08-31T17:18:51Z | b190501a011f3427ae6a3220d72a8d972cb7c203 | | 0.10 2415632 | 0 | 0e79adcc13dfd6da76a06eca57adda8d18327a33 | 174bad061dc5ac37a4b5e849ad2afa957127745f | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/1508 | |
177593352 | MDExOlB1bGxSZXF1ZXN0MTc3NTkzMzUy | 2016 | closed | 0 | Allow _FillValue and missing_value to differ (Fixes #1749) | dopplershift 221526 | The CF standard permits both values, and them to have different values, so we should not be treating this as an error--just mask out all of them. - [x] Closes #1749 (remove if there is no corresponding issue, which should only be the case for minor changes) - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) | 2018-03-26T23:20:10Z | 2018-04-20T00:35:22Z | 2018-03-31T01:16:00Z | 2018-03-31T01:16:00Z | 1b48ac87905676b18e947951b0cac23e70c7b40e | | | 0 | b2500bf0f43b544b7bbda927b3defe057d3a2d5c | 8e4231a28d8385e95c156f17ccfefeab537f63ed | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/2016 | |
187258871 | MDExOlB1bGxSZXF1ZXN0MTg3MjU4ODcx | 2115 | closed | 0 | Fix docstring formatting for load(). | dopplershift 221526 | Need '::' to introduce a code literal block. This was causing MetPy's doc build to warn (since we inherit AbstractDataStore). | 2018-05-10T17:44:32Z | 2018-05-10T18:24:04Z | 2018-05-10T17:50:00Z | 2018-05-10T17:49:59Z | 6d8ac11ca0a785a6fe176eeca9b735c321a35527 | | | 0 | 36eadf6043459ce7245be152b73c469e26d56b15 | 70e2eb539d2fe33ee1b5efbd5d2476649dea898b | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/2115 | |
188588812 | MDExOlB1bGxSZXF1ZXN0MTg4NTg4ODEy | 2144 | closed | 0 | Add strftime() to datetime accessor | dopplershift 221526 | This matches pandas and makes it possible to pass a datetime dataarray to something expecting to be able to use strftime(). - [x] Closes #2090 - [x] Tests added (for all bug fixes or enhancements) - [x] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API | 2018-05-16T23:37:34Z | 2020-04-23T22:40:41Z | 2019-06-01T03:22:44Z | | 3ac45938aaea6ceca862dcf7f1a45d2f7917759d | | | 0 | de29145a9d090bf12bddccb3c42ad603a87764fe | bb87a9441d22b390e069d0fde58f297a054fd98a | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/2144 | |
323948505 | MDExOlB1bGxSZXF1ZXN0MzIzOTQ4NTA1 | 3367 | closed | 0 | Remove setting of universal wheels | dopplershift 221526 | Universal wheels indicate that one wheel supports Python 2 and 3. This is no longer the case for xarray. This causes builds to generate files with names like xarray-0.13.0-py2.py3-none-any.whl, which can cause pip to incorrectly install the wheel on Python 2 when installing from a list of wheel files. | 2019-10-02T21:15:48Z | 2019-10-05T20:05:58Z | 2019-10-02T21:43:45Z | 2019-10-02T21:43:45Z | dd2b803a28cdf2f36210f5a7a46897cc47584ea6 | | | 0 | 09f0213edec8413424102bf41a4d1928cb19f78b | 21705e61503fb49f000186c0d556e5623bd5ac82 | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/3367 | |
408237253 | MDExOlB1bGxSZXF1ZXN0NDA4MjM3MjUz | 3998 | closed | 0 | Fix handling of abbreviated units like msec | dopplershift 221526 | By default, xarray tries to decode times with pandas and falls back to cftime. This fixes the exception handler to fallback properly in the cases an unhandled abbreviated unit is passed in. An additional item here would be to add support for msec, etc. to xarray's handling, but I wasn't sure the best way to handle that. I'm happy just if things properly fall back to cftime. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API | 2020-04-23T22:43:51Z | 2020-04-24T19:18:00Z | 2020-04-24T07:16:10Z | 2020-04-24T07:16:10Z | 33a66d6380c26a59923922ee11e8ffcf0b4f379f | | | 0 | fa5688cc57ceec271bc142e16dfcf602ec725ebc | c788ee44008cdd65c8b6de40c737f1b28e173496 | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/3998 | |
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
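Because [user], [assignee], [merged_by], [milestone], and [repo] are integer foreign keys, readable output usually comes from joining the companion tables. The query below is a sketch under the assumption that the referenced [users] and [repos] tables carry [login] and [full_name] columns respectively; those column names are not shown on this page.

-- Sketch: resolve the user and repo foreign keys to human-readable labels.
-- Assumes [users]([login]) and [repos]([full_name]) exist in the same database.
select
  pull_requests.number,
  pull_requests.title,
  users.login as author,
  repos.full_name as repository,
  pull_requests.merged_at
from pull_requests
join users on users.id = pull_requests.[user]
join repos on repos.id = pull_requests.repo
where pull_requests.[user] = 221526
order by pull_requests.created_at;

The [idx_pull_requests_user] index defined above is what keeps the [user] = 221526 filter cheap; the other indexes play the same role for the remaining foreign-key columns.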