pull_requests
3 rows where user = 327925
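These rows come from filtering the pull_requests table on its user foreign key. A minimal SQL sketch of the equivalent query, using the column names from the schema at the end of this page:

select *
from pull_requests
where [user] = 327925;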
| id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 15556956 | MDExOlB1bGxSZXF1ZXN0MTU1NTY5NTY= | 113 | closed | 0 | Most of Python 3 support | takluyver 327925 | This isn't entirely finished, but I need to stop working on it for a bit, and I think enough of it is ready to be reviewed. The core code is passing its tests; the remaining failures are all in talking to the Scipy and netCDF4 backends. I also have PRs open against Scipy (scipy/scipy#3617) and netCDF4 (Unidata/netcdf4-python#252) to fix bugs I've encountered there. Particular issues that came up: - There were quite a few circular imports. For now, I've fudged these to work rather than trying to reorganise the code. - `isinstance(x, int)` doesn't reliably catch numpy integer types - see e.g. numpy/numpy#2951. I changed several such cases to `isinstance(x, (int, np.integer))`. | 2014-05-06T18:31:56Z | 2014-07-15T20:36:05Z | 2014-05-09T01:39:01Z | 2014-05-09T01:39:01Z | 184fd39c0fa1574a03439998138297bdb193674c |  | 0.1.1 664063 | 0 | 6dbd8910080e9210700501c0ea671cf0dc44d90f | 8d6fbd7f4469ce73ed94cf09602efa0498f9dab6 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/113 |  |
| 314874029 | MDExOlB1bGxSZXF1ZXN0MzE0ODc0MDI5 | 3283 | open | 0 | Add hypothesis test for netCDF4 roundtrip | takluyver 327925 | Part of #1846: add a property-based test for reading & writing netCDF4 files. This is the first time I've played with Hypothesis, but it seems to be working - e.g. I got an error with float16, and the [netCDF docs show](https://www.unidata.ucar.edu/software/netcdf/docs/data_type.html) that 16-bit floats are not a supported data type. However: - This currently only tests a dataset with a single variable - it could be extended to multiple variables if that's useful. - It [looks like](https://www.unidata.ucar.edu/software/netcdf/docs/netcdf_data_set_components.html#Permitted) netCDF4 should support unicode characters, but it failed when I didn't have `max_codepoint=255` in there. I don't know if that's an expected limitation I'm not aware of, or a bug somewhere. But I thought I'd make the test pass for now. | 2019-09-06T09:33:48Z | 2022-11-21T22:45:13Z |  |  | e1c54ec5ff1e8fe66a9e3f55a805414fe11b6480 |  |  | 0 | fbeaf41971afe2b74378d7de4807575d033bdb24 | d1e4164f3961d7bbb3eb79037e96cae14f7182f8 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/3283 |  |
| 314946040 | MDExOlB1bGxSZXF1ZXN0MzE0OTQ2MDQw | 3285 | closed | 0 | Hypothesis tests for roundtrip to & from pandas | takluyver 327925 | Part of #1846: test roundtripping between xarray DataArray & Dataset and pandas Series & DataFrame. I haven't particularly tried to hunt down corner cases (e.g. dataframes with 0 columns), in favour of adding tests that currently pass. But these tests probably form a useful platform if you do want to ensure corner cases like that behave nicely - just modify the limits and see what fails. | 2019-09-06T13:05:13Z | 2020-01-10T16:25:12Z | 2019-10-30T14:28:52Z | 2019-10-30T14:28:52Z | f115ad155067727882b683ca6fa7c231621dc965 |  |  | 0 | 5b0ae82951b099e1b48a6f983dc85225b163d4cc | 43d07b7b1d389a4bfc95c920149f4caa78653e81 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/3285 |  |
CREATE TABLE [pull_requests] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[state] TEXT,
[locked] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[body] TEXT,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[merged_at] TEXT,
[merge_commit_sha] TEXT,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[draft] INTEGER,
[head] TEXT,
[base] TEXT,
[author_association] TEXT,
[auto_merge] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[url] TEXT,
[merged_by] INTEGER REFERENCES [users]([id])
);
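The user, assignee, milestone, repo, and merged_by columns reference other tables. A minimal sketch of resolving two of those references with joins; the users.login and repos.full_name columns are assumptions, since those tables are not shown on this page:

select
    pull_requests.number,
    pull_requests.title,
    users.login as author,          -- assumed column on the referenced users table
    repos.full_name as repository   -- assumed column on the referenced repos table
from pull_requests
join users on users.id = pull_requests.[user]
join repos on repos.id = pull_requests.repo
where pull_requests.[user] = 327925;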
CREATE INDEX [idx_pull_requests_merged_by]
ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
ON [pull_requests] ([user]);
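Each index covers one of the foreign-key columns, so lookups like the user filter on this page can be served by an index search rather than a full table scan. A minimal sketch of checking this in SQLite:

explain query plan
select * from pull_requests where [user] = 327925;
-- expected to report a search using idx_pull_requests_user rather than a scan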