pull_requests
3 rows where user = 1610850
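This filtered view corresponds to a straightforward query against the table below; a minimal sketch, assuming direct SQL access to the same SQLite database:

    -- 1610850 is the user id shown in this view (rendered as "jacobtomlinson 1610850")
    select *
    from pull_requests
    where [user] = 1610850
    order by id;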
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
219303594 | MDExOlB1bGxSZXF1ZXN0MjE5MzAzNTk0 | 2449 | closed | 0 | Add 'to_iris' and 'from_iris' to methods Dataset | jacobtomlinson 1610850 | This PR adds `to_iris` and `from_iris` methods to DataSet. I've added this because I frequently find myself writing little list and dictionary comprehensions to pack and unpack both DataSets from DataArrays and Iris CubeLists from Cubes. - [x] Tests added (for all bug fixes or enhancements) - [ ] Tests passed (for all non-documentation changes) - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API | 2018-10-01T09:02:26Z | 2023-09-18T09:33:53Z | 2023-09-18T09:33:53Z | | f4361c448ff3b8b6674d706bd4845ec4827a9adb | | | 0 | 0a9da681024ce31bd04b8e650e9f3438c1d224f8 | d1e4164f3961d7bbb3eb79037e96cae14f7182f8 | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/2449 | |
447356800 | MDExOlB1bGxSZXF1ZXN0NDQ3MzU2ODAw | 4214 | closed | 0 | Add initial cupy tests | jacobtomlinson 1610850 | Added some initial unit tests for cupy. Mainly to create a place for cupy tests to go and to check some basic functionality. I've created a fixture which constructs the dataset from the Toy weather data example and converts the underlying arrays to cupy. Then I've added a test which checks that after calling operations such as `mean` and `groupby` the resulting DataArray is still backed by a cupy array. The main penalty with working on GPUs is accidentally shunting data back and forth between the GPU and system memory. Copying data over the PCI bus is slow compared to the rest of the work so should be avoided. So this first test is checking that we are leaving things on the GPU. Because this data copying is so expensive cupy have intentionally broken the `__array__` method and introduced a `.get` method instead. This means that users have to be explicit in converting back to numpy and copying back to the main memory. Therefore we will need to add some logic to xarray to use `.get` in appropriate situations such as plotting. - [x] Related to #4212 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` | 2020-07-10T10:20:33Z | 2020-07-13T16:32:35Z | 2020-07-13T15:07:45Z | 2020-07-13T15:07:44Z | 52043bc57f20438e8923790bca90b646c82442ad | | | 0 | cff60a17ee7a5e05033711578e28ed94acb73121 | 7bf9df9d75c40bcbf2dd28c47204529a76561a3f | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/4214 | |
450345082 | MDExOlB1bGxSZXF1ZXN0NDUwMzQ1MDgy | 4232 | closed | 0 | Support cupy in as_shared_dtype | jacobtomlinson 1610850 | This implements solution 2 for #4231. cc @quasiben - [x] Closes #4231 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . && flake8` | 2020-07-16T16:52:30Z | 2020-07-27T10:32:48Z | 2020-07-24T20:38:58Z | 2020-07-24T20:38:57Z | b1c7e315e8a18e86c5751a0aa9024d41a42ca5e8 | | | 0 | 70dc244f61163dc5c1fba9a675a19cb654404ee8 | 349c5960f2008099ec99223b005df6552d3f85f9 | CONTRIBUTOR | | xarray 13221727 | https://github.com/pydata/xarray/pull/4232 | |
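As the empty merged_at cell in the first row shows, #2449 was closed without being merged, while #4214 and #4232 were merged. A small sketch of a query that surfaces that distinction, assuming unmerged pull requests store NULL in merged_at:

    select
        number,
        title,
        merged_at is not null as merged  -- 1 when the PR was merged, 0 otherwise
    from pull_requests
    where [user] = 1610850;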
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
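The user, assignee and merged_by columns are foreign keys into a users table, and repo points at a repos table, each backed by one of the indexes above. A hedged sketch of resolving those references, assuming the referenced users table carries a login column (the label rendered next to each user id in the rows above):

    select
        pull_requests.number,
        pull_requests.title,
        users.login            -- assumed label column on the referenced users table
    from pull_requests
    join users on users.id = pull_requests.[user]
    where users.login = 'jacobtomlinson';

The idx_pull_requests_user index defined above is what keeps the user = 1610850 filter on this page cheap.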