issues
3 rows from the issues table where repo = 13221727 (pydata/xarray), type = "pull" and user = 1610850 (jacobtomlinson), sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
365367839 | MDExOlB1bGxSZXF1ZXN0MjE5MzAzNTk0 | 2449 | Add 'to_iris' and 'from_iris' to methods Dataset | jacobtomlinson 1610850 | closed | 0 | | | 7 | 2018-10-01T09:02:26Z | 2023-09-18T09:33:53Z | 2023-09-18T09:33:53Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/2449 | This PR adds … | { "url": "https://api.github.com/repos/pydata/xarray/issues/2449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
658374300 | MDExOlB1bGxSZXF1ZXN0NDUwMzQ1MDgy | 4232 | Support cupy in as_shared_dtype | jacobtomlinson 1610850 | closed | 0 | | | 1 | 2020-07-16T16:52:30Z | 2020-07-27T10:32:48Z | 2020-07-24T20:38:58Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/4232 | This implements solution 2 for #4231. cc @quasiben | { "url": "https://api.github.com/repos/pydata/xarray/issues/4232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
654678508 | MDExOlB1bGxSZXF1ZXN0NDQ3MzU2ODAw | 4214 | Add initial cupy tests | jacobtomlinson 1610850 | closed | 0 | | | 8 | 2020-07-10T10:20:33Z | 2020-07-13T16:32:35Z | 2020-07-13T15:07:45Z | CONTRIBUTOR | | 0 | pydata/xarray/pulls/4214 | Added some initial unit tests for cupy. Mainly to create a place for cupy tests to go and to check some basic functionality. I've created a fixture which constructs the dataset from the Toy weather data example and converts the underlying arrays to cupy. Then I've added a test which checks that after calling operations such as … The main penalty with working on GPUs is accidentally shunting data back and forth between the GPU and system memory. Copying data over the PCI bus is slow compared to the rest of the work so should be avoided. So this first test is checking that we are leaving things on the GPU. Because this data copying is so expensive cupy have intentionally broken the … | { "url": "https://api.github.com/repos/pydata/xarray/issues/4214/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | | xarray 13221727 | pull
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
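For reference, a query along the following lines would reproduce the listing above against this schema. This is a sketch of the filter described at the top of the page, not necessarily the exact SQL Datasette generates from the URL parameters:

```sql
-- Pull requests filed by user 1610850 (jacobtomlinson) against
-- repo 13221727 (pydata/xarray), most recently updated first.
select *
from [issues]
where [repo] = 13221727
  and [type] = 'pull'
  and [user] = 1610850
order by [updated_at] desc;
```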