pull_requests: 168214895
field | value
---|---
id | 168214895
node_id | MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1
number | 1899
state | closed
locked | 0
title | Vectorized lazy indexing
user | 6815844
created_at | 2018-02-09T11:22:01Z
updated_at | 2018-06-08T01:21:06Z
closed_at | 2018-03-06T22:00:57Z
merged_at | 2018-03-06T22:00:57Z
merge_commit_sha | 54468e1924174a03e7ead3be8545f687f084f4dd
assignee | 
milestone | 
draft | 0
head | 8e967105194d7b4208bcac22127cd0cb01a7a484
base | dc3eebf3a514cfdc1039b63f2a542121d1328ba9
author_association | MEMBER
auto_merge | 
repo | 13221727
url | https://github.com/pydata/xarray/pull/1899
merged_by | 

body:

- [x] Closes #1897
- [x] Tests added (for all bug fixes or enhancements)
- [x] Tests passed (for all non-documentation changes)
- [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

I tried to support lazy vectorized indexing, inspired by #1897. More tests would be necessary, but first I want to decide whether it is worth continuing. My current implementation:

+ For outer/basic indexers, we combine successive indexers (as we are doing now).
+ For vectorized indexers, we store them as-is and apply them sequentially at evaluation time.

The implementation was simpler than I expected, but it has a clear limitation: it requires loading the array before the vectorized indexing is applied (that is, at evaluation time). If we perform vectorized indexing on a large array, performance drops significantly, and this is not noticeable until evaluation. I appreciate any suggestions.
Links from other tables
- 0 rows from pull_requests_id in labels_pull_requests
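The PR body describes the composition strategy only in prose; the following standalone sketch illustrates the idea. It is not the PR's actual code: `LazilyIndexedArraySketch`, `_is_vectorized`, and `evaluate` are hypothetical names, outer-indexer composition is reduced to per-axis index arrays (slices and 1-D integer arrays only, no dimension-dropping scalars), and xarray's real indexer classes are omitted.

```python
import numpy as np


def _is_vectorized(key):
    # Sketch heuristic: a key counts as "vectorized" if it contains an
    # integer array of ndim >= 2, or more than one array (NumPy would
    # broadcast those rather than outer-index them).
    arrays = [k for k in key if isinstance(k, np.ndarray)]
    return any(a.ndim >= 2 for a in arrays) or len(arrays) > 1


class LazilyIndexedArraySketch:
    """Wrap an array; defer all indexing until evaluate() is called."""

    def __init__(self, array, index_arrays=None, pending_vectorized=None):
        self.array = array
        # Outer-indexing state: one small 1-D integer index array per axis.
        # Composing successive outer keys only touches these arrays, never
        # the (possibly huge, possibly on-disk) wrapped data.
        if index_arrays is None:
            index_arrays = [np.arange(n) for n in array.shape]
        self.index_arrays = index_arrays
        # Vectorized keys are stored verbatim, in application order.
        self.pending_vectorized = list(pending_vectorized or [])

    def __getitem__(self, key):
        if not isinstance(key, tuple):
            key = (key,)
        if self.pending_vectorized or _is_vectorized(key):
            # Once a vectorized key is queued, every later key must also
            # be deferred so that application order is preserved.
            return type(self)(self.array, self.index_arrays,
                              self.pending_vectorized + [key])
        # Outer/basic case: pad with full slices, then compose by indexing
        # the per-axis index arrays instead of the data itself.
        key = key + (slice(None),) * (len(self.index_arrays) - len(key))
        new_arrays = [ix[k] for ix, k in zip(self.index_arrays, key)]
        return type(self)(self.array, new_arrays, [])

    def evaluate(self):
        # The composed outer key is applied in one shot via np.ix_ ...
        data = self.array[np.ix_(*self.index_arrays)]
        # ... but the queued vectorized keys can only run on loaded data.
        # This is the limitation the PR body flags: the cost of the load
        # stays hidden until evaluation time.
        for key in self.pending_vectorized:
            data = data[key]
        return data


arr = np.arange(1_000_000).reshape(1000, 1000)
lazy = LazilyIndexedArraySketch(arr)
# Two successive outer indexers are merged without touching `arr`.
lazy = lazy[100:200, ::2][10:20]
# A vectorized (broadcasting) indexer is merely queued here ...
pts = (np.array([[0, 1], [2, 3]]), np.array([[0, 0], [1, 1]]))
lazy = lazy[pts]
# ... and applied only now, after the outer-selected block is loaded.
print(lazy.evaluate().shape)  # (2, 2)
```

Note how `evaluate` materializes the outer-selected block before the first queued vectorized key runs: exactly the trade-off the author describes, where vectorized indexing on a large array forces a load whose cost only becomes visible at evaluation.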