issues: 295838143
id: 295838143
node_id: MDExOlB1bGxSZXF1ZXN0MTY4MjE0ODk1
number: 1899
title: Vectorized lazy indexing
user: 6815844
state: closed
locked: 0
comments: 37
created_at: 2018-02-09T11:22:02Z
updated_at: 2018-06-08T01:21:06Z
closed_at: 2018-03-06T22:00:57Z
author_association: MEMBER
draft: 0
pull_request: pydata/xarray/pulls/1899
body:

I tried to support lazy vectorised indexing, inspired by #1897. More tests would be necessary, but first I want to decide whether this is worth continuing. My current implementation:

+ For outer/basic indexers, we combine successive indexers (as we do now).
+ For vectorised indexers, we store them as-is and apply them sequentially at evaluation time.

The implementation was simpler than I expected, but it has a clear limitation: it requires loading the array before the vectorised indexing is applied (that is, at evaluation time). If we apply a vectorised index to a large array, performance drops significantly, and this is not noticeable until evaluation. I would appreciate any suggestions. A sketch of the scheme appears after this record.
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1899/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 } |
13221727 | pull |
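
The two-track scheme described in the PR body is easiest to see in code. Below is a minimal sketch in plain NumPy, not xarray's internal API: `LazilyIndexedArray` and `_compose_slices` are names invented for illustration, slice merging assumes positive steps and non-negative bounds, and the outer/vectorized distinction is coarsened to "pure-slice keys merge, everything else defers".

```python
import numpy as np


def _compose_slices(old, new):
    # Compose two slices so that a[old][new] == a[composed].
    # Assumes positive steps and non-negative, non-wrapping bounds.
    start1, stop1, step1 = old.start or 0, old.stop, old.step or 1
    start2, stop2, step2 = new.start or 0, new.stop, new.step or 1
    start = start1 + start2 * step1
    stop = None if stop2 is None else start1 + stop2 * step1
    if stop1 is not None:
        stop = stop1 if stop is None else min(stop, stop1)
    return slice(start, stop, step1 * step2)


class LazilyIndexedArray:
    """Record indexers lazily: merge slice keys, defer everything else."""

    def __init__(self, array):
        self.array = array
        # One combined basic/outer key: a slice per axis.
        self.outer_key = tuple(slice(None) for _ in array.shape)
        # Vectorized (here: any non-slice) keys, stored in arrival order.
        self.deferred_keys = []

    def __getitem__(self, key):
        if not isinstance(key, tuple):
            key = (key,)
        mergeable = all(isinstance(k, slice) for k in key)
        if self.deferred_keys or not mergeable:
            # Once a vectorized key has been recorded, later keys can no
            # longer be merged; they are replayed in order at evaluation.
            self.deferred_keys.append(key)
        else:
            # Pad with full slices so there is one entry per axis, then
            # fold the new key into the existing combined key.
            key += tuple(slice(None) for _ in range(len(self.outer_key) - len(key)))
            self.outer_key = tuple(
                _compose_slices(old, new) for old, new in zip(self.outer_key, key)
            )
        return self

    def evaluate(self):
        # The limitation noted in the PR body: the outer-indexed block is
        # materialized in full *before* any vectorized key is applied, so
        # vectorized indexing into a large array loads more than it keeps.
        result = np.asarray(self.array[self.outer_key])
        for key in self.deferred_keys:
            result = result[key]
        return result
```

A short usage example of the sketch:

```python
arr = LazilyIndexedArray(np.arange(100).reshape(10, 10))
view = arr[2:8][::2]              # two slice keys merged into one outer key
view = view[np.array([2, 0]), :]  # vectorized key, deferred
print(view.evaluate())            # rows 6 and 2 of the original array
```

Returning `self` from `__getitem__` and mutating in place only keeps the sketch short; xarray's real lazy-indexing wrappers return new objects instead.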