html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1897#issuecomment-364632449,https://api.github.com/repos/pydata/xarray/issues/1897,364632449,MDEyOklzc3VlQ29tbWVudDM2NDYzMjQ0OQ==,2443309,2018-02-10T07:22:08Z,2018-02-10T07:22:08Z,MEMBER,"@fujiisoup - thanks for jumping in here so quickly, I really appreciate it. I'll give your PR review and try to weigh in on the design as soon as possible. ","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,295621576
https://github.com/pydata/xarray/issues/1897#issuecomment-364298613,https://api.github.com/repos/pydata/xarray/issues/1897,364298613,MDEyOklzc3VlQ29tbWVudDM2NDI5ODYxMw==,6815844,2018-02-09T00:45:15Z,2018-02-09T01:11:34Z,MEMBER,"> Or we could switch LazilyIndexedArray to store a chain of successive indexing operations, but that would potentially have non-desirable performance implications.
I think we can store the chain of successive indexing operations and apply them sequentially at evaluation time.
But I am wondering whether this has any advantage over eager indexing.
(The total computation cost would be the same?)
A workaround would be to handle outer/basic indexers and vectorized indexers separately,
i.e., we can combine successive outer/basic indexers as we do now, and store the vectorized indexers and apply them at evaluation time.
This would retain some benefit from lazy indexing (if we can assume vectorized indexing is not so frequent). ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,295621576
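The alternative sketched in the comment above (store the chain of indexing operations and replay it at evaluation time) could look roughly like the following toy class. `ChainedLazyArray` is a hypothetical illustration of the idea, not xarray's actual `LazilyIndexedArray`:

```python
import numpy as np

class ChainedLazyArray:
    """Toy sketch: record successive indexing operations and apply
    them sequentially only when the result is evaluated."""

    def __init__(self, array, ops=()):
        self.array = array
        self.ops = ops  # tuple of deferred indexing keys

    def __getitem__(self, key):
        # Defer: append the key to the chain instead of indexing now.
        return ChainedLazyArray(self.array, self.ops + (key,))

    def evaluate(self):
        # Replay the recorded chain eagerly, in order.
        out = self.array
        for key in self.ops:
            out = out[key]
        return out

# A basic slice followed by a vectorized (integer-array) indexer:
lazy = ChainedLazyArray(np.arange(10))[2:8][np.array([0, 3])]
print(lazy.evaluate())  # [2 5]
```

As the comment notes, this makes any indexer sequence representable lazily, but the total computation at evaluation time is the same as eager indexing would have been.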
https://github.com/pydata/xarray/issues/1897#issuecomment-364294048,https://api.github.com/repos/pydata/xarray/issues/1897,364294048,MDEyOklzc3VlQ29tbWVudDM2NDI5NDA0OA==,1217238,2018-02-09T00:20:16Z,2018-02-09T00:20:16Z,MEMBER,"In the current version of LazilyIndexedArray, we combine successive indexers into a single indexer, e.g., `array[k1][k2]` -> `array[k3]`, without evaluating the array.
Someone would need to think through how to do this for successive `VectorizedIndexer` objects (all other indexers can be converted into `VectorizedIndexer` objects if needed). Or we could switch `LazilyIndexedArray` to store a chain of successive indexing operations, but that would potentially have non-desirable performance implications.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,295621576
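The `array[k1][k2]` -> `array[k3]` combination described above can be sketched in plain NumPy terms. `compose_slices` is a hypothetical helper, not an xarray function; it covers the easy cases (basic slices, and successive 1-D integer-array indexers, which compose as `k1[k2]`), while composing general multi-dimensional `VectorizedIndexer` objects is the hard part this comment refers to:

```python
import numpy as np

def compose_slices(outer, inner, length):
    # Compose array[outer][inner] into one slice so that
    # array[compose_slices(outer, inner, len(array))] gives the same result.
    start1, stop1, step1 = outer.indices(length)
    n1 = len(range(start1, stop1, step1))   # length of the first result
    start2, stop2, step2 = inner.indices(n1)
    new_start = start1 + start2 * step1
    new_stop = start1 + stop2 * step1
    new_step = step1 * step2
    return slice(new_start, None if new_stop < 0 else new_stop, new_step)

arr = np.arange(50)
s1, s2 = slice(2, 18, 2), slice(1, 7, 3)
assert np.array_equal(arr[s1][s2], arr[compose_slices(s1, s2, arr.size)])

# Successive 1-D integer-array ("vectorized") indexers compose by indexing
# the first key with the second: array[k1][k2] == array[k1[k2]].
k1, k2 = np.array([5, 10, 20, 40]), np.array([0, 3])
assert np.array_equal(arr[k1][k2], arr[k1[k2]])
```

This is only a sketch under simple assumptions (positive steps, 1-D keys); the open design question is doing the same for mixed, multi-dimensional indexer chains.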
https://github.com/pydata/xarray/issues/1897#issuecomment-364292377,https://api.github.com/repos/pydata/xarray/issues/1897,364292377,MDEyOklzc3VlQ29tbWVudDM2NDI5MjM3Nw==,2443309,2018-02-09T00:11:42Z,2018-02-09T00:11:42Z,MEMBER,"I'd be interested in discussing what it would take to ""Support vectorized indexing with LazilyIndexedArray"". If it is not possible, then yes, we should improve the error message.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,295621576
https://github.com/pydata/xarray/issues/1897#issuecomment-364281173,https://api.github.com/repos/pydata/xarray/issues/1897,364281173,MDEyOklzc3VlQ29tbWVudDM2NDI4MTE3Mw==,6815844,2018-02-08T23:13:40Z,2018-02-08T23:13:40Z,MEMBER,"I do not yet fully understand this part, but I guess `cache=True` implies loading all the data into memory while still indexing lazily?
Is it reasonable to convert this directly to np.ndarray?
Or, if it is not, the options would be:
+ Just improve the error message
+ Support vectorized indexing with `LazilyIndexedArray` (needs work)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,295621576
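For context on why vectorized indexing is the sticking point in this thread: outer/basic indexing selects a cross-product of the index arrays (and is what `LazilyIndexedArray` already combines lazily), while vectorized indexing pairs the index arrays element-wise. A quick NumPy illustration of the difference:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
rows, cols = np.array([0, 2]), np.array([1, 3])

# Vectorized (pointwise) indexing: pairs (0, 1) and (2, 3) -> shape (2,)
pointwise = arr[rows, cols]

# Outer indexing: cross-product of rows and cols -> shape (2, 2)
outer = arr[np.ix_(rows, cols)]

print(pointwise)  # [ 1 11]
print(outer)      # [[ 1  3]
                  #  [ 9 11]]
```

Because a pointwise selection cannot in general be re-expressed as slices or a cross-product, successive vectorized indexers are harder to merge into the single combined indexer that the lazy machinery stores.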