issue_comments: 364298613
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1897#issuecomment-364298613 | https://api.github.com/repos/pydata/xarray/issues/1897 | 364298613 | MDEyOklzc3VlQ29tbWVudDM2NDI5ODYxMw== | 6815844 | 2018-02-09T00:45:15Z | 2018-02-09T01:11:34Z | MEMBER | I think we can store the chain of successive indexing operations and apply them sequentially at evaluation time. But I wonder whether this has any advantage over eager indexing (the total computation cost would be the same?). A workaround would be to handle outer/basic indexers and vectorized indexers separately, i.e., combine successive outer/basic indexers as we do now, but store the vectorized indexers and apply them at evaluation time. That would retain some of the benefit of lazy indexing (assuming vectorized indexing is not too frequent). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 295621576 |
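The scheme the comment proposes can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names are invented, not xarray's actual implementation, and it handles only a 1-D array): successive outer/basic indexers are folded eagerly into a single integer index array, while vectorized indexers are stored and applied only at evaluation time.

```python
import numpy as np

class LazyIndexedArray:
    """Hypothetical sketch of the proposed indexing scheme (1-D only).

    Outer/basic indexers are combined eagerly into one index array;
    vectorized (fancy) indexers are deferred until evaluation.
    """

    def __init__(self, array):
        self.array = array
        # Accumulated outer/basic indexing, folded into one index array.
        self.outer = np.arange(array.shape[0])
        # Vectorized indexers stored for lazy application.
        self.deferred = []

    def index(self, key):
        if self.deferred:
            # Once a vectorized indexer is pending, later keys are
            # deferred too, preserving application order.
            self.deferred.append(key)
        elif isinstance(key, slice):
            # Combine a basic indexer into the accumulated outer index.
            self.outer = self.outer[key]
        else:
            # First vectorized indexer: store it, do not apply yet.
            self.deferred.append(np.asarray(key))
        return self

    def evaluate(self):
        # Apply the combined outer index once, then the deferred
        # vectorized indexers sequentially.
        result = self.array[self.outer]
        for key in self.deferred:
            result = result[key]
        return result

arr = LazyIndexedArray(np.arange(10))
arr.index(slice(2, 9)).index(slice(0, 5))  # combined eagerly
arr.index(np.array([4, 0, 2]))             # deferred
print(arr.evaluate())  # -> [6 2 4]
```

As the comment notes, the payoff depends on vectorized indexing being rare: as long as only slices arrive, no data is touched and the stored state stays one small index array.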