issue_comments: 364573328
html_url: https://github.com/pydata/xarray/pull/1899#issuecomment-364573328
issue_url: https://api.github.com/repos/pydata/xarray/issues/1899
id: 364573328
node_id: MDEyOklzc3VlQ29tbWVudDM2NDU3MzMyOA==
user: 6815844
created_at: 2018-02-09T21:28:26Z
updated_at: 2018-02-09T21:28:26Z
author_association: MEMBER
body:

Thanks, @shoyer

Do you think it is possible to consolidate …

I am wondering what computation cost we want to avoid by the lazy indexing:

1. The indexing itself is expensive, so we want to minimize the number of indexing operations?
2. The original data is too large to fit into memory, and we want to load only the smallest subset of the original array by the lazy indexing?

If reason 2 is the common case, I think it is not a good idea to consolidate all the lazy indexing as … (see the sketch below).

I am also wondering whether, as pointed out in #1725, what I am doing now is already implemented in dask.
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: 295838143
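
The tradeoff weighed in the comment body above is whether successive lazy indexers should be consolidated into one. Below is a minimal NumPy sketch of that composition idea, using a hypothetical `compose_indexers` helper; this is an editor's illustration of the concept, not xarray's internal API.

```python
import numpy as np

# Hypothetical helper (not xarray's internal API): fold two successive
# integer-array indexers along one axis into a single equivalent indexer.
def compose_indexers(first, second):
    """Return one indexer `combined` such that arr[combined] == arr[first][second]."""
    return np.asarray(first)[np.asarray(second)]

arr = np.arange(1_000_000)           # stand-in for a large on-disk array
first = np.arange(0, 1_000_000, 2)   # lazy step 1: every other element
second = np.array([1, 3, 5])         # lazy step 2: three of those

combined = compose_indexers(first, second)
assert np.array_equal(arr[first][second], arr[combined])
print(combined)  # [ 2  6 10] -- only these three elements need loading
```

Because the two indexers compose into one, a lazy backend could read just three elements from storage instead of materializing the 500,000-element intermediate result, which is the behaviour reason 2 cares about.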