issue_comments: 364755370
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/1899#issuecomment-364755370 | https://api.github.com/repos/pydata/xarray/issues/1899 | 364755370 | MDEyOklzc3VlQ29tbWVudDM2NDc1NTM3MA== | 6815844 | 2018-02-11T14:25:40Z | 2018-02-11T19:49:04Z | MEMBER | Based on the suggestion, I implemented lazy vectorized indexing with index consolidation. Now every backend is virtually compatible with all the indexer types, i.e. basic, outer, and vectorized indexers. It sometimes consumes a large amount of memory if the indexer cannot be decomposed efficiently, but it is always better than loading the full slice. The drawback is the unpredictability of how much data will be loaded. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 295838143 |
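The index-consolidation idea described in the comment above can be sketched as follows: a vectorized (fancy) indexer is split into an outer indexer of unique positions, which a backend can serve lazily, plus a remapped vectorized indexer applied in memory to the loaded block. This is a simplified illustration, not xarray's internal implementation; the function name `decompose_vectorized` is made up for the example.

```python
import numpy as np

def decompose_vectorized(key, shape):
    """Split a vectorized indexer into (backend_key, memory_key).

    backend_key is an outer indexer (sorted unique positions per axis)
    that a backend can read lazily; memory_key is a vectorized indexer
    that picks the requested elements out of the loaded block.
    Hypothetical helper, not xarray's actual API.
    """
    backend_key = []
    memory_key = []
    for k in key:
        k = np.asarray(k)
        # unique: the only positions the backend must read on this axis;
        # inverse: where each requested element lands inside that block
        unique, inverse = np.unique(k, return_inverse=True)
        backend_key.append(unique)
        memory_key.append(inverse.reshape(k.shape))
    return tuple(backend_key), tuple(memory_key)

# Usage: read a (usually much smaller) outer block, then index in memory.
data = np.arange(100).reshape(10, 10)          # stand-in for on-disk data
vkey = (np.array([7, 2, 7]), np.array([1, 1, 5]))
bkey, mkey = decompose_vectorized(vkey, data.shape)
loaded = data[np.ix_(*bkey)]                   # what the backend would read
result = loaded[mkey]                          # equivalent to data[vkey]
assert np.array_equal(result, data[vkey])
```

This also shows where the memory unpredictability comes from: if the unique positions are spread widely, the outer block `loaded` can be far larger than the final result, though still no larger than the full slice.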