issue_comments: 317465968
html_url: https://github.com/pydata/xarray/pull/1473#issuecomment-317465968
issue_url: https://api.github.com/repos/pydata/xarray/issues/1473
id: 317465968
node_id: MDEyOklzc3VlQ29tbWVudDMxNzQ2NTk2OA==
user: 1217238
created_at: 2017-07-24T15:48:33Z
updated_at: 2017-07-24T15:48:33Z
author_association: MEMBER
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 241578773

body:

With the current logic, we normalize everything into a standard indexer tuple.

In `core/indexing.py`:

```python
class IndexerTuple(tuple):
    """Base class for xarray indexing tuples."""

class BasicIndexer(IndexerTuple):
    """Tuple for basic indexing."""

class OuterIndexer(IndexerTuple):
    """Tuple for outer/orthogonal indexing (.oindex)."""

class VectorizedIndexer(IndexerTuple):
    """Tuple for vectorized indexing (.vindex)."""
```

In `core/variable.py`:

```python
class Variable(...):
    def _broadcast_indexes(self, key):
        # return a BasicIndexer if possible, otherwise an OuterIndexer
        # if possible, and finally a VectorizedIndexer
        ...
```

In adapters for various backends/storage types:

```python
class DaskArrayAdapter(...):
    def __getitem__(self, key):
        if isinstance(key, VectorizedIndexer):
            raise IndexError("dask doesn't yet support vectorized indexing")
        ...
```

This is a little more work at the outset because we have to handle each indexer type in each backend, but it avoids the error-prone broadcasting/un-broadcasting logic.
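The dispatch scheme described in the comment can be illustrated with a minimal, self-contained toy. The indexer class names follow the comment, but `broadcast_indexes`, `ArrayAdapter`, and the classification rules below are simplified stand-ins for illustration, not xarray's actual implementation:

```python
import numpy as np

class IndexerTuple(tuple):
    """Base class for indexing tuples (toy version)."""

class BasicIndexer(IndexerTuple):
    """Tuple containing only ints and slices."""

class OuterIndexer(IndexerTuple):
    """Tuple that may also contain 1-d integer arrays (orthogonal semantics)."""

class VectorizedIndexer(IndexerTuple):
    """Tuple containing broadcastable integer arrays."""

def broadcast_indexes(key):
    """Classify a raw key tuple, preferring the simplest indexer type."""
    if all(isinstance(k, (int, slice)) for k in key):
        return BasicIndexer(key)
    arrays = [np.asarray(k) for k in key if not isinstance(k, (int, slice))]
    if all(a.ndim == 1 for a in arrays):
        return OuterIndexer(key)
    return VectorizedIndexer(key)

class ArrayAdapter:
    """Backend adapter that handles each indexer type explicitly."""
    def __init__(self, array):
        self.array = array

    def __getitem__(self, key):
        if isinstance(key, VectorizedIndexer):
            # mirrors the DaskArrayAdapter example from the comment
            raise IndexError("this backend doesn't support vectorized indexing")
        if isinstance(key, OuterIndexer):
            # outer/orthogonal indexing: apply one axis at a time, from the
            # last axis to the first so axis numbering stays valid even when
            # an integer index drops a dimension
            out = self.array
            for axis in reversed(range(len(key))):
                out = out[(slice(None),) * axis + (key[axis],)]
            return out
        # BasicIndexer: plain numpy indexing
        return self.array[tuple(key)]
```

A backend that does support vectorized indexing would simply add a branch for `VectorizedIndexer` instead of raising; the point of the design is that each backend states explicitly which indexer types it can handle.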