issue_comments: 107251624
| field | value |
|---|---|
| html_url | https://github.com/pydata/xarray/issues/411#issuecomment-107251624 |
| issue_url | https://api.github.com/repos/pydata/xarray/issues/411 |
| id | 107251624 |
| node_id | MDEyOklzc3VlQ29tbWVudDEwNzI1MTYyNA== |
| user | 1217238 |
| created_at | 2015-05-31T22:03:15Z |
| updated_at | 2015-05-31T22:03:15Z |
| author_association | MEMBER |
| reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| issue | 83000406 |

body:
I'll definitely add a note to qualify that description in the docs -- sorry you went to all the trouble of writing up the bug report!
The open issue for this is #214, which describes how I currently do "diagonal" style indexing to extract stations (see the sketch below). It preserves all the metadata, but is a little slow if you have a very long list of stations. Note that the example in that first comment will work even with arrays that don't fit into memory if you use dask, which will be in the next xray release, due out today or tomorrow.
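A minimal sketch of the select-and-concat pattern described in #214, with a made-up 2D array and index lists; the names `da`, `x_idx`, `y_idx`, and the `station` dimension are illustrative only:

```python
import numpy as np
import xarray as xr

# Made-up 2D field indexed by (x, y); coordinates are illustrative only.
da = xr.DataArray(
    np.arange(16).reshape(4, 4),
    dims=("x", "y"),
    coords={"x": [10, 20, 30, 40], "y": [1, 2, 3, 4]},
)

# Positional indices of the stations to extract, pairing x_idx[i] with
# y_idx[i] ("diagonal" indexing) rather than taking the outer product.
x_idx = [0, 1, 3]
y_idx = [0, 2, 3]

# Select one station at a time and concatenate along a new dimension.
# This keeps all coordinates and attributes intact, but the Python-level
# loop is what makes it slow for very long station lists.
stations = xr.concat(
    [da.isel(x=i, y=j) for i, j in zip(x_idx, y_idx)],
    dim="station",
)
print(stations.values)  # [ 0  6 15]
```

(Later xarray versions added vectorized indexing, e.g. `da.isel(x=xr.DataArray(x_idx, dims="station"), y=xr.DataArray(y_idx, dims="station"))`, which does the same extraction without the loop.)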
Full NumPy-style fancy indexing is out of scope for xray. It simply allows way too many complex operations for which preserving the metadata is impossible (e.g., you can scramble the elements of a 2D array into arbitrary positions). Moreover, the underlying array indexing operations are only possible to do efficiently if the underlying array is backed by NumPy -- there's no way we'll do that with dask.
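For example, a quick NumPy sketch of the kind of scrambling meant here (the index arrays are arbitrary):

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

# With paired integer index arrays, result[i, j] = a[rows[i, j], cols[i, j]]:
# any element can land at any output position (any permutation is reachable
# this way), so no row/column labels could meaningfully survive.
rows = np.array([[2, 2, 2], [1, 1, 1], [0, 0, 0]])
cols = np.array([[2, 1, 0]] * 3)
scrambled = a[rows, cols]
print(scrambled)
# [[8 7 6]
#  [5 4 3]
#  [2 1 0]]
```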
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
83000406 |