html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5179#issuecomment-844486483,https://api.github.com/repos/pydata/xarray/issues/5179,844486483,MDEyOklzc3VlQ29tbWVudDg0NDQ4NjQ4Mw==,1200058,2021-05-19T21:27:17Z,2021-05-19T21:27:17Z,NONE,"fyi, I updated the boolean indexing to support additional or missing dimensions:
https://gist.github.com/Hoeze/96616ef9d179180b0b7de97c97e00a27
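The core idea, as a minimal sketch with made-up names and data (this is not the gist code): stack the dimensions to be flattened into one new dimension and subset it with the flattened boolean mask:
```python
import numpy as np
import xarray as xr

# Toy stand-ins; names, shapes and the mask are invented for illustration.
data = xr.DataArray(np.random.rand(2, 3, 4, 5), dims=(""dim_0"", ""dim_1"", ""dim_2"", ""dim_3""))
mask = xr.DataArray(np.random.rand(2, 3, 4) > 0.5, dims=(""dim_0"", ""dim_1"", ""dim_2""))

# Stack the three masked dimensions into a single ""newdim"" ...
stacked = data.stack(newdim=(""dim_0"", ""dim_1"", ""dim_2""))
# ... and keep only the positions where the flattened mask is True.
keep = np.flatnonzero(mask.stack(newdim=(""dim_0"", ""dim_1"", ""dim_2"")).values)
flat = stacked.isel(newdim=keep)  # dims: (""dim_3"", ""newdim"")
```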
I'm using this on a 4D array of >300 GB to flatten three of the four dimensions, and it works even with 64 GB of RAM.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,860418546
https://github.com/pydata/xarray/issues/5179#issuecomment-821881984,https://api.github.com/repos/pydata/xarray/issues/5179,821881984,MDEyOklzc3VlQ29tbWVudDgyMTg4MTk4NA==,1200058,2021-04-17T20:22:13Z,2021-04-17T20:27:25Z,NONE,"@max-sixty The reason is that my method is basically a special case of point-wise indexing:
http://xarray.pydata.org/en/stable/indexing.html#more-advanced-indexing
You can get the same result by calling:
```python
# core_dim_locs_from_cond comes from the author's gist (linked in the other comment):
# it maps each flattened (core) dimension to its locations under the boolean mask
core_dim_locs = {key: value for key, value in core_dim_locs_from_cond(mask, new_dim_name=""newdim"")}
# pointwise selection with the per-dimension indexers in outliers_subset
data.sel(
    dim_0=outliers_subset[""dim_0""],
    dim_1=outliers_subset[""dim_1""],
    dim_2=outliers_subset[""dim_2""]
)
```
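For reference, here is a self-contained version of that selection with invented data and indexers (``outliers_subset`` above presumably holds one index array per dimension, all sharing the flattened ""newdim"" dimension); since the toy array has no coordinate labels, it uses positional ``isel`` instead of ``sel``:
```python
import numpy as np
import xarray as xr

# Invented stand-ins for ``data`` and ``outliers_subset``.
data = xr.DataArray(np.arange(24).reshape(2, 3, 4), dims=(""dim_0"", ""dim_1"", ""dim_2""))
outliers_subset = {
    ""dim_0"": xr.DataArray([0, 1, 1], dims=""newdim""),
    ""dim_1"": xr.DataArray([2, 0, 1], dims=""newdim""),
    ""dim_2"": xr.DataArray([3, 3, 0], dims=""newdim""),
}
# Vectorized (pointwise) indexing: because all indexers share ""newdim"",
# the result is one value per point, i.e. a 1D array along ""newdim"".
points = data.isel(
    dim_0=outliers_subset[""dim_0""],
    dim_1=outliers_subset[""dim_1""],
    dim_2=outliers_subset[""dim_2""],
)
```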
(Note that you lose chunk information with this method, which is why it is less efficient.)
When you want to select arbitrary items from an N-dimensional array, you can model the result either as a sparse array or by stacking the dimensions, as sketched below.
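A minimal illustration of the stacking route with made-up data (names are invented): after stacking and masking, the surviving MultiIndex levels are effectively the COO coordinates and the values are the COO data:
```python
import numpy as np
import xarray as xr

# Invented 3D array; stack all dimensions, then keep only masked positions.
data = xr.DataArray(np.random.rand(2, 3, 4), dims=(""dim_0"", ""dim_1"", ""dim_2""))
flat = data.stack(newdim=(""dim_0"", ""dim_1"", ""dim_2""))
keep = np.flatnonzero((data > 0.5).stack(newdim=(""dim_0"", ""dim_1"", ""dim_2"")).values)
flat = flat.isel(newdim=keep)

# The MultiIndex level coordinates act as COO coordinates, the values as COO data.
coo_coords = np.stack([flat[""dim_0""].values, flat[""dim_1""].values, flat[""dim_2""].values])
coo_data = flat.values
```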
(OK, as the sketch shows, stacking the dimensions also amounts to a sparse COO encoding...)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,860418546