html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4663#issuecomment-802732278,https://api.github.com/repos/pydata/xarray/issues/4663,802732278,MDEyOklzc3VlQ29tbWVudDgwMjczMjI3OA==,703554,2021-03-19T10:44:31Z,2021-03-19T10:44:31Z,CONTRIBUTOR,"Thanks @dcherian.
Just to add that if we make progress with supporting indexing with dask arrays, then at some point I think we'll hit a separate issue: xarray will require that the chunk sizes of the indexed arrays be computed, but currently calling the dask array method `compute_chunk_sizes()` is inefficient for n-d arrays. Raised here: https://github.com/dask/dask/issues/7416
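To sketch the idea (this is an illustrative simplification, not the linked implementation): if the boolean mask is 1d, you can compute it once up front and convert it to integer positions, so the result's chunk sizes are known without falling back on dask's generic `compute_chunk_sizes()`. The `compress_dataset` helper name here is hypothetical.

```python
import numpy as np
import dask.array as da
import xarray as xr

def compress_dataset(cond, ds, dim):
    # Hypothetical compress()-style selection along one dimension.
    # Materialize the 1d boolean mask once (cheap for 1d), then index
    # with integer positions, whose length -- and hence the resulting
    # chunk sizes -- is known without compute_chunk_sizes().
    indexer = np.flatnonzero(np.asarray(cond))
    return ds.isel({dim: indexer})

ds = xr.Dataset({"x": (("sample",), da.arange(10, chunks=5))})
mask = da.from_array(np.arange(10) % 2 == 0, chunks=5)
out = compress_dataset(mask, ds, "sample")  # keeps samples 0, 2, 4, 6, 8
```
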
In case anyone needs a workaround for indexing a dataset with a 1d boolean dask array, I'm currently using [this hacked implementation](https://github.com/malariagen/malariagen-data-python/blob/e39dac2404f8f8c37449169bee0f61dd9c6fcb8c/malariagen_data/util.py#L129) of a compress() style function that operates on an xarray dataset, which includes more efficient computation of chunk sizes. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924