issue_comments: 802732278
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/4663#issuecomment-802732278 | https://api.github.com/repos/pydata/xarray/issues/4663 | 802732278 | MDEyOklzc3VlQ29tbWVudDgwMjczMjI3OA== | 703554 | 2021-03-19T10:44:31Z | 2021-03-19T10:44:31Z | CONTRIBUTOR | Thanks @dcherian. Just to add that if we make progress with supporting indexing with dask arrays, then at some point I think we'll hit a separate issue: xarray will require that the chunk sizes of the indexed arrays are computed, but currently calling the dask array method `compute_chunk_sizes()` is inefficient. In case anyone needs a workaround for indexing a dataset with a 1d boolean dask array, I'm currently using this hacked implementation of a compress() style function that operates on an xarray dataset, which includes more efficient computation of chunk sizes. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | 759709924 |
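
The implementation referenced in the comment is not reproduced here. The following is only a minimal sketch of what a compress()-style helper along these lines could look like, assuming a dask-backed Dataset `ds` and a 1-d boolean dask array `cond` whose length matches `ds.sizes[dim]`; the name `compress_dataset` and all details are illustrative, not the implementation the commenter links to.

```python
# Sketch only: a compress()-style selection along one dimension of an
# xarray Dataset, driven by a 1-d boolean dask array, with output chunk
# sizes derived in a single pass instead of per-chunk computes.
import dask.array as da
import numpy as np
import xarray as xr


def compress_dataset(ds: xr.Dataset, cond: da.Array, dim: str) -> xr.Dataset:
    """Keep positions along ``dim`` where ``cond`` is True (illustrative)."""
    dim_chunks = ds.chunks[dim]  # assumes ds is dask-backed along ``dim``

    # Realise the 1-d mask once: it is small relative to the dataset, and a
    # single pass yields both the mask values and the per-chunk True counts,
    # which become the (known) output chunk sizes along ``dim``.
    mask = np.asarray(cond.compute(), dtype=bool)
    counts = tuple(
        int(piece.sum())
        for piece in np.split(mask, np.cumsum(dim_chunks)[:-1])
    )
    mask_dask = da.from_array(mask, chunks=(dim_chunks,))

    new_vars = {}
    for name, var in ds.variables.items():
        if dim not in var.dims:
            new_vars[name] = var
            continue
        axis = var.dims.index(dim)
        if isinstance(var.data, da.Array):
            data = var.data.rechunk({axis: dim_chunks})
            out_ind = tuple(range(data.ndim))
            # Apply np.compress block-by-block; adjust_chunks supplies the
            # known output chunk sizes, so nothing needs compute_chunk_sizes().
            compressed = da.blockwise(
                lambda block, m, axis=axis: np.compress(m, block, axis=axis),
                out_ind,
                data, out_ind,
                mask_dask, (axis,),
                dtype=data.dtype,
                adjust_chunks={axis: counts},
            )
        else:
            # Numpy-backed variables (e.g. index coordinates) are compressed
            # eagerly with the realised mask.
            compressed = np.compress(mask, var.data, axis=axis)
        new_vars[name] = xr.Variable(var.dims, compressed, var.attrs)

    data_vars = {k: v for k, v in new_vars.items() if k not in ds.coords}
    coords = {k: v for k, v in new_vars.items() if k in ds.coords}
    return xr.Dataset(data_vars=data_vars, coords=coords, attrs=ds.attrs)
```

The point of the sketch is that the per-chunk True counts are computed in one pass over the 1-d mask, so the indexed variables come back with known chunk sizes and block boundaries that still line up with the input chunks along `dim`.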