html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4428#issuecomment-710683863,https://api.github.com/repos/pydata/xarray/issues/4428,710683863,MDEyOklzc3VlQ29tbWVudDcxMDY4Mzg2Mw==,2448579,2020-10-16T22:40:50Z,2020-10-16T22:40:50Z,MEMBER,"@TomAugspurger @jbusecke is seeing some funny behaviour in https://github.com/jbusecke/cmip6_preprocessing/issues/58
Here's a reproducer
``` python
import dask
import dask.array  # needed: 'import dask' alone does not load the array submodule
import numpy as np
import xarray as xr

dask.config.set(
    {
        ""array.slicing.split_large_chunks"": True,
        ""array.chunk-size"": ""24 MiB"",
    }
)
da = xr.DataArray(
    dask.array.random.random((10, 1000, 2000), chunks=(-1, -1, 200)),
    dims=[""x"", ""y"", ""time""],
    coords={""x"": [3, 4, 5, 6, 7, 9, 8, 0, 2, 1]},
)
da.sortby(""x"")  # sorting by the unordered 'x' coordinate triggers the rechunking
```


Which is basically
``` python
da.data[np.argsort(da.x.data), ...]
```
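A self-contained sketch of that indexing step with plain dask arrays (same shapes, chunking, and config values as the reproducer above; illustrative only):
``` python
import dask
import dask.array as dsa
import numpy as np

# fancy-index along an axis that is a single chunk, under the same
# config as the reproducer above (illustrative sketch)
with dask.config.set({'array.slicing.split_large_chunks': True,
                      'array.chunk-size': '24 MiB'}):
    arr = dsa.random.random((10, 1000, 2000), chunks=(-1, -1, 200))
    order = np.argsort([3, 4, 5, 6, 7, 9, 8, 0, 2, 1])
    out = arr[order, ...]  # axis 0 is a single chunk of length 10
```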

I don't understand why it's rechunking when we are indexing with a list along a dimension with a single chunk...","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,702646191
https://github.com/pydata/xarray/issues/4428#issuecomment-693475844,https://api.github.com/repos/pydata/xarray/issues/4428,693475844,MDEyOklzc3VlQ29tbWVudDY5MzQ3NTg0NA==,2448579,2020-09-16T15:17:44Z,2020-09-16T15:17:44Z,MEMBER,"This looks like a consequence of https://github.com/dask/dask/pull/6514 . That change helps with cases like https://github.com/pydata/xarray/issues/4112
`sortby` is basically an `isel` indexing operation, so dask automatically rechunks to keep chunks smaller than the default `array.chunk-size`. You could fix this by setting an appropriate value for `array.chunk-size`, either temporarily or permanently:
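A toy sketch of the `sortby`/`isel` equivalence (illustrative values, not from this issue):
``` python
import numpy as np
import xarray as xr

# sortby along 'x' gives the same result as isel with the argsort
# of the 'x' coordinate (toy example)
arr = xr.DataArray(
    np.arange(6).reshape(3, 2),
    dims=['x', 'y'],
    coords={'x': [2, 0, 1]},
)
by_sortby = arr.sortby('x')
by_isel = arr.isel(x=np.argsort(arr.x.values))
```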
``` python
with dask.config.set({""array.chunk-size"": ""256MiB""}): # or appropriate value
...
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,702646191