html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2227#issuecomment-1468649950,https://api.github.com/repos/pydata/xarray/issues/2227,1468649950,IC_kwDOAMm_X85XidHe,2448579,2023-03-14T18:49:51Z,2023-03-14T18:54:16Z,MEMBER,"A reproducible example would help, but indexing with dask arrays is a bit limited.
After https://github.com/pydata/xarray/pull/5873 it is possible it will raise an error and ask you to compute the indexer. See also https://github.com/dask/dask/issues/4156
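A minimal sketch of the compute-once pattern (the variable name `Sn` and the sizes here are made up for illustration):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(100.0), dims='time')
sn = da > 42             # stand-in for `Sn`; with dask this would be lazy
idx = np.asarray(sn)     # materialize the indexer once...
out = da.isel(time=idx)  # ...then index with the plain numpy array
```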
EDIT: your slowdown is probably because it's computing `Sn` multiple times. You could speed it up by calling `compute` once and passing a numpy array to `isel`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-1467929278,https://api.github.com/repos/pydata/xarray/issues/2227,1467929278,IC_kwDOAMm_X85XftK-,5637662,2023-03-14T11:32:10Z,2023-03-14T11:32:10Z,CONTRIBUTOR,"I see, they are not the same - the slow one is still a dask array, the other one is not:
```
Sn (r, theta, phi, sampling) float64 dask.array,
Sn (r, theta, phi, sampling) float64 nan nan nan nan ... nan nan nan
```
Otherwise they are the same, so this might be dask related ...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-1464180874,https://api.github.com/repos/pydata/xarray/issues/2227,1464180874,IC_kwDOAMm_X85XRaCK,1217238,2023-03-10T18:04:23Z,2023-03-10T18:04:23Z,MEMBER,"@dschwoerer are you sure that you are actually calculating the same thing in both cases? What _exactly_ do the values of `slc[d]` look like? I would test this on smaller inputs to verify. My guess is that you are inadvertently calculating something different, recalling that Xarray's broadcasting rules differ slightly from NumPy's.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-1463894170,https://api.github.com/repos/pydata/xarray/issues/2227,1463894170,IC_kwDOAMm_X85XQUCa,5637662,2023-03-10T14:36:43Z,2023-03-10T14:36:43Z,CONTRIBUTOR,"I just changed
```
theisel = ds[k].isel(**slc, missing_dims=""ignore"")
```
to:
```
slcp = [slc[d] if d in slc else slice(None) for d in ds[k].dims]
theisel = ds[k].values[tuple(slcp)]
```
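(For intuition, a numpy-only sketch of why this form is fast; the shapes are made up, and the index arrays are 2-d rather than 7-d for brevity:)

```python
import numpy as np

data = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # stand-in for ds[k].values
# integer index arrays broadcast together against each other
i = np.zeros((5, 6), dtype=int)
j = np.ones((5, 6), dtype=int)
k = np.full((5, 6), 2, dtype=int)
result = data[i, j, k]  # vectorized fancy indexing -> shape (5, 6)
```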
And that changed the runtime of my code from (unknown, still running after 3 hours) to around 10 seconds.
`ds[k]` is a 3-dimensional array.
`slc[d]` are 7-d numpy arrays of integers.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-558700154,https://api.github.com/repos/pydata/xarray/issues/2227,558700154,MDEyOklzc3VlQ29tbWVudDU1ODcwMDE1NA==,2448579,2019-11-26T16:08:24Z,2019-11-26T16:08:24Z,MEMBER,"I don't know much about indexing, but that PR propagates a ""new"" indexes property as part of #1603 (work towards enabling more flexible indexing); it doesn't change anything about ""indexing"". I think the dask docs may be more relevant to what you are asking about: https://docs.dask.org/en/latest/array-slicing.html","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-558693816,https://api.github.com/repos/pydata/xarray/issues/2227,558693816,MDEyOklzc3VlQ29tbWVudDU1ODY5MzgxNg==,1200058,2019-11-26T15:54:25Z,2019-11-26T15:54:25Z,NONE,"Hi, I'd like to understand how `isel` works exactly in conjunction with dask arrays.
It seems that #3481 propagates the `isel` operation onto each dask chunk for lazy evaluation. Is this correct?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-533193480,https://api.github.com/repos/pydata/xarray/issues/2227,533193480,MDEyOklzc3VlQ29tbWVudDUzMzE5MzQ4MA==,1217238,2019-09-19T15:49:24Z,2019-09-19T15:49:24Z,MEMBER,"Yes, align checks `index.equals(other)` first, which has a shortcut for the same object.
The real mystery here is why `time_filter.indexes['time']` and `ds.indexes['time']` are not the same object. I guess this is likely due to lazy initialization of indexes, and should be fixed eventually by the explicit indexes refactor.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-533119743,https://api.github.com/repos/pydata/xarray/issues/2227,533119743,MDEyOklzc3VlQ29tbWVudDUzMzExOTc0Mw==,2448579,2019-09-19T13:00:40Z,2019-09-19T13:00:40Z,MEMBER,"I think align tries to optimize that case, so maybe something's also possible there?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-533036570,https://api.github.com/repos/pydata/xarray/issues/2227,533036570,MDEyOklzc3VlQ29tbWVudDUzMzAzNjU3MA==,6213168,2019-09-19T08:57:44Z,2019-09-19T08:57:44Z,MEMBER,"Can we short-circuit the special case where the index of the array used for slicing is the same object as the index being sliced, so no alignment is needed?
```python
>>> time_filter.time._variable is ds.time._variable
True
>>> %timeit xr.align(time_filter, ds.a)
477 ms ± 13.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
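Something like this hypothetical shortcut could do it (a sketch only, not actual xarray internals):

```python
import numpy as np
import xarray as xr

def align_fast(indexer, obj, dim='time'):
    # hypothetical short-circuit: identical index objects need no realignment
    if indexer.indexes[dim] is obj.indexes[dim]:
        return indexer, obj
    return xr.align(indexer, obj)  # fall back to the general path

da = xr.DataArray(np.arange(5.0), dims='time', coords={'time': np.arange(5)})
mask = da > 2
aligned = align_fast(mask, da)
```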
the time spent on that align call could be zero!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-533033540,https://api.github.com/repos/pydata/xarray/issues/2227,533033540,MDEyOklzc3VlQ29tbWVudDUzMzAzMzU0MA==,6213168,2019-09-19T08:49:32Z,2019-09-19T08:49:32Z,MEMBER,"Before #3319:
```
%timeit ds.a.values[time_filter]
158 ms ± 1.14 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit ds.a.isel(time=time_filter.values)
2.57 s ± 3.65 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit ds.a.isel(time=time_filter)
3.12 s ± 37.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
After #3319:
```
%timeit ds.a.values[time_filter]
158 ms ± 2.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit ds.a.isel(time=time_filter.values)
665 ms ± 6.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit ds.a.isel(time=time_filter)
1.15 s ± 1.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
Good job!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-532804542,https://api.github.com/repos/pydata/xarray/issues/2227,532804542,MDEyOklzc3VlQ29tbWVudDUzMjgwNDU0Mg==,1217238,2019-09-18T18:17:22Z,2019-09-18T18:17:22Z,MEMBER,"https://github.com/pydata/xarray/pull/3319 gives us about a 2x performance boost. It could likely be much faster, but at least this fixes the regression.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-532787342,https://api.github.com/repos/pydata/xarray/issues/2227,532787342,MDEyOklzc3VlQ29tbWVudDUzMjc4NzM0Mg==,1217238,2019-09-18T17:33:38Z,2019-09-18T17:33:38Z,MEMBER,"Yes, I'm seeing similar numbers, about 10x slower indexing in a DataArray. This seems to have gotten slower over time. It would be good to track this down and add a benchmark!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-532780068,https://api.github.com/repos/pydata/xarray/issues/2227,532780068,MDEyOklzc3VlQ29tbWVudDUzMjc4MDA2OA==,2448579,2019-09-18T17:14:38Z,2019-09-18T17:14:38Z,MEMBER,"On master I'm seeing
```
%timeit ds.a.isel(time=time_filter)
3.65 s ± 29.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit ds.a.isel(time=time_filter.values)
2.99 s ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit ds.a.values[time_filter]
227 ms ± 6.59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
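For anyone reproducing, a sketch of the assumed setup, scaled down from the issue's example:

```python
import numpy as np
import xarray as xr

n = 1_000_000  # the issue uses 55_000_000; scaled down here
ds = xr.Dataset({'a': ('time', np.random.randn(n))},
                coords={'time': np.arange(n)})
time_filter = ds.time > n // 2        # boolean DataArray indexer
subset = ds.a.isel(time=time_filter)  # selects times 500_001 .. 999_999
```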
Can someone else reproduce?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-454162334,https://api.github.com/repos/pydata/xarray/issues/2227,454162334,MDEyOklzc3VlQ29tbWVudDQ1NDE2MjMzNA==,5635139,2019-01-14T21:09:49Z,2019-01-14T21:09:49Z,MEMBER,"In an effort to reduce the issue backlog, I'll close this, but please reopen if you disagree","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-425224969,https://api.github.com/repos/pydata/xarray/issues/2227,425224969,MDEyOklzc3VlQ29tbWVudDQyNTIyNDk2OQ==,291576,2018-09-27T20:05:05Z,2018-09-27T20:05:05Z,CONTRIBUTOR,"It would be ten files opened via xr.open_mfdataset() and concatenated across a time dimension, each one looking like:
```
netcdf convect_gust_20180301_0000 {
dimensions:
    latitude = 3502 ;
    longitude = 7002 ;
variables:
    double latitude(latitude) ;
        latitude:_FillValue = NaN ;
        latitude:_Storage = ""contiguous"" ;
        latitude:_Endianness = ""little"" ;
    double longitude(longitude) ;
        longitude:_FillValue = NaN ;
        longitude:_Storage = ""contiguous"" ;
        longitude:_Endianness = ""little"" ;
    float gust(latitude, longitude) ;
        gust:_FillValue = NaNf ;
        gust:units = ""m/s"" ;
        gust:description = ""gust winds"" ;
        gust:_Storage = ""chunked"" ;
        gust:_ChunkSizes = 701, 1401 ;
        gust:_DeflateLevel = 8 ;
        gust:_Shuffle = ""true"" ;
        gust:_Endianness = ""little"" ;
// global attributes:
        :start_date = ""03/01/2018 00:00"" ;
        :end_date = ""03/01/2018 01:00"" ;
        :interval = ""half-open"" ;
        :init_date = ""02/28/2018 22:00"" ;
        :history = ""Created 2018-09-12 15:53:44.468144"" ;
        :description = ""Convective Downscaling, format V2.0"" ;
        :_NCProperties = ""version=1|netcdflibversion=4.6.1|hdf5libversion=1.10.1"" ;
        :_SuperblockVersion = 0 ;
        :_IsNetcdf4 = 1 ;
        :_Format = ""netCDF-4"" ;
}
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424945257,https://api.github.com/repos/pydata/xarray/issues/2227,424945257,MDEyOklzc3VlQ29tbWVudDQyNDk0NTI1Nw==,2443309,2018-09-27T03:16:40Z,2018-09-27T03:16:40Z,MEMBER,"@WeatherGod - are you reading data from netCDF files by chance?
If so, can you share the compression/chunk layout for those (`ncdump -h -s file.nc`)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424795330,https://api.github.com/repos/pydata/xarray/issues/2227,424795330,MDEyOklzc3VlQ29tbWVudDQyNDc5NTMzMA==,291576,2018-09-26T17:06:44Z,2018-09-26T17:06:44Z,CONTRIBUTOR,"No, it does not make a difference. The example above peaks at around 5GB of memory (a bit much, but manageable). And it peaks similarly if we chunk it like you suggested.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424549023,https://api.github.com/repos/pydata/xarray/issues/2227,424549023,MDEyOklzc3VlQ29tbWVudDQyNDU0OTAyMw==,1217238,2018-09-26T00:54:24Z,2018-09-26T00:54:24Z,MEMBER,@WeatherGod does adding something like `da = da.chunk({'time': 1})` reproduce this with your example?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424485235,https://api.github.com/repos/pydata/xarray/issues/2227,424485235,MDEyOklzc3VlQ29tbWVudDQyNDQ4NTIzNQ==,291576,2018-09-25T20:14:02Z,2018-09-25T20:14:02Z,CONTRIBUTOR,"Yeah, it looks like if `da` is backed by a dask array and you do `.isel(win=window.compute())` (because `isel` otherwise barfs on dask indexers, it seems), then the memory usage shoots through the roof. Note that in my case, the dask chunks are (1, 3000, 7000). If I do a `window.load()` prior to `window.isel()`, then the memory usage is perfectly reasonable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424479421,https://api.github.com/repos/pydata/xarray/issues/2227,424479421,MDEyOklzc3VlQ29tbWVudDQyNDQ3OTQyMQ==,291576,2018-09-25T19:54:59Z,2018-09-25T19:54:59Z,CONTRIBUTOR,"Just for posterity, though, here is my simplified (working!) example:
```
import numpy as np
import xarray as xr
da = xr.DataArray(np.random.randn(10, 3000, 7000),
dims=('time', 'latitude', 'longitude'))
window = da.rolling(time=2).construct('win')
indexes = window.argmax(dim='win')
result = window.isel(win=indexes)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424477465,https://api.github.com/repos/pydata/xarray/issues/2227,424477465,MDEyOklzc3VlQ29tbWVudDQyNDQ3NzQ2NQ==,291576,2018-09-25T19:48:20Z,2018-09-25T19:48:20Z,CONTRIBUTOR,"Huh, strange... I just tried a simplified version of what I was doing (particularly, no dask arrays), and everything worked fine. I'll have to investigate further.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424473282,https://api.github.com/repos/pydata/xarray/issues/2227,424473282,MDEyOklzc3VlQ29tbWVudDQyNDQ3MzI4Mg==,5635139,2018-09-25T19:35:57Z,2018-09-25T19:35:57Z,MEMBER,@WeatherGod do you have a reproducible example? I'm happy to have a look,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-424470752,https://api.github.com/repos/pydata/xarray/issues/2227,424470752,MDEyOklzc3VlQ29tbWVudDQyNDQ3MDc1Mg==,291576,2018-09-25T19:27:28Z,2018-09-25T19:27:28Z,CONTRIBUTOR,"I am looking into a similar performance issue with isel, but it seems that the issue is that it is creating arrays that are much bigger than needed. For my multidimensional case (time/x/y/window), what should end up only taking a few hundred MB is spiking up to tens of GB of used RAM. Don't know if this might be a possible source of performance issues.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-396725591,https://api.github.com/repos/pydata/xarray/issues/2227,396725591,MDEyOklzc3VlQ29tbWVudDM5NjcyNTU5MQ==,1217238,2018-06-12T20:38:47Z,2018-06-12T20:38:47Z,MEMBER,"My measurements:
```
>>> %timeit ds.a.isel(time=time_filter)
1 loop, best of 3: 906 ms per loop
>>> %timeit ds.a.isel(time=time_filter.values)
1 loop, best of 3: 447 ms per loop
>>> %timeit ds.a.values[time_filter]
10 loops, best of 3: 169 ms per loop
```
Given the size of this gap, I suspect this could be improved with some investigation and profiling, but there is certainly an upper limit on the possible performance gain.
One simple example is that indexing the dataset needs to index both `'a'` and `'time'`, so it's going to be at least twice as slow as only indexing `'a'`. So the second indexing expression `ds.a.isel(time=time_filter.values)` is only `447/(169*2) = 1.32` times slower than the best-case scenario.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-396675613,https://api.github.com/repos/pydata/xarray/issues/2227,396675613,MDEyOklzc3VlQ29tbWVudDM5NjY3NTYxMw==,1197350,2018-06-12T17:45:48Z,2018-06-12T17:45:48Z,MEMBER,"Another part of the matrix of possibilities: it takes about half the time if you pass `time_filter.values` (a numpy array) rather than the `time_filter` DataArray:
```python
%timeit ds.a.isel(time=time_filter.values)
1.3 s ± 67.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-396675417,https://api.github.com/repos/pydata/xarray/issues/2227,396675417,MDEyOklzc3VlQ29tbWVudDM5NjY3NTQxNw==,4180033,2018-06-12T17:45:14Z,2018-06-12T17:45:14Z,NONE,"I am sorry @rabernat and @maxim-lian,
the variable name *time* and the simple example with the greater-than filter are misleading. In general, this is about using a boolean mask via `isel`, and that is very slow. In my code I am not able to use your workaround, since my boolean mask is more complex.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-396662676,https://api.github.com/repos/pydata/xarray/issues/2227,396662676,MDEyOklzc3VlQ29tbWVudDM5NjY2MjY3Ng==,5635139,2018-06-12T17:02:34Z,2018-06-12T17:02:34Z,MEMBER,"@rabernat that's a good solution where it's a slice.
When would it ever need to align a bool array? If you try to pass an array of unequal length, it doesn't work anyway:
```python
In [12]: ds.a.isel(time=time_filter[:-1])
IndexError: Boolean array size 54999999 is used to index array with shape (55000000,).
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890
https://github.com/pydata/xarray/issues/2227#issuecomment-396660606,https://api.github.com/repos/pydata/xarray/issues/2227,396660606,MDEyOklzc3VlQ29tbWVudDM5NjY2MDYwNg==,1197350,2018-06-12T16:55:55Z,2018-06-12T16:55:55Z,MEMBER,"I don't have experience using `isel` with boolean indexing. (Although the docs on [positional indexing](http://xarray.pydata.org/en/latest/indexing.html#positional-indexing) claim it is supported.) My guess is that the time is being spent aligning the indexer with the array, which is unnecessary since you know they are already aligned. Probably not the most efficient pattern for xarray.
Here's how I would recommend writing the query using label-based selection:
```python
%timeit ds.a.sel(time=slice(50_001, None))
117 ms ± 5.29 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
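Both forms should agree when the mask really is a contiguous range; a small equivalence check (setup assumed, scaled down from the issue):

```python
import numpy as np
import xarray as xr

n = 100_000
ds = xr.Dataset({'a': ('time', np.random.randn(n))},
                coords={'time': np.arange(n)})
by_mask = ds.a.isel(time=(ds.time > n // 2).values)
by_label = ds.a.sel(time=slice(n // 2 + 1, None))  # label slices are inclusive
```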
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,331668890