html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4663#issuecomment-802953719,https://api.github.com/repos/pydata/xarray/issues/4663,802953719,MDEyOklzc3VlQ29tbWVudDgwMjk1MzcxOQ==,2448579,2021-03-19T16:23:32Z,2021-03-19T16:23:32Z,MEMBER,"> currently calling the dask array method compute_chunk_sizes() is inefficient for n-d arrays. Raised here: dask/dask#7416

Ouch, thanks for raising that issue.

> I'm currently using this hacked implementation of a compress() style function that operates on an xarray dataset, which includes more efficient computation of chunk sizes.

I think we'd be open to adding a `.compress` for Dataset, DataArray, & Variable without the chunks hack :)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-802732278,https://api.github.com/repos/pydata/xarray/issues/4663,802732278,MDEyOklzc3VlQ29tbWVudDgwMjczMjI3OA==,703554,2021-03-19T10:44:31Z,2021-03-19T10:44:31Z,CONTRIBUTOR,"Thanks @dcherian. Just to add that if we make progress with supporting indexing with dask arrays then at some point I think we'll hit a separate issue: xarray will require that the chunk sizes of the indexed arrays are computed, but currently calling the dask array method `compute_chunk_sizes()` is inefficient for n-d arrays. Raised here: https://github.com/dask/dask/issues/7416

In case anyone needs a workaround for indexing a dataset with a 1d boolean dask array, I'm currently using [this hacked implementation](https://github.com/malariagen/malariagen-data-python/blob/e39dac2404f8f8c37449169bee0f61dd9c6fcb8c/malariagen_data/util.py#L129) of a compress() style function that operates on an xarray dataset, which includes more efficient computation of chunk sizes.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-802058819,https://api.github.com/repos/pydata/xarray/issues/4663,802058819,MDEyOklzc3VlQ29tbWVudDgwMjA1ODgxOQ==,2448579,2021-03-18T16:15:19Z,2021-03-18T16:15:19Z,MEMBER,"I would start by trying to fix

``` python
import dask.array as da
import numpy as np
import xarray as xr

from xarray.tests import raise_if_dask_computes

with raise_if_dask_computes(max_computes=0):
    ds = xr.Dataset(
        dict(
            a=(""x"", da.from_array(np.random.randint(0, 100, 100))),
            b=((""x"", ""y""), da.random.random((100, 10))),
        )
    )
    ds.b.sel(x=ds.a.data)
```

specifically this `np.asarray` call

```
~/work/python/xarray/xarray/core/dataset.py in _validate_indexers(self, indexers, missing_dims)
   1960                     yield k, np.empty((0,), dtype=""int64"")
   1961                 else:
-> 1962                     v = np.asarray(v)
   1963
   1964                     if v.dtype.kind in ""US"":
```

Then the next issue is the multiple computes that happen when we pass a `DataArray` instead of a `dask.array` (`ds.b.sel(x=ds.a)`).

In general, I guess we can only support this for unindexed dimensions, since we wouldn't know what index labels to use without computing the indexer array.

This seems to be the case in that colab notebook.
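
For illustration, here's a rough sketch of the kind of guard I mean (`coerce_indexer` is a hypothetical helper, not existing xarray API; the check uses `is_dask_collection` from `dask.base`):

``` python
import dask.array as da
import numpy as np
from dask.base import is_dask_collection

def coerce_indexer(v):
    # hypothetical helper, not xarray API: only coerce non-dask indexers,
    # so a lazy dask indexer passes through without being computed
    return v if is_dask_collection(v) else np.asarray(v)

mask = da.random.random(100) > 0.5  # lazy 1d boolean indexer
idx = coerce_indexer(mask)          # stays lazy: no compute is triggered
assert is_dask_collection(idx)
```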
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-802050621,https://api.github.com/repos/pydata/xarray/issues/4663,802050621,MDEyOklzc3VlQ29tbWVudDgwMjA1MDYyMQ==,2448579,2021-03-18T16:05:21Z,2021-03-18T16:05:36Z,MEMBER,"From @alimanfoo in #5054
-------------------------

I have a dataset comprising several variables. All variables are dask arrays (e.g., backed by zarr). I would like to use one of these variables, which is a 1d boolean array, to index the other variables along a single large dimension. The boolean indexing array is ~40 million items long, with ~20 million true values.

If I do this all via dask (i.e., not using xarray) then I can index one dask array with another dask array via fancy indexing. The indexing array is not loaded into memory or computed. If I need to know the shape and chunks of the resulting arrays I can call compute_chunk_sizes(), but still very little memory is required.

If I do this via xarray.Dataset.isel() then a substantial amount of memory (several GB) is allocated during isel() and retained. This is problematic, as in a real-world use case there are many arrays to be indexed and memory runs out on standard systems.

There is a follow-on issue: if I then want to run a computation over one of the indexed arrays, and the indexing was done via xarray, that leads to a further blow-up of multiple GB of memory usage when using a dask distributed cluster.

I think the underlying issue here is that the indexing array is loaded into memory, and then gets copied multiple times when the dask graph is constructed. If using a distributed scheduler, further copies get made during scheduling of any subsequent computation.

I made a notebook which illustrates the increased memory usage during Dataset.isel() here: [colab.research.google.com/drive/1bn7Sj0An7TehwltWizU8j_l2OvPeoJyo?usp=sharing](https://colab.research.google.com/drive/1bn7Sj0An7TehwltWizU8j_l2OvPeoJyo?usp=sharing)

This is possibly the same underlying issue (and use case) as raised by @eric-czech in #4663, so feel free to close this if you think it's a duplicate.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-740993933,https://api.github.com/repos/pydata/xarray/issues/4663,740993933,MDEyOklzc3VlQ29tbWVudDc0MDk5MzkzMw==,6130352,2020-12-08T20:38:44Z,2020-12-08T20:39:23Z,NONE,"> I like using our raise_if_dask_computes context since it points out where the compute is happening

Oo nice, great to know about that.

> This looks like a duplicate of #2801. If you agree, can we move the conversation there?

Defining a general strategy for handling unknown chunk sizes seems like a good umbrella for it. I would certainly mention the multiple executions though; that seems somewhat orthogonal.

Have there been prior discussions about the fact that dask doesn't support consecutive slicing operations well (i.e. applying filters one after the other)? I am wondering what the thinking is on how far off that is in dask vs simply trying to support the current behavior well. I.e. forcing evaluation of indexer arrays may be the practical solution for the foreseeable future, provided xarray only does so once.
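
Something like this is what I have in mind (a rough sketch with illustrative names; the toy `ds` mirrors the examples above, and this is a user-side workaround, not a proposed API):

``` python
import dask.array as da
import xarray as xr

ds = xr.Dataset(dict(
    a=('x', da.random.random(100) > 0.5),        # lazy 1d boolean mask
    b=(('x', 'y'), da.random.random((100, 10))),
))

mask = ds.a.compute()     # evaluate the lazy indexer exactly once
subset = ds.isel(x=mask)  # numpy-backed mask: no repeated computes
```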
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-740958441,https://api.github.com/repos/pydata/xarray/issues/4663,740958441,MDEyOklzc3VlQ29tbWVudDc0MDk1ODQ0MQ==,2448579,2020-12-08T20:09:27Z,2020-12-08T20:09:27Z,MEMBER,I think the solution is to handle this case (dask-backed DataArray) in `_isel_fancy` in `dataset.py`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-740941441,https://api.github.com/repos/pydata/xarray/issues/4663,740941441,MDEyOklzc3VlQ29tbWVudDc0MDk0MTQ0MQ==,2448579,2020-12-08T19:58:51Z,2020-12-08T20:02:55Z,MEMBER,"Thanks for the great example!

This looks like a duplicate of https://github.com/pydata/xarray/issues/2801. If you agree, can we move the conversation there?

I like using our `raise_if_dask_computes` context since it points out where the compute is happening

``` python
import dask.array as da
import numpy as np
import xarray as xr

from xarray.tests import raise_if_dask_computes

# Use a custom array type to know when data is being evaluated
class Array():
    def __init__(self, x):
        self.shape = (x.shape[0],)
        self.ndim = x.ndim
        self.dtype = 'bool'
        self.x = x

    def __getitem__(self, idx):
        if idx[0].stop > 0:
            print('Evaluating')
        return (self.x > .5).__getitem__(idx)

with raise_if_dask_computes(max_computes=1):
    ds = xr.Dataset(dict(
        a=('x', da.from_array(Array(np.random.rand(100)))),
        b=(('x', 'y'), da.random.random((100, 10))),
        c=(('x', 'y'), da.random.random((100, 10))),
        d=(('x', 'y'), da.random.random((100, 10))),
    ))
    ds.sel(x=ds.a)
```

```python
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
     26         d=(('x', 'y'), da.random.random((100, 10))),
     27     ))
---> 28     ds.sel(x=ds.a)

/project/mrgoodbar/dcherian/python/xarray/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2211             self, indexers=indexers, method=method, tolerance=tolerance
   2212         )
-> 2213         result = self.isel(indexers=pos_indexers, drop=drop)
   2214         return result._overwrite_indexes(new_indexes)
   2215

/project/mrgoodbar/dcherian/python/xarray/xarray/core/dataset.py in isel(self, indexers, drop, missing_dims, **indexers_kwargs)
   2058         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, ""isel"")
   2059         if any(is_fancy_indexer(idx) for idx in indexers.values()):
-> 2060             return self._isel_fancy(indexers, drop=drop, missing_dims=missing_dims)
   2061
   2062         # Much faster algorithm for when all indexers are ints, slices, one-dimensional

/project/mrgoodbar/dcherian/python/xarray/xarray/core/dataset.py in _isel_fancy(self, indexers, drop, missing_dims)
   2122                 indexes[name] = new_index
   2123             elif var_indexers:
-> 2124                 new_var = var.isel(indexers=var_indexers)
   2125             else:
   2126                 new_var = var.copy(deep=False)

/project/mrgoodbar/dcherian/python/xarray/xarray/core/variable.py in isel(self, indexers, missing_dims, **indexers_kwargs)
   1118
   1119         key = tuple(indexers.get(dim, slice(None)) for dim in self.dims)
-> 1120         return self[key]
   1121
   1122     def squeeze(self, dim=None):

/project/mrgoodbar/dcherian/python/xarray/xarray/core/variable.py in __getitem__(self, key)
    766         array `x.values` directly.
    767         """"""
--> 768         dims, indexer, new_order = self._broadcast_indexes(key)
    769         data = as_indexable(self._data)[indexer]
    770         if new_order:

/project/mrgoodbar/dcherian/python/xarray/xarray/core/variable.py in _broadcast_indexes(self, key)
    625                 dims.append(d)
    626         if len(set(dims)) == len(dims):
--> 627             return self._broadcast_indexes_outer(key)
    628
    629         return self._broadcast_indexes_vectorized(key)

/project/mrgoodbar/dcherian/python/xarray/xarray/core/variable.py in _broadcast_indexes_outer(self, key)
    680                 k = k.data
    681             if not isinstance(k, BASIC_INDEXING_TYPES):
--> 682                 k = np.asarray(k)
    683                 if k.size == 0:
    684                     # Slice by empty list; numpy could not infer the dtype

~/miniconda3/envs/dcpy_old_dask/lib/python3.7/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
     83
     84     """"""
---> 85     return array(a, dtype, copy=False, order=order)
     86
     87

~/miniconda3/envs/dcpy_old_dask/lib/python3.7/site-packages/dask/array/core.py in __array__(self, dtype, **kwargs)
   1374
   1375     def __array__(self, dtype=None, **kwargs):
-> 1376         x = self.compute()
   1377         if dtype and x.dtype != dtype:
   1378             x = x.astype(dtype)

~/miniconda3/envs/dcpy_old_dask/lib/python3.7/site-packages/dask/base.py in compute(self, **kwargs)
    165         dask.base.compute
    166         """"""
--> 167         (result,) = compute(self, traverse=False, **kwargs)
    168         return result
    169

~/miniconda3/envs/dcpy_old_dask/lib/python3.7/site-packages/dask/base.py in compute(*args, **kwargs)
    450         postcomputes.append(x.__dask_postcompute__())
    451
--> 452     results = schedule(dsk, keys, **kwargs)
    453     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    454

/project/mrgoodbar/dcherian/python/xarray/xarray/tests/__init__.py in __call__(self, dsk, keys, **kwargs)
    112             raise RuntimeError(
    113                 ""Too many computes. Total: %d > max: %d.""
--> 114                 % (self.total_computes, self.max_computes)
    115             )
    116         return dask.get(dsk, keys, **kwargs)

RuntimeError: Too many computes. Total: 2 > max: 1.
```

So here it looks like we don't support indexing by dask arrays, so as we loop through the dataset the `indexer` array gets computed each time. As you say, this should be avoided.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
https://github.com/pydata/xarray/issues/4663#issuecomment-740945474,https://api.github.com/repos/pydata/xarray/issues/4663,740945474,MDEyOklzc3VlQ29tbWVudDc0MDk0NTQ3NA==,2448579,2020-12-08T20:01:38Z,2020-12-08T20:01:38Z,MEMBER,"I commented too soon.

```
ds.sel(x=ds.a.data)
```

only computes once (!), so there's possibly something else going on.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924