# quantile with Dask arrays

pydata/xarray#3326 · state: closed (completed) · opened 2019-09-20 · closed 2019-11-25

Currently the `quantile` method [raises an exception](https://github.com/pydata/xarray/blob/master/xarray/core/variable.py#L1637) when it encounters a Dask array:

```python
if isinstance(self.data, dask_array_type):
    raise TypeError(
        "quantile does not work for arrays stored as dask "
        "arrays. Load the data via .compute() or .load() "
        "prior to calling this method."
    )
```

I think this is because taking a quantile needs to see all the data along the dimension it's quantile-ing, or because blocked/approximate methods weren't on hand when the feature was added.

Dask arrays where the dimension being quantile-ed is exactly one chunk in extent seem like a special case where no blocked algorithm is needed. The problem with following the exception's suggestion (loading the array into memory) is that "wide and shallow" arrays are too big to load into memory, yet each chunk is statistically independent if the quantile dimension is the "shallow" dimension (see the sketch at the end of this issue).

I'm not necessarily proposing delegating to Dask's quantile (unless it's super easy), but I wanted to explore the special case described above.

Related links:

* https://github.com/pydata/xarray/issues/2999
* https://stackoverflow.com/a/47103407/745557

Thank you!

EDIT: added stackoverflow link
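EDIT 2: for anyone hitting this in the meantime, a minimal sketch of the single-chunk special case. This is an illustration only, not xarray's eventual implementation: the helper name `blockwise_quantile` is made up, and it leans on `xr.apply_ufunc` with `dask="parallelized"`, which requires each core dimension to fit in a single chunk — exactly the special case described above.

```python
import numpy as np
import xarray as xr
import dask.array as dsa  # only used to build the example array

def blockwise_quantile(da, q, dim):
    # Apply np.quantile independently per block. This is only valid
    # when ``dim`` spans a single chunk, so every block sees the full
    # extent of the dimension being reduced.
    return xr.apply_ufunc(
        np.quantile,
        da,
        input_core_dims=[[dim]],      # move ``dim`` to the last axis
        kwargs={"q": q, "axis": -1},  # reduce over that axis
        dask="parallelized",
        output_dtypes=[np.float64],   # np.quantile returns floats
    )

# "Wide and shallow": many points along x/y, few along time, with the
# quantile dimension (time) in one chunk per block.
arr = xr.DataArray(
    dsa.random.random((1000, 1000, 10), chunks=(100, 100, 10)),
    dims=("x", "y", "time"),
)
median = blockwise_quantile(arr, 0.5, "time")  # lazy; one np.quantile per chunk
```

If the shallow dimension is split across chunks, rechunking it to a single chunk first (`arr.chunk({"time": -1})`) should make the same sketch apply without ever loading the whole array.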