html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4663#issuecomment-740993933,https://api.github.com/repos/pydata/xarray/issues/4663,740993933,MDEyOklzc3VlQ29tbWVudDc0MDk5MzkzMw==,6130352,2020-12-08T20:38:44Z,2020-12-08T20:39:23Z,NONE,"> I like using our raise_if_dask_computes context since it points out where the compute is happening

Oo nice, great to know about that.

> This looks like a duplicate of #2801. If you agree, can we move the conversation there? Defining a general strategy for handling unknown chunk sizes seems like a good umbrella for it.

I would certainly mention the multiple executions though; that seems somewhat orthogonal.

Have there been prior discussions about the fact that dask doesn't support consecutive slicing operations well (i.e. applying filters one after the other)? I am wondering how far off that is in dask versus simply trying to support the current behavior well. I.e. maybe forcing evaluation of indexer arrays is the practical solution for the foreseeable future, as long as xarray does so only once.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,759709924
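
The comment above concerns dask's unknown chunk sizes after boolean indexing. A minimal sketch of the underlying behavior, using plain `dask.array` rather than xarray internals (the array values and chunk sizes here are illustrative, not from the issue):

```python
import numpy as np
import dask.array as da

# Boolean indexing on a dask array produces chunks of unknown size,
# because the mask's per-chunk selection counts aren't known until compute.
x = da.from_array(np.arange(10), chunks=5)
mask = x % 2 == 0
y = x[mask]
print(y.chunks)  # per-chunk sizes are nan (unknown)

# Applying a second filter needs concrete chunk sizes, so something has
# to evaluate the first mask. compute_chunk_sizes() forces that evaluation
# once, after which further slicing proceeds normally.
y = y.compute_chunk_sizes()
z = y[y > 2]
print(z.compute())
```

This is the trade-off the comment describes: each chained filter either carries unknown chunk sizes forward or triggers an evaluation, so forcing evaluation of indexer arrays is only practical if it happens a single time rather than once per downstream operation.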