pydata/xarray issue #4372: Set `allow_rechunk=True` in `apply_ufunc`
State: closed (completed) · Author association: MEMBER · Comments: 10
Opened: 2020-08-24 · Closed: 2020-09-09

**What happened**: `blockwise` calls `unify_chunks` by default but `apply_gufunc` does not, so we have a regression in `apply_ufunc` now that we've switched from `blockwise` to `apply_gufunc`.

**Minimal Complete Verifiable Example**:

```python
import operator

import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(10), dims="a").chunk({"a": 2})
b = xr.DataArray(np.arange(10), dims="a").chunk({"a": 4})

xr.apply_ufunc(operator.add, a, b, dask="parallelized", output_dtypes=[a.dtype]).compute()
```

raises

```
ValueError: Dimension `'__loopdim0__'` with different chunksize present
```

on master but works with 0.16.0.

I think we need to do `dask_gufunc_kwargs.setdefault("allow_rechunk", True)`. If we want to avoid that, we'll need to go through a deprecation cycle.
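Until the default is settled, two user-side workarounds seem possible: unify the chunks before calling `apply_ufunc`, or forward `allow_rechunk` through `dask_gufunc_kwargs` (assuming `apply_ufunc` on master passes that dict on to `apply_gufunc`, as the proposed `setdefault` implies). A minimal sketch under those assumptions:

```python
import operator

import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(10), dims="a").chunk({"a": 2})
b = xr.DataArray(np.arange(10), dims="a").chunk({"a": 4})

# Workaround 1: align the chunking up front so apply_gufunc never sees a mismatch.
a2, b2 = xr.unify_chunks(a, b)
result = xr.apply_ufunc(
    operator.add, a2, b2, dask="parallelized", output_dtypes=[a.dtype]
).compute()

# Workaround 2 (assumes dask_gufunc_kwargs is forwarded to dask's apply_gufunc):
result = xr.apply_ufunc(
    operator.add,
    a,
    b,
    dask="parallelized",
    output_dtypes=[a.dtype],
    dask_gufunc_kwargs={"allow_rechunk": True},
).compute()
```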