html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/issues/3323#issuecomment-533327468,https://api.github.com/repos/pydata/xarray/issues/3323,533327468,MDEyOklzc3VlQ29tbWVudDUzMzMyNzQ2OA==,2448579,2019-09-19T22:08:45Z,2019-09-19T22:08:45Z,MEMBER,"I agree with not enforcing matching chunk sizes. I've added an ugly version of `Dataset.unify_chunks` in #3276. Feedback welcome!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,495869721 https://github.com/pydata/xarray/issues/3323#issuecomment-533190578,https://api.github.com/repos/pydata/xarray/issues/3323,533190578,MDEyOklzc3VlQ29tbWVudDUzMzE5MDU3OA==,1217238,2019-09-19T15:42:40Z,2019-09-19T15:42:40Z,MEMBER,"I think dask array has some utility functions for ""unifying chunks"" that we might be able to use inside our map_blocks() function. Potentially we could also make `Dataset.chunks` more robust, e.g., have it return `None` for dimensions with inconsistent chunk sizes rather than raising an error. Alternatively, we could enforce matching chunksizes on all dask arrays inside a Dataset, as part of xarray's model of a Dataset as a collection of aligned arrays. But this seems unnecessarily limiting, and I am reluctant to add extra complexity to xarray's data model.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,495869721