# Allow chunk spec per variable (pydata/xarray#4623)

*Opened 2020-11-30 · state: open · 3 comments*

Say I have a zarr dataset with multiple variables `Foo`, `Bar`, and `Baz` (and potentially many more) over two dimensions, `x` and `y` (potentially more). `Foo` and `Bar` are large 2-d arrays with dims `x, y`; `Baz` is a relatively small 1-d array with dim `y`. I would like to read that dataset with xarray, increasing the chunk size beyond the native zarr chunks along `x` and `y`, but only for `Foo` and `Bar`; I would like to keep the native chunking for `Baz`.

As far as I understand, I would currently do that with the `chunks` parameter to `open_dataset`/`open_zarr`. But if I pass, say, `dict(x=N, y=M)`, that changes the chunking for *all* variables that use those dimensions, which isn't exactly what I need: I need it changed only for `Foo` and `Bar`. Is there a way to do that? Should that be part of the "harmonisation"?

One could imagine xarray accepting a dict of dicts akin to `{var: {dim: chunk_spec}}` to specify chunking for specific variables. Note that calling `rechunk` after reading is not what I want; I would like to specify the chunking at the read op.

_Originally posted by @ravwojdyla in https://github.com/pydata/xarray/issues/4496#issuecomment-732486436_
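To make the proposed `{var: {dim: chunk_spec}}` semantics concrete, here is a minimal sketch of the resolution logic such a parameter implies: variables named in the spec get their per-dimension overrides, everything else keeps its native chunking. The function name `resolve_chunks` and all inputs are hypothetical illustrations, not xarray API.

```python
def resolve_chunks(variable_dims, native_chunks, spec):
    """Resolve a per-variable chunk spec of the form {var: {dim: size}}.

    variable_dims: {var: (dim, ...)} mapping each variable to its dims.
    native_chunks: {dim: size} native (on-disk) chunk sizes.
    spec: {var: {dim: size}} per-variable overrides; variables absent
          from the spec keep their native chunking.
    """
    resolved = {}
    for var, dims in variable_dims.items():
        overrides = spec.get(var, {})
        # Fall back to the native chunk size for any dim not overridden.
        resolved[var] = {d: overrides.get(d, native_chunks[d]) for d in dims}
    return resolved


# Example matching the issue: enlarge chunks for Foo and Bar only.
variable_dims = {"Foo": ("x", "y"), "Bar": ("x", "y"), "Baz": ("y",)}
native_chunks = {"x": 100, "y": 100}
spec = {"Foo": {"x": 1000, "y": 1000}, "Bar": {"x": 1000, "y": 1000}}

print(resolve_chunks(variable_dims, native_chunks, spec))
# Baz keeps its native chunking; Foo and Bar get the larger chunks,
# even though all three share the y dimension.
```

This is exactly the behavior a flat `chunks=dict(x=N, y=M)` cannot express, since a dim-keyed dict necessarily applies to every variable that uses that dimension.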