issue_comments: 720785384
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/4496#issuecomment-720785384 | https://api.github.com/repos/pydata/xarray/issues/4496 | 720785384 | MDEyOklzc3VlQ29tbWVudDcyMDc4NTM4NA== | 35919497 | 2020-11-02T23:32:48Z | 2020-11-03T09:28:48Z | COLLABORATOR | I think we can keep talking here about the xarray chunking interface. It seems that the interface for chunking is a tricky problem in xarray. Several different interfaces are already involved:
- dask
- xarray
- `xr.open_dataset(engine="zarr")`

They are similar, but there are some inconsistencies.

**dask**
The allowed values for chunking in dask are:
- dictionary (or tuple)
- integers > 0
- The allowed values in the dictionary are per-dimension chunk specifications (see the sketch below)

**xarray**

**`xr.open_dataset(engine="zarr")`**
It works as dask, except for a few differences.
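Below is a minimal sketch of the kinds of chunk values discussed above, assuming current xarray/dask behaviour; the store path and dimension names are placeholders.

```python
import xarray as xr

# Placeholder store path and dimension names, purely for illustration.
path = "example.zarr"

# A dictionary gives per-dimension chunk sizes.
ds = xr.open_dataset(path, engine="zarr", chunks={"time": 100})

# An integer > 0 applies the same chunk size along every dimension.
ds = xr.open_dataset(path, engine="zarr", chunks=10)

# -1 means a single chunk along a dimension, "auto" lets dask pick the sizes;
# both can also be used as values inside the dictionary.
ds = ds.chunk(-1)
ds = ds.chunk("auto")
ds = ds.chunk({"time": -1})
```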
Points to be discussed:

1) How to handle the encoded chunking provided by the backend.

Option 1
Maybe the encoded chunking provided by the backend can be seen just as the current on-disk data chunking. According to the dask interface, if in a dictionary the chunks for some dimension are `None` (or the dimension is omitted), the current chunking is kept, which in this case would be the encoded one.

Option 2
We could use a different, new value for the encoded chunks (e.g. a dedicated sentinel value; see the hypothetical sketch below).

2)

@shoyer, @alexamici, @jhamman, @dcherian, @weiji14 suggestions are welcome |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
717410970 |
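To make the two options above concrete, here is a small sketch (the store path is again a placeholder). In current xarray, `chunks={}` requests dask arrays using the engine's preferred chunk sizes, i.e. the encoded/on-disk chunking; the `"encoded"` value is a purely hypothetical illustration of Option 2, not an existing xarray option.

```python
import xarray as xr

# Current xarray: an empty dict asks for dask arrays chunked with the
# engine's preferred (encoded/on-disk) chunk sizes - close to Option 1,
# where the encoded chunking plays the role of the current chunking.
ds = xr.open_dataset("example.zarr", engine="zarr", chunks={})

# Option 2 would instead introduce a dedicated sentinel value, something
# like the hypothetical (non-existent) keyword value below:
# ds = xr.open_dataset("example.zarr", engine="zarr", chunks="encoded")
```

The trade-off is essentially whether the encoded chunking is reached through the existing dictionary/`None` semantics (Option 1) or through an explicit, self-describing value (Option 2).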