issue_comments: 1122796867
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/6542#issuecomment-1122796867 | https://api.github.com/repos/pydata/xarray/issues/6542 | 1122796867 | IC_kwDOAMm_X85C7IVD | 3698640 | 2022-05-10T19:48:34Z | 2022-05-10T19:49:04Z | CONTRIBUTOR | @jakirkham were you thinking of a reference to the dask docs for more info on optimal chunk sizing and aligning with storage? Or are you suggesting the proposed docs change is too complex? I was trying to address the lack of documentation on specifying chunks within a zarr array for non-dask arrays/coordinates, but also to cover the weedsy (but common) case of datasets with a mix of dask & in-memory arrays/coords, as in my example. I have been frustrated by zarr stores I've written that end up with a couple dozen array chunks and thousands of coordinate chunks for this reason, but it's definitely a gnarly topic to cover concisely :P | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 1221393104 |
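
A minimal sketch of the scenario the comment describes: a dataset mixing a dask-backed data variable with in-memory dimension coordinates, where the coordinates' zarr chunking is controlled explicitly via `encoding` in `Dataset.to_zarr`. The dataset shape, variable names, chunk sizes, and store path are illustrative assumptions, not taken from the PR, and dask is assumed to be installed.

```python
import numpy as np
import xarray as xr

# Hypothetical dataset: the data variable is dask-chunked, but the dimension
# coordinates stay as in-memory numpy arrays (xarray never chunks index coords).
ds = xr.Dataset(
    {"temperature": (("time", "x"), np.random.rand(1000, 500))},
    coords={"time": np.arange(1000), "x": np.arange(500)},
).chunk({"time": 100, "x": 500})

# For the dask-backed variable, zarr chunks follow the dask chunks.
# For the in-memory coordinates, specify zarr chunks explicitly so they are
# not split into many small chunks.
encoding = {
    "time": {"chunks": (1000,)},  # store the whole coordinate in one chunk
    "x": {"chunks": (500,)},
}
ds.to_zarr("example.zarr", mode="w", encoding=encoding)
```
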