issue_comments: 1463628223
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/7601#issuecomment-1463628223 | https://api.github.com/repos/pydata/xarray/issues/7601 | 1463628223 | IC_kwDOAMm_X85XPTG_ | 69774 | 2023-03-10T10:54:41Z | 2023-03-10T10:54:41Z | NONE | That, while I can open an xarray dataset as a dask array using chunks, I cannot make "use" of these chunks to get statistics per chunk, right? It's just an efficiency question for the dask.compute() stage, but not an actual way to get statistics per chunk? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 1617395129 |
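
For context, the comment above asks whether the dask chunks created by `xr.open_dataset(..., chunks=...)` can themselves be used to compute per-chunk statistics, rather than only speeding up a whole-array `dask.compute()`. Below is a minimal sketch of one way to do that with `dask.array.map_blocks`; the file name `example.nc` and the variable name `temperature` are hypothetical placeholders, not part of the original discussion.

```python
import numpy as np
import xarray as xr

# Hypothetical example: any netCDF/zarr source opened with chunks= behaves the same way.
ds = xr.open_dataset("example.nc", chunks={"time": 100})
darr = ds["temperature"].data  # the dask array backing the DataArray

def block_mean(block):
    # Reduce one chunk to a single value, keeping the dimensionality so dask
    # can reassemble the per-chunk results into a small "chunk grid" array.
    return np.asarray(block.mean()).reshape((1,) * block.ndim)

# One output element per input chunk; the result's shape equals darr.numblocks.
per_chunk_means = darr.map_blocks(block_mean, chunks=(1,) * darr.ndim, dtype="float64")
print(per_chunk_means.compute())
```

An ordinary reduction such as `ds["temperature"].mean()` still aggregates over the whole array, with chunking only affecting how the work is scheduled, which is the distinction the comment draws; `xarray.DataArray.map_blocks` is an alternative when the per-chunk result should remain an xarray object rather than a plain array of scalars.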