issue_comments: 533190578
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/3323#issuecomment-533190578 | https://api.github.com/repos/pydata/xarray/issues/3323 | 533190578 | MDEyOklzc3VlQ29tbWVudDUzMzE5MDU3OA== | 1217238 | 2019-09-19T15:42:40Z | 2019-09-19T15:42:40Z | MEMBER | I think dask array has some utility functions for "unifying chunks" that we might be able to use inside our map_blocks() function. Potentially we could also make Alternatively, we could enforce matching chunksizes on all dask arrays inside a Dataset, as part of xarray's model of a Dataset as a collection of aligned arrays. But this seems unnecessarily limiting, and I am reluctant to add extra complexity to xarray's data model. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 495869721
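The "unifying chunks" utility the comment refers to lives in dask as `dask.array.core.unify_chunks`, which rechunks several dask arrays to a common chunking before a blockwise operation. As a rough illustration of the idea (not dask's actual implementation), the 1-D case amounts to taking the common refinement of the block boundaries; `unify_1d_chunks` below is a hypothetical helper written for this sketch:

```python
def unify_1d_chunks(*chunkings):
    """Common refinement of several 1-D chunk tuples.

    Each chunking is a tuple of block sizes covering the same total
    length; the result is the coarsest chunking compatible with all
    of them, i.e. every input block boundary becomes a boundary.
    (Illustrative sketch only -- dask's real helper is
    dask.array.core.unify_chunks.)
    """
    total = sum(chunkings[0])
    assert all(sum(c) == total for c in chunkings), "lengths must match"
    # Collect every block boundary position from every chunking.
    boundaries = set()
    for chunks in chunkings:
        pos = 0
        for size in chunks:
            pos += size
            boundaries.add(pos)
    cuts = sorted(boundaries)
    # Convert boundary positions back into block sizes.
    return tuple(b - a for a, b in zip([0] + cuts, cuts))
```

For example, unifying chunkings `(5, 5)` and `(4, 6)` of a length-10 axis yields `(4, 1, 5)`: every block edge from either input is preserved, so any blockwise function sees aligned blocks from both arrays. Enforcing one chunking up front (the alternative the comment rejects) would make this step unnecessary, at the cost of restricting what a Dataset may contain.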