issue_comments: 667299944
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/4155#issuecomment-667299944 | https://api.github.com/repos/pydata/xarray/issues/4155 | 667299944 | MDEyOklzc3VlQ29tbWVudDY2NzI5OTk0NA== | 1005109 | 2020-07-31T18:55:48Z | 2020-07-31T18:55:48Z | CONTRIBUTOR | Hi. I agree that part of this work might belong in dask, but I don't know dask's internals well enough to go there, and in this case everything was already in place. Moreover, I do think there is room for optimization. In particular, in this implementation the work is distributed along the chunks of the destination array, which means one may end up with large intermediate arrays: for example, interpolating a single value in a chunked vector will load the full vector into memory (initial localization aside). In my previous (and uglier) implementation, the interpolation was done along the chunks of the source array, which might sometimes be a better choice. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 638909879 |
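The memory concern raised in the comment (a destination chunk depending on many source chunks) can be sketched with plain NumPy, independently of dask or xarray. The helper below is hypothetical and not part of either library; it only computes which source chunks a destination chunk of a linear interpolation would need, assuming a sorted 1-D source coordinate split into fixed-size chunks.

```python
import numpy as np

def source_chunks_needed(x_src, chunk_size, xi_chunk):
    """Hypothetical helper: indices of source chunks that one destination
    chunk depends on for linear interpolation along a sorted coordinate."""
    needed = set()
    n = len(x_src)
    for xi in xi_chunk:
        j = np.searchsorted(x_src, xi)
        # linear interpolation needs the bracketing points on either side
        for k in (j - 1, j):
            if 0 <= k < n:
                needed.add(k // chunk_size)
    return sorted(needed)

# Source coordinate 0..99 in 10 chunks of size 10.
x = np.arange(100.0)

# A destination chunk whose points straddle the whole source range
# forces both end chunks (and, without localization, potentially all
# chunks) to be materialized together.
print(source_chunks_needed(x, 10, [0.5, 99.5]))

# Chunking along the source array instead would keep each task local
# to one source chunk, at the cost of merging partial results.
print(source_chunks_needed(x, 10, [5.5]))
```

This illustrates why distributing the work along destination chunks can create large intermediates: a single output chunk may pull in source chunks from far-apart regions, whereas source-chunk distribution bounds each task's footprint to one chunk (plus its neighbors at boundaries).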