issue_comments: 293748654
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1372#issuecomment-293748654 | https://api.github.com/repos/pydata/xarray/issues/1372 | 293748654 | MDEyOklzc3VlQ29tbWVudDI5Mzc0ODY1NA== | 1217238 | 2017-04-13T01:05:42Z | 2017-04-13T01:06:17Z | MEMBER | Ah, so here's the thing: You might ask why this separate lazy compute machinery exists. The answer is that dask fails to optimize element-wise operations like See https://github.com/dask/dask/issues/746 for discussion and links to PRs about this. @jcrist had a solution that worked, but it slowed down every dask array operation by 20%, which wasn't a great trade-off. I wonder if this is worth revisiting with a simpler, less general optimization pass that doesn't bother with broadcasting. See the subclasses of If we could optimize all these operations (and ideally chain them), then we could drop all the lazy loading stuff from xarray in favor of dask, which would be a real win. @mrocklin any thoughts? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 221387277 |
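To make the comment concrete: the "lazy compute machinery" it refers to defers an element-wise decode (e.g. CF-style scale/offset) and pushes indexing *inside* it, so only the selected elements are ever decoded — the optimization dask's graph pass did not perform at the time. The sketch below is hypothetical; the class and names are not xarray's actual API, just an illustration of the idea under NumPy.

```python
import numpy as np

class LazilyDecodedArray:
    """Hypothetical sketch (not xarray's real API) of lazy decoding:
    an element-wise transform is deferred, and indexing is applied to
    the raw data first, so only the indexed slice is ever decoded."""

    def __init__(self, data, transform):
        self.data = data            # raw, on-disk-style array
        self.transform = transform  # element-wise decode, e.g. scale/offset

    def __getitem__(self, key):
        # Index the raw data first, then decode only that slice.
        return self.transform(self.data[key])

    def __array__(self, dtype=None):
        # Full materialization only happens when explicitly requested.
        out = self.transform(self.data)
        return out if dtype is None else out.astype(dtype)

raw = np.arange(1_000_000, dtype="int16")
lazy = LazilyDecodedArray(raw, lambda x: x * 0.5 + 10.0)  # CF-style decode
print(lazy[:3])  # decodes only three elements → [10.  10.5 11. ]
```

If dask's optimization pass could rewrite `transform(x)[key]` into `transform(x[key])` for element-wise `transform` (and chain such rewrites), wrappers like this would become unnecessary, which is the point of the comment.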