issue_comments: 388545372
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/1224#issuecomment-388545372 | https://api.github.com/repos/pydata/xarray/issues/1224 | 388545372 | MDEyOklzc3VlQ29tbWVudDM4ODU0NTM3Mg== | 6213168 | 2018-05-12T10:22:02Z | 2018-05-12T10:22:02Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 202423683

body:

Both. One of the biggest problems is that the data I'm interested in is a mix of:

- 1D arrays with dims=(scenario,) and shape=(500000,) (stressed financial instruments under a Monte Carlo stress set)
- 0D arrays with dims=() (financial instruments that are impervious to the Monte Carlo stresses and never change value)

So before you can call concat(), you need to call broadcast(), which effectively means that doing the sums on your bunch of very fast 0D instruments suddenly requires repeating them over 500k points. Even keeping the two lots separate (which is what fastwsum does) performed considerably more slowly. However, this was over a year ago, well before xarray.dot() and dask.einsum(), so I'll need to tinker with it again.
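To make the broadcast cost concrete, here is a minimal sketch of the 0D/1D mix described above, using the xarray functions the comment names (concat, broadcast, xarray.dot). The array names, sizes, and weights are illustrative, not taken from the original code:

```python
import numpy as np
import xarray as xr

N = 500_000  # hypothetical number of Monte Carlo scenarios

# A 1D instrument that varies under stress, and a 0D one that does not
stressed = xr.DataArray(np.random.rand(N), dims=["scenario"])
constant = xr.DataArray(3.14)  # dims=(), shape=()

# concat() requires matching dims, so the 0D array must be broadcast
# first -- materialising 500k copies of a single scalar:
stressed_b, constant_b = xr.broadcast(stressed, constant)
both = xr.concat([stressed_b, constant_b], dim="instrument")  # shape (2, N)

# A weighted sum via xarray.dot, the route the comment plans to revisit.
# Note: `dims=` was the keyword at the time; recent xarray renames it to `dim=`.
weights = xr.DataArray(np.ones(2), dims=["instrument"])
total = xr.dot(both, weights, dims="instrument")  # shape (N,)
```

The sketch shows why the blow-up hurts: the scalar instrument, which is essentially free to sum on its own, is expanded to 500k elements purely to satisfy concat()'s shape requirements.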