issue_comments: 383711323
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association
---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/2074#issuecomment-383711323 | https://api.github.com/repos/pydata/xarray/issues/2074 | 383711323 | MDEyOklzc3VlQ29tbWVudDM4MzcxMTMyMw== | 6213168 | 2018-04-23T20:26:59Z | 2018-04-23T20:26:59Z | MEMBER

body:

@jakirkham from what I understand

OK, this is funny. I ran a few more benchmarks, and apparently:

```
def bench(...):
    ...
    if not dims:
        print("a * b (numpy backend):")
        %timeit a.compute() * b.compute()
        print("a * b (dask backend):")
        %timeit (a * b).compute()

bench(100, False, [], '...i,...i->...i')
bench( 20, False, [], '...i,...i->...i')
bench(100, True, [], '...i,...i->...i')
bench( 20, True, [], '...i,...i->...i')
```

Output:

```
bench(100, False, [], ...i,...i->...i)
xarray.dot(numpy backend):
291 ms ± 5.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
296 ms ± 10 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
dimension 's' on 0th function argument to apply_ufunc with dask='parallelized' consists of multiple chunks, but is also a core dimension. To fix, rechunk into a single dask array chunk along this dimension, i.e.,

bench(20, False, [], ...i,...i->...i)
xarray.dot(numpy backend):
345 ms ± 6.02 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
342 ms ± 4.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
dimension 's' on 0th function argument to apply_ufunc with dask='parallelized' consists of multiple chunks, but is also a core dimension. To fix, rechunk into a single dask array chunk along this dimension, i.e.,

bench(100, True, [], ...i,...i->...i)
xarray.dot(numpy backend):
477 ms ± 8.29 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
514 ms ± 35.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
241 ms ± 8.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
dask.array.einsum:
497 ms ± 21.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend):
439 ms ± 27.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend):
517 ms ± 41.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

bench(20, True, [], ...i,...i->...i)
xarray.dot(numpy backend):
572 ms ± 13.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
563 ms ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
268 ms ± 14.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
dask.array.einsum:
563 ms ± 5.11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend):
501 ms ± 5.46 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend):
922 ms ± 93.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

This particular bit is shocking, and I can't wrap my head around it:

```
bench(100, True, [], ...i,...i->...i)
xarray.dot(dask backend):
241 ms ± 8.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend):
517 ms ± 41.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

bench(20, True, [], ...i,...i->...i)
xarray.dot(dask backend):
268 ms ± 14.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend):
922 ms ± 93.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
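For context on why `a * b` is a fair baseline in that comparison: with `dims=[]` nothing is summed over, so the einsum signature `'...i,...i->...i'` is just an elementwise product. A quick check (array shapes are made up for illustration):

```python
import numpy as np

a = np.random.rand(4, 5)
b = np.random.rand(4, 5)

# '...i,...i->...i' keeps the 'i' axis in the output, so no axis is summed
# over and einsum degenerates to a plain elementwise multiply.
out = np.einsum('...i,...i->...i', a, b)
print(np.allclose(out, a * b))  # True
```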
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
issue: 316618290
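A note on the truncated "consists of multiple chunks" errors in the benchmark output above: `apply_ufunc` with `dask='parallelized'` requires each core dimension to live in a single chunk, and the fix the message points at is a rechunk. A minimal sketch of that fix (array names, sizes, and the dimension layout are made up; only the dimension name `'s'` comes from the error message):

```python
import numpy as np
import xarray as xr

# Hypothetical dask-backed DataArrays whose shared dimension 's' is split
# into several chunks -- the situation the error message complains about.
a = xr.DataArray(np.random.rand(8, 100), dims=["x", "s"]).chunk({"s": 25})
b = xr.DataArray(np.random.rand(8, 100), dims=["x", "s"]).chunk({"s": 25})

# Rechunk so 's' is one single chunk (-1 means "one chunk along this dim").
a1 = a.chunk({"s": -1})
b1 = b.chunk({"s": -1})
print(a1.chunks)  # ((8,), (100,))

# With 's' in a single chunk the dask-backed dot product can be built;
# by default xr.dot sums over all shared dimensions ('x' and 's' here).
result = float(xr.dot(a1, b1).compute())
```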