html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2074#issuecomment-385116430,https://api.github.com/repos/pydata/xarray/issues/2074,385116430,MDEyOklzc3VlQ29tbWVudDM4NTExNjQzMA==,6213168,2018-04-27T23:13:20Z,2018-04-27T23:13:20Z,MEMBER,Done the work - but we'll need to wait for dask 0.17.3 to integrate it,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,316618290
https://github.com/pydata/xarray/issues/2074#issuecomment-383724765,https://api.github.com/repos/pydata/xarray/issues/2074,383724765,MDEyOklzc3VlQ29tbWVudDM4MzcyNDc2NQ==,6213168,2018-04-23T21:12:04Z,2018-04-23T21:12:14Z,MEMBER,"> What are the arrays used as input for this case?
See the blob in the opening post.
> dot reduces one dimension from each input
To my understanding, ``xarray.dot(a, b, dims=[])`` is functionally identical to ``a * b``, yet it is faster in some edge cases - which I can't make any sense of.
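For what it's worth, here is a minimal sketch of the equivalence I mean (throwaway arrays, not the ones from the opening post):
```
import numpy as np
import xarray as xr

# Two small arrays sharing both dimensions; names and sizes are placeholders.
a = xr.DataArray(np.random.rand(4, 6), dims=['s', 'i'])
b = xr.DataArray(np.random.rand(4, 6), dims=['s', 'i'])

# dims=[] contracts nothing, so the result should be the plain elementwise product.
xr.testing.assert_allclose(xr.dot(a, b, dims=[]), a * b)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,316618290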
https://github.com/pydata/xarray/issues/2074#issuecomment-383711323,https://api.github.com/repos/pydata/xarray/issues/2074,383711323,MDEyOklzc3VlQ29tbWVudDM4MzcxMTMyMw==,6213168,2018-04-23T20:26:59Z,2018-04-23T20:26:59Z,MEMBER,"@jakirkham from what I understand ``da.dot`` implements... a limited special case of ``da.einsum``?
Ok, this is funny. I ran a few more benchmarks, and apparently ``xarray.dot`` on a dask backend is situationally faster than all other implementations when you are not reducing over any dimensions - which, as far as I understand, is really the same as ``a * b``, except that it is somehow *faster* than ``a * b``?!
```
def bench(...):  # same signature as the bench() harness in the opening post
    ...          # existing timings: xarray.dot, numpy.einsum, dask.array.einsum
    if not dims:
        # extra baselines: plain elementwise product on both backends
        print(""a * b (numpy backend):"")
        %timeit a.compute() * b.compute()
        print(""a * b (dask backend):"")
        %timeit (a * b).compute()

bench(100, False, [], '...i,...i->...i')
bench( 20, False, [], '...i,...i->...i')
bench(100, True, [], '...i,...i->...i')
bench( 20, True, [], '...i,...i->...i')
```
Output:
```
bench(100, False, [], ...i,...i->...i)
xarray.dot(numpy backend):
291 ms ± 5.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
296 ms ± 10 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
dimension 's' on 0th function argument to apply_ufunc with dask='parallelized' consists of multiple chunks, but is also a core dimension. To fix, rechunk into a single dask array chunk along this dimension, i.e., ``.rechunk({'s': -1})``, but beware that this may significantly increase memory usage.
dask.array.einsum:
296 ms ± 21.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend)
279 ms ± 9.51 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
241 ms ± 8.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
bench(20, False, [], ...i,...i->...i)
xarray.dot(numpy backend):
345 ms ± 6.02 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
342 ms ± 4.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
dimension 's' on 0th function argument to apply_ufunc with dask='parallelized' consists of multiple chunks, but is also a core dimension. To fix, rechunk into a single dask array chunk along this dimension, i.e., ``.rechunk({'s': -1})``, but beware that this may significantly increase memory usage.
dask.array.einsum:
347 ms ± 6.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend)
319 ms ± 2.53 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
247 ms ± 5.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
bench(100, True, [], ...i,...i->...i)
xarray.dot(numpy backend):
477 ms ± 8.29 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
514 ms ± 35.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
241 ms ± 8.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
dask.array.einsum:
497 ms ± 21.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend)
439 ms ± 27.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
517 ms ± 41.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
bench(20, True, [], ...i,...i->...i)
xarray.dot(numpy backend):
572 ms ± 13.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.einsum:
563 ms ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
xarray.dot(dask backend):
268 ms ± 14.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
dask.array.einsum:
563 ms ± 5.11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (numpy backend)
501 ms ± 5.46 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
922 ms ± 93.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
This particular bit, excerpted again below, is shocking and I can't wrap my head around it.
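One way I'd start digging into it (just a sketch, with placeholder shapes and chunks rather than the arrays from the opening post, and assuming a reasonably recent xarray/dask) is to compare the task graphs behind the two expressions before computing anything:
```
import numpy as np
import xarray as xr

# Placeholder data; the real arrays and chunking come from the opening post.
a = xr.DataArray(np.random.rand(1000, 100), dims=['s', 'i']).chunk({'s': 100})
b = xr.DataArray(np.random.rand(1000, 100), dims=['s', 'i']).chunk({'s': 100})

prod = (a * b).data                # dask array behind a * b
dot = xr.dot(a, b, dims=[]).data   # dask array behind xarray.dot

# If the two really compute the same values, any timing gap should show up in
# how the graphs differ, e.g. in the number of tasks dask has to schedule.
print(len(dict(prod.__dask_graph__())), len(dict(dot.__dask_graph__())))
```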
```
bench(100, True, [], ...i,...i->...i)
xarray.dot(dask backend):
241 ms ± 8.47 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
517 ms ± 41.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
bench(20, True, [], ...i,...i->...i)
xarray.dot(dask backend):
268 ms ± 14.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
a * b (dask backend)
922 ms ± 93.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,316618290