issue_comments: 383723159


html_url: https://github.com/pydata/xarray/issues/2074#issuecomment-383723159
issue_url: https://api.github.com/repos/pydata/xarray/issues/2074
id: 383723159
node_id: MDEyOklzc3VlQ29tbWVudDM4MzcyMzE1OQ==
user: 3019665
created_at: 2018-04-23T21:06:42Z
updated_at: 2018-04-23T21:06:42Z
author_association: NONE
issue: 316618290

> from what I understand da.dot implements... a limited special case of da.einsum?

Basically, `dot` is an inner product. Certainly inner products can be formulated using Einstein notation (i.e. by calling `einsum`).
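
For a concrete illustration (a minimal sketch; the shapes here are arbitrary assumptions), the 2-d `dot` is exactly the einsum contraction `ij,jk->ik`:

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# dot of two 2-d arrays is matrix multiplication...
via_dot = np.dot(a, b)
# ...which in Einstein notation contracts the shared index j:
via_einsum = np.einsum("ij,jk->ik", a, b)

assert np.allclose(via_dot, via_einsum)
```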

The question is whether the performance keeps up with that formulation. It sounds like chunking causes some problems at the moment, IIUC. However, things like `dot` and `tensordot` dispatch through optimized BLAS routines. In theory `einsum` should do the same (https://github.com/numpy/numpy/pull/9425), but the experimental data still shows a few warts. For example, `matmul` is implemented with `einsum`, but is slower than `dot` (https://github.com/numpy/numpy/issues/7569, https://github.com/numpy/numpy/issues/8957). Pure `einsum` implementations seem to perform similarly.
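
As a rough way to check this locally (a sketch only, not the benchmarks referenced above; the array size and repeat count are arbitrary assumptions):

```python
import timeit

import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# dot dispatches to BLAS; whether einsum does depends on the NumPy
# version and on passing optimize=True (see numpy/numpy#9425).
for label, stmt in [
    ("dot", lambda: np.dot(a, b)),
    ("matmul", lambda: np.matmul(a, b)),
    ("einsum", lambda: np.einsum("ij,jk->ik", a, b)),
    ("einsum (optimize=True)", lambda: np.einsum("ij,jk->ik", a, b, optimize=True)),
]:
    t = timeit.timeit(stmt, number=20)
    print(f"{label}: {t / 20:.4f} s per call")
```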

> I ran a few more benchmarks...

What are the arrays used as input for this case?

> ...apparently xarray.dot on a dask backend is situationally faster than all other implementations when you are not reducing on any dimensions...

Having a little trouble following this. `dot` reduces one dimension from each input. The exception is when one of the inputs is 0-d (i.e. a scalar); then it just multiplies a single scalar through an array. Is that what you are referring to?
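
A minimal sketch of the two cases in plain NumPy (shapes are arbitrary; per the NumPy docs, `dot` with a 0-d input is equivalent to elementwise multiplication, and xarray's `dot` applies the same contraction idea to named dimensions):

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

# Normal case: dot contracts the last axis of a with the
# second-to-last axis of b, reducing one dimension from each input.
np.dot(a, b).shape  # (3, 5)

# 0-d case: dot with a scalar is equivalent to 2.0 * a;
# no dimension is reduced.
np.dot(2.0, a).shape  # (3, 4)
```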
