html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7764#issuecomment-1526241680,https://api.github.com/repos/pydata/xarray/issues/7764,1526241680,IC_kwDOAMm_X85a-JmQ,2448579,2023-04-27T19:26:13Z,2023-04-27T19:26:13Z,MEMBER,I think I agree with `use_opt_einsum: bool`,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1672288892
https://github.com/pydata/xarray/issues/7764#issuecomment-1526240154,https://api.github.com/repos/pydata/xarray/issues/7764,1526240154,IC_kwDOAMm_X85a-JOa,2448579,2023-04-27T19:25:29Z,2023-04-27T19:25:29Z,MEMBER,"`numpy.einsum` has some version of `opt_einsum` implemented under the `optimize` kwarg. IIUC this is False by default because it adds overhead to small problems ([comment](https://github.com/numpy/numpy/pull/5488#issuecomment-246496342))
> The complete overhead for computing a path (parsing the input, finding the path, and organizing that data) with default options is about 150us. Looks like einsum takes a minimum of 5-10us to call as a reference. So the worst case scenario would be that the optimization overhead makes einsum 30x slower. Personally I'd go for turning optimization off by default and then revisiting if someone tackles the parsing issue to reduce the overhead.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1672288892
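A minimal sketch of the `optimize` kwarg discussed in the comments above. `numpy.einsum` accepts `optimize=False` (the default, for the overhead reasons quoted) or `True`/`'greedy'`/`'optimal'`; `numpy.einsum_path` exposes the contraction path that optimization would choose. The array shapes here are illustrative assumptions, not taken from the thread.

```python
import numpy as np

# Illustrative inputs (shapes chosen arbitrarily for this sketch).
a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
c = np.random.rand(8, 8)

# Default: optimize=False, the contraction is evaluated without path search.
naive = np.einsum('ij,jk,kl->il', a, b, c)

# optimize=True first computes a contraction path (the ~150us overhead
# quoted above), which pays off for larger multi-operand problems.
optimized = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)

# Both paths produce the same result, up to floating-point tolerance.
assert np.allclose(naive, optimized)

# einsum_path reports the chosen contraction order and cost estimates.
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
```

The trade-off motivating the proposed `use_opt_einsum: bool` option: for small operands the path-finding overhead can dominate, while for large contractions the optimized path can be dramatically faster.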