html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1346#issuecomment-464002579,https://api.github.com/repos/pydata/xarray/issues/1346,464002579,MDEyOklzc3VlQ29tbWVudDQ2NDAwMjU3OQ==,5469,2019-02-15T11:06:06Z,2019-02-15T11:06:06Z,NONE,"Ah ok, I suppose bottleneck is indeed now avoided for float32 xarray. Yeah that issue is for a different function, but the source of the problem and proposed solution in the thread is the same - use higher precision intermediates for float32 (double arithmetic); a small speed vs accuracy/precision trade off.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,218459353
https://github.com/pydata/xarray/issues/1346#issuecomment-458427512,https://api.github.com/repos/pydata/xarray/issues/1346,458427512,MDEyOklzc3VlQ29tbWVudDQ1ODQyNzUxMg==,5469,2019-01-29T06:52:01Z,2019-01-29T06:52:01Z,NONE,"Is it worth changing bottleneck to use double for single precision reductions? AFAICT this is a matter of changing `npy_DTYPE0` to double in the `float{64,32}` versions of functions in `reduce_template.c`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,218459353
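
The two comments above describe the proposed fix: carry the intermediate sum of a float32 reduction in double precision and round only once at the end. As an illustration only (not bottleneck's actual `reduce_template.c` code), here is a minimal numpy sketch of why a single-precision accumulator drifts and how a double accumulator avoids it; the array `x` and the variable names are hypothetical:

```python
import numpy as np

# Why a float32 accumulator drifts in long reductions: float32 has a 24-bit
# significand, so once the running sum reaches 2**24 an added 1.0 is rounded
# away entirely.
acc32 = np.float32(2**24)          # 16777216.0
print(acc32 + np.float32(1.0))     # 16777216.0 -- the increment is lost
print(np.float64(acc32) + 1.0)     # 16777217.0 -- a double accumulator keeps it

# The remedy expressed at the numpy level: accumulate the reduction in
# float64 ("double arithmetic") and round once at the end.  Bottleneck's C
# templates would do the equivalent by declaring the accumulator as double
# in the float32 specializations.
x = np.full(10_000_000, 0.1, dtype=np.float32)
mean32 = x.mean(dtype=np.float64).astype(np.float32)
print(mean32)                      # 0.1, i.e. the stored float32 value 0.100000001...
```

The cost of the wider accumulator is one widening conversion per element, which is the small speed-versus-accuracy trade-off mentioned in the first comment.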