html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1208#issuecomment-275955436,https://api.github.com/repos/pydata/xarray/issues/1208,275955436,MDEyOklzc3VlQ29tbWVudDI3NTk1NTQzNg==,1217238,2017-01-29T23:31:29Z,2017-01-29T23:31:29Z,MEMBER,"@fmaussion thanks for puzzling this one out!
@ghisvail thanks for the report!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,200908727
https://github.com/pydata/xarray/issues/1208#issuecomment-275448697,https://api.github.com/repos/pydata/xarray/issues/1208,275448697,MDEyOklzc3VlQ29tbWVudDI3NTQ0ODY5Nw==,1217238,2017-01-26T17:14:26Z,2017-01-26T17:14:26Z,MEMBER,"@ghisvail Thanks for your diligence on this.
@fmaussion If you can turn one of these into a test case for bottleneck to report upstream, that would be super helpful. I would probably start with `test_groupby_sum`. It's likely that this only occurs for arrays with a particular `strides` (memory layout) and `shape`, which is what inspired the [blind guess](https://github.com/kwgoodman/bottleneck/issues/160#issuecomment-272991823) I suggested on the bottleneck tracker.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,200908727
https://github.com/pydata/xarray/issues/1208#issuecomment-273569412,https://api.github.com/repos/pydata/xarray/issues/1208,273569412,MDEyOklzc3VlQ29tbWVudDI3MzU2OTQxMg==,1217238,2017-01-18T19:06:58Z,2017-01-18T19:06:58Z,MEMBER,"OK, thanks for looking into this!
On Wed, Jan 18, 2017 at 10:36 AM, Ghislain Antony Vaillant <notifications@github.com> wrote:
> We'd need to wait for numpy-1.12.1 to be absolutely sure. I don't have
> time to deploy a dev version of numpy to test.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,200908727
https://github.com/pydata/xarray/issues/1208#issuecomment-273560457,https://api.github.com/repos/pydata/xarray/issues/1208,273560457,MDEyOklzc3VlQ29tbWVudDI3MzU2MDQ1Nw==,1217238,2017-01-18T18:34:05Z,2017-01-18T18:34:05Z,MEMBER,Were you able to verify that the xarray tests pass after the numpy fix?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,200908727
https://github.com/pydata/xarray/issues/1208#issuecomment-272763117,https://api.github.com/repos/pydata/xarray/issues/1208,272763117,MDEyOklzc3VlQ29tbWVudDI3Mjc2MzExNw==,1217238,2017-01-16T02:58:06Z,2017-01-16T02:58:06Z,MEMBER,"Thanks for the report. My *guess* is that this is an issue with the bottleneck build -- the large float values (e.g., 1e+248) in the final tests suggest some sort of overflow and/or memory corruption. The values summed in these tests are random numbers between 0 and 1.
Unfortunately, I can't reduce this locally using the conda build of bottleneck 1.2.0 on OS X, and our build on Travis-CI (using Ubuntu and conda) is also succeeding. Do you have any more specific details that describe your test setup, other than using the pre-build bottleneck 1.2.0 package?
If my hypothesis is correct, this bottleneck test might trigger a failure in the Ubuntu build process (though it passed in bottleneck's tests on Travis-CI):
https://github.com/kwgoodman/bottleneck/compare/master...shoyer:possible-reduce-bug?expand=1#diff-a0a3ffc22e0a63118ba4a18e4ab845fc
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,200908727