issue_comments: 272763117
html_url: https://github.com/pydata/xarray/issues/1208#issuecomment-272763117
issue_url: https://api.github.com/repos/pydata/xarray/issues/1208
id: 272763117
node_id: MDEyOklzc3VlQ29tbWVudDI3Mjc2MzExNw==
user: 1217238
created_at: 2017-01-16T02:58:06Z
updated_at: 2017-01-16T02:58:06Z
author_association: MEMBER
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 200908727

body:

Thanks for the report. My guess is that this is an issue with the bottleneck build -- the large float values (e.g., 1e+248) in the final tests suggest some sort of overflow and/or memory corruption. The values summed in these tests are random numbers between 0 and 1, so their sums should never come anywhere near that magnitude. Unfortunately, I can't reproduce this locally using the conda build of bottleneck 1.2.0 on OS X, and our build on Travis-CI (using Ubuntu and conda) also succeeds. Do you have any more specific details describing your test setup, other than that it uses the pre-built bottleneck 1.2.0 package? If my hypothesis is correct, this test on bottleneck might trigger a test failure in the Ubuntu build process (though it passed in bottleneck's own tests on Travis-CI): https://github.com/kwgoodman/bottleneck/compare/master...shoyer:possible-reduce-bug?expand=1#diff-a0a3ffc22e0a63118ba4a18e4ab845fc
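
A minimal standalone check along the lines of the hypothesis above might look like the sketch below (this is not the linked bottleneck test, just an illustrative probe): sums of uniform values drawn from [0, 1) are bounded by the array length, so any result like 1e+248 would point at overflow or memory corruption in the bottleneck build rather than at xarray.

```python
import numpy as np
import bottleneck as bn

rng = np.random.RandomState(0)
for n in (10, 1_000, 100_000):
    a = rng.rand(n)  # uniform values in [0, 1)
    result = bn.nansum(a)
    # The sum of n values in [0, 1) must lie in [0, n];
    # a wildly out-of-range value suggests a broken build.
    assert 0.0 <= result <= n, f"out-of-range sum {result!r} for n={n}"
    # Cross-check against NumPy's reference implementation.
    np.testing.assert_allclose(result, np.nansum(a))
print("bottleneck nansum results look sane")
```

Running this on the affected machine and on a known-good machine would help narrow down whether the failure is specific to the pre-built bottleneck 1.2.0 package.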