issue_comments
5 rows where author_association = "MEMBER", issue = 200908727 and user = 1217238 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
275955436 | https://github.com/pydata/xarray/issues/1208#issuecomment-275955436 | https://api.github.com/repos/pydata/xarray/issues/1208 | MDEyOklzc3VlQ29tbWVudDI3NTk1NTQzNg== | shoyer 1217238 | 2017-01-29T23:31:29Z | 2017-01-29T23:31:29Z | MEMBER | @fmaussion thanks for puzzling this one out! @ghisvail thanks for the report! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Test failures on Debian if built with bottleneck 200908727
275448697 | https://github.com/pydata/xarray/issues/1208#issuecomment-275448697 | https://api.github.com/repos/pydata/xarray/issues/1208 | MDEyOklzc3VlQ29tbWVudDI3NTQ0ODY5Nw== | shoyer 1217238 | 2017-01-26T17:14:26Z | 2017-01-26T17:14:26Z | MEMBER | @ghisvail Thanks for your diligence on this. @fmaussion If you can turn one of these into a test case for bottleneck to report upstream that would be super helpful. I would probably start with | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Test failures on Debian if built with bottleneck 200908727
273569412 | https://github.com/pydata/xarray/issues/1208#issuecomment-273569412 | https://api.github.com/repos/pydata/xarray/issues/1208 | MDEyOklzc3VlQ29tbWVudDI3MzU2OTQxMg== | shoyer 1217238 | 2017-01-18T19:06:58Z | 2017-01-18T19:06:58Z | MEMBER | OK, thanks for looking into this! On Wed, Jan 18, 2017 at 10:36 AM, Ghislain Antony Vaillant notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Test failures on Debian if built with bottleneck 200908727
273560457 | https://github.com/pydata/xarray/issues/1208#issuecomment-273560457 | https://api.github.com/repos/pydata/xarray/issues/1208 | MDEyOklzc3VlQ29tbWVudDI3MzU2MDQ1Nw== | shoyer 1217238 | 2017-01-18T18:34:05Z | 2017-01-18T18:34:05Z | MEMBER | Were you able to verify that the xarray tests pass after the numpy fix? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Test failures on Debian if built with bottleneck 200908727
272763117 | https://github.com/pydata/xarray/issues/1208#issuecomment-272763117 | https://api.github.com/repos/pydata/xarray/issues/1208 | MDEyOklzc3VlQ29tbWVudDI3Mjc2MzExNw== | shoyer 1217238 | 2017-01-16T02:58:06Z | 2017-01-16T02:58:06Z | MEMBER | Thanks for the report. My guess is that this is an issue with the bottleneck build -- the large float values (e.g., 1e+248) in the final tests suggest some sort of overflow and/or memory corruption. The values summed in these tests are random numbers between 0 and 1. Unfortunately, I can't reduce this locally using the conda build of bottleneck 1.2.0 on OS X, and our build on Travis-CI (using Ubuntu and conda) is also succeeding. Do you have any more specific details that describe your test setup, other than using the pre-build bottleneck 1.2.0 package? If my hypothesis is correct, this test on bottleneck might trigger a test failure in the ubuntu build process (but it passed in bottleneck's tests on TravisCI): https://github.com/kwgoodman/bottleneck/compare/master...shoyer:possible-reduce-bug?expand=1#diff-a0a3ffc22e0a63118ba4a18e4ab845fc | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Test failures on Debian if built with bottleneck 200908727
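The oldest comment above (id 272763117) hypothesizes an overflow or memory-corruption bug in bottleneck's compiled reductions, since the failing tests sum random values drawn from [0, 1) yet report magnitudes around 1e+248. A minimal sketch of that kind of cross-check, assuming only numpy and bottleneck are installed; this is illustrative, not the actual xarray or bottleneck test code:

```python
# Sketch only: compare bottleneck's nansum against numpy's on the kind of
# data the failing tests use (random floats in [0, 1)).
import numpy as np
import bottleneck as bn

rng = np.random.RandomState(0)
data = rng.rand(100, 100)  # values in [0, 1), so correct sums stay small

for axis in (None, 0, 1):
    expected = np.nansum(data, axis=axis)
    actual = bn.nansum(data, axis=axis)
    # A healthy bottleneck build agrees with numpy here; huge values
    # (~1e+248) would point at the suspected overflow/corruption.
    assert np.allclose(actual, expected), "bottleneck.nansum disagrees with numpy"

print("bottleneck.nansum matches numpy.nansum on this build")
```

If the hypothesis in that comment is right, an affected Debian build would trip this assertion while a healthy build passes it.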
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
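For reference, the filter described at the top of this page ("5 rows where author_association = "MEMBER", issue = 200908727 and user = 1217238 sorted by updated_at descending") corresponds to a query like the following against that schema. A sketch using Python's sqlite3 module; the filename github.db is an assumption about where this instance's SQLite data lives:

```python
# Sketch: re-run this page's filter directly against the issue_comments table.
# "github.db" is an assumed database filename, not taken from the page.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 200908727
      AND "user" = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # expected: 5, matching the row count above
```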