issue_comments
3 rows where issue = 236347050 and user = 1217238, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
318415555 | https://github.com/pydata/xarray/pull/1457#issuecomment-318415555 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMxODQxNTU1NQ== | shoyer 1217238 | 2017-07-27T16:31:14Z | 2017-07-27T16:31:14Z | MEMBER | Awesome, thanks @TomAugspurger ! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
315273074 | https://github.com/pydata/xarray/pull/1457#issuecomment-315273074 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMxNTI3MzA3NA== | shoyer 1217238 | 2017-07-14T05:24:04Z | 2017-07-14T05:24:04Z | MEMBER | We should do this to the extent that it is helpful in driving development. Even just a few realistic use cases can be helpful, especially for guarding against performance regressions. On Thu, Jul 13, 2017 at 3:37 PM Joe Hamman notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
308925978 | https://github.com/pydata/xarray/pull/1457#issuecomment-308925978 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMwODkyNTk3OA== | shoyer 1217238 | 2017-06-16T03:50:33Z | 2017-06-16T03:50:33Z | MEMBER | @wesm just set up a machine for dedicated benchmarking of pandas and possibly other pydata/scipy projects (if there's extra capacity, as expected). @TomAugspurger has been working on getting it set up. So that's potentially an option, at least for single-machine benchmarks. The lore I've heard is that benchmarking on shared cloud resources (e.g., Travis-CI) can have reproducibility issues due to resource contention and/or jobs getting scheduled on slightly different machine types. I don't know how true this still is, or whether there are good workarounds for particular cloud platforms. I suspect this should be solvable, though. I can certainly make an internal inquiry about benchmarking on GCP if we can't find answers on our own. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
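As a sketch, the row selection shown on this page could be reproduced directly against the underlying SQLite database with a query like the one below (the column list is abridged for readability; the schema above defines the full set):

SELECT id, html_url, created_at, updated_at, author_association, body, reactions, issue
FROM issue_comments
WHERE issue = 236347050
  AND [user] = 1217238
ORDER BY updated_at DESC;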