issue_comments: 308935684
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/1457#issuecomment-308935684 | https://api.github.com/repos/pydata/xarray/issues/1457 | 308935684 | MDEyOklzc3VlQ29tbWVudDMwODkzNTY4NA== | 2443309 | 2017-06-16T05:20:24Z | 2017-06-16T05:20:24Z | MEMBER | Keep the comments coming! I think we can distinguish between benchmarking for regressions and benchmarking for development and introspection. The former will require some thought as to what machines we want to rely on and how to achieve consistency throughout the development track. It sounds like there are a number of options that we could pursue toward those ends. The latter use of benchmarking is useful on a single machine with only a few commits of history. For the four benchmarks in my sample So the relative performance is useful information in deciding how to use and/or develop xarray. (Granted the exact factors will change depending on machine/architecture/dataset.) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 236347050 |
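The single-machine, few-commits style of benchmarking the comment describes can be sketched with the standard library alone. This is a minimal illustration, not the actual xarray benchmark suite; the two workloads below are hypothetical stand-ins for whatever operations were compared, and only the relative factors (which, as the comment notes, vary by machine, architecture, and dataset) are the point:

```python
import timeit

# Hypothetical stand-in workloads; the real benchmarks in the comment
# exercised specific xarray operations on specific datasets.
def baseline():
    return sum(i * i for i in range(10_000))

def candidate():
    return sum([i * i for i in range(10_000)])

# Take the best of a few repeats to reduce timer noise on this machine.
results = {
    name: min(timeit.repeat(fn, number=100, repeat=3))
    for name, fn in [("baseline", baseline), ("candidate", candidate)]
}

# Relative performance is the useful signal for development and
# introspection; absolute times are machine-dependent.
base = results["baseline"]
for name, t in results.items():
    print(f"{name}: {t:.4f}s ({t / base:.2f}x baseline)")
```

For regression tracking across the development history, a tool such as asv (which records results per commit and per machine) addresses the consistency concerns raised above; this snippet only covers the single-machine introspection case.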