html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1257#issuecomment-278836146,https://api.github.com/repos/pydata/xarray/issues/1257,278836146,MDEyOklzc3VlQ29tbWVudDI3ODgzNjE0Ng==,1217238,2017-02-10T01:58:03Z,2017-02-10T01:58:03Z,MEMBER,"One issue is that unit tests are often not good benchmarks. Ideal unit tests are as fast as possible, whereas ideal benchmarks should be run on more typical inputs, which may be much slower.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,206632333
https://github.com/pydata/xarray/issues/1257#issuecomment-278788467,https://api.github.com/repos/pydata/xarray/issues/1257,278788467,MDEyOklzc3VlQ29tbWVudDI3ODc4ODQ2Nw==,1217238,2017-02-09T22:02:00Z,2017-02-09T22:02:00Z,MEMBER,"Yes, some sort of automated benchmarking could be valuable, especially for noticing and fixing regressions. I've done occasional benchmarks before to optimize bottlenecks (e.g., class constructors), but it's all been ad-hoc work with `%timeit` in IPython.
ASV seems like a pretty sane way to do this. pytest-benchmark can trigger test failures if performance drops below some set threshold, but I suspect performance measurements are too variable and stochastic for that to be reliable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,206632333
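Neither comment includes code, but as context for the ASV suggestion, below is a minimal sketch of what an ASV benchmark for xarray could look like. It assumes ASV's standard discovery of classes with `setup` and `time_*` methods; the file name, class name, and the choice of timing the `DataArray` constructor are illustrative assumptions, not taken from this thread.

```python
# Hypothetical benchmarks/dataset_construction.py -- a minimal ASV-style sketch.
# ASV discovers classes whose methods are named time_* and calls setup()
# before timing each benchmark.
import numpy as np

import xarray as xr


class DataArrayConstruction:
    """Time DataArray construction on a moderately sized, 'typical' input
    rather than the tiny arrays a unit test would use."""

    def setup(self):
        self.data = np.random.randn(1000, 1000)
        self.coords = {"x": np.arange(1000), "y": np.arange(1000)}

    def time_dataarray_constructor(self):
        xr.DataArray(self.data, coords=self.coords, dims=("x", "y"))
```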