issue_comments: 538910300
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/3375#issuecomment-538910300 | https://api.github.com/repos/pydata/xarray/issues/3375 | 538910300 | MDEyOklzc3VlQ29tbWVudDUzODkxMDMwMA== | 6213168 | 2019-10-07T09:13:07Z | 2019-10-07T09:13:51Z | MEMBER | I see that all tests in benchmarks/indexing.py use arrays with 2-6 million points. While this is important for spotting any case where the underlying numpy functions start being called more than once unnecessarily, it also means that any performance improvement or degradation in the pure-Python code will be completely drowned out. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 503163130 |
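
For context, the comment's point is that a complementary microbenchmark on a tiny array would let the pure-Python dispatch overhead dominate the timing instead of the numpy work. The sketch below shows what such a benchmark might look like in the asv style that xarray's benchmarks/ directory uses; the class and method names (`IndexingSmall`, `time_isel`, `time_sel`) are illustrative assumptions, not part of the actual benchmarks/indexing.py.

```python
import numpy as np
import xarray as xr


class IndexingSmall:
    """Hypothetical asv-style microbenchmark using a tiny array, so the
    numpy cost is negligible and the timing mostly reflects xarray's
    pure-Python indexing machinery."""

    def setup(self):
        # 10x10 = 100 points instead of millions.
        self.ds = xr.Dataset(
            {"var": (("x", "y"), np.random.randn(10, 10))},
            coords={"x": np.arange(10), "y": np.arange(10)},
        )

    def time_isel(self):
        # Positional indexing: exercises the pure-Python dispatch path.
        self.ds.isel(x=slice(3, 7), y=2)

    def time_sel(self):
        # Label-based indexing: adds coordinate lookup on top of isel.
        self.ds.sel(x=5, y=slice(2, 8))
```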