html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/5794#issuecomment-950294432,https://api.github.com/repos/pydata/xarray/issues/5794,950294432,IC_kwDOAMm_X844pFeg,2448579,2021-10-24T09:53:44Z,2021-10-24T09:53:44Z,MEMBER,Does seem cleaner. Thanks @Illviljan ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,996352280
https://github.com/pydata/xarray/pull/5794#issuecomment-940654242,https://api.github.com/repos/pydata/xarray/issues/5794,940654242,IC_kwDOAMm_X844ET6i,5635139,2021-10-12T04:43:37Z,2021-10-12T04:43:37Z,MEMBER,"I wouldn't have thought this makes any noticeable difference to timing, but it does make the code a bit cleaner. Is there any reason not to do this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,996352280
https://github.com/pydata/xarray/pull/5794#issuecomment-940356823,https://api.github.com/repos/pydata/xarray/issues/5794,940356823,IC_kwDOAMm_X844DLTX,14371165,2021-10-11T18:44:27Z,2021-10-11T18:46:50Z,MEMBER,"A slightly better comparison with a slightly more noticeable difference, although still within the noise range:
This branch:
```python
%timeit -n1 -r1 import xarray
3.81 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.83 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.87 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.7 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.77 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.91 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.8 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
np.mean([3.81, 3.83, 3.87, 3.7, 3.77, 3.91, 3.8])
Out[3]: 3.812857142857143
```
Main:
```python
%timeit -n1 -r1 import xarray
3.93 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.69 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.64 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.76 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.79 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.81 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.68 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
np.mean([3.93, 3.69, 3.64, 3.76, 3.79, 3.81, 3.68])
Out[4]: 3.7571428571428567
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,996352280
https://github.com/pydata/xarray/pull/5794#issuecomment-921868793,https://api.github.com/repos/pydata/xarray/issues/5794,921868793,IC_kwDOAMm_X8428pn5,35968931,2021-09-17T15:03:05Z,2021-09-17T15:03:05Z,MEMBER,"> `The slowest run took 11.40 times longer than the fastest. This could mean that an intermediate result is being cached.`
What happens if you manually time just a single import (I think you can tell `timeit` to run only once)? It seems like averaging might not be giving an accurate reflection of the import time here.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,996352280
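One way to act on the single-run suggestion above, and to sidestep the caching that skews averaged `timeit` results, is to time each import in a fresh interpreter so `sys.modules` can never serve a cached copy. A minimal sketch of that idea (the helper name and the repeat count of 5 are placeholders, not anything used in the thread):
```python
import subprocess
import sys

def time_cold_import(module="xarray"):
    # Start a brand-new Python process per measurement, so the import is
    # always cold: nothing can be cached in sys.modules between runs.
    code = (
        "import time; t0 = time.perf_counter(); "
        f"import {module}; print(time.perf_counter() - t0)"
    )
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout)

samples = [time_cold_import() for _ in range(5)]
print("cold import times:", [f"{s:.2f} s" for s in samples])
print(f"mean: {sum(samples) / len(samples):.3f} s")
```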