html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/5640#issuecomment-917924524,https://api.github.com/repos/pydata/xarray/issues/5640,917924524,IC_kwDOAMm_X842tmqs,3801015,2021-09-13T07:39:22Z,2021-09-13T07:39:22Z,CONTRIBUTOR,Fixed other usages and added to `whats-new.rst`,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,954574705
https://github.com/pydata/xarray/pull/5640#issuecomment-916837170,https://api.github.com/repos/pydata/xarray/issues/5640,916837170,IC_kwDOAMm_X842pdMy,3801015,2021-09-10T11:36:35Z,2021-09-10T11:36:35Z,CONTRIBUTOR,"@spencerkclark RE performance: it's only a performance issue to attempt to import cftime repeatedly. Having it fail once in the top-level import is not a big problem. The issue comes when it does so thousands of times every time you call `.sel` or `.isel`, which adds up to a huge performance hit. Given that xarray takes a while to import anyway, the marginal cost of searching the full pythonpath 3 times at import is minimal - it only becomes an issue when done repeatedly.
I've fixed this for some other cases I've found that were causing me slowness - would you like me to change anything else before this can be merged?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,954574705
https://github.com/pydata/xarray/pull/5640#issuecomment-916834298,https://api.github.com/repos/pydata/xarray/issues/5640,916834298,IC_kwDOAMm_X842pcf6,3801015,2021-09-10T11:30:47Z,2021-09-10T11:30:47Z,CONTRIBUTOR,So I've found another instance of this which causes a performance issue - this one with groupby.,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,954574705
https://github.com/pydata/xarray/pull/5640#issuecomment-888122071,https://api.github.com/repos/pydata/xarray/issues/5640,888122071,IC_kwDOAMm_X84076rX,3801015,2021-07-28T08:32:44Z,2021-07-28T08:32:44Z,CONTRIBUTOR,I'd also like to append this to tag 14.1 and make tag 14.2 if possible - would this be ok?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,954574705
https://github.com/pydata/xarray/issues/5424#issuecomment-852334068,https://api.github.com/repos/pydata/xarray/issues/5424,852334068,MDEyOklzc3VlQ29tbWVudDg1MjMzNDA2OA==,3801015,2021-06-01T18:01:05Z,2021-06-01T18:01:05Z,CONTRIBUTOR,"Annoyingly the bug affects pretty much every bottleneck function, not just max, and I'm dealing with a large codebase where lots of the code just uses the methods attached to the `xr.DataArray`s. Is there a way of disabling use of bottleneck inside xarray without uninstalling bottleneck? And if so, do you know whether this is expected to give the same results?
Pandas (probably a few versions ago now) had a situation where, if you uninstalled bottleneck, it would fall back to some other routine, but the nan-handling was then different - I think it caused the all-nan `sum` to flick between nan and zero, if I recall. A quick response would be appreciated though, and I might have a delve into fixing bottleneck myself if I get the free time.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,908464731