html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7377#issuecomment-1359023892,https://api.github.com/repos/pydata/xarray/issues/7377,1359023892,IC_kwDOAMm_X85RAQ8U,7316393,2022-12-20T08:53:34Z,2022-12-20T08:57:52Z,CONTRIBUTOR,"Hi, this is a known issue coming from numpy.nanquantile / numpy.nanpercentile.
I had the same problem - AFAIK the workaround is to implement your own nanpercentile calculation.
If you want to take that route:
There is a blog post about the issue + a numpy workaround for 3D arrays:
https://krstn.eu/np.nanpercentile()-there-has-to-be-a-faster-way/
I also turned to the numpy mailing list. Abel Aoun suggested looking into the algorithm used in the [xclim](https://xclim.readthedocs.io/en/stable/index.html) project. See our thread here: https://mail.python.org/archives/list/numpy-discussion@python.org/message/EKQIS4KNOHS6ZAU5OSYTLNOOH7U2Y5TW/
I ended up taking that one and rewriting it to suit my needs, achieving a >100x speedup in my case.
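Roughly, the idea those workarounds share is to sort along the reduction axis (np.sort pushes NaNs to the end), count the valid samples per pixel, and pick the requested rank with fancy indexing instead of letting numpy loop over pixels. A minimal sketch of that idea for a 3D array reduced along axis 0 - the function name, axis layout and interpolation details here are just illustrative, not the exact code from the blog post or xclim:

```python
import numpy as np

def nan_percentile_3d(arr, q):
    # Vectorized q-th percentile along axis 0 of a 3D array, ignoring NaNs.
    valid = np.sum(~np.isnan(arr), axis=0)   # valid samples per pixel
    srt = np.sort(arr, axis=0)               # NaNs end up at the back

    rank = (q / 100.0) * (valid - 1)         # fractional rank per pixel
    lo = np.floor(rank).astype(int)
    hi = np.ceil(rank).astype(int)

    rows, cols = np.meshgrid(
        np.arange(arr.shape[1]), np.arange(arr.shape[2]), indexing='ij'
    )
    lo_val = srt[lo, rows, cols]
    hi_val = srt[hi, rows, cols]

    weight = rank - lo                       # linear interpolation weight
    out = lo_val * (1 - weight) + hi_val * weight
    out[valid == 0] = np.nan                 # all-NaN pixels stay NaN
    return out
```

e.g. `nan_percentile_3d(data, 90)` gives the per-pixel 90th percentile over the first axis in a single pass, which is where the speedup comes from.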
Good luck!
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1497031605