issue_comments: 852265735
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/5424#issuecomment-852265735 | https://api.github.com/repos/pydata/xarray/issues/5424 | 852265735 | MDEyOklzc3VlQ29tbWVudDg1MjI2NTczNQ== | 5635139 | 2021-06-01T16:33:09Z | 2021-06-01T16:33:16Z | MEMBER | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | 908464731 |

body:

Thanks @lusewell. Unfortunately — as you suggest — I don't think there's much we can do, but this does seem like a bad bug.

It might be worth checking out numbagg — https://github.com/numbagg/numbagg — which we use for fast operations that bottleneck doesn't include. Disclaimer that it comes from @shoyer, and I've recently given it a spring cleaning.

To the extent this isn't fixed in bottleneck, we could offer an option to use numbagg, though it would probably require a contribution. If you need this working for now, you could probably write a workaround for yourself using numbagg fairly quickly; e.g.

```python
In [6]: numbagg.nanmax(xarr.values)
Out[6]: 0.0
```

or, more generally:

```python
In [12]: xr.apply_ufunc(numbagg.nanmax, xarr, input_core_dims=(('A','B','C'),))
Out[12]: <xarray.DataArray ()>
array(0.)
```
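The comment's `apply_ufunc` snippet can be wrapped into a small reusable helper. Below is a minimal sketch of that workaround, assuming numbagg is installed; the helper name `nanmax_all` and the toy array are illustrative, not part of xarray or numbagg.

```python
# Sketch of the workaround described in the comment above: route the reduction
# through numbagg via xr.apply_ufunc so it bypasses bottleneck entirely.
# `nanmax_all` is a hypothetical helper name; the toy data is illustrative.
import numbagg
import numpy as np
import xarray as xr


def nanmax_all(da: xr.DataArray) -> xr.DataArray:
    # Declare every dimension as a core dim so numbagg.nanmax receives the
    # whole array and returns a scalar (matching the Out[12] example above).
    return xr.apply_ufunc(numbagg.nanmax, da, input_core_dims=[da.dims])


xarr = xr.DataArray(np.zeros((2, 3, 4)), dims=("A", "B", "C"))
print(nanmax_all(xarr))  # <xarray.DataArray ()> array(0.)
```

Passing `[da.dims]` rather than hard-coding `(('A','B','C'),)` generalizes the comment's example to arrays with any dimension names.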