html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/6002#issuecomment-974066877,https://api.github.com/repos/pydata/xarray/issues/6002,974066877,IC_kwDOAMm_X846DxS9,34062862,2021-11-19T13:20:28Z,2021-11-19T13:20:28Z,NONE,"Ok, then it is clearly a bottleneck/numpy issue. I will raise it there and close it here.
Thanks!
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1057335460
https://github.com/pydata/xarray/issues/6002#issuecomment-973937765,https://api.github.com/repos/pydata/xarray/issues/6002,973937765,IC_kwDOAMm_X846DRxl,34062862,2021-11-19T10:17:00Z,2021-11-19T10:17:00Z,NONE,"I can reproduce it by calling bn.nanmax directly, but I cannot reproduce it without the xarray.transpose() call.
- If I call nanmax on the xarray's underlying data, nanmax fails with a segfault:
```python
np_data = xdata['Spec name'].data
bn.nanmax(np_data) # Segfault
```
- But if I create a **copy** of that data and then call nanmax, it works fine:
```python
np_data = xdata['Spec name'].data
new_data = np_data.copy()
bn.nanmax(new_data) # works
```
I suspect that xarray.transpose() does something to the underlying data structure (a lazy reshuffling of dimensions via strides, rather than a copy?) that triggers the fault in bottleneck.
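That view-vs-copy behaviour can be illustrated with numpy alone (a minimal sketch, assuming the same (time, freq, dir) shape as above; xarray's transpose delegates to numpy's, which returns a strided view rather than a new contiguous buffer):
```python
import numpy as np

# Same shape as the dataset above: (time, freq, dir)
arr = np.random.random(size=(1, 192, 121))

# Swap the last two axes, as xdata.transpose(..., 'freq') would:
# numpy returns a view with permuted strides, not a re-packed copy.
view = arr.transpose(0, 2, 1)

print(view.flags['C_CONTIGUOUS'])         # False: strides are permuted
print(np.shares_memory(view, arr))        # True: same underlying buffer
print(view.copy().flags['C_CONTIGUOUS'])  # True: .copy() re-packs the data
```
This is consistent with the work-around: .copy() produces a fresh contiguous array, which bottleneck handles fine.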
Full code:
```python
from collections import OrderedDict

import bottleneck as bn
import numpy as np
import xarray as xr
from xarray.core.options import OPTIONS

xr.show_versions()

n_time = 1  # 1: fails, 2: everything is fine
OPTIONS[""use_bottleneck""] = True  # Set to False for the work-around

# Build a small dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(n_time, 192, 121))

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = range(n_time)
coords['freq'] = freqs
coords['dir'] = dirs

xdata = xr.DataArray(
    data=spec_data, coords=coords, dims=dims, name='Spec name',
).to_dataset()
xdata = xdata.transpose(..., ""freq"")

np_data = xdata['Spec name'].data
new_data = np_data.copy()
bn.nanmax(new_data)  # works
bn.nanmax(np_data)  # Segfault
print('direct bn call done')
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1057335460
https://github.com/pydata/xarray/issues/6002#issuecomment-973159855,https://api.github.com/repos/pydata/xarray/issues/6002,973159855,IC_kwDOAMm_X846AT2v,34062862,2021-11-18T18:51:50Z,2021-11-18T18:51:50Z,NONE,"I ran the same tests on another machine (also win64) with the same result.
Running under WSL/Ubuntu results in a segmentation fault.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1057335460