issue_comments: 973937765
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/6002#issuecomment-973937765 | https://api.github.com/repos/pydata/xarray/issues/6002 | 973937765 | IC_kwDOAMm_X846DRxl | 34062862 | 2021-11-19T10:17:00Z | 2021-11-19T10:17:00Z | NONE | I can reproduce it by calling bn.nanmax directly, but I cannot reproduce it without the xarray.transpose() call.
I suspect that xarray.transpose does something with the data structure (a lazy reshuffling of dimensions?) that triggers the fault in bottleneck. Full code:

```python
from collections import OrderedDict

import numpy as np
import xarray as xr

xr.show_versions()

n_time = 1  # 1: fails, 2: everything is fine

from xarray.core.options import OPTIONS
OPTIONS["use_bottleneck"] = True  # Set to False for the work-around

# Build some dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(n_time, 192, 121))

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = range(n_time)
coords['freq'] = freqs
coords['dir'] = dirs

xdata = xr.DataArray(
    data=spec_data,
    coords=coords,
    dims=dims,
    name='Spec name',
).to_dataset()

xdata = xdata.transpose(..., "freq")

import bottleneck as bn

np_data = xdata['Spec name'].data
new_data = np_data.copy()

bn.nanmax(new_data)  # works
bn.nanmax(np_data)   # Segfault
print('direct bn call done')
``` |
{
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} |
1057335460 |
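
The "lazy reshuffling of dimensions" hypothesis above can be illustrated with plain NumPy, without xarray or bottleneck: transposing an array returns a strided *view* over the same buffer rather than a copy, so the transposed data is no longer C-contiguous, while `.copy()` materialises it in C order. This is a minimal sketch of that behaviour (the names `a`, `b`, `c` are illustrative and not from the report); it does not reproduce the segfault itself, only the memory-layout difference between the two arrays passed to `bn.nanmax`.

```python
import numpy as np

# Small array standing in for the (time, freq, dir) spectral data above.
a = np.random.random(size=(1, 192, 121))

# Move 'freq' to the last axis, analogous to transpose(..., "freq").
# This returns a view with permuted strides, not a new buffer.
b = np.moveaxis(a, 1, -1)

print(a.flags['C_CONTIGUOUS'])  # True
print(b.flags['C_CONTIGUOUS'])  # False: same memory, reshuffled strides
print(b.base is a)              # True: b is a view onto a's buffer

# A copy lays the data out contiguously again, which matches why
# bn.nanmax(new_data) succeeded while bn.nanmax(np_data) crashed.
c = b.copy()
print(c.flags['C_CONTIGUOUS'])  # True
```
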