
issue_comments


3 rows where author_association = "NONE", issue = 1057335460 and user = 34062862 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
974066877 https://github.com/pydata/xarray/issues/6002#issuecomment-974066877 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846DxS9 RubendeBruin 34062862 2021-11-19T13:20:28Z 2021-11-19T13:20:28Z NONE

Ok, then it is clearly a bottleneck/numpy issue. I will raise it there and close it here.

Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460
973937765 https://github.com/pydata/xarray/issues/6002#issuecomment-973937765 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846DRxl RubendeBruin 34062862 2021-11-19T10:17:00Z 2021-11-19T10:17:00Z NONE

I can reproduce it by calling bn.nanmax directly, but I cannot reproduce it without the xarray.transpose() call.

  • If I call nanmax on the internal data of the xarray then nanmax fails with a segfault:

```python
np_data = xdata['Spec name'].data
bn.nanmax(np_data)  # Segfault
```

  • But if I create a copy of that data and then call nanmax then it works fine:

```python
np_data = xdata['Spec name'].data
new_data = np_data.copy()
bn.nanmax(new_data)  # works
```

I suspect that xarray.transpose does something to the underlying data structure (a lazy reshuffling of dimensions?) that triggers the fault in bottleneck.
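That suspicion can be illustrated with plain NumPy, independent of xarray: transposing returns a view with permuted strides rather than copying the data, so the result is no longer C-contiguous until it is explicitly copied (a minimal sketch of the suspected mechanism, not the actual xarray internals):

```python
import numpy as np

# Same shape as the dataset in the reproducer: (time, freq, dir) with n_time = 1.
a = np.random.random((1, 192, 121))

# Moving an axis returns a *view* with permuted strides;
# no data is copied, so the memory layout is unchanged.
b = np.transpose(a, (0, 2, 1))

print(a.flags['C_CONTIGUOUS'])         # True
print(b.flags['C_CONTIGUOUS'])         # False
print(b.copy().flags['C_CONTIGUOUS'])  # True: .copy() materializes contiguous memory
```

This matches the observation above: the call fails on the non-contiguous view but works on its copy.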

Full code:

```python
from collections import OrderedDict
import numpy as np
import xarray as xr

xr.show_versions()

n_time = 1  # 1: fails, 2: everything is fine

from xarray.core.options import OPTIONS
OPTIONS["use_bottleneck"] = True  # Set to False for work-around

# Build some dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(n_time, 192, 121))

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = range(n_time)
coords['freq'] = freqs
coords['dir'] = dirs

xdata = xr.DataArray(
    data=spec_data,
    coords=coords,
    dims=dims,
    name='Spec name',
).to_dataset()

xdata = xdata.transpose(..., "freq")

import bottleneck as bn

np_data = xdata['Spec name'].data

new_data = np_data.copy()
bn.nanmax(new_data)  # works

bn.nanmax(np_data)  # Segfault
print('direct bn call done')
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460
973159855 https://github.com/pydata/xarray/issues/6002#issuecomment-973159855 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846AT2v RubendeBruin 34062862 2021-11-18T18:51:50Z 2021-11-18T18:51:50Z NONE

Tested on another machine (also win64) with the same result.

Running under WSL/Ubuntu results in a Segmentation Fault
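For crashes like this, the standard-library faulthandler module can at least show which Python frame triggered the segfault before the process dies (a general debugging aid, not part of the original report):

```python
import faulthandler

# Install handlers for SIGSEGV/SIGFPE/SIGABRT/SIGBUS that dump the
# Python traceback to stderr before the process terminates.
faulthandler.enable()
print(faulthandler.is_enabled())  # → True

# Running the reproducer after this, e.g. bn.nanmax(np_data), would
# print the Python stack at the point of the crash instead of exiting silently.
```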

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
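The filter at the top of this page (author_association = "NONE", issue = 1057335460, user = 34062862, sorted by updated_at descending) corresponds to a straightforward query against the schema above. A self-contained sqlite3 sketch, with the foreign-key clauses omitted and one illustrative row taken from the page:

```python
import sqlite3

# In-memory copy of the issue_comments schema shown above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
   html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
   user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
   body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
CREATE INDEX idx_issue_comments_user ON issue_comments (user);
""")

# One row using values from the first comment on this page.
conn.execute(
    "INSERT INTO issue_comments (id, user, issue, author_association, updated_at) "
    "VALUES (974066877, 34062862, 1057335460, 'NONE', '2021-11-19T13:20:28Z')"
)

# The query this page renders.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'NONE' AND issue = 1057335460 AND user = 34062862 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)  # → [(974066877,)]
```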
Powered by Datasette · Queries took 14.671ms · About: xarray-datasette