issue_comments


10 rows where user = 34062862 sorted by updated_at descending


issue 4

  • Crash when calling max() after transposing a dataset in combination with numba 3
  • Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 3
  • Added test for issue #6002 (currently fails) 3
  • custom interpolation 1

user 1

  • RubendeBruin · 10

author_association 1

  • NONE 10
id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
981408883 https://github.com/pydata/xarray/pull/6003#issuecomment-981408883 https://api.github.com/repos/pydata/xarray/issues/6003 IC_kwDOAMm_X846fxxz RubendeBruin 34062862 2021-11-29T08:49:02Z 2021-11-29T08:49:02Z NONE

"does that work on your end?"

Yes, it does. I will remove the xfail, and then we can merge once https://github.com/pydata/bottleneck/pull/382 is merged.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added test for issue #6002 (currently fails) 1057355557
979285475 https://github.com/pydata/xarray/pull/6003#issuecomment-979285475 https://api.github.com/repos/pydata/xarray/issues/6003 IC_kwDOAMm_X846XrXj RubendeBruin 34062862 2021-11-25T15:03:17Z 2021-11-25T15:03:17Z NONE

I've added the xfail, thanks for the link.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added test for issue #6002 (currently fails) 1057355557
979146287 https://github.com/pydata/xarray/pull/6003#issuecomment-979146287 https://api.github.com/repos/pydata/xarray/issues/6003 IC_kwDOAMm_X846XJYv RubendeBruin 34062862 2021-11-25T12:01:38Z 2021-11-25T12:01:38Z NONE

Hi @max-sixty, what is an xfail?

Also, the issue turned out to be a bottleneck bug, ref: https://github.com/pydata/bottleneck/issues/393#issuecomment-978017397 (fix available)
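
For context, an xfail is pytest's marker for a test that is expected to fail; a minimal sketch (the test name here is hypothetical, not the actual test added in the PR):

```python
import pytest

# A test marked xfail still runs but is reported as an expected failure,
# so the suite stays green until the underlying bug is fixed.
@pytest.mark.xfail(reason="known bottleneck bug, see pydata/bottleneck#393")
def test_nanmax_after_transpose():
    ...  # hypothetical placeholder; the real test lives in xarray's test suite
```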

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Added test for issue #6002 (currently fails) 1057355557
974066877 https://github.com/pydata/xarray/issues/6002#issuecomment-974066877 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846DxS9 RubendeBruin 34062862 2021-11-19T13:20:28Z 2021-11-19T13:20:28Z NONE

Ok, then it is clearly a bottleneck/numpy issue. I will raise it there and close it here.

Thanks!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460
973937765 https://github.com/pydata/xarray/issues/6002#issuecomment-973937765 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846DRxl RubendeBruin 34062862 2021-11-19T10:17:00Z 2021-11-19T10:17:00Z NONE

I can reproduce it by calling bn.nanmax directly, but I cannot reproduce it without the xarray.transpose() function.

  • If I call nanmax on the internal data of xarray then nanmax fails with a segfault:

```python
np_data = xdata['Spec name'].data
bn.nanmax(np_data)  # Segfault
```

  • But if I create a copy of that data and then call nanmax then it works fine:

```python
np_data = xdata['Spec name'].data
new_data = np_data.copy()
bn.nanmax(new_data)  # works
```

I suspect that the xarray.transpose function does something with the data structure (lazy reshuffling of dimensions?) that triggers the fault in bottleneck.
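
One way to check that suspicion (a minimal sketch, assuming the `xdata` built in the full code below): after transpose, the underlying numpy array is a strided view rather than a C-contiguous block, and an explicit contiguous copy avoids the crash just like `.copy()` does.

```python
import bottleneck as bn
import numpy as np

np_data = xdata['Spec name'].data      # view produced by transpose, no data copied
print(np_data.flags['C_CONTIGUOUS'])   # expected: False after transpose(..., "freq")
print(np_data.strides)                 # permuted strides of the original layout

safe = np.ascontiguousarray(np_data)   # explicit contiguous copy
print(bn.nanmax(safe))                 # works, same as np_data.copy()
```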

Full code:

```python
from collections import OrderedDict
import numpy as np
import xarray as xr

xr.show_versions()

n_time = 1  # 1 : Fails, 2 : everything is fine

from xarray.core.options import OPTIONS
OPTIONS["use_bottleneck"] = True  # Set to False for work-around

# Build some dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(n_time, 192, 121))

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = range(n_time)
coords['freq'] = freqs
coords['dir'] = dirs

xdata = xr.DataArray(
    data=spec_data,
    coords=coords,
    dims=dims,
    name='Spec name',
).to_dataset()

xdata = xdata.transpose(..., "freq")

import bottleneck as bn
np_data = xdata['Spec name'].data

new_data = np_data.copy()
bn.nanmax(new_data)  # works

bn.nanmax(np_data)  # Segfault
print('direct bn call done')
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460
973159855 https://github.com/pydata/xarray/issues/6002#issuecomment-973159855 https://api.github.com/repos/pydata/xarray/issues/6002 IC_kwDOAMm_X846AT2v RubendeBruin 34062862 2021-11-18T18:51:50Z 2021-11-18T18:51:50Z NONE

Tested on another machine (also win64) with the same result.

Running under WSL/Ubuntu results in a segmentation fault.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1 1057335460
972680945 https://github.com/pydata/xarray/issues/6001#issuecomment-972680945 https://api.github.com/repos/pydata/xarray/issues/6001 IC_kwDOAMm_X845-e7x RubendeBruin 34062862 2021-11-18T09:21:18Z 2021-11-18T09:21:18Z NONE

This is getting too messy - I will clean up and re-open.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Crash when calling max() after transposing a dataset in combination with numba 1057082683
972678034 https://github.com/pydata/xarray/issues/6001#issuecomment-972678034 https://api.github.com/repos/pydata/xarray/issues/6001 IC_kwDOAMm_X845-eOS RubendeBruin 34062862 2021-11-18T09:17:49Z 2021-11-18T09:17:49Z NONE

Minimum conda environment to reproduce:

```yml
name: ws
dependencies:
  - xarray
  - numba
channels:
  - defaults
  - conda-forge
```

resulting in:

```
# Name                  Version       Build            Channel
blas                    1.0           mkl
bottleneck              1.3.2         py38h2a96729_1
ca-certificates         2021.10.26    haa95532_2
importlib-metadata      4.8.2         py38haa244fe_0   conda-forge
importlib_metadata      4.8.2         hd8ed1ab_0       conda-forge
intel-openmp            2021.4.0      haa95532_3556
libblas                 3.9.0         12_win64_mkl     conda-forge
libcblas                3.9.0         12_win64_mkl     conda-forge
liblapack               3.9.0         12_win64_mkl     conda-forge
llvmlite                0.35.0        py38h34b8924_4
mkl                     2021.4.0      h0e2418a_729     conda-forge
mkl-service             2.4.0         py38h2bbff1b_0
numba                   0.52.0        py38hf11a4ad_0
numexpr                 2.7.3         py38hb80d3ca_1
numpy                   1.21.4        py38h089cfbf_0   conda-forge
openssl                 1.1.1l        h2bbff1b_0
pandas                  1.3.4         py38h6214cd6_0
pip                     21.3.1        pyhd8ed1ab_0     conda-forge
python                  3.8.12        h6244533_0
python-dateutil         2.8.2         pyhd3eb1b0_0
python_abi              3.8           2_cp38           conda-forge
pytz                    2021.3        pyhd3eb1b0_0
setuptools              59.1.1        py38haa244fe_0   conda-forge
six                     1.16.0        pyhd3eb1b0_0
sqlite                  3.36.0        h2bbff1b_0
tbb                     2021.4.0      h59b6b97_0
typing_extensions       4.0.0         pyha770c72_0     conda-forge
ucrt                    10.0.20348.0  h57928b3_0       conda-forge
vc                      14.2          h21ff451_1
vs2015_runtime          14.29.30037   h902a5da_5       conda-forge
wheel                   0.37.0        pyhd3eb1b0_1
xarray                  0.20.1        pyhd8ed1ab_0     conda-forge
zipp                    3.6.0         pyhd3eb1b0_0
zlib                    1.2.11        h62dcd97_4
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Crash when calling max() after transposing a dataset in combination with numba 1057082683
972665006 https://github.com/pydata/xarray/issues/6001#issuecomment-972665006 https://api.github.com/repos/pydata/xarray/issues/6001 IC_kwDOAMm_X845-bCu RubendeBruin 34062862 2021-11-18T09:02:05Z 2021-11-18T09:02:05Z NONE

So I am not sure whether I should post the issue here or with numba.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Crash when calling max() after transposing a dataset in combination with numba 1057082683
575520252 https://github.com/pydata/xarray/issues/3622#issuecomment-575520252 https://api.github.com/repos/pydata/xarray/issues/3622 MDEyOklzc3VlQ29tbWVudDU3NTUyMDI1Mg== RubendeBruin 34062862 2020-01-17T08:06:04Z 2020-01-17T08:06:04Z NONE

Thanks for the link to the tutorial!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  custom interpolation 537936090

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
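For reference, the selection shown above (10 rows where user = 34062862 sorted by updated_at descending) corresponds to a query along these lines; a sketch assuming a local SQLite copy of this database (the filename github.db is hypothetical):

```python
import sqlite3

# Hypothetical local copy of the database behind this Datasette instance.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, html_url, created_at, updated_at, body
    FROM issue_comments
    WHERE [user] = 34062862
    ORDER BY updated_at DESC
    LIMIT 10
    """
).fetchall()
for comment_id, html_url, created_at, updated_at, body in rows:
    print(comment_id, updated_at, html_url)
```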