
issues


3 rows where repo = 13221727, type = "issue" and user = 34062862 sorted by updated_at descending

Issue #6002: Abnormal process termination when using bottleneck function on xarray data after transposing and having a dimension with length 1
id: 1057335460 · node_id: I_kwDOAMm_X84_Baik · user: RubendeBruin (34062862) · state: closed · comments: 5 · created_at: 2021-11-18T13:06:25Z · updated_at: 2021-11-19T13:20:28Z · closed_at: 2021-11-19T13:20:28Z · author_association: NONE

When running the following example, the Python interpreter exits abnormally (exit code -1073741819).

The code that causes the exception is the nanmax() function in bottleneck.

This error only occurs when:

  1. The data contains a dimension with length 1
  2. The data is transposed
  3. bottleneck is used

So I suspect that the transpose function changes the data layout in a way that bottleneck can no longer handle.

  • Calling bn.nanmax on a normal ndarray works fine
  • Calling data.max() before transposing works fine
  • Running the example without using bottleneck (either by not installing it or by disabling it in OPTIONS) works fine
  • Running the example with len(time) > 1 works fine
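The layout change suspected above can be seen directly in NumPy, which backs xarray's data. A minimal illustrative sketch (shapes taken from the example below; the stride values shown are for float64): a transpose returns a view with permuted strides rather than a copy, which is exactly the kind of change a native extension like bottleneck has to cope with.

```python
import numpy as np

# Transposing does not copy the data: it returns a view with permuted
# strides, so the transposed array is no longer C-contiguous.
a = np.random.random((1, 192, 121))   # (time, freq, dir), float64
t = a.transpose(0, 2, 1)              # move 'freq' last, like transpose(..., "freq")

print(a.strides)                 # (185856, 968, 8)
print(t.strides)                 # (185856, 8, 968) -- permuted, not copied
print(t.flags['C_CONTIGUOUS'])   # False
print(np.shares_memory(a, t))    # True
```

This only demonstrates the layout change, not the crash itself.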

What happened:

Process finished with exit code -1073741819 (0xC0000005)

What you expected to happen:

The script should run without crashing.

Minimal Complete Verifiable Example:

```python
from collections import OrderedDict

import numpy as np
import xarray as xr

xr.show_versions()

n_time = 1  # 1: fails, 2: everything is fine

from xarray.core.options import OPTIONS
OPTIONS["use_bottleneck"] = True  # set to False for work-around

# Build some dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(n_time, 192, 121))

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = range(n_time)
coords['freq'] = freqs
coords['dir'] = dirs

xdata = xr.DataArray(
    data=spec_data,
    coords=coords,
    dims=dims,
    name='Spec name',
).to_dataset()

xdata = xdata.transpose(..., "freq")  # remove this line and the script will run

tm = xdata.max()
print('Done')
```

Anything else we need to know?:

Uhm... it was really hard to dig this deep? :-)

Environment:

Tested with Python 3.8 and 3.9 on win64. The only required packages are xarray and bottleneck. For example:

```yml
name: ws
dependencies:
  - python=3.9
  - xarray
channels:
  - defaults
  - conda-forge
```

Note: xarray requires pandas, which installs bottleneck. To install without bottleneck, pin pandas to 1.2.4.

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: None
libnetcdf: None
xarray: 0.20.1
pandas: 1.2.4
numpy: 1.21.4
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 59.1.1
pip: 21.3.1
conda: None
pytest: None
IPython: None
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6002/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
Issue #6001: Crash when calling max() after transposing a dataset in combination with numba
id: 1057082683 · node_id: I_kwDOAMm_X84_Ac07 · user: RubendeBruin (34062862) · state: closed · comments: 4 · created_at: 2021-11-18T08:39:56Z · updated_at: 2021-11-18T20:17:56Z · closed_at: 2021-11-18T09:21:18Z · author_association: NONE

I have a piece of code that runs fine in a conda environment with just xarray, but when I add numba the code crashes.

What happened:

Abnormal process termination (e.g. Process finished with exit code -1073741819 (0xC0000005))

What you expected to happen:

The script should calculate the max of the data (the same result as before transposing).

Minimal Complete Verifiable Example:

```python
from collections import OrderedDict

import numpy as np
import xarray as xr

# Build some dataset
dirs = np.linspace(0, 360, num=121)
freqs = np.linspace(0, 4, num=192)
spec_data = np.random.random(size=(192, 121))
data = [spec_data]

dims = ('time', 'freq', 'dir')
coords = OrderedDict()
coords['time'] = [0, ]
coords['freq'] = freqs
coords['dir'] = dirs

print('constructing data-array')

xdata = xr.DataArray(
    data=data,
    coords=coords,
    dims=dims,
    name='Spec name',
).to_dataset()

print('getting max')
print(xdata.max())  # works fine

print('transposing data-array')
tdata = xdata.transpose(..., "freq")

print('getting max')
print(tdata.max())  # <==== Process finished with exit code -1073741819 (0xC0000005)
print('done!')
```

Anything else we need to know?:

Running on Windows 10 x64 with Python 3.10; the crash also occurs with 3.8 and 3.9 (I did not test any others).

Environment:

When I create an environment with ONLY xarray, everything works as expected:

This environment WORKS

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.0 | packaged by conda-forge | (default, Oct 12 2021, 21:17:52) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: None
LOCALE: English_United States.1252
libhdf5: None
libnetcdf: None
xarray: 0.18.0
pandas: 1.3.4
numpy: 1.21.4
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 59.1.1
pip: 21.3.1
conda: None
pytest: None
IPython: None
sphinx: None
```

But when I add numba then it fails:

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: None
libnetcdf: None
xarray: 0.20.1
pandas: 1.3.4
numpy: 1.20.3
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 59.1.1
pip: 21.3.1
conda: None
pytest: None
IPython: None
sphinx: None

--- conda output ---

# packages in environment at c:\python\miniconda3\envs\ws:
#
# Name                    Version        Build            Channel
blas                      1.0            mkl
bottleneck                1.3.2          py39h7cc1a96_1
ca-certificates           2021.10.26     haa95532_2
importlib-metadata        4.8.2          py39hcbf5309_0   conda-forge
importlib_metadata        4.8.2          hd8ed1ab_0       conda-forge
intel-openmp              2021.4.0       haa95532_3556
llvmlite                  0.37.0         py39h23ce68f_1
mkl                       2021.4.0       haa95532_640
mkl-service               2.4.0          py39h2bbff1b_0
mkl_fft                   1.3.1          py39h277e83a_0
mkl_random                1.2.2          py39hf11a4ad_0
numba                     0.54.1         py39hf11a4ad_0
numexpr                   2.7.3          py39hb80d3ca_1
numpy                     1.20.3         py39ha4e8547_0
numpy-base                1.20.3         py39hc2deb75_0
openssl                   1.1.1l         h2bbff1b_0
pandas                    1.3.4          py39h6214cd6_0
pip                       21.3.1         pyhd8ed1ab_0     conda-forge
python                    3.9.7          h6244533_1
python-dateutil           2.8.2          pyhd3eb1b0_0
python_abi                3.9            2_cp39           conda-forge
pytz                      2021.3         pyhd3eb1b0_0
setuptools                59.1.1         py39hcbf5309_0   conda-forge
six                       1.16.0         pyhd3eb1b0_0
sqlite                    3.36.0         h2bbff1b_0
tbb                       2021.4.0       h59b6b97_0
typing_extensions         4.0.0          pyha770c72_0     conda-forge
tzdata                    2021e          hda174b7_0
ucrt                      10.0.20348.0   h57928b3_0       conda-forge
vc                        14.2           h21ff451_1
vs2015_runtime            14.29.30037    h902a5da_5       conda-forge
wheel                     0.37.0         pyhd3eb1b0_1
xarray                    0.20.1         pyhd8ed1ab_0     conda-forge
zipp                      3.6.0          pyhd3eb1b0_0
zlib                      1.2.11         h62dcd97_4
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6001/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
Issue #3622: custom interpolation
id: 537936090 · node_id: MDU6SXNzdWU1Mzc5MzYwOTA= · user: RubendeBruin (34062862) · state: closed · comments: 2 · created_at: 2019-12-14T16:43:16Z · updated_at: 2020-01-17T08:06:05Z · closed_at: 2020-01-17T08:06:04Z · author_association: NONE

I need to interpolate (wave) forces on a ship between headings and/or frequencies.

Heading and frequency are both dimensions; a force is a phase + amplitude pair.

What I would normally do is linear interpolation of the force amplitude and linear interpolation of the unwrapped phase. Interpolation of the amplitude works fine, but interpolation of the phase is troublesome because I cannot unwrap the phases in two dimensions (heading and frequency) at the same time.
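The two-dimensional unwrap problem can be illustrated in plain NumPy (a sketch with made-up phase data; `np.unwrap` only operates along a single axis at a time):

```python
import numpy as np

# np.unwrap removes 2*pi jumps along one axis only, so a phase field over
# (heading, frequency) can be made continuous along one dimension at a
# time, but not along both simultaneously.
rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, size=(4, 20))  # (heading, freq), wrapped

along_freq = np.unwrap(phase, axis=1)     # continuous along frequency only
along_heading = np.unwrap(phase, axis=0)  # continuous along heading only
# Both agree with the original modulo 2*pi, but the two results generally
# differ from each other away from the starting row/column.
```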

A solution that I can think of is storing the phase as a complex number, interpolate that, and then get the phase of the interpolated value. But calculating the angle (phase) from the interpolated values would be slow and it feels like a workaround rather than a good solution.

It would be great if I could pass a custom interpolation function to the interpolate method to use instead of scipy.interpolate.interp1d. But as far as I can see this is not (yet) an option.
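The complex-number workaround mentioned above can be sketched in plain NumPy. A hypothetical 1-D example with two headings (with xarray one would interpolate the real and imaginary parts the same way, e.g. via `interp`):

```python
import numpy as np

# Interpolating the phase through the wrap-around point: phases of 170 deg
# and -170 deg should interpolate through 180 deg, not through 0 deg.
headings = np.array([0.0, 90.0])
amp = np.array([1.0, 1.0])
phase = np.deg2rad([170.0, -170.0])

# Store the force as a complex number ...
force = amp * np.exp(1j * phase)

# ... interpolate the real and imaginary parts separately ...
h = 45.0  # midpoint between the two headings
re = np.interp(h, headings, force.real)
im = np.interp(h, headings, force.imag)
force_i = re + 1j * im

# ... and recover amplitude and phase of the interpolated value.
phase_i = np.rad2deg(np.angle(force_i))  # +/- 180 deg, as desired
amp_i = np.abs(force_i)                  # slightly < 1: the chord is shorter than the arc
```

As the last comment notes, linearly interpolating the complex value shortens the amplitude slightly near large phase differences, which is one reason it feels like a workaround rather than a clean solution.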

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3622/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 5998.49ms · About: xarray-datasette