issues


4 rows where user = 44142765 sorted by updated_at descending

Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sorted descending), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
#7584 `np.multiply` and `dask.array.multiply` trigger graph computation vs using `+` and `*` operators. · opened by zoj613 (44142765) · state: closed · comments: 3 · created: 2023-03-03T18:37:06Z · updated: 2023-11-06T08:45:55Z · closed: 2023-11-06T08:45:54Z · author_association: NONE · id: 1609090149 · node_id: I_kwDOAMm_X85f6MRl

What is your issue?

I was trying to implement a simple weighted average function over a few xr.DataArray objects:

```Python
def weighted_avg(args, weights):
    return np.multiply(args, weights).sum() / sum(weights)
```

To my surprise, a call to this function causes the graph to be computed: I had to wait quite a while, given the size of the DataArrays. Even replacing np.multiply with dask.array.multiply did not make a difference. So I decided to implement the function using built-in operators instead:

```Python
def weighted_avg(args, weights):
    import operator as op
    return sum(map(op.mul, args, weights)) / sum(weights)
```

To my surprise, this implementation returns immediately, suggesting that the graph computation is delayed. Is this expected behaviour? I thought numpy/dask array functions evaluate lazily, so this seems odd to me.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7584/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#7713 `Variable/IndexVariable` do not accept a tuple for data. · opened by zoj613 (44142765) · state: closed · comments: 4 · created: 2023-04-03T14:50:58Z · updated: 2023-04-28T14:26:37Z · closed: 2023-04-28T14:26:37Z · author_association: NONE · id: 1652227927 · node_id: I_kwDOAMm_X85iev9X

What happened?

It appears that Variable and IndexVariable do not accept a tuple for the data parameter, even though the docstring says data can be any array_like object (a tuple falls under that type, right?).

What did you expect to happen?

Successful instantiation of a Variable/IndexVariable object; instead, a ValueError exception is raised.

Minimal Complete Verifiable Example

```Python
import xarray as xr

xr.Variable(data=(2, 3, 45), dims="day")
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
ValueError: dimensions ('day',) must have the same length as the number of data dimensions, ndim=0
```

Anything else we need to know?

This error seems to be triggered by the self._parse_dimensions(dims) call inside the Variable class. The problem does not occur if I use a list. I find it strange that the array_like data specifically needs to be a certain type of object for the call to work; if it has to be a list, the docstring should say so.
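If tuple special-casing is indeed the culprit, one workaround (hypothetical with respect to xarray, exercised here with numpy only) is to convert the data up front, since a tuple of scalars round-trips to an ordinary 1-D array:

```python
import numpy as np

# A tuple of scalars converts cleanly to a 1-D array, so converting before
# constructing the Variable sidesteps whatever special handling tuples get:
data = np.asarray((2, 3, 45))
print(data.ndim, data.shape)  # 1 (3,)
# e.g. xr.Variable(data=np.asarray((2, 3, 45)), dims="day")  # hypothetical fix
```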

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 6.1.21-1-lts
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1

xarray: 2023.1.0
pandas: 1.5.3
numpy: 1.23.5
scipy: 1.10.1
netCDF4: 1.6.2
pydap: None
h5netcdf: 1.1.0
h5py: 3.8.0
Nio: None
zarr: 2.14.2
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2023.3.2
distributed: 2023.3.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.3.0
cupy: None
pint: None
sparse: 0.14.0
flox: None
numpy_groupies: None
setuptools: 67.6.1
pip: 23.0.1
conda: None
pytest: 7.2.2
mypy: 1.1.1
IPython: 8.12.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7713/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
#6891 Passing extra keyword arguments to `curvefit` throws an exception. · opened by zoj613 (44142765) · state: open · comments: 13 · created: 2022-08-08T14:47:56Z · updated: 2023-03-26T19:40:43Z · author_association: NONE · id: 1331985070 · node_id: I_kwDOAMm_X85PZHqu

What happened?

As the title says, passing an extra keyword argument meant for scipy's curve_fit throws an exception. The documentation's parameter section says: *kwargs (optional) – Additional keyword arguments to passed to scipy curve_fit

So if one passes a method="trf" keyword argument to the .curvefit method, an error is raised: TypeError: curvefit() got an unexpected keyword argument 'method'

The only way it works as expected is if I pass the keyword arguments as dictionary elements via a kwargs argument like so: kwargs={"method": "trf"}. This behaviour contradicts what is mentioned in the docstring.
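The mismatch can be sketched with two toy signatures (hypothetical helpers, not xarray's real implementation): a plain `kwargs` dict parameter rejects arbitrary keywords, while a true `**kwargs`, as the docstring promises, accepts them:

```python
# Hypothetical illustrations of the two signature styles.
def curvefit_as_documented(func, **kwargs):      # what the docstring promises
    return kwargs

def curvefit_as_implemented(func, kwargs=None):  # an explicit dict parameter
    return kwargs or {}

print(curvefit_as_documented(len, method="trf"))           # {'method': 'trf'}

try:
    curvefit_as_implemented(len, method="trf")
except TypeError as err:
    print(err)  # ...got an unexpected keyword argument 'method'

# The workaround from the report: pass the options as an explicit dict.
print(curvefit_as_implemented(len, kwargs={"method": "trf"}))
```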

What did you expect to happen?

No error thrown

Minimal Complete Verifiable Example

```Python
import pandas as pd
import xarray as xr
import numpy as np

da = xr.DataArray(
    np.random.rand(4, 3),
    [
        ("time", pd.date_range("2000-01-01", periods=4)),
        ("space", ["IA", "IL", "IN"]),
    ],
)
da.curvefit(coords=["time"], func=lambda x, params: x, method="trf")
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
TypeError: curvefit() got an unexpected keyword argument 'method'
```

Anything else we need to know?

No response

Environment

```shell
commit: None
python: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-1061-aws
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1

xarray: 2022.3.0
pandas: 1.4.3
numpy: 1.22.0
scipy: 1.6.2
netCDF4: 1.6.0
pydap: None
h5netcdf: 0.15.0
h5py: 3.7.0
Nio: None
zarr: 2.10.3
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.10.1
iris: None
bottleneck: None
dask: 2022.03.0
distributed: 2022.3.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.01.0
cupy: None
pint: None
sparse: 0.13.0
setuptools: 63.1.0
pip: 22.2.2
conda: None
pytest: 6.2.5
IPython: 8.4.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6891/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
#6395 two Dataset objects reference the same numpy array memory block upon creation. · opened by zoj613 (44142765) · state: closed · comments: 3 · created: 2022-03-21T15:03:20Z · updated: 2022-03-22T11:52:31Z · closed: 2022-03-21T15:55:08Z · author_association: NONE · id: 1175517164 · node_id: I_kwDOAMm_X85GEPfs

What happened?

I tried creating two new Dataset objects using empty numpy arrays that I would later populate with values via integer indexing. To my surprise, the output was identical for both datasets. After digging around, I discovered that the underlying numpy arrays of the two Dataset objects were the same object. This confused me because I did not make a copy of one to create the other.

What did you expect to happen?

Getting two separate objects with non-identical memory addresses for the numpy arrays they contain.

Minimal Complete Verifiable Example

```Python
import xarray as xr
import pandas as pd
import numpy as np

def xarray_dataset():
    rng = np.random.default_rng(0)
    data_map = {
        "Tmin": rng.uniform(-1, 1, size=(3, 2, 2)),
        "Tmax": rng.uniform(-1, 1, size=(3, 2, 2)),
    }
    lon = [-99.83, -99.32]
    lat = [42.25, 42.21]
    time = pd.date_range("2014-09-06", "2016-09-06", periods=3)
    var_map = {"time": time, "lat": lat, "lon": lon}
    out_map = {name: (tuple(var_map), data_map[name]) for name in data_map}
    return xr.Dataset(data_vars=out_map, coords=var_map)

base = xarray_dataset()
dims = tuple(base.dims)
base_shape = (base.time.size, base.lat.size, base.lon.size)
var_map = {var: (dims, np.empty(base_shape)) for var in ("var_a", "var_b")}
coord_map = {
    "time": (("time",), base.time.values),
    "lon": (("lon",), base.lon.values),
    "lat": (("lat",), base.lat.values),
}
out1 = xr.Dataset(var_map, coords=coord_map)
out2 = xr.Dataset(var_map, coords=coord_map)

print(out1 is out2)  # False
print(out1.var_a.values is out2.var_a.values)  # True......but HOW?!
```

Relevant log output

```Python
print(out1 is out2)  # False
print(out1.var_a.values is out2.var_a.values)  # True......but HOW?!
```

Anything else we need to know?

It seems as though changing the lines:

```python
out1 = xr.Dataset(var_map, coords=coord_map)
out2 = xr.Dataset(var_map, coords=coord_map)
```

to

```python
out1 = xr.Dataset(var_map, coords=coord_map)
out2 = out1.copy(deep=True)
```

fixes the issue.
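The aliasing is consistent with both Dataset calls receiving the same var_map dict, whose values are the very same numpy arrays: xarray wraps the arrays it is given rather than copying them. A minimal dict-based sketch of the effect (plain dicts standing in for Dataset, not xarray internals):

```python
import numpy as np

shared = np.empty((2, 2))      # one buffer...
var_map = {"var_a": shared}    # ...referenced by the shared mapping

ds1 = dict(var_map)  # stand-ins for xr.Dataset(var_map, coords=...)
ds2 = dict(var_map)

print(ds1 is ds2)                    # False: two distinct containers
print(ds1["var_a"] is ds2["var_a"])  # True: both hold the same array object

# Copying the values (analogous to out1.copy(deep=True)) breaks the aliasing:
ds2_fixed = {k: v.copy() for k, v in var_map.items()}
print(ds1["var_a"] is ds2_fixed["var_a"])  # False
```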

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.15.25-1-lts
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4

xarray: 0.21.1
pandas: 1.1.5
numpy: 1.22.0
scipy: 1.6.2
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: 3.6.0
Nio: None
zarr: 2.10.3
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.01.0
cupy: None
pint: None
sparse: None
setuptools: 60.0.4
pip: 21.3.1
conda: 4.11.0
pytest: 6.2.5
IPython: 8.0.1
sphinx: 4.4.0
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6395/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);