issues
4 rows where user = 44142765 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609090149 | I_kwDOAMm_X85f6MRl | 7584 | `np.multiply` and `dask.array.multiply` trigger graph computation vs using `+` and `*` operators. | zoj613 44142765 | closed | 0 | 3 | 2023-03-03T18:37:06Z | 2023-11-06T08:45:55Z | 2023-11-06T08:45:54Z | NONE | What is your issue?

I was trying to implement a simple weighted average function using a few

To my surprise, a call to this function causes the graph to be computed: I had to wait for quite a bit, given the size of the DataArrays, before I realised that the graph was being computed. Even replacing
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
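The laziness difference this issue describes can be illustrated without dask. `np.multiply` is a ufunc: when an operand has no `__array_ufunc__` handler, numpy coerces it via `__array__`, which forces materialization; the `*` operator instead dispatches to `__mul__` and can stay lazy. A minimal sketch of that mechanism, where the `LazyBox` class is hypothetical and merely stands in for a dask-backed array:

```python
import numpy as np

class LazyBox:
    """Hypothetical lazy container: materializes only via __array__."""
    def __init__(self):
        self.computed = False

    def __array__(self, dtype=None, copy=None):
        # numpy calls this to coerce unknown objects -> forces a "compute"
        self.computed = True
        return np.ones(3)

    def __mul__(self, other):
        # operator path: return a new lazy box, nothing is materialized
        return LazyBox()

a, b = LazyBox(), LazyBox()
lazy = a * 2               # __mul__: stays lazy
eager = np.multiply(b, 2)  # ufunc coerces b through __array__
print(a.computed, b.computed)  # False True
```

This is consistent with the report: routing the arithmetic through `np.multiply` can eagerly evaluate a lazy operand, while the `*` operator leaves the graph unevaluated.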
1652227927 | I_kwDOAMm_X85iev9X | 7713 | `Variable/IndexVariable` do not accept a tuple for data. | zoj613 44142765 | closed | 0 | 4 | 2023-04-03T14:50:58Z | 2023-04-28T14:26:37Z | 2023-04-28T14:26:37Z | NONE | What happened?

It appears that

What did you expect to happen?

Successful instantiation of a

Minimal Complete Verifiable Example

```Python
import xarray as xr

xr.Variable(data=(2, 3, 45), dims="day")
```

MVCE confirmation
Relevant log output
Anything else we need to know?

This error seems to be triggered by the

Environment
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55)
[GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 6.1.21-1-lts
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1
xarray: 2023.1.0
pandas: 1.5.3
numpy: 1.23.5
scipy: 1.10.1
netCDF4: 1.6.2
pydap: None
h5netcdf: 1.1.0
h5py: 3.8.0
Nio: None
zarr: 2.14.2
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2023.3.2
distributed: 2023.3.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2023.3.0
cupy: None
pint: None
sparse: 0.14.0
flox: None
numpy_groupies: None
setuptools: 67.6.1
pip: 23.0.1
conda: None
pytest: 7.2.2
mypy: 1.1.1
IPython: 8.12.0
sphinx: None
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
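A workaround consistent with the report above is to convert the tuple with `np.asarray` before constructing the `Variable`, which sidesteps the failing code path. A minimal sketch of the conversion step; the `xr.Variable` call itself is shown only as a comment mirroring the MVCE:

```python
import numpy as np

data = (2, 3, 45)       # the tuple from the MVCE
arr = np.asarray(data)  # tuples convert cleanly to a 1-D ndarray
print(arr.shape)        # (3,)

# with an ndarray in hand, the MVCE's call instantiates successfully:
# xr.Variable(dims="day", data=arr)
```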
1331985070 | I_kwDOAMm_X85PZHqu | 6891 | Passing extra keyword arguments to `curvefit` throws an exception. | zoj613 44142765 | open | 0 | 13 | 2022-08-08T14:47:56Z | 2023-03-26T19:40:43Z | NONE | What happened?

Just like the title says, passing an extra keyword argument corresponding to scipy's

So if one specifies a

The only way it works as expected is if I pass the keyword arguments as dictionary elements via a

What did you expect to happen?

No error thrown

Minimal Complete Verifiable Example

```Python
import pandas as pd
import xarray as xr
import numpy as np

da = xr.DataArray(
)

da.curvefit(coords=["time"], func=lambda x, params: x, method="trf")
```

MVCE confirmation
Relevant log output
Anything else we need to know?

No response

Environment
```shell
commit: None
python: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07)
[GCC 9.4.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-1061-aws
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.1
libnetcdf: 4.8.1
xarray: 2022.3.0
pandas: 1.4.3
numpy: 1.22.0
scipy: 1.6.2
netCDF4: 1.6.0
pydap: None
h5netcdf: 0.15.0
h5py: 3.7.0
Nio: None
zarr: 2.10.3
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: 0.9.10.1
iris: None
bottleneck: None
dask: 2022.03.0
distributed: 2022.3.0
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.01.0
cupy: None
pint: None
sparse: 0.13.0
setuptools: 63.1.0
pip: 22.2.2
conda: None
pytest: 6.2.5
IPython: 8.4.0
sphinx: None
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
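The workaround the report describes, routing solver options through a `kwargs` dict rather than passing them as direct keyword arguments, can be sketched with stand-ins. Both `curve_fit_stub` and `curvefit_like` below are hypothetical names that only model the forwarding behavior, not the real `scipy.optimize.curve_fit` or `DataArray.curvefit` implementations:

```python
def curve_fit_stub(f, xdata, ydata, method="lm"):
    """Stand-in for scipy.optimize.curve_fit: accepts a `method` option."""
    return method

def curvefit_like(func, coords, kwargs=None, **direct):
    """Models the reported behavior: direct extras are rejected,
    options must travel through the `kwargs` dict."""
    if direct:
        raise TypeError(f"unexpected keyword arguments: {sorted(direct)}")
    kwargs = kwargs or {}
    return curve_fit_stub(func, coords, coords, **kwargs)

# passing method= directly fails, as in the report:
try:
    curvefit_like(lambda x: x, [0, 1], method="trf")
except TypeError as exc:
    print(exc)

# the dict route reaches the underlying fitter:
print(curvefit_like(lambda x: x, [0, 1], kwargs={"method": "trf"}))  # trf
```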
1175517164 | I_kwDOAMm_X85GEPfs | 6395 | two Dataset objects reference the same numpy array memory block upon creation. | zoj613 44142765 | closed | 0 | 3 | 2022-03-21T15:03:20Z | 2022-03-22T11:52:31Z | 2022-03-21T15:55:08Z | NONE | What happened?

I tried creating two new

What did you expect to happen?

Getting two separate objects with non-identical memory addresses for the numpy arrays they contain.

Minimal Complete Verifiable Example

```Python
import xarray as xr
import pandas as pd
import numpy as np

def xarray_dataset():
    rng = np.random.default_rng(0)
    data_map = {
        "Tmin": rng.uniform(-1, 1, size=(3, 2, 2)),
        "Tmax": rng.uniform(-1, 1, size=(3, 2, 2)),
    }
    lon = [-99.83, -99.32]
    lat = [42.25, 42.21]
    time = pd.date_range("2014-09-06", "2016-09-06", periods=3)
    var_map = {"time": time, "lat": lat, "lon": lon}
    out_map = {name: (tuple(var_map), data_map[name]) for name in data_map}
    return xr.Dataset(data_vars=out_map, coords=var_map)

base = xarray_dataset()
dims = tuple(base.dims)
base_shape = (base.time.size, base.lat.size, base.lon.size)
var_map = {var: (dims, np.empty(base_shape)) for var in ("var_a", "var_b")}
coord_map = {
    "time": (("time",), base.time.values),
    "lon": (("lon",), base.lon.values),
    "lat": (("lat",), base.lat.values),
}
out1 = xr.Dataset(var_map, coords=coord_map)
out2 = xr.Dataset(var_map, coords=coord_map)
print(out1 is out2)  # False
print(out1.var_a.values is out2.var_a.values)  # True......but HOW?!
```

Relevant log output
Anything else we need to know?

It seems as though changing the lines:
fixes the issue.

Environment

```
INSTALLED VERSIONS
commit: None
python: 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.15.25-1-lts
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.21.1
pandas: 1.1.5
numpy: 1.22.0
scipy: 1.6.2
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: 3.6.0
Nio: None
zarr: 2.10.3
cftime: 1.5.1.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.5.1
cartopy: None
seaborn: None
numbagg: None
fsspec: 2022.01.0
cupy: None
pint: None
sparse: None
setuptools: 60.0.4
pip: 21.3.1
conda: 4.11.0
pytest: 6.2.5
IPython: 8.0.1
sphinx: 4.4.0
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
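The aliasing in the report can be reproduced with plain dictionaries and numpy: the `var_map` dict allocates `np.empty` once, so every consumer built from it wraps the same buffer. A sketch of that mechanism, where the `fresh_vars` helper is hypothetical and stands in for allocating per construction, which is what the report's suggested fix amounts to:

```python
import numpy as np

shape = (3, 2, 2)

# one allocation, reused by every consumer of the dict:
shared = {v: np.empty(shape) for v in ("var_a", "var_b")}
first = shared["var_a"]
second = shared["var_a"]
print(first is second)  # True: the same buffer both times

# hypothetical fix: allocate per call, so each result owns its memory
def fresh_vars():
    return {v: np.empty(shape) for v in ("var_a", "var_b")}

out1, out2 = fresh_vars(), fresh_vars()
print(out1["var_a"] is out2["var_a"])  # False
```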
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```