issues


5 rows where user = 28786187 sorted by updated_at descending




Facet counts:
  • type: issue 3, pull 2
  • state: open 3, closed 2
  • repo: xarray 5
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort key), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
#8924: Keep attributes for "bounds" variables
id: 2235746028 (PR_kwDOAMm_X85sQRSg) · user: st-bender (28786187) · state: open · locked: 0 · comments: 0 · created_at: 2024-04-10T14:29:58Z · updated_at: 2024-04-11T14:09:48Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/8924

Issue #2921 is about mismatching time units between a time variable and its "bounds" companion. However, #2965 does more than fix #2921: it removes all duplicate attributes from "bounds" variables, with the undesired side effect that there is currently no way to save them to netCDF with xarray. Since the mentioned link is a recommendation, not a hard requirement, for CF compliance, it should be left to the caller to prepare the dataset variables appropriately if required. This reduces the surprise that attributes are not written to disk, and fixes #8368.

  • [x] Closes #8368
  • [ ] Tests added (simple round trip test would be in issue #8368)
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst (not sure about "user visibility" here)
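To make the attribute handling concrete, here is a minimal sketch (assuming xarray and NumPy; the variable names and attribute values are illustrative, not taken from the PR):

```python
import numpy as np
import xarray as xr

# Illustrative dataset: a "time" coordinate pointing at a companion
# "time_bnds" variable via the CF "bounds" attribute.
ds = xr.Dataset(
    {"time_bnds": (("time", "bnds"), np.array([[0.0, 1.0], [1.0, 2.0]]))},
    coords={"time": ("time", [0.5, 1.5], {"bounds": "time_bnds"})},
)

# Attributes the caller may deliberately set on the bounds variable; with the
# behavior this PR changes, such attributes were stripped when encoding.
ds["time_bnds"].attrs["units"] = "days since 2000-01-01"

assert ds["time"].attrs["bounds"] == "time_bnds"
assert ds["time_bnds"].attrs["units"] == "days since 2000-01-01"
```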
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8924/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#8099: inconsistent dimension propagation in `.interp()` with non-numeric types
id: 1861738444 (I_kwDOAMm_X85u99_M) · user: st-bender (28786187) · state: open · locked: 0 · comments: 0 · created_at: 2023-08-22T15:46:13Z · updated_at: 2023-08-22T15:55:44Z · author_association: CONTRIBUTOR

What happened?

Hi, I tried to find a similar issue but haven't seen one. When interpolating a dataset containing non-numeric types using a DataArray for the new coordinates, the dimensions of the interpolated dataset differ from the case with only numeric types. In case that is not clear enough, see the example.

Edit: looks like nothing is interpolated at all in the non-numeric case.

What did you expect to happen?

The output dimensions should be the same in both cases and match those of the numeric-only case.

Minimal Complete Verifiable Example

```Python
import xarray as xr
import numpy as np

ds = xr.Dataset(
    {"x": ("a", np.arange(4))},
    coords={"a": (np.arange(4) - 1.5)},
)

t = xr.DataArray(
    np.random.randn(6).reshape((2, 3)) * 0.5,
    dims=["r", "s"],
    coords={"r": np.arange(2) - 0.5, "s": np.arange(3) - 1},
)

ds["m"] = ds.x > 1

# different dimensions for x, compare
ds.interp(a=t, method="linear")

# to numeric only
ds[["x"]].interp(a=t, method="linear")
```

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [X] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [X] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
# Output in the numeric-only case:

<xarray.Dataset>
Dimensions:  (r: 2, s: 3)
Coordinates:
    a        (r, s) float64 -0.177 -0.2806 -0.1925 -0.1464 0.3165 0.6232
  * r        (r) float64 -0.5 0.5
  * s        (s) int64 -1 0 1
Data variables:
    x        (r, s) float64 1.323 1.219 1.308 1.354 1.816 2.123

# Output including non-numeric variables:

<xarray.Dataset>
Dimensions:  (a: 4, r: 2, s: 3)
Coordinates:
  * a        (r, s) float64 -0.177 -0.2806 -0.1925 -0.1464 0.3165 0.6232
  * r        (r) float64 -0.5 0.5
  * s        (s) int64 -1 0 1
Data variables:
    x        (a) int64 0 1 2 3
    m        (a) bool False False True True
```

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0]
python-bits: 64
OS: Linux
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.1

xarray: 2023.7.0
pandas: 2.0.3
numpy: 1.25.2
scipy: 1.11.1
netCDF4: 1.6.3
pydap: installed
h5netcdf: 1.2.0
h5py: 3.8.0
Nio: None
zarr: 2.16.0
cftime: 1.6.2
nc_time_axis: None
PseudoNetCDF: None
iris: None
bottleneck: None
dask: 2023.8.0
distributed: 2023.8.0
matplotlib: 3.7.2
cartopy: 0.22.0
seaborn: None
numbagg: None
fsspec: 2023.6.0
cupy: None
pint: None
sparse: None
flox: 0.7.2
numpy_groupies: 0.9.22
setuptools: 68.0.0
pip: 23.2.1
conda: 23.7.2
pytest: 7.4.0
mypy: None
IPython: 8.7.0
sphinx: 5.3.0
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8099/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
#6953: DataArray.resample().apply() fails to apply custom function
id: 1350803561 (I_kwDOAMm_X85Qg6Bp) · user: st-bender (28786187) · state: open · locked: 0 · comments: 5 · created_at: 2022-08-25T12:14:04Z · updated_at: 2022-08-29T12:41:47Z · author_association: CONTRIBUTOR

What happened?

Hi, I am trying to apply a custom function to aggregate a resampled object via .apply(). Maybe it is a documentation issue, but I couldn't find it documented.

For example, calculating the median by passing np.median fails with the error shown in the log below.

What did you expect to happen?

I would expect the median or any other custom function to be calculated for the resampled data. It seems to work with pure pandas.

Minimal Complete Verifiable Example

```Python
import numpy as np
import pandas as pd
import xarray as xr

idx = pd.date_range("2000-01-01", "2000-12-31")
data = xr.DataArray(np.random.randn(len(idx)), coords={"index": idx})

data.resample(index="M").apply(np.median)
```
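For comparison, the pure-pandas counterpart mentioned above does accept a plain NumPy callable. A sketch (assuming the "M" month-end frequency alias of the pandas versions from this report):

```python
import numpy as np
import pandas as pd

# Pandas comparison: resample + apply with a bare NumPy function works.
idx = pd.date_range("2000-01-01", "2000-12-31")
s = pd.Series(np.random.randn(len(idx)), index=idx)
monthly = s.resample("M").apply(np.median)

assert len(monthly) == 12  # one median per month of 2000
```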

MVCE confirmation

  • [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [X] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.

Relevant log output

```Python
AttributeError                            Traceback (most recent call last)
Input In [425], in <cell line: 3>()
      1 idx = pd.date_range("2000-01-01", "2000-12-31")
      2 data = pd.Series(np.random.randn(len(idx)), index=idx).to_xarray()
----> 3 data.resample(index="M", label="left", loffset="15d").apply(np.median)

File ~/Work/miniconda3/envs/20211123_py39/lib/python3.9/site-packages/xarray/core/resample.py:279, in DataArrayResample.apply(self, func, args, shortcut, **kwargs)
    267 """
    268 Backward compatible implementation of map
    269 (...)
    272 DataArrayResample.map
    273 """
    274 warnings.warn(
    275     "Resample.apply may be deprecated in the future. Using Resample.map is encouraged",
    276     PendingDeprecationWarning,
    277     stacklevel=2,
    278 )
--> 279 return self.map(func=func, shortcut=shortcut, args=args, **kwargs)

File ~/Work/miniconda3/envs/20211123_py39/lib/python3.9/site-packages/xarray/core/resample.py:253, in DataArrayResample.map(self, func, args, shortcut, **kwargs)
    210 """Apply a function to each array in the group and concatenate them
    211 together into a new array.
    212 (...)
    249 The result of splitting, applying and combining this array.
    250 """
    251 # TODO: the argument order for Resample doesn't match that for its parent,
    252 # GroupBy
--> 253 combined = super().map(func, shortcut=shortcut, args=args, **kwargs)
    255 # If the aggregation function didn't drop the original resampling
    256 # dimension, then we need to do so before we can rename the proxy
    257 # dimension we used.
    258 if self._dim in combined.coords:

File ~/Work/miniconda3/envs/20211123_py39/lib/python3.9/site-packages/xarray/core/groupby.py:1095, in DataArrayGroupByBase.map(self, func, args, shortcut, **kwargs)
   1093 grouped = self._iter_grouped_shortcut() if shortcut else self._iter_grouped()
   1094 applied = (maybe_wrap_array(arr, func(arr, *args, **kwargs)) for arr in grouped)
-> 1095 return self._combine(applied, shortcut=shortcut)

File ~/Work/miniconda3/envs/20211123_py39/lib/python3.9/site-packages/xarray/core/groupby.py:1115, in DataArrayGroupByBase._combine(self, applied, shortcut)
   1113 """Recombine the applied objects like the original."""
   1114 applied_example, applied = peek_at(applied)
-> 1115 coord, dim, positions = self._infer_concat_args(applied_example)
   1116 if shortcut:
   1117     combined = self._concat_shortcut(applied, dim, positions)

File ~/Work/miniconda3/envs/20211123_py39/lib/python3.9/site-packages/xarray/core/groupby.py:559, in GroupBy._infer_concat_args(self, applied_example)
    558 def _infer_concat_args(self, applied_example):
--> 559     if self._group_dim in applied_example.dims:
    560         coord = self._group
    561         positions = self._group_indices

AttributeError: 'numpy.float64' object has no attribute 'dims'
```
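The traceback bottoms out because np.median returns a bare numpy.float64, which has no .dims for _infer_concat_args to inspect. A possible workaround (a sketch, not from the thread) is to use an aggregation that keeps returning xarray objects:

```python
import numpy as np
import pandas as pd
import xarray as xr

idx = pd.date_range("2000-01-01", "2000-12-31")
data = xr.DataArray(np.random.randn(len(idx)), coords={"index": idx})

# The built-in aggregation (or .reduce(np.median)) returns DataArrays with
# dims intact, so the recombination step has something to concatenate.
monthly = data.resample(index="M").median()

assert monthly.sizes["index"] == 12
```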

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 4.4.0-210-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C
LOCALE: ('en_GB', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.8.1

xarray: 2022.6.0
pandas: 1.4.3
numpy: 1.23.2
scipy: 1.9.0
netCDF4: 1.6.0
pydap: installed
h5netcdf: None
h5py: 3.7.0
Nio: None
zarr: 2.12.0
cftime: 1.6.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2022.6.1
distributed: 2022.6.1
matplotlib: 3.5.3
cartopy: 0.20.3
seaborn: None
numbagg: None
fsspec: 2022.7.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.2.0
pip: 22.2.2
conda: 4.14.0
pytest: 7.1.2
IPython: 8.4.0
sphinx: 5.1.1
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6953/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: issue
#5580: dataset `__repr__` updates
id: 937336962 (MDExOlB1bGxSZXF1ZXN0NjgzOTA5OTgw) · user: st-bender (28786187) · state: closed · locked: 0 · comments: 8 · created_at: 2021-07-05T20:03:55Z · updated_at: 2021-08-21T22:51:16Z · closed_at: 2021-08-21T22:51:03Z · author_association: CONTRIBUTOR · draft: 0 · pull_request: pydata/xarray/pulls/5580

Following the discussion of the recent change to the dataset `__repr__`, this implements the compromise discussed in #5545. It keeps the dataset `__repr__` short, tunable via `xr.set_options()`, and gives the full output via `ds.coords`, `ds.data_vars`, and `ds.attrs`.

Passes the (updated) test suite locally.

  • [x] Closes #5545
  • [x] Tests added
  • [x] Passes pre-commit run --all-files
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
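A small sketch of the tunable behavior described above (the `display_max_rows` option is real; the dataset contents and threshold values here are arbitrary illustrations):

```python
import numpy as np
import xarray as xr

# A dataset with many variables: the repr is truncated according to
# display_max_rows, while ds.data_vars can still show everything.
ds = xr.Dataset({f"v{i}": ("x", np.arange(3)) for i in range(50)})

with xr.set_options(display_max_rows=5):
    short = repr(ds)
with xr.set_options(display_max_rows=100):
    full = repr(ds)

assert len(short.splitlines()) < len(full.splitlines())
```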
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5580/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
repo: xarray (13221727) · type: pull
#5545: Increase default `display_max_rows`
id: 931591247 (MDU6SXNzdWU5MzE1OTEyNDc=) · user: st-bender (28786187) · state: closed · locked: 0 · comments: 19 · created_at: 2021-06-28T13:46:14Z · updated_at: 2021-08-21T22:51:03Z · closed_at: 2021-08-21T22:51:03Z · author_association: CONTRIBUTOR

This limit must have been introduced into xr.set_options() somewhere around version 0.17. First of all, it breaks backwards compatibility in the output format of print() and the like on the console. Second, the default of 12 is far too low in my opinion and makes no sense, in particular since terminals usually have a scrollback buffer and notebook cells can be made scrollable.

I use print() frequently to check that all variables made it into the dataset correctly, which is meaningless when lines are skipped because of this default limit. It also broke the doctests I wrote to do exactly that (thanks for that, btw). So if the limit is not removed, could the default at least be increased to a sensible number like 100, 1000, or 10000? (I'd personally prefer much higher, or even no limit, but I guess that is not an option.)

Cheers.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5545/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
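The schema above can be exercised directly with Python's stdlib sqlite3. A sketch (the schema is trimmed to a few of the columns above, and the two inserted rows are abridged from rows on this page):

```python
import sqlite3

# Build a trimmed version of the [issues] schema in memory and run this
# page's query: rows where user = 28786187, newest updated_at first.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        state TEXT,
        updated_at TEXT,
        type TEXT
    );
    CREATE INDEX idx_issues_user ON issues (user);
    """
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (2235746028, 8924, 'Keep attributes for "bounds" variables',
         28786187, "open", "2024-04-11T14:09:48Z", "pull"),
        (931591247, 5545, "Increase default display_max_rows",
         28786187, "closed", "2021-08-21T22:51:03Z", "issue"),
    ],
)
rows = conn.execute(
    "SELECT number, title FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (28786187,),
).fetchall()

assert rows[0][0] == 8924  # the most recently updated row sorts first
assert len(rows) == 2
```

The ISO-8601 timestamps stored as TEXT sort correctly with plain string comparison, which is why `ORDER BY updated_at DESC` works without date parsing.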
Powered by Datasette · About: xarray-datasette