
issues


4 rows where comments = 4, state_reason = "completed" and user = 35968931 sorted by updated_at descending




Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sort column, descending), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
id: 2224036575 · node_id: I_kwDOAMm_X86EkBrf · number: 8905 · title: Variable doesn't have an .expand_dims method · user: TomNicholas (35968931) · state: closed · locked: 0 · comments: 4 · created_at: 2024-04-03T22:19:10Z · updated_at: 2024-04-28T19:54:08Z · closed_at: 2024-04-28T19:54:08Z · author_association: MEMBER

Is your feature request related to a problem?

DataArray and Dataset have an .expand_dims method, but it looks like Variable doesn't.

Describe the solution you'd like

Variable should also have this method, the only difference being that it wouldn't create any coordinates or indexes.
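As a sketch of the idea (the commented-out Variable method is the requested, hypothetical API; the workaround shown goes through DataArray, whose expand_dims already exists):

```python
import numpy as np
import xarray as xr

var = xr.Variable(dims=("x",), data=np.arange(3))

# Requested (hypothetical) API: expand a Variable without creating coords or indexes
# expanded = var.expand_dims({"y": 2})

# Current workaround: round-trip through a DataArray and take its .variable
expanded = xr.DataArray(var).expand_dims({"y": 2}).variable
print(expanded.dims)  # ('y', 'x')
```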

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8905/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 2117248281 · node_id: I_kwDOAMm_X85-MqUZ · number: 8704 · title: Currently no way to create a Coordinates object without indexes for 1D variables · user: TomNicholas (35968931) · state: closed · locked: 0 · comments: 4 · created_at: 2024-02-04T18:30:18Z · updated_at: 2024-03-26T13:50:16Z · closed_at: 2024-03-26T13:50:15Z · author_association: MEMBER

What happened?

The workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263 does not seem to work on main, meaning that I think there is currently no way to create an xr.Coordinates object without 1D variables being coerced to indexes. This in turn means there is no way to create a Dataset object without its 1D variables being coerced to IndexVariables backed by indexes.

What did you expect to happen?

I expected to at least be able to use the workaround described in https://github.com/pydata/xarray/pull/8107#discussion_r1311214263, i.e.

`xr.Coordinates({'x': ('x', uarr)}, indexes={})`, where `uarr` is an un-indexable array-like.
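For reference, a minimal sketch of that coercion (using a plain NumPy array as a stand-in for the un-indexable array, so nothing raises, but the 1D coordinate variable is still converted on the affected versions):

```python
import numpy as np
import xarray as xr

coords = xr.Coordinates({"x": ("x", np.arange(3))}, indexes={})
# On the affected versions this prints xarray.core.variable.IndexVariable,
# i.e. the data was coerced; an un-indexable array would instead raise in __array__.
print(type(coords.variables["x"]))
```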

Minimal Complete Verifiable Example

```Python
import numpy as np
from typing import Self  # needed for the Self annotations below (Python 3.11+)


class UnindexableArrayAPI: ...


class UnindexableArray:
    """
    Presents like an N-dimensional array but doesn't support changes of any kind,
    nor can it be coerced into a np.ndarray or pd.Index.
    """

    _shape: tuple[int, ...]
    _dtype: np.dtype

    def __init__(self, shape: tuple[int, ...], dtype: np.dtype) -> None:
        self._shape = shape
        self._dtype = dtype
        self.__array_namespace__ = UnindexableArrayAPI

    @property
    def dtype(self) -> np.dtype:
        return self._dtype

    @property
    def shape(self) -> tuple[int, ...]:
        return self._shape

    @property
    def ndim(self) -> int:
        return len(self.shape)

    @property
    def size(self) -> int:
        return np.prod(self.shape)

    @property
    def T(self) -> Self:
        raise NotImplementedError()

    def __repr__(self) -> str:
        return f"UnindexableArray(shape={self.shape}, dtype={self.dtype})"

    def _repr_inline_(self, max_width):
        """
        Format to a single line with at most max_width characters. Used by xarray.
        """
        return self.__repr__()

    def __getitem__(self, key, /) -> Self:
        """
        Only supports extremely limited indexing.

        I only added this method because xarray will apparently attempt to index
        into its lazy indexing classes even if the operation would be a no-op anyway.
        """
        from xarray.core.indexing import BasicIndexer

        if isinstance(key, BasicIndexer) and key.tuple == ((slice(None),) * self.ndim):
            # no-op
            return self
        else:
            raise NotImplementedError()

    def __array__(self) -> np.ndarray:
        raise NotImplementedError(
            "UnindexableArrays can't be converted into numpy arrays or pandas Index objects"
        )
```

```python
uarr = UnindexableArray(shape=(3,), dtype=np.dtype('int32'))

xr.Variable(data=uarr, dims=['x'])  # works fine

xr.Coordinates({'x': ('x', uarr)}, indexes={})  # works in xarray v2023.08.0 but in versions after that it triggers the NotImplementedError in __array__:
```

```python
NotImplementedError                       Traceback (most recent call last)
Cell In[59], line 1
----> 1 xr.Coordinates({'x': ('x', uarr)}, indexes={})

File ~/Documents/Work/Code/xarray/xarray/core/coordinates.py:301, in Coordinates.__init__(self, coords, indexes)
    299 variables = {}
    300 for name, data in coords.items():
--> 301     var = as_variable(data, name=name)
    302     if var.dims == (name,) and indexes is None:
    303         index, index_vars = create_default_index_implicit(var, list(coords))

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:159, in as_variable(obj, name)
    152     raise TypeError(
    153         f"Variable {name!r}: unable to convert object into a variable without an "
    154         f"explicit list of dimensions: {obj!r}"
    155     )
    157 if name is not None and name in obj.dims and obj.ndim == 1:
    158     # automatically convert the Variable into an Index
--> 159     obj = obj.to_index_variable()
    161 return obj

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:572, in Variable.to_index_variable(self)
    570 def to_index_variable(self) -> IndexVariable:
    571     """Return this variable as an xarray.IndexVariable"""
--> 572     return IndexVariable(
    573         self._dims, self._data, self._attrs, encoding=self._encoding, fastpath=True
    574     )

File ~/Documents/Work/Code/xarray/xarray/core/variable.py:2642, in IndexVariable.__init__(self, dims, data, attrs, encoding, fastpath)
   2640 # Unlike in Variable, always eagerly load values into memory
   2641 if not isinstance(self._data, PandasIndexingAdapter):
-> 2642     self._data = PandasIndexingAdapter(self._data)

File ~/Documents/Work/Code/xarray/xarray/core/indexing.py:1481, in PandasIndexingAdapter.__init__(self, array, dtype)
   1478 def __init__(self, array: pd.Index, dtype: DTypeLike = None):
   1479     from xarray.core.indexes import safe_cast_to_index
-> 1481     self.array = safe_cast_to_index(array)
   1483     if dtype is None:
   1484         self._dtype = get_valid_numpy_dtype(array)

File ~/Documents/Work/Code/xarray/xarray/core/indexes.py:469, in safe_cast_to_index(array)
    459     emit_user_level_warning(
    460         (
    461             "pandas.Index does not support the float16 dtype."
    (...)
    465         category=DeprecationWarning,
    466     )
    467     kwargs["dtype"] = "float64"
--> 469 index = pd.Index(np.asarray(array), **kwargs)
    471 return _maybe_cast_to_cftimeindex(index)

Cell In[55], line 63, in UnindexableArray.__array__(self)
     62 def __array__(self) -> np.ndarray:
---> 63     raise NotImplementedError("UnindexableArrays can't be converted into numpy arrays or pandas Index objects")

NotImplementedError: UnindexableArrays can't be converted into numpy arrays or pandas Index objects
```

MVCE confirmation

  • [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
  • [x] Complete example — the example is self-contained, including all data and the text of any traceback.
  • [x] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
  • [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
  • [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.

Relevant log output

No response

Anything else we need to know?

Context is #8699

Environment

Versions described above

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8704/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 935062144 · node_id: MDU6SXNzdWU5MzUwNjIxNDQ= · number: 5559 · title: UserWarning when wrapping pint & dask arrays together · user: TomNicholas (35968931) · state: closed · locked: 0 · comments: 4 · created_at: 2021-07-01T17:25:03Z · updated_at: 2021-09-29T17:48:39Z · closed_at: 2021-09-29T17:48:39Z · author_association: MEMBER

With pint-xarray you can create a chunked, unit-aware xarray object, but calling a calculation method and then computing doesn't appear to behave as hoped.

```python
import pint_xarray  # noqa: F401  (registers the .pint accessor)
import xarray as xr

da = xr.DataArray([1, 2, 3], attrs={'units': 'metres'})

chunked = da.chunk(1).pint.quantify()
```

```python
print(chunked.compute())

<xarray.DataArray (dim_0: 3)>
<Quantity([1 2 3], 'meter')>
Dimensions without coordinates: dim_0
```

So far this is fine, but if we try to take a mean before computing we get:

```python
print(chunked.mean().compute())

<xarray.DataArray ()>
<Quantity(dask.array<true_divide, shape=(), dtype=float64, chunksize=(), chunktype=numpy.ndarray>, 'meter')>
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:3139: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
```

This is not correct: as well as the UserWarning, the return value of compute still wraps a dask array, meaning we need to compute a second time to actually get the answer:

```python
print(chunked.mean().compute().compute())

<xarray.DataArray ()>
<Quantity(2.0, 'meter')>
/home/tegn500/miniconda3/envs/py38-mamba/lib/python3.8/site-packages/dask/array/core.py:3139: UserWarning: Passing an object to dask.array.from_array which is already a Dask collection. This can lead to unexpected behavior.
  warnings.warn(
```

If we try chunking the other way around (`chunked = da.pint.quantify().pint.chunk(1)`) we get exactly the same results.
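One way to see the wrapper ordering involved (a sketch; assumes pint-xarray is installed and reuses the example data from above):

```python
import pint_xarray  # noqa: F401  (registers the .pint accessor)
import xarray as xr

da = xr.DataArray([1, 2, 3], attrs={"units": "metres"})
chunked = da.chunk(1).pint.quantify()

# Expected nesting: an xarray object wrapping a pint.Quantity that wraps a dask array
print(type(chunked.data))            # pint.Quantity
print(type(chunked.data.magnitude))  # dask.array.core.Array
```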

xref https://github.com/xarray-contrib/pint-xarray/issues/116 and https://github.com/pydata/xarray/pull/4972 @keewis

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5559/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue
id: 594688816 · node_id: MDU6SXNzdWU1OTQ2ODg4MTY= · number: 3939 · title: Why don't we allow indexing with keyword args via __call__? · user: TomNicholas (35968931) · state: closed · locked: 0 · comments: 4 · created_at: 2020-04-05T22:44:18Z · updated_at: 2020-04-09T05:14:46Z · closed_at: 2020-04-09T05:14:46Z · author_association: MEMBER

Reading about PEP 472, which would have allowed indexing with keyword arguments like `da[x=10]`, made me wonder: why don't we use `__call__` to get the same effect, just with curved brackets instead of square ones, i.e. `da(x=10)`? We don't currently use `__call__` on DataArray or Dataset for anything else.
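For comparison, the keyword-based indexing we already support, next to the hypothetical `__call__` spelling (the latter is not implemented; DataArray defines no `__call__`):

```python
import xarray as xr

da = xr.DataArray([10, 20, 30], dims="x", coords={"x": [1, 2, 3]})

print(da.isel(x=0).item())  # positional indexing -> 10
print(da.sel(x=2).item())   # label-based indexing -> 20

# Hypothetical spelling discussed above (would require DataArray.__call__):
# da(x=2)
```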

I presume there is some good reason why this design decision was taken, but I'm just wondering what it is.

(Also, has the ship permanently sailed on PEP 472 now?)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3939/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: completed · repo: xarray (13221727) · type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
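
A minimal sketch of reproducing this page's query against a local copy of the database (the file name github.db is an assumption):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this Datasette database
rows = conn.execute(
    """
    SELECT number, title, updated_at
    FROM issues
    WHERE comments = 4 AND state_reason = 'completed' AND [user] = 35968931
    ORDER BY updated_at DESC
    """
).fetchall()
for number, title, updated_at in rows:
    print(number, title, updated_at)
```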