issues: 1899895419


| id | node_id | number | title | user | state | locked | comments | created_at | updated_at | author_association | repo | type |
|----|---------|--------|-------|------|-------|--------|----------|------------|------------|--------------------|------|------|
| 1899895419 | I_kwDOAMm_X85xPhp7 | 8199 | Use Generic Types instead of Hashable or Any | 43316012 | open | 0 | 2 | 2023-09-17T19:41:39Z | 2023-09-18T14:16:02Z | COLLABORATOR | 13221727 | issue |

Is your feature request related to a problem?

Currently, part of the static type of a DataArray or Dataset is a Mapping[Hashable, DataArray]. I'm quite sure that 99% of users will actually use str keys (i.e. variable names), while some exotic people (me included) want to use e.g. Enums for their keys. Currently we allow anything hashable to be used as a key, but once the DataArray/Dataset is created, the type information of the keys is lost.

Consider e.g.

```python
import numpy as np
from xarray import Dataset

for name, da in Dataset({"a": ("t", np.arange(5))}).items():
    reveal_type(name)     # Hashable
    reveal_type(da.dims)  # tuple[Hashable, ...]
```

Wouldn't it be nice if this actually returned `str`, so you don't have to cast or assert it every time?

This could be solved by making these classes generic.
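To make the idea concrete, here is a minimal, hypothetical sketch of how a generic key type would flow through `.items()`. `TypedMapping` is a made-up stand-in, not xarray's real `Dataset`:

```python
from __future__ import annotations

from typing import Generic, Hashable, Iterator, TypeVar

import numpy as np

KeyT = TypeVar("KeyT", bound=Hashable)


class TypedMapping(Generic[KeyT]):
    """Toy stand-in for Dataset that remembers the type of its keys."""

    def __init__(self, variables: dict[KeyT, np.ndarray]) -> None:
        self._variables = variables

    def items(self) -> Iterator[tuple[KeyT, np.ndarray]]:
        # KeyT flows through unchanged, so the checker sees the concrete key type
        return iter(self._variables.items())


ds = TypedMapping({"a": np.arange(5)})  # inferred as TypedMapping[str]
for name, arr in ds.items():
    # a type checker now reveals `name` as str rather than Hashable
    pass
```

If Dataset itself were generic like this, the `reveal_type` calls in the snippet above would report `str` instead of `Hashable`.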

Another related issue is the underlying data. This could be introduced as a Generic type as well. This would probably require some common ground across all the array-wrapping libraries out there: each of them would need a generic Array class that keeps track of the type of the wrapped array, e.g. dask.array.core.Array[np.ndarray]. In return, we could write DataArray[np.ndarray] or even DataArray[dask.array.core.Array[np.ndarray]].
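As a toy illustration of such a wrapper, consider the sketch below. `Wrapped` is a made-up class (to my knowledge dask's `Array` cannot be parameterized like `dask.array.core.Array[np.ndarray]` today), but it shows how a single generic parameter could track the wrapped array type:

```python
from __future__ import annotations

from typing import Generic, TypeVar

import numpy as np

InnerT = TypeVar("InnerT")  # type of the array being wrapped


class Wrapped(Generic[InnerT]):
    """Made-up wrapper that remembers which kind of array it holds."""

    def __init__(self, data: InnerT) -> None:
        self._data = data

    def unwrap(self) -> InnerT:
        # the checker knows the concrete array type that comes back out
        return self._data


w = Wrapped(np.arange(3))  # Wrapped[np.ndarray]
arr = w.unwrap()           # np.ndarray again, not Any
nested = Wrapped(w)        # Wrapped[Wrapped[np.ndarray]], analogous to the
                           # dask.array.core.Array[np.ndarray] idea above
```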

Describe the solution you'd like

The implementation would be something along the lines of:

```python
from typing import Any, Generic, Hashable, Mapping, Sequence, TypeVar

import pandas as pd

KeyT = TypeVar("KeyT", bound=Hashable)
DataT = TypeVar("DataT")  # bound=<some array protocol?>


# Variable, Index and dtypes below are xarray internals
class DataArray(Generic[KeyT, DataT]):

    _coords: dict[KeyT, Variable[DataT]]
    _indexes: dict[KeyT, Index[DataT]]
    _name: KeyT | None
    _variable: Variable[DataT]

    def __init__(
        self,
        data: DataT = dtypes.NA,
        coords: Sequence[Sequence[DataT] | pd.Index | DataArray[KeyT]]
        | Mapping[KeyT, DataT]
        | None = None,
        dims: str | Sequence[KeyT] | None = None,
        name: KeyT | None = None,
        attrs: Mapping[KeyT, Any] | None = None,
        # internal parameters
        indexes: Mapping[KeyT, Index] | None = None,
        fastpath: bool = False,
    ) -> None:
        ...
```

Now you could create a "classical" DataArray:

```python
da = DataArray(np.arange(10), {"t": np.arange(10)}, dims=["t"])
```

which will be of type

```python
DataArray[str, np.ndarray]
```

while you could also create something fancier:

```python
da2 = DataArray(dask.array.array([1, 2, 3]), {}, dims=[("tup1", "tup2")])
```

which will be of type

```python
DataArray[tuple[str, str], dask.array.core.Array]
```

And whenever you access the dimensions / coord names / underlying data you will get the correct type.

For now I only see three major problems:

1. Non-array types (like lists or anything iterable) will get cast to a np.ndarray, and I have no idea how to tell the type checker that `DataArray([1, 2, 3], {}, "a")` should be `DataArray[str, np.ndarray]` and not `DataArray[str, list[int]]`. Depending on the Protocol in the bound TypeVar this might even fail static type analysis or require tons of special casing and overloads (a sketch of that workaround follows after this list).
2. How does the type checker extract the dimension type for Datasets? This is quite convoluted and I am not sure it can be typed correctly...
3. The parallel compute workflows are quite dynamic and I am not sure static type checking can keep track of the underlying datatype... What does `DataArray([1, 2, 3], dims="a").chunk({"a": 2})` return? Is it `DataArray[str, dask.array.core.Array]`? But what about other chunking frameworks?
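For problem 1, the usual escape hatch would be `@overload`s that annotate `self`, telling the checker that list input produces an ndarray-backed object. A hedged sketch follows; `MyDataArray` is a simplified stand-in, not xarray's real class, and whether this scales to the real signatures is exactly the concern above:

```python
from __future__ import annotations

from typing import Any, Generic, Hashable, Sequence, TypeVar, overload

import numpy as np

KeyT = TypeVar("KeyT", bound=Hashable)
DataT = TypeVar("DataT")


class MyDataArray(Generic[KeyT, DataT]):
    """Simplified stand-in, not xarray's real DataArray."""

    # lists (and other to-be-coerced inputs) are declared to yield an
    # ndarray-backed instance ...
    @overload
    def __init__(
        self: MyDataArray[KeyT, np.ndarray],
        data: list[Any],
        coords: Any = ...,
        dims: KeyT | Sequence[KeyT] = ...,
    ) -> None: ...
    # ... while any other input keeps its own type as DataT
    @overload
    def __init__(
        self: MyDataArray[KeyT, DataT],
        data: DataT,
        coords: Any = ...,
        dims: KeyT | Sequence[KeyT] = ...,
    ) -> None: ...
    def __init__(self, data, coords=None, dims=None):
        # runtime behaviour matches what the first overload claims
        self.data = np.asarray(data) if isinstance(data, list) else data


a = MyDataArray([1, 2, 3], {}, "a")     # checker: MyDataArray[str, np.ndarray]
b = MyDataArray(np.arange(3), {}, "a")  # checker: MyDataArray[str, np.ndarray]
```

Whether mypy/pyright pick the intended overload for every array-like input in practice is another question; that is where the special casing mentioned above would pile up.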

Describe alternatives you've considered

One could even extend this and add more Generic types.

Different types for dimensions and variable names would be a first (and probably quite a nice) feature addition.
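As a rough illustration (again a made-up toy class, not a proposal for the actual API), separating dimension-name and variable-name types would just mean two TypeVars:

```python
from __future__ import annotations

import enum
from typing import Generic, Hashable, Mapping, TypeVar

DimT = TypeVar("DimT", bound=Hashable)    # type of dimension names
NameT = TypeVar("NameT", bound=Hashable)  # type of variable/coordinate names


class TypedDataset(Generic[DimT, NameT]):
    """Toy illustration with separately typed dimension and variable names."""

    def __init__(self, dims: tuple[DimT, ...], variables: Mapping[NameT, object]) -> None:
        self._dims = dims
        self._variables = variables

    @property
    def dims(self) -> tuple[DimT, ...]:
        return self._dims

    def names(self) -> list[NameT]:
        return list(self._variables)


class Var(enum.Enum):
    TEMPERATURE = "temperature"


# str dimension names combined with Enum variable names:
ds = TypedDataset(("t",), {Var.TEMPERATURE: object()})  # TypedDataset[str, Var]
```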

One could even go so far as to type the keys and values of variables and coords (for Datasets) differently. This came up e.g. in https://github.com/pydata/xarray/issues/3967. However, this would create a ridiculous amount of Generic types and is probably more confusing than helpful.

Additional context

This feature should probably be implemented in consecutive PRs that each add one Generic, otherwise this will be a giant task!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8199/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
