
issues


15 rows where comments = 2 and user = 43316012 sorted by updated_at descending




Facets:
  • type: pull 11, issue 4
  • state: closed 13, open 2
  • repo: xarray 15
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
1462173557 I_kwDOAMm_X85XJv91 7316 Support for python 3.11 headtr1ck 43316012 closed 0     2 2022-11-23T17:52:18Z 2024-03-15T06:07:26Z 2024-03-15T06:07:26Z COLLABORATOR      

Is your feature request related to a problem?

Now that Python 3.11 has been released, we should start supporting it officially.

Describe the solution you'd like

I guess the first step would be to bump the maximum Python version in the tests from 3.10 to 3.11 and see what breaks (and hope we get lucky).

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7316/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1899895419 I_kwDOAMm_X85xPhp7 8199 Use Generic Types instead of Hashable or Any headtr1ck 43316012 open 0     2 2023-09-17T19:41:39Z 2023-09-18T14:16:02Z   COLLABORATOR      

Is your feature request related to a problem?

Currently, part of the static type of a DataArray or Dataset is a Mapping[Hashable, DataArray]. I'm quite sure that 99% of users actually use str keys (i.e. variable names), while some exotic people (me included) want to use e.g. Enums for their keys. Currently we allow anything hashable as a key, but once the DataArray/Dataset is created, the type information of the keys is lost.

Consider e.g.:

```python
for name, da in Dataset({"a": ("t", np.arange(5))}).items():
    reveal_type(name)     # Hashable
    reveal_type(da.dims)  # tuple[Hashable, ...]
```

Wouldn't it be nice if this actually returned `str`, so you don't have to cast or assert it every time?

This could be solved by making these classes generic.

Another related issue is the underlying data. This could be introduced as a Generic type as well. This should probably reach some common ground with all the wrapping array libs out there: each one would use a Generic array class that keeps track of the type of the wrapped array, e.g. dask.array.core.Array[np.ndarray]. In return, we could do DataArray[np.ndarray] or even DataArray[dask.array.core.Array[np.ndarray]].
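The "wrapper that remembers its inner array type" idea can be sketched with plain typing tools. This is a minimal stdlib-only illustration; the names `WrappedArray` and `unwrap` are hypothetical, not xarray or dask API:

```python
from __future__ import annotations

from typing import Generic, TypeVar

InnerT = TypeVar("InnerT")

class WrappedArray(Generic[InnerT]):
    """Hypothetical wrapper that remembers the type of the array it wraps."""

    def __init__(self, data: InnerT) -> None:
        self.data = data

def unwrap(arr: WrappedArray[InnerT]) -> InnerT:
    # A type checker infers unwrap(WrappedArray([1.0])) as list[float],
    # so no cast is needed at the call site.
    return arr.data

w = WrappedArray([1.0, 2.0, 3.0])
print(unwrap(w))  # [1.0, 2.0, 3.0]
```

The same mechanism, applied through the whole wrapping stack, is what would make `DataArray[dask.array.core.Array[np.ndarray]]` expressible.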

Describe the solution you'd like

The implementation would be something along the lines of:

```python
KeyT = TypeVar("KeyT", bound=Hashable)
DataT = TypeVar("DataT", bound=<some protocol?>)

class DataArray(Generic[KeyT, DataT]):
    _coords: dict[KeyT, Variable[DataT]]
    _indexes: dict[KeyT, Index[DataT]]
    _name: KeyT | None
    _variable: Variable[DataT]

    def __init__(
        self,
        data: DataT = dtypes.NA,
        coords: Sequence[Sequence[DataT] | pd.Index | DataArray[KeyT]]
        | Mapping[KeyT, DataT]
        | None = None,
        dims: str | Sequence[KeyT] | None = None,
        name: KeyT | None = None,
        attrs: Mapping[KeyT, Any] | None = None,
        # internal parameters
        indexes: Mapping[KeyT, Index] | None = None,
        fastpath: bool = False,
    ) -> None:
        ...
```

Now you could create a "classical" DataArray:

```python
da = DataArray(np.arange(10), {"t": np.arange(10)}, dims=["t"])
```

which will be of type `DataArray[str, np.ndarray]`, while you could also create something more fancy:

```python
da2 = DataArray(dask.array.array([1, 2, 3]), {}, dims=[("tup1", "tup2")])
```

which will be of type `DataArray[tuple[str, str], dask.array.core.Array]`.

And whenever you access the dimensions / coord names / underlying data you will get the correct type.

For now I only see three major problems:

  1. Non-array types (like lists or anything iterable) will get cast to a np.ndarray, and I have no idea how to tell the type checker that DataArray([1, 2, 3], {}, "a") should be DataArray[str, np.ndarray] and not DataArray[str, list[int]]. Depending on the Protocol in the bound TypeVar this might even fail static type analysis or require tons of special-casing and overloads.
  2. How does the type checker extract the dimension type for Datasets? This is quite convoluted, and I am not sure it can be typed correctly...
  3. The parallel compute workflows are quite dynamic, and I am not sure static type checking can keep track of the underlying datatype... What does DataArray([1, 2, 3], dims="a").chunk({"a": 2}) return? Is it DataArray[str, dask.array.core.Array]? But what about other chunking frameworks?
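The overload-based special-casing mentioned in problem 1 can be sketched in miniature. This is a hedged, stdlib-only illustration: `Box`/`box` are hypothetical stand-ins for DataArray and its constructor, and `tuple` stands in for the np.ndarray that lists get converted to:

```python
from __future__ import annotations

from typing import Any, Generic, TypeVar, overload

DataT = TypeVar("DataT")

class Box(Generic[DataT]):
    """Hypothetical stand-in for DataArray, generic over its data type."""

    def __init__(self, data: DataT) -> None:
        self.data = data

# Overloads special-case list input: at type-check time a list is
# declared to come out as the converted type (tuple here stands in
# for np.ndarray), while any other input keeps its own type.
@overload
def box(data: list[Any]) -> Box[tuple[Any, ...]]: ...
@overload
def box(data: DataT) -> Box[DataT]: ...
def box(data: Any) -> Box[Any]:
    if isinstance(data, list):
        data = tuple(data)  # mimics the np.asarray conversion
    return Box(data)

print(box([1, 2, 3]).data)  # (1, 2, 3)
print(box((4, 5)).data)     # (4, 5)
```

Every convertible input type (iterables, scalars, nested lists, ...) would need its own overload like this, which is exactly the "tons of special casing" the issue warns about.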

Describe alternatives you've considered

One could even extend this and add more Generic types.

Different types for dimensions and variable names would be a first (and probably quite a nice) feature addition.

One could even go so far as to type the keys and values of variables and coords (for Datasets) differently; this came up e.g. in https://github.com/pydata/xarray/issues/3967. However, this would create a ridiculous amount of Generic types and is probably more confusing than helpful.

Additional context

This feature should probably be implemented in consecutive PRs that each add one Generic; otherwise this will be a giant task!

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8199/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
1503621596 PR_kwDOAMm_X85F0ZZm 7392 Support complex arrays in xr.corr headtr1ck 43316012 closed 0     2 2022-12-19T21:22:25Z 2023-03-02T20:22:54Z 2023-02-14T16:38:27Z COLLABORATOR   0 pydata/xarray/pulls/7392
  • [x] Closes #7340
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] ~New functions/methods are listed in api.rst~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7392/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1446613571 PR_kwDOAMm_X85Cw17l 7283 Fix mypy 0.990 types headtr1ck 43316012 closed 0     2 2022-11-12T21:34:14Z 2022-11-18T15:42:37Z 2022-11-16T18:41:58Z COLLABORATOR   0 pydata/xarray/pulls/7283
  • [x] Related to #7270
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7283/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1442702272 PR_kwDOAMm_X85Cjnvl 7276 Import nc_time_axis when needed headtr1ck 43316012 closed 0     2 2022-11-09T20:24:45Z 2022-11-10T23:00:15Z 2022-11-10T21:45:27Z COLLABORATOR   0 pydata/xarray/pulls/7276
  • [x] Closes #7275
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [x] ~New functions/methods are listed in api.rst~
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7276/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1393443839 PR_kwDOAMm_X84__aKC 7112 Support of repr and deepcopy of recursive arrays headtr1ck 43316012 closed 0     2 2022-10-01T15:24:40Z 2022-10-07T11:10:32Z 2022-10-06T22:04:01Z COLLABORATOR   0 pydata/xarray/pulls/7112
  • [x] Closes #7111
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

xarray.testing.assert_identical and probably more do not work yet.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7112/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1393837094 PR_kwDOAMm_X85AAmD9 7114 Fix typing of backends headtr1ck 43316012 closed 0     2 2022-10-02T17:20:56Z 2022-10-06T21:33:39Z 2022-10-06T21:30:01Z COLLABORATOR   0 pydata/xarray/pulls/7114

While adding type hints to test_backends I noticed that the open_dataset method and the abstract BackendEntrypoint were missing stream types as inputs.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7114/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1395053809 PR_kwDOAMm_X85AEpA1 7117 Expermimental mypy plugin headtr1ck 43316012 open 0     2 2022-10-03T17:07:59Z 2022-10-03T18:53:10Z   COLLABORATOR   1 pydata/xarray/pulls/7117

I was playing around a bit with a mypy plugin and this was the best I could come up with. Unfortunately, the mypy documentation about plugins is not very detailed...

This plugin makes mypy recognize the user defined accessors.

There is a quite severe bug in there (probably due to my lack of understanding of mypy internals) which makes it work only on the first run: when you change a line in your code and run mypy again, it will crash... (You can delete the cache to make it work one more time. :)

Any chance that a mypy expert can figure this out? haha

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7117/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1362485455 PR_kwDOAMm_X84-Zfn0 6994 Even less warnings in tests headtr1ck 43316012 closed 0     2 2022-09-05T21:35:50Z 2022-09-10T09:02:46Z 2022-09-09T05:48:19Z COLLABORATOR   0 pydata/xarray/pulls/6994

This PR removes several warnings from the tests and improves their typing along the way.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6994/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1275262097 PR_kwDOAMm_X8453Zo1 6702 Typing of GroupBy & Co. headtr1ck 43316012 closed 0     2 2022-06-17T16:50:43Z 2022-07-03T13:32:30Z 2022-06-29T20:06:04Z COLLABORATOR   0 pydata/xarray/pulls/6702

This PR adds typing support for groupby, coarsen, rolling, weighted and resample.

There are several open points:

  1. Coarsen is missing type annotations for reductions like max, they get added dynamically.
  2. The Groupby group-key type is quite wide. Does anyone have any idea on how to type it correctly? For now it is still Any.
  3. Several function signatures were inconsistent between the DataArray and Dataset versions (looking at you: map). I took the liberty of aligning them (required for mypy); hopefully this does not break too much.
  4. I moved the generation functions from DataWithCoords to DataArray and Dataset, which adds some copy-pasted code (I tried to keep it minimal) but was unavoidable for typing support. (Bonus: the corresponding modules are now only imported when required.)
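The "modules only imported when required" bonus is just the deferred-import pattern: the import statement moves from module level into the method body, so the submodule is loaded on first use. A toy sketch under stated assumptions: `Dataset` here is a stand-in class, and the stdlib module `wave` stands in for a heavy submodule like a rolling/groupby implementation:

```python
import sys

class Dataset:
    """Toy stand-in for xarray's Dataset (hypothetical, for illustration)."""

    def rolling(self, window: int) -> str:
        # Deferred import: the module backing .rolling() is loaded only
        # the first time the method is called, not when Dataset's own
        # module is imported ('wave' stands in for the heavy submodule).
        import wave  # noqa: F401
        return f"Rolling(window={window})"

d = Dataset()
print(d.rolling(2))           # Rolling(window=2)
print("wave" in sys.modules)  # True: loaded on first call
```

The trade-off is the one noted above: each of DataArray and Dataset needs its own copy of the method body, since a shared base-class implementation cannot return the precisely typed self.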
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6702/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1278661854 PR_kwDOAMm_X846CnXr 6710 Expanduser (~) for open_dataset with dask headtr1ck 43316012 closed 0     2 2022-06-21T15:58:34Z 2022-06-26T08:08:04Z 2022-06-25T23:44:56Z COLLABORATOR   0 pydata/xarray/pulls/6710
  • [x] Closes #6707
  • [x] ~~Tests added~~
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

I don't really know how to test this... Is it ok to leave it untested?
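The fix described by this PR's title amounts to running the user-supplied path through `os.path.expanduser` before it reaches the storage backend. A minimal stdlib sketch of that behavior; the helper name `normalize_path` is illustrative, not necessarily xarray's internal name:

```python
import os.path

def normalize_path(path: str) -> str:
    """Expand a leading '~' so the backend receives an absolute path."""
    return os.path.abspath(os.path.expanduser(path))

p = normalize_path("~/data/file.nc")
print(p.endswith("file.nc"))  # True
```

Without this expansion, a literal `~` directory ends up in the path handed to dask, which is why `open_dataset("~/file.nc", chunks={})` failed while the non-dask path worked.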

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6710/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1236316818 PR_kwDOAMm_X8431gAK 6611 {full,zeros,ones}_like typing headtr1ck 43316012 closed 0     2 2022-05-15T15:18:55Z 2022-05-16T18:10:05Z 2022-05-16T17:42:25Z COLLABORATOR   0 pydata/xarray/pulls/6611

(partial) typing for functions full_like, zeros_like, ones_like.

I could not figure out how to properly use TypeVars, so many things are "hardcoded" with overloads.

I have added a DTypeLikeSave to npcompat; I'm not sure that file is supposed to be edited.

Problem 1: TypeVar("T", Dataset, DataArray, Variable) can only be one of these three, but never Union[Dataset, DataArray], which is used in several other places in xarray.
Problem 2: The official mypy guidance says to use TypeVar("T", bound=Union[Dataset, DataArray, Variable]), but then the isinstance(obj, Dataset) check could not be correctly resolved (is that a mypy issue?).

So if anyone can get it to work with TypeVars, feel free to change it. :)
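The two TypeVar flavors described above can be shown side by side. This is a toy sketch: the three classes are empty stand-ins for the real xarray types, defined here only so the snippet is self-contained:

```python
from __future__ import annotations

from typing import TypeVar, Union

# Empty stand-ins for the real xarray classes.
class Dataset: ...
class DataArray: ...
class Variable: ...

# Constrained TypeVar: T is exactly one of the listed classes, so an
# argument typed Union[Dataset, DataArray] matches none of the
# constraints (Problem 1 above).
T_con = TypeVar("T_con", Dataset, DataArray, Variable)

# Bound TypeVar: T may be any subtype of the union, but mypy may fail
# to narrow it through an isinstance() check (Problem 2 above).
T_bnd = TypeVar("T_bnd", bound=Union[Dataset, DataArray, Variable])

def roundtrip(obj: T_bnd) -> T_bnd:
    if isinstance(obj, Dataset):
        pass  # mypy still sees T_bnd here, not Dataset
    return obj

print(type(roundtrip(Dataset())).__name__)  # Dataset
```

At runtime both flavors behave identically; the difference only shows up during static analysis, which is why the PR fell back to hardcoded overloads.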

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6611/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1233058314 PR_kwDOAMm_X843rEkU 6593 Fix polyval overloads headtr1ck 43316012 closed 0     2 2022-05-11T18:54:54Z 2022-05-12T14:50:14Z 2022-05-11T19:42:41Z COLLABORATOR   0 pydata/xarray/pulls/6593

Attempt to fix the typing issues in xr.polyval.

Some problems still occur and require a type: ignore. They seem more like mypy issues to me.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6593/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1222103599 I_kwDOAMm_X85I19Iv 6554 isel with drop=True does not drop coordinates if using scalar DataArray as indexer headtr1ck 43316012 closed 0     2 2022-05-01T10:14:37Z 2022-05-10T06:18:19Z 2022-05-10T06:18:19Z COLLABORATOR      

What happened?

When using DataArray/Dataset.isel with drop=True and a scalar DataArray as indexer (see example), the resulting scalar coordinates do not get dropped. When using an integer, the behavior is as expected.

What did you expect to happen?

I expect that using a scalar DataArray behaves the same as an integer.

Minimal Complete Verifiable Example

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"k": ("x", [0, 1, 2])})
# <xarray.DataArray (x: 3)>
# array([1, 2, 3])
# Coordinates:
#     k        (x) int32 0 1 2

da.isel({"x": 1}, drop=True)
# works:
# <xarray.DataArray ()>
# array(2)

da.isel({"x": xr.DataArray(1)}, drop=True)
# does not drop the "k" coordinate:
# <xarray.DataArray ()>
# array(2)
# Coordinates:
#     k        int32 1
```

Relevant log output

No response

Anything else we need to know?

No response

Environment

INSTALLED VERSIONS
------------------
commit: 4fbca23a9fd8458ec8f917dd0e54656925503e90
python: 3.9.6 | packaged by conda-forge | (default, Jul 6 2021, 08:46:02) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('de_DE', 'cp1252')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.18.2.dev76+g3a7e7ca2.d20210706
pandas: 1.3.0
numpy: 1.21.0
scipy: 1.7.0
netCDF4: 1.5.6
pydap: installed
h5netcdf: 0.11.0
h5py: 3.3.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: 1.3.1
PseudoNetCDF: installed
cfgrib: None
iris: 2.4.0
bottleneck: 1.3.2
dask: 2021.06.2
distributed: 2021.06.2
matplotlib: 3.4.2
cartopy: 0.19.0.post1
seaborn: 0.11.1
numbagg: 0.2.1
fsspec: 2021.06.1
cupy: None
pint: 0.17
sparse: 0.12.0
setuptools: 49.6.0.post20210108
pip: 21.3.1
conda: None
pytest: 6.2.4
IPython: None
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6554/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1217543476 I_kwDOAMm_X85Ikj00 6526 xr.polyval first arg requires name attribute headtr1ck 43316012 closed 0     2 2022-04-27T15:47:02Z 2022-05-05T19:15:58Z 2022-05-05T19:15:58Z COLLABORATOR      

What happened?

I have some polynomial coefficients and want to evaluate them at some values using xr.polyval.

As described in the docstring/documentation, I created a 1D coordinate DataArray and passed it to xr.polyval, but it raises a KeyError (see example).

What did you expect to happen?

I expected that the polynomial would be evaluated at the given points.

Minimal Complete Verifiable Example

```python
import xarray as xr

coeffs = xr.DataArray([1, 2, 3], dims="degree")

# With a "handmade" coordinate it fails:
coord = xr.DataArray([0, 1, 2], dims="x")
xr.polyval(coord, coeffs)
# raises:
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
#   File "xarray/core/computation.py", line 1847, in polyval
#     x = get_clean_interp_index(coord, coord.name, strict=False)
#   File "xarray/core/missing.py", line 252, in get_clean_interp_index
#     index = arr.get_index(dim)
#   File "xarray/core/common.py", line 404, in get_index
#     raise KeyError(key)
# KeyError: None

# If one adds a name to the coord that matches the dimension, it works:
coord2 = xr.DataArray([0, 1, 2], dims="x", name="x")
xr.polyval(coord2, coeffs)
```

Relevant log output

No response

Anything else we need to know?

I assume that the "standard" workflow is to obtain the coord argument from an existing DataArray's coordinates, where the name is already set correctly. However, that is not clear from the description, and it also prevents my "manual" workflow.

It could be that the problem will be solved by replacing the coord DataArray argument with an explicit Index in the future.

Environment

INSTALLED VERSIONS
------------------
commit: None
python: 3.9.10 (main, Mar 15 2022, 15:56:56) [GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.49.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 2022.3.0
pandas: 1.4.2
numpy: 1.22.3
scipy: None
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.6.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: 3.5.1
cartopy: 0.20.2
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
setuptools: 58.1.0
pip: 22.0.4
conda: None
pytest: None
IPython: 8.2.0
sphinx: None
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6526/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · About: xarray-datasette