issues
23 rows where comments = 1 and user = 43316012, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1914212923 | PR_kwDOAMm_X85bRN9f | 8234 | Improved typing of align & broadcast | headtr1ck 43316012 | closed | 0 | 1 | 2023-09-26T20:02:22Z | 2023-12-18T20:28:03Z | 2023-10-09T10:21:40Z | COLLABORATOR | 0 | pydata/xarray/pulls/8234 |
This PR improves the typing of align.
Before: the type of the inputs was reduced to the common superclass, and the return type was the same. This often required casts or ignores when mixing classes (e.g. …). Only downside: it requires some ugly overloads with type ignores on align. Maybe someone knows how to type this better? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
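For illustration, a minimal sketch of the overload idea discussed in #8234. The helper name `align_two` and the two-argument restriction are assumptions made here; xarray's real `align` takes `*objects` and needs many more overloads.

```python
# Sketch only: not xarray's actual signatures, just the idea of preserving the
# concrete input types through overloads instead of widening to a common superclass.
from typing import TypeVar, overload

import xarray as xr

T_DataArray = TypeVar("T_DataArray", bound=xr.DataArray)
T_Dataset = TypeVar("T_Dataset", bound=xr.Dataset)


@overload
def align_two(a: T_DataArray, b: T_Dataset) -> tuple[T_DataArray, T_Dataset]: ...
@overload
def align_two(a: T_Dataset, b: T_DataArray) -> tuple[T_Dataset, T_DataArray]: ...


def align_two(a, b):
    # delegate to xr.align; the overloads let callers get back the types they passed in
    return xr.align(a, b)


da = xr.DataArray([1, 2], dims="x", coords={"x": [0, 1]})
ds = xr.Dataset({"v": ("x", [3, 4, 5])}, coords={"x": [0, 1, 2]})
da2, ds2 = align_two(da, ds)  # inferred as (DataArray, Dataset), not a common base class
```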
2034528244 | I_kwDOAMm_X855RG_0 | 8537 | Doctests failing | headtr1ck 43316012 | closed | 0 | 1 | 2023-12-10T20:49:43Z | 2023-12-11T21:00:03Z | 2023-12-11T21:00:03Z | COLLABORATOR | What is your issue? The doctest is currently failing with …
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
2021517557 | PR_kwDOAMm_X85g7s9a | 8501 | Update to mypy 1.7 | headtr1ck 43316012 | closed | 0 | 1 | 2023-12-01T20:08:46Z | 2023-12-02T13:08:45Z | 2023-12-01T22:02:21Z | COLLABORATOR | 0 | pydata/xarray/pulls/8501 |
I guess we update manually for now? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1943539215 | PR_kwDOAMm_X85c0AkW | 8309 | Move variable typed ops to NamedArray | headtr1ck 43316012 | open | 0 | 1 | 2023-10-14T20:22:07Z | 2023-10-26T21:55:01Z | COLLABORATOR | 1 | pydata/xarray/pulls/8309 | This is highly WIP and probably everything is broken right now... Just creating this now so other people don't work on the same thing :) Feel free to continue here with me. @pydata/xarray 1. What do we do with commonly used functions; is it OK to copy them? 2. Moving the typed ops requires a lot of functions to be added to NamedArray; is there a consensus on what we want to move? Is it basically everything? 3. Slowly the utils module is becoming a graveyard of stuff we don't want to put elsewhere; maybe we should at least move the typing stuff over to a types module. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1936080078 | I_kwDOAMm_X85zZjzO | 8291 | `NamedArray.shape` does not support unknown dimensions | headtr1ck 43316012 | closed | 0 | 1 | 2023-10-10T19:36:42Z | 2023-10-18T06:22:54Z | 2023-10-18T06:22:54Z | COLLABORATOR | What is your issue? According to the array API standard, the `shape` property may contain unknown (None) entries. This will actually raise some errors if a duck array actually returns some None,
e.g. … (On a side note: dask arrays actually use NaN instead of None for some reason… The only advantage of this is that …) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
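A small demonstration of the side note in #8291, assuming dask is installed: boolean indexing makes chunk sizes unknown, and dask then reports NaN rather than the None the array API standard allows.

```python
import dask.array as da
import numpy as np

x = da.from_array(np.arange(10), chunks=5)
y = x[x > 3]                  # chunk sizes become unknown after boolean indexing
print(y.shape)                # (nan,) -- NaN, not None
print(np.isnan(y.shape[0]))   # True
```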
1897167470 | PR_kwDOAMm_X85aX_Ms | 8184 | Fix several warnings in the tests | headtr1ck 43316012 | closed | 0 | 1 | 2023-09-14T19:21:37Z | 2023-09-26T19:01:13Z | 2023-09-15T20:41:03Z | COLLABORATOR | 0 | pydata/xarray/pulls/8184 | Mainly the deprecated "closed" argument in … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1361246796 | I_kwDOAMm_X85RIvpM | 6985 | FutureWarning for pandas date_range | headtr1ck 43316012 | closed | 0 | 1 | 2022-09-04T20:35:17Z | 2023-02-06T17:51:48Z | 2023-02-06T17:51:48Z | COLLABORATOR | What is your issue? Xarray raises a FutureWarning in its date_range, also observable in your tests. The precise warning is: …
You should discuss whether you will adapt the new … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
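For context on #6985, a sketch of the upstream pandas change (to the best of my knowledge, pandas 1.4 deprecated the `closed` keyword of `date_range` in favour of `inclusive`):

```python
import pandas as pd

# old spelling, emits a FutureWarning on pandas >= 1.4:
# pd.date_range("2000-01-01", "2000-01-05", freq="D", closed="left")

# new spelling with the replacement keyword:
idx = pd.date_range("2000-01-01", "2000-01-05", freq="D", inclusive="left")
print(idx)  # includes the start, drops the right endpoint 2000-01-05
```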
1462057503 | PR_kwDOAMm_X85DlALl | 7315 | Fix polyval overloads | headtr1ck 43316012 | closed | 0 | 1 | 2022-11-23T16:27:21Z | 2022-12-08T20:10:16Z | 2022-11-26T15:42:51Z | COLLABORATOR | 0 | pydata/xarray/pulls/7315 |
Turns out the default value of arguments is important for overloads, haha. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7315/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
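A generic illustration of the point in #7315 (toy function, not xarray's real `polyval` signature): in an `@overload` stub, a parameter that has a default must be marked with `= ...`, otherwise calls that omit the argument will not match that overload.

```python
from typing import overload


@overload
def scale(value: int, factor: int = ...) -> int: ...
@overload
def scale(value: float, factor: int = ...) -> float: ...


def scale(value, factor=2):
    return value * factor


a = scale(3)       # mypy infers int; without "= ..." in the stub this call would not match
b = scale(1.5, 3)  # mypy infers float
```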
1468671915 | PR_kwDOAMm_X85D65Bg | 7335 | Enable mypy warn unused ignores | headtr1ck 43316012 | closed | 0 | 1 | 2022-11-29T20:42:08Z | 2022-12-08T20:09:06Z | 2022-12-01T16:14:07Z | COLLABORATOR | 0 | pydata/xarray/pulls/7335 | This PR adds the mypy option "warn_unused_ignores", which will raise an error if a `# type: ignore` comment is no longer needed. This should enable us to keep our types updated. I am not sure if this will lead to many issues whenever e.g. numpy changes/improves its typing, so we might get errors whenever there is a new version. Maybe it is not that bad, or maybe we can also remove the option again and only do it manually from time to time? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7335/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
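A minimal example of what #7335 enables: with `warn_unused_ignores` set in the mypy configuration, an ignore comment that no longer suppresses anything is itself reported.

```python
# With warn_unused_ignores enabled, mypy flags stale ignores like the one below:
def add(a: int, b: int) -> int:
    return a + b


total = add(1, 2)  # type: ignore[arg-type]  # mypy: error: Unused "type: ignore" comment
```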
1410498749 | PR_kwDOAMm_X85A38a6 | 7168 | Fix broken test that fails CI upstream | headtr1ck 43316012 | closed | 0 | 1 | 2022-10-16T14:02:42Z | 2022-10-17T17:48:07Z | 2022-10-16T16:16:51Z | COLLABORATOR | 0 | pydata/xarray/pulls/7168 |
Technically this does not fix all the failures, but if we close the issue the CI will open a new one anyway, and the discussion is not relevant anymore :) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1396832809 | PR_kwDOAMm_X85AKhqW | 7126 | Upload mypy coverage report to codecov | headtr1ck 43316012 | closed | 0 | 1 | 2022-10-04T20:55:02Z | 2022-10-06T21:33:51Z | 2022-10-06T20:38:14Z | COLLABORATOR | 0 | pydata/xarray/pulls/7126 | Not sure if that is the correct approach (to simply use a mypy flag), but let's see what people think about it. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1389764085 | PR_kwDOAMm_X84_zMRw | 7102 | Exclude typechecking stuff from coverage | headtr1ck 43316012 | closed | 0 | 1 | 2022-09-28T18:12:39Z | 2022-09-28T20:15:52Z | 2022-09-28T19:18:54Z | COLLABORATOR | 0 | pydata/xarray/pulls/7102 | { "url": "https://api.github.com/repos/pydata/xarray/issues/7102/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
1345227910 | PR_kwDOAMm_X849gI4A | 6939 | Improve quantile method docstring + error | headtr1ck 43316012 | closed | 0 | 1 | 2022-08-20T17:17:32Z | 2022-09-10T09:03:05Z | 2022-09-05T22:40:07Z | COLLABORATOR | 0 | pydata/xarray/pulls/6939 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1345220697 | PR_kwDOAMm_X849gHlT | 6938 | Fix bug where indexes were changed inplace | headtr1ck 43316012 | closed | 0 | 1 | 2022-08-20T16:45:22Z | 2022-08-22T11:07:46Z | 2022-08-22T10:39:54Z | COLLABORATOR | 0 | pydata/xarray/pulls/6938 |
Some typing along the way :) |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6938/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1302461674 | PR_kwDOAMm_X847SR33 | 6777 | Move Rolling tests to their own testing module | headtr1ck 43316012 | closed | 0 | 1 | 2022-07-12T18:20:58Z | 2022-07-12T18:48:38Z | 2022-07-12T18:46:32Z | COLLABORATOR | 0 | pydata/xarray/pulls/6777 | This PR moves all DataArrayRolling and DatasetRolling tests to their own module. See request https://github.com/pydata/xarray/pull/6744#issuecomment-1182169308 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1275747776 | I_kwDOAMm_X85MCl3A | 6703 | Add coarsen, rolling and weighted to generate_reductions | headtr1ck 43316012 | open | 0 | 1 | 2022-06-18T09:49:22Z | 2022-06-18T16:04:15Z | COLLABORATOR | Is your feature request related to a problem? Coarsen reductions are currently added dynamically, which is not very useful for typing. This is a follow-up to @Illviljan in https://github.com/pydata/xarray/pull/6702#discussion_r900700532. Same goes for Weighted, and similar for Rolling (not sure if it is exactly the same, though?). Describe the solution you'd like: Extend the generate_reductions script to include … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
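A short illustration of why #6703 matters for typing (standard xarray calls; the point is that, per the issue, these reduction methods were attached dynamically, so a static checker could not resolve them unless they are generated explicitly):

```python
import xarray as xr

da = xr.DataArray(range(6), dims="x")

# All of these work at runtime, but a type checker only sees the methods if they
# are written out (e.g. by the generate_reductions script) rather than injected:
print(da.coarsen(x=2).mean().values)
print(da.rolling(x=3).sum().values)
print(da.weighted(xr.DataArray([1, 2, 3, 1, 2, 3], dims="x")).mean("x").values)
```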
1249902974 | PR_kwDOAMm_X844idGc | 6641 | Typing of `str` and `dt` accessors | headtr1ck 43316012 | closed | 0 | 1 | 2022-05-26T18:25:44Z | 2022-05-27T06:32:33Z | 2022-05-26T20:12:23Z | COLLABORATOR | 0 | pydata/xarray/pulls/6641 | This is an initial try to get type hints for … I think there is no way of accessing the class at class scope (or is there?), so I had to use plain "DataArray" as the generic type of the accessors. I think that is acceptable for now. The hack of … Maybe a common interface class for accessors would also be beneficial? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6641/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
xarray 13221727 | pull | |||||
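A sketch of the typing approach described in #6641 (the class name is illustrative, not xarray's real accessor): make the accessor generic over the wrapped type so its methods return the type they were called on.

```python
from typing import Generic, TypeVar, cast

import xarray as xr

T_DataArray = TypeVar("T_DataArray", bound=xr.DataArray)


class StrAccessorSketch(Generic[T_DataArray]):
    def __init__(self, obj: T_DataArray) -> None:
        self._obj = obj

    def upper(self) -> T_DataArray:
        # delegate to the real accessor; the cast preserves the caller's concrete type
        return cast(T_DataArray, self._obj.str.upper())


da = xr.DataArray(["a", "b"], dims="x")
print(StrAccessorSketch(da).upper().values)  # ['A' 'B']
```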
1244082778 | PR_kwDOAMm_X844PPS5 | 6626 | Mypy badge | headtr1ck 43316012 | closed | 0 | 1 | 2022-05-21T21:12:05Z | 2022-05-22T13:56:45Z | 2022-05-21T22:59:52Z | COLLABORATOR | 0 | pydata/xarray/pulls/6626 | This PR adds a mypy badge to the README. Also, nicer alt texts for all other badges. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1150618439 | I_kwDOAMm_X85ElQtH | 6306 | Assigning to dataset with missing dim raises ValueError | headtr1ck 43316012 | open | 0 | 1 | 2022-02-25T16:08:04Z | 2022-05-21T20:35:52Z | COLLABORATOR |
What happened? I tried to assign values to a dataset with a selector-dict where a variable is missing the dim from the selector-dict. This raises a ValueError.
What did you expect to happen? I expect that assigning works the same as selecting and it will ignore the missing dims.
Minimal Complete Verifiable Example
```Python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3]), "b": ("y", [4, 5])})

ds[{"x": 1}]
# this works and returns:
# <xarray.Dataset>
# Dimensions:  (y: 2)
# Dimensions without coordinates: y
# Data variables:
#     a        int64 2
#     b        (y) int64 4 5

ds[{"x": 1}] = 1
# this fails and raises a ValueError
# ValueError: Variable 'b': indexer {'x': 1} not available
```
Relevant log output
```Python
Traceback (most recent call last):
  File "xarray/core/dataset.py", line 1591, in _setitem_check
    var_k = var[key]
  File "xarray/core/dataarray.py", line 740, in __getitem__
    return self.isel(indexers=self._item_key_to_dict(key))
  File "xarray/core/dataarray.py", line 1204, in isel
    variable = self._variable.isel(indexers, missing_dims=missing_dims)
  File "xarray/core/variable.py", line 1181, in isel
    indexers = drop_dims_from_indexers(indexers, self.dims, missing_dims)
  File "xarray/core/utils.py", line 834, in drop_dims_from_indexers
    raise ValueError(
ValueError: Dimensions {'x'} do not exist. Expected one or more of ('y',)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "xarray/core/dataset.py", line 1521, in __setitem__
    value = self._setitem_check(key, value)
  File "xarray/core/dataset.py", line 1593, in _setitem_check
    raise ValueError(
ValueError: Variable 'b': indexer {'x': 1} not available
```
Anything else we need to know? No response
Environment: INSTALLED VERSIONS -- commit: None; python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]; python-bits: 64; OS: Linux; OS-release: 3.10.0-1160.49.1.el7.x86_64; machine: x86_64; processor: x86_64; byteorder: little; LC_ALL: None; LANG: en_US.UTF-8; LOCALE: ('en_US', 'UTF-8'); libhdf5: 1.12.0; libnetcdf: 4.7.4; xarray: 0.21.1; pandas: 1.4.0; numpy: 1.21.5; scipy: 1.7.3; netCDF4: 1.5.8; pydap: None; h5netcdf: None; h5py: None; Nio: None; zarr: None; cftime: 1.5.1.1; nc_time_axis: None; PseudoNetCDF: None; rasterio: None; cfgrib: None; iris: None; bottleneck: None; dask: None; distributed: None; matplotlib: 3.5.1; cartopy: None; seaborn: None; numbagg: None; fsspec: None; cupy: None; pint: None; sparse: None; setuptools: 49.2.1; pip: 22.0.3; conda: None; pytest: 6.2.5; IPython: 8.0.0; sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1234229210 | PR_kwDOAMm_X843u7hK | 6601 | change polyval dim ordering | headtr1ck 43316012 | closed | 0 | 1 | 2022-05-12T16:30:44Z | 2022-05-16T18:10:03Z | 2022-05-12T19:01:59Z | COLLABORATOR | 0 | pydata/xarray/pulls/6601 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6601/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1234135124 | PR_kwDOAMm_X843unh4 | 6599 | re-add timedelta support for polyval | headtr1ck 43316012 | closed | 0 | 1 | 2022-05-12T15:12:41Z | 2022-05-12T16:27:01Z | 2022-05-12T15:43:29Z | COLLABORATOR | 0 | pydata/xarray/pulls/6599 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
1228977960 | PR_kwDOAMm_X843dwXx | 6579 | Fix Dataset/DataArray.isel with drop=True and scalar DataArray indexes | headtr1ck 43316012 | closed | 0 | 1 | 2022-05-08T20:17:04Z | 2022-05-11T17:19:53Z | 2022-05-10T06:18:19Z | COLLABORATOR | 0 | pydata/xarray/pulls/6579 |
Additionally I have added new literal types for error handling (only applied to functions related to isel, so that mypy stops complaining). |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6579/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
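A sketch of the "literal types for error handling" mentioned in #6579 (the alias name `ErrorOptions` and the helper function are assumptions made here for illustration):

```python
from typing import Literal

import xarray as xr

# Assumed alias name, for illustration only: restricts the accepted strings so a
# type checker rejects anything outside the known options.
ErrorOptions = Literal["raise", "warn", "ignore"]

da = xr.DataArray([[1, 2], [3, 4]], dims=("x", "y"))


def safe_isel(obj: xr.DataArray, missing_dims: ErrorOptions = "raise", **indexers: int) -> xr.DataArray:
    # mypy rejects e.g. missing_dims="silent" because only the three literals are allowed
    return obj.isel(indexers, missing_dims=missing_dims)


print(safe_isel(da, missing_dims="ignore", z=0).values)  # the missing "z" dim is dropped
```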
1155321209 | I_kwDOAMm_X85E3M15 | 6313 | groupby on array with multiindex renames indices | headtr1ck 43316012 | closed | 0 | 1 | 2022-03-01T13:08:30Z | 2022-03-17T17:11:44Z | 2022-03-17T17:11:44Z | COLLABORATOR |
What happened? When grouping and reducing an array or dataset over a multi-index, the coordinates that make up the multi-index get renamed to "{name_of_multiindex}_level_{i}". It only works correctly when the MultiIndex is a "homogeneous grid", i.e. as obtained by stacking.
What did you expect to happen? I expect that all coordinates keep their initial names.
Minimal Complete Verifiable Example
```Python
import xarray as xr

# this works:
d = xr.DataArray(range(4), dims="t", coords={"x": ("t", [0, 0, 1, 1]), "y": ("t", [0, 1, 0, 1])})
dd = d.set_index({"t": ["x", "y"]})
# returns
# <xarray.DataArray (t: 4)>
# array([0, 1, 2, 3])
# Coordinates:
#   * t        (t) MultiIndex
#   - x        (t) int64 0 0 1 1
#   - y        (t) int64 0 1 0 1

dd.groupby("t").mean(...)
# returns
# <xarray.DataArray (t: 4)>
# array([0., 1., 2., 3.])
# Coordinates:
#   * t        (t) MultiIndex
#   - x        (t) int64 0 0 1 1
#   - y        (t) int64 0 1 0 1

# this does not work
d2 = xr.DataArray(range(6), dims="t", coords={"x": ("t", [0, 0, 1, 1, 0, 1]), "y": ("t", [0, 1, 0, 1, 0, 0])})
dd2 = d2.set_index({"t": ["x", "y"]})
# returns
# <xarray.DataArray (t: 6)>
# array([0, 1, 2, 3, 4, 5])
# Coordinates:
#   * t        (t) MultiIndex
#   - x        (t) int64 0 0 1 1 0 1
#   - y        (t) int64 0 1 0 1 0 0

dd2.groupby("t").mean(...)
# returns
# <xarray.DataArray (t: 4)>
# array([2. , 1. , 3.5, 3. ])
# Coordinates:
#   * t           (t) MultiIndex
#   - t_level_0   (t) int64 0 0 1 1
#   - t_level_1   (t) int64 0 1 0 1
```
Relevant log output: No response
Anything else we need to know? No response
Environment: INSTALLED VERSIONS -- commit: None; python: 3.9.1 (default, Jan 13 2021, 15:21:08) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]; python-bits: 64; OS: Linux; OS-release: 3.10.0-1160.49.1.el7.x86_64; machine: x86_64; processor: x86_64; byteorder: little; LC_ALL: None; LANG: en_US.UTF-8; LOCALE: ('en_US', 'UTF-8'); libhdf5: 1.12.0; libnetcdf: 4.7.4; xarray: 0.21.1; pandas: 1.4.0; numpy: 1.21.5; scipy: 1.7.3; netCDF4: 1.5.8; pydap: None; h5netcdf: None; h5py: None; Nio: None; zarr: None; cftime: 1.5.1.1; nc_time_axis: None; PseudoNetCDF: None; rasterio: None; cfgrib: None; iris: None; bottleneck: None; dask: None; distributed: None; matplotlib: 3.5.1; cartopy: None; seaborn: None; numbagg: None; fsspec: None; cupy: None; pint: None; sparse: None; setuptools: 49.2.1; pip: 22.0.3; conda: None; pytest: 6.2.5; IPython: 8.0.0; sphinx: None |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);