issue_comments

466 rows where user = 14371165 sorted by updated_at descending

issue >30

  • Add support for cross product 17
  • Add typing to plot methods 9
  • Add asv benchmark jobs to CI 8
  • Add dataarray scatter with 3d support 7
  • Rely on NEP-18 to dispatch to dask in duck_array_ops 7
  • Replace dataset scatter with the dataarray version 7
  • Enable `flox` in `GroupBy` and `resample` 7
  • Generate reductions for DataArray, Dataset, GroupBy and Resample 7
  • Cumulative examples 7
  • Limit and format number of displayed dimensions in repr 6
  • Add python 3.10 to CI 6
  • Allow .attrs to support any dict-likes 5
  • Add typing to the OPTIONS dict 5
  • Drop support for python 3.7 5
  • Add groupby & resample benchmarks 5
  • Add dataset line plot 4
  • Do not transpose 1d arrays during interpolation 4
  • Allow in-memory arrays with open_mfdataset 4
  • Generator for groupby reductions 4
  • Run pyupgrade on core/utils 4
  • Fix kwargs used for extrapolation in docs 4
  • Remove debugging slow assert statement 4
  • Require to explicitly defining optional dimensions such as hue and markersize 4
  • Add python 3.11 to CI 4
  • Unrecognized chunk manager dask - must be one of: [] 4
  • [WIP] GroupBy plotting 3
  • Improve Dataset documentation 3
  • Should __repr__ and __str__ be PEP8 compliant? 3
  • Add histogram method 3
  • Faster interp 3
  • …

user 1

  • Illviljan · 466 ✖

author_association 1

  • MEMBER 466
id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1280906370 https://github.com/pydata/xarray/pull/7173#issuecomment-1280906370 https://api.github.com/repos/pydata/xarray/issues/7173 IC_kwDOAMm_X85MWRSC Illviljan 14371165 2022-10-17T13:57:47Z 2024-03-20T23:12:49Z MEMBER

Scatter vs. Lines:

```python
ds = xr.tutorial.scatter_example_dataset(seed=42)
hue_ = "y"
x_ = "y"
size_ = "y"
z_ = "z"
fig = plt.figure()
ax = fig.add_subplot(1, 2, 1, projection='3d')
ds.A.sel(w="one").plot.lines(x=x_, z=z_, hue=hue_, linewidth=size_, ax=ax)
ax = fig.add_subplot(1, 2, 2, projection='3d')
ds.A.sel(w="one").plot.scatter(x=x_, z=z_, hue=hue_, markersize=size_, ax=ax)
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add LineCollection plot 1410608825
1575756825 https://github.com/pydata/xarray/pull/7891#issuecomment-1575756825 https://api.github.com/repos/pydata/xarray/issues/7891 IC_kwDOAMm_X85d7CQZ Illviljan 14371165 2023-06-04T22:29:18Z 2023-06-04T22:29:18Z MEMBER

You could use DataArray.round to round to significant decimals.
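A minimal sketch of that suggestion, with made-up values (not from the PR), assuming `decimals=2` is the wanted precision:

```python
import numpy as np
import xarray as xr

# Hypothetical fitted parameters; DataArray.round forwards to numpy rounding.
params = xr.DataArray(np.array([1.23456, 2.34567, 3.45678]), dims="param")
print(params.round(decimals=2).values)  # [1.23 2.35 3.46]
```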

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add errors option to curvefit 1740268634
1570164833 https://github.com/pydata/xarray/pull/7821#issuecomment-1570164833 https://api.github.com/repos/pydata/xarray/issues/7821 IC_kwDOAMm_X85dltBh Illviljan 14371165 2023-05-31T12:43:30Z 2023-05-31T12:43:30Z MEMBER

Thanks, @mgunyho!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Implement multidimensional initial guess and bounds for `curvefit` 1698626185
1563788348 https://github.com/pydata/xarray/pull/7875#issuecomment-1563788348 https://api.github.com/repos/pydata/xarray/issues/7875 IC_kwDOAMm_X85dNYQ8 Illviljan 14371165 2023-05-26T04:17:07Z 2023-05-26T04:18:08Z MEMBER

cos is a float operation, so I would lean towards using an isclose check: `xr.testing.assert_allclose(a + 1, np.cos(a))`.
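A rough illustration of the point, using a made-up array; mathematically equal float expressions are rarely bit-identical, so a tolerance-based check is the safer comparison:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.linspace(0, 2 * np.pi, 5), dims="x")

# cos(a + 2*pi) equals cos(a) mathematically, but not necessarily bit-for-bit,
# so compare within tolerances instead of demanding exact equality.
xr.testing.assert_allclose(np.cos(a + 2 * np.pi), np.cos(a))
```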

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  defer to `numpy` for the expected result 1726529405
1556198984 https://github.com/pydata/xarray/issues/7856#issuecomment-1556198984 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85cwbZI Illviljan 14371165 2023-05-21T14:51:55Z 2023-05-21T14:51:55Z MEMBER

Nope, I have not tried that. I suspect things would just self-heal then, judging by the CI, without us understanding the root cause.

Looking at the backends; we initialize a dict here: https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/backends/common.py#L435

Each of our entrypoints is stored like this: https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/backends/h5netcdf_.py#L438

Then we append the local and other entrypoints together here: https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/backends/plugins.py#L106-L116

But load_chunkmanagers doesn't really seem to append from a dict: https://github.com/pydata/xarray/blob/d8ec3a3f6b02a8b941b484b3d254537af84b5fde/xarray/core/parallelcompat.py#L48-L62

Why do the backends use the BACKEND_ENTRYPOINTS strategy? To avoid these cases? Or something else?
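For orientation, a rough sketch of the entry-point discovery side of that comparison; the group name and return shape are assumptions for illustration, not xarray's exact internals (Python 3.10+ for the `group=` keyword):

```python
from importlib.metadata import entry_points


def discover_chunkmanagers(group: str = "xarray.chunkmanagers") -> dict:
    """Collect whatever advertises itself under `group` (assumed name).

    If no installed package registers an entry point for this group in the
    current environment, the result is simply {} -- the same symptom as the
    empty list_chunkmanagers() output reported in this issue.
    """
    return {ep.name: ep for ep in entry_points(group=group)}


print(discover_chunkmanagers())  # {} when nothing is registered
```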

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1556191288 https://github.com/pydata/xarray/issues/7856#issuecomment-1556191288 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85cwZg4 Illviljan 14371165 2023-05-21T14:17:30Z 2023-05-21T14:17:30Z MEMBER

The CI recreates its entire environment every run, whereas I don't?

```python
from xarray.core.parallelcompat import list_chunkmanagers

list_chunkmanagers()
Out[1]: {}
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1556160022 https://github.com/pydata/xarray/pull/7844#issuecomment-1556160022 https://api.github.com/repos/pydata/xarray/issues/7844 IC_kwDOAMm_X85cwR4W Illviljan 14371165 2023-05-21T11:54:28Z 2023-05-21T11:55:48Z MEMBER

       before           after         ratio
     [05c7888d]       [d135ab97]
-      2.47±0.02s         806±6ms     0.33  pandas.ToDataFrameDask.time_to_dataframe

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improve to_dask_dataframe performance 1710752209
1556114932 https://github.com/pydata/xarray/issues/7856#issuecomment-1556114932 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85cwG30 Illviljan 14371165 2023-05-21T08:13:47Z 2023-05-21T08:13:47Z MEMBER

Our backends are stored in a dict like this: `BACKEND_ENTRYPOINTS["h5netcdf"] = ("h5netcdf", H5netcdfBackendEntrypoint)`. Does the dask chunk manager need to do something similar?
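A small sketch of that dict-registration style with placeholder classes; whether the chunk managers should mirror it is exactly the open question here:

```python
# Illustrative stand-ins, not xarray's real classes.
class H5netcdfBackendEntrypoint:
    pass


class DaskManager:
    pass


# Backends: registered at import time in a module-level dict.
BACKEND_ENTRYPOINTS = {}
BACKEND_ENTRYPOINTS["h5netcdf"] = ("h5netcdf", H5netcdfBackendEntrypoint)

# The analogous idea for chunk managers would be another registry dict,
# populated the same way instead of relying only on entry-point discovery.
CHUNKMANAGERS = {"dask": DaskManager}
```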

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1556113793 https://github.com/pydata/xarray/issues/7856#issuecomment-1556113793 https://api.github.com/repos/pydata/xarray/issues/7856 IC_kwDOAMm_X85cwGmB Illviljan 14371165 2023-05-21T08:09:09Z 2023-05-21T08:09:09Z MEMBER

cc @TomNicholas

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Unrecognized chunk manager dask - must be one of: [] 1718410975
1547017905 https://github.com/pydata/xarray/pull/7824#issuecomment-1547017905 https://api.github.com/repos/pydata/xarray/issues/7824 IC_kwDOAMm_X85cNZ6x Illviljan 14371165 2023-05-14T22:44:06Z 2023-05-15T08:40:51Z MEMBER

Think I'll stop here.

@alimanfoo, feel free to try this out if you have the time.

Results:
```
      before           after         ratio
    [964d350a]       [86ef9540]
-       117±2ms       65.6±0.9ms     0.56  groupby.Resample.time_agg_large_num_groups('sum', 2, False)
-       117±2ms       65.3±0.7ms     0.56  groupby.ResampleCFTime.time_agg_large_num_groups('sum', 2, False)
-       113±1ms       62.0±0.4ms     0.55  groupby.ResampleCFTime.time_agg_large_num_groups('sum', 1, False)
-       112±2ms       61.4±0.5ms     0.55  groupby.Resample.time_agg_large_num_groups('sum', 1, False)
-    8.50±0.2ms      1.61±0.01ms     0.19  combine.Combine1d.time_combine_by_coords
-    1.81±0.02s          224±2ms     0.12  combine.Combine1dDask.time_combine_by_coords

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. PERFORMANCE INCREASED.
```

{
    "total_count": 4,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 2,
    "eyes": 0
}
  Improve concat performance 1699099029
1543284025 https://github.com/pydata/xarray/issues/7833#issuecomment-1543284025 https://api.github.com/repos/pydata/xarray/issues/7833 IC_kwDOAMm_X85b_KU5 Illviljan 14371165 2023-05-11T03:36:34Z 2023-05-11T03:36:34Z MEMBER

I've noticed this as well, see #7824.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Slow performance of concat() 1704950804
1537345671 https://github.com/pydata/xarray/pull/7822#issuecomment-1537345671 https://api.github.com/repos/pydata/xarray/issues/7822 IC_kwDOAMm_X85bogiH Illviljan 14371165 2023-05-07T07:34:45Z 2023-05-07T07:34:45Z MEMBER

Thanks, @mgunyho!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix typos in contribution guide 1698632575
1536656773 https://github.com/pydata/xarray/pull/7820#issuecomment-1536656773 https://api.github.com/repos/pydata/xarray/issues/7820 IC_kwDOAMm_X85bl4WF Illviljan 14371165 2023-05-05T19:03:08Z 2023-05-05T19:03:08Z MEMBER

Indeed, it is pint.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pin pint to 0.20 1697987899
1534920140 https://github.com/pydata/xarray/issues/7707#issuecomment-1534920140 https://api.github.com/repos/pydata/xarray/issues/7707 IC_kwDOAMm_X85bfQXM Illviljan 14371165 2023-05-04T14:50:01Z 2023-05-04T14:50:01Z MEMBER

Lots of pint errors with version 0.21, @keewis. I think pint 0.20.1 worked well?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  ⚠️ Nightly upstream-dev CI failed ⚠️ 1650481625
1530345382 https://github.com/pydata/xarray/pull/7795#issuecomment-1530345382 https://api.github.com/repos/pydata/xarray/issues/7795 IC_kwDOAMm_X85bNzem Illviljan 14371165 2023-05-01T21:41:56Z 2023-05-01T21:41:56Z MEMBER

Feels like it worked quite recently; has something changed? The benchmarks still work on the flox side.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add cftime groupby, resample benchmarks 1688781350
1530096971 https://github.com/pydata/xarray/pull/7787#issuecomment-1530096971 https://api.github.com/repos/pydata/xarray/issues/7787 IC_kwDOAMm_X85bM21L Illviljan 14371165 2023-05-01T19:13:57Z 2023-05-01T19:13:57Z MEMBER

Thanks, @ksunden and @tacaswell, for the guidance. We'll tackle these in separate PRs, see #7802.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow the label run-upstream to run upstream CI 1684281101
1529823367 https://github.com/pydata/xarray/pull/7801#issuecomment-1529823367 https://api.github.com/repos/pydata/xarray/issues/7801 IC_kwDOAMm_X85bL0CH Illviljan 14371165 2023-05-01T15:15:30Z 2023-05-01T15:15:30Z MEMBER

Thanks, @dstansby!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update asv links in contributing guide 1690872248
1528693660 https://github.com/pydata/xarray/pull/5704#issuecomment-1528693660 https://api.github.com/repos/pydata/xarray/issues/5704 IC_kwDOAMm_X85bHgOc Illviljan 14371165 2023-04-29T06:56:37Z 2023-04-29T06:58:26Z MEMBER

Those issues indeed have to be fixed if opening files lazily is the only option for xarray.

But xarray could also accept that chunks=None will (for now) load all the files into memory. If that's ok, I believe we can merge this now. I suspect there are a few in-memory users out there who could make use of this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow in-memory arrays with open_mfdataset 970245117
1528527292 https://github.com/pydata/xarray/issues/7794#issuecomment-1528527292 https://api.github.com/repos/pydata/xarray/issues/7794 IC_kwDOAMm_X85bG3m8 Illviljan 14371165 2023-04-29T02:39:19Z 2023-04-29T02:39:19Z MEMBER

Why/where do you get a `cftime._cftime.Datetime360Day`? It is deprecated according to cftime.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  TypeError for time_bnds variable when calling Dataset.to_netcdf 1688779793
1526232602 https://github.com/pydata/xarray/issues/7792#issuecomment-1526232602 https://api.github.com/repos/pydata/xarray/issues/7792 IC_kwDOAMm_X85a-HYa Illviljan 14371165 2023-04-27T19:21:54Z 2023-04-27T19:22:26Z MEMBER

It's because opening several files requires concatenating the files. xarray does not have any machinery to do that lazily without dask, so all your files will be loaded into memory. Maybe that's ok for you? If the files are small it should be fine.

See #5704 for more discussion.
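A short, hedged sketch of the trade-off described above; the file names are placeholders:

```python
import xarray as xr

paths = ["a.nc", "b.nc"]  # placeholder files

# With dask installed, chunks={} keeps the variables as lazy dask arrays,
# so the concatenation builds a task graph rather than reading everything.
lazy = xr.open_mfdataset(paths, chunks={})

# Loading into memory is then an explicit step -- fine for small files.
in_memory = lazy.compute()
```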

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  If "chunks=None" is set in open_mfdataset, it is changed to "chunks={}" before being passed to "_dataset_from_backend_dataset" 1687297423
1523703079 https://github.com/pydata/xarray/pull/7787#issuecomment-1523703079 https://api.github.com/repos/pydata/xarray/issues/7787 IC_kwDOAMm_X85a0d0n Illviljan 14371165 2023-04-26T16:22:57Z 2023-04-26T16:22:57Z MEMBER

Realized I actually wanted to run mypy with the upstream packages, so I added such a CI job as well. It only runs when this label is added, so that it doesn't interfere with the scheduled runs and their results.

I notice quite a few mypy errors from the plot parts. If you have any ideas @headtr1ck and @ksunden you're very welcome to push more PRs!

``` xarray/core/options.py:12: error: Cannot assign to a type [misc] xarray/core/options.py:12: error: Incompatible types in assignment (expression has type "Type[str]", variable has type "Type[Colormap]") [assignment] xarray/plot/utils.py:808: error: Argument 1 to "set_xticks" of "_AxesBase" has incompatible type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"; expected "Iterable[float]" [arg-type] xarray/plot/utils.py:810: error: Argument 1 to "set_yticks" of "_AxesBase" has incompatible type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"; expected "Iterable[float]" [arg-type] xarray/plot/utils.py:813: error: Argument 1 to "set_xlim" of "_AxesBase" has incompatible type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"; expected "Union[float, Tuple[float, float], None]" [arg-type] xarray/plot/utils.py:815: error: Argument 1 to "set_ylim" of "_AxesBase" has incompatible type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]"; expected "Union[float, Tuple[float, float], None]" [arg-type] Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml Installing missing stub packages: /home/runner/micromamba-root/envs/xarray-tests/bin/python -m pip install types-Pillow types-PyYAML types-Pygments types-babel types-colorama types-paramiko types-psutil types-pytz types-pywin32 types-setuptools types-urllib3 Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml Found 154 errors in 10 files (checked 138 source files) xarray/plot/utils.py:1349: error: Unsupported operand types for * ("_SupportsArray[dtype[Any]]" and "float") [operator] xarray/plot/utils.py:1349: error: Unsupported operand types for * ("_NestedSequence[_SupportsArray[dtype[Any]]]" and "float") [operator] xarray/plot/utils.py:1349: error: Unsupported operand types for * ("_NestedSequence[Union[bool, int, float, complex, str, bytes]]" and "float") [operator] xarray/plot/utils.py:1349: note: Left operand is of type "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" xarray/plot/utils.py:1349: error: Unsupported operand types for * ("str" and "float") [operator] xarray/plot/utils.py:1349: error: Unsupported operand types for * ("bytes" and "float") [operator] xarray/plot/utils.py:1350: error: Item "_SupportsArray[dtype[Any]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "_NestedSequence[_SupportsArray[dtype[Any]]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "int" of "Union[_SupportsArray[dtype[Any]], 
_NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "float" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "complex" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "str" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "bytes" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1350: error: Item "_NestedSequence[Union[bool, int, float, complex, str, bytes]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "_SupportsArray[dtype[Any]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "_NestedSequence[_SupportsArray[dtype[Any]]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "int" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "float" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "complex" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "str" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "bytes" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/utils.py:1351: error: Item "_NestedSequence[Union[bool, int, float, complex, str, 
bytes]]" of "Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" has no attribute "mask" [union-attr] xarray/plot/facetgrid.py:684: error: "FigureCanvasBase" has no attribute "get_renderer" [attr-defined] xarray/plot/accessor.py:182: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:182: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:309: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:309: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:428: error: Overloaded function implementation cannot produce return type of signature 2 [misc] xarray/plot/accessor.py:428: error: Overloaded function implementation cannot produce return type of signature 3 [misc] xarray/plot/accessor.py:433: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:433: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:552: error: Overloaded function implementation cannot produce return type of signature 2 [misc] xarray/plot/accessor.py:552: error: Overloaded function implementation cannot produce return type of signature 3 [misc] xarray/plot/accessor.py:557: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:557: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:676: error: Overloaded function implementation cannot produce return type of signature 2 [misc] xarray/plot/accessor.py:676: error: Overloaded function implementation cannot produce return type of signature 3 [misc] xarray/plot/accessor.py:681: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:681: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:800: error: Overloaded function implementation cannot produce return type of signature 2 [misc] xarray/plot/accessor.py:800: error: Overloaded function implementation cannot produce return type of signature 3 [misc] xarray/plot/accessor.py:948: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:948: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:1075: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:1075: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/accessor.py:1190: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/accessor.py:1190: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataset_plot.py:324: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataset_plot.py:324: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataset_plot.py:478: error: Overloaded function signatures 1 
and 2 overlap with incompatible return types [misc] xarray/plot/dataset_plot.py:478: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "x" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "y" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "u" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "v" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "density" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "linewidth" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "color" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "cmap" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "norm" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "arrowsize" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "arrowstyle" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "minlength" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "transform" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "zorder" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "start_points" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "maxlength" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "integration_direction" [misc] xarray/plot/dataset_plot.py:649: error: Function gets multiple values for keyword argument "broken_streamlines" [misc] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[float, Tuple[float, float]]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[str, Colormap, None]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[str, Normalize, None]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "float" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[str, ArrowStyle]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Optional[Transform]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Optional[float]" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "Literal['forward', 'backward', 'both']" [arg-type] xarray/plot/dataset_plot.py:649: error: Argument 1 has incompatible type "*List[ndarray[Any, Any]]"; expected "bool" [arg-type] xarray/plot/dataset_plot.py:751: error: Overloaded function signatures 1 and 2 overlap with incompatible return types 
[misc] xarray/plot/dataset_plot.py:751: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:718: error: Incompatible return value type (got "Tuple[Union[ndarray[Any, Any], List[ndarray[Any, Any]]], ndarray[Any, Any], Union[BarContainer, Polygon, List[Union[BarContainer, Polygon]]]]", expected "Tuple[ndarray[Any, Any], ndarray[Any, Any], BarContainer]") [return-value] xarray/plot/dataarray_plot.py:996: error: "Axes" has no attribute "view_init" [attr-defined] xarray/plot/dataarray_plot.py:1106: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:1106: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[Sequence[Union[Union[Tuple[float, float, float], str], Union[str, Tuple[float, float, float, float], Tuple[Union[Tuple[float, float, float], str], float], Tuple[Tuple[float, float, float, float], float]]]], Union[Union[Tuple[float, float, float], str], Union[str, Tuple[float, float, float, float], Tuple[Union[Tuple[float, float, float], str], float], Tuple[Tuple[float, float, float, float], float]]], None]" [arg-type] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Optional[Union[str, Path, MarkerStyle]]" [arg-type] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[str, Colormap, None]" [arg-type] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[str, Normalize, None]" [arg-type] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Optional[float]" [arg-type] xarray/plot/dataarray_plot.py:1261: error: Argument 1 to "scatter" of "Axes" has incompatible type "*List[ndarray[Any, Any]]"; expected "Union[float, Sequence[float], None]" [arg-type] xarray/plot/dataarray_plot.py:1615: error: "Axes" has no attribute "set_zlabel" [attr-defined] xarray/plot/dataarray_plot.py:1655: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:1655: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:1874: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:1874: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:2010: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:2010: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:2146: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:2146: error: Overloaded function signatures 1 and 3 overlap with incompatible return types [misc] xarray/plot/dataarray_plot.py:2464: error: "Axes" has no attribute "plot_surface" [attr-defined] xarray/tests/test_plot.py:427: error: Value of type "Union[_SupportsArray[dtype[Any]], 
_NestedSequence[_SupportsArray[dtype[Any]]], int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]" is not indexable [index] xarray/tests/test_plot.py:443: error: Module has no attribute "viridis" [attr-defined] xarray/tests/test_plot.py:457: error: "None" not callable [misc] xarray/tests/test_plot.py:462: error: "None" not callable [misc] xarray/tests/test_plot.py:465: error: "None" not callable [misc] xarray/tests/test_plot.py:471: error: Module has no attribute "viridis" [attr-defined] xarray/tests/test_plot.py:477: error: Module has no attribute "viridis" [attr-defined] xarray/tests/test_plot.py:482: error: Module has no attribute "viridis" [attr-defined] xarray/tests/test_plot.py:486: error: Module has no attribute "viridis" [attr-defined] xarray/tests/test_plot.py:493: error: "None" not callable [misc] xarray/tests/test_plot.py:498: error: "None" not callable [misc] xarray/tests/test_plot.py:501: error: "None" not callable [misc] xarray/tests/test_plot.py:931: error: Module has no attribute "magma" [attr-defined] xarray/tests/test_plot.py:933: error: Module has no attribute "magma" [attr-defined] xarray/tests/test_plot.py:1173: error: Module has no attribute "RdBu" [attr-defined] xarray/tests/test_plot.py:1746: error: Item "Colormap" of "Optional[Colormap]" has no attribute "colors" [union-attr] xarray/tests/test_plot.py:1746: error: Item "None" of "Optional[Colormap]" has no attribute "colors" [union-attr] xarray/tests/test_plot.py:1747: error: Item "Colormap" of "Optional[Colormap]" has no attribute "colors" [union-attr] xarray/tests/test_plot.py:1747: error: Item "None" of "Optional[Colormap]" has no attribute "colors" [union-attr] xarray/tests/test_plot.py:1749: error: Item "Colormap" of "Optional[Colormap]" has no attribute "_rgba_over" [union-attr] xarray/tests/test_plot.py:1749: error: Item "None" of "Optional[Colormap]" has no attribute "_rgba_over" [union-attr] xarray/tests/test_plot.py:1801: error: Item "None" of "Optional[ndarray[Any, Any]]" has no attribute "size" [union-attr] xarray/tests/test_plot.py:1952: error: Item "None" of "Optional[ndarray[Any, Any]]" has no attribute "min" [union-attr] xarray/tests/test_plot.py:1952: error: Item "None" of "Optional[ndarray[Any, Any]]" has no attribute "max" [union-attr] xarray/tests/test_plot.py:1968: error: Item "None" of "Optional[ndarray[Any, Any]]" has no attribute "dtype" [union-attr] xarray/tests/test_plot.py:1969: error: Value of type "Optional[ndarray[Any, Any]]" is not indexable [index] xarray/tests/test_plot.py:2125: error: "Artist" has no attribute "get_clim" [attr-defined] xarray/tests/test_plot.py:2135: error: "Colorbar" has no attribute "vmin" [attr-defined] xarray/tests/test_plot.py:2136: error: "Colorbar" has no attribute "vmax" [attr-defined] xarray/tests/test_plot.py:2202: error: "Artist" has no attribute "get_clim" [attr-defined] xarray/tests/test_plot.py:2218: error: "Artist" has no attribute "norm" [attr-defined] xarray/tests/test_plot.py:2747: error: Item "_AxesBase" of "Optional[_AxesBase]" has no attribute "legend_" [union-attr] xarray/tests/test_plot.py:2747: error: Item "None" of "Optional[_AxesBase]" has no attribute "legend_" [union-attr] xarray/tests/test_plot.py:2754: error: Item "None" of "Optional[_AxesBase]" has no attribute "get_legend" [union-attr] xarray/tests/test_plot.py:2775: error: Item "None" of "Optional[FigureBase]" has no attribute "axes" [union-attr] xarray/tests/test_plot.py:2775: error: Argument 1 to "len" has incompatible type 
"Union[_AxesBase, None, Any]"; expected "Sized" [arg-type] xarray/tests/test_plot.py:2803: error: Module has no attribute "dates" [attr-defined] xarray/tests/test_plot.py:2812: error: Module has no attribute "dates" [attr-defined] xarray/tests/test_plot.py:2831: error: Item "None" of "Optional[_AxesBase]" has no attribute "xaxis" [union-attr] xarray/tests/test_plot.py:2831: error: Module has no attribute "dates" [attr-defined] xarray/tests/test_groupby.py:715: error: Argument 1 to "groupby" of "Dataset" has incompatible type "ndarray[Any, dtype[signedinteger[Any]]]"; expected "Union[Hashable, DataArray, IndexVariable]" [arg-type] xarray/tests/test_groupby.py:715: note: Following member(s) of "ndarray[Any, dtype[signedinteger[Any]]]" have conflicts: xarray/tests/test_groupby.py:715: note: __hash__: expected "Callable[[], int]", got "None" xarray/tests/test_dataset.py:6964: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataset.py:6965: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataset.py:7007: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataset.py:7008: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataarray.py:6687: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataarray.py:6689: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataarray.py:6735: error: "PlainQuantity[Any]" not callable [operator] xarray/tests/test_dataarray.py:6737: error: "PlainQuantity[Any]" not callable [operator] ```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Allow the label run-upstream to run upstream CI 1684281101
1509228401 https://github.com/pydata/xarray/pull/7752#issuecomment-1509228401 https://api.github.com/repos/pydata/xarray/issues/7752 IC_kwDOAMm_X85Z9P9x Illviljan 14371165 2023-04-14T20:38:08Z 2023-04-14T20:38:59Z MEMBER

The mypy 3.9 CI keeps installing the old version, which prevents installing a new version with `python -m pip install mypy`.

```
INSTALLED VERSIONS
------------------
commit: c6eeaa6faa0f3d084f98bb5c9bad777533f07a40
python: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0]
python-bits: 64
OS: Linux
OS-release: 5.15.0-1035-azure
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.1
xarray: 2023.3.1.dev56+gc6eeaa6f
pandas: 2.0.0
numpy: 1.23.5
scipy: 1.10.1
netCDF4: 1.6.3
pydap: installed
h5netcdf: 1.1.0
h5py: 3.8.0
Nio: None
zarr: 2.14.2
cftime: 1.6.2
nc_time_axis: 1.4.1
PseudoNetCDF: 3.2.2
iris: 3.4.1
bottleneck: 1.3.7
dask: 2023.3.2
distributed: 2023.3.2.1
matplotlib: 3.7.1
cartopy: 0.21.1
seaborn: 0.12.2
numbagg: 0.2.2
fsspec: 2023.4.0
cupy: None
pint: 0.20.1
sparse: 0.14.0
flox: 0.6.10
numpy_groupies: 0.9.20
setuptools: 67.6.1
pip: 23.0.1
conda: 23.3.1
pytest: 7.3.0
mypy: 0.982
IPython: None
sphinx: None
```

Using `python -m pip install mypy --force-reinstall` seems to do the trick.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix typing errors using mypy 1.2 1665260014
1505965953 https://github.com/pydata/xarray/pull/7752#issuecomment-1505965953 https://api.github.com/repos/pydata/xarray/issues/7752 IC_kwDOAMm_X85ZwzeB Illviljan 14371165 2023-04-12T21:21:40Z 2023-04-12T21:21:40Z MEMBER

Before:
```
xarray/core/utils.py:116: error: Unused "type: ignore" comment
xarray/core/combine.py:374: error: Unused "type: ignore" comment
xarray/core/rolling.py:379: error: Unused "type: ignore" comment
xarray/core/rolling.py:756: error: Unused "type: ignore" comment
xarray/tests/test_plot.py:2045: error: Argument "col" has incompatible type "str"; expected "None" [arg-type]
xarray/tests/test_plot.py:2054: error: Argument "col" has incompatible type "str"; expected "None" [arg-type]
xarray/tests/test_variable.py:65: error: "staticmethod" expects 2 type arguments, but 1 given [type-arg]
xarray/tests/test_concat.py:543: note: By default the bodies of untyped functions are not checked, consider using --check-untyped-defs [annotation-unchecked]
Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Installing missing stub packages: /home/runner/micromamba-root/envs/xarray-tests/bin/python -m pip install types-PyYAML types-Pygments types-babel types-colorama types-paramiko types-psutil types-pytz types-pywin32 types-setuptools types-urllib3

Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Found 7 errors in 5 files (checked 138 source files)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix typing errors using mypy 1.2 1665260014
1501602776 https://github.com/pydata/xarray/pull/7720#issuecomment-1501602776 https://api.github.com/repos/pydata/xarray/issues/7720 IC_kwDOAMm_X85ZgKPY Illviljan 14371165 2023-04-10T09:27:10Z 2023-04-10T09:27:10Z MEMBER

Thanks, @kmuehlbauer!

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  preserve boolean dtype in encoding 1655000231
1497352228 https://github.com/pydata/xarray/pull/7561#issuecomment-1497352228 https://api.github.com/repos/pydata/xarray/issues/7561 IC_kwDOAMm_X85ZP8gk Illviljan 14371165 2023-04-05T11:47:34Z 2023-04-05T11:47:34Z MEMBER

I'm not sure about the rest of the errors, @dcherian. Maybe IndexVariable needs to use the DataWithCoords mixin?

xarray/core/groupby.py:577: error: Value of type variable "DataAlignable" of "align" cannot be "Union[DataArray, IndexVariable]" [type-var]
xarray/core/groupby.py:577: error: Value of type variable "DataAlignable" of "align" cannot be "Union[Dataset, DataArray, IndexVariable]" [type-var]
xarray/tests/test_groupby.py:55: error: List item 1 has incompatible type "int"; expected "slice" [list-item]

https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/alignment.py#L581-L588

https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/alignment.py#L31

https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/common.py#L376-L377

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Introduce Grouper objects internally 1600382587
1489083542 https://github.com/pydata/xarray/issues/7697#issuecomment-1489083542 https://api.github.com/repos/pydata/xarray/issues/7697 IC_kwDOAMm_X85YwZyW Illviljan 14371165 2023-03-29T18:17:35Z 2023-03-29T18:17:35Z MEMBER

Looks like you've almost got this figured out! Do you want to create a PR for this?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  open_mfdataset very slow 1646267547
1486221550 https://github.com/pydata/xarray/pull/7690#issuecomment-1486221550 https://api.github.com/repos/pydata/xarray/issues/7690 IC_kwDOAMm_X85YlfDu Illviljan 14371165 2023-03-28T05:04:50Z 2023-03-28T05:04:50Z MEMBER

How come the benchmarks are not failing? Shouldn't it be significantly slower now?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Add compute to groupby benchmarks 1643132089
1481852421 https://github.com/pydata/xarray/pull/7668#issuecomment-1481852421 https://api.github.com/repos/pydata/xarray/issues/7668 IC_kwDOAMm_X85YU0YF Illviljan 14371165 2023-03-23T20:28:52Z 2023-03-23T20:28:52Z MEMBER

No need to use a release version now that #7667 is merged.

Might be a good idea to use the next release version though.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Use a released version 1638243008
1481849098 https://github.com/pydata/xarray/pull/7667#issuecomment-1481849098 https://api.github.com/repos/pydata/xarray/issues/7667 IC_kwDOAMm_X85YUzkK Illviljan 14371165 2023-03-23T20:25:47Z 2023-03-23T20:25:47Z MEMBER

Ok, it doesn't use the current PR, confirmed in #7668. But it works!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Undo workaround sync-labels bug 1638194068
1481844145 https://github.com/pydata/xarray/pull/7667#issuecomment-1481844145 https://api.github.com/repos/pydata/xarray/issues/7667 IC_kwDOAMm_X85YUyWx Illviljan 14371165 2023-03-23T20:22:29Z 2023-03-23T20:22:29Z MEMBER

Maybe it's not using this PR? I'll try a merge.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Undo workaround sync-labels bug 1638194068
1481829293 https://github.com/pydata/xarray/pull/7651#issuecomment-1481829293 https://api.github.com/repos/pydata/xarray/issues/7651 IC_kwDOAMm_X85YUuut Illviljan 14371165 2023-03-23T20:10:46Z 2023-03-23T20:10:46Z MEMBER

PR labeler error is unrelated, see #7667.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1632697004
1481827782 https://github.com/pydata/xarray/pull/7667#issuecomment-1481827782 https://api.github.com/repos/pydata/xarray/issues/7667 IC_kwDOAMm_X85YUuXG Illviljan 14371165 2023-03-23T20:09:33Z 2023-03-23T20:09:33Z MEMBER

Am I missing something obvious again?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Undo workaround sync-labels bug 1638194068
1481769212 https://github.com/pydata/xarray/pull/7651#issuecomment-1481769212 https://api.github.com/repos/pydata/xarray/issues/7651 IC_kwDOAMm_X85YUgD8 Illviljan 14371165 2023-03-23T19:22:46Z 2023-03-23T19:22:46Z MEMBER

```python
xarray/core/computation.py:12:1: UP035 `typing.AbstractSet` is deprecated, use `collections.abc.Set` instead
xarray/core/merge.py:5:1: UP035 `typing.AbstractSet` is deprecated, use `collections.abc.Set` instead
xarray/core/parallel.py:7:1: UP035 `typing.DefaultDict` is deprecated, use `collections.defaultdict` instead
```

Isn't it odd that ruff doesn't automatically fix this?
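For reference, the rewrite UP035 asks for is mechanical; a small sketch of the before/after imports (illustrative code, not the actual xarray modules):

```python
# Flagged by UP035 (deprecated typing aliases):
# from typing import AbstractSet, DefaultDict

# Preferred spellings on Python >= 3.9:
from collections import defaultdict
from collections.abc import Set


def count_names(names: Set[str]) -> dict[str, int]:
    counts: defaultdict[str, int] = defaultdict(int)
    for name in names:
        counts[name] += 1
    return dict(counts)
```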

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 1,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1632697004
1475150683 https://github.com/pydata/xarray/issues/7645#issuecomment-1475150683 https://api.github.com/repos/pydata/xarray/issues/7645 IC_kwDOAMm_X85X7QNb Illviljan 14371165 2023-03-19T08:31:51Z 2023-03-19T08:31:51Z MEMBER

Probably from #7494. `encode_cf_variable` only accepts Variables. Replacing

```python
data = encode_cf_variable(out_data).values.astype(numpy_dtype)
```

with

```python
data = encode_cf_variable(out_data.variable).values.astype(numpy_dtype)
```

should fix the error.

mypy should have caught this a while ago when #7374 went in; does `out_data` have defined type hints?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  encode_cf_variable triggers AttributeError: 'DataArray' object has no attribute '_data' 1630746106
1475049548 https://github.com/pydata/xarray/pull/7206#issuecomment-1475049548 https://api.github.com/repos/pydata/xarray/issues/7206 IC_kwDOAMm_X85X63hM Illviljan 14371165 2023-03-19T00:39:00Z 2023-03-19T00:39:00Z MEMBER

Hmm, did I mess something up? I believe I only changed typing related things.

`_bins` suffix seems to have disappeared:
```
def test_groupby_bins_multidim(self):
    array = self.make_groupby_multidim_example_array()
    bins = [0, 15, 20]
    bin_coords = pd.cut(array["lat"].values.flat, bins).categories
    expected = DataArray([16, 40], dims="lat_bins", coords={"lat_bins": bin_coords})
    actual = array.groupby_bins("lat", bins).map(lambda x: x.sum())

        assert_identical(expected, actual)

E       AssertionError: Left and right DataArray objects are not identical
E
E       Differing dimensions:
E           (lat_bins: 2) != (lat: 2)
E       Coordinates only on the left object:
E         * lat_bins  (lat_bins) object (0, 15] (15, 20]
E       Coordinates only on the right object:
E         * lat       (lat) object (0, 15] (15, 20]
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Save groupby codes after factorizing, pass to flox 1421065459
1468702898 https://github.com/pydata/xarray/pull/7603#issuecomment-1468702898 https://api.github.com/repos/pydata/xarray/issues/7603 IC_kwDOAMm_X85XiqCy Illviljan 14371165 2023-03-14T19:27:51Z 2023-03-14T19:27:51Z MEMBER

Error: [ 89.17%] ··· groupby.ResampleDask.time_binary_op_1d  failed
Error: [ 89.17%] ···· Traceback (most recent call last):
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 1293, in main_run_server
    main_run(run_args)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 1167, in main_run
    result = benchmark.do_run()
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 573, in do_run
    return self.run(*self._current_params)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 669, in run
    samples, number = self.benchmark_timing(timer, min_repeat, max_repeat,
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 705, in benchmark_timing
    timing = timer.timeit(number)
  File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/e3a5540da3d30da735a9fb168f264be6/lib/python3.10/timeit.py", line 178, in timeit
    timing = self.inner(it, self.timer)
  File "<timeit-src>", line 6, in inner
  File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/groupby.py", line 127, in time_binary_op_1d
    raise NotImplementedError
NotImplementedError
asv: benchmark failed (exit status 1)

Comparing to https://github.com/pydata/xarray/blob/5043223ca7942c6eb582798aafa843d2efc0895b/asv_bench/benchmarks/__init__.py#L72-L74

Does it have to be initialized? Try adding a short description of why it's not implemented, or a link to the relevant issue.
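For context, asv treats a NotImplementedError raised in setup() as "skip this parameter combination", whereas raising it inside the timed method (as in the log above) is reported as a failure. A hedged sketch of that pattern, with made-up names:

```python
class ResampleSketch:
    # Hypothetical parameterized benchmark illustrating the skip pattern.
    params = [1, 2]
    param_names = ["ndim"]

    def setup(self, ndim):
        if ndim == 2:
            # asv skips this combination instead of reporting a failure.
            raise NotImplementedError("not meaningful for the 2D case")
        self.data = list(range(1_000))

    def time_binary_op_1d(self, ndim):
        [x + 1 for x in self.data]
```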

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Fix groupby binary ops benchmarks 1618336774
1463319021 https://github.com/pydata/xarray/pull/7603#issuecomment-1463319021 https://api.github.com/repos/pydata/xarray/issues/7603 IC_kwDOAMm_X85XOHnt Illviljan 14371165 2023-03-10T06:03:20Z 2023-03-10T06:03:20Z MEMBER

Ahh, there are a few errors in the benchmarks. Are those fixed with your other PRs?

Error: [ 88.75%] ··· ...by.ResampleDask.peakmem_groupby_binary_op_2d  failed
Error: [ 88.75%] ···· Traceback (most recent call last):
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 1293, in main_run_server
    main_run(run_args)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 1167, in main_run
    result = benchmark.do_run()
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 573, in do_run
    return self.run(*self._current_params)
  File "/home/runner/micromamba-root/envs/xarray-tests/lib/python3.10/site-packages/asv/benchmark.py", line 842, in run
    self.func(*param)
  File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/groupby.py", line 136, in peakmem_groupby_binary_op_2d
    self.ds2d.resample(time="48H") - self.ds2d_mean
  File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/e3a5540da3d30da735a9fb168f264be6/lib/python3.10/site-packages/xarray/core/_typed_ops.py", line 589, in __sub__
    return self._binary_op(other, operator.sub)
  File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/e3a5540da3d30da735a9fb168f264be6/lib/python3.10/site-packages/xarray/core/groupby.py", line 601, in _binary_op
    raise ValueError(
ValueError: incompatible dimensions for a grouped binary operation: the group variable '__resample_dim__' is not a dimension on the other argument
asv: benchmark failed (exit status 1)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [skip-ci] Fix groupby binary ops benchmarks 1618336774
1461318814 https://github.com/pydata/xarray/pull/7600#issuecomment-1461318814 https://api.github.com/repos/pydata/xarray/issues/7600 IC_kwDOAMm_X85XGfSe Illviljan 14371165 2023-03-09T05:38:40Z 2023-03-09T05:38:40Z MEMBER

I think we should stick to black's default values. Otherwise I'll want to start arguing for line width=79 so we follow PEP 8. ;)

You can still run this cleanup though, so you still get your LOCs. :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Enable blacks `skip_magic_trailing_comma` options 1615980379
1460756497 https://github.com/pydata/xarray/pull/7595#issuecomment-1460756497 https://api.github.com/repos/pydata/xarray/issues/7595 IC_kwDOAMm_X85XEWAR Illviljan 14371165 2023-03-08T19:45:49Z 2023-03-08T19:45:49Z MEMBER

I don't enjoy using git, so I'll plug GitHub Desktop. I think it will remove much of the friction git causes for beginners: https://www.youtube.com/watch?v=l7uo1d3R0Wo https://www.youtube.com/watch?v=qUYkRWGWntE

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Clarifications in contributors guide 1615570467
1452462512 https://github.com/pydata/xarray/pull/7442#issuecomment-1452462512 https://api.github.com/repos/pydata/xarray/issues/7442 IC_kwDOAMm_X85WktGw Illviljan 14371165 2023-03-02T19:52:23Z 2023-03-02T19:52:23Z MEMBER

Seems we're experiencing this issue now: https://github.com/pydata/pydata-sphinx-theme/issues/1220

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  update the docs environment 1534634670
1445221673 https://github.com/pydata/xarray/pull/7427#issuecomment-1445221673 https://api.github.com/repos/pydata/xarray/issues/7427 IC_kwDOAMm_X85WJFUp Illviljan 14371165 2023-02-25T22:51:37Z 2023-02-25T22:51:37Z MEMBER

This one is failing on CI / ubuntu-latest py3.9 bare-minimum; is it perhaps taking a path without flox installed?

```
____ TestDataArrayGroupBy.test_groupby_fastpath_for_monotonic ____
[gw2] linux -- Python 3.9.16 /home/runner/micromamba-root/envs/xarray-tests/bin/python

self = <xarray.tests.test_groupby.TestDataArrayGroupBy object at 0x7fbd9f461c10>

def test_groupby_fastpath_for_monotonic(self):
    # Fixes https://github.com/pydata/xarray/issues/6220
    index = [1, 2, 3, 4, 7, 9, 10]
    array = DataArray(np.arange(len(index)), [("idx", index)])
    array_rev = array.copy().assign_coords({"idx": index[::-1]})
    fwd = array.groupby("idx", squeeze=False)
    rev = array_rev.groupby("idx", squeeze=False)

    for gb in [fwd, rev]:
        assert all([isinstance(elem, slice) for elem in gb._group_indices])

    assert_identical(fwd.sum(), array)
  assert_identical(rev.sum(), array_rev.sortby("idx"))

E       AssertionError: Left and right DataArray objects are not identical
E
E       Differing values:
E       L
E           array([0, 1, 2, 3, 4, 5, 6])
E       R
E           array([6, 5, 4, 3, 2, 1, 0])
E       Differing coordinates:
E       L * idx      (idx) int64 10 9 7 4 3 2 1
E       R * idx      (idx) int64 1 2 3 4 7 9 10
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Change .groupby fastpath to work for monotonic increasing and decreasing 1523260646
1433686861 https://github.com/pydata/xarray/issues/4610#issuecomment-1433686861 https://api.github.com/repos/pydata/xarray/issues/4610 IC_kwDOAMm_X85VdFNN Illviljan 14371165 2023-02-16T20:39:54Z 2023-02-16T20:39:54Z MEMBER

Nice, I was looking at the real example too, `Temp_url = 'http://apdrc.soest.hawaii.edu:80/dods/public_data/WOA/WOA13/5_deg/annual/temp'` etc., and it was triggering a load in set_dims:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add histogram method 750985364
1433681353 https://github.com/pydata/xarray/pull/6874#issuecomment-1433681353 https://api.github.com/repos/pydata/xarray/issues/6874 IC_kwDOAMm_X85VdD3J Illviljan 14371165 2023-02-16T20:34:16Z 2023-02-16T20:34:16Z MEMBER

I don't have a better idea than to do `DuckArray = Any  # ndarray/cupy/sparse etc.` and add that as the output type, but that won't change anything mypy-wise besides making it easier for us to read the code.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Avoid calling np.asarray on lazy indexing classes 1327380960
1433670641 https://github.com/pydata/xarray/issues/4610#issuecomment-1433670641 https://api.github.com/repos/pydata/xarray/issues/4610 IC_kwDOAMm_X85VdBPx Illviljan 14371165 2023-02-16T20:24:51Z 2023-02-16T20:25:36Z MEMBER
  • Absolute speed of xhistogram appears to be 3-4x higher, and that's using numpy_groupies in flox. Possibly flox could be faster if using numba but not sure yet.

Could you show the example that's this slow, @TomNicholas, so I can play around with it too?

One thing I noticed in your notebook is that you haven't used chunks={} in open_dataset, which seems to trigger data loading in strange places in xarray (places that call self.data). I'm not sure this is your actual problem, though.
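A minimal sketch of what I mean (the URL is just a placeholder):

```python
import xarray as xr

# chunks={} wraps the variables in dask arrays using the on-disk chunking,
# so values are only loaded once .compute()/.load() is called.
ds = xr.open_dataset("http://example.com/some_dataset", chunks={})
```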

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Add histogram method 750985364
1426819173 https://github.com/pydata/xarray/pull/7521#issuecomment-1426819173 https://api.github.com/repos/pydata/xarray/issues/7521 IC_kwDOAMm_X85VC4hl Illviljan 14371165 2023-02-11T16:42:44Z 2023-02-11T16:42:44Z MEMBER

```
dask-core     2023.2.0    pyhd8ed1ab_0    conda-forge
distarray     2.12.2      pyh050c7b8_4    conda-forge
distlib       0.3.6       pyhd8ed1ab_0    conda-forge
distributed   2021.4.1    py39hf3d152e_1  conda-forge
```

Yeah, that looks like an old version. py.typed was added around September, so that part makes sense.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  use numpys SupportsDtype 1580266844
1426817848 https://github.com/pydata/xarray/pull/7521#issuecomment-1426817848 https://api.github.com/repos/pydata/xarray/issues/7521 IC_kwDOAMm_X85VC4M4 Illviljan 14371165 2023-02-11T16:36:34Z 2023-02-11T16:36:34Z MEMBER

I don't get the error. Distributed has a py.typed file: https://github.com/dask/distributed/blob/main/distributed/py.typed. And if it was a py.typed issue we should have seen it in other PRs, and I haven't seen one with this error yet.

@TomNicholas added py.typed to the package data, https://github.com/xarray-contrib/datatree/commit/927749a7702761491461f9bdfa8a0c1fbc244d85. But still, if distributed also needs to do this we should be seeing this error in more PRs.

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.2/49.2 kB 3.6 MB/s eta 0:00:00
Collecting types-docutils
  Downloading types_docutils-0.19.1.3-py3-none-any.whl (16 kB)
Installing collected packages: types-PyYAML, types-pytz, types-docutils, types-setuptools
Successfully installed types-PyYAML-6.0.12.5 types-docutils-0.19.1.3 types-pytz-2022.7.1.0 types-setuptools-67.2.0.1
xarray/tests/test_distributed.py:14: error: Skipping analyzing "distributed": module is installed, but missing library stubs or py.typed marker  [import]
xarray/tests/test_distributed.py:21: error: Skipping analyzing "distributed.client": module is installed, but missing library stubs or py.typed marker  [import]
xarray/tests/test_distributed.py:21: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
xarray/tests/test_distributed.py:22: error: Skipping analyzing "distributed.utils_test": module is installed, but missing library stubs or py.typed marker  [import]
Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Installing missing stub packages: /home/runner/micromamba-root/envs/xarray-tests/bin/python -m pip install types-PyYAML types-pytz types-setuptools

Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Found 3 errors in 1 file (checked 140 source files)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  use numpys SupportsDtype 1580266844
1414187606 https://github.com/pydata/xarray/issues/5081#issuecomment-1414187606 https://api.github.com/repos/pydata/xarray/issues/5081 IC_kwDOAMm_X85USspW Illviljan 14371165 2023-02-02T18:32:23Z 2023-02-02T18:32:23Z MEMBER

It is recommended to use it for lazy backends though: https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html#how-to-support-lazy-loading
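Roughly what the linked docs describe, as a minimal sketch (everything apart from the xarray classes and `explicit_indexing_adapter` is made up):

```python
from xarray.backends import BackendArray
from xarray.core import indexing


class MyLazyArray(BackendArray):
    """Sketch of a backend array that supports lazy indexing."""

    def __init__(self, shape, dtype):
        self.shape = shape
        self.dtype = dtype

    def __getitem__(self, key):
        # Let xarray translate whatever indexer it gets into basic indexing
        # and wrap this array in its lazy indexing classes.
        return indexing.explicit_indexing_adapter(
            key, self.shape, indexing.IndexingSupport.BASIC, self._raw_indexing_method
        )

    def _raw_indexing_method(self, key):
        # Actually read the requested slice from the file/remote store here.
        raise NotImplementedError
```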

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Lazy indexing arrays as a stand-alone package 842436143
1411536785 https://github.com/pydata/xarray/pull/7418#issuecomment-1411536785 https://api.github.com/repos/pydata/xarray/issues/7418 IC_kwDOAMm_X85UIleR Illviljan 14371165 2023-02-01T06:34:08Z 2023-02-01T06:34:08Z MEMBER

Hmm, I don't understand. Adding py.typed should be all that's needed; I did that in flox and it worked great there: https://github.com/xarray-contrib/flox/pull/92

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Import datatree in xarray? 1519552711
1411177667 https://github.com/pydata/xarray/pull/7494#issuecomment-1411177667 https://api.github.com/repos/pydata/xarray/issues/7494 IC_kwDOAMm_X85UHNzD Illviljan 14371165 2023-01-31T22:49:24Z 2023-01-31T22:49:24Z MEMBER

@agoodm, what do you think of this version? Using xr.Variable directly seems a little easier to work with than trying to guess which type of array (cupy, dask, pint, backend array, etc.) is in the variable.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Update contains_cftime_datetimes to avoid loading entire variable array 1563270549
1410959131 https://github.com/pydata/xarray/pull/7496#issuecomment-1410959131 https://api.github.com/repos/pydata/xarray/issues/7496 IC_kwDOAMm_X85UGYcb Illviljan 14371165 2023-01-31T19:38:59Z 2023-01-31T19:38:59Z MEMBER

It would be nice to have a good copy/paste example for open_dataset. For example, I think chunks has a different default compared to open_zarr.
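A sketch of what such an example could look like (the file name is a placeholder, and I'm going from memory on the defaults):

```python
import xarray as xr

# open_dataset returns numpy-backed variables by default (chunks=None),
# while open_zarr defaults to returning chunked, dask-backed variables.
ds_eager = xr.open_dataset("data.nc")
ds_lazy = xr.open_dataset("data.nc", chunks={})  # dask arrays, on-disk chunking
```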

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate open_zarr 1564661430
1409323892 https://github.com/pydata/xarray/issues/7484#issuecomment-1409323892 https://api.github.com/repos/pydata/xarray/issues/7484 IC_kwDOAMm_X85UAJN0 Illviljan 14371165 2023-01-30T20:59:36Z 2023-01-30T21:00:40Z MEMBER

You can use var._data instead of var.data. There have been a few cases recently where self.data doesn't play so nicely when reading from a file.
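For illustration, with a hypothetical variable from a freshly opened file:

```python
var = ds.variables["some_var"]  # hypothetical variable name

wrapped = var._data  # the wrapped (possibly lazy) array, nothing is loaded
values = var.data    # may convert/load the values into memory
```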

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening datasets with large object dtype arrays is very slow 1561508426
1409243024 https://github.com/pydata/xarray/issues/7484#issuecomment-1409243024 https://api.github.com/repos/pydata/xarray/issues/7484 IC_kwDOAMm_X85T_1eQ Illviljan 14371165 2023-01-30T19:53:03Z 2023-01-30T19:53:03Z MEMBER

Feel free to start working on that PR. 👍 It looks like _contains_cftime_datetimes tries to do something similar to your solution, so I think the change should go there.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Opening datasets with large object dtype arrays is very slow 1561508426
1405634513 https://github.com/pydata/xarray/pull/7277#issuecomment-1405634513 https://api.github.com/repos/pydata/xarray/issues/7277 IC_kwDOAMm_X85TyEfR Illviljan 14371165 2023-01-26T20:51:43Z 2023-01-26T20:51:43Z MEMBER

pre-commit.ci autofix

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
  Require to explicitly defining optional dimensions such as hue and markersize 1444440024
1402675394 https://github.com/pydata/xarray/pull/7472#issuecomment-1402675394 https://api.github.com/repos/pydata/xarray/issues/7472 IC_kwDOAMm_X85TmyDC Illviljan 14371165 2023-01-24T21:15:37Z 2023-01-24T21:57:10Z MEMBER

I like these kinds of improvements :)

With ravel_chunks:

```
       before           after         ratio
     [3ee7b5a6]       [e549724e]
-          983M             183M      0.19  pandas.ToDataFrameDask.peakmem_to_dataframe
-       2.76±0s      7.76±0.08ms      0.00  pandas.ToDataFrameDask.time_to_dataframe
```

With reshape:

```
       before           after         ratio
     [3ee7b5a6]       [02a4e97f]
-          983M             183M      0.19  pandas.ToDataFrameDask.peakmem_to_dataframe
-       2.78±0s       9.20±0.1ms      0.00  pandas.ToDataFrameDask.time_to_dataframe
```

{
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
  Avoid in-memory broadcasting when converting to_dask_dataframe 1554036799
1402567748 https://github.com/pydata/xarray/pull/7474#issuecomment-1402567748 https://api.github.com/repos/pydata/xarray/issues/7474 IC_kwDOAMm_X85TmXxE Illviljan 14371165 2023-01-24T20:13:12Z 2023-01-24T21:00:38Z MEMBER

Seems to work. :) I'll do a quick merge and continue in #7472.

```
[ 68.06%] ··· pandas.ToDataFrame.peakmem_to_dataframe          2.89G
[ 68.19%] ··· pandas.ToDataFrame.time_to_dataframe             1.39±0.02s
[ 68.33%] ··· pandas.ToDataFrameDask.peakmem_to_dataframe      983M
[ 68.47%] ··· pandas.ToDataFrameDask.time_to_dataframe         2.77±0s
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add benchmarks for to_dataframe and to_dask_dataframe 1555497796
1402594544 https://github.com/pydata/xarray/pull/7461#issuecomment-1402594544 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TmeTw Illviljan 14371165 2023-01-24T20:27:02Z 2023-01-24T20:27:02Z MEMBER

I'm not sure at all about this but maybe you're supposed to go back to something like this? https://github.com/pydata/xarray/pull/6834/commits/7fcc11ffb20583caf1976191997bd2a7525ac218#diff-791d93adb64d0986ac499ce1ba831cc95b4ffbde0dfe98b28d929935b05d7134L49

```python
from numpy.typing._dtype_like import _DTypeLikeNested, _ShapeLike, _SupportsDType
```

From #6834.

{
    "total_count": 2,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 1,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1400218891 https://github.com/pydata/xarray/pull/7353#issuecomment-1400218891 https://api.github.com/repos/pydata/xarray/issues/7353 IC_kwDOAMm_X85TdaUL Illviljan 14371165 2023-01-23T11:51:56Z 2023-01-23T11:51:56Z MEMBER

Creating a separate environment file seems like a good idea. Feel free to push the changes you want, @keewis. I was thinking of coming back to this once 3.8 was dropped.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add python 3.11 to CI 1474717029
1399202401 https://github.com/pydata/xarray/pull/7277#issuecomment-1399202401 https://api.github.com/repos/pydata/xarray/issues/7277 IC_kwDOAMm_X85TZiJh Illviljan 14371165 2023-01-21T07:54:29Z 2023-01-21T07:54:29Z MEMBER

This has stalled for long enough. I'll merge this at the end of next week unless someone disagrees. This pretty much reverts to the original ds.plot.scatter behaviour, because defining fewer dimensions than x and y in ds.plot.scatter was never allowed before anyway.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Require to explicitly defining optional dimensions such as hue and markersize 1444440024
1399201567 https://github.com/pydata/xarray/pull/7318#issuecomment-1399201567 https://api.github.com/repos/pydata/xarray/issues/7318 IC_kwDOAMm_X85TZh8f Illviljan 14371165 2023-01-21T07:48:36Z 2023-01-21T07:48:36Z MEMBER

This has stalled for long enough. I'll merge this at the end of next week unless someone disagrees; at least it is an improvement over the currently buggy main.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use plt.rc_context for default styles 1462470712
1398790498 https://github.com/pydata/xarray/pull/7353#issuecomment-1398790498 https://api.github.com/repos/pydata/xarray/issues/7353 IC_kwDOAMm_X85TX9li Illviljan 14371165 2023-01-20T18:42:31Z 2023-01-20T18:42:31Z MEMBER

87d689a shows that we have issues with 3.8 when any of numba/cdms2/numbagg is not available; then we take a different code path that fails somehow. Hopefully it's fixed with python 3.9.

Failing tests with python 3.8 on windows only, without numba/cdms2/numbagg:

``` FAILED xarray/tests/test_dataarray.py::TestNumpyCoercion::test_from_sparse - ValueError: Performing this operation would produce a dense result: <ufunc 'add'> FAILED xarray/tests/test_dataset.py::TestDataset::test_unstack_sparse - TypeError: __init__() got an unexpected keyword argument 'fill_value' FAILED xarray/tests/test_dataset.py::TestDataset::test_from_dataframe_sparse - TypeError: __init__() got an unexpected keyword argument 'fill_value' FAILED xarray/tests/test_dataset.py::TestNumpyCoercion::test_from_sparse - ValueError: Performing this operation would produce a dense result: <ufunc 'add'> FAILED xarray/tests/test_dataarray.py::TestDataArray::test_from_series_sparse - TypeError: __init__() got an unexpected keyword argument 'fill_value' FAILED xarray/tests/test_dataarray.py::TestDataArray::test_from_multiindex_series_sparse - TypeError: from_numpy() takes 2 positional arguments but 3 were given FAILED xarray/tests/test_sparse.py::test_variable_method[obj.all(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.any(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.astype(*(), **{'dtype': <class 'int'>})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.clip(*(), **{'min': 0, 'max': 1})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.coarsen(*(), **{'windows': {'x': 2}, 'func': 'sum'})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.compute(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.conj(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.copy(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.count(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.get_axis_num(*(), **{'dim': 'x'})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.isel(*(), **{'x': slice(2, 4, None)})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.isnull(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.load(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.mean(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.notnull(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.roll(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.round(*(), **{})-True] - 
AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.set_dims(*(), **{'dims': ('x', 'y', 'z')})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.stack(*(), **{'dimensions': {'flat': ('x', 'y')}})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.to_base_variable(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.transpose(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.unstack(*(), **{'dimensions': {'x': {'x1': 5, 'x2': 2}}})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.broadcast_equals(*(<xarray.Variable (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. ]]),), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.equals(*(<xarray.Variable (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. ]]),), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.identical(*(<xarray.Variable (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. 
]]),), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.fillna(*(0,), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.max(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.min(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.prod(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.sum(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_variable_method[obj.where(*(), **{'cond': <xarray.Variable (x: 10, y: 5)>\narray([[False, False, False, True, False],\n [False, False, False, False, False],\n [False, False, False, False, False],\n [False, False, False, False, False],\n [False, False, False, False, False],\n [False, False, False, False, False],\n [False, False, False, False, False],\n [False, False, False, False, True],\n [False, False, False, False, False],\n [False, False, True, False, False]])})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_1d_variable_method[func0-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_nbytes - assert 192 == 120 + where 192 = <xarray.Variable (x: 4, y: 6)>\narray([[0. , 0.87001215, 0. , 0. , 0. ,\n 0. ]...6147936,\n 0.63992102],\n [0. , 0. , 0.77815675, 0. , 0.0202184 ,\n 0.79915856]]).nbytes + where <xarray.Variable (x: 4, y: 6)>\narray([[0. , 0.87001215, 0. , 0. , 0. ,\n 0. ]...6147936,\n 0.63992102],\n [0. , 0. , 0.77815675, 0. , 0.0202184 ,\n 0.79915856]]) = <xarray.tests.test_sparse.TestSparseVariable object at 0x000001CD9F898520>.var + and 120 = <COO: shape=(4, 6), dtype=float64, nnz=12, sorted=True, duplicates=False>.nbytes + where <COO: shape=(4, 6), dtype=float64, nnz=12, sorted=True, duplicates=False> = <xarray.tests.test_sparse.TestSparseVariable object at 0x000001CD9F898520>.data FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_unary_op - AssertionError: assert False + where False = isinstance(array([[-0. , -0.87001215, -0. , -0. , -0. ,\n -0. ],\n [-0.11827443, -0...,\n -0.63992102],\n [-0. , -0. , -0.77815675, -0. , -0.0202184 ,\n -0.79915856]]), (<class 'sparse.sparse_array.SparseArray'>,)) FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_univariate_ufunc - AssertionError: assert False + where False = isinstance(array([[0. , 0.76433677, 0. , 0. , 0. ,\n 0. ],\n [0.11799887, 0. ...4527321,\n 0.59713209],\n [0. , 0. , 0.70196783, 0. , 0.02021702,\n 0.7167696 ]]), (<class 'sparse.sparse_array.SparseArray'>,)) FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_bivariate_ufunc - AssertionError: assert False + where False = isinstance(array([[0. , 0.87001215, 0. , 0. , 0. ,\n 0. ],\n [0.11827443, 0. ...6147936,\n 0.63992102],\n [0. , 0. , 0.77815675, 0. 
, 0.0202184 ,\n 0.79915856]]), (<class 'sparse.sparse_array.SparseArray'>,)) FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_repr - AssertionError: assert '<xarray.Vari...ll_value=0.0>' == '<xarray.Vari...0.79915856]])' <xarray.Variable (x: 4, y: 6)> + <COO: shape=(4, 6), dtype=float64, nnz=12, fill_value=0.0> - array([[0. , 0.87001215, 0. , 0. , 0. , - 0. ], - [0.11827443, 0. , 0.78052918, 0. , 0.0871293 , - 0.07103606], - [0. , 0.97861834, 0.83261985, 0. , 0.46147936, - 0.63992102], - [0. , 0. , 0.77815675, 0. , 0.0202184 , - 0.79915856]]) FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_pickle - AssertionError: assert False + where False = isinstance(array([[0. , 0.87001215, 0. , 0. , 0. ,\n 0. ],\n [0.11827443, 0. ...6147936,\n 0.63992102],\n [0. , 0. , 0.77815675, 0. , 0.0202184 ,\n 0.79915856]]), (<class 'sparse.sparse_array.SparseArray'>,)) FAILED xarray/tests/test_sparse.py::TestSparseVariable::test_missing_values - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.all(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.any(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.assign_attrs(*({'foo': 'bar'},), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.assign_coords(*(), **{'x': <xarray.DataArray 'x' (x: 10)>\narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\nCoordinates:\n * x (x) int32 0 1 2 3 4 5 6 7 8 9})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.astype(*(<class 'int'>,), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.clip(*(), **{'min': 0, 'max': 1})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.compute(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.conj(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.copy(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.count(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.diff(*('x',), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.drop_vars(*('x',), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.expand_dims(*({'z': 2},), **{'axis': 2})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.get_axis_num(*('x',), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.get_index(*('x',), **{})-False] - AttributeError: 'numpy.ndarray' object has no 
attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.identical(*(<xarray.DataArray 'test' (x: 5, y: 5)>\narray([[0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0.71518937, 0. , 0. ],\n [0.60276338, 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ]])\nCoordinates:\n * x (x) int32 0 1 2 3 4\n * y (y) int32 0 1 2 3 4,), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.integrate(*('x',), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.isel(*({'x': slice(0, 3, None), 'y': slice(2, 4, None)},), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.isnull(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.load(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.mean(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.persist(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.reindex(*({'x': [1, 2, 3]},), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.rename(*('foo',), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.reorder_levels(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.reset_coords(*(), **{'drop': True})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.reset_index(*('x',), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.round(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.sel(*(), **{'x': [0, 1, 2]})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.shift(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.sortby(*('x',), **{'ascending': False})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.stack(*(), **{'z': ['x', 'y']})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.transpose(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.broadcast_equals(*(<xarray.Variable (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. 
, 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. ]]),), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.equals(*(<xarray.Variable (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. ]]),), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.combine_first(*(<xarray.DataArray 'test' (x: 10, y: 5)>\narray([[0.43758721, 0. , 0. , 0.891773 , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0. ],\n [0. , 0. , 0. , 0. , 0.96366276],\n [0. , 0. , 0. , 0. , 0.4236548 ],\n [0. , 0. , 0.64589411, 0. , 0. ]])\nCoordinates:\n * x (x) int32 0 1 2 3 4 5 6 7 8 9\n * y (y) int32 0 1 2 3 4,), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.fillna(*(0,), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.max(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.min(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.notnull(*(), **{})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.pipe(*(), **{'func': 'sum', 'axis': 1})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.prod(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.roll(*(), **{'x': 2, 'roll_coords': True})-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dataarray_method[obj.sum(*(), **{})-False] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_datarray_1d_method[func0-True] - AttributeError: 'numpy.ndarray' object has no attribute 'todense' FAILED xarray/tests/test_sparse.py::test_dask_token - AssertionError: assert False + where False = isinstance(array([0, 0, 1, 2]), <class 'sparse.coo.COO'>) + where array([0, 0, 1, 2]) = <xarray.DataArray (dim_0: 4)>\narray([0, 0, 1, 2])\nDimensions without coordinates: dim_0.data + and <class 'sparse.coo.COO'> = sparse.COO FAILED xarray/tests/test_sparse.py::test_apply_ufunc_check_meta_coherence - AssertionError: assert False + where False = isinstance(array([], dtype=int32), (<class 'sparse.sparse_array.SparseArray'>,)) FAILED xarray/tests/test_variable.py::TestVariableWithSparse::test_as_sparse - ValueError: coo is not a valid sparse format ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_to_dataset_roundtrip - ValueError: could not broadcast input array from shape (4,6) into 
shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_align - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_align_outer - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_concat - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_stack - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_dataarray_repr - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_dataset_repr - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_sparse_dask_dataset_repr - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_dataarray_pickle - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_dataset_pickle - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) ERROR xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_coarsen - ValueError: could not broadcast input array from shape (4,6) into shape (4,1) = 93 failed, 14715 passed, 1640 skipped, 211 xfailed, 66 xpassed, 327 warnings, 11 errors in 699.08s (0:11:39) = ```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add python 3.11 to CI 1474717029
1398730901 https://github.com/pydata/xarray/pull/7353#issuecomment-1398730901 https://api.github.com/repos/pydata/xarray/issues/7353 IC_kwDOAMm_X85TXvCV Illviljan 14371165 2023-01-20T17:41:22Z 2023-01-20T17:41:22Z MEMBER

Deprecation warning in pydap failing the docstring tests:

```
ImportError while loading conftest '/home/runner/work/xarray/xarray/xarray/tests/conftest.py'.
xarray/tests/__init__.py:64: in <module>
    has_pydap, requires_pydap = _importorskip("pydap.client")
xarray/tests/__init__.py:51: in _importorskip
    mod = importlib.import_module(modname)
../../../micromamba-root/envs/xarray-tests/lib/python3.11/site-packages/pydap/client.py:52: in <module>
    from .net import GET, raise_for_status
../../../micromamba-root/envs/xarray-tests/lib/python3.11/site-packages/pydap/net.py:1: in <module>
    from webob.request import Request
../../../micromamba-root/envs/xarray-tests/lib/python3.11/site-packages/webob/__init__.py:1: in <module>
    from webob.datetime_utils import (  # noqa: F401
../../../micromamba-root/envs/xarray-tests/lib/python3.11/site-packages/webob/datetime_utils.py:18: in <module>
    from webob.compat import (
../../../micromamba-root/envs/xarray-tests/lib/python3.11/site-packages/webob/compat.py:5: in <module>
    from cgi import parse_header
../../../micromamba-root/envs/xarray-tests/lib/python3.11/cgi.py:57: in <module>
    warnings._deprecated(__name__, remove=(3,13))
../../../micromamba-root/envs/xarray-tests/lib/python3.11/warnings.py:514: in _deprecated
    warn(msg, DeprecationWarning, stacklevel=3)
E   DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add python 3.11 to CI 1474717029
1398702023 https://github.com/pydata/xarray/pull/7461#issuecomment-1398702023 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85TXn_H Illviljan 14371165 2023-01-20T17:21:34Z 2023-01-20T17:21:34Z MEMBER

@jhamman, I believe you should simply remove GenericAlias and the __class_getitem__; it was a hack copied from a newer version of collections, see https://github.com/pydata/xarray/pull/7285#discussion_r1027253337.

Commit testing the hack: https://github.com/pydata/xarray/pull/7285/commits/d8bef27e54aa9e81873d5d64fca6a1d4d324ca62

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1397515014 https://github.com/pydata/xarray/issues/7457#issuecomment-1397515014 https://api.github.com/repos/pydata/xarray/issues/7457 IC_kwDOAMm_X85TTGMG Illviljan 14371165 2023-01-19T19:49:19Z 2023-01-19T19:49:19Z MEMBER

I feel like this has been mentioned before.

Yes, now I recall, it was here #6894.

I think it could be interesting to try out: https://github.com/pmeier/array-protocol

Another good read: https://github.com/data-apis/array-api/issues/229

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Typing of internal datatypes 1548948097
1397074522 https://github.com/pydata/xarray/issues/7457#issuecomment-1397074522 https://api.github.com/repos/pydata/xarray/issues/7457 IC_kwDOAMm_X85TRapa Illviljan 14371165 2023-01-19T14:35:37Z 2023-01-19T14:35:37Z MEMBER

I've been thinking that a good and safe start on this issue is to replace all these raw Any's with T_DuckArray, where T_DuckArray = Any. It won't help with the type checking, but it gives some traceability and makes it easier to determine what the intention was.
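A minimal sketch of the idea (the function is just a made-up example):

```python
from typing import Any

# No stricter checking than a plain Any, but the intent is documented and the
# alias can later be swapped for a real protocol.
T_DuckArray = Any


def _example(data: T_DuckArray) -> T_DuckArray:  # hypothetical function
    return data
```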

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Typing of internal datatypes 1548948097
1373979208 https://github.com/pydata/xarray/pull/7424#issuecomment-1373979208 https://api.github.com/repos/pydata/xarray/issues/7424 IC_kwDOAMm_X85R5UJI Illviljan 14371165 2023-01-06T18:18:56Z 2023-01-15T16:09:05Z MEMBER

The array API standard doesn't define any nan*-functions, and xarray pretty much always defaults to using the nan*-functions. :(

So any time an array has an __array_namespace__ we have to use our own nan-handling, like for .sum: https://github.com/pydata/xarray/blob/d6d24507793af9bcaed79d7f8d3ac910e176f1ce/xarray/core/duck_array_ops.py#L288-L295

Any thoughts? Are there any clever ways to handle the aggregations with NaNs in a generic way?

edit: numpy implementations: https://github.com/numpy/numpy/blob/main/numpy/lib/nanfunctions.py
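One generic approach could be to mask the NaNs with the reduction's identity before reducing, roughly like this sketch (not how xarray currently does it):

```python
def nansum_array_api(x, axis=None):
    # Works for any array exposing __array_namespace__ (array API standard).
    xp = x.__array_namespace__()
    mask = xp.isnan(x)
    filled = xp.where(mask, xp.asarray(0, dtype=x.dtype), x)
    return xp.sum(filled, axis=axis)
```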

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  array api - Add tests for aggregations 1522810384
1383112964 https://github.com/pydata/xarray/issues/7439#issuecomment-1383112964 https://api.github.com/repos/pydata/xarray/issues/7439 IC_kwDOAMm_X85ScKEE Illviljan 14371165 2023-01-15T10:24:55Z 2023-01-15T10:31:29Z MEMBER

I've found this part of the documentation a bit scary in the past, because there is so much stuff to do for a simple "typo fix", so I welcome changes here!

Nowadays, since the CI runs pytest, the ASV performance benchmarks, pre-commit and the documentation build on every PR, it isn't really necessary for new contributors to get all of it running locally. The only thing left is setting up a new branch with git, changing the code and creating a PR. If the CI fails, check the details of that test, correct the code and push a new commit. Setting up git can also be scary for new users, but then there is GitHub Desktop, which hides all of that away in a quite user-friendly GUI.

Sure, if you're doing bigger changes, running the specific tests locally is probably faster, but at that point you're probably a more experienced contributor anyway.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add clarifying language to contributor's guide 1532853152
1379571092 https://github.com/pydata/xarray/pull/7374#issuecomment-1379571092 https://api.github.com/repos/pydata/xarray/issues/7374 IC_kwDOAMm_X85SOpWU Illviljan 14371165 2023-01-11T22:33:55Z 2023-01-11T22:33:55Z MEMBER

Benchmarks are improved if I understand the logs correctly. Unfortunately not significantly enough for ASV to report it though; the ratio has to be >1.5 and the improvements on .time_open_dataset are around 1.3-1.4.

```
PR:

[ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 50.85%] ··· ======== =========
               chunks
              -------- ---------
                None    130±1ms
                  {}    689±6ms
              ======== =========

[ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 54.69%] ··· ========= ============= =============
              --                  chunks
              --------- ---------------------------
               engine       None            {}
              ========= ============= =============
                scipy    5.48±0.04ms   6.91±0.01ms
               netcdf4   2.93±0.04ms   4.32±0.02ms
              ========= ============= =============

Baseline:

[ 75.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 75.85%] ··· ======== ===========
               chunks
              -------- -----------
                None    177±0.5ms
                  {}     737±3ms
              ======== ===========

[ 79.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 79.69%] ··· ========= ============ ============
              --                 chunks
              --------- -------------------------
               engine       None          {}
              ========= ============ ============
                scipy    4.47±0.6ms   5.74±0.7ms
               netcdf4   4.39±0.7ms   5.82±0.6ms
              ========= ============ ============
```

```
PR:

[ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 50.85%] ··· ======== ==========
               chunks
              -------- ----------
                None     149±4ms
                  {}    797±20ms
              ======== ==========

[ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 54.69%] ··· ========= ============ =============
              --                 chunks
              --------- --------------------------
               engine       None           {}
              ========= ============ =============
                scipy    6.57±0.2ms   7.77±0.01ms
               netcdf4   3.71±0.1ms    6.17±0.5ms
              ========= ============ =============

Baseline:

[ 75.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 75.85%] ··· ======== ==========
               chunks
              -------- ----------
                None     204±2ms
                  {}    857±20ms
              ======== ==========

[ 79.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 79.69%] ··· ========= ============ ============
              --                 chunks
              --------- -------------------------
               engine       None          {}
              ========= ============ ============
                scipy     5.53±1ms    7.12±0.8ms
               netcdf4   4.96±0.6ms   6.74±0.8ms
              ========= ============ ============
```

```
PR:

[ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 50.85%] ··· ======== ============
               chunks
              -------- ------------
                None      204±8ms
                  {}    1.20±0.04s
              ======== ============

[ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 54.69%] ··· ========= ============ ============
              --                 chunks
              --------- -------------------------
               engine       None          {}
              ========= ============ ============
               netcdf4   6.86±0.7ms   9.81±0.6ms
                scipy     6.74±1ms     9.10±1ms
              ========= ============ ============

Baseline:

[ 75.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 75.85%] ··· ======== ============
               chunks
              -------- ------------
                None      282±5ms
                  {}    1.20±0.04s
              ======== ============

[ 79.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 79.69%] ··· ========= ============ ============
              --                 chunks
              --------- -------------------------
               engine       None          {}
              ========= ============ ============
               netcdf4    6.91±1ms    9.77±0.7ms
                scipy    6.91±0.7ms    9.11±1ms
              ========= ============ ============
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Improve performance for backend datetime handling 1490160140
1378015095 https://github.com/pydata/xarray/pull/7431#issuecomment-1378015095 https://api.github.com/repos/pydata/xarray/issues/7431 IC_kwDOAMm_X85SItd3 Illviljan 14371165 2023-01-10T23:10:32Z 2023-01-10T23:10:32Z MEMBER

Seems to work, tested a little here #7426.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Workaround sync-labels bug 1528100871
1378010601 https://github.com/pydata/xarray/pull/7431#issuecomment-1378010601 https://api.github.com/repos/pydata/xarray/issues/7431 IC_kwDOAMm_X85SIsXp Illviljan 14371165 2023-01-10T23:05:43Z 2023-01-10T23:05:58Z MEMBER

Seems the bot uses the labeler config from main? I'll merge and see what happens.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Pull Request Labeler - Workaround sync-labels bug 1528100871
1376318165 https://github.com/pydata/xarray/pull/7426#issuecomment-1376318165 https://api.github.com/repos/pydata/xarray/issues/7426 IC_kwDOAMm_X85SCPLV Illviljan 14371165 2023-01-09T21:09:05Z 2023-01-09T22:31:51Z MEMBER

Timings for the new ASV tests:

```
[ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset        ok
[ 50.85%] ··· ======== ============
               chunks
              -------- ------------
                None      265±4ms
                  {}    1.17±0.02s
              ======== ============

[ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset          ok
[ 54.69%] ··· ========= ============= =============
              --                  chunks
              --------- ---------------------------
               engine       None            {}
              ========= ============= =============
                scipy    4.81±0.1ms    6.65±0.01ms
               netcdf4   8.41±0.08ms    10.9±0.2ms
              ========= ============= =============
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add lazy backend ASV test 1523232313
1376029728 https://github.com/pydata/xarray/issues/7430#issuecomment-1376029728 https://api.github.com/repos/pydata/xarray/issues/7430 IC_kwDOAMm_X85SBIwg Illviljan 14371165 2023-01-09T17:59:17Z 2023-01-09T17:59:17Z MEMBER

Try updating to the latest xarray and dask. dask has had some nice updates lately: https://medium.com/pangeo/dask-distributed-and-pangeo-better-performance-for-everyone-thanks-to-science-software-63f85310a36b

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Missing Blocks when loading zarr file 1525802030
1372796814 https://github.com/pydata/xarray/pull/7382#issuecomment-1372796814 https://api.github.com/repos/pydata/xarray/issues/7382 IC_kwDOAMm_X85R0zeO Illviljan 14371165 2023-01-05T21:26:13Z 2023-01-05T21:26:13Z MEMBER

Thanks, @benbovy !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Some alignment optimizations 1498386428
1372763950 https://github.com/pydata/xarray/issues/7422#issuecomment-1372763950 https://api.github.com/repos/pydata/xarray/issues/7422 IC_kwDOAMm_X85R0rcu Illviljan 14371165 2023-01-05T21:09:15Z 2023-01-05T21:09:15Z MEMBER

Hmm, this worked at some point. Must've gotten lost somewhere in one of the bigger plot refactors.

Positional arguments have been deprecated though, so you should start getting used to using keyword arguments: https://docs.xarray.dev/en/stable/whats-new.html#v2022-11-0-nov-4-2022
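For example (the variable and coordinate names here are made up):

```python
# Pass the dimensions by keyword instead of positionally:
da.plot.scatter(x="lon", y="lat", hue="temperature")
```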

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  `plot.scatter` only works for declared arguments 1521002414
1371430929 https://github.com/pydata/xarray/pull/7418#issuecomment-1371430929 https://api.github.com/repos/pydata/xarray/issues/7418 IC_kwDOAMm_X85RvmAR Illviljan 14371165 2023-01-04T21:18:05Z 2023-01-04T21:18:05Z MEMBER

Add a py.typed file in datatree to fix the mypy error: https://github.com/pydata/xarray/blob/main/xarray/py.typed

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Import datatree in xarray? 1519552711
1370866073 https://github.com/pydata/xarray/pull/7400#issuecomment-1370866073 https://api.github.com/repos/pydata/xarray/issues/7400 IC_kwDOAMm_X85RtcGZ Illviljan 14371165 2023-01-04T12:25:45Z 2023-01-04T12:25:45Z MEMBER

The benchmark failure is a numba issue, probably #7306.

The mypy error is real: you cannot getitem a plain object. Try using isinstance instead of the try/except to narrow the typing.
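A generic illustration of the isinstance-based narrowing (names are hypothetical, not the code in this PR):

```python
from collections.abc import Mapping


def get_fill_value(obj: object):
    # mypy can't index a plain `object`; narrowing with isinstance fixes that
    # without needing a try/except.
    if isinstance(obj, Mapping):
        return obj["fill_value"]
    return None
```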

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fill missing data_vars during concat by reindexing 1508009922
1353473644 https://github.com/pydata/xarray/pull/7382#issuecomment-1353473644 https://api.github.com/repos/pydata/xarray/issues/7382 IC_kwDOAMm_X85QrF5s Illviljan 14371165 2022-12-15T17:43:29Z 2022-12-15T17:43:38Z MEMBER

No benchmark is catching this? Maybe we can add a small one in https://github.com/pydata/xarray/blob/main/asv_bench/benchmarks/indexing.py ?
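Something along these lines, as a rough sketch (class and sizes are made up):

```python
import numpy as np
import xarray as xr


class AlignSmallDatasets:
    """Benchmark aligning two small datasets that share an index."""

    def setup(self):
        x = np.arange(4000)
        self.ds1 = xr.Dataset({"a": ("x", np.random.randn(4000))}, coords={"x": x})
        self.ds2 = xr.Dataset({"b": ("x", np.random.randn(4000))}, coords={"x": x[::-1]})

    def time_align(self):
        xr.align(self.ds1, self.ds2, join="inner")
```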

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Some alignment optimizations 1498386428
1342256217 https://github.com/pydata/xarray/pull/7360#issuecomment-1342256217 https://api.github.com/repos/pydata/xarray/issues/7360 IC_kwDOAMm_X85QATRZ Illviljan 14371165 2022-12-08T08:22:16Z 2022-12-08T08:22:16Z MEMBER

The mypy errors should be fixed soon I think: https://github.com/dask/distributed/issues/7378

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  [pre-commit.ci] pre-commit autoupdate 1477162465
1339575144 https://github.com/pydata/xarray/pull/7356#issuecomment-1339575144 https://api.github.com/repos/pydata/xarray/issues/7356 IC_kwDOAMm_X85P2Eto Illviljan 14371165 2022-12-06T15:44:01Z 2022-12-06T15:44:01Z MEMBER

I'm not really opposed to this change; shape and dtype use self._data as well.

Without using chunks={} in open_dataset? I just find it a little odd that it's not a duck array; what type is self._data?

This test just looked so similar to the tests in #6797. I think you can do a similar lazy test taking inspiration from: https://github.com/pydata/xarray/blob/ed60c6ccd3d6725cd91190b8796af4355f3085c2/xarray/tests/test_formatting.py#L715-L727

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Avoid loading entire dataset by getting the nbytes in an array 1475567394
1339423992 https://github.com/pydata/xarray/pull/7356#issuecomment-1339423992 https://api.github.com/repos/pydata/xarray/issues/7356 IC_kwDOAMm_X85P1fz4 Illviljan 14371165 2022-12-06T13:53:03Z 2022-12-06T13:53:03Z MEMBER

Is that test targeting your issue with RAM crashing the laptop? Shouldn't there be some check of whether the values were loaded?

How did you import your data? self.data looks like this: https://github.com/pydata/xarray/blob/ed60c6ccd3d6725cd91190b8796af4355f3085c2/xarray/core/variable.py#L420-L435

I was expecting your data to be a duck_array?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Avoid loading entire dataset by getting the nbytes in an array 1475567394
1336402438 https://github.com/pydata/xarray/pull/7353#issuecomment-1336402438 https://api.github.com/repos/pydata/xarray/issues/7353 IC_kwDOAMm_X85Pp-IG Illviljan 14371165 2022-12-04T12:41:05Z 2022-12-04T12:41:05Z MEMBER

```
error    libmamba Could not solve for environment specs
    Encountered problems while solving:
      - nothing provides hdf5 1.8.15* needed by netcdf4-1.2.4-np110py27_1

    The environment can't be solved, aborting the operation
```

It's a bit annoying that the CI isn't showing which package has netcdf4 as a dependency. Is that possible?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add python 3.11 to CI 1474717029
1328331087 https://github.com/pydata/xarray/pull/7323#issuecomment-1328331087 https://api.github.com/repos/pydata/xarray/issues/7323 IC_kwDOAMm_X85PLLlP Illviljan 14371165 2022-11-27T20:15:53Z 2022-11-27T20:16:24Z MEMBER

How about converting the dataset to a dask dataframe?

```python
ddf = ds.to_dask_dataframe()
ddf.to_json(filename)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  (Issue #7324) added functions that return data values in memory efficient manner 1465047346
1328070169 https://github.com/pydata/xarray/pull/7204#issuecomment-1328070169 https://api.github.com/repos/pydata/xarray/issues/7204 IC_kwDOAMm_X85PKL4Z Illviljan 14371165 2022-11-26T15:54:02Z 2022-11-26T15:54:02Z MEMBER

I am merging this on Friday next week if no one minds. :)

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  absolufy-imports - No relative imports - PEP8 1420242462
1328068423 https://github.com/pydata/xarray/pull/7315#issuecomment-1328068423 https://api.github.com/repos/pydata/xarray/issues/7315 IC_kwDOAMm_X85PKLdH Illviljan 14371165 2022-11-26T15:43:28Z 2022-11-26T15:43:28Z MEMBER

Thank you @headtr1ck ! :)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix polyval overloads 1462057503
1328068009 https://github.com/pydata/xarray/pull/7301#issuecomment-1328068009 https://api.github.com/repos/pydata/xarray/issues/7301 IC_kwDOAMm_X85PKLWp Illviljan 14371165 2022-11-26T15:41:07Z 2022-11-26T15:41:07Z MEMBER

Thanks, @jhamman !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  deprecate pynio backend 1456026667
1326789280 https://github.com/pydata/xarray/pull/7319#issuecomment-1326789280 https://api.github.com/repos/pydata/xarray/issues/7319 IC_kwDOAMm_X85PFTKg Illviljan 14371165 2022-11-24T19:20:31Z 2022-11-24T19:20:31Z MEMBER

Yeah, it must be unrelated; I can't see how the doctest could be related to this.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  mypy - Remove some ignored packages and modules 1462833576
1325718667 https://github.com/pydata/xarray/pull/7318#issuecomment-1325718667 https://api.github.com/repos/pydata/xarray/issues/7318 IC_kwDOAMm_X85PBNyL Illviljan 14371165 2022-11-23T22:14:38Z 2022-11-23T22:17:21Z MEMBER

Seaborn did something similar to the original but had a little check for whether the marker was filled or not: https://github.com/mwaskom/seaborn/blob/bf4695466d742f301f361b8d0c8168c5c4bdf889/seaborn/relational.py#L563

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use plt.rc_context for default styles 1462470712
1322674412 https://github.com/pydata/xarray/pull/6963#issuecomment-1322674412 https://api.github.com/repos/pydata/xarray/issues/6963 IC_kwDOAMm_X85O1mjs Illviljan 14371165 2022-11-21T21:31:48Z 2022-11-21T21:31:48Z MEMBER

Thanks, @lukeconibear !

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixed type errors in `mypy` GitHub Action 1353467346
1321501372 https://github.com/pydata/xarray/pull/6963#issuecomment-1321501372 https://api.github.com/repos/pydata/xarray/issues/6963 IC_kwDOAMm_X85OxIK8 Illviljan 14371165 2022-11-21T05:59:36Z 2022-11-21T05:59:36Z MEMBER

A good ol' copy/paste job works as expected though. :) I think we can discuss more elegant solutions in a follow-up PR.

So now mypy is not crashing anymore? That's weird; we should open an issue on mypy about this...

This one still throws errors: `python -m mypy --install-types --non-interactive --python-version 3.8 --follow-imports=silent` Did it ever crash if we used the normal mypy CI but with a changed Python version?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixed type errors in `mypy` GitHub Action 1353467346
1321231227 https://github.com/pydata/xarray/pull/6963#issuecomment-1321231227 https://api.github.com/repos/pydata/xarray/issues/6963 IC_kwDOAMm_X85OwGN7 Illviljan 14371165 2022-11-20T20:12:00Z 2022-11-20T20:12:00Z MEMBER

Dask is on the ignore list: https://github.com/pydata/xarray/blob/d6671dd414370d006254ba3156cb96256ce0e9c7/pyproject.toml#L31-L43

This seems to ignore the list and follow-imports doesn't seem to work either:

```yaml
- name: Run mypy with python3.8
  # silent all imports, since external repos might not support this
  run: |
    python -m mypy --install-types --non-interactive --python-version 3.8 --follow-imports=silent
```

A good ol' copy/paste job works as expected though. :) I think we can discuss more elegant solutions in a follow-up PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fixed type errors in `mypy` GitHub Action 1353467346
1321212836 https://github.com/pydata/xarray/pull/7296#issuecomment-1321212836 https://api.github.com/repos/pydata/xarray/issues/7296 IC_kwDOAMm_X85OwBuk Illviljan 14371165 2022-11-20T18:52:12Z 2022-11-20T18:52:12Z MEMBER

Yeah, sounds reasonable. Maybe the reason for the ignore disappeared once DuckArrayModule was implemented?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix some typing errors in DuckArrayModule 1452339775
1321200061 https://github.com/pydata/xarray/pull/7285#issuecomment-1321200061 https://api.github.com/repos/pydata/xarray/issues/7285 IC_kwDOAMm_X85Ov-m9 Illviljan 14371165 2022-11-20T17:49:56Z 2022-11-20T17:49:56Z MEMBER

Mypy errors with Python 3.8. They don't appear related to this PR, at least.

```
Run python -m mypy --install-types --non-interactive --cobertura-xml-report mypy_report
  python -m mypy --install-types --non-interactive --cobertura-xml-report mypy_report
  shell: /usr/bin/bash -l {0}
  env:
    CONDA_ENV_FILE: ci/requirements/environment.yml
    PYTHON_VERSION: 3.8
    TODAY: 2022-11-20
    MAMBA_ROOT_PREFIX: /home/runner/micromamba-root
    MAMBA_EXE: /home/runner/micromamba-bin/micromamba
Collecting types-PyYAML
  Downloading types_PyYAML-6.0.12.2-py3-none-any.whl (14 kB)
Collecting types-paramiko
  Downloading types_paramiko-2.12.0.1-py3-none-any.whl (32 kB)
Collecting types-pytz
  Downloading types_pytz-2022.6.0.1-py3-none-any.whl (4.7 kB)
Collecting types-setuptools
  Downloading types_setuptools-65.6.0.0-py3-none-any.whl (48 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 19.4 MB/s eta 0:00:00
Collecting types-cryptography
  Downloading types_cryptography-3.3.23.2-py3-none-any.whl (30 kB)
Installing collected packages: types-setuptools, types-PyYAML, types-pytz, types-cryptography, types-paramiko
Successfully installed types-PyYAML-6.0.12.2 types-cryptography-3.3.23.2 types-paramiko-2.12.0.1 types-pytz-2022.6.0.1 types-setuptools-65.6.0.0
xarray/core/types.py:73: error: "tuple" is not subscriptable [misc]
Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Installing missing stub packages:
/home/runner/micromamba-root/envs/xarray-tests/bin/python -m pip install types-PyYAML types-paramiko types-pytz types-setuptools

xarray/core/types.py:75: error: "tuple" is not subscriptable [misc]
xarray/core/types.py:77: error: "tuple" is not subscriptable [misc]
xarray/core/types.py:79: error: "list" is not subscriptable [misc]
xarray/core/merge.py:43: error: "tuple" is not subscriptable [misc]
xarray/core/merge.py:44: error: "tuple" is not subscriptable [misc]
xarray/core/merge.py:45: error: "tuple" is not subscriptable [misc]
xarray/backends/api.py:65: error: "dict" is not subscriptable [misc]

Generated Cobertura report: /home/runner/work/xarray/xarray/mypy_report/cobertura.xml
Found 8 errors in 3 files (checked 140 source files)
```
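Those `"tuple" is not subscriptable` errors are what mypy reports when builtin generics such as `tuple[...]` are checked with `--python-version 3.8`: subscripting builtin types only became legal in Python 3.9 (PEP 585), and `from __future__ import annotations` only defers evaluation inside annotations, not in type aliases. A minimal sketch of the distinction (the function and alias names are made up):

```python
from __future__ import annotations

from typing import Tuple


# Fine on Python 3.8: the annotation is never evaluated at runtime
# because of `from __future__ import annotations` (PEP 563).
def distance(point: tuple[float, float]) -> float:
    return (point[0] ** 2 + point[1] ** 2) ** 0.5


# Would raise TypeError on Python 3.8: a type alias is a normal assignment,
# so `tuple[float, float]` is evaluated eagerly (only legal from 3.9, PEP 585).
# Point = tuple[float, float]

# Works on 3.8 and later: use typing.Tuple for the alias instead.
Point = Tuple[float, float]
```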

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Switch to T_DataArray in .coords 1447049720
1321100829 https://github.com/pydata/xarray/pull/7277#issuecomment-1321100829 https://api.github.com/repos/pydata/xarray/issues/7277 IC_kwDOAMm_X85OvmYd Illviljan 14371165 2022-11-20T11:00:50Z 2022-11-20T11:00:50Z MEMBER

@dcherian, as you noted in that PR as well, it's not just matplotlib but xarray too that already does a lot of guessing, x and y in plot2d for example. I was more concerned about consistency and believed that guessing got us closer to what the majority of the plotting functions do.

These are the coords that we currently guess in plot1d: https://github.com/pydata/xarray/blob/d6671dd414370d006254ba3156cb96256ce0e9c7/xarray/plot/dataarray_plot.py#L154 Should we remove them (i.e. use an empty tuple) and let the user explicitly define all of them? If so, should we remove any guessing in the other plot functions as well?
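As a concrete illustration of the trade-off, a hedged sketch of explicit versus guessed optional dimensions (the dataset and variable names are made up, and behaviour may differ between xarray versions):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {
        "temperature": (("x", "year"), np.random.randn(50, 3)),
        "precipitation": (("x", "year"), np.random.rand(50, 3)),
    },
    coords={"year": [2020, 2021, 2022]},
)

# Fully explicit: the caller names the optional dimensions used for styling.
ds.plot.scatter(x="temperature", y="precipitation", hue="year")

# Leaving hue/markersize out relies on the guessing behaviour debated here.
ds.plot.scatter(x="temperature", y="precipitation")
```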

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Require to explicitly defining optional dimensions such as hue and markersize 1444440024
1321084794 https://github.com/pydata/xarray/pull/7296#issuecomment-1321084794 https://api.github.com/repos/pydata/xarray/issues/7296 IC_kwDOAMm_X85Ovid6 Illviljan 14371165 2022-11-20T09:52:02Z 2022-11-20T09:52:02Z MEMBER

I think the CI wasn't catching this because of: https://github.com/pydata/xarray/blob/63a69fe4b2fa7a4de8a1b65826f6af4869818166/pyproject.toml#L73-L77

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix some typing errors in DuckArrayModule 1452339775
1312087585 https://github.com/pydata/xarray/pull/7281#issuecomment-1312087585 https://api.github.com/repos/pydata/xarray/issues/7281 IC_kwDOAMm_X85ONN4h Illviljan 14371165 2022-11-11T19:14:35Z 2022-11-11T19:16:42Z MEMBER

Markersize before PR:

Markersize after PR:

```python
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr

temp = xr.DataArray(
    np.random.randn(10, 10, 2),
    coords=[np.arange(10), np.arange(10), [2021, 2022]],
    dims=["lon", "lat", "year"],
)
prec = xr.DataArray(
    np.random.randn(10, 10, 2),
    coords=[np.arange(10), np.arange(10), [2021, 2022]],
    dims=["lon", "lat", "year"],
)
one = xr.DataArray(
    np.random.randn(10, 10, 2),
    coords=[np.arange(10), np.arange(10), [2021, 2022]],
    dims=["lon", "lat", "year"],
)
blue = xr.DataArray(
    np.random.randn(10, 10, 2),
    coords=[np.arange(10), np.arange(10), [2021, 2022]],
    dims=["lon", "lat", "year"],
)
ds = xr.Dataset({"temperature": temp, "precipitation": prec, 1: one, "blue": blue})

# Stack the non interesting dims:
# fig, ax = plt.subplots()
# ds.stack(stacked_dim=[...]).plot.scatter(x="temperature", y="precipitation")

fig, ax = plt.subplots()
ds.plot.scatter(x="temperature", y="precipitation", markersize=1)
```
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Use a default value for constant dimensions 1445870847
1311279880 https://github.com/pydata/xarray/pull/7272#issuecomment-1311279880 https://api.github.com/repos/pydata/xarray/issues/7272 IC_kwDOAMm_X85OKIsI Illviljan 14371165 2022-11-11T06:24:06Z 2022-11-11T06:26:40Z MEMBER
  • The default markersize width values range from 18 to 72.
  • plt.scatter default markersize is 36.
  • plt.plot default linewidth is 6.

With the current version of the PR we would get 18 for 1-sized arrays, but 36 if markersize was undefined. That seems a bit inconsistent to me. I think we should default to 36 for 1-sized arrays as well, but that would require a few more tweaks than fit the scope of the current PR.

I think we'll merge this to fix the zero division -> np.nan -> empty plots chain, and then I'll follow up with another PR that makes the sizes consistent.
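For reference, the matplotlib side of those numbers can be read straight from `rcParams`; a minimal sketch (the 18 to 72 range above is presumably xarray's own size normalization rather than a matplotlib setting):

```python
import matplotlib.pyplot as plt

# plt.plot sizes its markers with rcParams["lines.markersize"] (a width-like
# value, 6.0 by default), while plt.scatter's default `s` is that value
# squared, i.e. an area: 6.0 ** 2 == 36.0.
width = plt.rcParams["lines.markersize"]
print(width, width ** 2)  # 6.0 36.0
```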

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Handle division by zero in _Normalize._calc_widths 1440711212
1310792821 https://github.com/pydata/xarray/pull/7277#issuecomment-1310792821 https://api.github.com/repos/pydata/xarray/issues/7277 IC_kwDOAMm_X85OIRx1 Illviljan 14371165 2022-11-10T19:26:44Z 2022-11-10T19:26:44Z MEMBER

@jorisvandenbossche, feel free to try out this PR.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Require to explicitly defining optional dimensions such as hue and markersize 1444440024
1299061501 https://github.com/pydata/xarray/pull/7123#issuecomment-1299061501 https://api.github.com/repos/pydata/xarray/issues/7123 IC_kwDOAMm_X85Nbhr9 Illviljan 14371165 2022-11-01T19:57:53Z 2022-11-01T19:57:53Z MEMBER

Thanks, @DanielGoman !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DOC: Added examples to docstrings of DataArray methods (#7123) 1396401446
1297256807 https://github.com/pydata/xarray/pull/7204#issuecomment-1297256807 https://api.github.com/repos/pydata/xarray/issues/7204 IC_kwDOAMm_X85NUpFn Illviljan 14371165 2022-10-31T15:24:25Z 2022-10-31T15:24:25Z MEMBER

A lot of PEP 8 is quite gentle in its phrasing, I think: "Avoid trailing whitespace anywhere." is only a recommendation. "Limit all lines to a maximum of 79 characters." is also only a recommendation, and we already ignore that one for the sake of automation consistency.

Another nice perk of absolute import paths is that you can run a utils.py file directly and test out its functions, which is a much faster workflow than going through the proper import route.
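A small illustration of the two import styles at stake (the helper chosen here is just an example):

```python
# Relative form (what absolufy-imports rewrites away); this only works when
# the module is imported as part of the package:
# from .utils import is_dict_like

# Absolute form; this also works when you run the file directly as a script,
# e.g. `python xarray/core/utils.py`, which makes quick manual testing easier.
from xarray.core.utils import is_dict_like

print(is_dict_like({}))  # True
```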

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  absolufy-imports - No relative imports - PEP8 1420242462
1297227274 https://github.com/pydata/xarray/pull/7236#issuecomment-1297227274 https://api.github.com/repos/pydata/xarray/issues/7236 IC_kwDOAMm_X85NUh4K Illviljan 14371165 2022-10-31T15:04:13Z 2022-10-31T15:04:13Z MEMBER

Thanks @hmaarrfk !

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 1,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Expand benchmarks for dataset insertion and creation 1428274982
1293815240 https://github.com/pydata/xarray/pull/7221#issuecomment-1293815240 https://api.github.com/repos/pydata/xarray/issues/7221 IC_kwDOAMm_X85NHg3I Illviljan 14371165 2022-10-27T16:58:45Z 2022-10-27T16:58:45Z MEMBER

```
       before           after         ratio
     [c000690c]       [24753f1f]
-      3.17±0.02ms      1.94±0.01ms     0.61  merge.DatasetAddVariable.time_variable_insertion(100)
-        81.5±2ms        17.0±0.2ms     0.21  merge.DatasetAddVariable.time_variable_insertion(1000)

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
PERFORMANCE INCREASED.
```

Nice improvements. :)

I haven't fully understood why we had that code in the first place, though.
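For context on how those numbers are produced: asv benchmarks are plain classes whose `time_*` methods are timed once per value in `params`. A rough, hedged approximation of what a benchmark like `merge.DatasetAddVariable` might look like (not xarray's actual benchmark code):

```python
import numpy as np
import xarray as xr


class DatasetAddVariable:
    """Rough approximation: asv calls setup() before timing each repeat of
    every time_* method, once per value in params."""

    params = [100, 1000]
    param_names = ["existing_elements"]

    def setup(self, existing_elements):
        # Build a dataset with many existing variables to stress insertion.
        self.ds = xr.Dataset(
            {f"var{i}": ("x", np.arange(10)) for i in range(existing_elements)}
        )

    def time_variable_insertion(self, existing_elements):
        ds = self.ds.copy()
        ds["new_var"] = ("x", np.arange(10))
```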

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Remove debugging slow assert statement 1423312198
