issues


24 rows where comments = 2, state = "closed" and user = 14371165 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
2236408438 PR_kwDOAMm_X85sSjdN 8926 no untyped tests Illviljan 14371165 closed 0     2 2024-04-10T20:52:29Z 2024-04-14T16:15:45Z 2024-04-14T16:15:45Z MEMBER   1 pydata/xarray/pulls/8926
  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8926/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1991435225 PR_kwDOAMm_X85fV3DW 8449 Use concise date format when plotting Illviljan 14371165 closed 0     2 2023-11-13T20:32:22Z 2024-03-13T21:41:34Z 2023-11-21T19:26:24Z MEMBER   0 pydata/xarray/pulls/8449
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst

```python
import matplotlib.pyplot as plt
import xarray as xr

airtemps = xr.tutorial.open_dataset("air_temperature")
air = airtemps.air - 273.15
air1d = air.isel(lat=10, lon=10)

plt.figure()
air1d.plot()
```

Before:

After:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8449/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1797233538 I_kwDOAMm_X85rH5uC 7971 Pint errors on python 3.11 and windows Illviljan 14371165 closed 0     2 2023-07-10T17:44:51Z 2024-02-26T17:52:50Z 2024-02-26T17:52:50Z MEMBER      

What happened?

The CI seems to consistently crash on test_units.py now:

```
=========================== short test summary info ===========================
FAILED xarray/tests/test_units.py::TestVariable::test_aggregation[int32-method_max] - TypeError: no implementation found for 'numpy.max' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestVariable::test_aggregation[int32-method_min] - TypeError: no implementation found for 'numpy.min' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[float64-function_max] - TypeError: no implementation found for 'numpy.max' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[float64-function_min] - TypeError: no implementation found for 'numpy.min' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[int32-function_max] - TypeError: no implementation found for 'numpy.max' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[int32-function_min] - TypeError: no implementation found for 'numpy.min' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[int32-method_max] - TypeError: no implementation found for 'numpy.max' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_aggregation[int32-method_min] - TypeError: no implementation found for 'numpy.min' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_unary_operations[float64-round] - TypeError: no implementation found for 'numpy.round' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataArray::test_unary_operations[int32-round] - TypeError: no implementation found for 'numpy.round' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataset::test_aggregation[int32-method_max] - TypeError: no implementation found for 'numpy.max' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
FAILED xarray/tests/test_units.py::TestDataset::test_aggregation[int32-method_min] - TypeError: no implementation found for 'numpy.min' on types that implement __array_function__: [<class 'pint.util.Quantity'>]
= 12 failed, 14880 passed, 1649 skipped, 146 xfailed, 68 xpassed, 574 warnings in 737.19s (0:12:17) =
```

For more details: https://github.com/pydata/xarray/actions/runs/5438369625/jobs/9889561685?pr=7955

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7971/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1948928294 PR_kwDOAMm_X85dGRAu 8330 Simplify get_axis_num Illviljan 14371165 closed 0     2 2023-10-18T06:15:57Z 2024-02-02T18:37:32Z 2024-02-02T18:37:32Z MEMBER   1 pydata/xarray/pulls/8330
  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/8330/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1795519181 I_kwDOAMm_X85rBXLN 7969 Upstream CI is failing Illviljan 14371165 closed 0     2 2023-07-09T18:51:41Z 2023-07-10T17:34:12Z 2023-07-10T17:33:12Z MEMBER      

What happened?

The upstream CI has been failing for a while. Here's the latest: https://github.com/pydata/xarray/actions/runs/5501368493/jobs/10024902009#step:7:16

```python
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/runner/work/xarray/xarray/xarray/__init__.py", line 1, in <module>
    from xarray import testing, tutorial
  File "/home/runner/work/xarray/xarray/xarray/testing.py", line 7, in <module>
    import numpy as np
ModuleNotFoundError: No module named 'numpy'
```

Digging a little in the logs:

```
Installing build dependencies: started
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [3 lines of output]
Looking in indexes: https://pypi.anaconda.org/scipy-wheels-nightly/simple
ERROR: Could not find a version that satisfies the requirement meson-python==0.13.1 (from versions: none)
ERROR: No matching distribution found for meson-python==0.13.1
[end of output]
```

Might be some numpy problem?

Should the CI be robust enough to handle these kinds of errors? I suppose we would still like the automatic issue to be created in cases like this.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7969/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1797186226 PR_kwDOAMm_X85VG-Nj 7970 Use another repository for upstream testing Illviljan 14371165 closed 0     2 2023-07-10T17:10:55Z 2023-07-10T17:33:11Z 2023-07-10T17:33:11Z MEMBER   0 pydata/xarray/pulls/7970

Use https://pypi.anaconda.org/scientific-python-nightly-wheels/simple/ instead.

  • [x] Closes #7969
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7970/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1795424047 PR_kwDOAMm_X85VBAF6 7968 Move absolute path finder from open_mfdataset to own function Illviljan 14371165 closed 0     2 2023-07-09T14:24:38Z 2023-07-10T14:04:06Z 2023-07-10T14:04:05Z MEMBER   0 pydata/xarray/pulls/7968

A simple refactor to make it easier to retrieve the proper paths that open_mfdataset uses and passes on to the engine.

I've been thinking about how to make use of DataTree, and one idea I wanted to try was:
  • Open the file (using _find_absolute_path).
  • Get all groups in the file.
  • For each group, run xr.open_mfdataset(..., group=group).
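The refactored helper itself isn't shown in this listing; as an illustration, the kind of path expansion open_mfdataset needs can be sketched with the standard library (the name find_absolute_paths below is illustrative, not xarray's exact API):

```python
import glob
import os


def find_absolute_paths(paths):
    """Expand glob patterns and return sorted absolute paths (illustrative sketch)."""
    if isinstance(paths, (str, os.PathLike)):
        paths = [paths]
    out = []
    for p in paths:
        expanded = sorted(glob.glob(os.fspath(p)))
        if not expanded:
            raise FileNotFoundError(f"no files found matching {p!r}")
        out.extend(os.path.abspath(e) for e in expanded)
    return out
```

A caller can then hand each resolved path to the engine, or, per the DataTree idea above, open one file first to enumerate its groups.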

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7968/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1665260014 PR_kwDOAMm_X85OK8Yp 7752 Fix typing errors using mypy 1.2 Illviljan 14371165 closed 0     2 2023-04-12T21:08:31Z 2023-04-15T18:31:58Z 2023-04-15T18:31:57Z MEMBER   0 pydata/xarray/pulls/7752

Fixes typing errors when using newest mypy version.

  • [x] Closes #7270
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7752/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1125040125 I_kwDOAMm_X85DDr_9 6244 Get pyupgrade to update the typing Illviljan 14371165 closed 0     2 2022-02-05T21:56:56Z 2023-03-12T15:38:37Z 2023-03-12T15:38:37Z MEMBER      

Is your feature request related to a problem?

Use more up-to-date typing styles on all files. This will reduce the number of imports and avoid big diffs when pre-commit/pyupgrade happens to be triggered during otherwise minor changes.

Related to #6240

Describe the solution you'd like

Add from __future__ import annotations on files with a lot of typing. Let pyupgrade do the rest.
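As an illustration of the style change pyupgrade applies once the future import is in place (count_words is a made-up example function):

```python
from __future__ import annotations

# Old style (requires imports from typing):
#   from typing import Dict, List, Optional
#   def count_words(lines: List[str], sep: Optional[str] = None) -> Dict[str, int]: ...
# New style, enabled on older Pythons by the future import; no typing imports needed:


def count_words(lines: list[str], sep: str | None = None) -> dict[str, int]:
    """Count words across lines, split on sep (whitespace by default)."""
    counts: dict[str, int] = {}
    for line in lines:
        for word in line.split(sep):
            counts[word] = counts.get(word, 0) + 1
    return counts


print(count_words(["a b", "b c"]))  # {'a': 1, 'b': 2, 'c': 1}
```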

Describe alternatives you've considered

No response

Additional context

No response

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6244/reactions",
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1462470712 PR_kwDOAMm_X85DmbNT 7318 Use plt.rc_context for default styles Illviljan 14371165 closed 0     2 2022-11-23T22:11:23Z 2023-02-09T12:56:00Z 2023-02-09T12:56:00Z MEMBER   0 pydata/xarray/pulls/7318
  • [x] Closes #7313
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7318/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1528100871 PR_kwDOAMm_X85HG6Hh 7431 Pull Request Labeler - Workaround sync-labels bug Illviljan 14371165 closed 0     2 2023-01-10T22:29:03Z 2023-01-10T23:10:32Z 2023-01-10T23:06:14Z MEMBER   0 pydata/xarray/pulls/7431
  • Workaround for Pull Request Labeler The PR labeler keeps removing manually added labels. xref: https://github.com/actions/labeler/issues/112
  • ASV benchmarks now also start when the topic-performance label is added. The bot is allowed to change that label, but not run-benchmarks.
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7431/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1410432571 PR_kwDOAMm_X85A3v7B 7167 Fix some scatter plot issues Illviljan 14371165 closed 0     2 2022-10-16T09:38:05Z 2022-10-17T13:39:31Z 2022-10-17T13:39:31Z MEMBER   0 pydata/xarray/pulls/7167

Fix some issues with scatter plots:
  • Always use markersize widths for scatter.
  • Fix an issue with .values_unique not returning the same values as .values.
  • Added more type hints to _Normalize; some rework had to be done to make mypy pass.

xref: #6778

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7167/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1128356864 PR_kwDOAMm_X84ySpaM 6257 Run pyupgrade on core/weighted Illviljan 14371165 closed 0     2 2022-02-09T10:38:06Z 2022-08-12T09:08:47Z 2022-02-09T12:52:39Z MEMBER   0 pydata/xarray/pulls/6257

Clean up a little in preparation for #6059.

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst

xref: #6244

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6257/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1052888529 PR_kwDOAMm_X84ufplh 5986 Use set_options for asv bottleneck tests Illviljan 14371165 closed 0     2 2021-11-14T09:10:38Z 2022-08-12T09:07:55Z 2021-11-15T20:40:38Z MEMBER   0 pydata/xarray/pulls/5986

Inspired by #5734, remove the non-bottleneck build and instead use xr.set_options on the relevant tests. This makes the report much more readable and reduces testing time quite a bit since everything isn't accelerated by bottleneck.

  • [x] Passes pre-commit run --all-files
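The approach described, toggling a feature per test via a context manager instead of maintaining a separate non-bottleneck build, can be sketched generically; the OPTIONS dict and set_options below are a simplified stand-in, not xarray's actual implementation:

```python
import contextlib

# Global feature flags; xarray keeps something similar behind xr.set_options.
OPTIONS = {"use_bottleneck": True}


@contextlib.contextmanager
def set_options(**kwargs):
    """Temporarily override options, restoring the old values on exit."""
    old = {k: OPTIONS[k] for k in kwargs}
    OPTIONS.update(kwargs)
    try:
        yield
    finally:
        OPTIONS.update(old)


with set_options(use_bottleneck=False):
    # Benchmark the un-accelerated code path here.
    assert OPTIONS["use_bottleneck"] is False
assert OPTIONS["use_bottleneck"] is True  # restored after the block
```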
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5986/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
970234731 MDExOlB1bGxSZXF1ZXN0NzEyMjE0ODU4 5703 Use the same bool validator as other inputs for use_bottleneck in xr.set_options Illviljan 14371165 closed 0     2 2021-08-13T09:36:03Z 2022-08-12T09:07:28Z 2021-08-13T13:41:42Z MEMBER   0 pydata/xarray/pulls/5703

Minor change to align with other booleans.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5703/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1098439498 PR_kwDOAMm_X84wwv5V 6150 Faster dask unstack Illviljan 14371165 closed 0     2 2022-01-10T22:10:45Z 2022-08-12T09:07:07Z 2022-08-12T09:07:06Z MEMBER   1 pydata/xarray/pulls/6150

ref #5582

  • [ ] Closes #xxxx
  • [ ] Tests added
  • [ ] User visible changes (including notable bug fixes) are documented in whats-new.rst
  • [ ] New functions/methods are listed in api.rst
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6150/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
970208539 MDExOlB1bGxSZXF1ZXN0NzEyMTkxODEx 5702 Move docstring for xr.set_options to numpy style Illviljan 14371165 closed 0     2 2021-08-13T09:05:56Z 2022-08-12T09:06:23Z 2021-08-19T22:27:39Z MEMBER   0 pydata/xarray/pulls/5702

While trying to figure out which types are allowed in #5678 I felt that the set_options docstring was rather hard to read. Moving it to typical numpy docstring style helped at least for me.
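For readers unfamiliar with it, the numpy docstring style referred to looks roughly like this (set_limit is a made-up function, not xarray's set_options):

```python
def set_limit(value, *, strict=True):
    """Set a hypothetical limit.

    Parameters
    ----------
    value : int
        The new limit. Must be non-negative.
    strict : bool, default: True
        If True, raise instead of clamping out-of-range input.

    Returns
    -------
    int
        The limit actually applied.
    """
    if value < 0:
        if strict:
            raise ValueError("value must be non-negative")
        value = 0
    return value
```

The labelled Parameters/Returns sections make the allowed types explicit, which is exactly what was hard to read in the old prose-style docstring.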

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5702/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1110829504 PR_kwDOAMm_X84xZwqu 6184 Add seed kwarg to the tutorial scatter dataset Illviljan 14371165 closed 0     2 2022-01-21T19:38:53Z 2022-08-12T09:06:13Z 2022-01-26T19:04:02Z MEMBER   0 pydata/xarray/pulls/6184

Allow controlling the randomness of the dataset. It's difficult to catch issues with the dataset if it always changes each run.
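The reproducibility idea can be sketched with the standard library's random module (make_scatter_data is illustrative; the real tutorial dataset is built with numpy and different fields):

```python
import random


def make_scatter_data(n=5, seed=None):
    """Return n pseudo-random (x, y) points; a fixed seed makes the data repeatable."""
    rng = random.Random(seed)  # seed=None -> different data each run, int -> repeatable
    return [(rng.random(), rng.random()) for _ in range(n)]


# Same seed, same dataset -- so tests can catch regressions in downstream plots.
assert make_scatter_data(seed=42) == make_scatter_data(seed=42)
```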

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6184/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1318779952 PR_kwDOAMm_X848I_BC 6832 Convert name to string in label_from_attrs Illviljan 14371165 closed 0     2 2022-07-26T21:40:38Z 2022-08-12T09:02:01Z 2022-07-26T22:48:39Z MEMBER   0 pydata/xarray/pulls/6832

Make sure name is a string. Use the same .format method as in _get_units_from_attrs to convert to string.

  • [x] Closes #6826
  • [x] Tests added
  • [x] User visible changes (including notable bug fixes) are documented in whats-new.rst
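The coercion described can be sketched in plain Python (label_from_name is an illustrative helper, not the actual label_from_attrs):

```python
def label_from_name(name):
    """Coerce an axis label to str the same way str.format would."""
    # "{}".format handles ints, floats, tuples, etc., not just strings.
    return "{}".format(name)


label_from_name(0)         # returns '0'
label_from_name("height")  # returns 'height'
```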
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6832/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
1182697604 I_kwDOAMm_X85GfoiE 6416 xr.concat removes datetime information Illviljan 14371165 closed 0     2 2022-03-27T23:19:30Z 2022-03-28T16:05:01Z 2022-03-28T16:05:01Z MEMBER      

What happened?

xr.concat removes datetime information and can't concatenate the arrays because they don't have compatible types anymore.

What did you expect to happen?

Successful concatenation with the same type.

Minimal Complete Verifiable Example

```python
import numpy as np
import xarray as xr
from datetime import datetime

month = np.arange(1, 13, 1)
data = np.sin(2 * np.pi * month / 12.0)

darray = xr.DataArray(data, dims=["time"])
darray.coords["time"] = np.array([datetime(2017, m, 1) for m in month])

darray_nan = np.nan * darray.isel(**{"time": -1})
darray = xr.concat([darray, darray_nan], dim="time")
```

Relevant log output

```python
Traceback (most recent call last):

  File "<ipython-input-15-31040255a336>", line 2, in <module>
    darray = xr.concat([darray, darray_nan], dim="time")

  File "c:\users\j.w\documents\github\xarray\xarray\core\concat.py", line 244, in concat
    return f(

  File "c:\users\j.w\documents\github\xarray\xarray\core\concat.py", line 642, in _dataarray_concat
    ds = _dataset_concat(

  File "c:\users\j.w\documents\github\xarray\xarray\core\concat.py", line 555, in _dataset_concat
    combined_idx = indexes[0].concat(indexes, dim, positions)

  File "c:\users\j.w\documents\github\xarray\xarray\core\indexes.py", line 318, in concat
    coord_dtype = np.result_type(*[idx.coord_dtype for idx in indexes])

  File "<__array_function__ internals>", line 5, in result_type

TypeError: The DType <class 'numpy.dtype[datetime64]'> could not be promoted by <class 'numpy.dtype[int64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is object. The full list of DTypes is: (<class 'numpy.dtype[datetime64]'>, <class 'numpy.dtype[int64]'>)
```

Anything else we need to know?

Similar to #6384.

Happens around here:

https://github.com/pydata/xarray/blob/728b648d5c7c3e22fe3704ba163012840408bf66/xarray/core/concat.py#L535

Environment

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:37:25) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
byteorder: little
LC_ALL: None
LANG: en
LOCALE: ('Swedish_Sweden', '1252')
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.16.3.dev99+gc19467fb
pandas: 1.3.1
numpy: 1.21.5
scipy: 1.7.1
netCDF4: 1.5.6
pydap: installed
h5netcdf: 0.11.0
h5py: 2.10.0
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: 1.3.1
PseudoNetCDF: installed
rasterio: 1.2.6
cfgrib: None
iris: 3.0.4
bottleneck: 1.3.2
dask: 2021.10.0
distributed: 2021.10.0
matplotlib: 3.4.3
cartopy: 0.19.0.post1
seaborn: 0.11.1
numbagg: 0.2.1
fsspec: 2021.11.1
cupy: None
pint: 0.17
sparse: 0.12.0
setuptools: 49.6.0.post20210108
pip: 21.2.4
conda: None
pytest: 6.2.4
IPython: 7.31.0
sphinx: 4.3.2
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6416/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
1042698589 I_kwDOAMm_X84-JlFd 5928 Relax GitHub Actions first time contributor approval? Illviljan 14371165 closed 0     2 2021-11-02T18:45:16Z 2021-11-02T21:44:54Z 2021-11-02T21:44:54Z MEMBER      

A while back GitHub made it so that new contributors cannot trigger GitHub Actions workflows and a maintainer has to hit "Approve and Run" every time they push a commit to their PR. This is rather annoying for both the contributor and the maintainer as the back and forth takes time.

It however seems possible to relax this constraint: https://twitter.com/metcalfc/status/1448414192285806592?t=maeChQZTSUh2Ph0YFk-hGA&s=19

Shall we relax this constraint?

ref: https://github.com/dask/community/issues/191

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5928/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
929923036 MDExOlB1bGxSZXF1ZXN0Njc3NzAyMTc5 5532 Remove self from classes in How to add new backends docs Illviljan 14371165 closed 0     2 2021-06-25T07:48:32Z 2021-07-02T16:07:16Z 2021-06-25T08:12:59Z MEMBER   0 pydata/xarray/pulls/5532

Copy-pasting the examples in http://xarray.pydata.org/en/stable/internals/how-to-add-new-backend.html resulted in crashes. Make the docs copy/paste friendly by removing the self arguments.

Example:

```python
expected = xr.Dataset(
    dict(a=2 * np.arange(5)), coords=dict(x=("x", np.arange(5), dict(units="s")))
)


class CustomBackend(xr.backends.BackendEntrypoint):
    def open_dataset(
        self,
        filename_or_obj,
        drop_variables=None,
        **kwargs,
    ):
        return expected.copy(deep=True)


xr.open_dataset("fake_filename", engine=CustomBackend)
```

```
TypeError: open_dataset() missing 1 required positional argument: 'filename_or_obj'
```

This works if self is removed:

```python
expected = xr.Dataset(
    dict(a=2 * np.arange(5)), coords=dict(x=("x", np.arange(5), dict(units="s")))
)


class CustomBackend(xr.backends.BackendEntrypoint):
    def open_dataset(
        filename_or_obj,
        drop_variables=None,
        **kwargs,
    ):
        return expected.copy(deep=True)


xr.open_dataset("fake_filename", engine=CustomBackend)
```

```
<xarray.Dataset>
Dimensions:  (a: 5, x: 5)
Coordinates:
  * a        (a) int32 0 2 4 6 8
  * x        (x) int32 0 1 2 3 4
Data variables:
    empty
```

  • [x] Passes pre-commit run --all-files
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5532/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
775875024 MDU6SXNzdWU3NzU4NzUwMjQ= 4739 Slow initilization of dataset.interp Illviljan 14371165 closed 0     2 2020-12-29T12:46:05Z 2021-05-05T12:26:01Z 2021-05-05T12:26:01Z MEMBER      

What happened: When interpolating a dataset with >2000 dask variables, a lot of time is spent in da.unifying_chunks because it forces all variables and coordinates to dask arrays. xarray, on the other hand, forces coordinates to pd.Index even if the coordinates were dask arrays when the dataset was first created.

What you expected to happen: If the coords of the dataset was initialized as dask arrays they should stay lazy.

Minimal Complete Verifiable Example:

```python
import xarray as xr
import numpy as np
import dask.array as da

a = np.arange(0, 2000)
b = np.core.defchararray.add("long_variable_name", a.astype(str))
coords = dict(time=da.array([0, 1]))
data_vars = dict()
for v in b:
    data_vars[v] = xr.DataArray(
        name=v, data=da.array([3, 4]), dims=["time"], coords=coords
    )
ds0 = xr.Dataset(data_vars)
ds0 = ds0.interp(
    time=da.array([0, 0.5, 1]),
    assume_sorted=True,
    kwargs=dict(fill_value=None),
)
```

Anything else we need to know?: Some thoughts:
  • Why can't coordinates be lazy?
  • Can we use dask.dataframe.Index instead of pd.Index when creating IndexVariables?
  • There's no time saved converting to dask arrays in missing.interp_func. But some time could be saved if we could convert them to dask arrays in xr.Dataset.interp before the variable loop starts.
  • Can we still store the dask array in IndexVariable and use a to_dask_array() method to quickly get it?
  • Initializing the dataarrays will still be slow though, since it still has to force the dask array to pd.Index.

Environment:

Output of xr.show_versions():

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.16.2
pandas: 1.1.5
numpy: 1.17.5
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2020.12.0
distributed: 2020.12.0
matplotlib: 3.3.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 51.0.0.post20201207
pip: 20.3.3
conda: 4.9.2
pytest: 6.2.1
IPython: 7.19.0
sphinx: 3.4.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4739/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
775322346 MDU6SXNzdWU3NzUzMjIzNDY= 4736 Limit number of data variables shown in repr Illviljan 14371165 closed 0     2 2020-12-28T10:15:26Z 2021-01-04T02:13:52Z 2021-01-04T02:13:52Z MEMBER      

What happened: xarray feels very unresponsive when using datasets with >2000 data variables, because it has to print all 2000 variables every time you print something to the console.

What you expected to happen: xarray should limit the number of variables printed to the console. Maximum maybe 25? The same idea probably applies to dimensions, coordinates and attributes as well.

For reference, pandas only shows 2: the first and last variables.
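A minimal sketch of the truncation idea, assuming a pandas-style first/last display (summarize_names is illustrative, not xarray's actual repr code):

```python
def summarize_names(names, max_items=25):
    """Return names for display, truncated pandas-style with '...' in the middle."""
    if len(names) <= max_items:
        return list(names)
    half = max_items // 2
    return list(names[:half]) + ["..."] + list(names[-half:])


print(summarize_names([f"var{i}" for i in range(2000)], max_items=4))
# ['var0', 'var1', '...', 'var1998', 'var1999']
```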

Minimal Complete Verifiable Example:

```python
import numpy as np
import xarray as xr

a = np.arange(0, 2000)
b = np.core.defchararray.add("long_variable_name", a.astype(str))
data_vars = dict()
for v in b:
    data_vars[v] = xr.DataArray(
        name=v, data=[3, 4], dims=["time"], coords=dict(time=[0, 1])
    )
ds = xr.Dataset(data_vars)
```

Everything above feels fast. Printing to the console, however, takes about 13 seconds for me:

```python
print(ds)
```

Anything else we need to know?: Out-of-scope brainstorming: though printing 2000 variables is probably madness for most people, it is kind of nice to show all variables because you sometimes want to know what happened to a few other variables as well. Is there already an easy and fast way to create a subgroup of the dataset, so we don't have to rely on the dataset printing everything to the console every time?

Environment:

Output of xr.show_versions():

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
libhdf5: 1.10.4
libnetcdf: None
xarray: 0.16.2
pandas: 1.1.5
numpy: 1.17.5
scipy: 1.4.1
netCDF4: None
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2020.12.0
distributed: 2020.12.0
matplotlib: 3.3.2
cartopy: None
seaborn: 0.11.1
numbagg: None
pint: None
setuptools: 51.0.0.post20201207
pip: 20.3.3
conda: 4.9.2
pytest: 6.2.1
IPython: 7.19.0
sphinx: 3.4.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4736/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);