issue_comments

16 rows where issue = 996475523 sorted by updated_at descending

Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sort key, descending), author_association, body, reactions, performed_via_github_app, issue
950296261 https://github.com/pydata/xarray/pull/5796#issuecomment-950296261 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X844pF7F dcherian 2448579 2021-10-24T10:07:49Z 2021-10-24T10:07:49Z MEMBER

Thanks @Illviljan, this is great work!

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
919547339 https://github.com/pydata/xarray/pull/5796#issuecomment-919547339 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X842zy3L github-actions[bot] 41898282 2021-09-14T22:08:28Z 2021-10-07T18:34:21Z CONTRIBUTOR

Unit Test Results

6 files, 6 suites, 1h 0m 16s :stopwatch:
16 226 tests: 14 490 :heavy_check_mark: passed, 1 736 :zzz: skipped, 0 :x: failed
90 552 runs: 82 372 :heavy_check_mark: passed, 8 180 :zzz: skipped, 0 :x: failed

Results for commit 70cd679e.

:recycle: This comment has been updated with latest results.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
938027145 https://github.com/pydata/xarray/pull/5796#issuecomment-938027145 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8436SiJ Illviljan 14371165 2021-10-07T17:59:54Z 2021-10-07T18:06:48Z MEMBER

Here's how long dataarray_missing.py takes in this workflow with different shapes:

  • 321a761 - shape=(100, 25, 25), 3 minutes
  • 8f262f9 - shape=(365, 50, 50), 3m 28s
  • 0b7b1a0 - shape=(365, 75, 75), 4m 8s
  • 8f08506 - shape=(365, 100, 100), 5m 47s
  • d1b908a - shape=(365, 200, 400), 12m 38s
  • 56556f1 - shape=(3650, 100, 100), 19m 55s and crashes
  • 1eba65c - shape=(3650, 200, 400), 20 minutes and crashes

Changed the shape to shape=(365, 75, 75), as that seems to be around the tipping point where it starts slowing down.
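
For readers unfamiliar with asv, the shape being tuned above lives in a benchmark class whose setup builds the test data and whose time_* methods are what asv times. A minimal sketch of that pattern follows; the class and method names are illustrative and not taken from xarray's actual dataarray_missing.py:

```python
# Illustrative asv-style benchmark whose cost scales with `shape`;
# names are hypothetical, not the actual contents of dataarray_missing.py.
import numpy as np
import pandas as pd
import xarray as xr


class InterpolateNA:
    # The shape discussed above: roughly the point where runtime starts to
    # grow quickly, so the CI job stays reasonably fast.
    shape = (365, 75, 75)

    def setup(self):
        data = np.random.randn(*self.shape)
        data[data < -1] = np.nan  # punch holes so there is something to fill
        time = pd.date_range("2000-01-01", periods=self.shape[0])
        self.da = xr.DataArray(data, dims=("time", "y", "x"), coords={"time": time})

    def time_interpolate_na(self):
        # asv times the body of every method whose name starts with time_.
        self.da.interpolate_na(dim="time")
```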

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
921132600 https://github.com/pydata/xarray/pull/5796#issuecomment-921132600 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8425144 pep8speaks 24736507 2021-09-16T18:12:32Z 2021-10-07T18:02:52Z NONE

Hello @Illviljan! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers:

Comment last updated at 2021-10-07 18:02:52 UTC
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
932253077 https://github.com/pydata/xarray/pull/5796#issuecomment-932253077 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X843kQ2V Illviljan 14371165 2021-10-01T13:58:00Z 2021-10-01T14:01:25Z MEMBER

Getting a weird error now.

``` Run set -x + asv machine --yes · No information stored about machine 'fv-az231-522'. I know about nothing. I will now ask you some questions about this machine to identify it in the benchmarks. 1. machine: A unique name to identify this machine in the results. May be anything, as long as it is unique across all the machines used to benchmark this project. NOTE: If changed from the default, it will no longer match the hostname of this machine, and you may need to explicitly use the --machine argument to asv. machine [fv-az231-522]: 2. os: The OS type and version of this machine. For example, 'Macintosh OS-X 10.8'. os [Linux 5.8.0-1042-azure]: 3. arch: The generic CPU architecture of this machine. For example, 'i386' or 'x86_64'. arch [x86_64]: 4. cpu: A specific description of the CPU of this machine, including its speed and class. For example, 'Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (4 cores)'. cpu [Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz]: 5. num_cpu: The number of CPUs in the system. For example, '4'. num_cpu [2]: 6. ram: The amount of physical RAM on this machine. For example, '4GB'. + echo 'Baseline: ebfc6a3db0580cc11418e906766805ff4bf36455 (Illviljan:main)' ram [7120800]: Baseline: ebfc6a3db0580cc11418e906766805ff4bf36455 (Illviljan:main) + echo 'Contender: 7af19db79493412bddc6ad9db768ce05c75d7297 (Illviljan:asv-benchmark-cron)' ++ which mamba Contender: 7af19db79493412bddc6ad9db768ce05c75d7297 (Illviljan:asv-benchmark-cron) + export CONDA_EXE=/usr/share/miniconda3/condabin/mamba + CONDA_EXE=/usr/share/miniconda3/condabin/mamba + ASV_OPTIONS='--split --show-stderr --factor 1.5' + tee benchmarks.log + sed '/Traceback \|failed$\|PERFORMANCE DECREASED/ s/^/::error::/' + asv continuous --split --show-stderr --factor 1.5 ebfc6a3db0580cc11418e906766805ff4bf36455 7af19db79493412bddc6ad9db768ce05c75d7297 · Creating environments · Discovering benchmarks ·· Uninstalling from conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Building 7af19db7 for conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Installing 7af19db7 into conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Error running /home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/bin/python /home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py discover /home/runner/work/xarray/xarray/asv_bench/benchmarks /tmp/tmp3sm5q3yv/result.json (exit status 1) STDOUT --------> STDERR --------> Error: Traceback (most recent call last): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1315, in <module> main() File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1308, in main commands[mode](args) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1004, in main_discover list_benchmarks(benchmark_dir, fp) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 989, in list_benchmarks for benchmark in disc_benchmarks(root): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 887, in disc_benchmarks for module in disc_modules(root_name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 869, in disc_modules for item in disc_modules(name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 857, in disc_modules module = import_module(module_name) File 
"/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/combine.py", line 3, in <module> import xarray as xr File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/__init__.py", line 3, in <module> from . import testing, tutorial, ufuncs File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/tutorial.py", line 13, in <module> from .backends.api import open_dataset as _open_dataset File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/__init__.py", line 9, in <module> from .h5netcdf_ import H5NetCDFStore File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py", line 25, in <module> from .netCDF4_ import ( File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/netCDF4_.py", line 34, in <module> import netCDF4 File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/netCDF4/__init__.py", line 3, in <module> from ._netCDF4 import * ImportError: /home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/netCDF4/../../../libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3 ·· Failed: trying different commit/environment ·· Uninstalling from conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Building ebfc6a3d for conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Installing ebfc6a3d into conda-py3.8-bottleneck-dask-distributed-netcdf4-numpy-pandas-scipy ·· Error running /home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/bin/python /home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py discover /home/runner/work/xarray/xarray/asv_bench/benchmarks /tmp/tmpsjxq903z/result.json (exit status 1) STDOUT --------> STDERR --------> Error: Traceback (most recent call last): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1315, in <module> main() File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1308, in main commands[mode](args) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1004, in main_discover list_benchmarks(benchmark_dir, fp) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 989, in list_benchmarks for benchmark in disc_benchmarks(root): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 887, in disc_benchmarks for module in disc_modules(root_name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", 
line 869, in disc_modules for item in disc_modules(name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 857, in disc_modules module = import_module(module_name) File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/combine.py", line 3, in <module> import xarray as xr File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/__init__.py", line 3, in <module> from . import testing, tutorial, ufuncs File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/tutorial.py", line 13, in <module> from .backends.api import open_dataset as _open_dataset File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/__init__.py", line 9, in <module> from .h5netcdf_ import H5NetCDFStore File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py", line 25, in <module> from .netCDF4_ import ( File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/xarray/backends/netCDF4_.py", line 34, in <module> import netCDF4 File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/netCDF4/__init__.py", line 3, in <module> from ._netCDF4 import * ImportError: /home/runner/work/xarray/xarray/asv_bench/.asv/env/06e0f5dba81d6db545c0b3d92fe94a49/lib/python3.8/site-packages/netCDF4/../../../libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3 ·· Failed: trying different commit/environment ·· Uninstalling from conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Building 7af19db7 for conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Installing 7af19db7 into conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Error running /home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/bin/python /home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py discover /home/runner/work/xarray/xarray/asv_bench/benchmarks /tmp/tmpnxh5svbe/result.json (exit status 1) STDOUT --------> STDERR --------> Error: Traceback (most recent call last): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1315, in <module> main() File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1308, in main commands[mode](args) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1004, in main_discover list_benchmarks(benchmark_dir, fp) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 989, in list_benchmarks for benchmark in disc_benchmarks(root): File 
"/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 887, in disc_benchmarks for module in disc_modules(root_name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 869, in disc_modules for item in disc_modules(name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 857, in disc_modules module = import_module(module_name) File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/combine.py", line 3, in <module> import xarray as xr File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/__init__.py", line 3, in <module> from . import testing, tutorial, ufuncs File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/tutorial.py", line 13, in <module> from .backends.api import open_dataset as _open_dataset File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/__init__.py", line 9, in <module> from .h5netcdf_ import H5NetCDFStore File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py", line 25, in <module> from .netCDF4_ import ( File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/netCDF4_.py", line 34, in <module> import netCDF4 File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/netCDF4/__init__.py", line 3, in <module> from ._netCDF4 import * ImportError: /home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/netCDF4/../../../libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3 ·· Failed: trying different commit/environment ·· Uninstalling from conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Building ebfc6a3d for conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Installing ebfc6a3d into conda-py3.8-dask-distributed-netcdf4-numpy-pandas-scipy ·· Error running /home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/bin/python /home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py discover /home/runner/work/xarray/xarray/asv_bench/benchmarks /tmp/tmp_8pq_qyl/result.json (exit status 1) STDOUT --------> STDERR --------> Error: Traceback (most recent call last): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1315, in <module> main() File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 1308, in main commands[mode](args) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 
1004, in main_discover list_benchmarks(benchmark_dir, fp) File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 989, in list_benchmarks for benchmark in disc_benchmarks(root): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 887, in disc_benchmarks for module in disc_modules(root_name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 869, in disc_modules for item in disc_modules(name, ignore_import_errors=ignore_import_errors): File "/home/runner/.local/lib/python3.8/site-packages/asv/benchmark.py", line 857, in disc_modules module = import_module(module_name) File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/runner/work/xarray/xarray/asv_bench/benchmarks/combine.py", line 3, in <module> import xarray as xr File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/__init__.py", line 3, in <module> from . import testing, tutorial, ufuncs File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/tutorial.py", line 13, in <module> from .backends.api import open_dataset as _open_dataset File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/__init__.py", line 9, in <module> from .h5netcdf_ import H5NetCDFStore File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py", line 25, in <module> from .netCDF4_ import ( File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/xarray/backends/netCDF4_.py", line 34, in <module> import netCDF4 File "/home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/netCDF4/__init__.py", line 3, in <module> from ._netCDF4 import * ImportError: /home/runner/work/xarray/xarray/asv_bench/.asv/env/3aabcb0950276df5ae33a267529abdce/lib/python3.8/site-packages/netCDF4/../../../libnetcdf.so.19: undefined symbol: H5Pset_fapl_ros3 ·· Failed to build the project and import the benchmark suite. ```

Might be related to:

  • https://github.com/h5py/h5py/issues/1880
  • https://github.com/conda-forge/h5py-feedstock/issues/92

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
929633089 https://github.com/pydata/xarray/pull/5796#issuecomment-929633089 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X843aRNB Illviljan 14371165 2021-09-28T21:18:50Z 2021-09-28T21:18:50Z MEMBER

Down to 30 mins now. dataset_io is being skipped, however.

I think this is finished and ready for review.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
923263203 https://github.com/pydata/xarray/pull/5796#issuecomment-923263203 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X843B-Dj dcherian 2448579 2021-09-20T20:16:13Z 2021-09-20T20:16:13Z MEMBER

> @dcherian did you have something specific in mind when you linked to https://github.com/jaimergp/scikit-image/blob/main/.github/workflows/benchmarks-cron.yml in #4648? I think I'll remove it otherwise and just focus on

Nothing specific. Just wanted to link their configuration.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
922121144 https://github.com/pydata/xarray/pull/5796#issuecomment-922121144 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8429nO4 Illviljan 14371165 2021-09-17T22:50:25Z 2021-09-17T22:50:49Z MEMBER

Down to 101 minutes now. :)

I think it's mainly from the tests that I haven't checked yet. If someone wants to take a stab at reducing the times and removing errors for dataset_io.py and indexing.py, that would be great.

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920616472 https://github.com/pydata/xarray/pull/5796#issuecomment-920616472 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X842334Y max-sixty 5635139 2021-09-16T06:30:42Z 2021-09-16T06:30:42Z MEMBER

Hi @Illviljan — we'd be interested in seeing whether you wanted to join the core developers group — but we don't have a way of contacting you! So a comment on your PR was the best I could think of. If you'd be interested in discussing this, drop me an email at m at maximilian roos dot com. Thanks!

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920307944 https://github.com/pydata/xarray/pull/5796#issuecomment-920307944 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8422sjo Illviljan 14371165 2021-09-15T19:16:11Z 2021-09-15T19:18:58Z MEMBER

There we go. I think the workflow works as intended now; we'll see in 3-4 hours. The only things left are to improve the tests, which can be done in other PRs, and maybe to rewrite that README slightly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920238944 https://github.com/pydata/xarray/pull/5796#issuecomment-920238944 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8422btg Illviljan 14371165 2021-09-15T17:49:01Z 2021-09-15T17:49:01Z MEMBER

@dcherian did you have something specific in mind when you linked to https://github.com/jaimergp/scikit-image/blob/main/.github/workflows/benchmarks-cron.yml in #4648? I think I'll remove it otherwise and just focus on:

  • Benchmark label triggered - from scikit
  • Benchmark no label triggered - from scikit, modded by me.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920227674 https://github.com/pydata/xarray/pull/5796#issuecomment-920227674 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8422Y9a Illviljan 14371165 2021-09-15T17:37:48Z 2021-09-15T17:41:47Z MEMBER

> Do you mean skipping those that require bottleneck rather than relying on asv to handle the crash? If so, sounds good

Hmm, I was just thinking of installing all possible dependencies. But I think I've simply misunderstood the tests and why they were crashing.

The Benchmark no label triggered job seems to have succeeded but still failed? Does anyone understand what the error means?

```
[100.00%] ··· unstacking.UnstackingDask.time_unstack_slow  32.7±0.3ms

BENCHMARKS NOT SIGNIFICANTLY CHANGED.
+ grep 'Traceback \|failed\|PERFORMANCE DECREASED' benchmarks.log
+ exit 1
++ '[' 1 = 1 ']'
++ '[' -x /usr/bin/clear_console ']'
++ /usr/bin/clear_console -q
Error: Process completed with exit code 1.
```

Edit: I think I understand now. We get a bunch of Traceback errors; that's why it's erroring even though the performance didn't change.
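
For reference, the failing step shown above scans benchmarks.log for lines matching "Traceback ", "failed", or "PERFORMANCE DECREASED" and exits non-zero on a match, which is why the job fails even when asv reports no significant change. A rough Python equivalent of that check, for illustration only (the real workflow step uses grep in a shell script):

```python
# Illustrative re-implementation of the log-scan step shown in the trace above
# (grep 'Traceback \|failed\|PERFORMANCE DECREASED' benchmarks.log, then exit 1).
# The actual CI step uses grep; this sketch only mirrors its behaviour.
import re
import sys
from pathlib import Path

PROBLEM = re.compile(r"Traceback |failed|PERFORMANCE DECREASED")


def log_has_problems(path="benchmarks.log"):
    """Return True if any line of the asv log looks like a failure."""
    return any(PROBLEM.search(line) for line in Path(path).read_text().splitlines())


if __name__ == "__main__":
    # A match fails the job (exit code 1) even when asv itself prints
    # "BENCHMARKS NOT SIGNIFICANTLY CHANGED".
    sys.exit(1 if log_has_problems() else 0)
```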

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920203704 https://github.com/pydata/xarray/pull/5796#issuecomment-920203704 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8422TG4 dcherian 2448579 2021-09-15T17:02:44Z 2021-09-15T17:02:44Z MEMBER

Ah, one major problem IIUC is that we run a lot of benchmarks with and without bottleneck even if bottleneck isn't involved in the operation being benchmarked.

Some of the following should be fixed:

  • ImportError: Pandas requires version '0.12.3' or newer of 'xarray' (version '0.0.0' currently installed).
  • rolling.Rolling.time_rolling_construct: xarray.core.merge.MergeError: conflicting values for variable 'x_coords' on objects to be combined. You can skip this check by specifying compat='override'.
  • IOWriteNetCDFDaskDistributed.time_write: looks like we're spinning up multiple dask clusters. UserWarning: Port 8787 is already in use. Perhaps you already have a cluster running? Hosting the HTTP server on port 33423 instead

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
920194613 https://github.com/pydata/xarray/pull/5796#issuecomment-920194613 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8422Q41 dcherian 2448579 2021-09-15T16:53:10Z 2021-09-15T16:53:10Z MEMBER

> Long times, +240 minutes

Yeah, this is a problem.

Maybe we need to go through and mark the slow tests like in the README_ci.md document (a sketch of the helper it describes follows below):

> In that vein, a new private function is defined at benchmarks.__init__: _skip_slow. This will check if the ASV_SKIP_SLOW environment variable has been defined. If set to 1, it will raise NotImplementedError and skip the test. To implement this behavior in other tests, you can add the following attribute:

> Add more required installs? bottleneck, for example, crashes tests because it isn't installed. Add all the non-required ones as well?

Do you mean skipping those that require bottleneck rather than relying on asv to handle the crash? If so, sounds good.

> Can a normal user add and remove labels?

An alternative approach would be to use something similar to our [skip-ci] and [test-upstream] tags in the commit message. Though I think that tag needs to be on every commit you want benchmarked.
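
As noted above, here is a minimal sketch of a helper with the behaviour the quoted README_ci.md text describes: check the ASV_SKIP_SLOW environment variable and raise NotImplementedError (which asv reports as a skipped benchmark) when it is set to 1. Names and placement are illustrative; the actual implementation may differ:

```python
# Hypothetical sketch of the _skip_slow helper described in the quoted
# README_ci.md text above; the real implementation may differ.
import os


def _skip_slow():
    """Raise NotImplementedError when ASV_SKIP_SLOW=1; asv then skips the benchmark."""
    if os.environ.get("ASV_SKIP_SLOW", "0") == "1":
        raise NotImplementedError("Skipping this benchmark because ASV_SKIP_SLOW=1")


class SlowBenchmark:
    def setup(self):
        # Calling the helper in setup() lets asv mark the whole benchmark as
        # skipped when the environment variable is set, e.g. in a quick CI job.
        _skip_slow()

    def time_something_slow(self):
        pass
```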

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
919706767 https://github.com/pydata/xarray/pull/5796#issuecomment-919706767 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X8420ZyP Illviljan 14371165 2021-09-15T05:15:29Z 2021-09-15T07:44:21Z MEMBER

Here are some things besides not getting green ticks, @max-sixty:

  • Long times, +240 minutes. scikit had it down to like 13 min somehow. What's the bottleneck? Which tests are super slow?
  • A lot of printed errors, which makes the report very messy; go through each test and fix those. Not necessary for this PR though.
  • asv.conf.json has been moved. If you have any idea how to trigger this job inside asv_bench, that would be nice.
  • Should we align benchmark folder names? Numpy?
  • Add more required installs? bottleneck, for example, crashes tests because it isn't installed. Add all the non-required ones as well?
  • Can a normal user add and remove labels? I set this up so that I don't have to deal with the nightmare of setting up asv locally. Would be nice to avoid having to ask you over and over again to add/remove labels to trigger it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523
919599539 https://github.com/pydata/xarray/pull/5796#issuecomment-919599539 https://api.github.com/repos/pydata/xarray/issues/5796 IC_kwDOAMm_X842z_mz max-sixty 5635139 2021-09-15T00:15:49Z 2021-09-15T00:15:49Z MEMBER

Awesome!! Thanks a lot, @Illviljan!

On how we run these — I would be fine with the labeled approach that scikit-learn uses.

What else do you think is needed for this? It already looks excellent, thank you.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add asv benchmark jobs to CI 996475523

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);