html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/4672#issuecomment-745413055,https://api.github.com/repos/pydata/xarray/issues/4672,745413055,MDEyOklzc3VlQ29tbWVudDc0NTQxMzA1NQ==,10194086,2020-12-15T16:40:28Z,2020-12-15T16:40:28Z,MEMBER,"Ok, let's get this in. matplotlib-base and nodefaults should be quite uncontroversial. If mamba causes problems it can quickly be removed.
I plan to add a whats new entry concerning the CI speed-up (#4672, #4685 & #4694) in #4694
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744749458,https://api.github.com/repos/pydata/xarray/issues/4672,744749458,MDEyOklzc3VlQ29tbWVudDc0NDc0OTQ1OA==,10194086,2020-12-14T22:25:47Z,2020-12-14T22:25:47Z,MEMBER,See #4694,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744710105,https://api.github.com/repos/pydata/xarray/issues/4672,744710105,MDEyOklzc3VlQ29tbWVudDc0NDcxMDEwNQ==,10194086,2020-12-14T21:05:38Z,2020-12-14T21:05:38Z,MEMBER,"I thought that `pytest-xdist` was not an option as the plot tests are not thread safe (as mpl isn't). But looking again I _think_ that `pytest-xdist` actually uses multiprocessing and not multithreading, so this might actually be worth a try.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744602415,https://api.github.com/repos/pydata/xarray/issues/4672,744602415,MDEyOklzc3VlQ29tbWVudDc0NDYwMjQxNQ==,2448579,2020-12-14T17:46:46Z,2020-12-14T17:46:46Z,MEMBER,"Not sure how much it would help on CI, but it would be nice to get `pytest-xdist` working again for local testing (#3263)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744601503,https://api.github.com/repos/pydata/xarray/issues/4672,744601503,MDEyOklzc3VlQ29tbWVudDc0NDYwMTUwMw==,2448579,2020-12-14T17:45:11Z,2020-12-14T17:45:11Z,MEMBER,"> using mamba and matplotlib-base should speed up the installation step by 2 to 5 minutes. If we are fine switching to the faster but maybe not-as-mature mamba this can be merged on green.
Seems like a great improvement to me!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744445362,https://api.github.com/repos/pydata/xarray/issues/4672,744445362,MDEyOklzc3VlQ29tbWVudDc0NDQ0NTM2Mg==,10194086,2020-12-14T13:37:36Z,2020-12-14T13:37:36Z,MEMBER,"I wasn't really able to get to the bottom of this. Still, using mamba and matplotlib-base should speed up the installation step by 2 to 5 minutes. If we are fine switching to the faster but maybe not-as-mature mamba this can be merged on green.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-744040595,https://api.github.com/repos/pydata/xarray/issues/4672,744040595,MDEyOklzc3VlQ29tbWVudDc0NDA0MDU5NQ==,10194086,2020-12-13T17:29:32Z,2020-12-13T17:29:32Z,MEMBER,"> Why do we see that much of a speed-up once we downgrade numba on azure pipelines?
Sometimes it also works fine with numba 0.52... so unfortunately I don't know. My suspicion is that we get different CPUs by chance. I added a new step to our CI: `cat /proc/cpuinfo` (this worked with gitbash on windows). Maybe it reveals something.
On my dualboot machine the test suite takes 23 min on windows and 15 min on linux. That's already quite a difference, but not as large as on azure, where the windows tests seem to take about twice as long as the linux tests.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743924206,https://api.github.com/repos/pydata/xarray/issues/4672,743924206,MDEyOklzc3VlQ29tbWVudDc0MzkyNDIwNg==,14808389,2020-12-13T00:13:02Z,2020-12-13T17:09:36Z,MEMBER,"> locally on windows I find no large difference between numba 0.51 and 0.52
that's really strange. Why do we see that much of a speed-up once we downgrade `numba` on azure pipelines?
> there are about 750 xfailed tests in `test_units.py`
yes, I will have to update those. I think until the index refactor we can safely `skip` all tests that rely on units in indexes, which should improve the situation, and there might also be a few tests that were fixed by `pint`.
Edit: see #4685
> What is the difference between `pytest.mark.xfail` and `pytest.xfail`?
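An illustrative sketch of the two spellings (hypothetical tests, not from the xarray suite):

```python
import pytest

# declarative form: the mark is attached to the test function up front
@pytest.mark.xfail(reason='known bug')
def test_decorated():
    assert 1 + 1 == 3

# imperative form: the test body decides at runtime whether to xfail
def test_imperative(value=0):
    if value == 0:  # condition only known while the test runs
        pytest.xfail('value not supported yet')
    assert value > 0
```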
I think `pytest.mark.xfail` is the official way to decorate test functions while `pytest.xfail` can be used in the function body to programmatically mark the test as expected failure (which allows more control than the mark)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743922542,https://api.github.com/repos/pydata/xarray/issues/4672,743922542,MDEyOklzc3VlQ29tbWVudDc0MzkyMjU0Mg==,10194086,2020-12-12T23:59:54Z,2020-12-12T23:59:54Z,MEMBER,"locally on windows I find no large difference between numba 0.51 and 0.52, so that does not seem to be the root cause...
@keewis there are about 750 xfailed tests in `test_units.py`. `xfail` is the correct category, but xfailed tests take much longer than skipped ones. Locally the tests take about 6 min 30 s using `xfail` but only 45 s using `skip`. On azure the difference is probably even bigger. Would it be an option to use `skip` instead? Of course this would have to be done carefully, e.g. checking the xpassing tests etc...
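A minimal sketch of where the cost difference comes from (hypothetical tests, not from `test_units.py`): by default `xfail` still executes the test body before reporting, while `skip` returns without entering it.

```python
import pytest

# xfail runs the (potentially slow) body, then reports an expected failure
@pytest.mark.xfail(reason='units in indexes not supported')
def test_slow_xfail():
    ...  # body is executed

# skip never enters the body, so any slow setup inside it is avoided
@pytest.mark.skip(reason='units in indexes not supported')
def test_fast_skip():
    ...  # body is not executed
```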
What is the difference between `pytest.mark.xfail` and `pytest.xfail`?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743780782,https://api.github.com/repos/pydata/xarray/issues/4672,743780782,MDEyOklzc3VlQ29tbWVudDc0Mzc4MDc4Mg==,10194086,2020-12-12T16:33:39Z,2020-12-12T16:33:39Z,MEMBER,"Thanks for figuring this out. Still, I think I have to test this locally; the time the CI takes on azure is very inconsistent.
Yes, I think this PR is helpful anyway and should bring down the ci time a bit. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743488518,https://api.github.com/repos/pydata/xarray/issues/4672,743488518,MDEyOklzc3VlQ29tbWVudDc0MzQ4ODUxOA==,14808389,2020-12-12T00:01:42Z,2020-12-12T00:01:42Z,MEMBER,"pinning `numba` seems to have fixed the issue. It definitely is important to speed up our CI, though, waiting more than 30 minutes for the CI to finish is really not ideal.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743462860,https://api.github.com/repos/pydata/xarray/issues/4672,743462860,MDEyOklzc3VlQ29tbWVudDc0MzQ2Mjg2MA==,14808389,2020-12-11T22:35:55Z,2020-12-11T22:40:25Z,MEMBER,"I'm a bit confused, the windows CI used to take about as long as the macOS CI to complete. The [last run](https://dev.azure.com/xarray/xarray/_build/results?buildId=4371&view=logs&jobId=d0e8f4b8-2f67-5548-290c-4d6f15a1cbca) for which that was true was about a week ago, does anyone know what changed since then?
Edit: maybe because of the release of `numba=0.52.0` to `conda-forge`? If so, could you try pinning `numba`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-743445866,https://api.github.com/repos/pydata/xarray/issues/4672,743445866,MDEyOklzc3VlQ29tbWVudDc0MzQ0NTg2Ng==,10194086,2020-12-11T21:52:05Z,2020-12-11T21:52:05Z,MEMBER,"Here is what I learned:
* The same tests are slow on windows and linux; the windows ones just take about twice as long.
* The following tests seem to be slow:
* `xarray/tests/test_distributed.py`
* many tests with sparse, e.g. `xarray/tests/test_dataset.py::TestDataset::test_unstack_sparse`
* plotting tests, especially with `FacetGrid`
I am not sure what takes long in `xarray/tests/test_distributed.py`: writing the files or creating the cluster. If it is the latter, it might be possible to open the cluster only once per module (but I don't know if that actually works or if it has to be closed every time).
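A rough sketch of the once-per-module idea via a module-scoped fixture (names are hypothetical, not the actual `test_distributed.py` setup):

```python
import pytest

def make_cluster():
    # hypothetical stand-in for an expensive cluster setup,
    # e.g. starting a local distributed scheduler
    return {'started': True}

def close_cluster(resource):
    resource['started'] = False

@pytest.fixture(scope='module')
def cluster():
    # scope='module': created once per test module and torn down
    # after the last test in that module has run
    resource = make_cluster()
    yield resource
    close_cluster(resource)

def test_uses_cluster(cluster):
    assert cluster['started']
```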
https://github.com/pydata/xarray/blob/76d5c0c075628475b555997b82c55dd18a34936e/xarray/tests/test_distributed.py#L118-L119
**Windows py37**
```
9.52s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF3_CLASSIC]
9.12s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[False-True]
8.66s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[False-False]
8.50s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF4]
8.48s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[scipy-NETCDF3_64BIT]
8.34s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF4_CLASSIC]
8.34s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[h5netcdf-NETCDF4]
7.72s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF4]
7.72s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[h5netcdf-NETCDF4]
7.72s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[True-False]
7.71s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[True-True]
7.42s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF4_CLASSIC]
7.35s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF3_CLASSIC]
6.45s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[scipy-NETCDF3_64BIT]
6.26s call xarray/tests/test_dataset.py::TestDataset::test_unstack_sparse
6.06s call xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_groupby_bins
5.46s call xarray/tests/test_plot.py::TestDatasetScatterPlots::test_facetgrid_hue_style
5.35s call xarray/tests/test_interp.py::test_interpolate_chunk_advanced[linear]
5.10s call xarray/tests/test_distributed.py::test_dask_distributed_rasterio_integration_test
5.00s call xarray/tests/test_plot.py::TestFacetedLinePlots::test_facetgrid_shape
4.40s call xarray/tests/test_interp.py::test_interpolate_chunk_advanced[nearest]
```
**Linux py37**
```
5.78s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[False-True]
5.62s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[False-False]
5.55s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[h5netcdf-NETCDF4]
5.34s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[True-False]
5.31s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF4]
5.30s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF4_CLASSIC]
5.15s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[netcdf4-NETCDF3_CLASSIC]
5.10s call xarray/tests/test_distributed.py::test_dask_distributed_rasterio_integration_test
4.98s call xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[True-True]
4.91s call xarray/tests/test_distributed.py::test_dask_distributed_cfgrib_integration_test
4.87s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF3_CLASSIC]
4.82s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF4_CLASSIC]
4.77s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[scipy-NETCDF3_64BIT]
4.75s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[h5netcdf-NETCDF4]
4.67s call xarray/tests/test_distributed.py::test_dask_distributed_read_netcdf_integration_test[netcdf4-NETCDF4]
4.32s call xarray/tests/test_distributed.py::test_dask_distributed_netcdf_roundtrip[scipy-NETCDF3_64BIT]
3.55s call xarray/tests/test_dataset.py::TestDataset::test_unstack_sparse
3.45s call properties/test_pandas_roundtrip.py::test_roundtrip_dataset
3.07s call xarray/tests/test_plot.py::TestFacetedLinePlots::test_facetgrid_shape
2.70s call xarray/tests/test_plot.py::TestDatasetScatterPlots::test_facetgrid_hue_style
2.67s call xarray/tests/test_sparse.py::TestSparseDataArrayAndDataset::test_groupby_bins
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-742870408,https://api.github.com/repos/pydata/xarray/issues/4672,742870408,MDEyOklzc3VlQ29tbWVudDc0Mjg3MDQwOA==,5635139,2020-12-10T23:40:49Z,2020-12-10T23:40:49Z,MEMBER,"> de-parametrize tests with a slow setup (if possible)
If I understand the proposal correctly — a different approach is to change the scope of the parameterizations rather than remove them — and then they only run once; e.g. `scope=""module""`","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-742827949,https://api.github.com/repos/pydata/xarray/issues/4672,742827949,MDEyOklzc3VlQ29tbWVudDc0MjgyNzk0OQ==,10194086,2020-12-10T22:02:34Z,2020-12-10T22:02:34Z,MEMBER,"No... it failed at 99%. I don't entirely understand it; the tests were well under way when I left. So I'd really be interested to get the timings of the tests to see what takes so long...
> I just tried running the windows CI via Github actions
Yes, that's of course another good alternative. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240
https://github.com/pydata/xarray/pull/4672#issuecomment-742750024,https://api.github.com/repos/pydata/xarray/issues/4672,742750024,MDEyOklzc3VlQ29tbWVudDc0Mjc1MDAyNA==,13301940,2020-12-10T19:39:33Z,2020-12-10T19:39:33Z,MEMBER,"> other ideas?
I just tried running the windows CI via GitHub Actions, and I noticed some improvements: the entire run takes ~45 minutes.


If interested, here's the [workflow configuration file](https://github.com/andersy005/xarray/actions/runs/413723011/workflow) I am using... Also, I am happy to submit a PR if need be.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,761270240