id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
904149592,MDExOlB1bGxSZXF1ZXN0NjU1MzIyNTU1,5389,ignore this,56925856,closed,0,,,0,2021-05-27T20:16:22Z,2021-05-27T20:22:43Z,2021-05-27T20:16:45Z,CONTRIBUTOR,,0,pydata/xarray/pulls/5389,"ignore this, I did the branching wrong...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5389/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
882876804,MDExOlB1bGxSZXF1ZXN0NjM2Mzg1Mjc2,5284,Dask-friendly nan check in xr.corr() and xr.cov(),56925856,closed,0,,,8,2021-05-09T22:06:03Z,2021-05-27T17:31:07Z,2021-05-27T17:31:07Z,CONTRIBUTOR,,0,pydata/xarray/pulls/5284,"Was reading the discussion [here](https://github.com/pydata/xarray/issues/4804) and thought I'd draft a PR to implement some of the changes people suggested. It seems like the main problem is that currently in `computation.py`, `if not valid_values.all()` is not a lazy operation, and so can be a bottleneck for very large dataarrays. To get around this, I've lifted some neat tricks from #4559 so that `valid_values` remains a dask array. 
--- - [x] Closes #4804 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5284/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
625064501,MDExOlB1bGxSZXF1ZXN0NDIzMzU0MjIw,4096,Corrcov typo fix,56925856,closed,0,,,4,2020-05-26T17:44:07Z,2020-05-27T03:09:37Z,2020-05-26T19:03:25Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4096," Fixing typo in recent PR #4089 - [x] Closes https://github.com/pydata/xarray/pull/4089#issuecomment-634157768 - [x] Passes `isort -rc . && black . && mypy . && flake8`","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4096/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
623751213,MDExOlB1bGxSZXF1ZXN0NDIyMzMzMTg0,4089,xr.cov() and xr.corr(),56925856,closed,0,,,20,2020-05-23T22:09:07Z,2020-05-26T18:31:31Z,2020-05-25T16:55:33Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4089,"**PR** for the `xr.cov()` and `xr.corr()` functionality which others have been working on. Most code adapted from @r-beer in PR #3550. TODO: - [x] ~Write a reasonable set of tests, maybe not using `pandas` as a benchmark? (See https://github.com/pydata/xarray/issues/3784#issuecomment-633145887) Will probably need some help with this~ CHECKLIST: - [x] Closes #3784, #3550, #2652, #1115 - [x] Tests added - [x] Passes `isort -rc . && black . && mypy . 
&& flake8` ~(something wrong with docs though??)~ - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4089/reactions"", ""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
618828102,MDExOlB1bGxSZXF1ZXN0NDE4NDc4MjY5,4064,Auto chunk,56925856,closed,0,,,23,2020-05-15T09:25:13Z,2020-05-25T20:38:53Z,2020-05-25T19:23:45Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4064,"Adding `chunks='auto'` option to `Dataset.chunk()`. - [x] Closes #4055 - [x] Tests added in `test_dask.py` - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Updated `whats-new.rst` for changes ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4064/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull