# pull_requests: 636385276
field | value
---|---
id | 636385276
node_id | MDExOlB1bGxSZXF1ZXN0NjM2Mzg1Mjc2
number | 5284
state | closed
locked | 0
title | Dask-friendly nan check in xr.corr() and xr.cov()
user | 56925856
body | Was reading the discussion [here](https://github.com/pydata/xarray/issues/4804) and thought I'd draft a PR to implement some of the changes people suggested. It seems like the main problem is that, currently in `computation.py`, `if not valid_values.all()` is not a lazy operation and so can be a bottleneck for very large DataArrays. To get around this, I've lifted some neat tricks from #4559 so that `valid_values` remains a dask array. --- - [x] Closes #4804 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
created_at | 2021-05-09T22:06:03Z
updated_at | 2021-05-27T17:31:07Z
closed_at | 2021-05-27T17:31:07Z
merged_at | 2021-05-27T17:31:07Z
merge_commit_sha | 3b81a863f6bf96306f90588e4dcef7e54c3af4ea
assignee |
milestone |
draft | 0
head | ccb5f67b5c8c35e5439a4e7ce1ad69d9400d29da
base | a6a1e48b57499f91db7e7c15593aadc7930020e8
author_association | CONTRIBUTOR
auto_merge |
repo | 13221727
url | https://github.com/pydata/xarray/pull/5284
merged_by |
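The body above describes the fix at a high level: an eager `if not valid_values.all()` check forces the whole validity mask to be computed up front, while the PR keeps the mask lazy so nan handling stays in the dask task graph. A minimal sketch of that idea (the array shapes, chunk sizes, and unconditional `where()` masking here are illustrative assumptions, not the exact diff merged in #5284):

```python
import dask.array as dsk  # aliased to avoid clashing with the da_a/da_b names below
import xarray as xr

# Two chunked DataArrays standing in for the inputs to xr.corr()/xr.cov().
da_a = xr.DataArray(
    dsk.random.random((4000, 4000), chunks=(1000, 1000)), dims=("x", "y")
)
da_b = xr.DataArray(
    dsk.random.random((4000, 4000), chunks=(1000, 1000)), dims=("x", "y")
)

# Positions where both inputs are non-nan. With dask-backed inputs this
# mask is itself a lazy dask array; nothing has been computed yet.
valid_values = da_a.notnull() & da_b.notnull()

# Eager version (the bottleneck described in the body): evaluating
# `if not valid_values.all():` materializes the entire mask just to
# decide whether masking is needed.

# Lazy alternative: apply the mask unconditionally with where(), so the
# nan check becomes part of the task graph and only runs at compute time.
da_a = da_a.where(valid_values)
da_b = da_b.where(valid_values)

# Downstream reductions (e.g. the covariance calculation) now trigger the
# mask evaluation only when the final result is actually computed.
```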
## Links from other tables
- 1 row from pull_requests_id in labels_pull_requests