pull_requests: 195508617
id: 195508617
node_id: MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3
number: 2236
state: closed
locked: 0
title: Refactor nanops
user: 6815844
created_at: 2018-06-18T12:27:31Z
updated_at: 2018-09-26T12:42:55Z
closed_at: 2018-08-16T06:59:33Z
merged_at: 2018-08-16T06:59:33Z
merge_commit_sha: 0b9ab2d12ae866a27050724d94facae6e56f5927
draft: 0
head: b72a1c852add254a4cdd49408fe4e9c934ceece6
base: 4df048c146b8da7093faf96b3e59fb4d56945ec5
author_association: MEMBER
repo: 13221727
url: https://github.com/pydata/xarray/pull/2236

body:

- [x] Closes #2230
- [x] Tests added
- [x] Tests passed
- [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API

In #2230, adding a `min_count` keyword to our reduction methods was discussed, but our `duck_array_ops` module has become messy (mainly due to the nan-aggregation methods for dask, bottleneck and numpy), and it is getting hard to maintain. This PR refactors them by moving the nan-aggregation methods into a `nanops` module. I still need to handle more edge cases, but I would appreciate any comments on the current implementation.

Note: in this implementation, **bottleneck is not used when `skipna=False`**. bottleneck is advantageous when `skipna=True`, since numpy needs to copy the entire array once in that case, but numpy's method is still fine when `skipna=False`.
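The `min_count` semantics discussed in the PR body can be sketched with plain numpy. This is a hypothetical illustration of the behavior only, not xarray's actual `nanops` code; the function name `nansum_min_count` is made up for this example:

```python
import numpy as np

def nansum_min_count(a, min_count=0):
    """Sum over `a` ignoring NaNs (skipna=True semantics), but return NaN
    if fewer than `min_count` valid (non-NaN) values are present.

    Hypothetical sketch, not xarray's implementation.
    """
    valid = np.count_nonzero(~np.isnan(a))
    if valid < min_count:
        return np.nan
    return np.nansum(a)  # np.nansum treats NaNs as zero

print(nansum_min_count(np.array([1.0, np.nan, 2.0]), min_count=2))  # 3.0
print(nansum_min_count(np.array([np.nan, np.nan]), min_count=1))    # nan
```

Without a `min_count`-style guard, `np.nansum` over an all-NaN array silently returns `0.0`, which is why the reduction needs to count valid elements before deciding whether to return NaN.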