issues: 333248242

id: 333248242
node_id: MDExOlB1bGxSZXF1ZXN0MTk1NTA4NjE3
number: 2236
title: Refactor nanops
user: 6815844
state: closed
locked: 0
assignee: (none)
milestone: (none)
comments: 19
created_at: 2018-06-18T12:27:31Z
updated_at: 2018-09-26T12:42:55Z
closed_at: 2018-08-16T06:59:33Z
author_association: MEMBER
active_lock_reason: (none)
draft: 0
pull_request: pydata/xarray/pulls/2236

body:
  • [x] Closes #2230
  • [x] Tests added
  • [x] Tests passed
  • [x] Fully documented, including whats-new.rst for all changes and api.rst for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)

In #2230, adding a min_count keyword to our reduction methods was discussed, but our duck_array_ops module is becoming messy (mainly because of the nan-aggregation methods for dask, bottleneck, and numpy) and is getting hard to maintain.

I refactored this by moving the nan-aggregation methods into a new nanops module.

I still need to handle some edge cases, but I would appreciate any comments on the current implementation.
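
For context, here is a minimal sketch of what a min_count-aware nan-sum can look like on plain numpy. It is illustrative only; the standalone `nansum` helper and its exact signature are assumptions for this sketch, not the actual nanops code:

```python
import numpy as np

def nansum(a, axis=None, min_count=None):
    """Sum over `axis`, skipping NaNs (illustrative sketch only)."""
    mask = np.isnan(a)
    result = np.nansum(a, axis=axis)
    if min_count is not None:
        # Count the valid (non-NaN) elements along the reduced axis and
        # null out positions where fewer than min_count were present.
        valid_count = (~mask).sum(axis=axis)
        null = valid_count < min_count
        if np.ndim(result) == 0:
            return np.nan if null else result
        result = result.astype(float)
        result[null] = np.nan
    return result
```

With this, e.g. `nansum(np.array([[1.0, np.nan], [np.nan, np.nan]]), axis=1, min_count=1)` gives `[1.0, nan]` rather than `[1.0, 0.0]`.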

Note: in my implementation, bottleneck is not used when skipna=False. bottleneck is advantageous when skipna=True, since numpy's nan-functions need to copy the entire array once, but plain numpy is still fine when skipna=False.
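
To illustrate that dispatch rule, here is a minimal sketch; the helper name `_sum` and the float-dtype guard are assumptions for illustration, not the actual xarray internals:

```python
import numpy as np

try:
    import bottleneck as bn
    _USE_BOTTLENECK = True
except ImportError:
    _USE_BOTTLENECK = False

def _sum(a, axis=None, skipna=True):
    # bottleneck only pays off on the NaN-skipping path: numpy's nansum
    # has to materialize a copy of the array with NaNs replaced, while
    # bottleneck skips them in a single pass. For skipna=False, plain
    # numpy is already efficient, so bottleneck is not used.
    if skipna:
        if _USE_BOTTLENECK and isinstance(a, np.ndarray) and a.dtype.kind == "f":
            return bn.nansum(a, axis=axis)
        return np.nansum(a, axis=axis)
    return np.sum(a, axis=axis)
```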

reactions:

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2236/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
performed_via_github_app: (none)
state_reason: (none)
repo: 13221727
type: pull

Links from other tables

  • 0 rows from issues_id in issues_labels
  • 19 rows from issue in issue_comments