issues: 391865060
field | value |
---|---|
id | 391865060 |
node_id | MDExOlB1bGxSZXF1ZXN0MjM5MjYzOTU5 |
number | 2616 |
title | API for N-dimensional combine |
user | 35968931 |
state | closed |
locked | 0 |
assignee | |
milestone | |
comments | 25 |
created_at | 2018-12-17T19:51:32Z |
updated_at | 2019-06-25T16:18:29Z |
closed_at | 2019-06-25T15:14:34Z |
author_association | MEMBER |
active_lock_reason | |
draft | 0 |
pull_request | pydata/xarray/pulls/2616 |
body | Continues the discussion from #2553 about how the API for loading and combining data from multiple datasets should work (ultimately part of the solution to #2159). @shoyer, this is for you to see how I envisaged the API would look, based on our discussion in #2553. For now you can ignore all the changes except the ones to the docstrings. Feedback from anyone else is also encouraged, as really the point of this is to make the API as clear as possible to someone who hasn't delved into the code behind it. It makes sense to first work out the API, then change the internal implementation to match, using the internal functions developed in #2553 (see the sketch after this record). Therefore the tasks include: |
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/2616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
performed_via_github_app | |
state_reason | |
repo | 13221727 |
type | pull |
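
For context, here is a minimal sketch of the kind of N-dimensional combine API this PR was working toward. It is written against the functions that eventually shipped in xarray (`combine_nested` and `combine_by_coords`); the names and signatures proposed at the time of this PR may have differed, and the grid of example datasets below is invented purely for illustration.

```python
import numpy as np
import xarray as xr


def make_tile(x0, y0):
    """Build one 2x2 tile of a larger grid (illustrative data only)."""
    return xr.Dataset(
        {"temperature": (("x", "y"), np.random.rand(2, 2))},
        coords={"x": [x0, x0 + 1], "y": [y0, y0 + 1]},
    )


# A nested list-of-lists mirrors the physical layout of the tiles:
# the outer list is concatenated along "x", the inner lists along "y".
grid = [
    [make_tile(0, 0), make_tile(0, 2)],
    [make_tile(2, 0), make_tile(2, 2)],
]
combined = xr.combine_nested(grid, concat_dim=["x", "y"])
print(combined.sizes)  # a single 4x4 dataset

# Alternatively, flatten the list and let xarray infer the layout
# from the coordinate values instead of from list nesting.
flat = [tile for row in grid for tile in row]
combined_auto = xr.combine_by_coords(flat)
```

The two calls illustrate the trade-off this API discussion revolves around: the nested-list form makes the intended dimension order explicit, while the by-coordinates form infers the layout from the data itself.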