issues: 771382653
id: 771382653
node_id: MDU6SXNzdWU3NzEzODI2NTM=
number: 4714
title: Allow sel's method and tolerance to vary per-dimension
user: 18488
state: open
locked: 0
comments: 6
created_at: 2020-12-19T13:37:36Z
updated_at: 2020-12-19T16:58:20Z
author_association: NONE

Imagine some data like this:
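The issue's original code blocks were dropped by this export. A plausible reconstruction of the example data (the exact values and labels are assumptions, chosen to match the prose below): sensors observed at times 0, 2 and 4, so there is no observation at time 1, and the sensor labels skip "B":

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the issue's data: sensors "A" and "C"
# (so the name "B" is bogus) observed at times 0, 2 and 4
# (so time 1 has no observation).
da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"sensor": ["A", "C"], "time": [0, 2, 4]},
    dims=["sensor", "time"],
)
print(da)
```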
Let's say we now want to sample these sensors at some arbitrary points. We can use vectorized indexing to do this:
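Continuing the reconstruction (the query points are made up), pointwise lookups can be expressed with `DataArray` indexers that share a common dimension:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"sensor": ["A", "C"], "time": [0, 2, 4]},
    dims=["sensor", "time"],
)

# Vectorized (pointwise) indexing: indexers that share a dimension
# ("points") are looked up pairwise rather than as an outer product.
points = dict(
    sensor=xr.DataArray(["A", "C", "B"], dims="points"),
    time=xr.DataArray([0, 2, 1], dims="points"),
)
try:
    da.sel(**points)  # exact lookup: time 1 and sensor "B" are missing
except KeyError as err:
    print("exact lookup fails:", err)

# method="ffill" pads lookups on *every* dimension back to the
# previous label, so the call now succeeds:
print(da.sel(**points, method="ffill").values)  # -> [0 4 0]
```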
This fails because we are sampling one of our sensors at time 1, where we don't have any observations. We can make the lookup succeed by adding `method="ffill"`, which returns something like `array([0, 0, 2, 0, 1])`. The problem is that the bogus sensor "B" is now getting a value ffilled from sensor "A"'s time 0 observation, which doesn't make a lot of sense, because sensor names are arbitrary. What we really want to do is apply the ffill only down the "time" array, so that the lookup of the bogus sensor name still fails. So, it would be nice if we could supply a per-dimension `method` (and `tolerance`).
From an implementation point of view, this looks like an easy addition in …
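The proposed per-dimension form might accept a mapping, e.g. `da.sel(..., method={"time": "ffill"})` (a hypothetical spelling; the issue's exact proposal was lost in this export). The intended semantics can be sketched with the standard library alone: `"ffill"` resolves a missing label to the previous one, while a dimension without a method keeps exact matching:

```python
import bisect

def sel_index(labels, target, method=None):
    """Resolve one label to a position: exact match by default, or pad
    back to the previous label when method == "ffill".
    (Hypothetical helper mirroring pandas' lookup semantics.)"""
    if method == "ffill":
        pos = bisect.bisect_right(labels, target) - 1
        if pos < 0:
            raise KeyError(target)
        return pos
    if target in labels:  # exact match (method=None)
        return labels.index(target)
    raise KeyError(target)

sensors, times = ["A", "C"], [0, 2, 4]
# With a per-dimension method={"time": "ffill"}, time 1 pads back
# to the position of time 0 ...
print(sel_index(times, 1, method="ffill"))  # -> 0
# ... while the bogus sensor "B" still fails loudly:
try:
    sel_index(sensors, "B")
except KeyError:
    print("bogus sensor rejected")
```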
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/4714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: 13221727
type: issue