html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/3262#issuecomment-549088791,https://api.github.com/repos/pydata/xarray/issues/3262,549088791,MDEyOklzc3VlQ29tbWVudDU0OTA4ODc5MQ==,1217238,2019-11-02T23:01:30Z,2019-11-02T23:01:30Z,MEMBER,"No worries! You were a great help already!
On Sat, Nov 2, 2019 at 3:01 PM Noah D Brenowitz wrote:
> Unfortunately, I don’t think I have much time now to contribute to a
> general purpose solution leveraging xarray’s built-in indexing. So feel
> free to add to or close this PR. To be successful, I would need to study
> xarray’s indexing internals more since I don’t think it is as easily
> implemented as a routine calling DataArray methods. Some custom numba code
> I wrote fits in my brain much better, and is general enough for my purposes
> when wrapped with xr.apply_ufunc. I encourage someone else to pick up
> where I left off, or we could close this PR.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,484863660
https://github.com/pydata/xarray/pull/3262#issuecomment-549084085,https://api.github.com/repos/pydata/xarray/issues/3262,549084085,MDEyOklzc3VlQ29tbWVudDU0OTA4NDA4NQ==,1217238,2019-11-02T21:46:32Z,2019-11-02T21:46:32Z,MEMBER,"One missing part of the algorithm I wrote in https://github.com/pydata/xarray/pull/3262#issuecomment-525154116 was looping over all index/weight combinations. I recently wrote a version of this for another project that might be a good starting point here:
```python
import itertools

import numpy as np


def prod(items):
    out = 1
    for item in items:
        out *= item
    return out


def index_by_linear_interpolation(array, float_indices):
    # per dimension, collect the bracketing integer indices and their weights
    all_indices_and_weights = []
    for origin in float_indices:
        lower = np.floor(origin)
        upper = np.ceil(origin)
        l_index = lower.astype(np.int32)
        u_index = upper.astype(np.int32)
        # the upper neighbor is weighted by the fractional part, the lower
        # neighbor by its complement
        u_weight = origin - lower
        l_weight = 1 - u_weight
        all_indices_and_weights.append(
            ((l_index, l_weight), (u_index, u_weight))
        )
    # accumulate weight * value over all 2 ** ndim corner combinations
    out = 0
    for items in itertools.product(*all_indices_and_weights):
        indices, weights = zip(*items)
        # wrap out-of-range indices periodically
        indices = tuple(index % size for index, size in zip(indices, array.shape))
        out += prod(weights) * array[indices]
    return out
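

# Hypothetical standalone check of the corner-enumeration idea (grid values
# are illustrative, not from the PR): bilinear interpolation at the center of
# a 2x2 grid should average its four corners.
corners_per_dim = [((0, 0.5), (1, 0.5)), ((0, 0.5), (1, 0.5))]  # (index, weight)
grid = np.array([[0.0, 1.0], [2.0, 3.0]])
center = sum(
    np.prod([w for _, w in corners]) * grid[tuple(i for i, _ in corners)]
    for corners in itertools.product(*corners_per_dim)
)
# center == (0.0 + 1.0 + 2.0 + 3.0) / 4 == 1.5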
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,484863660
https://github.com/pydata/xarray/pull/3262#issuecomment-525154116,https://api.github.com/repos/pydata/xarray/issues/3262,525154116,MDEyOklzc3VlQ29tbWVudDUyNTE1NDExNg==,1217238,2019-08-27T06:12:14Z,2019-08-27T06:12:14Z,MEMBER,"Feel free to refactor as you see fit, but it may still make sense to do indexing at the Variable rather than Dataset level. That potentially would let you avoid redundant operations on the entire Dataset object.
Take a look at the `_localize()` helper function in `missing.py` for an example of how to work with the underlying index. I think something like the following helper function could do the trick:
```python
def linear_interp(var, indexes_coords):
    # per dimension, (index, weight) pairs for the lower and upper neighbors
    indices_and_weights = {}
    for dim, [x, new_x] in indexes_coords.items():
        index = x.to_index()
        # ideally should precompute these, rather than calling get_indexer_nd
        # for each variable separately
        lower = get_indexer_nd(index, new_x.values, method=""ffill"")
        upper = get_indexer_nd(index, new_x.values, method=""bfill"")
        # fractional position of new_x between its neighbors; guard the
        # zero-span case where new_x hits an index value exactly
        span = index.values[upper] - index.values[lower]
        frac = (new_x.values - index.values[lower]) / np.where(span == 0, 1, span)
        indices_and_weights[dim] = [(lower, 1 - frac), (upper, frac)]
    result = 0
    # sum weight * value over all 2 ** ndim lower/upper combinations
    for combo in itertools.product(*indices_and_weights.values()):
        indexes = dict(zip(indices_and_weights, (idx for idx, _ in combo)))
        weight = 1
        for _, w in combo:
            weight = weight * w
        result += weight * var.isel(**indexes)
    return result
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,484863660