issue_comments: 386658665
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/2104#issuecomment-386658665 | https://api.github.com/repos/pydata/xarray/issues/2104 | 386658665 | MDEyOklzc3VlQ29tbWVudDM4NjY1ODY2NQ== | 1217238 | 2018-05-04T16:40:03Z | 2018-05-04T16:40:03Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 320275317 |

body:

First of all, this is awesome! One question: since I think this new interpolation method will be used quite frequently (more often than [...]). We should also consider adding [...]
Yes, I agree. Interpolation should be vectorized similarly to [...]. In particular, in the long term I think we should aim to make [...]
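
As a concrete illustration of the vectorization being discussed, here is a minimal sketch using the `interp` API this PR was adding. The exact names and semantics were still being settled at this point, so treat it as an assumption about the eventual interface (the dimension name `z` and the sample values are purely illustrative):

```
import xarray as xr

da = xr.DataArray([0.0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})

# Orthogonal interpolation at new locations along x (linear by default).
da.interp(x=[0.5, 1.5, 2.5])

# Vectorized (pointwise) interpolation: a DataArray indexer with its own
# dimension, so the result picks up that dimension and its coordinates,
# mirroring how vectorized indexing works with sel/isel.
new_x = xr.DataArray([0.5, 1.5, 2.5], dims='z', coords={'z': [10, 20, 30]})
da.interp(x=new_x)
```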
I think this is fine to start (we can always add more later).
Which NaN are you referring to?
Case (1) should definitely be supported. Especially if data is stored in dask arrays, we cannot necessarily check if there are NaNs. Cases (2) and (3) are not important, because there are relatively few use-cases for NaN in coordinate arrays.
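To make case (1) concrete, here is a small sketch (again assuming the `interp` API from this PR): a NaN in the data simply propagates into interpolated values that use it, and for dask-backed data a NaN check is itself a lazy computation, which is why we cannot cheaply know up front whether any NaNs are present.

```
import numpy as np
import xarray as xr

da = xr.DataArray([0.0, np.nan, 2.0, 3.0], dims='x', coords={'x': [0, 1, 2, 3]})

# Case (1): NaN in the data. Linear interpolation between a value and a NaN
# yields NaN, so the NaN propagates (here 0.5 -> nan, while 2.5 -> 2.5).
da.interp(x=[0.5, 2.5])

# With a dask-backed array, checking for NaNs does not happen eagerly.
chunked = da.chunk({'x': 2})   # requires dask to be installed
np.isnan(chunked).any()        # lazy until .compute() is called
```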
Currently we don't support this for [...]
I think the new coordinates should take priority, and the dimension coordinates on the new coordinate should be dropped. This is similar to what we do for `sel`:

```
In [10]: da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})

In [11]: da.sel(x=xr.DataArray([1, 2], dims=['x'], coords={'x': [1, 3]}))
Out[11]:
<xarray.DataArray (x: 2)>
array([0.1, 0.2])
Coordinates:
  * x        (x) int64 1 2
```
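
For contrast, a sketch of the non-conflicting case with vectorized indexing, where the indexer lives on a brand-new dimension: the result adopts the indexer's own coordinate and keeps the selected x labels as a non-dimension coordinate. The conflicting-name case above is exactly the priority question being discussed here; the dimension name 'points' is just illustrative.

```
import xarray as xr

da = xr.DataArray([0, 0.1, 0.2, 0.1], dims='x', coords={'x': [0, 1, 2, 3]})

# Pointwise selection along a new dimension. The result has dims ('points',),
# carries the indexer's 'points' coordinate, and keeps the selected x labels
# as a non-dimension coordinate.
indexer = xr.DataArray([1, 2], dims='points', coords={'points': ['a', 'b']})
da.sel(x=indexer)
```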
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
320275317 |