html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2223#issuecomment-626336257,https://api.github.com/repos/pydata/xarray/issues/2223,626336257,MDEyOklzc3VlQ29tbWVudDYyNjMzNjI1Nw==,26384082,2020-05-10T14:25:07Z,2020-05-10T14:25:07Z,NONE,"In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here or remove the `stale` label; otherwise it will be marked as closed automatically.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967
https://github.com/pydata/xarray/issues/2223#issuecomment-396050734,https://api.github.com/repos/pydata/xarray/issues/2223,396050734,MDEyOklzc3VlQ29tbWVudDM5NjA1MDczNA==,15956441,2018-06-10T13:53:25Z,2018-06-10T13:53:25Z,CONTRIBUTOR,"Ok, thank you for the information. I first worked with the [API](http://xarray.pydata.org/en/stable/generated/xarray.Dataset.interp.html#xarray.Dataset.interp); it is clearly documented in the [link](http://xarray.pydata.org/en/latest/interpolation.html) you provided.

I noticed the attrs get lost after orthogonal interpolation (see the first/second plots of `arr` in [mybinder](https://mybinder.org/v2/gh/gwin-zegal/interp/master?filepath=interpolation.ipynb)); I might open a new issue for that.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967
https://github.com/pydata/xarray/issues/2223#issuecomment-396049670,https://api.github.com/repos/pydata/xarray/issues/2223,396049670,MDEyOklzc3VlQ29tbWVudDM5NjA0OTY3MA==,6815844,2018-06-10T13:36:42Z,2018-06-10T13:49:58Z,MEMBER,"Thanks for your deeper analysis.

> It seems everything's well with xarray.

Happy to hear that.
> I first thought I'd get a 1D array, which is not the case (this is often the behavior I want).

Our `interp` works orthogonally by default, so passing two arrays of size 10,000 results in the interpolation of 100,000,000 values. In order to get a 1D array, you can pass two *DataArray*s with the same dimension:

```python
new_tension = xr.DataArray(new_tension, dims='new_dim')
new_resistance = xr.DataArray(new_resistance, dims='new_dim')
arr.interp(tension=new_tension, resistance=new_resistance)
```

which gives a 1D array with the new dimension `new_dim`. See [here](http://xarray.pydata.org/en/latest/interpolation.html#advanced-interpolation) for the details.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967
https://github.com/pydata/xarray/issues/2223#issuecomment-396050056,https://api.github.com/repos/pydata/xarray/issues/2223,396050056,MDEyOklzc3VlQ29tbWVudDM5NjA1MDA1Ng==,6815844,2018-06-10T13:42:59Z,2018-06-10T13:42:59Z,MEMBER,"I want to keep this issue open, as the performance can be improved for such a case. In the above example,

```python
arr.interp(tension=new_tension, resistance=new_resistance)
```

and

```python
arr.interp(tension=new_tension).interp(resistance=new_resistance)
```

give the same result (for the 'linear' and 'nearest' methods), but the latter runs much faster. This difference looks similar to the difference between our *orthogonal* indexing and *vectorized* indexing.
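For illustration, a minimal self-contained sketch of the two call patterns (the grid and sizes below are made up; only the coordinate names follow the example above):

```python
import numpy as np
import xarray as xr

# Toy 2-D grid with two coordinate dimensions.
arr = xr.DataArray(
    np.random.rand(100, 100),
    dims=('tension', 'resistance'),
    coords={'tension': np.linspace(0.0, 1.0, 100),
            'resistance': np.linspace(0.0, 1.0, 100)},
)

new_tension = np.linspace(0.1, 0.9, 50)
new_resistance = np.linspace(0.1, 0.9, 50)

# Joint call (default, orthogonal): one interpolation over the product grid.
joint = arr.interp(tension=new_tension, resistance=new_resistance)

# Sequential call: interpolate one dimension at a time.
sequential = arr.interp(tension=new_tension).interp(resistance=new_resistance)

# Both give a (50, 50) result, and for the default 'linear'
# method the values agree.
print(joint.shape, bool(np.allclose(joint, sequential)))
```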
We may need an *orthogonal* interpolation path, which would significantly improve the performance in some cases.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967
https://github.com/pydata/xarray/issues/2223#issuecomment-396043207,https://api.github.com/repos/pydata/xarray/issues/2223,396043207,MDEyOklzc3VlQ29tbWVudDM5NjA0MzIwNw==,15956441,2018-06-10T11:55:54Z,2018-06-10T11:55:54Z,CONTRIBUTOR,"Thanks for your comment. I had a deeper look at `DataArray.interp`: it computes a new DataArray with new coords given by the dict. I first thought I'd get a 1D array, which is not the case (this is often the behavior I want). So I had compared 10,000 interpolations in sdf against 100,000,000 in xarray! That explains the gap.

I updated my comparison, with the same behavior for scipy and sdf, in a jupyter notebook on [mybinder](https://mybinder.org/v2/gh/gwin-zegal/interp/master?filepath=interpolation.ipynb). It seems everything's well with xarray.

Extrapolation (linear first) would be a good feature too; I put an example at the end of the notebook about sdf's interpolation/extrapolation possibilities (it works for nd-arrays up to dim 32, as with numpy).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967
https://github.com/pydata/xarray/issues/2223#issuecomment-396002143,https://api.github.com/repos/pydata/xarray/issues/2223,396002143,MDEyOklzc3VlQ29tbWVudDM5NjAwMjE0Mw==,6815844,2018-06-09T22:09:27Z,2018-06-09T22:09:27Z,MEMBER,"@gwin-zegal, thank you for using our new feature and reporting the issue. I confirmed the poor performance of `interp`. I will look into it later to see whether the problem is in our code or upstream (scipy.interpolate).
A possible workaround for your code is to change

```python
arr.interp({'tension': new_tension, 'resistance': new_resistance})
```

to

```python
arr.interp({'tension': new_tension}).interp({'resistance': new_resistance})
```

but it does not solve all the problems.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,330918967