# DataArray.sel methods: "inner" and "outer"

pydata/xarray · Issue #7964 · open · 3 comments · opened 2023-07-06, last updated 2023-07-19

### Is your feature request related to a problem?

Currently, in the case of inexact matches, you can select the nearest coordinate, the next coordinate, or the previous coordinate. This works well for single matches, but not for slices. If I want to extract an ROI that definitely contains all the data from 5.5 to 10.5, I don't want 5 to 10 or 6 to 11: I want 5 to 11 (or possibly 6 to 10). It would be helpful to be able to treat the start and stop boundaries differently when it comes to inexact matches during selection.

### Describe the solution you'd like

The addition of `method="inner"` and `method="outer"` to `DataArray.sel` (names up for debate). When paired with `slice` selections, `"inner"` would take the next coordinate to the right of the left boundary and the next coordinate to the left of the right boundary; `"outer"` would take the next coordinate to the left of the left boundary and the next coordinate to the right of the right boundary. This wouldn't be compatible with scalar or vector indexing; in those cases the behaviour could default to `None` or `"nearest"`. It might also be nice to treat different dimensions differently, but that would be a separate feature, and it can probably already be achieved with successive calls to `.sel`.

"inner" is currently the default behaviour when indexing with a slice (for monotonically increasing coordinates). This could be extended to non-monotonic and decreasing coordinates (reversing the dimension depending on the stride of the slice).

### Describe alternatives you've considered

Something like `numpy.logical_and(coords >= left, coords <= right)` followed by logical indexing for "inner", and `logical_not`-ing the inverse for "outer". This isn't very efficient for large arrays.

### Additional context

I have 3D images and need to extract an ROI around a particular point, and I want to make sure that the extracted array is at least as large as a certain padding around that point. I could just add the resolution to the padding, but this seems like it would be useful in other contexts.
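For illustration, here is a minimal sketch (not part of the xarray API; the helper name `outer_sel` is made up) of how the proposed `"outer"` slice behaviour could be emulated today with `numpy.searchsorted` on a monotonically increasing coordinate, avoiding a full boolean mask:

```python
import numpy as np
import xarray as xr


def outer_sel(da, dim, start, stop):
    """Select along `dim` so the result spans at least [start, stop].

    Assumes the coordinate on `dim` is monotonically increasing.
    """
    coord = da[dim].values
    # index of the next coordinate at or to the left of `start`
    i0 = max(np.searchsorted(coord, start, side="right") - 1, 0)
    # index of the next coordinate at or to the right of `stop`
    i1 = min(np.searchsorted(coord, stop, side="left"), coord.size - 1)
    return da.isel({dim: slice(i0, i1 + 1)})


# Example matching the ROI above: coordinates 5..11, requesting 5.5 to 10.5
da = xr.DataArray(np.arange(7), dims="x", coords={"x": np.arange(5, 12)})
print(outer_sel(da, "x", 5.5, 10.5).x.values)  # [ 5  6  7  8  9 10 11]
```

The proposed `"inner"` variant would flip the two `searchsorted` sides (round the left boundary up and the right boundary down) instead.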