html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1388#issuecomment-362902669,https://api.github.com/repos/pydata/xarray/issues/1388,362902669,MDEyOklzc3VlQ29tbWVudDM2MjkwMjY2OQ==,6815844,2018-02-04T12:20:33Z,2018-02-04T12:52:29Z,MEMBER,"@gajomi
Sorry for my late response, and thank you for the proposal.
Setting my earlier proposal aside, I have been wondering whether such aggregation methods (including `argmin`) should propagate coordinates at all.
For example, as you pointed out, we could in principle track the `x` coordinate at the argmin position after `da.argmin(dim='x')`.
But the same is not reasonable for `da.mean(dim='x')`, since the mean generally does not coincide with any element along `x`.
It may be reasonable for `da.max(dim='x')` but not for `da.median(dim='x')`, which for an even-length dimension falls between two elements.
Method-specific rules like these would be confusing and would add complexity.
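To illustrate the distinction with plain NumPy (a minimal sketch using the same data as the example below): a maximum always coincides with an actual element, so an `x` label exists for it, while a mean generally does not.

```python
import numpy as np

values = np.array([[0, 3, 2], [2, 1, 4]])  # dims (x, y), as in the example below
x = np.array([1, 2])                       # the x coordinate labels

# max along x: each result equals some element of its column,
# so there is a well-defined x label for it
idx_max = values.argmax(axis=0)
print(x[idx_max])            # x coordinate of each column maximum -> [2 1 2]

# mean along x: the result is generally not an element of the column,
# so there is no x position (and hence no x coordinate) to attach
print(values.mean(axis=0))   # [1. 2. 3.]
```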
I think the rule
**we do not track coordinates after aggregations**
would be much simpler and easier to understand.
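This is also how reductions behave today for the reduced dimension itself; a small sketch showing that the `x` coordinate is dropped uniformly, whatever the aggregation method:

```python
import xarray as xr

da = xr.DataArray([[0, 3, 2], [2, 1, 4]], dims=['x', 'y'],
                  coords={'x': [1, 2], 'y': ['a', 'b', 'c']})

# reducing over x removes the x coordinate in every case,
# with no method-specific exceptions
print('x' in da.mean(dim='x').coords)   # False
print('x' in da.max(dim='x').coords)    # False
```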
If we adopt the above rule, I think the `argmin` would give just an array of indices,
```python
In [1]: import xarray as xr
   ...: da = xr.DataArray([[0, 3, 2], [2, 1, 4]], dims=['x', 'y'],
   ...:                   coords={'x': [1, 2], 'y': ['a', 'b', 'c']})
   ...:

In [4]: da.argmin(dim='x')
Out[4]:
<xarray.DataArray (y: 3)>
array([0, 1, 0])
Coordinates:
  * y        (y) <U1 'a' 'b' 'c'
```

while selecting with those indices would still recover the minima together with their `x` coordinate,

```python
In [5]: da.isel(x=da.argmin(dim='x'))
Out[5]:
<xarray.DataArray (y: 3)>
array([0, 1, 2])
Coordinates:
    x        (y) int64 1 2 1
  * y        (y) <U1 'a' 'b' 'c'
```