html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2108#issuecomment-1114349288,https://api.github.com/repos/pydata/xarray/issues/2108,1114349288,IC_kwDOAMm_X85Ca57o,5635139,2022-05-01T22:10:48Z,2022-05-01T22:10:48Z,MEMBER,Is this now fixed by `drop_duplicates`? https://docs.xarray.dev/en/stable/generated/xarray.DataArray.drop_duplicates.html,"{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-1114293906,https://api.github.com/repos/pydata/xarray/issues/2108,1114293906,IC_kwDOAMm_X85CasaS,26384082,2022-05-01T17:37:49Z,2022-05-01T17:37:49Z,NONE,"In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity

If this issue remains relevant, please comment here or remove the `stale` label; otherwise it will be marked as closed automatically
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-613525795,https://api.github.com/repos/pydata/xarray/issues/2108,613525795,MDEyOklzc3VlQ29tbWVudDYxMzUyNTc5NQ==,5442433,2020-04-14T15:55:05Z,2020-04-14T15:55:05Z,NONE,"I am adding a comment here to keep this issue alive. In fact, this is more complicated than it seems, because when combining files with duplicate times one has to choose how to merge, i.e. keep first, keep last, or even a combination of the two.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-613518293,https://api.github.com/repos/pydata/xarray/issues/2108,613518293,MDEyOklzc3VlQ29tbWVudDYxMzUxODI5Mw==,26384082,2020-04-14T15:42:12Z,2020-04-14T15:42:12Z,NONE,"In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity

If this issue remains relevant, please comment here or remove the `stale` label; otherwise it will be marked as closed automatically
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-389196623,https://api.github.com/repos/pydata/xarray/issues/2108,389196623,MDEyOklzc3VlQ29tbWVudDM4OTE5NjYyMw==,5442433,2018-05-15T14:53:38Z,2018-05-15T14:53:38Z,NONE,"Thanks @shoyer. Your approach works better (a single line) and is consistent with the shared xarray-pandas paradigm. Unfortunately, I can't spare the time to do the PR right now; I haven't done it before for xarray and it would require some time overhead. Maybe someone with more experience can oblige.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-387549048,https://api.github.com/repos/pydata/xarray/issues/2108,387549048,MDEyOklzc3VlQ29tbWVudDM4NzU0OTA0OA==,1217238,2018-05-08T21:31:58Z,2018-05-08T21:31:58Z,MEMBER,"The pandas `duplicated()` method might be more convenient than using `np.unique()`, e.g., you could equivalently write:
`arr.sel(time=~arr.indexes['time'].duplicated())`

I think we would be open to adding `duplicated()` to xarray, too, if you or someone else is interested in making a pull request.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-387343836,https://api.github.com/repos/pydata/xarray/issues/2108,387343836,MDEyOklzc3VlQ29tbWVudDM4NzM0MzgzNg==,5442433,2018-05-08T09:33:14Z,2018-05-08T09:33:14Z,NONE,"To partially address my own issue, I came up with the following post-processing option:

1. get the indices of the first occurrence of each coordinate value
        val, idx = np.unique(arr.time, return_index=True)

2. trim the dataset
        arr = arr.isel(time=idx)
 
Maybe this can be integrated somehow...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
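
Putting the suggestions in this thread together, here is a minimal sketch of the three routes discussed: the built-in `drop_duplicates` linked in the first comment, the `np.unique` post-processing, and the pandas `duplicated()` mask. The example DataArray, the dates, and all variable names below are invented for illustration and are not taken from the issue; `drop_duplicates` assumes xarray >= 0.19.

```python
# Illustrative sketch only: the array, times, and names are placeholders.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.to_datetime(["2018-01-01", "2018-01-02", "2018-01-02", "2018-01-03"])
arr = xr.DataArray(np.arange(4), coords={"time": times}, dims="time")

# 1. Built-in method (xarray >= 0.19): keep the first (or last) of each label.
first = arr.drop_duplicates(dim="time", keep="first")

# 2. np.unique post-processing from the thread: indices of first occurrences
#    (note that np.unique also sorts the coordinate values).
_, idx = np.unique(arr["time"], return_index=True)
first_np = arr.isel(time=idx)

# 3. pandas Index.duplicated() mask, as suggested in the thread.
first_pd = arr.sel(time=~arr.indexes["time"].duplicated())
```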