html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2108#issuecomment-613525795,https://api.github.com/repos/pydata/xarray/issues/2108,613525795,MDEyOklzc3VlQ29tbWVudDYxMzUyNTc5NQ==,5442433,2020-04-14T15:55:05Z,2020-04-14T15:55:05Z,NONE,"I am adding a comment here to keep this alive. In fact, this is more complicated than it seems: when combining files with duplicate times, one has to choose how to merge, i.e. keep first, keep last, or even a combination of the two.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-389196623,https://api.github.com/repos/pydata/xarray/issues/2108,389196623,MDEyOklzc3VlQ29tbWVudDM4OTE5NjYyMw==,5442433,2018-05-15T14:53:38Z,2018-05-15T14:53:38Z,NONE,"Thanks @shoyer. Your approach works better (one line) and is consistent with the shared xarray-pandas paradigm. Unfortunately, I can't spare the time to do the PR right now. I haven't done it before for xarray and it will require some time overhead. Maybe someone with more experience can oblige.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
https://github.com/pydata/xarray/issues/2108#issuecomment-387343836,https://api.github.com/repos/pydata/xarray/issues/2108,387343836,MDEyOklzc3VlQ29tbWVudDM4NzM0MzgzNg==,5442433,2018-05-08T09:33:14Z,2018-05-08T09:33:14Z,NONE,"To partially answer my issue, I came up with the following post-processing option:
1. Get the index of the first occurrence of each coordinate value:
`val, idx = np.unique(arr.time, return_index=True)`
2. Trim the dataset to those positions:
`arr = arr.isel(time=idx)`
Maybe this can be integrated somehow...","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,320838184
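The de-duplication recipe above, and the choice between keep-first and keep-last raised in the later comment, can be sketched as follows. This is a minimal sketch, not the final xarray API; the dataset `ds` and its values are made up for illustration.

```python
import numpy as np
import pandas as pd
import xarray as xr

# A small dataset with one duplicated time stamp.
times = pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-02", "2000-01-03"])
ds = xr.Dataset({"x": ("time", [1.0, 2.0, 3.0, 4.0])}, coords={"time": times})

# Approach 1 (np.unique, as in the comment): np.unique returns the index of
# the *first* occurrence of each value, so this keeps the first duplicate.
_, idx = np.unique(ds["time"], return_index=True)
first = ds.isel(time=idx)

# Approach 2 (pandas-style duplicated mask): equivalent result, but
# keep="last" would select the last duplicate instead.
mask = ~ds.indexes["time"].duplicated(keep="first")
first_too = ds.isel(time=np.flatnonzero(mask))
```

The second form is closer to the pandas idiom and makes the keep-first / keep-last policy an explicit parameter rather than an accident of how `np.unique` orders its output.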