html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/pull/5767#issuecomment-914653049,https://api.github.com/repos/pydata/xarray/issues/5767,914653049,IC_kwDOAMm_X842hH95,10194086,2021-09-07T21:53:44Z,2021-09-07T21:53:44Z,MEMBER,"Maybe something like this: ```python attrs = dict(units=""days since 1850-01-01"", calendar=""proleptic_gregorian"") ds1 = xr.Dataset(coords=dict(time=(""time"", [164678], attrs))) xr.conventions.decode_cf(ds1) ``` (or you can directly create two time arrays as suggested by @TomNicholas )","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,988426640 https://github.com/pydata/xarray/pull/5767#issuecomment-913094980,https://api.github.com/repos/pydata/xarray/issues/5767,913094980,IC_kwDOAMm_X842bLlE,10194086,2021-09-05T06:31:41Z,2021-09-05T06:31:41Z,MEMBER,"I like the idea. I would still raise an error, though.Maybe: ""Found a mix of pandas and cftime datetime data types. Re-opening the data with the option ``use_cftime=True`` may fix this issue."" That would be less magic and should tell the users what's going wrong. In addition we would not need an additional keyword argument. I expect a `merge_by_coords` operation to be typically fairly early in the analysis pipeline.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,988426640