issue_comments: 626257580
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/4045#issuecomment-626257580 | https://api.github.com/repos/pydata/xarray/issues/4045 | 626257580 | MDEyOklzc3VlQ29tbWVudDYyNjI1NzU4MA== | 6628425 | 2020-05-10T01:15:53Z | 2020-05-10T01:15:53Z | MEMBER | (see body below) | (see reactions below) |  | 614275938 |

body:

Thanks for the report @half-adder. This is indeed related to times being encoded as floats, but it is actually not cftime-related (the times here are not encoded using cftime; we only use cftime for non-standard calendars and for dates outside the nanosecond-resolution bounds). Here's a minimal working example that illustrates the issue with the current logic:

```python
In [1]: import numpy as np; import pandas as pd

In [2]: times = pd.DatetimeIndex([np.datetime64("2017-02-22T16:27:08.732000000")])

In [3]: reference = pd.Timestamp("1900-01-01")

In [4]: units = np.timedelta64(1, "us")

In [5]: (times - reference).values[0]
Out[5]: numpy.timedelta64(3696769628732000000,'ns')

In [6]: ((times - reference) / units).values[0]
Out[6]: 3696769628732000.5
```

In principle, we should be able to represent the difference between this date and the reference date as an integer number of microseconds, but timedelta division produces a float. We currently try to cast these floats to integers when possible, but that is not always safe to do, e.g. in the case above. It would be great to make roundtripping times -- particularly standard-calendar datetimes like these -- more robust. It's possible we could now leverage floor division (i.e. `//`).
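For illustration, here is a minimal sketch of the floor-division idea mentioned at the end of the comment, reusing the same `times`, `reference`, and `units` from the example above. The variable names `deltas`, `as_float`, and `as_int` are purely illustrative, and this is not necessarily the fix that was ultimately adopted in xarray.

```python
import numpy as np
import pandas as pd

times = pd.DatetimeIndex([np.datetime64("2017-02-22T16:27:08.732000000")])
reference = pd.Timestamp("1900-01-01")
units = np.timedelta64(1, "us")

# Subtracting a Timestamp from a DatetimeIndex gives a TimedeltaIndex
# with nanosecond resolution.
deltas = times - reference

# True division returns floats, which can be inexact for large offsets.
as_float = (deltas / units).values[0]

# Floor division keeps the count of whole microseconds as an integer.
as_int = (deltas // units).values[0]

print(as_float)  # 3696769628732000.5 (lossy float)
print(as_int)    # 3696769628732000 (exact integer count of microseconds)
```

A full solution would presumably also need to check that the division is exact (e.g. that the remainder `deltas % units` is zero) before deciding that integer encoding is safe.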
reactions:

{
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}