issue_comments: 735789517
html_url: https://github.com/pydata/xarray/issues/4045#issuecomment-735789517
issue_url: https://api.github.com/repos/pydata/xarray/issues/4045
id: 735789517
node_id: MDEyOklzc3VlQ29tbWVudDczNTc4OTUxNw==
user: 6628425
created_at: 2020-11-30T13:35:26Z
updated_at: 2020-11-30T13:40:50Z
author_association: MEMBER
body:
The short answer is that CF conventions allow dates to be encoded with floating point values, so we encounter such values in data that xarray ingests from other sources (i.e. files that were not even produced with Python, let alone xarray). If we didn't have to worry about roundtripping files that follow those conventions, I agree we would just encode everything with nanosecond units.

Yes, I can see why this would be quite frustrating. In principle we should be able to handle this (contributions are welcome); it just has not been a priority up to this point. In my experience, xarray's current encoding and decoding methods for standard-calendar times work well down to at least second precision.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 614275938
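As an illustrative sketch (not part of the original comment): the float-encoded CF times the comment describes can be decoded with `xr.decode_cf`. The dataset below is invented for demonstration; it stores a time coordinate as floating-point "days since" offsets, as CF conventions allow, and decodes it to `datetime64[ns]` values.

```python
import numpy as np
import xarray as xr

# Hypothetical CF-encoded dataset: the time coordinate is stored as
# floating-point offsets from an epoch ("days since ..."), which CF
# conventions permit; 0.5 days corresponds to 12 hours.
ds = xr.Dataset(
    {"t": ("time", np.arange(3.0))},
    coords={"time": ("time", np.array([0.0, 0.5, 1.0]))},
)
ds["time"].attrs["units"] = "days since 2000-01-01"

# decode_cf converts the float offsets into datetime64[ns] values.
decoded = xr.decode_cf(ds)
print(decoded["time"].values[1])  # 2000-01-01T12:00:00.000000000
```

Files produced outside Python commonly look like the encoded dataset above, which is why xarray cannot simply assume integer nanosecond offsets when roundtripping.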