issue_comments: 1412439718
field | value
---|---
html_url | https://github.com/pydata/xarray/issues/7493#issuecomment-1412439718
issue_url | https://api.github.com/repos/pydata/xarray/issues/7493
id | 1412439718
node_id | IC_kwDOAMm_X85UMB6m
user | 6628425
created_at | 2023-02-01T17:25:00Z
updated_at | 2023-02-01T18:54:41Z
author_association | MEMBER
reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app | |
issue | 1563104480

Thanks for joining the meeting today @khider. Some potentially relevant places in the code that come to my mind are:
- Automatic casting to nanosecond precision (a minimal sketch of this behavior follows this list)
- Decoding times via pandas
- Encoding times via pandas
- Though as @shoyer says, searching for …
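For context on the first item, here is a minimal sketch of the promotion behavior as it stood when this comment was written; the variable names and sample dates are illustrative, not taken from the code base:

```python
import numpy as np
import xarray as xr

# Sketch of the automatic casting mentioned above: constructing an
# xarray object from non-nanosecond datetime64 data silently promotes
# it to nanosecond precision.
times = np.array(["2000-01-01", "2000-01-02"], dtype="datetime64[s]")
da = xr.DataArray(times, dims="time")

print(times.dtype)  # datetime64[s]
print(da.dtype)     # datetime64[ns] -- promoted on construction
```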
Some design questions that come to my mind are (but you don't need an answer to these immediately to start working):

- How do we decide which precision to decode times to? Would it be the finest precision that enables decoding without overflow? (See the first sketch below this list.)
- This is admittedly in the weeds, but how do we decide when to use cftime and when not to? It seems obvious that in the long term we should use NumPy values for proleptic Gregorian dates of all precisions, but what about dates from the Gregorian calendar (where we may no longer have the luxury that the proleptic Gregorian and Gregorian calendars are equivalent for all representable times)?
- Not a blocker (since this is an existing issue), but are there ways we could make working with mixed-precision datetime values friendlier with regard to overflow (https://github.com/numpy/numpy/issues/16352)? I worry about examples like this (see the second sketch below):
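On the first question, one way to read "finest precision that enables decoding without overflow" is to walk the candidate units from finest to coarsest and keep the first that fits in int64. This is only a hypothetical sketch of that heuristic, not xarray's actual decoding logic; `finest_safe_unit` and its seconds-based input are my own inventions:

```python
import numpy as np

# Hypothetical heuristic: pick the finest datetime64 unit that can
# represent every offset (given here in seconds) without overflowing
# the underlying int64.
def finest_safe_unit(offsets_in_seconds):
    i64_max = np.iinfo(np.int64).max
    for unit, per_second in [("ns", 10**9), ("us", 10**6), ("ms", 10**3), ("s", 1)]:
        # Compare with exact Python integers so the check itself cannot overflow.
        if all(abs(int(v)) <= i64_max // per_second for v in offsets_in_seconds):
            return unit
    raise OverflowError("offsets exceed the range of datetime64[s]")

print(finest_safe_unit([0, 86_400]))  # "ns": one day of offsets fits easily
print(finest_safe_unit([10**15]))     # "ms": too large for ns or us precision
```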
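The code example that originally ended this comment did not survive the export. As a stand-in, here is a hypothetical illustration of the kind of silent mixed-precision overflow that numpy/numpy#16352 describes (behavior as of the NumPy releases current in early 2023):

```python
import numpy as np

# A date that fits comfortably at second precision ...
early = np.datetime64("0001-01-01", "s")

# ... wraps around with no error when cast to nanosecond precision,
# because year 1 lies far outside the roughly 1677-2262 range that
# datetime64[ns] can represent.
print(early.astype("datetime64[ns]"))  # a nonsense date, silently
```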