issue_comments
2 rows where issue = 702373263 and user = 6628425, sorted by updated_at descending
id: 696423669
html_url: https://github.com/pydata/xarray/issues/4427#issuecomment-696423669
issue_url: https://api.github.com/repos/pydata/xarray/issues/4427
node_id: MDEyOklzc3VlQ29tbWVudDY5NjQyMzY2OQ==
user: spencerkclark (6628425)
created_at: 2020-09-21T23:00:54Z
updated_at: 2020-09-21T23:04:56Z
author_association: MEMBER
issue: assign_coords with datetime64[us] changes dtype to datetime64[ns] (702373263)
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:
That would be great @andrewpauling! I think this is the relevant code in xarray: https://github.com/pydata/xarray/blob/1155f5646e07100e4acda18db074b148f1213b5d/xarray/core/variable.py#L244-L250
I want to say arguably we could use the […]
I agree this casting behavior is a bit surprising. If we wanted to be a little more transparent, we could also warn when attempting to cast non-nanosecond-precision datetime64 values.
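For concreteness, a minimal sketch of the casting behavior this comment describes (the variable names are illustrative; on xarray releases of this era the coordinate is silently cast to datetime64[ns], while newer xarray releases have since gained support for non-nanosecond precision):

```python
import numpy as np
import xarray as xr

# Microsecond-precision times, as in the original report.
times = np.array(["2000-01-01", "2000-01-02", "2000-01-03"],
                 dtype="datetime64[us]")

da = xr.DataArray(np.zeros(3), dims="time")
da = da.assign_coords(time=times)

# On xarray releases from 2020 this prints datetime64[ns]: the
# coordinate was silently cast away from datetime64[us].
print(da.time.dtype)
```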
id: 695436235
html_url: https://github.com/pydata/xarray/issues/4427#issuecomment-695436235
issue_url: https://api.github.com/repos/pydata/xarray/issues/4427
node_id: MDEyOklzc3VlQ29tbWVudDY5NTQzNjIzNQ==
user: spencerkclark (6628425)
created_at: 2020-09-20T01:23:47Z
updated_at: 2020-09-20T01:23:47Z
author_association: MEMBER
issue: assign_coords with datetime64[us] changes dtype to datetime64[ns] (702373263)
reactions: { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
body:
Thanks @andrewpauling -- I do think there's a bug here, but this issue happens to be more complicated than it might seem on the surface :). Xarray standardizes around nanosecond precision for datetime64 values.
Addressing this fully would be a challenge (we've discussed this at times in the past). It was concluded that for dates outside the representable range […]
This is a long way of saying that, without a fair amount of work (i.e. addressing this issue upstream in pandas), xarray is unlikely to relax its approach to the precision of datetime64 values.
However, the fact that your example silently results in nonsensical times should be considered a bug; instead, following pandas, I would argue we should raise an error if the dates cannot be represented with nanosecond precision.
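A short sketch of the failure mode being contrasted here, using only NumPy and pandas (the year 2300 lies beyond the roughly 1677-09-21 through 2262-04-11 range that an int64 count of nanoseconds can represent; the pandas behavior shown is that of the pandas 1.x era this discussion assumes):

```python
import numpy as np
import pandas as pd

# 2300-01-01 is outside the range representable with int64 nanoseconds.
t = np.datetime64("2300-01-01", "us")

# NumPy converts units with plain integer arithmetic, so the
# multiplication by 1000 silently wraps around on overflow,
# producing a nonsensical 18th-century date:
print(t.astype("datetime64[ns]"))

# pandas instead refuses to produce a nanosecond timestamp it
# cannot represent:
try:
    pd.to_datetime(["2300-01-01"])
except pd.errors.OutOfBoundsDatetime as exc:
    print("pandas raised:", exc)
```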
Table schema:

CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
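The query behind this page can be reproduced against a local copy of the database; a sketch assuming a SQLite file named github.db (the filename is hypothetical) containing this table:

```python
import sqlite3

# Hypothetical local copy of the database holding issue_comments.
conn = sqlite3.connect("github.db")

# Comments on one issue by one user, newest update first --
# the same filter and ordering shown at the top of this page.
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (702373263, 6628425),
).fetchall()

for comment_id, created, updated, assoc, body in rows:
    print(comment_id, updated, assoc)
```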