issue_comments
4 rows where issue = 171956399 and user = 10194086 sorted by updated_at descending
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue
id: 241579204
html_url: https://github.com/pydata/xarray/issues/975#issuecomment-241579204
issue_url: https://api.github.com/repos/pydata/xarray/issues/975
node_id: MDEyOklzc3VlQ29tbWVudDI0MTU3OTIwNA==
user: mathause 10194086
created_at: 2016-08-22T23:12:05Z
updated_at: 2016-08-22T23:12:05Z
author_association: MEMBER
body: pydata/pandas#14068
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: invalid timestamps in the future 171956399
id: 241578398
html_url: https://github.com/pydata/xarray/issues/975#issuecomment-241578398
issue_url: https://api.github.com/repos/pydata/xarray/issues/975
node_id: MDEyOklzc3VlQ29tbWVudDI0MTU3ODM5OA==
user: mathause 10194086
created_at: 2016-08-22T23:07:50Z
updated_at: 2016-08-22T23:07:50Z
author_association: MEMBER
body:
As somewhat hinted at above, there seem to be several issues here. I tried to look into a solution for checking the first and last element (which seems to work for Problem (1) in my original post), but the OverflowError persisted. So I looked into this, and although my code is now a mess, I figured this second problem out. Pandas does not raise an overflow error when adding a timedelta array (as opposed to a scalar timedelta) to a `Timestamp`:

```
import pandas as pd

# overflow error
pd.to_timedelta(106580, 'D') + pd.Timestamp('2000')

# no overflow error
pd.to_timedelta([106580], 'D') + pd.Timestamp('2000')
```

This screws up line 145 in https://github.com/pydata/xarray/blob/master/xarray/conventions.py.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: invalid timestamps in the future 171956399
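The first-and-last-element check mentioned in this comment could look roughly like the sketch below, which routes only the extreme offsets through pandas' scalar arithmetic so that an out-of-range date actually raises. The helper name `_can_decode_with_pandas` and its signature are invented for illustration; this is not xarray's actual code.

```
# Hypothetical sketch, not xarray's implementation: probe only the smallest
# and largest offsets through pandas' *scalar* arithmetic, which does raise
# on overflow (unlike the vectorised path in the pandas version discussed).
import numpy as np
import pandas as pd

def _can_decode_with_pandas(num_dates, unit, ref_date):
    """Return True if all offsets stay inside the datetime64[ns] range."""
    flat = np.asarray(num_dates).ravel()
    for extreme in (flat.min(), flat.max()):
        try:
            # scalar Timedelta + Timestamp raises OverflowError or
            # OutOfBoundsDatetime (a ValueError subclass) when out of range
            pd.Timestamp(ref_date) + pd.to_timedelta(int(extreme), unit)
        except (OverflowError, ValueError):
            return False
    return True

print(_can_decode_with_pandas([0, 106580], 'D', '2000'))  # False
print(_can_decode_with_pandas([0, 365], 'D', '2000'))     # True
```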
id: 241033630
html_url: https://github.com/pydata/xarray/issues/975#issuecomment-241033630
issue_url: https://api.github.com/repos/pydata/xarray/issues/975
node_id: MDEyOklzc3VlQ29tbWVudDI0MTAzMzYzMA==
user: mathause 10194086
created_at: 2016-08-19T14:28:39Z
updated_at: 2016-08-19T14:29:40Z
author_association: MEMBER
body:
I tried to look into the logic of decoding datetimes and I am not sure I got it. So the dtype of the dates should be: [...] (Is it ever a [...]?)

The necessary conversion seems to be determined lazily (which may be the core of my problem above). Try this:

```
import xarray as xr
import numpy as np

units = 'days since 1850-01-01 00:00:00'
dates = np.arange(850) * 365

dta = xr.conventions.DecodedCFDatetimeArray(dates, units)

dta[0:1]  # a datetime64[ns] object
dta[-1]   # a datetime.datetime object
dta[:]    # a datetime.datetime object
```

However, when I load these dates from a netCDF file (see the example in the first post) it results in an error. (Thus, the behavior is not exactly the same as when using [...].) Another (or the same) problem is that in [...]. Here: https://github.com/pydata/xarray/blob/master/xarray/conventions.py, line 375.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: invalid timestamps in the future 171956399
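A rough sketch of the two decode paths this comment contrasts, assuming netCDF4 is installed and that the units string has already been split into a reference date and a unit: pandas for offsets that fit into datetime64[ns], and netCDF4.num2date for those that do not. The function decode_times_sketch, its parameters, and the hard-coded reference date are illustrative assumptions, not xarray's actual conventions.py logic.

```
# Illustrative sketch only, not xarray's conventions.py.
import numpy as np
import pandas as pd
import netCDF4

def decode_times_sketch(num_dates, ref_date, unit, units_str):
    try:
        # fast path: nanosecond-precision datetime64[ns] values via pandas
        return (pd.Timestamp(ref_date) + pd.to_timedelta(num_dates, unit)).values
    except (OverflowError, ValueError):
        # fallback: datetime.datetime or cftime objects, depending on the
        # netCDF4/cftime version. Note that in the pandas version discussed
        # in this issue, the vectorised path overflowed *silently* instead of
        # raising, which is exactly the bug reported above.
        return netCDF4.num2date(num_dates, units_str)

units = 'days since 1850-01-01 00:00:00'
dates = np.arange(850) * 365
decoded = decode_times_sketch(dates, '1850-01-01', 'D', units)
print(type(decoded[0]))  # datetime-like object here; np.datetime64 when dates fit
```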
id: 240957095
html_url: https://github.com/pydata/xarray/issues/975#issuecomment-240957095
issue_url: https://api.github.com/repos/pydata/xarray/issues/975
node_id: MDEyOklzc3VlQ29tbWVudDI0MDk1NzA5NQ==
user: mathause 10194086
created_at: 2016-08-19T08:17:28Z
updated_at: 2016-08-19T08:17:28Z
author_association: MEMBER
body:
Yes, definitely. However, the documentation states that we should get back a [...]. In my example (2) it returned a working [...]
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: invalid timestamps in the future 171956399
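For context, the window that fits into a pandas Timestamp (i.e. datetime64[ns]) is narrow, which is why out-of-range dates have to come back as some other datetime object:

```
import pandas as pd

# The nanosecond-resolution range a pandas Timestamp / datetime64[ns] can
# represent; dates outside roughly 1677-2262 need another representation.
print(pd.Timestamp.min)  # 1677-09-21 00:12:43.145224193
print(pd.Timestamp.max)  # 2262-04-11 23:47:16.854775807
```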
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);