issue_comments
2 rows where author_association = "MEMBER", issue = 226549366, and user = 1217238, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
299916837 | https://github.com/pydata/xarray/issues/1399#issuecomment-299916837 | https://api.github.com/repos/pydata/xarray/issues/1399 | MDEyOklzc3VlQ29tbWVudDI5OTkxNjgzNw== | shoyer 1217238 | 2017-05-08T16:24:50Z | 2017-05-08T16:24:50Z | MEMBER | @spencerkclark has been working on a patch to natively support other datetime precisions in xarray (see https://github.com/pydata/xarray/pull/1252).<br><br>For better or worse, NumPy's datetime64 ignores leap seconds.<br><br>This sounds pretty reasonable to me. The main challenge here will be guarding against integer overflow -- you might need to do the math twice, once with floats (to check for overflow) and then with integers. You could also experiment with doing the conversion with NumPy instead of pandas, using … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | `decode_cf_datetime()` slow because `pd.to_timedelta()` is slow if floats are passed 226549366
299510444 | https://github.com/pydata/xarray/issues/1399#issuecomment-299510444 | https://api.github.com/repos/pydata/xarray/issues/1399 | MDEyOklzc3VlQ29tbWVudDI5OTUxMDQ0NA== | shoyer 1217238 | 2017-05-05T16:23:17Z | 2017-05-05T16:23:17Z | MEMBER | Good catch! We should definitely speed this up.<br><br>Yes, very much agreed. For units such as months or years, we are already giving the wrong result when we use pandas: …<br><br>Yes, this might also work. I no longer recall why we cast all inputs to floats (maybe just for consistency), but I suspect that one of our time conversion libraries (probably netCDF4/netcdftime) expects a float array. Certainly we will still need to support floating-point times saved in netCDF files, which are pretty common in my experience. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | `decode_cf_datetime()` slow because `pd.to_timedelta()` is slow if floats are passed 226549366
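
The comments above describe both the problem and a possible fix. As a quick illustration of the reported slowdown, here is a minimal, hypothetical benchmark comparing `pd.to_timedelta()` on the same offsets as integers versus floats; the array size is illustrative, and on recent pandas versions the gap may be much smaller than it was when this issue was filed in 2017:

```python
# Hypothetical benchmark: the same second offsets converted to timedeltas,
# once from int64 and once from float64 input.
import time

import numpy as np
import pandas as pd

num_times = 1_000_000
int_offsets = np.arange(num_times)             # whole seconds as int64
float_offsets = int_offsets.astype("float64")  # the same values as floats

for label, offsets in [("int64", int_offsets), ("float64", float_offsets)]:
    start = time.perf_counter()
    pd.to_timedelta(offsets, unit="s")
    print(f"{label}: {time.perf_counter() - start:.3f}s")
```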
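And here is a hedged sketch of the approach shoyer suggests in the first comment: do the unit arithmetic twice, first in floats purely to detect int64 overflow, then in integers for the exact result, and build the dates with NumPy directly rather than pandas. `decode_with_overflow_check` and its signature are assumptions for illustration, not xarray's actual implementation:

```python
# Sketch of "do the math twice": float pass to catch overflow, integer
# pass for the exact conversion. Not xarray code.
import numpy as np

_NS_PER_UNIT = {"s": 10**9, "ms": 10**6, "us": 10**3, "ns": 1}


def decode_with_overflow_check(num_dates, unit, epoch):
    """Convert integer offsets since `epoch` to datetime64[ns]."""
    scale = _NS_PER_UNIT[unit]
    # Pass 1: float math, used only to flag values that would wrap int64.
    # The bound is deliberately conservative since float math is approximate.
    approx_ns = num_dates.astype("float64") * scale
    if np.any(np.abs(approx_ns) >= 2**62):
        raise OverflowError("dates outside the datetime64[ns] range")
    # Pass 2: exact integer math for the actual conversion.
    ns = num_dates.astype("int64") * scale
    return np.datetime64(epoch, "ns") + ns.astype("timedelta64[ns]")


times = decode_with_overflow_check(
    np.array([0, 1800, 3600]), "s", "1970-01-01T00:00:00"
)
print(times)  # ['1970-01-01T00:00:00' '1970-01-01T00:30:00' ...]
```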
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
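
For reference, the filter described at the top of this page can be reproduced against this schema with Python's sqlite3 module; the `github.db` filename is an assumption, not part of this page:

```python
# Query the issue_comments table with the same filter and sort as above.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT [id], [created_at], [body]
    FROM [issue_comments]
    WHERE [author_association] = 'MEMBER'
      AND [issue] = 226549366
      AND [user] = 1217238
    ORDER BY [updated_at] DESC
    """
).fetchall()
for comment_id, created_at, body in rows:
    print(comment_id, created_at, body[:60])
conn.close()
```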