issue_comments
5 rows where issue = 56817968 and user = 1217238 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
74811395 | https://github.com/pydata/xarray/issues/316#issuecomment-74811395 | https://api.github.com/repos/pydata/xarray/issues/316 | MDEyOklzc3VlQ29tbWVudDc0ODExMzk1 | shoyer 1217238 | 2015-02-18T04:45:25Z | 2015-02-18T04:45:25Z | MEMBER | Made a new issue for datetime decoding error handling: #323 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Not-quite-ISO timestamps 56817968
73756512 | https://github.com/pydata/xarray/issues/316#issuecomment-73756512 | https://api.github.com/repos/pydata/xarray/issues/316 | MDEyOklzc3VlQ29tbWVudDczNzU2NTEy | shoyer 1217238 | 2015-02-10T18:40:12Z | 2015-02-10T18:40:12Z | MEMBER | I agree, this is not ideal. We really should try to decode time units at the time a dataset is opened, not just when the values are read (which often leads to a failed … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Not-quite-ISO timestamps 56817968
73619315 | https://github.com/pydata/xarray/issues/316#issuecomment-73619315 | https://api.github.com/repos/pydata/xarray/issues/316 | MDEyOklzc3VlQ29tbWVudDczNjE5MzE1 | shoyer 1217238 | 2015-02-10T00:12:57Z | 2015-02-10T00:12:57Z | MEMBER | Yes, if you want to give it a shot, that would be great! The only thing to watch out for is that … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Not-quite-ISO timestamps 56817968
73289825 | https://github.com/pydata/xarray/issues/316#issuecomment-73289825 | https://api.github.com/repos/pydata/xarray/issues/316 | MDEyOklzc3VlQ29tbWVudDczMjg5ODI1 | shoyer 1217238 | 2015-02-06T18:42:32Z | 2015-02-06T18:42:32Z | MEMBER | It looks like … The other possibility is adding a … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Not-quite-ISO timestamps 56817968
73276393 | https://github.com/pydata/xarray/issues/316#issuecomment-73276393 | https://api.github.com/repos/pydata/xarray/issues/316 | MDEyOklzc3VlQ29tbWVudDczMjc2Mzkz | shoyer 1217238 | 2015-02-06T17:21:18Z | 2015-02-06T17:21:18Z | MEMBER | What a hassle! Unfortunately my experience has been that non-ISO conformant time units are not uncommon. I'll take a look into this. We did add one fix to improve the time unit parsing since the last release. You might want to try installing the dev version off github to see if it fixes your issue. Otherwise you can also do the sort of fixup you describe by loading the dataset with `decode_cf=False`, fixing the metadata and then calling `xray.decode_cf`. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Not-quite-ISO timestamps 56817968
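The fixup described in the last comment (load with `decode_cf=False`, repair the metadata, then call `xray.decode_cf`) hinges on rewriting a not-quite-ISO `units` attribute into strict form. A minimal, stdlib-only sketch of such a rewrite; the function name, regex, and example units strings are illustrative assumptions, not xarray internals:

```python
import re


def normalize_time_units(units: str) -> str:
    """Rewrite a not-quite-ISO CF time units string, e.g. one with
    single-digit month/day or hour:minute:second fields, into strict
    'PERIOD since YYYY-MM-DD HH:MM:SS' form. Illustrative only."""
    m = re.match(r"(\w+) since (\d{1,4})-(\d{1,2})-(\d{1,2})[ T]?(.*)", units.strip())
    if m is None:
        raise ValueError(f"unrecognized units string: {units!r}")
    period, year, month, day, rest = m.groups()

    # Pad the time-of-day fields, defaulting missing ones to zero.
    time = "00:00:00"
    if rest:
        hour, minute, second = (rest.split(":") + ["0", "0"])[:3]
        time = f"{int(hour):02d}:{int(minute):02d}:{int(float(second)):02d}"

    return f"{period} since {int(year):04d}-{int(month):02d}-{int(day):02d} {time}"
```

With the attribute normalized in place on the un-decoded dataset, `xray.decode_cf` (today `xarray.decode_cf`) should accept it; fractional seconds are truncated here, which a real fixup might need to preserve.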
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
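The page's query (issue = 56817968 and user = 1217238, sorted by `updated_at` descending) can be reproduced against this schema with Python's stdlib `sqlite3` module. A sketch with two sample rows; the `REFERENCES` clauses are dropped because the `users` and `issues` tables are not part of this excerpt:

```python
import sqlite3

# Trimmed version of the schema above: same columns, no foreign keys,
# since the referenced tables are not reproduced here.
SCHEMA = """
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
    [author_association] TEXT, [body] TEXT, [reactions] TEXT,
    [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# Two of the rows from the table above; only the queried fields are filled in.
conn.executemany(
    "INSERT INTO issue_comments (id, user, issue, updated_at) VALUES (?, ?, ?, ?)",
    [
        (74811395, 1217238, 56817968, "2015-02-18T04:45:25Z"),
        (73756512, 1217238, 56817968, "2015-02-10T18:40:12Z"),
    ],
)

rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE issue = ? AND user = ? ORDER BY updated_at DESC",
    (56817968, 1217238),
).fetchall()
```

Because `updated_at` is stored as ISO 8601 text, plain string ordering matches chronological ordering, so `ORDER BY updated_at DESC` returns the newest comment first.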