issue_comments
4 rows where issue = 33112594 (Problems parsing time variable using open_dataset), sorted by updated_at descending
42785413 · jhamman (MEMBER) · created 2014-05-11T22:26:54Z
https://github.com/pydata/xarray/issues/118#issuecomment-42785413

@shoyer - my experience is that the dummy

I just tried the new decoding and it seems to work.

Reactions: none
42638962 · shoyer (MEMBER) · created 2014-05-09T07:04:36Z
https://github.com/pydata/xarray/issues/118#issuecomment-42638962

OK, I just merged a fix into master.

Unfortunately, it's not terribly useful to be able to have arrays decoded as

Just out of curiosity, why do you usually convert

If there is a better type than

It's also certainly possible (in principle) to keep around another array with the original, encoded dates. Right now all the decoding according to CF conventions is done in one large function with no options, but I would love for it to be more flexible and modular.

Reactions: none
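The decoding shoyer refers to maps CF-encoded numeric times ("&lt;unit&gt; since &lt;reference date&gt;") onto datetime values. A minimal sketch of that idea for the standard calendar only, using numpy; the function name and unit table here are illustrative, not xarray's actual implementation:

```python
import numpy as np

# Illustrative helper (not xarray's real decoder): turn CF-encoded numeric
# times ("<unit> since <reference date>") into numpy datetime64 values.
# Handles only the standard (proleptic Gregorian) calendar.
def decode_cf_times(values, units):
    unit, _, reference = units.partition(" since ")
    unit_code = {"days": "D", "hours": "h", "minutes": "m", "seconds": "s"}[unit]
    offsets = np.asarray(values).astype("timedelta64[{}]".format(unit_code))
    return np.datetime64(reference) + offsets

print(decode_cf_times([0, 1, 2], "days since 2000-01-01"))
# ['2000-01-01' '2000-01-02' '2000-01-03']
```

Non-standard calendars (noleap, 360_day) are exactly where this simple arithmetic breaks down, which is the subject of the rest of the thread.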
42598104 · jhamman (MEMBER) · created 2014-05-08T19:54:54Z
https://github.com/pydata/xarray/issues/118#issuecomment-42598104

Thanks, the

I've made a habit of always directly converting my

The important piece to remember if this is done is that you have to be very picky about how you calculate timedeltas between these dates, since they think they are on the Gregorian calendar. I usually just keep an ordinal-based time array around for that reason.

Reactions: none
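The pitfall jhamman describes can be shown with plain datetime arithmetic: across a Gregorian leap day, standard datetime objects report one day more than a 365-day ("noleap") model calendar actually contains. The offset values in this sketch are illustrative day-of-year numbers, not data from the issue's file:

```python
from datetime import datetime

# In a 365-day ("noleap") model calendar, 2000-02-28 -> 2000-03-01 is one
# step: there is no Feb 29. Naively decoded standard datetime objects
# "think" they are Gregorian, so their difference includes the leap day:
a, b = datetime(2000, 2, 28), datetime(2000, 3, 1)
print((b - a).days)  # 2 -- wrong for a noleap calendar

# Keeping the raw ordinal offsets ("days since ...") alongside the decoded
# dates gives the calendar-correct elapsed time:
noleap_offset = {"2000-02-28": 58, "2000-03-01": 59}  # zero-based day of year
print(noleap_offset["2000-03-01"] - noleap_offset["2000-02-28"])  # 1
```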
42591663 · shoyer (MEMBER) · created 2014-05-08T19:04:17Z
https://github.com/pydata/xarray/issues/118#issuecomment-42591663

Ouch! Thanks for filing the report and providing the sample file -- I will take a look. For now, turn off automatic date decoding by calling

I'm guessing that part of the trouble might be that numpy and pandas provide poor support for alternative calendars (and honestly, I haven't tested them very much). I attempted to fall back on making arrays of python datetime objects, but in this case it looks like that didn't work -- somehow things got converted into a native numpy datetime64 array anyway.

Reactions: none
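In present-day xarray, the switch shoyer refers to is the decode_times argument to open_dataset, and xr.decode_cf applies the same conventions decoding to an in-memory object. A sketch; the dataset below is constructed in memory rather than read from the issue's sample file:

```python
import numpy as np
import xarray as xr

# Raw CF-encoded time values as they might sit in a netCDF file.
raw = xr.Dataset(
    {"time": ("time", np.array([0, 1, 2]),
              {"units": "days since 2000-01-01", "calendar": "standard"})}
)

# decode_cf applies the same decoding that open_dataset performs by
# default; open_dataset(path, decode_times=False) skips it and leaves
# the raw numeric values untouched.
decoded = xr.decode_cf(raw)
print(decoded["time"].values)
```

With decode_times=False the time variable stays as integers plus its units attribute, which is the workaround suggested above for files whose calendars cannot be represented as datetime64.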
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);