issue_comments
4 rows where author_association = "CONTRIBUTOR", issue = 169276671, and user = 950575, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
239210833 | https://github.com/pydata/xarray/pull/940#issuecomment-239210833 | https://api.github.com/repos/pydata/xarray/issues/940 | MDEyOklzc3VlQ29tbWVudDIzOTIxMDgzMw== | ocefpaf 950575 | 2016-08-11T16:15:05Z | 2016-08-11T16:15:05Z | CONTRIBUTOR | @shoyer the more I think about this the more I don't like the addition of extra keywords. Even though I would like this behavior to be the default one I really do not like the complexity I added here. Closing this... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Don't convert time data to timedelta by default 169276671 | |
237659233 | https://github.com/pydata/xarray/pull/940#issuecomment-237659233 | https://api.github.com/repos/pydata/xarray/issues/940 | MDEyOklzc3VlQ29tbWVudDIzNzY1OTIzMw== | ocefpaf 950575 | 2016-08-04T19:33:35Z | 2016-08-04T19:33:35Z | CONTRIBUTOR |
Indeed! That makes sense no matter what is decided here. Thanks for pointing that out. (Not sure if I am up to the challenge though.)
Done! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Don't convert time data to timedelta by default 169276671 | |
237649523 | https://github.com/pydata/xarray/pull/940#issuecomment-237649523 | https://api.github.com/repos/pydata/xarray/issues/940 | MDEyOklzc3VlQ29tbWVudDIzNzY0OTUyMw== | ocefpaf 950575 | 2016-08-04T18:57:14Z | 2016-08-04T18:57:14Z | CONTRIBUTOR |
I get that, but my question is how often do users perform such operations? Again, I am biased b/c with my data I never want to do that, as it does not make sense. And, when it does make sense, I believe that the price of post conversion is worth the advantages of converting by
Feel free to close this. I don't have strong feelings about what |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Don't convert time data to timedelta by default 169276671 | |
237560363 | https://github.com/pydata/xarray/pull/940#issuecomment-237560363 | https://api.github.com/repos/pydata/xarray/issues/940 | MDEyOklzc3VlQ29tbWVudDIzNzU2MDM2Mw== | ocefpaf 950575 | 2016-08-04T13:56:40Z | 2016-08-04T13:56:40Z | CONTRIBUTOR | This is ready for review. Here is an example of this PR in action showing how plotting and working with periods gets easier. BTW, in light of https://github.com/pydata/xarray/issues/939#issue-169274464, I wonder. Maybe I am being thick and I don't know enough use cases, but I cannot see why someone might want to convert time data (which most of the time represents periods) to timedelta. Regarding the time coordinate itself, I understand, and agree most of the time, that raising errors is better than issuing warnings and creating an ambiguity in the returns. So maybe this one is harder to change than the former. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Don't convert time data to timedelta by default 169276671 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
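The 4-row view above is just a filtered, sorted query over this schema. A minimal sketch of the equivalent query in Python's sqlite3, using an in-memory database with a simplified copy of the table (foreign-key references omitted) seeded with one sample row from the data above:

```python
import sqlite3

# In-memory database with a simplified issue_comments table
# (REFERENCES clauses from the real schema are omitted here).
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY,
        node_id TEXT, user INTEGER, created_at TEXT, updated_at TEXT,
        author_association TEXT, body TEXT, reactions TEXT,
        performed_via_github_app TEXT, issue INTEGER
    )
    """
)

# One sample row taken from the table above.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, body, issue)"
    " VALUES (239210833, 950575, '2016-08-11T16:15:05Z', 'CONTRIBUTOR',"
    " 'Closing this...', 169276671)"
)

# The filter and sort that produce the rows shown on this page.
rows = conn.execute(
    """
    SELECT id, updated_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR'
      AND issue = 169276671
      AND user = 950575
    ORDER BY updated_at DESC
    """
).fetchall()
print(rows)
```

Against the full database this query returns all four comments, newest update first.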