issue_comments
5 rows where issue = 115933483 and user = 2443309 sorted by updated_at descending
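The view above is a simple filter and sort on the issue_comments table. A rough SQL equivalent (a sketch only; column names are taken from the schema at the bottom of the page) would be:

```sql
-- Roughly the query behind this page: comments on issue 115933483
-- authored by user 2443309, newest update first.
select *
from issue_comments
where issue = 115933483
  and "user" = 2443309
order by updated_at desc;
```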
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 218358483 | https://github.com/pydata/xarray/pull/650#issuecomment-218358483 | https://api.github.com/repos/pydata/xarray/issues/650 | MDEyOklzc3VlQ29tbWVudDIxODM1ODQ4Mw== | jhamman 2443309 | 2016-05-11T04:25:04Z | 2016-05-11T04:25:04Z | MEMBER | @MaximilianR - I really like this idea. I'm going to close this PR and we can continue to discuss this feature in the original issue (https://github.com/pydata/xarray/issues/422#issuecomment-218358372). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/average 115933483 |
| 185542959 | https://github.com/pydata/xarray/pull/650#issuecomment-185542959 | https://api.github.com/repos/pydata/xarray/issues/650 | MDEyOklzc3VlQ29tbWVudDE4NTU0Mjk1OQ== | jhamman 2443309 | 2016-02-18T05:00:58Z | 2016-02-18T05:00:58Z | MEMBER | I'm doing some cleanup on my outstanding issues/PRs. After thinking about this again, I'm not all that keen on pushing this into the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/average 115933483 |
| 156139264 | https://github.com/pydata/xarray/pull/650#issuecomment-156139264 | https://api.github.com/repos/pydata/xarray/issues/650 | MDEyOklzc3VlQ29tbWVudDE1NjEzOTI2NA== | jhamman 2443309 | 2015-11-12T15:32:07Z | 2015-11-12T15:32:35Z | MEMBER | Okay, let's go with the @mathause - any comment? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/average 115933483 |
| 155478982 | https://github.com/pydata/xarray/pull/650#issuecomment-155478982 | https://api.github.com/repos/pydata/xarray/issues/650 | MDEyOklzc3VlQ29tbWVudDE1NTQ3ODk4Mg== | jhamman 2443309 | 2015-11-10T16:34:41Z | 2015-11-10T16:34:41Z | MEMBER | That would be the main motivation. If Pandas is going the way of pydata/pandas#10030 via mean, I think we could do that as well. I actually like that approach more since we tend to call it a "weighted mean" (see title of pandas issue). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/average 115933483 |
| 155161359 | https://github.com/pydata/xarray/pull/650#issuecomment-155161359 | https://api.github.com/repos/pydata/xarray/issues/650 | MDEyOklzc3VlQ29tbWVudDE1NTE2MTM1OQ== | jhamman 2443309 | 2015-11-09T19:16:29Z | 2015-11-09T19:16:29Z | MEMBER | Thanks @maximilianr. There has been an open issue here on this for a while (#422). @shoyer - I'm actually not sure I love how I implemented this but I'm teaching a session on open source contributions and code review today so I threw this up here as an example. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/average 115933483 |
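The reactions column above is stored as a JSON string in a TEXT column rather than as separate numeric columns. If the SQLite build includes the JSON1 functions, individual counts can be pulled out directly; a minimal sketch, assuming json_extract is available:

```sql
-- Sketch: extract reaction counts from the JSON stored in the reactions column.
-- Assumes SQLite's JSON1 functions (json_extract) are available.
select id,
       json_extract(reactions, '$.total_count') as total_reactions,
       json_extract(reactions, '$.heart')       as hearts
from issue_comments
where issue = 115933483;
```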
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
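The user and issue columns are integer foreign keys into the users and issues tables, which is why the rows above display them as "jhamman 2443309" and "Feature/average 115933483". A sketch of a join that resolves those keys, assuming users has a login column and issues has a title column (neither table's schema is shown here, so treat the column names as hypothetical):

```sql
-- Hypothetical join resolving the foreign keys; users.login and issues.title
-- are assumed column names, not shown in the schema above.
select c.id,
       u.login as commenter,
       i.title as issue_title,
       c.created_at,
       c.body
from issue_comments as c
join users  as u on u.id = c.[user]
join issues as i on i.id = c.issue
order by c.updated_at desc;
```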