issue_comments
3 rows where author_association = "NONE" and issue = 207587161 sorted by updated_at descending
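For orientation, the SQL this filtered view corresponds to is roughly the following; this is a sketch, assuming the issue_comments schema shown in the CREATE TABLE statement at the bottom of this page.

```sql
-- A sketch of the query behind this view; the table and column names
-- come from the CREATE TABLE statement shown further down the page.
SELECT *
FROM issue_comments
WHERE author_association = 'NONE'
  AND issue = 207587161
ORDER BY updated_at DESC;
```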
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
280104546 | https://github.com/pydata/xarray/issues/1269#issuecomment-280104546 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI4MDEwNDU0Ng== | darothen 4992424 | 2017-02-15T18:59:17Z | 2017-02-15T18:59:17Z | NONE | @MaximilianR Oh, the interface is easy enough to do, even maintaining backwards-compatibility (already have that working). I was considering going the route done with GroupBy and the classes that compose it, like DatasetGroupBy... basically, we just record the wanted resampling dimension and inject the grouping/resampling operations we want. Also adds the ability to specialize methods like […]. But.... if there's a simpler way, that might be preferable! | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161 |
279845588 | https://github.com/pydata/xarray/issues/1269#issuecomment-279845588 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI3OTg0NTU4OA== | darothen 4992424 | 2017-02-14T21:44:11Z | 2017-02-14T21:44:11Z | NONE | Assuming we want to stick with […] or else […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161 |
279810604 | https://github.com/pydata/xarray/issues/1269#issuecomment-279810604 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI3OTgxMDYwNA== | darothen 4992424 | 2017-02-14T19:32:01Z | 2017-02-14T19:32:01Z | NONE | Let me dig into this a bit right now. My analysis project for this afternoon was already going to require digging into pandas' resampling in more depth anyways. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
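Because [user] and [issue] are declared as foreign keys, each comment row can be joined back to its author and its issue. A minimal sketch follows; the users.login and issues.title columns are assumptions, since neither referenced table's schema appears on this page.

```sql
-- Hypothetical join across the declared foreign keys. The columns
-- users.login and issues.title are assumed; only users.id and
-- issues.id are confirmed by the schema above.
SELECT issue_comments.id,
       users.login,
       issues.title,
       issue_comments.updated_at
FROM issue_comments
JOIN users ON users.id = issue_comments.user
JOIN issues ON issues.id = issue_comments.issue
WHERE issue_comments.issue = 207587161
ORDER BY issue_comments.updated_at DESC;
```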