issue_comments
6 rows where issue = 207587161 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
280122805 | https://github.com/pydata/xarray/issues/1269#issuecomment-280122805 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI4MDEyMjgwNQ== | shoyer 1217238 | 2017-02-15T20:04:07Z | 2017-02-15T20:04:07Z | MEMBER | I think this could be done with minimal GroupBy subclasses to supply the default dimension argument for aggregation functions. All the machinery on groupby should already be there. On Wed, Feb 15, 2017 at 10:59 AM Daniel Rothenberg notifications@github.com wrote: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
280104546 | https://github.com/pydata/xarray/issues/1269#issuecomment-280104546 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI4MDEwNDU0Ng== | darothen 4992424 | 2017-02-15T18:59:17Z | 2017-02-15T18:59:17Z | NONE | @MaximilianR Oh, the interface is easy enough to do, even maintaining backwards-compatibility (already have that working). I was considering going the route done with GroupBy and the classes that compose it, like DatasetGroupBy... basically, we just record the wanted resampling dimension and inject the grouping/resampling operations we want. Also adds the ability to specialize methods like … But.... if there's a simpler way, that might be preferable! | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
280101839 | https://github.com/pydata/xarray/issues/1269#issuecomment-280101839 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI4MDEwMTgzOQ== | max-sixty 5635139 | 2017-02-15T18:49:32Z | 2017-02-15T18:49:32Z | MEMBER | I think an interface like … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
280101190 | https://github.com/pydata/xarray/issues/1269#issuecomment-280101190 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI4MDEwMTE5MA== | max-sixty 5635139 | 2017-02-15T18:47:20Z | 2017-02-15T18:47:20Z | MEMBER | Would be great to test for these sorts of issues if we redo this: https://github.com/pydata/xarray/issues/1269 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
279845588 | https://github.com/pydata/xarray/issues/1269#issuecomment-279845588 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI3OTg0NTU4OA== | darothen 4992424 | 2017-02-14T21:44:11Z | 2017-02-14T21:44:11Z | NONE | Assuming we want to stick with … or else … | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
279810604 | https://github.com/pydata/xarray/issues/1269#issuecomment-279810604 | https://api.github.com/repos/pydata/xarray/issues/1269 | MDEyOklzc3VlQ29tbWVudDI3OTgxMDYwNA== | darothen 4992424 | 2017-02-14T19:32:01Z | 2017-02-14T19:32:01Z | NONE | Let me dig into this a bit right now. My analysis project for this afternoon was already going to require digging into pandas' resampling in more depth anyways. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | GroupBy like API for resample 207587161
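
The exchange above converges on a GroupBy-like calling convention for resample: the resampling dimension and target frequency are given as a keyword argument, and aggregations are chained onto the returned object, which supplies the resampled dimension as the default reduction dimension (the point of the first comment). Below is a minimal sketch of that style of interface, assuming the keyword form that xarray's resample ultimately took; the array contents and the 7-day frequency are only illustrative.

```python
import numpy as np
import pandas as pd
import xarray as xr

# A year of daily data along a "time" dimension.
da = xr.DataArray(
    np.arange(365, dtype=float),
    coords={"time": pd.date_range("2000-01-01", periods=365, freq="D")},
    dims="time",
)

# GroupBy-like resample: the keyword argument names the dimension and
# gives the target frequency; the result behaves like a GroupBy object,
# so aggregations are chained onto it rather than passed as `how=`.
weekly_mean = da.resample(time="7D").mean()

# The same object supports other GroupBy-style reductions, and each one
# reduces over the resampled dimension by default.
weekly_max = da.resample(time="7D").max()
```

Chaining `.mean()` or `.max()` onto the resample object mirrors `.groupby(...).mean()`, which is the symmetry the issue title asks for.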
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
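
Given that schema, the selection shown on this page (comments where issue = 207587161, ordered by updated_at descending) maps to a single query, and the idx_issue_comments_issue index covers the WHERE clause. A sketch using Python's sqlite3, assuming the SQLite file behind this page is available locally as github.db (the filename is an assumption):

```python
import sqlite3

# Connect to the local copy of the database; "github.db" is an assumed name.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Reproduce the page's selection: all comments on issue 207587161,
# newest update first.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 207587161
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["author_association"])
```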