issue_comments
4 rows where issue = 711626733 and user = 1217238, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
711461331 | https://github.com/pydata/xarray/issues/4473#issuecomment-711461331 | https://api.github.com/repos/pydata/xarray/issues/4473 | MDEyOklzc3VlQ29tbWVudDcxMTQ2MTMzMQ== | shoyer 1217238 | 2020-10-19T01:30:48Z | 2020-10-19T01:30:48Z | MEMBER | I think we can reuse the existing logic from the […]. This just gives us an alternative way to calculate […].<br>Agreed. Hopefully this can live alongside […] in the GroupBy objects.<br>Yes, I agree that we should do this incrementally. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Wrap numpy-groupies to speed up Xarray's groupby aggregations 711626733
711460703 | https://github.com/pydata/xarray/issues/4473#issuecomment-711460703 | https://api.github.com/repos/pydata/xarray/issues/4473 | MDEyOklzc3VlQ29tbWVudDcxMTQ2MDcwMw== | shoyer 1217238 | 2020-10-19T01:27:50Z | 2020-10-19T01:27:50Z | MEMBER | Something like the resample test case from https://github.com/pydata/xarray/issues/4498 might be a good example for finding 100x speed-ups. The main feature of that case is that there are a very large number of groups (only slightly fewer groups than original data points). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Wrap numpy-groupies to speed up Xarray's groupby aggregations 711626733
701961598 | https://github.com/pydata/xarray/issues/4473#issuecomment-701961598 | https://api.github.com/repos/pydata/xarray/issues/4473 | MDEyOklzc3VlQ29tbWVudDcwMTk2MTU5OA== | shoyer 1217238 | 2020-10-01T07:57:58Z | 2020-10-01T07:57:58Z | MEMBER | I'm not entirely sure, but I suspect something like the approach in https://github.com/pydata/xarray/pull/4184 might be more directly relevant for speeding up […]. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Wrap numpy-groupies to speed up Xarray's groupby aggregations 711626733
701609035 | https://github.com/pydata/xarray/issues/4473#issuecomment-701609035 | https://api.github.com/repos/pydata/xarray/issues/4473 | MDEyOklzc3VlQ29tbWVudDcwMTYwOTAzNQ== | shoyer 1217238 | 2020-09-30T19:52:05Z | 2020-09-30T19:52:05Z | MEMBER | A prototype implementation of the core functionality here can be found in: https://nbviewer.jupyter.org/gist/shoyer/6d6c82bbf383fb717cc8631869678737 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Wrap numpy-groupies to speed up Xarray's groupby aggregations 711626733
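The comments above point at the same core idea: factorize the group labels into integer codes, then hand the whole aggregation to numpy-groupies in a single vectorized pass instead of looping over groups in Python. Below is a minimal sketch of that idea, not the code from the linked prototype gist; `grouped_mean` is an illustrative name, and the snippet assumes numpy-groupies and pandas are installed.

```python
import numpy as np
import pandas as pd
import numpy_groupies as npg


def grouped_mean(values: np.ndarray, by: np.ndarray) -> pd.Series:
    """Mean of `values` within each group defined by the labels in `by`."""
    # pd.factorize maps each distinct label to an integer code in [0, n_groups).
    codes, uniques = pd.factorize(by, sort=True)
    # One vectorized scatter-aggregation over the data, with cost that is
    # essentially independent of how many distinct groups there are.
    means = npg.aggregate(codes, values, func="mean")
    return pd.Series(means, index=uniques)


# The stress case described in the comments above: almost as many groups
# as data points, which is where a per-group Python loop is slowest.
values = np.random.default_rng(0).random(1_000_000)
labels = np.arange(1_000_000) // 2  # ~500,000 groups of size 2
print(grouped_mean(values, labels).head())
```

The design point is that `npg.aggregate` costs roughly one pass over `values` regardless of the number of groups, which is exactly the regime (groups ≈ data points) where the resample test case shows a 100x gap over per-group loops.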
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
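The view at the top of this page ("4 rows where issue = 711626733 and user = 1217238, sorted by updated_at descending") corresponds to a straightforward query against this schema. A minimal sketch using Python's sqlite3, assuming the table lives in a local database file; the filename `github.db` is an assumption, not part of the schema above:

```python
import sqlite3

# Hypothetical local database file containing the issue_comments table above.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (711626733, 1217238),
).fetchall()
for comment_id, created, updated, assoc, body in rows:
    print(comment_id, updated, body[:60])
```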