issue_comments
3 rows where author_association = "MEMBER" and issue = 930918574 sorted by updated_at descending
id: 1176402083
html_url: https://github.com/pydata/xarray/pull/5540#issuecomment-1176402083
issue_url: https://api.github.com/repos/pydata/xarray/issues/5540
node_id: IC_kwDOAMm_X85GHnij
user: max-sixty (5635139)
created_at: 2022-07-06T16:00:08Z
updated_at: 2022-07-06T16:00:08Z
author_association: MEMBER
reactions: total_count 1 (+1: 1)
issue: Cache some properties (930918574)
body: Resurrecting this, as discussed on the dev call. Could we replace the pandas decorator with the one from the standard library? That may require adding Then as long as the benchmarks still look good, there was consensus that we should merge.

id: 869922912
html_url: https://github.com/pydata/xarray/pull/5540#issuecomment-869922912
issue_url: https://api.github.com/repos/pydata/xarray/issues/5540
node_id: MDEyOklzc3VlQ29tbWVudDg2OTkyMjkxMg==
user: Illviljan (14371165)
created_at: 2021-06-28T18:35:38Z
updated_at: 2021-06-28T18:35:38Z
author_association: MEMBER
reactions: total_count 1 (+1: 1)
issue: Cache some properties (930918574)
body: The case I'm optimizing is dataset interpolation with many variables. Although it's not the bottleneck there, I halved the shape time from ~180 ms to ~92 ms with this change.

id: 869286696
html_url: https://github.com/pydata/xarray/pull/5540#issuecomment-869286696
issue_url: https://api.github.com/repos/pydata/xarray/issues/5540
node_id: MDEyOklzc3VlQ29tbWVudDg2OTI4NjY5Ng==
user: max-sixty (5635139)
created_at: 2021-06-28T02:22:07Z
updated_at: 2021-06-28T02:22:07Z
author_association: MEMBER
reactions: none
issue: Cache some properties (930918574)
body: Thanks for kicking this off @Illviljan. My concern with this is that adding the It also sounds like the speed-ups I was seeing might not be real. Probably we should have an ASV so there's a benchmark we're all looking at.
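The standard-library decorator being discussed is presumably `functools.cached_property` (Python 3.8+), which stores the computed value in the instance's `__dict__` on first access. A minimal sketch of caching a shape-style property, using a hypothetical `Variable` class rather than xarray's real one:

```python
from functools import cached_property


class Variable:
    """Hypothetical stand-in for an xarray-style wrapper; not the real class."""

    def __init__(self, data):
        self._data = data

    @cached_property
    def shape(self):
        # Computed once on first access; later lookups hit the value
        # cached in the instance's __dict__ and skip this method entirely.
        return (len(self._data), len(self._data[0]))


v = Variable([[1, 2, 3], [4, 5, 6]])
print(v.shape)             # (2, 3), computed
print(v.shape)             # (2, 3), served from the cache
print("shape" in vars(v))  # True: the cached value lives on the instance
```

Note that `cached_property` never recomputes on its own: if `_data` is later mutated, the cached `shape` goes stale until it is explicitly cleared with `del v.shape`.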
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
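The query this page shows (MEMBER comments on issue 930918574, newest first) can be reproduced against the schema above. A sketch using Python's `sqlite3` with one illustrative row copied from the data:

```python
import sqlite3

SCHEMA = """
CREATE TABLE issue_comments (
    html_url TEXT,
    issue_url TEXT,
    id INTEGER PRIMARY KEY,
    node_id TEXT,
    user INTEGER REFERENCES users(id),
    created_at TEXT,
    updated_at TEXT,
    author_association TEXT,
    body TEXT,
    reactions TEXT,
    performed_via_github_app TEXT,
    issue INTEGER REFERENCES issues(id)
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
CREATE INDEX idx_issue_comments_user ON issue_comments (user);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# One row taken from the comment data above (other columns left NULL).
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    (1176402083, 5635139, "2022-07-06T16:00:08Z", "MEMBER", 930918574),
)

rows = conn.execute(
    "SELECT id, user, updated_at FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = 930918574"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)  # [(1176402083, 5635139, '2022-07-06T16:00:08Z')]
```

The `idx_issue_comments_issue` index is what makes the `issue = 930918574` filter cheap as the comments table grows.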