issue_comments
5 rows where author_association = "MEMBER" and issue = 503163130 sorted by updated_at descending
Each comment below was posted on pull request #3375, "Speed up isel and __getitem__" (issue 503163130; API: https://api.github.com/repos/pydata/xarray/issues/3375).

**539237559** · crusaderky (6213168) · MEMBER · created 2019-10-07T22:51:09Z · updated 2019-10-07T22:51:09Z
node_id MDEyOklzc3VlQ29tbWVudDUzOTIzNzU1OQ== · https://github.com/pydata/xarray/pull/3375#issuecomment-539237559

> Yes, in due time. It's out of scope for this PR though.

Reactions: none.
**539204236** · jhamman (2443309) · MEMBER · created 2019-10-07T21:06:09Z · updated 2019-10-07T21:06:09Z
node_id MDEyOklzc3VlQ29tbWVudDUzOTIwNDIzNg== · https://github.com/pydata/xarray/pull/3375#issuecomment-539204236

> Would you be willing to add a few of these cases to the benchmarks?

Reactions: none.
**538910300** · crusaderky (6213168) · MEMBER · created 2019-10-07T09:13:07Z · updated 2019-10-07T09:13:51Z
node_id MDEyOklzc3VlQ29tbWVudDUzODkxMDMwMA== · https://github.com/pydata/xarray/pull/3375#issuecomment-538910300

> I see that all tests in benchmarks/indexing.py use arrays with 2–6 million points. While this is important for spotting any case where the underlying numpy functions start being called more than once unnecessarily, it also means that any performance improvement or degradation in the pure-Python code will be completely drowned out.

Reactions: +1: 1.
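crusaderky's point above, that multi-million-point arrays drown out changes in the pure-Python layer, suggests benchmarking `isel` on a deliberately tiny array. A minimal sketch in the ASV class-based style; the class name and array sizes are hypothetical, not taken from the repository's actual `asv_bench/benchmarks/indexing.py`:

```python
import numpy as np
import xarray as xr

class IselTiny:
    """Hypothetical ASV-style benchmark: the array is deliberately tiny,
    so timings are dominated by xarray's pure-Python dispatch overhead
    rather than by the underlying numpy work."""

    def setup(self):
        # A 10x10 Dataset: small enough that numpy's share of each
        # indexing call is negligible.
        self.ds = xr.Dataset(
            {"var": (("x", "y"), np.random.rand(10, 10))},
            coords={"x": np.arange(10), "y": np.arange(10)},
        )

    def time_isel_scalar(self):
        # Drop one dimension via an integer index.
        self.ds.isel(x=0)

    def time_isel_slice(self):
        # Keep both dimensions via a slice.
        self.ds.isel(x=slice(3))

    def time_getitem_positional(self):
        # Positional __getitem__ on the DataArray.
        self.ds["var"][0]
```

ASV would time each `time_*` method repeatedly after calling `setup`, so regressions of a few microseconds per call become visible instead of vanishing into the cost of a 6-million-point array.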
**538906863** · crusaderky (6213168) · MEMBER · created 2019-10-07T09:04:04Z · updated 2019-10-07T09:08:12Z
node_id MDEyOklzc3VlQ29tbWVudDUzODkwNjg2Mw== · https://github.com/pydata/xarray/pull/3375#issuecomment-538906863

> @jhamman hm. I'm looking at it now for the first time. At first sight it's a good start, but it's missing some important use cases:

Reactions: none.
**538846251** · jhamman (2443309) · MEMBER · created 2019-10-07T05:34:58Z · updated 2019-10-07T05:34:58Z
node_id MDEyOklzc3VlQ29tbWVudDUzODg0NjI1MQ== · https://github.com/pydata/xarray/pull/3375#issuecomment-538846251

> Thanks @crusaderky. Do you think the indexing benchmarks we have in https://github.com/pydata/xarray/blob/master/asv_bench/benchmarks/indexing.py are sufficient? Anything you think would be worth adding to cover performance regressions here?

Reactions: none.
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
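Since the schema above is plain SQLite, the page's row filter (`author_association = "MEMBER" and issue = 503163130`, sorted by `updated_at` descending) can be reproduced with Python's built-in `sqlite3`. A minimal sketch against an in-memory copy of the table; the single sample row reuses values from the first comment above, and the copy drops the foreign-key references for brevity:

```python
import sqlite3

# In-memory, simplified copy of the issue_comments schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
    html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
    user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
    body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
""")

# One sample row, taken from comment 539237559 above.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, updated_at, issue) "
    "VALUES (539237559, 6213168, 'MEMBER', '2019-10-07T22:51:09Z', 503163130)"
)

# The same query the page ran to produce its 5 rows.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = ? "
    "ORDER BY updated_at DESC",
    (503163130,),
).fetchall()
print(rows)  # → [(539237559,)]
```

The `idx_issue_comments_issue` index lets SQLite satisfy the `issue = ?` predicate without a full table scan, which is presumably why the export defines it.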