issue_comments
4 rows where issue = 236347050 and user = 2443309 sorted by updated_at descending
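The listing below is equivalent to a straightforward filtered query against the issue_comments table (schema reproduced at the end of this page). A minimal SQL sketch, assuming the github-to-sqlite column names shown in that schema:

    -- comments on issue 236347050 by user 2443309, newest update first
    select [id], [user], [created_at], [updated_at], [author_association], [body]
    from [issue_comments]
    where [issue] = 236347050
      and [user] = 2443309
    order by [updated_at] desc;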
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
318468605 | https://github.com/pydata/xarray/pull/1457#issuecomment-318468605 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMxODQ2ODYwNQ== | jhamman 2443309 | 2017-07-27T19:54:01Z | 2017-07-27T19:54:01Z | MEMBER | Yes! Thanks @wesm and @TomAugspurger. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
317091662 | https://github.com/pydata/xarray/pull/1457#issuecomment-317091662 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMxNzA5MTY2Mg== | jhamman 2443309 | 2017-07-21T19:27:49Z | 2017-07-21T19:27:49Z | MEMBER | Thanks @TomAugspurger - see https://github.com/TomAugspurger/asv-runner/issues/1. All, I added a series of multi-file benchmarks. I think for a first PR, this is ready to fly and we can add more benchmarks as needed. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
315220704 | https://github.com/pydata/xarray/pull/1457#issuecomment-315220704 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMxNTIyMDcwNA== | jhamman 2443309 | 2017-07-13T22:37:02Z | 2017-07-13T22:37:02Z | MEMBER | @rabernat - do you have any thoughts on this? @pydata/xarray - I'm trying to decide if this is worth spending any more time on. What sort of coverage would we want before we merge this first PR? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
308935684 | https://github.com/pydata/xarray/pull/1457#issuecomment-308935684 | https://api.github.com/repos/pydata/xarray/issues/1457 | MDEyOklzc3VlQ29tbWVudDMwODkzNTY4NA== | jhamman 2443309 | 2017-06-16T05:20:24Z | 2017-06-16T05:20:24Z | MEMBER | Keep the comments coming! I think we can distinguish between benchmarking for regressions and benchmarking for development and introspection. The former will require some thought as to what machines we want to rely on and how to achieve consistency throughout the development track. It sounds like there are a number of options that we could pursue toward those ends. The latter use of benchmarking is useful on a single machine with only a few commits of history. For the four benchmarks in my sample So the relative performance is useful information in deciding how to use and/or develop xarray. (Granted the exact factors will change depending on machine/architecture/dataset). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Feature/benchmark 236347050
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
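The [reactions] column holds each comment's GitHub reaction summary as a JSON string (as shown in the rows above). A hedged sketch of unpacking it with SQLite's standard JSON1 functions (requires a SQLite build with JSON support):

    -- Pull individual reaction counts out of the JSON text stored in [reactions].
    select [id],
           json_extract([reactions], '$.total_count') as total_reactions,
           json_extract([reactions], '$.heart') as hearts
    from [issue_comments]
    where [issue] = 236347050;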