issue_comments
6 rows where issue = 1428274982 and user = 90008 sorted by updated_at descending
All six comments are by hmaarrfk (user 90008, author_association CONTRIBUTOR) on issue 1428274982, "Expand benchmarks for dataset insertion and creation" (issue_url: https://api.github.com/repos/pydata/xarray/issues/7236); performed_via_github_app is empty for every row, and created_at equals updated_at throughout. Each entry below gives the id, node_id, timestamp, html_url, body, and reactions.
id 1295999237 · node_id IC_kwDOAMm_X85NP2EF · 2022-10-29T22:11:33Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295999237

Well now the benchmarks look like they make more sense:

reactions: none
id 1295937569 · node_id IC_kwDOAMm_X85NPnAh · 2022-10-29T18:58:35Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295937569

As you thought, the numbers improve quite a bit. I kinda want to understand why a no-op takes 1 ms! ^_^

reactions: none
id 1295937364 · node_id IC_kwDOAMm_X85NPm9U · 2022-10-29T18:57:54Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295937364

What about just specifying "dims"?

reactions: +1 × 1
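For context, the three creation paths these benchmarks compare (dict of DataArrays, dict of tuples, dict of Variables) differ mainly in where each variable's dims are specified. A minimal sketch; the names here are illustrative, not from the PR:

```python
import numpy as np
import xarray as xr

data = np.zeros(100)

# dict of DataArrays: dims travel with each DataArray
ds_from_dataarrays = xr.Dataset({"a": xr.DataArray(data, dims="x")})

# dict of (dims, data) tuples: dims are spelled out inline per entry
ds_from_tuples = xr.Dataset({"a": ("x", data)})

# dict of Variables: lower-level objects, no DataArray wrapper
ds_from_variables = xr.Dataset({"a": xr.Variable("x", data)})
```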
id 1295905591 · node_id IC_kwDOAMm_X85NPfM3 · 2022-10-29T17:11:30Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295905591

With the right window size it looks like:

reactions: none
id 1295852860 · node_id IC_kwDOAMm_X85NPSU8 · 2022-10-29T14:28:25Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295852860

On the CI, it reports similar findings:

```
[ 67.73%] ··· ...dVariable.time_dict_of_dataarrays_to_dataset  ok
[ 67.73%] ··· =================== =============
               existing_elements
[ 67.88%] ··· ...etAddVariable.time_dict_of_tuples_to_dataset  ok
[ 67.88%] ··· =================== ===========
               existing_elements
[ 68.02%] ··· ...ddVariable.time_dict_of_variables_to_dataset  ok
[ 68.02%] ··· =================== =============
               existing_elements
[ 68.17%] ··· ...e.DatasetAddVariable.time_merge_two_datasets  ok
[ 68.17%] ··· =================== =============
               existing_elements
[ 68.31%] ··· ...e.DatasetAddVariable.time_variable_insertion  ok
[ 68.31%] ··· =================== =============
               existing_elements
```

reactions: none
id 1295843798 · node_id IC_kwDOAMm_X85NPQHW · 2022-10-29T13:55:33Z
https://github.com/pydata/xarray/pull/7236#issuecomment-1295843798

```
$ asv run -E existing --quick --bench merge
· Discovering benchmarks
· Running 5 total benchmarks (1 commits * 1 environments * 5 benchmarks)
[  0.00%] ·· Benchmarking existing-py_home_mark_mambaforge_envs_mcam_dev_bin_python
[ 10.00%] ··· merge.DatasetAddVariable.time_dict_of_dataarrays_to_dataset  ok
[ 10.00%] ··· =================== ==========
               existing_elements
              ------------------- ----------
                        0          762±0μs
                        10         7.18±0ms
                        100        12.6±0ms
                        1000       89.1±0ms
              =================== ==========
[ 20.00%] ··· merge.DatasetAddVariable.time_dict_of_tuples_to_dataset  ok
[ 20.00%] ··· =================== ==========
               existing_elements
              ------------------- ----------
                        0          889±0μs
                        10         2.01±0ms
                        100        1.34±0ms
                        1000       605±0μs
              =================== ==========
[ 30.00%] ··· merge.DatasetAddVariable.time_dict_of_variables_to_dataset  ok
[ 30.00%] ··· =================== ==========
               existing_elements
              ------------------- ----------
                        0          2.48±0ms
                        10         2.06±0ms
                        100        2.13±0ms
                        1000       2.38±0ms
              =================== ==========
[ 40.00%] ··· merge.DatasetAddVariable.time_merge_two_datasets  ok
[ 40.00%] ··· =================== ==========
               existing_elements
              ------------------- ----------
                        0          814±0μs
                        10         945±0μs
                        100        2.42±0ms
                        1000       5.23±0ms
              =================== ==========
[ 50.00%] ··· merge.DatasetAddVariable.time_variable_insertion  ok
[ 50.00%] ··· =================== ==========
               existing_elements
              ------------------- ----------
                        0          1.10±0ms
                        10         954±0μs
                        100        1.88±0ms
                        1000       5.29±0ms
              =================== ==========
```

reactions: none
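For readers unfamiliar with asv output: it discovers classes such as merge.DatasetAddVariable in the benchmark suite and runs each time_* method once per existing_elements value. A minimal sketch of what such a benchmark class looks like, assuming the general shape of the suite rather than quoting this PR's code:

```python
# Illustrative sketch of an asv benchmark class shaped like
# merge.DatasetAddVariable above; method bodies are assumptions,
# not the actual code from this PR.
import xarray as xr


class DatasetAddVariable:
    # asv repeats every time_* method for each parameter value
    param_names = ["existing_elements"]
    params = [[0, 10, 100, 1000]]

    def setup(self, existing_elements):
        # Pre-populate a dataset with `existing_elements` scalar variables
        # so each timing below is measured against dataset size.
        self.dataset = xr.Dataset({f"var{i}": 0 for i in range(existing_elements)})

    def time_variable_insertion(self, existing_elements):
        # Cost of adding a single variable via __setitem__
        self.dataset["new_var"] = 0

    def time_merge_two_datasets(self, existing_elements):
        # Cost of merging a one-variable dataset into the existing one
        self.dataset.merge(xr.Dataset({"new_var": 0}))

    def time_dict_of_tuples_to_dataset(self, existing_elements):
        # Cost of building a dataset from scratch out of (dims, data) tuples
        xr.Dataset({f"var{i}": ((), 0) for i in range(existing_elements)})
```

With existing_elements = 0 the constructed dict is empty, so the 0-element timings above amount to the fixed cost of creating an empty Dataset, presumably the roughly 1 ms "no-op" questioned in the comment above.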
Table schema:

```
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
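This page's row selection is an ordinary query against that table; a sketch using Python's sqlite3, assuming a local copy of the database:

```python
import sqlite3

# Reproduce this page's selection: comments on issue 1428274982 by user
# 90008, newest first. "github.db" is a placeholder filename, not from
# the page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (1428274982, 90008),
).fetchall()
for comment_id, created_at, body in rows:
    # Show the start of each comment body on one line
    print(comment_id, created_at, body.replace("\n", " ")[:72])
```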