
issue_comments


5 rows where issue = 1040185743 and user = 14371165, sorted by updated_at descending


Comment 963540277 (IC_kwDOAMm_X845bnU1) · Illviljan (14371165) · MEMBER · 2021-11-08T20:20:55Z
https://github.com/pydata/xarray/pull/5922#issuecomment-963540277

When does https://pandas.pydata.org/speed/xarray/#/ update, by the way? I was thinking the pandas and dask groupby benchmarks might be interesting there.

Reactions: none · Issue: Add groupby & resample benchmarks (1040185743)
Comment 962495898 (IC_kwDOAMm_X845XoWa) · Illviljan (14371165) · MEMBER · 2021-11-06T19:03:15Z
https://github.com/pydata/xarray/pull/5922#issuecomment-962495898

@dcherian What do you think of this? I had to reduce the values quite a bit to get it decently fast; hopefully it still shows the relevant parts.

Reactions: none · Issue: Add groupby & resample benchmarks (1040185743)
Comment 962477227 (IC_kwDOAMm_X845Xjyr) · Illviljan (14371165) · MEMBER · created 2021-11-06T16:40:59Z, updated 2021-11-06T18:07:47Z
https://github.com/pydata/xarray/pull/5922#issuecomment-962477227

Notes:
  • 3df1015 · 28m 9s, original
  • 445312d · 25m 19s, ~0.5× values
  • c56dd94 · 9m 50s, ~0.2× values
  • a89f62b · 8m 42s, removed the pandas and dask dataframe tests from CI

Reactions: none · Issue: Add groupby & resample benchmarks (1040185743)
Comment 955711837 (IC_kwDOAMm_X8449wFd) · Illviljan (14371165) · MEMBER · 2021-10-31T14:34:34Z
https://github.com/pydata/xarray/pull/5922#issuecomment-955711837

I checked the timing in #5796; it turns out they were (of course) just as slow... The total time was probably fine back then, since groupby had so few tests at that point, but now with a few more it scales up quite a bit. I think we have to reduce the size of the datasets and maybe increase the resampling points a little. The tests taking 5 s are major bottlenecks; we should try to get them down to at most 100-500 ms, in my opinion. We can increase the numbers again if the numpy_groupies PR improves the speed significantly.

If you temporarily remove the other asv files, we can get a better feel for how long the groupby file takes, and it's also a little easier to read the report when iterating like this.

It's also curious that mean is faster than sum for some of these; I don't understand that at all.

```
benchmark (groupby.*)                            sum ndim=1   sum ndim=2   mean ndim=1  mean ndim=2
GroupBy.time_agg_large_num_groups                577±30ms     880±40ms     649±40ms     955±60ms
GroupBy.time_agg_small_num_groups                279±20ms     445±20ms     325±20ms     483±30ms
GroupByDask.time_agg_large_num_groups            2.27±0.09s   5.57±0.1s    1.45±0.03s   4.68±0.05s
GroupByDask.time_agg_small_num_groups            2.14±0.02s   5.27±0.05s   1.40±0.02s   4.62±0.02s
GroupByDaskDataFrame.time_agg_large_num_groups   1.36±0.07ms  829±20ms     1.10±0.07ms  903±20ms
GroupByDaskDataFrame.time_agg_small_num_groups   1.32±0.02ms  415±10ms     1.06±0.02ms  468±9ms
GroupByDataFrame.time_agg_large_num_groups       1.39±0.02ms  855±20ms     1.14±0.07ms  907±10ms
GroupByDataFrame.time_agg_small_num_groups       1.32±0.05ms  423±10ms     1.03±0.02ms  455±6ms
Resample.time_agg_large_num_groups               623±10ms     712±10ms     685±4ms      910±9ms
Resample.time_agg_small_num_groups               6.05±0.2ms   7.22±0.3ms   6.24±0.2ms   8.34±0.2ms
ResampleDask.time_agg_large_num_groups           4.98±0.02s   5.33±0.03s   2.90±0.02s   3.18±0.01s
ResampleDask.time_agg_small_num_groups           25.3±0.2ms   28.5±1ms     18.3±0.5ms   20.3±0.7ms

benchmark (groupby.*)                            ndim=1       ndim=2
GroupBy.time_init                                1.08±0.09ms  6.45±0.5ms
GroupByDask.time_init                            5.46±0.1ms   31.1±2ms
GroupByDaskDataFrame.time_init                   36.8±2μs     5.98±0.09ms
GroupByDataFrame.time_init                       33.0±0.5μs   5.72±0.08ms
Resample.time_init                               2.21±0.08ms  2.13±0.08ms
ResampleDask.time_init                           2.19±0.1ms   2.20±0.1ms
```
Reactions: none · Issue: Add groupby & resample benchmarks (1040185743)
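
The timings in the comment above come from asv's parameterized benchmarks. As a rough sketch of that pattern (not the PR's actual groupby.py; the class body, data sizes, and grouping variable here are illustrative assumptions), a benchmark parameterized over ndim and method looks like:

```python
# Illustrative asv benchmark in the style discussed above; the class body,
# data sizes, and grouping variable are assumptions, not the PR's actual code.
import numpy as np
import xarray as xr


class GroupBy:
    # asv runs each time_* method once per combination of these parameters
    params = [[1, 2], ["sum", "mean"]]
    param_names = ["ndim", "method"]

    def setup(self, ndim, method):
        n = 100  # small on purpose: the thread targets at most 100-500 ms per run
        self.ds1d = xr.Dataset(
            {"a": ("x", np.random.randn(n))},
            coords={"b": ("x", np.tile(np.arange(n // 10), 10))},  # 10 groups
        )
        self.ds2d = self.ds1d.expand_dims(z=10)

    def time_agg_small_num_groups(self, ndim, method):
        ds = getattr(self, f"ds{ndim}d")
        getattr(ds.groupby("b"), method)()  # e.g. ds.groupby("b").sum()
```

Locally, something like `asv continuous -f 1.1 <base> HEAD -b GroupBy` from the benchmark directory compares two commits on just these classes.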
Comment 955537002 (IC_kwDOAMm_X8449FZq) · Illviljan (14371165) · MEMBER · 2021-10-30T17:33:11Z
https://github.com/pydata/xarray/pull/5922#issuecomment-955537002

These additions seem to double the total testing time. Won't we learn the same thing with a tenth of the values?

Did you run asv locally, by the way? Is it fast on your own PC?

Reactions: none · Issue: Add groupby & resample benchmarks (1040185743)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
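
For reference, the row selection described at the top of this page maps to a single query over this schema. A minimal sketch using Python's sqlite3 module, assuming a local export of this database saved as github.db (the file name is an assumption):

```python
import sqlite3

# Assumption: a local export of this Datasette database named "github.db".
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    """
    SELECT id, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 1040185743   -- Add groupby & resample benchmarks
      AND [user] = 14371165    -- Illviljan
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    # id, timestamp, and the first line of each comment body
    print(row["id"], row["updated_at"], row["body"].splitlines()[0])
```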