issue_comments

7 rows where author_association = "MEMBER" and issue = 1423312198 sorted by updated_at descending

Commenters: Illviljan (4) · shoyer (1) · benbovy (1) · max-sixty (1)
Columns: id · html_url · issue_url · node_id · user · created_at · updated_at (sorted descending) · author_association · body · reactions · performed_via_github_app · issue
1294262457 · https://github.com/pydata/xarray/pull/7221#issuecomment-1294262457 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85NJOC5 · shoyer (1217238) · 2022-10-28T00:27:22Z · 2022-10-28T00:27:22Z · MEMBER

I no longer remember why I added these checks, but I certainly did not expect to see this sort of performance penalty!

reactions: +1 × 2
issue: Remove debugging slow assert statement (1423312198)

1293860075 · https://github.com/pydata/xarray/pull/7221#issuecomment-1293860075 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85NHrzr · benbovy (4160723) · 2022-10-27T17:40:52Z · 2022-10-27T17:40:52Z · MEMBER

Thanks @hmaarrfk!

> I haven't fully understood why we had that code though?

Me neither. I don't remember ever seeing this assertion error raised while refactoring things. Any idea @shoyer?

reactions: none
issue: Remove debugging slow assert statement (1423312198)

1293815240 · https://github.com/pydata/xarray/pull/7221#issuecomment-1293815240 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85NHg3I · Illviljan (14371165) · 2022-10-27T16:58:45Z · 2022-10-27T16:58:45Z · MEMBER

```
       before           after         ratio
     [c000690c]       [24753f1f]
-    3.17±0.02ms     1.94±0.01ms     0.61  merge.DatasetAddVariable.time_variable_insertion(100)
-      81.5±2ms      17.0±0.2ms      0.21  merge.DatasetAddVariable.time_variable_insertion(1000)

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
PERFORMANCE INCREASED.
```

Nice improvements. :)

I haven't fully understood why we had that code though?

reactions: +1 × 1
issue: Remove debugging slow assert statement (1423312198)

1291523800 · https://github.com/pydata/xarray/pull/7221#issuecomment-1291523800 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85M-xbY · Illviljan (14371165) · 2022-10-26T05:27:11Z · 2022-10-26T05:27:11Z · MEMBER

Now the asv run finishes at least! Could you make a separate PR for the asv benchmark? I don't think asv runs it when comparing against the main branch.

reactions: none
issue: Remove debugging slow assert statement (1423312198)

1291501993 · https://github.com/pydata/xarray/pull/7221#issuecomment-1291501993 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85M-sGp · Illviljan (14371165) · 2022-10-26T04:56:39Z · 2022-10-26T04:57:37Z · MEMBER

I like large datasets as well. I seem to remember getting caught in similar places when creating my datasets. I think I solved it by using Variable instead; does something like this improve the performance for you?

```python
import xarray as xr

dataset = xr.Dataset()
dataset['a'] = xr.Variable(dims="time", data=[1])
dataset['b'] = xr.Variable(dims="time", data=[2])
```

reactions: none
issue: Remove debugging slow assert statement (1423312198)

1291493769 · https://github.com/pydata/xarray/pull/7221#issuecomment-1291493769 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85M-qGJ · Illviljan (14371165) · 2022-10-26T04:44:43Z · 2022-10-26T04:44:43Z · MEMBER

Error:

```
[ 75.90%] ··· dataset_creation.Creation.time_dataset_creation failed
[ 75.90%] ···· asv: benchmark timed out (timeout 60.0s)
```

Maybe 1000 loops is too much. Start with 100 maybe? We still want these benchmarks to be decently fast in the CI.

reactions: none
issue: Remove debugging slow assert statement (1423312198)

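For context, an asv benchmark of the kind discussed in the comment above might look like the sketch below. The class and method names are copied from the timeout message (dataset_creation.Creation.time_dataset_creation), but the body is an assumption rather than the actual benchmark from the PR, and it uses the smaller count of 100 suggested above.

```python
# Hypothetical sketch of an asv benchmark matching the names in the timeout
# message above; the real benchmark in the PR may differ. Inserting variables
# one at a time is the pattern whose cost the PR reduces.
import xarray as xr


class Creation:
    # 100 insertions instead of 1000, per the suggestion above, to stay
    # well under asv's 60 s benchmark timeout
    n_variables = 100

    def time_dataset_creation(self):
        dataset = xr.Dataset()
        for i in range(self.n_variables):
            dataset[f"var{i}"] = i
```
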
1291388733 · https://github.com/pydata/xarray/pull/7221#issuecomment-1291388733 · https://api.github.com/repos/pydata/xarray/issues/7221 · IC_kwDOAMm_X85M-Qc9 · max-sixty (5635139) · 2022-10-26T01:58:00Z · 2022-10-26T01:58:00Z · MEMBER

Gosh, that's quite dramatic! Impressive find @hmaarrfk. (out of interest, how did you find this?)

I can see how that's quadratic when looping like that. I wonder whether using `.assign(var1=1, var2=2, ...)` has the same behavior?

Would be interesting to see whether this was covered by our existing asv benchmarks. Would be a good benchmark to add if we don't have one already.

reactions: none
issue: Remove debugging slow assert statement (1423312198)
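The two construction patterns compared in the comment above can be sketched as follows. The variable names are illustrative, and whether .assign avoids the per-insertion cost is exactly the open question raised there, not an established result.

```python
# Illustrative comparison only. Per the discussion above, the since-removed
# assert made each insertion scale with the variables already present, so the
# loop form was quadratic in the number of variables.
import xarray as xr

n = 1000

# variable-by-variable insertion (the quadratic pattern)
ds = xr.Dataset()
for i in range(n):
    ds[f"var{i}"] = i

# one .assign call that adds all variables at once
ds2 = xr.Dataset().assign(**{f"var{i}": i for i in range(n)})
```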

Table schema

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
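
The filter shown at the top of this page can be reproduced directly against the underlying SQLite database, for example with Python's sqlite3 module. A minimal sketch, assuming a database file named github.db (the actual file name is not given on this page):

```python
# Re-run this page's query against the SQLite database behind it.
# The file name "github.db" is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = ? AND issue = ?
    ORDER BY updated_at DESC
    """,
    ("MEMBER", 1423312198),
).fetchall()
print(len(rows))  # 7 rows for this page
```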