issue_comments

4 rows where issue = 1423312198 and user = 14371165 sorted by updated_at descending

id 1293815240 · node_id IC_kwDOAMm_X85NHg3I · user Illviljan (14371165) · author_association MEMBER
created_at 2022-10-27T16:58:45Z · updated_at 2022-10-27T16:58:45Z
html_url https://github.com/pydata/xarray/pull/7221#issuecomment-1293815240
issue_url https://api.github.com/repos/pydata/xarray/issues/7221

```
       before           after         ratio
     [c000690c]       [24753f1f]
-     3.17±0.02ms     1.94±0.01ms     0.61  merge.DatasetAddVariable.time_variable_insertion(100)
-         81.5±2ms      17.0±0.2ms     0.21  merge.DatasetAddVariable.time_variable_insertion(1000)

SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
PERFORMANCE INCREASED.
```

Nice improvements. :)

I haven't fully understood why we had that code in the first place, though.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Remove debugging slow assert statement (1423312198)
id 1291523800 · node_id IC_kwDOAMm_X85M-xbY · user Illviljan (14371165) · author_association MEMBER
created_at 2022-10-26T05:27:11Z · updated_at 2022-10-26T05:27:11Z
html_url https://github.com/pydata/xarray/pull/7221#issuecomment-1291523800
issue_url https://api.github.com/repos/pydata/xarray/issues/7221

Now the asv run finishes, at least! Could you make a separate PR for the asv changes? I don't think asv runs them when comparing against the main branch.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Remove debugging slow assert statement (1423312198)
id 1291501993 · node_id IC_kwDOAMm_X85M-sGp · user Illviljan (14371165) · author_association MEMBER
created_at 2022-10-26T04:56:39Z · updated_at 2022-10-26T04:57:37Z
html_url https://github.com/pydata/xarray/pull/7221#issuecomment-1291501993
issue_url https://api.github.com/repos/pydata/xarray/issues/7221

I like large datasets as well. I seem to remember getting caught in similar places when creating my own datasets. I think I solved it by using Variable instead; does something like this improve the performance for you?

```python
import xarray as xr

dataset = xr.Dataset()
dataset['a'] = xr.Variable(dims="time", data=[1])
dataset['b'] = xr.Variable(dims="time", data=[2])
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Remove debugging slow assert statement (1423312198)
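To check whether the Variable approach suggested above actually helps, one could time both insertion styles with `timeit`. This is only a sketch, assuming xarray is installed; the sizes and repeat counts are arbitrary, and any timings are machine-dependent:

```python
import timeit

# Hedged sketch: compare inserting variables built explicitly with
# xr.Variable against letting Dataset coerce a (dims, data) tuple.
try:
    import xarray as xr
except ImportError:  # xarray not installed; nothing to time
    xr = None

def insert_with_variable(n):
    # Insert n one-element variables using xr.Variable directly.
    ds = xr.Dataset()
    for i in range(n):
        ds[f"v{i}"] = xr.Variable(dims="time", data=[i])
    return ds

def insert_with_tuple(n):
    # Same insertion, but via the (dims, data) tuple shorthand.
    ds = xr.Dataset()
    for i in range(n):
        ds[f"v{i}"] = ("time", [i])
    return ds

if xr is not None:
    t_var = timeit.timeit(lambda: insert_with_variable(50), number=3)
    t_tup = timeit.timeit(lambda: insert_with_tuple(50), number=3)
    print(f"Variable: {t_var:.3f}s  tuple: {t_tup:.3f}s")
```

Both styles produce an identical Dataset; the question raised in the comment is purely about the constant-factor cost of repeated insertion.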
id 1291493769 · node_id IC_kwDOAMm_X85M-qGJ · user Illviljan (14371165) · author_association MEMBER
created_at 2022-10-26T04:44:43Z · updated_at 2022-10-26T04:44:43Z
html_url https://github.com/pydata/xarray/pull/7221#issuecomment-1291493769
issue_url https://api.github.com/repos/pydata/xarray/issues/7221

Error:

```
[ 75.90%] ··· dataset_creation.Creation.time_dataset_creation failed
[ 75.90%] ···· asv: benchmark timed out (timeout 60.0s)
```

Maybe 1000 loops is too much; start with 100? We still want these benchmarks to be decently fast in the CI.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Remove debugging slow assert statement (1423312198)
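The timeout above comes down to the benchmark's parameter size. As a rough illustration of how an asv-style benchmark class parameterizes that size, here is a minimal sketch; the class name, parameter names, and method body are assumptions for illustration, not the actual contents of xarray's `dataset_creation` benchmark file (asv discovers and times any `time_*` method):

```python
# Hedged sketch of an asv-style benchmark class; names are illustrative.
class Creation:
    # asv runs time_* methods once per parameter value.
    # 100 keeps CI runtime modest; 1000 hit the 60 s timeout above.
    params = [100]
    param_names = ["n_variables"]

    def setup(self, n_variables):
        # setup() runs outside the timed region, so key generation
        # does not count toward the benchmark time.
        self.keys = [f"var{i}" for i in range(n_variables)]

    def time_dataset_creation(self, n_variables):
        # Stand-in for the real body, which would insert variables
        # one by one into an xarray.Dataset.
        d = {}
        for key in self.keys:
            d[key] = [0]
```

Shrinking `params` this way keeps the per-benchmark wall time well under asv's default timeout while still exercising the insertion loop.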


Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
Powered by Datasette · About: xarray-datasette