issue_comments
1 row where author_association = "CONTRIBUTOR", issue = 1284475176 and user = 90008, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1280072309 | https://github.com/pydata/xarray/issues/6726#issuecomment-1280072309 | https://api.github.com/repos/pydata/xarray/issues/6726 | IC_kwDOAMm_X85MTFp1 | hmaarrfk 90008 | 2022-10-16T22:33:17Z | 2022-10-16T22:33:17Z | CONTRIBUTOR | In developing https://github.com/pydata/xarray/pull/7172, there are also some places where class types are used to check for features: https://github.com/pydata/xarray/blob/main/xarray/core/pycompat.py#L35 Dask and sparse are big contributors due to their need to resolve the class name in question. Ultimately, I think it is important to constrain the problem. Are we ok with 100 ms over numpy + pandas? 20 ms? On my machines, the 0.5 s that xarray is close to seems long... but every time I look at it, it seems to "just be a python problem". | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | Long import time 1284475176 |
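The pycompat check the comment points at boils down to lazily resolving an optional library's array class. The sketch below is illustrative, not xarray's actual implementation: the sys.modules guard is a common technique for skipping an expensive import entirely when the optional dependency was never loaded, which is exactly the kind of cost the comment attributes to dask and sparse. (CPython's -X importtime flag, e.g. python -X importtime -c "import xarray", prints a per-module timing tree and is the usual way to see where the 0.5 s goes.)

import sys

def is_duck_dask_array(x) -> bool:
    # If dask has never been imported in this process, x cannot be a
    # dask array, so we can answer without paying dask's import cost.
    if "dask" not in sys.modules:
        return False
    # Dask is already loaded, so resolving the class here is cheap.
    import dask.array
    return isinstance(x, dask.array.Array)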
-- One row per GitHub issue comment; timestamps are ISO 8601 strings,
-- and [reactions] stores the raw JSON blob returned by the GitHub API.
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
-- Indexes on both foreign keys keep per-issue and per-user lookups fast.
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
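The page above is the result of a simple filtered query over this schema. A minimal sketch of reproducing it with Python's sqlite3 module, assuming a local copy of the database named github.db (the file name is an assumption; any github-to-sqlite export of this data would do):

import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR'
      AND issue = 1284475176
      AND [user] = 90008
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()

The idx_issue_comments_issue and idx_issue_comments_user indexes defined above let SQLite satisfy the issue and user filters without scanning the whole table.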