issue_comments
5 rows where issue = 478886013 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 524327977 | https://github.com/pydata/xarray/pull/3196#issuecomment-524327977 | https://api.github.com/repos/pydata/xarray/issues/3196 | MDEyOklzc3VlQ29tbWVudDUyNDMyNzk3Nw== | max-sixty 5635139 | 2019-08-23T14:04:13Z | 2019-08-23T14:04:13Z | MEMBER | Great, +1 from me | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | One-off isort run 478886013 |
| 524268981 | https://github.com/pydata/xarray/pull/3196#issuecomment-524268981 | https://api.github.com/repos/pydata/xarray/issues/3196 | MDEyOklzc3VlQ29tbWVudDUyNDI2ODk4MQ== | crusaderky 6213168 | 2019-08-23T10:49:54Z | 2019-08-23T10:49:54Z | MEMBER | Updated, re-run everything, and removed the blanket F401 (unused import) suppression. Ready for final review and merge. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | One-off isort run 478886013 |
| 520021173 | https://github.com/pydata/xarray/pull/3196#issuecomment-520021173 | https://api.github.com/repos/pydata/xarray/issues/3196 | MDEyOklzc3VlQ29tbWVudDUyMDAyMTE3Mw== | crusaderky 6213168 | 2019-08-09T18:34:08Z | 2019-08-09T18:34:08Z | MEMBER | Fine for me - is a week enough to allow everybody to adapt? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | One-off isort run 478886013 |
| 519982373 | https://github.com/pydata/xarray/pull/3196#issuecomment-519982373 | https://api.github.com/repos/pydata/xarray/issues/3196 | MDEyOklzc3VlQ29tbWVudDUxOTk4MjM3Mw== | max-sixty 5635139 | 2019-08-09T16:29:46Z | 2019-08-09T16:29:46Z | MEMBER | I was thinking about doing this (and had done it a couple of times in the past). My only concern with doing it now is whether it's likely to exacerbate any merge conflicts from the black changes. I've found that merge conflicts scale super-linearly with changes, at least in frustration if not lines - if so potentially we could wait for the outstanding PRs to update and then rerun? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | One-off isort run 478886013 |
| 519862834 | https://github.com/pydata/xarray/pull/3196#issuecomment-519862834 | https://api.github.com/repos/pydata/xarray/issues/3196 | MDEyOklzc3VlQ29tbWVudDUxOTg2MjgzNA== | crusaderky 6213168 | 2019-08-09T10:06:50Z | 2019-08-09T10:06:50Z | MEMBER | Ready for review and merge | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | One-off isort run 478886013 |
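The reactions column in each row above stores a JSON object serialized as TEXT rather than as structured columns. As a minimal sketch, the serialized value shown in the table can be decoded with Python's standard `json` module (the variable names here are illustrative, not part of the export):

```python
import json

# The exact reactions value shown for each comment row above,
# stored in the database as a TEXT column.
reactions_text = (
    '{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, '
    '"confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }'
)

# Decode the TEXT column into a dict; keys like "+1" are valid JSON
# object keys even though they are not valid Python identifiers.
reactions = json.loads(reactions_text)
print(reactions["total_count"])  # 0
print(reactions["+1"])           # 0
```

Because the column is plain TEXT, any per-reaction filtering or aggregation has to decode the JSON first (or use SQLite's JSON functions, where available).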
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
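The page's query ("where issue = 478886013 sorted by updated_at descending") can be reproduced directly against this schema. A minimal sketch using Python's built-in `sqlite3` module, with stub `users`/`issues` tables and only two sample rows (ids and timestamps taken from the table above; the rest of the setup is illustrative):

```python
import sqlite3

# In-memory database with the issue_comments schema shown above;
# foreign-key target tables are stubbed minimally for the demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issues] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two sample comments from the table above (unspecified columns stay NULL).
conn.executemany(
    "INSERT INTO issue_comments (id, updated_at, user, issue) VALUES (?, ?, ?, ?)",
    [
        (519862834, "2019-08-09T10:06:50Z", 6213168, 478886013),
        (524327977, "2019-08-23T14:04:13Z", 5635139, 478886013),
    ],
)

# The page's query: comments for one issue, most recently updated first.
# ISO-8601 timestamps stored as TEXT sort correctly with a plain ORDER BY.
result = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (478886013,),
).fetchall()
print(result)  # newest comment id first
```

Storing timestamps as ISO-8601 TEXT is what makes the `ORDER BY updated_at DESC` work here: lexicographic order on these strings matches chronological order, so no date parsing is needed, and `idx_issue_comments_issue` serves the `WHERE issue = ?` filter.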