issue_comments
1 row where author_association = "NONE" and issue = 1617395129, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1463628223 | https://github.com/pydata/xarray/issues/7601#issuecomment-1463628223 | https://api.github.com/repos/pydata/xarray/issues/7601 | IC_kwDOAMm_X85XPTG_ | michaelaye 69774 | 2023-03-10T10:54:41Z | 2023-03-10T10:54:41Z | NONE | That, while I can open an xarray as a dask array using chunks: I cannot make "use" of these chunks to get statistics per chunk, right? It's just an efficiency question for the dask.compute() stage, but not an actual way to get statistics per chunk? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | groupby_bins groups not correctly applied with built-in methods 1617395129 |
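The comment body asks whether dask chunks can be used directly to get per-chunk statistics, or whether they only affect scheduling at compute() time. A minimal sketch of the distinction, assuming a hypothetical file `air.nc` with an `air` variable and a `time` dimension (none of which appear in the original comment):

```python
import numpy as np
import xarray as xr

# Open lazily with dask chunks; the filename, variable name, and chunk
# size here are hypothetical placeholders for illustration.
ds = xr.open_dataset("air.nc", chunks={"time": 100})

# Chunking alone only controls how dask schedules the eventual
# compute(); a plain reduction still returns a single global value:
global_mean = ds["air"].mean()  # one number, computed chunk-by-chunk

# To get one statistic *per chunk*, map a reduction over the blocks of
# the underlying dask array explicitly:
arr = ds["air"].data  # the backing dask array
per_chunk_mean = arr.map_blocks(
    lambda block: np.mean(block, keepdims=True),
    # each block reduces to a single element along every axis
    chunks=tuple(len(c) * (1,) for c in arr.chunks),
)
print(per_chunk_mean.compute())
```

This sketch supports the comment's reading: the chunks are an efficiency mechanism, and per-chunk statistics require an explicit block-wise operation such as `map_blocks` rather than a built-in reduction.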
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
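Given this schema, the row above can be reproduced with Python's standard sqlite3 module. A minimal sketch, assuming the database file is named `github.db` (a hypothetical name; the actual filename is not given on this page):

```python
import sqlite3

# "github.db" is a hypothetical filename for the database behind this page.
conn = sqlite3.connect("github.db")

# Reproduce the page's filter: author_association = "NONE" and
# issue = 1617395129, sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = ?
    ORDER BY updated_at DESC
    """,
    (1617395129,),
).fetchall()

for row in rows:
    print(row)
```

The parameterized `?` placeholder is used for the issue id; the index `idx_issue_comments_issue` defined above lets SQLite answer this filter without scanning the whole table.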