issue_comments
1 row where issue = 252358450 and user = 5356122 sorted by updated_at descending
field | value
---|---
id | 324692881
html_url | https://github.com/pydata/xarray/pull/1517#issuecomment-324692881
issue_url | https://api.github.com/repos/pydata/xarray/issues/1517
node_id | MDEyOklzc3VlQ29tbWVudDMyNDY5Mjg4MQ==
user | clarkfitzg 5356122
created_at | 2017-08-24T16:50:45Z
updated_at | 2017-08-24T16:50:45Z
author_association | MEMBER
body | Wow, this is great stuff! What's When this makes it into the public facing API it would be nice to include some guidance on how the chunking scheme affects the run time. Imagine a plot with run time plotted as a function of chunk size or number of chunks. Of course it also depends on the data size and the number of cores available. To say it in a different way, More ambitiously I could imagine an API such as
reactions | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app |
issue | Automatic parallelization for dask arrays in apply_ufunc 252358450
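The comment above is about the dask parallelization added to xarray's `apply_ufunc` in pydata/xarray#1517. As a rough illustration of the public-facing API being discussed, here is a minimal sketch using `dask="parallelized"`; the array shape, chunk size, and choice of `np.square` are arbitrary assumptions for this example, and dask must be installed for it to run:

```python
import numpy as np
import xarray as xr

# Illustrative chunked input; the shape and chunk size are arbitrary choices
# for this sketch, not values taken from the discussion above.
da = xr.DataArray(np.random.rand(1000, 1000), dims=("x", "y")).chunk({"x": 100})

# apply_ufunc can wrap an ordinary NumPy function and run it chunk-by-chunk
# over the underlying dask array when dask="parallelized" is requested.
squared = xr.apply_ufunc(
    np.square,
    da,
    dask="parallelized",
    output_dtypes=[da.dtype],
)

# Nothing is computed until .compute() (or .load()) is called; how long that
# takes depends on the chunking scheme, the data size, and the cores available,
# which is exactly the guidance the comment suggests documenting.
result = squared.compute()
```

Varying the chunk size passed to `.chunk(...)` is the knob whose effect on run time the commenter suggests plotting.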
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
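Against this schema, the view above ("1 row where issue = 252358450 and user = 5356122 sorted by updated_at descending") corresponds to a query like the following sketch; the SQLite file name `github.db` is an assumption, not something stated on this page:

```python
import sqlite3

# "github.db" is an assumed file name for the SQLite database behind this table.
conn = sqlite3.connect("github.db")

# Equivalent of the filtered view above: rows where issue = 252358450 and
# user = 5356122, sorted by updated_at descending.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 252358450 AND [user] = 5356122
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)

conn.close()
```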