issue_comments
1 row where user = 29147682 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
705068971 | https://github.com/pydata/xarray/issues/3332#issuecomment-705068971 | https://api.github.com/repos/pydata/xarray/issues/3332 | MDEyOklzc3VlQ29tbWVudDcwNTA2ODk3MQ== | jbphyswx 29147682 | 2020-10-07T17:00:35Z | 2020-10-07T17:00:35Z | NONE | Is there any way to get around this? The window dimension combined with the My workaround has been to implement my own slicing via a for loop and then call reduction operations on the resultant dask arrays as normal. Perhaps I missed something along the way, but I couldn't find anything in open or past issues to help resolve this. Thanks! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Memory usage of `da.rolling().construct` 496809167
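The commenter's workaround — slicing manually in a for loop and reducing, instead of materializing the windowed array that `rolling().construct()` produces — can be sketched in plain NumPy. This is a hypothetical illustration, not the commenter's actual code: `rolling_mean_via_loop` is an invented name, and a real dask-backed version would apply the same pattern to lazy array slices.

```python
import numpy as np

def rolling_mean_via_loop(a, window):
    """Rolling mean over axis 0 without materializing an (n, window) array.

    da.rolling(...).construct() builds a view with an extra window
    dimension of size `window`; reducing it can blow up memory.  Here we
    instead accumulate one shifted slice per window offset, so peak extra
    memory is a single output-sized buffer.
    """
    n = a.shape[0] - window + 1   # number of complete windows
    acc = np.zeros(n, dtype=float)
    for k in range(window):       # one shifted slice per window offset
        acc += a[k:k + n]
    return acc / window

result = rolling_mean_via_loop(np.arange(10.0), 3)
print(result)  # [1. 2. 3. 4. 5. 6. 7. 8.]
```

The same loop works on dask arrays because `a[k:k + n]` stays lazy; only the accumulated reduction is ever computed.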
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
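The schema above can be exercised directly with Python's standard-library `sqlite3` module. The sketch below recreates the table in an in-memory database, inserts the single row shown on this page (only a subset of columns, for brevity), and runs a query equivalent to "1 row where user = 29147682 sorted by updated_at descending".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
   [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Insert the row shown on this page (subset of columns).
conn.execute(
    "INSERT INTO issue_comments (id, [user], updated_at, issue) VALUES (?, ?, ?, ?)",
    (705068971, 29147682, "2020-10-07T17:00:35Z", 496809167),
)

# Equivalent of the page's filter: user = 29147682, newest first.
rows = conn.execute(
    "SELECT id, issue FROM issue_comments WHERE [user] = ? ORDER BY updated_at DESC",
    (29147682,),
).fetchall()
print(rows)  # [(705068971, 496809167)]
```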