issue_comments
5 rows where author_association = "CONTRIBUTOR" and issue = 617476316 sorted by updated_at descending
Comments (newest first):
628797255 · AndrewILWilliams (56925856) · CONTRIBUTOR · created 2020-05-14T18:01:45Z · updated 2020-05-14T18:01:45Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628797255
I also thought that, after the dask error message, it's pretty easy to then look at the […] In general though, is that the type of layout you'd suggest for catching and re-raising errors? Using […]
Reactions: none · Issue: Automatic chunking of arrays ? (617476316)
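The "catch and re-raise" layout the comment asks about can be sketched as follows. This is a self-contained illustration, not xarray's actual implementation: `_dask_auto_chunk` and `chunk` are made-up names standing in for the real call that fails (dask's automatic chunk sizing raises `NotImplementedError` for object dtype), and the wrapper re-raises with a friendlier message using `raise ... from`.

```python
import numpy as np

def _dask_auto_chunk(shape, dtype):
    # Hypothetical stand-in for the low-level dask call that fails:
    # "auto" chunk sizing needs a fixed bytes-per-element, which
    # object dtype does not have.
    if np.dtype(dtype).kind == "O":
        raise NotImplementedError("object dtype has no fixed itemsize")
    return shape  # one chunk spanning the whole array, for simplicity

def chunk(shape, dtype, chunks="auto"):
    # The catch-and-re-raise layout: intercept the low-level error and
    # surface an actionable message, chaining the original via `from`.
    if chunks == "auto":
        try:
            return _dask_auto_chunk(shape, dtype)
        except NotImplementedError as err:
            raise NotImplementedError(
                "chunks='auto' is not supported for object-dtype data; "
                "pass explicit chunk sizes instead"
            ) from err
    return chunks
```

Chaining with `from err` keeps the original dask traceback attached, so the user sees both the friendly message and the underlying cause.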
628616379 · AndrewILWilliams (56925856) · CONTRIBUTOR · created 2020-05-14T12:57:21Z · updated 2020-05-14T17:50:31Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628616379
Nice, that's neater! Would this work, in the […]
Reactions: none · Issue: Automatic chunking of arrays ? (617476316)
628513777 · AndrewILWilliams (56925856) · CONTRIBUTOR · created 2020-05-14T09:26:24Z · updated 2020-05-14T09:26:24Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628513777
Also, the contributing docs have been super clear so far! Thanks! :)
Reactions: none · Issue: Automatic chunking of arrays ? (617476316)
628513443 · AndrewILWilliams (56925856) · CONTRIBUTOR · created 2020-05-14T09:25:48Z · updated 2020-05-14T09:25:48Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628513443
Cheers! Just had a look, is it as simple as just changing this line to the following, @dcherian? […] This seems to work fine in a lot of cases, except automatic chunking isn't implemented for […] One option is to automatically use […]
Reactions: none · Issue: Automatic chunking of arrays ? (617476316)
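For context on the limitation this comment runs into: in dask, automatic ("auto") chunk sizing works by dividing a byte budget by the dtype's per-element size, so it raises `NotImplementedError` for object dtype, whose elements are pointers to Python objects of arbitrary size. A minimal sketch of that sizing logic, with an illustrative function name that is not part of any library API:

```python
import numpy as np

def elements_per_chunk(dtype, limit_bytes=128 * 2**20):
    # Sketch of why "auto" works for numeric dtypes but not object:
    # sizing a chunk from a byte budget needs bytes-per-element, and
    # an object array only stores pointers, so the true size of the
    # underlying data cannot be estimated.
    dt = np.dtype(dtype)
    if dt.kind == "O":
        raise NotImplementedError(
            "cannot estimate the size in bytes of object data"
        )
    return limit_bytes // dt.itemsize

print(elements_per_chunk("float64"))  # 16777216 elements fit in a 128 MiB chunk
print(elements_per_chunk("int32"))    # 33554432
```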
628212516 · AndrewILWilliams (56925856) · CONTRIBUTOR · created 2020-05-13T19:56:34Z · updated 2020-05-13T19:56:34Z
https://github.com/pydata/xarray/issues/4055#issuecomment-628212516
Oh ok, I didn't know about this; I'll take a look and read the contribution docs tomorrow! It'll be my first PR, so I may need a bit of hand-holding when it comes to tests. Willing to try, though!
Reactions: none · Issue: Automatic chunking of arrays ? (617476316)
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);