issue_comments
6 rows where author_association = "MEMBER" and issue = 617476316, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
628816425 | https://github.com/pydata/xarray/issues/4055#issuecomment-628816425 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODgxNjQyNQ== | shoyer 1217238 | 2020-05-14T18:37:40Z | 2020-05-14T18:37:40Z | MEMBER | If we think we can improve an error message by adding additional context, the right solution is to use […]. On the other hand, if xarray doesn't have anything more to add on top of the original error message, it is best not to add any wrapper at all. Users will just see the original error from dask. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
628747933 | https://github.com/pydata/xarray/issues/4055#issuecomment-628747933 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODc0NzkzMw== | shoyer 1217238 | 2020-05-14T16:31:39Z | 2020-05-14T16:31:39Z | MEMBER | The error message from dask is already pretty descriptive: […] I don't think we have much to add on top of that? | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
628582552 | https://github.com/pydata/xarray/issues/4055#issuecomment-628582552 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODU4MjU1Mg== | dcherian 2448579 | 2020-05-14T11:51:21Z | 2020-05-14T11:51:21Z | MEMBER | […] Can we catch this error and re-raise specifying "automatic chunking fails for object arrays. These include cftime DataArrays" or something like that? | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
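The re-raise pattern suggested above can be sketched as follows. This is a minimal illustration, not xarray's actual internals: `auto_chunk` is a hypothetical stand-in for a dask `chunks="auto"` code path, and the wrapper uses Python's `raise ... from` to add context while preserving the original exception as `__cause__`.

```python
import numpy as np

def auto_chunk(array):
    """Hypothetical stand-in for a dask ``chunks="auto"`` code path."""
    if array.dtype == object:
        # dask cannot estimate a per-element byte size for object dtype
        raise NotImplementedError(
            "Can not use auto rechunking with object dtype."
        )
    return array  # real code would return a chunked dask array

def chunk_with_context(array):
    """Catch the low-level error and re-raise with xarray-level context."""
    try:
        return auto_chunk(array)
    except NotImplementedError as err:
        raise NotImplementedError(
            "Automatic chunking fails for object arrays; "
            "these include cftime DataArrays."
        ) from err
```

Because of exception chaining, a user who hits this still sees dask's original message in the traceback ("The above exception was the direct cause of ..."), addressing shoyer's concern about hiding the underlying error.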
628319690 | https://github.com/pydata/xarray/issues/4055#issuecomment-628319690 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODMxOTY5MA== | shoyer 1217238 | 2020-05-14T00:43:22Z | 2020-05-14T00:43:22Z | MEMBER | Agreed, this would be very welcome! | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
628231949 | https://github.com/pydata/xarray/issues/4055#issuecomment-628231949 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODIzMTk0OQ== | dcherian 2448579 | 2020-05-13T20:35:49Z | 2020-05-13T20:35:49Z | MEMBER | Awesome! Please see https://xarray.pydata.org/en/stable/contributing.html for docs on contributing | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
628065564 | https://github.com/pydata/xarray/issues/4055#issuecomment-628065564 | https://api.github.com/repos/pydata/xarray/issues/4055 | MDEyOklzc3VlQ29tbWVudDYyODA2NTU2NA== | dcherian 2448579 | 2020-05-13T15:26:10Z | 2020-05-13T15:26:49Z | MEMBER | […] so a PR would be very welcome if you have the time, @AndrewWilliams3142 | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Automatic chunking of arrays ? 617476316
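The root cause the thread is working around can be shown without dask at all. Automatic chunking needs a known bytes-per-element to translate a target chunk size in bytes into a chunk shape; for object dtype, `itemsize` only reflects the stored pointer, not the payload, so the true memory footprint of a chunk cannot be computed from the dtype. A minimal sketch, assuming a 64-bit build of NumPy:

```python
import numpy as np

# Fixed-width dtype: bytes per element is known from the dtype alone,
# so "auto" chunking can compute a chunk shape for a byte budget.
fixed = np.array([1.0, 2.0])          # float64 -> 8 bytes per element

# Object dtype: the array stores pointers to Python objects, so
# itemsize is the pointer width regardless of how big each object is.
ragged = np.array(["short", "a much longer string"], dtype=object)

print(fixed.dtype.itemsize)           # bytes per float64 element
print(ragged.dtype.itemsize)          # pointer width, not payload size
```

This is why the dask error in the comments above is a genuine limitation rather than a bug: cftime objects are plain Python objects, so xarray can only improve the message, not the behavior.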
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);