issue_comments
5 rows where author_association = "MEMBER" and issue = 315381649 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 473945587 | https://github.com/pydata/xarray/issues/2066#issuecomment-473945587 | https://api.github.com/repos/pydata/xarray/issues/2066 | MDEyOklzc3VlQ29tbWVudDQ3Mzk0NTU4Nw== | dcherian 2448579 | 2019-03-18T14:58:14Z | 2019-03-18T14:58:14Z | MEMBER | Closing since | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | open_mfdataset can't handle many files 315381649 |
| 382639942 | https://github.com/pydata/xarray/issues/2066#issuecomment-382639942 | https://api.github.com/repos/pydata/xarray/issues/2066 | MDEyOklzc3VlQ29tbWVudDM4MjYzOTk0Mg== | jhamman 2443309 | 2018-04-19T07:35:42Z | 2018-04-19T07:35:42Z | MEMBER | I'm fine with the solution of a better error message, but it may end up being easier said than done. IIRC, there are some OS-specific behaviors here and I'm not sure if you'll always get an | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | open_mfdataset can't handle many files 315381649 |
| 382449451 | https://github.com/pydata/xarray/issues/2066#issuecomment-382449451 | https://api.github.com/repos/pydata/xarray/issues/2066 | MDEyOklzc3VlQ29tbWVudDM4MjQ0OTQ1MQ== | rabernat 1197350 | 2018-04-18T16:34:22Z | 2018-04-18T16:34:22Z | MEMBER | @pgierz, thanks for volunteering! We would love to see a PR from you. I agree that this should be a pretty simple fix. The error gets raised here, I believe: https://github.com/pydata/xarray/blob/master/xarray/backends/api.py#L565-L566 As is often the case, creating a test for this will probably be harder than resolving the issue itself! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | open_mfdataset can't handle many files 315381649 |
| 382383934 | https://github.com/pydata/xarray/issues/2066#issuecomment-382383934 | https://api.github.com/repos/pydata/xarray/issues/2066 | MDEyOklzc3VlQ29tbWVudDM4MjM4MzkzNA== | rabernat 1197350 | 2018-04-18T13:21:07Z | 2018-04-18T13:21:07Z | MEMBER | I don't think we should change the default, as it may have unintended consequences. I DO think we should catch this specific error and recommend to the user, in the error message, to try autoclose=True. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | open_mfdataset can't handle many files 315381649 |
| 382319672 | https://github.com/pydata/xarray/issues/2066#issuecomment-382319672 | https://api.github.com/repos/pydata/xarray/issues/2066 | MDEyOklzc3VlQ29tbWVudDM4MjMxOTY3Mg== | jhamman 2443309 | 2018-04-18T09:11:13Z | 2018-04-18T09:11:13Z | MEMBER | @pgierz - this use case motivated the development of the | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | open_mfdataset can't handle many files 315381649 |
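The fix the thread converges on (rabernat's comment 382383934) is not to change the default but to catch the OS "too many open files" error and point the user at `autoclose=True`. A minimal sketch of that idea in plain Python, assuming the failure surfaces as an `OSError` with `errno.EMFILE` (`open_with_hint` is a hypothetical name, not xarray's actual code — the real check lives in `xarray/backends/api.py` as rabernat notes):

```python
import errno


def open_with_hint(open_func, *args, **kwargs):
    """Call open_func; if it fails with EMFILE ('too many open files'),
    re-raise with a message suggesting autoclose=True."""
    try:
        return open_func(*args, **kwargs)
    except OSError as err:
        if err.errno == errno.EMFILE:
            raise OSError(
                err.errno,
                "too many open files; try passing autoclose=True "
                "to open_mfdataset",
            ) from err
        raise
```

As jhamman's follow-up (382639942) cautions, this is easier said than done: on some platforms the file-descriptor limit may not surface as an `OSError` with `EMFILE` at all, so a real patch would need OS-specific handling and tests.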
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
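The page's filter ("5 rows where author_association = 'MEMBER' and issue = 315381649 sorted by updated_at descending") corresponds to a straightforward query against this schema. A self-contained sketch using stdlib `sqlite3`, building an in-memory copy of the table and loading one row taken from the page above (against a real local copy of the database you would connect to its file path instead):

```python
import sqlite3

# In-memory stand-in for the Datasette database; columns match the schema above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issue_comments ("
    " html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,"
    " user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,"
    " body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER)"
)
# One row from the table above (dcherian's closing comment).
conn.execute(
    "INSERT INTO issue_comments (id, user, created_at, updated_at,"
    " author_association, issue) VALUES (?, ?, ?, ?, ?, ?)",
    (473945587, 2448579, "2019-03-18T14:58:14Z", "2019-03-18T14:58:14Z",
     "MEMBER", 315381649),
)
# The query behind this page.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = ?"
    " ORDER BY updated_at DESC",
    (315381649,),
).fetchall()
# rows -> [(473945587,)]
```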