issue_comments
9 rows where issue = 1635470616 ("add timeouts for tests"), sorted by updated_at descending
Each entry below: author (association) · created_at · updated_at · reactions, followed by the comment permalink and body.
dcherian (MEMBER) · created 2023-03-24T16:42:37Z · updated 2023-03-24T16:42:48Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1483101852
IIRC all distributed tests are in
The default scheduler is
Maybe this is related to https://github.com/pydata/xarray/issues/7079. I restarted #7488.
keewis (MEMBER) · created 2023-03-24T16:35:17Z · updated 2023-03-24T16:37:45Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1483092471
Depends on the test, I guess. Most of them are related to one of the netcdf backends (not sure which; the tests don't specify that). I've also seen a drastic reduction in performance with HDF5 1.12.2 (both netcdf4 and h5netcdf) on one of my colleagues' datasets, so maybe that's much more visible on a mac? That doesn't explain the slow
Do we isolate the dask scheduler in any way? I assume that makes use of the builtin (non-
dcherian (MEMBER) · created 2023-03-24T16:21:23Z · updated 2023-03-24T16:21:23Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1483074436
I think this failure is some bad interaction between an I/O backend and dask threads, but I'm having trouble figuring out which backend.
keewis (MEMBER) · created 2023-03-24T16:04:28Z · updated 2023-03-24T16:04:28Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1483049816
Not sure I understand. Can you elaborate, please?
dcherian (MEMBER) · created 2023-03-24T15:37:34Z · updated 2023-03-24T15:37:34Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1483009187
Is it for a particular backend?
keewis (MEMBER) · created 2023-03-23T14:56:17Z · updated 2023-03-23T14:56:17Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1481344641
since the
keewis (MEMBER) · created 2023-03-23T10:00:54Z · updated 2023-03-23T10:05:37Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1480905116
I guess that means that the CPUs of the windows and macos runners are just slow, or there are other tasks that get prioritized, or something. All of this results in the tests being pretty flaky, so I'm not sure what we can do about it. I don't have access to one, but maybe someone with a mac could try to reproduce? In any case, I'll increase the timeout a bit again, since a timeout after ~1 hour is better than the CI job being cancelled after 6 hours.
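The trade-off above — fail a hung test after about an hour rather than let the whole CI job run into the 6-hour cancellation — amounts to wrapping each test in a hard deadline. A minimal stdlib sketch of that idea, assuming nothing beyond `concurrent.futures` (the PR itself presumably uses a pytest plugin or marker, not a helper like this):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_with_timeout(fn, seconds, *args):
    """Run fn(*args); raise TimeoutError if it takes longer than `seconds`.

    Python threads cannot be killed, so a timed-out worker is abandoned
    (shutdown(wait=False)) rather than stopped; timeout plugins face the
    same constraint and may abort the whole process instead.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args).result(timeout=seconds)
    except FutureTimeout:
        raise TimeoutError(f"call exceeded {seconds}s") from None
    finally:
        pool.shutdown(wait=False)

fast_result = run_with_timeout(lambda: "ok", 5.0)  # finishes well in time

hung = False
try:
    run_with_timeout(time.sleep, 0.1, 1.0)  # a "test" that hangs for 1s
except TimeoutError:
    hung = True

print(fast_result, hung)  # ok True
```

The key design point is the same one the thread wrestles with: a timeout converts an open-ended hang into a prompt, attributable failure, at the cost of occasionally killing a test that was merely slow.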
headtr1ck (COLLABORATOR) · created 2023-03-22T19:26:18Z · updated 2023-03-22T19:26:18Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1480136698
Some of these files are just 10 numbers, so the netcdf file should be ~50 kB or so (mostly header overhead). On no hardware should reading and writing this take more than a minute.
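The back-of-the-envelope above holds up: ten float64 values are 80 bytes of payload, so any netCDF file holding them is dominated by header/metadata overhead. A quick stdlib check of the same order-of-magnitude argument, using plain binary I/O as a proxy since netCDF4/xarray aren't assumed available here:

```python
import os
import struct
import tempfile
import time

values = list(range(10))  # the "10 numbers" from the comment
payload = struct.pack("<10d", *values)  # ten float64s = 80 bytes

start = time.perf_counter()
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name
with open(path, "rb") as f:
    back = struct.unpack("<10d", f.read())
elapsed = time.perf_counter() - start
os.unlink(path)

print(len(payload))  # 80 bytes of data; any netCDF header dwarfs this
print(elapsed < 60)  # a round-trip this small is nowhere near a minute
```

If a file this size takes over a minute to read and write, raw I/O throughput is not the bottleneck; something like lock contention or scheduler stalls, as discussed elsewhere in the thread, is a more plausible culprit.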
keewis (MEMBER) · created 2023-03-22T14:29:20Z · updated 2023-03-22T14:29:20Z · no reactions
https://github.com/pydata/xarray/pull/7657#issuecomment-1479671510
Apparently python 3.11 on windows is also pretty slow, which makes the timeouts appear there as well. But in any case, here are the affected tests:
Does anyone know why those take so long only on macos (and, partially, windows)? Does that have to do with the runner hardware?
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);