html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4704#issuecomment-747777810,https://api.github.com/repos/pydata/xarray/issues/4704,747777810,MDEyOklzc3VlQ29tbWVudDc0Nzc3NzgxMA==,1217238,2020-12-17T23:51:57Z,2020-12-17T23:51:57Z,MEMBER,"This does happen with some other backends, specifically netCDF and pydap when accessing remote datasets via HTTP/opendap. We have a `robust_getitem` helper function for this, which you'll see used in the netCDF4 and pydap backends: https://github.com/pydata/xarray/blob/20d51cc7a49f14ff5e16316dcf00d1ade6a1c940/xarray/backends/common.py#L41 I think exponential backoff with fuzzing is the right strategy for rare network failures, but I would suggest pushing this to as low a level as possible, e.g., ideally inside gcsfs. Retrying the whole dask computation seems quite wasteful.","{""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,770006670
https://github.com/pydata/xarray/issues/4704#issuecomment-747453674,https://api.github.com/repos/pydata/xarray/issues/4704,747453674,MDEyOklzc3VlQ29tbWVudDc0NzQ1MzY3NA==,6042212,2020-12-17T13:56:40Z,2020-12-17T13:56:40Z,CONTRIBUTOR,"As far as I can tell, this has only been happening in gcsfs - so my suggestion, to try to collect the set of conditions that should be considered ""retryable"" but currently aren't, still holds. However, it is also worthwhile discussing where else in the stack retries might be applied, which would affect multiple storage backends.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,770006670
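The first comment above recommends exponential backoff with fuzzing (random jitter) for transient network failures, as done by xarray's `robust_getitem` helper. The following is a minimal sketch of that pattern, not xarray's actual implementation; the function name `retry_with_backoff` and its parameters are hypothetical, chosen for illustration.

```python
import random
import time


def retry_with_backoff(fn, retryable=(ConnectionError,), max_retries=5,
                       initial_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying on `retryable` errors with exponential backoff.

    A random jitter ("fuzzing") is added to each delay so that many
    concurrent clients do not retry in lockstep. This is a hypothetical
    sketch of the strategy discussed above, not xarray's robust_getitem.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_retries:
                raise  # give up after the final attempt
            base = initial_delay * 2 ** attempt
            # sleep between base and 2*base seconds (exponential + jitter)
            sleep(base + random.uniform(0, base))


# Example: a flaky operation that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "data"

# sleep is stubbed out here so the example runs instantly.
result = retry_with_backoff(flaky, sleep=lambda s: None)  # succeeds on the third attempt
```

As the comment notes, placing this retry loop at the lowest possible layer (e.g., inside gcsfs, around each HTTP request) is far cheaper than retrying an entire dask computation, because only the single failed read is repeated.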