
issue_comments


2 rows where issue = 770006670 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
747777810 https://github.com/pydata/xarray/issues/4704#issuecomment-747777810 https://api.github.com/repos/pydata/xarray/issues/4704 MDEyOklzc3VlQ29tbWVudDc0Nzc3NzgxMA== shoyer 1217238 2020-12-17T23:51:57Z 2020-12-17T23:51:57Z MEMBER

This does happen with some other backends, specifically netCDF and pydap when accessing remote datasets via HTTP/OPeNDAP. We have a robust_getitem helper function for this that you'll see used in the netCDF4 and pydap backends: https://github.com/pydata/xarray/blob/20d51cc7a49f14ff5e16316dcf00d1ade6a1c940/xarray/backends/common.py#L41

I think exponential backoff with fuzzing (random jitter) is the right strategy for rare network failures, but I would suggest pushing this to as low a level as possible, e.g., ideally inside gcsfs. Retrying the whole dask computation seems quite wasteful.

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Retries for rare failures 770006670
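The strategy shoyer describes — exponential backoff with jitter around a single low-level access, rather than retrying a whole dask computation — can be sketched as a small helper. This is a hypothetical illustration in the spirit of xarray's robust_getitem, not its actual implementation; the names retry_with_backoff, catch, max_retries, and initial_delay are assumptions for this sketch.

```python
import random
import time


def retry_with_backoff(func, *, catch=(OSError,), max_retries=6, initial_delay=0.5):
    """Call func(), retrying transient failures with exponential backoff.

    Each retry waits roughly twice as long as the previous one, with random
    jitter ("fuzzing") added so that many concurrent workers do not all
    retry in lockstep against the same server.
    """
    for attempt in range(max_retries + 1):
        try:
            return func()
        except catch:
            if attempt == max_retries:
                raise  # out of retries; surface the original error
            # exponential backoff: initial_delay, 2x, 4x, ... plus up to 50% jitter
            delay = initial_delay * 2 ** attempt
            time.sleep(delay * (1 + random.random() * 0.5))
```

Because the retry wraps only the single read (e.g. one getitem against remote storage), a rare failure costs one small re-request instead of recomputing the entire dask graph.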
747453674 https://github.com/pydata/xarray/issues/4704#issuecomment-747453674 https://api.github.com/repos/pydata/xarray/issues/4704 MDEyOklzc3VlQ29tbWVudDc0NzQ1MzY3NA== martindurant 6042212 2020-12-17T13:56:40Z 2020-12-17T13:56:40Z CONTRIBUTOR

As far as I can tell, this has only been happening in gcsfs, so my suggestion still holds: collect the set of conditions that should be considered "retryable" but currently aren't. However, it is also worth discussing where else in the stack retries might be applied, which would affect multiple storage backends.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Retries for rare failures 770006670
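Martindurant's suggestion amounts to maintaining a classification of which failures are transient. A minimal sketch of what such a classification might look like is below; the status-code set and the is_retryable helper are hypothetical illustrations, not gcsfs's actual logic, which would be the natural home for the real list.

```python
# Hypothetical classification of "retryable" failures. Server-side and
# rate-limit HTTP statuses are typically safe to retry; client errors
# like 404 are not.
RETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}


def is_retryable(exc):
    """Return True if exc looks like a transient failure worth retrying."""
    if isinstance(exc, (ConnectionError, TimeoutError)):
        return True
    # Assume HTTP-level errors carry a status code attribute.
    status = getattr(exc, "code", None)
    return status in RETRYABLE_STATUS_CODES
```

Growing this set as new rare failures are reported is exactly the process of collecting the conditions that "should be considered retryable but currently aren't".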

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette