
issue_comments


4 rows where issue = 328572578 sorted by updated_at descending


Issue: Build timeouts on ReadTheDocs · 4 comments
Comment 423251446 · fmaussion · MEMBER · 2018-09-20T16:39:58Z
https://github.com/pydata/xarray/issues/2209#issuecomment-423251446

Closed via https://github.com/rtfd/readthedocs.org/issues/4432

Reactions: none
Comment 407678876 · pelson · NONE · 2018-07-25T08:37:53Z
https://github.com/pydata/xarray/issues/2209#issuecomment-407678876

> Pinging @pelson, who has some ideas in mind on how to address this problem.

The ideas relate to the fetching of the index, which will take orders of magnitude less time than the resolve and download stages in conda. They aren't entirely unrelated though, as a smaller index (the proposal) would result in fewer options for the conda solver to have to work through. No matter what we do, caching the binaries will have the same impact, though it is a challenge to cache sensibly without having a really large cache... You may find that caching an environment.yaml actually has more of an impact than caching the binaries themselves (i.e. this means you continue to download the binaries each time, but don't do a conda resolve each time).
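The environment.yaml caching idea above can be sketched as a small cache-key helper. This is hypothetical illustration code, not from any project mentioned in the thread; the function name and cache layout are assumptions. The point it demonstrates: solve once, store the pinned result keyed by the exact contents of environment.yml, and reuse that solved file until the inputs change, so binaries are still downloaded each build but the conda resolve is skipped.

```python
# Hypothetical sketch of caching a solved conda environment by content hash.
# Names and paths are assumptions for illustration, not an existing tool.
import hashlib
from pathlib import Path

def solve_cache_path(env_yaml_text: str, cache_dir: str = ".conda-solve-cache") -> Path:
    """Return the path where the pinned (already-solved) environment for
    this exact environment.yml content would be stored. Any change to the
    file contents yields a different key and so invalidates the cache."""
    key = hashlib.sha256(env_yaml_text.encode("utf-8")).hexdigest()[:16]
    return Path(cache_dir) / f"resolved-{key}.yml"

# Intended use from a CI script (shown as comments, not executed here):
#   path = solve_cache_path(Path("environment.yml").read_text())
#   if path exists:  conda env create -f <path>        # pinned, no solve
#   else:            conda env create -f environment.yml
#                    conda env export > <path>          # cache the pins
```

The design choice is the same trade-off pelson describes: downloads still happen every build, but the expensive solver run happens only when environment.yml actually changes.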

Reactions: none
Comment 407547882 · ocefpaf · CONTRIBUTOR · 2018-07-24T20:51:50Z
https://github.com/pydata/xarray/issues/2209#issuecomment-407547882

> Notice that it took 411 seconds to run conda env create!

If you are using conda-forge, bear in mind that our package index is huge and conda is not very smart about handling it. We are looking into possible solutions. Pinging @pelson, who has some ideas in mind on how to address this problem.

Reactions: none
Comment 393933718 · shoyer · MEMBER · 2018-06-01T16:24:43Z
https://github.com/pydata/xarray/issues/2209#issuecomment-393933718

We see the same issue with our builds on Travis-CI.

Here's our latest doc build on Travis: https://travis-ci.org/pydata/xarray/jobs/386509884

Notice that it took 411 seconds to run conda env create!

I'm not quite sure what the underlying issue is here (e.g., download vs. install time), but I'm sure it's somehow related to our large number of dependencies. If download time is the issue, then perhaps caching downloaded conda packages would help, e.g., https://github.com/rtfd/readthedocs.org/issues/3261
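On a Travis-style CI, caching downloaded conda packages usually comes down to persisting conda's package cache directory between builds. A minimal sketch, assuming Miniconda is installed under $HOME/miniconda (that path is an assumption, not taken from this thread):

```yaml
# Hypothetical .travis.yml fragment: keep conda's downloaded package
# tarballs between builds so `conda env create` can skip re-downloading.
cache:
  directories:
    - $HOME/miniconda/pkgs   # conda's default package cache location
```

This only removes download time; the solver still runs on every build, which is the separate cost discussed earlier in the thread.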

Reactions: none


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
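The page above corresponds to a query over this schema: filter by the issue's id and sort by updated_at descending. A self-contained sketch using Python's sqlite3, with the schema trimmed of its foreign-key references so it runs standalone, and seeded with two of the real comment ids from this page:

```python
# Recreate a minimal issue_comments table and run the query behind this
# page: rows where issue = 328572578, newest update first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    [
        (393933718, 1217238, "2018-06-01T16:24:43Z", 328572578),   # shoyer
        (423251446, 10050469, "2018-09-20T16:39:58Z", 328572578),  # fmaussion
    ],
)
# ISO-8601 timestamps sort correctly as plain strings, so ORDER BY works.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments "
    "WHERE issue = ? ORDER BY updated_at DESC",
    (328572578,),
).fetchall()
```

The index on [issue] is what makes the WHERE clause cheap; the ORDER BY relies on ISO-8601 timestamps comparing correctly as text.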
Powered by Datasette · About: xarray-datasette