issue_comments

3 rows where author_association = "MEMBER", issue = 944996552 and user = 5635139, sorted by updated_at descending

id: 881641897
html_url: https://github.com/pydata/xarray/issues/5604#issuecomment-881641897
issue_url: https://api.github.com/repos/pydata/xarray/issues/5604
node_id: IC_kwDOAMm_X840jMmp
user: max-sixty (5635139)
created_at: 2021-07-16T18:36:45Z
updated_at: 2021-07-16T18:36:45Z
author_association: MEMBER
issue: Extremely Large Memory usage for a very small variable (944996552)
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
body:

The memory usage does seem high. Not having the indexes aligned makes this an expensive operation, and I would vote to have that fail by default (ref: https://github.com/pydata/xarray/discussions/5499#discussioncomment-929765).

Can the input files be aligned before attempting to combine the data? Or are you not in control of the input files?

To debug the memory usage, you probably need to do something like use memory_profiler and try varying numbers of files — unfortunately it's a complex problem, and just looking at htop gives very coarse information.
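A minimal sketch of the memory_profiler approach suggested in the comment above, assuming the inputs are local netCDF files; the data/*.nc glob and the file counts are illustrative, not from the issue:

from glob import glob

import xarray as xr
from memory_profiler import memory_usage

paths = sorted(glob("data/*.nc"))  # hypothetical input files

def combine(n):
    # Open and combine the first n files, then load into memory so the
    # full cost of the combine shows up in the measurement.
    xr.open_mfdataset(paths[:n], combine="by_coords").load()

# Measure peak memory as the number of files grows; a superlinear trend
# points at alignment/combination rather than the data itself.
for n in (1, 2, 4, 8):
    peak = max(memory_usage((combine, (n,)), interval=0.1))
    print(f"{n} files: peak memory {peak:.0f} MiB")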
id: 881111321
html_url: https://github.com/pydata/xarray/issues/5604#issuecomment-881111321
issue_url: https://api.github.com/repos/pydata/xarray/issues/5604
node_id: IC_kwDOAMm_X840hLEZ
user: max-sixty (5635139)
created_at: 2021-07-16T01:29:19Z
updated_at: 2021-07-16T01:29:19Z
author_association: MEMBER
issue: Extremely Large Memory usage for a very small variable (944996552)
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
body:

Again — where are you seeing this 1000GB or 1000x number?

(Also have a look at the GitHub docs on how to format code.)
id: 880500336
html_url: https://github.com/pydata/xarray/issues/5604#issuecomment-880500336
issue_url: https://api.github.com/repos/pydata/xarray/issues/5604
node_id: MDEyOklzc3VlQ29tbWVudDg4MDUwMDMzNg==
user: max-sixty (5635139)
created_at: 2021-07-15T08:24:12Z
updated_at: 2021-07-15T08:24:12Z
author_association: MEMBER
issue: Extremely Large Memory usage for a very small variable (944996552)
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
body:

This will likely need much more detail. Though to start: what's the source of the 1000x number? What happens if you pass compat="identical", coords="minimal" to open_mfdataset? If that fails, the opening operation may be doing some expensive alignment.
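A sketch of the call suggested in the comment above; the file glob is hypothetical, and join="exact" is an extra option (not from the comment) that makes misaligned indexes raise instead of silently triggering an expensive alignment:

import xarray as xr

ds = xr.open_mfdataset(
    "data/*.nc",          # hypothetical input files
    combine="by_coords",
    compat="identical",   # require non-concatenated variables to match exactly
    coords="minimal",     # only concatenate coords that contain the concat dim
    join="exact",         # assumption: fail loudly on misaligned indexes
)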

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
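Given the schema above, the filtered view at the top of this page can be reproduced against a local copy of the database; the github.db filename is an assumption:

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of the database
rows = conn.execute(
    """
    SELECT [id], [updated_at], [body]
    FROM [issue_comments]
    WHERE [author_association] = 'MEMBER'
      AND [issue] = 944996552
      AND [user] = 5635139
    ORDER BY [updated_at] DESC
    """
).fetchall()

for comment_id, updated_at, body in rows:
    print(comment_id, updated_at, body[:60])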