
issue_comments


1 row where issue = 496809167 and user = 29147682 sorted by updated_at descending




id: 705068971
html_url: https://github.com/pydata/xarray/issues/3332#issuecomment-705068971
issue_url: https://api.github.com/repos/pydata/xarray/issues/3332
node_id: MDEyOklzc3VlQ29tbWVudDcwNTA2ODk3MQ==
user: jbphyswx (29147682)
created_at: 2020-10-07T17:00:35Z
updated_at: 2020-10-07T17:00:35Z
author_association: NONE

Is there any way to get around this? The window dimension, combined with the "for window size x, every chunk should be larger than x//2" requirement, means that for a large moving window I'm getting O(100GB) chunks that do not fit in memory at compute time. I can, of course, rechunk along other dimensions, but that is expensive and substantially slower. I also suspect this becomes practically infeasible on machines with little memory. Regardless, mandatory memory usage that scales with window size seems less than ideal.

My workaround has been to implement my own slicing via a for loop and then call reduction operations on the resulting dask arrays as normal... Perhaps there is something I missed along the way, but I couldn't find anything in open or past issues to aid in resolving this. Thanks!
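The loop-based workaround described above can be sketched as follows. This is a hypothetical illustration (plain NumPy for brevity; with dask arrays each shifted slice stays lazy, which is the point of the workaround): instead of materializing an (n, window) windowed view as `rolling().construct()` does, accumulate the reduction over window offsets, keeping peak memory at O(n) rather than O(n * window).

```python
import numpy as np

def rolling_mean_via_loop(a, window):
    """Rolling mean along the last axis computed by looping over
    window offsets (hypothetical helper name). Each shifted slice is
    a view -- or a lazy slice for dask arrays -- so the full windowed
    array is never materialized."""
    n = a.shape[-1]
    out_len = n - window + 1
    acc = np.zeros(a.shape[:-1] + (out_len,), dtype=float)
    for offset in range(window):
        # accumulate one shifted slice per window position
        acc += a[..., offset:offset + out_len]
    return acc / window

a = np.arange(10.0)
print(rolling_mean_via_loop(a, 3))  # rolling mean, window 3
```

The same pattern works for any reduction that can be built incrementally (sum, mean, max via `np.maximum`); reductions that need all window values at once would still require the windowed view.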

reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Memory usage of `da.rolling().construct` (496809167)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);