issue_comments

3 rows where issue = 202423683 and user = 6213168 sorted by updated_at descending

519833424 · crusaderky (MEMBER) · 2019-08-09T08:36:09Z · fast weighted sum (202423683)
https://github.com/pydata/xarray/issues/1224#issuecomment-519833424

Retiring this as it is way too specialized for the main xarray library.

388545372 · crusaderky (MEMBER) · 2018-05-12T10:22:02Z · fast weighted sum (202423683)
https://github.com/pydata/xarray/issues/1224#issuecomment-388545372

Both. One of the biggest problems is that the data of my interest is a mix of:

  • 1D arrays with dims=(scenario,) and shape=(500000,) (stressed financial instruments under a Monte Carlo stress set)
  • 0D arrays with dims=() (financial instruments that are impervious to the Monte Carlo stresses and never change value)

So before you do concat(), you need to call broadcast(), which effectively means that doing the sums on your bunch of very fast 0D instruments suddenly requires repeating them on 500k points.

Even keeping the two lots separate (which is what fastwsum does) still performed considerably slower.

However, this was over a year ago, well before xarray.dot() and dask.einsum(), so I'll need to tinker with it again.
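
A minimal sketch (not from the issue; the 500k scenario shape follows the description above) of why the 0D/1D mix is expensive: concat() requires aligned dimensions, so broadcast() must first repeat each scalar instrument across every scenario.

import numpy as np
import xarray as xr

n_scenarios = 500_000
stressed = xr.DataArray(np.random.rand(n_scenarios), dims=("scenario",))  # 1D instrument
constant = xr.DataArray(3.14)  # 0D instrument, impervious to the stresses

# concat() needs matching dims, so the scalar must be broadcast first:
stressed_b, constant_b = xr.broadcast(stressed, constant)
stacked = xr.concat([stressed_b, constant_b], dim="instrument")
assert stacked.shape == (2, n_scenarios)  # the scalar now fills 500k cells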

274380448 · crusaderky (MEMBER) · 2017-01-23T02:02:08Z · fast weighted sum (202423683)
https://github.com/pydata/xarray/issues/1224#issuecomment-274380448

(arrays * weights).sum('stacked') was my first attempt. It performed considerably worse than sum(a * w for a, w in zip(arrays, weights)), mostly because xarray.concat() is not terribly performant (I did not look deeper into it).

I did not try dask.array.sum(); that is worth playing with.
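
A hedged sketch (sizes and names are illustrative) contrasting the two approaches compared above: concatenating along a new "stacked" dimension and reducing over it, versus the plain generator sum that skips concat() entirely.

import numpy as np
import xarray as xr

arrays = [xr.DataArray(np.random.rand(1000), dims=("x",)) for _ in range(10)]
weights = np.random.rand(10)

# First attempt: concat into a new "stacked" dim, then a weighted reduction.
stacked = xr.concat(arrays, dim="stacked")
w = xr.DataArray(weights, dims=("stacked",))
via_concat = (stacked * w).sum("stacked")

# Reported faster above: a generator sum that never calls concat().
via_generator = sum(a * wi for a, wi in zip(arrays, weights))

assert np.allclose(via_concat, via_generator)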

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
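
For reference, a minimal sketch (the database filename github.db is hypothetical) of the query behind this page, run with Python's sqlite3 against the schema above.

import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local copy of this database
rows = conn.execute(
    """
    SELECT id, created_at, author_association, body
    FROM issue_comments
    WHERE issue = 202423683 AND [user] = 6213168
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created_at, association, body in rows:
    print(comment_id, created_at, association, body[:60])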