issue_comments

3 rows where author_association = "MEMBER", issue = 479942077 and user = 2190658 sorted by updated_at descending

id: 1534238962
html_url: https://github.com/pydata/xarray/issues/3213#issuecomment-1534238962
issue_url: https://api.github.com/repos/pydata/xarray/issues/3213
node_id: IC_kwDOAMm_X85bcqDy
user: hameerabbasi (2190658)
created_at: 2023-05-04T07:47:04Z
updated_at: 2023-05-04T07:47:04Z
author_association: MEMBER

Speaking a bit to things like cumprod, it's hard to support those natively with sparse data structures in many cases (at least as things stand in the current Numba framework).

While that doesn't apply in the case of cumprod, PyData/Sparse also has a policy that if the best algorithm available is a dense one, we simply raise an error, and the user should densify explicitly to avoid filling all available RAM or getting obscure MemoryErrors.

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: How should xarray use/support sparse arrays? (479942077)
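
The comment above describes PyData/Sparse's policy of raising an error rather than densifying implicitly, so the user opts in to the memory cost. A minimal sketch of that explicit-densification workflow, assuming a recent pydata/sparse release; the array values here are purely illustrative:

import numpy as np
import sparse

# Illustrative data: a mostly-zero 2-D array stored as sparse COO.
x = sparse.COO.from_numpy(np.array([[0, 2, 0], [1, 0, 3]]))

# cumprod has no efficient sparse algorithm, so densify explicitly
# (and knowingly) before calling the dense routine.
dense = x.todense()
result = np.cumprod(dense, axis=1)
print(result)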

id: 1014383681
html_url: https://github.com/pydata/xarray/issues/3213#issuecomment-1014383681
issue_url: https://api.github.com/repos/pydata/xarray/issues/3213
node_id: IC_kwDOAMm_X848dkRB
user: hameerabbasi (2190658)
created_at: 2022-01-17T10:48:48Z
updated_at: 2022-01-17T10:48:48Z
author_association: MEMBER

For ffill specifically, you would get a dense array out anyway, so there's no point in keeping it sparse, unless one did something like run-length encoding or similar.

As for the size issue, PyData/Sparse provides the nbytes attribute, which can be helpful in determining size.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: How should xarray use/support sparse arrays? (479942077)
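
The comment above points to nbytes as a way to gauge how much memory the sparse representation actually uses. A small sketch, assuming pydata/sparse is installed; the shape and density are arbitrary examples:

import sparse

# A 1000 x 1000 array with roughly 1% of entries non-zero.
x = sparse.random((1000, 1000), density=0.01)

print(x.nbytes)             # bytes used by the sparse (COO) representation
print(x.todense().nbytes)   # bytes the equivalent dense array would use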

id: 615772303
html_url: https://github.com/pydata/xarray/issues/3213#issuecomment-615772303
issue_url: https://api.github.com/repos/pydata/xarray/issues/3213
node_id: MDEyOklzc3VlQ29tbWVudDYxNTc3MjMwMw==
user: hameerabbasi (2190658)
created_at: 2020-04-18T08:41:39Z
updated_at: 2020-04-18T08:41:39Z
author_association: MEMBER

Hi. Yes, it’d be nice if we had a meta issue; I could then open separate issues for the sklearn implementations.

Performance is not ideal, and I realise that. However, I’m working on a more generic solution to performance as I type.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: How should xarray use/support sparse arrays? (479942077)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
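
For reference, a sketch of how the row filter described at the top of this page maps onto a plain SQL query against this schema, using Python's sqlite3 module; the database file name (github.db) is an assumption, and a local copy of the database is presumed to exist:

import sqlite3

# Assumed local copy of the Datasette database; the file name is a guess.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 479942077
      AND [user] = 2190658
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # expect 3 rows for this page's filter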