
issue_comments


5 rows where author_association = "MEMBER" and issue = 295621576 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions performed_via_github_app issue
364632449 https://github.com/pydata/xarray/issues/1897#issuecomment-364632449 https://api.github.com/repos/pydata/xarray/issues/1897 MDEyOklzc3VlQ29tbWVudDM2NDYzMjQ0OQ== jhamman 2443309 2018-02-10T07:22:08Z 2018-02-10T07:22:08Z MEMBER

@fujiisoup - thanks for jumping in here so quickly, I really appreciate it. I'll give your PR a review and try to weigh in on the design as soon as possible.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized indexing with cache=False 295621576
364298613 https://github.com/pydata/xarray/issues/1897#issuecomment-364298613 https://api.github.com/repos/pydata/xarray/issues/1897 MDEyOklzc3VlQ29tbWVudDM2NDI5ODYxMw== fujiisoup 6815844 2018-02-09T00:45:15Z 2018-02-09T01:11:34Z MEMBER

Or we could switch LazilyIndexedArray to store a chain of successive indexing operations, but that would potentially have non-desirable performance implications.

I think we can store the chain of successive indexing operations and apply them sequentially at evaluation time. But I am wondering whether this approach has an advantage over eager indexing. (The total computation cost would be the same?)

A workaround would be to handle outer/basic indexers and vectorized indexers separately, i.e., we can combine successive outer/basic indexers as we are doing now, and store the vectorized indexers and apply them at evaluation time. It would gain some benefit from lazy indexing (assuming vectorized indexing is not so frequent). (A toy sketch of this chained approach appears below.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized indexing with cache=False 295621576
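
To make the chained-indexing idea above concrete, here is a toy Python sketch. The class name and API are illustrative only, not xarray's actual LazilyIndexedArray: indexers are recorded as they arrive and only replayed when the data is finally evaluated.

import numpy as np

class ChainedLazyArray:
    """Toy stand-in (hypothetical, not xarray's API) for a lazy array
    that stores a chain of indexing operations and applies them
    sequentially at evaluation time."""

    def __init__(self, load, chain=()):
        self._load = load    # callable that loads the full ndarray
        self._chain = chain  # tuple of deferred indexing operations

    def __getitem__(self, key):
        # Defer the operation: extend the stored chain, load nothing.
        return ChainedLazyArray(self._load, self._chain + (key,))

    def evaluate(self):
        out = self._load()
        for key in self._chain:
            out = out[key]   # replay the chain, one indexer at a time
        return out

lazy = ChainedLazyArray(lambda: np.arange(100).reshape(10, 10))
sub = lazy[np.array([1, 3, 5])][:, np.array([0, 2])]
print(sub.evaluate().shape)  # (3, 2) -- data loaded only at evaluate()
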
364294048 https://github.com/pydata/xarray/issues/1897#issuecomment-364294048 https://api.github.com/repos/pydata/xarray/issues/1897 MDEyOklzc3VlQ29tbWVudDM2NDI5NDA0OA== shoyer 1217238 2018-02-09T00:20:16Z 2018-02-09T00:20:16Z MEMBER

In the current version of LazilyIndexedArray, we combine successive indexers into a single indexer, e.g., array[k1][k2] -> array[k3], without evaluating the array.

Someone would either need to think through how to do this for successive VectorizedIndexer objects (all other indexers can be converted into VectorizedIndexer objects if needed), or we could switch LazilyIndexedArray to store a chain of successive indexing operations, but that would potentially have undesirable performance implications. (A NumPy sketch of composing indexers appears below.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized indexing with cache=False 295621576
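
For the 1-D case, the composition described above is easy to demonstrate with plain NumPy; the difficulty is generalizing it to multi-dimensional VectorizedIndexer objects. A minimal sketch:

import numpy as np

arr = np.arange(10) * 10

# Basic case: two slices compose into a single slice,
# so arr[2:8][1:3] equals arr[3:5].
assert np.array_equal(arr[2:8][1:3], arr[3:5])

# 1-D vectorized case: integer-array indexers k1, k2 compose into
# k3 = k1[k2], so arr[k1][k2] equals arr[k3] without indexing twice.
k1 = np.array([7, 2, 5, 0])
k2 = np.array([3, 1])
k3 = k1[k2]
assert np.array_equal(arr[k1][k2], arr[k3])
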
364292377 https://github.com/pydata/xarray/issues/1897#issuecomment-364292377 https://api.github.com/repos/pydata/xarray/issues/1897 MDEyOklzc3VlQ29tbWVudDM2NDI5MjM3Nw== jhamman 2443309 2018-02-09T00:11:42Z 2018-02-09T00:11:42Z MEMBER

I'd be interested in discussing what it would take to "Support vectorized indexing with LazilyIndexedArray". If it is not possible, then yes, we should improve the error message.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized indexing with cache=False 295621576
364281173 https://github.com/pydata/xarray/issues/1897#issuecomment-364281173 https://api.github.com/repos/pydata/xarray/issues/1897 MDEyOklzc3VlQ29tbWVudDM2NDI4MTE3Mw== fujiisoup 6815844 2018-02-08T23:13:40Z 2018-02-08T23:13:40Z MEMBER

I do not yet understand this part well, but I guess cache=True implies that all the data is loaded into memory but is still indexed lazily? Is it reasonable to convert this directly to np.ndarray? Or if it is not, the solution would be one of:
  • Just improve the error message
  • Support vectorized indexing with LazilyIndexedArray (needs work)
(A usage sketch of the cache=False case appears below.)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Vectorized indexing with cache=False 295621576
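
For context, here is a hedged sketch of the usage pattern this thread is about (the file name is illustrative, and a NetCDF backend must be installed). With cache=True, data is loaded into memory and indexed as a NumPy array; with cache=False, the variable stays wrapped in xarray's lazy indexing classes, which is where vectorized indexing was failing.

import numpy as np
import xarray as xr

ds = xr.Dataset({"t": (("x", "y"), np.arange(12.0).reshape(3, 4))})
ds.to_netcdf("example.nc")  # illustrative file name

# cache=False keeps the on-disk variable wrapped lazily instead of
# caching it in memory as a NumPy array.
reopened = xr.open_dataset("example.nc", cache=False)

# Pointwise ("vectorized") indexing: select (x, y) pairs along a new
# "points" dimension. At the time of this thread, this raised an error
# with cache=False; the discussion above is about fixing that.
result = reopened["t"].isel(
    x=xr.DataArray([0, 2], dims="points"),
    y=xr.DataArray([1, 3], dims="points"),
)
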


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
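
The query shown on this page can be reproduced against this schema with Python's sqlite3 module (the database file name below is an assumption); note that idx_issue_comments_issue lets SQLite satisfy the issue filter from an index.

import sqlite3

conn = sqlite3.connect("github.db")  # assumed database file name
rows = conn.execute(
    """
    SELECT id, user, created_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = 295621576
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user, created_at, body in rows:
    print(comment_id, user, created_at, body[:60])
conn.close()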