issue_comments
5 rows where author_association = "MEMBER" and issue = 295621576 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
364632449 | https://github.com/pydata/xarray/issues/1897#issuecomment-364632449 | https://api.github.com/repos/pydata/xarray/issues/1897 | MDEyOklzc3VlQ29tbWVudDM2NDYzMjQ0OQ== | jhamman 2443309 | 2018-02-10T07:22:08Z | 2018-02-10T07:22:08Z | MEMBER | @fujiisoup - thanks for jumping in here so quickly, I really appreciate it. I'll give your PR a review and try to weigh in on the design as soon as possible. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Vectorized indexing with cache=False 295621576 |
364298613 | https://github.com/pydata/xarray/issues/1897#issuecomment-364298613 | https://api.github.com/repos/pydata/xarray/issues/1897 | MDEyOklzc3VlQ29tbWVudDM2NDI5ODYxMw== | fujiisoup 6815844 | 2018-02-09T00:45:15Z | 2018-02-09T01:11:34Z | MEMBER | I think we can store the chain of successive indexing operations and apply them sequentially at evaluation time. But I am wondering whether this has any advantage over eager indexing (the total computation cost would be the same?). A workaround would be to handle outer/basic indexers and vectorized indexers separately, i.e., we can combine successive outer/basic indexers as we do now, and store the vectorized indexers to apply at evaluation time. That would retain some benefit from lazy indexing (assuming vectorized indexing is not too frequent). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Vectorized indexing with cache=False 295621576 |
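The scheme fujiisoup sketches above — merge basic indexers eagerly, but queue vectorized (integer-array) indexers and apply them only at evaluation — can be illustrated in a few lines. This is a hypothetical 1-D sketch for illustration only; `LazyIndexedArray1D`, `index_basic`, `index_vectorized`, and `evaluate` are invented names, not xarray's actual API.

```python
class LazyIndexedArray1D:
    """Illustrative 1-D lazy array: folds basic (step-1) slices into one
    stored window, but defers vectorized indexers until evaluation."""

    def __init__(self, data):
        self.data = data
        self.start, self.stop = 0, len(data)  # merged basic window
        self.pending = []  # queued vectorized indexers, applied lazily

    def index_basic(self, sl):
        # Fold a new step-1 slice into the stored window.
        base = self.start
        self.start = base + (sl.start or 0)
        if sl.stop is not None:
            self.stop = min(self.stop, base + sl.stop)
        return self

    def index_vectorized(self, indices):
        # Vectorized indexers are merely recorded, not applied yet.
        self.pending.append(list(indices))
        return self

    def evaluate(self):
        out = self.data[self.start:self.stop]
        for idx in self.pending:
            out = [out[i] for i in idx]
        return out
```

For example, `LazyIndexedArray1D(list(range(20))).index_basic(slice(2, 12)).index_vectorized([0, 3, 5]).evaluate()` touches the underlying data only in the final call, which is the point of keeping the indexers lazy.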
364294048 | https://github.com/pydata/xarray/issues/1897#issuecomment-364294048 | https://api.github.com/repos/pydata/xarray/issues/1897 | MDEyOklzc3VlQ29tbWVudDM2NDI5NDA0OA== | shoyer 1217238 | 2018-02-09T00:20:16Z | 2018-02-09T00:20:16Z | MEMBER | In the current version of LazilyIndexedArray, we combined successive indexers into a single indexer, e.g., Someone would either need to think through how to do this for successive | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Vectorized indexing with cache=False 295621576 |
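The "combining successive indexers into a single indexer" that shoyer mentions can be illustrated for the simplest case, plain slices: two slices applied one after the other are equivalent to a single merged slice, so a lazy array only needs to store the merged key. A minimal sketch, assuming positive steps; `merge_slices` is a hypothetical helper, not xarray's internal code.

```python
def merge_slices(outer, inner, length):
    """Fold arr[outer][inner] into one equivalent slice on an array of the
    given length. Assumes both slices have positive (or None) steps."""
    start1, stop1, step1 = outer.indices(length)
    # Length of the intermediate result arr[outer].
    n = max(0, (stop1 - start1 + step1 - 1) // step1)
    start2, stop2, step2 = inner.indices(n)
    # Map the inner slice's coordinates back to the original array.
    return slice(start1 + start2 * step1,
                 start1 + stop2 * step1,
                 step1 * step2)
```

The hard part the comment alludes to is doing the analogous merge when one of the indexers is vectorized (an integer array), where no single combined basic indexer exists in general.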
364292377 | https://github.com/pydata/xarray/issues/1897#issuecomment-364292377 | https://api.github.com/repos/pydata/xarray/issues/1897 | MDEyOklzc3VlQ29tbWVudDM2NDI5MjM3Nw== | jhamman 2443309 | 2018-02-09T00:11:42Z | 2018-02-09T00:11:42Z | MEMBER | I'd be interested in discussing what it would take to "Support vectorized indexing with LazilyIndexedArray". If it is not possible, then yes, we should improve the error message. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Vectorized indexing with cache=False 295621576 |
364281173 | https://github.com/pydata/xarray/issues/1897#issuecomment-364281173 | https://api.github.com/repos/pydata/xarray/issues/1897 | MDEyOklzc3VlQ29tbWVudDM2NDI4MTE3Mw== | fujiisoup 6815844 | 2018-02-08T23:13:40Z | 2018-02-08T23:13:40Z | MEMBER | I do not yet fully understand this part of the code, but I guess | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Vectorized indexing with cache=False 295621576 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
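For reference, the filter described at the top of this page (author_association = "MEMBER", issue = 295621576, sorted by updated_at descending) can be reproduced as a plain SQL query against the schema above. A runnable sketch using an in-memory SQLite database; the sample rows are made up for illustration, and the REFERENCES clauses are dropped to keep the snippet self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified version of the issue_comments schema above (no foreign keys).
conn.executescript("""
CREATE TABLE issue_comments (
    html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
    user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
    body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
""")
# Illustrative sample rows; only the columns the query touches are filled in.
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        (364632449, 2443309, "2018-02-10T07:22:08Z", "MEMBER", 295621576),
        (364298613, 6815844, "2018-02-09T01:11:34Z", "MEMBER", 295621576),
        (364281173, 6815844, "2018-02-08T23:13:40Z", "NONE", 295621576),
    ],
)
# The query behind this page: MEMBER comments on one issue, newest first.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'MEMBER' AND issue = 295621576 "
    "ORDER BY updated_at DESC"
).fetchall()
```

ISO 8601 timestamps sort correctly as plain text, which is why the TEXT updated_at column can drive the ORDER BY directly.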