issue_comments
12 rows where author_association = "MEMBER", issue = 295838143, and user = 6815844, sorted by updated_at descending
Each comment below is shown as id | created | updated, followed by its URL and body. All 12 comments were posted by fujiisoup (6815844) as a MEMBER on issue "Vectorized lazy indexing" (295838143, https://api.github.com/repos/pydata/xarray/issues/1899), and none received any reactions.

374422762 | created 2018-03-19T23:40:52Z | updated 2018-03-19T23:40:52Z
https://github.com/pydata/xarray/pull/1899#issuecomment-374422762
Yes, LazilyIndexedArray was renamed to …

370970309 | created 2018-03-06T23:45:13Z | updated 2018-03-06T23:45:13Z
https://github.com/pydata/xarray/pull/1899#issuecomment-370970309
Thanks, @WeatherGod, for your feedback. This is finally merged!

370125916 | created 2018-03-03T07:11:24Z | updated 2018-03-03T07:11:24Z
https://github.com/pydata/xarray/pull/1899#issuecomment-370125916
All done :)

368385680 | created 2018-02-26T04:16:03Z | updated 2018-02-26T04:16:03Z
https://github.com/pydata/xarray/pull/1899#issuecomment-368385680
I think it's ready :)

366618866 | created 2018-02-19T08:30:01Z | updated 2018-02-19T08:30:01Z
https://github.com/pydata/xarray/pull/1899#issuecomment-366618866
It looks like some backends do not support negative-step slices. I'm going to wrap this up, maybe this weekend.

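One way a wrapper layer can work around backends that reject negative-step slices, sketched below with a hypothetical helper name (this is my illustration, not code from the PR): rewrite the slice as a positive-step slice for the backend, then reverse the result in memory.

```python
import numpy as np

def split_negative_step(key: slice, size: int):
    """Rewrite a negative-step slice as (backend_key, memory_key), where
    backend_key has a positive step and memory_key reverses in memory."""
    start, stop, step = key.indices(size)
    if step > 0:
        return key, slice(None)              # backend handles it directly
    n = (start - stop - 1) // (-step) + 1    # number of selected elements
    if n <= 0:
        return slice(0, 0), slice(None)      # empty selection
    first = start + (n - 1) * step           # smallest selected index
    return slice(first, start + 1, -step), slice(None, None, -1)

x = np.arange(10)
key = slice(8, 2, -2)                        # selects [8, 6, 4]
backend_key, memory_key = split_negative_step(key, x.size)
assert (x[backend_key][memory_key] == x[key]).all()
```
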
366377467 | created 2018-02-16T22:30:32Z | updated 2018-02-16T22:30:32Z
https://github.com/pydata/xarray/pull/1899#issuecomment-366377467
@WeatherGod, thanks for testing. Can you share more detail? With your example, what does …

366373577 | created 2018-02-16T22:12:44Z | updated 2018-02-16T22:16:13Z
https://github.com/pydata/xarray/pull/1899#issuecomment-366373577
Can you share how you tested this? The test I added says it is still in memory after vectorized indexing. Edit: is wind_inds a 1d array? If so, both should trigger OuterIndexing. But in both cases it should be indexed lazily...

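For context on the OuterIndexing remark, here is the distinction in plain NumPy terms (my illustration, not code from the PR): a single 1-d integer array along one axis is expressible as an outer (orthogonal) index, whereas paired coordinate arrays index pointwise.

```python
import numpy as np

a = np.arange(16).reshape(4, 4)
rows = np.array([0, 2, 3])

# One 1-d integer array along a single axis is an outer index:
# it picks whole rows, independent of the other axis.
outer = a[rows]                  # shape (3, 4)

# Two aligned coordinate arrays index pointwise (vectorized):
cols = np.array([1, 1, 2])
pointwise = a[rows, cols]        # elements (0,1), (2,1), (3,2); shape (3,)
```
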
364755370 | created 2018-02-11T14:25:40Z | updated 2018-02-11T19:49:04Z
https://github.com/pydata/xarray/pull/1899#issuecomment-364755370
Based on the suggestion, I implemented lazy vectorized indexing with index consolidation. Now every backend is virtually compatible with all the indexer types, i.e. basic, outer, and vectorized indexers. It sometimes consumes a large amount of memory if the indexer cannot be decomposed efficiently, but it is always better than loading the full slice. The drawback is that it is hard to predict how much data will be loaded.

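The decomposition idea can be sketched in a few lines of NumPy (the function name and details are mine, not xarray's internals): split a pointwise indexer into an outer indexer that loads only the unique coordinates along each axis, plus an in-memory pointwise indexer over that small block.

```python
import numpy as np

def decompose_pointwise(indexer):
    """Split a tuple of 1-d integer arrays (a pointwise indexer) into
    (outer, inner): `outer` loads just the unique coordinates per axis,
    `inner` re-indexes the loaded block in memory."""
    outer, inner = [], []
    for arr in indexer:
        unique, remapped = np.unique(arr, return_inverse=True)
        outer.append(unique)      # what the backend has to read
        inner.append(remapped)    # positions inside the loaded block
    return tuple(outer), tuple(inner)

a = np.arange(1_000_000).reshape(1000, 1000)
idx = (np.array([1, 2, 3]), np.array([1, 2, 3]))
out, inn = decompose_pointwise(idx)
loaded = a[np.ix_(*out)]          # a 3x3 read instead of 1000x1000
assert (loaded[inn] == a[idx]).all()
```
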
364625973 | created 2018-02-10T04:47:04Z | updated 2018-02-10T04:47:04Z
https://github.com/pydata/xarray/pull/1899#issuecomment-364625973
If the backend supports orthogonal indexing (not only basic indexing), we can do … But if we want a full diagonal, we need a full slice anyway... OK, agreed. We may need a flag that can be accessed from the array wrapper.

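The full-diagonal remark is the worst case for the outer-then-pointwise decomposition: the unique rows and columns cover every index, so the bounding outer load is the entire array. A small NumPy illustration (mine, not the PR's code):

```python
import numpy as np

a = np.arange(1_000_000).reshape(1000, 1000)
idx = np.arange(1000)

# For the full diagonal, the bounding rows and columns are all of them,
# so the outer load is as large as the array itself.
block = a[np.ix_(idx, idx)]    # 1000x1000 loaded for 1000 wanted values
diag = block[idx, idx]
assert (diag == a[idx, idx]).all()
```
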
364616100 | created 2018-02-10T01:47:54Z | updated 2018-02-10T01:47:54Z
https://github.com/pydata/xarray/pull/1899#issuecomment-364616100
I am inclined toward option 1, as there is some benefit even for backends without vectorized-indexing support, e.g. in case we want to get three diagonal elements (1, 1), (2, 2), (3, 3) from a 1000x1000 array. What we want is … A drawback is that it is difficult for users to predict how much memory is necessary.

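Concretely, for those three diagonal elements the decomposition needs a 9-element read instead of a 1,000,000-element full slice (illustration mine):

```python
import numpy as np

a = np.arange(1_000_000).reshape(1000, 1000)

# Fallback for a backend with no fancy indexing: load the full slice,
# then index in memory -> 1,000,000 elements read.
diag_naive = a[:, :][[1, 2, 3], [1, 2, 3]]

# With outer-indexing support: load the 3x3 bounding block, then take
# its diagonal in memory -> 9 elements read.
block = a[np.ix_([1, 2, 3], [1, 2, 3])]
diag_outer = block[[0, 1, 2], [0, 1, 2]]

assert (diag_naive == diag_outer).all()
```
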
364573328 | created 2018-02-09T21:28:26Z | updated 2018-02-09T21:28:26Z
https://github.com/pydata/xarray/pull/1899#issuecomment-364573328
Thanks, @shoyer. Do you think it is possible to consolidate … I am wondering what computation cost we want to avoid by lazy indexing:
1. The indexing itself is expensive, so we want to minimize the number of indexing operations?
2. The original data are too large to fit into memory, so we want to load the smallest possible subset of the original array by lazy indexing?
If reason 2 is the common case, I think it is not a good idea to consolidate all the lazy indexing as … And I am also wondering whether, as pointed out in #1725, what I am doing now is already implemented in dask.

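On the dask overlap: dask.array already keeps both outer and pointwise indexing lazy, only recording a task graph until compute() is called. A minimal illustration (mine, not from the PR):

```python
import numpy as np
import dask.array as da

x = da.from_array(np.arange(1_000_000).reshape(1000, 1000),
                  chunks=(100, 100))

idx = np.array([1, 2, 3])
sub = x[idx, :]                  # outer indexing: still lazy
diag = x.vindex[idx, idx]        # pointwise indexing: still lazy
print(diag.compute())            # reads data only here -> [1001 2002 3003]
```
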
364442081 | created 2018-02-09T14:04:16Z | updated 2018-02-09T14:04:16Z
https://github.com/pydata/xarray/pull/1899#issuecomment-364442081
I noticed that lazy vectorized indexing can sometimes be optimized by decomposing the vectorized indexers into successive outer and vectorized indexers, so that the size of the array loaded into memory is minimized.

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
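The filter described at the top of this page corresponds to a query like the following (a sketch: the database filename is hypothetical, and Datasette generates its own SQL):

```python
import sqlite3

# Hypothetical path to the SQLite file behind this Datasette page.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'MEMBER'
      AND issue = 295838143
      AND user = 6815844
    ORDER BY updated_at DESC
    """
).fetchall()
print(len(rows))  # expect 12, matching this page
```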