issue_comments
3 rows where author_association = "CONTRIBUTOR" and issue = 462049420 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
520210672 | https://github.com/pydata/xarray/pull/3054#issuecomment-520210672 | https://api.github.com/repos/pydata/xarray/issues/3054 | MDEyOklzc3VlQ29tbWVudDUyMDIxMDY3Mg== | coroa 2552981 | 2019-08-11T08:36:19Z | 2019-08-11T08:36:41Z | CONTRIBUTOR | @yohai : In short, no. It does not make sense to add a built-in function for iteration, if it is unable to augment the low-level functionality. I'd recommend closing this PR! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Flat iteration over DataArray 462049420
519932051 | https://github.com/pydata/xarray/pull/3054#issuecomment-519932051 | https://api.github.com/repos/pydata/xarray/issues/3054 | MDEyOklzc3VlQ29tbWVudDUxOTkzMjA1MQ== | yohai 6164157 | 2019-08-09T14:04:57Z | 2019-08-09T14:04:57Z | CONTRIBUTOR | @crusaderky @corora Thanks for your comments, glad to see that there's a more efficient way to do it. The question is do you think it's useful enough to justify adding it as a built in function. I end up using my solution quite often | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Flat iteration over DataArray 462049420
508407631 | https://github.com/pydata/xarray/pull/3054#issuecomment-508407631 | https://api.github.com/repos/pydata/xarray/issues/3054 | MDEyOklzc3VlQ29tbWVudDUwODQwNzYzMQ== | coroa 2552981 | 2019-07-04T09:15:14Z | 2019-07-04T09:15:14Z | CONTRIBUTOR | @yohai It's a lot more efficient to simply iterate over the underlying array, ie. If you are instead using streaming computation based on dask, then you would have to do something similar on per-chunk basis. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Flat iteration over DataArray 462049420
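The technique recommended in the last comment, iterating over the array that backs a DataArray rather than over the DataArray itself, can be sketched as below. The original snippet from the comment is not preserved in this export, so this is an illustration of the general idea, not the author's exact code; it uses a plain NumPy array standing in for `DataArray.values` (the NumPy array backing an `xarray.DataArray`).

```python
import numpy as np

# Hypothetical data standing in for da.values, the NumPy array
# backing an xarray.DataArray.
arr = np.arange(6).reshape(2, 3)

# np.ndarray.flat yields every element in C (row-major) order
# without constructing per-element DataArray objects, which is
# what makes iterating the underlying array so much cheaper.
flat_values = [int(x) for x in arr.flat]
print(flat_values)  # [0, 1, 2, 3, 4, 5]
```

For a dask-backed DataArray, as the comment notes, the same loop would have to be applied per chunk (e.g. inside a function mapped over blocks) rather than over the whole array at once.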
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
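The filter shown at the top of this page (author_association = "CONTRIBUTOR", issue = 462049420, ordered by updated_at descending) can be reproduced against this schema with Python's built-in sqlite3 module. The sketch below builds the table in an in-memory database and inserts one illustrative row taken from the table above; the REFERENCES clauses are dropped because the users and issues tables are not part of this export.

```python
import sqlite3

# Schema from the export above, minus the foreign-key clauses
# (the referenced users/issues tables are not included here).
SCHEMA = """
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# One row from the table above, trimmed to the columns the query touches.
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, author_association, issue) "
    "VALUES (?, ?, ?, ?, ?)",
    (520210672, 2552981, "2019-08-11T08:36:41Z", "CONTRIBUTOR", 462049420),
)

# The query driving this page.
rows = conn.execute(
    "SELECT id FROM issue_comments "
    "WHERE author_association = 'CONTRIBUTOR' AND issue = 462049420 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)  # [(520210672,)]
```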