issue_comments
4 rows where author_association = "NONE" and user = 449558, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
615500990 | https://github.com/pydata/xarray/issues/3213#issuecomment-615500990 | https://api.github.com/repos/pydata/xarray/issues/3213 | MDEyOklzc3VlQ29tbWVudDYxNTUwMDk5MA== | amueller (449558) | 2020-04-17T23:07:57Z | 2020-04-17T23:07:57Z | NONE | @shoyer thanks! Mostly spitballing here, but it's interesting to know that 2) would be the bigger problem in your opinion, I had assumed 1) would be the main issue. That raises the question whether it's easier to wrap | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | How should xarray use/support sparse arrays? (479942077)
615497160 | https://github.com/pydata/xarray/issues/3213#issuecomment-615497160 | https://api.github.com/repos/pydata/xarray/issues/3213 | MDEyOklzc3VlQ29tbWVudDYxNTQ5NzE2MA== | amueller (449558) | 2020-04-17T22:51:09Z | 2020-04-17T22:51:09Z | NONE | Small comment from #3981: sklearn has just started running benchmarks, but it looks like pydata/sparse is not feature complete enough for us to use. We might be interested in having scipy.sparse support in xarray. There are two problems with scipy.sparse for us as far as I can see (this is very preliminary): it only has COO, which is not good for us, and ideally we'd want to avoid memory copies whenever we want to use xarray, and I think going from scipy.sparse to pydata/sparse will involve memory copies, even if pydata/sparse adds other formats. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | How should xarray use/support sparse arrays? (479942077)
615490533 | https://github.com/pydata/xarray/issues/3981#issuecomment-615490533 | https://api.github.com/repos/pydata/xarray/issues/3981 | MDEyOklzc3VlQ29tbWVudDYxNTQ5MDUzMw== | amueller (449558) | 2020-04-17T22:24:36Z | 2020-04-17T22:24:36Z | NONE | FYI the conversation on sklearn is far from resolved, and at this point I think the added pandas dependency is not what will keep us from using xarray. I think right now we're most concerned about sparse data representations (and I was considering asking you folks if you'd support scipy.sparse ;) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | [Proposal] Expose Variable without Pandas dependency (602256880)
508785516 | https://github.com/pydata/xarray/issues/3077#issuecomment-508785516 | https://api.github.com/repos/pydata/xarray/issues/3077 | MDEyOklzc3VlQ29tbWVudDUwODc4NTUxNg== | amueller (449558) | 2019-07-05T14:58:12Z | 2019-07-05T14:58:12Z | NONE | Thank you @shoyer, that's very useful input. It seems that xarray would fulfill our requirements and so at least is a reasonable candidate for us. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Question: Guaranteed zero-copy round-trip from numpy? (463841931)
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
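The filtered view above corresponds to a plain SQL query over this schema. A minimal sketch using Python's `sqlite3`, with a trimmed-down version of the table and only the ids, users, and timestamps from the rows above as stand-in data (bodies and URLs omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Reduced version of the issue_comments schema shown above
conn.execute("""
CREATE TABLE issue_comments (
    id INTEGER PRIMARY KEY,
    [user] INTEGER,
    updated_at TEXT,
    author_association TEXT
)
""")

# Stand-in rows: ids and timestamps taken from the table on this page
rows = [
    (615500990, 449558, "2020-04-17T23:07:57Z", "NONE"),
    (615497160, 449558, "2020-04-17T22:51:09Z", "NONE"),
    (615490533, 449558, "2020-04-17T22:24:36Z", "NONE"),
    (508785516, 449558, "2019-07-05T14:58:12Z", "NONE"),
]
conn.executemany(
    "INSERT INTO issue_comments (id, [user], updated_at, author_association) "
    "VALUES (?, ?, ?, ?)",
    rows,
)

# The query behind this page: comments by user 449558 with no repo
# affiliation, newest first by updated_at
query = """
SELECT id, updated_at
FROM issue_comments
WHERE author_association = 'NONE' AND [user] = 449558
ORDER BY updated_at DESC
"""
for comment_id, updated_at in conn.execute(query):
    print(comment_id, updated_at)
```

ISO 8601 timestamps sort correctly as plain strings, so `ORDER BY updated_at DESC` needs no date parsing.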