issue_comments
4 rows where issue = 288567090 sorted by updated_at descending
**357907365** · fujiisoup (6815844) · MEMBER · created 2018-01-16T09:48:56Z · updated 2018-01-16T09:49:21Z
https://github.com/pydata/xarray/issues/1831#issuecomment-357907365

Thanks for the information. I will look into the issue. I think the sliding-and-stack method itself would also be handy. I will start from this.

Reactions: none · Issue: Slow performance of rolling.reduce (288567090)
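The "sliding-and-stack" method mentioned here can be sketched in plain NumPy: stack `window` shifted views of the array along a new axis, then reduce along that axis. This is only an illustrative sketch, not the code that ended up in xarray; the helper name `rolling_reduce_stack` is hypothetical.

```python
import numpy as np

def rolling_reduce_stack(arr, window, func):
    """Sliding-and-stack rolling reduction (illustrative sketch).

    Stacks `window` shifted slices of a 1-D array along a new last
    axis, then reduces along it. The result has
    len(arr) - window + 1 valid positions.
    """
    n = arr.shape[0] - window + 1
    # Each slice is shifted by one element; np.stack copies them
    # into an (n_windows, window) array.
    stacked = np.stack([arr[i:i + n] for i in range(window)], axis=-1)
    return func(stacked, axis=-1)

x = np.random.rand(10)
print(rolling_reduce_stack(x, 3, np.mean))  # rolling mean over 8 windows
```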
**357828636** · shoyer (1217238) · MEMBER · created 2018-01-16T01:37:39Z · updated 2018-01-16T01:37:39Z
https://github.com/pydata/xarray/issues/1831#issuecomment-357828636

Yes, I think the stride tricks version would be a significant improvement. See this numpy PR for discussion/examples: https://github.com/numpy/numpy/pull/31

Reactions: none · Issue: Slow performance of rolling.reduce (288567090)
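For reference, the stride-tricks version builds a zero-copy windowed view of the array, in the spirit of the NumPy PR linked above. A minimal sketch (the helper name `rolling_window` is an assumption; modern NumPy ships essentially this as `numpy.lib.stride_tricks.sliding_window_view`):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(arr, window):
    """Zero-copy view of shape (n_windows, window) over a 1-D array.

    Both output strides reuse the input stride, so consecutive
    windows overlap in memory and no data is duplicated.
    """
    n = arr.shape[0] - window + 1
    return as_strided(arr, shape=(n, window),
                      strides=(arr.strides[0], arr.strides[0]))

x = np.arange(10.0)
print(rolling_window(x, 3).mean(axis=-1))  # rolling mean without copying
```

Writing through an `as_strided` view is unsafe because the windows share memory; `sliding_window_view` (NumPy ≥ 1.20, well after this 2018 discussion) returns a read-only view and is the safer spelling today.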
**357814170** · fujiisoup (6815844) · MEMBER · created 2018-01-15T23:47:06Z · updated 2018-01-15T23:47:06Z
https://github.com/pydata/xarray/issues/1831#issuecomment-357814170

I'm thinking of using fancy indexing to speed this up, e.g. for the following:

The advantages would be:
+ Indexing occurs only once.
+ The reducing operation can be easily vectorized.

The disadvantages would be:
+ It constructs a huge array with size (window_size - 1) * da.size, consuming a lot of memory. I think this disadvantage would be solved if we could use

Reactions: none · Issue: Slow performance of rolling.reduce (288567090)
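The fancy-indexing idea can be sketched as follows: gather every window with a single indexing operation, then reduce along the window axis. The helper name is hypothetical. As the comment notes, the gathered intermediate is a copy costing roughly `window` times the input's memory, in contrast to the zero-copy strided view above.

```python
import numpy as np

def rolling_reduce_fancy(arr, window, func):
    """Rolling reduction via fancy indexing (illustrative sketch).

    idx[i, j] = i + j, so arr[idx] materializes every window at once.
    Indexing happens only once and the reduction is fully vectorized,
    but arr[idx] is a copy of shape (n_windows, window).
    """
    n = arr.shape[0] - window + 1
    idx = np.arange(n)[:, None] + np.arange(window)[None, :]
    return func(arr[idx], axis=-1)

x = np.random.rand(10)
print(rolling_reduce_fancy(x, 3, np.max))  # rolling maximum
```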
**357739849** · jhamman (2443309) · MEMBER · created 2018-01-15T17:02:38Z · updated 2018-01-15T17:02:38Z
https://github.com/pydata/xarray/issues/1831#issuecomment-357739849

@fujiisoup - I think this is a great idea. As you've noted, the

Reactions: none · Issue: Slow performance of rolling.reduce (288567090)
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
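The view above (4 rows where issue = 288567090, sorted by updated_at descending) corresponds to a straightforward query against this schema. A sketch using Python's sqlite3 module; the database filename is an assumption:

```python
import sqlite3

# "github.db" is a placeholder; any github-to-sqlite export with this
# schema will do.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 288567090
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user, created, updated, assoc, body in rows:
    print(comment_id, user, updated, assoc)
```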