issue_comments
1 row where issue = 462424005 and user = 4903456 sorted by updated_at descending
id: 508295570
html_url: https://github.com/pydata/xarray/issues/3066#issuecomment-508295570
issue_url: https://api.github.com/repos/pydata/xarray/issues/3066
node_id: MDEyOklzc3VlQ29tbWVudDUwODI5NTU3MA==
user: mrezak (4903456)
created_at: 2019-07-04T00:23:45Z
updated_at: 2019-07-04T00:23:45Z
author_association: NONE
body: @shoyer thanks for looking into this. I also figured out later that I can just use np.nanmean (or np.nanmedian), but that function turns out to be much slower than np.mean (or np.median). Since NaNs occur only at the beginning and end of the sequence, is there an efficient way of using nanmean only for those segments and mean for the rest of the processing? My own thought is to check for NaN in the custom function and apply mean or nanmean depending on the result of that check, but I am not sure whether this can be done more efficiently.
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app: (empty)
issue: xarray rolling does not match pandas when using min_periods and reduce (462424005)
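The dispatch the commenter describes — pay for np.nanmean only where a window actually contains NaN — can be sketched as a custom reducer for rolling().reduce(). This is a minimal illustration, not code from the thread: the name selective_nanmean is hypothetical, and it assumes the window axis is the last axis of the stacked array that xarray hands to the reducer, with NaNs coming only from the edge padding that min_periods permits.

```python
import numpy as np
import xarray as xr

def selective_nanmean(values, axis=-1):
    # Hypothetical reducer: take the cheap np.mean everywhere first.
    # Windows containing NaN (the padded edge windows) come out NaN.
    out = np.mean(values, axis=axis)
    bad = np.isnan(out)
    if bad.any():
        # Recompute only the NaN-tainted windows with the slower
        # np.nanmean; `bad` indexes the leading (non-window) dims.
        out[bad] = np.nanmean(values[bad], axis=-1)
    return out

da = xr.DataArray(np.arange(10, dtype=float), dims="x")
# Edge windows are shorter than the full window and get NaN-padded,
# which is exactly where the nanmean fallback kicks in.
print(da.rolling(x=3, min_periods=1).reduce(selective_nanmean).values)
```

With NaNs confined to a handful of edge windows, the nanmean path touches only those rows, so the total cost stays close to a plain np.mean over the whole array.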
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
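For reference, the one-row view at the top of the page corresponds to a plain filtered query against this schema. Below is a sketch using Python's built-in sqlite3 module; the filename github.db is a placeholder, since Datasette serves such tables from an ordinary SQLite file.

```python
import sqlite3

# Placeholder filename; any SQLite file containing the schema above works.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter as the view: one issue, one user, newest update first.
# Both WHERE columns are covered by the indexes defined in the schema.
rows = conn.execute(
    """
    SELECT id, user, created_at, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (462424005, 4903456),
).fetchall()

for row in rows:
    print(row["id"], row["created_at"])
```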