issue_comments
4 rows where author_association = "MEMBER" and issue = 182667672 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 254347566 | https://github.com/pydata/xarray/issues/1046#issuecomment-254347566 | https://api.github.com/repos/pydata/xarray/issues/1046 | MDEyOklzc3VlQ29tbWVudDI1NDM0NzU2Ng== | jhamman 2443309 | 2016-10-17T22:00:32Z | 2016-10-17T22:00:32Z | MEMBER | I'm fine with this approach for now. It would be great if we could convince bottleneck to help us out with a keyword argument of some kind. | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | center=True for xarray.DataArray.rolling() 182667672 |
| 253929482 | https://github.com/pydata/xarray/issues/1046#issuecomment-253929482 | https://api.github.com/repos/pydata/xarray/issues/1046 | MDEyOklzc3VlQ29tbWVudDI1MzkyOTQ4Mg== | shoyer 1217238 | 2016-10-14T21:56:42Z | 2016-10-14T21:56:51Z | MEMBER | @chunweiyuan I agree, this seems worth doing, and I think you have a pretty sensible approach here. For large arrays (especially with ndim > 1), this should add only minimal performance overhead. If you can fit this into the existing framework for | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | center=True for xarray.DataArray.rolling() 182667672 |
| 253408063 | https://github.com/pydata/xarray/issues/1046#issuecomment-253408063 | https://api.github.com/repos/pydata/xarray/issues/1046 | MDEyOklzc3VlQ29tbWVudDI1MzQwODA2Mw== | jhamman 2443309 | 2016-10-13T03:58:32Z | 2016-10-13T03:58:32Z | MEMBER | We do try to stay consistent with pandas except for the last position. Here's the unit test where we verify that behavior. Using ``` Python In [1]: import pandas as pd In [2]: data = pd.Series([0, 3, 6]) In [3]: data.rolling(3, center=True, min_periods=1).mean() Out[3]: 0 1.5 1 3.0 2 4.5 ``` If I remember correctly, and my brain is a bit like mush right now so I could be wrong, … So, as you can see, bottleneck does something totally different that wouldn't otherwise work with | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | center=True for xarray.DataArray.rolling() 182667672 |
| 253405068 | https://github.com/pydata/xarray/issues/1046#issuecomment-253405068 | https://api.github.com/repos/pydata/xarray/issues/1046 | MDEyOklzc3VlQ29tbWVudDI1MzQwNTA2OA== | shoyer 1217238 | 2016-10-13T03:37:55Z | 2016-10-13T03:37:55Z | MEMBER | I think we mostly tried to make this consistent with pandas. To be honest I don't entirely understand the logic myself. Cc @jhamman | {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} | | center=True for xarray.DataArray.rolling() 182667672 |
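The pandas half of the comparison is quoted in the 2016-10-13 comment above; the bottleneck half is not. Below is a minimal sketch of the mismatch, assuming numpy, pandas, and bottleneck are installed; the shift-based centering at the end illustrates the general idea being discussed, not xarray's actual implementation.
```python
import numpy as np
import pandas as pd
import bottleneck as bn

data = np.array([0.0, 3.0, 6.0])
window = 3

# pandas: center=True places the window symmetrically around each label.
centered = pd.Series(data).rolling(window, center=True, min_periods=1).mean()
print(centered.tolist())   # [1.5, 3.0, 4.5]

# bottleneck: move_mean only computes trailing (right-aligned) windows,
# so the same data, window, and min count give a different answer.
trailing = bn.move_mean(data, window, min_count=1)
print(trailing.tolist())   # [0.0, 1.5, 3.0]

# Shifting the trailing result left by (window - 1) // 2 recovers the
# centered values everywhere except the trailing edge, where bottleneck
# never sees a partial window.
offset = (window - 1) // 2
shifted = np.full_like(data, np.nan)
shifted[: data.size - offset] = trailing[offset:]
print(shifted.tolist())    # [1.5, 3.0, nan]
```
That trailing-edge gap is the part bottleneck cannot express on its own, which is presumably why the 2016-10-17 comment above wishes bottleneck offered a keyword argument for it.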
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
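As a rough sketch of how the filter in the page header maps onto this schema, the snippet below uses Python's built-in sqlite3 module; the github.db filename is hypothetical, and the actual page is served by Datasette rather than by hand-written code.
```python
import sqlite3

# Hypothetical local copy of the database that backs this page.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Same filter and ordering as the header: MEMBER comments on issue 182667672,
# newest update first. idx_issue_comments_issue serves the issue filter.
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = ? AND issue = ?
    ORDER BY updated_at DESC
    """,
    ("MEMBER", 182667672),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])
```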