issue_comments
5 rows where issue = 811321550 sorted by updated_at descending
Issue: Bottleneck and dask objects ignore `min_periods` on `rolling` (5 comments)
997406431 | schild (NONE) | created 2021-12-19T14:59:33Z | updated 2021-12-19T15:18:45Z
https://github.com/pydata/xarray/issues/4922#issuecomment-997406431

Encountered the same problem with `bottleneck.move_rank()`; I have to check the length of the DataFrame in advance.
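The "check the length in advance" workaround above can be expressed as a small guard. `clamp_window` is a hypothetical helper (not part of bottleneck or xarray), sketched on the assumption that bottleneck's moving-window functions such as `move_rank` raise when the window exceeds the input length:

```python
def clamp_window(values, window):
    """Hypothetical workaround sketch: cap the rolling window at the input
    length (and at a minimum of 1) before handing it to a moving-window
    function that rejects window > len(values)."""
    return max(1, min(window, len(values)))
```

For example, `clamp_window([1, 2, 3], 6)` returns 3, so the subsequent moving-window call sees a window no larger than the data.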
791465015 | bradyrx (CONTRIBUTOR) | created 2021-03-05T14:47:46Z | updated 2021-03-05T14:47:46Z
https://github.com/pydata/xarray/issues/4922#issuecomment-791465015

This is normally the case, but with … Thanks for the pointer on #4977!
791011542 | dcherian (MEMBER) | created 2021-03-04T23:02:48Z | updated 2021-03-04T23:02:48Z
https://github.com/pydata/xarray/issues/4922#issuecomment-791011542

```python
# Just apply rolling to the base array.
ds.rolling(time=6, center=False, min_periods=1).mean()
```

I feel like this should not work, i.e. rolling window length (6) > size along axis (3). So the bottleneck error seems right. The chunk-size error in the last example should go away with #4977.
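The `min_periods` semantics under discussion can be sketched in plain NumPy. This is a reference implementation for a trailing window (`center=False`), not xarray's actual code path: positions with fewer than `min_periods` points become NaN rather than raising, even when the window is longer than the whole axis.

```python
import numpy as np

def rolling_mean(a, window, min_periods):
    # Trailing rolling mean: the window at position i covers
    # a[max(0, i - window + 1) : i + 1]. Positions with fewer than
    # min_periods available points are NaN instead of an error.
    out = np.full(len(a), np.nan)
    for i in range(len(a)):
        chunk = a[max(0, i - window + 1) : i + 1]
        if len(chunk) >= min_periods:
            out[i] = chunk.mean()
    return out

print(rolling_mean(np.array([1.0, 2.0, 3.0]), window=6, min_periods=1).tolist())
# [1.0, 1.5, 2.0]
```

Under these semantics a window of 6 on a length-3 axis is well defined as long as `min_periods <= 3`, which is why the bottleneck `ValueError` reads as a backend limitation rather than the intended behavior.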
790986252 | bradyrx (CONTRIBUTOR) | created 2021-03-04T22:21:37Z | updated 2021-03-04T22:32:01Z
https://github.com/pydata/xarray/issues/4922#issuecomment-790986252

@dcherian, to add to the complexity here, it's even weirder than originally reported. See my test cases below. This might alter how this bug is approached.

```python
import xarray as xr

def _rolling(ds):
    return ds.rolling(time=6, center=False, min_periods=1).mean()

# Length-3 array to test that min_periods kicks in, despite asking
# for 6 time steps of smoothing
ds = xr.DataArray([1, 2, 3], dims='time')
ds['time'] = xr.cftime_range(start='2021-01-01', freq='D', periods=3)
```

1. With …
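For reference, if `min_periods=1` is honored, the length-3 test case above should yield a value at every position rather than an error. The expected numbers can be computed by hand in plain Python (trailing window, `center=False`; this is not run through xarray itself):

```python
data = [1, 2, 3]
window, min_periods = 6, 1

expected = []
for i in range(len(data)):
    chunk = data[max(0, i - window + 1): i + 1]  # trailing window at position i
    expected.append(sum(chunk) / len(chunk) if len(chunk) >= min_periods else None)

print(expected)  # [1.0, 1.5, 2.0]
```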
781571084 | dcherian (MEMBER) | created 2021-02-18T19:08:44Z | updated 2021-02-18T19:08:44Z
https://github.com/pydata/xarray/issues/4922#issuecomment-781571084

Maybe the padding is breaking down for length-1 arrays? I would look at …
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
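The row selection described at the top of this page ("5 rows where issue = 811321550 sorted by updated_at descending") is an ordinary query against this schema. A minimal sqlite3 sketch, with a simplified copy of the table (foreign-key clauses omitted) and two toy rows invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE issue_comments ("
    " [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,"
    " [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,"
    " [author_association] TEXT, [body] TEXT, [reactions] TEXT,"
    " [performed_via_github_app] TEXT, [issue] INTEGER)"
)
# Two toy rows; only the columns used by the query matter here.
con.executemany(
    "INSERT INTO issue_comments (id, updated_at, issue) VALUES (?, ?, ?)",
    [(781571084, "2021-02-18T19:08:44Z", 811321550),
     (997406431, "2021-12-19T15:18:45Z", 811321550)],
)
rows = con.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (811321550,),
).fetchall()
print([r[0] for r in rows])  # [997406431, 781571084]
```

Sorting on `updated_at` works lexicographically here because the timestamps are stored as ISO-8601 TEXT, which sorts in chronological order.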