issue_comments: 790986252


html_url: https://github.com/pydata/xarray/issues/4922#issuecomment-790986252
issue_url: https://api.github.com/repos/pydata/xarray/issues/4922
id: 790986252
node_id: MDEyOklzc3VlQ29tbWVudDc5MDk4NjI1Mg==
user: 8881170
created_at: 2021-03-04T22:21:37Z
updated_at: 2021-03-04T22:32:01Z
author_association: CONTRIBUTOR

@dcherian, to add to the complexity here, it's even weirder than originally reported. See my test cases below. This might alter how this bug is approached.

```python
import xarray as xr

def _rolling(ds):
    return ds.rolling(time=6, center=False, min_periods=1).mean()

# Length-3 array to test that min_periods kicks in, despite asking
# for 6 time-steps of smoothing.
ds = xr.DataArray([1, 2, 3], dims='time')
ds['time'] = xr.cftime_range(start='2021-01-01', freq='D', periods=3)
```

1. With bottleneck installed, `min_periods` is ignored as a kwarg with in-memory arrays.

```python
# (bottleneck installed)

# Just apply rolling to the base array.
ds.rolling(time=6, center=False, min_periods=1).mean()
# ValueError: Moving window (=6) must between 1 and 3, inclusive

# Group into single-day climatology groups and apply.
ds.groupby('time.dayofyear').map(_rolling)
# ValueError: Moving window (=6) must between 1 and 1, inclusive
```

2. With bottleneck uninstalled, `min_periods` works with in-memory arrays.

```python
# (bottleneck uninstalled)

# Just apply rolling to the base array.
ds.rolling(time=6, center=False, min_periods=1).mean()
# <xarray.DataArray (time: 3)>
# array([1. , 1.5, 2. ])
# Coordinates:
#   * time     (time) object 2021-01-01 00:00:00 ... 2021-01-03 00:00:00

# Group into single-day climatology groups and apply.
ds.groupby('time.dayofyear').map(_rolling)
# <xarray.DataArray (time: 3)>
# array([1., 2., 3.])
# Coordinates:
#   * time     (time) object 2021-01-01 00:00:00 ... 2021-01-03 00:00:00
```
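As a cross-check (not part of the original report): pandas applies the same semantics for a trailing window with `min_periods=1`, averaging whatever points are available, which matches the non-bottleneck xarray output above.

```python
import pandas as pd

# Same data and window as above: length-3 series, window of 6,
# min_periods=1 so partial windows are averaged instead of erroring.
s = pd.Series([1, 2, 3])
result = s.rolling(window=6, min_periods=1).mean()
print(result.tolist())  # [1.0, 1.5, 2.0]
```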

3. Regardless of bottleneck, dask-backed objects ignore `min_periods` when rolling is applied through a groupby object. This specifically looks like an issue with `.map()`.

```python
# (independent of bottleneck installation)

# Just apply rolling to the base array.
ds.chunk().rolling(time=6, center=False, min_periods=1).mean().compute()
# <xarray.DataArray (time: 3)>
# array([1. , 1.5, 2. ])
# Coordinates:
#   * time     (time) object 2021-01-01 00:00:00 ... 2021-01-03 00:00:00

# Group into single-day climatology groups and apply.
ds.chunk().groupby('time.dayofyear').map(_rolling)
# ValueError: For window size 6, every chunk should be larger than 3, but the
# smallest chunk size is 1. Rechunk your array with a larger chunk size or a
# chunk size that more evenly divides the shape of your array.
```
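For reference, the expected `min_periods` semantics can be sketched in plain numpy (a minimal illustration, not xarray's actual implementation): a trailing rolling mean that honors `min_periods` falls back to the points that are available, so even a length-1 group should produce a value rather than raise.

```python
import numpy as np

def rolling_mean(a, window, min_periods):
    # Trailing (center=False) rolling mean honoring min_periods:
    # a position gets the mean of the points available in its window
    # if there are at least min_periods of them, otherwise NaN.
    out = np.full(len(a), np.nan)
    for i in range(len(a)):
        win = a[max(0, i - window + 1): i + 1]
        if len(win) >= min_periods:
            out[i] = win.mean()
    return out

rolling_mean(np.array([1., 2., 3.]), window=6, min_periods=1)
# -> array([1. , 1.5, 2. ])

# A length-1 "group" still yields a value instead of raising:
rolling_mean(np.array([5.]), window=6, min_periods=1)
# -> array([5.])
```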
