id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
578736255,MDU6SXNzdWU1Nzg3MzYyNTU=,3855,`rolling.mean` gives negative values on non-negative array.,2272878,closed,0,,,6,2020-03-10T17:13:46Z,2023-11-14T08:25:58Z,2020-03-10T18:51:13Z,CONTRIBUTOR,,,,"When doing a rolling mean on an array with no negative values, the result somehow contains negative values anyway.  This shouldn't be possible, since the mean of non-negative values can never be negative.  Further, it only happens when using the `mean` method directly, not when using `reduce(np.mean)` or `construct().mean()`.

#### MCVE Code Sample

Take the following xarray `DataArray`:

```Python
import numpy as np
import scipy as sp
import scipy.signal  # needed so that sp.signal is available
import xarray as xr

soundlen = 10000
np.random.seed(1)
noise = np.random.randn(soundlen)
noise *= sp.signal.hann(soundlen)
noise2 = noise**2

xnoise = xr.DataArray(noise2, dims='temp',
                      coords={'temp': np.arange(soundlen)})
# the report does not show the rolling window used, so this size is illustrative
xroll = xnoise.rolling(temp=5)
print(xnoise.min())
```

The result is `0`, so the array contains no values less than `0`.

Using `reduce(np.mean)` gives no negative values either, since the mean of non-negative values can never be negative:

```Python
print(xroll.reduce(np.mean).min())
<xarray.DataArray ()>
array(2.90664355e-15)
```

Similarly, computing the mean through `construct` gives no negative values:

```Python
print(xroll.construct('new').mean('new').min())
<xarray.DataArray ()>
array(0.)
```

However, using the `mean` method directly does give negative values:

```Python
print(xroll.mean().min())
<xarray.DataArray ()>
array(-1.72090357e-15)
```

This mathematically shouldn't be possible.
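
A plausible explanation (not confirmed here) is that the direct `mean` method dispatches to a fast O(n) rolling implementation that maintains a running sum (bottleneck is installed in the environment below), and the rounding error accumulated in that running sum can leave tiny negative residues wherever the true window mean is very close to zero.  `reduce(np.mean)` and `construct().mean()` compute each window independently, which would explain why they stay non-negative.  The following sketch is purely illustrative, with deliberately extreme values, and is not xarray's actual code path:

```Python
import numpy as np

def running_mean(x, window):
    # O(n) rolling mean that updates a running sum, similar in spirit to the
    # algorithm used by fast rolling implementations such as bottleneck
    out = np.empty(len(x) - window + 1)
    asum = float(np.sum(x[:window]))
    out[0] = asum / window
    for i in range(window, len(x)):
        asum += x[i]
        asum -= x[i - window]
        out[i - window + 1] = asum / window
    return out

# deliberately extreme inputs: 0.6 is rounded away entirely when added to 2**53,
# so once 2**53 leaves the window the running sum goes negative even though
# every input value is non-negative
x = np.array([2.0**53, 0.6, 0.0, 0.0])
print(x.min() >= 0)        # True
print(running_mean(x, 2))  # last entry is -0.3
```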

#### Versions

```
xarray: 0.15.0
pandas: 0.25.3
numpy: 1.17.4
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.3.1
dask: 2.11.0
distributed: 2.11.0
matplotlib: 3.1.3
cartopy: None
seaborn: 0.10.0
numbagg: None
setuptools: 44.0.0
pip: 20.0.2
conda: None
pytest: 5.3.5
IPython: 7.12.0
sphinx: 2.4.3
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3855/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
753097418,MDExOlB1bGxSZXF1ZXN0NTI5MjQxMDA1,4622,Add additional str accessor methods for DataArray,2272878,closed,0,,,14,2020-11-30T02:48:36Z,2021-03-12T08:44:00Z,2021-03-11T17:49:32Z,CONTRIBUTOR,,0,pydata/xarray/pulls/4622,"This implements the following additional string accessor methods, based loosely on the versions in pandas:

#### One-to-one

- [x] casefold(self)
- [x] normalize(self, form)

#### One-to-many

- [x] extract(self, pat[, flags, expand])
- [x] extractall(self, pat[, flags])
- [x] findall(self, pat[, flags])
- [x] get_dummies(self[, sep])
- [x] partition(self[, sep, expand])
- [x] rpartition(self[, sep, expand])
- [x] rsplit(self[, pat, n, expand])
- [x] split(self[, pat, n, expand])

#### Many-to-one

- [x] cat(self[, others, sep, na_rep, join])
- [x] join(self, sep)

#### Operators

- [x] `+`
- [x] `*`
- [x] `%`

#### Other

- [x] Allow vectorized arguments.
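
For reference, a rough usage sketch of a few of these (the `dim`/`sep` keyword names here are my reading of the new signatures, not authoritative; see the docs/tests for the exact API):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array(['abc def', 'ghi jkl mno']), dims='x')

# one-to-one
print(da.str.casefold())

# one-to-many: the pieces are placed along a new dimension
parts = da.str.split(dim='parts')
print(parts)

# many-to-one: collapse a dimension back into single strings
print(parts.str.join(dim='parts', sep=' '))
```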

 - [x] Closes #3940
 - [x] Tests added
 - [x] Passes `isort . && black . && mypy . && flake8`
 - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
 - [x] New functions/methods are listed in `api.rst`
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4622/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
594790230,MDU6SXNzdWU1OTQ3OTAyMzA=,3940,ENH: Support more of the pandas str accessors,2272878,closed,0,,,1,2020-04-06T04:18:32Z,2021-03-11T17:49:32Z,2021-03-11T17:49:32Z,CONTRIBUTOR,,,,"Currently pandas supports a lot of str accessors that xarray doesn't.  Many of them have useful functionality.  I think it would be good if xarray had more of these str accessors.  I would be willing to begin working on this.

There seem to be three categories: methods with a one-to-one mapping between input and output (one input element becomes one output element), methods with a one-to-many mapping (one input element becomes multiple output elements), and methods with a many-to-one mapping (multiple input elements become one output element).  Exactly how the one-to-many mappings should be handled, if at all, is an open question.  Some of the many-to-one mappings (specifically `join`) would need a dimension argument.
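
To make the one-to-many question concrete, this is roughly how pandas handles it today; an xarray equivalent would presumably place the extra elements along a new dimension rather than new columns, and how that dimension is named and padded is exactly what is open here:

```python
import pandas as pd

s = pd.Series(['a_b_c', 'd_e'])

# one-to-one: each element maps to exactly one element
print(s.str.upper())

# one-to-many: each element expands into several; pandas uses extra columns
# and pads ragged results with NaN
print(s.str.split('_', expand=True))
```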


#### One-to-one

- [x] casefold(self)
- [x] normalize(self, form)

#### One-to-many

- [x] extract(self, pat[, flags, expand])
- [x] extractall(self, pat[, flags])
- [x] findall(self, pat[, flags])
- [x] get_dummies(self[, sep])
- [x] partition(self[, sep, expand])
- [x] rpartition(self[, sep, expand])
- [x] rsplit(self[, pat, n, expand])
- [x] split(self[, pat, n, expand])

#### Many-to-one

- [ ] cat(self[, others, sep, na_rep, join])
- [ ] join(self, sep)","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3940/reactions"", ""total_count"": 1, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 1, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
584837010,MDExOlB1bGxSZXF1ZXN0MzkxMzQ2OTQz,3871,Implement idxmax and idxmin functions,2272878,closed,0,,,20,2020-03-20T04:27:32Z,2020-03-31T15:43:48Z,2020-03-29T01:54:25Z,CONTRIBUTOR,,0,pydata/xarray/pulls/3871,"This implements `idxmax` and `idxmin` functions similar to their pandas equivalents.
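
A quick sketch of the intended behaviour (the data and coordinates are made up for illustration):

```python
import xarray as xr

da = xr.DataArray(
    [[3, 1, 9], [2, 8, 4]],
    dims=['y', 'x'],
    coords={'x': [10, 20, 30], 'y': [-1, 1]},
)

# argmax/argmin return integer positions; idxmax/idxmin return the coordinate
# labels of the extrema along the given dimension, like their pandas namesakes
print(da.idxmax(dim='x'))  # labels 30 and 20
print(da.idxmin(dim='x'))  # labels 20 and 10
```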

This is my first time contributing to the project so I am not certain the structure or approach is the best.  Please let me know if there is a better way to implement this.

This also includes two other changes.

First, it drops some backwards-compatibility code for numpy 1.12, which is no longer a supported version.  That code was hiding an error I needed access to in order to get the function working.

Second, it adds an option to `Dataset.map` to let you map `DataArray` methods by name.  I used this to implement the `Dataset` versions of `idxmax` and `idxmin`.

 - [X] Closes #60
 - [X] Tests added
 - [X] Passes `isort -rc . && black . && mypy . && flake8`
 - [X] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3871/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
588821932,MDU6SXNzdWU1ODg4MjE5MzI=,3899,_indexes of DataArray are not deep copied,2272878,closed,0,,,4,2020-03-27T01:19:07Z,2020-03-29T02:01:20Z,2020-03-29T02:01:20Z,CONTRIBUTOR,,,,"In `DataArray.copy`, the `_indexes` attribute is not deep copied.  After pull request #3840, this means that deleting a coordinate from a copy also deletes that coordinate from the original, even for deep copies.

#### MCVE Code Sample

```python
import numpy as np
import xarray as xr
import xarray.tests  # xarray's bundled test helpers; assert_identical there also runs the internal invariant checks

a0 = xr.DataArray(
    np.array([[1, 2, 3], [4, 5, 6]]),
    dims=[""y"", ""x""],
    coords={""x"": [""a"", ""b"", ""c""], ""y"": [-1, 1]},
)

a1 = a0.copy()
del a1.coords[""y""]

# comparing a0 against itself trivially passes the equality check, but the
# internal invariant checks then fail because deleting 'y' from the copy also
# removed it from the original's _indexes
xr.tests.assert_identical(a0, a0)
```

The result is:

```
xarray/testing.py:272: in _assert_internal_invariants
    _assert_dataarray_invariants(xarray_obj)
xarray/testing.py:222: in _assert_dataarray_invariants
    _assert_indexes_invariants_checks(da._indexes, da._coords, da.dims)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

indexes = {'x': Index(['a', 'b', 'c'], dtype='object', name='x')}, possible_coord_variables = {'x': <xarray.IndexVariable 'x' (x: 3)>
array(['a', 'b', 'c'], dtype='<U1'), 'y': <xarray.IndexVariable 'y' (y: 2)>
array([-1,  1])}
dims = ('y', 'x')

    def _assert_indexes_invariants_checks(indexes, possible_coord_variables, dims):
        assert isinstance(indexes, dict), indexes
        assert all(isinstance(v, pd.Index) for v in indexes.values()), {
            k: type(v) for k, v in indexes.items()
        }
    
        index_vars = {
            k for k, v in possible_coord_variables.items() if isinstance(v, IndexVariable)
        }
        assert indexes.keys() <= index_vars, (set(indexes), index_vars)
    
        # Note: when we support non-default indexes, these checks should be opt-in
        # only!
        defaults = default_indexes(possible_coord_variables, dims)
>       assert indexes.keys() == defaults.keys(), (set(indexes), set(defaults))
E       AssertionError: ({'x'}, {'y', 'x'})

xarray/testing.py:185: AssertionError
```

#### Expected Output

The test should pass.
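
Concretely, with `a0` from the MCVE above, a sketch of the behaviour I would expect:

```python
a1 = a0.copy(deep=True)  # DataArray.copy defaults to deep=True anyway
del a1.coords['y']

assert 'y' in a0.coords   # the original keeps its coordinate
assert 'y' in a0.indexes  # ...and its index; this is the check that currently fails
```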

#### Problem Description

Doing a deep copy should make a copy of everything.  Changing a deep copy should not alter the original in any way.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3899/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue