id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
666523009,MDU6SXNzdWU2NjY1MjMwMDk=,4276,isel with 0d dask array index fails,3958036,closed,0,,,0,2020-07-27T19:14:22Z,2023-03-15T02:48:01Z,2023-03-15T02:48:01Z,CONTRIBUTOR,,,,"
**What happened**:
If a 0d dask array is passed as an argument to `isel()`, an error occurs because dask arrays do not have a `.item()` method. I came across this when trying to use the result of `da.argmax()` from a dask-backed array to select from the DataArray.
**What you expected to happen**:
`isel()` returns the value at the index contained in the 0d dask array.
**Minimal Complete Verifiable Example**:
```python
import dask.array as daskarray
import numpy as np
import xarray as xr
a = daskarray.from_array(np.linspace(0., 1.))
da = xr.DataArray(a, dims=""x"")
x_selector = da.argmax(dim=...)
da_max = da.isel(x_selector)
```
**Anything else we need to know?**:
I think the problem is here
https://github.com/pydata/xarray/blob/a198218ddabe557adbb04311b3234ec8d20419e7/xarray/core/variable.py#L546-L548
and `k.values.item()` or `int(k.data)` would fix my issue, but I don't know the reason for using `.item()` in the first place, so I'm not sure whether either of these would have undesirable side-effects.
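For illustration, a minimal standalone sketch of the failure and of why an integer conversion works (my code, not xarray's):
```python
import dask.array as daskarray
import numpy as np

k = daskarray.from_array(np.linspace(0., 1.)).argmax()  # 0d dask array
# k.item()  # fails: dask arrays have no .item() method at the time of this report
idx = int(k)  # forces a compute; works for both numpy and dask 0d arrays
```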
May be related to #2511, but from the code snippet above, I think this is a specific issue of 0d dask arrays rather than a generic dask-indexing issue like #2511.
I'd like to fix this because it breaks the nice new features of `argmin()` and `argmax()` if the `DataArray` is dask-backed.
**Environment**:
Output of xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50)
[GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-42-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.0.5
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.19.0
distributed: 2.21.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.2.0.post20200712
pip: 20.1.1
conda: 4.8.3
pytest: 5.4.3
IPython: 7.15.0
sphinx: None
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4276/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
698577111,MDU6SXNzdWU2OTg1NzcxMTE=,4417,Inconsistency in whether index is created with new dimension coordinate?,3958036,closed,0,,,6,2020-09-10T22:44:54Z,2022-09-13T07:54:32Z,2022-09-13T07:54:32Z,CONTRIBUTOR,,,,"It seems like `set_coords()` doesn't create an index variable. Is there a reason for this? I was surprised that the following code snippets produce different Datasets (first one has empty `indexes`, second one has `x` in `indexes`), even though both Datasets have a 'dimension coordinate' `x`:
(1)
```
import numpy as np
import xarray as xr
ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0,1))
ds['b'] = ('x', np.linspace(3,4))
ds = ds.rename(b='x')
ds = ds.set_coords('x')
print(ds)
print('indexes', ds.indexes)
```
(2)
```
import numpy as np
import xarray as xr
ds = xr.Dataset()
ds['a'] = ('x', np.linspace(0,1))
ds['x'] = ('x', np.linspace(3,4))
print(ds)
print('indexes', ds.indexes)
```
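For what it's worth, I'd have guessed that `set_index` could be used to force the index to be created in case (1), but I haven't tested it:
```
ds = ds.set_index(x='x')  # untested guess at a workaround
print('indexes', ds.indexes)
```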
**Environment**:
Output of xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50)
[GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-47-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.1.1
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.23.0
distributed: 2.25.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.6.0.post20200814
pip: 20.2.3
conda: 4.8.4
pytest: 5.4.3
IPython: 7.15.0
sphinx: 3.2.1
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4417/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
708337538,MDU6SXNzdWU3MDgzMzc1Mzg=,4456,workaround for file with variable and dimension having same name,3958036,closed,0,,,4,2020-09-24T17:10:04Z,2021-12-29T16:55:53Z,2021-12-29T16:55:53Z,CONTRIBUTOR,,,,"Adding a variable that's not a 1d ""dimension coordinate"" with the same name as a dimension is an error. This makes sense. However, if I have a `.nc` file that has such a variable, is there any workaround to get the badly-named variable into `xarray` short of altering the `.nc` file or loading it separately with `netCDF4`? I.e. to make the following work somehow
```
import numpy as np
import xarray as xr
import netCDF4
f = netCDF4.Dataset(""test.nc"", ""w"")
f.createDimension(""x"", 2)
f.createDimension(""y"", 3)
# create a 2d variable with the same name as one of its dimensions
f.createVariable(""y"", float, (""x"", ""y""))
f[""y""][...] = 1.0
f.close()
ds = xr.open_dataset(""test.nc"")
```
rather than getting the current error `MissingDimensionsError: 'y' has more than 1-dimension and the same name as one of its dimensions ('x', 'y'). xarray disallows such variables because they conflict with the coordinates used to label dimensions.`
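The workaround I'm currently using looks something like this (a hedged sketch, and it needs exactly the separate `netCDF4` read I'd like to avoid):
```
import netCDF4
import xarray as xr

# skip the conflicting variable, then read it separately and re-insert it
ds = xr.open_dataset('test.nc', drop_variables=['y'])
with netCDF4.Dataset('test.nc') as f:
    ds['y_not_dimension'] = (('x', 'y'), f['y'][...])
```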
I think it might be nice to have something like a `rename_vars` argument to `open_dataset()`. Similar to how `drop_variables` ignores a list of variables, `rename_vars` could take a dict mapping old names to new ones, so the example above could do
```
ds = xr.open_dataset(""test.nc"", rename_vars={""y"": ""y_not_dimension""})
```
and get a Dataset with a dimension `""y""` and a variable `""y_not_dimension""`.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4456/reactions"", ""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
842583817,MDU6SXNzdWU4NDI1ODM4MTc=,5084,plot_surface() wrapper,3958036,closed,0,,,2,2021-03-27T19:16:09Z,2021-05-03T13:05:02Z,2021-05-03T13:05:02Z,CONTRIBUTOR,,,,"Is there an xarray way to make a surface plot, like matplotlib's `plot_surface()`? I didn't see one on a quick skim, but I expect it should be fairly easy to add, following the style for `contour()`, `pcolormesh()`, etc. For the matplotlib version, see https://matplotlib.org/stable/gallery/mplot3d/surface3d.html.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5084/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
823059252,MDU6SXNzdWU4MjMwNTkyNTI=,5002,Dataset.plot.quiver() docs slightly misleading,3958036,closed,0,,,3,2021-03-05T12:54:19Z,2021-05-01T17:38:39Z,2021-05-01T17:38:39Z,CONTRIBUTOR,,,,"In the docs for `Dataset.plot.quiver()`
http://xarray.pydata.org/en/latest/generated/xarray.Dataset.plot.quiver.html
the `u` and `v` arguments are labelled as 'optional'. They are required for quiver plots though, so this is slightly confusing. I guess it is like this because the docs are created from a generic `_dsplot()` docstring, so I don't know whether it's fixable in a sensible way...","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5002/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
847006334,MDU6SXNzdWU4NDcwMDYzMzQ=,5097,2d plots may fail for some choices of `x` and `y`,3958036,closed,0,,,1,2021-03-31T17:26:34Z,2021-04-22T07:16:17Z,2021-04-22T07:16:17Z,CONTRIBUTOR,,,,"**What happened**:
When making a 2d plot with a 1d `x` argument and a 2d `y`, if the two dimensions have the same size and are in the wrong order, no plot is produced - the third plot in the MCVE is blank.
**What you expected to happen**:
All three plots in the MCVE should be identical.
**Minimal Complete Verifiable Example**:
```python
from matplotlib import pyplot as plt
import numpy as np
import xarray as xr
ds = xr.Dataset({""z"": ([""x"", ""y""], np.random.rand(4,4))})
x2d, y2d = np.meshgrid(ds[""x""], ds[""y""])
ds = ds.assign_coords(x2d=([""x"", ""y""], x2d.T), y2d=([""x"", ""y""], y2d.T))
fig, axes = plt.subplots(1,3)
h0 = ds[""z""].plot.pcolormesh(x=""y2d"", y=""x2d"", ax=axes[0])
h1 = ds[""z""].plot.pcolormesh(x=""y"", y=""x"", ax=axes[1])
h2 = ds[""z""].plot.pcolormesh(x=""y"", y=""x2d"", ax=axes[2])
plt.show()
```
result:
[screenshot: three pcolormesh panels; the third (x='y', y='x2d') is blank]
**Anything else we need to know?**:
The bug is present in both the 0.17.0 release and current `master`.
I came across this while starting to work on #5084. I think the problem is here
https://github.com/pydata/xarray/blob/ddc352faa6de91f266a1749773d08ae8d6f09683/xarray/plot/plot.py#L678-L684
as the check `xval.shape[0] == yval.shape[0]` doesn't work if the single dimension of `x` is actually the second dimension of `y` but happens to have the same size as the first dimension of `y`. I think it needs to check the actual dimensions of `x` and `y`.
Why don't we just do something like
```
xval = xval.broadcast_like(darray)
yval = yval.broadcast_like(darray)
```
if either coordinate is 2d before using `.values` to convert to numpy arrays?
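A minimal sketch of what I mean (hypothetical; `_normalize_xy` is a made-up helper name, and `xval`, `yval`, `darray` follow the snippet linked above):
```python
def _normalize_xy(xval, yval, darray):
    # broadcasting aligns dimensions by name, so a dimension that merely
    # has the same size can no longer be silently mismatched
    if xval.ndim == 2 or yval.ndim == 2:
        xval = xval.broadcast_like(darray)
        yval = yval.broadcast_like(darray)
    return xval.values, yval.values
```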
**Environment**:
Output of xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 19:08:05)
[GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-70-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.6
libnetcdf: 4.7.4
xarray: 0.17.0
pandas: 1.1.5
numpy: 1.19.4
scipy: 1.5.3
netCDF4: 1.5.5.1
pydap: None
h5netcdf: None
h5py: 3.1.0
Nio: None
zarr: None
cftime: 1.3.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2020.12.0
distributed: 2020.12.0
matplotlib: 3.3.3
cartopy: None
seaborn: None
numbagg: None
pint: 0.16.1
setuptools: 49.6.0.post20201009
pip: 20.3.3
conda: 4.9.2
pytest: 6.2.1
IPython: 7.19.0
sphinx: 3.4.0
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5097/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
698263021,MDU6SXNzdWU2OTgyNjMwMjE=,4415,Adding new DataArray to a Dataset removes attrs of existing coord,3958036,closed,0,,,3,2020-09-10T17:21:32Z,2020-09-10T17:38:08Z,2020-09-10T17:38:08Z,CONTRIBUTOR,,,,"
**Minimal Complete Verifiable Example**:
```
import numpy as np
import xarray as xr
ds = xr.Dataset()
ds[""a""] = xr.DataArray(np.linspace(0., 1.), dims=""x"")
ds[""x""] = xr.DataArray(np.linspace(0., 2., len(ds[""x""])), dims=""x"")
ds[""x""].attrs[""foo""] = ""bar""
print(ds[""x""])
ds[""b""] = xr.DataArray(np.linspace(0., 1.), dims=""x"")
print(ds[""x""])
```
**What happened**:
Attribute `""foo""` is present at the first print, but missing in the second, after adding `b` to `ds`.
full output
```
<xarray.DataArray 'x' (x: 50)>
array([0. , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
2. ])
Coordinates:
* x (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
foo: bar
<xarray.DataArray 'x' (x: 50)>
array([0. , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
2. ])
Coordinates:
* x (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
```
**What you expected to happen**:
Coordinate `x` should be unchanged.
full expected output
```
<xarray.DataArray 'x' (x: 50)>
array([0. , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
2. ])
Coordinates:
* x (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
foo: bar
<xarray.DataArray 'x' (x: 50)>
array([0. , 0.040816, 0.081633, 0.122449, 0.163265, 0.204082, 0.244898,
0.285714, 0.326531, 0.367347, 0.408163, 0.44898 , 0.489796, 0.530612,
0.571429, 0.612245, 0.653061, 0.693878, 0.734694, 0.77551 , 0.816327,
0.857143, 0.897959, 0.938776, 0.979592, 1.020408, 1.061224, 1.102041,
1.142857, 1.183673, 1.22449 , 1.265306, 1.306122, 1.346939, 1.387755,
1.428571, 1.469388, 1.510204, 1.55102 , 1.591837, 1.632653, 1.673469,
1.714286, 1.755102, 1.795918, 1.836735, 1.877551, 1.918367, 1.959184,
2. ])
Coordinates:
* x (x) float64 0.0 0.04082 0.08163 0.1224 ... 1.878 1.918 1.959 2.0
Attributes:
foo: bar
```
**Anything else we need to know?**:
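For now I work around it by stashing the attrs and restoring them after adding the new variable (a hedged sketch, re-using the names from the MCVE):
```
attrs = ds['x'].attrs.copy()
ds['b'] = xr.DataArray(np.linspace(0., 1.), dims='x')
ds['x'].attrs.update(attrs)
```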
**Environment**:
Output of xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jun 1 2020, 18:57:50)
[GCC 7.5.0]
python-bits: 64
OS: Linux
OS-release: 5.4.0-47-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.4
xarray: 0.16.0
pandas: 1.1.1
numpy: 1.18.5
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: None
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.2.1
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.23.0
distributed: 2.25.0
matplotlib: 3.2.2
cartopy: None
seaborn: None
numbagg: None
pint: 0.13
setuptools: 49.6.0.post20200814
pip: 20.2.3
conda: 4.8.4
pytest: 5.4.3
IPython: 7.15.0
sphinx: 3.2.1
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4415/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
668515620,MDU6SXNzdWU2Njg1MTU2MjA=,4289,title bar of docs displays incorrect version,3958036,closed,0,,,5,2020-07-30T09:00:43Z,2020-08-18T22:32:51Z,2020-08-18T22:32:51Z,CONTRIBUTOR,,,,"
**What happened**:
The browser title bar displays an incorrect version when viewing the docs online: see below, where the title bar says 0.15.1 but the actual version in the URL is 0.16.0.
[screenshot: browser tab title shows 0.15.1 while the URL points at the 0.16.0 docs]
`http://xarray.pydata.org/en/stable/` also displays 0.15.1 in the title bar, but I guess it is actually showing the 0.16.0 docs (?), which is confusing!","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/4289/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
595882590,MDU6SXNzdWU1OTU4ODI1OTA=,3948,Releasing memory?,3958036,closed,0,,,6,2020-04-07T13:49:07Z,2020-04-07T14:18:36Z,2020-04-07T14:18:36Z,CONTRIBUTOR,,,,"Once `xarray` (or `dask`) has loaded some array into memory, is there any way to force the memory to be released again? Or should this never be necessary?
For example, what would be the best workflow for this case: I have several large arrays on disk. Each will fit into memory individually. I want to do some analysis on each array (which produces small results), and keep the results in memory, but I do not need the large arrays any more after the analysis.
I'm wondering if some sort of `release()` method would be useful, so that I could say explicitly ""can drop this DataArray from memory, even though the user might have modified it"". My proposed workflow for the case above would then be something like:
```
da1 = ds[""variable1""]
result1 = do_some_work(da1) # may load large parts of da1 into memory
da1.release() # any changes to da1 not already saved to disk are lost, but do not want da1 any more
da2 = ds[""variable2""]
result2 = do_some_work(da2) # may load large parts of da2 into memory
da2.release() # any changes to da2 not already saved to disk are lost, but do not want da2 any more
... etc.
```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3948/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
583220835,MDU6SXNzdWU1ODMyMjA4MzU=,3866,Allow `isel` to ignore missing dimensions?,3958036,closed,0,,,0,2020-03-17T18:41:13Z,2020-04-03T19:47:08Z,2020-04-03T19:47:08Z,CONTRIBUTOR,,,,"Sometimes it would be nice for `isel()` to be able to ignore a dimension if it is missing in the Dataset/DataArray. E.g.
```
import xarray as xr
ds = xr.Dataset()
ds.isel(t=0) # currently raises an exception
ds.isel(t=0, ignore_missing=True) # would be nice if this was allowed, just returning ds
```
For example, this would be useful when writing a function that can be called on variables with different combinations of dimensions.
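The clumsy workaround I have in mind looks something like this (hedged sketch):
```
# filter the indexers down to the dimensions the Dataset actually has
indexers = {'t': 0}
ds.isel(**{dim: idx for dim, idx in indexers.items() if dim in ds.dims})
```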
I think it should be fairly easy to implement, just add the argument to the condition here
https://github.com/pydata/xarray/blob/65a5bff79479c4b56d6f733236fe544b7f4120a8/xarray/core/variable.py#L1059-L1062
the only downside would be increased complexity of adding another argument to the API for an issue where a workaround is not hard (at least in the case I have at the moment), just a bit clumsy.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3866/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
583080947,MDU6SXNzdWU1ODMwODA5NDc=,3865,`merge` drops attributes,3958036,closed,0,,,1,2020-03-17T15:06:18Z,2020-03-24T20:40:18Z,2020-03-24T20:40:18Z,CONTRIBUTOR,,,,"
`xarray.merge()` drops the `attrs` of the `Dataset`s being merged. They should be kept, at least when they are compatible.
#### MCVE Code Sample
```python
import xarray as xr
ds1 = xr.Dataset()
ds1.attrs['a'] = 42
ds2 = xr.Dataset()
ds2.attrs['a'] = 42
merged = xr.merge([ds1, ds2])
print(merged)
```
the result is
```
Dimensions: ()
Data variables:
*empty*
```
#### Expected Output
```
Dimensions: ()
Data variables:
*empty*
Attributes:
a: 42
```
#### Problem Description
The current behaviour means I have to check and copy `attrs` to the result of `merge` by hand, even if the `attrs` of the inputs were identical or not conflicting.
I'm happy to attempt a PR to fix this.
Proposal (following the pattern of the `compat` arguments):
* add a `combine_attrs` argument to `xarray.merge`
* `combine_attrs = 'drop'`: do not copy `attrs` (current behaviour)
* `combine_attrs = 'identical'`: if the `attrs` of all inputs are identical (using `dict_equiv`), copy them to the result, otherwise raise an exception
* `combine_attrs = 'no_conflicts'`: merge the `attrs` of all inputs, as long as any key shared by more than one input has the same value everywhere, otherwise raise an exception [I propose this as the default behaviour]
* `combine_attrs = 'override'`: copy the `attrs` from the first input to the result
This proposal should also allow `combine_by_coords`, etc. to preserve attributes. These should probably also take a `combine_attrs` argument, which would be passed through to `merge`.
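For illustration, the MCVE above would then give (hypothetical API, following the proposal):
```python
merged = xr.merge([ds1, ds2], combine_attrs='no_conflicts')
print(merged.attrs)  # {'a': 42}
```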
#### Versions
Current `master` of `pydata/xarray` on 17/3/2020
Output of `xr.show_versions()`
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
python-bits: 64
OS: Linux
OS-release: 5.3.0-40-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.2
libnetcdf: 4.6.3
xarray: 0.15.0
pandas: 1.0.2
numpy: 1.18.1
scipy: 1.3.0
netCDF4: 1.5.1.2
pydap: None
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.12.0
distributed: None
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 45.2.0
pip: 9.0.1
conda: None
pytest: 4.4.1
IPython: 7.8.0
sphinx: 1.8.3
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3865/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
532165408,MDU6SXNzdWU1MzIxNjU0MDg=,3590,cmap.set_under() does not work as expected,3958036,closed,0,,,5,2019-12-03T18:04:07Z,2020-02-24T20:20:07Z,2020-02-24T20:20:07Z,CONTRIBUTOR,,,,"When using matplotlib, the `set_under()` method can be used to set values below the range of a colormap to a certain color, for example
```
import matplotlib
from matplotlib import pyplot
import numpy
dat = numpy.linspace(0, 1)[numpy.newaxis, :]*numpy.linspace(0, 1)[:, numpy.newaxis]
cmap = matplotlib.cm.viridis
#cmap.set_under('w')
pyplot.contourf(dat, vmin=.3, cmap=cmap)
pyplot.colorbar()
pyplot.show()
```
produces
[figure: contourf output, values below vmin drawn in the colormap's default under-colour]
while uncommenting the `cmap.set_under()` call produces
[figure: contourf output with values below vmin shown in white]
However, using `xarray` to do the same thing,
```
import matplotlib
from matplotlib import pyplot
from xarray import DataArray
import numpy
da = DataArray(numpy.linspace(0, 1)[numpy.newaxis, :]*numpy.linspace(0, 1)[:, numpy.newaxis])
cmap = matplotlib.cm.viridis
cmap.set_under('w')
da.plot.contourf(vmin=.3, cmap=cmap)
pyplot.show()
```
produces
[figure: xarray contourf output; the white under-colour is missing]
where it seems the call to `cmap.set_under('w')` had no effect. Expected behaviour would be output like the second plot.
#### Output of ``xr.show_versions()``
```
In [2]: xarray.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
python-bits: 64
OS: Linux
OS-release: 5.0.0-37-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
libhdf5: 1.10.0
libnetcdf: 4.6.0
xarray: 0.14.1
pandas: 0.24.2
numpy: 1.16.3
scipy: 1.2.1
netCDF4: 1.3.1
pydap: None
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: None
cftime: None
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.1.0
distributed: None
matplotlib: 3.1.1
cartopy: None
seaborn: None
numbagg: None
setuptools: 41.0.1
pip: 19.3.1
conda: None
pytest: 4.4.1
IPython: 7.6.1
sphinx: None
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3590/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
507468507,MDU6SXNzdWU1MDc0Njg1MDc=,3401,_get_scheduler() exception if dask.multiprocessing missing,3958036,closed,0,,,0,2019-10-15T20:35:14Z,2019-10-21T00:17:48Z,2019-10-21T00:17:48Z,CONTRIBUTOR,,,,"These lines were recently changed in #3358
https://github.com/pydata/xarray/blob/3f9069ba376afa35c0ca83b09a6126dd24cb8127/xarray/backends/locks.py#L87-L92
If the 'cloudpickle' package is not installed, then `dask.multiprocessing` is not available. The `try/except` that used to be wrapped around `if actual_get is dask.multiprocessing.get` meant that `_get_scheduler()` worked in that case, returning `""threaded""` (I assume this was the expected behaviour). After #3358, `_get_scheduler()` raised an `AttributeError: module 'dask' has no attribute 'multiprocessing'` until I installed 'cloudpickle'.
Suggest either reverting the changes that removed the `try/except` or making 'cloudpickle' a dependency.
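For reference, a sketch of the guard that would restore the old behaviour (hedged, based on the linked lines rather than a tested patch):
```
try:
    # dask.multiprocessing only exists when cloudpickle is installed
    if actual_get is dask.multiprocessing.get:
        return 'multiprocessing'
except AttributeError:
    pass
return 'threaded'
```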
--------------------------
To reproduce:
1. check 'cloudpickle' is not installed, but 'dask' is
2. execute the following commands
```
>>> import xarray
>>> xarray.backends.api._get_scheduler()
```
Expected result: `""threaded""`
Actual result:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 xarray.backends.api._get_scheduler()
~/.local/lib/python3.6/site-packages/xarray/backends/locks.py in _get_scheduler(get, collection)
87 pass
88
---> 89 if actual_get is dask.multiprocessing.get:
90 return ""multiprocessing""
91 else:
AttributeError: module 'dask' has no attribute 'multiprocessing'
```","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/3401/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue