html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/3364#issuecomment-541455875,https://api.github.com/repos/pydata/xarray/issues/3364,541455875,MDEyOklzc3VlQ29tbWVudDU0MTQ1NTg3NQ==,1217238,2019-10-13T20:24:29Z,2019-10-13T20:24:29Z,MEMBER,"Okay, sounds good then!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,501150299
https://github.com/pydata/xarray/pull/3364#issuecomment-541419389,https://api.github.com/repos/pydata/xarray/issues/3364,541419389,MDEyOklzc3VlQ29tbWVudDU0MTQxOTM4OQ==,2448579,2019-10-13T13:39:39Z,2019-10-13T13:39:39Z,MEMBER,"This is just for variables that are being merged and not concatenated.
``` python
# determine which variables to merge, and then merge them according to compat
variables_to_merge = (coord_names | data_names) - concat_over - dim_names
```
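As an illustration of that set arithmetic (with made-up variable names, not taken from any real dataset), the merge candidates are whatever appears as a coordinate or data variable but is neither concatenated over nor a dimension:

``` python
# Hypothetical names; shows how the set expression picks merge candidates.
coord_names = {'lat', 'lon', 'time'}
data_names = {'temp', 'precip'}
concat_over = {'time', 'temp'}   # concatenated along the concat dimension
dim_names = {'time'}

variables_to_merge = (coord_names | data_names) - concat_over - dim_names
# 'lat', 'lon', 'precip' are merged; 'time' and 'temp' are concatenated.
```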
We are still raising an error if variables in `concat_over` are missing in some datasets.
``` python
for k in datasets[0].variables:
    if k in concat_over:
        try:
            vars = ensure_common_dims([ds.variables[k] for ds in datasets])
        except KeyError:
            raise ValueError(""%r is not present in all datasets."" % k)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,501150299
https://github.com/pydata/xarray/pull/3364#issuecomment-541386349,https://api.github.com/repos/pydata/xarray/issues/3364,541386349,MDEyOklzc3VlQ29tbWVudDU0MTM4NjM0OQ==,1217238,2019-10-13T04:54:47Z,2019-10-13T04:54:47Z,MEMBER,"I am very sympathetic to the idea of not requiring all matching variables, but do we want to handle it like this (not adding the dimension) or by adding in NaNs?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,501150299