html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2921#issuecomment-489752548,https://api.github.com/repos/pydata/xarray/issues/2921,489752548,MDEyOklzc3VlQ29tbWVudDQ4OTc1MjU0OA==,15570875,2019-05-06T19:52:47Z,2019-05-06T19:52:47Z,NONE,"It looks like `ds.time.encoding` is not getting set when `open_mfdataset` is opening multiple files. I suspect that this is leading to the surprising unit for `time` when the `ds` is written out. The code below demonstrates that `ds.time.encoding` is set by `open_mfdataset` in the single-file case and is not set in the multi-file case. However, `ds.time_bounds.encoding` is set in both the single- and multi-file cases.
The possibility of this is alluded to in a [comment](https://github.com/pydata/xarray/issues/2436#issuecomment-449737841) in #2436, which relates the issue to #1614.
```
import numpy as np
import xarray as xr
# create time and time_bounds DataArrays for Jan-1850 and Feb-1850
time_bounds_vals = np.array([[0.0, 31.0], [31.0, 59.0]])
time_vals = time_bounds_vals.mean(axis=1)
time_var = xr.DataArray(time_vals, dims='time',
                        coords={'time':time_vals})
time_bounds_var = xr.DataArray(time_bounds_vals, dims=('time', 'd2'),
                               coords={'time':time_vals})
# create Dataset of time and time_bounds
ds = xr.Dataset(coords={'time':time_var}, data_vars={'time_bounds':time_bounds_var})
ds.time.attrs = {'bounds':'time_bounds', 'calendar':'noleap',
                 'units':'days since 1850-01-01'}
# write Jan-1850 values to file
ds.isel(time=slice(0,1)).to_netcdf('Jan-1850.nc', unlimited_dims='time')
# write Feb-1850 values to file
ds.isel(time=slice(1,2)).to_netcdf('Feb-1850.nc', unlimited_dims='time')
# use open_mfdataset to read in a single file
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc'],
                       decode_cf=decode_cf, decode_times=decode_times)
print('time and time_bounds encoding, single-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)
# use open_mfdataset to read in files, combining into 1 Dataset
decode_times = True
decode_cf = True
ds = xr.open_mfdataset(['Jan-1850.nc', 'Feb-1850.nc'],
                       decode_cf=decode_cf, decode_times=decode_times)
print('--------------------')
print('time and time_bounds encoding, multi-file open_mfdataset')
print(ds.time.encoding)
print(ds.time_bounds.encoding)
```
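As an aside, a possible workaround (just a sketch, not verified against real model output; it assumes the desired time units and calendar are known up front, and the output filename is made up) is to pass the time encoding explicitly via the `encoding` argument of `to_netcdf` when writing the combined dataset back out:
```
# hypothetical: write the multi-file dataset with an explicit time encoding,
# since ds.time.encoding comes back empty from the multi-file open_mfdataset
ds.to_netcdf('Jan-Feb-1850.nc', unlimited_dims='time',
             encoding={'time': {'units': 'days since 1850-01-01',
                                'calendar': 'noleap'}})
```
That only papers over the problem for writing, though; `ds.time.encoding` itself is still empty after the multi-file `open_mfdataset`.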
Running the reproduction script produces
```
time and time_bounds encoding, single-file open_mfdataset
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (512,), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1,), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
--------------------
time and time_bounds encoding, multi-file open_mfdataset
{}
{'zlib': False, 'shuffle': False, 'complevel': 0, 'fletcher32': False, 'contiguous': False, 'chunksizes': (1, 2), 'source': '/gpfs/fs1/work/klindsay/analysis/CESM2_coup_carb_cycle_JAMES/Jan-1850.nc', 'original_shape': (1, 2), 'dtype': dtype('float64'), '_FillValue': nan, 'units': 'days since 1850-01-01', 'calendar': 'noleap'}
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,437418525