id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
89268800,MDU6SXNzdWU4OTI2ODgwMA==,438,`xray.open_mfdataset` concatenates also variables without time dimension,3404817,closed,0,,1172685,13,2015-06-18T11:34:53Z,2017-09-19T16:16:58Z,2015-07-15T21:47:11Z,CONTRIBUTOR,,,,"When opening a multi-file dataset with `xray.open_mfdataset`, some variables that do not have a `time` dimension are also concatenated. My netCDF files contain a lot of those ""static"" variables (e.g. grid spacing etc.). `netCDF4.MFDataset` used to handle those as expected (i.e. it did not concatenate them). Is the different behaviour of `xray.open_mfdataset` intentional, or due to a bug?

Note: I am using `decode_times=False`.

## Example

``` python
with xray.open_dataset(files[0], decode_times=False) as single:
    print single['dz']
```

```
array([ 1000. , 1000. , 1000. , 1000. , 1000. , 1000. , 1000. , 1000. ,
        1000. , 1000. , 1000. , 1000. , 1000. , 1000. , 1000. , 1000. ,
        1019.68078613, 1056.44836426, 1105.99511719, 1167.80700684,
        1242.41333008, 1330.96777344, 1435.14099121, 1557.12585449,
        1699.67956543, 1866.21240234, 2060.90234375, 2288.85205078,
        2556.24707031, 2870.57495117, 3240.8371582 , 3677.77246094,
        4194.03076172, 4804.22363281, 5524.75439453, 6373.19189453,
        7366.94482422, 8520.89257812, 9843.65820312, 11332.46582031,
        12967.19921875, 14705.34375 , 16480.70898438, 18209.13476562,
        19802.234375 , 21185.95703125, 22316.50976562, 23186.49414062,
        23819.44921875, 24257.21679688, 24546.77929688, 24731.01367188,
        24844.328125 , 24911.97460938, 24951.29101562, 24973.59375 ,
        24985.9609375 , 24992.67382812, 24996.24414062, 24998.109375 ])
Coordinates:
  * z_t      (z_t) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 6500.0 ...
Attributes:
    long_name: thickness of layer k
    units: centimeters
```

``` python
with xray.open_mfdataset(files, decode_times=False) as multiple:
    print multiple['dz']
```

```
dask.array
Coordinates:
  * z_t      (z_t) float32 500.0 1500.0 2500.0 3500.0 4500.0 5500.0 6500.0 ...
  * time     (time) float64 3.653e+04 3.656e+04 3.659e+04 3.662e+04 ...
Attributes:
    long_name: thickness of layer k
    units: centimeters
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/438/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
95532383,MDExOlB1bGxSZXF1ZXN0NDAxNzcyNjQ=,478,Xray v0.5.2 updates,1217238,closed,0,,1172685,0,2015-07-16T21:19:14Z,2015-07-16T21:40:23Z,2015-07-16T21:40:22Z,MEMBER,,0,pydata/xarray/pulls/478,"Fixes #444
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/478/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
95306928,MDExOlB1bGxSZXF1ZXN0NDAwNzgzNjQ=,477,Bytes attributes are decoded to strings with engine='h5netcdf',1217238,closed,0,,1172685,0,2015-07-15T22:49:03Z,2015-07-16T18:11:42Z,2015-07-16T18:11:42Z,MEMBER,,0,pydata/xarray/pulls/477,"Fixes #451
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/477/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
95089244,MDExOlB1bGxSZXF1ZXN0Mzk5ODMwNzY=,473,Rewrite of xray.concat,1217238,closed,0,,1172685,0,2015-07-15T02:33:40Z,2015-07-15T21:47:14Z,2015-07-15T21:47:11Z,MEMBER,,0,pydata/xarray/pulls/473,"Fixes #464
Fixes #438

The optional arguments `concat_over` and `mode` in `xray.concat` have been removed and replaced by `data_vars` and `coords`. The new arguments are both more easily understood and more robustly implemented.
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/473/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 95011790,MDExOlB1bGxSZXF1ZXN0Mzk5NDg2MzA=,472,Add support for reading/writing complex numbers with h5netcdf,1217238,closed,0,,1172685,0,2015-07-14T18:48:03Z,2015-07-14T20:24:06Z,2015-07-14T20:24:04Z,MEMBER,,0,pydata/xarray/pulls/472,,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/472/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 92528846,MDExOlB1bGxSZXF1ZXN0MzkwNDI4ODc=,450,Add xray.save_mfdataset,1217238,closed,0,,1172685,0,2015-07-02T02:19:41Z,2015-07-06T18:41:27Z,2015-07-06T18:41:25Z,MEMBER,,0,pydata/xarray/pulls/450,"This function allows for saving multiple datasets to disk simultaneously, which is useful when processing large datasets with dask.array. 
For example, to save a dataset too big to fit into memory to one file per year, we could write:

```
>>> years, datasets = zip(*ds.groupby('time.year'))
>>> paths = ['%s.nc' % y for y in years]
>>> xray.save_mfdataset(datasets, paths)
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/450/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
93151446,MDExOlB1bGxSZXF1ZXN0MzkyMjc5MDg=,454,Fixed bug in serializing datetime scalars,1217238,closed,0,,1172685,0,2015-07-05T22:18:07Z,2015-07-06T04:56:53Z,2015-07-06T04:56:52Z,MEMBER,,0,pydata/xarray/pulls/454,,"{""url"": ""https://api.github.com/repos/pydata/xarray/issues/454/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
93151483,MDExOlB1bGxSZXF1ZXN0MzkyMjc5MTI=,455,Fix min/max for arrays with string or unicode types,1217238,closed,0,,1172685,0,2015-07-05T22:18:31Z,2015-07-06T04:56:22Z,2015-07-06T04:56:20Z,MEMBER,,0,pydata/xarray/pulls/455,"Fixes #453
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/455/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
91547750,MDExOlB1bGxSZXF1ZXN0Mzg3MTk1MTI=,446,Preprocess argument for open_mfdataset and threading lock,1217238,closed,0,,1172685,4,2015-06-28T03:33:19Z,2015-07-02T17:16:41Z,2015-06-29T18:06:54Z,MEMBER,,0,pydata/xarray/pulls/446,"Fixes #443
Fixes #444
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/446/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
89180768,MDU6SXNzdWU4OTE4MDc2OA==,437,bit of code missing in documentation (http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files),2539828,closed,0,,1172685,1,2015-06-18T02:34:08Z,2015-06-22T15:44:59Z,2015-06-22T15:44:59Z,NONE,,,,"in [http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files](http://xray.readthedocs.org/en/latest/io.html#combining-multiple-files) there is a bit of code missing in the function `read_netcdfs`:

instead of:

```
def read_netcdfs(files, dim, transform_func=None):
    def process_one_path(path):
        # use a context manager, to ensure the file gets closed after use
        with xray.open_dataset(path) as ds:
            # transform_func should do some sort of selection or
            # aggregation
            if transform_func is not None:
                ds = transform_func(ds)
            # load all data from the transformed dataset, to ensure we can
            # use it after closing each original file
            ds.load()
            return ds

    paths = sorted(glob(files))
    datasets = [process_one_path(p) for p in paths]
    xray.concat(datasets, dim)
```

it should be:

```
def read_netcdfs(files, dim, transform_func=None):
    def process_one_path(path):
        # use a context manager, to ensure the file gets closed after use
        with xray.open_dataset(path) as ds:
            # transform_func should do some sort of selection or
            # aggregation
            if transform_func is not None:
                ds = transform_func(ds)
            # load all data from the transformed dataset, to ensure we can
            # use it after closing each original file
            ds.load()
            return ds

    paths = sorted(glob(files))
    datasets = [process_one_path(p) for p in paths]
    dset = xray.concat(datasets, dim)
    return dset
```
","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/437/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue