Issue #2159: Concatenate across multiple dimensions with open_mfdataset
State: closed (completed) · Comments: 27 · Author association: MEMBER
Created: 2018-05-18 · Closed: 2019-06-25 · Updated: 2019-09-16 · Repo: pydata/xarray

#### Code Sample

```python
import numpy as np
import xarray as xr

# Create 4 datasets containing sections of contiguous (x, y) data
for i, x in enumerate([1, 3]):
    for j, y in enumerate([10, 40]):
        ds = xr.Dataset({'foo': (('x', 'y'), np.ones((2, 3)))},
                        coords={'x': [x, x + 1],
                                'y': [y, y + 10, y + 20]})
        ds.to_netcdf('ds.' + str(i) + str(j) + '.nc')

# Try to open them all in one go
ds_read = xr.open_mfdataset('ds.*.nc')
print(ds_read)
```

#### Problem description

Currently ``xr.open_mfdataset`` detects a single common dimension and concatenates the datasets along it. However, a common use case is a set of NetCDF files that share two or more common dimensions and need to be concatenated along all of them simultaneously (for example, collecting the output of any large-scale simulation that parallelizes over more than one dimension at once). For the behaviour of ``xr.open_mfdataset`` to be n-dimensional, it should automatically recognise and concatenate along all common dimensions.

#### Expected Output

```
Dimensions:  (x: 4, y: 6)
Coordinates:
  * x        (x) int64 1 2 3 4
  * y        (y) int64 10 20 30 40 50 60
Data variables:
    foo      (x, y) float64 dask.array
```

#### Current output of ``xr.open_mfdataset()``

```
Dimensions:  (x: 4, y: 12)
Coordinates:
  * x        (x) int64 1 2 3 4
  * y        (y) int64 10 20 30 40 50 60 10 20 30 40 50 60
Data variables:
    foo      (x, y) float64 dask.array
```
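For reference, a minimal sketch of a manual workaround (not part of the original report): open the files yourself and nest two ``xr.concat`` calls, concatenating along ``y`` within each row of files and then along ``x`` across rows. It assumes the ``ds.<i><j>.nc`` file layout produced by the reproduction script above.

```python
import xarray as xr

# Manual two-dimensional combine: concatenate along 'y' within each
# row of files, then stack the resulting rows along 'x'.
# Assumes the ds.<i><j>.nc naming from the reproduction script above.
rows = []
for i in range(2):
    row = [xr.open_dataset(f'ds.{i}{j}.nc') for j in range(2)]
    rows.append(xr.concat(row, dim='y'))
combined = xr.concat(rows, dim='x')
print(combined)  # Dimensions: (x: 4, y: 6), matching the expected output
```

Later xarray releases (0.12.2 and newer) appear to address this directly via ``xr.combine_by_coords`` / ``xr.combine_nested`` and the ``combine`` argument to ``open_mfdataset``, which is the functionality this issue was closed against.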