html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2159#issuecomment-505505980,https://api.github.com/repos/pydata/xarray/issues/2159,505505980,MDEyOklzc3VlQ29tbWVudDUwNTUwNTk4MA==,35968931,2019-06-25T15:50:33Z,2019-06-25T15:50:33Z,MEMBER,Closed by #2616 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-437579539,https://api.github.com/repos/pydata/xarray/issues/2159,437579539,MDEyOklzc3VlQ29tbWVudDQzNzU3OTUzOQ==,35968931,2018-11-10T12:10:00Z,2018-11-10T12:10:00Z,MEMBER,"@shoyer see my PR trying to implement this (#2553).
Inputting a list of lists into `auto_combine()` is working, but it wasn't obvious to me how to handle this within `open_mfdataset()`. A few approaches:
1) I could try to somehow generalise all of the list comprehensions in `open_mfdataset()`, which would be messy but general
2) Write some kind of recursive iterator function which would allow me to apply the `preprocess` and `dask.compute` functions to all the objects in the nested list (see the sketch after this list)
3) Separate the logic so that the input is assumed to be a flat list unless `infer_order_from_coords=True`
4) Always recursively flatten the input before opening the files, but store the structure somehow
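
Here's a minimal sketch of the recursive idea in (2) (the helper name `_map_nested`, and treating any list as another level of nesting, are assumptions):
```python
def _map_nested(func, obj):
    # Recursively apply func to every non-list element of a nested list
    if isinstance(obj, list):
        return [_map_nested(func, item) for item in obj]
    return func(obj)

# e.g. apply the user's preprocess function to every dataset:
# datasets = _map_nested(preprocess, list_of_lists_of_datasets)
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248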
https://github.com/pydata/xarray/issues/2159#issuecomment-435706762,https://api.github.com/repos/pydata/xarray/issues/2159,435706762,MDEyOklzc3VlQ29tbWVudDQzNTcwNjc2Mg==,35968931,2018-11-04T21:10:55Z,2018-11-05T00:56:06Z,MEMBER,"> we probably want to support all permutations of 1/2.

This is fine though, right? We can do all of this, because it should compartmentalise fairly easily, shouldn't it? You end up with logic like:
```python
def auto_combine(ds_sequence, infer_order_from_coords=True, check_alignment=True):
    if check_alignment:
        # Check alignment along non-concatenated dimensions (your (2))
        ...

    if infer_order_from_coords:
        # Use coordinates to determine tile_ID for each dataset in N-D (your (1))
        ...
    else:
        # Determine tile_IDs by structure of input in N-D (i.e. ordering in list-of-lists)
        ...

    # Join everything together
    return _concat_nd(tile_IDs, ds_sequence)
```
> I'm not sure we need to support this yet

We don't _need_ to, but I don't think it would be that hard (if the structure above is feasible), and I think it's a common use case. There's also an argument for putting special effort into generalizing this function as much as possible, because it lowers the barrier to entry to xarray for new users. Though perhaps I'm just biased because it happens to be my use case...
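For concreteness, one form the tile_IDs could take (just a suggestion, nothing here is settled): a mapping from tuples of integer positions, one integer per concatenation dimension, to datasets. The dataset names below are placeholders:
```python
# Hypothetical 2x2 case for concat_dims=['y', 'x']:
# the first tuple entry indexes position along y, the second along x
tile_ids = {(0, 0): ds_x1y1, (0, 1): ds_x2y1,
            (1, 0): ds_x1y2, (1, 1): ds_x2y2}
```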
Also if we know what form the tile_IDs should take then I can write the `_concat_nd` function now regardless of what happens with the alignment logic.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-435336049,https://api.github.com/repos/pydata/xarray/issues/2159,435336049,MDEyOklzc3VlQ29tbWVudDQzNTMzNjA0OQ==,35968931,2018-11-02T10:29:24Z,2018-11-02T11:07:17Z,MEMBER,"I was thinking about the general solution to this problem again and wanted to clarify some things.
Currently `concat()` will concatenate datasets in the order they are supplied, and will not check that the resulting dimension indexes are monotonic. This behaviour violates CF conventions (as mentioned by @aluhamaa) but currently passes silently.
I think that any general multi-dimensional version of the `auto_combine()` function (and therefore `open_mfdataset()`) should:
1) If possible use the values in the dimension indexes to arrange the datasets so that the indexes are monotonic,
2) Else issue a warning that some of the indexes supplied are not monotonic,
3) Then instead concatenate the supplied datasets in the order supplied (for some N-dimensional definition of ""order""). The warning should tell the user that's what it's doing.
This approach would be backwards-compatible and would accommodate users whose data does not have monotonic indexes (they would just have to arrange their datasets into the correct order themselves first), while still doing the obviously correct thing in unambiguous cases.
However this would mean that users wanting to do a multi-dimensional `auto_combine` on data without monotonic indexes would have to supply their datasets in some way that specifies their desired N-dimensional ordering. This could be done as list-of-lists, combining the inner-most dimensions first, e.g. `[[x1y1, x2y1], [x1y2, x2y2]]`, `concat_dims=['y', 'x']`. But `auto_combine` would then have to be able to handle this type of input, and quickly distinguish between the two cases of monotonic & non-monotonic indices. Is this the behaviour which we want?
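A minimal sketch of steps (1)-(3) for the 1-D case (the helper name `_arrange_datasets` is hypothetical, and I'm assuming that sorting on the first value of each dataset's index is a good enough attempt at (1)):
```python
import warnings

import xarray as xr

def _arrange_datasets(datasets, dim):
    try:
        # (1) use the dimension indexes to put the datasets in order
        ordered = sorted(datasets, key=lambda ds: ds.indexes[dim][0])
        index = xr.concat(ordered, dim=dim).indexes[dim]
        if index.is_monotonic_increasing or index.is_monotonic_decreasing:
            return ordered
    except KeyError:
        pass  # no dimension coordinate along dim
    # (2) warn that the indexes are not monotonic ...
    warnings.warn('indexes along %r are not monotonic, '
                  'concatenating in the order supplied' % dim)
    # (3) ... and fall back to the order the datasets were supplied in
    return datasets
```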
Also I'm assuming we are not going to provide functionality to handle uneven sub-lists, e.g. `[[t1x1, t1x2], [t2x1, t2x2, t2x3]]`?
### Edit:
I've just realised that there is a lot of related discussion in #2039, #1385, & #1823. I suppose what I'm suggesting here is essentially the N-D generalisation of the approach discussed in those issues, namely an extra argument `prealigned` for `open_mfdataset()`, which defaults to False. Then with `prealigned=True`, the required input would be a nested list of (paths to) datasets, which is nested the same number of times as there are dimensions in `concat_dims`. Then to recreate the current behaviour for an ordered 1D list of datasets with non-monotonic indexes you would only have to pass `prealigned=True`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-427892990,https://api.github.com/repos/pydata/xarray/issues/2159,427892990,MDEyOklzc3VlQ29tbWVudDQyNzg5Mjk5MA==,35968931,2018-10-08T16:12:06Z,2018-10-08T16:12:06Z,MEMBER,"Thanks @shoyer for the description of how this should be done properly.
In the meantime however, I thought I would describe how I solved the problem in my last comment. My method works but you probably wouldn't want to use it in xarray itself because it's pretty ""hacky"".
To avoid the issue of numpy reading the `__array__` methods of xarray objects and doing weird things, I simply contained each dataset within a single-element dictionary in order to hide the offending methods, i.e.
```python
import numpy as np

data = create_test_data()  # helper from xarray's test suite
data_grid = np.array([{'key': data}], dtype='object')
```
With this trick in place, writing something which will concatenate the numpy grid-like array of (dicts holding) datasets is quick:
```python
from xarray import concat
import numpy as np
def _concat_nd(obj_grid, concat_dims=None, data_vars=None, **kwargs):
    # Combine datasets along one dimension at a time.
    # Have to start with the last axis and finish with axis=0, otherwise
    # axes will disappear before the loop reaches them
    for axis in reversed(range(obj_grid.ndim)):
        obj_grid = np.apply_along_axis(_concat_dicts, axis, arr=obj_grid,
                                       dim=concat_dims[axis],
                                       data_vars=data_vars[axis], **kwargs)

    # Grid should now only contain one dict, which holds the concatenated xarray object
    return obj_grid.item()['key']


def _concat_dicts(dict_objs, dim, data_vars, **kwargs):
    objs = [dict_obj['key'] for dict_obj in dict_objs]
    return {'key': concat(objs, dim, data_vars, **kwargs)}
```
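For example, usage on a 2x2 grid might look like this (the dataset names are placeholders, and the grid is assumed to already be in the correct order):
```python
# Wrap each dataset in a dict so numpy stores references instead of
# trying to convert the datasets themselves via __array__
grid = np.empty((2, 2), dtype='object')
for j, row in enumerate([[ds_x1y1, ds_x2y1], [ds_x1y2, ds_x2y2]]):
    for i, ds in enumerate(row):
        grid[j, i] = {'key': ds}

combined = _concat_nd(grid, concat_dims=['y', 'x'], data_vars=['all', 'all'])
```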
In case anyone is interested then [this is how](https://github.com/TomNicholas/xcollect/blob/master/concatenate.py) I've (hopefully temporarily) solved the N-D concatenation problem in the case of my data.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-417802225,https://api.github.com/repos/pydata/xarray/issues/2159,417802225,MDEyOklzc3VlQ29tbWVudDQxNzgwMjIyNQ==,35968931,2018-08-31T22:12:28Z,2018-08-31T22:16:15Z,MEMBER,"I started having a go at writing the second half of this - the ""n-dimensional-concatenation"" function which would accept a grid of xarray.Dataset/DataArray objects (assumed to be in the correct order along all dimensions), and return a single merged dataset.
However, I think there's an issue with using
> something like a NumPy object array of xarray.Dataset/DataArray objects.

My plan was to call `np.apply_along_axis` to apply the 1D `xr.concat()` function along each axis in turn, something like
```python
from numpy import apply_along_axis
from xarray import concat
def concat_nd(obj_grid, concat_dims=None):
    """"""
    Concatenates a structured ndarray of xarray Datasets along multiple dimensions.

    Parameters
    ----------
    obj_grid : numpy array of Dataset and DataArray objects
        N-dimensional numpy object array containing xarray objects in the shape
        they are to be concatenated. Each object is expected to consist of
        variables and coordinates with matching shapes except for along the
        concatenated dimension.
    concat_dims : list of str or DataArray or pandas.Index
        Names of the dimensions to concatenate along. Each dimension in this
        argument is passed on to :py:func:`xarray.concat` along with the dataset
        objects. Should therefore be a list of valid dimension arguments to
        xarray.concat().

    Returns
    -------
    combined : xarray.Dataset
    """"""

    # Combine datasets along one dimension at a time
    # Start with last axis and finish with axis=0
    for axis in reversed(range(obj_grid.ndim)):
        obj_grid = apply_along_axis(concat, axis, arr=obj_grid, dim=concat_dims[axis])

    # Grid should now only contain one xarray object
    return obj_grid.item()
```
However, testing this code with
```python
def test_concat_1d(self):
    data = create_test_data()
    split_data = [data.isel(dim1=slice(3)), data.isel(dim1=slice(3, None))]

    # Will explain why I'm forced to create the ndarray like this shortly
    split_data_grid = np.empty(shape=(2), dtype=np.object)
    split_data_grid[0] = split_data[0]
    split_data_grid[1] = split_data[1]

    reconstructed = concat_nd(split_data_grid, ['dim1'])

    xrt.assert_identical(data, reconstructed)
```
throws an error from within `np.apply_along_axis`
```
TypeError: cannot directly convert an xarray.Dataset into a numpy array. Instead, create an xarray.DataArray first, either with indexing on the Dataset or by invoking the `to_array()` method.
```
I think this is because even just the idea of having an ndarray containing xarray datasets seems to cause problems - if I do it with a single item then xarray thinks I'm trying to convert the Dataset into a numpy array and throws the same error:
```python
data = create_test_data()
data_grid = np.array(data, dtype='object')
```
and if I do it with multiple items then numpy will dive down and extract the variables in the dataset instead of just storing a reference to the dataset:
```python
data = create_test_data()
split_data = [data.isel(dim1=slice(3)), data.isel(dim1=slice(3, None))]
split_data_grid = np.array(split_data, dtype='object')
print(split_data_grid)
```
returns
```
[['time' 'dim2' 'dim3' 'var1' 'var2' 'var3' 'numbers']
['time' 'dim2' 'dim3' 'var1' 'var2' 'var3' 'numbers']]
```
when I expected something more like
```
numpy.array([<xarray.Dataset>, <xarray.Dataset>])
```
(This is why I had to create an empty array and then fill it afterwards in my example test further up.)
Is this the intended behaviour of xarray? Does this mean I can't use numpy arrays of xarray objects at all for this problem? If so then what structure do you think I should use instead (list of lists etc.)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-412177726,https://api.github.com/repos/pydata/xarray/issues/2159,412177726,MDEyOklzc3VlQ29tbWVudDQxMjE3NzcyNg==,35968931,2018-08-10T19:08:56Z,2018-08-11T00:09:28Z,MEMBER,"I've been looking through the functions `open_mfdataset`, `auto_combine`, `_auto_concat` and `concat` to see how one might go about achieving this in general.
The current behaviour isn't completely explicit, and I would like to check my understanding with a few questions:
1) If you `concat` two datasets along a dimension which doesn't have a coordinate, then `concat` will not be able to know what order to concatenate them in, so it just does it in the order they were provided?
2) Although `auto_combine` can determine the common dimension to concatenate datasets over, it doesn't know anything about insertion order! Even if the datasets have dimension coordinates, the [line](https://github.com/pydata/xarray/blob/48d55eea052fec204b843babdc81c258f3ed5ce1/xarray/core/combine.py#L432-L433)
```python
grouped = itertoolz.groupby(lambda ds: tuple(sorted(ds.data_vars)), datasets).values()
```
will only organise the datasets into groups according to the set of data variables they have; it doesn't order the datasets within each group according to the values in the dimension coordinates?
We can show this because this (new) testcase fails:
```python
@requires_dask
def test_auto_combine_along_coords(self):
    # drop the third dimension to keep things relatively understandable
    data = create_test_data()
    for k in list(data.variables):
        if 'dim3' in data[k].dims:
            del data[k]

    data_split1 = data.isel(dim2=slice(4))
    data_split2 = data.isel(dim2=slice(4, None))
    split_data = [data_split2, data_split1]  # Deliberately arrange datasets in the wrong order

    assert_identical(data, auto_combine(split_data, 'dim2'))
```
with output
```
E AssertionError:
E Dimensions: (dim1: 8, dim2: 9, dim3: 10, time: 20)
E Coordinates:
E * time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-20
E * dim2 (dim2) float64 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
E Dimensions without coordinates: dim1, dim3
E Data variables:
E var1 (dim1, dim2) float64 1.473 1.363 -1.192 ... 0.2341 -0.3403 0.405
E var2 (dim1, dim2) float64 -0.7952 0.7566 0.2468 ... -0.6822 1.455 0.7314
E
E Dimensions: (dim1: 8, dim2: 9, time: 20)
E Coordinates:
E * time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-20
E * dim2 (dim2) float64 2.0 2.5 3.0 3.5 4.0 0.0 0.5 1.0 1.5
E Dimensions without coordinates: dim1
E Data variables:
E var1 (dim1, dim2) float64 1.496 -1.834 -0.6588 ... 1.326 0.6805 -0.2999
E var2 (dim1, dim2) float64 0.7926 -1.063 0.1062 ... -0.447 -0.8955
```
3) So the [call](https://github.com/pydata/xarray/blob/48d55eea052fec204b843babdc81c258f3ed5ce1/xarray/core/combine.py#L434-L435) to `_auto_concat` just assumes that the datasets are provided in the correct order:
```python
concatenated = [_auto_concat(ds, dim=dim, data_vars=data_vars, coords=coords) for ds in grouped]
```
4) Therefore what needs to be done here is to replace the `groupby` call with something that actually orders the datasets according to the values in the dimension coordinates, works in N dimensions, and outputs a structure of datasets upon which `_auto_concat` can be called repeatedly, along every concatenation dimension?
Also, `concat` has a `positions` argument, which allows you to manually specify the concatenation order, but it's not used at all by `auto_combine`. In the main use case imagined here (concatenating the domains of parallelized simulation output), the user will know the desired position of each dataset, because it will correspond to how they divided up their domain in the first place. Perhaps an easier way to provide for that use case would be to propagate the `positions` argument upwards so that the user can do something like
```python
# User specifies how they split up their domain
domain_decomposition_structure = how_was_this_parallelized('output.*.nc')
# Feeds this info into open_mfdataset
full_domain = xr.open_mfdataset('output.*.nc', positions=domain_decomposition_structure)
```
This approach would be much less general but would dodge the issue of writing generalized N-D auto-concatenation logic.
Final point - this common use case also has the added complexity of having ghost or guard cells around every dataset, which should be thrown away. Clearly some user input is required here (`ghost_cells_x=2, ghost_cells_y=2, ghost_cells_z=0, ...`), but I'm really not sure of the best way to fit that kind of logic in. Yet more arguments to `open_mfdataset`?
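
For what it's worth, this is roughly the kind of trimming I have in mind, written as a standalone function (the dimension names and the uniform ghost-cell width are assumptions about the data):
```python
def trim_ghost_cells(ds, ghost_cells_x=2, ghost_cells_y=2):
    # Drop the guard cells from both ends of each spatial dimension;
    # a width of 0 leaves that dimension untouched
    return ds.isel(x=slice(ghost_cells_x, -ghost_cells_x or None),
                   y=slice(ghost_cells_y, -ghost_cells_y or None))
```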
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-410357191,https://api.github.com/repos/pydata/xarray/issues/2159,410357191,MDEyOklzc3VlQ29tbWVudDQxMDM1NzE5MQ==,35968931,2018-08-03T19:44:32Z,2018-08-03T19:44:32Z,MEMBER,"Thanks @jnhansen! I actually ended up writing my own, much lower-level version of this using the netcdf library. I did that because I was finding it hard to work out how to merge multiple datasets and then write the data out to a new netcdf file in chunks - I kept accidentally loading the entire merged dataset into memory at once. This might just be because I wasn't using the dask integration properly, though.
Have you tried using your function to merge netcdf files, then write out a single file which is larger than RAM? Is that even possible in xarray?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-391512018,https://api.github.com/repos/pydata/xarray/issues/2159,391512018,MDEyOklzc3VlQ29tbWVudDM5MTUxMjAxOA==,35968931,2018-05-23T22:07:30Z,2018-05-23T22:07:30Z,MEMBER,"@shoyer At the risk of going off on a tangent - I think that only works if the number of guard cells you want to remove can be determined from the data in the dataset you're loading, because preprocess doesn't accept any further arguments.
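(You can bind fixed values with `functools.partial`, but then every file still gets identical treatment - a quick sketch, with assumed dimension name and file pattern:)
```python
from functools import partial

import xarray as xr

def trim(ds, n_ghost):
    # The same slice is applied to every file, regardless of where
    # that file sits in the simulation domain
    return ds.isel(x=slice(n_ghost, -n_ghost))

combined = xr.open_mfdataset('output.*.nc', preprocess=partial(trim, n_ghost=2))
```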
For example, say you want to remove all ghost cells except the ones at the edge of your simulation domain. If there's no information in each dataset which marks it as a dataset containing a simulation boundary region, then the preprocess function can't know to treat it differently without further arguments. I might be wrong though?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248
https://github.com/pydata/xarray/issues/2159#issuecomment-391501504,https://api.github.com/repos/pydata/xarray/issues/2159,391501504,MDEyOklzc3VlQ29tbWVudDM5MTUwMTUwNA==,35968931,2018-05-23T21:25:12Z,2018-05-23T21:25:12Z,MEMBER,"Another suggestion: as one of the obvious uses for this is in collecting the output from parallelized simulations, which always have ghost cells around the domain each processor computes on, would it be worth adding an option to throw those away as the multi-file dataset is loaded? Or is that a task better dealt with by slicing the resultant array after the fact?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,324350248