html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2525#issuecomment-439766587,https://api.github.com/repos/pydata/xarray/issues/2525,439766587,MDEyOklzc3VlQ29tbWVudDQzOTc2NjU4Nw==,1197350,2018-11-19T04:13:37Z,2018-11-19T04:13:37Z,MEMBER,"> What would the coordinates look like?
>
> 1. apply `func` also for coordinate
> 2. always apply `mean` to coordinate

If I think about my applications, I would probably always want to apply `mean` to dimension coordinates, but would like to be able to choose for non-dimension coordinates.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,375126758
https://github.com/pydata/xarray/issues/2525#issuecomment-434480457,https://api.github.com/repos/pydata/xarray/issues/2525,434480457,MDEyOklzc3VlQ29tbWVudDQzNDQ4MDQ1Nw==,1197350,2018-10-30T21:41:17Z,2018-10-30T21:41:25Z,MEMBER,"> I would lean towards a coordinate based representation since it's a little more usable/certain to be correct.

I feel that this could become too complex in the case of irregularly spaced coordinates. I slightly favor the index-based approach (as in my function above), which one would call like:
```python
aggregate_da(da, {'lat': 2, 'lon': 2})
```
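
For context, a minimal sketch of what scikit-image's `block_reduce` does on a plain array (the values below are illustrative):

```python
import numpy as np
from skimage.measure import block_reduce

a = np.arange(16).reshape(4, 4)
# sum each non-overlapping 2x2 block -> shape (2, 2)
out = block_reduce(a, block_size=(2, 2), func=np.sum)
# out is array([[10, 18],
#               [42, 50]])
```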
If we do that, we can just use scikit-image's `block_reduce` function, which is vectorized and works great with `apply_ufunc`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,375126758
https://github.com/pydata/xarray/issues/2525#issuecomment-434294356,https://api.github.com/repos/pydata/xarray/issues/2525,434294356,MDEyOklzc3VlQ29tbWVudDQzNDI5NDM1Ng==,1197350,2018-10-30T13:10:16Z,2018-10-30T13:10:39Z,MEMBER,"FYI, I do this often in my work with this sort of function:
```python
import numpy as np
import xarray as xr
from skimage.measure import block_reduce

def aggregate_da(da, agg_dims, suf='_agg'):
    input_core_dims = list(agg_dims)
    n_agg = len(input_core_dims)
    core_block_size = tuple(agg_dims[k] for k in input_core_dims)
    # apply_ufunc moves the core dims to the end, so all other dims get block size 1
    block_size = (da.ndim - n_agg) * (1,) + core_block_size
    output_core_dims = [dim + suf for dim in input_core_dims]
    output_sizes = {(dim + suf): da.shape[da.get_axis_num(dim)] // agg_dims[dim]
                    for dim in input_core_dims}
    da_out = xr.apply_ufunc(block_reduce, da,
                            kwargs={'block_size': block_size},
                            input_core_dims=[input_core_dims],
                            output_core_dims=[output_core_dims],
                            output_sizes=output_sizes,
                            output_dtypes=[da.dtype],
                            dask='parallelized')
    # coarsen the dimension coordinates by averaging each block
    for dim in input_core_dims:
        new_coord = block_reduce(da[dim].data, (agg_dims[dim],), func=np.mean)
        da_out.coords[dim + suf] = (dim + suf, new_coord)
    return da_out
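
# Hypothetical usage sketch (names and values are illustrative, not from the thread):
# coarsen a DataArray by a factor of 2 along 'lat' and 'lon'. block_reduce
# defaults to func=np.sum, so each output cell is the sum of a 2x2 block,
# while the new 'lat_agg'/'lon_agg' coordinates are block means.
#
#     da = xr.DataArray(np.arange(16.).reshape(4, 4),
#                       dims=['lat', 'lon'],
#                       coords={'lat': np.arange(4.), 'lon': np.arange(4.)})
#     coarse = aggregate_da(da, {'lat': 2, 'lon': 2})
#     # coarse has dims ('lat_agg', 'lon_agg') and shape (2, 2)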
```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,375126758
https://github.com/pydata/xarray/issues/2525#issuecomment-434294114,https://api.github.com/repos/pydata/xarray/issues/2525,434294114,MDEyOklzc3VlQ29tbWVudDQzNDI5NDExNA==,1197350,2018-10-30T13:09:25Z,2018-10-30T13:09:25Z,MEMBER,"This is being discussed in #1192 under a different name.
Yes, we need this feature.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,375126758