issues: 363629186
| field | value |
|---|---|
| id | 363629186 |
| node_id | MDU6SXNzdWUzNjM2MjkxODY= |
| number | 2438 |
| title | Efficient workaround to group by multiple dimensions |
| user | 5308236 |
| state | closed |
| locked | 0 |
| comments | 3 |
| created_at | 2018-09-25T15:11:38Z |
| updated_at | 2018-10-02T15:56:53Z |
| closed_at | 2018-10-02T15:56:53Z |
| author_association | NONE |
| state_reason | completed |
| repo | 13221727 |
| type | issue |

body:

Grouping by multiple dimensions is not yet supported (#324):
An inefficient solution is to run the for loops manually:

```python
import itertools

import numpy as np
import xarray as xr

# Unique labels along each dimension (the labels in `d` may repeat).
a_vals, b_vals = np.unique(d['a'].values), np.unique(d['b'].values)
result = xr.DataArray(np.zeros([len(a_vals), len(b_vals)]),
                      coords={'a': a_vals, 'b': b_vals}, dims=['a', 'b'])

# Fill each (a, b) cell with the mean of all matching elements.
# The loop variables get their own names so they do not shadow the
# label arrays being iterated.
for ai, bi in itertools.product(a_vals, b_vals):
    cells = d.sel(a=ai, b=bi)
    merge = cells.mean()
    result.loc[{'a': ai, 'b': bi}] = merge
```

which yields, for example:

```
<xarray.DataArray (a: 2, b: 2)>
array([[2., 3.],
       [5., 6.]])
Coordinates:
  * a        (a) <U1 'x' 'y'
  * b        (b) int64 0 1
```

This is however horribly slow for larger arrays. Is there a more efficient / straightforward workaround?

Output of
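One commonly suggested, much faster alternative is to round-trip through pandas, whose groupby supports multiple keys at once. The following is a minimal sketch, not taken from this thread; it assumes `d` is a 2-D `DataArray` whose `a` and `b` labels may repeat, and the example data is hypothetical:

```python
import numpy as np
import xarray as xr

# Hypothetical example data: duplicate labels along both dimensions,
# so several cells share the same (a, b) combination.
d = xr.DataArray(
    np.arange(16, dtype=float).reshape(4, 4),
    coords={'a': ['x', 'x', 'y', 'y'], 'b': [0, 1, 0, 1]},
    dims=['a', 'b'],
)

# Flatten to a pandas Series indexed by an (a, b) MultiIndex, group by
# both levels in one pass, then rebuild a DataArray from the result.
s = d.to_series()
grouped = s.groupby(level=['a', 'b']).mean()
result = xr.DataArray.from_series(grouped)
print(result)  # 2 x 2 array holding the mean of each (a, b) cell
```

Because pandas performs the aggregation in a single vectorized pass rather than a Python-level double loop, this scales far better with array size.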
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |