issue_comments: 357996887
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/1832#issuecomment-357996887 | https://api.github.com/repos/pydata/xarray/issues/1832 | 357996887 | MDEyOklzc3VlQ29tbWVudDM1Nzk5Njg4Nw== | 306380 | 2018-01-16T15:27:33Z | 2018-01-16T15:27:33Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 288785270

body:

```python
# monthly climatology
ds_mm = ds.groupby('time.month').mean(dim='time')

# anomaly
ds_anom = ds.groupby('time.month') - ds_mm
```

I would actually hope that this would be a little nicer than the case in the dask issue, especially if you are chunked by some dimension other than time. In the case that @shoyer points to, we're creating a single global aggregation value and then applying it to all of the input data. In @rabernat's case we have at least twelve aggregation points, and possibly more if there are other chunked dimensions like ensemble (or lat/lon, if you choose to chunk those).
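A minimal sketch of the chunking scenario described in the comment, assuming xarray, dask, numpy, and pandas are installed; the dataset, the variable name `tas`, and the ensemble size are made up for illustration and are not from the original issue:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical example data: two years of daily values with a small
# ensemble dimension (names and sizes are illustrative only).
time = pd.date_range('2000-01-01', '2001-12-31', freq='D')
ds = xr.Dataset(
    {'tas': (('time', 'ensemble'), np.random.rand(time.size, 4))},
    coords={'time': time, 'ensemble': np.arange(4)},
)

# Chunk along ensemble rather than time, so every chunk holds a full
# time series and the twelve monthly aggregates are computed per chunk.
ds = ds.chunk({'ensemble': 1})

ds_mm = ds.groupby('time.month').mean(dim='time')  # monthly climatology
ds_anom = ds.groupby('time.month') - ds_mm         # anomaly
```

Because no chunk spans the time dimension here, each monthly mean reduces data within a single chunk, rather than funneling everything through one global aggregation as in the dask issue @shoyer referenced.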