issues: 1642299599
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1642299599 | I_kwDOAMm_X85h44DP | 7683 | automatically chunk in groupby binary ops | 2448579 | closed | 0 |  |  | 0 | 2023-03-27T15:14:09Z | 2023-07-27T16:41:35Z | 2023-07-27T16:41:34Z | MEMBER |  |  |  | (body below) | (reactions below) |  | completed | 13221727 | issue |

body:

What happened?

From https://discourse.pangeo.io/t/xarray-unable-to-allocate-memory-how-to-size-up-problem/3233/4

Consider

```python
# ds is dataset with big dask arrays
mean = ds.groupby("time.day").mean()
mean.to_netcdf()

mean = xr.open_dataset(...)
ds.groupby("time.day") - mean
```

In [...] we will eagerly construct [...]

What did you expect to happen?

I think the only solution is to automatically chunk if [...]
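Below is a minimal, runnable sketch of the pattern in the report, using synthetic data; the variable name, sizes, and the file name "mean.nc" are invented for illustration. The final `.chunk()` call is the manual workaround that the proposed automatic chunking would make unnecessary.

```python
# Hypothetical reproducer sketch (synthetic data; sizes and file name invented).
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=4 * 365, freq="D")
ds = xr.Dataset(
    {"t2m": (("time", "x"), np.random.rand(time.size, 100))},
    coords={"time": time},
).chunk({"time": 90})  # stand-in for "big dask arrays"

mean = ds.groupby("time.day").mean()  # day-of-month climatology
mean.to_netcdf("mean.nc")
mean = xr.open_dataset("mean.nc")     # reloaded mean is numpy-backed, not chunked

# The groupby binary op below is the step the report flags: with a
# numpy-backed `mean`, a large intermediate is constructed eagerly.
# anom = ds.groupby("time.day") - mean

# Manual workaround today: chunk `mean` first so the subtraction stays lazy.
anom = ds.groupby("time.day") - mean.chunk()
```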
Minimal Complete Verifiable Example

No response

MVCE confirmation

Relevant log output

No response

Anything else we need to know?

No response

Environment
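The fix the report asks for amounts to a check inside the groupby binary op. As a rough illustration only: the helper name below is invented, it is not xarray's internal code, and the elided condition is assumed from the title and context to be "the grouped object is dask-backed but the other operand is not".

```python
# Hypothetical helper (invented name; not xarray internals) sketching the
# automatic chunking the issue proposes for groupby binary ops.
import xarray as xr


def maybe_chunk_other(obj: xr.Dataset, other: xr.Dataset) -> xr.Dataset:
    """Return `other`, chunked if `obj` is dask-backed and `other` is not."""
    obj_is_chunked = len(obj.chunks) > 0      # empty mapping when numpy-backed
    other_is_chunked = len(other.chunks) > 0
    if obj_is_chunked and not other_is_chunked:
        # One chunk per variable is enough to keep the binary op lazy.
        return other.chunk()
    return other
```

Applied to the sketch above, `maybe_chunk_other(ds, mean)` would have the same effect as the manual `mean.chunk()` workaround.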
                
reactions:

```json
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7683/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```