html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5790#issuecomment-918697549,https://api.github.com/repos/pydata/xarray/issues/5790,918697549,IC_kwDOAMm_X842wjZN,1217238,2021-09-14T00:39:03Z,2021-09-14T00:39:03Z,MEMBER,"> I have a hunch that all arrays get aligned to the final merged coordinate space (which is much bigger), _before_ they are combined, which means at some point in the middle of the process we have a bunch of arrays in memory that have been inflated to the size of the final output array.
Yes, I'm pretty sure this is the case.
> If that's the case, it seems like it should be possible to make this operation more efficient by creating just one inflated array and adding the data from the input arrays to it in-place? Or is this an expected and unavoidable behavior with merging? (fwiw this also affects several other combination methods, presumably because they use `merge()` under the hood?)
Yes, I imagine this could work.
But on the other hand, the implementation would get more complex. For example, it's nice to be able to use `np.concatenate()` so things automatically work with other array backends like Dask.
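For instance (a hypothetical sketch, assuming Dask is installed), the very same `np.concatenate` call dispatches to the Dask backend with no special-casing, which the hand-rolled in-place approach above would lose:

```python
import dask.array as da
import numpy as np

# Two lazy, chunked Dask arrays standing in for inputs to be combined.
chunks = [da.ones((2, 3), chunks=(1, 3)) for _ in range(2)]

# NEP-18 dispatch: np.concatenate on Dask arrays returns another lazy
# Dask array; nothing is loaded into memory until .compute() is called.
combined = np.concatenate(chunks, axis=0)
```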
By the way, if you haven't tried Dask already, I would recommend it for this use case. It can do streaming operations that can result in significant memory savings.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,995207525