html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1379#issuecomment-295993132,https://api.github.com/repos/pydata/xarray/issues/1379,295993132,MDEyOklzc3VlQ29tbWVudDI5NTk5MzEzMg==,7799184,2017-04-21T00:54:28Z,2017-04-21T10:05:27Z,CONTRIBUTOR,"I realised that some of the Datasets I was trying to concatenate had different coordinate values (for coordinates I had assumed to be the same), so I guess xr.concat was aligning these coordinates before concatenating, and the resulting Dataset ended up much larger than it should have been. When I ensure I only concatenate Datasets with consistent coordinates, it works. However, resource consumption is still quite high compared to doing the same thing with numpy arrays: memory increased by 42% using xr.concat (against 6% using np.concatenate) and the whole processing took about 4 times longer.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,223231729
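A minimal sketch (not from the issue thread; the shapes and coordinate values are made up for illustration) of the alignment behaviour the comment above describes: when the non-concatenated coordinates differ, xr.concat outer-joins them, so the result spans the union of the coordinates and is padded with NaN, inflating its size.

```python
import numpy as np
import xarray as xr

# Two Datasets that share the "x" dimension but have offset "x" coordinates.
a = xr.Dataset({"v": ("x", np.ones(1000))}, coords={"x": np.arange(1000)})
b = xr.Dataset({"v": ("x", np.ones(1000))}, coords={"x": np.arange(1000) + 500})

# Concatenating along a new "time" dimension aligns "x" first (outer join):
# the result has x of size 1500 (the union), half NaN in each time slice.
misaligned = xr.concat([a, b], dim="time")
print(misaligned.sizes)

# With identical coordinates the result keeps the expected size of 1000.
c = xr.Dataset({"v": ("x", np.ones(1000))}, coords={"x": np.arange(1000)})
aligned = xr.concat([a, c], dim="time")
print(aligned.sizes)
```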