html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1823#issuecomment-531905844,https://api.github.com/repos/pydata/xarray/issues/1823,531905844,MDEyOklzc3VlQ29tbWVudDUzMTkwNTg0NA==,35968931,2019-09-16T18:43:52Z,2019-09-16T18:43:52Z,MEMBER,"This is big if true! But surely to close an issue raised by complaints about speed, we should really have some new asv speed tests?","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,288184220
https://github.com/pydata/xarray/issues/1823#issuecomment-489027263,https://api.github.com/repos/pydata/xarray/issues/1823,489027263,MDEyOklzc3VlQ29tbWVudDQ4OTAyNzI2Mw==,35968931,2019-05-03T09:25:00Z,2019-05-03T09:25:00Z,MEMBER,"@dcherian I'm sorry, I'm very interested in this but after reading the issues I'm still not clear on what's being proposed:

What exactly is the bottleneck? Is it reading the coords from all the files? Is it loading the coord values into memory? Is it performing the alignment checks on those coords once they're in memory? Is it performing alignment checks on the dimensions? Is this suggestion relevant to datasets that don't have any coords?

Which of these steps would a `join='exact'` option omit?

> A related optimization would be to allow the user to pass coords='minimal' (or other concat coords options) via open_mfdataset.

But this is already an option to `open_mfdataset`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,288184220