html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7697#issuecomment-1489341690,https://api.github.com/repos/pydata/xarray/issues/7697,1489341690,IC_kwDOAMm_X85YxYz6,2448579,2023-03-29T21:20:59Z,2023-03-29T21:20:59Z,MEMBER,"> I thought the compat='override' option bypassed most of the consistency checking.
We still construct a dataset representation for each file, which involves reading all coordinates etc. The consistency checking is only bypassed at the ""concatenation"" stage.
You could also speed this up using dask by setting up a cluster and passing `parallel=True` (see the sketch below).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1646267547
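A minimal sketch of the `parallel=True` suggestion above. The glob pattern, concatenation dimension, and cluster setup are placeholders, not anything specified in the thread:

```python
# Hedged sketch: open many files with dask so each file's metadata is read in a
# separate task, while skipping consistency checks at the concatenation stage.
import xarray as xr
from dask.distributed import Client

client = Client()  # local cluster; point this at a real cluster for larger jobs

ds = xr.open_mfdataset(
    "data/*.nc",          # placeholder glob of NetCDF files
    combine="nested",
    concat_dim="time",    # placeholder concatenation dimension
    coords="minimal",
    data_vars="minimal",
    compat="override",    # bypass checks at the concatenation stage
    parallel=True,        # open each file in a separate dask task
)
```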
https://github.com/pydata/xarray/issues/7697#issuecomment-1489302292,https://api.github.com/repos/pydata/xarray/issues/7697,1489302292,IC_kwDOAMm_X85YxPMU,2448579,2023-03-29T20:53:37Z,2023-03-29T20:53:37Z,MEMBER,"Fundamentally, xarray has to touch every file because there is no guarantee they are consistent with each other.
A number of us now use [kerchunk](https://fsspec.github.io/kerchunk/) to create virtual aggregate datasets that can be read a lot faster.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1646267547
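A minimal sketch of the kerchunk approach, assuming local NetCDF4/HDF5 files and a `time` concatenation dimension (both placeholders). The reference set only has to be built once; subsequent opens read a single small set of references instead of touching every file:

```python
# Hedged sketch: build kerchunk references per file, combine them into one
# virtual dataset, and open it through the zarr engine. Paths are placeholders.
import glob

import fsspec
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr
from kerchunk.combine import MultiZarrToZarr

paths = sorted(glob.glob("data/*.nc"))  # placeholder file list

# One reference dict per file (reads metadata only, not the data itself)
singles = []
for p in paths:
    with fsspec.open(p) as f:
        singles.append(SingleHdf5ToZarr(f, p).translate())

# Merge the per-file references along time; depending on how time is encoded,
# a coo_map argument may also be needed here.
combined = MultiZarrToZarr(singles, concat_dims=["time"]).translate()

# Open the virtual aggregate dataset lazily
fs = fsspec.filesystem("reference", fo=combined)
ds = xr.open_dataset(
    fs.get_mapper(""), engine="zarr", backend_kwargs={"consolidated": False}
)
```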