html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/1413#issuecomment-315354054,https://api.github.com/repos/pydata/xarray/issues/1413,315354054,MDEyOklzc3VlQ29tbWVudDMxNTM1NDA1NA==,1197350,2017-07-14T13:01:45Z,2017-07-14T13:02:20Z,MEMBER,"Yes, I think it should be closed. There are better ways to accomplish the desired goals.
Specifically, allowing the user to pass kwargs to concat via open_mfdataset would be useful.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302843502,https://api.github.com/repos/pydata/xarray/issues/1413,302843502,MDEyOklzc3VlQ29tbWVudDMwMjg0MzUwMg==,1197350,2017-05-20T01:51:03Z,2017-05-20T01:51:03Z,MEMBER,"Since the expensive part (for me) is actually reading all the coordinates, I'm not sure that this PR makes sense any more.
The same thing I am going for here could probably be accomplished by allowing the user to pass `join='exact'` via `open_mfdataset`. A related optimization would be to allow the user to pass `coords='minimal'` (or other `concat` coords options) via `open_mfdataset`.
For really big datasets, I think we will want to go the NCML approach, generating the xarray metadata as a pre-processing step. Then we could add a function like `open_ncml_dataset` to xarray which would parse this metadata and construct the dataset in a more efficient way (i.e. not reading redundant coordinates).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302724756,https://api.github.com/repos/pydata/xarray/issues/1413,302724756,MDEyOklzc3VlQ29tbWVudDMwMjcyNDc1Ng==,1197350,2017-05-19T14:53:49Z,2017-05-19T14:53:49Z,MEMBER,"As I think about this further, I realize it might be futile to avoid reading the dimensions from all the files. This is a basic part of how `open_dataset` works.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302576832,https://api.github.com/repos/pydata/xarray/issues/1413,302576832,MDEyOklzc3VlQ29tbWVudDMwMjU3NjgzMg==,1197350,2017-05-19T00:30:13Z,2017-05-19T00:30:28Z,MEMBER,"> Given a collection of datasets, how do I know if setting prealigned=True will work?
I guess we would want to check that (a) the necessary variables and dimensions exist in all datasets and (b) the dimensions have the same _length_. We would want to bypass the actual reading of the indices. I agree it would be nicer to subsume this logic into `align`.
What is `xr.align(..., join='exact')` supposed to do?
> What happens if things go wrong?
I can add more careful checks once we sort out the align question.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302496987,https://api.github.com/repos/pydata/xarray/issues/1413,302496987,MDEyOklzc3VlQ29tbWVudDMwMjQ5Njk4Nw==,1197350,2017-05-18T18:14:56Z,2017-05-18T18:15:34Z,MEMBER,"Let me expand on what this does.
Many netCDF datasets consist of multiple files with identical coordinates, except for one (e.g. time). With xarray we can open these datasets with `open_mfdataset`, which calls `concat` on the list of individual dataset objects. `concat` calls `align`, which loads all of the dimension indices (and, optionally, non-dimension coordinates) from each file and checks them for consistency / alignment.
This `align` step is potentially quite expensive for big collections of files with large indices. For example, an unstructured grid or particle-based dataset would just have a single dimension coordinate, with the same length as the data variables. If the user knows that the datasets are already aligned, this PR enables the alignment step to be skipped by passing the argument `prealigned=True` to `concat`. My goal is to avoid touching the disk as much as possible.
This PR is a draft in progress. I still need to propagate the `prealigned` argument up to `auto_combine` and `open_mfdataset`.
An alternative API would be to add another option to the `coords` keyword, i.e. `coords='prealigned'`.
Feedback welcome. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101