html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/1413#issuecomment-302881933,https://api.github.com/repos/pydata/xarray/issues/1413,302881933,MDEyOklzc3VlQ29tbWVudDMwMjg4MTkzMw==,1217238,2017-05-20T16:00:15Z,2017-07-13T21:20:10Z,MEMBER,Sounds good to me!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302804510,https://api.github.com/repos/pydata/xarray/issues/1413,302804510,MDEyOklzc3VlQ29tbWVudDMwMjgwNDUxMA==,1217238,2017-05-19T20:32:57Z,2017-05-19T20:32:57Z,MEMBER,"Well, we could potentially write a fast path constructor for loading multiple netcdf files that avoids open_dataset. We just need another way to specify the schema, e.g., using NCML.
On Fri, May 19, 2017 at 10:53 AM Ryan Abernathey wrote:
> As I think about this further, I realize it might be futile to avoid reading the dimensions from all the files. This is a basic part of how open_dataset works.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302711547,https://api.github.com/repos/pydata/xarray/issues/1413,302711547,MDEyOklzc3VlQ29tbWVudDMwMjcxMTU0Nw==,1217238,2017-05-19T14:04:05Z,2017-05-19T14:04:05Z,MEMBER,"> What is `xr.align(..., join='exact')` supposed to do?
It verifies that all dimensions have the same length and that the coordinates used for indexing along those dimensions also match. Unlike the normal version of `align`, it doesn't do any indexing -- the outputs are always the same as the inputs.
It *does not* check that the necessary dimensions and variables exist in all datasets. But we should do that as part of the logic in `concat` anyways, since the xarray data model always requires knowing variables and their dimensions.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101
https://github.com/pydata/xarray/pull/1413#issuecomment-302511481,https://api.github.com/repos/pydata/xarray/issues/1413,302511481,MDEyOklzc3VlQ29tbWVudDMwMjUxMTQ4MQ==,1217238,2017-05-18T19:04:18Z,2017-05-18T19:04:18Z,MEMBER,"This enhancement makes a lot of sense to me.
Two things worth considering:
1. Given a collection of datasets, how do I know if setting `prealigned=True` will work? This is where my PR adding `xr.align(..., join='exact')` could help (I can finish that up). Maybe it's worth adding `xr.is_aligned` or something similar.
2. What happens if things go wrong? It's okay if the behavior is undefined (or could give wrong results), but we should document that. Ideally, we should raise sensible errors at some later time, e.g., when the dask arrays are computed. This might or might not be possible to do efficiently with dask, if the results of all the equality checks are consolidated and added into the dask graphs of the results.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,229474101