html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1823#issuecomment-372862174,https://api.github.com/repos/pydata/xarray/issues/1823,372862174,MDEyOklzc3VlQ29tbWVudDM3Mjg2MjE3NA==,2443309,2018-03-14T00:13:34Z,2018-03-14T00:13:34Z,MEMBER,"@jbusecke - No. These options are not mutually exclusive. The parallel open is, in my opinion, the lowest hanging fruit so that's why I started there. There are other improvements that we can tackle incrementally. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,288184220
https://github.com/pydata/xarray/issues/1823#issuecomment-357336022,https://api.github.com/repos/pydata/xarray/issues/1823,357336022,MDEyOklzc3VlQ29tbWVudDM1NzMzNjAyMg==,2443309,2018-01-12T19:46:12Z,2018-01-12T19:46:12Z,MEMBER,"@rabernat - Depending on the structure of the dataset, another possibility that would speed up some `open_mfdataset` tasks substantially is to implement the step of opening each file and getting its metadata in some parallel way (dask/joblib/etc.) and either returning just the dataset schema or a picklable version of the dataset itself. I think this will only be able to work with `autoclose=True` but it could be quite useful when working with many files. ","{""total_count"": 3, ""+1"": 3, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,288184220
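The idea in the comment above (fan the per-file open out across workers and return only a small, picklable "schema" instead of the live dataset handle) can be sketched as follows. This is a minimal illustration, not xarray's actual implementation: `open_one` and `parallel_open` are hypothetical names, and `open_one` stubs out what would really be something like `xarray.open_dataset(path)` followed by extracting the metadata.

```python
from concurrent.futures import ThreadPoolExecutor

def open_one(path):
    # Stand-in for opening one file and extracting its metadata.
    # Returns a picklable summary ("schema") rather than a dataset object,
    # so results can cross process/thread boundaries cheaply.
    return {"path": path, "dims": {"time": 12}}

def parallel_open(paths, max_workers=4):
    # Open every file concurrently; pool.map preserves input order,
    # so the schemas line up with the original path list.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(open_one, paths))

schemas = parallel_open([f"file_{i}.nc" for i in range(8)])
print(len(schemas))  # 8
```

The same fan-out could be expressed with `dask.delayed` or joblib as the comment suggests; the key point is that only the cheap schema objects are gathered, which is why the comment notes this pairs naturally with `autoclose=True` (the underlying file handles need not stay open).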