html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2209#issuecomment-423251446,https://api.github.com/repos/pydata/xarray/issues/2209,423251446,MDEyOklzc3VlQ29tbWVudDQyMzI1MTQ0Ng==,10050469,2018-09-20T16:39:58Z,2018-09-20T16:39:58Z,MEMBER,Closed via https://github.com/rtfd/readthedocs.org/issues/4432,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,328572578
https://github.com/pydata/xarray/issues/2209#issuecomment-407678876,https://api.github.com/repos/pydata/xarray/issues/2209,407678876,MDEyOklzc3VlQ29tbWVudDQwNzY3ODg3Ng==,810663,2018-07-25T08:37:53Z,2018-07-25T08:37:53Z,NONE,"> Pinging @pelson, who has some ideas in mind on how to address this problem.

The ideas relate to the fetching of the index, which will take orders of magnitude less time than the resolve and download stages in ``conda``. They aren't entirely unrelated though, as a smaller index (the proposal) would result in fewer options for the conda solver to have to work through.

No matter what we do, caching the binaries will have the same impact, though it is a challenge to cache sensibly without having a *really* large cache... You may find that caching an environment.yaml actually has more of an impact than caching the binaries themselves (i.e. this means you continue to download the binaries each time, but don't do a conda resolve each time).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,328572578
https://github.com/pydata/xarray/issues/2209#issuecomment-407547882,https://api.github.com/repos/pydata/xarray/issues/2209,407547882,MDEyOklzc3VlQ29tbWVudDQwNzU0Nzg4Mg==,950575,2018-07-24T20:51:50Z,2018-07-24T20:51:50Z,CONTRIBUTOR,"> Notice that it took 411 seconds to run `conda env create`!

If you are using `conda-forge`, bear in mind that our package index is huge and `conda` is not very smart about it. We are looking into possible solutions. Pinging @pelson, who has some ideas in mind on how to address this problem.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,328572578
https://github.com/pydata/xarray/issues/2209#issuecomment-393933718,https://api.github.com/repos/pydata/xarray/issues/2209,393933718,MDEyOklzc3VlQ29tbWVudDM5MzkzMzcxOA==,1217238,2018-06-01T16:24:43Z,2018-06-01T16:24:43Z,MEMBER,"We see the same issue with our builds on Travis-CI. Here's our latest doc build on Travis: https://travis-ci.org/pydata/xarray/jobs/386509884

Notice that it took 411 seconds to run `conda env create`!

I'm not quite sure what the underlying issue is here (e.g., download vs install time), but I'm sure it's somehow related to our large number of dependencies. If download time is the issue, then perhaps caching downloaded conda packages would help, e.g., https://github.com/rtfd/readthedocs.org/issues/3261","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,328572578