html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2209#issuecomment-407678876,https://api.github.com/repos/pydata/xarray/issues/2209,407678876,MDEyOklzc3VlQ29tbWVudDQwNzY3ODg3Ng==,810663,2018-07-25T08:37:53Z,2018-07-25T08:37:53Z,NONE,"> Pinging @pelson who has some ideas in mind on how to address this problem.
The ideas relate to fetching the index, which takes orders of magnitude less time than the resolve and download stages in ``conda``. The two aren't entirely unrelated, though: a smaller index (the proposal) would give the conda solver fewer options to work through. No matter what we do, caching the binaries will have the same impact, though it is a challenge to cache them sensibly without ending up with a *really* large cache... You may find that caching an environment.yaml actually has more impact than caching the binaries themselves (i.e. you continue to download the binaries each time, but you don't run a conda resolve each time).","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,328572578