Comment on pydata/xarray#5376 by user 3805136 (author_association: NONE), 2021-05-28T17:04:27Z
https://github.com/pydata/xarray/issues/5376#issuecomment-850552195

> Are there cases in practice where on-demand downsampling computation would be preferred over pre-computing and storing all pyramid levels for the full dataset? I admit it's probably a very naive question since most workflows on the client side would likely start by loading the top level (lowest resolution) dataset at full extent, which would require pre-computing the whole thing?

I'm not sure when dynamic downsampling would be preferred over loading previously downsampled images from disk. In my usage, the application consuming the multiresolution images is an interactive data visualization tool, and the goal is to minimize latency and maximize the responsiveness of the visualization. That would be difficult if the multiresolution images were generated dynamically from the full image: under a dynamic scheme, the lowest-resolution image, i.e. the one that should be _fastest_ to load, would instead require the most I/O and compute to generate.

> Are there cases where it makes sense to pre-compute all the pyramid levels in-memory (could be, e.g., chunked dask arrays persisted on a distributed cluster) without the need to store them?

Although I do not do this today, I can think of a lot of uses for this functionality: a data processing pipeline could expose intermediate data over HTTP via xpublish, but this would require a good caching layer to prevent re-computing the same region of the data repeatedly.
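To make the pre-computed approach concrete, here is a minimal sketch, assuming a 2-D `xarray.DataArray` named `da` with spatial dimensions `x` and `y`, a data variable name `image`, and a hypothetical output store `pyramid.zarr`. It builds each level by 2x2 block averaging with xarray's `coarsen` and writes every level to its own Zarr group:

```python
import xarray as xr

def build_pyramid(da: xr.DataArray, levels: int, store: str) -> None:
    """Pre-compute a multiresolution pyramid and persist every level.

    Level 0 is the full-resolution image; each subsequent level halves
    the resolution by averaging 2x2 blocks, so the highest level is the
    smallest and cheapest one to load.
    """
    current = da
    for level in range(levels):
        # Each level goes into its own group of the Zarr store.
        current.to_dataset(name="image").to_zarr(store, group=str(level), mode="a")
        # Downsample by 2 along each spatial dimension; boundary="trim"
        # drops a ragged final row/column when a dimension size is odd.
        current = current.coarsen(x=2, y=2, boundary="trim").mean()

# build_pyramid(da, levels=4, store="pyramid.zarr")
```

A viewer can then open the coarsest group first (e.g. `xr.open_zarr("pyramid.zarr", group="3")`) and fetch finer levels only for the region in view, which is what keeps the initial load fast.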
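For the second answer, a rough sketch of the xpublish idea follows (the store name is hypothetical; importing `xpublish` registers a `rest` accessor on datasets, and keyword arguments to `serve` are forwarded to the underlying uvicorn server):

```python
import xarray as xr
import xpublish  # noqa: F401 -- importing registers the .rest accessor

# Hypothetical intermediate dataset from a processing pipeline, kept as
# lazy, chunked dask arrays instead of being written out to disk.
ds = xr.open_zarr("pipeline-intermediate.zarr")

# Serve the dataset over HTTP; clients can then read from
# http://localhost:9000 with a zarr/fsspec-aware reader.
ds.rest.serve(host="0.0.0.0", port=9000)
```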
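The caching layer the comment calls for is not shown above; one simple in-process option is to memoize tile computation, sketched here with `functools.lru_cache`. The variable name `image` and the store are hypothetical, and strided indexing stands in for proper block averaging:

```python
from functools import lru_cache

import numpy as np
import xarray as xr

ds = xr.open_zarr("pipeline-intermediate.zarr")  # hypothetical store
TILE = 256  # tile edge length, in pixels

@lru_cache(maxsize=1024)
def get_tile(level: int, ix: int, iy: int) -> np.ndarray:
    """Return one downsampled tile, memoized so that repeated requests
    for the same region do not trigger recomputation."""
    step = 2 ** level
    # Strided positional indexing is a crude stand-in for 2x2 averaging.
    coarse = ds["image"][::step, ::step]
    return coarse[iy * TILE : (iy + 1) * TILE, ix * TILE : (ix + 1) * TILE].values
```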