html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3332#issuecomment-779892262,https://api.github.com/repos/pydata/xarray/issues/3332,779892262,MDEyOklzc3VlQ29tbWVudDc3OTg5MjI2Mg==,2448579,2021-02-16T14:59:11Z,2021-02-16T14:59:11Z,MEMBER,"It pads first and then takes the view, so a copy of the *original* array is made, not of the strided array.
https://github.com/pydata/xarray/blob/735a3590ea4df70e1e5be729162df2f8774b3879/xarray/core/nputils.py#L149-L151","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,496809167
https://github.com/pydata/xarray/issues/3332#issuecomment-705098106,https://api.github.com/repos/pydata/xarray/issues/3332,705098106,MDEyOklzc3VlQ29tbWVudDcwNTA5ODEwNg==,1217238,2020-10-07T17:54:32Z,2020-10-07T17:54:32Z,MEMBER,"The loop via slicing is not a terrible option. The trick that `construct()` uses with views only really makes sense with NumPy arrays, not with dask.
There are also true streaming moving window algorithms that work very well for computing various statistics (e.g., mean and variance). These are implemented in bottleneck (e.g., `move_mean`) and could be wrapped in xarray if desired for methods like `rolling(...).mean()`. These aren't implemented in dask yet, though.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,496809167
https://github.com/pydata/xarray/issues/3332#issuecomment-705068971,https://api.github.com/repos/pydata/xarray/issues/3332,705068971,MDEyOklzc3VlQ29tbWVudDcwNTA2ODk3MQ==,29147682,2020-10-07T17:00:35Z,2020-10-07T17:00:35Z,NONE,"Is there any way to get around this? The window dimension combined with the `For window size x, every chunk should be larger than x//2` requirement means that for a large moving window I'm getting O(100GB) chunks that do not fit in memory at compute time. I can, of course, rechunk other dimensions, but that is expensive and substantially slower. I also suspect this becomes practically infeasible on machines that have little memory. Regardless, mandatory O(n^2) memory usage with window size seems less than ideal.
My workaround has been to just implement my own slicing via a for loop and then call reduction operations on the resulting dask arrays as normal... Perhaps there is something I missed along the way, but I couldn't find anything in open or past issues that resolves this. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,496809167
https://github.com/pydata/xarray/issues/3332#issuecomment-534709955,https://api.github.com/repos/pydata/xarray/issues/3332,534709955,MDEyOklzc3VlQ29tbWVudDUzNDcwOTk1NQ==,1217238,2019-09-24T19:21:22Z,2019-09-24T19:21:22Z,MEMBER,"It uses a view for allocating the initial result, but I think applying boundary conditions means that we end up doing a copy.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,496809167
https://github.com/pydata/xarray/issues/3332#issuecomment-533908429,https://api.github.com/repos/pydata/xarray/issues/3332,533908429,MDEyOklzc3VlQ29tbWVudDUzMzkwODQyOQ==,2448579,2019-09-22T19:02:07Z,2019-09-22T19:02:07Z,MEMBER,It should be returning a view.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,496809167