Comments on pydata/xarray issue #4241 (https://github.com/pydata/xarray/issues/4241), newest first:
**user 41797673, 2020-07-22T16:45:42Z** (https://github.com/pydata/xarray/issues/4241#issuecomment-662563406):

> This is a fundamental problem that is rather hard to solve without creating a copy of the data.
>
> We just released the [rechunker](https://rechunker.readthedocs.io/en/latest/) package, which makes it easy to create a copy of your data with a different chunking scheme (e.g., contiguous in time, chunked in space). If you have enough disk space to store a copy, this might be a good solution.

Thanks for confirming and pointing me to rechunker, that looks nice.
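
For reference, a rechunking pass along the lines suggested above might look like the sketch below. The store paths, array shape, and chunk sizes are illustrative assumptions, not details from this thread; only `rechunker.rechunk` and its `source`, `target_chunks`, `max_mem`, `target_store`, and `temp_store` parameters come from the package itself.

```python
# Sketch: copy a time-chunked Zarr array into a layout that is contiguous
# in time and chunked in space. Paths, names, and sizes are hypothetical.
import zarr
from rechunker import rechunk

source = zarr.open("source.zarr")  # hypothetical array of shape (time, y, x)

plan = rechunk(
    source,
    target_chunks=(source.shape[0], 100, 100),  # one chunk spanning all of time
    max_mem="1GB",                              # per-worker memory budget
    target_store="target.zarr",
    temp_store="temp.zarr",                     # intermediate store for the copy
)
plan.execute()  # runs the copy as a dask computation
```

The temporary store is part of rechunker's design: it writes in two passes so that peak memory stays within `max_mem` even when source and target chunking schemes are very different.
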
**user 41797673, 2020-07-22T15:09:24Z** (https://github.com/pydata/xarray/issues/4241#issuecomment-662509778):

Thanks for your answer. Yes, I looked at `apply_ufunc` and `map_blocks`, and I cannot use them here. The reason is that my function must be applied along the time dimension (e.g., a rolling median in time), but my data is chunked along the time dimension. I could of course rechunk the data (create a copy with no chunks along the time dimension), but I would like to know whether that can be avoided.
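
To make the constraint concrete, here is a sketch with synthetic data: `apply_ufunc` with `dask="parallelized"` requires that the core dimension (`time` here) not be split across chunks, so the rolling median only runs after rechunking `time` into a single chunk, i.e. the copy this comment hopes to avoid. The array shape, the chunk sizes, and the use of `scipy.ndimage.median_filter` as a stand-in for the rolling median are all illustrative assumptions.

```python
import numpy as np
import xarray as xr
from scipy.ndimage import median_filter

# Synthetic data chunked along time, as described in the comment.
data = xr.DataArray(
    np.random.rand(365, 100, 100),
    dims=("time", "y", "x"),
).chunk({"time": 73})

# Rechunk so "time" is one chunk per block (the copy the comment would
# like to avoid), chunking smaller in space to compensate.
contiguous = data.chunk({"time": -1, "y": 25, "x": 25})

def rolling_median(arr, size=30):
    # Rolling median along the last axis; apply_ufunc moves the core
    # dimension ("time") to the end before calling this function.
    return median_filter(arr, size=(1, 1, size), mode="nearest")

# With the time-chunked `data` this raises an error about the core
# dimension consisting of multiple chunks; with `contiguous` it works.
result = xr.apply_ufunc(
    rolling_median,
    contiguous,
    input_core_dims=[["time"]],
    output_core_dims=[["time"]],
    dask="parallelized",
    output_dtypes=[data.dtype],
)
```
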