issue_comments: 504058279
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2940#issuecomment-504058279 | https://api.github.com/repos/pydata/xarray/issues/2940 | 504058279 | MDEyOklzc3VlQ29tbWVudDUwNDA1ODI3OQ== | 1217238 | 2019-06-20T14:55:04Z | 2019-06-20T14:55:04Z | MEMBER | Looking into this in more detail, the fix looks somewhat non-trivial. For example, we are definitely not padding by enough inside. I'm not sure whether this was ever tested properly, at least for dask arrays with multiple chunks. My guess is that this previously worked by consolidating dask arrays into a single chunk, which would simply fail for arrays too large to fit into memory. For now, I think it would be safest to issue a new release that simply disables rolling() methods on dask arrays (by raising an error) and to save a full fix for later. I am concerned about letting the current behavior stick around, since it silently calculates the wrong result. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 440233667 |