issue_comments: 855822204
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/5202#issuecomment-855822204 | https://api.github.com/repos/pydata/xarray/issues/5202 | 855822204 | MDEyOklzc3VlQ29tbWVudDg1NTgyMjIwNA== | 7611856 | 2021-06-07T10:49:49Z | 2021-06-07T10:49:49Z | NONE | Besides the CPU requirements, IMHO, the memory consumption is even worse. Imagine you want to hold a 1000x1000x1000 int64 array. That is 1000^3 * 8 bytes ≈ 7.5 GiB and still fits into RAM on most machines. Assume float coordinates for all three axes; their combined memory consumption of 3000 * 8 bytes is negligible. Now if you stack that array, you end up with three additional ~7.5 GiB coordinate arrays, one full-length copy per stacked dimension. With higher dimensions the situation gets even worse. That said, while it should generally be possible to create the coordinates of the stacked array on the fly, I don't have a solution for it. Side note: I stumbled over this when combining xarray with pytorch, where I wanted to evaluate a model on a large Cartesian grid. For that I stacked the array and batched the stacked coordinates to feed them to pytorch, which makes the iteration over the Cartesian space really nice and smooth in code (see the sketches below the table). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 864249974 |
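
A minimal sketch of the memory blowup the comment describes, using a smaller 100x100x100 grid so it runs anywhere; scaling `n` to 1000 reproduces the ~7.5 GiB per-coordinate figures. The zero-filled data and `linspace` coordinates are placeholders, not anything from the original issue.

```python
# Sketch: stacking materializes one full-length coordinate array per
# stacked dimension, each as large as the data itself when accessed.
import numpy as np
import xarray as xr

n = 100  # per-axis size; the comment uses 1000, i.e. 1e9 elements
da = xr.DataArray(
    np.zeros((n, n, n), dtype=np.int64),
    coords={
        "x": np.linspace(0.0, 1.0, n),
        "y": np.linspace(0.0, 1.0, n),
        "z": np.linspace(0.0, 1.0, n),
    },
    dims=("x", "y", "z"),
)
print(f"data: {da.nbytes / 1e6:.1f} MB")  # n**3 * 8 bytes
print(f"coords before stack: {sum(c.nbytes for c in da.coords.values())} bytes")

stacked = da.stack(point=("x", "y", "z"))
# Materializing any stacked level coordinate yields a full-length
# array: one value per point, the same length as the data.
for name in ("x", "y", "z"):
    print(f"stacked coord {name}: {stacked[name].values.nbytes / 1e6:.1f} MB")
```

At `n = 1000` each of those three materialized coordinates is 8 GB (~7.5 GiB), which is the blowup the comment points out.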
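A hedged sketch of the pytorch workflow from the side note, showing how the stacked coordinates could be generated on the fly in batches instead of being materialized up front. `model`, `iter_batches`, and the batch size are illustrative stand-ins, not the commenter's actual code.

```python
# Sketch: iterate over a Cartesian grid in batches of (x, y, z) points
# without ever building the three full stacked coordinate arrays.
import numpy as np
import torch

n = 100
axes = [np.linspace(0.0, 1.0, n, dtype=np.float32) for _ in range(3)]

def iter_batches(batch_size=4096):
    """Yield (B, 3) coordinate tensors, generated on the fly."""
    total = n ** 3
    for start in range(0, total, batch_size):
        idx = np.arange(start, min(start + batch_size, total))
        # Unravel flat indices into per-axis indices (C order, which
        # matches how stack(point=("x", "y", "z")) orders the points).
        ix, iy, iz = np.unravel_index(idx, (n, n, n))
        coords = np.stack([axes[0][ix], axes[1][iy], axes[2][iz]], axis=1)
        yield torch.from_numpy(coords)

model = torch.nn.Linear(3, 1)  # stand-in for a real model
with torch.no_grad():
    out = torch.cat([model(batch) for batch in iter_batches()])
print(out.shape)  # (n**3, 1), ready to reshape back onto the grid
```

This keeps peak memory at one batch of coordinates plus the output, which is the on-the-fly construction the comment says should be possible in principle.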