issues: 595882590
| field | value |
|---|---|
| id | 595882590 |
| node_id | MDU6SXNzdWU1OTU4ODI1OTA= |
| number | 3948 |
| title | Releasing memory? |
| user | 3958036 |
| state | closed |
| locked | 0 |
| comments | 6 |
| created_at | 2020-04-07T13:49:07Z |
| updated_at | 2020-04-07T14:18:36Z |
| closed_at | 2020-04-07T14:18:36Z |
| author_association | CONTRIBUTOR |
| reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/3948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| state_reason | completed |
| repo | 13221727 |
| type | issue |

body:

Once […] For example, what would be the best workflow for this case: I have several large arrays on disk. Each will fit into memory individually. I want to do some analysis on each array (which produces small results) and keep the results in memory, but I do not need the large arrays any more after the analysis. I'm wondering if some sort of `release()` method would help:

```
da2 = ds["variable2"]
result2 = do_some_work(da2)  # may load large parts of da2 into memory
da2.release()  # any changes to da2 not already saved to disk are lost, but do not want da1 any more
... etc.
```
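The `release()` call above is the contributor's proposed API, not something xarray provides. As a minimal sketch, one way to get a similar effect with xarray's existing public API is to scope each dataset to a `with` block, so the file handle is closed and the loaded array can be garbage-collected once the small analysis result has been extracted. The file path `data.nc`, the variable names, and `do_some_work` are all hypothetical placeholders, not from the issue:

```
import xarray as xr

def do_some_work(da):
    # placeholder analysis: reduce a large array to a small scalar result
    return float(da.mean())

results = {}
for name in ["variable1", "variable2"]:  # hypothetical variable names
    # Open the file per variable so everything loaded in this iteration
    # goes out of scope before the next one starts.
    with xr.open_dataset("data.nc") as ds:  # hypothetical path
        results[name] = do_some_work(ds[name])
    # Leaving the `with` block closes the dataset; once no references
    # to the loaded DataArray remain, Python can reclaim the memory.
```

Another common route, since the large arrays never need to be fully resident, is to pass `chunks=...` to `xr.open_dataset` so the analysis runs lazily through dask and peak memory stays bounded by the chunk size.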