issues: 163267018
field | value
---|---
id | 163267018
node_id | MDU6SXNzdWUxNjMyNjcwMTg=
number | 893
title | 'Warm start' for open_mfdataset?
user | 743508
state | closed
locked | 0
assignee | 
milestone | 
comments | 3
created_at | 2016-06-30T21:05:46Z
updated_at | 2023-05-29T13:35:32Z
closed_at | 2023-05-29T13:35:32Z
author_association | CONTRIBUTOR
active_lock_reason | 
draft | 
pull_request | 
body | I'm using xarray in IPython to do interactive/exploratory analysis on large multi-file datasets. To avoid having too many files open, I'm wrapping my file-open code in a `with` block. It would be good to have some kind of 'warm start' or caching mechanism to make it easier to re-open multi-file datasets without having to re-scan the input files, but equally without having to keep the dataset open, which keeps all the file handles open (I've hit the OS max file limit because of this). Not sure what API would suit this - while being a useful use case, it's also a bit weird. Something like ...
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app | 
state_reason | completed
repo | 13221727
type | issue
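For context, the workflow described in the issue body looks roughly like the sketch below. The file glob and variable names are hypothetical; the xarray calls themselves (`open_mfdataset`, its context-manager form, `.load()`) are standard API.

```python
import glob
import xarray as xr

# Hypothetical multi-file dataset; the real case spans many NetCDF files.
paths = sorted(glob.glob("data/model_output_*.nc"))

# Opening inside a `with` block closes every underlying file handle on exit,
# which avoids running into the OS limit on open files...
with xr.open_mfdataset(paths) as ds:
    climatology = ds["temperature"].mean(dim="time").load()

# ...but each subsequent open_mfdataset call has to re-scan all the input
# files from scratch; there is no cached 'warm start' to reuse between opens.
with xr.open_mfdataset(paths) as ds:
    timeseries = ds["temperature"].mean(dim=("lat", "lon")).load()
```

As a side note, newer xarray releases manage open file handles through an internal LRU cache (sized via `xarray.set_options(file_cache_maxsize=...)`), which addresses the "too many open files" half of this request; the cost of re-scanning the inputs on every `open_mfdataset` call is a separate concern.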