pull_requests: 569059113
id: 569059113
node_id: MDExOlB1bGxSZXF1ZXN0NTY5MDU5MTEz
number: 4879
state: closed
locked: 0
title: Cache files for different CachingFileManager objects separately
user: 1217238
body:
  This means that explicitly opening a file multiple times with ``open_dataset`` (e.g., after modifying it on disk) now reopens the file from scratch, rather than reusing a cached version. If users want to reuse the cached file, they can reuse the same xarray object.

  We don't need this for handling many files in Dask (the original motivation for caching), because in those cases only a single CachingFileManager is created.

  I think this should fix some long-standing usability issues: #4240, #4862

  Conveniently, this also obviates the need for some messy reference-counting logic.

  - [x] Closes #4240, #4862
  - [x] Tests added
  - [x] Passes `pre-commit run --all-files`
  - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
created_at: 2021-02-07T21:48:06Z
updated_at: 2022-10-18T16:40:41Z
closed_at: 2022-10-18T16:40:40Z
merged_at: 2022-10-18T16:40:40Z
merge_commit_sha: 268753696f5886615adf5edd8024e80e40c9d4ea
assignee:
milestone:
draft: 0
head: a5bf6211fb2143844603a0a98c0cd8fa7b648159
base: 15c68366b8ba8fd678d675df5688cf861d1c7235
author_association: MEMBER
auto_merge:
repo: 13221727
url: https://github.com/pydata/xarray/pull/4879
merged_by:
Links from other tables
- 3 rows from pull_requests_id in labels_pull_requests