issues: 252541496
field | value
---|---
id | 252541496
node_id | MDU6SXNzdWUyNTI1NDE0OTY=
number | 1521
title | open_mfdataset reads coords from disk multiple times
user | 6213168
state | closed
locked | 0
assignee |
milestone |
comments | 14
created_at | 2017-08-24T09:29:57Z
updated_at | 2017-10-09T21:15:31Z
closed_at | 2017-10-09T21:15:31Z
author_association | MEMBER
active_lock_reason |
draft |
pull_request |
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/1521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app |
state_reason | completed
repo | 13221727
type | issue

body:

I have 200 copies of the dataset below, split along the 'scenario' axis:
I individually dump them to disk with Dataset.to_netcdf(fname, engine='h5netcdf'). Then I try loading them back up with open_mfdataset, but it is painfully slow:

```
%%time
xarray.open_mfdataset('*.nc', engine='h5netcdf')
Wall time: 30.3 s
```

The problem is that the coords are read from disk multiple times. Workaround:
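The original workaround snippet is not reproduced in this dump. The following is a minimal sketch of the idea as described, assuming a hypothetical preprocess helper named `_load_coords` that eagerly pulls each file's coordinates into memory before open_mfdataset aligns the datasets:

```python
import xarray

def _load_coords(ds):
    # Hypothetical helper (not from the issue): force every coordinate
    # variable into memory so that the alignment done by open_mfdataset
    # works on in-memory values instead of re-reading them from disk.
    for coord in ds.coords.values():
        coord.load()
    return ds

combined = xarray.open_mfdataset('*.nc', engine='h5netcdf',
                                 preprocess=_load_coords)
```

With the coordinates already in memory, the repeated equality checks performed during concatenation become cheap in-memory comparisons rather than repeated disk reads.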
Proposed solutions:

1. Implement the above workaround directly inside open_mfdataset().
2. Change open_dataset() to always eagerly load the coords into memory, regardless of the chunks parameter. Is there any valid use case where lazy coords are actually desirable?

An additional, more radical observation is that, very frequently, a user knows in advance that all coords are aligned. In this case the user could explicitly ask xarray to trust that assumption and skip loading the coords that are not based on concat_dim in every dataset beyond the first (see the sketch below).
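For context, later xarray releases added keyword arguments to open_mfdataset that approximate this "trust the coords" behaviour. The call below is a sketch using those later options; it is not something proposed in this issue:

```python
import xarray

# Sketch only: these keyword arguments exist in later xarray releases.
# They take the indexes and non-varying coords from the first dataset
# instead of re-reading and comparing them in every file.
combined = xarray.open_mfdataset(
    '*.nc',
    engine='h5netcdf',
    combine='nested',
    concat_dim='scenario',
    coords='minimal',    # only concatenate coords that vary along concat_dim
    compat='override',   # skip equality checks between files
    join='override',     # skip index alignment; reuse the first file's indexes
)
```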
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | 13221727 | issue |