issues: 94328498
| field | value |
|---|---|
| id | 94328498 |
| node_id | MDU6SXNzdWU5NDMyODQ5OA== |
| number | 463 |
| title | open_mfdataset too many files |
| user | 1197350 |
| state | closed |
| locked | 0 |
| assignee | |
| milestone | |
| comments | 47 |
| created_at | 2015-07-10T15:24:14Z |
| updated_at | 2017-11-27T12:17:17Z |
| closed_at | 2017-03-23T19:22:43Z |
| author_association | MEMBER |
| active_lock_reason | |
| draft | |
| pull_request | |
| body | I am very excited to try xray. On my first attempt, I tried to use open_mfdataset on a set of ~8000 netcdf files. I hit a "RuntimeError: Too many open files". The ulimit on my system is 1024, so clearly that is the source of the error. I am curious whether this is the desired behavior for open_mfdataset. Does xray have to keep all the files open? If so, I will work with my sysadmin to increase the ulimit. It seems like the whole point of this function is to work with large collections of files, so this could be a significant limitation. |
| reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| state_reason | completed |
| repo | 13221727 |
| type | issue |
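The "Too many open files" error described in the body is governed by the per-process file-descriptor limit (`ulimit -n`, reported as 1024 above). As a minimal sketch of the workaround the reporter mentions, a process can inspect and raise its own soft limit up to the hard limit using Python's standard `resource` module (POSIX only); the target of 4096 here is an arbitrary illustrative value, not anything from the issue.

```python
import resource

# Inspect the current per-process limits on open file descriptors.
# `soft` corresponds to the `ulimit -n` value the reporter saw (1024);
# `hard` is the ceiling a non-root process may raise the soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# A process may raise its own soft limit up to the hard limit without
# sysadmin help; exceeding the hard limit requires root. 4096 is an
# arbitrary example target, clamped so we never exceed the hard limit.
target = min(4096, hard)
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    print(f"raised soft limit to {target}")
```

Raising the limit only postpones the problem for very large collections; the discussion on this issue ultimately led xarray toward closing and reopening file handles on demand rather than holding every file open.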