issues: 672262818
| field | value |
| --- | --- |
| id | 672262818 |
| node_id | MDU6SXNzdWU2NzIyNjI4MTg= |
| number | 4304 |
| title | netCDF4 Resource unavailable error |
| user | 45180714 |
| state | closed |
| locked | 0 |
| comments | 2 |
| created_at | 2020-08-03T18:28:28Z |
| updated_at | 2022-04-17T18:03:31Z |
| closed_at | 2022-04-17T18:03:31Z |
| author_association | NONE |
| state_reason | completed |
| repo | 13221727 |
| type | issue |

body:

> I've been encountering the following error occasionally when running some rather heavy scripts. Sometimes the code runs fine (with the same data) and other times it exits with this traceback:
>
> I'm not sure if it matters, but this only seems to be a problem when I'm using multiprocessing, when a number of processes may be parsing different datasets at the same time; other runs complete as expected with no errors. I'm not sure whether this error is caused by bad data or by limited resources, and that would inform how I try to catch errors and retry. I'm hoping someone here can point me in the right direction.

reactions: `{ "url": "https://api.github.com/repos/pydata/xarray/issues/4304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }`
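The body asks how one might catch this intermittent error and retry. A minimal sketch of that idea, assuming the failure surfaces in Python as an `OSError` with `errno.EAGAIN` ("Resource temporarily unavailable") — the helper name `retry_on_eagain` and the retry/back-off parameters are made up for illustration, not from the issue:

```python
import errno
import time


def retry_on_eagain(func, *args, retries=3, delay=0.5, **kwargs):
    """Call func(*args, **kwargs), retrying when the OS reports EAGAIN.

    Any other exception, or EAGAIN on the final attempt, is re-raised,
    so genuine failures (e.g. bad data) are not silently swallowed.
    """
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except OSError as exc:
            if exc.errno != errno.EAGAIN or attempt == retries - 1:
                raise
            # Back off a little longer on each retry before trying again.
            time.sleep(delay * (attempt + 1))


# In a multiprocessing worker one could wrap the dataset open, e.g.
# (hypothetical usage, assuming xarray is imported as xr):
#     ds = retry_on_eagain(xr.open_dataset, path)
```

This only papers over transient resource contention between worker processes; if the error persists across all retries it is re-raised, which helps distinguish a genuinely corrupt file from a momentary resource shortage.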