html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4406#issuecomment-988359778,https://api.github.com/repos/pydata/xarray/issues/4406,988359778,IC_kwDOAMm_X8466Sxi,13684161,2021-12-08T00:05:24Z,2021-12-08T00:06:22Z,NONE,"I am having a similar issue. I'm using the latest versions of dask, xarray, distributed, fsspec, and gcsfs, with the h5netcdf backend because it is the only one that works with fsspec's binary streams when reading from the cloud.
My workflow consists of (a rough sketch follows the list):
1. Start a dask client with 1 process per CPU and 2 threads each, because reading from the cloud doesn't scale up with threads alone.
2. Open 12x monthly climate data files (hourly sampled) using xarray.open_mfdataset
3. Use reasonable dask chunks in the open call
4. Take the monthly average across the time axis and write it to a local NetCDF file.
5. Repeat steps 2-4 for different years.
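Roughly what this looks like in code (a minimal sketch; the bucket paths, chunk sizes, worker counts, and years are illustrative placeholders, not my exact values):

```python
import os

import fsspec
import xarray as xr
from dask.distributed import Client

if __name__ == "__main__":
    # 1. One process per CPU, 2 threads each.
    client = Client(n_workers=os.cpu_count(), threads_per_worker=2, processes=True)

    # 5. Repeat steps 2-4 for different years.
    for year in (2018, 2019, 2020):
        # 2. Twelve monthly files of hourly data, opened as binary streams via
        #    fsspec/gcsfs (h5netcdf is the only backend that accepts these).
        urls = [f"gs://my-bucket/climate/{year}-{m:02d}.nc" for m in range(1, 13)]
        streams = [fsspec.open(url, mode="rb").open() for url in urls]

        # 2./3. open_mfdataset with explicit dask chunks.
        ds = xr.open_mfdataset(
            streams,
            engine="h5netcdf",
            combine="by_coords",
            chunks={"time": 24 * 31},  # roughly one month of hourly steps
            # lock=False,              # one of the variations listed under ""Things I tried""
        )

        # 4. Monthly average across the time axis, written to a local NetCDF file.
        ds.resample(time="1MS").mean().to_netcdf(f"monthly_mean_{year}.nc")
        ds.close()
```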
It is hit or miss: sometimes it hangs towards the middle or end of a year, and the next time I run it, it doesn't.
Once it hangs and I hit stop, the traceback shows it stuck awaiting a threading lock.
Any ideas how to avoid this?
Things I tried:
1. Use processes only, 1 thread per worker
2. lock=True, lock=False on open_mfdataset
3. The dask multiprocessing start method set to spawn and to forkserver
4. Different (but recent) versions of all the libraries","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,694112301