html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/6924#issuecomment-1219672919,https://api.github.com/repos/pydata/xarray/issues/6924,1219672919,IC_kwDOAMm_X85IsrtX,64621312,2022-08-18T16:05:02Z,2022-08-18T16:05:02Z,NONE,"I cross-posted this as a [dask issue](https://github.com/dask/dask/issues/9396) and on [stack overflow](https://stackoverflow.com/questions/73394589/using-xarray-to-convert-a-zarr-file-to-a-netcdf-causing-memory-allocation-error). I learned that ""dask will often have as many chunks in memory as twice the number of active threads"" ([best practices with dask arrays](https://docs.dask.org/en/stable/array-best-practices.html#best-practices)). Adding `dask.config.set(scheduler='synchronous')`, which forces single-threaded computation ([dask scheduling](https://docs.dask.org/en/stable/scheduling.html#single-thread)), produced the behavior I expected: memory usage fluctuations roughly the magnitude of the chunk size.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1340994913