html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7274#issuecomment-1308489577,https://api.github.com/repos/pydata/xarray/issues/7274,1308489577,IC_kwDOAMm_X85N_fdp,89445148,2022-11-09T09:50:39Z,2022-11-09T09:50:39Z,NONE,"Thank you so much for your help. I've just started using xarray for large datasets. There was indeed a compression level.
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1441649908
https://github.com/pydata/xarray/issues/7274#issuecomment-1308462328,https://api.github.com/repos/pydata/xarray/issues/7274,1308462328,IC_kwDOAMm_X85N_Yz4,5821660,2022-11-09T09:27:27Z,2022-11-09T09:27:27Z,MEMBER,"That might be due to compression of the source file. Just have a look into `.encoding` to see if there is some mention of compression:
`dset_year.analysed_sst.encoding`
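For instance (a minimal sketch; the file name `sst_2020.nc` is a placeholder for one of your source files, and `analysed_sst` is taken from your example):

```python
import xarray as xr

# Open a single source file and inspect the on-disk encoding
# of the variable in question.
ds = xr.open_dataset('sst_2020.nc')
print(ds['analysed_sst'].encoding)
# For a compressed netCDF4 variable this typically contains
# entries like {'zlib': True, 'complevel': 4, 'chunksizes': (...), ...}
```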
No, there is nothing you can do about that. But as in your example, `open_mfdataset` wraps the data on disk using `dask`. That way the data isn't read into memory up front; it is read chunk by chunk as it is processed. In your case one chunk is 32 MB. Depending on your algorithms, `dask` will take care not to load the complete dataset into memory but to process it chunk-wise.
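To illustrate (a minimal sketch; the file pattern, chunk sizes, and the monthly mean are assumptions, not your actual workflow):

```python
import xarray as xr

# open_mfdataset wraps the files with dask; nothing is loaded yet.
dset_year = xr.open_mfdataset('sst_*.nc', chunks={'time': 31})

# Building the computation is still lazy.
monthly_mean = dset_year['analysed_sst'].resample(time='1M').mean()

# Only .compute() (or .plot(), .to_netcdf(), ...) actually streams
# the data through memory, one chunk at a time.
result = monthly_mean.compute()
```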
Please also have a look at xarray and dask here: https://docs.xarray.dev/en/stable/user-guide/dask.html.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1441649908