html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/1020#issuecomment-832111396,https://api.github.com/repos/pydata/xarray/issues/1020,832111396,MDEyOklzc3VlQ29tbWVudDgzMjExMTM5Ng==,27021858,2021-05-04T17:24:15Z,2021-05-04T17:24:15Z,NONE,"@shoyer I am having a similar problem. I am reading 80 files totalling 8.3 GB, so each file is around 100 MB. If I understand you correctly, using `open_mfdataset` on such data is not recommended, and the best practice would be to loop over the files? PS: I still tried some dask-related operations, but each time I access `.values` or use `to_dataframe` the memory usage explodes. Thanks a lot for answering ;)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,180080354
https://github.com/pydata/xarray/issues/1020#issuecomment-250777441,https://api.github.com/repos/pydata/xarray/issues/1020,250777441,MDEyOklzc3VlQ29tbWVudDI1MDc3NzQ0MQ==,1961038,2016-09-30T15:38:01Z,2016-09-30T15:38:01Z,NONE,"Good to know. Since the system I'm running on has 96 GB of RAM, I think your statement about pandas is correct too, as I also get the memory error when running on a smaller (18 GB) dataset.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,180080354
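
The first comment asks whether looping over the files is preferable to `open_mfdataset` when materialising results with `.values` or `to_dataframe` blows up memory. A minimal sketch of that per-file loop, assuming a hypothetical file pattern (`data/*.nc`) and variable name (`t2m`) not present in the thread, and a per-file reduction as the workload:

```python
# Sketch of the "loop over the files" approach discussed above.
# Paths ("data/*.nc"), the variable name ("t2m"), and the reduction
# (mean over time) are assumptions for illustration only.
import glob

import xarray as xr

results = []
for path in sorted(glob.glob("data/*.nc")):  # hypothetical file pattern
    with xr.open_dataset(path) as ds:
        # Reduce before loading, so only the small per-file result
        # (not the whole ~100 MB file, let alone all 8.3 GB) stays in memory.
        results.append(ds["t2m"].mean(dim="time").load())

# Stack the per-file results along a new dimension.
combined = xr.concat(results, dim="file")
```

This keeps only one file's data resident at a time, which is the behaviour the commenter is after; whether it beats `open_mfdataset` with explicit `chunks=` depends on the workload.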