GitHub issue comments — https://github.com/pydata/xarray/issues/1086

---

https://github.com/pydata/xarray/issues/1086#issuecomment-661972749 (user 25382032, 2020-07-21T16:41:52Z):

Hi @darothen,

Thanks a lot. I hadn't thought of processing each file and then merging. Will give it a try.

Thanks,

---

https://github.com/pydata/xarray/issues/1086#issuecomment-661940009 (user 25382032, 2020-07-21T15:44:54Z):

Hi,

```
import xarray as xr
from pathlib import Path

dir_input = Path('.')
data_ww3 = xr.open_mfdataset(dir_input.glob('**/WW3_EUR-11_CCCma-CanESM2_r1i1p1_CLMcom-CCLM4-8-17_v1_6hr_*.nc'))
data_ww3 = data_ww3.isel(latitude=74, longitude=18)
df_ww3 = data_ww3[['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']].to_dataframe()
```

You can download one file here: https://nasgdfa.ugr.es:5001/d/f/566168344466602780 (3.5 GB). I ran a profiler while opening 2 .nc files and it showed that the to_dataframe() call was taking most of the time.

![src1](https://user-images.githubusercontent.com/25382032/88075274-db101600-cb78-11ea-8424-5d60a80b9bc4.png)

I'm just wondering if there's a way to reduce the computing time. I need to open 95 files and it takes about 1.5 hours.

Thanks,

---

https://github.com/pydata/xarray/issues/1086#issuecomment-661775197 (user 25382032, 2020-07-21T10:29:48Z):

I am running into the same problem. This might be a long shot, but @naught101, do you remember if you managed to convert to a dataframe in a more efficient way?

Thanks,
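The approach the first comment refers to — processing each file and then merging, instead of one `open_mfdataset` over all 95 files followed by a single big `to_dataframe()` — can be sketched roughly as below. This is only an illustration, not the commenter's actual code: `select_point` and `merge_point_frames` are hypothetical helper names, and the grid-point indices and variable list are copied from the question above.

```python
import pandas as pd
import xarray as xr

# Variable subset and grid-point indices from the question above.
VARS = ['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']

def select_point(ds, lat_idx=74, lon_idx=18):
    """Reduce one dataset to a single grid point and convert the
    (now tiny) result to a pandas DataFrame."""
    return ds[VARS].isel(latitude=lat_idx, longitude=lon_idx).to_dataframe()

def merge_point_frames(paths):
    """Open each file on its own, reduce it first, then concatenate
    the small per-file DataFrames along the time index."""
    frames = []
    for path in sorted(paths):
        with xr.open_dataset(path) as ds:
            frames.append(select_point(ds))
    return pd.concat(frames).sort_index()
```

The point of the per-file version is that each `to_dataframe()` call only ever touches one grid point of one file, so the expensive conversion stays small, and the merge happens on already-reduced pandas objects.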