html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5567#issuecomment-874205134,https://api.github.com/repos/pydata/xarray/issues/5567,874205134,MDEyOklzc3VlQ29tbWVudDg3NDIwNTEzNA==,25382032,2021-07-05T15:48:50Z,2021-07-05T15:48:50Z,NONE,"Oh, I get it now. Thanks.
Indeed, it works now when chunking lat and lon from the start.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,935818279
https://github.com/pydata/xarray/issues/5567#issuecomment-873123273,https://api.github.com/repos/pydata/xarray/issues/5567,873123273,MDEyOklzc3VlQ29tbWVudDg3MzEyMzI3Mw==,25382032,2021-07-02T16:37:03Z,2021-07-02T16:37:03Z,NONE,"> > ds.chunk({'time': -1})
>
> I suspect this is making your entire dataset one big chunk. I would chunk along `lat` and `lon` in `open_mfdataset` first.

But if I do `ds.quantile(quantiles, dim='time')` and assign the result back to `ds`, wouldn't that release the big dataset from memory? (Sorry for the ignorance.) Thanks","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,935818279
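A minimal sketch of the chunking approach suggested in the quoted reply, for context: chunk the spatial dimensions when opening the files, then make `time` a single chunk so the quantile reduction can run one spatial block at a time. The file pattern, dimension names and chunk sizes below are illustrative assumptions, not values taken from the thread.
```
import xarray as xr

# Chunk along lat/lon when opening, so each block holds the full time series
# for only a small spatial tile instead of the whole domain.
ds = xr.open_mfdataset('data_*.nc', combine='by_coords',
                       chunks={'lat': 50, 'lon': 50})

# quantile() cannot reduce across chunks, so the reduced dimension (time)
# must sit in a single chunk within each block.
ds = ds.chunk({'time': -1})

quantiles = [0.05, 0.5, 0.95]
ds_q = ds.quantile(quantiles, dim='time').compute()
```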
https://github.com/pydata/xarray/issues/1086#issuecomment-661972749,https://api.github.com/repos/pydata/xarray/issues/1086,661972749,MDEyOklzc3VlQ29tbWVudDY2MTk3Mjc0OQ==,25382032,2020-07-21T16:41:52Z,2020-07-21T16:41:52Z,NONE,"Hi @darothen, thanks a lot. I hadn't thought of processing each file and then merging. I'll give it a try.
Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,187608079
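A hedged sketch of the per-file workflow being referred to here (process each file on its own, then merge the results), reusing the glob pattern, point indices and variable names from the code posted below in the thread; the loop itself is an illustration, not code from the thread.
```
import pandas as pd
import xarray as xr
from pathlib import Path

dir_input = Path('.')
files = sorted(dir_input.glob('**/WW3_EUR-11_CCCma-CanESM2_r1i1p1_CLMcom-CCLM4-8-17_v1_6hr_*.nc'))
variables = ['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']

frames = []
for path in files:
    # Extract the single grid point from one file at a time, so only a small
    # slice of each file is ever loaded into memory.
    with xr.open_dataset(path) as ds:
        point = ds[variables].isel(latitude=74, longitude=18).load()
    frames.append(point.to_dataframe())

df_ww3 = pd.concat(frames).sort_index()
```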
https://github.com/pydata/xarray/issues/1086#issuecomment-661940009,https://api.github.com/repos/pydata/xarray/issues/1086,661940009,MDEyOklzc3VlQ29tbWVudDY2MTk0MDAwOQ==,25382032,2020-07-21T15:44:54Z,2020-07-21T15:46:06Z,NONE,"Hi,
```
import xarray as xr
from pathlib import Path
dir_input = Path('.')
data_ww3 = xr.open_mfdataset(dir_input.glob('**/' + 'WW3_EUR-11_CCCma-CanESM2_r1i1p1_CLMcom-CCLM4-8-17_v1_6hr_*.nc'))
data_ww3 = data_ww3.isel(latitude=74, longitude=18)
df_ww3 = data_ww3[['hs', 't02', 't0m1', 't01', 'fp', 'dir', 'spr', 'dp']].to_dataframe()
```
You can download one file here: https://nasgdfa.ugr.es:5001/d/f/566168344466602780 (3.5 GB). I ran a profiler while opening 2 .nc files, and it showed that the to_dataframe() call was taking most of the time.

I'm just wondering if there's a way to reduce the computing time. I need to open 95 files, and it takes about 1.5 hours.
Thanks,
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,187608079
https://github.com/pydata/xarray/issues/1086#issuecomment-661775197,https://api.github.com/repos/pydata/xarray/issues/1086,661775197,MDEyOklzc3VlQ29tbWVudDY2MTc3NTE5Nw==,25382032,2020-07-21T10:29:48Z,2020-07-21T10:29:48Z,NONE,"I am running into the same problem. This might be a long shot, but @naught101, do you remember if you managed to convert to a dataframe in a more efficient way? Thanks,","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,187608079