html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3386#issuecomment-540208420,https://api.github.com/repos/pydata/xarray/issues/3386,540208420,MDEyOklzc3VlQ29tbWVudDU0MDIwODQyMA==,1217238,2019-10-09T21:28:48Z,2019-10-09T21:28:48Z,MEMBER,"netCDF4.MFDataset works on a much more restricted set of netCDF files than `xarray.open_mfdataset`. I'm not surprised it's a little bit faster, but I'm not sure it's worth the maintenance burden of supporting this separate code path. Making a fully featured version of open_mfdataset without dask would be challenging. Can you simply add more threads in TensorFlow/Keras for loading the data? My other suggestion is to pre-shuffle the data on disk, so you don't need random access inside your training loop.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,504497403