html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3466#issuecomment-1013673140,https://api.github.com/repos/pydata/xarray/issues/3466,1013673140,IC_kwDOAMm_X848a2y0,6815953,2022-01-15T12:21:44Z,2022-01-17T10:37:59Z,NONE,"Hi all, I encountered the same problem when trying to download NASA's GEOS-5 data (see below). It worked occasionally, but I had to restart the script several times.

```python
import pandas as pd
import xarray as xr

URL = 'https://opendap.nccs.nasa.gov/dods/GEOS-5/fp/0.25_deg/assim/inst3_3d_asm_Np'
ds = xr.open_dataset(URL, engine='netcdf4')

var_ls = ['omega', 't', 'v', 'u']
lev_ls = [1000., 975., 950., 925., 900., 875., 850., 825., 800., 775., 750.,
          700., 650., 600., 550., 500., 450., 400., 350., 300., 275., 250.,
          225., 200., 175., 150., 125., 100., 70., 50., 30., 10., 5., 3., 2., 1.]
time_range = pd.date_range('2021-01-02T12', '2021-01-07', freq='6H')

for sel_date in time_range:
    ds_sel = ds[var_ls].sel(time=sel_date, lev=lev_ls, method='nearest')
    out_date = sel_date.strftime('%Y%m%d%H')
    outfile = f'geos5_subset_{out_date}.nc'
    print(outfile)
    ds_sel.to_netcdf(outfile)
```

[EDIT] It may have helped to add `.load()` before `to_netcdf` though.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,514672231
https://github.com/pydata/xarray/issues/3466#issuecomment-793502837,https://api.github.com/repos/pydata/xarray/issues/3466,793502837,MDEyOklzc3VlQ29tbWVudDc5MzUwMjgzNw==,57942990,2021-03-09T07:52:55Z,2021-03-09T07:52:55Z,NONE,"Hi all, I encountered the same problem when trying to save data from met.no's archived MEPS data on ""https://thredds.met.no/thredds/dodsC/meps25epsarchive/"" as a local netCDF file. I found that it depends on the size of the dataset I want to save; saving works fine up to a file size of about 5 MB.
Thus, my workaround is to chunk the data, e.g. by selecting only one ensemble member at a time, save each chunk as a file to make sure everything is downloaded, then read the files back in and call `dataset.merge()` to get a single netCDF file. It seems to me that the problem could be solved by writing to disk in chunks. Hope this helps to find the root of this issue... Thanks for opening this issue @b-kode!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,514672231