Comments on pydata/xarray issue #4082 (https://github.com/pydata/xarray/issues/4082), all by user 1872600, listed newest first.

---

https://github.com/pydata/xarray/issues/4082#issuecomment-642841283 (2020-06-11):

@jswhit, do you know if https://github.com/Unidata/netcdf4-python is doing the caching? Just to catch you up quickly: we have a workflow that opens a bunch of OPeNDAP datasets, and while the default `file_cache_maxsize=128` works on Linux, on Windows it fails as soon as more than 25 files are cached:

```
xr.set_options(file_cache_maxsize=25)   # works
# xr.set_options(file_cache_maxsize=26) # fails
```

---

https://github.com/pydata/xarray/issues/4082#issuecomment-641236117 (2020-06-09):

@DennisHeimbigner, do you not agree that this issue on Windows is related to the number of files cached from OPeNDAP requests? Clearly there are some differences in how cache files are handled on Windows: https://www.unidata.ucar.edu/support/help/MailArchives/netcdf/msg11190.html

---

https://github.com/pydata/xarray/issues/4082#issuecomment-640808125 (2020-06-08):

@DennisHeimbigner, I don't understand how it can be a DAP or code issue, since:

- it runs on Linux without errors with the default `file_cache_maxsize=128`;
- it runs on Windows without errors with `file_cache_maxsize=25`.

Right? Or am I missing something?

---

https://github.com/pydata/xarray/issues/4082#issuecomment-640590247 (2020-06-08):

Or perhaps Unidata's @WardF, who leads NetCDF development.

---

https://github.com/pydata/xarray/issues/4082#issuecomment-639450932 (2020-06-05):

@shoyer, unfortunately these OPeNDAP datasets contain only one time record (one daily value) each. And it works fine on Linux with `file_cache_maxsize=128`, so it must be some Windows cache thing, right? Since I had picked `file_cache_maxsize=10` arbitrarily, I thought it would be useful to find the maximum value that still works. Using the good old bisection method, I determined that (for this case, anyway) the maximum size that works is 25. In other words:

```
xr.set_options(file_cache_maxsize=25)   # works
# xr.set_options(file_cache_maxsize=26) # fails
```

I would bet money that Unidata's @DennisHeimbigner knows what's going on here!

---

https://github.com/pydata/xarray/issues/4082#issuecomment-639111588 (2020-06-04):

@EliT1626, I confirmed that this problem exists on Windows, but not on Linux. The error:

```
IOError: [Errno -37] NetCDF: Write to read only: 'https://www.ncei.noaa.gov/thredds/dodsC/OisstBase/NetCDF/V2.1/AVHRR/201703/oisst-avhrr-v02r01.20170304.nc'
```

suggested some kind of cache problem, and as you noted it always fails after a certain number of dates, so I tried increasing the number of cached files from the default 128 to 256:

```
xr.set_options(file_cache_maxsize=256)
```

but that had no effect. Just to see if it would fail earlier, I then tried *decreasing* the number of cached files:

```
xr.set_options(file_cache_maxsize=10)
```

and to my surprise, it ran all the way through: https://nbviewer.jupyter.org/gist/rsignell-usgs/c52fadd8626734bdd32a432279bc6779

I'm hoping someone who worked on the caching (@shoyer?) might have some idea of what is going on, but at least you can execute your workflow on Windows now!
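To make the workaround described in these comments concrete, here is a minimal sketch. Assumptions: the OISST URL pattern is taken from the error message quoted above, while the date range and the use of `open_mfdataset` are illustrative and not the exact code from the linked notebook.

```
import pandas as pd
import xarray as xr

# Lower the file cache *before* opening any datasets; per the thread, values
# above 25 reportedly trigger "NetCDF: Write to read only" errors on Windows,
# while the default of 128 works fine on Linux.
xr.set_options(file_cache_maxsize=25)

# One OISST v2.1 AVHRR file per day on the NCEI THREDDS server (URL pattern
# taken from the error message above; the date range is illustrative).
dates = pd.date_range("2017-03-01", "2017-03-31", freq="D")
urls = [
    "https://www.ncei.noaa.gov/thredds/dodsC/OisstBase/NetCDF/V2.1/AVHRR/"
    f"{d:%Y%m}/oisst-avhrr-v02r01.{d:%Y%m%d}.nc"
    for d in dates
]

# Open each daily dataset lazily over OPeNDAP and concatenate along time.
ds = xr.open_mfdataset(urls, combine="by_coords")
print(ds)
```

The key point reported in the thread is that the `set_options` call has to happen before the remote files are opened, and that only the cache size differs between the failing and working runs.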