html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/793#issuecomment-200629173,https://api.github.com/repos/pydata/xarray/issues/793,200629173,MDEyOklzc3VlQ29tbWVudDIwMDYyOTE3Mw==,4295853,2016-03-24T02:49:13Z,2016-03-24T02:49:26Z,CONTRIBUTOR,"I'm going to close this for now but will reopen it if the issue arises again following the dask release. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-199879233,https://api.github.com/repos/pydata/xarray/issues/793,199879233,MDEyOklzc3VlQ29tbWVudDE5OTg3OTIzMw==,4295853,2016-03-22T15:56:42Z,2016-03-22T15:56:42Z,CONTRIBUTOR,"Note, also waiting on `dask` going to version 0.8.2 for the full fix. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-199855592,https://api.github.com/repos/pydata/xarray/issues/793,199855592,MDEyOklzc3VlQ29tbWVudDE5OTg1NTU5Mg==,4295853,2016-03-22T15:02:54Z,2016-03-22T15:03:18Z,CONTRIBUTOR,"Thanks @shoyer! I ran into this problem again this morning and, as you note, I had multiple arrays in the file that were being written. PR https://github.com/pydata/xarray/pull/800 implements your suggestion and should hopefully resolve the issue, although it is not clear to me how to build a reproducible test case -- perhaps write a file with a ton of random arrays to crash it out on the write? Any thoughts or suggestions you have on this would be very helpful. Note that the PR is preliminary until I can verify that it resolves the issue via testing. 
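For concreteness, here is a rough sketch of what such a stress test could look like (standard library only, so it merely mimics the many-arrays-into-one-file write pattern; none of this is xarray's actual test code):

```python
# Hypothetical stress-test sketch, standard library only -- not xarray's
# test suite.  Many threads each dump a random array into a single shared
# file, mimicking dask flushing multiple variables into one NetCDF file;
# a single shared lock serializes the writes so the file stays intact.
import os
import random
import struct
import tempfile
import threading

N_ARRAYS, ARRAY_LEN = 50, 100
lock = threading.Lock()  # one lock shared by every writer

def write_array(fh, seed):
    rng = random.Random(seed)
    values = [rng.random() for _ in range(ARRAY_LEN)]
    payload = struct.pack('%dd' % ARRAY_LEN, *values)
    with lock:  # serialize access, as HDF5 requires
        fh.write(payload)

with tempfile.TemporaryFile() as fh:
    threads = [threading.Thread(target=write_array, args=(fh, i))
               for i in range(N_ARRAYS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    fh.seek(0, os.SEEK_END)
    size = fh.tell()

# every array arrived whole: N_ARRAYS arrays of ARRAY_LEN 8-byte doubles
assert size == N_ARRAYS * ARRAY_LEN * 8
```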
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-199547343,https://api.github.com/repos/pydata/xarray/issues/793,199547343,MDEyOklzc3VlQ29tbWVudDE5OTU0NzM0Mw==,1217238,2016-03-22T00:01:52Z,2016-03-22T00:01:52Z,MEMBER,"This should be pretty easy -- we'll just need to add `lock=threading.Lock()` to this line: https://github.com/pydata/xarray/blob/v0.7.2/xarray/backends/common.py#L165 The only subtlety is that this needs to be done in a way that is dependent on the version of dask, because the keyword argument is new -- something like `if dask.__version__ > '0.8.1'`. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-199544425,https://api.github.com/repos/pydata/xarray/issues/793,199544425,MDEyOklzc3VlQ29tbWVudDE5OTU0NDQyNQ==,4295853,2016-03-21T23:56:45Z,2016-03-21T23:56:45Z,CONTRIBUTOR,"@shoyer, I'm assuming there needs to be an xarray PR corresponding to Matt's merged PR, is that correct? Do you think this will be a difficult xarray change? ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-196924992,https://api.github.com/repos/pydata/xarray/issues/793,196924992,MDEyOklzc3VlQ29tbWVudDE5NjkyNDk5Mg==,1217238,2016-03-15T17:04:57Z,2016-03-15T17:27:29Z,MEMBER,"I did a little digging into this and I'm pretty sure the issue here is that HDF5 [cannot do multi-threading](https://www.hdfgroup.org/hdf5-quest.html#gconc) -- at all. Moreover, many HDF5 builds are not thread safe. 
Right now, we use a single shared lock for all _reads_ with xarray, but for writes we rely on dask.array.store, which [only uses different locks for each array it writes](https://github.com/dask/dask/blob/0.8.1/dask/array/core.py#L1968). Because @pwolfram's HDF5 file includes multiple variables, each of these gets written with its own thread lock -- which means we end up writing to the same file simultaneously from multiple threads. So what we could really use here is a `lock` argument to `dask.array.store` (like `dask.array.from_array`) that lets us insist on using a shared lock when we're writing HDF5 files. Also, we may need to share that same lock between reading and writing data -- I'm not 100% sure. But at the very least we definitely need a lock to stop HDF5 from trying to do multi-threaded writes, whether that's to the same or different files. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-196935638,https://api.github.com/repos/pydata/xarray/issues/793,196935638,MDEyOklzc3VlQ29tbWVudDE5NjkzNTYzOA==,306380,2016-03-15T17:26:41Z,2016-03-15T17:26:41Z,MEMBER,"https://github.com/dask/dask/pull/1053 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-196926696,https://api.github.com/repos/pydata/xarray/issues/793,196926696,MDEyOklzc3VlQ29tbWVudDE5NjkyNjY5Ng==,4295853,2016-03-15T17:08:22Z,2016-03-15T17:08:22Z,CONTRIBUTOR,"Thanks @shoyer for looking into this further and for figuring out the cause of the problem. @mrocklin, does this mean that I should submit a dask issue? 
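As an aside, the shared-lock idea above can be illustrated with the standard library alone (a sketch of the concept only -- `FakeHDF5File` and this `store` are made up for illustration, not dask's real implementation):

```python
# Sketch of the distinction @shoyer describes, standard library only.
# The fake file raises if two threads write at once (like a non-thread-safe
# HDF5 build); giving every writer the SAME lock, as a `lock=` argument to
# dask.array.store would allow, prevents that.
import threading
import time

class FakeHDF5File:
    def __init__(self):
        self._busy = threading.Lock()

    def write(self, name):
        # fail loudly on concurrent entry, like a non-thread-safe build
        if not self._busy.acquire(blocking=False):
            raise RuntimeError('concurrent write detected')
        time.sleep(0.005)  # simulate a slow write
        self._busy.release()

def store(variables, target, lock):
    # one thread per variable, every write guarded by the shared lock
    def worker(name):
        with lock:
            target.write(name)
    threads = [threading.Thread(target=worker, args=(v,)) for v in variables]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

store(['temp', 'salt', 'u', 'v'], FakeHDF5File(), threading.Lock())
```

Dropping the `with lock:` line reintroduces exactly the failure mode above: several threads inside the same file at once.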
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195811381,https://api.github.com/repos/pydata/xarray/issues/793,195811381,MDEyOklzc3VlQ29tbWVudDE5NTgxMTM4MQ==,306380,2016-03-12T21:32:56Z,2016-03-12T21:32:56Z,MEMBER,"To be clear, we ran into the `NetCDF: HDF error` error when having multiple threads in the same process open-read-close many different files. I don't think there was any concurrent access of the same file. The problem went away when we switched to using processes rather than threads. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195811187,https://api.github.com/repos/pydata/xarray/issues/793,195811187,MDEyOklzc3VlQ29tbWVudDE5NTgxMTE4Nw==,4295853,2016-03-12T21:30:14Z,2016-03-12T21:30:14Z,CONTRIBUTOR,"I can't fully confirm that the above scripts works with synchronous execution because the job ran out of its 16hr run time. However, it does appear to be the case that forcing synchronous execution resolves potential issues because previous runs of the script crashed and this one did not. I'll have to try more cases with synchronous execution, especially over the next half week, to see if I encounter more issues but am suspicious this is the problem. @mrocklin and I noted that the netCDF reader has problems when threading is on when we were using distributed, so this appears to be a likely candidate. We got the same `NetCDF: HDF error` error as above, and were able to resolve the issue by forcing distributed to work synchronously (non-threaded). @mrocklin should feel free to correct me if I've miss-represented our findings yesterday. 
I'm suspicious that the netCDF reader is not thread safe and may not have been compiled as such (http://hdf-forum.184993.n3.nabble.com/Activate-thread-safe-and-enable-cxx-in-HDF5-td2993951.html), but there appear to be other potential issues that could be part of the problem, e.g., https://github.com/Unidata/netcdf4-python/issues/279, because I am doing so many reads. It may also be possible, as you note @shoyer, that the thread locks aren't aggressive enough. It would probably be good to come up with some type of testing strategy to better isolate the problem... I'll have to give this more thought. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-195637636,https://api.github.com/repos/pydata/xarray/issues/793,195637636,MDEyOklzc3VlQ29tbWVudDE5NTYzNzYzNg==,1217238,2016-03-12T02:19:18Z,2016-03-12T02:19:18Z,MEMBER,"I'm pretty sure we now have a thread lock around all writes to NetCDF files, but it's possible that isn't aggressive enough (maybe we can't safely read and write a different file at the same time?). If your script works with synchronous execution I'll take another look. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-195573297,https://api.github.com/repos/pydata/xarray/issues/793,195573297,MDEyOklzc3VlQ29tbWVudDE5NTU3MzI5Nw==,306380,2016-03-11T22:13:28Z,2016-03-11T22:13:28Z,MEMBER,"Yes, my apologies for the typo. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195572996,https://api.github.com/repos/pydata/xarray/issues/793,195572996,MDEyOklzc3VlQ29tbWVudDE5NTU3Mjk5Ng==,4295853,2016-03-11T22:11:54Z,2016-03-11T22:12:12Z,CONTRIBUTOR,"@mrocklin, For option 1, should the command be `dask.set_options(get=dask.async.get_sync)`? I'm on 0.8.0 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195565307,https://api.github.com/repos/pydata/xarray/issues/793,195565307,MDEyOklzc3VlQ29tbWVudDE5NTU2NTMwNw==,4295853,2016-03-11T21:40:11Z,2016-03-11T21:40:11Z,CONTRIBUTOR,"Test 2 passed, so it doesn't appear to be due to too many open file handles. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195563852,https://api.github.com/repos/pydata/xarray/issues/793,195563852,MDEyOklzc3VlQ29tbWVudDE5NTU2Mzg1Mg==,4295853,2016-03-11T21:33:54Z,2016-03-11T21:33:54Z,CONTRIBUTOR,"Agreed. I'll let you know what I find out. Thanks @mrocklin. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195562924,https://api.github.com/repos/pydata/xarray/issues/793,195562924,MDEyOklzc3VlQ29tbWVudDE5NTU2MjkyNA==,306380,2016-03-11T21:29:46Z,2016-03-11T21:29:46Z,MEMBER,"Sure. I'm not proposing any particular approach. I'm just supporting your previous idea that maybe the problem is having too many open file handles. It would be good to check this before diving into threading or concurrency issues. 
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195562125,https://api.github.com/repos/pydata/xarray/issues/793,195562125,MDEyOklzc3VlQ29tbWVudDE5NTU2MjEyNQ==,4295853,2016-03-11T21:27:19Z,2016-03-11T21:27:33Z,CONTRIBUTOR,"Quick question @mrocklin, for 2, are you proposing a script that just opens all the files, e.g., something like this ``` # get full xr dataset dslist = [] nfiles = len(glob.glob('dispersion_calcs_rlzn0*layerrange_0000-0000.nc')) for i in np.arange(nfiles): ds = xr.open_mfdataset('dispersion_calcs_rlzn%04d_*nc'%(i)) dslist.append(ds) dstotal = xr.concat(dslist,'Nr') # do an operation spanning Nr space and Nb space print dstotal.dtdays.values ``` where `dtdays` spans the all the files? I'm running it now. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221 https://github.com/pydata/xarray/issues/793#issuecomment-195557013,https://api.github.com/repos/pydata/xarray/issues/793,195557013,MDEyOklzc3VlQ29tbWVudDE5NTU1NzAxMw==,306380,2016-03-11T21:16:41Z,2016-03-11T21:16:41Z,MEMBER,"1024 might be a common open file handle limit. Some things to try to isolate the issue: 1. Try this with `dask.set_globals(get=dask.async.get_sync)` to turn off threading 2. 
Try just opening all of the files and see if the NetCDF error presents itself under normal operation ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-195552065,https://api.github.com/repos/pydata/xarray/issues/793,195552065,MDEyOklzc3VlQ29tbWVudDE5NTU1MjA2NQ==,4295853,2016-03-11T21:08:00Z,2016-03-11T21:08:00Z,CONTRIBUTOR,"There are a large number of files (1320), where `nfiles = 120` and `len(dslist)=11`, so perhaps this is an issue with opening a large number of files, as noted by @rabernat. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-195550711,https://api.github.com/repos/pydata/xarray/issues/793,195550711,MDEyOklzc3VlQ29tbWVudDE5NTU1MDcxMQ==,4295853,2016-03-11T21:05:44Z,2016-03-11T21:05:44Z,CONTRIBUTOR,"I should note that serialization also does not appear to be robust under reshaping the data via `ds = ds.transpose('Nt','Nt-1','Nr','Nb','Nc')` as well as rechunking. The input data stream was previously generated via a call to `ds.to_netcdf` in another script using xarray. ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221
https://github.com/pydata/xarray/issues/793#issuecomment-195550183,https://api.github.com/repos/pydata/xarray/issues/793,195550183,MDEyOklzc3VlQ29tbWVudDE5NTU1MDE4Mw==,4295853,2016-03-11T21:04:45Z,2016-03-11T21:04:45Z,CONTRIBUTOR,"cc @mrocklin ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,140291221