html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3961#issuecomment-778972571,https://api.github.com/repos/pydata/xarray/issues/3961,778972571,MDEyOklzc3VlQ29tbWVudDc3ODk3MjU3MQ==,6948919,2021-02-15T06:11:34Z,2021-02-15T06:11:34Z,NONE,"Please run some dummy tests; I added a `time.sleep` before every operation. This was the only workaround that really worked.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-778841149,https://api.github.com/repos/pydata/xarray/issues/3961,778841149,MDEyOklzc3VlQ29tbWVudDc3ODg0MTE0OQ==,2560426,2021-02-14T21:01:21Z,2021-02-14T21:01:21Z,NONE,"> Or alternatively you can try to set sleep between openings.
To clarify, do you mean adding a sleep of e.g. 1 second prior to your `preprocess` function (and setting `preprocess` to just sleep then `return ds` if you're not doing any preprocessing)? Or, are you instead sleeping before the entire `open_mfdataset` call?
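For concreteness, a minimal sketch of that first interpretation (the function name and delay are placeholders; a real `preprocess` would receive an `xarray.Dataset`):

```python
import time

def sleepy_preprocess(ds):
    # Hypothetical no-op preprocess: sleep briefly, then return the
    # dataset unchanged, i.e.
    #   xr.open_mfdataset(paths, preprocess=sleepy_preprocess)
    time.sleep(0.01)  # would be ~1 second in the real workaround
    return ds

ds = {'placeholder': True}  # stand-in for an xarray.Dataset
result = sleepy_preprocess(ds)
```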
Is this solution only addressing the issue of opening the same ds multiple times within a python process, or would it also address multiple processes opening the same ds?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-778839471,https://api.github.com/repos/pydata/xarray/issues/3961,778839471,MDEyOklzc3VlQ29tbWVudDc3ODgzOTQ3MQ==,6948919,2021-02-14T20:47:27Z,2021-02-14T20:47:27Z,NONE,"
> Is the current recommended solution to set `lock=False` and retry until success? Or, is it to keep `lock=None` and use `zarr` instead? @dcherian
Or, alternatively, you can try adding a sleep between openings.
When you try to open the same file from different functions with different operations, it is better to wrap the file-opening call with a 1-second delay/sleep rather than opening the file directly.
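As a rough sketch of that wrapper (generic Python; `opener` stands in for e.g. `xr.open_mfdataset`, and the 1-second default is the delay suggested above):

```python
import time

def delayed_open(opener, *args, delay=1.0, **kwargs):
    # Sleep before touching the file so any previous handle on the same
    # path has a chance to be released (a workaround, not a fix).
    time.sleep(delay)
    return opener(*args, **kwargs)

# With xarray this would be something like:
#   ds = delayed_open(xr.open_mfdataset, 'data_*.nc', lock=False)
result = delayed_open(lambda path: 'opened ' + path, 'data_0.nc', delay=0.01)
```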
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-778838527,https://api.github.com/repos/pydata/xarray/issues/3961,778838527,MDEyOklzc3VlQ29tbWVudDc3ODgzODUyNw==,2560426,2021-02-14T20:40:38Z,2021-02-14T20:40:38Z,NONE,"Also seeing this as of version 0.16.1.
In some cases, I need `lock=False`; otherwise I'll run into hung processes a certain percentage of the time. `ds.load()` prior to `to_netcdf()` does not solve the problem.
In other cases, I need `lock=None`; otherwise I'll consistently get `RuntimeError: NetCDF: Not a valid ID`.
Is the current recommended solution to set `lock=False` and retry until success? Or, is it to keep `lock=None` and use `zarr` instead? @dcherian ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-730033698,https://api.github.com/repos/pydata/xarray/issues/3961,730033698,MDEyOklzc3VlQ29tbWVudDczMDAzMzY5OA==,6948919,2020-11-19T00:02:09Z,2020-11-19T00:04:13Z,NONE,"> I have the same behaviour with MacOS (10.15). xarray=0.16.1, dask=2.30.0, netcdf4=1.5.4. Sometimes saves, sometimes doesn't. `lock=False` seems to work.
`lock=False` sometimes throws an HDF5 error. No clear solution.
The only workaround I have found is the sleep method: waiting for 1 second.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-691788479,https://api.github.com/repos/pydata/xarray/issues/3961,691788479,MDEyOklzc3VlQ29tbWVudDY5MTc4ODQ3OQ==,5053963,2020-09-14T03:21:45Z,2020-09-14T03:21:45Z,NONE,I have the same issue as well and it appears to me that Ubuntu is more prone to it than CentOS. I'm wondering if anyone else has had a similar experience,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-690332310,https://api.github.com/repos/pydata/xarray/issues/3961,690332310,MDEyOklzc3VlQ29tbWVudDY5MDMzMjMxMA==,6948919,2020-09-10T14:36:53Z,2020-09-10T14:38:51Z,NONE,"Using:
- xarray=0.16.0
- dask=2.25.0
- netcdf4=1.5.4
I am experiencing the same when trying to write a netCDF file using **xr.to_netcdf()** on files opened via xr.open_mfdataset with lock=None.
Then I tried the OP's suggestion (lock=False) and it worked like a charm
**BUT**
Now I am facing a different issue. It seems that **HDF5 IS NOT thread-safe**, since I encounter **NetCDF: HDF error** while applying a different function to a netCDF file that was previously processed by another function with **lock=False**.
The script just terminates without even reaching any calculation step in the code. It seems like lock=False has the opposite effect and leaves the file in a **corrupted** state?
This is the **BIGGEST** issue and needs to be resolved ASAP","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663
https://github.com/pydata/xarray/issues/3961#issuecomment-628985289,https://api.github.com/repos/pydata/xarray/issues/3961,628985289,MDEyOklzc3VlQ29tbWVudDYyODk4NTI4OQ==,45645265,2020-05-15T02:14:48Z,2020-05-15T02:14:48Z,NONE,"Using:
- xarray=0.15.1
- dask=2.14.0
- netcdf4=1.5.3
I have experienced this issue as well when writing netCDF using `xr.save_mfdataset` on a dataset opened using `xr.open_mfdataset`. As described by the OP, it hangs when using `lock=None` (the default behavior) on `xr.open_mfdataset()`, but works fine when using `lock=False`.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,597657663