# Support remote string paths for `h5netcdf` engine

pydata/xarray#8423 · issue · open · opened 2023-11-07 by a CONTRIBUTOR · 4 comments

### Is your feature request related to a problem?

Currently the `h5netcdf` engine supports opening remote files, but only as already open file-like objects (e.g. `s3fs.open(...)`), not as string paths like `s3://...`. There are situations where I'd like to use string paths instead of open file-like objects:

- Opening files can sometimes be slow (xref https://github.com/fsspec/s3fs/issues/816).
- When using `parallel=True` to open lots of files, serializing open file-like objects back and forth to a remote cluster can be slow.
- Some systems (e.g. NASA Earthdata) only hand out credentials that are valid when run in the same region as the data. Being able to use `parallel=True` + `storage_options` would be convenient and performant in that case.

### Describe the solution you'd like

It would be nice if I could do something like the following:

```python
ds = xr.open_mfdataset(
    files,  # A bunch of files like `s3://bucket/file`
    engine="h5netcdf",
    ...
    parallel=True,
    storage_options={...},  # fsspec-compatible options
)
```

and have my files opened prior to handing them off to `h5netcdf`. `storage_options` is already supported for the Zarr backend, so extending it to `h5netcdf` should feel natural.

### Describe alternatives you've considered

_No response_

### Additional context

_No response_
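---

For context, below is a minimal sketch of the status-quo workaround the issue describes: pre-opening remote files as file-like objects with `fsspec` and passing those, rather than `s3://...` strings, to `xr.open_mfdataset`. The bucket paths and the `anon` storage option are hypothetical placeholders; this mirrors the `s3fs.open(...)` pattern the issue mentions, and is roughly what the requested `storage_options` support would do on the user's behalf before handing off to `h5netcdf`.

```python
# Sketch of the current workaround, assuming hypothetical S3 paths and
# credentials; any fsspec-compatible filesystem would work the same way.
import fsspec
import xarray as xr

paths = ["s3://bucket/file1.nc", "s3://bucket/file2.nc"]  # hypothetical paths

# fsspec.open_files returns OpenFile objects (default mode "rb"); calling
# .open() on each yields the file-like object the h5netcdf engine accepts.
open_files = [of.open() for of in fsspec.open_files(paths, anon=False)]

ds = xr.open_mfdataset(open_files, engine="h5netcdf")
```

Note that the downsides listed in the issue apply exactly here: each `.open()` call can be slow, and with `parallel=True` these open file objects would have to be serialized out to remote workers.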