html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2132#issuecomment-389644148,https://api.github.com/repos/pydata/xarray/issues/2132,389644148,MDEyOklzc3VlQ29tbWVudDM4OTY0NDE0OA==,1197350,2018-05-16T19:50:52Z,2018-05-16T19:50:52Z,MEMBER,"> it turned out I had exceeded my quota on the storage.
This is a major hazard with the llc4320 dataset! 😱 ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,323333361
https://github.com/pydata/xarray/issues/2132#issuecomment-389274247,https://api.github.com/repos/pydata/xarray/issues/2132,389274247,MDEyOklzc3VlQ29tbWVudDM4OTI3NDI0Nw==,1197350,2018-05-15T18:51:04Z,2018-05-15T18:51:04Z,MEMBER,"The warning suggests you are using dask distributed to perform this write. Could you also post the details of your dask cluster (how you launch it, etc.)? Is it possible that the dask workers cannot see the filesystem you are trying to write to?
FWIW, if you are trying to create an ""analysis-optimized"" datastore, I would consider using zarr (rather than netcdf). Can you do `v.to_zarr` without errors?
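
For reference, a minimal sketch of what that might look like with a distributed cluster (the dataset `ds`, the input file pattern, and the output path below are placeholders, not from this thread):

```python
# Hypothetical example: write an analysis-optimized zarr store while
# connected to a dask distributed cluster. All paths are placeholders.
import xarray as xr
from dask.distributed import Client

client = Client()  # local cluster here; point at your scheduler as needed
ds = xr.open_mfdataset('input_*.nc')  # placeholder input files
ds.to_zarr('output.zarr')  # the workers must be able to reach this path
```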
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,323333361