html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2417#issuecomment-422461245,https://api.github.com/repos/pydata/xarray/issues/2417,422461245,MDEyOklzc3VlQ29tbWVudDQyMjQ2MTI0NQ==,1217238,2018-09-18T16:31:03Z,2018-09-18T16:31:03Z,MEMBER,"If your data is using in-file HDF5 chunks/compression, it's *possible* that HDF5 is uncompressing the data in parallel, though I haven't seen that before personally.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2417#issuecomment-422206083,https://api.github.com/repos/pydata/xarray/issues/2417,422206083,MDEyOklzc3VlQ29tbWVudDQyMjIwNjA4Mw==,1217238,2018-09-17T23:40:52Z,2018-09-17T23:40:52Z,MEMBER,"Step 1 would be making sure that you're actually using dask :). Xarray only uses dask with `open_dataset()` if you supply the `chunks` keyword argument. That said, xarray's only built-in support for parallelism is through Dask, so I'm not sure what is using all your CPU.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974