html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2417#issuecomment-460393715,https://api.github.com/repos/pydata/xarray/issues/2417,460393715,MDEyOklzc3VlQ29tbWVudDQ2MDM5MzcxNQ==,2443309,2019-02-04T20:07:56Z,2019-02-04T20:07:56Z,MEMBER,"@Zeitsperre - are you still having problems in this area? If not, is it okay if we close this issue?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2417#issuecomment-460298993,https://api.github.com/repos/pydata/xarray/issues/2417,460298993,MDEyOklzc3VlQ29tbWVudDQ2MDI5ODk5Mw==,2443309,2019-02-04T15:50:09Z,2019-02-04T15:51:43Z,MEMBER,"On a few systems, I've noticed that I need to set the environment variable `OMP_NUM_THREADS` to `1` to limit parallel evaluation within dask threads. I wonder if something like this is happening here? xref: https://stackoverflow.com/questions/39422092/error-with-omp-num-threads-when-using-dask-distributed","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2417#issuecomment-460020879,https://api.github.com/repos/pydata/xarray/issues/2417,460020879,MDEyOklzc3VlQ29tbWVudDQ2MDAyMDg3OQ==,2443309,2019-02-03T03:54:59Z,2019-02-03T03:54:59Z,MEMBER,@Zeitsperre - this issue has been inactive for a while. Did you find a solution to your problem?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2417#issuecomment-422461245,https://api.github.com/repos/pydata/xarray/issues/2417,422461245,MDEyOklzc3VlQ29tbWVudDQyMjQ2MTI0NQ==,1217238,2018-09-18T16:31:03Z,2018-09-18T16:31:03Z,MEMBER,"If your data uses in-file HDF5 chunks/compression, it's *possible* that HDF5 is decompressing the data in parallel, though I haven't seen that before personally.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2417#issuecomment-422206083,https://api.github.com/repos/pydata/xarray/issues/2417,422206083,MDEyOklzc3VlQ29tbWVudDQyMjIwNjA4Mw==,1217238,2018-09-17T23:40:52Z,2018-09-17T23:40:52Z,MEMBER,"Step 1 would be making sure that you're actually using dask :). Xarray only uses dask with `open_dataset()` if you supply the `chunks` keyword argument. That said, xarray's only built-in support for parallelism is through Dask, so I'm not sure what is using all your CPU.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
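The comments above suggest two checks: pass `chunks` to `open_dataset()` so dask is actually used, and set `OMP_NUM_THREADS` to `1` so OpenMP/BLAS libraries don't spawn extra threads inside each dask worker thread. A minimal sketch combining both, assuming a hypothetical NetCDF file `data.nc` with a `time` dimension and a variable `tasmax`:

```python
import os

# Suggested in the OMP_NUM_THREADS comment above: cap OpenMP/BLAS threads
# before numpy/dask are imported so each dask thread stays single-threaded.
os.environ["OMP_NUM_THREADS"] = "1"

import xarray as xr

# Without the `chunks` argument, open_dataset loads data eagerly into NumPy
# and dask is never involved. File name, chunk size, and variable name here
# are hypothetical placeholders.
ds = xr.open_dataset("data.nc", chunks={"time": 100})

# Operations on the dask-backed variable only build a task graph;
# .compute() triggers the (now bounded) parallel evaluation.
result = ds["tasmax"].mean(dim="time").compute()
```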