html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7012#issuecomment-1243917954,https://api.github.com/repos/pydata/xarray/issues/7012,1243917954,IC_kwDOAMm_X85KJK6C,10819524,2022-09-12T15:33:10Z,2022-09-12T15:33:10Z,CONTRIBUTOR,"@mathause You're right! I noticed this first in my builds using ""upstream"" dependencies (xarray@main, flox@main, cftime@master, bottleneck@master). It might indeed be flox-related!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1367029446
https://github.com/pydata/xarray/issues/4054#issuecomment-678423562,https://api.github.com/repos/pydata/xarray/issues/4054,678423562,MDEyOklzc3VlQ29tbWVudDY3ODQyMzU2Mg==,10819524,2020-08-21T18:16:16Z,2020-08-21T18:16:16Z,CONTRIBUTOR,"Just discovered that the same thing is true for `~`. Another thing to add to the list: ```... : error: Unsupported operand type for ~ (""DataArray"") [operator]```","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,617140674
https://github.com/pydata/xarray/issues/4054#issuecomment-677920840,https://api.github.com/repos/pydata/xarray/issues/4054,677920840,MDEyOklzc3VlQ29tbWVudDY3NzkyMDg0MA==,10819524,2020-08-20T21:41:51Z,2020-08-20T22:48:49Z,CONTRIBUTOR,"We're currently working on a library largely based on xarray and have seen the same types of errors from mypy (PR in our project that is currently trying to integrate mypy: https://github.com/Ouranosinc/xclim/pull/532). We're currently working off of xarray v0.16. I also want to note that this error is raised for other operations as well (`+`, `-`, `/`, and `*`) between `xarray.DataArray` and `xarray.Dataset` objects.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,617140674
https://github.com/pydata/xarray/pull/2957#issuecomment-491853311,https://api.github.com/repos/pydata/xarray/issues/2957,491853311,MDEyOklzc3VlQ29tbWVudDQ5MTg1MzMxMQ==,10819524,2019-05-13T14:46:50Z,2019-05-13T14:46:50Z,CONTRIBUTOR,This PR addresses https://github.com/Ouranosinc/xclim/issues/199,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,443440217
https://github.com/pydata/xarray/issues/2417#issuecomment-462422387,https://api.github.com/repos/pydata/xarray/issues/2417,462422387,MDEyOklzc3VlQ29tbWVudDQ2MjQyMjM4Nw==,10819524,2019-02-11T17:41:47Z,2019-02-11T17:41:47Z,CONTRIBUTOR,"Hi @jhamman, please excuse the lateness of this reply. It turned out that, in the end, all I needed to do was set `OMP_NUM_THREADS` to the number of threads I want to use, based on my available cores (2 threads/core), before launching my processes. Thanks for the help and for keeping this open. Feel free to close this thread.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974
https://github.com/pydata/xarray/issues/2664#issuecomment-453203293,https://api.github.com/repos/pydata/xarray/issues/2664,453203293,MDEyOklzc3VlQ29tbWVudDQ1MzIwMzI5Mw==,10819524,2019-01-10T18:30:48Z,2019-01-10T18:30:48Z,CONTRIBUTOR,"That certainly is the error. The workaround identified for it is good enough for now. Thanks!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,397950349
https://github.com/pydata/xarray/issues/2417#issuecomment-422445732,https://api.github.com/repos/pydata/xarray/issues/2417,422445732,MDEyOklzc3VlQ29tbWVudDQyMjQ0NTczMg==,10819524,2018-09-18T15:44:03Z,2018-09-18T15:44:03Z,CONTRIBUTOR,"As per your suggestion, I retried with chunking and found a new error (due to the nature of my data having rotated poles, dask demanded that I save my data with `astype()`; this isn't my major concern, so I'll deal with that elsewhere). What I did notice was that when chunking was specified (`ds = xr.open_dataset(ncfile).chunk({'time': 10})`), I lost all parallelism, and although I had specified different thread counts, the performance never crossed 110% (I imagine the extra 10% was due to I/O). This is really a mystery and, unfortunately, I haven't a clue how this behaviour is possible if parallel processing is disabled by default. The speed of my results when dask multiprocessing isn't specified suggests that it must be using more processing power: - using Multiprocessing calls to CDO with 5 ForkPoolWorkers = ~2h/5 files (100% x 5 CPUs) - xarray without dask multiprocessing specifications = ~3min/5 files (spikes of 3500% on one CPU) Could these spikes in CPU usage be due to other processes (e.g. memory usage, I/O)?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,361016974