issue_comments: 456149964
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1346#issuecomment-456149964 | https://api.github.com/repos/pydata/xarray/issues/1346 | 456149964 | MDEyOklzc3VlQ29tbWVudDQ1NjE0OTk2NA== | 2405019 | 2019-01-21T17:33:31Z | 2019-01-21T17:33:31Z | CONTRIBUTOR | Sorry to unearth this issue again, but I just got bitten by this quite badly. I'm looking at absolute temperature perturbations, and bottleneck's implementation together with my data being loaded as `float32` produces significant errors. Example: ``` In [1]: import numpy as np ...: import bottleneck In [2]: a = 300*np.ones((800**2,), dtype=np.float32) In [3]: np.mean(a) Out[3]: 300.0 In [4]: bottleneck.nanmean(a) Out[4]: 302.6018981933594 ``` Would it be worth adding a warning (until the right solution is found) if someone is doing `.mean()` on a `float32` array? Based on a little experimentation (https://gist.github.com/leifdenby/8e874d3440a1ac96f96465a418f158ab), bottleneck's mean function builds up significant errors even with moderately sized arrays if they are `float32`. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
218459353 |
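
The discrepancy reported above comes from summing many `float32` values with a sequential accumulator: once the running sum grows large, each added element is rounded to the accumulator's limited precision, and the error compounds. A minimal sketch of the effect in plain NumPy (no bottleneck required; the explicit loop below mimics a single-pass `float32` accumulator, and the exact naive result may vary slightly by platform):

```python
import numpy as np

# Same array as in the comment: 640,000 copies of 300.0 stored as float32.
a = 300 * np.ones((800**2,), dtype=np.float32)

# NumPy's mean uses pairwise summation, which keeps rounding error tiny.
print(np.mean(a))

# Naive left-to-right accumulation in float32: once the running sum is
# large, the float32 spacing exceeds the increment's precision and each
# addition of 300.0 is rounded, accumulating a visible bias.
acc = np.float32(0.0)
for x in a:
    acc = np.float32(acc + x)
naive_mean = acc / np.float32(a.size)
print(naive_mean)

# Workaround: use a float64 accumulator (or cast the data up front).
print(a.mean(dtype=np.float64))
```

Casting up before reducing (e.g. `a.astype(np.float64).mean()` or `a.mean(dtype=np.float64)`) sidesteps the accumulation error at some cost in memory or speed.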