issue_comments: 601885539
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| https://github.com/pydata/xarray/pull/2922#issuecomment-601885539 | https://api.github.com/repos/pydata/xarray/issues/2922 | 601885539 | MDEyOklzc3VlQ29tbWVudDYwMTg4NTUzOQ== | 7441788 | 2020-03-20T19:57:54Z | 2020-03-20T20:00:20Z | CONTRIBUTOR | All good points:<br><br>Good idea, though I don't know what the performance hit would be of the extra check (in the case that da does contain NaNs, so the check is for naught).<br><br>Well,<br><br>Yes. You can continue not supporting NaNs in the weights, yet not explicitly check that there are no NaNs (optionally, if the caller assures you that there are no NaNs).<br><br>Correct. These have nothing to do with the NaNs issue. For profiling memory usage, I use | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 437765416 |
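
For context, the "extra check" discussed in the body is testing whether `da` contains NaNs before masking the weights in a weighted reduction. A minimal sketch of that trade-off follows; the helper name `weighted_mean_sketch` and the `skipna_check` flag are illustrative only and are not part of xarray's API.

```python
import numpy as np
import xarray as xr


def weighted_mean_sketch(da, weights, dim, skipna_check=True):
    """Illustrative weighted mean showing the optional NaN check on ``da``.

    If the check finds no NaNs, the weights are summed as-is and the extra
    masking pass is skipped; if NaNs are present, the check cost is paid for
    naught and the weights must be masked anyway.
    """
    if skipna_check and not da.isnull().any():
        # No NaNs in da: use the weights directly.
        total_weights = weights.sum(dim)
    else:
        # Zero out weights at NaN positions so they do not inflate the denominator.
        total_weights = weights.where(da.notnull()).sum(dim)
    return (da * weights).sum(dim, skipna=True) / total_weights


# Example usage
da = xr.DataArray([1.0, 2.0, np.nan], dims="x")
weights = xr.DataArray([0.2, 0.3, 0.5], dims="x")
print(weighted_mean_sketch(da, weights, dim="x"))  # 0.8 / 0.5 = 1.6
```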