html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/3066#issuecomment-508295570,https://api.github.com/repos/pydata/xarray/issues/3066,508295570,MDEyOklzc3VlQ29tbWVudDUwODI5NTU3MA==,4903456,2019-07-04T00:23:45Z,2019-07-04T00:23:45Z,NONE,"@shoyer thanks for looking into this.
I also figured out later that I can just use np.nanmean (or np.nanmedian), but that function turns out to be much slower than the np.mean (or np.median) version. Since NaNs only occur at the beginning and end of the sequence, is there an efficient way to use nanmean only for those segments and mean for the rest of the processing? My own thought is to add a check for NaN in the custom function and apply mean or nanmean depending on the result of that check, as sketched below, but I am not sure whether this can be done more efficiently.
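A minimal sketch of that idea, assuming a plain NumPy reduction (the helper name fast_nanmean is just illustrative):

```python
import numpy as np

def fast_nanmean(values, axis=None):
    # Only fall back to the slower np.nanmean when NaNs are actually present;
    # note that the isnan check itself still scans the whole array once.
    if np.isnan(values).any():
        return np.nanmean(values, axis=axis)
    return np.mean(values, axis=axis)
```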
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,462424005