issue_comments: 434531970
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2525#issuecomment-434531970 | https://api.github.com/repos/pydata/xarray/issues/2525 | 434531970 | MDEyOklzc3VlQ29tbWVudDQzNDUzMTk3MA== | 14314623 | 2018-10-31T01:46:19Z | 2018-10-31T01:46:19Z | CONTRIBUTOR | I agree with @rabernat, and favor the index-based approach. For regular lon-lat grids it's easy enough to implement a weighted mean, and for irregularly spaced grids and other exotic grids the coordinate-based approach might lead to errors. To me the resample API above might suggest to some users that proper regridding (à la xESMF) onto a regular lat/lon grid is performed. `block_reduce` sounds good to me and seems appropriate for non-dask arrays. Does anyone have experience with how `dask.coarsen` compares performance-wise? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 375126758 |
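
For readers weighing the two options named in the comment, here is a minimal sketch of block averaging with `skimage.measure.block_reduce` (eager, NumPy) and `dask.array.coarsen` (lazy, chunked). The array shape, chunking, and 4×4 block size are illustrative assumptions, not values taken from the thread.

```python
import numpy as np
import dask.array as da
from skimage.measure import block_reduce

# Eager block mean with scikit-image: each 4x4 window is reduced to its mean.
arr = np.random.rand(1024, 1024)
reduced = block_reduce(arr, block_size=(4, 4), func=np.mean)   # shape (256, 256)

# Lazy equivalent on a chunked dask array: da.coarsen applies the reduction
# block-wise without materializing the full array in memory.
darr = da.random.random((1024, 1024), chunks=(256, 256))
coarse = da.coarsen(np.mean, darr, {0: 4, 1: 4})               # shape (256, 256), lazy
result = coarse.compute()
```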