issue_comments
3 rows where author_association = "CONTRIBUTOR", issue = 375126758, and user = 14314623, sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 439892007 | https://github.com/pydata/xarray/issues/2525#issuecomment-439892007 | https://api.github.com/repos/pydata/xarray/issues/2525 | MDEyOklzc3VlQ29tbWVudDQzOTg5MjAwNw== | jbusecke 14314623 | 2018-11-19T13:26:45Z | 2018-11-19T13:26:45Z | CONTRIBUTOR | I think mean would be a good default (thinking about cell-center dimensions like longitude and latitude), but I would very much like it if other functions could be specified, e.g. for grid-face dimensions (where min and max would be more appropriate) and other coordinates like surface area (where sum would be the most appropriate function). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Multi-dimensional binning/resampling/coarsening 375126758 |
| 435201618 | https://github.com/pydata/xarray/issues/2525#issuecomment-435201618 | https://api.github.com/repos/pydata/xarray/issues/2525 | MDEyOklzc3VlQ29tbWVudDQzNTIwMTYxOA== | jbusecke 14314623 | 2018-11-01T21:59:19Z | 2018-11-01T21:59:19Z | CONTRIBUTOR | My favorite would be | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Multi-dimensional binning/resampling/coarsening 375126758 |
| 434531970 | https://github.com/pydata/xarray/issues/2525#issuecomment-434531970 | https://api.github.com/repos/pydata/xarray/issues/2525 | MDEyOklzc3VlQ29tbWVudDQzNDUzMTk3MA== | jbusecke 14314623 | 2018-10-31T01:46:19Z | 2018-10-31T01:46:19Z | CONTRIBUTOR | I agree with @rabernat, and favor the index-based approach. For regular lon-lat grids it's easy enough to implement a weighted mean, and for irregularly spaced grids and other exotic grids the coordinate-based approach might lead to errors. To me the resample API above might suggest to some users that some proper regridding (à la xESMF) onto a regular lat/lon grid is performed. `block_reduce` sounds good to me and sounds appropriate for non-dask arrays. Does anyone have experience with how `dask.coarsen` compares performance-wise? | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | Multi-dimensional binning/resampling/coarsening 375126758 |
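The comments above are debating what a block-coarsening API should look like: mean as the default reduction over index-space windows, with other reductions (min/max for grid-face dimensions, sum for extensive quantities such as cell area) selectable as needed. As a point of reference, here is a minimal sketch of that pattern using the `coarsen` method that xarray later added; the toy grid, window sizes, and the `area` coordinate are purely illustrative.

```python
import numpy as np
import xarray as xr

# Toy 4x4 field on a regular lon/lat grid, with a cell-area coordinate.
da = xr.DataArray(
    np.arange(16.0).reshape(4, 4),
    dims=("lat", "lon"),
    coords={
        "lat": np.linspace(-45.0, 45.0, 4),
        "lon": np.linspace(0.0, 270.0, 4),
        "area": (("lat", "lon"), np.ones((4, 4))),
    },
)

# Index-based block reduction over 2x2 windows; the data and the
# cell-center coordinates are averaged by default.
coarse_mean = da.coarsen(lat=2, lon=2).mean()

# A different reduction can be chosen per call, e.g. sum for an
# extensive quantity such as cell area.
coarse_area = da["area"].coarsen(lat=2, lon=2).sum()
```

In the vocabulary of the thread, this is the index-based (`block_reduce`-style) approach rather than a coordinate-based resample or regridding step.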
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
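The filtered view above can also be reproduced directly against the underlying SQLite file. A small sketch, assuming the database is the github.db export produced by github-to-sqlite (the filename is an assumption; adjust it to your own export):

```python
import sqlite3

# Connect to the SQLite export behind this page ("github.db" is an
# assumed filename).
conn = sqlite3.connect("github.db")

# Same filter and ordering as the page header: CONTRIBUTOR comments by
# user 14314623 on issue 375126758, newest update first.
rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'CONTRIBUTOR'
      AND issue = 375126758
      AND [user] = 14314623
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created, updated, body in rows:
    print(comment_id, user_id, updated, body[:60])
```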