issue_comments
16 rows where issue = 375126758 (Multi-dimensional binning/resampling/coarsening), sorted by updated_at descending
Each entry: comment id · user (author_association) · created_at · comment URL · body · reactions. Code spans that were stripped from the export are marked […].
447545224 · fujiisoup (MEMBER) · 2018-12-15T07:28:13Z
https://github.com/pydata/xarray/issues/2525#issuecomment-447545224
Thinking about its API. I like […]
Reactions: +1 ×1
439892007 · jbusecke (CONTRIBUTOR) · 2018-11-19T13:26:45Z
https://github.com/pydata/xarray/issues/2525#issuecomment-439892007
I think mean would be a good default (thinking about cell-center dimensions like longitude and latitude), but I would very much like it if other functions could be specified, e.g. for grid-face dimensions (where min and max would be more appropriate) and for other coordinates like surface area (where sum would be the most appropriate function).
Reactions: none
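(For context: this per-variable choice of reduction is what the coarsen API that eventually grew out of this issue ended up supporting. A minimal sketch using present-day xarray:)

```python
import numpy as np
import xarray as xr

# Toy 4x4 field on a 1-degree grid.
da = xr.DataArray(
    np.arange(16.0).reshape(4, 4),
    dims=("lat", "lon"),
    coords={"lat": [0.5, 1.5, 2.5, 3.5], "lon": [0.5, 1.5, 2.5, 3.5]},
)

coarse = da.coarsen(lat=2, lon=2)
coarse.mean()  # cell-center quantities (e.g. temperature): block average
coarse.sum()   # extensive quantities (e.g. surface area): block sum
coarse.max()   # grid-face style quantities: block maximum
```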
439766587 · rabernat (MEMBER) · 2018-11-19T04:13:37Z
https://github.com/pydata/xarray/issues/2525#issuecomment-439766587
If I think about my applications, I would probably always want to apply […]
Reactions: none
435272976 · dcherian (MEMBER) · 2018-11-02T05:11:36Z
https://github.com/pydata/xarray/issues/2525#issuecomment-435272976
I like […]
Reactions: +1 ×1
435268965 · fujiisoup (MEMBER) · 2018-11-02T04:37:35Z
https://github.com/pydata/xarray/issues/2525#issuecomment-435268965
+1 for […]. What would the coordinates look like?
1. apply […]
Reactions: none
435213658 · shoyer (MEMBER) · 2018-11-01T22:51:55Z
https://github.com/pydata/xarray/issues/2525#issuecomment-435213658
skimage implements […]. But given that it doesn't actually duplicate any elements and needs a C-order array to work, I think it's actually just equivalent to use […]. So the super-simple version of block-reduce looks like: […]
This would work on dask arrays out of the box, but it's probably worth benchmarking whether you'd get better performance doing the operation chunk-wise (e.g., with […]).
Reactions: +1 ×1
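The code block in this comment was stripped from the export; a minimal sketch of the reshape-based block reduction it describes (the function name and signature here are my own, not necessarily shoyer's):

```python
import numpy as np

def block_reduce(arr, block_sizes, func=np.mean):
    """Reduce an array by applying func over non-overlapping blocks.

    Each block size must evenly divide the corresponding axis length.
    """
    # Split each axis of length n into (n // b, b), then reduce over
    # every block axis at once.
    new_shape = []
    for n, b in zip(arr.shape, block_sizes):
        assert n % b == 0, "block size must divide the axis length"
        new_shape.extend([n // b, b])
    reshaped = arr.reshape(new_shape)
    # The block axes are the odd-numbered axes of the reshaped array.
    return func(reshaped, axis=tuple(range(1, reshaped.ndim, 2)))

x = np.arange(16.0).reshape(4, 4)
block_reduce(x, (2, 2))  # -> 2x2 array of 2x2-block means
```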
435201618 · jbusecke (CONTRIBUTOR) · 2018-11-01T21:59:19Z
https://github.com/pydata/xarray/issues/2525#issuecomment-435201618
My favorite would be […]
Reactions: +1 ×2
435192382 · shoyer (MEMBER) · 2018-11-01T21:24:15Z
https://github.com/pydata/xarray/issues/2525#issuecomment-435192382
OK, so maybe […]. We could call this something like […]. We can save the full coordinate-based version for a later addition to […].
Reactions: none
434705757 · shoyer (MEMBER) · 2018-10-31T14:22:07Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434705757
block_reduce from skimage is indeed a small function using strides/reshape, if I remember correctly. We should certainly copy or implement it ourselves rather than adding a skimage dependency.
Reactions: none
434589377 · fujiisoup (MEMBER) · 2018-10-31T07:36:41Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434589377
[…]
Reactions: none
434531970 · jbusecke (CONTRIBUTOR) · 2018-10-31T01:46:19Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434531970
I agree with @rabernat, and favor the index-based approach. For regular lon-lat grids it's easy enough to implement a weighted mean, and for irregularly spaced grids and other exotic grids the coordinate-based approach might lead to errors. To me, the resample API above might suggest to some users that some proper regridding (a la xESMF) onto a regular lat/lon grid is performed. 'block_reduce' sounds good to me and sounds appropriate for non-dask arrays. Does anyone have experience with how 'dask.coarsen' compares performance-wise?
Reactions: none
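For reference, the dask function mentioned here is dask.array.coarsen, which takes a reduction function and a mapping of axis number to block size; a minimal usage sketch:

```python
import dask.array as da
import numpy as np

x = da.from_array(np.arange(16.0).reshape(4, 4), chunks=2)
# Reduce each 2x2 block with np.mean; the dict maps axis -> block size.
# Block sizes must evenly divide the chunk sizes along each axis.
y = da.coarsen(np.mean, x, {0: 2, 1: 2})
y.compute()  # 2x2 array of block means
```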
434480457 · rabernat (MEMBER) · 2018-10-30T21:41:17Z (edited 2018-10-30T21:41:25Z)
https://github.com/pydata/xarray/issues/2525#issuecomment-434480457
I feel that this could become too complex in the case of irregularly spaced coordinates. I slightly favor the index-based approach (as in my function above), which one calls like […]
If we do that, we can just use scikit-image's […]
Reactions: none
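The scikit-image function the thread keeps coming back to is skimage.measure.block_reduce; a minimal usage sketch:

```python
import numpy as np
from skimage.measure import block_reduce

x = np.arange(16.0).reshape(4, 4)
# Apply np.mean over non-overlapping 2x2 blocks -> a 2x2 result.
block_reduce(x, block_size=(2, 2), func=np.mean)
```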
434477550 · shoyer (MEMBER) · 2018-10-30T21:31:18Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434477550
I'm +1 for adding this feature in some form as well. From an API perspective, should the window size be specified in terms of integers or coordinates?
- I would lean towards a coordinate-based representation, since it's a little more usable/certain to be correct. It might even make sense to still call this […]
The API for resampling to a 2x2-degree latitude/longitude grid could look something like: […]
Reactions: none
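The example itself was stripped from the export; a plausible reading of the coordinate-based spelling being floated (purely hypothetical API, assuming ds is an xarray.Dataset; xarray's resample never gained this form):

```python
# Hypothetical spelling extrapolated from the comment; xarray's
# resample() never gained this multi-dimensional, coordinate-based form.
# It would average all points falling within each 2x2-degree lat/lon box:
coarse = ds.resample(lat=2.0, lon=2.0).mean()
```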
434294356 · rabernat (MEMBER) · 2018-10-30T13:10:16Z (edited 2018-10-30T13:10:39Z)
https://github.com/pydata/xarray/issues/2525#issuecomment-434294356
FYI, I do this often in my work with this sort of function: […]
Reactions: none
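The function body was stripped from the export; a minimal sketch of an index-based coarsening helper in this spirit (names and signature are my own, not rabernat's original):

```python
import numpy as np
import xarray as xr

def coarsen_da(da, factors, func=np.mean):
    """Coarsen `da` by integer factors along named dims (index-based).

    `factors` maps dim name -> block size; each block size must evenly
    divide the corresponding dim length. Coordinates are dropped for
    simplicity. A hypothetical helper, not xarray API at the time.
    """
    new_shape, reduce_axes = [], []
    for ax, dim in enumerate(da.dims):
        n, b = da.sizes[dim], factors.get(dim, 1)
        assert n % b == 0, f"{dim} length {n} not divisible by {b}"
        new_shape.extend([n // b, b])   # split axis into (blocks, block)
        reduce_axes.append(2 * ax + 1)  # reduce over each block axis
    data = func(np.asarray(da.data).reshape(new_shape), axis=tuple(reduce_axes))
    return xr.DataArray(data, dims=da.dims)

da = xr.DataArray(np.arange(16.0).reshape(4, 4), dims=("lat", "lon"))
coarsen_da(da, {"lat": 2, "lon": 2})  # 2x2 array of 2x2-block means
```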
434294114 · rabernat (MEMBER) · 2018-10-30T13:09:25Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434294114
This is being discussed in #1192 under a different name. Yes, we need this feature.
Reactions: none
434261896 · fujiisoup (MEMBER) · 2018-10-30T11:17:17Z
https://github.com/pydata/xarray/issues/2525#issuecomment-434261896
This is from a thread on SO. Does anyone have an opinion on whether we add a […]
Reactions: none
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);