issue_comments
4 rows where issue = 375126758 and user = 1217238 sorted by updated_at descending
id: 435213658 · https://github.com/pydata/xarray/issues/2525#issuecomment-435213658
issue_url: https://api.github.com/repos/pydata/xarray/issues/2525 · node_id: MDEyOklzc3VlQ29tbWVudDQzNTIxMzY1OA==
user: shoyer (1217238) · author_association: MEMBER · created_at: 2018-11-01T22:51:55Z · updated_at: 2018-11-01T22:51:55Z

skimage implements `block_reduce` as a small function built on a strided view of the array. But given that it doesn't actually duplicate any elements and needs a C-order array to work, I think it's actually just equivalent to using `reshape`. So the super-simple version of block-reduce looks like the sketch below.
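(A reconstruction rather than the verbatim snippet: it assumes NumPy, dimensions evenly divisible by `block_size`, and a `func` that accepts a tuple of axes.)

```python
import numpy as np

def block_reduce(image, block_size, func=np.sum):
    # Split each axis of length n into (n // b, b), giving every block
    # its own pair of axes: shape (4, 6) with blocks (2, 3) -> (2, 2, 2, 3).
    blocked_shape = []
    for existing_size, bs in zip(image.shape, block_size):
        blocked_shape.extend([existing_size // bs, bs])
    blocked = image.reshape(tuple(blocked_shape))
    # Reduce over the within-block axes (every second axis).
    return func(blocked, axis=tuple(range(1, blocked.ndim, 2)))
```

For example, `block_reduce(np.arange(24).reshape(4, 6), (2, 3))` gives `[[24, 42], [96, 114]]`: the sums of each 2x3 block.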
This would work on dask arrays out of the box, but it's probably worth benchmarking whether you'd get better performance doing the operation chunk-wise (e.g., with dask's `map_blocks`).
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: Multi-dimensional binning/resampling/coarsening (375126758)

id: 435192382 · https://github.com/pydata/xarray/issues/2525#issuecomment-435192382
issue_url: https://api.github.com/repos/pydata/xarray/issues/2525 · node_id: MDEyOklzc3VlQ29tbWVudDQzNTE5MjM4Mg==
user: shoyer (1217238) · author_association: MEMBER · created_at: 2018-11-01T21:24:15Z · updated_at: 2018-11-01T21:24:15Z

OK, so maybe the integer-window version is the right place to start. We could call this something like `coarsen()`. We can save the full coordinate-based version for a later addition to `resample()`.
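To make the naming concrete, an illustrative sketch of an integer-window `coarsen()` call (hypothetical at the time of the comment, though it matches the API xarray later shipped):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(24.0).reshape(4, 6), dims=("y", "x"))
# Integer windows: aggregate each 2x3 block; any reduction works, not just mean.
coarse = da.coarsen(y=2, x=3).mean()  # -> shape (2, 2)
```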
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: Multi-dimensional binning/resampling/coarsening (375126758)

id: 434705757 · https://github.com/pydata/xarray/issues/2525#issuecomment-434705757
issue_url: https://api.github.com/repos/pydata/xarray/issues/2525 · node_id: MDEyOklzc3VlQ29tbWVudDQzNDcwNTc1Nw==
user: shoyer (1217238) · author_association: MEMBER · created_at: 2018-10-31T14:22:07Z · updated_at: 2018-10-31T14:22:07Z

block_reduce from skimage is indeed a small function using strides/reshape, if I remember correctly. We should certainly copy it or implement it ourselves rather than adding a skimage dependency.
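For concreteness, a sketch of the strided-view trick involved (mimicking skimage's `view_as_blocks` helper; it assumes a C-contiguous array whose shape divides evenly by `block_shape`):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def view_as_blocks(arr, block_shape):
    # Zero-copy blocked view: for 2-D input,
    # out[i, j, u, v] == arr[i * b0 + u, j * b1 + v].
    outer = tuple(n // b for n, b in zip(arr.shape, block_shape))
    strides = tuple(s * b for s, b in zip(arr.strides, block_shape)) + arr.strides
    return as_strided(arr, shape=outer + tuple(block_shape), strides=strides)

x = np.arange(24.0).reshape(4, 6)
blocks = view_as_blocks(x, (2, 3))   # shape (2, 2, 2, 3), no data copied
means = blocks.mean(axis=(-2, -1))   # per-block means, shape (2, 2)
```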
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: Multi-dimensional binning/resampling/coarsening (375126758)

id: 434477550 · https://github.com/pydata/xarray/issues/2525#issuecomment-434477550
issue_url: https://api.github.com/repos/pydata/xarray/issues/2525 · node_id: MDEyOklzc3VlQ29tbWVudDQzNDQ3NzU1MA==
user: shoyer (1217238) · author_association: MEMBER · created_at: 2018-10-30T21:31:18Z · updated_at: 2018-10-30T21:31:18Z

I'm +1 for adding this feature in some form as well. From an API perspective, should the window size be specified in terms of integers or coordinates?

I would lean towards a coordinate-based representation, since it's a little more usable/certain to be correct. It might even make sense to still call this `resample`. The API for resampling to a 2x2 degree latitude/longitude grid could look something like the sketch below.
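(The comment's own example was elided; the `lat=2, lon=2` resample spelling below is a hypothetical illustration of the proposal, not a real xarray API, while the last line uses the integer-window `coarsen` that does exist.)

```python
import numpy as np
import xarray as xr

# Stand-in dataset on a uniform 1-degree global grid.
ds = xr.Dataset(
    {"t2m": (("lat", "lon"), np.zeros((180, 360)))},
    coords={"lat": np.arange(-89.5, 90.0, 1.0), "lon": np.arange(0.5, 360.0, 1.0)},
)

# Hypothetical coordinate-based spelling, where 2 means "2 degrees":
#     ds.resample(lat=2, lon=2).mean()
# On a uniform 1-degree grid, the integer-window equivalent would be:
coarse = ds.coarsen(lat=2, lon=2).mean()  # -> 90 x 180 grid of 2x2-degree cells
```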
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
issue: Multi-dimensional binning/resampling/coarsening (375126758)
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);