issue_comments
2 rows where author_association = "NONE" and issue = 902009258, sorted by updated_at descending
id: 850552195
html_url: https://github.com/pydata/xarray/issues/5376#issuecomment-850552195
issue_url: https://api.github.com/repos/pydata/xarray/issues/5376
node_id: MDEyOklzc3VlQ29tbWVudDg1MDU1MjE5NQ==
user: d-v-b (3805136)
created_at: 2021-05-28T17:04:27Z
updated_at: 2021-05-28T17:04:27Z
author_association: NONE
reactions: none (all counts 0)
issue: Multi-scale datasets and custom indexes (902009258)
body:

I'm not sure when dynamic downsampling would be preferred over loading previously downsampled images from disk. In my usage, the application consuming the multiresolution images is an interactive data visualization tool, and the goal is to minimize latency and maximize the responsiveness of the visualization. That would be difficult if the multiresolution images were generated dynamically from the full image: under a dynamic scheme the lowest-resolution image, i.e. the one that should be fastest to load, would instead require the most I/O and compute to generate.

Although I do not do this today, I can think of a lot of uses for this functionality: a data processing pipeline could expose intermediate data over HTTP via xpublish, but this would require a good caching layer to prevent re-computing the same region of the data repeatedly.
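The trade-off this comment describes, precomputing each level so the coarsest image is also the cheapest to read, can be sketched with xarray's `coarsen()`. This is a minimal illustration, not code from the issue: the array shape, the 2x per-level factor, and the `image_level_*.nc` filenames are all assumptions.

```python
# Sketch: precompute a multiresolution pyramid with xarray's coarsen().
# Shapes, the 2x factor, and output filenames are illustrative assumptions.
import numpy as np
import xarray as xr

full = xr.DataArray(
    np.random.rand(1024, 1024),
    dims=("y", "x"),
    name="image",
)

# Build each level from the previous one: 1024 -> 512 -> 256 -> 128.
pyramid = {0: full}
for level in range(1, 4):
    pyramid[level] = (
        pyramid[level - 1].coarsen(y=2, x=2, boundary="trim").mean()
    )

# Persist every level once, so an interactive viewer can read the
# coarsest file directly instead of recomputing it on each request.
for level, arr in pyramid.items():
    arr.to_dataset().to_netcdf(f"image_level_{level}.nc")
```

Written this way, the cheapest image to serve is also the smallest file, which is exactly the latency property the comment is after.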
id: 848914165
html_url: https://github.com/pydata/xarray/issues/5376#issuecomment-848914165
issue_url: https://api.github.com/repos/pydata/xarray/issues/5376
node_id: MDEyOklzc3VlQ29tbWVudDg0ODkxNDE2NQ==
user: joshmoore (88113)
created_at: 2021-05-26T16:23:13Z
updated_at: 2021-05-26T16:23:13Z
author_association: NONE
reactions: none (all counts 0)
issue: Multi-scale datasets and custom indexes (902009258)
body:

I don't think I am familiar enough to really judge between the suggestions, @benbovy, but I'm intrigued. I think there's certainly something to be gained just by having a data structure which says these arrays/datasets represent a multiscale series. One real benefit, though, will come when access to that structure can simplify the client code needed to interactively load that data, e.g. with prefetching.
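The "data structure which says these arrays represent a multiscale series" could be as small as the container below. The `Multiscale` class and its `level_for()` method are hypothetical names for this sketch; neither is an existing xarray or xpublish API.

```python
# Sketch of a container that declares a set of arrays to be one
# multiscale series and picks a level for a viewer's pixel budget.
# Multiscale and level_for are hypothetical, not an existing API.
from dataclasses import dataclass

import numpy as np
import xarray as xr


@dataclass
class Multiscale:
    levels: list  # DataArrays ordered finest (index 0) to coarsest

    def level_for(self, max_pixels: int) -> xr.DataArray:
        # Return the finest level that fits the budget; if even the
        # coarsest level is too large, return the coarsest anyway.
        for arr in self.levels:
            if arr.sizes["y"] * arr.sizes["x"] <= max_pixels:
                return arr
        return self.levels[-1]


full = xr.DataArray(np.zeros((512, 512)), dims=("y", "x"))
levels = [full]
for _ in range(3):
    levels.append(levels[-1].coarsen(y=2, x=2, boundary="trim").mean())

ms = Multiscale(levels)
thumb = ms.level_for(max_pixels=64 * 64)  # selects the 64x64 level
```

A client that knows its viewport size can then ask the series for an appropriate level, and a prefetching layer could warm the next-finer level in the background.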
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
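For reference, the filter this page applies (author_association = "NONE", issue 902009258, newest first) maps onto that schema as a single query. The `github.db` filename below is an assumption about how the scraped database file was named.

```python
# Sketch: reproduce this page's filter with the stdlib sqlite3 module.
# "github.db" is an assumed filename for the scraped database.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, body
    FROM issue_comments
    WHERE author_association = 'NONE' AND issue = 902009258
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user, created, updated, body in rows:
    print(comment_id, updated, body[:80])
```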