issue_comments: 551090042
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/3213#issuecomment-551090042 | https://api.github.com/repos/pydata/xarray/issues/3213 | 551090042 | MDEyOklzc3VlQ29tbWVudDU1MTA5MDA0Mg== | 4605410 | 2019-11-07T13:57:46Z | 2019-11-07T13:57:46Z | NONE | @oliverhiggs I've also noticed a huge computational overhead when joining xarray datasets where the result would be sparse: something like a minute of computation time to join two 10 GB datasets, even when there are no overlapping indices. I'm not sure whether a sparse representation would help, but it's possible we'd get a reduced memory footprint and a faster merge/concat time with this kind of support. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 479942077 |
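
For context, the blow-up the commenter describes is straightforward to reproduce: an outer join over non-overlapping indices forces xarray to materialize dense fill values for every missing index combination. The snippet below is a minimal sketch of that behavior, not code from the thread; the dataset sizes and the `sample` dimension name are illustrative.

```python
import numpy as np
import xarray as xr

# Two datasets whose 'x' indices are completely disjoint.
a = xr.Dataset({"v": ("x", np.ones(1000))}, coords={"x": np.arange(1000)})
b = xr.Dataset({"v": ("x", np.ones(1000))}, coords={"x": np.arange(1000, 2000)})

# Concatenating along a new dimension outer-joins 'x' (the default), so the
# result is a dense (2, 2000) array in which half the values are NaN fill.
dense = xr.concat([a, b], dim="sample")
print(dense["v"].shape)                    # (2, 2000)
print(float(dense["v"].isnull().mean()))   # 0.5 -- half the memory is fill
```

A sparse-aware merge/concat, as the comment suggests, could store only the non-fill values instead of materializing the full dense result; whether and how xarray should produce sparse output from such operations is the question under discussion in issue 3213.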