issue_comments: 489027263
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/1823#issuecomment-489027263 | https://api.github.com/repos/pydata/xarray/issues/1823 | 489027263 | MDEyOklzc3VlQ29tbWVudDQ4OTAyNzI2Mw== | 35968931 | 2019-05-03T09:25:00Z | 2019-05-03T09:25:00Z | MEMBER | @dcherian I'm sorry, I'm very interested in this, but after reading the issues I'm still not clear on what's being proposed. What exactly is the bottleneck? Is it reading the coords from all the files? Is it loading the coord values into memory? Is it performing the alignment checks on those coords once they're in memory? Is it performing alignment checks on the dimensions? Is this suggestion relevant to datasets that don't have any coords? Which of these steps would a<br><br>But this is already an option to | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 288184220
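The coord-comparison step the questions above circle around can be exercised in isolation with `xarray.align`. This is a minimal sketch, not the proposal itself: the two in-memory datasets are made up stand-ins for coords read from separate files, and it only assumes the documented `join="exact"` / `join="override"` options of `xarray.align` (skipping the value comparison is what `override` does).

```python
import numpy as np
import xarray as xr

# Two toy datasets sharing a dimension, standing in for coords that
# open_mfdataset would read from two files on disk.
a = xr.Dataset({"t": ("x", np.arange(3.0))}, coords={"x": [0, 1, 2]})
b = xr.Dataset({"t": ("x", np.arange(3.0))}, coords={"x": [0, 1, 2]})

# join="exact" performs the full alignment check: the "x" coord values
# of both datasets are compared element-wise before alignment succeeds.
exact_a, exact_b = xr.align(a, b, join="exact")

# join="override" skips that comparison entirely and reuses the first
# dataset's index, which is the cheap path when you trust the files.
over_a, over_b = xr.align(a, b, join="override")
```

`open_mfdataset` exposes the same knob through its `join` (and, for variable comparison, `compat`) parameters, so whether this helps depends on which of the steps above is actually the slow one for a given set of files.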