issue_comments: 290477014
html_url: https://github.com/pydata/xarray/issues/1340#issuecomment-290477014
issue_url: https://api.github.com/repos/pydata/xarray/issues/1340
id: 290477014
node_id: MDEyOklzc3VlQ29tbWVudDI5MDQ3NzAxNA==
user: 1217238
created_at: 2017-03-30T17:07:50Z
updated_at: 2017-03-30T17:07:50Z
author_association: MEMBER
body:

My strong suspicion is that the bottleneck here is xarray checking all the coordinates for equality in concat, when deciding whether to add a "time" dimension or not. Try passing `coords='minimal'`.

This was a convenient check for small/in-memory datasets, but possibly it's not a good one going forward. It's generally slow to load all the coordinate data for comparisons, and it's even worse with the current implementation, which computes pairwise comparisons of arrays with dask instead of doing them all at once in parallel.

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 218260909
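
A minimal sketch of the suggested workaround, written against the current `open_mfdataset` API (the `combine='nested'` and `compat='override'` keywords postdate this 2017 comment; the file paths and the "time" concat dimension here are hypothetical):

```python
import glob

import xarray as xr

# Hypothetical set of single-time-step files to combine.
paths = sorted(glob.glob("data/*.nc"))

# coords='minimal' / data_vars='minimal' restrict concatenation to
# variables that actually contain the concat dimension, so xarray no
# longer loads and compares every coordinate across all files.
ds = xr.open_mfdataset(
    paths,
    combine="nested",
    concat_dim="time",
    coords="minimal",
    data_vars="minimal",
    compat="override",  # take the first copy instead of checking equality
)
```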
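The last point, that pairwise dask comparisons serialize work which could be batched, can be illustrated with plain dask; the arrays, sizes, and chunking below are made up for the example:

```python
import dask
import dask.array as da
import numpy as np

# Hypothetical stand-ins for the coordinate arrays being compared.
coords = [da.from_array(np.arange(1_000_000), chunks=100_000) for _ in range(8)]
reference = coords[0]

# Pairwise: each .compute() triggers a separate dask execution,
# so the comparisons run one after another.
slow = [bool((reference == c).all().compute()) for c in coords[1:]]

# Batched: hand all comparison graphs to dask at once, so they can be
# scheduled in parallel and share common intermediate results.
checks = [(reference == c).all() for c in coords[1:]]
fast = [bool(r) for r in dask.compute(*checks)]
```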