issue_comments: 1299369449
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/7239#issuecomment-1299369449 | https://api.github.com/repos/pydata/xarray/issues/7239 | 1299369449 | IC_kwDOAMm_X85Ncs3p | 90008 | 2022-11-01T23:54:07Z | 2022-11-01T23:54:07Z | CONTRIBUTOR | I think these are good alternatives. From my experiments (and I'm still trying to create a minimal reproducible example that shows the real problem behind the slowdowns), reindexing can be quite expensive. We used to have many coordinates (to ensure that critical metadata stays with the data_variables), and those coordinates were causing slowdowns in reindexing operations. Thus the two calls. However, for this particular issue, I think that documenting the strategies proposed in the docstring is good enough. I have a feeling that if one can get to the bottom of #7224, the performance concerns here will be mitigated too. We can leave the performance discussion to: https://github.com/pydata/xarray/issues/7224 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 1429172192 |
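
To make the coordinate-related slowdown concrete, here is a minimal sketch (not from the comment itself; the variable names, sizes, and coordinate count are invented for illustration). It mimics the setup described above: many non-dimension coordinates attached so metadata stays with the data variable, each of which must also be realigned when reindexing, and one possible workaround of dropping them before the `reindex` call.

```python
import numpy as np
import xarray as xr

n = 100_000
ds = xr.Dataset(
    {"signal": ("time", np.random.rand(n))},
    coords={"time": np.arange(n)},
)

# Attach many per-sample "metadata" coordinates so they stay with the
# data variable, as described in the comment above.
for i in range(50):
    ds = ds.assign_coords({f"meta_{i}": ("time", np.random.rand(n))})

new_time = np.arange(0, n, 2)

# Every coordinate along "time" gets reindexed along with the data,
# which multiplies the cost of the operation.
reindexed = ds.reindex(time=new_time)

# One possible workaround (an assumption, not the comment's stated fix):
# drop the metadata coordinates first, reindex the slim dataset, then
# re-attach whatever metadata is still needed afterwards.
slim = ds.drop_vars([f"meta_{i}" for i in range(50)])
reindexed_slim = slim.reindex(time=new_time)
```

Timing the two `reindex` calls (e.g. with `%timeit` in IPython) should show the slim dataset reindexing substantially faster, since only the data variable and the dimension coordinate need alignment.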