issues: 1421441672
| Field | Value |
|---|---|
| id | 1421441672 |
| node_id | PR_kwDOAMm_X85BcmP0 |
| number | 7209 |
| title | Optimize some copying |
| user | 43316012 |
| state | closed |
| locked | 0 |
| assignee |  |
| milestone |  |
| comments | 8 |
| created_at | 2022-10-24T21:00:21Z |
| updated_at | 2022-12-08T20:09:49Z |
| closed_at | 2022-11-30T23:36:56Z |
| author_association | COLLABORATOR |
| active_lock_reason |  |
| draft | 0 |
| pull_request | pydata/xarray/pulls/7209 |
| body | I have passed along some more memo dicts, which could prevent some double deep-copying of the same data (don't know how exactly, but who knows :P). Also, I have found some copy calls that did not pass along the deep argument (I am not sure if that breaks things, let's find out). And finally, I have found some places where shallow copies are enough. Altogether it should improve performance a lot when copying things around. |
| reactions | {"url": "https://api.github.com/repos/pydata/xarray/issues/7209/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |
| performed_via_github_app |  |
| state_reason |  |
| repo | 13221727 |
| type | pull |
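The PR body above refers to passing memo dicts through nested copy calls so the same data is not deep-copied twice. Below is a minimal, hypothetical sketch of that pattern using Python's standard `copy` module; the `Variable` and `Dataset` classes here are illustrative stand-ins, not xarray's actual implementation.

```python
# Sketch of the memo-dict idea: copy.deepcopy records already-copied objects
# in a shared `memo` dict, so forwarding the same memo through nested
# __deepcopy__ calls prevents shared data from being deep-copied twice.
import copy

import numpy as np


class Variable:
    """Toy container holding a NumPy array (illustrative only)."""

    def __init__(self, data):
        self.data = data

    def __deepcopy__(self, memo):
        # Forward the memo so shared data is copied at most once.
        return Variable(copy.deepcopy(self.data, memo))


class Dataset:
    """Toy container whose variables may share the same underlying array."""

    def __init__(self, variables):
        self.variables = variables

    def __deepcopy__(self, memo):
        # Passing `memo` down means an array referenced by several variables
        # is deep-copied once and then reused from the memo dict.
        return Dataset(
            {name: copy.deepcopy(var, memo) for name, var in self.variables.items()}
        )


shared = np.arange(1_000_000)
ds = Dataset({"a": Variable(shared), "b": Variable(shared)})

copied = copy.deepcopy(ds)
# Both copied variables point at one new array, not two separate copies.
assert copied.variables["a"].data is copied.variables["b"].data
```

Without the forwarded memo, each nested call would start from an empty memo dict and the shared array would be duplicated once per variable, which is the extra copying the PR aims to avoid.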