id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type 1462057503,PR_kwDOAMm_X85DlALl,7315,Fix polyval overloads,43316012,closed,0,,,1,2022-11-23T16:27:21Z,2022-12-08T20:10:16Z,2022-11-26T15:42:51Z,COLLABORATOR,,0,pydata/xarray/pulls/7315," - [x] Closes #7312 - [ ] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~New functions/methods are listed in `api.rst`~ Turns out the default value of arguments is important for overloads, haha.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7315/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1377097243,PR_kwDOAMm_X84_J8JL,7051,Add parse_dims func,43316012,closed,0,,,6,2022-09-18T15:36:59Z,2022-12-08T20:10:01Z,2022-11-30T23:36:33Z,COLLABORATOR,,0,pydata/xarray/pulls/7051,"This PR adds a `utils.parse_dims` function for parsing one or more dimensions. Currently every function that accepts multiple dimensions does this by itself. 
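As an illustration of the idea, a minimal sketch of such a helper could look like this (hypothetical signature and behavior; the actual `utils.parse_dims` in this PR may differ):

```python
# Hypothetical sketch of a centralized dimension-parsing helper.
def parse_dims(dim, all_dims, *, check_exists=True):
    # None (or Ellipsis) means 'all dimensions'.
    if dim is None or dim is ...:
        return tuple(all_dims)
    # A single dimension name is promoted to a one-element tuple.
    if isinstance(dim, str):
        dim = (dim,)
    dims = tuple(dim)
    # Optionally validate that every requested dimension exists.
    if check_exists:
        missing = [d for d in dims if d not in all_dims]
        if missing:
            raise ValueError(f'Dimensions {missing} do not exist.')
    return dims
```

Every caller would then get the same None/str/iterable handling and the same error message for free.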
I decided to first see if it would be useful to centralize the dimension parsing and collect inputs before adding it to other functions.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7051/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1421441672,PR_kwDOAMm_X85BcmP0,7209,Optimize some copying,43316012,closed,0,,,8,2022-10-24T21:00:21Z,2022-12-08T20:09:49Z,2022-11-30T23:36:56Z,COLLABORATOR,,0,pydata/xarray/pulls/7209,"- [x] Potentially closes #7181 - [x] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` I have passed along some more memo dicts, which could prevent some double deep-copying of the same data (don't know how exactly, but who knows :P) Also, I have found some copy calls that did not pass along the `deep` argument (I am not sure if that breaks things, let's find out). And finally, I have found some places where shallow copies are enough. Altogether, this should improve performance a lot when copying things around.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7209/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull 1468671915,PR_kwDOAMm_X85D65Bg,7335,Enable mypy warn unused ignores,43316012,closed,0,,,1,2022-11-29T20:42:08Z,2022-12-08T20:09:06Z,2022-12-01T16:14:07Z,COLLABORATOR,,0,pydata/xarray/pulls/7335,"This PR adds the mypy option ""warn_unused_ignores"" which will raise an error if a `# type: ignore` is used where it is no longer necessary. This should help us keep our type annotations up to date. I am not sure if this will lead to many issues whenever e.g. numpy changes/improves its typing, so we might get errors whenever there is a new version. 
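For reference, enabling it is a one-line config change (shown here in setup.cfg style; the exact config file and section may differ in our setup):

```ini
[mypy]
warn_unused_ignores = True
```

With this set, mypy reports an `unused-ignore` error for every `# type: ignore` comment that no longer suppresses anything.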
Maybe it is not that bad, or maybe we can remove the option again later and only run this check manually from time to time?","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/7335/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull