html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/pull/5580#issuecomment-877021079,https://api.github.com/repos/pydata/xarray/issues/5580,877021079,MDEyOklzc3VlQ29tbWVudDg3NzAyMTA3OQ==,28786187,2021-07-09T08:41:15Z,2021-07-09T08:55:14Z,CONTRIBUTOR,"@keewis Thanks for the pointers, I'd say that nothing public facing should change in 0.18 now. OT (edit): By the way, these incompatibilities happen when one side decides to change the API without considering that some users may actually use that interface (and looking at the pandas' ""deprecation"" list, I fear that this will only get worse). Nice from the `xarray` people to have a section about keeping backwards compatibility as much as possible in their contribution guidelines. As for the tests, I found the tests that @max-sixty put in and extended them (see second and third commits in this PR). However, now there is one dataset setup and then 4(!) asserts, which seems to be too much to follow nicely. Imagine all of them break, you fix the first, only to find out that the second breaks as well. so you fix that, only to find out that the third breaks too, and so on. @Illviljan It is a good idea, however, I'd prefer if those changes were introduced as an *option* first, before changing the default behaviour.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,937336962 https://github.com/pydata/xarray/pull/5580#issuecomment-875075222,https://api.github.com/repos/pydata/xarray/issues/5580,875075222,MDEyOklzc3VlQ29tbWVudDg3NTA3NTIyMg==,28786187,2021-07-06T20:55:49Z,2021-07-06T21:02:46Z,CONTRIBUTOR,"Hi @max-sixty, Sure, but it will take a bit. Could you point me to right places for the docs? Just the filenames would do. I would be in favour of waiting a little with this change to get a few more opinions. This change will ignore `display_max_rows` for everything except `dataset.__repr__`, which may not be what some people might expect or hope. Any ideas how to get more people to weigh in? Edit: I would also like to separate the tests, to make it easier to follow if something breaks, but the setup for the test dataset would be the same. Any preferences or best practices for the code layout in such a case without duplicating too much of the code? However, that can probably wait for another PR.","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,937336962