id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
874331538,MDExOlB1bGxSZXF1ZXN0NjI4OTE0NDQz,5252,"Add mode=""r+"" for to_zarr and use consolidated writes/reads by default",1217238,closed,0,,,14,2021-05-03T07:57:16Z,2021-06-22T06:51:35Z,2021-06-17T17:19:26Z,MEMBER,,0,pydata/xarray/pulls/5252,"`mode=""r+""` only allows modifying pre-existing array values in a Zarr store. This makes it a safer default `mode` when doing a limited `region` write. It also offers a nice performance bonus when using consolidated metadata, because the store to modify can be opened in ""consolidated"" mode -- rather than the painfully slow non-consolidated mode. This PR includes several related changes to `to_zarr()`: 1. It adds support for the new `mode=""r+""`. 2. `consolidated=True` in `to_zarr()` now means ""open in consolidated mode"" if using `mode=""r+""`, instead of ""write in consolidated mode"" (which would not make sense for r+). 3. It allows setting `consolidated=True` when using `region`, mostly for the sake of fast store opening with r+. 4. Validation in `to_zarr()` has been reorganized to always use the _existing_ Zarr group, rather than re-opening Zarr stores from scratch, which could require additional network requests. 5. Incidentally, I've renamed the `ZarrStore.ds` attribute to `ZarrStore.zarr_group`, which is a much more descriptive name. These changes gave me a ~5x boost in write performance in a large parallel job making use of `to_zarr` with `region`. 
- [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/5252/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,pull
29136905,MDU6SXNzdWUyOTEzNjkwNQ==,60,Implement DataArray.idxmax(),1217238,closed,0,,741199,14,2014-03-10T22:03:06Z,2020-03-29T01:54:25Z,2020-03-29T01:54:25Z,MEMBER,,,,"Should match the pandas function: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html ","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/60/reactions"", ""total_count"": 2, ""+1"": 2, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,completed,13221727,issue
188113943,MDU6SXNzdWUxODgxMTM5NDM=,1097,"Better support for subclasses: tests, docs and API",1217238,open,0,,,14,2016-11-08T21:54:00Z,2019-08-22T13:07:44Z,,MEMBER,,,,"Given that people *do* currently subclass xarray objects, it's worth considering making a subclass API like pandas: http://pandas.pydata.org/pandas-docs/stable/internals.html#subclassing-pandas-data-structures At the very least, it would be nice to have docs that describe how/when it's safe to subclass, and tests that verify our support for such subclasses.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1097/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue