pull_requests

26 rows where user = 35919497
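
The row filter on this page corresponds to a single-table SQL query. Below is a minimal sketch reproducing it with Python's sqlite3 module, assuming a local copy of the underlying SQLite database; the filename `github.db` is an assumption, while the table and column names come from the schema at the bottom of this page.

```python
import sqlite3

# Hypothetical local copy of the SQLite file behind this Datasette page.
conn = sqlite3.connect("github.db")

# Reproduce this page's filter: pull requests where user = 35919497.
rows = conn.execute(
    "SELECT number, state, title, merged_at "
    "FROM pull_requests WHERE user = ? ORDER BY id",
    (35919497,),
).fetchall()

for number, state, title, merged_at in rows:
    print(f"#{number} [{state}] {title} (merged: {merged_at})")
```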

id ▼ node_id number state locked title user body created_at updated_at closed_at merged_at merge_commit_sha assignee milestone draft head base author_association auto_merge repo url merged_by
418979144 MDExOlB1bGxSZXF1ZXN0NDE4OTc5MTQ0 4071 closed 0 #1621 optional decode timedelta aurghs 35919497 Related to ticket #1621. Add a `decode_timedelta` kwarg to `open_dataset`, `xr.open_dataarray`, `xr.open_zarr` and `xr.decode_cf`. If not passed explicitly, the behaviour is unchanged. - [x] Tests added for `xr.decode_cf`. - [x] Passes `isort -rc . && black . && mypy . && flake8` - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API 2020-05-16T14:57:39Z 2020-05-19T15:44:21Z 2020-05-19T15:43:54Z 2020-05-19T15:43:54Z 742d00076c8e79cb753b4b4856dbbef5f52878c6     0 fd7cddee2137f2a3056e6ac5b6086b406cad9907 2542a63f6ebed1a464af7fc74b9f3bf302925803 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4071  
495760925 MDExOlB1bGxSZXF1ZXN0NDk1NzYwOTI1 4477 closed 0 WIP: Proposed refactor of read API for backends aurghs 35919497 The first draft of the new backend API: - Move decoding inside the backends. - Backends return `Dataset` with `BackendArray`. - Xarray manages chunking and caching. - Some code is duplicated; it will be simplified later. - Zarr chunking is still inside the backend for now. cc @jhamman @shoyer - [x] Addresses #4309 - [ ] Tests added - [ ] Passes `isort . && black . && mypy . && flake8` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2020-09-30T20:12:36Z 2020-10-22T15:07:33Z 2020-10-22T15:06:39Z 2020-10-22T15:06:39Z cc271e61077c543e0f3b1a06ad5e905ea2c91617     0 aa2320921f939238cbe2cad14741bf07564d03ce db4f03e467d13229512f8f7924dc142db1b9486b COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4477  
499345640 MDExOlB1bGxSZXF1ZXN0NDk5MzQ1NjQw 4494 closed 0 Remove duplicated maybe_chunk function aurghs 35919497 I propose this small change with a view to unifying in `open_dataset` the logic of zarr chunking with that of the other backends. Currently, the function `maybe_chunk` is duplicated: it is defined inside the function `dataset.chunk` and as a method of `zarr.ZarrStore`. The latter was added with the recent merge of #4187. I merged the two functions into a private function `_maybe_chunk` inside the `dataset` module. - [x] Addresses #4309 - [ ] Tests added - [x] Passes `isort . && black . && mypy . && flake8` 2020-10-07T15:42:35Z 2020-12-10T10:27:34Z 2020-10-08T15:10:46Z 2020-10-08T15:10:45Z 49e3032ddfa3fe86361300fd08db4764ee718bf1     0 44dc25011ef64e2336c4f2e348e6a7f4b68fb4d1 544bbe204362709fb6c2d0a4176e1646954ceb9a COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4494  
511373629 MDExOlB1bGxSZXF1ZXN0NTExMzczNjI5 4547 closed 0 Update signature open_dataset for API v2 aurghs 35919497 Proposal for the new API of `open_dataset()`. It is implemented in `apiv2.py` and doesn't modify the current behavior of `api.open_dataset()`. It is something in between the first and second alternatives suggested at https://github.com/pydata/xarray/issues/4490#issue-715374721; see the related quoted text: > **Describe alternatives you've considered** > > For the overall approach: > > 1. We could keep the current design, with separate keyword arguments for decoding options, and just be very careful about passing around these arguments. This seems pretty painful for the backend refactor, though. > 2. We could keep the current design only for the user facing `open_dataset()` interface, and then internally convert into the `DecodingOptions()` struct for passing to backend constructors. This would provide much needed flexibility for backend authors, but most users wouldn't benefit from the new interface. Perhaps this would make sense as an intermediate step? Instead of a class for the decoders, I have added a function: `resolve_decoders_kwargs`. `resolve_decoders_kwargs` performs two tasks: - If `decode_cf` is `False`, it sets all the decoders supported by the backend to `False` (using `inspect`). - It filters out the `None` decoder keywords. So xarray manages the `decode_cf` keyword and passes only the non-default decoders on to the backend. If the user sets a decoder not supported by the backend to a non-None value, the backend will raise an error. With this implementation `drop_variable` should always be supported by the backend, but I think this could be implemented easily by all the backends. I wouldn't group it with the decoders: to me, it seems to be more a filter than a decoder. The behavior of `decode_cf` is unchanged. PRO: - the user doesn't need to import and instantiate a class. - users get argument completion on `open_dataset`. - the backend defines the accepted decoders directly in the `open_backend_dataset_${engine}` API. - xarray manages `decode_cf`, not the backends.… 2020-10-28T08:35:54Z 2021-02-11T01:50:09Z 2020-11-06T14:43:10Z 2020-11-06T14:43:10Z ba989f65e800c1dd5a308c7f14bda89963ee2bd5     0 73328accb529cd9b7f208bc0ed72d32e6cfdf5b2 063606b90946d869e90a6273e2e18ed24bffb052 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4547  
512328068 MDExOlB1bGxSZXF1ZXN0NTEyMzI4MDY4 4550 closed 0 WIP: Zarr chunks refactor aurghs 35919497 This work aims to harmonize the way zarr deals with chunking, so that it behaves like the other backends, and to unify the code. Most of the changes involve the new API, apiv2.py, except for some changes in the code that was added with the merge of https://github.com/pydata/xarray/pull/4187. Main changes: - Refactor the `apiv2.dataset_from_backend_dataset` function. - Move `get_chunks` from `zarr` to `dataset`. Current status: - In `apiv2.open_dataset`, `chunks='auto'` and `chunks={}` now have the same behaviour. - In `apiv2.open_dataset`, the default chunking is now provided by the backend for all backends; if it is not available, one big chunk is used. Missing points: - standardize the key in encodings that defines the on-disk chunks: `chunksizes` - add a specific key in encodings for preferred chunking (currently `chunks` is used) There is one open point still to be discussed: `dataset.chunk` and `open_dataset(..., chunks=...)` have different behaviors. `dataset.chunk(chunks={})` chunks the dataset with only one chunk per variable, while `open_dataset(..., chunks={})` uses `encodings['chunks']`, when available. Note that `chunks=None` also behaves differently: `open_dataset(..., chunks=None)` (or `open_dataset(...)`, the default) returns variables without chunks, while `dataset.chunk(chunks=None)` (or `dataset.chunk()`, the default) behaves like `dataset.chunk(chunks={})`. Probably it's not worth changing. - [x] related to https://github.com/pydata/xarray/issues/4496 - [ ] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2020-10-29T14:44:31Z 2020-12-10T10:28:06Z 2020-11-10T16:08:53Z   e18d9f6bc8e332e192acea4d73e7ac6d4be0ee50     0 c6d341c7ad0190588184d4126f2f8236fc162da8 063606b90946d869e90a6273e2e18ed24bffb052 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4550  
519979819 MDExOlB1bGxSZXF1ZXN0NTE5OTc5ODE5 4577 closed 0 Backends entrypoints aurghs 35919497 - It's an update of @jhamman's pull request https://github.com/pydata/xarray/pull/3166 - It uses the `entrypoints` module to detect the installed engines. Detection happens at the `open_dataset` function call and is cached. A warning is raised in case of conflicts. - Add a class, `BackendEntrypoint`, for the backend interface instead of a function. Modified files: - add plugins.py, containing the `detect_engines` function and `BackendEntrypoint`. - dependencies file, to add `entrypoints`. - backends/__init__.py, to add `detect_engines`. - apiv2.py and api.py, to use `detect_engines`. - zarr.py, h5netcdf_.py, cfgrib.py, to instantiate the `BackendEntrypoint`. - [x] Related to #3166 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` 2020-11-12T15:53:00Z 2020-12-10T13:30:42Z 2020-12-10T09:56:13Z 2020-12-10T09:56:13Z 74dffffbfea2ba9aea18ce194fe868f2cb00907d     0 14bf314e7b670fdd07e089a61c257c488ea540a3 8ac3d862197204e6212a9882051808eb4b1cf3ff COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4577  
523989408 MDExOlB1bGxSZXF1ZXN0NTIzOTg5NDA4 4595 closed 0 WIP: Chunking refactor aurghs 35919497 This work aims to harmonize the way zarr deals with chunking, so that it behaves like the other backends, and to unify the code. Most of the changes involve the new API, apiv2.py, except for some changes in the code that was added with the merge of https://github.com/pydata/xarray/pull/4187. Main changes: - Refactor the `apiv2.dataset_from_backend_dataset` function. - Move `_get_chunks` from `zarr` to `dataset`. - Modify `_get_chunks` to fit the option 1 chunking behaviour of https://github.com/pydata/xarray/issues/4496#issuecomment-720785384. - Add a warning when `ds.chunk(..., chunks=None)` is used. - Add some tests. Separate pull requests are needed for the following missing points: - standardize the key in encodings that defines the on-disk chunks: `chunksizes` - add a specific key in encodings for preferred chunking (currently `chunks` is used) - [x] Related https://github.com/pydata/xarray/issues/4496 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2020-11-19T14:22:45Z 2020-12-10T10:28:25Z 2020-12-10T10:18:47Z   4a9d1a0dd1d5e5edd348e6e9eea99aed1e7cafe8     0 e21820cabef71804c9335d0b54412051b627ce4e 6c32d7c21941461ae9c21b43e6071ee79fb47d68 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4595  
530191990 MDExOlB1bGxSZXF1ZXN0NTMwMTkxOTkw 4632 closed 0 Move get_chunks from zarr.py to dataset.py aurghs 35919497 The aim is to split PR https://github.com/pydata/xarray/pull/4595 into small PRs. This smaller PR makes no changes to xarray interfaces; it's only a small code refactor: - Move `get_chunks` from zarr.py to dataset.py. - Align `apiv2` with `apiv1`: in `apiv2` replace `zarr.ZarrStore.maybe_chunk` with `dataset._maybe_chunk` and `zarr.ZarrStore.get_chunks` with `dataset._get_chunks`. - Remove `zarr.ZarrStore.maybe_chunk` and `zarr.ZarrStore.get_chunks` (no longer used). - [x] Related #4496 - [x] Passes `isort . && black . && mypy . && flake8` - No user visible changes (including notable bug fixes) are documented in `whats-new.rst` - No new functions/methods are listed in `api.rst` 2020-12-01T10:19:51Z 2021-02-11T01:51:40Z 2020-12-02T09:25:01Z 2020-12-02T09:25:01Z 65308954787d313d81ced5fe33e6a4a49bcc2167     0 ec31a67390f01d4f86cf99ee4c7d86c5fa549d96 a41edc7bf5302f2ea327943c0c48c532b12009bc COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4632  
530209718 MDExOlB1bGxSZXF1ZXN0NTMwMjA5NzE4 4633 closed 0 Change default in ds.chunk, dataarray.chunk and variable.chunk aurghs 35919497 The aim is to split PR #4595 into small PRs. The scope of this smaller PR is to modify the default of `chunks` in `dataset.chunk` to align its behaviour with `xr.open_dataset`. The main changes are: - Modify the default of `chunks` in `dataset.chunk`, `dataarray.chunk` and `variable.chunk` from None to {}. - If the user passes `chunks=None`, it is internally set to `{}`. - Add a FutureWarning to advise that the usage of `None` will raise an error in the future. Note that the changes currently don't modify the behaviour of `dataset.chunk`. - [x] Related #4496 - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-01T10:48:11Z 2020-12-10T10:38:06Z 2020-12-10T10:38:06Z 2020-12-10T10:38:05Z 76d5c0c075628475b555997b82c55dd18a34936e     0 ca987f1e6dcd027b65bea5aa35ed291611514286 a41edc7bf5302f2ea327943c0c48c532b12009bc COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4633  
530928176 MDExOlB1bGxSZXF1ZXN0NTMwOTI4MTc2 4642 closed 0 Refactor apiv2.open_dataset aurghs 35919497 Related to PR https://github.com/pydata/xarray/pull/4595. This smaller PR makes no functional changes; it's only a small code refactor needed to simplify pydata#4595. Changes in `apiv2.dataset_from_backend_dataset`: - Rename `ds` to `backend_ds` and `ds2` to `ds`. - Simplify the chunking `if` and split the code out, adding a `_chunks_ds` function. - Add a dedicated `_get_mtime` function. - Make `resolve_decoders_kwargs` and `dataset_from_backend_dataset` private. - [x] related to https://github.com/pydata/xarray/pull/4595 - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-02T10:51:31Z 2020-12-10T10:29:24Z 2020-12-02T13:17:26Z 2020-12-02T13:17:26Z 8ac3d862197204e6212a9882051808eb4b1cf3ff     0 699a99b7d957fd4d32f02c92b3a8684195bcf1e5 65308954787d313d81ced5fe33e6a4a49bcc2167 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4642  
531887999 MDExOlB1bGxSZXF1ZXN0NTMxODg3OTk5 4646 closed 0 Modify zarr chunking as suggested in #4496 aurghs 35919497 Part of https://github.com/pydata/xarray/pull/4595. The changes involve only `open_dataset(..., engine=zarr)` (and marginally `open_zarr`); in particular, `_get_chunks` has been modified to fit the #4496 (comment) option 1 chunking behaviour and align `open_dataset` chunking with `dataset.chunk`: - With `auto` it uses dask auto-chunking (if a preferred chunking is defined, dask will take it into account, as done in `dataset.chunk`). - With `-1` it uses dask but no chunking. - With `{}` it uses the backend-encoded chunks (when available) for on-disk data (`xr.open_dataset`) and the current chunking for already-opened datasets (`ds.chunk`). Add some tests. - [x] Related to pydata#4496 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` 2020-12-03T15:56:28Z 2021-02-11T01:51:55Z 2020-12-09T12:26:45Z 2020-12-09T12:26:45Z 9802411b35291a6149d850e8e573cde71a93bfbf     0 d99150a9110acd6e4dfb80e733eb5410a675912a 7152b41fa80a56db0ce88b241fbe4092473cfcf0 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4646  
535182542 MDExOlB1bGxSZXF1ZXN0NTM1MTgyNTQy 4667 closed 0 unify zarr chunking with other chunking in apiv2.open_dataset aurghs 35919497 It's the last part of, and closes, #4595. Here we unify the code for chunking in `apiv2.open_dataset`. Note the code unification is only a refactor; there are no functional changes, since zarr chunking has already been aligned with the others. - [x] Related to https://github.com/pydata/xarray/issues/4496 - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-09T13:32:41Z 2021-02-11T01:51:59Z 2020-12-10T10:18:47Z 2020-12-10T10:18:47Z 6d4a292f65cca30647fd222109325b6d5c3154ea     0 f30c5f8e7148c5a031394b1534412137acf692be 9802411b35291a6149d850e8e573cde71a93bfbf COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4667  
535288346 MDExOlB1bGxSZXF1ZXN0NTM1Mjg4MzQ2 4669 closed 0 add encodings["preferred_chunks"], used in open_dataset instead of en… aurghs 35919497 Related to https://github.com/pydata/xarray/issues/4496 Add `encodings["preferred_chunks"]` in zarr, used in `open_dataset` instead of `encodings["chunks"]`. - [x] Related to https://github.com/pydata/xarray/issues/4496 - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-09T16:06:58Z 2021-02-11T01:52:11Z 2020-12-17T16:05:57Z 2020-12-17T16:05:57Z 91318d2ee63149669404489be9198f230d877642     0 b926946774f622d40f7ed86b0384ea7c8f5b7ef8 9802411b35291a6149d850e8e573cde71a93bfbf COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4669  
536017864 MDExOlB1bGxSZXF1ZXN0NTM2MDE3ODY0 4673 closed 0 Port all the engines to apiv2 aurghs 35919497 Port all the engines to the new API, apiv2. Note: - `test_autoclose_future_warning` has been removed, because `autoclose` has been removed in apiv2.py. - `open_backend_dataset_pseudonetcdf` currently still uses `**format_kwargs`, and the signature is defined explicitly. - [x] Related to https://github.com/pydata/xarray/issues/4309 - [x] Tests updated - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-10T15:27:01Z 2021-02-11T01:56:48Z 2020-12-17T16:21:58Z 2020-12-17T16:21:58Z 138679748558f41cd28f82a25046bc96b1c4d1ef     0 0deccce4616c2b895c37c07871069b88e75022cb 51ef2a66c4e0896eab7d2b03e3dfb3963e338e3c COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4673  
543628790 MDExOlB1bGxSZXF1ZXN0NTQzNjI4Nzkw 4719 closed 0 Remove close_on_error store.py aurghs 35919497 Remove `close_on_error` in store.py. This change involves only apiv2. Currently, `apiv2.open_dataset` can take a store as input instead of a file, and in case of error xarray closes the store. The closure of a store that has been instantiated externally should not be managed by xarray. This PR corrects this behaviour in apiv2. - [x] Related https://github.com/pydata/xarray/pull/4673 - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-21T17:34:23Z 2021-02-11T01:56:13Z 2020-12-22T14:31:05Z 2020-12-22T14:31:05Z 5179cd92fd0d5438e2b7366619e21a242d0d55c3     0 763859b16b718b2aeef80d6bd281453442dd82e4 de3f27553fd480e247a3f1f7d377fec0f5f2759c COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4719  
544565866 MDExOlB1bGxSZXF1ZXN0NTQ0NTY1ODY2 4724 closed 0 Remove entrypoints in setup for internal backends aurghs 35919497 This PR aims to avoid conflicts during the transition period between the old backend implementation and the new plugins, when external backend plugins and internal ones will coexist. Currently, if two plugins with the same name are detected, we just pick one randomly; it would be better to be sure to use the external one. Main changes: - Remove the internal backends' entrypoints from setup.cfg; they are instead stored in a dictionary in plugins.py, which is updated with the external plugins detected by pkg_resources. - Move the class BackendEntrypoint to common.py to resolve a circular import. - Add a test. - [x] Related to https://github.com/pydata/xarray/issues/4309 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-23T04:45:40Z 2021-02-11T01:56:03Z 2020-12-24T16:29:44Z 2020-12-24T16:29:44Z ac234619d5471e789b0670a673084dbb01df4f9e     0 8208ec45e8ef1eaf62a6875d6596d8787242c61e ff56e726f4b63a11cf8c5c6fac8f0a519c921fd8 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4724  
544578278 MDExOlB1bGxSZXF1ZXN0NTQ0NTc4Mjc4 4725 closed 0 remove autoclose in open_dataset and related warning test aurghs 35919497 This PR removes the `autoclose` option from `open_dataset` (both api.py and apiv2.py) and the corresponding test `test_autoclose_future_warning` from test.py. The `autoclose=True` option was deprecated in https://github.com/pydata/xarray/pull/2261, since xarray now uses an LRU cache to manage open file handles. - [x] Related to https://github.com/pydata/xarray/issues/4309 and https://github.com/pydata/xarray/pull/2261, - [x] Tests updated - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-23T05:28:59Z 2021-02-11T01:55:45Z 2020-12-24T16:25:26Z 2020-12-24T16:25:26Z 1525fb0b23b8e92420ab428dc3d918a658e92dd4     0 6a7ace4bc11d7b4d34b0c4e6117f837eb948b400 ff56e726f4b63a11cf8c5c6fac8f0a519c921fd8 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4725  
544750480 MDExOlB1bGxSZXF1ZXN0NTQ0NzUwNDgw 4726 closed 0 Fix warning on chunks compatibility aurghs 35919497 This PR fixes https://github.com/pydata/xarray/issues/4708. It's a very small change. Changes: - `dataset._check_chunks_compatibility` no longer raises a warning if `last_chunk % preferred_chunk != 0`. - Update tests. - Style: rename a variable inside `dataset._check_chunks_compatibility`. - [x] Closes https://github.com/pydata/xarray/issues/4708 - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-23T12:25:42Z 2021-02-11T01:55:56Z 2020-12-24T11:32:43Z 2020-12-24T11:32:43Z ed0dadc273fc05766ec7e73a6980e02a8a360069     0 a5d0d8e653ea8a1622f222680a3d3e2ce89fb367 ff56e726f4b63a11cf8c5c6fac8f0a519c921fd8 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4726  
544827534 MDExOlB1bGxSZXF1ZXN0NTQ0ODI3NTM0 4728 closed 0 Remove unexpected warnings in tests aurghs 35919497 - #4646 added tests on chunking without using a `with` statement, causing unexpected warnings. - Add filterwarnings in the test_plugins.test_remove_duplicates tests and backend_tests.test_chunking_consistency. - [x] Tests fixed - [x] Passes `isort . && black . && mypy . && flake8` 2020-12-23T14:01:49Z 2021-02-11T01:55:54Z 2020-12-24T13:12:41Z 2020-12-24T13:12:41Z 03d8d56c9b6d090f0de2475202368b08435eaeb5     0 d0cc22722256ed01bef587308cba8d78272d8fd5 ff56e726f4b63a11cf8c5c6fac8f0a519c921fd8 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4728  
555017269 MDExOlB1bGxSZXF1ZXN0NTU1MDE3MjY5 4810 closed 0 add new backend api documentation aurghs 35919497 - Add backend documentation. - Rename ``store_spec`` to ``filename_or_obj`` in the backend entrypoint method ``guess_can_open``. - [x] Related #4803 2021-01-14T15:41:50Z 2021-03-25T14:01:25Z 2021-03-08T19:16:57Z 2021-03-08T19:16:57Z d2582c2f8811a3bd527d47c945b1cccd4983a1d3     0 06371dfaa0317d5f2b16e10e677fa5a8f483e535 2a34bfbbd586882ebe892ae12c72de36318714d5 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4810  
555028157 MDExOlB1bGxSZXF1ZXN0NTU1MDI4MTU3 4811 closed 0 Bugfix in list_engines aurghs 35919497 Currently ``list_engines`` returns the list of all installed backends plus the list of the internal ones. For the internal ones, there is no check on the installed dependencies. Now each internal backend registers itself only if the needed dependencies are installed. - [x] Passes `pre-commit run --all-files` 2021-01-14T15:58:38Z 2021-01-19T10:10:26Z 2021-01-19T10:10:26Z 2021-01-19T10:10:26Z 7dbbdcafed7f796ab77039ff797bcd31d9185903     0 c42d45e95f34648d28c0545895c304fbb095c539 3721725754f2491da48aeba506e1b036e340b6a6 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4811  
559159062 MDExOlB1bGxSZXF1ZXN0NTU5MTU5MDYy 4836 closed 0 Backend interface now uses subclassing aurghs 35919497 Currently, the interface between the backend and xarray is the class/container BackendEntrypoint, which must be instantiated by the backend. With this pull request, BackendEntrypoint is replaced by AbstractBackendEntrypoint; the backend will inherit from this class (a minimal sketch follows after this table). Reason for these changes: - This type of interface is more standard. - [x] Tests updated - [x] Passes `pre-commit run --all-files` 2021-01-21T12:38:58Z 2021-01-28T15:22:45Z 2021-01-28T15:21:00Z 2021-01-28T15:21:00Z 8cc34cb412ba89ebca12fc84f76a9e452628f1bc     0 2820f09e5d106dbdeab80c1f5b05c2511f75e94c bc35548d96caaec225be9a26afbbaa94069c9494 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4836  
571565762 MDExOlB1bGxSZXF1ZXN0NTcxNTY1NzYy 4886 closed 0 Sort backends aurghs 35919497 Ensure that the backend list is always sorted in the same way. In particular: - The standard backends always come first, in the following order: "netcdf4", "h5netcdf", "scipy". - All the other backends are sorted in lexicographic order. The changes involve two files (plugins.py and test_plugins.py) and include: - Add a utility function, ``sort_backends``, for sorting backends. - Update tests. - Small changes in variable/function names. - [x] Tests added - [x] Passes `pre-commit run --all-files` 2021-02-11T04:53:51Z 2021-02-12T17:48:24Z 2021-02-12T17:48:24Z 2021-02-12T17:48:24Z 6e4d66734f63fb60b13ba25d2a7da33fbfab2b4b     0 b8fa58442d2f64bebb1157ad2982b94e5ef36a60 10f0227a1667c5ab3c88465ff1572065322cde77 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/4886  
611700940 MDExOlB1bGxSZXF1ZXN0NjExNzAwOTQw 5135 closed 0 Fix open_dataset regression aurghs 35919497 Fix an `open_dataset` regression: expand ~ in `filepath_or_obj` when necessary. I have checked the behaviour of the engines; it seems that `pynio` already expands ~. - [x] Closes #5098 - [x] Passes `pre-commit run --all-files` 2021-04-08T16:26:15Z 2021-04-15T12:11:34Z 2021-04-15T12:11:34Z 2021-04-15T12:11:34Z 18ed29e4086145c29fde31c9d728a939536911c9     0 57cfaef697d8e02e8111473bb1e35746e4d3330d 7e48aefd3fd280389dee0fc103843c6ad7561e2b COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/5135  
611752657 MDExOlB1bGxSZXF1ZXN0NjExNzUyNjU3 5136 closed 0 Fix broken engine breaking xarray.open_dataset aurghs 35919497 Currently, a broken engine breaks xarray.open_dataset. I have added a `try`/`except` to avoid this problem. Old behaviour: ```python >>> ds = xr.open_dataset('example.nc') Traceback (most recent call last): File "/usr/local/Caskroom/miniconda/base/envs/xarray/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-3-0c694cae8262>", line 1, in <module> arr = xr.open_dataset("example.nc") File "/Users/barghini/devel/xarray/xarray/backends/api.py", line 495, in open_dataset backend = plugins.get_backend(engine) File "/Users/barghini/devel/xarray/xarray/backends/plugins.py", line 115, in get_backend engines = list_engines() File "/Users/barghini/devel/xarray/xarray/backends/plugins.py", line 97, in list_engines return build_engines(pkg_entrypoints) File "/Users/barghini/devel/xarray/xarray/backends/plugins.py", line 84, in build_engines external_backend_entrypoints = backends_dict_from_pkg(pkg_entrypoints) File "/Users/barghini/devel/xarray/xarray/backends/plugins.py", line 58, in backends_dict_from_pkg backend = pkg_ep.load() File "/usr/local/Caskroom/miniconda/base/envs/xarray/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2450, in load return self.resolve() File "/usr/local/Caskroom/miniconda/base/envs/xarray/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2456, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/Users/barghini/devel/xarray-sentinel/xarray_sentinel/sentinel1.py", line 13 ERROR ^ SyntaxError: invalid syntax ``` New behaviour: ```python >>> ds = xr.open_dataset('example.nc') /Users/barghini/devel/xarray/xarray/backends/plugins.py:61: RuntimeWarning: Engine sentinel-1 loading failed: name 'ERROR' is not defined warnings.warn(f"Engine {name} loading failed:\n{ex}", RuntimeWarning) ``` - [x] Tests added - … 2021-04-08T17:47:12Z 2021-04-10T23:55:04Z 2021-04-10T23:55:01Z 2021-04-10T23:55:01Z 32ccc93e899ac083834127ca382204b467ed89a3     0 a3996fb78c195b61e4d9acfd1fa71d0a6bcbc204 7e48aefd3fd280389dee0fc103843c6ad7561e2b COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/5136  
666018161 MDExOlB1bGxSZXF1ZXN0NjY2MDE4MTYx 5455 closed 0 Improve error message for guess engine aurghs 35919497 When `open_dataset()` fails because no working engines are found, it now suggests installing the dependencies of the compatible internal backends, explicitly providing the list. - [x] closes #5302 - [x] Tests added - [x] Passes `pre-commit run --all-files` 2021-06-09T15:22:24Z 2021-06-23T16:36:16Z 2021-06-23T08:18:08Z 2021-06-23T08:18:07Z eea76733770be03e78a0834803291659136bca31     0 f640ff69681a7b0ffd0489954c746310acae672d e87d65b77711bbf289e14dfa0581fb842247f1c2 COLLABORATOR   xarray 13221727 https://github.com/pydata/xarray/pull/5455  
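
A recurring theme in the rows above (#4577, #4836, #4673) is the move to a plugin interface in which each backend subclasses a BackendEntrypoint base class and is discovered through entrypoints. The sketch below shows roughly what such a plugin looks like under the post-#4836 subclassing interface; the class name, the toy `.my` format, and the reader logic are hypothetical, not taken from any of these PRs.

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint


class MyBackendEntrypoint(BackendEntrypoint):
    """Hypothetical plugin for a toy '.my' text format."""

    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # A real backend would wrap its variables in BackendArray objects
        # so that xarray can manage chunking and caching (see #4477);
        # here everything is loaded eagerly for illustration.
        data = np.loadtxt(filename_or_obj, ndmin=1)  # assumes one column of numbers
        ds = xr.Dataset({"values": (("x",), data)})
        if drop_variables:
            ds = ds.drop_vars(drop_variables)
        return ds

    def guess_can_open(self, filename_or_obj):
        # Used by open_dataset's engine auto-detection (see #4811, #5455).
        return str(filename_or_obj).endswith(".my")
```

Registration happens through the `xarray.backends` entry point group in the package's setup metadata, the mechanism discussed in #4577 and #4724.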

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
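
The foreign keys above make the integer columns joinable against the referenced tables. Below is a sketch resolving merged_by to a username, again via sqlite3; the users.login column is an assumption (the schema here only guarantees users.id), and `github.db` is the same hypothetical filename as above.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local database file

# Follow the merged_by -> users(id) foreign key; users.login is assumed.
query = """
    SELECT p.number, p.title, u.login
    FROM pull_requests AS p
    LEFT JOIN users AS u ON u.id = p.merged_by
    WHERE p.user = ?
    ORDER BY p.id
"""
for number, title, login in conn.execute(query, (35919497,)):
    print(f"#{number} merged by {login}: {title}")
```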
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
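
The indexes cover the foreign-key columns, including [user], so the filter that produced this page can be served by an index search rather than a full table scan. A quick way to confirm that with SQLite's query planner (same hypothetical `github.db`):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # hypothetical local database file

# With idx_pull_requests_user in place, the plan should report a SEARCH
# using that index instead of a SCAN of pull_requests.
for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pull_requests WHERE user = ?",
    (35919497,),
):
    print(row)
```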