pull_requests: 1492188700
id: 1492188700
node_id: PR_kwDOAMm_X85Y8P4c
number: 8118
state: open
locked: 0
title: Add Coordinates `set_xindex()` and `drop_indexes()` methods
user: 4160723
created_at: 2023-08-28T14:28:24Z
updated_at: 2023-09-19T01:53:18Z
merge_commit_sha: 664b100ba033d892b0894c82c49c18fc71b3f7be
draft: 0
head: 13ebc667add99d53fe5619de8206ce745e453829
base: 828ea08aa74d390519f43919a0e8851e29091d00
author_association: MEMBER
repo: 13221727
url: https://github.com/pydata/xarray/pull/8118

body:

<!-- Feel free to remove check-list items that aren't relevant to your change -->

- Complements #8102
- [ ] Tests added
- [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
- [ ] New functions/methods are listed in `api.rst`

I don't think that we need to copy most of the API from Dataset / DataArray to `Coordinates`, but I find it convenient to have some relevant methods there too. For example, building Coordinates from scratch (with custom indexes) before passing the whole coords + indexes bundle around:

```python
import dask.array as da
import numpy as np
import xarray as xr

coords = (
    xr.Coordinates(
        coords={"x": da.arange(100_000_000), "y": np.arange(100)},
        indexes={},
    )
    .set_xindex("x", DaskIndex)
    .set_xindex("y", xr.indexes.PandasIndex)
)

ds = xr.Dataset(coords=coords)
# <xarray.Dataset>
# Dimensions:  (x: 100000000, y: 100)
# Coordinates:
#   * x        (x) int64 dask.array<chunksize=(16777216,), meta=np.ndarray>
#   * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 ... 90 91 92 93 94 95 96 97 98 99
# Data variables:
#     *empty*
# Indexes:
#     x        DaskIndex
```
Links from other tables
- 0 rows from pull_requests_id in labels_pull_requests