pull_requests

18 rows where user = 90008 (hmaarrfk)

id ▼ node_id number state locked title user body created_at updated_at closed_at merged_at merge_commit_sha assignee milestone draft head base author_association auto_merge repo url merged_by
206247218 MDExOlB1bGxSZXF1ZXN0MjA2MjQ3MjE4 2344 closed 0 FutureWarning: creation of DataArrays w/ coords Dataset hmaarrfk 90008 Previously, this would raise a: FutureWarning: iteration over an xarray.Dataset will change in xarray v0.11 to only include data variables, not coordinates. Iterate over the Dataset.variables property instead to preserve existing behavior in a forwards compatible manner. - [ ] Closes #xxxx (remove if there is no corresponding issue, which should only be the case for minor changes) - [ ] Tests added (for all bug fixes or enhancements) - [ ] Tests passed (for all non-documentation changes) - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later) 2018-08-05T16:34:59Z 2018-08-06T16:02:09Z 2018-08-06T16:02:09Z   c44aae428b1867b22c51087d9e04b53933e9664c     0 d4dd9ee0c00223e6324d0e7807b4f4b1b1d8fa8b 56381ef444c5e699443e8b4e08611060ad5c9507 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/2344  
393349721 MDExOlB1bGxSZXF1ZXN0MzkzMzQ5NzIx 3888 closed 0 [WIP] [DEMO] Add tests for ZipStore for zarr hmaarrfk 90008 <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Related to #3815 - [ ] Tests added - [ ] Passes `isort -rc . && black . && mypy . && flake8` - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API 2020-03-25T02:29:20Z 2020-03-26T04:23:05Z 2020-03-25T21:57:09Z   e88946e8d7ddfbff98f0af78f1eb0df00edb2521     0 37a6b0ddabcb81f4b39aa75038d04c2a824758e3 009aa66620b3437cf0de675013fa7d1ff231963c CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/3888  
476537397 MDExOlB1bGxSZXF1ZXN0NDc2NTM3Mzk3 4395 closed 0 WIP: Ensure that zarr.ZipStores are closed hmaarrfk 90008 ZipStores aren't always closed making it hard to use them as fluidly as regular zarr stores. - [ ] Closes #xxxx - [x] Tests added - [x] Passes `isort . && black . && mypy . && flake8` # master doesn't pass black - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2020-08-31T20:57:49Z 2023-01-31T21:39:15Z 2023-01-31T21:38:23Z   5ac27926097b0a7f24c250b50e35f8f0dd9a2116     0 bbd2515502d7a42ccb94c0569132e7fadd921233 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/4395  
477420193 MDExOlB1bGxSZXF1ZXN0NDc3NDIwMTkz 4400 closed 0 [WIP] Support nano second time encoding. hmaarrfk 90008 <!-- Feel free to remove check-list items aren't relevant to your change --> Not too sure i have the bandwidth to complete this seeing as cftime and datetime don't have nanoseconds, but maybe it can help somebody. - [x] Closes #4183 - [x] Tests added - [ ] Passes `isort . && black . && mypy . && flake8` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2020-09-02T00:16:04Z 2023-03-26T20:59:00Z 2023-03-26T20:08:50Z   3b78de79321e29d7fb2fc548a03a125c6192a65b     0 74e9d72f970b0bfab4f473fc44bf6fe820decda1 d1e4164f3961d7bbb3eb79037e96cae14f7182f8 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/4400  
818499276 PR_kwDOAMm_X84wyU7M 6154 closed 0 Use base ImportError not MoudleNotFoundError when testing for plugins hmaarrfk 90008 Admittedly i had a pretty broken environment (I manually uninstalled C dependencies for python packages installed with conda), but I still expected xarray to "work" with a different backend. I hope the comments in the code explain why `ImportError` is preferred to `ModuleNotFoundError`. Thank you for considering. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-01-11T09:48:36Z 2022-01-11T10:28:51Z 2022-01-11T10:24:57Z 2022-01-11T10:24:57Z 5c08ab296bf9bbcfb5bd3c262e3fdcce986d69ab     0 92fc8747305b3e0127ce49884d5fda1382560f69 9226c7ac87b3eb246f7a7e49f8f0f23d68951624 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/6154  
1088467433 PR_kwDOAMm_X85A4LHp 7172 closed 0 Lazy import dask.distributed to reduce import time of xarray hmaarrfk 90008 I was auditing the import time of my software and found that distributed added a non insignificant amount of time to the import of xarray: Using `tuna`, one can find that the following are sources of delay in import time for xarray: To audit, one can use the the command ``` python -X importtime -c "import numpy as np; import pandas as pd; import dask.array; import xarray as xr" 2>import.log && tuna import.lo ``` The command as is, breaks out the import time of numpy, pandas, and dask.array to allow you to focus on "other" costs within xarray. Main branch: ![image](https://user-images.githubusercontent.com/90008/196051640-8bb182a9-fbb0-4b83-a39d-a576fec25249.png) Proposed: ![image](https://user-images.githubusercontent.com/90008/196051596-34d87232-5cb9-4f3d-84f9-d2ec969c95ce.png) One would be tempted to think that this is due to xarray.testing and xarray.tutorial but those just move the imports one level down in tuna graphs. ![image](https://user-images.githubusercontent.com/90008/196051584-7895b64c-319a-4f9f-8327-b254b6571551.png) - [x] ~~Closes~~ - [x] ~~Tests added~~ - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] ~~New functions/methods are listed in `api.rst`~~ 2022-10-16T18:25:31Z 2022-10-18T17:41:50Z 2022-10-18T17:06:34Z 2022-10-18T17:06:34Z 89f7de888468eb37979faa686e7d70dbe11fb83c     0 6b4aa3c401720e324ffa407c4da7bad6ecaf6fa2 9df2dfca57e1c672f6faf0f7945d2f38921a4bb2 CONTRIBUTOR
{
    "enabled_by": {
        "login": "dcherian",
        "id": 2448579,
        "node_id": "MDQ6VXNlcjI0NDg1Nzk=",
        "avatar_url": "https://avatars.githubusercontent.com/u/2448579?v=4",
        "gravatar_id": "",
        "url": "https://api.github.com/users/dcherian",
        "html_url": "https://github.com/dcherian",
        "followers_url": "https://api.github.com/users/dcherian/followers",
        "following_url": "https://api.github.com/users/dcherian/following{/other_user}",
        "gists_url": "https://api.github.com/users/dcherian/gists{/gist_id}",
        "starred_url": "https://api.github.com/users/dcherian/starred{/owner}{/repo}",
        "subscriptions_url": "https://api.github.com/users/dcherian/subscriptions",
        "organizations_url": "https://api.github.com/users/dcherian/orgs",
        "repos_url": "https://api.github.com/users/dcherian/repos",
        "events_url": "https://api.github.com/users/dcherian/events{/privacy}",
        "received_events_url": "https://api.github.com/users/dcherian/received_events",
        "type": "User",
        "site_admin": false
    },
    "merge_method": "squash",
    "commit_title": "Lazy import dask.distributed to reduce import time of xarray (#7172)",
    "commit_message": "* Lazy import testing and tutorial\r\n\r\n* Lazy import distributed to avoid a costly import\r\n\r\n* Revert changes to __init__\r\n\r\n* Explain why we lazy import\r\n\r\n* Add release note\r\n\r\n* dask.distritubed.lock now supports blocking argument\r\n\r\nCo-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>"
}
xarray 13221727 https://github.com/pydata/xarray/pull/7172  
1099657449 PR_kwDOAMm_X85Bi3Dp 7221 closed 0 Remove debugging slow assert statement hmaarrfk 90008 We've been trying to understand why our code is slow. One part is that we use xarray.Datasets almost like dictionaries for our data. The following code is quite common for us ```python import xarray as xr dataset = xr.Dataset() dataset['a'] = 1 dataset['b'] = 2 ``` However, through benchmarks, it became obvious that the `merge_core` method of xarray was causing alot of slowdowns. `main` branch: ![image](https://user-images.githubusercontent.com/90008/197914741-c920046a-e957-4584-9e00-082575fd1f6c.png) With this merge request: ![image](https://user-images.githubusercontent.com/90008/197914642-9d9439a3-397b-4f04-abb2-ddc62c7b4849.png) ```python from tqdm import tqdm import xarray as xr from time import perf_counter import numpy as np N = 1000 # Everybody is lazy loading now, so lets force modules to get instantiated dummy_dataset = xr.Dataset() dummy_dataset['a'] = 1 dummy_dataset['b'] = 1 del dummy_dataset time_elapsed = np.zeros(N) dataset = xr.Dataset() for i in tqdm(range(N)): time_start = perf_counter() dataset[f"var{i}"] = i time_end = perf_counter() time_elapsed[i] = time_end - time_start # %% from matplotlib import pyplot as plt plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable') plt.xlabel("Number of existing variables") plt.ylabel("Time to add a variables (ms)") plt.ylim([0, 50]) plt.grid(True) ``` - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-10-26T01:43:08Z 2022-10-28T02:49:44Z 2022-10-28T02:49:44Z 2022-10-28T02:49:44Z 040816a64f52974a79f631c55d920f4b6a4c22ec     0 1a58759ea804775564a4e074e28444d0241e9f2a c000690c7aa6dd134b45e580f377681a0de1996c CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7221  
1099665485 PR_kwDOAMm_X85Bi5BN 7222 closed 0 Actually make the fast code path return early for Aligner.align hmaarrfk 90008 In relation to my other PR. Without this PR ![image](https://user-images.githubusercontent.com/90008/197916473-0149747e-25b0-41d6-921d-1fad62a23699.png) With the early return ![image](https://user-images.githubusercontent.com/90008/197916546-9ea9a020-2683-4d62-805a-b386835d61c0.png) <details><summary>Removing the frivolous copy (does not pass tests)</summary> ![image](https://user-images.githubusercontent.com/90008/197916632-dbc89c21-94a9-4b92-af11-5b1fa5f5cddd.png) </details> <details><summary>Code for benchmark</summary> ```python from tqdm import tqdm import xarray as xr from time import perf_counter import numpy as np N = 1000 # Everybody is lazy loading now, so lets force modules to get instantiated dummy_dataset = xr.Dataset() dummy_dataset['a'] = 1 dummy_dataset['b'] = 1 del dummy_dataset time_elapsed = np.zeros(N) dataset = xr.Dataset() # tqdm = iter for i in tqdm(range(N)): time_start = perf_counter() dataset[f"var{i}"] = i time_end = perf_counter() time_elapsed[i] = time_end - time_start # %% from matplotlib import pyplot as plt plt.plot(np.arange(N), time_elapsed * 1E3, label='Time to add one variable') plt.xlabel("Number of existing variables") plt.ylabel("Time to add a variables (ms)") plt.ylim([0, 10]) plt.grid(True) ``` </details> xref: https://github.com/pydata/xarray/pull/7221 <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-10-26T01:59:09Z 2022-10-28T16:22:36Z 2022-10-28T16:22:35Z 2022-10-28T16:22:35Z 65bfa4d10a529f00a9f9b145d1cea402bdae83d0     0 f9e23d49244def9a01687d06e8c5ff26e5d68b9e 040816a64f52974a79f631c55d920f4b6a4c22ec CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7222  
1100177522 PR_kwDOAMm_X85Bk2By 7223 closed 0 Dataset insertion benchmark hmaarrfk 90008 xref: https://github.com/pydata/xarray/pull/7221 - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-10-26T12:09:14Z 2022-10-27T15:38:09Z 2022-10-27T15:38:09Z 2022-10-27T15:38:09Z c000690c7aa6dd134b45e580f377681a0de1996c     0 2fdf774d51cb5d7b9e7e20b58c601b3029a09b10 076bd8e15f04878d7b97100fb29177697018138f CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7223  
1103975300 PR_kwDOAMm_X85BzVOE 7235 closed 0 Fix type in benchmarks/merge.py hmaarrfk 90008 I don't think this affects what is displayed that is determined by param_names <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-10-29T13:28:12Z 2022-10-29T15:52:45Z 2022-10-29T15:52:45Z 2022-10-29T15:52:45Z 2608c407d73551e0d6055d4b81060e321e905d95     0 62c3b918c96734a543a45f94f980a51a5a2091f2 e1936a98059ae29da2861f58a7aff4a56302aac1 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7235  
1103984081 PR_kwDOAMm_X85BzXXR 7236 closed 0 Expand benchmarks for dataset insertion and creation hmaarrfk 90008 Taken from discussions in https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344 Thank you @Illviljan <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-10-29T13:55:19Z 2022-10-31T15:04:13Z 2022-10-31T15:03:58Z 2022-10-31T15:03:58Z bc35e39e5754c7a6c84c274815d95cb4130f0000     0 bab7cbb9fc9f7b7446b8dac3786c651bf5bc3d29 2608c407d73551e0d6055d4b81060e321e905d95 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7236  
1139443490 PR_kwDOAMm_X85D6oci 7334 closed 0 Remove code used to support h5py<2.10.0 hmaarrfk 90008 It seems that the relevant issue was fixed in 2.10.0 https://github.com/h5py/h5py/commit/466181b178c1b8a5bfa6fb8f217319e021f647e0 I'm not sure how far back you want to fix things. I'm hoping to test this on the CI. I found this since I've been auditing slowdowns in our codebase, which has caused me to review much of the reading pipeline. Do you want to add a test for h5py>=2.10.0? Or can we assume that users won't install things together. https://pypi.org/project/h5py/2.10.0/ I could for example set the backend to not be available if a version of h5py that is too old is detected. One could alternatively, just keep the code here. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-11-29T19:34:24Z 2022-11-30T23:30:41Z 2022-11-30T23:30:41Z 2022-11-30T23:30:41Z 2fb22cf37b0de6c24ef8eef0f8398d34ee4e3ebb     0 84539d6bba8f4d425b53eecde62e229e4fa84257 3aa75c8d00a4a2d4acf10d80f76b937cadb666b7 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7334  
1145695726 PR_kwDOAMm_X85ESe3u 7356 closed 0 Avoid loading entire dataset by getting the nbytes in an array hmaarrfk 90008 Using `.data` accidentally tries to load the whole lazy arrays into memory. Sad. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2022-12-05T03:29:53Z 2023-03-17T17:31:22Z 2022-12-12T16:46:40Z 2022-12-12T16:46:40Z 021c73e12cccb06c017ce6420dd043a0cfbf9f08     0 02e3cb10c2d49569b888532f48ba8e47226c1e85 db68db6793bdd10f740e1ff7f68d821e853e3d73 CONTRIBUTOR
{
    "enabled_by": {
        "login": "dcherian",
        "id": 2448579,
        "node_id": "MDQ6VXNlcjI0NDg1Nzk=",
        "avatar_url": "https://avatars.githubusercontent.com/u/2448579?v=4",
        "gravatar_id": "",
        "url": "https://api.github.com/users/dcherian",
        "html_url": "https://github.com/dcherian",
        "followers_url": "https://api.github.com/users/dcherian/followers",
        "following_url": "https://api.github.com/users/dcherian/following{/other_user}",
        "gists_url": "https://api.github.com/users/dcherian/gists{/gist_id}",
        "starred_url": "https://api.github.com/users/dcherian/starred{/owner}{/repo}",
        "subscriptions_url": "https://api.github.com/users/dcherian/subscriptions",
        "organizations_url": "https://api.github.com/users/dcherian/orgs",
        "repos_url": "https://api.github.com/users/dcherian/repos",
        "events_url": "https://api.github.com/users/dcherian/events{/privacy}",
        "received_events_url": "https://api.github.com/users/dcherian/received_events",
        "type": "User",
        "site_admin": false
    },
    "merge_method": "squash",
    "commit_title": "Avoid loading entire dataset by getting the nbytes in an array (#7356)",
    "commit_message": "* Avoid instantiating entire dataset by getting the nbytes in an array\r\n\r\nUsing `.data` accidentally tries to load the whole lazy arrays into\r\nmemory.\r\n\r\nSad.\r\n\r\n* DOC: Add release note for bugfix.\r\n\r\n* Add test to ensure that number of bytes of sparse array is correctly\r\nreported\r\n\r\n* Add suggested test using InaccessibleArray\r\n\r\n* [pre-commit.ci] auto fixes from pre-commit.com hooks\r\n\r\nfor more information, see https://pre-commit.ci\r\n\r\n* Remove duplicate test\r\n\r\nCo-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>\r\nCo-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>"
}
xarray 13221727 https://github.com/pydata/xarray/pull/7356  
1369630754 PR_kwDOAMm_X85Rougi 7883 closed 0 Avoid one call to len when getting ndim of Variables hmaarrfk 90008 I admit this is a super micro optimization but it avoids in certain cases the creation of a tuple, and a call to len on it. I hit this as I was trying to understand why Variable indexing was so much slower than numpy indexing. It seems that bounds checking in python is just slower than in C. Feel free to close this one if you don't want this kind of optimization. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2023-05-29T23:37:10Z 2023-07-03T15:44:32Z 2023-07-03T15:44:31Z   623aff94a679912d56e5fa38543f20856d368753     0 856419b0599c024405510cac5fd71ad8c00deca4 86f99337d803866a4288fc7550f9ee8c495baf87 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/7883  
1637698012 PR_kwDOAMm_X85hnUnc 8534 closed 0 Point users to where in their code they should make mods for Dataset.dims hmaarrfk 90008 Its somewhat annoying to get warnings that point to a line within a library where the warning is issued. It really makes it unclear what one needs to change. This points to the user's access of the `dims` attribute. <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2023-12-10T14:31:29Z 2023-12-10T18:50:10Z 2023-12-10T18:23:42Z 2023-12-10T18:23:42Z 8d168db533715767042676d0dfd1b4563ed0fb61     0 0654243249c6a988b5435529eb0fa4d918410ba4 9acc411bc7e99e61269eadf77e96b9ddd40aec9e CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8534  
1721098007 PR_kwDOAMm_X85mld8X 8736 closed 0 Make list_chunkmanagers more resilient to broken entrypoints hmaarrfk 90008 As I'm a developing my custom chunk manager, I'm often checking out between my development branch and production branch breaking the entrypoint. This made xarray impossible to import unless I re-ran `pip install -e . -vv` which is somewhat tiring. This should help xarray be more resilient in other software's bugs in case they install malformed entrypoints Example: ```python >>> from xarray.core.parallelcompat import list_chunkmanagers >>> list_chunkmanagers() <ipython-input-3-19326f4950bc>:1: UserWarning: Failed to load entrypoint MyChunkManager due to No module named 'my.array._chunkmanager'. Skipping. list_chunkmanagers() {'dask': <xarray.core.daskmanager.DaskManager at 0x7f5b826231c0>} ``` Thank you for considering. <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #xxxx - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [x] New functions/methods are listed in `api.rst` This is mostly a quality of life thing for developers, I don't see this as a user visible change. 2024-02-11T21:37:38Z 2024-03-13T17:54:02Z 2024-03-13T17:54:02Z 2024-03-13T17:54:02Z a3f7774443862b1ee8822778a2f813b90cea24ef     0 b51a951cf7b6d5258318d590d093044f1fba2eb9 c919739fe6b2cdd46887dda90dcc50cb22996fe5 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8736  
1723028538 PR_kwDOAMm_X85ms1Q6 8738 closed 0 Don't break users that were already using ChunkManagerEntrypoint hmaarrfk 90008 For example, you just broke cubed https://github.com/xarray-contrib/cubed-xarray/blob/main/cubed_xarray/cubedmanager.py#L15 Not sure how much you care, it didn't seem like anybody other than me ever tried this module on github... <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` 2024-02-13T02:17:55Z 2024-02-13T15:37:54Z 2024-02-13T03:21:32Z   0b5b35994d84459ba815d129eb7214cb24aa8bbf     0 3003f9b281a9634a791a9b3052769f0bb340bffe d64460795e406bc4a998e2ddae0054a1029d52a9 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8738  
1723044865 PR_kwDOAMm_X85ms5QB 8739 open 0 Add a test for usability of duck arrays with chunks property hmaarrfk 90008 xref: https://github.com/pydata/xarray/issues/8733 <details> ```python xarray/tests/test_variable.py F ================================================ FAILURES ================================================ ____________________________ TestAsCompatibleData.test_duck_array_with_chunks ____________________________ self = <xarray.tests.test_variable.TestAsCompatibleData object at 0x7f3d1b122e60> def test_duck_array_with_chunks(self): # Non indexable type class CustomArray(NDArrayMixin, indexing.ExplicitlyIndexed): def __init__(self, array): self.array = array @property def chunks(self): return self.shape def __array_function__(self, *args, **kwargs): return NotImplemented def __array_ufunc__(self, *args, **kwargs): return NotImplemented array = CustomArray(np.arange(3)) assert is_chunked_array(array) var = Variable(dims=("x"), data=array) > var.load() /home/mark/git/xarray/xarray/tests/test_variable.py:2745: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/mark/git/xarray/xarray/core/variable.py:936: in load self._data = to_duck_array(self._data, **kwargs) /home/mark/git/xarray/xarray/namedarray/pycompat.py:129: in to_duck_array chunkmanager = get_chunked_array_type(data) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (CustomArray(array=array([0, 1, 2])),), chunked_arrays = [CustomArray(array=array([0, 1, 2]))] chunked_array_types = {<class 'xarray.tests.test_variable.TestAsCompatibleData.test_duck_array_with_chunks.<locals>.CustomArray'>} chunkmanagers = {'dask': <xarray.namedarray.daskmanager.DaskManager object at 0x7f3d1b1568f0>} def get_chunked_array_type(*args: Any) -> ChunkManagerEntrypoint[Any]: """ … 2024-02-13T02:46:47Z 2024-02-13T03:35:24Z     fbc348922aa26d8d1e01e69b8707656bd9b8ba88     0 cc505c77930130bd527d330f43fe21bf9cd6c182 d64460795e406bc4a998e2ddae0054a1029d52a9 CONTRIBUTOR   xarray 13221727 https://github.com/pydata/xarray/pull/8739  

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
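
As a rough sketch (not part of the exported page), the queries below show how this view could be reproduced against the schema above. The first mirrors the "18 rows where user = 90008" filter; the second is a hypothetical join over the declared foreign keys and assumes the referenced [users] table has a [login] column, as the enabled_by records embedded in the rows suggest.

-- Reproduce the filtered view: pull requests authored by user 90008,
-- ordered by id to match the rows shown above.
SELECT [id], [number], [state], [title], [created_at], [merged_at], [url]
FROM [pull_requests]
WHERE [user] = 90008
ORDER BY [id];

-- Hypothetical join over the declared foreign keys; assumes the
-- referenced [users] table exposes a [login] column.
SELECT pr.[number], pr.[title], u.[login] AS merged_by_login
FROM [pull_requests] AS pr
LEFT JOIN [users] AS u ON u.[id] = pr.[merged_by]
WHERE pr.[user] = 90008
ORDER BY pr.[number];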