
commits


11 rows where raw_author = "6068ef178b1a24077fdbbe3ea6f0c3fe97b6ae77" sorted by author_date descending




Facets:
  • repo (1 value): xarray · 11 rows
  • author (1 value): hmaarrfk · 11 rows
Columns: sha · message · author_date (sort key, descending) · committer_date · raw_author · raw_committer · repo · author · committer

All 11 rows share the same values for the last five columns: raw_author = Mark Harfouche (6068ef178b1a24077fdbbe3ea6f0c3fe97b6ae77), raw_committer = GitHub (cd792325681cbad9f663f2879d8b69f1edbb678f), repo = xarray (13221727), author = hmaarrfk (90008), committer = web-flow (19864447). Each entry below lists the remaining columns: sha, the two dates, and the commit message.

a3f7774443862b1ee8822778a2f813b90cea24ef · authored 2024-03-13T17:54:02Z · committed 2024-03-13T17:54:02Z
  Make list_chunkmanagers more resilient to broken entrypoints (#8736)
  * Make list_chunkmanagers more resilient to broken entrypoints. As I'm a developing my custom chunk manager, I'm often checking out between my development branch and production branch breaking the entrypoint. This made xarray impossible to import unless I re-ran `pip install -e . -vv` which is somewhat tiring.
  * Type hint untyped test function to appease mypy
  * Try to return something to help mypy
  Co-authored-by: Tom Nicholas <tom@cworthy.org>
  Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>

8d168db533715767042676d0dfd1b4563ed0fb61 · authored 2023-12-10T18:23:41Z · committed 2023-12-10T18:23:41Z
  Point users to where in their code they should make mods for Dataset.dims (#8534)
  * Point users to where in their code they should make mods for Dataset.dims. Its somewhat annoying to get warnings that point to a line within a library where the warning is issued. It really makes it unclear what one needs to change. This points to the user's access of the `dims` attribute.
  * use emit_user_level_warning

021c73e12cccb06c017ce6420dd043a0cfbf9f08 · authored 2022-12-12T16:46:40Z · committed 2022-12-12T16:46:40Z
  Avoid loading entire dataset by getting the nbytes in an array (#7356)
  * Avoid instantiating entire dataset by getting the nbytes in an array. Using `.data` accidentally tries to load the whole lazy arrays into memory. Sad.
  * DOC: Add release note for bugfix.
  * Add test to ensure that number of bytes of sparse array is correctly reported
  * Add suggested test using InaccessibleArray
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Remove duplicate test
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>

2fb22cf37b0de6c24ef8eef0f8398d34ee4e3ebb · authored 2022-11-30T23:30:40Z · committed 2022-11-30T23:30:40Z
  Remove code used to support h5py<2.10.0 (#7334)
  It seems that the relevant issue was fixed in 2.10.0: https://github.com/h5py/h5py/commit/466181b178c1b8a5bfa6fb8f217319e021f647e0. I'm not sure how far back you want to fix things. I'm hoping to test this on the CI. I found this since I've been auditing slowdowns in our codebase, which has caused me to review much of the reading pipeline.

bc35e39e5754c7a6c84c274815d95cb4130f0000 · authored 2022-10-31T15:03:57Z · committed 2022-10-31T15:03:57Z
  Expand benchmarks for dataset insertion and creation (#7236)
  * Expand benchmarks for dataset insertion and creation. Taken from discussions in https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344. Thank you @Illviljan
  * Apply suggestions from code review (Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>)
  * Update asv_bench/benchmarks/merge.py (Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>)
  * Move data set creation definition
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Add attrs
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Update asv_bench/benchmarks/merge.py
  * Update asv_bench/benchmarks/merge.py
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

2608c407d73551e0d6055d4b81060e321e905d95 · authored 2022-10-29T15:52:45Z · committed 2022-10-29T15:52:45Z
  Fix type in benchmarks/merge.py (#7235)
  I don't think this affects what is displayed that is determined by param_names

65bfa4d10a529f00a9f9b145d1cea402bdae83d0 · authored 2022-10-28T16:22:35Z · committed 2022-10-28T16:22:35Z
  Actually make the fast code path return early for Aligner.align (#7222)
  * Actually make the fast code path return early
  * Totally avoid any kind of copy
  * Revert "Totally avoid any kind of copy" (reverts commit b528234f26b87fd72586e2fe7b7f143fa128f893)
  Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>

040816a64f52974a79f631c55d920f4b6a4c22ec · authored 2022-10-28T02:49:43Z · committed 2022-10-28T02:49:43Z
  Remove debugging slow assert statement (#7221)
  * Remove debugging slow assert statement
  * Add a benchmark
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Update asv_bench/benchmarks/dataset_creation.py
  * Rework the benchmark
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Delete dataset_creation.py
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>
  Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>

c000690c7aa6dd134b45e580f377681a0de1996c · authored 2022-10-27T15:38:08Z · committed 2022-10-27T15:38:08Z
  Dataset insertion benchmark (#7223)
  * Add a benchmark for dataset element insertion
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Update asv_bench/benchmarks/dataset_creation.py
  * Rework the benchmark
  * lint before the bot
  * Update asv_bench/benchmarks/dataset_creation.py (Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>)
  * Update asv_bench/benchmarks/dataset_creation.py (Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>)
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Lint
  * Rename the benchmark
  * Rename benchmark
  * Update asv_bench/benchmarks/dataset_in_memory_operation.py (Co-authored-by: Maximilian Roos <5635139+max-sixty@users.noreply.github.com>)
  * Update and rename dataset_in_memory_operation.py to merge.py
  * add back elements
  * Only add a single variable
  * Give the parameter a name
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>
  Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>
  Co-authored-by: Maximilian Roos <5635139+max-sixty@users.noreply.github.com>

89f7de888468eb37979faa686e7d70dbe11fb83c · authored 2022-10-18T17:06:33Z · committed 2022-10-18T17:06:33Z
  Lazy import dask.distributed to reduce import time of xarray (#7172)
  * Lazy import testing and tutorial
  * Lazy import distributed to avoid a costly import
  * Revert changes to __init__
  * Explain why we lazy import
  * Add release note
  * dask.distritubed.lock now supports blocking argument
  Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>

5c08ab296bf9bbcfb5bd3c262e3fdcce986d69ab · authored 2022-01-11T10:24:57Z · committed 2022-01-11T10:24:57Z
  Use base ImportError not MoudleNotFoundError when trying to see if the modules load (#6154)


CREATE TABLE [commits] (
   [sha] TEXT PRIMARY KEY,
   [message] TEXT,
   [author_date] TEXT,
   [committer_date] TEXT,
   [raw_author] TEXT REFERENCES [raw_authors]([id]),
   [raw_committer] TEXT REFERENCES [raw_authors]([id]),
   [repo] INTEGER REFERENCES [repos]([id]),
   [author] INTEGER REFERENCES [users]([id]),
   [committer] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_commits_committer]
    ON [commits] ([committer]);
CREATE INDEX [idx_commits_author]
    ON [commits] ([author]);
CREATE INDEX [idx_commits_repo]
    ON [commits] ([repo]);
CREATE INDEX [idx_commits_raw_committer]
    ON [commits] ([raw_committer]);
CREATE INDEX [idx_commits_raw_author]
    ON [commits] ([raw_author]);
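The schema above is enough to reproduce this page's filtered view locally. Below is a minimal sketch using Python's sqlite3 module: it rebuilds the [commits] table, inserts one sample row taken from the table above, and runs the query behind "rows where raw_author = ... sorted by author_date descending". The raw_authors, repos, and users definitions are assumed stand-ins with only an [id] column, since their full schemas are not shown on this page.

```python
import sqlite3

# Rebuild the schema in an in-memory SQLite database. The referenced
# tables are minimal placeholders (assumption: only their [id] columns
# matter for this demonstration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [raw_authors] ([id] TEXT PRIMARY KEY);
CREATE TABLE [repos] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [users] ([id] INTEGER PRIMARY KEY);
CREATE TABLE [commits] (
   [sha] TEXT PRIMARY KEY,
   [message] TEXT,
   [author_date] TEXT,
   [committer_date] TEXT,
   [raw_author] TEXT REFERENCES [raw_authors]([id]),
   [raw_committer] TEXT REFERENCES [raw_authors]([id]),
   [repo] INTEGER REFERENCES [repos]([id]),
   [author] INTEGER REFERENCES [users]([id]),
   [committer] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_commits_raw_author] ON [commits] ([raw_author]);
""")

raw_author = "6068ef178b1a24077fdbbe3ea6f0c3fe97b6ae77"

# One sample row from the table on this page (message truncated to its
# first line).
conn.execute(
    "INSERT INTO commits (sha, message, author_date, committer_date, raw_author)"
    " VALUES (?, ?, ?, ?, ?)",
    (
        "a3f7774443862b1ee8822778a2f813b90cea24ef",
        "Make list_chunkmanagers more resilient to broken entrypoints (#8736)",
        "2024-03-13T17:54:02Z",
        "2024-03-13T17:54:02Z",
        raw_author,
    ),
)

# The query this page's view corresponds to; idx_commits_raw_author lets
# SQLite locate matching rows without a full table scan.
rows = conn.execute(
    "SELECT sha, author_date FROM commits"
    " WHERE raw_author = ? ORDER BY author_date DESC",
    (raw_author,),
).fetchall()
print(rows)
```

Note that the dates are stored as TEXT in ISO 8601 form, so the lexicographic ordering produced by ORDER BY matches chronological order.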
Powered by Datasette · About: xarray-datasette