
pull_requests


6 rows where user = 56925856



Suggested facets: state, created_at (date), updated_at (date), closed_at (date), merged_at (date)
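Each suggested facet is just a GROUP BY over the currently filtered rows. A minimal sqlite3 sketch of the `state` facet, using a deliberately abridged table (the sample states mirror the six rows below; the exact SQL Datasette runs internally is an assumption):

```python
import sqlite3

# In-memory database with an abridged pull_requests table
# (only the columns this facet needs).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pull_requests ([id] INTEGER PRIMARY KEY, [state] TEXT, [user] INTEGER)")
rows = [  # id, state, user -- matching the six rows for user 56925856
    (418478269, "closed", 56925856),
    (422333184, "closed", 56925856),
    (423354220, "closed", 56925856),
    (636385276, "closed", 56925856),
    (655322555, "closed", 56925856),
    (655326238, "open", 56925856),
]
conn.executemany("INSERT INTO pull_requests VALUES (?, ?, ?)", rows)

# Facet on `state`, restricted to the page's filter (user = 56925856).
facet = conn.execute(
    "SELECT [state], count(*) FROM pull_requests "
    "WHERE [user] = ? GROUP BY [state] ORDER BY count(*) DESC",
    (56925856,),
).fetchall()
print(facet)  # [('closed', 5), ('open', 1)]
```

The date facets (`created_at (date)` and so on) work the same way, grouping on the date part of the timestamp column.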

#4064 · Auto chunk (id 418478269, node_id MDExOlB1bGxSZXF1ZXN0NDE4NDc4MjY5)
  state: closed · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2020-05-15T09:25:13Z · updated_at: 2020-05-25T20:38:53Z
  closed_at: 2020-05-25T19:23:45Z · merged_at: 2020-05-25T19:23:45Z
  merge_commit_sha: 1de38bc7460fd987338a0bfb78d24645dac35663
  head: 98734ef51bbcd8318834392d5f00ab3e31a056bc · base: 3194b3ed1e414729ba6ab6f7f3ed39a425da42b1
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/4064
  body:
    Adding `chunks='auto'` option to `Dataset.chunk()`.
    - [x] Closes #4055
    - [x] Tests added in `test_dask.py`
    - [x] Passes `isort -rc . && black . && mypy . && flake8`
    - [x] Updated `whats-new.rst` for changes

#4089 · xr.cov() and xr.corr() (id 422333184, node_id MDExOlB1bGxSZXF1ZXN0NDIyMzMzMTg0)
  state: closed · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2020-05-23T22:09:07Z · updated_at: 2020-05-26T18:31:31Z
  closed_at: 2020-05-25T16:55:33Z · merged_at: 2020-05-25T16:55:33Z
  merge_commit_sha: 3194b3ed1e414729ba6ab6f7f3ed39a425da42b1
  head: 672c87f207147948853e5d8071e1557d203c6e5e · base: f3ffab7ee4593c97e2ae63f22140d0a823a64b6d
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/4089
  body:
    **PR** for the `xr.cov()` and `xr.corr()` functionality which others have been working on. Most code adapted from @r-beer in PR #3550.
    TODO:
    - [x] ~Write a reasonable set of tests, maybe not using `pandas` as a benchmark? (See https://github.com/pydata/xarray/issues/3784#issuecomment-633145887) Will probably need some help with this~
    CHECKLIST:
    - [x] Closes #3784, #3550, #2652, #1115
    - [x] Tests added
    - [x] Passes `isort -rc . && black . && mypy . && flake8` ~(something wrong with docs though??)~
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API

#4096 · Corrcov typo fix (id 423354220, node_id MDExOlB1bGxSZXF1ZXN0NDIzMzU0MjIw)
  state: closed · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2020-05-26T17:44:07Z · updated_at: 2020-05-27T03:09:37Z
  closed_at: 2020-05-26T19:03:25Z · merged_at: 2020-05-26T19:03:25Z
  merge_commit_sha: 864877c313d026ea5664570741a328324064f77c
  head: c271fe77760dfe8a97df189b40e559fdb178e243 · base: d1f7cb8fd95d588d3f7a7e90916c25747b90ad5a
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/4096
  body:
    Fixing typo in recent PR #4089
    - [x] Closes https://github.com/pydata/xarray/pull/4089#issuecomment-634157768
    - [x] Passes `isort -rc . && black . && mypy . && flake8`

#5284 · Dask-friendly nan check in xr.corr() and xr.cov() (id 636385276, node_id MDExOlB1bGxSZXF1ZXN0NjM2Mzg1Mjc2)
  state: closed · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2021-05-09T22:06:03Z · updated_at: 2021-05-27T17:31:07Z
  closed_at: 2021-05-27T17:31:07Z · merged_at: 2021-05-27T17:31:07Z
  merge_commit_sha: 3b81a863f6bf96306f90588e4dcef7e54c3af4ea
  head: ccb5f67b5c8c35e5439a4e7ce1ad69d9400d29da · base: a6a1e48b57499f91db7e7c15593aadc7930020e8
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/5284
  body:
    Was reading the discussion [here](https://github.com/pydata/xarray/issues/4804) and thought I'd draft a PR to implement some of the changes people suggested. It seems like the main problem is that currently in `computation.py`, `if not valid_values.all()` is not a lazy operation, and so can be a bottleneck for very large dataarrays. To get around this, I've lifted some neat tricks from #4559 so that `valid_values` remains a dask array.
    - [x] Closes #4804
    - [x] Tests added
    - [x] Passes `pre-commit run --all-files`
    - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`

#5389 · ignore this (id 655322555, node_id MDExOlB1bGxSZXF1ZXN0NjU1MzIyNTU1)
  state: closed · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2021-05-27T20:16:22Z · updated_at: 2021-05-27T20:22:43Z
  closed_at: 2021-05-27T20:16:45Z · merged_at: (none)
  merge_commit_sha: d024a6e8c313b828166cffde9b1c892455f52438
  head: dd128c808fe773a48e151d197230bc34501353a3 · base: 2a3965c16ad8ab8251b04970336f8b8d41baedb3
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/5389
  body:
    ignore this, I did the branching wrong...

#5390 · Improvements to lazy behaviour of `xr.cov()` and `xr.corr()` (id 655326238, node_id MDExOlB1bGxSZXF1ZXN0NjU1MzI2MjM4)
  state: open · locked: 0 · draft: 0
  user: AndrewILWilliams (56925856) · author_association: CONTRIBUTOR
  created_at: 2021-05-27T20:22:08Z · updated_at: 2023-12-07T02:25:56Z
  closed_at: (none) · merged_at: (none)
  merge_commit_sha: 890645d97daeb8bfef76549c8c527e878475efc5
  head: a28ea6214e7c70d066d8995d357b8a33c5f3980f · base: d1e4164f3961d7bbb3eb79037e96cae14f7182f8
  repo: xarray (13221727) · url: https://github.com/pydata/xarray/pull/5390
  body:
    Following @willirath 's suggestion in #4804, I've changed https://github.com/pydata/xarray/blob/master/xarray/core/computation.py#L1373_L1375 so that Dask doesn't hold all chunks in memory
    - [x] Closes (more of) #4804, specifically [this comment](https://github.com/pydata/xarray/issues/4804#issuecomment-760114285)
    - [x] Passes `pre-commit run --all-files`

(assignee, milestone, auto_merge, and merged_by are empty for all six rows.)

Advanced export

JSON shape: default, array, newline-delimited, object
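The shapes listed above differ only in how the same rows are arranged. A rough sketch of the transformations, assuming the default table shape carries parallel `columns`/`rows` lists and that `object` keys rows by primary key (this matches Datasette's documented behaviour in broad strokes, but treat the details as an assumption):

```python
import json

# Default table shape: parallel lists of column names and row values
# (abridged to three columns of this table).
default_shape = {
    "columns": ["id", "number", "state"],
    "rows": [[418478269, 4064, "closed"], [655326238, 5390, "open"]],
}

# "array": a list with one dict per row.
array_shape = [dict(zip(default_shape["columns"], r)) for r in default_shape["rows"]]

# "object": a dict keyed by primary key (here, `id`).
object_shape = {str(row["id"]): row for row in array_shape}

# "newline-delimited": one JSON document per line (NDJSON).
ndjson = "\n".join(json.dumps(row) for row in array_shape)

print(array_shape[0]["state"])  # closed
```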


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
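The DDL above is plain SQLite. A minimal sketch that loads it, inserts one of the rows shown above (most columns elided to NULL), and re-runs the page's filter; the foreign-key clauses are dropped here so the sketch runs without the referenced users/milestones/repos tables:

```python
import sqlite3

# Schema as on the page, minus the REFERENCES clauses.
SCHEMA = """
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER,
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER,
   [milestone] INTEGER,
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER,
   [url] TEXT,
   [merged_by] INTEGER
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute(
    "INSERT INTO pull_requests (id, number, state, title, [user]) VALUES (?, ?, ?, ?, ?)",
    (418478269, 4064, "closed", "Auto chunk", 56925856),
)

# The page's filter: rows where user = 56925856.
found = conn.execute(
    "SELECT number, title FROM pull_requests WHERE [user] = ?", (56925856,)
).fetchall()
print(found)  # [(4064, 'Auto chunk')]
```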
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
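Each index above serves equality lookups on its column, and SQLite's planner picks them up automatically. A small sketch checking that via EXPLAIN QUERY PLAN, with the table abridged to the two columns involved (the exact plan wording varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Abridged table: just the indexed column plus the primary key.
conn.executescript("""
CREATE TABLE [pull_requests] ([id] INTEGER PRIMARY KEY, [user] INTEGER);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
""")

# Ask the planner how it would run the page's filter.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pull_requests WHERE [user] = ?",
    (56925856,),
).fetchall()
detail = plan[0][-1]
print(detail)  # e.g. "SEARCH pull_requests USING INDEX idx_pull_requests_user (user=?)"
```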
Powered by Datasette · Queries took 24.44ms · About: xarray-datasette