pull_requests

7 rows where user = 81219

One record per pull request; columns that are empty for a given row (assignee, milestone, auto_merge, merged_by, and unset dates) are omitted.

PR #2828 · Add quantile method to GroupBy · state: closed
  id: 262983973 · node_id: MDExOlB1bGxSZXF1ZXN0MjYyOTgzOTcz · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2019-03-20T18:20:41Z · updated_at: 2019-06-24T15:21:36Z · closed_at: 2019-06-24T15:21:29Z · merged_at: 2019-06-24T15:21:28Z
  merge_commit_sha: b054c317f86639cd3b889a96d77ddb3798f8584e
  head: f71d05e1275ad3308f608d9f2476352bcf7d68a6 · base: 223a05f1b77d4efe8ac7d4dc2c24bff61335693c
  url: https://github.com/pydata/xarray/pull/2828
  body:
    - [x] Tests added
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
    Fixes #3018
    Note that I've added an unrelated test that exposes an issue with grouping when there is only one element per group.

PR #3305 · Honor `keep_attrs` in DataArray.quantile · state: closed
  id: 317054253 · node_id: MDExOlB1bGxSZXF1ZXN0MzE3MDU0MjUz · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2019-09-12T19:27:14Z · updated_at: 2019-09-15T22:16:27Z · closed_at: 2019-09-15T22:16:15Z · merged_at: 2019-09-15T22:16:15Z
  merge_commit_sha: b65ce8666020ba3a0300154655d2e5c05884d73b
  head: f4552adc2f9c21cd58d6bdee7eb29f7d0f1d6bd3 · base: 69c7e01e5167a3137c285cb50d1978252bb8bcbf
  url: https://github.com/pydata/xarray/pull/3305
  body:
    <!-- Feel free to remove check-list items aren't relevant to your change -->
    - [x] Closes #3304
    - [x] Tests added
    - [x] Passes `black . && mypy . && flake8`
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
    Note that I've set the default to True (if keep_attrs is None). This sounded reasonable since quantiles share the same units and properties as the original array, but I can switch it to False if that's the usual default.

PR #3631 · Add support for CFTimeIndex in get_clean_interp_index · state: closed
  id: 353735038 · node_id: MDExOlB1bGxSZXF1ZXN0MzUzNzM1MDM4 · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2019-12-16T19:57:24Z · updated_at: 2020-01-26T18:36:24Z · closed_at: 2020-01-26T14:10:37Z · merged_at: 2020-01-26T14:10:37Z
  merge_commit_sha: 8772355b23e2a451697023844a0e6b688e1468e1
  head: 6f0c5042c955ba26adceaa6fb3c1db665204ca38 · base: c32e58b4fff72816c6b554db51509bea6a891cdc
  url: https://github.com/pydata/xarray/pull/3631
  body:
    <!-- Feel free to remove check-list items aren't relevant to your change -->
    - [x] Closes #3641
    - [x] Tests added
    - [x] Passes `black . && mypy . && flake8`
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
    Related to #3349
    As suggested by @spencerkclark, index values are computed as a delta with respect to 1970-01-01. At the moment, this fails if dates fall outside of the range for nanoseconds timedeltas [1678 AD, 2262 AD]. Is this something we can fix?

PR #3642 · Make datetime_to_numeric more robust to overflow errors · state: closed
  id: 354730729 · node_id: MDExOlB1bGxSZXF1ZXN0MzU0NzMwNzI5 · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2019-12-18T17:34:41Z · updated_at: 2020-01-20T19:21:49Z · closed_at: 2020-01-20T19:21:49Z
  merge_commit_sha: 3c29b173ddcb98673387c0e41bf8308d98f0cc10
  head: 49641632ac4c13f53ff5499d0bc583690ad70f4d · base: 3cbc459caa010f9b5042d3fa312b66c9b2b6c403
  url: https://github.com/pydata/xarray/pull/3642
  body:
    <!-- Feel free to remove check-list items aren't relevant to your change -->
    - [x] Closes #3641
    - [x] Tests added
    - [x] Passes `black . && mypy . && flake8`
    - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
    This is likely only safe with NumPy>=1.17 though.

PR #3758 · Fix interp bug when indexer shares coordinates with array · state: closed
  id: 372062536 · node_id: MDExOlB1bGxSZXF1ZXN0MzcyMDYyNTM2 · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2020-02-06T19:06:22Z · updated_at: 2020-03-13T13:58:38Z · closed_at: 2020-03-13T13:58:38Z · merged_at: 2020-03-13T13:58:38Z
  merge_commit_sha: 0d95ebac19faa3af25ac369d1e8177535022c0d9
  head: 7042803da07f06d3877cfa2599fa06685db14a83 · base: 8512b7bf498c0c300f146447c0b05545842e9404
  url: https://github.com/pydata/xarray/pull/3758
  body:
    <!-- Feel free to remove check-list items aren't relevant to your change -->
    - [x] Closes #3252
    - [x] Tests added
    - [x] Passes `isort -rc . && black . && mypy . && flake8`
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
    Replaces #3262 (I think).

PR #4573 · Update xESMF link to pangeo-xesmf in related-projects · state: closed
  id: 519474743 · node_id: MDExOlB1bGxSZXF1ZXN0NTE5NDc0NzQz · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2020-11-11T22:00:34Z · updated_at: 2020-11-12T14:54:08Z · closed_at: 2020-11-12T14:53:56Z · merged_at: 2020-11-12T14:53:56Z
  merge_commit_sha: 76036bd239ad2fbf7aa6948ab61a6215c22c3d6e
  head: 5d5866898eccf1a14e26fd58b7d2f228e2d2d07b · base: e71c7b4ea967c32fa1c9fd99209a0d4cc05e1577
  url: https://github.com/pydata/xarray/pull/4573
  body:
    <!-- Feel free to remove check-list items aren't relevant to your change -->
    The new link is where development now occurs.

PR #8821 · Add small test exposing issue from #7794 and suggestion for `_wrap_numpy_scalars` fix · state: open
  id: 1766894598 · node_id: PR_kwDOAMm_X85pUKwG · locked: 0 · draft: 0
  user: huard (81219) · author_association: CONTRIBUTOR · repo: xarray (13221727)
  created_at: 2024-03-11T23:40:17Z · updated_at: 2024-04-03T18:53:28Z
  merge_commit_sha: 06dd0ddbac632b48a45e2b933153a16cbba318e0
  head: 12217501029657ba8b6e90a4243bbe45dd73a228 · base: 90e00f0022c8d1871f441470d08c79bb3b03c164
  url: https://github.com/pydata/xarray/pull/8821
  body:
    `_wrap_numpy_scalars` relies on `np.isscalar`, which incorrectly labels a single cftime object as not a scalar.

    ```python
    import cftime
    import numpy as np

    c = cftime.datetime(2000, 1, 1, calendar='360_day')
    np.isscalar(c)  # False
    ```

    The PR adds logic to handle non-numpy objects using the `np.ndim` function. The logic for built-ins and numpy objects should remain the same. The function logic could possibly be rewritten more clearly as

    ```python
    if hasattr(array, "dtype"):
        if np.isscalar(array):
            return np.array(array)
        else:
            return array
    if np.ndim(array) == 0:
        return np.array(array)
    return array
    ```

    <!-- Feel free to remove check-list items aren't relevant to your change -->
    - [x] Closes #7794
    - [x] Tests added
    - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
    - [ ] New functions/methods are listed in `api.rst`
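
As an aside on the behaviour described in the body of PR #8821, here is a minimal sketch (not code from the PR; it assumes numpy and cftime are installed) showing why `np.ndim` is a more reliable scalar check than `np.isscalar` for cftime objects:

```python
# Minimal sketch (not from the PR): compare np.isscalar and np.ndim on a
# single cftime object. Assumes numpy and cftime are installed.
import cftime
import numpy as np

c = cftime.datetime(2000, 1, 1, calendar="360_day")

print(np.isscalar(c))             # False: cftime objects are not builtin/NumPy scalars
print(np.ndim(c))                 # 0: a lone cftime object is treated as 0-dimensional
print(np.ndim(np.array([c, c])))  # 1: an array of cftime objects keeps its dimensionality
```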

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
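
Given the schema above, a minimal sqlite3 sketch reproduces the "7 rows where user = 81219" selection against a local copy of this database; the filename `github.db` is an assumption, not part of this page:

```python
# Minimal sketch: select huard's pull requests from the pull_requests table.
# The filename "github.db" is an assumption; point it at a local copy of the
# SQLite database behind this Datasette instance.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # access columns by name

rows = conn.execute(
    "SELECT number, state, title, created_at, merged_at, url "
    "FROM pull_requests WHERE user = ? ORDER BY id",
    (81219,),
).fetchall()

for row in rows:
    print(row["number"], row["state"], row["title"])

conn.close()
```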