pull_requests
7 rows where user = 8809578
id ▼ | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
761583682 | PR_kwDOAMm_X84tZNhC | 5876 | closed | 0 | _season_from_months can now handle np.nan | pierreloicq 8809578 | _season_from_months can now handle np.nan and values outside of [1, 12]. I passed these tests: ``` def test_season(): months = np.array([1, 2, 3, 4, 5, np.nan]) assert ( _season_from_months(months) == np.array(['DJF', 'DJF', 'MAM', 'MAM', 'MAM', 'na']) ).all() months = np.array([1, 100, 3, 13, 0, -5]) assert ( _season_from_months(months) == np.array(['DJF', 'na', 'MAM', 'na', 'na', 'na']) ).all() months = np.array(range(1, 13)) assert ( _season_from_months(months) == np.array(['DJF', 'DJF', 'MAM', 'MAM', 'MAM', 'JJA', 'JJA', 'JJA', 'SON', 'SON', 'SON', 'DJF']) ).all() test_season() ``` | 2021-10-19T16:04:41Z | 2023-01-06T16:59:18Z | 2022-01-11T16:06:18Z | 2022-01-11T16:06:18Z | aeb00f9da90e4485d2e94f6796c7dd96a2cb1278 | 0 | 11d2f131fe2b830ec5d9a9620a6b701724d51c62 | 5b322c9ea18f560e35857edcb78efe4e4f323551 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/5876 | ||||
1100336216 | PR_kwDOAMm_X85BlcxY | 7226 | closed | 0 | make clearer that sortby() does not run in place | pierreloicq 8809578 | ...as Python's list.sort() is in-place | 2022-10-26T14:06:23Z | 2022-10-26T15:56:02Z | 2022-10-26T15:56:02Z | 2022-10-26T15:56:02Z | 97c70cfe7d1942a0350bb01fd6b2a076306440aa | 0 | fa24fc64c3af09dee01e326c3d638094d7f18a2b | ca57e5cd984e626487636628b1d34dca85cc2e7c | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/7226 | ||||
1101923043 | PR_kwDOAMm_X85BrgLj | 7230 | closed | 0 | set_coords docs: see also Dataset.assign_coords | pierreloicq 8809578 | 2022-10-27T16:06:08Z | 2022-10-28T07:14:42Z | 2022-10-27T17:08:30Z | 2022-10-27T17:08:30Z | b9aedd0155548ed0f34506ecc255b1688f07ffaa | 0 | 6cab940441a6a4f6a5c593f5b272fec065f800fb | c000690c7aa6dd134b45e580f377681a0de1996c | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/7230 | |||||
1101923241 | PR_kwDOAMm_X85BrgOp | 7231 | closed | 0 | assign_coords docs: see also Dataset.set_coords | pierreloicq 8809578 | 2022-10-27T16:06:19Z | 2022-10-28T07:15:03Z | 2022-10-27T17:07:28Z | 2022-10-27T17:07:28Z | 30cb42da9456971a5b21d950639d5a72c8f5fe1d | 0 | 145fa68905cbf02abda53ebd31c212d4b0fb203d | c000690c7aa6dd134b45e580f377681a0de1996c | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/7231 | |||||
1188414916 | PR_kwDOAMm_X85G1cXE | 7425 | closed | 0 | groupby in resample doc and vice-versa | pierreloicq 8809578 | Since groupby is a bit like resample on non-contiguous data | 2023-01-06T16:54:32Z | 2023-01-09T09:43:41Z | 2023-01-06T18:25:02Z | 2023-01-06T18:25:02Z | 2ef82c535e2212a6c3dc21d0ac07e2e4236d68dc | 0 | 8fb459b91103642849a97b77827ec40ece359705 | d6d24507793af9bcaed79d7f8d3ac910e176f1ce | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/7425 | ||||
1219446335 | PR_kwDOAMm_X85Ir0Y_ | 7481 | closed | 0 | clarification for thresh arg of dataset.dropna() | pierreloicq 8809578 | 2023-01-27T15:00:34Z | 2023-02-14T13:57:38Z | 2023-02-14T13:57:37Z | 2023-02-14T13:57:37Z | cd901842144ce7f52b08b5b271310f31f6b04c26 | 0 | fb7fab71b34d1b198b750cd3b517e5e1f11b3616 | 50912e26f156cb3a6b9d9f347999bf7c7d432eb6 | CONTRIBUTOR | { "enabled_by": { "login": "mathause", "id": 10194086, "node_id": "MDQ6VXNlcjEwMTk0MDg2", "avatar_url": "https://avatars.githubusercontent.com/u/10194086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathause", "html_url": "https://github.com/mathause", "followers_url": "https://api.github.com/users/mathause/followers", "following_url": "https://api.github.com/users/mathause/following{/other_user}", "gists_url": "https://api.github.com/users/mathause/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathause/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathause/subscriptions", "organizations_url": "https://api.github.com/users/mathause/orgs", "repos_url": "https://api.github.com/users/mathause/repos", "events_url": "https://api.github.com/users/mathause/events{/privacy}", "received_events_url": "https://api.github.com/users/mathause/received_events", "type": "User", "site_admin": false }, "merge_method": "squash", "commit_title": "clarification for thresh arg of dataset.dropna() (#7481)", "commit_message": "* clarification for thresh arg of dataset.dropna()\r\n\r\n* Update xarray/core/dataset.py\r\n\r\n---------\r\n\r\nCo-authored-by: Mathias Hauser <mathause@users.noreply.github.com>" } | xarray 13221727 | https://github.com/pydata/xarray/pull/7481 | ||||
1303348638 | PR_kwDOAMm_X85Nr4We | 7725 | closed | 0 | [DOC] resample and then apply func on time+other variables | pierreloicq 8809578 | It cannot be run with Python since the previous dataset is one-dimensional. Feel free to make it more rigorous if you want. | 2023-04-05T14:54:45Z | 2023-04-06T07:13:58Z | 2023-04-06T02:15:01Z | 2023-04-06T02:15:01Z | 86266902d65df36482629aa8cf7b5719bd461970 | 0 | 0dea34378b191b126ac437af2e453603119ef549 | d4db16699f30ad1dc3e6861601247abf4ac96567 | CONTRIBUTOR | { "enabled_by": { "login": "dcherian", "id": 2448579, "node_id": "MDQ6VXNlcjI0NDg1Nzk=", "avatar_url": "https://avatars.githubusercontent.com/u/2448579?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcherian", "html_url": "https://github.com/dcherian", "followers_url": "https://api.github.com/users/dcherian/followers", "following_url": "https://api.github.com/users/dcherian/following{/other_user}", "gists_url": "https://api.github.com/users/dcherian/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcherian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcherian/subscriptions", "organizations_url": "https://api.github.com/users/dcherian/orgs", "repos_url": "https://api.github.com/users/dcherian/repos", "events_url": "https://api.github.com/users/dcherian/events{/privacy}", "received_events_url": "https://api.github.com/users/dcherian/received_events", "type": "User", "site_admin": false }, "merge_method": "squash", "commit_title": "[DOC] resample and then apply func on time+other variables (#7725)", "commit_message": "* [DOC] resample and then apply func on time+other variables:\r\n\r\n* [pre-commit.ci] auto fixes from pre-commit.com hooks\r\n\r\nfor more information, see https://pre-commit.ci\r\n\r\n* Update doc/user-guide/time-series.rst\r\n\r\n---------\r\n\r\nCo-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>\r\nCo-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>" } | xarray 13221727 | https://github.com/pydata/xarray/pull/7725 | ||||
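The test cases quoted in PR #5876 above describe a mapping from month numbers to season labels, with `np.nan` and values outside [1, 12] mapped to `'na'`. A minimal NumPy sketch of that behaviour (a hypothetical reimplementation for illustration, not xarray's actual `_season_from_months`):

```python
import numpy as np

def season_from_months(months):
    """Map month numbers (1-12) to season labels.

    Hypothetical sketch of the behaviour described in PR #5876;
    NaN and out-of-range values map to 'na'.
    """
    seasons = np.array(["DJF", "MAM", "JJA", "SON", "na"])
    months = np.asarray(months)
    # Non-finite or out-of-range months get the sentinel index 4 ('na').
    invalid = ~np.isfinite(months) | (months < 1) | (months > 12)
    # Month 12 wraps to DJF via the modulo; nan_to_num avoids NaN arithmetic.
    idx = np.where(invalid, 4, (np.nan_to_num(months) % 12) // 3).astype(int)
    return seasons[idx]
```

This reproduces the three test cases quoted in the PR body, e.g. `season_from_months([1, 100, 3, 13, 0, -5])` gives `['DJF', 'na', 'MAM', 'na', 'na', 'na']`.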
CREATE TABLE [pull_requests] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [state] TEXT,
  [locked] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [body] TEXT,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [merged_at] TEXT,
  [merge_commit_sha] TEXT,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [draft] INTEGER,
  [head] TEXT,
  [base] TEXT,
  [author_association] TEXT,
  [auto_merge] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [url] TEXT,
  [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
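The table above can be queried with plain SQLite, e.g. "merged PRs by this user" via `user = 8809578 AND merged_at IS NOT NULL`. A small sketch using Python's `sqlite3`, with the schema abridged to the columns used and one row taken from the data above:

```python
import sqlite3

# In-memory database with an abridged version of the schema above;
# column names follow the CREATE TABLE statement.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE pull_requests (
        id INTEGER PRIMARY KEY, number INTEGER, state TEXT,
        title TEXT, user INTEGER, merged_at TEXT
    )"""
)
# One sample row from the table above (PR #5876).
conn.execute(
    "INSERT INTO pull_requests VALUES (?, ?, ?, ?, ?, ?)",
    (761583682, 5876, "closed",
     "_season_from_months can now handle np.nan",
     8809578, "2022-01-11T16:06:18Z"),
)
# Merged PRs by user 8809578, ordered by PR number.
merged = conn.execute(
    "SELECT number, title FROM pull_requests "
    "WHERE user = 8809578 AND merged_at IS NOT NULL ORDER BY number"
).fetchall()
```

In the full database, `user`, `assignee`, `repo`, and `merged_by` are foreign keys into `users` and `repos`, so the same query could join on those tables instead of hard-coding the user id.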