pull_requests


4 rows where user = 16700639
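The row filter above corresponds to a query along these lines (a minimal sketch; the exact SQL generated for this view may differ):

select *
from pull_requests
where [user] = 16700639   -- bzah's numeric user id
order by id;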


Suggested facets: created_at (date), updated_at (date), closed_at (date), merged_at (date)
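A date facet groups the ISO timestamps above by day; a sketch of what the suggested created_at (date) facet boils down to (my own illustration, not the query Datasette itself runs):

select date(created_at) as created_date,
       count(*) as n
from pull_requests
where [user] = 16700639
group by created_date
order by n desc;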

Each pull request is shown below as one record. All four share: user = bzah (16700639), state = closed, locked = 0, draft = 0, author_association = CONTRIBUTOR, repo = xarray (13221727); assignee, milestone, auto_merge and merged_by are empty.

id: 760167667 · node_id: PR_kwDOAMm_X84tTzzz · number: 5873
title: Allow indexing unindexed dimensions using dask arrays
created_at: 2021-10-18T07:56:58Z · updated_at: 2023-03-16T14:54:51Z
closed_at: 2023-03-15T02:47:59Z · merged_at: 2023-03-15T02:47:59Z
merge_commit_sha: 83e159e303906a6656f07b728ff9b69dadba39f2
head: 75a629976ff5e431a80d02b591cd2ffdbe984422 · base: 49ae0f8dd39e8fc59ed3476f50b9938bbb40d0c4
url: https://github.com/pydata/xarray/pull/5873
body: <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #2511 - [x] Closes #4663 - [x] Closes #4276 - [x] xref #7516 - [x] Tests added - [x] Passes `pre-commit run --all-files` - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst` This is a naive attempt to make `isel` work with Dask. Known limitation: it triggers the computation. **WIP** The code presented here is mainly to support the discussion on #2511. It has not been unit tested and should probably not be merged as is.

id: 800082232 · node_id: PR_kwDOAMm_X84vsEk4 · number: 6068
title: DOC: Add "auto" to dataarray `chunk` method
created_at: 2021-12-10T16:50:24Z · updated_at: 2022-01-03T21:35:02Z
closed_at: 2022-01-03T21:35:02Z · merged_at: 2022-01-03T21:35:02Z
merge_commit_sha: 0a40bf19536ec8b7e417e8085e384fb0208f06ba
head: 5bce866afaa7409d6f0bd52178dcee4977e6868b · base: 2694046c748a51125de6d460073635f1d789958e
url: https://github.com/pydata/xarray/pull/6068
body: <!-- Feel free to remove check-list items aren't relevant to your change --> - [ ] Closes #xxxx - [ ] Tests added - [x] Passes `pre-commit run --all-files` - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst` - [ ] New functions/methods are listed in `api.rst` This PR adds `str` type on `datarray.chunk` method the same way it is accepted on `dataset.chunk`. The corresponding documentation has been updated. I wanted to add a unit test for `da.chunk("auto")` but the existing tests seems to rely on zarr which I don't know at all.

id: 828648264 · node_id: PR_kwDOAMm_X84xZCtI · number: 6182
title: DOC: fix dead link
created_at: 2022-01-21T15:41:49Z · updated_at: 2022-01-21T16:19:07Z
closed_at: 2022-01-21T16:16:26Z · merged_at: 2022-01-21T16:16:26Z
merge_commit_sha: e512cf2f0b31cf9080b506cd5814ed0a5a185ce9
head: 79199e0789e1a7d5c18caee2c1ccad25b21b7210 · base: 0ffb0f42282a1b67c4950e90e1e4ecd146307aa8
url: https://github.com/pydata/xarray/pull/6182
body: <!-- Feel free to remove check-list items aren't relevant to your change --> Fix link to "Generalized Universal Function API" of Numpy's doc

id: 1503215624 · node_id: PR_kwDOAMm_X85ZmUAI · number: 8147
title: Add support for netCDF4.EnumType
created_at: 2023-09-05T17:20:50Z · updated_at: 2024-01-17T19:10:51Z
closed_at: 2024-01-17T07:19:32Z · merged_at: 2024-01-17T07:19:32Z
merge_commit_sha: d20ba0d387d206a21d878eaf25c8b3392f2453d5
head: f22046dbf97d930b54163f4cda9b0081edc1fb38 · base: 33d51c8da3acbdbe550e496a006184bda3de3beb
url: https://github.com/pydata/xarray/pull/8147
body: This pull request add support for enums on netcdf4 backend. Enum were added in netCDF4-python in 1.2.0 (September 2015). In the netcdf format, they are defined as types and can be use across the dataset to type variable when on creation. They are meant to be an alternative to flag_values, flag_meanings. This pull request makes it possible for xarray to read existing enums in a file, convert them into flag_values/flag_meanings and save them as enums when an special encoding flag is filled in. TODO: - [x] Add implementation for other backends ? Will be added in follow-up PR ---- <!-- Feel free to remove check-list items aren't relevant to your change --> - [x] Closes #8144 - [x] Tests added - [x] User visible changes (including notable bug fixes) are documented in `whats-new.rst`


CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
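The [user], [assignee], [milestone], [repo] and [merged_by] columns are foreign keys, which is why each carries an index. A hedged sketch of a join that resolves two of them to readable values, assuming the [users] table has a [login] column and [repos] has a [full_name] column (neither table is shown on this page):

select pull_requests.number,
       pull_requests.title,
       users.login as author,          -- assumed column on [users]
       repos.full_name as repository,  -- assumed column on [repos]
       pull_requests.merged_at
from pull_requests
join users on users.id = pull_requests.[user]
join repos on repos.id = pull_requests.repo
where pull_requests.[user] = 16700639
order by pull_requests.created_at;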
Powered by Datasette · About: xarray-datasette