

pull_requests


2 rows where user = 167802
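
The filter above corresponds to SQL along the following lines (a minimal sketch; the exact query Datasette generates may differ in quoting or column order):

select *
from [pull_requests]
where [user] = 167802;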

Row 1

  id: 236154553
  node_id: MDExOlB1bGxSZXF1ZXN0MjM2MTU0NTUz
  number: 2591
  state: closed
  locked: 0
  title: Fix h5netcdf saving scalars with filters or chunks
  user: mraspaud (167802)
  body:
    - [x] Closes #2563
    - [x] Tests added (for all bug fixes or enhancements)
    - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
  created_at: 2018-12-05T12:22:40Z
  updated_at: 2018-12-11T07:27:27Z
  closed_at: 2018-12-11T07:24:36Z
  merged_at: 2018-12-11T07:24:36Z
  merge_commit_sha: 53746c962701a864255f15e69e5ab5fec4cf908c
  assignee: (none)
  milestone: (none)
  draft: 0
  head: 59bb90f77e74d42ce7d255ce07b4f56e7ae20074
  base: 77634d451ff57b95f33d76da73020df4a68eeed9
  author_association: CONTRIBUTOR
  auto_merge: (none)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/2591
  merged_by: (none)

Row 2

  id: 1411248888
  node_id: PR_kwDOAMm_X85UHfL4
  number: 7948
  state: closed
  locked: 0
  title: Implement preferred_chunks for netcdf 4 backends
  user: mraspaud (167802)
  body:
    According to the `open_dataset` documentation, using `chunks="auto"` or `chunks={}` should yield datasets with variables chunked depending on the preferred chunks of the backend. However, neither the netcdf4 nor the h5netcdf backend seems to implement the `preferred_chunks` encoding attribute needed for this to work. This PR adds this attribute to the encoding upon data reading. As a result, `chunks="auto"` in `open_dataset` returns variables with chunk sizes that are multiples of the chunks in the nc file, and `chunks={}` returns variables with exactly the nc chunk sizes.
    - [x] Closes #1440
    - [x] Tests added
    - [ ] User visible changes (including notable bug fixes) are documented in `whats-new.rst`
    - [ ] New functions/methods are listed in `api.rst`
  created_at: 2023-06-28T08:43:30Z
  updated_at: 2023-09-12T09:01:03Z
  closed_at: 2023-09-11T23:05:49Z
  merged_at: 2023-09-11T23:05:49Z
  merge_commit_sha: de66daec91aa277bf3330c66e0e7f70a8d2f5acc
  assignee: (none)
  milestone: (none)
  draft: 0
  head: 8d1a140da55597b6b7074e2aad0e24b7bfc9ac90
  base: 3edd9978b4590666e83d8c0e4e8f574be09ff4c8
  author_association: CONTRIBUTOR
  auto_merge: (none)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/7948
  merged_by: (none)

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
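
The foreign keys in the schema above point at the [users], [repos] and [milestones] tables. A minimal sketch of joining this filtered view against two of them follows; the [login] and [name] columns are assumptions about tables not shown on this page:

-- Resolve the user and repo foreign keys for this author's pull requests.
-- [users].[login] and [repos].[name] are assumed column names, not shown above.
select
    [pull_requests].[number],
    [pull_requests].[title],
    [users].[login] as author,
    [repos].[name] as repository
from [pull_requests]
join [users] on [users].[id] = [pull_requests].[user]
join [repos] on [repos].[id] = [pull_requests].[repo]
where [pull_requests].[user] = 167802;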