

pull_requests


2 rows where user = 2941720 (lewisacidic)


Suggested facets: created_at (date), updated_at (date), closed_at (date), merged_at (date)

Both rows, shown one column per line (empty cells as null):

Row 1
id: 121631853
node_id: MDExOlB1bGxSZXF1ZXN0MTIxNjMxODUz
number: 1416
state: closed
locked: 0
title: Moved register_dataset_accessor examples docs to appropriate docstring
user: lewisacidic (2941720)
body:
  Just noticed this when reading through the code - VERY minor. The Examples docstring for register_dataarray_accessor referred to register_dataset_accessor instead. Moved the example to register_dataset_accessor.
  - [ ] Closes #xxxx
  - [ ] Tests added / passed
  - [ ] Passes ``git diff upstream/master | flake8 --diff``
  - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
  ^ hopefully don't need to fill these in
created_at: 2017-05-20T17:43:25Z
updated_at: 2017-05-21T20:18:10Z
closed_at: 2017-05-21T20:18:10Z
merged_at: 2017-05-21T20:18:10Z
merge_commit_sha: 028454d9d8c6d7d2f8afd7d0133941f961dbe231
assignee: null
milestone: null
draft: 0
head: c29341b96a11510f003169c794b17d3fd1957afc
base: d5c7e0612e8243c0a716460da0b74315f719f2df
author_association: CONTRIBUTOR
auto_merge: null
repo: xarray (13221727)
url: https://github.com/pydata/xarray/pull/1416
merged_by: null

Row 2
id: 121899206
node_id: MDExOlB1bGxSZXF1ZXN0MTIxODk5MjA2
number: 1421
state: open
locked: 0
title: Adding arbitrary object serialization
user: lewisacidic (2941720)
body:
  This adds support for object serialization using the netCDF4-python backend. Minimum working (at least appears to..) example, no tests yet. I added `allow_object` kwarg (rather than `allow_pickle`, no reason to firmly attach pickle to the api, could use something else for other backends). This is now for:
  - `to_netcdf`
  - `AbstractDataStore` (a `True` value raises `NotImplementedError` for everything but `NetCDF4DataStore`)
  - `cf_encoder`, which when `True` alters its behaviour to allow `dtype('O')` through.
  `NetCDF4DataStore` handles this independently from the cf_encoder/decoder. The dtype support made it hard to decouple, plus I think object serialization is a backend dependent issue. There's a lot of potential for refactoring, just pushed this to get opinions about whether this was a reasonable approach - I'm relatively new to open source, so would appreciate any constructive feedback/criticisms!
  - [ ] Closes #xxxx
  - [ ] Tests added / passed
  - [ ] Passes ``git diff upstream/master | flake8 --diff``
  - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API
  ^ these will come later!
created_at: 2017-05-23T01:59:37Z
updated_at: 2022-06-09T14:50:17Z
closed_at: null
merged_at: null
merge_commit_sha: e0280d2771d39477920900d0857f248bce5ad87a
assignee: null
milestone: null
draft: 0
head: 2b387ca6ef87be0d77ad1d78adf63d9531519f13
base: d1e4164f3961d7bbb3eb79037e96cae14f7182f8
author_association: CONTRIBUTOR
auto_merge: null
repo: xarray (13221727)
url: https://github.com/pydata/xarray/pull/1421
merged_by: null

Table schema:

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);