pull_requests

4 rows where user = 31115101

#2486 · Zarr determine chunks · closed
  id: 222737031 · node_id: MDExOlB1bGxSZXF1ZXN0MjIyNzM3MDMx
  user: lilyminium (31115101) · author_association: CONTRIBUTOR
  locked: 0 · draft: 0
  created_at: 2018-10-14T19:21:10Z · updated_at: 2018-10-14T19:26:40Z
  closed_at: 2018-10-14T19:26:40Z · merged_at: (null) · merged_by: (null)
  merge_commit_sha: cd398ebb6dde285655e0b6ee2a8e1fe2390d80e6
  head: 734f9e1f6f1b5042edf95c0edd58d34c53158809
  base: 4bad455a801e91b329794895afa0040c868ff128
  assignee: (null) · milestone: (null) · auto_merge: (null)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/2486
  body:
    - [ ] Quick band-aid for #2300
    - [ ] Test added to check that a zarr-originating dataset can be saved

#2487 · Zarr chunking (GH2300) · closed
  id: 222737809 · node_id: MDExOlB1bGxSZXF1ZXN0MjIyNzM3ODA5
  user: lilyminium (31115101) · author_association: CONTRIBUTOR
  locked: 0 · draft: 0
  created_at: 2018-10-14T19:35:06Z · updated_at: 2018-11-02T04:59:19Z
  closed_at: 2018-11-02T04:59:04Z · merged_at: 2018-11-02T04:59:04Z · merged_by: (null)
  merge_commit_sha: f788084f6672e1325938ba2f6a0bd105aa412091
  head: 93990270674eb8bcbd45d3a4fa8b43200684c606
  base: 6d55f99905d664ef73cb708cfe8c52c2c651e8dc
  assignee: (null) · milestone: (null) · auto_merge: (null)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/2487
  body:
    - [x] Band-aid for #2300
    - [x] Test added to check that zarr-originating array can save
    - [x] Updated whats-new
    I don't fully understand the ins-and-outs of Zarr, but it seems that if it can be serialised with a smaller end-chunk to begin with, then saving a Dataset constructed from Zarr should not have an issue with that either.

#2530 · Manually specify chunks in open_zarr · closed
  id: 227407536 · node_id: MDExOlB1bGxSZXF1ZXN0MjI3NDA3NTM2
  user: lilyminium (31115101) · author_association: CONTRIBUTOR
  locked: 0 · draft: 0
  created_at: 2018-10-31T19:04:05Z · updated_at: 2019-04-18T14:35:21Z
  closed_at: 2019-04-18T14:34:29Z · merged_at: 2019-04-18T14:34:29Z · merged_by: (null)
  merge_commit_sha: baf81b42b33bd4e6ffab94722a021d87b79931d7
  head: f17cb5e99779acda42e211a5d18868aa168cef3b
  base: aaae999bfd4b90aaaf13fea7ac3279d26562ce70
  assignee: (null) · milestone: (null) · auto_merge: (null)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/2530
  body:
    - [x] Addresses #2423
    - [x] Tests added (for all bug fixes or enhancements)
    - [x] Fully documented, including `whats-new.rst`
    This adds a `chunks` parameter that is analogous to Dataset.chunk. `auto_chunk` is kept for backwards compatibility and is equivalent to `chunks='auto'`. It seems reasonable that anyone manually specifying chunks may want to rewrite the dataset in those chunks, and the error that arises when the encoded Zarr chunks mismatch the variable Dask chunks may quickly get annoying. `overwrite_encoded_chunks=True` sets the encoded chunks to None so there is no clash.

#2533 · Check shapes of coordinates and data during DataArray construction · open
  id: 227447653 · node_id: MDExOlB1bGxSZXF1ZXN0MjI3NDQ3NjUz
  user: lilyminium (31115101) · author_association: CONTRIBUTOR
  locked: 0 · draft: 0
  created_at: 2018-10-31T21:28:04Z · updated_at: 2022-06-09T14:50:17Z
  closed_at: (null) · merged_at: (null) · merged_by: (null)
  merge_commit_sha: 8ccafca1b3ed5d1d9ab87da01803e521b40507c0
  head: e488efdc04af733f751a5299b6dea5b23c90ab73
  base: d1e4164f3961d7bbb3eb79037e96cae14f7182f8
  assignee: (null) · milestone: (null) · auto_merge: (null)
  repo: xarray (13221727)
  url: https://github.com/pydata/xarray/pull/2533
  body:
    - [x] Closes #2529 (remove if there is no corresponding issue, which should only be the case for minor changes)
    - [x] Tests added (for all bug fixes or enhancements)
    - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API (remove if this change should not be visible to users, e.g., if it is an internal clean-up, or if this is part of a larger project that will be documented later)
    This sets DataArrayGroupBy.reduce(shortcut=False), as the shortcut first constructs a DataArray with the previous coordinates and the new mutated data before updating the coordinates; this order of events now raises a ValueError.
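The "4 rows where user = 31115101" view above is just a filtered SELECT. A minimal sketch of the equivalent query using Python's sqlite3, against an in-memory stand-in for the table (only the columns needed for the filter are reproduced; the ids and user value come from the rows above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in for pull_requests: just the columns used by the filter.
conn.execute(
    "CREATE TABLE pull_requests "
    "(id INTEGER PRIMARY KEY, number INTEGER, state TEXT, user INTEGER)"
)
rows = [
    (222737031, 2486, "closed", 31115101),
    (222737809, 2487, "closed", 31115101),
    (227407536, 2530, "closed", 31115101),
    (227447653, 2533, "open", 31115101),
]
conn.executemany("INSERT INTO pull_requests VALUES (?, ?, ?, ?)", rows)

# The page's "where user = 31115101" filter, written out as SQL:
count = conn.execute(
    "SELECT count(*) FROM pull_requests WHERE user = ?", (31115101,)
).fetchone()[0]
print(count)  # 4
```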

CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
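Note that the REFERENCES clauses above ([user], [assignee], [milestone], [repo], [merged_by]) are only enforced when foreign-key support is switched on for the connection; SQLite ignores them otherwise. A minimal sketch, assuming a stub [users] table (its real schema is not shown on this page) and a two-column cut of pull_requests:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # without this, REFERENCES is not enforced

# Stub parent table; the real [users] schema is not shown here.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE pull_requests ("
    " id INTEGER PRIMARY KEY,"
    " user INTEGER REFERENCES users(id))"
)

conn.execute("INSERT INTO users (id) VALUES (31115101)")
conn.execute("INSERT INTO pull_requests VALUES (222737031, 31115101)")  # parent exists: ok

try:
    # user 999 has no row in [users], so the constraint fires
    conn.execute("INSERT INTO pull_requests VALUES (1, 999)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```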
CREATE INDEX [idx_pull_requests_merged_by]
    ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo]
    ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone]
    ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee]
    ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user]
    ON [pull_requests] ([user]);
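Each index above covers one foreign-key column, so filters like the one on this page ("user = 31115101") can be answered without scanning the whole table. A sketch of how to confirm that with EXPLAIN QUERY PLAN, again on a reduced two-column copy of the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pull_requests (id INTEGER PRIMARY KEY, user INTEGER)")
conn.execute("CREATE INDEX idx_pull_requests_user ON pull_requests (user)")

# EXPLAIN QUERY PLAN reports whether the filter uses the index; the detail
# string is the last column of each plan row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM pull_requests WHERE user = ?", (31115101,)
).fetchall()
detail = plan[0][-1]
print(detail)  # mentions idx_pull_requests_user rather than a table scan
```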
Powered by Datasette · About: xarray-datasette