pull_requests
2 rows where user = 4992424
Row 1

- id: 106592251
- node_id: MDExOlB1bGxSZXF1ZXN0MTA2NTkyMjUx
- number: 1272
- state: closed
- locked: 0
- title: Groupby-like API for resampling
- user: darothen (4992424)
- created_at: 2017-02-16T19:04:07Z
- updated_at: 2017-09-22T16:27:36Z
- closed_at: 2017-09-22T16:27:35Z
- merged_at: 2017-09-22T16:27:35Z
- merge_commit_sha: dc7d733bcc10ce935304d65d03124471661243a3
- assignee: (none)
- milestone: 0.10 (2415632)
- draft: 0
- head: 5cfba57c9dec5546c8441bb286107e55d048584c
- base: 7611ed9b678c4004855856d2ec6dc6eb7ac59123
- author_association: NONE
- auto_merge: (none)
- repo: xarray (13221727)
- url: https://github.com/pydata/xarray/pull/1272
- merged_by: (none)

body:

This is a work-in-progress to resolve #1269.

- [x] Basic functionality
- [x] Cleanly deprecate old API
- [x] New test cases
- [x] Documentation / examples
- [x] "What's new"

I openly welcome feedback/critiques on how I approached this. Subclassing `Data{Array/set}GroupBy` may not be the best way, but it would be easy enough to re-write the necessary helper functions (just `apply()`, I think) so that we do not need to inherit from them directly.

Additional issues I'm working to resolve:

- [x] I tried to make sure that calls using the old API won't break by refactoring the old logic to `_resample_immediately()`. This may not be the best approach!
- [x] Similarly, I copied all the original test cases and added the suffix `..._old_api`; these could trivially be placed into their related test cases for the new API.
- [x] BUG: **keep_attrs** is ignored when passed to methods chained to `Dataset.resample()`. Oddly enough, if I hard-code **keep_attrs=True** inside `reduce_array()` in `DatasetResample::reduce`, it works just fine. I haven't figured out where the kwarg is getting lost.
- [x] BUG: Some of the test cases (for instance, `test_resample_old_vs_new_api`) fail because resampling via `self.groupby_cls` ends up not working: it crashes because the group sizes that get computed are not what it expects. This occurs with both the new and old API.
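For context on the API change this PR describes, the groupby-like form passes the resampling dimension as a keyword to `.resample()` and defers the reduction to a chained method, replacing the older call that took `dim=` and `how=` arguments. A minimal sketch, not code from the PR itself; the array, coordinate name, and frequency strings are illustrative:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy hourly data; "da" and "time" are illustrative names only.
times = pd.date_range("2017-01-01", periods=96, freq="h")
da = xr.DataArray(np.arange(96.0), coords={"time": times}, dims="time")

# Old, since-removed API: frequency, dimension, and reduction in one call.
# daily_old = da.resample("24h", dim="time", how="mean")

# Groupby-like API: .resample() returns a resample object and the
# reduction is a chained method, mirroring .groupby().
daily = da.resample(time="24h").mean()
print(daily)
```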
Row 2

- id: 114680372
- node_id: MDExOlB1bGxSZXF1ZXN0MTE0NjgwMzcy
- number: 1356
- state: closed
- locked: 0
- title: Add DatetimeAccessor for accessing datetime fields via `.dt` attribute
- user: darothen (4992424)
- created_at: 2017-04-06T19:48:19Z
- updated_at: 2017-04-29T01:19:12Z
- closed_at: 2017-04-29T01:18:59Z
- merged_at: 2017-04-29T01:18:59Z
- merge_commit_sha: 8f6a68e3f821689203bce2bce52b412e9fe70b5c
- assignee: (none)
- milestone: (none)
- draft: 0
- head: b2863139ec3bc437030a48ff33a54d68ca91b026
- base: ab4ffee919d4abe9f6c0cf6399a5827c38b9eb5d
- author_association: NONE
- auto_merge: (none)
- repo: xarray (13221727)
- url: https://github.com/pydata/xarray/pull/1356
- merged_by: (none)

body:

- [x] Partially closes #358
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry

This uses `register_dataarray_accessor` to add attributes similar to those on pandas time series, which let users quickly access datetime fields from an underlying array of datetime-like values. The referenced issue (#358) also asks about adding similar accessors for `.str`, but that is a more complex topic; I think a compelling use case would help in figuring out what the critical functionality is.

## Virtual time fields

Presumably this could be used to augment `Dataset._get_virtual_variable()`. A **season** field would need to be added as a special field to the accessor.
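The `.dt` accessor this PR adds exposes datetime fields on a DataArray of datetime-like values, analogous to `Series.dt` in pandas. A small sketch of the kind of usage it enables; the toy coordinate is illustrative, and the printed values are only indicative:

```python
import pandas as pd
import xarray as xr

# Any datetime64-backed DataArray works; this one is purely illustrative.
times = xr.DataArray(
    pd.date_range("2017-01-01", periods=4, freq="6h"), dims="time", name="time"
)

print(times.dt.hour.values)       # e.g. [ 0  6 12 18]
print(times.dt.dayofyear.values)  # e.g. [1 1 1 1]
print(times.dt.season.values)     # e.g. ['DJF' 'DJF' 'DJF' 'DJF']
```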
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
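The two rows above come from filtering this table on the `user` foreign key. A sketch of a query that reproduces the filter against the underlying SQLite file; the filename `github.db` and the `login` column on the companion `users` table are assumptions (they are not stated on this page, though they match typical github-to-sqlite exports):

```python
import sqlite3

# Hypothetical database file name for this export.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# Reproduce "2 rows where user = 4992424", joining the users table
# referenced by the [user] foreign key for a readable login.
rows = conn.execute(
    """
    SELECT pull_requests.number, pull_requests.title, users.login
    FROM pull_requests
    JOIN users ON users.id = pull_requests.user
    WHERE pull_requests.user = ?
    ORDER BY pull_requests.id
    """,
    (4992424,),
).fetchall()

for row in rows:
    print(row["number"], row["login"], row["title"])
```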