pull_requests
4 rows where user = 10720577
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
251611039 | MDExOlB1bGxSZXF1ZXN0MjUxNjExMDM5 | 2757 | closed | 0 | Allow expand_dims() method to support inserting/broadcasting dimensions with size>1 | pletchm 10720577 | This pull request enhances the `expand_dims` method for both `Dataset` and `DataArray` objects to support inserting/broadcasting dimensions with size > 1. It corresponds to this issue https://github.com/pydata/xarray/issues/2710. ## Changes: 1. The `dataset.expand_dims()` method now takes a dict-like object whose values give either the length of each new dimension or its coordinates 2. The `dataarray.expand_dims()` method now takes a dict-like object whose values give either the length of each new dimension or its coordinates 3. As an alternative to passing a dict to the `dim` argument (now an optional kwarg), each new dimension can be passed as its own keyword argument 4. Add the expand_dims enhancement from issue 2710 to whats-new.rst ## Included: - [ ] Tests added - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API ## What's new: All of the old functionality is still there, so it shouldn't break anyone's existing code that uses it. You can now pass `dim` as a dict, where the keys are the new dimensions and the values are either integers (giving the length of the new dimensions) or iterables (giving the coordinates of the new dimensions). ``` import numpy as np import xarray as xr >>> original = xr.Dataset({'x': ('a', np.random.randn(3)), 'y': (['b', 'a'], np.random.randn(4, 3))}, coords={'a': np.linspace(0, 1, 3), 'b': np.linspace(0, 1, 4), 'c': np.linspace(0, 1, 5)}, attrs={'key': 'entry'}) >>> original <xarray.Dataset> Dimensions: (a: 3, b: 4, c: 5) Coordinates: * a (a) float64 0.0 0.5 1.0 * b (b) float64 0.0 0.3333 0.6667 1.0 * c (c) float64 0.0 0.25 0.5 0.75 1.0 Data variables: x (a) float64 -1.556 0.2178 0.6319 y (b, a) float64 0.5273 0.6652 0.3418 1.858 ... 
-0.3519 0.8088 0.8753 Attributes: key: entry >>> … | 2019-02-08T21:59:36Z | 2019-03-26T02:42:11Z | 2019-03-26T02:41:48Z | 2019-03-26T02:41:48Z | 16a2c03bb23757a92f3f9b8e74c4d489e892e6d6 | 0 | cac434e45781f39643f5b42eb96fd4c1f82fe8ed | 164d20abb6ffc35eb3f314ce7fb5b9600cf9de3f | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/2757 | ||||
274547973 | MDExOlB1bGxSZXF1ZXN0Mjc0NTQ3OTcz | 2930 | closed | 0 | Bugfix/coords not deep copy | pletchm 10720577 | This pull request fixes a bug that prevented making a complete deep copy of a `DataArray` or `Dataset`, because the `coords` weren't being deep copied. It took a small fix in the `IndexVariable.copy` method. This method now allows both deep and shallow copies of coords to be made. This pull request corresponds to this issue https://github.com/pydata/xarray/issues/1463. - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API | 2019-04-29T22:43:10Z | 2023-01-20T10:58:37Z | 2019-05-02T22:46:36Z | e2e3317ba115598235c4920e8deeae6f6f06552d | 0 | 3d964985c28a76fc829cb843a4dac6d52b6aa97d | 6d93a95d05bdbfc33fff24064f67d29dd891ab58 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/2930 | |||||
275547871 | MDExOlB1bGxSZXF1ZXN0Mjc1NTQ3ODcx | 2936 | closed | 0 | BUGFIX: deep-copy wasn't copying coords, bug fixed within IndexVariable | pletchm 10720577 | This pull request fixes a bug that prevented making a complete deep copy of a `DataArray` or `Dataset`, because the `coords` weren't being deep copied. It took a small fix in the `IndexVariable.copy` method. This method now allows both deep and shallow copies of `coords` to be made. This pull request corresponds to this issue https://github.com/pydata/xarray/issues/1463. - [x] Tests added - [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API | 2019-05-02T22:58:40Z | 2019-05-09T22:31:02Z | 2019-05-08T14:44:25Z | 2019-05-08T14:44:25Z | c04234d4892641e1da89e47c7164cdcf5e4777a4 | 0 | 1db167329596995bab739682e10773d64a66b31a | 5aaa6547cd14a713f89dfc7c22643d86fce87916 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/2936 | ||||
277517546 | MDExOlB1bGxSZXF1ZXN0Mjc3NTE3NTQ2 | 2953 | closed | 0 | Mark test for copying coords of dataarray and dataset with xfail | pletchm 10720577 | Mark the test for copying coords of a dataarray and dataset with `xfail`. The test fails for the shallow copy, and apparently only on Windows. On Windows, coords seem to be immutable unless it's one dataarray deep-copied from another (which is why only the `deep=False` test fails). So I decided to just mark the tests as `xfail` for now (but I'd be happy to create an issue and look into it more in the future). | 2019-05-09T19:25:40Z | 2019-05-09T22:17:56Z | 2019-05-09T22:17:53Z | 2019-05-09T22:17:53Z | 698293e6cdff8be94464af00e8b8f171e8fd1557 | 0 | f43abfcc524ff20a0a6aee029118f1a42511b8ed | ab3972294860447f9515c7b7b0a04838db061496 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/2953 |
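The dict form of `expand_dims` described in PR 2757 can be illustrated without xarray. The sketch below is a minimal pure-Python illustration of the semantics only — the `expand_dims` helper here is hypothetical and is not xarray's implementation: each key names a new dimension, each value is either an int (its length) or an iterable (its coordinates), and the data is broadcast along each inserted axis.

```python
# Minimal pure-Python sketch of the dict-form expand_dims semantics from
# PR 2757 -- this `expand_dims` helper is illustrative, not xarray's code.
def expand_dims(data, dims):
    """Insert new leading dimensions into `data` (a nested list).

    `dims` maps each new dimension name to either an int (its length)
    or an iterable (its coordinate values). The data is repeated along
    every inserted axis, mimicking broadcasting to size > 1.
    """
    coords = {}
    # Walk the dims in reverse so the first key ends up outermost.
    for name, spec in reversed(list(dims.items())):
        coords[name] = list(range(spec)) if isinstance(spec, int) else list(spec)
        # Repeat (broadcast) the current data along the new axis; the
        # repeats share references, like a broadcast view, not copies.
        data = [data for _ in range(len(coords[name]))]
    return data, coords

expanded, coords = expand_dims([1.0, 2.0, 3.0], {"time": 2})
print(expanded)  # two broadcast repeats of the original 1-D data
print(coords)
```

As in the PR, an int value yields default integer coordinates, while an iterable value supplies the coordinates directly and its length fixes the dimension size.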
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
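The `CREATE TABLE` statement above can be exercised directly with Python's standard-library `sqlite3`. This sketch builds a trimmed copy of the schema (the `REFERENCES` clauses are omitted because the `users`, `repos`, and `milestones` tables are not created here), inserts the first row from the listing, and reproduces the `user = 10720577` filter shown at the top of the page.

```python
# Recreate a trimmed pull_requests table in-memory and run the
# "rows where user = 10720577" filter, using only the stdlib.
import sqlite3

SCHEMA = """
CREATE TABLE [pull_requests] (
  [id] INTEGER PRIMARY KEY, [node_id] TEXT, [number] INTEGER,
  [state] TEXT, [locked] INTEGER, [title] TEXT,
  [user] INTEGER, [body] TEXT,
  [created_at] TEXT, [updated_at] TEXT, [closed_at] TEXT, [merged_at] TEXT,
  [merge_commit_sha] TEXT, [assignee] INTEGER, [milestone] INTEGER,
  [draft] INTEGER, [head] TEXT, [base] TEXT, [author_association] TEXT,
  [auto_merge] TEXT, [repo] INTEGER, [url] TEXT, [merged_by] INTEGER
);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# One row taken from the table above; columns not shown are left NULL.
conn.execute(
    "INSERT INTO pull_requests (id, number, state, title, user, repo) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    (251611039, 2757, "closed",
     "Allow expand_dims() method to support inserting/broadcasting "
     "dimensions with size>1", 10720577, 13221727),
)

rows = conn.execute(
    "SELECT number, state, title FROM pull_requests WHERE user = ?",
    (10720577,),
).fetchall()
print(rows)
```

The `idx_pull_requests_user` index is the one that backs this filter; the other indexes in the schema serve the `merged_by`, `repo`, `milestone`, and `assignee` lookups in the same way.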