pull_requests
26 rows where milestone = 650893
id ▼ | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15862044 | MDExOlB1bGxSZXF1ZXN0MTU4NjIwNDQ= | 128 | closed | 0 | Expose more information in DataArray.__repr__ | shoyer 1217238 | This PR changes the `DataArray` representation so that it displays more of the information associated with a data array: - "Coordinates" are indicated by their name and the `repr` of the corresponding pandas.Index object (to indicate how they are used as indices). - "Linked" dataset variables are also listed. - These are other variables in the dataset associated with a DataArray which are also indexed along with the DataArray. - They are accessible from the `dataset` attribute or by indexing the data array with a string. - Perhaps their most convenient aspect is that they enable [`groupby` operations by name](http://xray.readthedocs.org/en/latest/tutorial.html#apply) for DataArray objects. - This is an admittedly somewhat confusing (though convenient) notion that I am considering [removing](https://github.com/pydata/xarray/issues/117), but if we don't remove them we should certainly expose their existence more clearly, given the potential benefits in expressiveness and costs in performance. Questions to resolve: - Is "Linked dataset variables" the best name for these? - Perhaps it would be useful to show more information about these linked variables, such as their dimensions and/or shape? Examples of the new repr are on nbviewer: http://nbviewer.ipython.org/gist/shoyer/94936e5b71613683d95a | 2014-05-14T06:05:53Z | 2014-08-01T05:54:50Z | 2014-05-29T04:19:46Z | 2014-05-29T04:19:46Z | 166ba9652e44423de902351d65e94216f5d8125a | 0.2 650893 | 0 | 238cb2a3d360e4dc0977c0e37758faf62e262fab | ed3143e3082ba339d35dc4678ddabc7e175dd6b8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/128 | |||
16085838 | MDExOlB1bGxSZXF1ZXN0MTYwODU4Mzg= | 137 | closed | 0 | Dataset.reduce methods | jhamman 2443309 | A first attempt at implementing Dataset reduction methods. #131 | 2014-05-20T01:53:30Z | 2014-07-25T06:37:31Z | 2014-05-21T20:23:36Z | 2014-05-21T20:23:36Z | f6a6e7317c78e108176b74f1f67e12f5880e14fa | 0.2 650893 | 0 | b5d82a0887f7156ddd4ab1c1aab89345bd642162 | 7732816216bbb5d0c98946149c9f3b8dc54eb28f | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/137 | |||
16622100 | MDExOlB1bGxSZXF1ZXN0MTY2MjIxMDA= | 144 | closed | 0 | Use "equivalence" for all dictionary equality checks | shoyer 1217238 | This should fix a bug @mgarvert encountered with concatenating variables with different array attributes. In the process of fixing this issue, I encountered and fixed another bug with utils.remove_incompatible_items. | 2014-06-02T21:01:35Z | 2014-06-25T23:40:36Z | 2014-06-02T21:20:15Z | 2014-06-02T21:20:15Z | 955027efe5822cdb1d3f48ee1260318e1af8c0a8 | 0.2 650893 | 0 | eff435deecabd1ff9488ec640c126dde2fe4fca0 | 71137d1e50116e5cca63d9b1c169844b5737cec2 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/144 | |||
16802020 | MDExOlB1bGxSZXF1ZXN0MTY4MDIwMjA= | 147 | closed | 0 | Support "None" as a variable name and use it as a default | shoyer 1217238 | This makes the xray API a little more similar to pandas, which makes heavy use of `name = None` for objects that can but don't always have names like Series and Index. It will be a particular useful option to have around when we add a direct constructor for DataArray objects (#115). For now, arrays will probably only end up being named `None` if they are the result of some mathematical operation where the name could be ambiguous. | 2014-06-06T02:26:57Z | 2014-08-14T07:44:27Z | 2014-06-09T06:17:55Z | 2014-06-09T06:17:55Z | 0674f9350b26eb604d7cb729d34abbf52fde2e20 | 0.2 650893 | 0 | f448318ff7efc8e6c4e98140ecda0db7304fbfce | 77dd0c38a4065ea815368f3ca9490157b530a9c4 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/147 | |||
16873050 | MDExOlB1bGxSZXF1ZXN0MTY4NzMwNTA= | 149 | closed | 0 | Data array constructor | shoyer 1217238 | Fixes #115. Related: #116, #117. Note: a remaining major task will be to rewrite/reorganize the docs to introduce `DataArray` first, entirely independently of `Dataset`. This will make it easier for new users to figure out how to get started with xray, since DataArray is much simpler. | 2014-06-09T06:29:49Z | 2014-06-12T20:38:27Z | 2014-06-11T16:53:58Z | 2014-06-11T16:53:58Z | 467cf48090c5f3a7821f0b8bcda035e0bb26d1df | 0.2 650893 | 0 | 31cbb2fafea5d9f0db647cd65674201df9c2d9c0 | 3af0e34b90b8ec5436047419ad3ed2402ad5ff24 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/149 | |||
16896623 | MDExOlB1bGxSZXF1ZXN0MTY4OTY2MjM= | 150 | closed | 0 | Fix DecodedCFDatetimeArray being incorrectly indexed | akleeman 514053 | This was causing an error in the following situation: ``` ds = xray.Dataset() ds['time'] = ('time', [np.datetime64('2001-05-01') for i in range(5)]) ds['variable'] = ('time', np.arange(5.)) ds.to_netcdf('test.nc') ds = xray.open_dataset('./test.nc') ss = ds.indexed(time=slice(0, 2)) ss.dumps() ``` Thanks @shoyer for the fix. | 2014-06-09T17:25:05Z | 2014-06-09T17:43:50Z | 2014-06-09T17:43:50Z | 2014-06-09T17:43:50Z | 2ec8b7127f0d27683cb6d32da859a62e00ded6b9 | 0.2 650893 | 0 | 095e7070342a01ce5ee06a4cabd55087ad80395d | 3af0e34b90b8ec5436047419ad3ed2402ad5ff24 | CONTRIBUTOR | xarray 13221727 | https://github.com/pydata/xarray/pull/150 | |||
17117566 | MDExOlB1bGxSZXF1ZXN0MTcxMTc1NjY= | 161 | closed | 0 | Rename "Coordinate", "labeled" and "indexed" | shoyer 1217238 | Fixes #142 Fixes #148 All existing code should still work but issue a `FutureWarning` if any of the old names are used. Full list of updates: | Old | New | | --- | --- | | `Coordinate` | `Index` | | `coordinates` | `indexes` | | `noncoordinates` | `nonindexes` | | `indexed` | `isel` | | `labeled` | `sel` | | `select` | `select_vars` | | `unselect` | `drop_vars` | Most of these are both `Dataset` and `DataArray` methods/properties. | 2014-06-13T16:07:40Z | 2014-06-22T00:44:28Z | 2014-06-22T00:44:26Z | 2014-06-22T00:44:26Z | 9375aa280bb9254d9b83fe220baebed3526274da | 0.2 650893 | 0 | f51c7e8ca52e0d7cc5ec62a57c474c37d1debeb3 | 83ac662d3d90e31f6ee37262ebc85f059afa6751 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/161 | |||
17158398 | MDExOlB1bGxSZXF1ZXN0MTcxNTgzOTg= | 163 | closed | 0 | BUG: fix encoding issues (array indexing now resets encoding) | shoyer 1217238 | Fixes #156, #157 To elaborate on the changes: 1. When an array is indexed, its encoding will be reset. This takes care of the invalid chunksize issue. More generally, this seems like the right choice because it's not clear that the right encoding will be the same after slicing an array, anyways. 2. If an array has `encoding['dtype'] = np.dtype('S1')` (e.g., it was originally encoded in characters), it will be stacked up to be saved as a character array, even if it's being saved to a NetCDF4 file. Previously, the array would be cast to 'S1' without stacking, which would result in silent loss of data. | 2014-06-16T01:29:22Z | 2014-06-17T07:28:45Z | 2014-06-16T04:52:43Z | 2014-06-16T04:52:43Z | 2d8751e9f80f6ade4240162d8b6c0668d4f00be8 | 0.2 650893 | 0 | 667f26fad6af902fb0508693326bc3c313d7847d | 71226fb571e0b9cdc32cc476b333991eafebe466 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/163 | |||
17281384 | MDExOlB1bGxSZXF1ZXN0MTcyODEzODQ= | 165 | closed | 0 | WIP: cleanup conventions.encode_cf_variable | shoyer 1217238 | Almost ready, except for failing tests on Python 3. | 2014-06-18T08:47:35Z | 2014-06-22T00:36:01Z | 2014-06-22T00:35:42Z | 2014-06-22T00:35:42Z | 4c8bda09fdd7a03bd0293ed663320420b3b099bd | 0.2 650893 | 0 | 1675cc51e09b43cfeabbc34c6dac80976d26f28b | 4fce6d2e4aca03687a40f9041db7bdc5a30f9e09 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/165 | |||
17312904 | MDExOlB1bGxSZXF1ZXN0MTczMTI5MDQ= | 166 | closed | 0 | Revert using __slots__ for Mapping subclasses in xray.utils | shoyer 1217238 | This recently added some complexity for a very nominal speed benefit. And it appears that it breaks joblib serialization, somehow (even though pickle works). So for now, revert it -- and consider filing a joblib bug if we can narrow it down. | 2014-06-18T19:08:47Z | 2014-06-18T19:24:50Z | 2014-06-18T19:12:52Z | 2014-06-18T19:12:52Z | 57bba43983d48a9ba30b2770d375a742ba4c62cc | 0.2 650893 | 0 | 3d6eab5e8e2774d006481234847f348427aa87eb | 4fce6d2e4aca03687a40f9041db7bdc5a30f9e09 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/166 | |||
17446066 | MDExOlB1bGxSZXF1ZXN0MTc0NDYwNjY= | 169 | closed | 0 | Cleanups | shoyer 1217238 | 2014-06-22T06:44:17Z | 2014-06-22T06:56:22Z | 2014-06-22T06:56:20Z | 2014-06-22T06:56:20Z | 2543892501760532042b84352b0919833794ad10 | 0.2 650893 | 0 | c87d68bcb11bd0d6f19dcea863070bc8668895ca | 420655dbf13282e2754ff1f681fae12978a78291 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/169 | ||||
17446667 | MDExOlB1bGxSZXF1ZXN0MTc0NDY2Njc= | 171 | closed | 0 | Implementation of DatasetGroupBy summary methods | shoyer 1217238 | You can now do `ds.groupby('time.month').mean()` to apply the mean over all groups and variables in a dataset. It is not optimized like the DataArray.groupby summary methods but it should work. Thanks @jhamman for laying the groundwork for this! | 2014-06-22T08:38:51Z | 2014-06-23T07:25:10Z | 2014-06-23T07:25:08Z | 2014-06-23T07:25:08Z | fd8c731f7d98ab0315c1b4f956246dbc1af6a2e3 | 0.2 650893 | 0 | da3b0053eaa44e0526cf23a079804af6e08f7335 | 64d88a8537b8d107ab978410f47ea4e2280c6d89 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/171 | |||
17513266 | MDExOlB1bGxSZXF1ZXN0MTc1MTMyNjY= | 172 | closed | 0 | {DataArray,Dataset}.indexes no longer creates a new dict | shoyer 1217238 | According to the toy benchmark below, this shaves off between 20% (diff-indexes) to 40% (same-indexes) of xray's overhead for array math: ``` import numpy as np import xray x = np.random.randn(1000, 1000) y = np.random.randn(1000, 1000) dx = xray.DataArray(x) dy = xray.DataArray(y) %timeit x + x # raw-numpy %timeit dx + dx # same-indexes %timeit dx + dy # diff-indexes ``` | 2014-06-24T05:10:25Z | 2014-06-24T05:34:38Z | 2014-06-24T05:34:36Z | 2014-06-24T05:34:36Z | 17097a127a67c9bd245e83ede8ccfe64475ee887 | 0.2 650893 | 0 | 3f0a87b9c2e29670b14a69e352ab5e1f26bc9a95 | e0ffca26d30eab1731b6a5d380f2948c5f519dab | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/172 | |||
17513759 | MDExOlB1bGxSZXF1ZXN0MTc1MTM3NTk= | 173 | closed | 0 | Edge cases | shoyer 1217238 | 2014-06-24T05:34:05Z | 2014-06-24T17:55:16Z | 2014-06-24T17:55:14Z | 2014-06-24T17:55:14Z | eb7d9a577fced96e63b035496afab186c0765bb5 | 0.2 650893 | 0 | d0c1e95aaf265a79823b9bbe380276cf9bf54fbf | e0ffca26d30eab1731b6a5d380f2948c5f519dab | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/173 | ||||
17574726 | MDExOlB1bGxSZXF1ZXN0MTc1NzQ3MjY= | 174 | closed | 0 | Add isnull and notnull (wrapping pandas) | shoyer 1217238 | 2014-06-25T07:07:42Z | 2014-06-25T07:37:36Z | 2014-06-25T07:37:35Z | 2014-06-25T07:37:35Z | 2cf59a28ff1b071ea2f57a50a9f550af036d3bca | 0.2 650893 | 0 | e6203c1d952e1a3f422cae8db99a816aa2f11012 | 3672599bc9e605dbb2df05237bfc5b0c142a3257 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/174 | ||||
17840241 | MDExOlB1bGxSZXF1ZXN0MTc4NDAyNDE= | 177 | closed | 0 | Add python2.6 compatibility | aykuznetsova 3344007 | This change mainly involves an alternative import of OrderedDict, modified dict and set comprehensions, and using unittest2 for testing. | 2014-07-01T16:19:21Z | 2014-07-01T21:30:08Z | 2014-07-01T19:57:30Z | 2014-07-01T19:57:30Z | 930b795420e3e024545298eb05f501f5ac6bc1c3 | 0.2 650893 | 0 | 16bfa99e9bc9165510758dde07a4e02617e2b108 | 7e0e7b1f2b3663c9fddb7b9f1767e4e7f744d19c | NONE | xarray 13221727 | https://github.com/pydata/xarray/pull/177 | |||
18759550 | MDExOlB1bGxSZXF1ZXN0MTg3NTk1NTA= | 188 | closed | 0 | Dataset context manager and close() method | shoyer 1217238 | With this PR, it is possible to close the data store from which a dataset was loaded via `ds.close()` or automatically when a dataset is used with a context manager: ``` python with xray.open_dataset('data.nc') as ds: ... ``` The ability to cleanly close files opened from disk is pretty essential -- we probably should have had this a while ago. It should not be necessary to use the low-level/unstable datastore API to get this functionality. **Implementation question**: With this current implementation, calling `ds.close()` on (and using a context manager with) a dataset not linked to any file objects is a no-op. Should we raise an exception instead? Something like `IOError('no file object to close')`? CC @ToddSmall | 2014-07-23T07:03:49Z | 2014-07-29T19:47:46Z | 2014-07-29T19:44:30Z | 2014-07-29T19:44:30Z | 8e9c9ab7cd23507c0644207d5de1713d7a49c22c | 0.2 650893 | 0 | d1e739f27bec53f1c77d4625ffc5ddc44a2ac1e1 | 6c394b14ecc04a53d804893060ed33cadfde688e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/188 | |||
18879796 | MDExOlB1bGxSZXF1ZXN0MTg4Nzk3OTY= | 189 | closed | 0 | Implementation of Dataset.apply method | shoyer 1217238 | Fixes #140 | 2014-07-25T06:18:29Z | 2014-07-31T04:45:29Z | 2014-07-31T04:45:29Z | 2014-07-31T04:45:29Z | 2bce568f2195f98beeb4f9aa0fb02cd192dbae99 | 0.2 650893 | 0 | 4548d1015c38dc7c1c324b157da5939f232f4b46 | 6c394b14ecc04a53d804893060ed33cadfde688e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/189 | |||
18947350 | MDExOlB1bGxSZXF1ZXN0MTg5NDczNTA= | 192 | closed | 0 | Enhanced support for modifying Dataset & DataArray properties in place | shoyer 1217238 | With this patch, it is possible to perform the following operations: - `data_array.name = 'foo'` - `data_array.coordinates = ...` - `data_array.coordinates[0] = ...` - `data_array.coordinates['x'] = ...` - `dataset.coordinates['x'] = ...` - `dataset.rename(..., inplace=True)` It is no longer possible to set `data_array.variable = ....`, which was technically part of the public API but I would guess unused. | 2014-07-28T02:14:00Z | 2014-07-31T04:46:19Z | 2014-07-31T04:46:16Z | 2014-07-31T04:46:16Z | 8624314f0d0893e64f818e778ae40c9ffbaf89e3 | 0.2 650893 | 0 | a7f53516b31b49a841de2b67cfeb0027dfda5f71 | 6c394b14ecc04a53d804893060ed33cadfde688e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/192 | |||
19132736 | MDExOlB1bGxSZXF1ZXN0MTkxMzI3MzY= | 194 | closed | 0 | Consistently use shorter names: always use 'attrs', 'coords' and 'dims' | shoyer 1217238 | Cleaned up a few cases where `attributes` was used instead of `attrs` in function signatures. Fixes: #190 - [x] Switch names in xray itself - [x] Switch names in tests - [x] Switch names in documentation | 2014-07-31T05:11:12Z | 2014-08-14T05:08:01Z | 2014-08-14T05:07:58Z | 2014-08-14T05:07:58Z | a9b879898f3d5efffbfb0ee026e8cf2c1b4bac8e | 0.2 650893 | 0 | 6f0fca3584d8d2e079dbd15607a9cda6e183a76b | db292afdc68b4d8a1c7b17e5aacb8d9a67688de8 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/194 | |||
19133368 | MDExOlB1bGxSZXF1ZXN0MTkxMzMzNjg= | 195 | closed | 0 | .loc and .sel support indexing with boolean arrays | shoyer 1217238 | Fixes #182 | 2014-07-31T05:41:09Z | 2014-07-31T06:52:43Z | 2014-07-31T06:52:41Z | 2014-07-31T06:52:41Z | 4643cf790007c8c14ed6629ae3e4375552f03e66 | 0.2 650893 | 0 | d2196ce4bd8f335ea252f358bcc93743084038ac | 20d1939df8dcf016d85d3d71bd739494c586d4d9 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/195 | |||
19135159 | MDExOlB1bGxSZXF1ZXN0MTkxMzUxNTk= | 196 | closed | 0 | Raise NotImplementedError when attempting to use a pandas.MultiIndex | shoyer 1217238 | Related: #164 | 2014-07-31T06:53:04Z | 2014-07-31T07:00:43Z | 2014-07-31T07:00:40Z | 2014-07-31T07:00:40Z | 5798942b33531f0af6a0452a7885618c9bd97e36 | 0.2 650893 | 0 | f2057b1da829f1a6cc04042b2e7a65fd5d87dc08 | 0c66a06e4a7cb64f71979f4f8bb494ad8a2a218e | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/196 | |||
19248308 | MDExOlB1bGxSZXF1ZXN0MTkyNDgzMDg= | 198 | closed | 0 | Cleanup of DataArray constructor / Dataset.__getitem__ | shoyer 1217238 | Now `Dataset.__getitem__` raises a KeyError when it can't find a variable. | 2014-08-02T18:12:36Z | 2014-08-02T18:28:54Z | 2014-08-02T18:28:52Z | 2014-08-02T18:28:52Z | 5108fdd8dea210233a848ed87347e708d9d2201f | 0.2 650893 | 0 | d605857b9593c4765c6efaf77ccc1d9e5909969c | 2debeb9313473a0664c79e213ef9a55a0229aaf1 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/198 | |||
19261817 | MDExOlB1bGxSZXF1ZXN0MTkyNjE4MTc= | 201 | closed | 0 | Fix renaming in-place bug with virtual variables | shoyer 1217238 | This is why mutating state is a bad idea. | 2014-08-04T01:20:06Z | 2014-08-04T01:24:32Z | 2014-08-04T01:22:58Z | 2014-08-04T01:22:58Z | dbc78ad25e85b1268e62c34087a5d23320468b40 | 0.2 650893 | 0 | e79ce168a694f54e191d97bfa5fe1fd3bcf5c57a | 590aa9e7e3f10e6e690cfe8b75ae6f3588b6f47d | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/201 | |||
19494597 | MDExOlB1bGxSZXF1ZXN0MTk0OTQ1OTc= | 207 | closed | 0 | Raise an error when attempting to use a scalar variable as a dimension | shoyer 1217238 | If 'x' was a scalar variable in a dataset and you set a new variable with 'x' as a dimension, you could end up with a broken Dataset object. | 2014-08-07T21:07:03Z | 2014-08-07T21:13:12Z | 2014-08-07T21:13:02Z | 2014-08-07T21:13:02Z | bfb96f9bbb25ec14b5d709523e308a7a5083c6eb | 0.2 650893 | 0 | 81195ec7ce030315e8d953002aab96077c8a8b25 | d432677c20b98aff2e48a43699233288c34efbdc | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/207 | |||
19773281 | MDExOlB1bGxSZXF1ZXN0MTk3NzMyODE= | 213 | closed | 0 | Checklist for v0.2.0 | shoyer 1217238 | Should resolve all remaining items in #183. | 2014-08-14T08:08:25Z | 2014-08-14T17:20:05Z | 2014-08-14T17:20:02Z | 2014-08-14T17:20:01Z | 067ec9a104f019304ade3196b76882b40160485e | 0.2 650893 | 0 | 7fa33d7dd4fb9476a0f3bd50fa9e2c442dc6f9f3 | cd0ff19fbf1b57f443761b477bd6be01dd06c3f0 | MEMBER | xarray 13221727 | https://github.com/pydata/xarray/pull/213 |
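The `created_at`, `updated_at`, `closed_at`, and `merged_at` columns above are stored as ISO 8601 UTC text. A minimal sketch of working with them, using the `created_at`/`merged_at` values from PR #137 in the table (the `parse` helper is illustrative, not part of any library):

```python
from datetime import datetime

# created_at / merged_at values from PR #137 in the table above.
created_at = "2014-05-20T01:53:30Z"
merged_at = "2014-05-21T20:23:36Z"

def parse(ts: str) -> datetime:
    # The table stores ISO 8601 UTC timestamps; the trailing 'Z' is
    # rewritten as '+00:00' so datetime.fromisoformat accepts it on
    # Python versions before 3.11.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Time from the pull request being opened to it being merged.
latency = parse(merged_at) - parse(created_at)
print(latency)  # 1 day, 18:30:06
```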
```sql
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
```
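The filter behind this page ("26 rows where milestone = 650893") can be reproduced directly against this schema. A minimal sketch using Python's built-in `sqlite3`, with the table reduced to a few columns and seeded with two rows from the data above plus one invented row in another milestone:

```python
import sqlite3

# In-memory database with a subset of the pull_requests schema above.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE pull_requests (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        state TEXT,
        title TEXT,
        milestone INTEGER
    )"""
)

# Two rows taken from the table above, plus a hypothetical row in a
# different milestone to show the filter excluding it.
rows = [
    (15862044, 128, "closed", "Expose more information in DataArray.__repr__", 650893),
    (16085838, 137, "closed", "Dataset.reduce methods", 650893),
    (99999999, 999, "open", "Unrelated PR", 123456),
]
conn.executemany("INSERT INTO pull_requests VALUES (?, ?, ?, ?, ?)", rows)

# The page's filter: all pull requests attached to milestone 650893
# (the 0.2 release milestone).
matching = conn.execute(
    "SELECT number, title FROM pull_requests WHERE milestone = ? ORDER BY id",
    (650893,),
).fetchall()
print(len(matching))  # 2 in this abbreviated dataset; 26 in the full table
```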