pull_requests
10 rows where milestone = 1004936
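As a rough sketch, the filter behind this page corresponds to a query along these lines, using the column names from the CREATE TABLE statement at the bottom of the page (the exact SQL Datasette generates may differ):

```sql
-- Sketch of this page's filter: pull requests attached to milestone 1004936.
-- Column names are taken from the schema shown below; this is illustrative,
-- not necessarily the query Datasette actually runs.
SELECT id, number, title, [user], state, merged_at
FROM pull_requests
WHERE milestone = 1004936
ORDER BY id;
```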
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
30427125 | MDExOlB1bGxSZXF1ZXN0MzA0MjcxMjU= | 359 | closed | 0 | Raise informative exception when _FillValue and missing_value disagree | akleeman 514053 | Previously, conflicting _FillValue and missing_value only raised an AssertionError; now the exception is more informative. | 2015-03-04T00:22:41Z | 2015-03-12T16:33:47Z | 2015-03-12T16:32:07Z | 2015-03-12T16:32:07Z | f1dbff3d12aa2f67c70a210651c31a37b60d838b |  | 0.4.1 1004936 | 0 | ec35efd763419f71fcb81a91a70251e55146f0e9 | 7187bb9af9b2fffedb931dcaa3766b58e769a13e | CONTRIBUTOR |  | xarray 13221727 | https://github.com/pydata/xarray/pull/359 | |
30491293 | MDExOlB1bGxSZXF1ZXN0MzA0OTEyOTM= | 361 | closed | 0 | Add resample, first and last | shoyer 1217238 | Fixes #354 `resample` lets you resample a dataset or array along a time axis to a coarser resolution. The syntax is the same as pandas, except you need to supply the time dimension explicitly: ``` In [1]: time = pd.date_range('2000-01-01', freq='6H', periods=10) In [2]: array = xray.DataArray(np.arange(10), [('time', time)]) In [3]: array.resample('1D', dim='time') Out[3]: <xray.DataArray (time: 3)> array([ 1.5, 5.5, 8.5]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 2000-01-03 ``` You can specify how to do the resampling with the `how` argument, and other options such as `closed` and `label` let you control labeling: ``` In [4]: array.resample('1D', dim='time', how='sum', label='right') Out[4]: <xray.DataArray (time: 3)> array([ 6, 22, 17]) Coordinates: * time (time) datetime64[ns] 2000-01-02 2000-01-03 2000-01-04 ``` The `first` and `last` methods on groupby objects let you take the first or last examples from each group along the grouped axis: ``` In [5]: array.groupby('time.day').first() Out[5]: <xray.DataArray (day: 3)> array([0, 4, 8]) Coordinates: * day (day) int64 1 2 3 ``` | 2015-03-04T18:32:24Z | 2015-03-05T19:29:42Z | 2015-03-05T19:29:39Z | 2015-03-05T19:29:39Z | eefca5e51afa2af5df3991b7ff4da570408787cd |  | 0.4.1 1004936 | 0 | 2358989b61c743a794e007d2324249f95964b5f8 | 7187bb9af9b2fffedb931dcaa3766b58e769a13e | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/361 | |
30725562 | MDExOlB1bGxSZXF1ZXN0MzA3MjU1NjI= | 363 | closed | 0 | Fix (most) windows issues | shoyer 1217238 | xref #360 In this change: - Fix tests that relied on implicit conversion to int64 (Python's int on Windows is int32). - Be more careful about always closing files, even in tests. Not addressed (yet): - Issues with scipy.io.netcdf_file (#341) | 2015-03-08T20:10:49Z | 2015-03-08T20:15:45Z | 2015-03-08T20:15:43Z | 2015-03-08T20:15:43Z | c851412ab2f1a1499de5a20510ff614532272b55 |  | 0.4.1 1004936 | 0 | dc2b0f04bd4c6a80fc059929f88c4d07305a58cb | 6512e272dee595ccd7e064f57041d771e5450f8d | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/363 | |
30742701 | MDExOlB1bGxSZXF1ZXN0MzA3NDI3MDE= | 365 | closed | 0 | Add "engine" argument and fix reading mmapped data with scipy.io.netcdf | shoyer 1217238 | Fixes #341 | 2015-03-09T08:25:18Z | 2015-03-09T17:30:28Z | 2015-03-09T17:30:28Z | 2015-03-09T17:30:28Z | b16c32ccc6cd2009b20e5ed2a9a1c608f550d5a7 |  | 0.4.1 1004936 | 0 | 02dadc9b92b5d6325880165f8192e697a3896e20 | 5e7b3dfa6080cee9ebd9aaa6f9c59a4a8a190578 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/365 | |
30807765 | MDExOlB1bGxSZXF1ZXN0MzA4MDc3NjU= | 366 | closed | 0 | Silenced warnings for all-NaN slices when using nan functions from numpy | shoyer 1217238 | Fixes #344 These warnings are typically spurious on xray objects. Note that this does result in a _small_ performance penalty for these functions (e.g., a few percent). This can be avoided by installing bottleneck. CC @jhammon | 2015-03-09T22:48:20Z | 2015-03-10T06:49:11Z | 2015-03-10T06:49:09Z | 2015-03-10T06:49:09Z | 5e7e35307d8afb105d08b929dc937679bc17f3c0 |  | 0.4.1 1004936 | 0 | 67b042363fdc7fc396728db8417c54283d978c12 | 003b65f056e44a035397022a4bc798dbc7d5a47a | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/366 | |
31092718 | MDExOlB1bGxSZXF1ZXN0MzEwOTI3MTg= | 372 | closed | 0 | API: new methods {Dataset/DataArray}.swap_dims | shoyer 1217238 | Fixes #276 Example usage: ``` In [8]: ds = xray.Dataset({'x': range(3), 'y': ('x', list('abc'))}) In [9]: ds Out[9]: <xray.Dataset> Dimensions: (x: 3) Coordinates: * x (x) int64 0 1 2 Data variables: y (x) |S1 'a' 'b' 'c' In [10]: ds.swap_dims({'x': 'y'}) Out[10]: <xray.Dataset> Dimensions: (y: 3) Coordinates: * y (y) |S1 'a' 'b' 'c' x (y) int64 0 1 2 Data variables: *empty* ``` This is a slightly more verbose API than strictly necessary, because the new dimension names must be along existing dimensions (e.g., we could spell this `ds.set_dims(['y'])`). But I still think it's a good idea, for two reasons: 1. It's more explicit. Users know which dimensions are being swapped. 2. It opens up the possibility of specifying new dimensions with dictionary-like syntax, e.g., `ds.swap_dims({'x': ('y', list('abc'))})` CC @aykuznetsova | 2015-03-13T01:08:15Z | 2015-03-17T15:44:30Z | 2015-03-17T15:44:30Z | 2015-03-17T15:44:30Z | 908965075ddb59fc6c67684813fd41c25b1e4259 |  | 0.4.1 1004936 | 0 | 4be2e38dd94cdf5310e2eb71f77fbfade7bec4df | 48ce8c23c31b9a5d092f29974715aa1888b95044 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/372 | |
31177345 | MDExOlB1bGxSZXF1ZXN0MzExNzczNDU= | 373 | closed | 0 | New docs on multi-file IO and time-series data | shoyer 1217238 |  | 2015-03-14T03:12:50Z | 2015-03-16T04:02:20Z | 2015-03-16T04:02:18Z | 2015-03-16T04:02:18Z | 768d7f274dd4a493f4a4e56d41aee2c4d0eed95f |  | 0.4.1 1004936 | 0 | 2dd06e78d771a899a3d1a630d9edfc13f01dd6ac | 3884888a74cfa2f905df3440e64df654a9a9795d | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/373 | |
31343866 | MDExOlB1bGxSZXF1ZXN0MzEzNDM4NjY= | 375 | closed | 0 | DOC: Refreshed docs frontpage, including adding logo | shoyer 1217238 |  | 2015-03-17T15:10:22Z | 2015-03-17T15:18:52Z | 2015-03-17T15:18:51Z | 2015-03-17T15:18:51Z | 2a6f8f07a473ca26cba9dadd515d88d4498b7a73 |  | 0.4.1 1004936 | 0 | 524f6945d189812c46938ce3a5014c082f6c5010 | 48ce8c23c31b9a5d092f29974715aa1888b95044 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/375 | |
31352441 | MDExOlB1bGxSZXF1ZXN0MzEzNTI0NDE= | 376 | closed | 0 | BUG: Fix failing to determine time units | shoyer 1217238 | Fixed a regression in v0.4 where saving to netCDF could fail with the error `ValueError: could not automatically determine time units`. | 2015-03-17T16:32:00Z | 2015-03-17T16:40:25Z | 2015-03-17T16:40:23Z | 2015-03-17T16:40:23Z | 6974c4476902ed30c7020a38344cbddc2430bfd6 |  | 0.4.1 1004936 | 0 | dee44888624d11be3c3845542879906da82e0e82 | bd911aa1d82dcf452cbee97422f568287836a2f9 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/376 | |
31360448 | MDExOlB1bGxSZXF1ZXN0MzEzNjA0NDg= | 377 | closed | 0 | Add Appveyor for CI on Windows | shoyer 1217238 | Fixes #360 Note: several tests for netCDF4 were previously defined twice, by accident. | 2015-03-17T17:52:33Z | 2015-03-17T18:26:48Z | 2015-03-17T18:26:46Z | 2015-03-17T18:26:46Z | 6ac9060ae113cf776ea59e9d2132595a9dc547f7 |  | 0.4.1 1004936 | 0 | a8b5a5d49c67dd3612189130d29bb92cf7d99fd9 | 4ca0860db17cd00e37d4d2bfe22e2842b3f7ae45 | MEMBER |  | xarray 13221727 | https://github.com/pydata/xarray/pull/377 | |
CREATE TABLE [pull_requests] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [state] TEXT,
   [locked] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [body] TEXT,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [merged_at] TEXT,
   [merge_commit_sha] TEXT,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [draft] INTEGER,
   [head] TEXT,
   [base] TEXT,
   [author_association] TEXT,
   [auto_merge] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [url] TEXT,
   [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
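Because [user], [assignee], [milestone], [repo], and [merged_by] are foreign keys, the readable labels shown in the table above (for example "shoyer 1217238" and "0.4.1 1004936") come from joined rows. A minimal sketch of such a join, assuming the referenced users and milestones tables expose `login` and `title` columns respectively (an assumption; those tables are not shown on this page):

```sql
-- Hypothetical join resolving foreign keys to display labels.
-- Assumes users.login and milestones.title exist; adjust to the real schema.
SELECT pr.number, pr.title, u.login AS author, m.title AS milestone
FROM pull_requests AS pr
JOIN users AS u ON u.id = pr.[user]
JOIN milestones AS m ON m.id = pr.milestone
WHERE pr.milestone = 1004936
ORDER BY pr.number;
```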