pull_requests
2 rows where user = 102827
id: 121379600
node_id: MDExOlB1bGxSZXF1ZXN0MTIxMzc5NjAw
number: 1414
state: closed
locked: 0
title: Speed up `decode_cf_datetime`
user: cchwala 102827
body:
- [x] Closes #1399
- [x] Tests added / passed
- [x] Passes ``git diff upstream/master | flake8 --diff``
- [x] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API

Instead of casting the input numeric dates to float, they are now cast to nanoseconds as int64, which makes `pd.to_timedelta()` work much faster (x100 speedup on my machine). All existing tests for `conventions.py` pass on my machine. Overflows should be handled by [these two already existing lines](https://github.com/cchwala/xarray/commit/d7d7c01f3e2f14c38c44e62f648b30474469b078#diff-d94eba38daa73be812c57c756f01f0daR158), since everything in the valid range of `pd.to_datetime` should be safe.
created_at: 2017-05-18T21:15:40Z
updated_at: 2017-07-26T07:40:24Z
closed_at: 2017-07-25T17:42:52Z
merged_at: 2017-07-25T17:42:52Z
merge_commit_sha: d275ad6df25457b53a594953f45b252d14260115
assignee:
milestone:
draft: 0
head: 3095ecc044bd5bc7107103f526ed43dfab4c64c0
base: 5d245b22e9500a7eb805193ba5c65bb5474a5ae1
author_association: CONTRIBUTOR
auto_merge:
repo: xarray 13221727
url: https://github.com/pydata/xarray/pull/1414
merged_by:
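The int64 trick this PR describes is easy to sketch in plain pandas. The snippet below is illustrative only (the array size, the "hours" unit, and all variable names are invented here), not xarray's actual `decode_cf_datetime` code; the PR reports roughly a 100x speedup from this change at the time.

```python
import numpy as np
import pandas as pd

# Hypothetical input: numeric offsets, e.g. "hours since 2000-01-01".
num_dates = np.arange(1_000_000)
NS_PER_HOUR = 3_600 * 10**9

# Old path sketched in the PR: cast the numeric dates to float first.
deltas_float = pd.to_timedelta(num_dates.astype(float), unit="h")

# New path: pre-scale to int64 nanoseconds, the native resolution of
# pandas timestamps, so pd.to_timedelta only has to do a cheap cast.
deltas_int = pd.to_timedelta(num_dates.astype("int64") * NS_PER_HOUR, unit="ns")

# Both paths agree on values in the representable range.
assert (deltas_float == deltas_int).all()
```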
id: 227443527
node_id: MDExOlB1bGxSZXF1ZXN0MjI3NDQzNTI3
number: 2532
state: closed
locked: 0
title: [WIP] Fix problem with wrong chunksizes when using rolling_window on dask.array
user: cchwala 102827
body:
- [ ] Closes #2514
- [ ] Closes #2531
- [ ] Tests added (for all bug fixes or enhancements)
- [ ] Fully documented, including `whats-new.rst` for all changes

## Short summary
The two rolling-window functions for `dask.array`,
* [dask_rolling_wrapper](https://github.com/pydata/xarray/blob/b622c5e7da928524ef949d9e389f6c7f38644494/xarray/core/dask_array_ops.py#L23)
* [rolling_window](https://github.com/pydata/xarray/blob/b622c5e7da928524ef949d9e389f6c7f38644494/xarray/core/dask_array_ops.py#L43)
will be fixed to preserve `dask.array` chunk sizes.

## Long summary
The specific initial problem with chunk sizes and `interpolate_na()` in #2514 is caused by the padding done in https://github.com/pydata/xarray/blob/5940100761478604080523ebb1291ecff90e779e/xarray/core/dask_array_ops.py#L74-L85, which adds a small array with a small chunk to the initial array. There is another related problem, where `DataArray.rolling()` changes the size and distribution of `dask.array` chunks; it stems from this code: https://github.com/pydata/xarray/blob/b622c5e7da928524ef949d9e389f6c7f38644494/xarray/core/dask_array_ops.py#L23. For some (historical) reason there are two rolling-window functions for `dask`; both need to be fixed so that the chunk sizes of a `dask.array` are preserved in all cases.
created_at: 2018-10-31T21:12:03Z
updated_at: 2021-03-26T19:50:50Z
closed_at: 2021-03-26T19:50:50Z
merged_at:
merge_commit_sha: efc36181613b0fb29d2f67b2ef40f1e463fd133f
assignee:
milestone:
draft: 0
head: 8a4e5904ccede22565a51dc3775b99d713df6259
base: 7bf9df9d75c40bcbf2dd28c47204529a76561a3f
author_association: CONTRIBUTOR
auto_merge:
repo: xarray 13221727
url: https://github.com/pydata/xarray/pull/2532
merged_by:
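The padding problem this PR describes can be reproduced with plain `dask.array`. The sketch below is illustrative only (the array size, pad width, and trim step are invented for the demo) and is not the xarray fix itself; it just shows how gluing a small edge block onto a chunked array shifts the chunk boundaries, and how rechunking to the input's layout restores them.

```python
import dask.array as da

# Input with uniform chunks.
x = da.ones(100, chunks=25)        # x.chunks == ((25, 25, 25, 25),)

# The padding step: a small edge block for the window is concatenated on,
# which introduces an extra tiny chunk at the boundary.
pad = da.zeros(3, chunks=3)
padded = da.concatenate([pad, x])  # chunks: ((3, 25, 25, 25, 25),)

# After the windowed computation the result is trimmed back to the input
# length, but the chunk boundaries remain shifted.
result = padded[:100]              # chunks: ((3, 25, 25, 25, 22),)

# Rechunking to the input's chunk layout restores the invariant that this
# PR wants the rolling-window functions to maintain.
assert result.rechunk(x.chunks).chunks == x.chunks
```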
CREATE TABLE [pull_requests] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [state] TEXT,
    [locked] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [body] TEXT,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [merged_at] TEXT,
    [merge_commit_sha] TEXT,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [draft] INTEGER,
    [head] TEXT,
    [base] TEXT,
    [author_association] TEXT,
    [auto_merge] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [url] TEXT,
    [merged_by] INTEGER REFERENCES [users]([id])
);
CREATE INDEX [idx_pull_requests_merged_by] ON [pull_requests] ([merged_by]);
CREATE INDEX [idx_pull_requests_repo] ON [pull_requests] ([repo]);
CREATE INDEX [idx_pull_requests_milestone] ON [pull_requests] ([milestone]);
CREATE INDEX [idx_pull_requests_assignee] ON [pull_requests] ([assignee]);
CREATE INDEX [idx_pull_requests_user] ON [pull_requests] ([user]);
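The schema stores `user`, `assignee`, `milestone`, `repo`, and `merged_by` as indexed integer foreign keys. A short usage sketch with Python's built-in `sqlite3`, reproducing the "rows where user = 102827" filter above (the database filename `github.db` is an assumption; this page does not name the file):

```python
import sqlite3

conn = sqlite3.connect("github.db")  # assumed filename
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT number, title, merged_at
    FROM pull_requests
    WHERE [user] = ?              -- foreign key into users(id)
      AND merged_at IS NOT NULL   -- e.g. PR #2532 above was closed unmerged
    ORDER BY merged_at
    """,
    (102827,),
).fetchall()

for row in rows:
    print(f"#{row['number']} {row['title']} (merged {row['merged_at']})")

conn.close()
```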