pull_requests: 1188736584
id: 1188736584
node_id: PR_kwDOAMm_X85G2q5I
number: 7426
state: closed
locked: 0
title: Add lazy backend ASV test
user: 14371165
created_at: 2023-01-06T22:01:26Z
updated_at: 2023-01-12T16:00:05Z
closed_at: 2023-01-11T18:56:25Z
merged_at: 2023-01-11T18:56:25Z
merge_commit_sha: 17933e7654d5502c2a580b1433c585241f915c18
assignee:
milestone:
draft: 0
head: cd9fae42815949e6441a8fde3c8dec4bb48a79ec
base: f3b7c69e21a35452e7ba307815bc80ef39ebd2c1
author_association: MEMBER
auto_merge:
repo: 13221727
url: https://github.com/pydata/xarray/pull/7426
merged_by:

body:

This tests xr.open_dataset without any slow file reading that can quickly become the majority of the performance time. Related to #7374.

Timings for the new ASV tests:

```
[ 50.85%] ··· dataset_io.IOReadCustomEngine.time_open_dataset            ok
[ 50.85%] ··· ======== ============
               chunks
              -------- ------------
                None     265±4ms
                 {}     1.17±0.02s
              ======== ============

[ 54.69%] ··· dataset_io.IOReadSingleFile.time_read_dataset              ok
[ 54.69%] ··· ========= ============= =============
              --                  chunks
              --------- ---------------------------
               engine       None           {}
              ========= ============= =============
                scipy    4.81±0.1ms    6.65±0.01ms
               netcdf4   8.41±0.08ms   10.9±0.2ms
              ========= ============= =============
```

From the IOReadCustomEngine test we can see that chunking datasets with many variables (2000+) is considerably slower.
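The PR body above describes an ASV benchmark that times xr.open_dataset against a custom backend that skips file reading entirely, so that open/chunk overhead dominates the measurement. Below is a minimal, hypothetical sketch of that benchmark pattern: the class name echoes the PR's timing output, but the stub store and method bodies are assumptions for illustration, not the PR's actual code.

```python
class FakeLazyStore:
    """Stand-in for a backend that returns variables without touching disk.

    This is a hypothetical stub: the real PR uses a custom xarray engine,
    but the point is the same, no file I/O happens in the timed path.
    """

    def __init__(self, n_variables=2000):
        self.n_variables = n_variables

    def load(self):
        # Build many small "variables" purely in memory; no file reading,
        # so the benchmark isolates dataset-construction overhead.
        return {f"var{i}": list(range(4)) for i in range(self.n_variables)}


class IOReadCustomEngine:
    # ASV parametrizes each time_* method over `params`; here the two
    # cases mirror the PR's table: chunks=None vs chunks={}.
    params = [None, {}]
    param_names = ["chunks"]

    def setup(self, chunks):
        # setup() runs before timing and is excluded from the measurement.
        self.store = FakeLazyStore()

    def time_open_dataset(self, chunks):
        data = self.store.load()
        if chunks is not None:
            # Simulate per-variable chunking work, which the PR's timings
            # show dominating once a dataset has 2000+ variables.
            data = {name: [values] for name, values in data.items()}
```

ASV discovers classes like this in a benchmark directory, calls setup() once per parameter value, and repeatedly times each time_* method, which is how the chunks=None vs chunks={} rows in the table above are produced.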
Links from other tables
- 4 rows from pull_requests_id in labels_pull_requests