issues
2 rows where "created_at" is on date 2021-09-14, repo = 13221727, and user = 14371165, sorted by updated_at descending
Row 1

id: 996352280
node_id: PR_kwDOAMm_X84rv1Fo
number: 5794
title: Single matplotlib import
user: Illviljan 14371165
state: closed
locked: 0
comments: 7
created_at: 2021-09-14T19:15:12Z
updated_at: 2022-08-12T09:06:30Z
closed_at: 2021-10-24T09:54:28Z
author_association: MEMBER
draft: 0
pull_request: pydata/xarray/pulls/5794
repo: xarray 13221727
type: pull
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5794/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

body:

Reduce the number of imports inside functions. I think it helps make the code easier to read as well, as now you know that …

There seems to be no major difference in initial import time from (my small sample of) repeated tests.

This branch:

```python
%timeit -n1 -r1 import xarray
3.81 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.83 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.87 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.7 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.77 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.91 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.8 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

np.mean([3.81, 3.83, 3.87, 3.7, 3.77, 3.91, 3.8])
Out[3]: 3.812857142857143
```

Main:

```python
%timeit -n1 -r1 import xarray
3.93 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.69 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.64 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.76 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.79 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.81 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
3.68 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

np.mean([3.93, 3.69, 3.64, 3.76, 3.79, 3.81, 3.68])
Out[4]: 3.7571428571428567
```
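`%timeit -n1 -r1 import xarray` only measures a truly cold import on the first run of a fresh interpreter, because Python caches imported modules in `sys.modules`; the repeated samples above therefore imply restarting the session between runs. A minimal sketch, not taken from the PR, of automating fresh-interpreter timings with only the standard library (the helper name `time_cold_import` is made up here):

```python
# Sketch (not from the PR): time a cold `import xarray` by spawning a fresh
# interpreter per run, so sys.modules caching cannot skew repeated timings.
# Each sample also includes interpreter startup, which is the same for both
# branches and therefore cancels out when comparing means.
import statistics
import subprocess
import sys
import time


def time_cold_import(module="xarray", repeats=7):
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
        samples.append(time.perf_counter() - start)
    return samples


samples = time_cold_import()
print(f"mean {statistics.mean(samples):.2f} s over {len(samples)} runs")
```

CPython's `python -X importtime -c "import xarray"` additionally prints a per-module breakdown to stderr, which helps identify which imports dominate the total.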
Row 2

id: 996475523
node_id: PR_kwDOAMm_X84rwPey
number: 5796
title: Add asv benchmark jobs to CI
user: Illviljan 14371165
state: closed
locked: 0
comments: 16
created_at: 2021-09-14T22:00:49Z
updated_at: 2022-08-12T09:01:15Z
closed_at: 2021-10-24T10:08:02Z
author_association: MEMBER
draft: 0
pull_request: pydata/xarray/pulls/5796
repo: xarray 13221727
type: pull
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/5796/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }

body:

Workflow based on the version from scikit-image. Modified to have …

Notes:

* https://github.com/scikit-image/scikit-image doesn't have the same benchmark folder setup; for example, its config file is in the root directory and the folder names differ.
* https://github.com/numpy/numpy uses the same folder name as scikit-image, but the config file is inside that folder.

References:

* https://labs.quansight.org/blog/2021/08/github-actions-benchmarks/
* https://github.com/scikit-image/scikit-image/pull/5424
* https://github.com/jaimergp/scikit-image/pull/1

Tests checked: …

TODO:

* self.setup_cache
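For context on the `self.setup_cache` TODO above: asv benchmarks are plain Python classes, and `setup_cache` lets expensive fixture construction run once per benchmark and environment instead of inside every timed sample. A minimal illustrative sketch, not code from this PR (the class name and benchmark body are invented):

```python
# Hypothetical asv benchmark (not from this PR) showing setup_cache:
# asv calls setup_cache once per benchmark/environment, caches its return
# value, and passes it as the first argument to setup() and the timed method,
# so the dataset construction below is excluded from the timings.
import numpy as np
import xarray as xr


class DatasetMean:
    def setup_cache(self):
        rng = np.random.default_rng(0)
        return xr.Dataset({"a": ("x", rng.random(1_000_000))})

    def setup(self, ds):
        # Runs before every timing sample, but only receives the cached object.
        self.ds = ds

    def time_mean(self, ds):
        # asv times methods named time_*; this one measures a simple reduction.
        self.ds.mean()
```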
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
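For reference, the filtered view at the top of this page can be reproduced against this schema with Python's built-in sqlite3 module. A minimal sketch, assuming the data lives in a local SQLite file; the filename github.db is a placeholder:

```python
# Sketch: select rows where created_at falls on 2021-09-14, repo = 13221727,
# and user = 14371165 from the [issues] table above, ordered by updated_at
# descending. "github.db" is an assumed filename for the SQLite database.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row  # allows column access by name

rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at
    FROM issues
    WHERE date(created_at) = '2021-09-14'
      AND repo = 13221727
      AND [user] = 14371165
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["number"], row["title"], row["updated_at"])
```

SQLite's date() function accepts the ISO 8601 timestamps stored in created_at, so the comparison against '2021-09-14' matches any time on that day.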