issues
2 rows where repo = 13221727, state = "closed", and user = 2418513, sorted by updated_at descending
Issue #2857: Quadratic slowdown when saving multiple datasets to the same h5 file (h5netcdf)

id: 427410885 · node_id: MDU6SXNzdWU0Mjc0MTA4ODU= · number: 2857
user: aldanor (2418513) · state: closed · state_reason: completed · locked: 0
comments: 24 · author_association: NONE
created_at: 2019-03-31T15:47:40Z · updated_at: 2022-01-12T07:19:06Z · closed_at: 2022-01-12T07:19:06Z
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/2857/reactions)
repo: xarray (13221727) · type: issue

Body:

I can't quite understand what's wrong on my side of the code, so I'm wondering whether this kind of slowdown is expected or not. Basically, what I'm doing is something like this:

And here's the log for saving 20 datasets; the listed times are for each dataset independently. Instead of the expected 10 s (which is already kind of slow, but whatever), I get 2 minutes. The time to save each dataset seems to increase linearly, which leads to a quadratic overall slowdown:

```
saving dataset... 00:00:00.559135
saving dataset... 00:00:00.924617
saving dataset... 00:00:01.351670
saving dataset... 00:00:01.818111
saving dataset... 00:00:02.356307
saving dataset... 00:00:02.971077
saving dataset... 00:00:03.685565
saving dataset... 00:00:04.375104
saving dataset... 00:00:04.575837
saving dataset... 00:00:05.179975
saving dataset... 00:00:05.793876
saving dataset... 00:00:06.517916
saving dataset... 00:00:07.190257
saving dataset... 00:00:07.993795
saving dataset... 00:00:08.786421
saving dataset... 00:00:09.414821
saving dataset... 00:00:10.729006
saving dataset... 00:00:11.584044
saving dataset... 00:00:14.160655
saving dataset... 00:00:14.460564
CPU times: user 1min 49s, sys: 12.8 s, total: 2min 2s
Wall time: 2min 4s
```
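The quadratic claim in the issue above can be checked arithmetically: if saving the i-th dataset costs time proportional to the i datasets already in the file, the total for n saves is the sum 1 + 2 + … + n = n(n+1)/2, which grows quadratically. A minimal sketch of that cost model (pure Python; the per-group cost unit is illustrative, not a measurement of h5netcdf):

```python
# Cost model for appending n datasets to one file when each save
# re-scans every group already present: the i-th save costs c*i,
# so the total is c * n*(n+1)/2 -- quadratic in n.
def total_cost(n, per_group_cost=1.0):
    return sum(per_group_cost * i for i in range(1, n + 1))

n20 = total_cost(20)  # -> 210.0
n40 = total_cost(40)  # -> 820.0: doubling n roughly quadruples total time
print(n20, n40, n40 / n20)
```

This matches the shape of the log above: roughly constant increments per save, so cumulative wall time ~10x larger than n times the first save.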
Issue #3695: mypy --strict fails on scripts/packages depending on xarray; __all__ required

id: 549712566 · node_id: MDU6SXNzdWU1NDk3MTI1NjY= · number: 3695
user: aldanor (2418513) · assignee: crusaderky (6213168) · state: closed · state_reason: completed · locked: 0
comments: 3 · author_association: NONE
created_at: 2020-01-14T17:27:44Z · updated_at: 2020-01-17T20:42:25Z · closed_at: 2020-01-17T20:42:25Z
reactions: 0 (https://api.github.com/repos/pydata/xarray/issues/3695/reactions)
repo: xarray (13221727) · type: issue

Body:

Checked this with both 0.14.1 and the master branch. Create a minimal script that imports from xarray and run mypy --strict on it, which results in errors. I did a bit of digging trying to make it work; it looks like what makes the script pass mypy is adding an explicit `__all__`. Should xarray define `__all__`?
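For context on the mechanism the issue describes: mypy's --strict includes --no-implicit-reexport, under which names a package merely imports are not treated as part of its public API unless listed in `__all__` (or re-exported via `import x as x`). A hedged sketch of the same rule as it applies at runtime, using a throwaway module — the names `pkg` and `open_dataset` are illustrative, not xarray's actual layout:

```python
import sys
import types

# Build a fake package whose source both imports a module (math) and
# defines a public function, then restricts the API with __all__.
mod = types.ModuleType("pkg")
exec(
    "import math                      # imported, but not meant as public API\n"
    "def open_dataset(path):\n"
    "    return path\n"
    "__all__ = ['open_dataset']       # explicit public API\n",
    mod.__dict__,
)
sys.modules["pkg"] = mod

# A star-import honors __all__: only open_dataset comes through,
# the incidental `math` import does not leak.
ns = {}
exec("from pkg import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # ['open_dataset']
```

Without `__all__`, the star-import would also pull in `math`, which mirrors why mypy --strict wants the export list stated explicitly.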
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
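The page's filter (repo = 13221727, state = "closed", user = 2418513, ordered by updated_at descending) corresponds to a straightforward query against this schema. A runnable sketch using in-memory SQLite with a trimmed version of the table (only the columns the filter touches; the sample rows reuse the two issues listed above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trimmed-down version of the CREATE TABLE above, enough for the filter.
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, updated_at TEXT, repo INTEGER
    )"""
)
conn.execute("CREATE INDEX idx_issues_repo ON issues (repo)")

rows = [
    (427410885, 2857,
     "Quadratic slowdown when saving multiple datasets to the same h5 file (h5netcdf)",
     2418513, "closed", "2022-01-12T07:19:06Z", 13221727),
    (549712566, 3695,
     "mypy --strict fails on scripts/packages depending on xarray; __all__ required",
     2418513, "closed", "2020-01-17T20:42:25Z", 13221727),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as plain TEXT, so ORDER BY just works.
result = conn.execute(
    "SELECT number, updated_at FROM issues "
    "WHERE repo = ? AND state = ? AND user = ? "
    "ORDER BY updated_at DESC",
    (13221727, "closed", 2418513),
).fetchall()
print(result)  # [(2857, '2022-01-12T07:19:06Z'), (3695, '2020-01-17T20:42:25Z')]
```

The `idx_issues_repo` index lets SQLite narrow the scan to one repo before applying the remaining filters, which is why the export defines per-foreign-key indexes.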