issues
3 rows where comments = 9, type = "issue" and user = 5635139, sorted by updated_at descending
| id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2110888925 | I_kwDOAMm_X8590Zvd | 8690 | Add `nbytes` to repr? | max-sixty 5635139 | closed | 0 | | | 9 | 2024-01-31T20:13:59Z | 2024-02-19T22:18:47Z | 2024-02-07T20:47:38Z | MEMBER | | | | **Is your feature request related to a problem?** Would having the I frequently find myself logging this separately. For example: **Describe the solution you'd like** No response **Describe alternatives you've considered** Status quo :) **Additional context** No response | `{"url": "https://api.github.com/repos/pydata/xarray/issues/8690/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}` | | completed | xarray 13221727 | issue |
| 393710539 | MDU6SXNzdWUzOTM3MTA1Mzk= | 2627 | Is pep8speaks working well? | max-sixty 5635139 | closed | 0 | | | 9 | 2018-12-22T23:38:14Z | 2018-12-30T11:08:41Z | 2018-12-30T11:08:41Z | MEMBER | | | | I returned to do some work on xarray, and it looks like there are lots of linting errors. Maybe we shouldn't worry ourselves with this, though let's make a deliberate decision. Is pep8speaks working well? One example is this PR: https://github.com/pydata/xarray/pull/2553, where it looks like pep8speaks initially complained, but doesn't have a test so was potentially lost in the discussion. Any thoughts? I'm happy to help see if there are alternatives / pep8speaks can be added as a test / add a test in travis / fix these. FWIW I've used | `{"url": "https://api.github.com/repos/pydata/xarray/issues/2627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}` | | completed | xarray 13221727 | issue |
| 206632333 | MDU6SXNzdWUyMDY2MzIzMzM= | 1257 | PERF: Add benchmarking? | max-sixty 5635139 | closed | 0 | | | 9 | 2017-02-09T21:17:40Z | 2017-07-26T16:17:34Z | 2017-07-26T16:17:34Z | MEMBER | | | | Because xarray is all python and generally not doing much compute itself (i.e. it marshals other libraries to do that), this hasn't been that important. IIRC most of the performance issues have arisen where xarray builds on (arguably) shaky foundations, like Though as we mature, is it worth adding some benchmarks? If so, what's a good way to do this? Pandas uses asv successfully. I don't have experience with https://github.com/ionelmc/pytest-benchmark but that could be a lower cost way of getting started. Any others? | `{"url": "https://api.github.com/repos/pydata/xarray/issues/1257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}` | | completed | xarray 13221727 | issue |
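The `reactions` cell in each row is a JSON string stored in a TEXT column (see the schema below). A minimal sketch of unpacking it in SQL, assuming SQLite built with the JSON1 functions (included by default in modern builds); the in-memory table and sample row here are illustrative:

```python
import json
import sqlite3

# Illustrative in-memory copy of the issues table, reduced to the two
# columns this example needs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, reactions TEXT)")

# The reactions payload from issue 8690 above, serialized as JSON text.
reactions = {"total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0,
             "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
conn.execute("INSERT INTO issues VALUES (?, ?)",
             (2110888925, json.dumps(reactions)))

# json_extract pulls a value out of the stored JSON string in SQL.
total = conn.execute(
    "SELECT json_extract(reactions, '$.total_count') FROM issues WHERE id = ?",
    (2110888925,),
).fetchone()[0]
print(total)  # → 6
```

Keeping the blob as TEXT and extracting on demand avoids widening the table with nine rarely-queried reaction columns.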
CREATE TABLE [issues] (
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[number] INTEGER,
[title] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[state] TEXT,
[locked] INTEGER,
[assignee] INTEGER REFERENCES [users]([id]),
[milestone] INTEGER REFERENCES [milestones]([id]),
[comments] INTEGER,
[created_at] TEXT,
[updated_at] TEXT,
[closed_at] TEXT,
[author_association] TEXT,
[active_lock_reason] TEXT,
[draft] INTEGER,
[pull_request] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[state_reason] TEXT,
[repo] INTEGER REFERENCES [repos]([id]),
[type] TEXT
);
CREATE INDEX [idx_issues_repo]
ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
ON [issues] ([user]);
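The filter behind this page ("3 rows where comments = 9, type = "issue" and user = 5635139 sorted by updated_at descending") maps directly onto this schema. A minimal sketch against an in-memory SQLite database, with the table abridged (foreign-key columns to users/milestones/repos dropped for self-containment) and populated from the three rows above:

```python
import sqlite3

# Abridged in-memory version of the [issues] table defined above.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    title TEXT,
    [user] INTEGER,
    comments INTEGER,
    updated_at TEXT,
    repo INTEGER,
    type TEXT
)
""")
rows = [
    (2110888925, 8690, "Add `nbytes` to repr?", 5635139, 9,
     "2024-02-19T22:18:47Z", 13221727, "issue"),
    (393710539, 2627, "Is pep8speaks working well?", 5635139, 9,
     "2018-12-30T11:08:41Z", 13221727, "issue"),
    (206632333, 1257, "PERF: Add benchmarking?", 5635139, 9,
     "2017-07-26T16:17:34Z", 13221727, "issue"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?, ?)", rows)

# ISO-8601 timestamps sort correctly as plain strings, so ORDER BY on the
# TEXT updated_at column gives newest-first.
numbers = [n for (n,) in conn.execute(
    "SELECT number FROM issues "
    "WHERE comments = 9 AND type = 'issue' AND [user] = 5635139 "
    "ORDER BY updated_at DESC"
)]
print(numbers)  # → [8690, 2627, 1257]
```

Note that `user` must be bracket-quoted (`[user]`) in SQLite queries, as in the schema above, since it collides with a keyword in some SQL dialects.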