issues
3 rows where state = "open", type = "issue", and user = 2560426, sorted by updated_at descending
**713834297 | MDU6SXNzdWU3MTM4MzQyOTc= | #4482 | Allow skipna in .dot()**

user: heerad (2560426) | state: open | locked: 0 | comments: 13
created_at: 2020-10-02T18:52:41Z | updated_at: 2020-10-20T22:21:14Z | closed_at: (none)
author_association: NONE | repo: xarray (13221727) | type: issue
assignee, milestone, active_lock_reason, draft, pull_request, performed_via_github_app, state_reason: (empty)
reactions: total_count 0 (all reaction types 0) | https://api.github.com/repos/pydata/xarray/issues/4482/reactions

body:

Is your feature request related to a problem? Please describe.
Right now there's no efficient way to do a dot product that skips over nan elements.

Describe the solution you'd like
I want to be able to treat the summation in […]

Describe alternatives you've considered
It's possible to implement this by hand, but it ends up being extremely inefficient in one of my use-cases: […]
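For context, the by-hand workaround this body alludes to can be sketched as follows (a minimal illustration with made-up arrays; per the request itself, xr.dot had no skipna option at the time of filing):

```python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, np.nan, 3.0], dims="x")
b = xr.DataArray([2.0, 5.0, np.nan], dims="x")

# xr.dot propagates NaN: any missing pair poisons the whole sum.
print(xr.dot(a, b).item())      # nan

# By-hand workaround: multiply elementwise, then sum with skipna
# (the default for float data). Correct, but it materializes the full
# product array, which is what makes this inefficient at scale.
print((a * b).sum("x").item())  # 2.0 -- only the (1.0, 2.0) pair is valid
```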
**712052219 | MDU6SXNzdWU3MTIwNTIyMTk= | #4474 | Implement rolling_exp for dask arrays**

user: heerad (2560426) | state: open | locked: 0 | comments: 7
created_at: 2020-09-30T15:31:50Z | updated_at: 2020-10-15T16:32:03Z | closed_at: (none)
author_association: NONE | repo: xarray (13221727) | type: issue
assignee, milestone, active_lock_reason, draft, pull_request, performed_via_github_app, state_reason: (empty)
reactions: total_count 1 (+1: 1, all others 0) | https://api.github.com/repos/pydata/xarray/issues/4474/reactions

body:

Is your feature request related to a problem? Please describe.
I use dask-based chunking on my arrays regularly and would like to leverage the efficient rolling_exp […]

Describe the solution you'd like
It's possible to compute a rolling exp mean as a function of the rolling exp means of contiguous, non-overlapping subsets (chunks). You just need to first "un-normalize" the rolling_exps of each chunk in order to split them into their corresponding numerators and denominators (see the […]). Then, scale each chunk's numerator and denominator series (derived from their […]) […]

Describe alternatives you've considered
I implemented my own inefficient weighted rolling mean using xarray's […]
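The chunk-recombination scheme this body describes can be verified with plain NumPy/pandas (a sketch assuming pandas' adjust=True definition of the exponential mean; the helper name `unnormalized` is made up here):

```python
import numpy as np
import pandas as pd

alpha = 0.3
w = 1 - alpha  # per-step decay

def unnormalized(x):
    # Un-normalized EWM state: num[t] = sum_i w**(t-i) * x[i] and
    # den[t] = sum_i w**(t-i), so the EWM mean is num[t] / den[t].
    num, den = np.empty(len(x)), np.empty(len(x))
    n = d = 0.0
    for t, v in enumerate(x):
        n = w * n + v
        d = w * d + 1.0
        num[t], den[t] = n, d
    return num, den

rng = np.random.default_rng(0)
a, b = rng.random(5), rng.random(7)  # two contiguous, non-overlapping chunks

num_a, den_a = unnormalized(a)
num_b, den_b = unnormalized(b)

# Stitch chunk b onto chunk a: a's final state reaches position j of b
# decayed by w**(j+1), and adds to b's local numerator/denominator.
j = np.arange(len(b))
num = num_b + w ** (j + 1) * num_a[-1]
den = den_b + w ** (j + 1) * den_a[-1]

expected = pd.Series(np.concatenate([a, b])).ewm(alpha=alpha, adjust=True).mean()
np.testing.assert_allclose(num / den, expected.to_numpy()[len(a):])
```

This is exactly the decomposition the body proposes: per-chunk results are un-normalized into numerator and denominator series, scaled across chunk boundaries, and recombined, which a dask-backed rolling_exp could exploit chunk by chunk.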
**712189206 | MDU6SXNzdWU3MTIxODkyMDY= | #4475 | Preprocess function for save_mfdataset**

user: heerad (2560426) | state: open | locked: 0 | comments: 9
created_at: 2020-09-30T18:47:06Z | updated_at: 2020-10-15T16:32:03Z | closed_at: (none)
author_association: NONE | repo: xarray (13221727) | type: issue
assignee, milestone, active_lock_reason, draft, pull_request, performed_via_github_app, state_reason: (empty)
reactions: total_count 0 (all reaction types 0) | https://api.github.com/repos/pydata/xarray/issues/4475/reactions

body:

Is your feature request related to a problem? Please describe.
I would like to supply a […]

Describe the solution you'd like
Instead, I'd like the ability to do: […]

Describe alternatives you've considered
Not sure.
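Since save_mfdataset takes parallel lists of datasets and paths, the current workaround is to apply the transformation yourself before saving. A sketch under stated assumptions: the preprocess function, datasets, and paths below are all made up, and for dask-backed data an eager preprocess step may forfeit the laziness the issue presumably wants to keep.

```python
import numpy as np
import xarray as xr

# Hypothetical per-dataset step; save_mfdataset has no preprocess
# argument as of this report, so it is applied by hand before the call.
def preprocess(ds):
    return ds.assign_attrs(processed=1)

datasets = [xr.Dataset({"a": ("x", np.arange(3) + 10 * i)}) for i in range(2)]
paths = [f"out_{i}.nc" for i in range(2)]  # made-up output paths

xr.save_mfdataset([preprocess(ds) for ds in datasets], paths)
```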
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);