issue_comments
9 rows where user = 7316393 sorted by updated_at descending
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at, author_association, body, reactions, performed_via_github_app, issue. Every comment below is by arongergely (user 7316393) with author_association CONTRIBUTOR; in all nine rows the reaction counts are zero and performed_via_github_app is empty, so those columns are omitted from the records. Text lost from some comment bodies in the export is marked […].

id 1403706989 · node_id IC_kwDOAMm_X85Tqt5t
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1403706989
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-25T14:27:15Z · updated_at: 2023-01-25T14:27:15Z
body: maybe not. I changed it back so you could squash to exclude the noise. The test is adjusted accordingly.

id 1384325872 · node_id IC_kwDOAMm_X85SgyLw
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1384325872
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-16T16:58:15Z · updated_at: 2023-01-16T16:58:15Z
body: Will you cherry pick?

id 1384253560 · node_id IC_kwDOAMm_X85Sggh4
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1384253560
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-16T16:02:52Z · updated_at: 2023-01-16T16:03:02Z
body: Agreed!

id 1384142858 · node_id IC_kwDOAMm_X85SgFgK
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1384142858
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-16T14:24:23Z · updated_at: 2023-01-16T14:24:23Z
body: Changing the option setters to accept […]

id 1384130795 · node_id IC_kwDOAMm_X85SgCjr
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1384130795
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-16T14:16:27Z · updated_at: 2023-01-16T14:16:56Z
body: Hit a roadblock. For binary ops the only way to set […]. To circle around this we could let […]. Or perhaps I missed something?

id 1378812265 · node_id IC_kwDOAMm_X85SLwFp
html_url: https://github.com/pydata/xarray/issues/7414#issuecomment-1378812265
issue: Error using xarray.interp - function signature does not match with scipy.interpn (1518812301) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7414
created_at: 2023-01-11T14:11:41Z · updated_at: 2023-01-11T14:11:41Z
body: Looks like this is the culprit: https://github.com/scipy/scipy/issues/17718, to be fixed in scipy 1.10.1.

id 1378800682 · node_id IC_kwDOAMm_X85SLtQq
html_url: https://github.com/pydata/xarray/issues/7414#issuecomment-1378800682
issue: Error using xarray.interp - function signature does not match with scipy.interpn (1518812301) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7414
created_at: 2023-01-11T14:04:09Z · updated_at: 2023-01-11T14:04:29Z
body: This breaks the documentation build too, it seems! Sphinx errors out when it tries to parse https://github.com/pydata/xarray/blob/main/doc/user-guide/interpolation.rst

id 1378579381 · node_id IC_kwDOAMm_X85SK3O1
html_url: https://github.com/pydata/xarray/pull/7391#issuecomment-1378579381
issue: Follow keep_attrs in Dataset binary ops (1503573351) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7391
created_at: 2023-01-11T11:03:40Z · updated_at: 2023-01-11T11:03:40Z
body: Thanks for your suggestions @keewis!
I was puzzled initially. We would introduce the […]
I like the idea, shouldn't we
- implement this for […]
Could do all these, let me know.

id 1359023892 · node_id IC_kwDOAMm_X85RAQ8U
html_url: https://github.com/pydata/xarray/issues/7377#issuecomment-1359023892
issue: Aggregating a dimension using the Quantiles method with `skipna=True` is very slow (1497031605) · issue_url: https://api.github.com/repos/pydata/xarray/issues/7377
created_at: 2022-12-20T08:53:34Z · updated_at: 2022-12-20T08:57:52Z
body: Hi, this is a known issue coming from numpy.nanquantile / numpy.nanpercentile. I had the same problem; AFAIK the workaround is to implement your own nanpercentile calculation.
If you want to take that route, there is a blog post about the issue plus a numpy workaround for 3D arrays: https://krstn.eu/np.nanpercentile()-there-has-to-be-a-faster-way/
I also turned to the numpy mailing list, where Abel Aoun suggested looking into the algorithm used by the xclim project; see our thread here: https://mail.python.org/archives/list/numpy-discussion@python.org/message/EKQIS4KNOHS6ZAU5OSYTLNOOH7U2Y5TW/
I ended up taking that one and rewrote it to suit my needs, achieving a >100x speedup in my case. Good luck!
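The workaround described in that last comment can be sketched with a single sort along the aggregation axis, which is the approach the linked blog post takes for 3D stacks. This is a minimal illustration only, not the xclim algorithm the commenter ended up adapting; `fast_nanpercentile` is a hypothetical name, and it assumes a scalar `q` and at least one non-NaN value in every column.

```python
import numpy as np

def fast_nanpercentile(arr, q, axis=0):
    """Percentile along `axis`, ignoring NaNs, via one sort.

    np.sort pushes NaNs to the end of the axis, so the first
    `valid` entries of each sorted column are the non-NaN values.
    Linear interpolation between the two nearest ranks matches
    np.nanpercentile's default method. Assumes scalar q and at
    least one non-NaN value per column.
    """
    arr = np.moveaxis(np.asarray(arr, dtype=float), axis, 0)
    srt = np.sort(arr, axis=0)              # NaNs sort to the end
    valid = np.sum(~np.isnan(arr), axis=0)  # non-NaN count per column
    # Fractional rank of the requested percentile in each column.
    rank = (valid - 1) * (q / 100.0)
    lo = np.floor(rank).astype(int)
    hi = np.ceil(rank).astype(int)
    frac = rank - lo
    lo_val = np.take_along_axis(srt, lo[np.newaxis], axis=0)[0]
    hi_val = np.take_along_axis(srt, hi[np.newaxis], axis=0)[0]
    return lo_val + (hi_val - lo_val) * frac
```

The speedup over np.nanpercentile comes from sorting every column in one vectorized pass instead of looping over columns; because NaNs sort to the end, the valid values of each column form a contiguous prefix that the rank indices address directly.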
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
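The schema above can be loaded straight into SQLite to reproduce this page's query (rows for user 7316393, newest update first). Below is a minimal sketch using Python's standard sqlite3 module with two sample rows from the table above; the referenced users and issues tables are omitted, which SQLite tolerates because foreign keys are not enforced unless the pragma is enabled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The schema from this page. The users/issues tables it references are
# omitted -- SQLite only checks foreign keys when PRAGMA foreign_keys=ON.
conn.executescript("""
CREATE TABLE [issue_comments] (
    [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
    [node_id] TEXT, [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT, [updated_at] TEXT, [author_association] TEXT,
    [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# Two sample rows from the table above (bodies truncated).
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, body, issue)"
    " VALUES (?, ?, ?, ?, ?)",
    [
        (1384325872, 7316393, "2023-01-16T16:58:15Z", "Will you cherry pick?", 1503573351),
        (1403706989, 7316393, "2023-01-25T14:27:15Z", "maybe not. ...", 1503573351),
    ],
)

# ISO 8601 timestamps stored as TEXT sort chronologically, so ORDER BY
# on the string column yields newest-first without any date parsing.
rows = conn.execute(
    "SELECT id, updated_at FROM issue_comments"
    " WHERE user = ? ORDER BY updated_at DESC",
    (7316393,),
).fetchall()
print(rows)
```

The idx_issue_comments_user index makes the WHERE clause an index lookup rather than a full scan, which is why the page can filter nine rows out of the whole comments table cheaply.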