issue_comments
4 rows where issue = 908464731 sorted by updated_at descending
| id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 862579076 | https://github.com/pydata/xarray/issues/5424#issuecomment-862579076 | https://api.github.com/repos/pydata/xarray/issues/5424 | MDEyOklzc3VlQ29tbWVudDg2MjU3OTA3Ng== | dcherian 2448579 | 2021-06-16T17:42:34Z | 2021-06-16T17:42:34Z | MEMBER |
Yeah I think it'd be nice to opt-in/out to bottleneck and maybe even support numbagg somehow. |
{"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |
Bottleneck bug with unusual strides - causes segfault or wrong number 908464731 | |
| 852355465 | https://github.com/pydata/xarray/issues/5424#issuecomment-852355465 | https://api.github.com/repos/pydata/xarray/issues/5424 | MDEyOklzc3VlQ29tbWVudDg1MjM1NTQ2NQ== | max-sixty 5635139 | 2021-06-01T18:36:04Z | 2021-06-01T18:36:04Z | MEMBER | I don't think there's a config for disabling bottleneck — assuming that's correct, we'd take a PR for one. FYI one thing that does seem to work is setting the type to float:
|
{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |
Bottleneck bug with unusual strides - causes segfault or wrong number 908464731 | |
| 852334068 | https://github.com/pydata/xarray/issues/5424#issuecomment-852334068 | https://api.github.com/repos/pydata/xarray/issues/5424 | MDEyOklzc3VlQ29tbWVudDg1MjMzNDA2OA== | lusewell 3801015 | 2021-06-01T18:01:05Z | 2021-06-01T18:01:05Z | CONTRIBUTOR | Annoyingly the bug affects pretty much every bottleneck function, not just max, and I'm dealing with a large codebase where lots of the code just uses the methods attached to the xarray objects. Is there a way of disabling use of bottleneck inside xarray without uninstalling bottleneck? And if so, do you know if this is expected to give the same results? Pandas (probably a few versions ago now) had a situation where if you uninstalled bottleneck it would use some other routine, but the nan-handling was then different - I think it caused different results in the all-nan case. Quick response appreciated though, and I might have a delve into fixing bottleneck myself if I get the free time. |
{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |
Bottleneck bug with unusual strides - causes segfault or wrong number 908464731 | |
| 852265735 | https://github.com/pydata/xarray/issues/5424#issuecomment-852265735 | https://api.github.com/repos/pydata/xarray/issues/5424 | MDEyOklzc3VlQ29tbWVudDg1MjI2NTczNQ== | max-sixty 5635139 | 2021-06-01T16:33:09Z | 2021-06-01T16:33:16Z | MEMBER | Thanks @lusewell. Unfortunately — as you suggest — I don't think there's much we can do — but this does seem like a bad bug. It might be worth checking out numbagg — https://github.com/numbagg/numbagg — which we use for fast operations that bottleneck doesn't include. Disclaimer that it comes from @shoyer, and I've recently given it a spring cleaning. To the extent this isn't fixed in bottleneck, we could offer an option to use numbagg, though it would probably require a contribution. If you need this working for now, you could probably write a workaround for yourself using numbagg fairly quickly; e.g.
```python
In [6]: numbagg.nanmax(xarr.values)
Out[6]: 0.0
```
or, more generally:
```python
In [12]: xr.apply_ufunc(numbagg.nanmax, xarr, input_core_dims=(('A','B','C'),))
Out[12]:
<xarray.DataArray ()>
array(0.)
``` |
{"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0} |
Bottleneck bug with unusual strides - causes segfault or wrong number 908464731 |
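The `apply_ufunc` workaround in the last comment above (852265735) generalizes naturally to any set of dimensions. Below is a minimal sketch of that idea; `nanmax_all` is a hypothetical helper name, and it assumes numbagg is installed and behaves as in the comment's example.

```python
import numbagg
import xarray as xr


def nanmax_all(da: xr.DataArray) -> xr.DataArray:
    """Hypothetical helper: NaN-aware max over every dimension of ``da``,
    routed through numbagg instead of xarray's bottleneck-backed reduction."""
    return xr.apply_ufunc(
        numbagg.nanmax,
        da,
        input_core_dims=[list(da.dims)],  # consume all dims, return a scalar
    )
```

Called on the `xarr` from the comment, `nanmax_all(xarr)` should return the same 0-d DataArray as the `In [12]` example above.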
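The float-cast workaround mentioned in comment 852355465, and the ability to disable bottleneck that comment 852334068 asks about, can be sketched as follows. `xarr` is a hypothetical stand-in for the strided view from the issue, and the `use_bottleneck` flag is an assumption based on the `xr.set_options` option added in xarray releases after this thread.

```python
import numpy as np
import xarray as xr

# Hypothetical stand-in for the issue's array: slicing the underlying numpy
# array yields unusual (non-contiguous) strides.
data = np.zeros((4, 4, 8))[..., ::2]
xarr = xr.DataArray(data, dims=("A", "B", "C"))

# Workaround from comment 852355465: casting to float forces a fresh,
# contiguous copy, which avoids handing the strided buffer to bottleneck.
result = xarr.astype(float).max()

# In xarray releases after this thread, bottleneck can also be switched off
# (assumed here: the `use_bottleneck` flag in xr.set_options).
with xr.set_options(use_bottleneck=False):
    result = xarr.max()
```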
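Comment 852334068 also worries that routing around bottleneck might change nan-handling, as it once did in pandas. One way to check is to compare xarray's reduction against a plain numpy nan-aware reduction on an all-NaN array; a minimal sketch, with `xarr` again hypothetical:

```python
import numpy as np
import xarray as xr

# Hypothetical all-NaN array to probe the nan-handling concern.
xarr = xr.DataArray(np.full((3, 3), np.nan), dims=("A", "B"))

# xarray's skipna reduction (which may use bottleneck when it is installed)
# versus numpy's nan-aware reduction; np.nanmax warns on an all-NaN slice.
via_xarray = xarr.max().item()
via_numpy = np.nanmax(xarr.values)
print(via_xarray, via_numpy)
```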
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);