issue_comments
7 rows where issue = 523438384 and user = 6213168 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
554980232 | https://github.com/pydata/xarray/pull/3537#issuecomment-554980232 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDk4MDIzMg== | crusaderky 6213168 | 2019-11-18T11:42:16Z | 2019-11-18T11:42:16Z | MEMBER | Ready for review and merge | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554980168 | https://github.com/pydata/xarray/pull/3537#issuecomment-554980168 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDk4MDE2OA== | crusaderky 6213168 | 2019-11-18T11:42:04Z | 2019-11-18T11:42:04Z | MEMBER | Pre-empting an observation: yes, the nanmin function I wrote could be used to work around the shortcomings of both numpy and dask on datetime64. I don't think xarray should do it, and IMHO the workaround currently implemented for mean() should never have been done to begin with. Again, this PR only fixes the regression on numpy 1.18. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554495070 | https://github.com/pydata/xarray/pull/3537#issuecomment-554495070 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDQ5NTA3MA== | crusaderky 6213168 | 2019-11-15T19:25:48Z | 2019-11-15T19:25:48Z | MEMBER | I just realised there are no tests at all for | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554485869 | https://github.com/pydata/xarray/pull/3537#issuecomment-554485869 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDQ4NTg2OQ== | crusaderky 6213168 | 2019-11-15T18:58:24Z | 2019-11-15T19:00:46Z | MEMBER | Updated PR description: Fix mean() and nanmean() for datetime64 arrays on numpy backend when upgrading from numpy 1.17 to 1.18. All other nan-reductions on datetime64s were broken before and remain broken. mean() on datetime64 and dask was broken before and remains broken. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554481913 | https://github.com/pydata/xarray/pull/3537#issuecomment-554481913 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDQ4MTkxMw== | crusaderky 6213168 | 2019-11-15T18:47:21Z | 2019-11-15T18:50:38Z | MEMBER | @dopplershift fully agree. This PR does not fix the nan-reductions for datetime arrays. It very narrowly fixes the use case of mean() and nanmean() with a numpy backend, so that they work on numpy 1.18 as they did on 1.17. As a matter of fact they never worked with dask, were never tested, and still won't work after the PR. To emphasize: I am not introducing any new feature or fixing bugs on numpy 1.17; I'm just getting rid of regressions on the upgrade from 1.17 to 1.18. Specifically, the PR fixes this failing test: https://github.com/pydata/xarray/blob/aa876cfd6b3b97ee5028d089ec741d057e3ed688/xarray/tests/test_duck_array_ops.py#L276-L293 which was not well formulated - despite its name, it doesn't test all reductions, only mean(), and in fact never runs on dask (the index of a dask array is not chunked). I have now rewritten it for the sake of clarity. If numpy/numpy#14841 makes it into master before 1.18 is published, and if it actually delivers on making reductions work with NaT (I did not test it yet), I'll be happy to write a second PR and rework everything. Even then, a switch of this sort will still be needed: | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554352386 | https://github.com/pydata/xarray/pull/3537#issuecomment-554352386 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDM1MjM4Ng== | crusaderky 6213168 | 2019-11-15T13:08:25Z | 2019-11-15T13:08:25Z | MEMBER | Ready for review and merge | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
554341027 | https://github.com/pydata/xarray/pull/3537#issuecomment-554341027 | https://api.github.com/repos/pydata/xarray/issues/3537 | MDEyOklzc3VlQ29tbWVudDU1NDM0MTAyNw== | crusaderky 6213168 | 2019-11-15T12:26:50Z | 2019-11-15T12:26:50Z | MEMBER | xref #3434 | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Numpy 1.18 support 523438384 |
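The comments above discuss making reductions such as mean() and nanmin() skip NaT on datetime64 arrays. A minimal sketch of the general int64-view technique follows; this is an illustration only, not the code from this PR, and `datetime64_nanmean` is a hypothetical helper name:

```python
import numpy as np

def datetime64_nanmean(arr):
    # Hypothetical helper sketching the int64-view technique; NOT the
    # actual implementation from pydata/xarray PR #3537.
    ints = arr.view("i8").astype("f8")   # datetime64 -> integer offsets -> float
    ints[np.isnat(arr)] = np.nan         # NaT slots become NaN
    mean = int(round(np.nanmean(ints)))  # NaN-aware mean in integer space
    return np.array(mean, dtype=arr.dtype)  # convert back to datetime64

times = np.array(["2019-11-15", "2019-11-17", "NaT"], dtype="datetime64[D]")
print(datetime64_nanmean(times))  # midpoint of the two valid dates: 2019-11-16
```

The same view-mask-reduce pattern would apply to nanmin() and the other nan-reductions the comments describe as still broken.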
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
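The schema above defines the table this page queries. As a hedged illustration (an in-memory database seeded with one sample row from the table above), the page's filter and sort can be reproduced with Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory stand-in for the issue_comments table described by the schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
        html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
        user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
        body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
    )"""
)
conn.execute(
    "INSERT INTO issue_comments (id, user, updated_at, body, issue) VALUES "
    "(554980232, 6213168, '2019-11-18T11:42:16Z', 'Ready for review and merge', 523438384)"
)
# The page's query: comments on issue 523438384 by user 6213168, newest first.
rows = conn.execute(
    "SELECT id, body FROM issue_comments "
    "WHERE issue = 523438384 AND user = 6213168 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)  # [(554980232, 'Ready for review and merge')]
```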