issue_comments
8 rows where issue = 1221848774 and user = 43316012 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1117620357 | https://github.com/pydata/xarray/pull/6548#issuecomment-1117620357 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CnYiF | headtr1ck 43316012 | 2022-05-04T17:33:07Z | 2022-05-04T17:33:37Z | COLLABORATOR | Personally I would allow coeffs without an explicit index, since I am a lazy person and would like to do But I am happy with this code and look forward to using it in my projects :) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
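For context, a minimal sketch of the call being discussed: xr.polyval evaluates coefficients along a "degree" dimension, and the open question above is whether that dimension needs an explicit index. The example builds the coordinate explicitly; whether an index-less coeffs array is also accepted depends on the final implementation.

```python
import numpy as np
import xarray as xr

x = xr.DataArray(np.linspace(0.0, 1.0, 5), dims="x")

# Coefficients of 1 + 2*x + 3*x**2, with an explicit "degree" coordinate.
coeffs = xr.DataArray([1.0, 2.0, 3.0], dims="degree", coords={"degree": [0, 1, 2]})

result = xr.polyval(x, coeffs)
```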
1115830683 | https://github.com/pydata/xarray/pull/6548#issuecomment-1115830683 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85Cgjmb | headtr1ck 43316012 | 2022-05-03T07:57:46Z | 2022-05-03T08:06:29Z | COLLABORATOR | One minor open point: what to do with a non-integer "degree" index? A float type could be cast to integer (that's what is happening now), but a (nonsensical) datetime index etc. should raise an error? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
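A rough sketch of the kind of check being discussed; _ensure_int_degrees is a hypothetical helper, not the merged implementation. It casts a float "degree" index to integer (the behaviour described above) and rejects datetime-like indexes.

```python
import numpy as np

def _ensure_int_degrees(degrees: np.ndarray) -> np.ndarray:
    # Hypothetical validation: datetime-like degree values make no sense.
    if np.issubdtype(degrees.dtype, np.datetime64) or np.issubdtype(degrees.dtype, np.timedelta64):
        raise ValueError("'degree' index must be numeric, not datetime-like")
    # Floats are truncated to integers, mirroring the current behaviour.
    return degrees.astype(int)
```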
1114539954 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114539954 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85Cboey | headtr1ck 43316012 | 2022-05-02T06:30:21Z | 2022-05-02T09:50:49Z | COLLABORATOR | Edit: never mind, it was only confusing output while the benchmark was failing. Now the benchmark looks good :) First time working with asv... It seems that module-level variables affect all other peakmem tests (I guess the memory usage of the Python process is measured). We should refactor all DataArrays into the setup functions, otherwise O(n)-memory algorithms will show wrong numbers and adding new tests will show regressions on other tests. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
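An illustrative asv benchmark layout (not the actual xarray benchmark file) showing the suggested refactor: inputs are created in setup() rather than at module level, so each peakmem_* measurement only sees its own allocations.

```python
import numpy as np
import xarray as xr

class Polyval:
    def setup(self):
        # Allocating here instead of at module level keeps these arrays out of
        # the peak-memory numbers of unrelated benchmarks in the same process.
        self.x = xr.DataArray(np.random.randn(1_000_000), dims="x")
        self.coeffs = xr.DataArray(
            np.random.randn(6), dims="degree", coords={"degree": np.arange(6)}
        )

    def time_polyval(self):
        xr.polyval(self.x, self.coeffs)

    def peakmem_polyval(self):
        xr.polyval(self.x, self.coeffs)
```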
1114297460 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114297460 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CatR0 | headtr1ck 43316012 | 2022-05-01T18:00:07Z | 2022-05-01T18:00:07Z | COLLABORATOR | The benchmark did not succeed since the inputs are not compatible with the old algorithm... Do we change it so that it is compatible? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
1114002143 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114002143 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CZlLf | headtr1ck 43316012 | 2022-04-30T14:57:58Z | 2022-05-01T11:50:12Z | COLLABORATOR | Several open points still:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
1114212076 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114212076 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CaYbs | headtr1ck 43316012 | 2022-05-01T11:41:54Z | 2022-05-01T11:41:54Z | COLLABORATOR | I added rough support for datetime values. Someone with more knowledge of handling them should take a look; the code seems too complicated and I am sure there is a more clever solution (I could not use
I think keeping support is nice, since they are commonly occurring coordinates and we do not want to break anything if possible. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
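One common way to deal with datetime coordinates (a general sketch, not necessarily what the PR implements) is to convert them to a numeric offset from a reference time and evaluate the polynomial on that numeric axis:

```python
import numpy as np
import xarray as xr

times = xr.DataArray(
    np.array(["2000-01-01", "2000-01-02", "2000-01-03"], dtype="datetime64[ns]"),
    dims="time",
)

# Convert datetimes to float seconds since an arbitrary epoch before evaluating.
x_numeric = (times - times[0]) / np.timedelta64(1, "s")

coeffs = xr.DataArray([0.0, 1.0], dims="degree", coords={"degree": [0, 1]})
result = xr.polyval(x_numeric, coeffs)
```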
1114196391 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114196391 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CaUmn | headtr1ck 43316012 | 2022-05-01T10:27:52Z | 2022-05-01T10:27:52Z | COLLABORATOR | Some performance comparison, all with a 5th-order polynomial. 10 x-values: old 1.05 ms ± 15.8 µs per loop, new 1.41 ms ± 11.6 µs per loop. 10,000 x-values: old 1.46 ms ± 10.5 µs per loop, new 1.41 ms ± 14.5 µs per loop. 1 million x-values: old 65.1 ms ± 332 µs per loop, new 6.99 ms ± 168 µs per loop. As expected, for small arrays the new method adds some overhead, but for larger arrays the speedup is quite nice. Also, it uses in-place operations with much lower memory usage. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 | |
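The speedup comes from Horner's scheme (the algorithm named in the PR title), which evaluates the polynomial with one multiply-add per coefficient and a single accumulator instead of materialising every power of x; a minimal NumPy sketch:

```python
import numpy as np

def horner(x: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Evaluate sum(coeffs[d] * x**d) with Horner's scheme.

    The loop runs from the highest degree down and reuses one accumulator
    with in-place operations, which is where the memory savings come from.
    """
    res = np.full_like(x, coeffs[-1], dtype=float)
    for c in coeffs[-2::-1]:
        res *= x
        res += c
    return res

x = np.linspace(0.0, 1.0, 10)
np.testing.assert_allclose(horner(x, np.array([1.0, 2.0, 3.0])), 1 + 2 * x + 3 * x**2)
```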
1114028796 | https://github.com/pydata/xarray/pull/6548#issuecomment-1114028796 | https://api.github.com/repos/pydata/xarray/issues/6548 | IC_kwDOAMm_X85CZrr8 | headtr1ck 43316012 | 2022-04-30T18:01:10Z | 2022-04-30T18:01:10Z | COLLABORATOR | I noticed that broadcasting Datasets behaves weirdly, see https://github.com/pydata/xarray/issues/6549, so I used a "hack" of adding a 0-valued DataArray/Dataset. Anyone got a better idea? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
polyval: Use Horner's algorithm + support chunked inputs 1221848774 |
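A sketch of the kind of workaround described, under the assumption that the goal is simply to get a Dataset and a DataArray onto common dimensions: adding a zero-valued operand lets ordinary arithmetic broadcasting do the alignment where Dataset broadcasting itself was reported to behave unexpectedly (see the linked issue).

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(3.0)), "b": ("x", np.arange(3.0) * 2)})
other = xr.DataArray(np.arange(4.0), dims="y")

# Adding a zero-valued DataArray broadcasts every variable of the Dataset
# against the extra "y" dimension without changing any values.
broadcasted = ds + xr.zeros_like(other)
```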
CREATE TABLE [issue_comments] (
  [html_url] TEXT,
  [issue_url] TEXT,
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [created_at] TEXT,
  [updated_at] TEXT,
  [author_association] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);