issue_comments
5 rows where issue = 437765416 and user = 7441788 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
601885539 | https://github.com/pydata/xarray/pull/2922#issuecomment-601885539 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTg4NTUzOQ== | seth-p 7441788 | 2020-03-20T19:57:54Z | 2020-03-20T20:00:20Z | CONTRIBUTOR | All good points:
Good idea, though I don't know what the performance hit of the extra check would be (in the case that da does contain NaNs, the check is for naught).
Well, …
Yes. You can continue not supporting NaNs in the weights, yet not explicitly check that there are no NaNs (optionally, if the caller assures you that there are no NaNs); see the sketch after this row.
Correct. These have nothing to do with the NaN issue. For profiling memory usage, I use … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
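The opt-out idea in the comment above (keep rejecting NaNs in the weights, but let a caller who guarantees clean weights skip the explicit scan) could look roughly like this. A minimal sketch only: the `weighted_sum` helper and the `check_weights` parameter are invented names for illustration, not part of the xarray API.

```python
import numpy as np
import xarray as xr

def weighted_sum(da, weights, dim, check_weights=True):
    # Hypothetical opt-out: callers who assure clean weights can skip the
    # (potentially costly) NaN scan of the weights array.
    if check_weights and weights.isnull().any():
        raise ValueError("weights cannot contain missing values")
    return xr.dot(da, weights, dims=dim)

da = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("x", "y"))
weights = xr.DataArray([0.5, 0.3, 0.2], dims="y")
print(weighted_sum(da, weights, dim="y"))  # dims ("x",), values [0.7, 3.7]
```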
601709733 | https://github.com/pydata/xarray/pull/2922#issuecomment-601709733 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTcwOTczMw== | seth-p 7441788 | 2020-03-20T13:47:39Z | 2020-03-20T16:31:14Z | CONTRIBUTOR | @mathause, have you considered using these functions?
- np.average() to calculate weighted … (see the sketch after this row) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
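For reference, a minimal sketch of the NumPy routine the truncated bullet points to: `np.average` computes a weighted mean directly and, with `returned=True`, also hands back the sum of the weights.

```python
import numpy as np

data = np.array([1.0, 2.0, 4.0])
weights = np.array([0.2, 0.3, 0.5])

# Weighted mean: (1*0.2 + 2*0.3 + 4*0.5) / (0.2 + 0.3 + 0.5) = 2.8
mean, weight_sum = np.average(data, weights=weights, returned=True)
print(mean, weight_sum)  # 2.8 1.0
```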
601708110 | https://github.com/pydata/xarray/pull/2922#issuecomment-601708110 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTcwODExMA== | seth-p 7441788 | 2020-03-20T13:44:03Z | 2020-03-20T13:52:06Z | CONTRIBUTOR | @mathause, ideally
Either way, this only addresses the … Also, perhaps the test … Maybe I'm more sensitive to this than others, but I regularly deal with 10-100GB arrays. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601699091 | https://github.com/pydata/xarray/pull/2922#issuecomment-601699091 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTY5OTA5MQ== | seth-p 7441788 | 2020-03-20T13:25:21Z | 2020-03-20T13:25:21Z | CONTRIBUTOR | @max-sixty, I wish I could, but I'm afraid that I cannot submit code due to employer limitations. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601496897 | https://github.com/pydata/xarray/pull/2922#issuecomment-601496897 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTQ5Njg5Nw== | seth-p 7441788 | 2020-03-20T02:11:53Z | 2020-03-20T02:12:24Z | CONTRIBUTOR | I realize this is a bit late, but I'm still concerned about memory usage, specifically in https://github.com/pydata/xarray/blob/master/xarray/core/weighted.py#L130 and https://github.com/pydata/xarray/blob/master/xarray/core/weighted.py#L143.
If I had implemented this using … (see the sketch after this row) |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 |
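A rough illustration of the memory concern raised above, assuming the cited lines build the weighted reduction as a multiply-then-sum (a sketch of the general idea, not the code at the linked lines): the elementwise product materialises a temporary as large as the broadcast arrays, which matters for 10-100GB inputs, whereas `xr.dot` contracts over the shared dimension without that intermediate.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(2000, 2000), dims=("x", "y"))
weights = xr.DataArray(np.random.rand(2000), dims="y")

# Allocates a full (2000, 2000) temporary before reducing over "y".
naive = (da * weights).sum(dim="y")

# Contracts over "y" (via einsum) without building the broadcast product.
fused = xr.dot(da, weights, dims="y")

np.testing.assert_allclose(naive.values, fused.values)
```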
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);