issue_comments
16 rows where author_association = "MEMBER" and issue = 437765416 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
601612380 | https://github.com/pydata/xarray/pull/2922#issuecomment-601612380 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTYxMjM4MA== | mathause 10194086 | 2020-03-20T09:45:23Z | 2020-10-27T14:47:22Z | MEMBER | tldr: if someone knows how to do memory profiling with reasonable effort, this can still be changed. It's certainly not too late to change the "backend" of the weighting functions. I once tried to profile the memory usage but gave up at some point (I think I would have needed to annotate a ton of functions, also in numpy). @fujiisoup suggested using … Also, I think it should not be very difficult to write something that can be passed to … So there would be three possibilities: (1) the current implementation (using … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601824129 | https://github.com/pydata/xarray/pull/2922#issuecomment-601824129 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTgyNDEyOQ== | mathause 10194086 | 2020-03-20T17:31:15Z | 2020-03-20T17:31:15Z | MEMBER | There is some stuff I can do to reduce the memory footprint if
Yes, this would be nice. What could be done, though, is to only do
I assume so. I don't know what kind of temporary variables
Again this could be avoided if
Do you want to leave it out for performance reasons? Because it was a deliberate decision to not support
No, it's important to make sure this stuff works for large arrays. However, using
None of your suggested functions support … I am all for supporting more functions, but currently I am happy we got a weighted sum and mean into xarray after 5(!) years! Further libraries that support weighted operations:
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
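The masking approach discussed in the comment above (drop the weights of missing values so NaNs do not leak into the normalization) can be sketched in plain NumPy. This is an editor's illustration of the semantics described in the thread, not xarray's actual implementation; the helper name is hypothetical.

```python
import numpy as np

def weighted_mean(data, weights, axis=None):
    """Weighted mean that skips NaNs in ``data``.

    Weights aligned with missing values are excluded from the
    normalization, mirroring the masking strategy discussed above.
    (Hypothetical helper for illustration only.)
    """
    mask = np.isnan(data)
    w = np.where(mask, 0.0, weights)   # zero out weights of missing values
    d = np.where(mask, 0.0, data)      # replace NaN so it cannot propagate
    sum_of_weights = w.sum(axis=axis)
    with np.errstate(invalid="ignore"):  # 0/0 -> NaN when everything is missing
        return (d * w).sum(axis=axis) / sum_of_weights

x = np.array([1.0, 2.0, np.nan])
w = np.array([0.25, 0.5, 0.25])
weighted_mean(x, w)  # (1 * 0.25 + 2 * 0.5) / 0.75
```

Note that the third weight (0.25) is dropped from the denominator because its value is NaN, which is exactly the behaviour debated later in the thread.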
601514904 | https://github.com/pydata/xarray/pull/2922#issuecomment-601514904 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTUxNDkwNA== | max-sixty 5635139 | 2020-03-20T04:01:34Z | 2020-03-20T04:01:34Z | MEMBER | We do those sorts of operations fairly frequently, so it's not unique here. Generally users should expect to have available ~3x the memory of an array for most operations. @seth-p it's great you've taken an interest in the project! Is there any chance we could harness that into some contributions? 😄 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601377953 | https://github.com/pydata/xarray/pull/2922#issuecomment-601377953 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTM3Nzk1Mw== | max-sixty 5635139 | 2020-03-19T19:34:42Z | 2020-03-19T19:34:42Z | MEMBER |
😂 @mathause props for the persistence... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601298407 | https://github.com/pydata/xarray/pull/2922#issuecomment-601298407 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTI5ODQwNw== | jhamman 2443309 | 2020-03-19T16:58:57Z | 2020-03-19T16:58:57Z | MEMBER | Big time!!!! Thanks @mathause! #422 was opened in June of 2015, amazing. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601283025 | https://github.com/pydata/xarray/pull/2922#issuecomment-601283025 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTI4MzAyNQ== | max-sixty 5635139 | 2020-03-19T16:37:43Z | 2020-03-19T16:37:43Z | MEMBER | Thanks @mathause ! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601214104 | https://github.com/pydata/xarray/pull/2922#issuecomment-601214104 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTIxNDEwNA== | mathause 10194086 | 2020-03-19T14:35:25Z | 2020-03-19T14:35:25Z | MEMBER | Great! Thanks for all the feedback and support! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
601210885 | https://github.com/pydata/xarray/pull/2922#issuecomment-601210885 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDYwMTIxMDg4NQ== | dcherian 2448579 | 2020-03-19T14:29:42Z | 2020-03-19T14:29:42Z | MEMBER | This is going in. Thanks @mathause. This is a major contribution! |
{ "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
595373665 | https://github.com/pydata/xarray/pull/2922#issuecomment-595373665 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDU5NTM3MzY2NQ== | mathause 10194086 | 2020-03-05T18:18:22Z | 2020-03-05T18:18:22Z | MEMBER | I updated this once more. Mostly moved the example to a notebook. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
562206026 | https://github.com/pydata/xarray/pull/2922#issuecomment-562206026 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDU2MjIwNjAyNg== | mathause 10194086 | 2019-12-05T16:29:51Z | 2019-12-05T16:29:51Z | MEMBER | This is now ready for a full review. I added tests for weighted reductions over several dimensions and docs. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
545512847 | https://github.com/pydata/xarray/pull/2922#issuecomment-545512847 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDU0NTUxMjg0Nw== | mathause 10194086 | 2019-10-23T15:55:35Z | 2019-10-23T15:55:35Z | MEMBER |
I agree, requiring valid weights is a sensible choice.
I'm not sure... Assume I want to do a meridional mean and only have data over land; this would then raise an error, which is not what I want. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
545200082 | https://github.com/pydata/xarray/pull/2922#issuecomment-545200082 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDU0NTIwMDA4Mg== | dcherian 2448579 | 2019-10-22T23:35:52Z | 2019-10-22T23:35:52Z | MEMBER |
Can we raise an error instead? It should be easy for the user to do
Should we raise an error here?
I think NaN is fine since that's the result of |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
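As context for the NaN-versus-error discussion above: when every value in the reduction is missing (or all weights are zero), the sum of weights is 0 and the weighted mean reduces to 0/0, which IEEE-754 floating-point arithmetic defines as NaN. A minimal illustration (editor's sketch, not code from the PR):

```python
import numpy as np

# A weighted mean over an all-missing window ends up dividing a zero
# numerator by a zero sum of weights; in IEEE-754 arithmetic 0/0 is NaN,
# so returning NaN falls out "for free" without an explicit check.
with np.errstate(invalid="ignore"):  # silence the 0/0 RuntimeWarning
    result = np.float64(0.0) / np.float64(0.0)

assert np.isnan(result)
```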
543358453 | https://github.com/pydata/xarray/pull/2922#issuecomment-543358453 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDU0MzM1ODQ1Mw== | mathause 10194086 | 2019-10-17T20:56:32Z | 2019-10-17T20:59:08Z | MEMBER | I finally made some time to work on this - although I feel far from finished...
Questions:
* does this implementation look reasonable to you?
* |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
512243216 | https://github.com/pydata/xarray/pull/2922#issuecomment-512243216 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDUxMjI0MzIxNg== | mathause 10194086 | 2019-07-17T12:59:16Z | 2019-07-17T12:59:16Z | MEMBER | Thanks, I am still very interested to get this in. I don't think I'll manage before my holidays, though. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
511002355 | https://github.com/pydata/xarray/pull/2922#issuecomment-511002355 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDUxMTAwMjM1NQ== | rabernat 1197350 | 2019-07-12T19:16:16Z | 2019-07-12T19:16:16Z | MEMBER | Hi @mathause - We really appreciate your contribution. Sorry your PR has stalled! Do you think you can respond to @fujiisoup's review and add documentation? Then we can get this merged. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 | |
488031173 | https://github.com/pydata/xarray/pull/2922#issuecomment-488031173 | https://api.github.com/repos/pydata/xarray/issues/2922 | MDEyOklzc3VlQ29tbWVudDQ4ODAzMTE3Mw== | mathause 10194086 | 2019-04-30T16:57:05Z | 2019-04-30T16:57:05Z | MEMBER | I updated the PR
* added a weighted …

Before I continue, it would be nice to get some feedback.
As mentioned by @aaronspring, esmlab already implemented weighted statistic functions. Similarly, statsmodels implements them for 1D data without handling of NaNs (docs / code). Thus it should be feasible to implement further statistics here (weighted … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature/weighted 437765416 |
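One of the "further statistics" floated in the last comment is a weighted standard deviation. A minimal sketch of the population form, sqrt(sum(w * (x - mean)^2) / sum(w)), assuming NaN-free input (hypothetical helper added by the editor; the merged PR itself only adds weighted sum and mean):

```python
import numpy as np

def weighted_std(data, weights, axis=None):
    """Population-form weighted standard deviation (illustrative only).

    Assumes ``data`` contains no NaNs; a real implementation would
    combine this with the NaN-masking used by the weighted mean.
    """
    w = np.asarray(weights, dtype=float)
    mean = (data * w).sum(axis=axis) / w.sum(axis=axis)
    var = (w * (data - mean) ** 2).sum(axis=axis) / w.sum(axis=axis)
    return np.sqrt(var)

# With equal weights this reduces to the ordinary population std.
weighted_std(np.array([1.0, 2.0, 3.0]), np.array([1.0, 1.0, 1.0]))
```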