issue_comments: 1284713697
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/7152#issuecomment-1284713697 | https://api.github.com/repos/pydata/xarray/issues/7152 | 1284713697 | IC_kwDOAMm_X85Mkyzh | 73678798 | 2022-10-19T23:52:47Z | 2022-10-19T23:52:47Z | CONTRIBUTOR | I've merged the cumulative and reduction files into generate_aggregations.py and _aggregations.py. This uses the original version of the reductions, with an additional statement in the Dataset methods that adds the original coordinates back in. Using apply_ufunc with np.cumsum/np.cumprod has some issues: they only compute the cumulative along a single axis, so it is necessary to iterate over each dimension. This makes it slower than the original functions and also causes some problems with the groupby method. Happy for any input on how the apply_ufunc approach might be made usable, or on ways to change the current method. I'm getting a few issues I don't quite understand: - When running pytest on my local repository I get no errors, but the checks here are failing with a NotImplementedError. - Black is having an issue with some of the strings in generate_aggregations: it says it cannot parse what should be valid code. Thanks! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 1403614394 |
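The limitation the comment describes — that `np.cumsum` only operates along one axis, so an `apply_ufunc`-based cumulative must loop over the requested dimensions — can be sketched roughly as follows. This is a minimal illustration of that approach, not the PR's actual code; the helper name `cumsum_via_apply_ufunc` and the example `DataArray` are hypothetical.

```python
import numpy as np
import xarray as xr


def cumsum_via_apply_ufunc(obj, dims):
    # np.cumsum works along a single axis, so each requested dimension
    # needs its own apply_ufunc call -- the per-dimension loop the
    # comment identifies as a source of slowness.
    result = obj
    for dim in dims:
        result = xr.apply_ufunc(
            np.cumsum,
            result,
            input_core_dims=[[dim]],
            output_core_dims=[[dim]],
            # apply_ufunc moves core dims to the last axis
            kwargs={"axis": -1},
        )
    return result


da = xr.DataArray(np.ones((2, 3)), dims=("x", "y"))
out = cumsum_via_apply_ufunc(da, ["x"])
```

Note that `apply_ufunc` moves each core dimension to the end of the result, so dimension order may differ from the input, which is one reason the Dataset methods mentioned above need extra handling to restore coordinates.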