issue_comments
9 rows where issue = 785329941 sorted by updated_at descending
issue: Improve performance of xarray.corr() on big datasets (9 comments)
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
849813462 | https://github.com/pydata/xarray/issues/4804#issuecomment-849813462 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDg0OTgxMzQ2Mg== | dcherian 2448579 | 2021-05-27T17:33:45Z | 2021-05-27T17:33:45Z | MEMBER | Reopening for the suggestions in https://github.com/pydata/xarray/issues/4804#issuecomment-760114285 cc @AndrewWilliams3142 if you're looking for a small followup PR with potentially large impact :) |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
767138669 | https://github.com/pydata/xarray/issues/4804#issuecomment-767138669 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc2NzEzODY2OQ== | dcherian 2448579 | 2021-01-25T21:57:03Z | 2021-01-25T21:57:03Z | MEMBER | @kathoef we'd be happy to merge a PR with some of the suggestions proposed here. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
760114285 | https://github.com/pydata/xarray/issues/4804#issuecomment-760114285 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc2MDExNDI4NQ== | willirath 5700886 | 2021-01-14T10:44:19Z | 2021-01-14T10:44:19Z | CONTRIBUTOR | I'd also add that https://github.com/pydata/xarray/blob/master/xarray/core/computation.py#L1320_L1330 which is essentially |
{ "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
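The comment body above is cut off after "which is essentially"; the linked computation.py lines compute the covariance as an element-wise product followed by .sum(dim). A common optimization of that pattern, and plausibly the suggestion the later "reopening" comment refers to, is to express the contraction with xr.dot, which dask can treat as a single blockwise operation. A minimal self-contained sketch, with all array names, shapes, and the ddof choice invented for illustration:

```python
import numpy as np
import xarray as xr

# toy stand-ins for the demeaned arrays in the linked _cov_corr code
da_a = xr.DataArray(np.random.rand(100, 10), dims=("time", "x"))
da_b = xr.DataArray(np.random.rand(100, 10), dims=("time", "x"))
dim = "time"

demeaned_a = da_a - da_a.mean(dim)
demeaned_b = da_b - da_b.mean(dim)
valid_count = da_a.sizes[dim] - 1  # ddof=1, chosen arbitrarily here

# element-wise multiply-and-sum, as in the linked lines
cov_sum = (demeaned_a * demeaned_b).sum(dim) / valid_count

# the same contraction expressed as a dot product (newer xarray spells the
# keyword `dim` instead of `dims`)
cov_dot = xr.dot(demeaned_a, demeaned_b, dims=dim) / valid_count

np.testing.assert_allclose(cov_sum, cov_dot)
```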
760025539 | https://github.com/pydata/xarray/issues/4804#issuecomment-760025539 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc2MDAyNTUzOQ== | aaronspring 12237157 | 2021-01-14T08:44:22Z | 2021-01-14T08:44:22Z | CONTRIBUTOR | Thanks for the suggestion with xr.align. My speculation is that xs.pearson_r is a bit faster because we first write the whole function in numpy and then pass it through xr.apply_ufunc. I therefore think it only works for xr but not dask.da |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
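The "NumPy kernel wrapped in xr.apply_ufunc" pattern described in the comment above looks roughly like the following; the _pearson_r kernel and the parameter choices here are an illustrative stand-in, not xskillscore's actual implementation:

```python
import numpy as np
import xarray as xr

def _pearson_r(a, b):
    # plain-NumPy kernel: correlate along the last axis (the core dimension)
    a = a - a.mean(axis=-1, keepdims=True)
    b = b - b.mean(axis=-1, keepdims=True)
    num = (a * b).sum(axis=-1)
    den = np.sqrt((a**2).sum(axis=-1) * (b**2).sum(axis=-1))
    return num / den

def pearson_r(da_a, da_b, dim):
    # apply_ufunc moves `dim` to the last axis and broadcasts the rest; with
    # dask="parallelized" the kernel runs block-wise on dask-backed arrays
    # (the core dim must not be split across chunks)
    return xr.apply_ufunc(
        _pearson_r,
        da_a,
        da_b,
        input_core_dims=[[dim], [dim]],
        dask="parallelized",
        output_dtypes=[float],
    )

da_a = xr.DataArray(np.random.rand(50, 8), dims=("time", "x"))
da_b = xr.DataArray(np.random.rand(50, 8), dims=("time", "x"))
print(pearson_r(da_a, da_b, "time"))  # correlation along "time" for each x
```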
759780514 | https://github.com/pydata/xarray/issues/4804#issuecomment-759780514 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc1OTc4MDUxNA== | mathause 10194086 | 2021-01-13T22:32:47Z | 2021-01-14T01:15:02Z | MEMBER | @aaronspring I had a quick look at your version - do you have an idea why it is faster? Does yours also work for dask arrays? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
759795213 | https://github.com/pydata/xarray/issues/4804#issuecomment-759795213 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc1OTc5NTIxMw== | mathause 10194086 | 2021-01-13T22:52:19Z | 2021-01-13T22:52:19Z | MEMBER | Another possibility is to replace with |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
759767957 | https://github.com/pydata/xarray/issues/4804#issuecomment-759767957 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc1OTc2Nzk1Nw== | aaronspring 12237157 | 2021-01-13T22:04:38Z | 2021-01-13T22:04:38Z | CONTRIBUTOR | Your function from the notebook could also easily implement the builtin weighted function |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
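For reference, a minimal sketch of what plugging into xarray's built-in weighted machinery could look like; the weights and the weighted demeaning step are invented for illustration and are not code from the notebook mentioned above:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(12, 10), dims=("month", "x"))
# e.g. weight months by their length (illustrative values only)
weights = xr.DataArray(
    [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], dims="month"
)

# DataArray.weighted() provides weighted reductions such as mean() and sum(),
# so a demeaning step could use a weighted mean instead of a plain one
demeaned = da - da.weighted(weights).mean("month")
print(demeaned.dims)  # ("month", "x")
```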
759766466 | https://github.com/pydata/xarray/issues/4804#issuecomment-759766466 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc1OTc2NjQ2Ng== | aaronspring 12237157 | 2021-01-13T22:01:49Z | 2021-01-13T22:01:49Z | CONTRIBUTOR | We implemented xr.corr as xs.pearson_r in https://xskillscore.readthedocs.io/en/stable/api/xskillscore.pearson_r.html#xskillscore.pearson_r and it’s ~30% faster than xr.corr, see #4768 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 | |
759745055 | https://github.com/pydata/xarray/issues/4804#issuecomment-759745055 | https://api.github.com/repos/pydata/xarray/issues/4804 | MDEyOklzc3VlQ29tbWVudDc1OTc0NTA1NQ== | mathause 10194086 | 2021-01-13T21:17:34Z | 2021-01-13T21:17:34Z | MEMBER | Yes. Other improvements:
* I am not sure if
```python
if skipna:
    # 2. Ignore the nans
    valid_values = da_a.notnull() & da_b.notnull()
else:
    # shortcut for skipna=False
    # da_a and da_b are aligned, so they have the same dims and shape
    axis = da_a.get_axis_num(dim)
    valid_count = np.take(da_a.shape, axis).prod() - ddof
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Improve performance of xarray.corr() on big datasets 785329941 |
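To make the skipna shortcut in the snippet above concrete, a small self-contained sketch of the two ways of obtaining valid_count; the helper name _valid_count and the ddof handling are assumptions, not the actual xarray _cov_corr code:

```python
import numpy as np
import xarray as xr

def _valid_count(da_a, da_b, dim, ddof=0, skipna=True):
    if skipna:
        # count only positions where both arrays are non-NaN
        valid_values = da_a.notnull() & da_b.notnull()
        return valid_values.sum(dim) - ddof
    # shortcut for skipna=False: no NaN handling needed, so the count is just
    # the size of the reduced dimension(s), taken straight from the shape
    axis = da_a.get_axis_num(dim)
    return np.take(da_a.shape, axis).prod() - ddof

da_a = xr.DataArray(np.random.rand(4, 3), dims=("time", "x"))
da_b = xr.DataArray(np.random.rand(4, 3), dims=("time", "x"))

print(_valid_count(da_a, da_b, "time", skipna=True))   # per-x counts (DataArray)
print(_valid_count(da_a, da_b, "time", skipna=False))  # scalar: 4
```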
```sql
CREATE TABLE [issue_comments] (
    [html_url] TEXT,
    [issue_url] TEXT,
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [created_at] TEXT,
    [updated_at] TEXT,
    [author_association] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```