issue_comments
5 rows where author_association = "MEMBER" and issue = 785329941 sorted by updated_at descending
---
id: 849813462
html_url: https://github.com/pydata/xarray/issues/4804#issuecomment-849813462
issue_url: https://api.github.com/repos/pydata/xarray/issues/4804
node_id: MDEyOklzc3VlQ29tbWVudDg0OTgxMzQ2Mg==
user: dcherian 2448579
created_at: 2021-05-27T17:33:45Z
updated_at: 2021-05-27T17:33:45Z
author_association: MEMBER
reactions: {"total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Improve performance of xarray.corr() on big datasets 785329941
body:

Reopening for the suggestions in https://github.com/pydata/xarray/issues/4804#issuecomment-760114285

cc @AndrewWilliams3142 if you're looking for a small followup PR with potentially large impact :)

---
id: 767138669
html_url: https://github.com/pydata/xarray/issues/4804#issuecomment-767138669
issue_url: https://api.github.com/repos/pydata/xarray/issues/4804
node_id: MDEyOklzc3VlQ29tbWVudDc2NzEzODY2OQ==
user: dcherian 2448579
created_at: 2021-01-25T21:57:03Z
updated_at: 2021-01-25T21:57:03Z
author_association: MEMBER
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Improve performance of xarray.corr() on big datasets 785329941
body:

@kathoef we'd be happy to merge a PR with some of the suggestions proposed here.

---
id: 759780514
html_url: https://github.com/pydata/xarray/issues/4804#issuecomment-759780514
issue_url: https://api.github.com/repos/pydata/xarray/issues/4804
node_id: MDEyOklzc3VlQ29tbWVudDc1OTc4MDUxNA==
user: mathause 10194086
created_at: 2021-01-13T22:32:47Z
updated_at: 2021-01-14T01:15:02Z
author_association: MEMBER
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Improve performance of xarray.corr() on big datasets 785329941
body:

@aaronspring I had a quick look at your version - do you have an idea why it is faster? Does yours also work for dask arrays?

---
id: 759795213
html_url: https://github.com/pydata/xarray/issues/4804#issuecomment-759795213
issue_url: https://api.github.com/repos/pydata/xarray/issues/4804
node_id: MDEyOklzc3VlQ29tbWVudDc1OTc5NTIxMw==
user: mathause 10194086
created_at: 2021-01-13T22:52:19Z
updated_at: 2021-01-13T22:52:19Z
author_association: MEMBER
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Improve performance of xarray.corr() on big datasets 785329941
body:

Another possibility is to replace with

---
id: 759745055
html_url: https://github.com/pydata/xarray/issues/4804#issuecomment-759745055
issue_url: https://api.github.com/repos/pydata/xarray/issues/4804
node_id: MDEyOklzc3VlQ29tbWVudDc1OTc0NTA1NQ==
user: mathause 10194086
created_at: 2021-01-13T21:17:34Z
updated_at: 2021-01-13T21:17:34Z
author_association: MEMBER
reactions: {"total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0}
issue: Improve performance of xarray.corr() on big datasets 785329941
body:

Yes

Other improvements:

* I am not sure if

```python
if skipna:
    # 2. Ignore the nans
    valid_values = da_a.notnull() & da_b.notnull()
else:
    # shortcut for skipna=False
    # da_a and da_b are aligned, so they have the same dims and shape
    axis = da_a.get_axis_num(dim)
    valid_count = np.take(da_a.shape, axis).prod() - ddof
```
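The `skipna=False` shortcut in the last comment can be illustrated with plain NumPy; the following is a minimal sketch, not xarray's implementation. The array shape, the `dim_axes` tuple (standing in for `da_a.get_axis_num(dim)`), and the `ddof` value are made-up examples.

```python
import numpy as np

# Stand-in for one of the aligned DataArrays; any NaN-free array works here.
da_a = np.arange(60, dtype=float).reshape(4, 3, 5)
dim_axes = (1, 2)  # axes being reduced over, analogous to da_a.get_axis_num(dim)
ddof = 1

# When skipna=False no notnull() mask is needed: every element is treated as
# valid, so the count is just the product of the reduced axes' lengths.
# This is O(1) in the array size, which is the point of the shortcut.
valid_count = np.take(da_a.shape, dim_axes).prod() - ddof
print(valid_count)  # → 14  (3 * 5 - 1)
```

For NaN-free data this matches what a `notnull()`-mask-and-sum path would count, without touching the array's contents.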
CREATE TABLE [issue_comments] (
[html_url] TEXT,
[issue_url] TEXT,
[id] INTEGER PRIMARY KEY,
[node_id] TEXT,
[user] INTEGER REFERENCES [users]([id]),
[created_at] TEXT,
[updated_at] TEXT,
[author_association] TEXT,
[body] TEXT,
[reactions] TEXT,
[performed_via_github_app] TEXT,
[issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
ON [issue_comments] ([user]);
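The schema above can be exercised with Python's built-in sqlite3 module. This is a sketch of the query behind this page; the in-memory database and the single inserted row are made up for illustration, and the REFERENCES clauses are dropped because the users and issues tables are not created here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Recreate the table and indexes from the schema above (minus the
# foreign-key REFERENCES, since users/issues don't exist in this sketch).
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One sample row, with values taken from the first comment on this page.
conn.execute(
    "INSERT INTO issue_comments"
    " (id, user, created_at, updated_at, author_association, issue)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    (849813462, 2448579, "2021-05-27T17:33:45Z", "2021-05-27T17:33:45Z",
     "MEMBER", 785329941),
)

# The filter this page describes: MEMBER comments on issue 785329941,
# sorted by updated_at descending.
rows = conn.execute(
    "SELECT id, user, updated_at FROM issue_comments"
    " WHERE author_association = 'MEMBER' AND issue = ?"
    " ORDER BY updated_at DESC",
    (785329941,),
).fetchall()
print(rows)  # → [(849813462, 2448579, '2021-05-27T17:33:45Z')]
```

The `idx_issue_comments_issue` index lets the `issue = ?` filter avoid a full table scan, which is why the schema creates it.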