issue_comments
Rows where issue = 68759727 and user = 1217238, sorted by updated_at descending.
id | html_url | user | created_at | updated_at | author_association | body
---|---|---|---|---|---|---
94085685 | https://github.com/pydata/xarray/issues/392#issuecomment-94085685 | shoyer (1217238) | 2015-04-17T22:05:10Z | 2015-04-17T22:05:10Z | MEMBER | Yeah, like I said in the other issue I don't think this is a blocker (we can add a disclaimer to the docs).
94079544 | https://github.com/pydata/xarray/issues/392#issuecomment-94079544 | shoyer (1217238) | 2015-04-17T21:31:56Z | 2015-04-17T21:31:56Z | MEMBER | The good news about that timing info is that dask is still much faster for calculating the graph than doing the actual computation. But it's still not ideal from an interactivity perspective.
94079391 | https://github.com/pydata/xarray/issues/392#issuecomment-94079391 | shoyer (1217238) | 2015-04-17T21:30:47Z | 2015-04-17T21:30:47Z | MEMBER | Here's the timing info (reproduced in the block below the table). Blocks shape is

All three comments belong to the issue "Non-aggregating grouped operations on dask arrays are painfully slow to construct" (issue 68759727, issue_url https://api.github.com/repos/pydata/xarray/issues/392), have all-zero reactions, and have no performed_via_github_app value. Their node_ids are MDEyOklzc3VlQ29tbWVudDk0MDg1Njg1, MDEyOklzc3VlQ29tbWVudDk0MDc5NTQ0, and MDEyOklzc3VlQ29tbWVudDk0MDc5Mzkx.

Timing info from comment 94079391:

```
%time res = ds.t2m.groupby('time.month').mean('time').sum()
CPU times: user 133 ms, sys: 6.39 ms, total: 140 ms
Wall time: 145 ms

%time res.load_data()
CPU times: user 2min 47s, sys: 1min, total: 3min 48s
Wall time: 1min 19s

%time res = ds.t2m.groupby('time.month').apply(lambda x: x - x.mean()).sum()
CPU times: user 49.1 s, sys: 6.39 s, total: 55.5 s
Wall time: 55.1 s

%time res.load_data()
CPU times: user 6min 17s, sys: 2min 20s, total: 8min 38s
Wall time: 3min 25s
```
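For context, here is a minimal sketch of the kind of computation being timed above, assuming a dask-backed dataset with a `t2m` variable. The synthetic data, dimension names, and chunk size are illustrative, not taken from the issue; note also that the 2015-era `groupby(...).apply(...)` and `load_data()` calls in the comment were later renamed to `groupby(...).map(...)` and `load()` in xarray.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for the dataset in the issue: daily 2 m temperature
# on a small lat/lon grid, backed by dask via chunking along time.
time = pd.date_range("2000-01-01", periods=4 * 365, freq="D")
ds = xr.Dataset(
    {"t2m": (("time", "lat", "lon"), np.random.rand(time.size, 18, 36))},
    coords={"time": time},
).chunk({"time": 100})

# Aggregating grouped operation: each group reduces to a single result,
# so the dask graph is cheap to build (the fast "140 ms" case above).
res_agg = ds.t2m.groupby("time.month").mean("time").sum()

# Non-aggregating grouped operation: subtracting the monthly mean keeps
# the time dimension, so the graph must track every chunk within every
# group, which is what makes construction slow (the "55 s" case above).
res_anom = ds.t2m.groupby("time.month").map(lambda x: x - x.mean()).sum()

res_anom.load()  # triggers the actual computation
```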
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
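As a usage sketch, the row selection shown at the top of this page corresponds to a query along these lines against the schema above (the `github.db` filename is a hypothetical stand-in for the actual SQLite file):

```python
import sqlite3

# Hypothetical database filename; substitute the real SQLite file.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 68759727 AND "user" = 1217238
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, created, updated, assoc, body in rows:
    print(comment_id, updated, assoc, body[:60])
conn.close()
```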