issue_comments: 234022793


html_url: https://github.com/pydata/xarray/issues/912#issuecomment-234022793
issue_url: https://api.github.com/repos/pydata/xarray/issues/912
id: 234022793
node_id: MDEyOklzc3VlQ29tbWVudDIzNDAyMjc5Mw==
user: 7504461
created_at: 2016-07-20T17:36:02Z
updated_at: 2016-07-20T17:36:17Z
author_association: NONE

Thanks, @shoyer !

Setting smaller chunks helps; my issue, however, is the way back: actually computing the result.

This is fine:

%time conc_avg = ds.conc_profs.chunk({'burst': 10}).mean(('z','duration'))

CPU times: user 24 ms, sys: 0 ns, total: 24 ms Wall time: 23.8 ms

But this:

%time result = conc_avg.load()

takes an insane amount of time, which intrigues me because the result is just a vector with 2845 points.

Is there another way to tackle this without dask, like using a for-loop?

If dask is the way to go, what would be the quickest way to convert to a numpy array?
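As a rough sketch of the for-loop alternative asked about above, here is the same reduction done in plain numpy, using synthetic data in place of `ds.conc_profs` (the dimension order `(burst, z, duration)` and the `z`/`duration` sizes are assumptions; only the 2845 bursts come from the comment):

```python
import numpy as np

# Synthetic stand-in for ds.conc_profs; shapes other than the
# 2845 bursts are made up for illustration.
rng = np.random.default_rng(0)
conc_profs = rng.random((2845, 8, 16))  # (burst, z, duration)

# For-loop approach: average over z and duration one burst at a time.
conc_avg = np.empty(conc_profs.shape[0])
for i in range(conc_profs.shape[0]):
    conc_avg[i] = conc_profs[i].mean()

# Vectorized equivalent of .mean(('z', 'duration')) on the same array.
assert np.allclose(conc_avg, conc_profs.mean(axis=(1, 2)))
```

On the second question: for a dask-backed xarray object, accessing `conc_avg.values` (or calling `np.asarray(conc_avg)`) triggers the same computation as `.load()`, so it won't be faster by itself; the cost is in executing the task graph, not in the conversion.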
