
issue_comments: 135510417


html_url: https://github.com/pydata/xarray/issues/516#issuecomment-135510417
issue_url: https://api.github.com/repos/pydata/xarray/issues/516
id: 135510417
node_id: MDEyOklzc3VlQ29tbWVudDEzNTUxMDQxNw==
user: 3688009
created_at: 2015-08-27T18:11:43Z
updated_at: 2015-08-27T18:11:43Z
author_association: NONE

Using `ncdump -hs`, I found the chunk sizes of the files to be `_ChunkSizes = 1, 90, 180 ;`.

Using that, it took even more time:

```
datal = xray.open_mfdataset(filename, chunks={'time': 1, 'lat': 90, 'lon': 180})

In [7]: %time datal.tasmax[:, 360, 720].values
CPU times: user 3min 3s, sys: 59.4 s, total: 4min 3s
Wall time: 12min 8s
```

I should say that I am using open-source data, and therefore do not control how the original data is chunked. This is also using `open_mfdataset` on around 100 files.
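The pattern above can be reproduced at small scale. The sketch below is a minimal, hypothetical setup (two tiny local files stand in for the ~100 remote ones, and the variable/coordinate names mirror the comment): it writes two netCDF files, opens them with `open_mfdataset` using dask chunks aligned to the on-disk chunking, and pulls a single grid point's time series. It assumes xarray, dask, and a netCDF backend are installed; note that modern xarray imports as `xarray`, not the old `xray` name used above.

```python
import numpy as np
import xarray as xr

# Build two tiny netCDF files standing in for the many remote files.
paths = []
for i, t in enumerate([np.arange(0, 3), np.arange(3, 6)]):
    ds = xr.Dataset(
        {"tasmax": (("time", "lat", "lon"), np.random.rand(len(t), 4, 8))},
        coords={
            "time": t,
            "lat": np.linspace(-90, 90, 4),
            "lon": np.linspace(0, 360, 8, endpoint=False),
        },
    )
    path = f"part_{i}.nc"  # illustrative filenames
    ds.to_netcdf(path)
    paths.append(path)

# Align dask chunks with the on-disk chunking reported by `ncdump -hs`
# (here one time step with the full lat/lon plane, analogous to
# _ChunkSizes = 1, 90, 180 in the real data).
combined = xr.open_mfdataset(paths, chunks={"time": 1, "lat": 4, "lon": 8})

# Extracting a single grid point still forces a read of every chunk
# containing it: one small read per time step per file, which is why a
# point extraction across ~100 files can be so slow.
point = combined.tasmax[:, 2, 5].values
```

Because each on-disk chunk spans the whole lat/lon plane, there is no chunk layout that makes a single-point time series cheap; every time step's chunk must be touched regardless.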
