

issue_comments: 431657200


html_url: https://github.com/pydata/xarray/issues/2499#issuecomment-431657200
issue_url: https://api.github.com/repos/pydata/xarray/issues/2499
id: 431657200
node_id: MDEyOklzc3VlQ29tbWVudDQzMTY1NzIwMA==
user: 12229877
created_at: 2018-10-21T10:30:23Z
updated_at: 2018-10-21T10:30:23Z
author_association: CONTRIBUTOR
body:

dataset = xr.open_dataset(netcdf_precip, chunks={'lat': 1})

This makes me really suspicious: lat=1 is a very small chunk size, and the data is completely unchunked in time and lon. Without knowing anything else, I'd try chunks=dict(lat=200, lon=200) or higher depending on the time dim; Dask is most efficient with chunks of around 10MB for most workloads.
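
For illustration, a rough sketch of what that could look like (netcdf_precip is the path from the snippet above; the 'precip' variable name is just an assumption):

import xarray as xr

# Larger lat/lon chunks instead of lat=1; tune the numbers so one chunk is roughly 10MB.
dataset = xr.open_dataset(netcdf_precip, chunks={'lat': 200, 'lon': 200})

precip = dataset['precip']       # variable name is an assumption
print(precip.chunks)             # chunk lengths along each dimension
# bytes per chunk ~= prod(chunk shape) * precip.dtype.itemsize; aim for roughly 10MB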

This also depends on the data layout on disk. Can you share repr(xr.open_dataset(netcdf_precip))? What does ncdump say?
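
Something along these lines would show the on-disk layout (filename and variable name assumed, as above):

import xarray as xr

ds = xr.open_dataset(netcdf_precip)   # lazy open, metadata only
print(ds)                             # the repr asked for above
print(ds['precip'].encoding)          # on-disk details such as 'chunksizes' and compression, if recorded
# from a shell, `ncdump -hs file.nc` prints the header plus storage info like _ChunkSizes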

reactions: none
issue: 372244156