
issue_comments: 439263419


html_url: https://github.com/pydata/xarray/issues/1385#issuecomment-439263419
issue_url: https://api.github.com/repos/pydata/xarray/issues/1385
id: 439263419
node_id: MDEyOklzc3VlQ29tbWVudDQzOTI2MzQxOQ==
user: 1217238
created_at: 2018-11-16T02:45:05Z
updated_at: 2018-11-16T02:45:05Z
author_association: MEMBER

@chuaxr What do you see when you use %prun while opening the dataset? That might point to the bottleneck.
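For reference, %prun is IPython's magic wrapper around the standard-library cProfile module, so the same profile can be captured in a plain script. A minimal sketch (the actual open_dataset call is commented out and replaced by a placeholder workload, since no file is available here):

```python
import cProfile
import io
import pstats

profiler = cProfile.Profile()
profiler.enable()
# ds = xr.open_dataset("file.nc")  # the slow call under investigation
sorted(range(100_000), key=lambda n: -n)  # placeholder workload for the sketch
profiler.disable()

# Equivalent of %prun's output: top entries sorted by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print(report[:300])
```

Sorting by cumulative time tends to surface the decoding machinery (if that is where the time goes) near the top of the listing.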

One way to fix this would be to move our call to decode_cf() in open_dataset() to after applying chunking, i.e., to switch up the order of operations on these lines: https://github.com/pydata/xarray/blob/f547ed0b379ef70a3bda5e77f66de95ec2332ddf/xarray/backends/api.py#L270-L296
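The proposed swap can be sketched with toy stand-ins (hypothetical helpers, not xarray's actual internals; the real change would reorder the calls in open_dataset on the lines linked above):

```python
# Record the order in which the two stages run
calls = []

def decode_cf(ds):
    # Stand-in for CF decoding (units, time coordinates, masks, ...)
    calls.append("decode")
    return ds

def chunk(ds, chunks):
    # Stand-in for wrapping variables in dask arrays
    calls.append("chunk")
    return ds

def open_dataset_current(ds, chunks):
    # Today's order: decode with xarray's lazy array classes, then chunk
    return chunk(decode_cf(ds), chunks)

def open_dataset_proposed(ds, chunks):
    # Proposed order: chunk first, so decoding operates on dask arrays
    return decode_cf(chunk(ds, chunks))

open_dataset_proposed({}, {"time": 100})
print(calls)  # ['chunk', 'decode']
```

The only behavioral difference is which array machinery the decoding step sees: xarray's internal lazy arrays in the current order, dask arrays in the proposed one.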

In practice, this is the difference between using xarray's internal lazy array classes for decoding and using dask for decoding. I would expect to see small differences in performance between these approaches (especially when actually computing data), but for constructing the computation graph I would expect them to have similar performance. It is puzzling that dask is orders of magnitude faster -- that suggests that something else is going wrong in the normal code path for decode_cf(). It would certainly be good to understand this before trying to apply any fixes.
