issue_comments: 263647433

html_url: https://github.com/pydata/xarray/issues/463#issuecomment-263647433
issue_url: https://api.github.com/repos/pydata/xarray/issues/463
id: 263647433
node_id: MDEyOklzc3VlQ29tbWVudDI2MzY0NzQzMw==
user: 11411331
created_at: 2016-11-29T17:59:20Z
updated_at: 2016-11-29T17:59:20Z
author_association: CONTRIBUTOR
issue: 94328498

Sorry for the delay... I saw the reference and then needed to find some time to read back over the issues to get some context.

You are correct. The PyReshaper was designed to address this type of problem, though not this exact issue with xarray and dask. It's a pretty common problem, and it's the reason the CESM developers are moving to long-term archival of time-series files ONLY. (In other words, the PyReshaper is being incorporated into the automated CESM run process.) ...Of course, one could argue that this step shouldn't be necessary at all; with some clever I/O, the models themselves could write time-series directly.
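For readers unfamiliar with the slice-to-series problem, here is a minimal sketch of the conversion using xarray; the file pattern and the one-file-per-variable layout are assumptions for illustration, not the PyReshaper's actual implementation:

```python
# Hypothetical slice-to-series conversion with xarray + dask.
# Input: many time-slice files (e.g. "case.h0.*.nc"), each holding ALL
# variables at a single time step. Output: one time-series file per variable.
import xarray as xr

# Open all slices lazily as one dataset (dask-backed; nothing is read yet).
ds = xr.open_mfdataset("case.h0.*.nc", combine="by_coords")

for name in ds.data_vars:
    # Stream each variable (with its coordinates) into its own file.
    ds[name].to_dataset(name=name).to_netcdf(f"{name}.series.nc")
```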

The PyReshaper explicitly opens each time-slice file just before reading it and closes it immediately afterward. So, if fully scaled (i.e., 1 MPI process per output file), you only ever have 2 files open at a time per process. In this particular operation, the overhead associated with opening/closing the input files is negligible compared to the total read/write time.
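As a rough illustration of that pattern (not the PyReshaper's code; the file names and variable are hypothetical, and a real variable would also carry spatial dimensions):

```python
# Sketch of the explicit open/read/close loop with netCDF4. At most two
# files are open at once: the current input slice and the one output file
# (under full scaling, each MPI process would own one such output file).
import glob
from netCDF4 import Dataset

out = Dataset("TS.series.nc", "w")
out.createDimension("time", None)                 # unlimited time axis
tvar = out.createVariable("TS", "f4", ("time",))  # assume scalar per slice

for i, path in enumerate(sorted(glob.glob("case.h0.*.nc"))):
    src = Dataset(path, "r")              # open the time-slice file...
    tvar[i] = src.variables["TS"][0]      # ...read just this variable...
    src.close()                           # ...and close it immediately

out.close()
```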

So, anyway, the PyReshaper (https://github.com/NCAR/PyReshaper) can definitely help...though I consider it a stop-gap for the moment. I'm happy to help people figure out how to get it to work for your problems, if that's a path you want to consider.
