issue_comments: 489101053

html_url: https://github.com/pydata/xarray/issues/1823#issuecomment-489101053
issue_url: https://api.github.com/repos/pydata/xarray/issues/1823
id: 489101053
node_id: MDEyOklzc3VlQ29tbWVudDQ4OTEwMTA1Mw==
user: 1197350
created_at: 2019-05-03T13:47:12Z
updated_at: 2019-05-03T13:47:12Z
author_association: MEMBER
issue: 288184220

So I think it is quite important to consider this issue together with #2697. An XML specification called NcML already exists that tells software how to combine multiple netCDF files into a single virtual netCDF. We should leverage this existing spec as much as possible.

A realistic use case for me is that I have, say, 1000 files of high-res model output, each with large coordinate variables, all generated from the same model run. For files where we know a priori that certain coordinates (dimension coordinates or otherwise) are identical, we could save a lot of disk reads (the slow part of open_mfdataset) by never reading those coordinates at all. Enabling this would require a pretty low-level change in xarray. For example, we couldn't even rely on open_dataset in its current form to open files, because open_dataset eagerly loads all dimension coordinates into indexes. One way forward might be to create a new Store class.
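As a rough illustration of the goal (a minimal sketch, not existing xarray machinery; the file pattern, coordinate names, and concat dimension are all assumptions for the example), one can approximate the behavior today by dropping the known-identical coordinates at open time, so they are read from only one file:

```python
import glob
import xarray as xr

files = sorted(glob.glob("output.*.nc"))   # hypothetical file pattern
coord_names = ["lon", "lat", "depth"]      # assumed identical across all files

# Read the shared coordinates from the first file only...
first = xr.open_dataset(files[0])

# ...and never read them from the remaining files: drop_variables means
# open_dataset skips those arrays (including eager dimension-coordinate loads).
rest = [xr.open_dataset(f, drop_variables=coord_names) for f in files[1:]]
rest = [ds.assign_coords({c: first[c] for c in coord_names}) for ds in rest]

combined = xr.concat([first] + rest, dim="time")
```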

For a catalog of tricks I use to optimize opening these sorts of big, complex, multi-file datasets (e.g. CMIP), check out https://github.com/pangeo-data/esgf2xarray/blob/master/esgf2zarr/aggregate.py
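One representative trick of that kind (again a hedged sketch with an assumed file pattern and coordinate names, using the standard preprocess hook of open_mfdataset) is to strip the shared coordinates before combining, so they never have to be compared across files, and re-attach them from a single file afterwards:

```python
import xarray as xr

STATIC_COORDS = ["lon", "lat", "depth"]  # assumed identical across files

def drop_static_coords(ds):
    # Strip coordinates known to be identical, so open_mfdataset never
    # has to read or compare them for every file.
    return ds.drop_vars(STATIC_COORDS, errors="ignore")

ds = xr.open_mfdataset(
    "output.*.nc",               # hypothetical file pattern
    preprocess=drop_static_coords,
    combine="nested",
    concat_dim="time",
)

# Re-attach the static coordinates from one representative file.
template = xr.open_dataset("output.0000.nc")  # hypothetical filename
ds = ds.assign_coords({c: template[c] for c in STATIC_COORDS})
```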
