issue_comments: 686540299
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2697#issuecomment-686540299 | https://api.github.com/repos/pydata/xarray/issues/2697 | 686540299 | MDEyOklzc3VlQ29tbWVudDY4NjU0MDI5OQ== | 81219 | 2020-09-03T14:42:19Z | 2020-09-03T14:42:19Z | CONTRIBUTOR | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 401874795 |

body:

I'd like to revive this issue.

We're increasingly using NcML aggregations within our THREDDS server to create "logical" datasets. This allows us to fix some non-CF-conforming metadata fields without changing the files on disk (which would break syncing with ESGF nodes). More importantly, by aggregating multiple time periods, variables, and realizations, we're able to create catalog entries for simulations instead of individual files, which we expect will greatly facilitate parsing catalog search results. We'd like to offer the same aggregation functionality outside of the THREDDS server.

Ideally, this would be supported right from the netcdf-c library (see https://github.com/Unidata/netcdf-c/issues/1478).

@andersy005 In terms of API, I think the need is not so much to create or modify NcML files, but rather to return an `xarray.Dataset` built from an NcML aggregation.

The THREDDS repo contains a number of unit tests that could be emulated to steer the Python implementation. My understanding is that getting this done could involve a fair amount of work, so I'd like to see who's interested in collaborating on this and maybe schedule a meeting to plan work for this year or the next.
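
As a rough illustration of the API shape suggested in the comment above (returning an `xarray.Dataset` from an NcML-style aggregation), here is a minimal sketch. The function name `open_ncml_like_aggregation`, the example file pattern, and the use of `xarray.open_mfdataset` as a stand-in for actual NcML parsing are all assumptions for illustration, not an existing API.

```python
# Hypothetical sketch only: a function that takes an aggregation description and
# hands back an xarray.Dataset. Real NcML parsing is out of scope here; this
# stand-in mimics a simple "joinExisting"-style aggregation over an existing
# dimension using xarray's multi-file machinery.
import glob

import xarray as xr


def open_ncml_like_aggregation(file_pattern, concat_dim="time"):
    """Open files matching ``file_pattern`` as one logical dataset.

    A real implementation would read the <aggregation> element of an NcML
    document to discover member files, the aggregation type, and any metadata
    overrides; this placeholder only concatenates files along ``concat_dim``.
    """
    paths = sorted(glob.glob(file_pattern))
    # combine="nested" with a single concat_dim mirrors NcML's joinExisting:
    # datasets are stacked along an existing coordinate such as time.
    return xr.open_mfdataset(paths, combine="nested", concat_dim=concat_dim)


if __name__ == "__main__":
    # Placeholder path for illustration; point this at real per-period files.
    ds = open_ncml_like_aggregation("/data/simulation/tas_*.nc")
    print(ds)
```

A fuller implementation along these lines would also apply the metadata fixes declared in the NcML document (renamed or corrected attributes, added coordinates) on top of the aggregated dataset before returning it.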