issue_comments: 365410944

html_url: https://github.com/pydata/xarray/issues/1536#issuecomment-365410944
issue_url: https://api.github.com/repos/pydata/xarray/issues/1536
id: 365410944
node_id: MDEyOklzc3VlQ29tbWVudDM2NTQxMDk0NA==
user: 6213168
created_at: 2018-02-13T21:31:15Z
updated_at: 2018-02-13T21:32:43Z
author_association: MEMBER

body:

@shoyer I'm starting to work on this.

I'm not sure I understood your latest comment: are you implying that to_hdf5 should internally use the h5netcdf module? I understand the rationale, but it sounds a bit counter-intuitive to me.

Also, to allow for non-zlib compression we need to tap either into the new h5netcdf API or into h5py directly (see the sketch below), so I'm afraid to_hdf5 can't be a simple wrapper around to_netcdf.
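To make the compression point concrete, here is a minimal, hypothetical sketch of the gap at the h5py level; the file name, array, and filter choices are illustrative only and not part of any existing xarray API:

```python
# Illustrative only: netCDF-style encodings expose zlib/gzip, while h5py
# accepts any registered HDF5 filter (e.g. LZF).
import h5py
import numpy as np

data = np.random.rand(100, 100)

with h5py.File("example.h5", "w") as f:
    # gzip/zlib: roughly what to_netcdf already offers via encoding={"zlib": True}
    f.create_dataset("var_gzip", data=data, compression="gzip", compression_opts=4)
    # LZF: available through h5py, but not through the classic netCDF encodings
    f.create_dataset("var_lzf", data=data, compression="lzf")
```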

Could you help me compile a shopping list? (A rough sketch of the engine-selection item follows this list.)

- new method Dataset.to_hdf5, starting as a copy-paste of to_netcdf, including the backend functions underneath
- new unit tests, starting as a copy-paste of all unit tests for to_netcdf
- change open_dataset and open_mfdataset:
  - add a new possible value for the engine field, "hdf5"
  - if engine is None and the file name terminates with .nc, use the current algorithm to choose the default engine
  - if engine is None and the file name terminates with .h5, use h5py
  - if engine is not None, ignore the file extension
- add to the high level documentation and tutorials
- Other?
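For the open_dataset / open_mfdataset item, here is a rough, hypothetical sketch of the default-engine rule described above; `_choose_engine` is a placeholder name for illustration, not existing xarray code:

```python
import os

def _choose_engine(filename_or_obj, engine=None):
    # Hypothetical helper mirroring the rules listed above; not actual xarray code.
    if engine is not None:
        # An explicit engine always wins; the file extension is ignored.
        return engine
    if isinstance(filename_or_obj, str):
        ext = os.path.splitext(filename_or_obj)[1]
        if ext == ".h5":
            # New behaviour: .h5 files would default to the new "hdf5" engine
            # backed by h5py.
            return "hdf5"
    # Existing behaviour: fall through to the current default-engine algorithm
    # (netcdf4 / scipy / h5netcdf, depending on what is installed).
    return None
```

Dataset.to_hdf5 itself would then be the write-side counterpart, starting life as the copy-paste of to_netcdf described in the first item.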

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 253476466