html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/548#issuecomment-134279075,https://api.github.com/repos/pydata/xarray/issues/548,134279075,MDEyOklzc3VlQ29tbWVudDEzNDI3OTA3NQ==,1217238,2015-08-24T16:18:00Z,2015-08-24T16:18:00Z,MEMBER,"This is actually already supported, though poorly documented (so it's basically unknown).
We seem to have some sort of bug in our documentation generation for recent versions, but in the v0.5.1 IO docs, you can see the `encoding` attribute at the end of the section on writing netCDFs:
http://xray.readthedocs.org/en/v0.5.1/io.html#netcdf
The way this works is that `encoding` on each data array stores a dictionary of options used when serializing that array to disk. It supports most of the options in netCDF4-python's `createVariable` method, including `chunksizes`, `zlib`, `scale_factor`, `add_offset`, `_FillValue` and `dtype`. This metadata is automatically filled in when reading a file from disk, which means that in principle xray should roundtrip the encoding faithfully.
Because encoding is read in automatically when files are opened, invalid encoding options are currently ignored silently when saving a file to disk. This makes the current API not very user friendly.
So I'd like to extend this into a keyword argument `encoding` for the `to_netcdf` method. The keyword argument would expect a dictionary mapping variable names to dictionaries of encoding parameters, and errors would be raised for invalid encoding options. Here's my branch for that feature:
https://github.com/shoyer/xray/tree/encoding-error-handling
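
A minimal sketch of the error handling proposed above, assuming a simple whitelist of encoding keys (the function name and the exact set of valid keys here are illustrative, not xray's actual implementation):

```python
# Hypothetical sketch: validate per-variable encoding dicts before
# handing them to the netCDF4 backend. The accepted keys below mirror
# the netCDF4-python createVariable options mentioned above, but the
# list is illustrative rather than exhaustive.
VALID_ENCODING_KEYS = {'chunksizes', 'zlib', 'complevel', 'shuffle',
                       'scale_factor', 'add_offset', '_FillValue', 'dtype'}

def validate_encoding(encoding):
    # encoding maps variable names to dicts of encoding parameters,
    # e.g. {'temperature': {'zlib': True, 'complevel': 4}}
    for var_name, var_encoding in encoding.items():
        invalid = set(var_encoding) - VALID_ENCODING_KEYS
        if invalid:
            raise ValueError(
                'unexpected encoding parameters for variable %r: %r'
                % (var_name, sorted(invalid)))
    return encoding

validate_encoding({'temperature': {'zlib': True, 'complevel': 4}})
```

With something like this in place, a typo such as `{'temperature': {'zlip': True}}` would raise immediately instead of being silently dropped on write.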
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,102703065
https://github.com/pydata/xarray/issues/548#issuecomment-134256961,https://api.github.com/repos/pydata/xarray/issues/548,134256961,MDEyOklzc3VlQ29tbWVudDEzNDI1Njk2MQ==,2443309,2015-08-24T15:47:15Z,2015-08-24T15:47:15Z,MEMBER,"I don't see any reason why we couldn't support this. The difficulty is that the implementation will be different (or not possible) for different backends.
netCDF4 applies compression at the Variable level, so we would have to think about how to expose this in our `Dataset.to_netcdf` method. Would we end up setting the compression level / type on each DataArray, or would we add an argument to the `to_netcdf` method?
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,102703065
https://github.com/pydata/xarray/issues/548#issuecomment-134220841,https://api.github.com/repos/pydata/xarray/issues/548,134220841,MDEyOklzc3VlQ29tbWVudDEzNDIyMDg0MQ==,5356122,2015-08-24T14:16:32Z,2015-08-24T14:16:32Z,MEMBER,"This seems useful. xray uses the [netCDF4 library](http://unidata.github.io/netcdf4-python/) here, and it supports compression. In the meantime, you could always add a post-processing step from the command line: http://www.unidata.ucar.edu/blogs/developer/en/entry/netcdf_compression.
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,102703065