html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue https://github.com/pydata/xarray/issues/2040#issuecomment-379418732,https://api.github.com/repos/pydata/xarray/issues/2040,379418732,MDEyOklzc3VlQ29tbWVudDM3OTQxODczMg==,1217238,2018-04-07T00:32:46Z,2018-04-07T00:32:46Z,MEMBER,"One potentially option would be to make choose the default behavior based on the string data type: - Fixed-width unicode arrays (`np.unicode_`) get written as fixed-width strings with a stored encoding. - Object arrays full of Python strings (`np.object_`) get written as variable width strings. Note that fixed-width unicode in NumPy (fixed number of unicode characters) does *not* correspond to the same memory layout as fixed width strings in HDF5 (fixed length in bytes), but maybe it's close enough. The main reason why we don't do any special handling for object arrays currently in xarray is that our conventions coding/decoding system has no way of marking variable length string arrays. We should probably handle this by making a custom dtype like h5py that marks variables length strings using dtype metadata: http://docs.h5py.org/en/latest/special.html#variable-length-strings","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,311578894 https://github.com/pydata/xarray/issues/2040#issuecomment-379294800,https://api.github.com/repos/pydata/xarray/issues/2040,379294800,MDEyOklzc3VlQ29tbWVudDM3OTI5NDgwMA==,1217238,2018-04-06T15:47:24Z,2018-04-06T15:47:24Z,MEMBER,"The main reason for preferring variable length strings was that netCDF4-python always properly decoded them as unicode strings, even on Python 3. Basically, it was required to properly round-trip strings to a netCDF file on Python 3. However, this is no longer the case, now that we specify an encoding when writing fixed length strings (https://github.com/pydata/xarray/pull/1648). So we could potentially revisit the default behavior. I'll admit I'm also a little surprised by how large the storage overhead turns out to be for variable length datatypes. The HDF5 docs claim it's [32 bytes per element](https://support.hdfgroup.org/HDF5/doc/TechNotes/VLTypes.html), which would be about [10 MB](https://www.google.com/search?q=25+*+32+*+12000+bytes&rlz=1C5CHFA_enUS784US784&oq=25+*+32+*+12000+bytes&aqs=chrome..69i57j6.51320j0j7&sourceid=chrome&ie=UTF-8) or so for your dataset. And apparently it interacts poorly with compression, too.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,311578894