issue_comments: 379418732
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/2040#issuecomment-379418732 | https://api.github.com/repos/pydata/xarray/issues/2040 | 379418732 | MDEyOklzc3VlQ29tbWVudDM3OTQxODczMg== | 1217238 | 2018-04-07T00:32:46Z | 2018-04-07T00:32:46Z | MEMBER | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 311578894 |

body:

One potential option would be to choose the default behavior based on the string data type:

- Fixed-width unicode arrays (…)

Note that fixed-width unicode in NumPy (a fixed number of unicode characters) does not correspond to the same memory layout as fixed-width strings in HDF5 (a fixed length in bytes), but maybe it's close enough.

The main reason why we currently don't do any special handling for object arrays in xarray is that our conventions coding/decoding system has no way of marking variable-length string arrays. We should probably handle this by making a custom dtype, as h5py does, that marks variable-length strings using dtype metadata: http://docs.h5py.org/en/latest/special.html#variable-length-strings
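A minimal sketch of that layout difference, assuming NumPy's fixed-width `'U'` (unicode) and `'S'` (bytes) dtypes stand in for the NumPy and HDF5-style fixed-width representations respectively:

```python
import numpy as np

# NumPy fixed-width unicode: the width is a number of characters,
# each stored as 4 bytes (UCS-4).
u = np.array(["foo", "ba"], dtype="U3")
print(u.dtype.itemsize)  # 12 bytes per element (3 chars * 4 bytes)

# Fixed-width byte strings (closer to an HDF5 fixed-length string):
# the width is a number of bytes.
b = np.array([b"foo", b"ba"], dtype="S3")
print(b.dtype.itemsize)  # 3 bytes per element
```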
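A sketch of how h5py marks variable-length strings via dtype metadata, using `h5py.special_dtype`; the plain-NumPy `metadata={"vlen": str}` marker below mirrors h5py's convention and is only illustrative, not an existing xarray API:

```python
import numpy as np
import h5py

# h5py's variable-length string dtype is a plain object dtype
# carrying hidden metadata that identifies the base string type.
vlen_str = h5py.special_dtype(vlen=str)
print(vlen_str)                         # object
print(h5py.check_dtype(vlen=vlen_str))  # <class 'str'>

# The same mechanism with plain NumPy: attach metadata to an object dtype.
# A conventions coder/decoder could inspect this marker to choose
# variable-length string encoding instead of generic object handling.
marked = np.dtype("O", metadata={"vlen": str})
print(marked.metadata)  # mappingproxy({'vlen': <class 'str'>})
```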