issue_comments: 42261453
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/53#issuecomment-42261453 | https://api.github.com/repos/pydata/xarray/issues/53 | 42261453 | MDEyOklzc3VlQ29tbWVudDQyMjYxNDUz | 1217238 | 2014-05-06T02:22:48Z | 2014-05-06T02:22:48Z | MEMBER | First of all, thank you for tackling this! Your first suggestion (decoding all string data to unicode) sounds like the right choice to me. Ideally this can be done in a lazy fashion (without needing to load all array data from disk when opening a file), but honestly I'm not too concerned about NetCDF3 performance for partially loading files from disk with the SciPy library, given that NetCDF3 files are already limited to 2GB. Let me give you a little bit of context: the SciPy NetCDF module only works with an obsolete file format (NetCDF3; the current version, based on HDF5, is NetCDF4). The main reason we support it is that it serves as a (somewhat non-ideal) wire format: SciPy can read and write file-like objects without any files existing on disk, which is not possible with the NetCDF4 library (see the sketches after this table). | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 28940534 |
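As a rough illustration of the lazy-decoding suggestion above (this is not xarray's actual implementation; the `LazyUnicodeArray` class and its `encoding` parameter are hypothetical), the idea is to defer the bytes-to-unicode conversion until data is actually indexed, so opening a file does not force a full read:

```python
# A hedged sketch, assuming a wrapped on-disk bytes array (e.g. a
# scipy.io.netcdf variable): decoding only happens for the slice requested.
import numpy as np

class LazyUnicodeArray:
    """Wrap a bytes-typed array and decode to unicode on access."""

    def __init__(self, wrapped, encoding='utf-8'):
        self.wrapped = wrapped      # any array-like with __getitem__
        self.encoding = encoding

    def __getitem__(self, key):
        # Only the requested slice is read and decoded.
        data = np.asarray(self.wrapped[key])
        return np.char.decode(data, self.encoding)

# Usage: nothing is decoded (or read) until the wrapper is sliced.
raw = np.array([b'a', b'b', b'c'], dtype='S1')
lazy = LazyUnicodeArray(raw)
print(lazy[1:])   # -> ['b' 'c'] as unicode strings
```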
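And a minimal sketch of the "wire format" point: `scipy.io.netcdf_file` accepts file-like objects, so a NetCDF3 payload can be written to and read from an in-memory buffer without ever touching disk. The dimension/variable name `time` and the use of `flush()` plus `getvalue()` to capture the bytes are illustrative choices, not a prescribed pattern.

```python
import io
import numpy as np
from scipy.io import netcdf_file

# Write a tiny NetCDF3 dataset into an in-memory buffer.
buffer = io.BytesIO()
f = netcdf_file(buffer, mode='w')
f.createDimension('time', 3)
v = f.createVariable('time', 'i', ('time',))
v[:] = np.arange(3)
f.flush()                      # serialize without closing the buffer
raw_bytes = buffer.getvalue()  # the NetCDF3 "wire" payload

# Read it back from another in-memory buffer, e.g. bytes received over a socket.
g = netcdf_file(io.BytesIO(raw_bytes), mode='r')
print(g.variables['time'][:])  # -> [0 1 2]
```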