issues: 83700033
field | value
---|---
id | 83700033
node_id | MDU6SXNzdWU4MzcwMDAzMw==
number | 416
title | Automatically decode netCDF data to native endianness
user | 1217238
state | closed
locked | 0
assignee | 1143506
milestone |
comments | 1
created_at | 2015-06-01T21:23:52Z
updated_at | 2015-06-10T16:01:00Z
closed_at | 2015-06-06T03:51:13Z
author_association | MEMBER
active_lock_reason |
draft |
pull_request |
body | Unfortunately, netCDF3 is big endian, but most modern CPUs are little endian. Cython requires that data match native endianness in order to perform operations. This means that users can get strange errors when performing aggregations with bottleneck or after converting an xray dataset to pandas. It would be nice to handle this automatically as part of the "decoding" process. I don't think there are any particular advantages to preserving non-native endianness (except, I suppose, for serialization back to another netCDF3 file). My understanding is that most calculations require native endianness, anyways. CC @bareid
reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app |
state_reason | completed
repo | 13221727
type | issue
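The decoding step proposed in the body, byte-swapping non-native data on read, can be illustrated with numpy. The sketch below is not xarray's actual implementation; `maybe_decode_to_native` is a hypothetical helper name, and the logic simply relies on numpy's byte-order conventions.

```python
import numpy as np

def maybe_decode_to_native(arr):
    """Hypothetical helper: return arr in native byte order.

    Illustrates the decoding step proposed in this issue; not
    xarray's actual implementation.
    """
    # numpy reports '=' for native byte order and '|' for dtypes where
    # byte order does not apply (e.g. single-byte types).
    if arr.dtype.byteorder not in ('=', '|'):
        # astype with a native-order dtype byte-swaps the data and
        # relabels the dtype in one step.
        arr = arr.astype(arr.dtype.newbyteorder('='))
    return arr

# Example: big-endian float64 data, as read from a netCDF3 file.
big_endian = np.arange(4, dtype='>f8')
native = maybe_decode_to_native(big_endian)
assert native.dtype.byteorder in ('=', '|')
```

Normalizing once at decode time, rather than at each operation, is what makes downstream tools like bottleneck and pandas work without surprises, which is the outcome the issue (closed as completed) asked for.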