pull_requests: 17158398
id | node_id | number | state | locked | title | user | body | created_at | updated_at | closed_at | merged_at | merge_commit_sha | assignee | milestone | draft | head | base | author_association | auto_merge | repo | url | merged_by |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17158398 | MDExOlB1bGxSZXF1ZXN0MTcxNTgzOTg= | 163 | closed | 0 | BUG: fix encoding issues (array indexing now resets encoding) | 1217238 | Fixes #156, #157. To elaborate on the changes: 1. When an array is indexed, its encoding is reset. This takes care of the invalid chunksize issue. More generally, this seems like the right choice because the original encoding may no longer be appropriate after slicing an array, anyway. 2. If an array has `encoding['dtype'] = np.dtype('S1')` (e.g., it was originally encoded in characters), it will be stacked up to be saved as a character array, even if it is being saved to a netCDF4 file. Previously, the array would be cast to 'S1' without stacking, which would result in silent loss of data. | 2014-06-16T01:29:22Z | 2014-06-17T07:28:45Z | 2014-06-16T04:52:43Z | 2014-06-16T04:52:43Z | 2d8751e9f80f6ade4240162d8b6c0668d4f00be8 | 650893 | 0 | 667f26fad6af902fb0508693326bc3c313d7847d | 71226fb571e0b9cdc32cc476b333991eafebe466 | MEMBER | 13221727 | https://github.com/pydata/xarray/pull/163 |
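The second change described in the PR body concerns character-encoded arrays: casting a fixed-width string array straight to 'S1' keeps only the first character of each element, so the data must instead be "stacked" into an extra trailing dimension of single characters before writing. A minimal NumPy sketch of that idea (the helper name `stack_chars` is hypothetical, not xarray's actual implementation):

```python
import numpy as np

def stack_chars(arr):
    # Hypothetical helper: reinterpret a fixed-width bytes array (e.g. 'S4')
    # as a character array with one extra trailing dimension of dtype 'S1'.
    # Casting directly with arr.astype('S1') would keep only the first
    # character of each element -- the silent data loss the PR fixes.
    width = arr.dtype.itemsize
    return arr.reshape(arr.shape + (1,)).view('S1').reshape(arr.shape + (width,))

data = np.array([b'foo', b'spam'], dtype='S4')

naive = data.astype('S1')    # truncates each element to its first character
stacked = stack_chars(data)  # shape (2, 4), dtype 'S1', lossless
```

Note that shorter strings such as `b'foo'` simply pad out with empty (null) characters in the stacked form, so joining the trailing dimension recovers the original value.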
Links from other tables
- 1 row from pull_requests_id in labels_pull_requests