issues: 35634854
| field | value |
|---|---|
| id | 35634854 |
| node_id | MDU6SXNzdWUzNTYzNDg1NA== |
| number | 156 |
| title | Encoding preserves chunksize even if it no longer makes sense |
| user | 1217238 |
| state | closed |
| locked | 0 |
| assignee | |
| milestone | |
| comments | 0 |
| created_at | 2014-06-13T00:10:25Z |
| updated_at | 2014-06-16T04:52:43Z |
| closed_at | 2014-06-16T04:52:43Z |
| author_association | MEMBER |
| active_lock_reason | |
| draft | |
| pull_request | |
| body | For example, even after decoding a character array or indexing a variable, the chunksize is not updated. This means that netCDF4 reports an error when trying to save such a file. Perhaps we should add some sort of sanity check on chunksize when writing a dataset, possibly issuing a warning? Thanks @ToddSmall for reporting this issue. |
| reactions | { "url": "https://api.github.com/repos/pydata/xarray/issues/156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
| performed_via_github_app | |
| state_reason | completed |
| repo | 13221727 |
| type | issue |
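The issue body proposes a sanity check on stale chunk sizes at write time. Below is a minimal sketch of what such a check could look like, assuming an xarray-style `encoding` dict that carries a `chunksizes` entry; the `sanitize_chunksizes` helper name and its exact rule are hypothetical illustrations, not xarray's actual implementation:

```python
import warnings

def sanitize_chunksizes(var_shape, encoding):
    """Return a copy of `encoding` with a stale 'chunksizes' entry dropped.

    Hypothetical helper: if the recorded chunk sizes no longer match the
    variable's current shape (e.g. after decoding a character array or
    indexing the variable), netCDF4 would refuse to write them, so we
    discard them with a warning instead.
    """
    chunksizes = encoding.get("chunksizes")
    if chunksizes is None:
        return encoding
    stale = len(chunksizes) != len(var_shape) or any(
        chunk > dim for chunk, dim in zip(chunksizes, var_shape)
    )
    if stale:
        warnings.warn(
            f"dropping encoding['chunksizes']={chunksizes!r}: it is "
            f"inconsistent with the variable shape {var_shape!r}"
        )
        encoding = dict(encoding)
        del encoding["chunksizes"]
    return encoding

# Example (hypothetical values): a variable indexed down to shape (10,)
# whose encoding still records the chunking of the original full array.
enc = sanitize_chunksizes((10,), {"chunksizes": (100,), "zlib": True})
print(enc)  # {'zlib': True}
```

Dropping the stale entry silently fixes the write, while the warning keeps the mismatch visible to the user, which matches the "sanity check plus warning" behavior suggested in the issue.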