id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
2098882374,I_kwDOAMm_X859GmdG,8660,dtype encoding ignored during IO?,35968931,closed,0,,,3,2024-01-24T18:50:47Z,2024-02-05T17:35:03Z,2024-02-05T17:35:02Z,MEMBER,,,,"### What happened?

When I set the `.encoding['dtype']` attribute before saving to disk, the on-disk representation appears to record the dtype encoding, but when I open the data back up in xarray I get the same dtype I had before, not the one specified in the encoding. Is that what's supposed to happen? How does this work?

(This happens with both zarr and netCDF.)

### What did you expect to happen?

I expected that setting `.encoding['dtype']` would mean that once I open the data back up, it would be in the new dtype that I set in the encoding.

### Minimal Complete Verifiable Example

```Python
air = xr.tutorial.open_dataset('air_temperature')

air['air'].dtype  # returns dtype('float32')

air['air'].encoding['dtype']  # returns dtype('int16'), which already seems weird

air.to_zarr('air.zarr')  # I would assume here that the encoding actually does something during IO

# now if I check the zarr `.zarray` metadata for the `air` variable it says `""dtype"": `""
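# Aside: a hedged, numpy-only sketch of why the round trip can restore
# float32 even though the store holds int16. When an integer dtype is set
# in encoding alongside CF-style scale_factor/add_offset, the values are
# packed to integers on write and unpacked back to floats on read, so the
# in-memory dtype looks unchanged after reopening. The scale/offset values
# below are made up for illustration; this is not xarray's actual code.
import numpy as np

def encode(values, scale_factor, add_offset, dtype=np.int16):
    # pack floats into the on-disk integer dtype
    return np.round((values - add_offset) / scale_factor).astype(dtype)

def decode(packed, scale_factor, add_offset):
    # unpack back to floats on read
    return packed * scale_factor + add_offset

temps = np.array([250.0, 275.5, 300.25], dtype=np.float32)
packed = encode(temps, scale_factor=0.01, add_offset=273.15)
print(packed.dtype)   # the on-disk representation is integer
restored = decode(packed, scale_factor=0.01, add_offset=273.15)
print(np.allclose(restored, temps, atol=0.01))  # values survive the round trip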