html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/7771#issuecomment-1538819904,https://api.github.com/repos/pydata/xarray/issues/7771,1538819904,IC_kwDOAMm_X85buIdA,5821660,2023-05-08T18:11:00Z,2023-05-08T18:11:00Z,MEMBER,"Setting status back to draft for now, still evaluating solutions for the CF encoding/decoding.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1676309093
https://github.com/pydata/xarray/pull/7771#issuecomment-1516573065,https://api.github.com/repos/pydata/xarray/issues/7771,1516573065,IC_kwDOAMm_X85aZRGJ,5821660,2023-04-20T15:53:58Z,2023-04-20T15:53:58Z,MEMBER,"OK, it seems this is ready for a first round of reviews. A bit of added context: Currently there is no dedicated function for checking CF standard conformance. The idea is to read non-standard-conforming data files as far as possible, but to restrict writing non-standard-conforming files. The implemented function `ensure_scale_offset_conformance` takes a `strict` keyword argument, which is `True` when encoding and `False` when decoding. If `strict=True` it will raise errors on a mismatch with the standard, and with `strict=False` it will issue warnings. I've only had to adapt a few tests which were not conforming to the standard on encoding to align with that. I've observed some warnings in the test suite which we might want to have a look into. One idea would be to fix erroneous `scale_factor`/`add_offset` with our best-fitting estimate. This is already done for list-type `scale_factor`/`add_offset`. I will follow up with checks for CFMaskCoder.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1676309093
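The second comment describes a conformance check that raises on encoding (`strict=True`) and warns on decoding (`strict=False`). The following is a minimal sketch of that raise-vs-warn pattern; only the function name `ensure_scale_offset_conformance` and the `strict` keyword come from the comment, while the specific rules checked here (scalar attributes, matching dtypes) are illustrative assumptions, not the PR's actual implementation.

```python
import warnings

import numpy as np


def ensure_scale_offset_conformance(encoding, strict=False):
    """Sketch of a CF conformance check for scale_factor/add_offset.

    With strict=True (encoding) violations raise ValueError;
    with strict=False (decoding) they only emit warnings.
    The rules below are hypothetical examples.
    """
    scale_factor = encoding.get("scale_factor")
    add_offset = encoding.get("add_offset")

    def _emit(msg):
        # Raise when writing, warn when reading.
        if strict:
            raise ValueError(msg)
        warnings.warn(msg, stacklevel=3)

    # Example rule: attributes should be scalar, not list-like.
    for name, value in (("scale_factor", scale_factor), ("add_offset", add_offset)):
        if value is not None and np.ndim(value) != 0:
            _emit(f"{name} must be scalar, got {value!r}")

    # Example rule: both attributes should share one dtype.
    if scale_factor is not None and add_offset is not None:
        if np.asarray(scale_factor).dtype != np.asarray(add_offset).dtype:
            _emit("scale_factor and add_offset must share the same dtype")
```

On the decode path a mismatched pair only warns, so the file can still be read; the same input on the encode path raises, preventing a non-conforming file from being written.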