Decode_cf fails when scale_factor is a length-1 list

pydata/xarray issue #4631

  • State: closed (completed)
  • Opened: 2020-12-01T03:07:48Z
  • Closed: 2021-01-15T18:19:56Z
  • Author association: MEMBER
  • Comments: 4

Some datasets I work with have scale_factor and add_offset encoded as length-1 lists. The following code worked as of xarray 0.16.1:

```python
import xarray as xr

ds = xr.DataArray(
    [0, 1, 2],
    name='foo',
    attrs={'scale_factor': [0.01], 'add_offset': [1.0]},
).to_dataset()
xr.decode_cf(ds)
```

In 0.16.2 (just released) and on current master, it fails with this error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-2-a0b01d6a314b> in <module>
      2                   attrs={'scale_factor': [0.01],
      3                          'add_offset': [1.0]}).to_dataset()
----> 4 xr.decode_cf(ds)

~/Code/xarray/xarray/conventions.py in decode_cf(obj, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables, use_cftime, decode_timedelta)
    587         raise TypeError("can only decode Dataset or DataStore objects")
    588
--> 589     vars, attrs, coord_names = decode_cf_variables(
    590         vars,
    591         attrs,

~/Code/xarray/xarray/conventions.py in decode_cf_variables(variables, attributes, concat_characters, mask_and_scale, decode_times, decode_coords, drop_variables, use_cftime, decode_timedelta)
    490             and stackable(v.dims[-1])
    491         )
--> 492         new_vars[k] = decode_cf_variable(
    493             k,
    494             v,

~/Code/xarray/xarray/conventions.py in decode_cf_variable(name, var, concat_characters, mask_and_scale, decode_times, decode_endianness, stack_char_dim, use_cftime, decode_timedelta)
    333         variables.CFScaleOffsetCoder(),
    334     ]:
--> 335         var = coder.decode(var, name=name)
    336
    337     if decode_timedelta:

~/Code/xarray/xarray/coding/variables.py in decode(self, variable, name)
    271             dtype = _choose_float_dtype(data.dtype, "add_offset" in attrs)
    272             if np.ndim(scale_factor) > 0:
--> 273                 scale_factor = scale_factor.item()
    274             if np.ndim(add_offset) > 0:
    275                 add_offset = add_offset.item()

AttributeError: 'list' object has no attribute 'item'
```

I'm very confused, because this feels quite similar to #4471, and I thought it was resolved by #4485. However, the behavior is different with 'scale_factor': np.array([0.01]): that works fine, with no error.
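For comparison, here is the same snippet with the attributes stored as length-1 NumPy arrays rather than lists; as noted above, this variant decodes without error:

```python
import numpy as np
import xarray as xr

# Identical to the failing example, except the scalar attributes are
# length-1 numpy arrays instead of python lists.
ds = xr.DataArray(
    [0, 1, 2],
    name='foo',
    attrs={'scale_factor': np.array([0.01]), 'add_offset': np.array([1.0])},
).to_dataset()
xr.decode_cf(ds)  # decodes fine: np.ndarray has an .item() method
```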

How might I end up with a dataset whose scale_factor is a Python list? It happens when I open a netCDF file using the h5netcdf engine (documented by @gerritholl in https://github.com/pydata/xarray/issues/4471#issuecomment-702018925) and then write it to Zarr: the NumPy array gets encoded as a list in the Zarr JSON metadata. 🙃 A sketch of that round-trip is below.
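A minimal sketch of the round-trip, assuming h5netcdf and zarr are installed; the file paths and the variable name "foo" are placeholders, and the attribute types shown are what #4471 reports for the h5netcdf engine:

```python
import xarray as xr

# Open a CF-encoded netCDF file without scale/offset decoding so the raw
# attributes are visible.  With engine="h5netcdf", a scalar scale_factor
# attribute comes back as a length-1 numpy array (see #4471).
ds = xr.open_dataset("input.nc", engine="h5netcdf", mask_and_scale=False)
print(type(ds["foo"].attrs["scale_factor"]))  # numpy.ndarray, shape (1,)

# Writing to zarr serializes attributes to JSON, which has no array type,
# so the length-1 array round-trips back as a plain python list.
ds.to_zarr("output.zarr")
ds2 = xr.open_zarr("output.zarr", mask_and_scale=False)
print(type(ds2["foo"].attrs["scale_factor"]))  # list

xr.decode_cf(ds2)  # trips the AttributeError shown above
```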

This problem would go away if we could resolve the discrepancies between the two engines' treatment of scalar attributes.
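In the meantime, one way to make the decoder tolerant of both representations (a sketch only, not necessarily the fix that landed; _ensure_scalar is a hypothetical helper) is to coerce through np.asarray before calling .item(), so lists and arrays are handled uniformly:

```python
import numpy as np

def _ensure_scalar(value):
    # Hypothetical helper: coerce a scalar-like attribute (python scalar,
    # length-1 list, or length-1 numpy array) to a plain python scalar.
    # np.asarray accepts lists and arrays alike, unlike bare .item().
    if np.ndim(value) > 0:
        value = np.asarray(value).item()
    return value

assert _ensure_scalar([0.01]) == 0.01
assert _ensure_scalar(np.array([0.01])) == 0.01
assert _ensure_scalar(0.01) == 0.01
```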
