id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,active_lock_reason,draft,pull_request,body,reactions,performed_via_github_app,state_reason,repo,type
247703455,MDU6SXNzdWUyNDc3MDM0NTU=,1500,Support for attributes with different dtypes when serialising to netcdf4,2941720,open,0,,,4,2017-08-03T13:18:12Z,2020-03-17T14:18:39Z,,CONTRIBUTOR,,,,"At the moment, bool and datetime values aren't supported as attributes when serializing to netcdf4:

```python
>>> da = xr.DataArray(range(5), attrs={'test': True})
>>> da
array([0, 1, 2, 3, 4])
Dimensions without coordinates: dim_0
Attributes:
    test: True
>>> da.to_netcdf('test_bool.nc')
...
TypeError: illegal data type for attribute, must be one of dict_keys(['S1', 'i1', 'u1', 'i2', 'u2', 'i4', 'u4', 'i8', 'u8', 'f4', 'f8']), got b1

>>> da = xr.DataArray(range(5), attrs={'test': pd.to_datetime('now')})
>>> da
array([0, 1, 2, 3, 4])
Dimensions without coordinates: dim_0
Attributes:
    test: 2017-08-03 13:02:29
>>> da.to_netcdf('test_dt.nc')
...
TypeError: Invalid value for attr: 2017-08-03 13:02:29 must be a number, string, ndarray or a list/tuple of numbers/strings for serialization to netCDF files
```

I assume bool attributes aren't supported by `netcdf4-python` and that dates are difficult (they could always just be written as strings), but this would be really nice to have if possible.

As an aside, using `h5netcdf` works for bools, but coerces them to int64.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1500/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue
230158616,MDU6SXNzdWUyMzAxNTg2MTY=,1415,Save arbitrary Python objects to netCDF,2941720,open,0,,,5,2017-05-20T14:58:42Z,2019-04-21T05:08:03Z,,CONTRIBUTOR,,,,"I am looking to transition from pandas to xarray, and the only feature I am really missing is the ability to seamlessly save arrays of Python objects to HDF5 (or netCDF). This might be an issue for the backend netCDF4 libraries instead, but I thought I would post it here first to see what the opinions about this functionality are.

For context, pandas allows this by using PyTables' `ObjectAtom` to serialize each object with pickle and save it as a variable-length bytes data type.

It is already possible to do this with netCDF4 by applying `np.fromstring(pickle.dumps(obj), dtype=np.uint8)` to each object in the array and saving the results with a uint8 VLType. Retrieval is then simply `pickle.loads(obj.tostring())` for each element.

I know that pickle can be a security problem, that it can cause a problem if you try to save a numerical array that accidentally has dtype=object (pandas gives a warning), and that this is probably quite slow (I think pandas pickles a list containing all the objects for speed), but it would be incredibly convenient.","{""url"": ""https://api.github.com/repos/pydata/xarray/issues/1415/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,,13221727,issue
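
The workaround hinted at in issue 1500 (coercing unsupported attribute types before writing) can be sketched as below. This is a minimal, hypothetical helper, not part of xarray's API; the function name `sanitize_attrs`, the output file name, and the choice of coercions (bool to int, timestamp to ISO 8601 string) are assumptions.

```python
import numpy as np
import pandas as pd
import xarray as xr

def sanitize_attrs(da):
    # Hypothetical helper (not part of xarray): coerce attribute values that
    # netCDF4 cannot store into types it can.
    out = da.copy()
    for key, value in out.attrs.items():
        if isinstance(value, (bool, np.bool_)):
            out.attrs[key] = int(value)  # bool -> 0/1
        elif isinstance(value, (pd.Timestamp, np.datetime64)):
            out.attrs[key] = pd.Timestamp(value).isoformat()  # date -> ISO 8601 string
    return out

da = xr.DataArray(range(5), attrs={'test': True, 'created': pd.Timestamp.now()})
sanitize_attrs(da).to_netcdf('test_safe.nc')  # writes without the TypeError shown above
```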
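
As a rough illustration of the pickle/VLType approach described in issue 1415, here is a sketch using `netCDF4` directly rather than through xarray. The file and variable names are made up, `np.frombuffer`/`tobytes` are used in place of the older `np.fromstring`/`tostring`, and the usual caveat about unpickling untrusted data applies.

```python
import pickle
import numpy as np
from netCDF4 import Dataset

objs = [{'a': 1}, ('b', 2.5), None]  # arbitrary picklable Python objects

# Write: one variable-length uint8 array per object.
with Dataset('objects.nc', 'w') as ds:
    vlen_t = ds.createVLType(np.uint8, 'pickled_object')
    ds.createDimension('obj', len(objs))
    var = ds.createVariable('objs', vlen_t, ('obj',))
    for i, obj in enumerate(objs):
        var[i] = np.frombuffer(pickle.dumps(obj), dtype=np.uint8)

# Read: unpickle each element back into a Python object.
with Dataset('objects.nc', 'r') as ds:
    restored = [pickle.loads(buf.tobytes()) for buf in ds['objs'][:]]

assert restored == objs
```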