issue_comments: 1255073449
html_url: https://github.com/pydata/xarray/issues/7065#issuecomment-1255073449
issue_url: https://api.github.com/repos/pydata/xarray/issues/7065
id: 1255073449
node_id: IC_kwDOAMm_X85Kzuap
user: 4160723
created_at: 2022-09-22T14:04:22Z
updated_at: 2022-09-22T14:05:56Z
author_association: MEMBER
body:

Actually there's another conversion when you reuse an xarray dimension coordinate in array-like computations:

```python
ds = xr.Dataset(coords={"x": np.array([1.2, 1.3, 1.4], dtype=np.float16)})

# coordinate data is a wrapper around a pandas.Index object
# (it keeps track of the original array dtype)
ds.variables["x"]._data
# PandasIndexingAdapter(array=Float64Index([1.2001953125, 1.2998046875, 1.400390625], dtype='float64', name='x'), dtype=dtype('float16'))

# This coerces the pandas.Index back as a numpy array
np.asarray(ds.x)
# array([1.2, 1.3, 1.4], dtype=float16)

# which is equivalent to
ds.variables["x"]._data.__array__()
# array([1.2, 1.3, 1.4], dtype=float16)
```

The round-trip conversion preserves the original dtype, so different execution times may be expected. I can't say much about why the results are different (how different are they?), but I wouldn't be surprised if it's caused by rounding errors accumulated through the computation of a complex formula like haversine.

reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
performed_via_github_app:
issue: 1381955373
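The dtype round-trip described in the comment can be sketched with plain NumPy and pandas, without going through xarray's internal `PandasIndexingAdapter`. This is a minimal illustration under the assumption (stated in the comment) that pandas stores the coordinate values as a float64 index while the original float16 dtype is remembered and restored on conversion back to an array:

```python
import numpy as np
import pandas as pd

# Original coordinate values, stored as float16. Note that 1.2 is not
# exactly representable: the nearest float16 is 1.2001953125.
arr = np.array([1.2, 1.3, 1.4], dtype=np.float16)

# pandas has no float16 index dtype, so the values are held as float64
# (the comment shows this as Float64Index([1.2001953125, ...])).
idx = pd.Index(np.asarray(arr, dtype=np.float64))
assert idx.dtype == np.float64

# Converting back with the remembered dtype restores the float16 values
# exactly, since each float64 entry is itself a rounded float16 value.
back = np.asarray(idx, dtype=np.float16)
assert back.dtype == np.float16
assert np.array_equal(back, arr)
```

The round-trip is lossless here because the float64 index values were produced from float16 inputs in the first place; the extra conversions cost time, not precision, which matches the comment's point that the differing results more likely come from rounding error accumulated inside a formula like haversine.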