html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/6434#issuecomment-1089206433,https://api.github.com/repos/pydata/xarray/issues/6434,1089206433,IC_kwDOAMm_X85A6_ih,4160723,2022-04-05T19:07:34Z,2022-04-05T19:07:34Z,MEMBER,"> Alternatively, we could loop over datasets and call expand_dims if dim is present and is a scalar.
Ah yes 👍. Not sure why this case didn't meet the conditions for calling `expand_dims` on the input datasets.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1189140909
https://github.com/pydata/xarray/issues/6434#issuecomment-1086751973,https://api.github.com/repos/pydata/xarray/issues/6434,1086751973,IC_kwDOAMm_X85AxoTl,2448579,2022-04-03T01:00:48Z,2022-04-03T02:02:34Z,MEMBER,"> which doesn't seem to create the right kind of index given the value type:
There's a typo in the first line; with `da.time` this does actually work:
```python
array = da.isel(time=0).time.values
value = array.item()
seq = np.array([value], dtype=array.dtype)
pd.Index(seq, dtype=array.dtype)
# DatetimeIndex(['2013-01-01'], dtype='datetime64[ns]', freq=None)
```
The issue is that `.item()` converts datetime64 to int, so `safe_cast_to_index` creates an `Int64Index` because we don't pass `dtype` to the `pd.Index` constructor (so that's one possible fix):
https://github.com/pydata/xarray/blob/3ead17ea9e99283e2511b65b9d864d1c7b10b3c4/xarray/core/utils.py#L130-L133
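For illustration, here is a minimal standalone reproduction of the dtype loss (and of the possible fix of passing `dtype` through to `pd.Index`), using a made-up array rather than xarray's actual `safe_cast_to_index` code:

```python
import numpy as np
import pandas as pd

# a datetime64[ns] scalar: .item() returns a plain int (nanoseconds since epoch)
array = np.array(['2013-01-01'], dtype='datetime64[ns]')
value = array[0].item()

# without an explicit dtype, pd.Index infers an integer index from the int
pd.Index([value])

# passing the original dtype through recovers the right index type
pd.Index(np.array([value], dtype=array.dtype))
# DatetimeIndex(['2013-01-01'], dtype='datetime64[ns]', freq=None)
```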
Alternatively, we could loop over datasets and call `expand_dims` if `dim` is present and is a scalar. We already do something similar here:
https://github.com/pydata/xarray/blob/305533d585389f7240ae2383a323337d4761d33a/xarray/core/concat.py#L470-L472","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1189140909
https://github.com/pydata/xarray/issues/6434#issuecomment-1086704180,https://api.github.com/repos/pydata/xarray/issues/6434,1086704180,IC_kwDOAMm_X85Axco0,4160723,2022-04-02T19:08:48Z,2022-04-02T19:08:48Z,MEMBER,"The first example works because there's no index.
In the second example, a `PandasIndex` is created from the scalar value (wrapped in a sequence) so that concat works on the ""time"" dimension (required since the logic has moved to `Index.concat`). See https://github.com/pydata/xarray/pull/5692#discussion_r820581305.
The problem is that when creating a `PandasIndex` we call `pd.Index`, which doesn't seem to create the right kind of index given the value type:
```python
array = da.isel(time=0).values
value = array.item()
seq = np.array([value], dtype=array.dtype)
pd.Index(seq, dtype=array.dtype)
# Float64Index([1.0], dtype='float64')
```
So in the example above you end up with different index types, which `xr.align` doesn't like:
```python
concat.indexes[""time""]
# Index([1356998400000000000, 2013-01-01 06:00:00], dtype='object', name='time')
da.indexes[""time""]
# DatetimeIndex(['2013-01-01 00:00:00', '2013-01-01 06:00:00'], dtype='datetime64[ns]', name='time', freq=None)
concat.indexes[""time""].equals(da.indexes[""time""])
# False
```
I'm not very satisfied with the current solution in `concat`, but I'm not sure what we should do here:
- Special-case datetime, and other value types?
- Review the approach used to concatenate scalar (non-indexed) coordinates and indexed array coordinates?
- Deprecate concatenating a mix of scalar coordinates and indexed coordinates?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1189140909