issue_comments: 1086704180
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | performed_via_github_app | issue |
|---|---|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/6434#issuecomment-1086704180 | https://api.github.com/repos/pydata/xarray/issues/6434 | 1086704180 | IC_kwDOAMm_X85Axco0 | 4160723 | 2022-04-02T19:08:48Z | 2022-04-02T19:08:48Z | MEMBER |  | 1189140909 |

body:

The first example works because there's no index. In the second example, a `pd.Index` is created for the `time` coordinate.

The problem is when creating a `pd.Index` from a 0-d array (a single selected value extracted with `.item()`):

```python
array = da.isel(time=0).values
value = array.item()
seq = np.array([value], dtype=array.dtype)
pd.Index(seq, dtype=array.dtype)
# Float64Index([1.0], dtype='float64')
```

So in the example above you end up with different index types, which do not compare equal:

```python
concat.indexes["time"]
# Index([1356998400000000000, 2013-01-01 06:00:00], dtype='object', name='time')

da.indexes["time"]
# DatetimeIndex(['2013-01-01 00:00:00', '2013-01-01 06:00:00'], dtype='datetime64[ns]', name='time', freq=None)

concat.indexes["time"].equals(da.indexes["time"])
# False
```

I'm not very satisfied with the current solution in `concat`, but I'm not sure what we should do here.
reactions:

```json
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```
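For context on the dtype mismatch described in the comment above, here is a minimal standalone sketch. It uses only numpy and pandas with an invented `times` array rather than the issue's `da`/`concat` objects, so it illustrates the mechanism rather than reproducing the reporter's exact code: `.item()` on a nanosecond-precision datetime returns a plain integer, and an index built from such scalars falls back to object dtype instead of a `DatetimeIndex`.

```python
import numpy as np
import pandas as pd

# A 1-d datetime64[ns] array: pandas infers a proper DatetimeIndex from it.
times = np.array(["2013-01-01T00:00:00", "2013-01-01T06:00:00"], dtype="datetime64[ns]")
print(pd.Index(times).dtype)          # datetime64[ns]

# .item() on a nanosecond-precision scalar returns a plain Python int
# (nanoseconds since the epoch), because datetime.datetime cannot hold ns.
scalar = times[0].item()
print(scalar)                         # 1356998400000000000

# Mixing that integer with a Timestamp gives an object-dtype Index,
# which no longer matches the DatetimeIndex built from the full array.
mixed = pd.Index([scalar, pd.Timestamp("2013-01-01T06:00:00")])
print(mixed.dtype)                    # object
print(pd.Index(times).equals(mixed))  # False
```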