issue_comments: 583488834
| html_url | issue_url | id | node_id | user | created_at | updated_at | author_association |
|---|---|---|---|---|---|---|---|
| https://github.com/pydata/xarray/issues/3761#issuecomment-583488834 | https://api.github.com/repos/pydata/xarray/issues/3761 | 583488834 | MDEyOklzc3VlQ29tbWVudDU4MzQ4ODgzNA== | 743508 | 2020-02-07T16:37:05Z | 2020-02-07T16:37:05Z | CONTRIBUTOR |

body:

I think it makes sense to support the conversion. Perhaps a better example is with a dataset:

```python
import numpy as np
import xarray as xr

x = np.arange(10)
y = np.arange(10)
data = np.zeros((len(x), len(y)))
ds = xr.Dataset({k: xr.DataArray(data, coords=[x, y], dims=['x', 'y'])
                 for k in ['a', 'b', 'c']})
ds.sel(x=1, y=1)
```

The output is a dataset of scalars, which converts fairly intuitively to a single-row dataframe. But the following throws the same error.

Or think of it another way - isn't it very unintuitive that converting a single-item dataset to a dataframe works only if the item was selected using a length-1 list? To me that seems like a very arbitrary restriction. Following that logic, it also makes sense to have consistent behaviour between Datasets and DataArrays (even if you end up producing a single-element table).
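A minimal sketch of the contrast the comment describes, assuming only numpy and xarray: selecting with length-1 lists keeps the dimensions so `to_dataframe()` yields a one-row DataFrame, while scalar selection drops them and produces the 0-dimensional case the issue asks to support (which raised an error at the time).

```python
import numpy as np
import xarray as xr

x = np.arange(10)
y = np.arange(10)
data = np.zeros((len(x), len(y)))
ds = xr.Dataset({k: xr.DataArray(data, coords=[x, y], dims=['x', 'y'])
                 for k in ['a', 'b', 'c']})

# Selecting with length-1 lists keeps the 'x' and 'y' dimensions, so the
# result converts to a one-row DataFrame.
print(ds.sel(x=[1], y=[1]).to_dataframe())

# Selecting with scalars drops both dimensions; converting the resulting
# 0-dimensional Dataset is the case discussed in the issue.
try:
    print(ds.sel(x=1, y=1).to_dataframe())
except Exception as err:  # raised an error in the xarray version discussed here
    print(type(err).__name__, err)
```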
reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: 561539035