html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7456#issuecomment-1397637864,https://api.github.com/repos/pydata/xarray/issues/7456,1397637864,IC_kwDOAMm_X85TTkLo,14077947,2023-01-19T21:34:09Z,2023-01-19T21:34:09Z,CONTRIBUTOR,"> Okay I think I get the philosophy now. However, indexing a DataSet with an integer actually does work. If performance is the goal, shouldn't something like ds[0] throw a warning or an error?
Can you share your code for this? I would interpret that as meaning you have a variable in your dataset whose name is an integer, which is allowed because variable names only need to be hashable, but it can cause problems with downstream packages.
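For example, `ds[0]` would work in something like this (a hypothetical minimal sketch, assuming the integer was used as a variable name rather than as a positional index):

```
import numpy as np
import xarray as xr

# the integer 0 is hashable, so it is accepted as a variable name
ds = xr.Dataset({0: ((""x"",), np.arange(3))})
print(ds[0])  # label-based lookup of the variable named 0, not positional indexing
```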
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1548355645
https://github.com/pydata/xarray/issues/7456#issuecomment-1397627848,https://api.github.com/repos/pydata/xarray/issues/7456,1397627848,IC_kwDOAMm_X85TThvI,14077947,2023-01-19T21:24:30Z,2023-01-19T21:24:30Z,CONTRIBUTOR,"I'm not an xarray developer, but my guess is that your argument is why positional indexing/slicing is not available for datasets.
As for the specific case of using the `axis` parameter of `expand_dims`, I think this is useful when the user is either confident about the axis order in each DataArray or will use label-based operations such that axis order doesn’t matter. I was curious, so I did a quick comparison of the speed of using this parameter versus a subsequent transpose operation:
```
import numpy as np
import xarray as xr

shape = (10, 50, 100, 200)
ds = xr.Dataset(
    {
        ""foo"": ([""time"", ""x"", ""y"", ""z""], np.random.rand(*shape)),
        ""bar"": ([""time"", ""x"", ""y"", ""z""], np.random.randint(0, 10, shape)),
    },
    coords={
        ""time"": ([""time""], np.arange(shape[0])),
        ""x"": ([""x""], np.arange(shape[1])),
        ""y"": ([""y""], np.arange(shape[2])),
        ""z"": ([""z""], np.arange(shape[3])),
    },
)
```
```
%%timeit -r 4
ds1 = ds.expand_dims(""sample"", axis=1)
```
38.1 µs ± 76 ns per loop (mean ± std. dev. of 4 runs, 10,000 loops each)
```
%%timeit -r 4
ds2 = ds.expand_dims(""sample"").transpose(""time"", ""sample"", ""x"", ""y"", ""z"")
```
172 µs ± 612 ns per loop (mean ± std. dev. of 4 runs, 10,000 loops each)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1548355645
https://github.com/pydata/xarray/issues/7456#issuecomment-1397567627,https://api.github.com/repos/pydata/xarray/issues/7456,1397567627,IC_kwDOAMm_X85TTTCL,14077947,2023-01-19T20:34:04Z,2023-01-19T20:34:04Z,CONTRIBUTOR,"> Okay, regardless of expected behavior here, my particular use-case _requires_ that I transpose these dimensions. Can someone show me a way to do this? I tried to explain the xarray point of view to Keras, but Keras is really not interested ;)
>
> I tried something like `ds.expand_dims(""sample"").transpose('sample','nlat','nlon')` to complete futility, probably something to do with the `Frozen` stuff if I had to guess.
The transpose method should change the dimension order on each DataArray in the dataset. One particularly important point from Kai's comment above is that `ds.dims` does not tell you anything about the axis order of the DataArrays in the Dataset. Can you please describe how the DataArray dimension order reported by the code below differs from your expectations?
```
for var in ds.data_vars:
    print(ds[var].sizes)
```
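For reference, here is a minimal sketch (with a made-up single-variable dataset, not your actual data) showing that `transpose` does reorder the dimensions of each DataArray, while `ds.dims` itself carries no axis-order information:

```
import numpy as np
import xarray as xr

ds = xr.Dataset({""foo"": ((""nlat"", ""nlon""), np.zeros((2, 3)))})
ds2 = ds.expand_dims(""sample"").transpose(""nlat"", ""sample"", ""nlon"")
print(ds2.dims)          # Frozen mapping of sizes, no axis-order information
print(ds2[""foo""].dims)  # ('nlat', 'sample', 'nlon')
```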
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1548355645
https://github.com/pydata/xarray/pull/6797#issuecomment-1189473091,https://api.github.com/repos/pydata/xarray/issues/6797,1189473091,IC_kwDOAMm_X85G5etD,14077947,2022-07-19T19:31:31Z,2022-07-19T19:31:31Z,CONTRIBUTOR,"> Thanks @maxrjones can you push any WIP tests you might have for this?
If it's alright, I will wait for https://github.com/pydata/xarray/pull/6804 to be merged first, because that will greatly simplify testing the changes. After that PR, the following could be added to `xarray/tests/test_array_api.py` to test both cases:
```
def test_properties(arrays) -> None:
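    # the arrays fixture in test_array_api.py is assumed to provide a
    # numpy-backed DataArray and its array-API-backed equivalent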
np_arr, xp_arr = arrays
assert np_arr.nbytes == 48
assert xp_arr.nbytes == 48
```
","{""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1306903264
https://github.com/pydata/xarray/pull/6797#issuecomment-1186293138,https://api.github.com/repos/pydata/xarray/issues/6797,1186293138,IC_kwDOAMm_X85GtWWS,14077947,2022-07-16T21:08:11Z,2022-07-16T21:08:11Z,CONTRIBUTOR,Please let me know if I should add a docstring as suggested in https://github.com/pydata/xarray/issues/6565#issuecomment-1115545322. I didn't yet because most of the properties do not have docstrings.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1306903264