html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5529#issuecomment-868767399,https://api.github.com/repos/pydata/xarray/issues/5529,868767399,MDEyOklzc3VlQ29tbWVudDg2ODc2NzM5OQ==,14371165,2021-06-25T18:52:37Z,2021-06-25T18:52:37Z,MEMBER,"One way of solving it could be to slice the arrays to a smaller size while still showing the same repr, since `coords[0:12]` seems easy to print. I'm not sure how tricky it would be to slice it this way, though.
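A rough sketch of the slicing idea (using a plain pandas `MultiIndex` as a stand-in for the stacked coordinate, so the names here are illustrative, not xarray's actual repr code): converting only the slice needed for display stays cheap no matter how large the full index is.

```python
import numpy as np
import pandas as pd

# Stand-in for a large stacked coordinate: a MultiIndex with 1,000,000 rows.
idx = pd.MultiIndex.from_product([range(1000), range(1000)])

# Slice first, then convert: only 12 tuples are materialized for the repr.
head = np.asarray(idx[:12])
print(head.shape)  # (12,)
```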
I'm using https://github.com/spyder-ide/spyder for the profiling and general hacking.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,929818771
https://github.com/pydata/xarray/issues/5529#issuecomment-868735859,https://api.github.com/repos/pydata/xarray/issues/5529,868735859,MDEyOklzc3VlQ29tbWVudDg2ODczNTg1OQ==,14371165,2021-06-25T17:54:00Z,2021-06-25T17:54:00Z,MEMBER,"I think it's some lazy calculation that kicks in, because I can reproduce it using `np.asarray`:
```python
import numpy as np
import xarray as xr
ds = xr.tutorial.load_dataset(""air_temperature"")
da = ds[""air""].stack(z=[...])
coord = da.z.variable.to_index_variable()
# This is very slow:
a = np.asarray(coord)
da._repr_html_()
```
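A standalone timing sketch of the same effect, without the tutorial dataset (assuming a pandas `MultiIndex` behaves like the stacked `z` coordinate): `np.asarray` on a `MultiIndex` builds one Python tuple per row, which is where the time goes.

```python
import time
import numpy as np
import pandas as pd

# Stand-in for the stacked coordinate: 2,000,000 rows.
idx = pd.MultiIndex.from_product([range(2000), range(1000)])

t0 = time.perf_counter()
arr = np.asarray(idx)  # materializes 2,000,000 tuples into an object array
elapsed = time.perf_counter() - t0
print(arr.dtype, arr.shape, elapsed)
```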

","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,929818771