html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/5434#issuecomment-863119738,https://api.github.com/repos/pydata/xarray/issues/5434,863119738,MDEyOklzc3VlQ29tbWVudDg2MzExOTczOA==,10137,2021-06-17T10:20:46Z,2021-06-17T10:26:12Z,NONE,"Sorry for the late response.
I was trying to read a big GeoTIFF file as follows.
```python
import xarray as xr

xds = xr.open_rasterio(geotif_file)
```
My task was to index into the array and save the output to disk.
```python
import numpy as np

columns = [8, 9, 7, 100, 1050, ......, 9000]
rows = [18, 19, 17, 1100, 1105, ......, 9100]
data = xds.isel(x=xr.DataArray(columns), y=xr.DataArray(rows))
np.save('output.npy', data)
```
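For context, here is a minimal, self-contained sketch of the same pointwise-indexing pattern on synthetic data (the array shape and the index lists below are made up for illustration, not taken from my actual file). When the `DataArray` indexers share a dimension, `isel` picks one value per (row, column) pair rather than the full cross product, which is what the snippet above relies on.
```python
# Synthetic stand-in for the GeoTIFF: 1 band, 100 x 100 pixels (assumed sizes).
import numpy as np
import xarray as xr

xds = xr.DataArray(
    np.arange(100 * 100).reshape(1, 100, 100),
    dims=("band", "y", "x"),
)

columns = [8, 9, 7, 50, 75]
rows = [18, 19, 17, 60, 90]

# DataArray indexers sharing a dimension trigger vectorized (pointwise) indexing:
# one pixel is selected per (row, column) pair.
points = xds.isel(x=xr.DataArray(columns), y=xr.DataArray(rows))
print(points.shape)  # (1, 5): band x number of points

np.save("output.npy", points.values)
```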
Unfortunately, the time this takes is quite unsatisfactory.
The docs for `xr.open_rasterio()` mention that it is experimental for now.
So I'm curious whether it could be much faster once it becomes `stable`.
I look forward to seeing it as a `stable` version.
Thank you so much.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,910844095