html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/6633#issuecomment-1142380488,https://api.github.com/repos/pydata/xarray/issues/6633,1142380488,IC_kwDOAMm_X85EF1fI,2448579,2022-05-31T16:50:21Z,2022-05-31T16:50:21Z,MEMBER,This would also fix #2233 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137851771,https://api.github.com/repos/pydata/xarray/issues/6633,1137851771,IC_kwDOAMm_X85D0j17,1197350,2022-05-25T21:10:44Z,2022-05-25T21:10:44Z,MEMBER,Yes it is definitely a pathological example. 💣 But the fact remains that there are many cases where we just want to discover dataset contents as quickly as possible and want to avoid the cost of loading coordinates and creating indexes.,"{""total_count"": 4, ""+1"": 4, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137839614,https://api.github.com/repos/pydata/xarray/issues/6633,1137839614,IC_kwDOAMm_X85D0g3-,1217238,2022-05-25T20:55:14Z,2022-05-25T20:55:14Z,MEMBER,"Looking at this mur-sst dataset in particular, it stores time in chunks of size 5. That means fetching the 6443 time values requires 1288 separate HTTP requests -- no wonder it's so slow! If the time axis were instead stored in a single chunk of 51 KB, Xarray would only need 3 small HTTP requests to load the lat, lon and time indexes, which would probably complete in a fraction of a second.
That said, I agree that this would be nice to have in general.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137821786,https://api.github.com/repos/pydata/xarray/issues/6633,1137821786,IC_kwDOAMm_X85D0cha,1197350,2022-05-25T20:34:30Z,2022-05-25T20:34:59Z,MEMBER,"Here is an example that really highlights the performance cost of always loading dimension coordinates:

```python
import xarray as xr
import zarr

store = zarr.storage.FSStore(""s3://mur-sst/zarr/"", anon=True)
%time list(zarr.open_consolidated(store))  # -> Wall time: 86.4 ms
%time ds = xr.open_dataset(store, engine='zarr')  # -> Wall time: 17.1 s
```

`%prun` confirms that Xarray is spending most of its time just loading data for the `time` axis, which you can reproduce at the zarr level as:

```python
zgroup = zarr.open_consolidated(store)
%time _ = zgroup['time'][:]  # -> Wall time: 14.7 s
```

Obviously this example is pretty extreme. There are things that could be done to optimize it, etc. But it really highlights the costs of eagerly loading dimension coordinates. If I don't care about label-based indexing for this dataset, I would rather have my 17s back!

:+1: to ""`indexes={}` (empty dictionary) to explicitly skip creating indexes"". ","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137754031,https://api.github.com/repos/pydata/xarray/issues/6633,1137754031,IC_kwDOAMm_X85D0L-v,1217238,2022-05-25T19:12:40Z,2022-05-25T19:12:40Z,MEMBER,"> > but another option (post explicit index refactor) might be an option for opening a dataset without creating indexes for 1D coordinates along dimensions.
>
> It might indeed be worth considering this case too in #6392.
Maybe `indexes=None` (default) to create default indexes for 1D coordinates and `indexes={}` (empty dictionary) to explicitly skip creating indexes?

+1, this syntax makes sense to me!","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137748248,https://api.github.com/repos/pydata/xarray/issues/6633,1137748248,IC_kwDOAMm_X85D0KkY,35968931,2022-05-25T19:07:50Z,2022-05-25T19:07:50Z,MEMBER,"Thanks for replying, both.

> All that said -- Do you have a specific example where this has been problematic?

I'll have to defer to the others I tagged for the gory details. Perhaps one of them can cross-link to the specific issue they were having?

> `indexes={}` (empty dictionary) to explicitly skip creating indexes?

I would probably do `indexes=False` just to avoid using a mutable default, but an option like this sounds good to me.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137710350,https://api.github.com/repos/pydata/xarray/issues/6633,1137710350,IC_kwDOAMm_X85D0BUO,4160723,2022-05-25T18:47:14Z,2022-05-25T18:47:14Z,MEMBER,"> but another option (post explicit index refactor) might be an option for opening a dataset without creating indexes for 1D coordinates along dimensions.

It might indeed be worth considering this case too in #6392.
Maybe `indexes=None` (default) to create default indexes for 1D coordinates and `indexes={}` (empty dictionary) to explicitly skip creating indexes?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
https://github.com/pydata/xarray/issues/6633#issuecomment-1137572812,https://api.github.com/repos/pydata/xarray/issues/6633,1137572812,IC_kwDOAMm_X85DzfvM,1217238,2022-05-25T17:10:04Z,2022-05-25T17:10:04Z,MEMBER,"Early versions of Xarray used to have lazy loading of data for indexes, but we removed this for the sake of simplicity. In principle we could restore lazy indexes, but another option (post explicit index refactor) might be an option for opening a dataset _without_ creating indexes for 1D coordinates along dimensions.

Another way to solve this sort of challenge might be to load index data in parallel when using Dask. Right now I believe the data corresponding to indexes is always loaded eagerly, without using Dask.

All that said -- Do you have a specific example where this has been problematic? In my experience it has been pretty reasonable to use xarray.Dataset objects for schema-like templates, even with index data needing to be loaded eagerly. Possibly another Zarr chunking scheme for your index data could be more efficient?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1247010680
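The chunking arithmetic discussed in the thread (6443 time values stored in chunks of 5, versus a single chunk) can be sketched in a few lines. This is an editorial illustration, not code from the thread: `requests_to_read` is a hypothetical helper name, and it assumes one HTTP request per stored chunk.

```python
import math

def requests_to_read(n_values: int, chunk_size: int) -> int:
    """Chunk fetches (roughly, HTTP requests) needed to read a 1-D array."""
    return math.ceil(n_values / chunk_size)

# Tiny chunks of 5 values turn one coordinate read into ~1300 round trips:
print(requests_to_read(6443, 5))     # -> 1289

# Stored as a single chunk, the same read is one request:
print(requests_to_read(6443, 6443))  # -> 1

# A workaround available today, at the cost of losing label-based
# indexing on the expensive coordinate:
#   ds = xr.open_dataset(store, engine='zarr', drop_variables=['time'])
```

The counts here are the simple ceiling estimate (the thread quotes 1288 for the same axis); either way, the order of magnitude is the point.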