html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/7075#issuecomment-1262007838,https://api.github.com/repos/pydata/xarray/issues/7075,1262007838,IC_kwDOAMm_X85LOLYe,4160723,2022-09-29T09:20:59Z,2022-09-29T09:20:59Z,MEMBER,"What happens if you create `Dataset` objects fully in memory instead of loading data from files? Is there a significant slowdown when you increase the size of the Dataset dimensions? Could you measure the time it takes at a more fine-grained level, i.e. loading files vs. extracting a slice vs. converting to a dataframe? This would help identify the possible source of the slowdown.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1384226112