html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/7356#issuecomment-1362322800,https://api.github.com/repos/pydata/xarray/issues/7356,1362322800,IC_kwDOAMm_X85RM2Vw,90008,2022-12-22T02:40:59Z,2022-12-22T02:40:59Z,CONTRIBUTOR,"Any chance of a release? This is quite breaking for large datasets that can only be processed out of memory.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1346924547,https://api.github.com/repos/pydata/xarray/issues/7356,1346924547,IC_kwDOAMm_X85QSHAD,90008,2022-12-12T17:27:47Z,2022-12-12T17:27:47Z,CONTRIBUTOR,👍🏾 ,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1339624818,https://api.github.com/repos/pydata/xarray/issues/7356,1339624818,IC_kwDOAMm_X85P2Q1y,90008,2022-12-06T16:19:19Z,2022-12-06T16:19:19Z,CONTRIBUTOR,"Yes, without chunks or anything.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1339624418,https://api.github.com/repos/pydata/xarray/issues/7356,1339624418,IC_kwDOAMm_X85P2Qvi,90008,2022-12-06T16:18:59Z,2022-12-06T16:18:59Z,CONTRIBUTOR,Very smart test!,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1339457617,https://api.github.com/repos/pydata/xarray/issues/7356,1339457617,IC_kwDOAMm_X85P1oBR,90008,2022-12-06T14:18:11Z,2022-12-06T14:18:11Z,CONTRIBUTOR,The data is loaded from a NetCDF store through `open_dataset`.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1339452942,https://api.github.com/repos/pydata/xarray/issues/7356,1339452942,IC_kwDOAMm_X85P1m4O,90008,2022-12-06T14:14:57Z,2022-12-06T14:14:57Z,CONTRIBUTOR,"No explicit test was added to ensure that the data wasn't loaded. I just experienced this bug enough (we would accidentally load 100 GB files in our code base) that I knew exactly how to fix it. If you want, I can add a test to ensure that future optimizations to nbytes do not trigger a data load.
I was hoping the one-line fix would be a shoo-in.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1336731702,https://api.github.com/repos/pydata/xarray/issues/7356,1336731702,IC_kwDOAMm_X85PrOg2,90008,2022-12-05T04:20:08Z,2022-12-05T04:20:08Z,CONTRIBUTOR,It seems that checking `hasattr` on the `_data` variable achieves both purposes.,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1336711830,https://api.github.com/repos/pydata/xarray/issues/7356,1336711830,IC_kwDOAMm_X85PrJqW,90008,2022-12-05T03:58:50Z,2022-12-05T03:58:50Z,CONTRIBUTOR,"I think that, at the very least, the current implementation works as well as the old one for arrays that are defined by the `sparse` package.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1336700669,https://api.github.com/repos/pydata/xarray/issues/7356,1336700669,IC_kwDOAMm_X85PrG79,90008,2022-12-05T03:36:31Z,2022-12-05T03:36:31Z,CONTRIBUTOR,"Looking into the history a little more, I seem to be proposing to revert: https://github.com/pydata/xarray/commit/60f8c3d3488d377b0b21009422c6121e1c8f1f70. I think this is important, since many users have arrays that are larger than memory. I found this bug when trying to access the number of bytes of a 16 GB dataset that I was loading on my wimpy laptop; it is not fun to start swapping. I suspect others are hitting this too. xref: https://github.com/pydata/xarray/pull/6797 https://github.com/pydata/xarray/issues/4842","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
https://github.com/pydata/xarray/pull/7356#issuecomment-1336696899,https://api.github.com/repos/pydata/xarray/issues/7356,1336696899,IC_kwDOAMm_X85PrGBD,90008,2022-12-05T03:30:31Z,2022-12-05T03:30:31Z,CONTRIBUTOR,"I personally do not even think the `hasattr` check is really that useful; you might as well use `size` and `itemsize`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,1475567394
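
For reference, the fix this thread discusses boils down to the pattern below. This is a minimal, self-contained sketch, not xarray's actual code: the names `LazilyIndexedStub` and `safe_nbytes` are hypothetical stand-ins. The idea matches the comments above: trust the wrapped array's own `nbytes` when it exposes one (e.g. `sparse` arrays report their true in-memory size), and otherwise estimate from `size` and `itemsize` using metadata only, so lazy out-of-core data is never loaded.

```python
import numpy as np

class LazilyIndexedStub:
    # Hypothetical stand-in for a lazy backend wrapper: it knows its
    # shape and dtype, but it deliberately has no `nbytes` attribute,
    # and reading the real values would trigger a full load from disk.
    def __init__(self, shape, dtype):
        self.shape = shape
        self.dtype = np.dtype(dtype)

def safe_nbytes(data):
    # Prefer the underlying array's own nbytes when available...
    if hasattr(data, "nbytes"):
        return data.nbytes
    # ...otherwise fall back to size * itemsize, computed purely from
    # metadata, so no data is ever loaded just to answer the question.
    size = int(np.prod(data.shape))
    return size * data.dtype.itemsize

print(safe_nbytes(np.zeros((3, 4), dtype="float64")))          # 96, via ndarray.nbytes
print(safe_nbytes(LazilyIndexedStub((2_000_000_000,), "f8")))  # 16 GB estimate, no I/O
```

A regression test along the lines the thread suggests could wrap an array in a stub like this one and assert that querying `nbytes` never touches the underlying values.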