html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/4113#issuecomment-636849900,https://api.github.com/repos/pydata/xarray/issues/4113,636849900,MDEyOklzc3VlQ29tbWVudDYzNjg0OTkwMA==,36678697,2020-06-01T13:06:02Z,2020-06-01T13:06:02Z,NONE,"> I think it depends on the chunk size.

Yes, I'm not very familiar with chunks; it seems it's not good to have too many of them.

> I am not sure where 512 comes from in your example (maybe dask does something).

Sorry, it should have been `(100, 2048)`; it comes from the second stacking dimension (explained below). My screenshot was for `.stack(px=(""y"", ""x""))`, my bad.

> If I work with `chunks=dict(x=128, y=128)`, the chunksize after the stacking was `(100, 16384)`, which is reasonable (`z=100`, `px=(128, 128)`).

Yes, after some more experiments I found out that the second chunksize after stacking is `(100, X)`, where X is a multiple of the size of the second stacking dimension (here `""y""`), which is why it works in your case (`128 * 128 == 2048 * 8`). The formula for X is something like:

`shape[1] * ( (x_chunk * y_chunk) // shape[1] + bool((x_chunk * y_chunk) % shape[1]) )`

So the minimum value for X is `shape[1]` (the size of the `""y""` dim), hence my result with small values for `x_chunk` and `y_chunk`. That's why I was saying that ""chunks along the second stacking dimension seem to be merged"". This might be normal, just unexpected, and it is still quite obscure to me. And it must be happening on the dask side anyway. Thanks a lot for your insights.

> You can do `reset_index` before saving it into the netCDF, but it requires another computation when creating the MultiIndex after loading.

Ah yes, thanks! I thought `reset_index` was similar to `unstack` for indexes created with `stack`.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,627735640
https://github.com/pydata/xarray/issues/4113#issuecomment-636491064,https://api.github.com/repos/pydata/xarray/issues/4113,636491064,MDEyOklzc3VlQ29tbWVudDYzNjQ5MTA2NA==,36678697,2020-05-31T16:04:39Z,2020-05-31T16:04:39Z,NONE,"Thanks for the answer. I tried some experiments with chunked reading with dask, but I have some observations I don't fully get:

##### 1) Still loading memory

Reading with chunks loads more memory than reading without chunks, though not an amount equal to the size of the array (300 MB for an 800 MB array in the example below). And, by the way, memory also goes up a bit more when stacking. But I think this may be normal, perhaps because the dask machinery itself is loaded into memory, and that I will see the full benefits when working on bigger data. _Am I right?_

##### 2) Stacking is breaking the chunks

When stacking a chunked array, only chunks along the first stacking dimension are preserved, and chunks along the second stacking dimension seem to be merged. I think this has something to do with the very nature of indexes, but I am not sure.

##### 3) Rechunking loads the memory

A workaround to 2) could have been to re-chunk as desired after stacking, but then it fully loads the data.

##### Example

(Consider the following as a replacement for the `main()` function of the script in the original post.)
```python
def main():
    fname = ""da.nc""
    shape = 512, 2048, 100  # 800 MB

    xr.DataArray(
        np.random.randn(*shape),
        dims=(""x"", ""y"", ""z""),
    ).to_netcdf(fname)
    print_ram_state()

    da = xr.open_dataarray(fname, chunks=dict(x=1, y=1))
    print(f"" da: {mb(da.nbytes)} MB"")
    print_ram_state()

    mda = da.stack(px=(""x"", ""y""))
    print_ram_state()

    mda = mda.chunk(dict(px=1))
    print_ram_state()
```

which outputs something like:

```
RAM: 94.52 MB
 da: 800.0 MB
RAM: 398.83 MB
RAM: 589.05 MB
RAM: 1409.11 MB
```

Chunks displayed thanks to the Jupyter notebook visualization:

Before stacking: (screenshot)

After stacking: (screenshot)

A workaround could have been to save the data already stacked, but ""MultiIndex cannot yet be serialized to netCDF"". Maybe there is another workaround?

(Sorry for the long post)","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,627735640
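To make the chunk-width relation quoted in the first comment concrete, here is a minimal sketch. The helper name `expected_stacked_chunk_width` is hypothetical, written only to mirror that formula; the chunk sizes actually produced by `stack` depend on dask's rechunking heuristics and version, so the printed comparison is an expectation rather than a guarantee.

```python
import numpy as np
import xarray as xr


def expected_stacked_chunk_width(x_chunk, y_chunk, y_size):
    # Mirrors the formula from the comment above: the per-chunk element
    # count (x_chunk * y_chunk) rounded up to a multiple of the size of
    # the second stacking dimension ("y").
    n = x_chunk * y_chunk
    return y_size * (n // y_size + bool(n % y_size))


# Small array so the example runs quickly (dims mirror the issue's script).
da = xr.DataArray(
    np.random.randn(32, 64, 10),
    dims=("x", "y", "z"),
).chunk(dict(x=8, y=8))

stacked = da.stack(px=("x", "y"))

print(stacked.chunks)  # observed chunk sizes along ("z", "px")
print(expected_stacked_chunk_width(8, 8, da.sizes["y"]))  # 64, per the formula
```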
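For the serialization question, a sketch of the `reset_index` workaround mentioned in the first comment, assuming a DataArray stacked as in the example above (the file name `stacked.nc` is only illustrative); rebuilding the MultiIndex after loading is the extra computation the comment refers to.

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.random.randn(32, 64, 10),
    dims=("x", "y", "z"),
)
mda = da.stack(px=("x", "y"))

# Drop the MultiIndex so the stacked array can be written to netCDF;
# "x" and "y" remain as plain coordinates along the "px" dimension.
mda.reset_index("px").to_netcdf("stacked.nc")

# On reload, recreate the MultiIndex from those coordinates.
restored = xr.open_dataarray("stacked.nc").set_index(px=("x", "y"))
```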