html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/issues/2278#issuecomment-404510872,https://api.github.com/repos/pydata/xarray/issues/2278,404510872,MDEyOklzc3VlQ29tbWVudDQwNDUxMDg3Mg==,1197350,2018-07-12T13:24:51Z,2018-07-12T13:24:51Z,MEMBER,"Yes, this is the same underlying issue.

On Thu, Jul 12, 2018 at 2:59 PM Aurélien Ponte wrote:

> Note that there is also a fix here that is simply del
> ds['v'].encoding['chunks'] prior to data storage.
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> , or mute
> the thread
> .
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,340192831
https://github.com/pydata/xarray/issues/2278#issuecomment-404429223,https://api.github.com/repos/pydata/xarray/issues/2278,404429223,MDEyOklzc3VlQ29tbWVudDQwNDQyOTIyMw==,1197350,2018-07-12T08:15:43Z,2018-07-12T08:16:02Z,MEMBER,"> Any idea about how serious this is and/or where it's coming from?

The source of the bug is that the encoding metadata `chunks` (which describes the chunk size of the underlying zarr store) is automatically populated when you load the zarr store (`ds = xr.open_zarr('data.zarr')`), and this encoding metadata is preserved as you transform (sub-select) the dataset.

Some possible solutions would be to

1. Not put `chunks` into encoding at all.
2. Figure out a way to strip `chunks` when performing selection operations or other operations that change shape.

Idea 1 is easier but would mean discarding some relevant metadata about encoding. This would break round-tripping of the un-modified zarr dataset.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,340192831