issue_comments: 1301171446
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
https://github.com/pydata/xarray/issues/7248#issuecomment-1301171446 | https://api.github.com/repos/pydata/xarray/issues/7248 | 1301171446 | IC_kwDOAMm_X85Njkz2 | 92732695 | 2022-11-02T20:09:36Z | 2022-11-02T20:09:36Z | NONE | (see below) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 1433534927
body:

This would be correct if we added that many 8 MB datasets, but we just added one! So I'd expect the entire store to be roughly that size (8 MB) before compression, since in the code example above we are only adding one data array to the Zarr store. Would you be able to clarify this calculation?

The reason all the dimension sizes have grown is that we initialize the Zarr store with the set of all possible coordinate values (I read somewhere that this is what you should do: initialize an empty Zarr store with the coords/dimensions you will need). Then, I figured I needed to align the dimensions of the .nc file with those of the Zarr store, so that when the data array is added the coordinates and dimensions line up; this is why we see the dimensions grow. Is this the correct approach? I found the limitation/design of … Without aligning, I would get error messages complaining that the dimension sizes were different. Thanks again for your help!
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1433534927 |
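A minimal sketch of the workflow this comment describes: initialize a Zarr store over the full coordinate space, then reindex each incoming .nc file to that space before writing. Everything here is hypothetical, not taken from the issue: the coordinate names (`time`, `lat`, `lon`), the grid sizes, and the filenames `one_file.nc` and `store.zarr` are assumptions. The sketch also shows why the uncompressed size reflects the whole grid rather than the single file added: `reindex` pads the small dataset with NaN fill values.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical full coordinate space used to initialize the Zarr store;
# the issue does not show the actual coordinates.
full_coords = {
    "time": pd.date_range("2022-01-01", periods=365, freq="D"),
    "lat": np.arange(-90.0, 90.25, 0.25),
    "lon": np.arange(-180.0, 180.25, 0.25),
}

# One incoming ~8 MB file (hypothetical filename).
ds = xr.open_dataset("one_file.nc")

# Reindexing the small dataset against the full coordinate space pads
# every dimension out to the full grid, filling with NaN.  This is why
# the reported (uncompressed) store size reflects the entire grid, not
# just the one 8 MB array that was actually added.
aligned = ds.reindex(full_coords)

# Write into the pre-initialized store; mode="a" adds or overwrites
# variables in an existing store.
aligned.to_zarr("store.zarr", mode="a")
```

Note that `to_zarr` also accepts a `region` argument for writing a slab into an existing store without padding the data out to the full grid first, which is typically cheaper than reindexing when each file covers only a small part of the coordinate space.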