issue_comments: 632285419
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/1440#issuecomment-632285419 | https://api.github.com/repos/pydata/xarray/issues/1440 | 632285419 | MDEyOklzc3VlQ29tbWVudDYzMjI4NTQxOQ== | 11411331 | 2020-05-21T19:01:36Z | 2020-05-21T19:01:36Z | CONTRIBUTOR | @rabernat When you say "underlying array store", are you talking about the storage layer? That is, the zarr store or the netcdf file? It seems to me that there are lots of "layers" of "chunking", especially when you are talking about chunking an entire dataset, which really confuses the whole issue. On an HPC system, there's the filesystem blocksize, NetCDF/HDF5 "internal" chunks, chunking by spreading the data over multiple files, and in-memory chunks (i.e., Dask chunks). I'm not an expert on object store, but my understanding is that (if you are storing NetCDF/HDF5 on object store) there is still "internal" NetCDF/HDF5 "chunking", then chunking over objects/files, and then in-memory chunking. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 233350060 |
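
The distinction the comment draws between on-disk ("internal") chunks and in-memory Dask chunks can be illustrated with a short sketch. This is not from the original thread; the dataset, variable name, chunk sizes, and the `example.zarr` path are assumptions for illustration only, and it assumes dask and zarr are installed.

```python
import numpy as np
import xarray as xr

# Two of the "layers" of chunking discussed above, shown on a toy dataset:
# in-memory (Dask) chunks vs. on-disk (zarr "internal") chunks.
ds = xr.Dataset({"temp": (("time", "x"), np.random.rand(360, 1000))})

# Layer 1: Dask chunks -- how the array is split in memory for parallel computation.
ds_chunked = ds.chunk({"time": 120})
print(ds_chunked["temp"].chunks)  # ((120, 120, 120), (1000,))

# Layer 2: zarr encoding chunks -- how the data is laid out in the store
# (filesystem or object store). These need not equal the Dask chunks, but
# each Dask chunk should be a multiple of the zarr chunk size along every
# dimension so that writes do not straddle zarr chunk boundaries.
ds_chunked.to_zarr(
    "example.zarr",  # hypothetical store path
    mode="w",
    encoding={"temp": {"chunks": (60, 500)}},
)
```

The same data thus carries two independent chunk layouts: (120, 1000) blocks in memory and (60, 500) blocks in the store, which is part of why discussions of "chunking a dataset" can talk past each other.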