issue_comments: 133992153
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/issues/516#issuecomment-133992153 | https://api.github.com/repos/pydata/xarray/issues/516 | 133992153 | MDEyOklzc3VlQ29tbWVudDEzMzk5MjE1Mw== | 6063709 | 2015-08-24T02:21:43Z | 2015-08-24T02:21:43Z | CONTRIBUTOR | What is the netCDF4 chunking scheme for your compressed data? (Use `ncdump -hs` to reveal the per-variable chunking scheme.) Very large datasets can have very long load times depending on the access pattern. This can be overcome with an appropriately chosen chunking scheme, but if the chunk sizes are not well chosen (and the default library chunking is pretty terrible) then certain access patterns might still be very slow. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |  | 99026442 |
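The comment's point, that the same read can be fast or slow depending on chunk shape, can be illustrated with a small stdlib-only sketch. The dataset dimensions and both chunk shapes below are hypothetical examples, not taken from the issue:

```python
import math


def chunks_touched(chunk_shape, sel_start, sel_stop):
    """Count the on-disk chunks a hyperslab selection intersects.

    For each dimension, the selection [lo, hi) spans the chunk indices
    lo // c through (hi - 1) // c; the total is the product across dims.
    """
    n = 1
    for c, lo, hi in zip(chunk_shape, sel_start, sel_stop):
        n *= (hi - 1) // c - lo // c + 1
    return n


# Hypothetical dataset of shape (1000, 720, 1440): (time, lat, lon).
# Access pattern: the full time series at a single grid point.
start, stop = (0, 360, 720), (1000, 361, 721)

# Chunking one whole 2-D slice per time step means this read must
# open every single time-step chunk in the file.
per_timestep = chunks_touched((1, 720, 1440), start, stop)

# A balanced chunk shape touches two orders of magnitude fewer chunks
# for the exact same selection.
balanced = chunks_touched((100, 72, 144), start, stop)

print(per_timestep, balanced)  # 1000 vs. 10 chunks read
```

Since compressed netCDF4/HDF5 chunks must be decompressed whole, the chunk count is a reasonable proxy for I/O cost, which is why a point time series against slice-per-timestep chunking (the pattern the default library chunking tends to produce) is so slow.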