issue_comments: 350379064
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/pydata/xarray/pull/1528#issuecomment-350379064 | https://api.github.com/repos/pydata/xarray/issues/1528 | 350379064 | MDEyOklzc3VlQ29tbWVudDM1MDM3OTA2NA== | 703554 | 2017-12-08T21:40:40Z | 2017-12-08T22:27:35Z | CONTRIBUTOR | Some examples of compressor benchmarking here may be useful: http://alimanfoo.github.io/2016/09/21/genotype-compression-benchmark.html The specific conclusions probably won't apply to your data, but some of the code and ideas may be useful. Since writing that article I added Zstd and LZ4 compressors to numcodecs, so those may also be worth trying in addition to Blosc with various configurations. (Blosc breaks up each chunk into blocks, which enables multithreaded compression/decompression but can also reduce the compression ratio relative to the same compressor library used without Blosc. I.e., Blosc(cname='zstd', clevel=1) will behave differently from Zstd(level=1) even though the same underlying compression library, Zstandard, is being used; see the sketch below the table.) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | 253136694 |
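As a rough illustration of the Blosc-vs-Zstd point above, here is a minimal sketch comparing the two numcodecs codecs. The random integer array is a hypothetical stand-in for real genotype data, and the resulting ratios will vary with your data and codec settings:

```python
import numpy as np
from numcodecs import Blosc, Zstd

# Hypothetical stand-in data: low-cardinality int8 values, loosely
# resembling genotype calls; substitute a chunk of your own array.
data = np.random.randint(0, 4, size=1_000_000, dtype='i1')

# Same underlying library (Zstandard) at the same level, once wrapped
# in Blosc (block splitting + byte shuffle) and once used directly.
codecs = [
    Blosc(cname='zstd', clevel=1, shuffle=Blosc.SHUFFLE),
    Zstd(level=1),
]

for codec in codecs:
    compressed = codec.encode(data)
    ratio = data.nbytes / len(compressed)
    print(f'{codec!r}: compression ratio {ratio:.2f}')
```

Because Blosc splits each chunk into blocks and (here) applies byte shuffling before handing the blocks to Zstandard, the two configurations can produce noticeably different compression ratios and throughput even though the same compression library does the underlying work.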