

Introduction

lzbench is an in-memory benchmark of open-source LZ77/LZSS/LZMA compressors. It joins all compressors into a single executable. At the beginning, an input file is read into memory. Then all compressors are used to compress and decompress the file, and the decompressed output is verified. This approach has the big advantage of using the same compiler with the same optimizations for all compressors. The disadvantage is that it requires the source code of each compressor (therefore Slug and lzturbo are not included).

Status

Usage

usage: lzbench [options] input [input2] [input3]

where [input] is a file or a directory and [options] are:
 -b#   set block/chunk size to # KB (default = MIN(filesize,1747626 KB))
 -c#   sort results by column # (1=algname, 2=ctime, 3=dtime, 4=comprsize)
 -e#   #=compressors separated by '/' with parameters specified after ',' (deflt=fast)
 -iX,Y set min. number of compression and decompression iterations (default = 1, 1)
 -j    join files in memory but compress them independently (for many small files)
 -l    list of available compressors and aliases
 -m#   set memory limit to # MB (default = no limit)
 -o#   output text format 1=Markdown, 2=text, 3=text+origSize, 4=CSV (default = 2)
 -p#   print time for all iterations: 1=fastest 2=average 3=median (default = 1)
 -r    operate recursively on directories
 -s#   use only compressors with compression speed over # MB (default = 0 MB)
 -tX,Y set min. time in seconds for compression and decompression (default = 1, 2)
 -v    disable progress information
 -x    disable real-time process priority
 -z    show (de)compression times instead of speed

Example usage:
  lzbench -ezstd filename = selects all levels of zstd
  lzbench -ebrotli,2,5/zstd filename = selects levels 2 & 5 of brotli and zstd
  lzbench -t3 -u5 fname = 3 sec compression and 5 sec decompression loops
  lzbench -t0 -u0 -i3 -j5 -ezstd fname = 3 compression and 5 decompression iter.
  lzbench -t0u0i3j5 -ezstd fname = the same as above with aggregated parameters
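One more sketch, combining the documented options above (dirname is a placeholder directory):
  lzbench -r -o4 -c4 -ezstd dirname > results.csv = recurse into dirname and write CSV output sorted by compressed size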

Compilation

For Linux/MacOS/MinGW (Windows):

make

For 32-bit compilation:

make BUILD_ARCH=32-bit

By default, linking is dynamic on Linux and static on Windows. This can be changed with make BUILD_STATIC=0/1.

To remove one of the compressors, add -DBENCH_REMOVE_XXX to DEFINES in the Makefile (e.g. DEFINES += -DBENCH_REMOVE_LZ4 to remove LZ4). You also have to remove the corresponding *.o files (e.g. lz4/lz4.o and lz4/lz4hc.o).
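For example, a minimal sketch of those two steps for LZ4, using the values from the paragraph above:

  # in the Makefile
  DEFINES += -DBENCH_REMOVE_LZ4

  # then drop the stale objects and relink
  rm -f lz4/lz4.o lz4/lz4hc.o
  make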

lzbench was tested with:

  • Ubuntu: gcc 4.8 (both 32-bit and 64-bit), 4.9, 5 (32-bit and 64-bit), 6 (32-bit and 64-bit), 7, 8, 9 and clang 3.5, 3.6, 3.8, 3.9, 4.0, 5.0, 6.0, 7, 8, 9
  • MacOS: Apple LLVM version 9.1.0
  • MinGW (Windows): gcc 5.3 (32-bit), gcc 6.2 (both 32-bit and 64-bit), gcc 9.1

Supported compressors

Warning: some of the compressors listed here have security issues and/or are no longer maintained. For information about the security of the various compressors, see the CompFuzz Results page.

CUDA support

If CUDA is available, lzbench supports additional compressors:

  • cudaMemcpy - similar to the reference memcpy benchmark, using GPU memory
  • nvcomp 1.2.2 LZ4 GPU-only compressor

The directory where the CUDA compiler and libraries are available can be passed to make via the CUDA_BASE variable, e.g.:

make CUDA_BASE=/usr/local/cuda

Benchmarks

The following results were obtained with lzbench 1.8 using the -t16,16 -eall options on one core of an Intel Core i7-8700K, Ubuntu 18.04.3 64-bit, and clang 9.0.1, with "silesia.tar", which contains the tarred files of the Silesia compression corpus. The results sorted by ratio are available here.
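The table was produced with a command along these lines (silesia.tar is the tarred corpus mentioned above):

  lzbench -t16,16 -eall silesia.tar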

Compressor name Compress. Decompress. Compr. size Ratio
memcpy 10362 MB/s 10790 MB/s 211947520 100.00
blosclz 2.0.0 -1 6485 MB/s 7959 MB/s 211947520 100.00
blosclz 2.0.0 -3 1073 MB/s 5909 MB/s 199437330 94.10
blosclz 2.0.0 -6 412 MB/s 1083 MB/s 137571765 64.91
blosclz 2.0.0 -9 403 MB/s 1037 MB/s 135557850 63.96
brieflz 1.2.0 -1 197 MB/s 431 MB/s 81138803 38.28
brieflz 1.2.0 -3 108 MB/s 436 MB/s 75550736 35.65
brieflz 1.2.0 -6 19 MB/s 468 MB/s 67208420 31.71
brieflz 1.2.0 -8 0.46 MB/s 473 MB/s 64912139 30.63
brotli 2019-10-01 -0 420 MB/s 419 MB/s 78433298 37.01
brotli 2019-10-01 -2 154 MB/s 485 MB/s 68060686 32.11
brotli 2019-10-01 -5 35 MB/s 520 MB/s 59568603 28.11
brotli 2019-10-01 -8 10 MB/s 533 MB/s 57140168 26.96
brotli 2019-10-01 -11 0.63 MB/s 451 MB/s 50412404 23.79
bzip2 1.0.8 -1 18 MB/s 52 MB/s 60484813 28.54
bzip2 1.0.8 -5 16 MB/s 44 MB/s 55724395 26.29
bzip2 1.0.8 -9 15 MB/s 41 MB/s 54572811 25.75
crush 1.0 -0 53 MB/s 413 MB/s 73064603 34.47
crush 1.0 -1 6.11 MB/s 455 MB/s 66494412 31.37
crush 1.0 -2 0.82 MB/s 468 MB/s 63746223 30.08
csc 2016-10-13 -1 21 MB/s 73 MB/s 56201092 26.52
csc 2016-10-13 -3 9.38 MB/s 71 MB/s 53477914 25.23
csc 2016-10-13 -5 3.86 MB/s 77 MB/s 49801577 23.50
density 0.14.2 -1 2214 MB/s 2677 MB/s 133042166 62.77
density 0.14.2 -2 933 MB/s 1433 MB/s 101651444 47.96
density 0.14.2 -3 432 MB/s 529 MB/s 87649866 41.35
fastlz 0.1 -1 341 MB/s 806 MB/s 104628084 49.37
fastlz 0.1 -2 368 MB/s 811 MB/s 100906072 47.61
fastlzma2 1.0.1 -1 23 MB/s 90 MB/s 59030954 27.85
fastlzma2 1.0.1 -3 11 MB/s 94 MB/s 54023837 25.49
fastlzma2 1.0.1 -5 7.44 MB/s 103 MB/s 51209571 24.16
fastlzma2 1.0.1 -8 5.18 MB/s 103 MB/s 49126740 23.18
fastlzma2 1.0.1 -10 3.99 MB/s 105 MB/s 48666065 22.96
gipfeli 2016-07-13 403 MB/s 663 MB/s 87931759 41.49
libdeflate 1.3 -1 201 MB/s 865 MB/s 73318371 34.59
libdeflate 1.3 -3 161 MB/s 912 MB/s 70668968 33.34
libdeflate 1.3 -6 99 MB/s 924 MB/s 67928189 32.05
libdeflate 1.3 -9 16 MB/s 898 MB/s 65701539 31.00
libdeflate 1.3 -12 7.39 MB/s 900 MB/s 64801629 30.57
lizard 1.0 -10 635 MB/s 4173 MB/s 103402971 48.79
lizard 1.0 -12 179 MB/s 3955 MB/s 86232422 40.69
lizard 1.0 -15 85 MB/s 4081 MB/s 81187330 38.31
lizard 1.0 -19 4.60 MB/s 4043 MB/s 77416400 36.53
lizard 1.0 -20 481 MB/s 2985 MB/s 96924204 45.73
lizard 1.0 -22 149 MB/s 2904 MB/s 84866725 40.04
lizard 1.0 -25 18 MB/s 2853 MB/s 75867915 35.80
lizard 1.0 -29 2.07 MB/s 2697 MB/s 68694227 32.41
lizard 1.0 -30 453 MB/s 1414 MB/s 85727429 40.45
lizard 1.0 -32 193 MB/s 1641 MB/s 78652654 37.11
lizard 1.0 -35 95 MB/s 2279 MB/s 74563583 35.18
lizard 1.0 -39 4.37 MB/s 2475 MB/s 69807522 32.94
lizard 1.0 -40 354 MB/s 1497 MB/s 80843049 38.14
lizard 1.0 -42 131 MB/s 1621 MB/s 73350988 34.61
lizard 1.0 -45 17 MB/s 1810 MB/s 67317588 31.76
lizard 1.0 -49 1.95 MB/s 1729 MB/s 60679215 28.63
lz4 1.9.2 737 MB/s 4448 MB/s 100880800 47.60
lz4fast 1.9.2 -3 838 MB/s 4423 MB/s 107066190 50.52
lz4fast 1.9.2 -17 1201 MB/s 4632 MB/s 131732802 62.15
lz4hc 1.9.2 -1 131 MB/s 4071 MB/s 83803769 39.54
lz4hc 1.9.2 -4 81 MB/s 4210 MB/s 79807909 37.65
lz4hc 1.9.2 -9 33 MB/s 4378 MB/s 77884448 36.75
lz4hc 1.9.2 -12 11 MB/s 4427 MB/s 77262620 36.45
lzf 3.6 -0 400 MB/s 869 MB/s 105682088 49.86
lzf 3.6 -1 398 MB/s 914 MB/s 102041092 48.14
lzfse 2017-03-08 90 MB/s 934 MB/s 67624281 31.91
lzg 1.0.10 -1 91 MB/s 653 MB/s 108553667 51.22
lzg 1.0.10 -4 53 MB/s 655 MB/s 95930551 45.26
lzg 1.0.10 -6 29 MB/s 702 MB/s 89490220 42.22
lzg 1.0.10 -8 9.30 MB/s 762 MB/s 83606901 39.45
lzham 1.0 -d26 -0 11 MB/s 271 MB/s 64089870 30.24
lzham 1.0 -d26 -1 2.98 MB/s 340 MB/s 54740589 25.83
lzjb 2010 394 MB/s 601 MB/s 122671613 57.88
lzlib 1.11 -0 36 MB/s 61 MB/s 63847386 30.12
lzlib 1.11 -3 6.81 MB/s 69 MB/s 56320674 26.57
lzlib 1.11 -6 2.82 MB/s 74 MB/s 49777495 23.49
lzlib 1.11 -9 1.82 MB/s 76 MB/s 48296889 22.79
lzma 19.00 -0 34 MB/s 80 MB/s 64013917 30.20
lzma 19.00 -2 25 MB/s 91 MB/s 58867911 27.77
lzma 19.00 -4 14 MB/s 95 MB/s 57201645 26.99
lzma 19.00 -5 3.28 MB/s 103 MB/s 49710307 23.45
lzma 19.00 -9 2.66 MB/s 107 MB/s 48707450 22.98
lzmat 1.01 38 MB/s 479 MB/s 76485353 36.09
lzo1 2.10 -1 308 MB/s 799 MB/s 106474519 50.24
lzo1 2.10 -99 123 MB/s 857 MB/s 94946129 44.80
lzo1a 2.10 -1 309 MB/s 811 MB/s 104202251 49.16
lzo1a 2.10 -99 121 MB/s 869 MB/s 92666265 43.72
lzo1b 2.10 -1 257 MB/s 805 MB/s 97036087 45.78
lzo1b 2.10 -3 255 MB/s 821 MB/s 94044578 44.37
lzo1b 2.10 -6 244 MB/s 823 MB/s 91382355 43.12
lzo1b 2.10 -9 186 MB/s 816 MB/s 89261884 42.12
lzo1b 2.10 -99 126 MB/s 839 MB/s 85653376 40.41
lzo1b 2.10 -999 12 MB/s 945 MB/s 76594292 36.14
lzo1c 2.10 -1 269 MB/s 812 MB/s 99550904 46.97
lzo1c 2.10 -3 262 MB/s 829 MB/s 96716153 45.63
lzo1c 2.10 -6 211 MB/s 819 MB/s 93303623 44.02
lzo1c 2.10 -9 169 MB/s 820 MB/s 91040386 42.95
lzo1c 2.10 -99 110 MB/s 828 MB/s 88112288 41.57
lzo1c 2.10 -999 24 MB/s 878 MB/s 80396741 37.93
lzo1f 2.10 -1 244 MB/s 793 MB/s 99743329 47.06
lzo1f 2.10 -999 21 MB/s 833 MB/s 80890206 38.17
lzo1x 2.10 -1 680 MB/s 868 MB/s 100572537 47.45
lzo1x 2.10 -11 735 MB/s 893 MB/s 106604629 50.30
lzo1x 2.10 -12 717 MB/s 875 MB/s 103238859 48.71
lzo1x 2.10 -15 699 MB/s 871 MB/s 101462094 47.87
lzo1x 2.10 -999 8.76 MB/s 827 MB/s 75301903 35.53
lzo1y 2.10 -1 674 MB/s 863 MB/s 101258318 47.78
lzo1y 2.10 -999 8.87 MB/s 822 MB/s 75503849 35.62
lzo1z 2.10 -999 8.67 MB/s 814 MB/s 75061331 35.42
lzo2a 2.10 -999 27 MB/s 667 MB/s 82809337 39.07
lzrw 15-Jul-1991 -1 317 MB/s 646 MB/s 113761625 53.67
lzrw 15-Jul-1991 -3 381 MB/s 726 MB/s 105424168 49.74
lzrw 15-Jul-1991 -4 392 MB/s 630 MB/s 100131356 47.24
lzrw 15-Jul-1991 -5 150 MB/s 677 MB/s 90818810 42.85
lzsse2 2019-04-18 -1 24 MB/s 3276 MB/s 87976095 41.51
lzsse2 2019-04-18 -6 10 MB/s 3741 MB/s 75837101 35.78
lzsse2 2019-04-18 -12 9.74 MB/s 3754 MB/s 75829973 35.78
lzsse2 2019-04-18 -16 9.82 MB/s 3762 MB/s 75829973 35.78
lzsse4 2019-04-18 -1 21 MB/s 3965 MB/s 82542106 38.94
lzsse4 2019-04-18 -6 10 MB/s 4272 MB/s 76118298 35.91
lzsse4 2019-04-18 -12 10 MB/s 4272 MB/s 76113017 35.91
lzsse4 2019-04-18 -16 10 MB/s 4291 MB/s 76113017 35.91
lzsse8 2019-04-18 -1 19 MB/s 4166 MB/s 81866245 38.63
lzsse8 2019-04-18 -6 10 MB/s 4503 MB/s 75469717 35.61
lzsse8 2019-04-18 -12 9.86 MB/s 4491 MB/s 75464339 35.61
lzsse8 2019-04-18 -16 9.90 MB/s 4461 MB/s 75464339 35.61
lzvn 2017-03-08 73 MB/s 1223 MB/s 80814609 38.13
pithy 2011-12-24 -0 647 MB/s 2084 MB/s 103072463 48.63
pithy 2011-12-24 -3 597 MB/s 2083 MB/s 97255186 45.89
pithy 2011-12-24 -6 483 MB/s 2221 MB/s 92090898 43.45
pithy 2011-12-24 -9 400 MB/s 2256 MB/s 90360813 42.63
quicklz 1.5.0 -1 550 MB/s 715 MB/s 94720562 44.69
quicklz 1.5.0 -2 286 MB/s 708 MB/s 84555627 39.89
quicklz 1.5.0 -3 59 MB/s 1069 MB/s 81822241 38.60
shrinker 0.1 985 MB/s 3180 MB/s 172535778 81.40
slz_zlib 1.0.0 -1 301 MB/s 380 MB/s 99657958 47.02
slz_zlib 1.0.0 -2 297 MB/s 378 MB/s 96863094 45.70
slz_zlib 1.0.0 -3 293 MB/s 379 MB/s 96187780 45.38
snappy 2019-09-30 591 MB/s 1868 MB/s 102146767 48.19
tornado 0.6a -1 437 MB/s 520 MB/s 107381846 50.66
tornado 0.6a -2 300 MB/s 488 MB/s 90076660 42.50
tornado 0.6a -3 186 MB/s 301 MB/s 72662044 34.28
tornado 0.6a -4 133 MB/s 310 MB/s 70513617 33.27
tornado 0.6a -5 51 MB/s 195 MB/s 64129604 30.26
tornado 0.6a -6 34 MB/s 195 MB/s 62364583 29.42
tornado 0.6a -7 16 MB/s 194 MB/s 59026325 27.85
tornado 0.6a -10 5.73 MB/s 192 MB/s 57588241 27.17
tornado 0.6a -13 6.94 MB/s 202 MB/s 55614072 26.24
tornado 0.6a -16 2.15 MB/s 207 MB/s 53257046 25.13
ucl_nrv2b 1.03 -1 58 MB/s 322 MB/s 81703168 38.55
ucl_nrv2b 1.03 -6 20 MB/s 375 MB/s 73902185 34.87
ucl_nrv2b 1.03 -9 2.09 MB/s 407 MB/s 71031195 33.51
ucl_nrv2d 1.03 -1 59 MB/s 333 MB/s 81461976 38.43
ucl_nrv2d 1.03 -6 21 MB/s 386 MB/s 73757673 34.80
ucl_nrv2d 1.03 -9 2.09 MB/s 422 MB/s 70053895 33.05
ucl_nrv2e 1.03 -1 59 MB/s 330 MB/s 81195560 38.31
ucl_nrv2e 1.03 -6 21 MB/s 391 MB/s 73302012 34.58
ucl_nrv2e 1.03 -9 2.13 MB/s 429 MB/s 69645134 32.86
wflz 2015-09-16 305 MB/s 1183 MB/s 109605264 51.71
xpack 2016-06-02 -1 171 MB/s 890 MB/s 71090065 33.54
xpack 2016-06-02 -6 43 MB/s 1086 MB/s 62213845 29.35
xpack 2016-06-02 -9 17 MB/s 1116 MB/s 61240928 28.89
xz 5.2.4 -0 24 MB/s 70 MB/s 62579435 29.53
xz 5.2.4 -3 6.76 MB/s 84 MB/s 55745125 26.30
xz 5.2.4 -6 2.95 MB/s 89 MB/s 49195929 23.21
xz 5.2.4 -9 2.62 MB/s 88 MB/s 48745306 23.00
yalz77 2015-09-19 -1 105 MB/s 578 MB/s 93952728 44.33
yalz77 2015-09-19 -4 56 MB/s 539 MB/s 87392632 41.23
yalz77 2015-09-19 -8 35 MB/s 532 MB/s 85153287 40.18
yalz77 2015-09-19 -12 24 MB/s 518 MB/s 84050625 39.66
yappy 2014-03-22 -1 165 MB/s 2809 MB/s 105750956 49.89
yappy 2014-03-22 -10 128 MB/s 2969 MB/s 100018673 47.19
yappy 2014-03-22 -100 96 MB/s 3001 MB/s 98672514 46.56
zlib 1.2.11 -1 119 MB/s 383 MB/s 77259029 36.45
zlib 1.2.11 -6 35 MB/s 407 MB/s 68228431 32.19
zlib 1.2.11 -9 14 MB/s 404 MB/s 67644548 31.92
zling 2018-10-12 -0 75 MB/s 216 MB/s 62990590 29.72
zling 2018-10-12 -1 67 MB/s 221 MB/s 62022546 29.26
zling 2018-10-12 -2 60 MB/s 225 MB/s 61503093 29.02
zling 2018-10-12 -3 53 MB/s 226 MB/s 60999828 28.78
zling 2018-10-12 -4 46 MB/s 226 MB/s 60626768 28.60
zstd 1.4.3 -1 480 MB/s 1203 MB/s 73508823 34.68
zstd 1.4.3 -2 356 MB/s 1067 MB/s 69594511 32.84
zstd 1.4.3 -5 104 MB/s 932 MB/s 63993747 30.19
zstd 1.4.3 -8 46 MB/s 1055 MB/s 60757793 28.67
zstd 1.4.3 -11 20 MB/s 1001 MB/s 59239357 27.95
zstd 1.4.3 -15 7.12 MB/s 1024 MB/s 57167422 26.97
zstd 1.4.3 -18 3.58 MB/s 912 MB/s 53690572 25.33
zstd 1.4.3 -22 2.28 MB/s 865 MB/s 52738312 24.88


Contributors

chipturner, coderobe, data-man, daxtens, fwyzard, inikep, jhollowe, jibsen, jinfeihan57, juliankunkel, nolange, svpv, tansy, travisdowns, wtarreau, xuchunmei000


lzbench's Issues

Consider disabling pithy

As part of my efforts to fuzz compression libraries, I recently tested pithy and found some vulnerabilities. The author has not responded to email about the issue for over a month, nor to the pull request filed against the project, so AFAICT it's abandoned.

I'm not sure whether it is appropriate for this project to keep supporting pithy. It obviously shouldn't be used in production code, but lzbench isn't really intended for production code… I guess the distinction is whether lzbench is intended to help people choose a compression library for their code (in which case it would be better not to include pithy), or as a tool for people writing compression libraries (in which case it would be better to include it).

Feedback

Hi inikep,
many thanks for new releases, I appreciate your work, will play with v1.8 next week and will share results on i7-3630QM @3.4GHz turbo and i5-7200U @3.1GHz...

One more entry for the "known issues" of various compressors: yappy.

I've noticed that yappy gives a decompression error if I build it on an ARM board. The decompression test fails and lzbench reports ERROR. It works fine on x86-64, though. I guess the README entry could look like this:

diff --git a/README.md b/README.md
index a9bd5c2..8402aad 100644
--- a/README.md
+++ b/README.md
@@ -89,7 +89,7 @@ ucl 1.03
 wflz 2015-09-16 (WARNING: it can throw SEGFAULT compiled with gcc 4.9+ -O3)
 xz 5.2.2
 yalz77 2015-09-19
-yappy 2014-03-22
+yappy 2014-03-22 (WARNING: fails to decompress properly on ARM)
 zlib 1.2.8
 zling 2015-09-15
 zstd v0.4.1

LZSSE requirement check is incomplete

Currently, the LZSSE code is built if the CPU supports SSE4.1 intrinsics, which is not enough if the CPU is running in 32-bit mode (e.g., _mm_cvtsi64_si128 is 64-bit only). Consider the patch below, which strengthens the check:

-# LZSSE requires gcc with support of __SSE4_1__
-ifeq ($(shell echo|$(CC) -dM -E - -march=native|grep -c SSE4_1), 0)
+# LZSSE requires compiler with __SSE4_1__ support and 64-bit CPU
+ifneq ($(shell echo|$(CC) -dM -E - -march=native|egrep -c '__(SSE4_1|x86_64)__'), 2)
        DONT_BUILD_LZSSE ?= 1
 endif
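The same probe can be run by hand to see what a given toolchain reports (a sketch using gcc; a count of 2 means both __SSE4_1__ and __x86_64__ are defined, so LZSSE can be built):

  echo | gcc -dM -E - -march=native | egrep -c '__(SSE4_1|x86_64)__'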

lzbench reports weird results when built on ARMv7hf 32 bit (wrong compr. size numbers)

Configuration:
Ubuntu @ ARMv7, hardfloat ABI, 32-bit LE CPU.
lzbench: current version, at commit 5937923
Compiler: GCC 4.8.1 (Linaro) - armhf abi, target is ARMv7 (Allwinner A10 based board).
Flags: defaults from makefile, I've only changed the following lines to match my system better:

  • BUILD_SYSTEM = linux
  • BUILD_ARCH = 32-bit

Problem description:

  • Lzbench shows some insane numbers instead of Compr. size, which makes reading results hard.
  • Works fine otherwise, compression ratio seems to be adequate, etc.

Note:
The reported compressed size looks pretty close to 2^32, which suggests an integer overflow or some other bug related to handling (32-bit) integers. Keep in mind that on ARM, and in general, the sizes of int, long, pointers, etc. vary between targets; you can't assume a particular width unless you use something like uint32_t/uint64_t from the modern C standards.
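A minimal sketch of the suspected wrap-around (hypothetical variable names, not lzbench code): on ARMv7, unsigned long is 32 bits wide, so an accumulated total silently wraps modulo 2^32, while a fixed-width uint64_t does not:

  #include <stdint.h>
  #include <inttypes.h>
  #include <stdio.h>

  int main(void) {
      unsigned long narrow = 0;   /* 32 bits on ARMv7 */
      uint64_t      wide   = 0;   /* 64 bits everywhere */
      for (int i = 0; i < 5; i++) {        /* five ~1 GB chunks */
          narrow += 1000000000UL;
          wide   += 1000000000ULL;
      }
      printf("narrow=%lu wide=%" PRIu64 "\n", narrow, wide);
      /* On a 32-bit target: narrow=705032704 (wrapped), wide=5000000000 */
      return 0;
  }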

And by the way, when it comes to 64 bits: 64-bit ARMs exist and their popularity is rising, so you'd better not expect every 64-bit CPU to be x86. Some could be ARMs, etc.

Bonus:
If you want test results from a rather strange machine, I can launch whatever benchmark you want, subject to the machine's limits (only 1 GB of RAM for everything, no swap, the CPU and RAM aren't exactly the fastest, memcpy tops out at around 256 MB/s).

Example output on just a 512 KB file follows:

$ ./lzbench -i15 tst 
lzbench 0.8 (32-bit Linux)   Assembled by P.Skibinski
| Compressor name             | Compression| Decompress.| Compr. size | Ratio |
| memcpy                      |   256 MB/s |   256 MB/s |  4295491584 |100.00 |
| density 0.12.5 beta level 1 |    32 MB/s |    46 MB/s |  4295404942 | 83.47 |
| density 0.12.5 beta level 2 |    12 MB/s |    15 MB/s |  4295370026 | 76.81 |
| density 0.12.5 beta level 3 |  6.65 MB/s |  6.83 MB/s |  4295348558 | 72.72 |
| fastlz 0.1 level 1          |    36 MB/s |    85 MB/s |  4295318107 | 66.91 |
| fastlz 0.1 level 2          |    30 MB/s |    85 MB/s |  4295314572 | 66.24 |
| lz4 r131                    |    32 MB/s |   128 MB/s |  4295315866 | 66.48 |
| lz4fast r131 level 3        |    51 MB/s |   170 MB/s |  4295343773 | 71.81 |
| lz4fast r131 level 17       |   128 MB/s |   256 MB/s |  4295408636 | 84.18 |
| lz5 r131b                   |  7.31 MB/s |    85 MB/s |  4295302981 | 64.03 |
| lzf 3.6 level 0             |    17 MB/s |    85 MB/s |  4295315797 | 66.47 |
| lzf 3.6 level 1             |    16 MB/s |    85 MB/s |  4295313702 | 66.07 |
| lzjb 2010                   |    32 MB/s |    73 MB/s |  4295387745 | 80.19 |
| lzo 2.09 level 1            |    19 MB/s |   102 MB/s |  4295315275 | 66.37 |
| lzo 2.09 level 1001         |    24 MB/s |   102 MB/s |  4295311518 | 65.66 |
| lzo 2.09 level 2001         |    22 MB/s |   102 MB/s |  4295310634 | 65.49 |
| lzo 2.09 level 3001         |    73 MB/s |   170 MB/s |  4295383550 | 79.39 |
| lzo 2.09 level 4001         |    73 MB/s |   170 MB/s |  4295384028 | 79.49 |
| lzrw 15-Jul-1991 level 1    |    34 MB/s |    64 MB/s |  4295351851 | 73.35 |
| lzrw 15-Jul-1991 level 2    |    32 MB/s |    73 MB/s |  4295350191 | 73.03 |
| lzrw 15-Jul-1991 level 3    |    34 MB/s |    64 MB/s |  4295330009 | 69.18 |
| lzrw 15-Jul-1991 level 4    |    34 MB/s |    39 MB/s |  4295324550 | 68.14 |
| lzrw 15-Jul-1991 level 5    |    11 MB/s |    39 MB/s |  4295314410 | 66.21 |
| pithy 2011-12-24 level 0    |    73 MB/s |   170 MB/s |  4295338432 | 70.79 |
| pithy 2011-12-24 level 3    |    46 MB/s |   170 MB/s |  4295330983 | 69.37 |
| pithy 2011-12-24 level 6    |    17 MB/s |   128 MB/s |  4295322934 | 67.83 |
| pithy 2011-12-24 level 9    |    11 MB/s |   128 MB/s |  4295319890 | 67.25 |
| quicklz 1.5.0 level 1       |    46 MB/s |    42 MB/s |  4295312758 | 65.89 |
| quicklz 1.5.0 level 2       |    17 MB/s |    34 MB/s |  4295295219 | 62.55 |
| shrinker 0.1                |    25 MB/s |   128 MB/s |  4295305322 | 64.47 |
| snappy 1.1.3                |    42 MB/s |   128 MB/s |  4295326217 | 68.46 |
| tornado 0.6a level 1        |    21 MB/s |    39 MB/s |  4295367013 | 76.24 |
| tornado 0.6a level 2        |    13 MB/s |    34 MB/s |  4295316733 | 66.65 |
| tornado 0.6a level 3        |  4.83 MB/s |    12 MB/s |  4295259130 | 55.66 |
| zstd v0.3.6                 |    14 MB/s |    36 MB/s |  4295282462 | 60.11 |
done... (15 iterations, chunk_size=512 KB, min_compr_speed=0 MB)

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/arm-linux-gnueabihf/4.8/lto-wrapper
Target: arm-linux-gnueabihf
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.8.1-10ubuntu9' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-libitm --disable-libquadmath --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-armhf/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-armhf --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-armhf --with-arch-directory=arm --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --enable-multilib --disable-sjlj-exceptions --with-arch=armv7-a --with-fpu=vfpv3-d16 --with-float=hard --with-mode=thumb --disable-werror --enable-checking=release --build=arm-linux-gnueabihf --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf
Thread model: posix
gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) 

Very bad benchmark accuracy on small files or few iterations on my hardware.

lzbench: current version, at commit 5937923
OS: Xubuntu 64-bit, 15.10.
Compiler: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
Flags: defaults from makefile, only BUILD_SYSTEM=linux uncommented.

To reproduce:
Try to run the benchmark on a small file, say 100 KB long.

Result:
The reported memcpy speed is laughably low and nowhere near what the hardware actually does, and half of the benchmarks show identical results.
It is pretty clear the benchmark is limited by time-measurement accuracy rather than by anything else.

Extra info:
Using a higher number of iterations:

  • Somehow this seems to have no major effect on the weird memcpy speed reported; it only depends on file size. Strange.
  • It could be a pain, because codecs and even individual codec modes can have drastically different speeds. Doing lz4fast 20 times is okay, but doing high-level brotli compression 20 times is a great test of patience.
  • It still results in some strange and skewed results, with half of the codecs showing the same speeds, likely hitting time-measurement accuracy issues rather than anything else.

Idea: an "automatic number of passes" (and, ideally, enable it by default). Should an algorithm's run take less than, say, 1 second, it should be considered inaccurate and re-tested over several runs, until a total run time of about 1 second or more has been reached. It makes little sense to run many iterations of slow algorithms, since they already get a fairly accurate result from a single run: if one run takes 10 seconds, jitter is negligible and yet another wasted 10 seconds would not improve anything. On the other hand, if an algorithm is much faster, it can show odd results that are probably a time measurement/rounding error rather than anything else. It's unlikely that half of the codecs genuinely share the very same "20 MB/s" performance; they are supposed to have different performance, and they do if I use a somewhat larger file.

On a side note, large files tend to crash some strong algorithms like LZMA on low-RAM systems, probably from running out of memory (though I do not see the OOM killer). So I may want to limit myself to files of about 2 MB or below on such systems; yet then I can't achieve reasonable accuracy due to timing errors.

[Feature Request] Include memory consumption in benchmark results

Hi, some codecs' high compression levels may consume a lot of memory. Is it possible to include memory consumption in the benchmark results?

Also, -z reports (de)compression time instead of speed. The benchmark process can be interrupted by other processes; is it possible to also report CPU time, which better indicates CPU consumption?
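Until something like this is built in, one external workaround sketch (not an lzbench feature) is to wrap a single run in GNU time, which reports peak resident memory and CPU time separately from wall-clock time:

  /usr/bin/time -v ./lzbench -ezstd,19 silesia.tar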

lzbench gets killed on low memory devices without any error message

I ran lzbench (master branch) on a NanoPi NEO2 aarch64 board to test all zstd compression levels on the Silesia compression corpus.

Everything runs fine until I get to level 22; at that point lzbench gets killed without any error message.

Running zstd on the same file with compression level 22 produces an error message.

I would appreciate it if lzbench could print an error message for this low-memory condition.

I would also appreciate your comments on estimating the memory requirements of each compression level.

Here I would like to share some output logs:

otlabs@nanopineo2:~$ free
              total        used        free      shared  buff/cache   available
Mem:        1011064       80988      723160       11536      206916      846836
Swap:        505528       13564      491964

otlabs@nanopineo2:~/dev/lzbench$ zstd --ultra -22 silesia.tar
zstd: error 11 : Allocation error : not enough memory

otlabs@nanopineo2:~/dev/lzbench$ zstd --ultra -21 silesia.tar
silesia.tar     : 24.88%   (211957760 => 52743949 bytes, silesia.tar.zst) 

otlabs@nanopineo2:~/dev/lzbench$ ./lzbench -ezstd,22 silesia.tar
lzbench 1.8 (64-bit Linux)   Assembled by P.Skibinski
Compressor name         Compress. Decompress. Compr. size  Ratio Filename
memcpy                    675 MB/s   769 MB/s   211947520 100.00 silesia.tar
Killed

otlabs@nanopineo2:~/dev/lzbench$ ./lzbench -ezstd,22 lzbench18_sorted.md 
lzbench 1.8 (64-bit Linux)   Assembled by P.Skibinski
Compressor name         Compress. Decompress. Compr. size  Ratio Filename
memcpy                   3429 MB/s  4788 MB/s       14940 100.00 lzbench18_sorted.md
zstd 1.4.4 -22           0.59 MB/s   104 MB/s        3573  23.92 lzbench18_sorted.md
done... (cIters=1 dIters=1 cTime=1.0 dTime=2.0 chunkSize=1706MB cSpeed=0MB)
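A possible mitigation sketch, using the -m option documented in Usage above (smaller chunks generally lower per-compressor memory, though exact requirements depend on the codec and level):

  ./lzbench -m256 -ezstd,22 silesia.tar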

When running against a directory, memcpy only runs once

When using

./lzbench -r -elz4 ../corpus/silesia

To run against a directory, memcpy only runs once:

lzbench 1.5 (64-bit Linux)   Assembled by P.Skibinski
Compressor name         Compress. Decompress. Compr. size  Ratio Filename
memcpy                  11571 MB/s 11643 MB/s    10192446 100.00 ../corpus/silesia/dickens
lz4 1.7.3                 267 MB/s  1830 MB/s     6428742  63.07 ../corpus/silesia/dickens
lz4 1.7.3                 449 MB/s  2207 MB/s     7716839  35.72 ../corpus/silesia/samba
lz4 1.7.3                 290 MB/s  1660 MB/s    20139988  48.58 ../corpus/silesia/webster
lz4 1.7.3                 437 MB/s  1978 MB/s    26435667  51.61 ../corpus/silesia/mozilla
lz4 1.7.3                1001 MB/s  5196 MB/s     8390195  99.01 ../corpus/silesia/x-ray
lz4 1.7.3                 376 MB/s  1783 MB/s     5256666  52.12 ../corpus/silesia/osdb
lz4 1.7.3                 260 MB/s  1817 MB/s     3181387  48.00 ../corpus/silesia/reymont
lz4 1.7.3                 360 MB/s  1685 MB/s     4338918  70.53 ../corpus/silesia/ooffice
lz4 1.7.3                 375 MB/s  2230 MB/s     6790273  93.63 ../corpus/silesia/sao
lz4 1.7.3                 565 MB/s  2078 MB/s     1227495  22.96 ../corpus/silesia/xml
lz4 1.7.3                 413 MB/s  2197 MB/s     5440937  54.57 ../corpus/silesia/mr
lz4 1.7.3                 695 MB/s  2731 MB/s     5533040  16.49 ../corpus/silesia/nci

Presumably the intent is for memcpy to run against all the files, like the other algos.

zstd and zstd_HC level 1 appear to be duplicate algos?

I gave lzbench a try on some files and eventually noticed that with -eall, the zstd_HC level 1 result always exactly matches zstd on everything I've tested. Both speeds and compressed size are identical. At the end of the day it looks very much like the same algorithm tested twice.

It typically looks like this:

zstd_HC v0.3.6 level 1          53 MB/s    165 MB/s      3551054  28.89
zstd v0.3.6                     52 MB/s    165 MB/s      3551054  28.89

...so my guess would be zstd HC level 1 isn't really necessary in -eall?

[dev] GCC now gives warning about types during compile.

Both GCC 5.2 on x86-64 and 4.8 on ARM now give a warning when I build -dev.
I'm on -dev, 949291c
Warning text is:

In file included from _lzbench/lzbench.cpp:26:0:
_lzbench/lzbench.cpp: In function ‘void print_stats(lzbench_params_t*, const compressor_desc_t*, int, std::vector<long unsigned int>&, std::vector<long unsigned int>&, uint32_t, uint32_t, bool)’:
_lzbench/lzbench.h:10:92: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ [-Wformat=]
 #define LZBENCH_DEBUG(level, fmt, args...) if (params->verbose >= level) printf(fmt, ##args)
                                                                                            ^
_lzbench/lzbench.cpp:116:42: note: in expansion of macro ‘LZBENCH_DEBUG’
     if (params->cspeed > insize/cnano) { LZBENCH_DEBUG(9, "%s FULL slower than %d MB/s\n", desc->name, insize/cnano); return; } 
                                          ^
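A sketch of the kind of fix the warning asks for (these are illustrative lines, not the author's applied patch): format the 64-bit argument with the <cinttypes> macro, or cast it to match the existing %d specifier.

  // sketch: use the 64-bit format macro from <cinttypes>
  LZBENCH_DEBUG(9, "%s FULL slower than %" PRIu64 " MB/s\n", desc->name, insize/cnano);
  // or keep %d and cast the argument to int
  LZBENCH_DEBUG(9, "%s FULL slower than %d MB/s\n", desc->name, (int)(insize/cnano));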

fails to compile on Fedora 32

gcc-10.0.1-0.11.fc32.x86_64

$ make
...
/usr/bin/ld: glza/GLZAdecode.o:(.bss+0x1b20bb): multiple definition of `prior_is_cap'; glza/GLZAencode.o:(.bss+0x21275): first defined here
/usr/bin/ld: glza/GLZAdecode.o:(.bss+0x1b20be): multiple definition of `cap_encoded'; glza/GLZAcompress.o:(.bss+0x1ec91): first defined here
/usr/bin/ld: glza/GLZAdecode.o:(.bss+0x1cf128): multiple definition of `symbol'; glza/GLZAencode.o:(.bss+0xd838): first defined here
/usr/bin/ld: glza/GLZAdecode.o:(.bss+0x1cf108): multiple definition of `symbol_to_move'; glza/GLZAencode.o:(.bss+0x1c1cc): first defined here
/usr/bin/ld: glza/GLZAdecode.o:(.bss+0x1cf10c): multiple definition of `symbol_index'; glza/GLZAencode.o:(.bss+0x1c1d4): first defined here
/usr/bin/ld: tornado/tor_test.o: in function `GetTempDir':
tor_test.cpp:(.text+0x4295): warning: the use of `tempnam' is dangerous, better use `mkstemp'
collect2: error: ld returned 1 exit status
make: *** [Makefile:310: lzbench] Error 1
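This looks like the effect of GCC 10 defaulting to -fno-common, which turns duplicated tentative definitions in the GLZA sources into hard multiple-definition link errors. A workaround sketch until the sources are fixed (assuming the Makefile's usual C flag variable; the name may differ): restore the pre-GCC-10 behaviour.

  # Makefile sketch
  CFLAGS += -fcommon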

LZMAT library could crash on compressing large files.

Configuration:
lzbench: commit 5937923
OS: Xubuntu 64-bit, 15.10.
Compiler: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
Flags: defaults from makefile, only BUILD_SYSTEM=linux uncommented.

Description:
Another day, another crash... to reproduce:

  1. Build lzbench with default compile flags on similar configuration.
  2. Try to compress some reasonably large file.
  3. Take a look at what happens, preferably under a debugger.

Result:
A crash can occur in the lzmat compressor if the file is large enough.
Files smaller than a few hundred KB or so do not cause the crash, but an attempt to compress a file larger than a megabyte crashes in lzmat_encode.
For example, an attempt to compress lzbench itself :) would fail.

Note:

  1. This algorithm is included in -eall, so for almost all reasonably large files lzbench just dies in the middle, which is rather unfortunate.
  2. It seems the author's page is down and I failed to find a reasonable way to file a bug.
Program received signal SIGSEGV, Segmentation fault.
0x000000000056e6da in lzmat_encode ()
(gdb) bt full
#0  0x000000000056e6da in lzmat_encode ()
No symbol table info available.
#1  0x00000000005ce28d in lzbench_lzmat_compress(char*, unsigned long, char*, unsigned long, unsigned long, unsigned long, unsigned long)
    ()
No symbol table info available.
#2  0x00000000005bfdff in lzbench_compress(long (*)(char*, unsigned long, char*, unsigned long, unsigned long, unsigned long, unsigned long), unsigned long, std::vector<unsigned long, std::allocator<unsigned long> >&, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned long, unsigned long, unsigned long) ()
No symbol table info available.
#3  0x00000000005c1661 in lzbench_test(compressor_desc_t const*, int, int, unsigned long, int, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, timespec, unsigned long, unsigned long, unsigned long) [clone .constprop.142] ()
No symbol table info available.
#4  0x00000000005c203d in lzbench_test_with_params(char*, int, unsigned long, int, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, timespec) ()
No symbol table info available.
#5  0x00000000005c1e0f in lzbench_test_with_params(char*, int, unsigned long, int, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, timespec) ()
No symbol table info available.
#6  0x00000000005c2422 in lzbenchmark(_IO_FILE*, char*, int, unsigned int, int) ()
No symbol table info available.
#7  0x0000000000401267 in main ()
No symbol table info available.

Feedback

Question #1:
When -eglza was used (in the command line below), the desktop froze (the Task Manager in the tray went to 100%) and I couldn't even kill the process?!

Question #2:

In previous .BAT files (that invoked lzbench.exe), %1 was expanded into the actual filename; in 1.7.1 the Filename column instead shows:

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>lzbench171 -c4 -j15 -o3 -ebrotli24,11/tornado,16/blosclz,9/brieflz/crush,2/csc,5/density,3/fastlz,2/gipfeli/zstd,12,22/zstd24,12,22/lzo1b,999/lzham,4/lzham24,4/libdeflate,12/lz4/lz4hc,10,12/lizard,19,29,39,49/lzf,1/lzfse/lzg,9/lzham,1/lzjb/lzlib,9/lzma,9/lzrw,5/lzsse2,17/lzsse4,17/lzsse8,17/lzvn/pithy,9/quicklz,3/snappy/slz_zlib,3/ucl_nrv2b,9/ucl_nrv2d,9/ucl_nrv2e,9/xpack,9/xz,9/yalz77,12/yappy,99/yappy,9999/zlib,9/zling,4/shrinker/wflz/lzmat Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt
lzbench 1.7.1 (64-bit Windows)   Assembled by P.Skibinski
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
memcpy                   5584 MB/s  5548 MB/s     5589889      5589889 100.00 1 files
...
done... (cIters=1 dIters=1 cTime=1.0 dTime=2.0 chunkSize=1706MB cSpeed=0MB)

The results sorted by column number 4:
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
csc 2016-10-13 -5        2.31 MB/s    36 MB/s     5589889      1546709  27.67 1 files
brotli24 2017-03-10 -11  0.34 MB/s   216 MB/s     5589889      1564776  27.99 1 files
...

Question #3:

How do I set 15 decompression iterations? I tried -j15, but the reported count is still 1, i.e. dIters=1:

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>lzbench171 -c4 -j15 -o3 -ebrotli24,11/tornado,16/blosclz,9/brieflz/crush,2/csc,5/density,3/fastlz,2/gipfeli/zstd,12,22/zstd24,12,22/lzo1b,999/lzham,4/lzham24,4/libdeflate,12/lz4/lz4hc,10,12/lizard,19,29,39,49/lzf,1/lzfse/lzg,9/lzham,1/lzjb/lzlib,9/lzma,9/lzrw,5/lzsse2,17/lzsse4,17/lzsse8,17/lzvn/pithy,9/quicklz,3/snappy/slz_zlib,3/ucl_nrv2b,9/ucl_nrv2d,9/ucl_nrv2e,9/xpack,9/xz,9/yalz77,12/yappy,99/yappy,9999/zlib,9/zling,4/shrinker/wflz/lzmat Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt
lzbench 1.7.1 (64-bit Windows)   Assembled by P.Skibinski
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
memcpy                   5584 MB/s  5548 MB/s     5589889      5589889 100.00 1 files
...
done... (cIters=1 dIters=1 cTime=1.0 dTime=2.0 chunkSize=1706MB cSpeed=0MB)

Question #4:

Could you provide a link with MinGW package which contains 'make'?
I tried the latest one with GCC 6.3.0 and put 'make.exe' into the bin folder, but there were errors during compilation:

'cc' not recognized ...
and others.

I wonder how such a BASIC and must-have process is not clear even to me let alone other users wanting to compile with 'make'.

And as always, one quick test with the latest 1.7.1 lzbench; my laptop with a Core 2 Q9550s @2.83GHz was used:

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>dir

03/19/2017  12:14 PM         9,816,348 lzbench171.exe
02/08/2017  12:48 AM             1,632 MokujIN GREEN 224 prompt.lnk
03/19/2017  01:06 PM           156,672 Nakamichi_Washigan+_(1xQWORD+1xXMM)_Intel_15.0_64bit_SSE41.exe
03/06/2017  06:55 PM         5,589,889 Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt
03/06/2017  10:12 PM         2,026,440 Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt.L17.LZSSE2
03/06/2017  10:12 PM         1,994,028 Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt.Nakamichi
03/19/2017  12:12 AM               640 _BENCH_a_file.BAT

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>_BENCH_a_file.BAT Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>"Nakamichi_Washigan+_(1xQWORD+1xXMM)_Intel_15.0_64bit_SSE41.exe" Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt.Nakamichi /bench
Nakamichi 'Washigan+', written by Kaze, based on Nobuo Ito's LZSS source, babealicious suggestion by m^2 enforced, muffinesque suggestion by Jim Dempsey enforced.
Note0: Nakamichi 'Eye-of-the-Eagle' is 100% FREE, licenseless that is.
Note1: Hamid Buzidi's LzTurbo ([a] FASTEST [Textual] Decompressor, Levels 19/39) retains kingship, his TurboBench (2016-Dec-26) proves the supremacy of LzTurbo, Turbo-Amazing!
Note2: Conor Stokes' LZSSE2 ([a] FASTEST Textual Decompressor, Level 17) is embedded, all credits along with many thanks go to him.
Note3: 'Washigan' predecessors are 'Okamigan', 'Zato', 'Tsubame', 'Tengu-Tsuyo' and 'Tengu'.
Note4: This compile can handle files up to 5120MB.
Note5: The matchfinder/memmem() is 'Railgun_BawBaw_reverse'.
Note6: Instead of '_mm_loadu_si128' '_mm_lddqu_si128' is used.
Note7: The lookahead 'Tsuyo' heuristic which looks one char ahead is applied thrice, still not strengthened, though.
Note8: The compile made 2017-Mar-19, the decompression time measuring is done in 16x8 passes choosing the top score from 64 back-to-back runs - the goal - to enter [maximal] Turbo Mode.
Note9: Just to reduce the codesize, the 3xQWORD become (in this compile) 1xQWORD+1xXMMWORD.
NoteA: Please send me (at [email protected]) decompression results obtained on machines with fast CPU-RAM subsystems.
Current priority class is REALTIME_PRIORITY_CLASS.
Allocating Source-Buffer 1 MB ...
Allocating Target-Buffer 5,120 MB ...
Decompressing 1,994,028 bytes (being the compressed stream) ...
Warming up ...
RAM-to-RAM performance:
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 439 MB/s; 419 MB/s
Enforcing 17 seconds idling to avoid throttling ...
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 450 MB/s
Enforcing 17 seconds idling to avoid throttling ...
450 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 450 MB/s; 441 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 452 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
450 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s; 451 MB/s
Enforcing 17 seconds idling to avoid throttling ...
This CPU seems to be working at 2,829 MHz, or more due to ensleeping.
RAM-to-RAM (peak) performance: 452 MB/s.
Source-file-Hash(FNV1A_YoshimitsuTRIAD) = 0x89b2,0682
Target-file-Hash(FNV1A_YoshimitsuTRIAD) = 0x4e34,ebb9
Allocating Source-Buffer 1 MB ...
Allocating Target-Buffer 5,120 MB ...
Decompressing 'Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt.L17.LZSSE2' (2,026,440 bytes, being the compressed stream) ...
Warming up ...
RAM-to-RAM performance:
766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s
Enforcing 17 seconds idling to avoid throttling ...
766 MB/s; 766 MB/s; 768 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s
Enforcing 17 seconds idling to avoid throttling ...
768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s
Enforcing 17 seconds idling to avoid throttling ...
766 MB/s; 766 MB/s; 766 MB/s; 764 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 754 MB/s; 721 MB/s; 721 MB/s; 712 MB/s; 721 MB/s; 766 MB/s; 732 MB/s; 710 MB/s
Enforcing 17 seconds idling to avoid throttling ...
766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 768 MB/s
Enforcing 17 seconds idling to avoid throttling ...
766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 764 MB/s; 766 MB/s; 766 MB/s; 766 MB/s
Enforcing 17 seconds idling to avoid throttling ...
766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s
Enforcing 17 seconds idling to avoid throttling ...
768 MB/s; 766 MB/s; 768 MB/s; 768 MB/s; 766 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 766 MB/s; 764 MB/s; 770 MB/s; 768 MB/s; 766 MB/s; 766 MB/s; 766 MB/s
Enforcing 17 seconds idling to avoid throttling ...
This CPU seems to be working at 2,829 MHz, or more due to ensleeping.
RAM-to-RAM (peak) performance: 770 MB/s.
Nakamichi 'Washigan+' vs LZSSE2 17, c.size: 0.98x
LZSSE2 17 vs Nakamichi 'Washigan+', d.rate: 1.70x
Bottomline:
Nakamichi 'Washigan+' expanding 2.80x to 5,589,889 at 452 MB/s.

D:\Nakamichi_Washigan+\Washigan+_vs_lzbench_vs_TurboBench_(LzTurbo-OFFICIAL_vs_Zstd_vs_Oodle)_2017-Mar-20>lzbench171 -c4 -j15 -o3 -ebrotli24,11/tornado,16/blosclz,9/brieflz/crush,2/csc,5/density,3/fastlz,2/gipfeli/zstd,12,22/zstd24,12,22/lzo1b,999/lzham,4/lzham24,4/libdeflate,12/lz4/lz4hc,10,12/lizard,19,29,39,49/lzf,1/lzfse/lzg,9/lzham,1/lzjb/lzlib,9/lzma,9/lzrw,5/lzsse2,17/lzsse4,17/lzsse8,17/lzvn/pithy,9/quicklz,3/snappy/slz_zlib,3/ucl_nrv2b,9/ucl_nrv2d,9/ucl_nrv2e,9/xpack,9/xz,9/yalz77,12/yappy,99/yappy,9999/zlib,9/zling,4/shrinker/wflz/lzmat Project_Gutenberg_EBook_of_The_Complete_Works_of_William_Shakespeare.txt
lzbench 1.7.1 (64-bit Windows)   Assembled by P.Skibinski
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
memcpy                   5584 MB/s  5548 MB/s     5589889      5589889 100.00 1 files
...
done... (cIters=1 dIters=1 cTime=1.0 dTime=2.0 chunkSize=1706MB cSpeed=0MB)

The results sorted by column number 4:
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
csc 2016-10-13 -5        2.31 MB/s    36 MB/s     5589889      1546709  27.67 1 files
brotli24 2017-03-10 -11  0.34 MB/s   216 MB/s     5589889      1564776  27.99 1 files
lzlib 1.8 -9             1.26 MB/s    37 MB/s     5589889      1565357  28.00 1 files
xz 5.2.3 -9              1.36 MB/s    51 MB/s     5589889      1566465  28.02 1 files
lzma 16.04 -9            1.20 MB/s    58 MB/s     5589889      1566595  28.03 1 files
tornado 0.6a -16         1.64 MB/s   131 MB/s     5589889      1579551  28.26 1 files
zstd 1.1.4 -22           1.50 MB/s   369 MB/s     5589889      1582233  28.31 1 files
zstd24 1.1.4 -22         1.49 MB/s   368 MB/s     5589889      1582233  28.31 1 files
lzham24 1.0 -4           0.91 MB/s   127 MB/s     5589889      1585461  28.36 1 files
lzham 1.0 -d26 -4        0.91 MB/s   126 MB/s     5589889      1586475  28.38 1 files
lzham 1.0 -d26 -1        1.41 MB/s   113 MB/s     5589889      1721336  30.79 1 files
zling 2016-01-10 -4        21 MB/s    98 MB/s     5589889      1726028  30.88 1 files
zstd 1.1.4 -12           6.46 MB/s   361 MB/s     5589889      1775255  31.76 1 files
zstd24 1.1.4 -12         6.49 MB/s   361 MB/s     5589889      1775255  31.76 1 files
crush 1.0 -2             0.26 MB/s   193 MB/s     5589889      1829442  32.73 1 files
lizard 1.0 -49           1.42 MB/s   506 MB/s     5589889      1845619  33.02 1 files
xpack 2016-06-02 -9      8.63 MB/s   315 MB/s     5589889      1896917  33.93 1 files
libdeflate 0.7 -12       4.40 MB/s   344 MB/s     5589889      1933067  34.58 1 files
lizard 1.0 -39           3.68 MB/s   615 MB/s     5589889      1961496  35.09 1 files

Nakamichi 'Washigan+'                452 MB/s                  1994028

zlib 1.2.11 -9           5.48 MB/s   184 MB/s     5589889      2023362  36.20 1 files
lzsse2 2016-05-14 -17    4.19 MB/s   808 MB/s     5589889      2026440  36.25 1 files
lzsse4 2016-05-14 -17    4.90 MB/s   717 MB/s     5589889      2037489  36.45 1 files
lzfse 2017-03-08           27 MB/s   321 MB/s     5589889      2050530  36.68 1 files
lzsse8 2016-05-14 -17    4.55 MB/s   684 MB/s     5589889      2058358  36.82 1 files
ucl_nrv2e 1.03 -9        0.86 MB/s   145 MB/s     5589889      2186081  39.11 1 files
lizard 1.0 -29           1.48 MB/s   699 MB/s     5589889      2209160  39.52 1 files
ucl_nrv2d 1.03 -9        0.86 MB/s   145 MB/s     5589889      2212936  39.59 1 files
lzo1b 2.09 -999          7.05 MB/s   367 MB/s     5589889      2239176  40.06 1 files
ucl_nrv2b 1.03 -9        0.86 MB/s   146 MB/s     5589889      2261637  40.46 1 files
lz4hc 1.7.5 -12          5.14 MB/s   899 MB/s     5589889      2274994  40.70 1 files
lizard 1.0 -19           3.84 MB/s   796 MB/s     5589889      2277091  40.74 1 files
lz4hc 1.7.5 -10          9.53 MB/s   910 MB/s     5589889      2305281  41.24 1 files
lzmat 1.01                 11 MB/s   182 MB/s     5589889      2338743  41.84 1 files
brieflz 1.1.0              57 MB/s   100 MB/s     5589889      2453497  43.89 1 files
lzg 1.0.8 -9             1.09 MB/s   394 MB/s     5589889      2490683  44.56 1 files
lzvn 2017-03-08            23 MB/s   437 MB/s     5589889      2580181  46.16 1 files
gipfeli 2016-07-13        134 MB/s   227 MB/s     5589889      2584769  46.24 1 files
yalz77 2015-09-19 -12      19 MB/s   180 MB/s     5589889      2599486  46.50 1 files
quicklz 1.5.0 -3           25 MB/s   423 MB/s     5589889      2619697  46.86 1 files
density 0.12.5 beta -3    183 MB/s   160 MB/s     5589889      2650239  47.41 1 files
lzrw 15-Jul-1991 -5        60 MB/s   226 MB/s     5589889      2663632  47.65 1 files
pithy 2011-12-24 -9       173 MB/s   598 MB/s     5589889      2789253  49.90 1 files
shrinker 0.1              143 MB/s   425 MB/s     5589889      3010365  53.85 1 files
fastlz 0.1 -2             137 MB/s   270 MB/s     5589889      3069586  54.91 1 files
lzf 3.6 -1                147 MB/s   328 MB/s     5589889      3071349  54.94 1 files
yappy 2014-03-22 -9999     36 MB/s   630 MB/s     5589889      3086825  55.22 1 files
yappy 2014-03-22 -99       37 MB/s   630 MB/s     5589889      3087556  55.23 1 files
blosclz 2015-11-10 -9     118 MB/s   285 MB/s     5589889      3143838  56.24 1 files
slz_zlib 1.0.0 -3         106 MB/s   155 MB/s     5589889      3166645  56.65 1 files
snappy 1.1.4              153 MB/s   458 MB/s     5589889      3288674  58.83 1 files
lz4 1.7.5                 181 MB/s  1002 MB/s     5589889      3334631  59.65 1 files
wflz 2015-09-16           125 MB/s   399 MB/s     5589889      3775542  67.54 1 files
lzjb 2010                 118 MB/s   261 MB/s     5589889      4000806  71.57 1 files

Done. To copy the console content into clipboard: 1] Right Click 2] Select All 3] Enter

To reproduce the above test:

Washigan+_vs_lzbench_2017-Mar-20.zip (8,878,706 bytes):
https://1drv.ms/u/s!AmWWFXGMzDmEgni1UrZIf60UUy00

Washigan+_TurboBench_53-files-tested.pdf (47,901,915 bytes):
https://1drv.ms/b/s!AmWWFXGMzDmEgnmI0fYXMKAiIQH-

Build errors on FreeBSD 11

In file included from tornado/Compression.h:11:
tornado/Common.h:45:2: error: "You're compiling for Motorola byte order, but FREEARC_INTEL_BYTE_ORDER was defined."
#error "You're compiling for Motorola byte order, but FREEARC_INTEL_BYTE_ORDER was defined."
 ^
1 error generated.
gmake: *** [Makefile:248: tornado/tor_test.o] Error 1
gmake: *** Waiting for unfinished jobs....
_lzbench/lzbench.cpp:155:13: warning: enumeration value 'MARKDOWN2' not handled in switch [-Wswitch]
    switch (params->textformat)
            ^
_lzbench/lzbench.cpp:664:37: warning: invalid conversion specifier 'Z' [-Wformat-invalid-specifier]
          printf("Seeking to: %llu %Zu %Zu\n", pos, params->chunk_size, insize);
                                   ~^
_lzbench/lzbench.cpp:664:41: warning: invalid conversion specifier 'Z' [-Wformat-invalid-specifier]
          printf("Seeking to: %llu %Zu %Zu\n", pos, params->chunk_size, insize);
                                       ~^
_lzbench/lzbench.cpp:664:53: warning: data argument not used by format string [-Wformat-extra-args]
          printf("Seeking to: %llu %Zu %Zu\n", pos, params->chunk_size, insize);
                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~       ^
_lzbench/lzbench.cpp:823:13: error: use of undeclared identifier 'recursive'
            recursive = 1;
            ^
_lzbench/lzbench.cpp:928:5: warning: add explicit braces to avoid dangling else [-Wdangling-else]
    else
    ^

Compression/Decompression rates

Hello, I had a question regarding the reporting of the rates for the compression and decompression algorithms. Are the rates (given in MB/s) relative to the output or the input of the compression? How about decompression? I.e., does a decompression speed of 500 MB/s mean that the decompressor outputs 500 MB of uncompressed data each second, or that it consumes 500 MB of compressed data each second, producing, say, 1000 MB of raw data?

ZSTD Levels

ZSTD expanded their range of compression levels to include negative integers. The current "max" (-5) pushes it into LZ4 territory. I would make a pull request, but I'm not sure how many additional levels you want to test by default.

Highest levels are missing for LZ4/LZ5 in -eall?

After some grinding through the compression algorithms' code, I've stumbled on the fact that LZ4/LZ5 seem to have level 16 as the maximum in their HC versions. Requesting level 16 manually leads to somewhat smaller output in some cases.

The same was true of crush, which has a level 2 that was previously not included anywhere, yet gives better compression than levels 0/1 and, if I've got it right, is in fact competitive in ratio against the best/slowest LZs that don't use entropy coding. That is now fixed, and crush level 2 is included in "all".

In some use cases one may want to get the absolute most out of a pre-existing compression engine, with little regard for compression speed. For the very same reason it could be interesting to add zopfli or something similar to the benchmark. It is something like a zlib_HC, for cases where we want the absolute best from a pre-existing format :)

Idea: -h and /or --help should display help.

Right now the program seems to show help only when started without parameters at all. If one starts it with the more or less standard -h or --help parameter, contrary to expectations the program complains about a missing file, which is inconvenient/strange/unusual.

P.S. other than that, it is a really cool benchmark thingie, thanks :)

"make clean" does not removes lzbench executable out of the way.

Configuration:
lzbench: commit 5937923
OS: Xubuntu 64-bit, 15.10.
Compiler: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
Flags: defaults from makefile, only BUILD_SYSTEM=linux uncommented.

To reproduce:

  1. Build lzbench.
  2. run make clean.

Result:
The lzbench executable stays in the build tree. Everything else is fine.

Expected:
The lzbench executable is also removed. There is little point in trashing all the object files but leaving the main executable when I want a clean build.

UCL algos selection is confusing.

I haven't understood how to select the UCL algorithms via -e, and if I just ask for ucl, the results are confusing as well.

If I just ask for UCL, I get strange results:

  • Algorithms from 11 to 39 are tested.
  • Algorithms 20 and 30 appear to be exactly the same as memcpy. Is there any reason to test them?
  • I know UCL has 3 algorithms at its core: nrv2b, nrv2d and nrv2e; this was shown in your results published on encode.ru or somewhere like that, but I'm not really sure how these map to the numbers, or whether they do at all. This is just plain confusing.

Confusing "out of memory" behavior when specifying directory input

I specified a directory as input like:

lzbench corpus/silesia/

where /silesia is a directory. I got the following error:

lzbench 1.5 (64-bit Linux)   Assembled by P.Skibinski
Not enough memory, please use -m option!done... (cIters=1 dIters=1 cTime=1.0 dTime=2.0 chunkSize=1706MB cSpeed=0MB)

This error didn't make much sense since the default for -m is apparently "unlimited", so it can't exactly be increased. I did try a few options like -m1000, -m1, -m6000 - and these all resulted in the (different) error output:

lzbench 1.5 (64-bit Linux)   Assembled by P.Skibinski
Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
memcpy                   0.00 MB/s      ERROR           0            0   -nan     corpus/silesia

Note that the silesia directory only contains ~203 MB of files in total and that I have ~12GB of available physical memory on my box.

I note that using the -r option does do what is expected, but it's not clear to me that it is needed for directories (certainly, the documentation doesn't make it clear). If -r must be specified for directories, why not just remove the option and make that the behavior if a directory is given?

"./lzbench -elzlib,0 -j -r -s1000 /dir" freezes on this input

Steps to reproduce:

  • Create a directory, say, /tmp/a (you should have enough free space on that partition)
  • Unzip this file into that directory under the name 1.ppm (it is a normal PPM image): 00000601.ppm.gz
  • Create 299 more copies of this file (so total number is 300), for example (in bash): cd /tmp/a; for I in {2..300}; do cp 1.ppm $I.ppm; done
  • Run this: time -p ./lzbench -elzlib,0 -j -r -x /tmp/a
  • The command finishes in a reasonable time; on my computer this is 47.20 seconds
  • Now run this: time -p ./lzbench -elzlib,0 -j -r -x -s1000 /tmp/a
  • The command does not finish within 4 minutes and seems as if it will run forever. This is very strange because, as far as I understand, adding -s1000 should not slow down the computation, so this run should take no more time than the one without -s1000, i.e. 47.20 seconds

My system is Debian stretch amd64 (with some packages from Debian buster). Linux 4.19.0. GCC 6.3.0. Tests performed on real hardware

memcpy performance varies wildly depending on file size and also from run to run

When benchmarking files of different sizes, I saw a huge variation in memcpy performance. On my machine "large" memcpy (i.e,. much larger than L3, like 100 MB) runs at about 10 - 11 GB/s, and many times lzbench reports that, but then for even larger files the performance often drops by an order of magnitude (e.g., 1 GB/s). The effect isn't consistent - for very large files (say 1 GB) it usually happens, and for smaller files it usually doesn't, but there are exceptions on both sides (e.g., if you run it a few times with smaller files you'll get some runs with bad performance, etc).

Back-to-back runs often tend to show improvements, e.g., run 1 might get you 1 GB/s, then 2 GB/s, then 5 GB/s, and then it will stay there.

Similarly, the slowdown sometimes affected only the "compression" side of memcpy, sometimes only the "decompression" side, and often both (i.e., you'd get something like 1 GB/s compression and 10 GB/s decompression, or vice versa).

I traced this down to the way the buffers are allocated: the file and compression buffers use malloc and the decompression buffer uses calloc. The issue is that for large mallocs (and sometimes for large callocs) the memory isn't committed by the OS; it is committed on first access. So the first algorithm to run pays a large penalty to page in the entire buffer.

So why doesn't this always bite? Why does the performance differ from run to run? It comes down to DEFAULT_LOOP_TIME (100 ms): if an algorithm executes in less than that, it gets a second run, which runs at full speed, and since FASTEST is the default mode for picking a time, you get a full-speed result. So somewhere between 100 MB and 1,000 MB on my box, the first memcpy run starts taking more than 100 ms, hence doesn't get a second run, and the slow time is reported.
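
A minimal sketch of one possible mitigation is to pre-fault the buffers once right after allocation, before any timed iteration (my suggestion, not existing lzbench code):

#include <cstdlib>
#include <cstring>

// Hypothetical helper: touch every page so the OS commits the memory
// before the first timed run instead of during it.
static void prefault(void* buf, size_t size)
{
    memset(buf, 0, size);   // one write per page would suffice; memset is simplest
}

// Sketch of intended usage, right after the allocations:
//   char* inbuf = (char*)malloc(insize);      prefault(inbuf, insize);
//   char* cbuf  = (char*)malloc(comprsize);   prefault(cbuf, comprsize);
//   char* dbuf  = (char*)calloc(insize, 1);   prefault(dbuf, insize);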

xpack compile error

xpack/lib/xpack_compress.c:120: warning: declaration does not declare anything
xpack/lib/xpack_compress.c:121: warning: declaration does not declare anything
xpack/lib/xpack_compress.c: In function ‘write_block’:
xpack/lib/xpack_compress.c:1185: error: ‘struct codes’ has no member named ‘literal_state_counts’
xpack/lib/xpack_compress.c:1191: error: ‘struct codes’ has no member named ‘litrunlen_state_counts’
xpack/lib/xpack_compress.c:1197: error: ‘struct codes’ has no member named ‘length_state_counts’
xpack/lib/xpack_compress.c:1203: error: ‘struct codes’ has no member named ‘offset_state_counts’
xpack/lib/xpack_compress.c:1210: error: ‘struct codes’ has no member named ‘aligned_state_counts’
xpack/lib/xpack_compress.c:1222: error: ‘struct codes’ has no member named ‘aligned_state_counts’
xpack/lib/xpack_compress.c:1222: error: ‘struct codes’ has no member named ‘state_counts’
xpack/lib/xpack_compress.c:1222: error: ‘struct codes’ has no member named ‘state_counts’
xpack/lib/xpack_compress.c:1226: error: ‘struct codes’ has no member named ‘state_counts’
xpack/lib/xpack_compress.c:1226: error: ‘struct codes’ has no member named ‘state_counts’
xpack/lib/xpack_compress.c:1230: error: ‘struct codes’ has no member named ‘state_counts’
xpack/lib/xpack_compress.c:1254: error: ‘struct codes’ has no member named ‘literal_state_counts’
xpack/lib/xpack_compress.c:1260: error: ‘struct codes’ has no member named ‘litrunlen_state_counts’
xpack/lib/xpack_compress.c:1266: error: ‘struct codes’ has no member named ‘length_state_counts’
xpack/lib/xpack_compress.c:1272: error: ‘struct codes’ has no member named ‘offset_state_counts’
xpack/lib/xpack_compress.c:1279: error: ‘struct codes’ has no member named ‘aligned_state_counts’
make: *** [xpack/lib/xpack_compress.o] Error 1

Problems with statistics.

In lzbench_test, the compression and decompression steps are run 1 or more times until the total time is above a threshold. If the time an iteration takes is above 10000 nanoseconds, then that time is pushed onto a vector.

After the loop finishes, the average time is also pushed onto the same vector.

Because two different kinds of values end up on the vector, the statistics (fastest, average, median) collected for the algorithms are not as accurate as they could be.
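
A minimal sketch of the cleaner separation keeps only raw per-iteration samples in the vector and computes the aggregates afterwards (illustrative names, not the actual lzbench code):

#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

struct timing_stats { uint64_t fastest, average, median; };

// Summarize raw per-iteration times (nanoseconds); assumes at least one sample.
static timing_stats summarize(std::vector<uint64_t> samples)
{
    std::sort(samples.begin(), samples.end());
    timing_stats s;
    s.fastest = samples.front();
    s.average = std::accumulate(samples.begin(), samples.end(), uint64_t(0)) / samples.size();
    s.median  = samples[samples.size() / 2];
    return s;
}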

Add orz

Could orz be added to the benchmarks? It is an ROLZ algorithm written in Rust.

[dev] even worse speed measurement accuracy on my configuration + regression.

Mostly revival of #3 (not sure how to reopen it).
Differences were:

  • I've tested the dev branch at 21ee70f
  • I've compressed the lzbench binary itself, which is ~8 MB.

Some results:

  1. Memcpy vs various # of runs:
  • One iteration:
$ ./lzbench -eall -c5 .gitignore 
lzbench 0.8.1 (64-bit Linux)   Assembled by P.Skibinski
| Compressor name             | Compression| Decompress.| Compr. size | Ratio |
| memcpy                      |   235 MB/s |   235 MB/s |         235 |100.00 |
  • Two iterations:
$ ./lzbench -eall -i2 -c5 ./lzbench 
lzbench 0.8.1 (64-bit Linux)   Assembled by P.Skibinski
| Compressor name             | Compression| Decompress.| Compr. size | Ratio |
| memcpy                      |  1305 MB/s |  1245 MB/s |     8149432 |100.00 |
  • Three iterations:
$ ./lzbench -eall -i3 -c5 ./lzbench 
lzbench 0.8.1 (64-bit Linux)   Assembled by P.Skibinski
| Compressor name             | Compression| Decompress.| Compr. size | Ratio |
| memcpy                      |  1745 MB/s |  1789 MB/s |     8149432 |100.00 |
  • 4 and more runs produce roughly the same result as 3 runs.

So it seems it hasn't become any better.

Furthermore, there is a regression:

$ ./lzbench -eall -i3 -c5 ./lzbench 
lzbench 0.8.1 (64-bit Linux)   Assembled by P.Skibinski
| Compressor name             | Compression| Decompress.| Compr. size | Ratio |
| memcpy                      |  1766 MB/s |  1733 MB/s |     8149432 |100.00 |
| brieflz 1.1.0               |    27 MB/s |    74 MB/s |     2299101 | 28.21 |
| brotli 2015-10-29 level 0   |    26 MB/s |    95 MB/s |     1917206 | 23.53 |
| brotli 2015-10-29 level 2   |    21 MB/s |    96 MB/s |     1909860 | 23.44 |
| brotli 2015-10-29 level 5   |  5.00 MB/s |    96 MB/s |     1727591 | 21.20 |
| brotli 2015-10-29 level 8   |  1.00 MB/s |    94 MB/s |     1699386 | 20.85 |
| brotli 2015-10-29 level 11  |  0.00 MB/s |    82 MB/s |     1515425 | 18.60 |

It looks like I get really bad precision in dev: I measure either 1 MB/s or 0 MB/s across 5 runs, which takes quite a lot of time for brotli level 11.

So it seems the Linux timing accuracy actually got even worse than with the old code :(

As far as I know, high-resolution time measurement is a rather tricky thing, and it may be worth looking at how others do it on Linux, etc.

I'm not an expert in this area, so take the further details with a grain of salt and better seek advice from someone who is. But at first glance I noticed you're using realtime clocks, and I guess something like CLOCK_MONOTONIC_RAW would do better: realtime clocks are subject to sudden adjustment without warning by things like NTP, and NTP can also fiddle with the second's length to adjust the system time in a less disturbing manner. CLOCK_MONOTONIC_RAW, on the other hand, cannot jump and its seconds have constant length, even though it is based on a hardware timer rather than on real time. Still, I may be missing something, since I'm not an expert in accurate timing measurements.
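
For illustration, a minimal monotonic timing sketch along the lines suggested above (plain clock_gettime; this is not lzbench's current timer code):

#include <cstdint>
#include <ctime>

// Read a monotonic raw timestamp in nanoseconds; unaffected by NTP adjustments.
// CLOCK_MONOTONIC_RAW is Linux-specific; CLOCK_MONOTONIC is the portable fallback.
static uint64_t now_ns()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

// Sketch of usage around one compression iteration:
//   uint64_t t0 = now_ns();
//   /* run the iteration */
//   uint64_t elapsed_ns = now_ns() - t0;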

build error on Suse 12.2

Hi,
I have tried to build lzbench 1.7.2 on SUSE Linux Enterprise Server 12 (x86_64). Unfortunately it fails due to a library link error:
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: cannot find -lrt
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: cannot find -lpthread
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: cannot find -lm
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: cannot find -lc

I changed the Makefile, removing the "LDFLAGS += -lrt -static" line (line 54) and the "LDFLAGS += -lpthread" line (line 56), which the logic applies when the system is not Darwin. It is a static library linking problem (the static versions of these libraries are apparently not available on this system), so this might be worth handling.

BR,
Xiao

New Zstd 1.3.2

Is it possible to update to the latest Zstd 1.3.2 and support the new long-range option?

thanks!

Crash in shrinker_compress.

After messing with various files I've eventually stumbled on another edge case.

Configuration:
GCC 5.2 @ x86-64, Linux.
lzbench -dev branch at 949291c
Default flags, etc.

Info:
The Shrinker compressor can crash during compression if you try to compress a >= 2 MB poorly compressible file (I've tested it on a ~2 MB Ogg sound file).
This only happens with -O3, so it has to be demoted to -O2 as well, unfortunately.

Backtrace follows (partially optimized out due to -O3)

Program received signal SIGSEGV, Segmentation fault.
shrinker_compress (in=in@entry=0x7ffff7e20010, out=out@entry=0x7ffff7bf9010, size=size@entry=1920106) at shrinker/shrinker.c:138
138     MEMCPY_NOOVERLAP_NOSURPASS(dst, p_last_lit, src);
(gdb) bt full
#0  shrinker_compress (in=in@entry=0x7ffff7e20010, out=out@entry=0x7ffff7bf9010, size=size@entry=1920106) at shrinker/shrinker.c:138
        ht = {1919054, 3357316263, 4162657658, 1209854581, 672971416, 3894190478, 2820466262, 3491576845, 2820444135, 941383503, 
          807186245, 1826989, 2954671924, 1746742483, 3625762943, 1917658, 4162616814, 1918002, 2686268610, 1896331, 941335832, 
          2015127924, 4162626043, 1344092269, 3088835399, 807225254, 538773969, 672976133, 2820490928, 1478289263, 941385833, 3759992215, 
          1209868260, 1746709675, 4028425939, 404506619, 3223140401, 2954639374, 941392504, 1746747208, 3088871349, 3894225328, 
          2954700106, 2552038736, 2552022291, 270300864, 807216336, 270345901, 2149292566, 3088922298, 3759994886, 270301143, 1746712067, 
          3223120066, 2552025977, 2283587384, 136127415, 807173287, 404554225, 2417837136, 2820459038, 2686235255, 1075633669, 4162633020, 
          2954616933, 404536190, 2820473731, 3357266512, 1880965274, 3760003906, 3894220498, 672974319, 672983291, 538742501, 672970612, 
          2686252339, 1902348, 1746721687, 1880962511, 1344014576, 2552037343, 4162647782, 2686253481, 3894148377, 2954677885, 1746740461, 
          4028403285, 807205936, 2551983821, 941433739, 2820375539, 941408963, 1880953599, 1858117, 2283562976, 2417829796, 3894180036, 
          4162618722, 1209875690, 136124613, 807210168, 2417812184, 1880879366, 1343971640, 3088926190, 3491510128, 2552049476, 
          1209727692, 3088896462, 807201356, 270336482, 4028445869, 4028423729, 1746655281, 1902781, 270320085, 2552026574, 1209850856, 
          1075655571, 3088898532, 2015178583, 538756589, 672998424, 941437137, 2954673159, 2149211523, 270346401, 3357349936, 4028446585, 
          4028397243, 3223140735, 2015162232, 1880876222, 2552019813, 4162638467, 672989433, 4028423869, 2417838948, 1746749073, 
          2015166597, 404484460, 4028422077, 2015091905, 1478290469, 3357347112, 2552037501, 2015182006, 1209871950, 1746743277, 
          1075617925, 2417786403, 2552044042, 941440522, 538728613, 3357198107, 1880875372, 2149381766, 2015182866, 4162594049, 
          3760012016, 4028365269, 4162634389, 3088913529, 2417824218, 1612457852, 3223117087, 3894215445, 1478225261, 1075528298, 
          2417832869, 4028377514, 2686245397, 4162642960, 1344041891, 941407911, 2954701907, 1880959823, 1746748846, 2820473678, 
          3223069704, 1344059546, 1478310496, 1612485641, 1209870019, 1478309582, 404535455, 3625796676, 672983476, 1880955653, 
          2954702177, 941411125, 941421832, 3088921911, 807156468, 3357351969, 2820488104, 3357354564, 3088906951, 1870862, 3760012704...}
        src = 0x7ffff7ff4c72 ""
        dst = <optimized out>
        src_end = 0x7ffff7ff4c6e "\"\031`{"
        dst_end = 0x7ffff7dcdc6e "\"\031`{"
        pfind = <optimized out>
        pcur = <optimized out>
        cur_hash = <optimized out>
        p_last_lit = <optimized out>
        cpy_len = <optimized out>
        match_dist = <optimized out>
        flag = <optimized out>
        pflag = <optimized out>
        cache = <optimized out>
        cur_u32 = <optimized out>
---Type <return> to continue, or q <return> to quit--- 
#1  0x00000000005d9b51 in lzbench_shrinker_compress (inbuf=inbuf@entry=0x7ffff7e20010 "OggS", insize=insize@entry=1920106, 
    outbuf=outbuf@entry=0x7ffff7bf9010 <incomplete sequence \343>, outsize=outsize@entry=2256507) at _lzbench/compressors.cpp:1064
No locals.
#2  0x00000000005cbf72 in lzbench_compress (workmem=0x0, param2=0, param1=0, outsize=2256507, 
    outbuf=0x7ffff7bf9010 <incomplete sequence \343>, insize=1920106, inbuf=0x7ffff7e20010 "OggS", compr_lens=..., 
    compress=0x5d9b40 <lzbench_shrinker_compress(char*, unsigned long, char*, unsigned long, unsigned long, unsigned long, char*)>, 
    chunk_size=<optimized out>, params=0x7fffffffddc0) at _lzbench/lzbench.cpp:152
        clen = <optimized out>
        part = 1920106
        sum = 0
        start = 0x7ffff7e20010 "OggS"
#3  lzbench_test (params=params@entry=0x7fffffffddc0, desc=desc@entry=0xab5f78 <comp_desc+1848>, level=level@entry=0, 
    inbuf=inbuf@entry=0x7ffff7e20010 "OggS", insize=insize@entry=1920106, compbuf=compbuf@entry=0x7ffff7bf9010 <incomplete sequence \343>, 
    comprsize=2256507, decomp=0x7ffff7a20010 "", param1=0, ticksPerSecond=..., param2=0) at _lzbench/lzbench.cpp:239
        nanosec = 6974727
        ii = 1
        start_ticks = {tv_sec = 1238077, tv_nsec = 777780529}
        end_ticks = {tv_sec = 140737488346432, tv_nsec = 0}
        complen = 0
        ctime = {<std::_Vector_base<unsigned long, std::allocator<unsigned long> >> = {
            _M_impl = {<std::allocator<unsigned long>> = {<__gnu_cxx::new_allocator<unsigned long>> = {<No data fields>}, <No data fields>}, _M_start = 0x0, _M_finish = 0x0, _M_end_of_storage = 0x0}}, <No data fields>}
        dtime = {<std::_Vector_base<unsigned long, std::allocator<unsigned long> >> = {
            _M_impl = {<std::allocator<unsigned long>> = {<__gnu_cxx::new_allocator<unsigned long>> = {<No data fields>}, <No data fields>}, _M_start = 0x0, _M_finish = 0x0, _M_end_of_storage = 0x0}}, <No data fields>}
        compr_lens = {<std::_Vector_base<unsigned long, std::allocator<unsigned long> >> = {
            _M_impl = {<std::allocator<unsigned long>> = {<__gnu_cxx::new_allocator<unsigned long>> = {<No data fields>}, <No data fields>}, _M_start = 0x0, _M_finish = 0x0, _M_end_of_storage = 0x0}}, <No data fields>}
        decomp_error = false
        workmem = <optimized out>
        blosclz = false
#4  0x00000000005ccca0 in lzbench_test_with_params (params=0x7fffffffddc0, namesWithParams=<optimized out>, inbuf=0x7ffff7e20010 "OggS", 
    insize=1920106, compbuf=0x7ffff7bf9010 <incomplete sequence \343>, comprsize=2256507, decomp=0x7ffff7a20010 "", ticksPerSecond=...)
    at _lzbench/lzbench.cpp:336
---Type <return> to continue, or q <return> to quit---
        level = 0
        i = <optimized out>
        found = true
        delimiters = "/"
        delimiters2 = ","
        copy = 0x5d6ba80 "shrinker"
        copy2 = 0x5d6ba60 "shrinker"
        token = <optimized out>
        token2 = <optimized out>
        token3 = <optimized out>
        save_ptr = 0x5d6ba88 ""
        save_ptr2 = 0x5d6ba68 ""
#5  0x00000000005cceac in lzbenchmark (params=0x7fffffffddc0, in=<optimized out>, encoder_list=0x5d6b810 "shrinker")
    at _lzbench/lzbench.cpp:389
        comprsize = <optimized out>
        insize = <optimized out>
        inbuf = 0x7ffff7e20010 "OggS"
        compbuf = 0x7ffff7bf9010 <incomplete sequence \343>
        decomp = 0x7ffff7a20010 ""
#6  0x00000000004013e4 in main (argc=<optimized out>, argv=0x7fffffffdf30) at _lzbench/lzbench.cpp:499
        in = 0x5d6b830
        params = {timetype = FASTEST, textformat = TEXT, c_iters = 1, d_iters = 3, chunk_size = 1920106, cspeed = 0, verbose = 0, 
          results = {<std::_Vector_base<string_table, std::allocator<string_table> >> = {
              _M_impl = {<std::allocator<string_table>> = {<__gnu_cxx::new_allocator<string_table>> = {<No data fields>}, <No data fields>}, _M_start = 0x5d6bb10, _M_finish = 0x5d6bb48, _M_end_of_storage = 0x5d6bb48}}, <No data fields>}}
        encoder_list = <optimized out>
        sort_col = <optimized out>

Urgent: "Tornado algo" getting stuck with mozilla.bz2 workload on RHEL8.0, fedora32

Hello,
I started running this benchmark a few weeks ago and have done multiple cycles of running all the algorithms in lzbench against the corpus workloads.

During each run I've observed the Tornado compression/decompression algorithm getting stuck on the "mozilla.bz2" workload. The system remains operable, i.e. there is no hang or freeze.
It's only Tornado that always gets stuck on "mozilla.bz2".

Can you please help fix the issue ASAP?

Crash in WFLZ decompressor code.

I got a crash in the decompressor while trying to benchmark the WFLZ code.

lzbench: current version, at commit 5937923
OS: Xubuntu 64-bit, 15.10.
Compiler: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
Flags: defaults from the Makefile, only BUILD_SYSTEM=linux uncommented.

Generally everything ran okay until I requested the wflz benchmarks. Then lzbench crashed in the WFLZ decompression code:

Program received signal SIGSEGV, Segmentation fault.
0x00000000005befbb in wfLZ_Decompress ()

(gdb) thread apply all bt full

Thread 1 (process 5016):
#0  0x00000000005befbb in wfLZ_Decompress ()
No symbol table info available.
#1  0x00000000005cefec in lzbench_wflz_decompress(char*, unsigned long, char*, unsigned long, unsigned long, unsigned long, unsigned long)
    ()
No symbol table info available.
#2  0x00000000005c17f3 in lzbench_test(compressor_desc_t const*, int, int, unsigned long, int, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, timespec, unsigned long, unsigned long, unsigned long) [clone .constprop.142] ()
No symbol table info available.
#3  0x00000000005c203d in lzbench_test_with_params(char*, int, unsigned long, int, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, timespec) ()
No symbol table info available.
#4  0x00000000005c2422 in lzbenchmark(_IO_FILE*, char*, int, unsigned int, int) ()
No symbol table info available.
#5  0x0000000000401267 in main ()
No symbol table info available.

update lz4 to 1.7.5

Could you please update lz4 to the latest version? I have an important change in github.com/svpv/lzbench which I believe speeds up 'lz4hc -1' by 10-20%. But it needs more profiling, specifically with the recent version of lz4.

[dev] [patch] potential type mismatch detected for column4.

Backstory: column4 reported bogus data on 32-bit platforms (see #6), so I proposed a fix moving from size_t to uint64_t, which solved it for me.

However, I missed that size_t is also used in the table itself, so a uint64_t could be assigned to a size_t, which isn't exactly the best idea. Here is a fix. I've tested it on 32-bit ARMv7 and on x86-64 and haven't detected any fallout.

This is against 21ee70f

diff --git a/_lzbench/lzbench.cpp b/_lzbench/lzbench.cpp
index fd202a7..9625495 100644
--- a/_lzbench/lzbench.cpp
+++ b/_lzbench/lzbench.cpp
@@ -77,7 +77,7 @@ typedef struct string_table
     std::string column1;
     float column2, column3, column5;
     uint64_t column4;
-    string_table(std::string c1, float c2, float c3, size_t c4, float c5) : column1(c1), column2(c2), column3(c3), column4(c4), column5(c5) {}
+    string_table(std::string c1, float c2, float c3, uint64_t c4, float c5) : column1(c1), column2(c2), column3(c3), column4(c4), column5(c5) {}
 } string_table_t;

 struct less_using_1st_column { inline bool operator() (const string_table_t& struct1, const string_table_t& struct2) {  return (struct1.column1 < struct2.column1); } };

Version 1.8 does not compile on Mac

Version 1.8 does not compile on Mac (10.14.6). Version 1.7.4 does. The error output with version 1.8 is as follows:

libzling/libzling_huffman.cpp:44:5: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    auto scaling = 0;
    ^
libzling/libzling_huffman.cpp:73:5: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    auto nodes = std::vector<huffman_node*>();
    ^
libzling/libzling_huffman.cpp:73:30: warning: template argument uses local type 'huffman_node' [-Wlocal-type-template-args]
    auto nodes = std::vector<huffman_node*>();
                             ^~~~~~~~~~~~~
libzling/libzling_huffman.cpp:75:10: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    for (auto i = 0; i < max_codes; i++) {
         ^
libzling/libzling_huffman.cpp:83:5: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    auto nodes_heap = std::priority_queue<
    ^
libzling/libzling_huffman.cpp:85:21: warning: template argument uses local type 'huffman_node' [-Wlocal-type-template-args]
        std::vector<huffman_node*>,
                    ^~~~~~~~~~~~~
libzling/libzling_huffman.cpp:84:9: warning: template argument uses local type 'huffman_node' [-Wlocal-type-template-args]
        huffman_node*,
        ^~~~~~~~~~~~~
libzling/libzling_huffman.cpp:90:9: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
        auto min1 = nodes_heap.top(); nodes_heap.pop();
        ^
libzling/libzling_huffman.cpp:91:9: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
        auto min2 = nodes_heap.top(); nodes_heap.pop();
        ^
libzling/libzling_huffman.cpp:96:19: warning: template argument uses local type 'huffman_node' [-Wlocal-type-template-args]
    std::function<void (huffman_node*, int)> code_length_extractor = [&](auto node, auto code_length) {
                  ^~~~~~~~~~~~~~~~~~~~~~~~~
libzling/libzling_huffman.cpp:96:70: error: expected expression
    std::function<void (huffman_node*, int)> code_length_extractor = [&](auto node, auto code_length) {
                                                                     ^
libzling/libzling_huffman.cpp:117:5: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    auto code = 0;
    ^
libzling/libzling_huffman.cpp:120:10: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    for (auto codelen = 1; codelen <= max_codelen; codelen++) {
         ^
libzling/libzling_huffman.cpp:121:14: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
        for (auto i = 0; i < max_codes; i++) {
             ^
libzling/libzling_huffman.cpp:131:10: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    for (auto i = 0; i < max_codes; i++) {
         ^
libzling/libzling_huffman.cpp:146:10: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
    for (auto c = 0; c < max_codes; c++) {
         ^
libzling/libzling_huffman.cpp:148:18: warning: 'auto' type specifier is a C++11 extension [-Wc++11-extensions]
            for (auto i = encode_table[c]; i < (1 << max_codelen); i += (1 << length_table[c])) {
                 ^
16 warnings and 1 error generated.
make: *** [libzling/libzling_huffman.o] Error 1
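
For what it's worth, the line that actually errors uses a generic lambda (auto parameters), which requires C++14, and the surrounding warnings suggest the file is being compiled without a C++11/C++14 -std flag on this toolchain. Building libzling with -std=c++14 should be the straightforward fix; failing that, the lambda could be given explicit parameter types. A tiny self-contained illustration of the difference (my sketch, not an upstream patch):

#include <functional>

struct node { int weight; };

int main()
{
    // What libzling does (generic lambda) -- needs -std=c++14 or later:
    //   auto f = [](auto* n, int depth) { return n->weight + depth; };
    // C++11-compatible equivalent with explicit parameter types:
    std::function<int(node*, int)> f = [](node* n, int depth) { return n->weight + depth; };
    node n{3};
    return f(&n, 2) == 5 ? 0 : 1;
}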

Timings ?

Are timings done by measuring the actual process CPU time or wall-clock (absolute) time?
I'm just wondering, as a quick look at the code made me think it might be the latter. That could be problematic, as a heavily or variably loaded test machine would generate distorted results.
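
For illustration, on Linux the two kinds of measurement differ only in the clock id passed to clock_gettime (a generic sketch, not lzbench's actual code):

#include <cstdint>
#include <ctime>

static uint64_t read_ns(clockid_t id)
{
    struct timespec ts;
    clock_gettime(id, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

// Wall-clock time: includes time lost to other processes on a loaded machine.
//   uint64_t wall = read_ns(CLOCK_MONOTONIC);
// CPU time consumed by this process only: largely insensitive to other load.
//   uint64_t cpu = read_ns(CLOCK_PROCESS_CPUTIME_ID);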

Update zstd to 1.3.0

I've tried to compile with zstd git and got this error:

g++  -Wno-unknown-pragmas -Wno-sign-compare -Wno-conversion -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3 -DNDEBUG -I. -Izstd/lib -Izstd/lib/common -Ixpack/common -Ilibcsc -DHAVE_CONFIG_H _lzbench/compressors.cpp -c -o _lzbench/compressors.o
_lzbench/compressors.cpp: In function 'int64_t lzbench_zstd_compress(char*, size_t, char*, size_t, size_t, size_t, char*)':
_lzbench/compressors.cpp:1771:220: error: 'ZSTD_btopt2' was not declared in this scope
 >zparams.cParams.chainLog = windowLog + ((zstd_params->zparams.cParams.strategy == ZSTD_btlazy2) || (zstd_params->zparams.cParams.strategy == ZSTD_btopt) || (zstd_params->zparams.cParams.strategy == ZSTD_btopt2));
                                                                                                                                                                                                        ^~~~~~~~~~~
_lzbench/compressors.cpp:1771:220: note: suggested alternative: 'ZSTD_btopt'
 >zparams.cParams.chainLog = windowLog + ((zstd_params->zparams.cParams.strategy == ZSTD_btlazy2) || (zstd_params->zparams.cParams.strategy == ZSTD_btopt) || (zstd_params->zparams.cParams.strategy == ZSTD_btopt2));
                                                                                                                                                                                                        ^~~~~~~~~~~
                                                                                                                                                                                                                            ZSTD_btopt
make: *** [Makefile:248: _lzbench/compressors.o] Error 1
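
For reference, newer zstd headers renamed the strongest strategy: current releases expose ZSTD_btultra where older ones had ZSTD_btopt2. A hedged compatibility sketch (the exact version cutoff and the macro name are my assumptions, and this is not a tested patch):

#define ZSTD_STATIC_LINKING_ONLY   // the advanced/strategy API may need this in some releases
#include <zstd.h>

// ZSTD_VERSION_NUMBER is e.g. 10302 for v1.3.2.
#if defined(ZSTD_VERSION_NUMBER) && ZSTD_VERSION_NUMBER >= 10302
  #define LZBENCH_ZSTD_STRATEGY_MAX ZSTD_btultra   /* assumption: rename present by 1.3.2 */
#else
  #define LZBENCH_ZSTD_STRATEGY_MAX ZSTD_btopt2
#endif
// The strategy comparison in compressors.cpp could then test LZBENCH_ZSTD_STRATEGY_MAX
// instead of ZSTD_btopt2 directly.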

Add zopfli to suite.

It would be nice to add zopfli to the suite as a compressor targeting high compression while outputting a regular deflate/zlib stream.

possible to add non lz compressors

Since there's no Google group, I'm sounding it out here.
It might be nice to add a few more compressors (that aren't LZ), on the theory that "more data is good!" or whatnot, though I know it goes against the name and scope of the project :)

(e.g. zpaq, bsc, whatever else Squash has that lzbench doesn't)

Thanks for the benchmarks! :)

missing benchmark results?

Hello there,

Out of curiosity, could you upload the results for the following missing compressors?

blosclz 2015-11-10
brieflz 1.1.0
gipfeli 2015-11-01 with bugfixes from https://github.com/jibsen/gipfeli
lzg 1.0.8
lzlib 1.7
xz 5.2.2

Keep up the good work :)

Lzbench build issues

Hi,
I'm trying to build lzbench using gcc 6.1.0 and I'm seeing the errors below:
xpack/lib/xpack_compress.c:1185: error: 'struct codes' has no member named 'literal_state_counts'
xpack/lib/xpack_compress.c:1191: error: 'struct codes' has no member named 'litrunlen_state_counts'
xpack/lib/xpack_compress.c:1197: error: 'struct codes' has no member named 'length_state_counts'
xpack/lib/xpack_compress.c:1203: error: 'struct codes' has no member named 'offset_state_counts'
xpack/lib/xpack_compress.c:1210: error: 'struct codes' has no member named 'aligned_state_counts'
xpack/lib/xpack_compress.c:1222: error: 'struct codes' has no member named 'aligned_state_counts'
xpack/lib/xpack_compress.c:1222: error: 'struct codes' has no member named 'state_counts'
xpack/lib/xpack_compress.c:1222: error: 'struct codes' has no member named 'state_counts'
xpack/lib/xpack_compress.c:1226: error: 'struct codes' has no member named 'state_counts'
xpack/lib/xpack_compress.c:1226: error: 'struct codes' has no member named 'state_counts'
xpack/lib/xpack_compress.c:1230: error: 'struct codes' has no member named 'state_counts'
xpack/lib/xpack_compress.c:1254: error: 'struct codes' has no member named 'literal_state_counts'
xpack/lib/xpack_compress.c:1260: error: 'struct codes' has no member named 'litrunlen_state_counts'
xpack/lib/xpack_compress.c:1266: error: 'struct codes' has no member named 'length_state_counts'
xpack/lib/xpack_compress.c:1272: error: 'struct codes' has no member named 'offset_state_counts'
xpack/lib/xpack_compress.c:1279: error: 'struct codes' has no member named 'aligned_state_counts'
make: *** [xpack/lib/xpack_compress.o] Error 1
How can I resolve them?
