Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Gzip


Introduction

While gzip became the most widely used compression program on Unix (after compress), it is no longer actively developed. It was actually MS DOS that provided the strongest impetus for the development of sophisticated compression programs: many talented authors competed with each other until the dust settled and winners emerged.

Right now on Linux, for medium-size text tarballs (say, up to 500MB) the winner is xz. It is now widely used for distributing sources in Linux. For large files it is way too slow.

For large text files such as FASTA/FASTQ, pbzip2 is now probably the optimal option. It is much faster than gzip and has decent error-recovery capabilities (important in case the archive is corrupted).

Right now, if gzip-compatible output is needed for large files, pigz (a parallel implementation that can use multiple cores) should be used instead of gzip.
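A minimal illustration, assuming pigz and pbzip2 are installed (both accept the familiar gzip/bzip2 option style; file names here are hypothetical):

     pigz -p 8 backups.tar        # parallel gzip on 8 cores, produces backups.tar.gz
     pbzip2 -p8 -v reads.fastq    # parallel bzip2, produces reads.fastq.bz2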

History

The algorithms used in gzip came from MS DOS. The most popular DOS compression program was pkzip by the late Phil Katz, who created the company PKWARE. Starting from pkzip 2.0 (released in 1993), it used so-called "deflation", a lossless data compression algorithm based on a combination of the LZ77 algorithm and Huffman coding. The resulting file format has since become ubiquitous in DOS and later Windows, as well as on BBSes and later the Internet -- almost all files with the .ZIP (or .zip) extension are in PKZIP 2.x format.

Utilities to read and write these files are available on all common platforms. The deflate compression method was later specified in RFC 1951, and several OSes, such as Windows 2000 and XP, are able to work with such files natively.

Gzip is an attempt to replicate part of the functionality of pkzip in the Unix environment. Like pkzip, gzip uses Lempel-Ziv coding (LZ77).

Competition

Gzip competes with newer compressors such as bzip2 and xz. It is weaker in compression ratio but has other advantages, such as high speed and wide availability. It is still more or less adequate and, despite being obsolete from the compression-ratio standpoint, remains widely used.

By default, applying gzip to a file replaces the original file with a compressed file carrying the extension .gz, while keeping the same ownership, mode, and access and modification times. If no files are specified, or if a file name is "-", the standard input is compressed to the standard output. gzip will only attempt to compress regular files. In particular, it will ignore symbolic links.
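Typical invocations (file names here are hypothetical):

     gzip report.txt                    # replaces report.txt with report.txt.gz, keeping mode, owner and times
     gzip -c report.txt > /tmp/r.gz     # compress to another file, leaving report.txt in place
     gzip < report.txt > report.txt.gz  # compress standard input to standard output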

If the new file name is too long for its file system, gzip  truncates it. gzip  attempts to truncate only the parts of the file name longer than 3 characters. (A part is delimited by dots.) If the name consists of small parts only, the longest parts are truncated. For example, if file names are limited to 14 characters, gzip.msdos.exe is compressed to gzi.msd.exe.gz. Names are not truncated on systems which do not have a limit on file name length.

By default, gzip keeps the original file name and timestamp in the compressed file. These are used when decompressing the file with the `-N' option (the default). This is useful when the compressed file name was truncated or when the time stamp was not preserved after a file transfer. However, due to limitations in the current gzip file format, fractional seconds are discarded. Also, time stamps must fall within the range 1970-01-01 00:00:00 through 2106-02-07 06:28:15 UTC, and hosts whose operating systems use 32-bit time stamps are further restricted to time stamps no later than 2038-01-19 03:14:07 UTC. The upper bounds assume the typical case where leap seconds are ignored.
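For example, the stored name and time stamp can be inspected and then used on decompression (hypothetical file name):

     gzip -lv report.txt.gz       # verbose listing shows method, crc, stored date/time and original name
     gunzip -N report.txt.gz      # decompress using the stored name and time stamp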

Compressed files can be restored to their original form using `gzip -d', gunzip, or zcat. Using gzip -c naturally leads to the loss of permissions, ownership, and time stamps, since the output file is created by the shell redirection rather than by gzip. If the original name saved in the compressed file is not suitable for its file system, a new name is constructed from the original one to make it legal.
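For example (file names here are hypothetical):

     gzip -d report.txt.gz                       # same as gunzip: restores report.txt with its metadata
     zcat report.txt.gz | less                   # view the contents without creating an uncompressed copy
     gunzip -c report.txt.gz > /tmp/report.txt   # the copy gets default owner, mode and time stamp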

gunzip  takes a list of files on its command line and replaces each file whose name ends with `.gz', `.z', `.Z', `-gz', `-z' or `_z' and which begins with the correct magic number with an uncompressed file without the original extension. gunzip  also recognizes the special extensions `.tgz' and `.taz' as shorthands for `.tar.gz' and `.tar.Z' respectively. When compressing, gzip  uses the `.tgz' extension if necessary instead of truncating a file with a `.tar' extension.

gunzip  can currently decompress files created by gzip, zip, compress  or pack. The detection of the input format is automatic. When using the first two formats, gunzip  checks a 32 bit CRC (cyclic redundancy check). For pack, gunzip  checks the uncompressed length. The compress  format was not designed to allow consistency checks. However gunzip  is sometimes able to detect a bad `.Z' file. If you get an error when uncompressing a `.Z' file, do not assume that the `.Z' file is correct simply because the standard uncompress  does not complain. This generally means that the standard uncompress  does not check its input, and happily generates garbage output. The SCO `compress -H' format (lzh  compression method) does not include a CRC but also allows some consistency checks.

Files created by zip  can be uncompressed by gzip  only if they have a single member compressed with the 'deflation' method. This feature is only intended to help conversion of tar.zip  files to the tar.gz  format. To extract a zip  file with a single member, use a command like `gunzip <foo.zip' or `gunzip -S .zip foo.zip'. To extract zip  files with several members, use unzip  instead of gunzip.

zcat  is identical to `gunzip -c'. zcat  uncompresses either a list of files on the command line or its standard input and writes the uncompressed data on standard output. zcat  will uncompress files that have the correct magic number whether they have a `.gz' suffix or not.

gzip  uses the Lempel-Ziv algorithm used in zip  and PKZIP. The amount of compression obtained depends on the size of the input and the distribution of common substrings. Typically, text such as source code or English is reduced by 60-70%. Compression is generally much better than that achieved by LZW (as used in compress), Huffman coding (as used in pack), or adaptive Huffman coding (compact).

Compression is always performed, even if the compressed file is slightly larger than the original. The worst case expansion is a few bytes for the gzip  file header, plus 5 bytes every 32K block, or an expansion ratio of 0.015% for large files. Note that the actual number of used disk blocks almost never increases. gzip  normally preserves the mode, ownership and time stamps of files when compressing or decompressing.
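The 0.015% figure follows from 5 bytes per 32 KiB block: 5/32768 is roughly 0.015%. A quick way to see the worst case in practice, assuming a source of incompressible data such as /dev/urandom is available:

     head -c 1000000 /dev/urandom > random.bin
     gzip -c random.bin > random.bin.gz
     ls -l random.bin random.bin.gz     # the .gz file is slightly larger than the original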

The gzip  file format is specified in P. Deutsch, gzip file format specification version 4.3, Internet RFC 1952 (May 1996). The zip  deflation format is specified in P. Deutsch, deflate Compressed Data Format Specification version 1.3, Internet RFC 1951 (May 1996).

Examples

 

Here are some realistic examples of running gzip.

Compress the content of a mounted ISO into a .gz file:

gzip -rc /mnt > /tmp/iso.gz

You can time this operation; the speed of compressing the ISO content can serve as a poor man's test of the computer's performance:

time gzip -rc /mnt > /tmp/iso.gz

From 11 Simple Gzip Examples RootUsers

 

This is the output of the command `gzip -h':

     gzip version-number
     usage: gzip [-cdfhlLnNrtvV19] [-S suffix] [file ...]
      -c --stdout      write on standard output, keep original files unchanged
      -d --decompress  decompress
      -f --force       force overwrite of output file and compress links
      -h --help        give this help
      -l --list        list compressed file contents
      -L --license     display software license
      -n --no-name     do not save or restore the original name and time stamp
      -N --name        save or restore the original name and time stamp
      -q --quiet       suppress all warnings
      -r --recursive   operate recursively on directories
      -S .suf  --suffix .suf     use suffix .suf on compressed files
      -t --test        test compressed file integrity
      -v --verbose     verbose mode
      -V --version     display version number
      -1 --fast        compress faster
      -9 --best        compress better
      file...          files to (de)compress. If none given, use standard input.
     Report bugs to <[email protected]>.

This is the output of the command `gzip -v texinfo.tex':

     texinfo.tex:             69.7% -- replaced with texinfo.tex.gz

The following command will find all gzip  files in the current directory and subdirectories, and extract them in place without destroying the original:

     find . -name '*.gz' -print | sed 's/^\(.*\)[.]gz$/gunzip < "&" > "\1"/' | sh

The format for running the gzip  program is:

     gzip option ...

gzip  supports the following options:

`--stdout'
`--to-stdout'
`-c'
Write output on standard output; keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.
 
`--decompress'
`--uncompress'
`-d'
Decompress.
 
`--force'
`-f'
Force compression or decompression even if the file has multiple links or the corresponding file already exists, or if the compressed data is read from or written to a terminal. If the input data is not in a format recognized by gzip, and if the option `--stdout' is also given, copy the input data without change to the standard output: let zcat  behave as cat. If `-f' is not given, and when not running in the background, gzip  prompts to verify whether an existing file should be overwritten.
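Two typical uses of `-f', with hypothetical file names:

     gzip -f report.txt    # overwrite an existing report.txt.gz without prompting
     zcat -f notes.txt     # notes.txt is not gzip data, so it is copied through unchanged, like cat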
 
`--help'
`-h'
Print an informative help message describing the options then quit.
 
`--list'
`-l'
For each compressed file, list the following fields:
          compressed size: size of the compressed file
          uncompressed size: size of the uncompressed file
          ratio: compression ratio (0.0% if unknown)
          uncompressed_name: name of the uncompressed file
     

The uncompressed size is given as `-1' for files not in gzip  format, such as compressed `.Z' files. To get the uncompressed size for such a file, you can use:

          zcat file.Z | wc -c
     

In combination with the `--verbose' option, the following fields are also displayed:

          method: compression method (deflate,compress,lzh,pack)
          crc: the 32-bit CRC of the uncompressed data
          date & time: time stamp for the uncompressed file
     

The crc is given as ffffffff for a file not in gzip format.

With `--verbose', the size totals and compression ratio for all files is also displayed, unless some sizes are unknown. With `--quiet', the title and totals lines are not displayed.

The gzip  format represents the input size modulo 2^32, so the uncompressed size and compression ratio are listed incorrectly for uncompressed files 4 GB and larger. To work around this problem, you can use the following command to discover a large uncompressed file's true size:

          zcat file.gz | wc -c
     

 
`--license'
`-L'
Display the gzip  license then quit.
 
`--no-name'
`-n'
When compressing, do not save the original file name and time stamp by default. (The original name is always saved if the name had to be truncated.) When decompressing, do not restore the original file name if present (remove only the gzip  suffix from the compressed file name) and do not restore the original time stamp if present (copy it from the compressed file). This option is the default when decompressing.
 
`--name'
`-N'
When compressing, always save the original file name and time stamp; this is the default. When decompressing, restore the original file name and time stamp if present. This option is useful on systems which have a limit on file name length or when the time stamp has been lost after a file transfer.
 
`--quiet'
`-q'
Suppress all warning messages.
 
`--recursive'
`-r'
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip  will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip).
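For example, to compress every regular file under a directory tree in place (hypothetical path):

     gzip -rv /var/log/archive    # each file becomes file.gz; symbolic links are ignored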
 
`--suffix suf'
`-S suf'
Use suffix `suf' instead of `.gz'. Any suffix can be given, but suffixes other than `.z' and `.gz' should be avoided to avoid confusion when files are transferred to other systems. A null suffix forces gunzip to try decompression on all given files regardless of suffix, as in:
          gunzip -S "" *        (*.* for MSDOS)
     

Previous versions of gzip used the `.z' suffix. This was changed to avoid a conflict with pack.
 

`--test'
`-t'
Test. Check the compressed file integrity.
 
`--verbose'
`-v'
Verbose. Display the name and percentage reduction for each file compressed.
 
`--version'
`-V'
Version. Display the version number and compilation options, then quit.
 
`--fast'
`--best'
`-n'
Regulate the speed of compression using the specified digit n, where `-1' or `--fast' indicates the fastest compression method (less compression) and `--best' or `-9' indicates the slowest compression method (optimal compression). The default compression level is `-6' (that is, biased towards high compression at expense of speed).
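For example, with a hypothetical log file:

     gzip -1 -c access.log > access.fast.gz    # fastest, larger output
     gzip -9 -c access.log > access.best.gz    # slowest, smallest output
     gzip -c access.log > access.gz            # default level, -6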

Multiple compressed files can be concatenated. In this case, gunzip  will extract all members at once. If one member is damaged, other members might still be recovered after removal of the damaged member. Better compression can be usually obtained if all members are decompressed and then recompressed in a single step.

This is an example of concatenating gzip  files:

     gzip -c file1  > foo.gz
     gzip -c file2 >> foo.gz

Then

     gunzip -c foo

is equivalent to

     cat file1 file2

In case of damage to one member of a `.gz' file, other members can still be recovered (if the damaged member is removed). However, you can get better compression by compressing all members at once:

     cat file1 file2 | gzip > foo.gz

compresses better than

     gzip -c file1 file2 > foo.gz

If you want to recompress concatenated files to get better compression, do:

     zcat old.gz | gzip > new.gz

If a compressed file consists of several members, the uncompressed size and CRC reported by the `--list' option applies to the last member only. If you need the uncompressed size for all members, you can use:

     zcat file.gz | wc -c

If you wish to create a single archive file with multiple members so that members can later be extracted independently, use an archiver such as tar  or zip. GNU tar  supports the `-z' option to invoke gzip  transparently. gzip  is designed as a complement to tar, not as a replacement.
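For example, GNU tar can run gzip for you on both creation and extraction:

     tar -czf sources.tar.gz src/    # create a gzip-compressed tarball
     tar -xzf sources.tar.gz         # extract it; gzip is invoked transparently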

The environment variable GZIP  can hold a set of default options for gzip. These options are interpreted first and can be overwritten by explicit command line parameters. For example:

     for sh:    GZIP="-8v --name"; export GZIP
     for csh:   setenv GZIP "-8v --name"
     for MSDOS: set GZIP=-8v --name

When writing compressed data to a tape, it is generally necessary to pad the output with zeroes up to a block boundary. When the data is read and the whole block is passed to gunzip  for decompression, gunzip  detects that there is extra trailing garbage after the compressed data and emits a warning by default if the garbage contains nonzero bytes. You have to use the `--quiet' option to suppress the warning. This option can be set in the GZIP  environment variable, as in:

     for sh:    GZIP="-q"  tar -xfz --block-compress /dev/rst0
     for csh:   (setenv GZIP "-q"; tar -xfz --block-compress /dev/rst0)

In the above example, gzip  is invoked implicitly by the `-z' option of GNU tar. Make sure that the same block size (`-b' option of tar) is used for reading and writing compressed data on tapes. (This example assumes you are using the GNU version of tar.)



Old News ;-)

[May 20, 2018] Compressors galore pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file

These are pretty basic experiments, but the key conclusion is valid -- pbzip2 is probably the best option among "universal" archivers for FASTQ files.
Notable quotes:
"... note: if you check the vbsupport benchmark above, you'll see that lbzip2 had probably fixed slight lagging behind pbzip2 for regular multi-stream files; this improvement is also confirmed by my testing ..."
"... note: xz is a successor of lzma-utils ..."
"... digital archaeologist ..."
"... digital archaeologist ..."
"... note: this was done only for some compressors, not all ..."
"... major test disappointment ..."
"... major test disappointment ..."
"... major test disappointment ..."
"... major test disappointment ..."
"... For comparison, a single block damage with bzip2 would only cause the loss of between 100 and 900 K of compressed data, which – for fastq files – will probably have negligible effects. ..."
May 20, 2018 | bogdan.org.ua

Compressors galore: pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file

28th March 2015

About 2 years ago I had already reviewed some parallel (and not) compressing utilities , settling at that time on pbzip2 – it scales quasi-linearly with the number of CPUs/cores, stores compressed data in relatively small 900k blocks, is fast, and has good compression ratio. pbzip2 was (and still is) a very good choice. Yesterday I got somewhat distracted, and thus found lbzip2 -

an independent, multi-threaded implementation of bzip2. It is commonly the fastest SMP (and uniprocessor) bzip2 compressor and decompressor

- as it says in the Debian package description. Is it really "commonly the fastest" one? How does it compare to pbzip2 ? Should I use lbzip2 instead of pbzip2 ?

This minor distraction had grown into a full-scale web-search and comparison, adding to the mix plzip (a parallel version of lzip ), xz , and lrzip . After reading thousands of characters, all of these were put to a simple test: compressing an about 2 gigabyte FASTQ file with default options.

All the external links and benchmarks, as well as my own mini-benchmark results, are provided below.

The conclusion is that out of all the tested compressors lbzip2 is indeed the best one (for my practical use). It is only slightly better than the trusty pbzip2 , which takes the second place. All the other compressors performed so poorly, that they do not get any place in my practical rating

So, let us first ask internet wisdom/foolishness, if lbzip2 or pbzip2 is faster/better?

So, at least in theory lbzip2 is indeed better than pbzip2 , even if only at faster decompression of bzip2 -compressed files.

While looking for benchmarks, I've found this one (old but good), which highly praises lzop compressor. Apparently, lzop is noticeably faster than even gzip , and compresses only a little bit worse. However, I am not really interested in a faster gzip: I need something with much better compression, but still fast enough for multi-gigabyte files.

Next, I have stumbled upon lzip and plzip (.lz). What are these compressors?

That doesn't really tell us much on how plzip / lzip compare to, say, pbzip2 . But before performance, let us pay some more attention to long-term storage features of lzip :

The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability:

(I really liked the part about the digital archaeologist ! And the copyleft, to a lesser extent.)

Looks really attractive! Because what I am using compressors for is, essentially, longer-term archiving, with unpredictable needs to sometimes decompress some of the files. And, of course, storage media will fail fully or partially, so recovering is important, too. But what is this xz compressor?.. I've seen it before, in the contexts with words "overtake the world" or similar

xz

This hasn't really added any clarity, has it? Moreover, we now have one more unknown – the lrzip compressor. lrzip is a redundancy compressor with LZO, gzip, bzip2, ZPAQ and LZMA back-ends. It is highly efficient for highly redundant data, even if redundancies are separated with long stretches of other data. (FASTQ files are fairly redundant, though bzip2 seems to utilize that fairly well already; can lrzip do better?)

However, what if a part of the archive is damaged? How much information is lost then? Is it at all possible to recover some of the data from damaged .lrz archives?
Author's benchmarks showcase how good lrzip is at redundant data compression (although lrzip is multithreaded, so comparison in the benchmark to non-multithreaded algorithm implementations is not quite correct ). Damaged archive recovery concerns would have prevented me from using lrzip anyway, but I was really interested if a "long-range redundancy" compressor can do better than usual, "short-range redundancy" compressors.

My testing setup

Below come testing results. I have not put them into a single table, but I do comment the results in a few places. Entire testing followed this pattern:
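A minimal sketch of the kind of measurement apparently used for each compressor, assuming GNU time was the tool that captured the wall-clock seconds and peak-memory figures reported below (the exact commands are an assumption):

/usr/bin/time -v bzip2 -v test.fastq          # compress; record elapsed time and maximum resident set size
/usr/bin/time -v bzip2 -vd test.fastq.bz2     # decompress the result and record the same figures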

bzip2: 309 159 275 bytes
bzip2 was used as a baseline, to highlight speed benefits of both lbzip2 and pbzip2 .

test.fastq: 7.193:1, 1.112 bits/byte, 86.10% saved, 2223860346 in, 309159275 out.
bzip2 -v test.fastq: 190.63 s , 7608 Kb
bzip2 -v -d test.fastq.bz2: 51.58 s , 4620 Kb

Bzip2 is neither particularly slow, nor particularly fast. It also seems to have modest memory requirements.

pbzip2: 310 462 610 bytes
pbzip2 is the currently used reference. For any other compressor to become a successor of pbzip2 , that other compressor must be either a little faster (while compressing as good as pbzip2 ), or a little better compressor (while being as fast as pbzip2 ), or both. Note that compressed file size is only a tiny bit larger than with bzip2 .

"test.fastq.bz2″: compression ratio is 1:7.163, space savings is 86.04%
pbzip2 -v test.fastq: 46.22 s , 67436 Kb
pbzip2 -dv test.fastq.bz2: 19.80 s , 46672 Kb

Interestingly, pbzip2 --test uses 1 thread only (but also consumes only 6MB RAM), resulting in decompression times similar to those of bzip2 . lbzip2 uses all 8 threads also during testing.

lbzip2: 311 040 543 bytes

lbzip2: compressing "test.fastq" to "test.fastq.bz2"
lbzip2: "test.fastq": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -v test.fastq: 22.67 s , 49812 Kb

lbzip2: decompressing "test.fastq.bz2" to "test.fastq"
lbzip2: "test.fastq.bz2": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -vd test.fastq.bz2: 18.86 s , 46652 Kb

I repeated the pbzip2 and lbzip2 tests several times, and lbzip2 always compressed this same file about twice as fast. Wow! Decompression speed is about the same, and the compressed file size is marginally larger than with pbzip2. Overall, lbzip2 does look like a new drop-in replacement for bzip2 / pbzip2 for me.

xz -0 --threads=8: 517 967 372 bytes
I would call this one major test disappointment . Default setting, -6, was way too slow (estimated 28 minutes to compress!!!). Even the fastest -0 setting was still too slow! And here's one of the reasons, straight from the xz man page:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now. As of writing (2010-09-27), it hasn't been decided if threads will be used by default on multicore systems once support for threading has been implemented.

Also, I forgot to use the --block-size=900k option, but that seems to be of no concern with such results:

100 % 492.5 MiB / 2,120.8 MiB = 0.232 18 MiB/s 1:59
xz -0 -v test.fastq: 119.25 s , 4780 Kb
xz --test --verbose --threads=8 test.fastq.xz: 36.00 s , 2568 Kb
100 % 492.5 MiB / 2,120.8 MiB = 0.232 58 MiB/s 0:36
xz -d -v test.fastq.xz: 36.54 s , 2500 Kb

xz -0 was both slower and compressed significantly worse than lbzip2 and pbzip2. xz -0 was faster than good old bzip2, but had significantly worse compression. Really, a major test disappointment.

plzip: between 407 696 562 and 498 708 539 bytes
One more major test disappointment . (Or am I somehow using these compressors in a wrong way? ) I haven't found a way to set block/member size (for lzip , that would be the -b option). Default speed setting -6 was also way too slow, but settings -1 to -3 were comparable to pbzip2 , so I did all three.

plzip -1: 498 708 539 bytes

test.fastq: 4.459:1, 1.794 bits/byte, 77.57% saved, 2223860346 in, 498708539 out.
plzip -1 --verbose --threads=8 test.fastq: 30.27 s , 126360 Kb (this seems to be per-thread memory)
plzip --test --verbose --threads=8 test.fastq.lz: 6.86 s , 11640 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 7.24 s , 11644 Kb

Compression speed and ratio: both worse than lbzip2 . But the fastest testing and decompression so far.

plzip -2: 456 301 558 bytes

test.fastq: 4.874:1, 1.641 bits/byte, 79.48% saved, 2223860346 in, 456301558 out.
plzip -2 --verbose --threads=8 test.fastq: 38.81 s , 193416 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 6.26 s , 14828 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.38 s , 14736 Kb

Compression time worse than lbzip2 , a little better than pbzip2 , but compression ratio worse than any one of these. But even faster testing and decompression.

plzip -3: 407 696 562 bytes

test.fastq: 5.455:1, 1.467 bits/byte, 81.67% saved, 2223860346 in, 407696562 out.
plzip -3 --verbose --threads=8 test.fastq: 63.74 s , 245756 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 5.82 s , 18936 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.10 s , 18944 Kb

Even faster testing and decompression! But compression ratio and speed are still worse than lbzip2 and pbzip2 .

And the final contestant, lrzip ! All 5 back-ends were tested: LZO, gzip, bzip2, LZMA, ZPAQ.

lrzip has several peculiarities which hinder its use as a drop-in replacement for, say, bzip2. Most importantly, when a file is compressed, it is not deleted unless the -D option is specified. Unlike pbzip2 and lbzip2, which use all available CPUs/cores by default, lrzip only uses 2 by default (-p 8 in the results below requests use of 8 cores). Another unusual feature is that during testing a file is uncompressed to a storage medium and then deleted; almost all the other compressors only verify the decompressed data stream, which is then immediately discarded and never written to a storage medium. A related feature is the -c option, which performs file verification after decompression by reading the decompressed file from the storage medium and comparing it to the decompressed stream. lrzip also stores MD5 hashes of data, and allows verifying these. lrzip comes with several helper scripts – for example, one which allows tarballing and lrzipping a chosen directory in a single command. Actually, lrzip is more of an archive utility, and not just a compressor.

lrzip -D -p 8: 334 504 383 bytes
In this default (LZMA) mode, lrzip starts with 1 thread, but eventually uses more and more cores (though never all 8, or I haven't noticed this). Decompressing seems to use more threads, but that also depends on the back-end used (the slower it is – the more threads will be used, e.g. ZPAQ versus LZO).

test.fastq – Compression Ratio: 6.648. Average Compression Speed: 3.113MB/s.
Total time: 00:11:21.85
lrzip -D -p 8 test.fastq: 681.84 s , 3331080 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 124.706MB/s
[OK] – 2223860346 bytes
Total time: 00:00:17.13
lrzip -t -p 8 test.fastq.lrz: 17.21 s , 2567608 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 117.778MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:17.59
lrzip -d -p 8 -D test.fastq.lrz: 17.67 s , 2567664 Kb

In the default LZMA mode, lrzip is significantly slower than even bzip2, and has somewhat worse compression ratio. Yes, this is the 3rd major test disappointment .

gzip back-end: lrzip -g -L 9 -D -p 8: 430 013 769 bytes
Despite specifying -p 8 , lrzip mostly operates in 1 thread, and only sometimes in 2 (probably invokes gzip library). Testing is also done with 1 thread only, but is very fast (but slower than plzip ). The -L 9 option is supposed to be translated into -9 for gzip; as this normally has nearly no effect, it wasn't used in the following lrzip tests.

test.fastq – Compression Ratio: 5.172. Average Compression Speed: 0.704MB/s.
Total time: 00:50:11.34
lrzip -p 8 -g -L 9 -D test.fastq: 3011.34 s , 2745520 Kb

100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
[OK] – 2223860346 bytes
Total time: 00:00:12.71
lrzip -t -p 8 test.fastq.lrz: 12.79 s , 2577632 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:12.88
lrzip -d -p 8 -D test.fastq.lrz: 12.95 s , 2577728 Kb

And again, compression speed and ratio are worse than for bzip2

LZO back-end: lrzip -l -D -p 8: 766 520 776 bytes

test.fastq – Compression Ratio: 2.901. Average Compression Speed: 4.690MB/s.
Total time: 00:07:32.89
lrzip -l -D -p 8 test.fastq: 452.88 s , 2714452 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 212.000MB/s
[OK] – 2223860346 bytes
Total time: 00:00:10.58
lrzip -t -p 8 test.fastq.lrz: 10.66 s , 2582516 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 192.727MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:11.32
lrzip -d -p 8 -D test.fastq.lrz: 11.39 s , 2582504 Kb

No comments.

bzip2 back-end: lrzip -b -D -p 8: 353 473 476 bytes

test.fastq – Compression Ratio: 6.291. Average Compression Speed: 4.473MB/s.
Total time: 00:07:53.95
lrzip -b -D -p 8 test.fastq: 473.94 s , 2781104 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 68.387MB/s
[OK] – 2223860346 bytes
Total time: 00:00:30.69
lrzip -t -p 8 test.fastq.lrz: 30.77 s , 2583156 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 66.250MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:31.92
lrzip -d -p 8 -D test.fastq.lrz: 32.00 s , 2583108 Kb

Had I not done all of these simple tests myself, by now I'd think that this test was rigged to show how good pbzip2 and lbzip2 are at compressing FASTQ files :D

ZPAQ back-end: lrzip -z -D -p 8: 292 380 439 bytes

test.fastq – Compression Ratio: 7.606. Average Compression Speed: 2.804MB/s.% 7:100%
Total time: 00:12:36.51
lrzip -z -D -p 8 test.fastq: 756.51 s , 3585740 Kb

Decompressing
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.970MB/s
[OK] – 2223860346 bytes
Total time: 00:08:54.57
lrzip -t -p 8 test.fastq.lrz: 534.65 s , 2583424 Kb

Decompressing
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.759MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:09:24.27
lrzip -d -p 8 -D test.fastq.lrz: 564.36 s , 2583460 Kb

Finally!!! We have compression better than bzip2 ! But it is also much slower than bzip2 (and some 12 times slower than pbzip2 ), so not really an option. Alas. And decompression time is the worst in the test – almost 10 minutes for what plzip does in under 7 seconds ! (I do realize that compression ratio is also different – but not that much.) I wonder if slow lrzip speeds have anything to do with test.fastq being effectively in RAM? I do not know if there are any performance penalties to mmap ing a file which is already on a RAM-mounted partition.

The test.fastq file that I've used was somehow really hard for the tested compressors to tackle as fast and as well as lbzip2 and pbzip2 could.

Questions? Comments? Improvements, including plots of these figures? Comment below.

6 Responses to "Compressors galore: pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file"

  1. Seumas Says:
    April 29th, 2015 at 23:46

    Version 5.2 of xz is out, which does have multi-thread support. You may have to compile it yourself but it might be worth testing. I haven't tested it myself yet.

    I use xz for non-realtime compression (e.g. overnight backups), because although it's slow, it's so much better than bzip2 and, of course, if it's overnight I don't care if it takes half an hour or whatever to run.

  2. Bogdan Says:
    April 30th, 2015 at 11:36

    It is good to know, thanks. I was using versions currently available in Debian testing. I guess I'll make another comparison in a year or so :)

    I must say that even with multithreading xz with default settings will likely be significantly slower than lbzip2 – on the order of 200+ seconds on the same test file and hardware, and assuming a really good parallelism implementation. For my use this is way too slow, and probably not worth the extra savings. Also, more complicated xz file format looks like another drawback to me (harder to recover data).

    Clearly, everyone's needs are different, so I'm not saying that lbzip2 is much better overall – but it is for me ;)

  3. hmage Says:
    September 30th, 2016 at 21:55

    Try pxz , it's a parallel version of xz and is a drop-in replacement in terms of file format.

  4. Bogdan Says:
    October 6th, 2016 at 21:17

    Thanks Hmage, that sounds interesting. Maybe in my next installment of compressor testing I'll include pxz , too :)

    I did eventually try a newer (already parallel, I think) version of xz on genomic data, and had mixed success.
    lbzip2 sometimes achieved even better ratios, mostly just a little bit worse, rarely much worse, but was always many times faster.

  5. trotos Says:
    October 11th, 2016 at 15:02

    Could you please try that derivative of zpaq?
    http://mattmahoney.net/dc/fastqz/

  6. Bogdan Says:
    October 12th, 2016 at 23:15

    Trotos,

    your comment reminded me that I did mention Fastqz in my previous post on the topic: http://bogdan.org.ua/2013/10/17/favourite-file-compressor-gzip-bzip2-7z.html

    Looks like I haven't actually tested it, because of the concern that data recovery _might_ be too complicated with Fastqz.
    For comparison, a single block damage with bzip2 would only cause the loss of between 100 and 900 K of compressed data, which – for fastq files – will probably have negligible effects.

    Another reason to not test it was that it is not clear if it will see any future support.
    If, for example, a change in compiler makes building fastqz not possible without first modifying the code, then it's bad :)

    Maybe I'll test it anyway – next time.



The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: May 22, 2018