While gzip became the most widely used compression program on Unix (after compress), it is no longer actively developed. It was actually MS-DOS that provided the strongest impetus for the development of sophisticated compression programs: many talented authors competed with each other until the dust settled and the winners emerged.
Right now on Linux, for medium-size text tarballs (say, up to 500MB) the winner is xz; it is now widely used for distributing source code in Linux. For large files, however, it is far too slow. For large text files such as FASTA/FASTQ, pbzip2 is probably the optimal option: it is much faster than gzip and has decent error-recovery capabilities (important in case the archive gets corrupted).
For large files, pigz (a parallel implementation of gzip that can use multiple cores) should be used instead of gzip.
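A minimal sketch of pigz usage (assumes the pigz package is installed; file names are illustrative):
pigz -p 8 big.tar                  # compress on 8 cores, producing big.tar.gz
unpigz big.tar.gz                  # decompress (equivalent to pigz -d)
pigz -dc big.tar.gz | tar -tf -    # stream-decompress into tar, zcat-style
The resulting files are ordinary gzip files, so they can be decompressed with plain gzip/gunzip on machines where pigz is not available.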
The algorithms used in gzip came from MS-DOS. The most popular DOS compression program was pkzip, by the late Phil Katz, who created the company PKWARE. Starting from pkzip 2.0 (released in 1993) it used the so-called "deflate" method, a lossless data compression algorithm based on a combination of the LZ77 algorithm and Huffman coding. The resulting file format has since become ubiquitous in DOS and later Windows, as well as on BBSes and later the Internet -- almost all files with the .ZIP (or .zip) extension are in PKZIP 2.x format.
Utilities to read and write these files are available on all common platforms. The deflate format was later specified in RFC 1951, and several OSes, such as Windows 2000 and XP, can work with such files natively.
Gzip is an attempt to replicate part of the functionality of pkzip in the Unix environment. Like pkzip, gzip uses Lempel-Ziv coding (LZ77).
Gzip competes with newer compressors such as bzip2 and xz. Gzip is weaker in compression ratio but has other advantages, such as high speed and wide availability. It is still more or less adequate and, despite being obsolete from the compression-ratio standpoint, remains widely used.
By default, applying gzip to a file replaces the original file with an archive that has the extension .gz, while keeping the same ownership, modes, and access and modification times. If no files are specified, or if a file name is "-", the standard input is compressed to the standard output. gzip will only attempt to compress regular files. In particular, it will ignore symbolic links.
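A few examples of this default behavior (file names are illustrative):
gzip report.txt                          # replaces report.txt with report.txt.gz
gzip -c report.txt > /tmp/report.txt.gz  # compress to stdout, keeping the original
cat access.log | gzip > access.log.gz    # compress standard input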
If the new file name is too long for its file system, gzip truncates it. gzip attempts to truncate only the parts of the file name longer than 3 characters. (A part is delimited by dots.) If the name consists of small parts only, the longest parts are truncated. For example, if file names are limited to 14 characters, gzip.msdos.exe is compressed to gzi.msd.exe.gz. Names are not truncated on systems which do not have a limit on file name length.
By default, gzip keeps the original file name and timestamp in the compressed file. These are used when decompressing the file with the `-N' option (the default). This is useful when the compressed file name was truncated or when the time stamp was not preserved after a file transfer. However, due to limitations in the current gzip file format, fractional seconds are discarded. Also, time stamps must fall within the range 1970-01-01 00:00:00 through 2106-02-07 06:28:15 UTC, and hosts whose operating systems use 32-bit time stamps are further restricted to time stamps no later than 2038-01-19 03:14:07 UTC. The upper bounds assume the typical case where leap seconds are ignored.
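For example (a minimal sketch; file names are illustrative):
gzip -n secret.txt     # do not store the original name and time stamp
gunzip -N archive.gz   # restore the stored original name and time stamp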
Compressed files can be restored to their original form using `gzip -d', gunzip, or zcat. Using gzip -c naturally leads to the loss of permissions, ownership, and timestamps. If the original name saved in the compressed file is not suitable for its file system, a new name is constructed from the original one to make it legal.
gunzip takes a list of files on its command line and replaces each file whose name ends with `.gz', `.z', `.Z', `-gz', `-z' or `_z' and which begins with the correct magic number with an uncompressed file without the original extension. gunzip also recognizes the special extensions `.tgz' and `.taz' as shorthands for `.tar.gz' and `.tar.Z' respectively. When compressing, gzip uses the `.tgz' extension if necessary instead of truncating a file with a `.tar' extension.
gunzip can currently decompress files created by gzip, zip, compress or pack. The detection of the input format is automatic. When using the first two formats, gunzip checks a 32 bit CRC (cyclic redundancy check). For pack, gunzip checks the uncompressed length. The compress format was not designed to allow consistency checks. However gunzip is sometimes able to detect a bad `.Z' file. If you get an error when uncompressing a `.Z' file, do not assume that the `.Z' file is correct simply because the standard uncompress does not complain. This generally means that the standard uncompress does not check its input, and happily generates garbage output. The SCO `compress -H' format (lzh compression method) does not include a CRC but also allows some consistency checks.
Files created by zip can be uncompressed by gzip only if they have a single member compressed with the 'deflation' method. This feature is only intended to help conversion of tar.zip files to the tar.gz format. To extract a zip file with a single member, use a command like `gunzip <foo.zip' or `gunzip -S .zip foo.zip'. To extract zip files with several members, use unzip instead of gunzip.
zcat is identical to `gunzip -c'. zcat uncompresses either a list of files on the command line or its standard input and writes the uncompressed data on standard output. zcat will uncompress files that have the correct magic number whether they have a `.gz' suffix or not.
gzip uses the Lempel-Ziv algorithm used in zip and PKZIP. The amount of compression obtained depends on the size of the input and the distribution of common substrings. Typically, text such as source code or English is reduced by 60-70%. Compression is generally much better than that achieved by LZW (as used in compress), Huffman coding (as used in pack), or adaptive Huffman coding (compact).
Compression is always performed, even if the compressed file is slightly larger than the original. The worst case expansion is a few bytes for the gzip file header, plus 5 bytes every 32K block, or an expansion ratio of 0.015% for large files. Note that the actual number of used disk blocks almost never increases. gzip normally preserves the mode, ownership and time stamps of files when compressing or decompressing.
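You can observe this worst case by compressing incompressible data (a hedged sketch; sizes are illustrative):
head -c 1000000 /dev/urandom > rand.bin
gzip -v rand.bin       # reports a ratio near 0%; rand.bin.gz is slightly larger than rand.bin
ls -l rand.bin.gz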
The gzip file format is specified in P. Deutsch, gzip file format specification version 4.3, Internet RFC 1952 (May 1996). The zip deflation format is specified in P. Deutsch, deflate Compressed Data Format Specification version 1.3, Internet RFC 1951 (May 1996).
Here are some realistic examples of running gzip.
Compress the content of a mounted ISO into a .gz file:
gzip -rc /mnt > /tmp/iso.gz
You can time this operation; the speed of compressing an ISO image can serve as a poor man's test of computer performance:
time gzip -rc /mnt > /tmp/iso.gz
From 11 Simple Gzip Examples (RootUsers):
- 1. Compress a single file
This will compress file.txt and create file.txt.gz; note that this will remove the original file.txt file.
gzip file.txt
- 2. Compress multiple files at once
This will compress all files specified in the command; note again that this will remove the original files, turning file1.txt, file2.txt and file3.txt into file1.txt.gz, file2.txt.gz and file3.txt.gz.
gzip file1.txt file2.txt file3.txt
To instead compress all files within a directory, see example 8 below.
- 3. Compress a single file and keep the original
You can instead keep the original file and create a compressed copy.
gzip -c file.txt > file.txt.gz
The -c flag outputs the compressed copy of file.txt to stdout; this is then redirected to file.txt.gz, keeping the original file.txt in place. Newer versions of gzip may also have -k or --keep available, which could be used instead: `gzip -k file.txt`.
- 4. Compress all files recursively
All files within the directory and all sub directories can be compressed recursively with the -r flag
[root@centos test]# ls -laR
.:
drwxr-xr-x. 2 root root 24 Jul 28 18:05 example
-rw-r--r--. 1 root root  8 Jul 28 17:09 file1.txt
-rw-r--r--. 1 root root  3 Jul 28 17:54 file2.txt
-rw-r--r--. 1 root root  5 Jul 28 17:54 file3.txt

./example:
-rw-r--r--. 1 root root 5 Jul 28 18:00 example.txt

[root@centos test]# gzip -r *
[root@centos test]# ls -laR
.:
drwxr-xr-x. 2 root root 27 Jul 28 18:07 example
-rw-r--r--. 1 root root 38 Jul 28 17:09 file1.txt.gz
-rw-r--r--. 1 root root 33 Jul 28 17:54 file2.txt.gz
-rw-r--r--. 1 root root 35 Jul 28 17:54 file3.txt.gz

./example:
-rw-r--r--. 1 root root 37 Jul 28 18:00 example.txt.gz

In the above example there are 3 .txt files in the test directory (our current working directory); there is also an example subdirectory which contains example.txt. Upon running gzip with the -r flag over everything, all files were recursively compressed.
This can be reversed by running `gzip -dr *`, where -d is used to decompress and -r performs this on all of the files recursively.
- 5. Decompress a gzip compressed file
To reverse the compression process and get the original file back that you have compressed, you can use the gzip command itself or gunzip which is also part of the gzip package.
gzip -d file.txt.gz
or
gunzip file.txt.gz
Both of these commands will produce the same result, decompressing file.txt.gz to file.txt and removing the compressed file.txt.gz.
Similar to example 3, it is possible to decompress a file and keep the original .gz file as below.
gunzip -c file.txt.gz > file.txt
As mentioned in step 4, -d can be combined with -r to decompress all files recursively.
- 6. List compression information
With the -l or --list flag we can see useful information regarding a compressed .gz file such as the compressed and uncompressed size of the file as well as the compression ratio, which shows us how much space our compression is saving.
[root@centos ~]# gzip -l linux-3.18.19.tar.gz
compressed uncompressed ratio uncompressed_name
126117045 580761600 78.3% linux-3.18.19.tar
[root@centos ~]# ls -lah
-rw-r--r--. 1 root root 554M Jul 28 17:24 linux-3.18.19.tar
-rw-r--r--. 1 root root 121M Jul 28 17:25 linux-3.18.19.tar.gz

In this example, gzipping a copy of the Linux kernel sources saved 78.3% of the original size, so the archive takes up 121MB of space rather than 554MB.
- 7. Adjust compression level
The level of compression applied to a file using gzip can be specified as a value between 1 (less compression) and 9 (best compression). Using option 1 will complete faster, but space saved from the compression will not be optimal. Using option 9 will take longer to complete, however you will have the largest amount of space saved.
The example below compares -1 and -9: while -1 finishes much faster, it compresses around 5% less (approximately 30MB more space required).
[root@centos ~]# time gzip -1 linux-3.18.19.tar
real 0m13.602s
user 0m12.908s
sys 0m0.662s
[root@mirror1 ~]# gzip -l linux-3.18.19.tar.gz
compressed uncompressed ratio uncompressed_name
156001021 580761600 73.1% linux-3.18.19.tar
[root@centos ~]# time gzip -9 linux-3.18.19.tar
real 0m58.129s
user 0m57.193s
sys 0m0.735s
[root@centos ~]# gzip -l linux-3.18.19.tar.gz
compressed uncompressed ratio uncompressed_name
125064095 580761600 78.5% linux-3.18.19.tar

-1 can also be specified with the flag --fast, while -9 can also be specified with the flag --best. By default gzip uses a compression level of -6, which is slightly biased towards higher compression at the expense of speed. When selecting a value between 1 and 9 it is important to consider what is more important to you: the amount of space saved or the amount of time spent compressing. The default -6 option provides a fair trade-off.
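To compare several levels yourself, a small loop like the following can be used (a hedged sketch; the tarball name is illustrative and /usr/bin/time is GNU time):
for level in 1 6 9; do
  cp linux-3.18.19.tar /tmp/bench.tar
  /usr/bin/time -f "-$level: %e s" gzip -$level /tmp/bench.tar
  gzip -l /tmp/bench.tar.gz       # show compressed size and ratio for this level
  rm /tmp/bench.tar.gz
done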
- 8. Compress a directory
With the help of the tar command, we can create a tar file of a whole directory and gzip the result. We can perform the whole lot in one step, as the tar command allows us to specify a compression method to use.
tar czvf etc.tar.gz /etc/
This example creates a compressed etc.tar.gz file of the entire /etc/ directory. The tar flags are as follows: `c` creates a new tar archive, `z` specifies that we want to compress with gzip, `v` provides verbose information, and `f` specifies the file to create. The resulting etc.tar.gz file contains all files within /etc/ compressed using gzip.
- 9. Integrity test
The -t or --test flag can be used to check the integrity of a compressed file.
On a normal file, the result will be listed as OK, shown below.
[root@centos test]# gzip -tv file1.txt.gz
file1.txt.gz: OK

I have now manually modified this file with a text editor and added a random value, essentially introducing corruption, so it is no longer valid.
[root@centos test]# gzip -tv file1.txt.gz
file1.txt.gz:
gzip: file1.txt.gz: invalid compressed data--crc error
gzip: file1.txt.gz: invalid compressed data--length error

The compressed .gz file makes use of a cyclic redundancy check (CRC) in order to detect errors. The CRC value can be viewed by running gzip with the -l and -v flags, as shown below.
[root@centos test]# gzip -lv file1.txt.gz
method crc date time compressed uncompressed ratio uncompressed_name
defla 08db5c50 Jul 28 18:15 40 167772160 100.0% file1.txt

- 10. Concatenate multiple files
Multiple files can be concatenated into a single .gz file.
gzip -c file1.txt > files.gz
gzip -c file2.txt >> files.gz
The files.gz archive now contains the contents of both file1.txt and file2.txt; if you decompress files.gz you will get a file named `files` which contains the content of both .txt files. The output is similar to running `cat file1.txt file2.txt`. If instead you want to create a single file that contains multiple files, you can use the tar command, which supports gzip compression, as covered above in example 8.
- 11. Additional commands included with gzip
The gzip package provides some very useful commands for working with compressed files, such as zcat, zgrep and zless/zmore.
As you can probably tell by the names of the commands, these are essentially the cat, grep, and less/more commands, however they work directly on compressed data. This means that you can easily view or search the contents of a compressed file without having to decompress it and then view or search it in a second step.
[root@centos test]# zcat test.txt.gz
test example text
[root@centos test]# zgrep exa test.txt.gz
example

This is especially useful when searching through or reviewing log files which have been compressed during log rotation.
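Similarly, zless can page through a rotated log without decompressing it first (the path is illustrative):
zless /var/log/syslog.2.gz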
This is the output of the command `gzip -h':
gzip version-number
usage: gzip [-cdfhlLnNrtvV19] [-S suffix] [file ...]
 -c --stdout      write on standard output, keep original files unchanged
 -d --decompress  decompress
 -f --force       force overwrite of output file and compress links
 -h --help        give this help
 -l --list        list compressed file contents
 -L --license     display software license
 -n --no-name     do not save or restore the original name and time stamp
 -N --name        save or restore the original name and time stamp
 -q --quiet       suppress all warnings
 -r --recursive   operate recursively on directories
 -S .suf --suffix .suf  use suffix .suf on compressed files
 -t --test        test compressed file integrity
 -v --verbose     verbose mode
 -V --version     display version number
 -1 --fast        compress faster
 -9 --best        compress better
 file...          files to (de)compress. If none given, use standard input.
Report bugs to <[email protected]>.
This is the output of the command `gzip -v texinfo.tex':
texinfo.tex: 69.7% -- replaced with texinfo.tex.gz
The following command will find all gzip files in the current directory and subdirectories, and extract them in place without destroying the original:
find . -name '*.gz' -print | sed 's/^\(.*\)[.]gz$/gunzip < "&" > "\1"/' | sh
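Newer versions of gzip (1.6 and later, to the best of my knowledge) make this simpler, since gunzip gained a -k (--keep) flag:
find . -name '*.gz' -exec gunzip -k {} +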
The format for running the gzip program is:
gzip [option]... [file]...
gzip supports the options listed in the `gzip -h' output above. For each compressed file, `gzip --list' displays the following fields:
compressed size: size of the compressed file
uncompressed size: size of the uncompressed file
ratio: compression ratio (0.0% if unknown)
uncompressed_name: name of the uncompressed file
The uncompressed size is given as `-1' for files not in gzip format, such as compressed `.Z' files. To get the uncompressed size for such a file, you can use:
zcat file.Z | wc -c
In combination with the `--verbose' option, the following fields are also displayed:
method: compression method (deflate, compress, lzh, pack)
crc: the 32-bit CRC of the uncompressed data
date & time: time stamp for the uncompressed file
The crc is given as ffffffff for a file not in gzip format.
With `--verbose', the size totals and compression ratio for all files are also displayed, unless some sizes are unknown. With `--quiet', the title and totals lines are not displayed.
The gzip format represents the input size modulo 2^32, so the uncompressed size and compression ratio are listed incorrectly for uncompressed files 4 GB and larger. To work around this problem, you can use the following command to discover a large uncompressed file's true size:
zcat file.gz | wc -c
Previous versions of gzip used the `.z' suffix. This was changed to avoid a conflict with pack. To attempt decompression on all files regardless of suffix, you can use:
gunzip -S "" * (*.* for MSDOS)
Multiple compressed files can be concatenated. In this case, gunzip will extract all members at once. If one member is damaged, other members might still be recovered after removal of the damaged member. Better compression can usually be obtained if all members are decompressed and then recompressed in a single step.
This is an example of concatenating gzip files:
gzip -c file1 > foo.gz gzip -c file2 >> foo.gz
Then
gunzip -c foo
is equivalent to
cat file1 file2
In case of damage to one member of a `.gz' file, other members can still be recovered (if the damaged member is removed). However, you can get better compression by compressing all members at once:
cat file1 file2 | gzip > foo.gz
compresses better than
gzip -c file1 file2 > foo.gz
If you want to recompress concatenated files to get better compression, do:
zcat old.gz | gzip > new.gz
If a compressed file consists of several members, the uncompressed size and CRC reported by the `--list' option applies to the last member only. If you need the uncompressed size for all members, you can use:
zcat file.gz | wc -c
If you wish to create a single archive file with multiple members so that members can later be extracted independently, use an archiver such as tar or zip. GNU tar supports the `-z' option to invoke gzip transparently. gzip is designed as a complement to tar, not as a replacement.
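For example, with GNU tar the whole round trip looks like this (a brief sketch; the archive name is illustrative):
tar -czf etc.tar.gz /etc/    # create a gzip-compressed archive
tar -tzf etc.tar.gz          # list members without extracting
tar -xzf etc.tar.gz          # extract, decompressing transparently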
The environment variable GZIP can hold a set of default options for gzip. These options are interpreted first and can be overwritten by explicit command line parameters. For example:
for sh: GZIP="-8v --name"; export GZIP
for csh: setenv GZIP "-8v --name"
for MSDOS: set GZIP=-8v --name
When writing compressed data to a tape, it is generally necessary to pad the output with zeroes up to a block boundary. When the data is read and the whole block is passed to gunzip for decompression, gunzip detects that there is extra trailing garbage after the compressed data and emits a warning by default if the garbage contains nonzero bytes. You have to use the `--quiet' option to suppress the warning. This option can be set in the GZIP environment variable, as in:
for sh: GZIP="-q" tar -xfz --block-compress /dev/rst0
for csh: (setenv GZIP "-q"; tar -xfz --block-compress /dev/rst0)
In the above example, gzip is invoked implicitly by the `-z' option of GNU tar. Make sure that the same block size (`-b' option of tar) is used for reading and writing compressed data on tapes. (This example assumes you are using the GNU version of tar.)
|
Switchboard | ||||
Latest | |||||
Past week | |||||
Past month |
May 20, 2018 | bogdan.org.ua
Compressors galore: pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file
28th March 2015
About 2 years ago I had already reviewed some parallel (and not) compressing utilities, settling at that time on pbzip2 – it scales quasi-linearly with the number of CPUs/cores, stores compressed data in relatively small 900k blocks, is fast, and has a good compression ratio. pbzip2 was (and still is) a very good choice. Yesterday I got somewhat distracted, and thus found lbzip2 – an independent, multi-threaded implementation of bzip2. It is commonly the fastest SMP (and uniprocessor) bzip2 compressor and decompressor
- as it says in the Debian package description. Is it really "commonly the fastest" one? How does it compare to pbzip2 ? Should I use lbzip2 instead of pbzip2 ?
This minor distraction had grown into a full-scale web-search and comparison, adding to the mix plzip (a parallel version of lzip ), xz , and lrzip . After reading thousands of characters, all of these were put to a simple test: compressing an about 2 gigabyte FASTQ file with default options.
All the external links and benchmarks, as well as my own mini-benchmark results, are provided below.
The conclusion is that, out of all the tested compressors, lbzip2 is indeed the best one (for my practical use). It is only slightly better than the trusty pbzip2, which takes second place. All the other compressors performed so poorly that they do not get any place in my practical rating.
So, let us first ask internet wisdom/foolishness, if lbzip2 or pbzip2 is faster/better?
- this askubuntu question shows that lbzip2 is compressing faster (1:43) than pbzip2 (2:34)
- this nice benchmark also confirms that lbzip2 is indeed faster at compressing; lbzip2 also appears to use less RAM and a little bit less CPU during compression; during decompression, lbzip2 (reportedly) uses much more RAM. lbzip2 achieved at least as good (and even marginally better) compression ratios as pbzip2 .
- lbzip2 github page and also this unrelated page both say that lbzip2 is fully cross-compatible with bzip2
- probably most importantly, lbzip2 github readme says that even bzip2 -compressed archives get a decompression speedup (which is definitely not the case with pbzip2 )
- lbzip2 also uses 100-900k blocks (900k by default)
- it is not clear if lbzip2 is somewhat less widely tested than pbzip2
- lbzip2 's author has performed some testing (back in 2009, mind you!), and these were the most important results:
- lbzip2 is better when decompressing from a pipe, no matter the producer, and also when the compressed input coming from a regular file is single stream
- pbzip2 beats lbzip2 when the compressed input is coming from a regular file and is multi-stream (yes, pbzip2 can decompress even lbzip2′s compressed output faster than lbzip2 itself, when it's coming from a regular file) note: if you check the vbsupport benchmark above, you'll see that lbzip2 had probably fixed slight lagging behind pbzip2 for regular multi-stream files; this improvement is also confirmed by my testing
So, at least in theory lbzip2 is indeed better than pbzip2 , even if only at faster decompression of bzip2 -compressed files.
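In practice lbzip2 is a drop-in replacement (a minimal sketch; the file name matches the test file used below):
lbzip2 -v test.fastq          # compress, using all available cores by default
lbzip2 -dv test.fastq.bz2     # decompress; plain bzip2 archives also get a speedup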
While looking for benchmarks, I've found this one (old but good), which highly praises lzop compressor. Apparently, lzop is noticeably faster than even gzip , and compresses only a little bit worse. However, I am not really interested in a faster gzip: I need something with much better compression, but still fast enough for multi-gigabyte files.
Next, I have stumbled upon lzip and plzip (.lz). What are these compressors?
- plzip is a parallel version of lzip , and fully lzip-compatible
- lzip is an LZMA compressor
- reading the documentation leaves an impression that [p]lzip achieves better compression, is slower, and needs much more RAM than competing compressors
- there is a special utility called lziprecover, which helps recover data from damaged lzip archives by leveraging, on the one hand, CRC checksums of compressed blocks and, on the other, multiple damaged copies of the archive (if available); see the usage sketch after the quoted feature list below
- from the official website:
Lzip is a lossless data compressor with a user interface similar to the one of gzip or bzip2 . Lzip is about as fast as gzip , compresses most files more than bzip2 , and is better than both from a data recovery perspective.
- default "member" (compressed block/chunk) size is 4 petabytes , but can be set to a lower value (minimal 100kb), mimicking bzip2′s chunk size
- supports multiple, independent volumes (losing one volume will still allow recovering data from all other volumes)
- with multiple cores, plzip creates multi-member files by default (but it is not clear, what is the size of these members? Default is said to be twice the dictionary size, but default for dictionary size is not specified in the manual – so lzip/plzip seem to require compression level -1 -9 specification)
- here lzip compresses a little bit better than xz without the --extreme option
- (l|p)bzip2 should still be faster than either lzip or xz
- I started mentioning xz , because lzip and xz (at least historically) are competing LZMA-based compressors
- a 1 year old opinion makes the following statements about lzip:
- lzip is a marginal archiver with no real benefits since the appearance of xz ( note: xz is a successor of lzma-utils )
- xz is more popular, more widely accepted
- xz has a community, while lzip has 1 author
- performance of xz and lzip is comparable
- xz has more features
- but lzip does indeed have a recovery utility that xz doesn't
That doesn't really tell us much on how plzip / lzip compare to, say, pbzip2 . But before performance, let us pay some more attention to long-term storage features of lzip :
The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability:
- The lzip format provides very safe integrity checking and some data recovery means. The lziprecover program can repair bit-flip errors (one of the most common forms of data corruption) in lzip files, and provides data recovery capabilities, including error-checked merging of damaged copies of a file.
- The lzip format is as simple as possible (but not simpler). The lzip manual provides the code of a simple decompressor along with a detailed explanation of how it works, so that with the only help of the lzip manual it would be possible for a digital archaeologist to extract the data from a lzip file long after quantum computers eventually render LZMA obsolete.
- Additionally, the lzip reference implementation is copylefted, which guarantees that it will remain free forever.
(I really liked the part about the digital archaeologist ! And the copyleft, to a lesser extent.)
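As a hedged illustration of the recovery tooling mentioned above (the flags are taken from lziprecover's documented interface, but exact option names may vary by version, so treat this as a sketch):
lziprecover -t archive.lz                          # test archive integrity
lziprecover -m -o repaired.lz copy1.lz copy2.lz    # merge two damaged copies into a repaired file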
Looks really attractive! Because what I am using compressors for is, essentially, longer-term archiving, with unpredictable needs to sometimes decompress some of the files. And, of course, storage media will fail fully or partially, so recovering is important, too. But what is this xz compressor?.. I've seen it before, in the contexts with words "overtake the world" or similar
xz
- much more complex file format than lzip , but maybe it has some benefits for client programs and/or recovery?
- supports integrity checks and multiple compressed blocks
- according to this post from 2012 , xz (single-threaded) both compressed and decompressed much faster than lzip and lrzip (depends on settings, of course)
- lzip is older than xz , and was better than xz predecessor – lzma-utils
- xz is adopted by some linux distributions and software projects for package compression
- xz does not seem to have an equivalent of lziprecover
- tar supports both --lzip and --xz, also with --auto-compress
This hasn't really added any clarity, has it? Moreover, we now have one more unknown – the lrzip compressor. lrzip is a redundancy compressor with LZO, gzip, bzip2, ZPAQ and LZMA back-ends. It is highly efficient for highly redundant data, even if redundancies are separated with long stretches of other data. (FASTQ files are fairly redundant, though bzip2 seems to utilize that fairly well already; can lrzip do better?)
However, what if a part of the archive is damaged? How much information is lost then? Is it at all possible to recover some of the data from damaged .lrz archives?
Author's benchmarks showcase how good lrzip is at redundant data compression (although lrzip is multithreaded, so comparison in the benchmark to non-multithreaded algorithm implementations is not quite correct). Damaged archive recovery concerns would have prevented me from using lrzip anyway, but I was really interested if a "long-range redundancy" compressor can do better than usual, "short-range redundancy" compressors.

My testing setup
- Debian testing 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux
- Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (4 physical cores with HT enabled: 8 hardware threads)
- 16GiB RAM
- test file name: test.fastq
- test file size: 2 223 860 346 bytes (a little over 2 gigabytes)
- test file was copied once to RAM-mounted /tmp, to exclude any I/O bottleneck effects on compression speeds
- bzip2: 1.0.6
- lbzip2: 2.5
- pbzip2: 1.1.9
- xz: 5.1.0alpha
- plzip: 1.2
- lrzip: 0.616
- command execution time and maximal process RSS memory were measured with /usr/bin/time -f '%C: %e s, %M Kb' compressor arguments (note: this is not bash's built-in time); please note that memory measurement can be incorrect for multithreaded compressors (a concrete instance of this harness is shown after the testing pattern below)

Below come the testing results. I have not put them into a single table, but I do comment on the results in a few places. The entire testing followed this pattern:
- compress test.fastq, deleting the original
- test compressed archive ( note: this was done only for some compressors, not all )
- decompress archive back to test.fastq, delete archive
- if 3 previous steps are fast enough: repeat 1-2 more times (but only show the best result below); otherwise continue
- repeat with the next compressor
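As a concrete instance of that harness, a single compress/decompress step looked like this (reconstructed from the reported output lines; the file name matches the test file above):
/usr/bin/time -f '%C: %e s, %M Kb' lbzip2 -v test.fastq
/usr/bin/time -f '%C: %e s, %M Kb' lbzip2 -vd test.fastq.bz2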
bzip2: 309 159 275 bytes
bzip2 was used as a baseline, to highlight the speed benefits of both lbzip2 and pbzip2.

test.fastq: 7.193:1, 1.112 bits/byte, 86.10% saved, 2223860346 in, 309159275 out.
bzip2 -v test.fastq: 190.63 s , 7608 Kb
bzip2 -v -d test.fastq.bz2: 51.58 s , 4620 Kb

Bzip2 is neither particularly slow, nor particularly fast. It also seems to have modest memory requirements.
pbzip2: 310 462 610 bytes
pbzip2 is the currently used reference. For any other compressor to become a successor of pbzip2, that other compressor must be either a little faster (while compressing as well as pbzip2), or a little better at compression (while being as fast as pbzip2), or both. Note that the compressed file size is only a tiny bit larger than with bzip2.

"test.fastq.bz2": compression ratio is 1:7.163, space savings is 86.04%
pbzip2 -v test.fastq: 46.22 s , 67436 Kb
pbzip2 -dv test.fastq.bz2: 19.80 s , 46672 Kb

Interestingly, pbzip2 --test uses 1 thread only (but also consumes only 6MB RAM), resulting in decompression times similar to those of bzip2. lbzip2 uses all 8 threads also during testing.

lbzip2: 311 040 543 bytes
lbzip2: compressing "test.fastq" to "test.fastq.bz2"
lbzip2: "test.fastq": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -v test.fastq: 22.67 s , 49812 Kb

lbzip2: decompressing "test.fastq.bz2" to "test.fastq"
lbzip2: "test.fastq.bz2": compression ratio is 1:7.150, space savings is 86.01%
lbzip2 -vd test.fastq.bz2: 18.86 s , 46652 Kb

I repeated the pbzip2 and lbzip2 tests several times, and lbzip2 always compressed this same file about twice as fast. Wow! Decompression speed is about the same, and the compressed file size is marginally larger than with pbzip2. Overall, lbzip2 does look like a new drop-in replacement for bzip2/pbzip2 for me.
xz -0 --threads=8: 517 967 372 bytes

I would call this one a major test disappointment. The default setting, -6, was way too slow (estimated 28 minutes to compress!!!). Even the fastest -0 setting was still too slow! And here's one of the reasons, straight from the xz man page:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now. As of writing (2010-09-27), it hasn't been decided if threads will be used by default on multicore systems once support for threading has been implemented.

Also, I forgot to use the --block-size=900k option, but that seems to be of no concern with such results:

100 % 492.5 MiB / 2,120.8 MiB = 0.232 18 MiB/s 1:59
xz -0 -v test.fastq: 119.25 s , 4780 Kb
xz --test --verbose --threads=8 test.fastq.xz: 36.00 s , 2568 Kb
100 % 492.5 MiB / 2,120.8 MiB = 0.232 58 MiB/s 0:36
xz -d -v test.fastq.xz: 36.54 s , 2500 Kb

xz -0 was both slower and had significantly worse compression when compared to lbzip2 and pbzip2. xz -0 was faster than good old bzip2, but had significantly worse compression. Really, a major test disappointment.
plzip: between 407 696 562 and 498 708 539 bytes
One more major test disappointment. (Or am I somehow using these compressors in a wrong way?) I haven't found a way to set the block/member size (for lzip, that would be the -b option). The default speed setting, -6, was also way too slow, but settings -1 to -3 were comparable to pbzip2, so I did all three.

plzip -1: 498 708 539 bytes
test.fastq: 4.459:1, 1.794 bits/byte, 77.57% saved, 2223860346 in, 498708539 out.
plzip -1 --verbose --threads=8 test.fastq: 30.27 s , 126360 Kb (this seems to be per-thread memory)
plzip --test --verbose --threads=8 test.fastq.lz: 6.86 s , 11640 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 7.24 s , 11644 Kb

Compression speed and ratio: both worse than lbzip2. But the fastest testing and decompression so far.
plzip -2: 456 301 558 bytes
test.fastq: 4.874:1, 1.641 bits/byte, 79.48% saved, 2223860346 in, 456301558 out.
plzip -2 --verbose --threads=8 test.fastq: 38.81 s , 193416 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 6.26 s , 14828 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.38 s , 14736 Kb

Compression time worse than lbzip2, a little better than pbzip2, but compression ratio worse than any one of these. But even faster testing and decompression.
plzip -3: 407 696 562 bytes
test.fastq: 5.455:1, 1.467 bits/byte, 81.67% saved, 2223860346 in, 407696562 out.
plzip -3 --verbose --threads=8 test.fastq: 63.74 s , 245756 Kb
plzip --test --verbose --threads=8 test.fastq.lz: 5.82 s , 18936 Kb
plzip -d --verbose --threads=8 test.fastq.lz: 6.10 s , 18944 Kb

Even faster testing and decompression! But compression ratio and speed are still worse than lbzip2 and pbzip2.
And the final contestant, lrzip ! All 5 back-ends were tested: LZO, gzip, bzip2, LZMA, ZPAQ.
lrzip has several peculiarities which hinder its use as a drop-in replacement for, say, bzip2. Most importantly, when a file is compressed, it is not deleted unless the -D option is specified. Unlike pbzip2 and lbzip2, which use all available CPUs/cores by default, lrzip only uses 2 by default (-p 8 in the results below requests the use of 8 cores). Another unusual feature is that during testing a file is decompressed to a storage medium and then deleted; almost all other compressors only verify the decompressed data stream, which is then immediately discarded and never written to the storage medium. A related feature is the -c option, which performs file verification after decompression by reading the decompressed file from the storage medium and comparing it to the decompressed stream. lrzip also stores MD5 hashes of the data, and allows verifying these. lrzip comes with several helper scripts – for example, one which allows tarballing and lrzipping a chosen directory in a single command. Actually, lrzip is more of an archive utility than just a compressor.

lrzip -D -p 8: 334 504 383 bytes
In this default (LZMA) mode, lrzip starts with 1 thread, but eventually uses more and more cores (though never all 8, or I haven't noticed this). Decompression seems to use more threads, but that also depends on the back-end used (the slower it is, the more threads will be used, e.g. ZPAQ versus LZO).

test.fastq – Compression Ratio: 6.648. Average Compression Speed: 3.113MB/s.
Total time: 00:11:21.85
lrzip -D -p 8 test.fastq: 681.84 s , 3331080 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 124.706MB/s
[OK] – 2223860346 bytes
Total time: 00:00:17.13
lrzip -t -p 8 test.fastq.lrz: 17.21 s , 2567608 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 117.778MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:17.59
lrzip -d -p 8 -D test.fastq.lrz: 17.67 s , 2567664 Kb

In the default LZMA mode, lrzip is significantly slower than even bzip2, and has a somewhat worse compression ratio. Yes, this is the 3rd major test disappointment.
gzip back-end: lrzip -g -L 9 -D -p 8: 430 013 769 bytes
Despite specifying -p 8, lrzip mostly operates in 1 thread, and only sometimes in 2 (it probably invokes the gzip library). Testing is also done with 1 thread only, but is very fast (though slower than plzip). The -L 9 option is supposed to be translated into -9 for gzip; as this normally has nearly no effect, it wasn't used in the following lrzip tests.

test.fastq – Compression Ratio: 5.172. Average Compression Speed: 0.704MB/s.
Total time: 00:50:11.34
lrzip -p 8 -g -L 9 -D test.fastq: 3011.34 s , 2745520 Kb

100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
[OK] – 2223860346 bytes
Total time: 00:00:12.71
lrzip -t -p 8 test.fastq.lrz: 12.79 s , 2577632 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 163.077MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:12.88
lrzip -d -p 8 -D test.fastq.lrz: 12.95 s , 2577728 Kb

And again, compression speed and ratio are worse than for bzip2.
LZO back-end: lrzip -l -D -p 8: 766 520 776 bytes
test.fastq – Compression Ratio: 2.901. Average Compression Speed: 4.690MB/s.
Total time: 00:07:32.89
lrzip -l -D -p 8 test.fastq: 452.88 s , 2714452 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 212.000MB/s
[OK] – 2223860346 bytes
Total time: 00:00:10.58
lrzip -t -p 8 test.fastq.lrz: 10.66 s , 2582516 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 192.727MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:11.32
lrzip -d -p 8 -D test.fastq.lrz: 11.39 s , 2582504 Kb

No comments.
bzip2 back-end: lrzip -b -D -p 8: 353 473 476 bytes
test.fastq – Compression Ratio: 6.291. Average Compression Speed: 4.473MB/s.
Total time: 00:07:53.95
lrzip -b -D -p 8 test.fastq: 473.94 s , 2781104 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 68.387MB/s
[OK] – 2223860346 bytes
Total time: 00:00:30.69
lrzip -t -p 8 test.fastq.lrz: 30.77 s , 2583156 Kb

Decompressing
100% 2120.84 / 2120.84 MB
Average DeCompression Speed: 66.250MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:00:31.92
lrzip -d -p 8 -D test.fastq.lrz: 32.00 s , 2583108 Kb

Had I not done all of these simple tests myself, by now I'd think that this test was rigged to show how good pbzip2 and lbzip2 are at compressing FASTQ files.
ZPAQ back-end: lrzip -z -D -p 8: 292 380 439 bytes
test.fastq – Compression Ratio: 7.606. Average Compression Speed: 2.804MB/s.
Total time: 00:12:36.51
lrzip -z -D -p 8 test.fastq: 756.51 s , 3585740 Kb

Decompressing
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.970MB/s
[OK] – 2223860346 bytes
Total time: 00:08:54.57
lrzip -t -p 8 test.fastq.lrz: 534.65 s , 2583424 Kb

Decompressing
100% 2120.84 / 2120.84 MB 1:100% 2:100% 3:100% 4:100% 5:100% 6:100% 7:100%
Average DeCompression Speed: 3.759MB/s
Output filename is: test.fastq: [OK] – 2223860346 bytes
Total time: 00:09:24.27
lrzip -d -p 8 -D test.fastq.lrz: 564.36 s , 2583460 Kb

Finally!!! We have compression better than bzip2! But it is also much slower than bzip2 (and some 12 times slower than pbzip2), so not really an option. Alas. And decompression time is the worst in the test – almost 10 minutes for what plzip does in under 7 seconds! (I do realize that the compression ratio is also different – but not that much.) I wonder if slow lrzip speeds have anything to do with test.fastq being effectively in RAM? I do not know if there are any performance penalties to mmap-ing a file which is already on a RAM-mounted partition.
The test.fastq file that I've used was somehow really hard for the tested compressors to tackle as fast and as well as lbzip2 and pbzip2 could.
Questions? Comments? Improvements, including plots of these figures? Comment below.
This entry was posted on Saturday, March 28th, 2015 and is filed under *nix, Comparison, Links, Misc, Software.

6 Responses to "Compressors galore: pbzip2, lbzip2, plzip, xz, and lrzip tested on a FASTQ file"
- Seumas Says:
April 29th, 2015 at 23:46

Version 5.2 of xz is out, which does have multi-thread support. You may have to compile it yourself but it might be worth testing. I haven't tested it myself yet.
I use xz for non-realtime compression (e.g. overnight backups), because although it's slow, it's so much better than bzip2 and, of course, if it's overnight I don't care if it takes half an hour or whatever to run.
- Bogdan Says:
April 30th, 2015 at 11:36

It is good to know, thanks. I was using the versions currently available in Debian testing. I guess I'll make another comparison in a year or so.
I must say that even with multithreading xz with default settings will likely be significantly slower than lbzip2 – on the order of 200+ seconds on the same test file and hardware, and assuming a really good parallelism implementation. For my use this is way too slow, and probably not worth the extra savings. Also, more complicated xz file format looks like another drawback to me (harder to recover data).
Clearly, everyone's needs are different, so I'm not saying that lbzip2 is much better overall – but it is for me
- hmage Says:
September 30th, 2016 at 21:55

Try pxz, it's a parallel version of xz and is a drop-in replacement in terms of file format.
- Bogdan Says:
October 6th, 2016 at 21:17

Thanks Hmage, that sounds interesting. Maybe in my next installment of compressor testing I'll include pxz, too.
I did eventually try a newer (already parallel, I think) version of xz on genomic data, and had mixed success. lbzip2 sometimes achieved even better ratios, mostly just a little bit worse, rarely much worse, but was always many times faster.

- trotos Says:
October 11th, 2016 at 15:02

Could you please try that derivative of zpaq?
http://mattmahoney.net/dc/fastqz/

- Bogdan Says:
October 12th, 2016 at 23:15

Trotos,
your comment reminded me that I did mention Fastqz in my previous post on the topic: http://bogdan.org.ua/2013/10/17/favourite-file-compressor-gzip-bzip2-7z.html
Looks like I haven't actually tested it, because of the concern that data recovery _might_ be too complicated with Fastqz.
For comparison, single-block damage with bzip2 would only cause the loss of between 100 and 900 K of compressed data, which – for fastq files – will probably have negligible effects.
Another reason not to test it was that it is not clear if it will see any future support. If, for example, a change in compiler makes building fastqz impossible without first modifying the code, then it's bad.
Maybe I'll test it anyway – next time.