
Slightly Skeptical View on Sorting Algorithms

Simple sorts: Insertion sort, Selection sort, Bubblesort
Insertion based: Insertion sort, Shellsort
Merge based: Mergesort, Natural two-way merge
Sorting by selection: Selection sort, Heapsort
Sorting by exchanging: Bubblesort, Shaker sort (bidirectional bubblesort), Quicksort
Distribution sorts: Bucket sort, Radix sort

Introduction

This page was partially created from the lecture notes of Professor Bezroukov.

This is a very special page: here students can find several references and an actual implementation of Bubblesort :-). Actually bubblesort is a rather weak sorting algorithm for arrays, yet for some strange reason it still dominates introductory courses. It is very often implemented incorrectly, despite being one of the simplest sorting algorithms in existence. Still, while inferior to, say, insertion sort in most cases, it is not that bad on lists, on already sorted arrays, or on "almost sorted" arrays with a small percentage of permutations. You can guess that the number of incorrect implementations grows with the complexity of the algorithm (BTW, few people, including many instructors, are able to write a correct binary search algorithm if asked without references, so algorithms are generally a difficult area to which the saying "to err is human" is especially applicable).

Actually one needs to read volume 3 of Knuth to appreciate the beauty and complexity of some advanced sorting algorithms. Please note that sorting algorithms published in textbooks are more often than not implemented with errors. Even insertion sort presents a serious challenge to many book authors ;-). Sometimes the author does not know the programming language he/she uses well; sometimes details of the algorithm are implemented incorrectly. And it is not that easy to debug them. In this sense Knuth remains "the reference", one of the few books whose author took great effort to debug each and every algorithm he presented.

Notes: 

  1. When writing or debugging sort algorithms, Unix sort can often be used to check results for correctness (you can automate it to check several tricky cases, not just one sample). A diff with the Unix sort results will instantly tell you whether your algorithm works correctly. The only problem is that if you sort only keys, it is impossible to determine from the final order whether the sort was stable (i.e. preserves the initial sequence of records with identical keys) or not. Of course, this difference makes sense only if there are multiple identical keys in the initial array...
  2. Sorting algorithms is one of the few areas of computer science where flowcharts are extremely useful and are really illuminating
  3. It makes sense to instrument the algorithm from the very beginning to calculate the number of comparisons (counting successful and unsuccessful ones separately) and the number of moves. That helps not only later, when you investigate performance issues, but during debugging too (see the sketch after this list).
  4. You can prototype sorting algorithms in a scripting language even if the final version has to be written in a compiled language. Prototyping in a scripting language gets you to the final version quicker and allows you to understand the internals of the algorithm better (because such languages have higher-level debuggers, able to work in terms of the language itself), and then you can just manually recode it into the compiled language.
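
Here is a minimal Perl sketch that combines notes 1 and 3: an insertion sort instrumented with comparison and move counters, cross-checked against the output of Unix sort. The counter names and the random sample data are illustrative choices, not a reference implementation.

    #!/usr/bin/perl
    # Instrumented insertion sort: counts comparisons (split into those that
    # trigger a shift and those that terminate the inner loop) and data moves,
    # then cross-checks the result against Unix sort(1) as suggested in note 1.
    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    my @orig = map { int(rand(1000)) } 1 .. 50;     # sample data
    my @a    = @orig;
    my ($shift_cmp, $stop_cmp, $moves) = (0, 0, 0);

    for my $i (1 .. $#a) {
        my $key = $a[$i];
        my $j   = $i - 1;
        while ($j >= 0) {
            if ($a[$j] > $key) { $shift_cmp++; $a[$j + 1] = $a[$j]; $moves++; $j--; }
            else               { $stop_cmp++;  last; }
        }
        $a[$j + 1] = $key;
        $moves++;
    }

    # Write the unsorted input to a temp file and let Unix sort produce the reference.
    my ($fh, $tmp) = tempfile();
    print $fh "$_\n" for @orig;
    close $fh;
    chomp(my @reference = `sort -n $tmp`);

    print "comparisons: ", $shift_cmp + $stop_cmp, " (loop-terminating: $stop_cmp), moves: $moves\n";
    print join(',', @a) eq join(',', @reference) ? "matches Unix sort\n" : "MISMATCH\n";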

But the most dangerous case is when the instructor emphasizes object-oriented programming while describing sorting. That applies to book authors too. It's better to get a second book in such a case, as typically those guys try to obscure the subject, making it more complex (and less interesting) than it should be. Of course, as a student you have no choice and need to pass the exam, but be very wary of this trap of mixing apples and oranges.

You might fail to understand the course material just because the instructor is too preoccupied with OO machinery instead of algorithms. So while you may feel that you are incapable of mastering the subject, in reality that simply means that the instructor is an overcomplexity junkie, nothing more, nothing less ;-). In any case, OO obscures the elegance of sorting algorithms to such an extent that most students hate the subject for the rest of their lives. If that happened to you, it only means that you are normal.

The classic treatise on sorting algorithms remains Volume 3 of TAoCP by Donald Knuth. The part of the book in which sorting algorithms are explained is very short (the volume also covers searching and several other subjects): just the first 180 pages. Also, when Knuth started writing the third volume he was exhausted, and that shows. The quality of his writing is definitely lower than in the first volume. Some explanations are sketchy, and it is clear that he could have benefited from more experimentation; he also pays excessive attention to establishing exact formulas for the running time of sorting random data. I think "O notation" (see below) is enough for this case. Worst-case running time is a really interesting area for sorting algorithms about which Knuth is very brief. I think that for algorithms which compare keys it is impossible to do better than O(n*log(n)) in the worst case (both Heapsort and Mergesort achieve that bound). Knowing the class of input data for which a given algorithm, for example Quicksort, produces worst-case results can be important in many practical cases. For example, if an algorithm degrades significantly in the worst case, it is unsuitable for real-time systems, no matter how great its running time on random data is.

Still, TAoCP vol. 3 is a valuable book that has not completely lost its value. You can buy the first edition for 3-5 dollars (and it is still as good as later editions for teaching purposes, as Knuth did a good job in the first edition and never really improved on it in later editions -- changes are mostly small corrections and typographic niceties). But please note that all flowcharts in the book are a horrible mess -- shame on Knuth (and even more on Addison-Wesley, who should know better). Both failed to do a proper job, thus destroying his own aesthetics of book writing. When I see some of the provided flowcharts, I have mixed feelings, something between laughter and disgust. But again, we need to understand that the third volume was rushed by a semi-exhausted Knuth, and that shows. Still, Knuth emerges in this volume as a scientist who is really interested in his field of study, which I can't say about the authors of several other books on the subject. This is a virtue for which we can forgive a lot of shortcomings.

The problem with the third volume is not only that Knuth was pretty much exhausted by publishing the first two volumes of TAoCP. The third volume was also based not on Knuth's own research but on the lecture notes of Professor Robert W. Floyd, one of the winners of the ACM Turing award and one of the top 50 scientists and technologists of the early USA computer science explosion. But despite the brilliance of Professor Floyd (who also made important contributions to compiler algorithms), at the time he gave his lectures the field was still developing quickly, and this fact is reflected in the unevenness of his lecture notes, which migrated into the book's content. Some algorithms are covered well, but other important ones are not, because their value was still unclear at the moment of writing. That's why the third volume can be viewed as the most outdated volume of the three, despite the fact that it was published last. Nevertheless it remains a classic because Knuth is Knuth, and his touch on the subject can't easily be replicated.

As any classic, it is quite difficult to read (examples are in MIX assembler; and having grown up on the ancient IBM 650, Knuth missed the value of the System/360 instruction set). Still, as a classic should, it produces insights that you never get reading regular textbooks. The most valuable parts are probably not the content but Knuth's style and the exercises. Most of the content of volume 3 can now be found in other books, although in somewhat emasculated fashion.

Again, I would like to warn you that the main disaster in "Data structures and algorithms" courses happens when they try to mix apples with oranges, poisoning sorting algorithms by injecting a pretty much unrelated subject -- OO -- into the course. And it is the students who suffer in this case, not the instructor ;-)

Stability, number of comparisons, number of key exchanges, and small sets vs. large sets

Among simple sorting algorithms, insertion sort seems to be the best for small sets. It is stable and works perfectly on "almost sorted" arrays if you guessed the direction of sorting right. Those "almost sorted" data arrays are probably the most important class of input data for sorting algorithms (and they were undeservedly ignored by Knuth). Generally, guessing the structure of the array that you are sorting is the key to high-speed sorting. Otherwise you need to rely on worst-case scenarios, which are often horribly slow.

Selection sort, while not bad, does not take advantage of preexisting sorting order and is not stable. At the same time it moves "dissident" elements by larger distances than insertion sort, so the total number of moves might well be smaller. But a move typically costs about the same as an unsuccessful comparison, and on modern CPUs with branch prediction an unsuccessful comparison (one that changes the order of execution) can cost ten or more times as much as a comparison that falls through (typically a successful comparison). So it is the total number of comparisons plus the total number of moves that counts. That also suggests that on modern computers algorithms that do not use comparisons of keys might have an edge.

Despite this fact it still makes sense to judge the quality of sorting algorithms by just the number of comparisons, even though algorithms which do not use comparisons at all have significant advantages on modern computers. The sum of the total number of comparisons and moves is an even better metric. But not all comparisons are created equal -- what really matters is the unsuccessful comparisons, not so much the successful ones. Each unsuccessful comparison results in a jump in compiled code which, if mispredicted, forces a flush of the instruction pipeline.

The second important factor is the size of the set. There is not, and cannot be, a sorting algorithm that is equally efficient on small sets and on large sets. For small sets the "overhead" of the algorithm itself is an important factor, so the simpler the algorithm, the better it performs on small sets. Complex algorithms probably cannot beat simpler sorting algorithms on sets of up to, say, 21 elements.

On larger sets several more complex algorithms such as radix sort (which does not use comparisons), shellsort, mergesort, heapsort, and even the quite fragile but still fashionable quicksort are much faster. But for some algorithms like quicksort there is a tricky effect called self-poisoning -- on larger sets a higher number of subsets degenerate into the worst-case scenario. Mergesort does not have this defect. That's why mergesort is now used by several scripting languages as the internal sorting algorithm instead of quicksort.

A least significant digit (LSD) radix sort has recently resurfaced as an alternative to high-performance comparison-based sorting algorithms for integer keys and short strings, because unlike heapsort and mergesort it does not use comparisons and can utilize modern CPUs much better than traditional comparison-based algorithms.
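
Here is a minimal LSD radix sort sketch in Perl for non-negative 32-bit integer keys, processing eight bits per pass; the pass width, data set and subroutine name are illustrative assumptions, not a tuned implementation.

    # LSD radix sort: four stable bucket passes, no key comparisons at all.
    use strict;
    use warnings;

    sub lsd_radix_sort {
        my @a = @_;
        for my $shift (0, 8, 16, 24) {                 # four passes over 32-bit keys
            my @buckets = map { [] } 0 .. 255;
            push @{ $buckets[ ($_ >> $shift) & 0xFF ] }, $_ for @a;
            @a = map { @$_ } @buckets;                 # stable: buckets keep arrival order
        }
        return @a;
    }

    my @sorted = lsd_radix_sort(map { int(rand(2**31)) } 1 .. 20);
    print "@sorted\n";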

Anyway, IMHO if an instructor uses bubblesort as an example you should be slightly wary ;-). The same is true if quicksort is over-emphasized in the course as an all-singing, all-dancing solution for sorting large sets.

The ratio of record length to key length

Most often you sort not just a set of keys, but a set of records, each of which contains a specific key as a part of the record. Only the key is used for comparison during the sorting. If the record is long, then in most cases it does not make much sense to sort records "in place", especially if you need to write the result to disk. You'd better create a separate array with the keys and indexes (or references) of the records and sort that. For example, any implementation of the Unix sort utility that moves whole records, for cases when the record is three or more times longer than the key, is suboptimal, as the result is always written to disk.

Even for in-memory sorts, if records are long it is in many cases better to copy the keys and an index (or reference) to each record into a separate array. In this array the length of each element is typically less than two times the length of the key, so it can be sorted "in place" very efficiently. The speedup can be more than ten or even a hundred times if records are sufficiently longer than the key (for example, if the key is an IP address and the records are HTTP proxy logs).

Modern computers have enough memory for this method to be preferable to "in place" sorting of whole records, as moving long records after comparison of keys takes much more time than comparing the keys. If in-place sorting of records is really necessary, then sorting algorithms which minimize the number of moves have an edge over all others. I would say that if the record is three or more times longer than the key, sorting an array of keys with indexes/references should be considered as an option.
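
A minimal Perl sketch of this key-index approach follows: it sorts HTTP-proxy-style log lines by client IP without moving the long records themselves. The log format and the assumption that the IP is the first field are made up for illustration.

    use strict;
    use warnings;

    my @records = (
        '10.0.0.7 - - [07/Nov/2017] "GET /a HTTP/1.1" 200 512',
        '10.0.0.2 - - [07/Nov/2017] "GET /b HTTP/1.1" 200 174',
        '10.0.0.5 - - [07/Nov/2017] "GET /c HTTP/1.1" 404 0',
    );

    # Build a small (packed key, index) array; packing each IP into 4 bytes makes
    # a single string comparison equivalent to a per-octet numeric comparison.
    my @key_idx = map {
        my ($ip) = $records[$_] =~ /^(\S+)/;
        [ pack('C4', split /\./, $ip), $_ ]
    } 0 .. $#records;

    # Sort the short pairs, then emit the long records in key order.
    my @order = map { $_->[1] } sort { $a->[0] cmp $b->[0] } @key_idx;
    print "$records[$_]\n" for @order;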

This method also allows faster insertion of new records into the array, as you can simply append them at the end of the existing array and insert only into the "key-index" array.

There are also some other, less evident advantages of this approach. For example, it allows sorting arrays consisting of records of unequal length.

Generally, the preference for "in-place" sorting which is typical for books like Knuth's vol. 3 now needs to be reconsidered, as the sorted array can often be reconstructed in new space and the memory of the initial array reclaimed later via the garbage collection mechanisms which exist in most modern languages (actually all scripting languages, plus Java).

Classification of sorting algorithms

One popular way to classify sorting algorithms is by the number of comparisons made during the sorting of a random array (the "big O" notation which we mentioned above). Here we can distinguish between three major classes of sorting algorithms: O(n^2) algorithms (the simple sorts such as insertion, selection and bubble sort), O(n log n) algorithms (mergesort, heapsort, quicksort and the like), and distribution-based algorithms which do not compare keys at all and can approach O(n).

The second important criterion is the complexity of the algorithm. We can classify sorting algorithms as simple and complex, with simple algorithms beating complex ones on small arrays (say up to 21-25 elements).

Another  important classification is based on the internal structure of the algorithm:

  1. Swap-based sorting algorithms begin conceptually with the entire list, and exchange particular pairs of elements (adjacent elements like in bubblesort or elements with a certain step like in  Shell sort) moving toward a more sorted list.
  2. Merge-based sorting algorithms create initial "naturally" or "unnaturally" sorted sequences, and then either add elements to them one by one while preserving the sorting order (insertion sort) or merge two already sorted sequences until there is a single sorted segment encompassing all the data. 
  3. Tree-based sorting algorithms store the data, at least conceptually, in a binary tree; there are two different approaches, one based on heaps, and the other based on search trees.   
  4. Sorting by distribution algorithms (see Knuth Vol. 3, 5.2.5, Sorting by distribution, p. 168) use additional key-value information implicit in the way digits and strings are stored in a computer. The classic example of this type of algorithm is the so-called radix sort. Radix sort processes array elements one digit at a time, starting either from the most significant digit (MSD) or the least significant digit (LSD). MSD radix sorts use lexicographic order, which is especially suitable for sorting strings, such as words, or fixed-length integer representations. A least significant digit (LSD) radix sort is suitable only for short keys (for example integers or IP addresses) where all keys have the same length (or can be padded to the same length), and it is a fast stable sorting algorithm. It is suitable, for example, for sorting log records by IP address. See also bucket sort and postman sort.

The number of comparisons and the complexity of the algorithm are not the only ways to classify sorting algorithms. We can also classify them by several other criteria, such as:

Computational complexity and average number of comparisons

Computational complexity (worst, average and best number of comparisons for several typical test cases, see below) in terms of the size of the list (n). Typically, a good average number of comparisons is O(n log n) and a bad one is O(n^2). Please note that the constant hidden in the O notation matters. Even if two algorithms both belong to the O(n log n) class, an algorithm with constant 1 is 100 times faster than an algorithm with constant 100. For small sets (say fewer than 100 elements) the value of that constant can be decisive.

Another problem with O notation is that asymptotic analysis does not tell us about an algorithm's behavior on small lists, or about its worst-case behavior. This is actually one of the major drawbacks of Knuth's book, in which he got carried away with the attempt to establish theoretical bounds on algorithms, often sliding into what is called "mathiness". Worst-case behavior is probably tremendously more important than average behavior.

For example "plain-vanilla" Quicksort requires O(n2) comparisons in case of already sorted in the opposite direction or almost sorted array.

I would like to stress that sorting an almost completely sorted array is a very important case in practice, as sorting is often used as an "incompetent implementation of the merge": instead of merging two sets, the second set is concatenated to the already sorted set and then the whole array is re-sorted. If you get statistics on the usage of Unix sort in some large datacenter, you will see that a lot of sorting is used for processing huge log files and then re-sorting them by an additional key. For example, the pipeline

gzip -dc $1.gz | grep '" 200' | cut -d '"' -f 2 | cut -d '/' -f 3 | \
      tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn > most_frequent

can be applied to a 100MB or larger file, and such a script may run daily.

Sort algorithms which only use a generic key comparison operation always need at least O(n log n) comparisons on average, while sort algorithms which exploit the structure of the key space cannot sort faster than O(n log k), where k is the size of the key space. So if the key space is much smaller than N (a lot of records with identical keys), no comparison-based algorithm is competitive with radix-type sorting. For example, if you sort student grades there are only a few valid keys in what can be a quite large array, as sketched below.
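
A minimal counting-sort sketch in Perl for exactly this situation (a tiny key space of letter grades); the grade list and key order are made up for illustration. One O(n) counting pass replaces all key comparisons.

    use strict;
    use warnings;

    my @grades = qw(B A C A F B B D A C);
    my @keys   = qw(A B C D F);                  # the whole key space, in output order

    my %count;
    $count{$_}++ for @grades;                    # tally each key in one pass

    my @sorted = map { ($_) x ($count{$_} // 0) } @keys;   # rebuild in key order
    print "@sorted\n";                           # A A A B B B C C D F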

Please note that the number of comparisons is just a convenient theoretical metric. In reality both moves and comparisons matter, and failed comparisons matter more than successful ones; also, on short keys the cost of a move is comparable to the cost of a successful comparison (even if pointers are used).

Stability of the sorting algorithm

Stability describes how sorting algorithms behave if the array to be sorted contains multiple identical keys. Stable sorting algorithms maintain the relative order of records with equal keys.

If all keys are different then this distinction does not make any sense. But if there are equal keys, then  a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.

Stability is a valuable property of sorting algorithms, as in many cases the order of identical keys in the array being sorted should be preserved.
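
A minimal Perl sketch illustrating stability: records are sorted by IP only, and records with equal IPs keep their original relative order. Perl's built-in sort has been a mergesort (which happens to be stable) since 5.8; the sort pragma below makes the stability guarantee explicit. The record layout is an illustrative assumption.

    use strict;
    use warnings;
    use sort 'stable';

    my @records = (
        { ip => '10.0.0.2', line => 1 },
        { ip => '10.0.0.1', line => 2 },
        { ip => '10.0.0.2', line => 3 },   # same key as line 1: must stay after it
    );

    my @sorted = sort { $a->{ip} cmp $b->{ip} } @records;
    print "$_->{ip} (originally line $_->{line})\n" for @sorted;
    # 10.0.0.1 (originally line 2)
    # 10.0.0.2 (originally line 1)
    # 10.0.0.2 (originally line 3)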

Difference between worst case and average behaviour

A very important way to classify sorting algorithms is by the difference between their worst-case and "average" behavior.

For example, Quicksort is efficient only on average, and its worst case is O(n^2), while Heapsort has the interesting property that its worst case is not much different from its average case.

Generalizing, we can talk about the behavior of particular sorting algorithms on practically important data sets. Among them:

  1. Completely sorted data set (you would be surprised how many sorting operations are performed on already sorted data)
  2. Inversely sorted data set
  3. "Almost sorted" data set (1 to K permutations, where K is less then 10% or N/10). 

The latter is often created by a perverted way of inserting one or two records into a large data set: instead of merging them, they are added at the bottom and the whole dataset is re-sorted. This has tremendous practical implications which are often not addressed, or addressed incorrectly, in textbooks that devote some pages to sorting.

Memory usage: these days algorithms which require 2N space deserve a second look

Memory usage is an important criterion by which we can classify sorting algorithms, but it was more important in the past than it is now. Traditionally the most attention in studying sorting algorithms is devoted to a large class of algorithms called "in-place" sorting algorithms, which do not require additional space to perform the sorting. But there is no free lunch, and they are generally slower than algorithms that use additional memory: additional memory can be used for mapping the key space, which raises the efficiency of sorting. Also, most fast stable algorithms use additional memory.

This "excessive" attention  to "in-place" sorting now is a somewhat questionable approach. Modern computers have tremendous amount of RAM in comparison with ancient computers (which means computers produced before 1980). So usage of some additional memory by sorting algorithms is no longer a deficiency like it were in good old days of computers with just 64KB (the first IBM/360 mainframe), or with 1 MB  of RAM ( Initial IBM PC; note this is a megabyte, not a gigabyte) of memory.  With smart phones often having 4GB of RAM or more using additional space during sorting now is at least  less a deficiency.

Please note that a smartphone with 2 GB of RAM has 2,000 times more RAM than an ancient computer with 1 MB. With the current 8-16GB typical memory size on laptops and 64GB typical memory size on servers, as well as virtual memory implemented in all major OSes, the old emphasis on algorithms that do not require additional memory should probably be reconsidered.

That means that algorithms which require log2(N) or even N additional space are now more or less acceptable. Please note that sorting of arrays up to 1TB in size can now be performed completely in RAM.

Of course, such a development was impossible to predict in the early 1970s, when volume 3 of TAOCP was written (it was published in 1973). Robert Floyd's lecture notes on sorting, on which Knuth's third volume was partially based, were written even earlier.

The speedup that can be achieved by using a small amount of additional memory is considerable, and it is stupid to ignore it. Moreover, even for classic in-place algorithms you can always use pointers to speed up moving records. For long records this additional space, proportional to N, speeds up moves dramatically, and just due to this such an implementation will outcompete other algorithms. In other words, when sorting records on modern machines, not using pointers whenever the record of the array being sorted is larger than 64 bits is not a good strategy. In real life the records being sorted are usually several times bigger than a pointer (4 bytes on 32-bit CPUs, 8 bytes on 64-bit). That means that, essentially, studying sorting algorithms on plain integer arrays is a much more realistic abstraction than it appears at first sight.

Locality of reference

In modern computers multi-level memory is used, with a fast CPU cache of a few megabytes (say 4-6MB for expensive multicore CPUs). Cache-aware versions of sort algorithms, in which operations have been specifically tuned to minimize the movement of pages in and out of cache, can be dramatically quicker. One example is the tiled merge sort algorithm, which stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a single page in memory. Each of these subarrays is sorted with an in-place sorting algorithm to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on modern servers whose CPUs have a substantial amount of cache (LaMarca & Ladner 1997).

Minimal number of non-linear execution sequences

Modern CPU designs try to minimize the wait states that are inevitable because the CPU is connected to much slower (often 3 or more times slower) memory, using a variety of techniques such as CPU caches, instruction pipelines, instruction prefetch and branch prediction. For example, the number of instruction pipeline flushes (misses; see also Classic RISC pipeline - Wikipedia) on typical data greatly influences the speed of algorithm execution. Modern CPUs typically use branch prediction (for example, the Intel Core i7 has two branch target buffers and possibly two or more branch predictors). If a branch is guessed incorrectly, the penalty is significant and proportional to the number of stages in the pipeline. So one possible code optimization strategy for modern CPUs is reordering branches so that the higher-probability code is located after the branch instruction. This is a difficult optimization that requires instrumentation of the algorithm and its profiling, gathering statistics about branching behavior on relevant data samples, as well as some compiler pragma to implement it. In any case, this is an important factor for modern CPUs, which have 7, 10 and even 20 stage pipelines (like the Intel Pentium 4) and where a pipeline miss can cost as much as three or four sequentially executed instructions.

Overhyped vs. useful sorting algorithms

Among non-stable algorithms, heapsort and Shellsort are probably the most underappreciated (with mergesort, which is stable, close to them), and quicksort is one of the most overhyped. Please note that quicksort is a fragile algorithm that is not that good at exploiting the speculative execution of instructions in the pipeline (a typical feature of modern CPUs). There are practically valuable sequences ("nearly sorted" data sets) on which, for example, shellsort is twice as fast as quicksort (see examples below).

It is fragile because the choice of pivot amounts to guessing an important property of the data to be sorted (and if the guess goes wrong, performance can be close to quadratic). There is also some experimental evidence that on very large sets quicksort runs into "suboptimal partitioning" on a regular basis, so it becomes a feature, not a bug. It does not work well on already sorted or "almost sorted" (with a couple of permutations) data, on data sorted in reverse order, or, worse, on data with a "chainsaw-like" order. It does not work well on data that contains a lot of identical keys. Those are important cases that are frequent in real-world usage of sorting (for example, when you need to sort proxy logs by IP). Again, I would like to stress that one would be surprised to find that the share of sorting performed on "almost sorted" datasets (datasets with less than 10% of permutations, although not necessarily in the correct order, for example ascending when you need descending, or vice versa) is probably ten times higher than on "almost random" datasets.

For those (and not only those ;-) reasons you need to be skeptical about the "quicksort lobby", with Robert Sedgewick (at least in the past) as the main cheerleader. Quicksort is a really elegant algorithm, invented by Hoare in Moscow in 1961, but in real life qualities other than elegance and speed on random data are more valuable ;-).

On the important practical cases of "semi-sorted" and "almost reverse sorted" data, quicksort is far from optimal and often demonstrates dismal performance. You need to do some experiments to see how horrible quicksort can be on already sorted data (simple variants exhibit quadratic behavior, a fact that is not mentioned in many textbooks on the subject) and how good shellsort is ;-).
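
A minimal Perl sketch of such an experiment: a deliberately naive quicksort that always picks the first element as pivot, with a comparison counter, run on random and on already sorted input of the same size. The data size is arbitrary; on sorted input the count approaches n*(n-1)/2.

    use strict;
    use warnings;
    no warnings 'recursion';      # the sorted case recurses about n levels deep

    my $comparisons;

    sub naive_quicksort {
        my @a = @_;
        return @a if @a <= 1;
        my $pivot = shift @a;
        my (@lt, @ge);
        for my $x (@a) {
            $comparisons++;
            if ($x < $pivot) { push @lt, $x } else { push @ge, $x }
        }
        return (naive_quicksort(@lt), $pivot, naive_quicksort(@ge));
    }

    my @random = map { int(rand(100_000)) } 1 .. 500;
    my @sorted = sort { $a <=> $b } @random;

    $comparisons = 0; naive_quicksort(@random);
    print "random input: $comparisons comparisons\n";    # roughly proportional to n*log2(n)

    $comparisons = 0; naive_quicksort(@sorted);
    print "sorted input: $comparisons comparisons\n";    # close to n*(n-1)/2 = 124750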

And on typical corporate data sets, including log records, heapsort usually beats quicksort, because the performance of quicksort depends too much on the choice of pivot, and a series of bad pivot choices increases running time considerably. Each time the pivot element is close to the minimum (or maximum), the performance of that stage goes down the drain, and on large sets such degenerate cases are not only common, they can happen in series (I suspect that on large data sets quicksort "self-creates" such cases -- in other words, quicksort poisons its data during partitioning, making bad pivot choices progressively more probable). Here are results from one contrarian article written by Paul Hsieh in 2004:

 
  Compiler and options (CPU)                                  Heapsort   Quicksort   Mergesort
  Intel C/C++ /O2 /G6 /Qaxi /Qxi /Qip  (Athlon XP 1.620GHz)     2.09       2.58        3.51
  WATCOM C/C++ /otexan /6r  (Athlon XP 1.620GHz)                4.06       3.24        4.28
  GCC -O3 -march=athlon-xp  (Athlon XP 1.620GHz)                4.16       3.42        4.83
  MSVC /O2 /Ot /Og /G6  (Athlon XP 1.620GHz)                    4.12       2.80        4.01
  CC -O3  (Power4 1GHz)                                        16.91      14.99       16.90

The data is the time in seconds taken to sort 10,000 lists of varying size, of about 3,000 integers each.

Please note that total time, while important, does not tell you the whole story. Actually, it reveals that with the Intel compiler Heapsort can beat Quicksort even "on average" -- not a small feat. But, of course, you also need to know the standard deviation.

That means that you need to take any predictions about the relative efficiency of algorithms with a grain of salt unless they are provided for at least a dozen typical sets of data, as described below. Shallow authors usually limit themselves to random sets, which are of little practical importance.

Suitability of a sorting algorithm in a particular situation

It is clear that you should not use sorting algorithms with an O(n^2) worst case in real-time applications ;-). But even in less strict environments many algorithms are simply not suitable. In order to judge the suitability of a sorting algorithm for a particular application, the key question is "what do you know about the data?"

Other important questions include:

Generally the more we know about the properties of data to be sorted, the faster we can sort them.

As we already mentioned, the size of the key space is one of the most important dimensions (sort algorithms that use the size of the key space can sort any sequence in time O(n log k)). For example, if we are sorting a subset of a card deck, we can take into account that there are only 52 keys in any input sequence and select an algorithm that uses this limited key space to speed up the sorting dramatically. In this case using a generic sorting algorithm is just a waste. A similar situation exists if we need to sort people's ages. Here the key space is clearly limited to the interval 1-150. Or 1-200, if you want to be generous and expect important advancements in gerontology (especially for the super rich ;-).

Moreover, the relative speed of algorithms depends on the size of the data set: one algorithm can be faster than another when sorting fewer than, say, 64 or 128 items and slower on larger sequences. Simpler algorithms with minimal housekeeping tend to perform better on small sets, even if they are O(n^2) algorithms -- especially algorithms that have fewer jump statements in the compiled code. For example, insertion sort is competitive with more complex algorithms up to N=25 or so.

Adaptive sorting

The more you know about the order of the array that you need to sort, the faster you can sort it. If there is a chance that the array is sorted, inversely sorted or "almost sorted" (in many cases this is the typical input), it makes sense to use one "intelligence gathering pass" over the array.

Such an "intelligence gathering pass" allow you to select a faster algorithm. this approach can be called "adaptive sorting".

The simplest way to implement such an intelligence pass is to count the number of adjacent out-of-order pairs, as sketched below. This requires exactly N-1 "extra" comparisons which, if the order is close to random, slightly increase the running time of the algorithm (they slightly increase its constant factor). But if the order of the elements belongs to one of the "extreme" cases -- already sorted, sorted in reverse order, or "almost sorted" in either direction -- the speedup from this information can be tremendous.
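
A minimal Perl sketch of this "intelligence gathering pass": one scan counting adjacent out-of-order pairs, which is then used to pick a strategy. The 10% threshold and the sample data are illustrative assumptions.

    use strict;
    use warnings;

    sub adjacent_inversions {
        my ($a) = @_;
        my $inv = 0;
        for my $i (1 .. $#$a) {
            $inv++ if $a->[$i] < $a->[$i - 1];     # N-1 extra comparisons in total
        }
        return $inv;                               # 0 => sorted, $#$a => sorted in reverse
    }

    my @data = (1 .. 10, 12, 11, 13 .. 20);        # one transposition in a sorted run
    my $inv  = adjacent_inversions(\@data);

    if    ($inv == 0)          { print "already sorted: nothing to do\n" }
    elsif ($inv == $#data)     { print "reverse sorted: just reverse the array\n" }
    elsif ($inv < @data / 10)  { print "almost sorted: insertion sort is a good fit\n" }
    else                       { print "no obvious order: use mergesort or heapsort\n" }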

Another important piece of information is the range of the keys used for sorting. If the range is small (people's ages, calendar dates, IP addresses, etc.), then you can speed up sorting based on this fact.

Sometimes you can benefit from compressing the key (making it shorter and more suitable for comparison), for example by converting it from a string to an integer (this is possible, for example, for dates and IP addresses).

If the resulting sort order is irrelevant (for example, when you compile the frequency of visits to a particular web site from proxy logs) and the keys are long, then it is possible to compress the keys, using either a compression algorithm or some kind of hash function such as MD5, to speed up comparison.

In general, as in searching, there is tremendous space for "self-adaptation" of sorting algorithms based on statistics gathered during their runs.

Typical data sequences for testing sorting algorithms

There is not "best sorting algorithm" for all data. Various algorithms have their own strength and weaknesses. For example some "sense" already sorted or "almost sorted" data sequences and perform faster on such sets.  In this sense Knuth math analysis is insufficient although "worst time" estimates are useful.

As for input data, it is useful to distinguish between the following broad categories, all of which should be used in testing (random number sorting is a very artificial test, and as such the estimate it provides does not have much practical value unless we know that other cases behave similarly or better); a small generator sketch for several of these cases follows the list:

  1. Completely randomly reshuffled array (this is the only test that naive people use in evaluating sorting algorithms).

  2. Already sorted array (you need to see how horrible quicksort is in this case and how good shellsort is ;-). This is actually a pretty important case, as sorting is often actually re-sorting of previously sorted data done after minimal modifications of the data set. There are three important cases of already sorted arrays.

  3. Array that consists of several already sorted arrays merged together ("chainsaw" array). The subarrays can be sorted

    1. in the right direction,

    2. in the opposite direction, or

    3. some in the right and some in the opposite direction (one case is the "symmetrical chainsaw"). 

  4. Array consisting of a small number of distinct values (sometimes called the "few unique" case). If the number of distinct values is large, this case is similar to the chainsaw but without the advantage of preordering, so it can be generated by "inflicting" a certain number of permutations on a chainsaw array. The worst case is when there are just two distinct values in the array (a binary array). Quicksort is horrible on such data. Many other algorithms also work slowly on such arrays.

  5. Array already sorted in the right direction, with N permutations (with N from 0.1 to 10% of the size). Insertion sort does well on such arrays. Shellsort is also quick. Quicksort does not adapt well to nearly sorted data.

  6. Array already sorted in reverse order, with N permutations.

  7. Large data sets with normal distribution of keys.

  8. Pseudorandom data (daily values of the S&P 500 or another index for a decade or two might be a good test set here; they are available from Yahoo.com).
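
Here is a minimal Perl sketch of generators for several of the test sequences above; the array sizes, the number of "teeth", and the 10% swap rate are illustrative choices.

    use strict;
    use warnings;

    sub shuffled {                                  # case 1: random shuffle (Fisher-Yates)
        my ($n) = @_;
        my @a = 1 .. $n;
        for my $i (reverse 1 .. $#a) {
            my $j = int rand($i + 1);
            @a[$i, $j] = @a[$j, $i];
        }
        return @a;
    }
    sub sorted_run { my ($n) = @_; return 1 .. $n }                                 # case 2
    sub chainsaw   { my ($n, $teeth) = @_;                                          # case 3
                     return map { 1 .. int($n / $teeth) } 1 .. $teeth }
    sub few_unique { my ($n, $k) = @_; return map { 1 + int rand($k) } 1 .. $n }    # case 4
    sub nearly_sorted {                                                             # case 5
        my ($n) = @_;
        my @a = 1 .. $n;
        for (1 .. int($n / 10)) {                   # roughly 10% random transpositions
            my ($i, $j) = (int rand($n), int rand($n));
            @a[$i, $j] = @a[$j, $i];
        }
        return @a;
    }

    print join(' ', chainsaw(20, 4)), "\n";         # 1..5 repeated four times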

Behavior on "almost sorted" data and worst case behavior are a very important characteristics of sorting algorithms. For example, in sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the input is already sorted, its complexity falls to O(n). Specifically, n-1 comparisons and zero moves are performed, which is the same as for simply running through the input, checking if it is pre-sorted. In  Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl). Python uses timsort, a hybrid of merge sort and insertion sort, which will also become the standard sort algorithm for Java SE 7.

Languages for Exploring the Efficiency of Sort Algorithms

Calculation of the number of comparisons and the number of data moves can be done in any language. C and other compiled languages provide an opportunity to see the effect of the computer instruction set and CPU speed on sorting performance. Usually the test program is written as a subroutine that is called, say, 1000 times; then the data-preparation time (running just the data copying or data generating part the same number of times, without any sorting) is subtracted from the total.
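
In a scripting language the same timing idea can be expressed with off-the-shelf tools. Here is a minimal Perl sketch using the core Benchmark module; the data size, the two-second budget, and the insertion sort used for contrast are arbitrary choices for illustration.

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @data = map { int(rand(100_000)) } 1 .. 2_000;

    sub insertion_sort {
        my @a = @_;
        for my $i (1 .. $#a) {
            my $key = $a[$i];
            my $j   = $i - 1;
            while ($j >= 0 && $a[$j] > $key) { $a[$j + 1] = $a[$j]; $j-- }
            $a[$j + 1] = $key;
        }
        return @a;
    }

    cmpthese(-2, {                     # run each candidate for at least 2 CPU seconds
        builtin   => sub { my @s = sort { $a <=> $b } @data },
        insertion => sub { my @s = insertion_sort(@data) },
    });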

Such a method can provide a more or less accurate estimate of the actual algorithm run time on a particular data set and a particular CPU architecture. Modern CPUs that have a lot of general-purpose registers tend to perform better on sorting: sorting algorithms tend to belong to the class of algorithms with a tight inner loop, and the speed of this inner loop has a disproportionate effect on the total run time. If most scalar variables used in this inner loop can be kept in registers, timing improves considerably.

Artificial computers like Knuth's MIX can be used too. In this case the time is calculated from a table of the time taken by each instruction (an instruction cost metric).

While choosing the language is somewhat of a religious issue, there are good and bad languages for implementing sorting algorithms. See Language Design and Programming Quotes for an enlightened discussion of this very touchy subject ;-)

BTW, you can use Perl or another interpreted scripting language in a similar way to the assembler of an artificial computer like MIX. I actually prefer Perl as a tool for exploring sorting, as it is an interpreted language (you do not need to compile anything) with a very good debugger and powerful control structures. And the availability of powerful control structures is one thing that matters in the implementation of sorting algorithms. Life is more complex than proponents of structured programming were trying to convince us.

Final Notes

I would like to stress again that a lot of examples in books are implemented with errors. That's especially true for Java books and Java demo implementations. Please use Knuth's book for reference.



NEWS CONTENTS

Old News ;-)

[Nov 07, 2017] Timsort, the sorting algorithm invented by Tim Peters for the CPython interpreter

Nov 07, 2017 | www.amazon.com

The sorting algorithm used in sorted and list.sort is Timsort, an adaptive algorithm that switches from insertion sort to merge sort strategies, depending on how ordered the data is. This is efficient because real-world data tends to have runs of sorted items. There is a Wikipedia article about it.

Timsort was first deployed in CPython, in 2002. Since 2009, Timsort has also been used to sort arrays in both standard Java and Android, a fact that became widely known when Oracle used some of the code related to Timsort as evidence of Google's infringement of Sun's intellectual property. See Oracle v. Google - Day 14 Filings.

Timsort was invented by Tim Peters, a Python core developer so prolific that he is believed to be an AI, the Timbot. You can read about that conspiracy theory in Python Humor . Tim also wrote The Zen of Python: import this .

[Oct 28, 2017] C Source Code for Sort Algorithms

Oct 28, 2017 | durangobill.com

//****************************************************************************
//
// Sort Algorithms
// by
// Bill Butler
//
// This program can execute various sort algorithms to test how fast they run.
//
// Sorting algorithms include:
// Bubble sort
// Insertion sort
// Median-of-three quicksort
// Multiple link list sort
// Shell sort
//
// For each of the above, the user can generate up to 10 million random
// integers to sort. (Uses a pseudo random number generator so that the list
// to sort can be exactly regenerated.)
// The program also times how long each sort takes.
//

[Oct 28, 2017] SORTING AND SEARCHING ALGORITHMS by Tom Niemann

Oct 28, 2017 | epaperpress.com

A 26-page PDF. Source code in C and Visual Basic is linked in the original paper.

This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know a high-level language, such as C, and that you are familiar with programming concepts including arrays and pointers.

The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by a section on dictionaries, structures that allow efficient insert, search, and delete operations. The last section describes algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is included.

Most algorithms have also been coded in Visual Basic. If you are programming in Visual Basic, I recommend you read Visual Basic Collections and Hash Tables, for an explanation of hashing and node representation.

If you are interested in translating this document to another language, please send me email. Special thanks go to Pavel Dubner, whose numerous suggestions were much appreciated. The following files may be downloaded:

source code (C) (24k)

source code (Visual Basic) (27k)

Permission to reproduce portions of this document is given provided the web site listed below is referenced, and no additional restrictions apply. Source code, when part of a software project, may be used freely without reference to the author.

Thomas Niemann

Portland, Oregon epaperpress.

Contents: Preface; Introduction (Arrays, Linked Lists, Timing Estimates, Summary); Sorting (Insertion Sort, Shell Sort, Quicksort, Comparison, External Sorting); Dictionaries (Hash Tables, Binary Search Trees, Red-Black Trees, Skip Lists, Comparison); Bibliography.

[Dec 26, 2014] The World of YouTube Bubble Sort Algorithm Dancing 63

Posted by timothy
from the right-under-our-very-noses dept.

theodp writes In addition to The Ghost of Steve Jobs, The Codecracker, a remix of 'The Nutcracker' performed by Silicon Valley's all-girl Castilleja School during Computer Science Education Week earlier this month featured a Bubble Sort Dance. Bubble Sort dancing, it turns out, is more popular than one might imagine. Search YouTube, for example, and you'll find students from the University of Rochester to Osmania University dancing to sort algorithms. Are you a fan of Hungarian folk-dancing? Well there's a very professionally-done Bubble Sort Dance for you! Indeed, well-meaning CS teachers are pushing kids to Bubble Sort Dance to hits like Beauty and a Beat, Roar, Gentleman, Heartbeat, Under the Sea, as well as other music.

A Comparison of Sorting Algorithms

Not clear how the data were compiled... The code is amateurish, so the timing should be taken with a grain of salt.
Recently, I have translated a variety of sorting routines into Visual Basic and compared their performance... I hope you will find the code for these sorts useful and interesting.


What makes a good sorting algorithm? Speed is probably the top consideration, but other factors of interest include versatility in handling various data types, consistency of performance, memory requirements, length and complexity of code, and the property of stability (preserving the original order of records that have equal keys). As you may guess, no single sort is the winner in all categories simultaneously (Table 2).


Let's start with speed, which breaks down into "order" and "overhead". When we talk about the order of a sort, we mean the relationship between the number of keys to be sorted and the time required. The best case is O(N); time is linearly proportional to the number of items. We can't do this with any sort that works by comparing keys; the best such sorts can do is O(N log N), but we can do it with a RadixSort, which doesn't use comparisons. Many simple sorts (Bubble, Insertion, Selection) have O(N^2) behavior, and should never be used for sorting long lists. But what about short lists? The other part of the speed equation is overhead resulting from complex code, and the sorts that are good for long lists tend to have more of it. For short lists of 5 to 50 keys or for long lists that are almost sorted, Insertion-Sort is extremely efficient and can be faster than finishing the same job with QuickSort or a RadixSort. Many of the routines in my collection are "hybrids", with a version of InsertionSort finishing up after a fast algorithm has done most of the job.

The third aspect of speed is consistency. Some sorts always take the same amount of time, but many have "best case" and "worst case" performance for particular input orders of keys. A famous example is QuickSort, generally the fastest of the O(N log N) sorts, but it always has an O(N^2) worst case. It can be tweaked to make this worst case very unlikely to occur, but other O(N log N) sorts like HeapSort and MergeSort remain O(N log N) in their worst cases. QuickSort will almost always beat them, but occasionally they will leave it in the dust.

[Oct 10, 2010] The Heroic Tales of Sorting Algorithms

Notation:

Page numbers refer to the Preiss text book Data Structures and Algorithms with Object-Oriented Design Patterns in Java.

This page was created with some references to Paul's spiffy sorting algorithms page, which can be found here. Most of the images (scans of the text book, except the code samples) were gratefully taken from that site.

For each algorithm: page number, implementation summary and comments, type, stability, and asymptotic complexity.

Straight Insertion (p. 495) -- Insertion; stable. Best case O(n), worst case O(n^2).
On each pass the current item is inserted into the sorted section of the list. It starts at the last position of the sorted list and moves backwards until it finds the proper place for the current item. That item is then inserted into that place, and all items after it are shuffled along to accommodate it. For this reason, if the list is already sorted the sort is O(n), because every element is already in its sorted position. If however the list is sorted in reverse, it takes O(n^2) time, as it searches through the entire sorted section of the list for each insertion and shuffles all other elements down the list. Good for nearly sorted lists, very bad for out-of-order lists, due to the shuffling.

Binary Insertion Sort (p. 497) -- Insertion; stable. Best case O(n log n), worst case O(n^2).
An extension of Straight Insertion: instead of doing a linear search for the correct position each time, it does a binary search, which is O(log n) instead of O(n). The only problem is that it always has to do a binary search even if the item is already in its current position, which brings the cost of the best case up to O(n log n). Due to the possibility of having to shuffle all other elements down the list on each pass, the worst case remains O(n^2). This is better than Straight Insertion if comparisons are costly, because even though it always does log n comparisons, that generally works out to less than a linear search.

Bubble Sort (p. 499) -- Exchange; stable. Best case O(n), worst case O(n^2) for a sensible implementation (Preiss's version is always O(n^2), see the note below).
On each pass over the data, adjacent elements are compared and switched if they are out of order: e1 with e2, then e2 with e3, and so on. This means that on each pass the largest element left unsorted has been "bubbled" to its rightful place at the end of the array. However, because all adjacent out-of-order pairs are swapped, the algorithm could finish sooner. Preiss claims that it always takes O(n^2) time because it keeps sorting even if the array is already in order; his algorithm doesn't recognize that. Anyone with a bit more insight than Preiss will see that you can end the algorithm when no swaps were made on a pass, making the best case O(n) (already sorted) with the worst case still O(n^2) (does Peake agree with this?). In general this is better than Insertion Sort, I believe, because it has a good chance of finishing in much less than O(n^2) time, unless you are a blind Preiss follower.

Quicksort (p. 501) -- Exchange; not stable. Best case O(n log n), worst case O(n^2). (Refer to page 506 for more information about these values. Note: Preiss on page 524 says that the worst case is O(n log n), contradicting page 506; I believe it is O(n^2), as per page 506.)
I strongly recommend looking at the diagram for this one. Quicksort operates along these lines: first a pivot is selected and removed from the list (hidden at the end). Then the elements are partitioned into two sections: one which is less than the pivot and one that is greater. This partitioning is achieved by exchanging values. Then the pivot is restored in the middle, and those two sections are recursively quicksorted. A complicated but effective sorting algorithm.

Straight Selection Sorting (p. 511) -- Selection; not stable. Unlike Bubble sort this one is truly Θ(n^2): best case and worst case are the same, because even if the list is sorted, the same number of selections must still be performed.
This one, although not very efficient, is very simple. Basically, it does n linear passes over the list, and on each pass it selects the largest value and swaps it with the last unsorted element. It isn't stable, because, for example, a 3 could be swapped with a 5 that is to the left of a different 3. Note that you can also do this using the smallest value, swapping it with the first unsorted element. A very simple algorithm to code and to explain, but a little slow.

Heap Sort (p. 513) -- Selection; not stable. Best and worst case O(n log n).
This uses a similar idea to Straight Selection Sorting, except that instead of using a linear search for the maximum, a heap is constructed and the maximum can easily be removed (and the heap reformed) in log n time. This means n passes, each doing a log n remove-maximum, so the algorithm always runs in Θ(n log n) time, regardless of the original order of the list. It exploits just about the only good use of heaps, namely finding the maximum element of a max heap (or the minimum of a min heap). It is in every way as good as straight selection sort, but faster. (Ok, that looks tempting, but for a much more programmer-friendly solution look at Merge sort instead, for a better O(n log n) sort.)

2 Way Merge Sort (p. 519) -- Merge; stable. Best and worst case Θ(n log n).
It is fairly simple to take two sorted lists and combine them into another sorted list, simply by going through, comparing the heads of each list and removing the smaller one to join the new sorted list; this is an O(n) operation. With two-way merge sorting we apply this method to a single unsorted list: the algorithm recursively splits up the array until it is fragmented into pairs of single-element arrays; each of those single elements is then merged with its pair, those pairs are merged with their pairs, and so on, until the entire list is united in sorted order. If there is ever an odd number, an extra step adds the odd element to one of the pairs, so that that particular pair has one more element than most of the others; this has no effect on the actual sorting. Now isn't this much easier to understand than Heap sort? It's really quite intuitive. This one is best explained with the aid of the diagram.

Bucket Sort (p. 526) -- Distribution; not stable. Best and worst case Θ(m + n), where m is the number of possible values; obviously this is O(n) for most values of m, as long as m isn't too large. (The reason these distribution sorts break the O(n log n) barrier is that no comparisons are performed!)
Bucket sort initially creates a "counts" array whose size is the size of the range of all possible values for the data we are sorting; e.g. if all values are between 1 and 100, the array has 100 elements. Two passes are then done over the list. The first tallies the occurrences of each number into the "counts" array: for each index of the array, the data it contains is the number of times that value occurred in the list. The second and final pass goes through the counts array, regenerating the list in sorted form. So if there were 3 instances of 1, 0 of 2, and 1 of 3, the sorted list would be recreated as 1,1,1,3. This suffers a limitation that Radix sort doesn't: if the possible range of your numbers is very high, you would need too many "buckets" and it would be impractical. The other limitation that Radix sort doesn't have is that stability is not maintained. It does, however, outperform radix sort if the possible range is very small.

Radix Sort (p. 528) -- Distribution; stable. Best and worst case Θ(n). Bloody awesome!
This is an extremely spiffy implementation of the bucket sort algorithm. This time several bucket-like sorts are performed (one for each digit), but instead of having a counts array representing the range of all possible values for the data, it represents all of the possible values for each individual digit, which in decimal numbering is only 10. First a bucket sort is performed using only the least significant digit to sort by, then another is done using the next least significant digit, and so on, until the number of bucket sorts done equals the maximum number of digits of your biggest number. Because each bucket sort has only 10 buckets (the counts array is of size 10), this is always an O(n) sorting algorithm. In each of the adapted bucket sorts, the count array stores the number of occurrences of each digit; the offsets are then created from the counts, and the sorted array is regenerated using the offsets and the original data. This is the god of sorting algorithms: it will sort the largest list with the biggest numbers, has guaranteed O(n) time complexity, and isn't very complex to understand or implement. My recommendation is to use it wherever possible.

[Oct 09, 2010] Sorting Knuth

1. About the code
2. The Algorithms

2.1. 5.2 Internal Sorting
2.2. 5.2.1 Sorting by Insertion
2.3. 5.2.2 Sorting by Exchanging
2.4. 5.2.3 Sorting by selection
2.5. 5.2.4 Sorting by Merging
2.6. 5.2.5 Sorting by Distribution

3. Epilog

by Marc Tardif
last updated 2000/01/31 (version 1.1)
also available as XML

This article should be considered an independent re-implementation of all of Knuth's sorting algorithms from The Art of Computer Programming, Vol. 3: Sorting and Searching. It provides the C code for every algorithm discussed at length in section 5.2, Internal Sorting. No explanations are provided here; the book should provide all the necessary comments. The following link is a sample implementation to confirm that everything is in working order: sknuth.c.

[Jul 25, 2005] The Code Project - Sorting Algorithms In C# - C# Programming

God bless their misguided object-oriented souls ;-)

Richard Harter's World

Postman's Sort Article from C Users Journal

This article describes a program that sorts an arbitrarily large number of records in less time than any algorithm based on comparison sorting can. For many commonly encountered files, time will be strictly proportional to the number of records. It is not a toy program. It can sort on an arbitrary group of fields with arbitrary collating sequence on each field faster than any other program available.

An Improved Comb Sort with Pre-Defined Gap Table

PennySort is a measure of how many 100-byte records you can sort for a penny of capital cost

4 Programs Make NT 'Sort' of Fast

Benchmark results -- all times in seconds

  Program                   10,000 records,    100,000 records,   1M unique 180-byte records,   1M unique 180-byte records,    1M unique 180-byte records,
                            10-char alpha key  10-char alpha key  10-char alpha key             7-char integer key             full 180-byte alphanumeric key
  Windows NT sort command    2.73               54.66              NA                            NA                             NA
  Cosort                      .31                7.89             300.66                        297.33                         201.34
  NitroSort                   .28                6.94             296.1                         294.71                         270.67
  Opt-Tech                    .54                9.27             313.33                        295.31                         291.52
Postman's Sort

Fast median search an ANSI C implementation

An inverted taxonomy of sorting algorithms

An alternative taxonomy (to that of Knuth and others) of sorting algorithms is proposed. It emerges naturally out of a top-down approach to the derivation of sorting algorithms. Work done in automatic program synthesis has produced interesting results about sorting algorithms that suggest this approach. In particular, all sorts are divided into two categories: hardsplit/easyjoin and easysplit/hardjoin.

Quicksort and merge sort, respectively, are the canonical examples in these categories.

Insertion sort and selection sort are seen to be instances of merge sort and quicksort, respectively, and sinking sort and bubble sort are in-place versions of insertion sort and selection sort. Such an organization introduces new insights into the connections and symmetries among sorting algorithms, and is based on a higher level, more abstract, and conceptually simple basis. It is proposed as an alternative way of understanding, describing, and teaching sorting algorithms.

Data Structures and Algorithms with Object-Oriented Design Patterns in C++ online book by Bruno R. Preiss B.A.Sc., M.A.Sc., Ph.D., P.Eng. Associate Professor Department of Electrical and Computer Engineering University of Waterloo, Waterloo, Canada

sortchk - a sort algorithm test suite

sortchk is a simple test suite I wrote in order to measure the costs (in terms of needed comparisons and data moves, not in terms of time consumed by the algorithm, as this is too dependent on things like the type of computer, programming language or operating system) of different sorting algorithms. The software is meant to be easily extensible and easy to use.

It was developed on NetBSD, but it will also compile and run well on other systems, such as FreeBSD, OpenBSD, Darwin, AIX and Linux. With little work, it should also be able to run on foreign platforms such as Microsoft Windows or MacOS 9.

Sorting Algorithms Implementations of sorting algorithms.

  1. Techniques for sorting arrays
    1. Bubble sort
    2. Linear insertion sort
    3. Quicksort
    4. Shellsort
    5. Heapsort
    6. Interpolation sort
    7. Linear probing sort
  2. Sorting other data structures
    1. Merge sort
    2. Quicksort for lists
    3. Bucket sort
    4. Radix sort
    5. Hybrid methods of sorting
      1. Recursion termination
      2. Distributive partitioning
      3. Non-recursive bucket sort
    6. Treesort
  3. Merging
    1. List merging
    2. Array merging
    3. Minimal-comparison merging
  4. External sorting
    1. Selection phase techniques
      1. Replacement selection
      2. Natural selection
      3. Alternating selection
      4. Merging phase
    2. Balanced merge sort
    3. Cascade merge sort
    4. Polyphase merge sort
    5. Oscillating merge sort
    6. External Quicksort