|Simple||Insertion sort||Selection sort||Bubblesort|
|Insertion based||Insertion sort||Shellsort|
|Merge based||Insertion sort||Mergesort||Natural two way merge|
|Sorting by selection||Selection sort||Heapsort|
|Sorting by exchanging||Bubblesort||Shaker sort|
|Distribution sorts||Bucket sort||Radix sort|
|Sorting algorithms style||Unix sort||Flashsort||Humor||Etc|
This is a very special page: here students can find several references and an actual implementation of Bubblesort :-). Actually bubblesort is a rather weak sorting algorithm for arrays that for some strange reason dominates introductory courses. It's not that bad on lists and on already sorted arrays (or "almost sorted" arrays), but that's another story. An important point is that it is very often implemented incorrectly.
Note: When writing or debugging sort algorithms, UNIX sort can often be used to check results for correctness (you can automate it to check several tricky cases, not just one sample). A diff against the Unix sort results will instantly tell you whether your algorithm works correctly, provided all keys are distinct. If there are multiple identical keys, your mileage may vary ;-). Sorting algorithms can be stable (which preserve the initial sequence of records with identical keys) and non-stable (which, as you can guess, do not).
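The same check can be sketched without Unix sort at all, by diffing a candidate implementation against Python's built-in sorted() over a battery of tricky cases (check_sort and its case list are illustrative, not a standard routine):

```python
import random

def check_sort(sort_fn, trials=100):
    """Compare sort_fn against the built-in sorted() on tricky inputs."""
    cases = [
        [],                       # empty input
        [1],                      # single element
        list(range(10)),          # already sorted
        list(range(10, 0, -1)),   # sorted in reverse order
        [5] * 10,                 # all keys identical
    ]
    for _ in range(trials):       # plus random shuffles of random length
        n = random.randrange(2, 50)
        cases.append([random.randrange(10) for _ in range(n)])
    for case in cases:
        got = sort_fn(list(case))
        want = sorted(case)
        assert got == want, f"mismatch on {case}: {got} != {want}"
    return True
```

Note that sorted() is stable, so the comparison is only a full correctness check when keys are distinct -- exactly the caveat about identical keys mentioned above.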
Among simple sorting algorithms, the insertion sort seems to be better for small sets. It is stable and works perfectly on "almost sorted" arrays. The latter case is probably the most important class of input data for sorting algorithms.
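For reference, a minimal insertion sort; Python is used here purely for readability (the algorithm, not the language, is the point):

```python
def insertion_sort(a):
    """Stable, in-place insertion sort; O(n) comparisons on sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right until key's place is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

On an already sorted array the inner while loop never iterates, so the whole sort costs only n-1 comparisons -- which is why it shines on "almost sorted" input.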
Selection sort, while not bad, does not take advantage of preexisting order and is not stable. At the same time it moves "dissident" elements over larger distances than insertion sort does, so the total number of moves might well be smaller. But a move typically costs about the same as an unsuccessful comparison: on modern CPUs with speculative execution, a comparison that results in a taken jump can cost 10 or more times as much as a comparison that does not change the order of execution (typically a successful comparison). So it is the total number of comparisons plus the total number of moves that counts.
Despite this fact it still makes sense to judge algorithms by the number of comparisons alone. And it is clear that algorithms that do not use comparisons at all have significant advantages on modern computers. Again, the total number of comparisons and moves is a better metric. But not all comparisons are created equal -- what really matters are unsuccessful comparisons, not so much successful ones. An unsuccessful comparison results in a jump in the compiled code, which forces a flush of the instruction pipeline.
The second important factor is the size of the set. There is no, and cannot be, a sorting algorithm that is equally efficient on small sets and on large sets. For small sets the "overhead" of the algorithm itself is an important factor, so the simpler the algorithm, the better it performs on small sets.
Several more complex algorithms such as radix sort (which does not use comparisons), shellsort, mergesort, heapsort, and even the quite fragile, but still fashionable quicksort are much faster on larger sets. Mergesort is now used by several scripting languages as the internal sorting algorithm instead of quicksort. Least significant digit (LSD) radix sort has recently resurfaced as an alternative to other high-performance sorting algorithms for integer keys and short strings (like heapsort and mergesort), because it does not use comparisons and can utilize modern CPUs much better than traditional comparison-based algorithms.
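A sketch of an LSD radix sort for non-negative integers; base 256 (one byte per pass) is an arbitrary choice here. Note that there is not a single key comparison in the sorting itself:

```python
def lsd_radix_sort(a, base=256):
    """LSD radix sort for non-negative integers; no key comparisons."""
    if not a:
        return a
    shift, max_key = 0, max(a)   # max() only decides the number of passes
    while (max_key >> shift) > 0:
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x >> shift) % base].append(x)  # stable within each digit
        a = [x for b in buckets for x in b]         # concatenate buckets
        shift += 8                                  # next byte (base 256)
    return a
```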
Anyway, IMHO if an instructor uses bubblesort as an example you should be slightly wary ;-). The same is true if quicksort is over-emphasized in the course as an all-singing, all-dancing solution for sorting large sets.
But the most dangerous case is when the instructor emphasizes object-oriented programming while describing sorting. That applies to book authors too. It's better to get a second book in such a case, as typically those authors obscure the subject, making it more complex (and less interesting) than it should be. Of course, as a student you have no choice and need to pass the exam, but be very wary of this trap of mixing apples and oranges.
You might fail to understand the course material just because the instructor is too preoccupied with OO machinery instead of algorithms. So while you may feel that you are incapable of mastering the subject, in reality that simply means the instructor is an overcomplexity junkie, nothing more, nothing less ;-). In any case OO obscures the elegance of sorting algorithms to such an extent that most students hate the subject for the rest of their lives. If this is the case, that only means you are normal.
The classic treatise on sorting algorithms remains Volume 3 of TAoCP by Donald Knuth. The part of the book in which sorting algorithms are explained is very short (the volume also covers searching and several other subjects): just the first 180 pages. You can buy the first edition for 3-5 dollars (and it is still as good as later editions for teaching purposes, as Knuth did a good job in the first edition and never improved on it in later editions -- changes are mostly small corrections and typographic niceties; please note that the flowcharts in the book are a horrible mess -- shame on Knuth).
Unfortunately the third volume was written when Knuth was already pretty much exhausted by publishing the first two volumes, and that shows. The third volume was also based not on Knuth's own research but on the lecture notes of Professor Robert W. Floyd, one of the few winners of the ACM Turing prize and among the top scientists and technologists of the USA's early computer science explosion. Also, at the time of writing the field was still developing quickly, and this fact is reflected in the unevenness of the book's content: some algorithms are well covered, others not so well. It can now be viewed as the most outdated volume of the three, despite being published last. Nevertheless it remains a classic because Knuth is Knuth, and his touch on the subject can't be replicated easily.
As any classic, it is quite difficult to read (examples are in MIX assembler; having grown up on the ancient IBM 650, Knuth missed the value of the System/360 instruction set). Still, as a classic should, it produces insights that you never get from reading regular textbooks. The most valuable part is probably not the content but the exercises. Most of the content of Volume 3 can now be found in other books, although in somewhat emasculated fashion.
Again I would like to warn you that the main disaster in "Data structures and algorithms" courses happens when they try to mix apples with oranges, poisoning sorting algorithms by injecting a pretty much unrelated subject -- OO -- into the course. And it is the students who suffer in this case, not the instructor ;-)
Complexity of the algorithms is just one way to classify sorting algorithms. There are many others. One important classification is based on the internal structure of the algorithm:
We can also classify sorting algorithms by several other criteria such as:
Computational complexity (worst, average and best number of comparisons for several typical test cases, see below) in terms of the size of the list (n). Typically, a good average number of comparisons is O(n log n) and a bad one is O(n2). Please note that the constant hidden in the O notation matters. Even if both algorithms belong to the O(n log n) class, the algorithm with a constant of 1 is 100 times faster than the algorithm with a constant of 100.
Another problem with O notation is that asymptotic analysis tells you nothing about an algorithm's behavior on small lists, or about worst-case behavior. This is actually one of the drawbacks of Knuth's book, in which he got carried away with the attempt to establish theoretical bounds on algorithms. Worst-case behavior is probably more important than average.
For example, "plain-vanilla" quicksort requires O(n2) comparisons on an already sorted or almost sorted array: a case very important in practice, as sorting is often used as an "incompetent person's merge" -- instead of merging two sets, the second set is concatenated to the sorted set and then the whole array is re-sorted. If you look at usage statistics for Unix sort in a large datacenter, you will see that a lot of sorting is used for processing huge log files and then re-sorting them on an additional key. For example the pipeline
gzip -dc $1.gz | grep '" 200' | cut -d '"' -f 2 | cut -d '/' -f 3 | \
  tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn > most_frequent
can be applied to a 100M or larger file, and such a script can be run daily.
Sort algorithms which only use the generic key comparison operation always need at least O(n log n) comparisons on average, while sort algorithms which exploit the structure of the key space cannot sort faster than O(n log k), where k is the size of the keyspace. So if the key space is much smaller than n (a lot of records with identical keys), no comparison-based algorithm is competitive with radix-type sorting.
Please note that the number of comparisons is just a convenient theoretical metric. In reality both moves and comparisons matter, and failed comparisons matter more than successful ones; also, on short keys the cost of a move is comparable to the cost of a successful comparison (even if pointers are used).
Stability: stable sorting algorithms maintain the relative order of records with equal keys. If all keys are different then this distinction does not make any sense. But if there are equal keys, then a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.
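Python's built-in sorted() is documented to be stable, which makes the definition easy to demonstrate on records of the form (key, original position):

```python
# Records are (key, original_position) pairs; sort by key only.
records = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]

stable = sorted(records, key=lambda r: r[0])
# Equal keys keep their original relative order:
# [("a", 2), ("a", 4), ("b", 1), ("b", 3)]
```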
Stability is a valuable property of sorting algorithms.
Memory usage (and use of other computer resources). One large class of algorithms is the "in-place" sorts. They are generally slower than algorithms that use additional memory: additional memory can be used for mapping the keyspace. Most fast stable algorithms use additional memory.
With the current 16GB typical memory size on laptops and 64GB typical memory size on servers, as well as virtual memory used in all major OSes, the old emphasis on algorithms that do not require additional memory should probably be abandoned. The speedup that can be achieved by using a small amount of additional memory is considerable and it is stupid to ignore it. Moreover, even for classic in-place algorithms you can always use pointers to speed up the moving of records: for long records this additional space, proportional to N, speeds up moves dramatically, and for this reason alone such an implementation will outcompete other algorithms. In other words, when sorting on modern machines, not using pointers in any case where a record of the array being sorted is larger than 64 bits is not a good strategy. In real life, sorted records are usually several times bigger than a pointer (4 bytes on 32-bit CPUs, 8 bytes on 64-bit). That means that, essentially, the study of sorting algorithms on plain integer arrays is a much more realistic abstraction than it appears at first sight.
In modern computers multi-level memory is used. Cache-aware versions of sort algorithms, whose operations have been specifically chosen to minimize the movement of pages in and out of cache, can be dramatically quicker. One example is the tiled merge sort algorithm, which stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a single page in memory. Each of these subarrays is sorted with an in-place sorting algorithm, to discourage memory swaps, and a normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization. (LaMarca & Ladner 1997)
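The idea can be sketched as follows (a hedged illustration: the tile parameter stands in for the page size S, and the merging is done bottom-up rather than recursively, for brevity):

```python
def insertion_sort_range(a, lo, hi):
    """Sort a[lo:hi] in place with insertion sort."""
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def tiled_merge_sort(a, tile=32):
    """Merge sort that hands each small 'tile' to in-place insertion sort,
    then merges the sorted tiles bottom-up."""
    n = len(a)
    for lo in range(0, n, tile):               # sort each tile in place
        insertion_sort_range(a, lo, min(lo + tile, n))
    width = tile
    while width < n:                           # merge runs of doubling width
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            a[lo:hi] = merge(a[lo:mid], a[mid:hi])
        width *= 2
    return a
```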
Modern CPU designs try to minimize the wait states that are inevitable because the CPU is connected to much slower (often 3 or more times slower) memory, using a variety of techniques such as branch prediction (see also Classic RISC pipeline - Wikipedia). The number of pipeline flushes (branch misses) on typical data greatly influences the speed of algorithm execution. Modern CPUs typically use branch prediction (for example, the Intel Core i7 has two branch target buffers and possibly two or more branch predictors). If a branch is guessed incorrectly, the penalty is significant and proportional to the number of stages in the pipeline. So one possible code optimization strategy for modern CPUs is reordering branches so that the higher-probability code is located right after the branch instruction. This is a difficult optimization that requires thorough instrumentation of the algorithm, gathering statistics about branching behavior on relevant data samples, as well as some compiler pragmas to implement it. In any case, this is an important factor for modern CPUs, which have pipelines of 7, 10, or even 20 stages (as in the Pentium 4), where a pipeline miss can cost as much as three or four sequentially executed instructions.
Among non-stable algorithms, heapsort and Shellsort are probably the most underappreciated, and quicksort is one of the most overhyped. Please note that quicksort is a fragile algorithm that is not that good at using the predictive execution of instructions in the pipeline (a typical feature of modern CPUs). There are practically valuable sequences ("nearly sorted" data sets) on which, for example, shellsort is twice as fast as quicksort (see examples below).
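Since shellsort keeps coming up, here is a minimal version with Knuth's gap sequence 1, 4, 13, 40, ... (one of several reasonable gap sequences; the choice affects the constant factor):

```python
def shellsort(a):
    """Shellsort with Knuth's gap sequence (1, 4, 13, 40, ...)."""
    n = len(a)
    gap = 1
    while gap < n // 3:
        gap = 3 * gap + 1
    while gap >= 1:
        # Gapped insertion sort: each pass leaves the array "gap-sorted".
        for i in range(gap, n):
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 3
    return a
```

Because the final pass is a plain insertion sort on data that earlier passes have already nearly ordered, shellsort inherits insertion sort's excellent behavior on "almost sorted" input.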
It is fragile because the choice of pivot amounts to guessing an important property of the data to be sorted (and if the guess goes wrong, performance can be close to quadratic). There is also some experimental evidence that on very large sets quicksort runs into "suboptimal partitioning" on a regular basis, so it becomes a feature, not a bug. It does not work well on already sorted or "almost sorted" (a couple of permutations) data, on data sorted in reverse order, or on data that contains a lot of identical keys. Those are important cases that are frequent in real-world usage of sorting. Again, I would like to stress that you would be surprised to find what percentage of sorting is performed on already sorted datasets or datasets with fewer than 10% permutations.
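To see this fragility concretely, here is a sketch of quicksort with the naive first-element pivot, instrumented to count key comparisons (an explicit stack replaces recursion so the degenerate case does not overflow the call stack):

```python
def quicksort_comparisons(a):
    """Quicksort with a naive first-element pivot; returns the number
    of key comparisons performed."""
    a = list(a)
    comparisons = 0
    stack = [(0, len(a) - 1)]        # explicit stack instead of recursion
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[lo]                # naive pivot choice
        i = lo
        for j in range(lo + 1, hi + 1):   # Lomuto partition
            comparisons += 1
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[lo], a[i] = a[i], a[lo]
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))
    return comparisons

# On already sorted input every partition is maximally unbalanced:
# quicksort_comparisons(range(1000)) returns 999 + 998 + ... + 1 = 499500,
# while a random shuffle of the same data needs only a small multiple of n log n.
```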
For those (and not only those ;-) reasons you need to be skeptical about the "quicksort lobby", with Robert Sedgewick (at least in the past) as the main cheerleader. Quicksort is a really elegant algorithm, invented by Hoare in Moscow in 1961, but in real life qualities other than elegance and speed on random data are more valuable ;-). On the important practical cases of "semi-sorted" and "almost reverse sorted" data, quicksort is far from optimal and often demonstrates dismal performance. You need to do some experiments to see how horrible quicksort can be on already sorted data (simple variants exhibit quadratic behavior, a fact that is not mentioned in many textbooks on the subject) and how good shellsort is ;-). And on typical corporate data sets, including log records, heapsort usually beats quicksort, because the performance of quicksort depends too much on the choice of pivot, and a series of bad choices increases the time considerably. Each time the pivot element is close to the minimum (or maximum), the performance of that stage goes down the drain, and on large sets such degenerate cases are not only common, they can happen in series (I suspect that on large data sets quicksort "self-creates" such cases -- in other words, quicksort poisons its data during partitioning, making bad choices of pivot progressively more probable). Here are results from one contrarian article written by Paul Hsieh in 2004:
The data is the time in seconds taken to sort 10000 lists of varying size, of about 3000 integers each. Download the test here.
Please note that the total time, while important, does not tell you the whole story. Actually it reveals that, using the Intel compiler, heapsort can beat quicksort even "on average" -- not a small feat. You also need to know the standard deviation.
Actually one needs to read Volume 3 of Knuth to appreciate the beauty and complexity of some advanced sorting algorithms. Please note that sorting algorithms published in textbooks are more often than not implemented with errors. Even insertion sort presents a serious challenge to many book authors ;-). Sometimes the author does not know the programming language he/she uses well; sometimes details of the algorithm are implemented incorrectly. And it is not that easy to debug them. In this sense Knuth remains "the reference", one of the few books where the author took great effort to debug each and every algorithm he presented.
Please take any predictions about the relative efficiency of algorithms with a grain of salt unless they are provided for at least a dozen typical sets of data as described below. Shallow authors usually limit themselves to random sets, which are of little practical importance. So please do not spend much time browsing the web references below. They are not as good as Knuth's volume...
It is clear that you will never use sorting algorithms with worst case O(n2) in real-time avionics computers ;-). In order to judge the suitability of a sorting algorithm for a particular application you need to ask:
Does the data that the application needs to sort tend to have some preexisting order?
What are the properties of the keyspace?
Do you need a stable sort?
Can you use some "extra" memory, or do you need an "in-place" sort? (With current computer memory sizes you can usually afford some additional memory, so "in-place" algorithms no longer have any advantage.)
Generally, the more we know about the properties of the data to be sorted, the faster we can sort it. As we already mentioned, the size of the key space is one of the most important dimensions (sort algorithms that use the size of the key space can sort any sequence in time O(n log k)). For example, if we are sorting a subset of a card deck, we can take into account that there are only 52 distinct keys in any input sequence and select an algorithm that exploits the limited keyspace for a dramatic speedup. In this case using a generic sorting algorithm is just a waste.
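For instance, sorting a hand of cards can be done with a counting sort over the 52-key space; the suit * 13 + rank encoding below is just an illustration:

```python
def counting_sort(a, key_range):
    """Counting sort: O(n + k) for integer keys in range(key_range)."""
    counts = [0] * key_range
    for x in a:
        counts[x] += 1                 # first pass: tally each key
    out = []
    for key, c in enumerate(counts):   # second pass: rebuild in order
        out.extend([key] * c)
    return out

# Encode each card as suit * 13 + rank, giving exactly 52 possible keys.
hand = [51, 0, 13, 26, 0, 12]
# counting_sort(hand, 52) returns [0, 0, 12, 13, 26, 51]
```

Note there is no comparison between keys anywhere: the run time is O(n + k) regardless of the input order.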
Moreover, the relative speed of algorithms depends on the size of the data set: one algorithm can be faster than another for sorting fewer than, say, 64 or 128 items, and slower on larger sequences. Simpler algorithms with minimal housekeeping tend to perform better on small sets even if they are O(n2)-type algorithms -- especially algorithms that have fewer jump statements in the compiled code. For example, insertion sort is competitive with more complex algorithms up to N=25 or so.
There is no "best sorting algorithm" for all data. Various algorithms have their own strengths and weaknesses. For example, some "sense" already sorted or "almost sorted" data sequences and perform faster on such sets. In this sense Knuth's mathematical analysis is insufficient, although "worst time" estimates are useful.
As for input data, it is useful to distinguish between the following broad categories, all of which should be used in testing (random-number sorting is a very artificial test, and as such the estimate it provides does not have much practical value, unless we know that the other cases behave similarly or better):
Completely randomly reshuffled array (this is the only test that naive people use when evaluating sorting algorithms).
Already sorted array (you need to see how horrible quicksort is in this case and how good shellsort is ;-). This is actually a pretty important case, as sorting is often actually re-sorting of previously sorted data after minimal modifications of the data set. There are three important cases of already sorted arrays:
Array of distinct elements sorted in the "right" direction, for which no reordering is required (triangular array).
Already sorted array consisting of a small number of identical elements (stairs). The worst case is a rectangular array in which a single value is present (all values are identical).
Array already sorted in reverse order (many algorithms, such as insertion sort, work slowly on such an array).
Array that consists of merged already sorted arrays (chainsaw array). The subarrays can be sorted:
in the right direction
in the opposite direction
some in the right and some in the opposite direction (one case is the "symmetrical chainsaw").
Array consisting of a small number of identical elements (sometimes called the "few unique" case). If the number of distinct elements is large, this case is similar to the chainsaw but without the advantage of preordering, so it can be generated by "inflicting" a certain number of permutations on a chainsaw array. The worst case is when there are just two distinct values in the array (binary array). Quicksort is horrible on such data. Many other algorithms also work slowly on such an array.
Array already sorted in the right direction with N permutations (with N from 0.1 to 10% of the size). Insertion sort does well on such arrays. Shellsort is also quick. Quicksort does not adapt well to nearly sorted data.
Array already sorted in reverse order with N permutations.
Large data sets with normal distribution of keys.
Pseudo-random data (daily values of the S&P 500 or another index for a decade or two might be a good test set here; they are available from Yahoo.com).
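All of these input categories are easy to generate; here is a sketch of generators for the main ones (the function names are mine, matching the labels above):

```python
import random

def triangular(n):                 # already sorted, all keys distinct
    return list(range(n))

def reverse_sorted(n):             # sorted in reverse order
    return list(range(n, 0, -1))

def stairs(n, distinct=10):        # sorted, few distinct keys
    return sorted(random.randrange(distinct) for _ in range(n))

def chainsaw(n, teeth=10):         # merged pre-sorted runs (n divisible by teeth)
    tooth = n // teeth
    return [x for _ in range(teeth) for x in range(tooth)]

def few_unique(n, distinct=2):     # "few unique"; distinct=2 is the binary array
    return [random.randrange(distinct) for _ in range(n)]

def nearly_sorted(n, swaps):       # sorted array with a given number of swaps
    a = list(range(n))
    for _ in range(swaps):
        i, j = random.randrange(n), random.randrange(n)
        a[i], a[j] = a[j], a[i]
    return a
```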
Behavior on "almost sorted" data and worst-case behavior are very important characteristics of a sorting algorithm. For example, in sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the input is already sorted, the complexity of the natural variant (which detects pre-existing runs) falls to O(n): specifically, n-1 comparisons and zero moves are performed, which is the same as simply running through the input, checking whether it is pre-sorted. In Perl 5.8, merge sort is the default sorting algorithm (it was quicksort in previous versions of Perl). Python uses timsort, a hybrid of merge sort and insertion sort, which also became the standard sort algorithm in Java SE 7.
The calculation of the number of comparisons and the number of data moves can be done in any language. C and other compiled languages provide an opportunity to see the effect of the computer instruction set and CPU speed on sorting performance. Usually the test program is written as a subroutine that is called, say, 1000 times. Then the data-setup time (running just the data-copying or data-generating part the same number of times, without any sorting) is subtracted from the total.
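The subtract-the-baseline method looks like this with Python's standard timeit module (the names here are illustrative):

```python
import random
import timeit

def time_sort(sort_fn, data, repeat=1000):
    """Time sort_fn over `repeat` runs, subtracting the cost of the
    data-copying part so only the sorting itself is measured."""
    total = timeit.timeit(lambda: sort_fn(list(data)), number=repeat)
    baseline = timeit.timeit(lambda: list(data), number=repeat)  # copy only
    return total - baseline

data = [random.randrange(1000) for _ in range(300)]
# time_sort(sorted, data) gives the seconds spent sorting, copying excluded
```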
Such a method can provide a more or less accurate estimate of the actual run time of an algorithm on a particular data set and a particular CPU architecture. Modern CPUs that have a lot of general-purpose registers tend to perform better on sorting: sorting algorithms tend to belong to the class of algorithms with a tight inner loop, and the speed of this inner loop has a disproportionate effect on the total run time. If most scalar variables used in this inner loop can be kept in registers, the time improves considerably.
Artificial computers like Knuth's MIX can be used too. In this case the time is calculated from a table of the cost of each instruction (instruction-cost metric).
While choosing the language is somewhat a religious issue, there are good and bad language for sorting algorithms implementations. See Language Design and Programming Quotes for enlightened discussion of this touchy subject ;-)
BTW, you can use Perl or another interpreted scripting language in a similar way as the assembler of an artificial computer. I actually prefer Perl as a tool for exploring sorting, as it is an interpreted language (you do not need to compile anything) with a very good debugger and powerful control structures. And the availability of powerful control structures is one thing that matters in the implementation of sorting algorithms. Life is more complex than the proponents of structured programming have been trying to convince us.
Please note that a lot of examples in the books are implemented with errors. That's especially true for Java books and Java demo implementations. Use Knuth book for reference.
theodp writes: In addition to The Ghost of Steve Jobs, The Codecracker, a remix of 'The Nutcracker' performed by Silicon Valley's all-girl Castilleja School during Computer Science Education Week earlier this month featured a Bubble Sort Dance. Bubble Sort dancing, it turns out, is more popular than one might imagine. Search YouTube, for example, and you'll find students from the University of Rochester to Osmania University dancing to sort algorithms. Are you a fan of Hungarian folk-dancing? Well, there's a very professionally-done Bubble Sort Dance for you! Indeed, well-meaning CS teachers are pushing kids to Bubble Sort Dance to hits like Beauty and a Beat, Roar, Gentleman, Heartbeat, Under the Sea, as well as other music.
Recently, I have translated a variety of sorting routines into Visual Basic and compared their performance... I hope you will find the code for these sorts useful and interesting.
What makes a good sorting algorithm? Speed is probably the top consideration, but other factors of interest include versatility in handling various data types, consistency of performance, memory requirements, length and complexity of code, and the property of stability (preserving the original order of records that have equal keys). As you may guess, no single sort is the winner in all categories simultaneously (Table 2).
Let's start with speed, which breaks down into "order" and "overhead". When we talk about the order of a sort, we mean the relationship between the number of keys to be sorted and the time required. The best case is O(N); time is linearly proportional to the number of items. We can't do this with any sort that works by comparing keys; the best such sorts can do is O(N log N), but we can do it with a RadixSort, which doesn't use comparisons. Many simple sorts (Bubble, Insertion, Selection) have O(N^2) behavior, and should never be used for sorting long lists. But what about short lists? The other part of the speed equation is overhead resulting from complex code, and the sorts that are good for long lists tend to have more of it. For short lists of 5 to 50 keys or for long lists that are almost sorted, Insertion-Sort is extremely efficient and can be faster than finishing the same job with QuickSort or a RadixSort. Many of the routines in my collection are "hybrids", with a version of InsertionSort finishing up after a fast algorithm has done most of the job.
The third aspect of speed is consistency. Some sorts always take the same amount of time, but many have "best case" and "worst case" performance for particular input orders of keys. A famous example is QuickSort, generally the fastest of the O(N log N) sorts, but one that always has an O(N^2) worst case. It can be tweaked to make this worst case very unlikely to occur, but other O(N log N) sorts like HeapSort and MergeSort remain O(N log N) in their worst cases. QuickSort will almost always beat them, but occasionally they will leave it in the dust.
O(x) = worst-case running time
W(x) = best-case running time (i.e. Ω in the usual notation)
Q(x) = best and worst case are the same (i.e. Θ in the usual notation)
Page numbers refer to the Preiss textbook Data Structures and Algorithms with Object-Oriented Design Patterns in Java.
This page was created with some references to Paul's spiffy sorting algorithms page, which can be found here. Most of the images (scans of the textbook, except the code samples) were gratefully taken from that site.
Straight Insertion (p. 495) -- Type: Insertion; Stable: Yes; Best case: O(n); Worst case: O(n2)
On each pass the current item is inserted into the sorted section of the list. It starts at the last position of the sorted section and moves backwards until it finds the proper place for the current item. The item is then inserted there, and all items after it are shuffled along to accommodate it. For this reason, if the list is already sorted, the sort is O(n), because every element is already in its sorted position. If, however, the list is sorted in reverse, it takes O(n2) time, as it searches through the entire sorted section each time it does an insertion, shuffling all the other elements down the list. Good for nearly sorted lists, very bad for out-of-order lists, due to the shuffling.
Binary Insertion Sort (p. 497) -- Type: Insertion; Stable: Yes; Best case: O(n log n); Worst case: O(n2)
This is an extension of Straight Insertion as above; however, instead of doing a linear search for the correct position each time, it does a binary search, which is O(log n) instead of O(n). The only problem is that it always has to do the binary search even if the item is already in its correct position, which brings the cost of the best case up to O(n log n). Because it may still have to shuffle all the other elements down the list on each pass, the worst-case running time remains O(n2). This is better than Straight Insertion if the comparisons are costly: even though it always has to do log n comparisons, that generally works out to fewer than a linear search.
Bubble Sort (p. 499) -- Type: Exchange; Stable: Yes
On each pass through the data, adjacent elements are compared and switched if they are out of order, e.g. e1 with e2, then e2 with e3, and so on. This means that on each pass the largest element left unsorted is "bubbled" to its rightful place at the end of the array. However, because all adjacent out-of-order pairs are swapped, the algorithm could be finished sooner. Preiss claims that it always takes O(n2) time because it keeps scanning even when the list is already in order; his algorithm does not recognise that. Anyone with a bit more insight than Preiss can see that you can end the algorithm when a pass makes no swaps, making the best case O(n) (when the list is already sorted) while the worst case stays at O(n2). In general I believe this is better than insertion sort, because it has a good chance of finishing in much less than O(n2) time, unless you are a blind Preiss follower.
NOTE: Preiss uses a bad algorithm and claims that both best and worst case are O(n2). Using a little insight, however, the following is correct for the better bubble sort algorithm (does Peake agree with this?):
Best case: O(n).
Worst case: O(n2)
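The improved bubble sort with the early-exit check discussed above can be sketched as:

```python
def bubble_sort(a):
    """Bubble sort with the early-exit check: stop as soon as a full
    pass makes no swaps, giving the O(n) best case on sorted input."""
    n = len(a)
    for last in range(n - 1, 0, -1):
        swapped = False
        for i in range(last):                  # bubble the max up to `last`
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                        # no swaps: already sorted
            break
    return a
```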
Quicksort (p. 501) -- Type: Exchange; Stable: No; Best case: O(n log n); Worst case: O(n2)
I strongly recommend looking at the diagram for this one. The code is also useful and provided below (including the selectPivot method, even though that probably won't help your understanding anyway).
Quicksort operates along these lines: first a pivot is selected and removed from the list (hidden at the end). Then the elements are partitioned into two sections: one that is less than the pivot, and one that is greater. This partitioning is achieved by exchanging values. The pivot is then restored in the middle, and those two sections are recursively quicksorted.
A complicated but effective sorting algorithm.
Refer to page 506 for more information about these values. Note: Preiss on page 524 says that the worst case is O(n log n), contradicting page 506; I believe it is O(n2), as per page 506.
Straight Selection Sorting. 511 This one, although not very efficient, is very simple. Basically it makes n-1 linear passes over the list, and on each pass it selects the largest unsorted value and swaps it with the last unsorted element, for O(n²) work in total.
This means that it isn't stable, because for example a 3 could be swapped with a 5 that is to the left of a different 3.
A very simple algorithm to code and a very simple one to explain, but a little slow.
Note that you can do this using the smallest value, and swapping it with the first unsorted element.
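The pass structure just described — find the largest unsorted value, swap it into the last unsorted slot — can be sketched as follows (my own sketch; the smallest-value-to-the-front variant is symmetric):

```c
#include <stddef.h>

/* Straight selection sort: always Theta(n^2) comparisons whatever
 * the input order, but at most n-1 swaps. Not stable. */
void selection_sort(int a[], size_t n)
{
    for (size_t end = n; end > 1; end--) {
        size_t max = 0;
        for (size_t i = 1; i < end; i++)  /* linear scan for the maximum */
            if (a[i] > a[max])
                max = i;
        int tmp = a[max];                 /* swap into last unsorted slot */
        a[max] = a[end - 1];
        a[end - 1] = tmp;
    }
}
```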
Selection No. Unlike bubble sort, this one is truly Θ(n²): best and worst case are the same, because even if the list is already sorted the same number of selections must still be performed.
Heap Sort 513 This uses a similar idea to straight selection sorting, except that instead of using a linear search for the maximum, a heap is constructed, from which the maximum can easily be removed (and the heap reformed) in log n time. You then do n passes, each doing a log n remove-maximum, so the algorithm always runs in Θ(n log n) time; the original order of the list makes no difference. This exploits just about the only good use of a heap: finding the maximum element of a max-heap (or the minimum of a min-heap). It is in every way as good as straight selection sort, but faster. Selection No. Best Case: O(n log n).
Worst Case: O(n log n).
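The build-heap-then-remove-maximum scheme described above can be sketched like this (my own in-place sketch, assuming int keys):

```c
#include <stddef.h>

/* Restore the max-heap property by sifting the root down. */
static void sift_down(int a[], size_t root, size_t n)
{
    for (;;) {
        size_t child = 2 * root + 1;
        if (child >= n)
            break;
        if (child + 1 < n && a[child + 1] > a[child])
            child++;                      /* pick the larger child */
        if (a[root] >= a[child])
            break;
        int tmp = a[root]; a[root] = a[child]; a[child] = tmp;
        root = child;
    }
}

void heap_sort(int a[], size_t n)
{
    for (size_t i = n / 2; i-- > 0; )     /* build the max-heap */
        sift_down(a, i, n);
    for (size_t end = n; end > 1; end--) {
        /* "remove maximum": swap it to the end, reform the heap */
        int tmp = a[0]; a[0] = a[end - 1]; a[end - 1] = tmp;
        sift_down(a, 0, end - 1);
    }
}
```

Each remove-maximum step is O(log n), and there are n of them, which is where the unconditional Θ(n log n) comes from.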
OK, now I know that looks tempting, but for a much more programmer-friendly O(n log n) sort, look at merge sort instead.
2 Way Merge Sort 519 It is fairly simple to take two sorted lists and combine them into another sorted list: just keep comparing the heads of the two lists and removing the smaller to join the new sorted list. As you may guess, this merge is an O(n) operation. With two-way merge sorting, we apply this method to a single unsorted list. In brief, the algorithm recursively splits up the array until it is fragmented into single-element arrays. Each of those single elements is then merged with its pair, those pairs are merged with their pairs, and so on, until the entire list is united in sorted order. If a fragment ever has an odd number of elements, one half simply gets one more element than the other, which has no effect on the actual sorting. Now isn't this much easier to understand than heap sort? It is really quite intuitive. This one is best explained with the aid of the diagram, and if you haven't already, you should look at it. Merge Yes. Best and Worst Case: Θ(n log n).
Bucket Sort 526 Bucket sort initially creates a "counts" array whose size is the size of the range of all possible values of the data we are sorting; e.g. if all values could be between 1 and 100, the array would have 100 elements. Two passes are then made over the list. The first tallies the occurrences of each number into the "counts" array: the value at each index signifies how many times that number occurred in the list. The second and final pass goes through the counts array, regenerating the list in sorted form. So if there were 3 instances of 1, 0 of 2, and 1 of 3, the sorted list would be recreated as 1,1,1,3. The diagram will most likely remove all shadows of doubt in your minds.
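The two-way merge sort described above can be sketched in C like this (my own sketch, not Preiss's code; it uses a temporary buffer, so it is not in-place):

```c
#include <stdlib.h>
#include <string.h>

/* The O(n) merge step: repeatedly take the smaller head of the
 * two sorted runs a[lo..mid) and a[mid..hi). */
static void merge_runs(int a[], int tmp[], size_t lo, size_t mid, size_t hi)
{
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  /* <= keeps it stable */
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof a[0]);
}

static void msort(int a[], int tmp[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;
    size_t mid = lo + (hi - lo) / 2;  /* odd lengths: one half gets 1 more */
    msort(a, tmp, lo, mid);
    msort(a, tmp, mid, hi);
    merge_runs(a, tmp, lo, mid, hi);
}

void merge_sort(int a[], size_t n)
{
    int *tmp = malloc(n * sizeof a[0]);
    if (!tmp)
        return;                       /* allocation failed; leave input as-is */
    msort(a, tmp, 0, n);
    free(tmp);
}
```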
This suffers a limitation that radix sort doesn't: if the possible range of your numbers is very high, you would need too many "buckets" and it becomes impractical. The other limitation, which radix sort also avoids, is that stability is not maintained. It does however outperform radix sort if the possible range is very small. Distribution No. Best and Worst case: Θ(m + n), where m is the number of possible values. Obviously this is O(n) for most values of m, so long as m isn't too large.
The reason that these distribution sorts break the O(n log n) barrier is because no comparisons are performed!
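The two-pass counts-array scheme described above can be sketched like this (a hedged illustration; the function name `bucket_sort` and the 1..m value range are my assumptions, not Preiss's code):

```c
#include <stdlib.h>

/* Counting-style bucket sort for int values in 1..m.
 * Pass 1 tallies occurrences into counts[]; pass 2 walks counts[]
 * and regenerates the list in sorted order. No comparisons between
 * keys are ever performed, which is how it beats O(n log n). */
void bucket_sort(int a[], size_t n, int m)
{
    size_t *counts = calloc((size_t)m + 1, sizeof *counts);
    if (!counts)
        return;                      /* allocation failed; leave input as-is */
    for (size_t i = 0; i < n; i++)   /* pass 1: tally */
        counts[a[i]]++;
    size_t k = 0;
    for (int v = 1; v <= m; v++)     /* pass 2: regenerate in sorted form */
        while (counts[v]) {
            counts[v]--;
            a[k++] = v;
        }
    free(counts);
}
```

With input {3,1,1,3,2} and m = 3 this regenerates {1,1,2,3,3}, matching the 1,1,1,3-style example in the text.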
Radix Sort 528 This is an extremely spiffy application of the bucket sort idea. This time several bucket-like passes are performed, one for each digit; but instead of a counts array representing the range of all possible values of the data, it represents only the possible values of an individual digit, which in decimal numbering is just 10. First a bucket-style pass is performed using only the least significant digit as the key, then another using the next least significant digit, and so on, until you have done as many passes as there are digits in your biggest number. Because each pass uses only 10 buckets (the counts array is of size 10), each pass is O(n), and the whole sort is O(d·n) for d-digit keys — effectively O(n) when the number of digits is bounded. See below for a Radix Example. On each of the adapted bucket-sort passes, the counts array stores how many keys have each digit value; the offsets are then created from the counts, and the sorted array is regenerated using the offsets and the original data.
This is the god of sorting algorithms. It will sort the largest list with the biggest numbers, and has guaranteed linear time complexity for keys of bounded length. And it isn't very complex to understand or implement.
My recommendation is to use this one wherever possible.
Distribution Yes. Best and Worst Case: Θ(n).
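The digit-by-digit passes with counts turned into offsets can be sketched as follows (my own LSD radix sketch for non-negative int keys, not the book's code):

```c
#include <stdlib.h>
#include <string.h>

/* LSD radix sort: one counting pass per decimal digit, least
 * significant first. Each pass turns its counts into offsets and
 * regenerates the array in that digit's order. O(d*n) for d-digit
 * non-negative keys -- O(n) when d is bounded. Stable. */
void radix_sort(int a[], size_t n)
{
    if (n == 0)
        return;
    int *out = malloc(n * sizeof a[0]);
    if (!out)
        return;                               /* leave input as-is */
    int max = a[0];
    for (size_t i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10) {
        size_t counts[10] = {0};
        for (size_t i = 0; i < n; i++)        /* count this digit */
            counts[(a[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)          /* counts -> offsets */
            counts[d] += counts[d - 1];
        for (size_t i = n; i-- > 0; )         /* backwards keeps it stable */
            out[--counts[(a[i] / exp) % 10]] = a[i];
        memcpy(a, out, n * sizeof a[0]);
    }
    free(out);
}
```

Because each pass is a stable distribution on one digit, earlier (less significant) orderings are preserved by later passes, which is what makes the digit-at-a-time idea correct.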
1. About the code
2. The Algorithms
2.1. 5.2 Internal Sorting
2.2. 5.2.1 Sorting by Insertion
2.3. 5.2.2 Sorting by Exchanging
2.4. 5.2.3 Sorting by selection
2.5. 5.2.4 Sorting by Merging
2.6. 5.2.5 Sorting by Distribution
by Marc Tardif
last updated 2000/01/31 (version 1.1)
also available as XML
This article should be considered an independent re-implementation of all of Knuth's sorting algorithms from The Art of Computer Programming, Vol. 3, Sorting and Searching. It provides the C code for every algorithm discussed at length in section 5.2, Internal Sorting. No explanations are provided here; the book should provide all the necessary comments. The following link is a sample implementation to confirm that everything is in working order: sknuth.c.
This article describes a program that sorts an arbitrarily large number of records in less time than any algorithm based on comparison sorting can. For many commonly encountered files, time will be strictly proportional to the number of records. It is not a toy program. It can sort on an arbitrary group of fields with arbitrary collating sequence on each field faster than any other program available.
Benchmark results--all times in seconds
|Program|10,000 records, 10-char alpha key|100,000 records, 10-char alpha key|1 million unique 180-byte records, 10-char alpha key|1 million unique 180-byte records, 7-char integer key|1 million unique 180-byte records, full 180-byte alphanumeric key|
|Windows NT sort command|2.73|54.66|NA|NA|NA|
|Cosort|.31|7.89|300.66|297.33|201.34|
|NitroSort|.28|6.94|296.1|294.71|270.67|
|Opt-Tech|.54|9.27|313.33|295.31|291.52|
|Postman's Sort||||||
An alternative taxonomy (to that of Knuth and others) of sorting algorithms is proposed. It emerges naturally out of a top-down approach to the derivation of sorting algorithms. Work done in automatic program synthesis has produced interesting results about sorting algorithms that suggest this approach. In particular, all sorts are divided into two categories: hardsplit/easyjoin and easysplit/hardjoin. Quicksort and merge sort, respectively, are the canonical examples in these categories. Insertion sort and selection sort are seen to be instances of merge sort and quicksort, respectively, and sinking sort and bubble sort are in-place versions of insertion sort and selection sort. Such an organization introduces new insights into the connections and symmetries among sorting algorithms, and is based on a higher level, more abstract, and conceptually simple basis. It is proposed as an alternative way of understanding, describing, and teaching sorting algorithms.
sortchk is a simple test suite I wrote in order to measure the costs of different sorting algorithms in terms of needed comparisons and data moves, not in terms of time consumed, as that is too dependent on things like the type of computer, programming language or operating system. The software is meant to be easily extensible and easy to use.
It was developed on NetBSD, but it will also compile and run well on other systems, such as FreeBSD, OpenBSD, Darwin, AIX and Linux. With little work, it should also be able to run on foreign platforms such as Microsoft Windows or MacOS 9.
Sort Algorithms Visualizer
Sandeep Mitra's Java Sorting Animation Page with user input
The Complete Collection of Algorithm Animations
Sorting Algorithms Demo by Jason Harrison (email@example.com) -- Java applets, no possibility to alter input.
Animation of Sorting Algorithms
Sorting Algorithms Demo
Sorting Demo is a powerful tool for demonstrating how sorting algorithms work. It was designed to alleviate some of the difficulty instructors often have in conveying these concepts to students due to lack of blackboard space and having to constantly erase. This program started out as a quicksort demonstration applet, and after proposing the idea to Seton Hall's math and computer science department, I was given permission to expand the idea to encompass many of the sorts commonly taught in a data structures/algorithms class. There are currently 9 algorithms featured, all of which allow you to either type in your own array or make the computer generate a random array of a given size. Although graphics limit the input array to a length of 25 elements, there is the option to run the algorithm without graphics in order to get an understanding of its running time in comparison with the other sorts.
Sorting Algorithms Demo
We all know that Quicksort is one of the fastest algorithms for sorting. It's not often, however, that we get a chance to see exactly how fast Quicksort really is. The following applets chart the progress of several common sorting algorithms while sorting an array of data using constant-space algorithms. (This is inspired by the algorithm animation work at Brown University and the video Sorting Out Sorting from the University of Toronto (circa 1970!).)
Animated Algorithms Sorting
This applet lets you observe the dynamic operation of several basic sort algorithms. The implementations, explanations and display technique are all taken from Algorithms in C++, Third Edition, Sedgewick, 1998.
Generate a data set by clicking one of the data set buttons.
This generates a data set with the appropriate characteristics. Each data set is an array of 512 values in the range 0..511. Data sets have no duplicate entries except the Gaussian distribution.
Run one or more sort algorithms on the data set to see how the algorithm works.
Each execution of a sort runs on the same data set until you generate a new one.
Animator for selection sort Provided by - Peter Brummund through the Hope College Summer Research Program. Algorithms covered:
This is a collection of algorithms for sorting and searching. Descriptions are brief and intuitive, with just enough theory thrown in to make you nervous. I assume you know a high-level language, such as C, and that you are familiar with programming concepts including arrays and pointers.
The first section introduces basic data structures and notation. The next section presents several sorting algorithms. This is followed by a section on dictionaries, structures that allow efficient insert, search, and delete operations. The last section describes algorithms that sort data and implement dictionaries for very large files. Source code for each algorithm, in ANSI C, is included.
Most algorithms have also been coded in Visual Basic. If you are programming in Visual Basic, I recommend you read Visual Basic Collections and Hash Tables, for an explanation of hashing and node representation.
If you are interested in translating this document to another language, please send me email. Special thanks go to Pavel Dubner, whose numerous suggestions were much appreciated. The following files may be downloaded:
P. M. McIlroy, K. Bostic and M. D. McIlroy, Engineering radix sort, Computing Systems
6 (1993) 5-27 gzipped
M. D. McIlroy, A killer adversary for quicksort, Software--Practice and Experience 29 (1999) 341-344, gzipped PostScript or pdf
Data Structures Lecture 12
Discussion of Sorting Algorithms
Sequential and parallel sorting algorithms
CCAA - Sorting Algorithms
Sorting and Searching Algorithms: A Cookbook Good explanations & source code, Thomas Niemann
The Online Guide to Sorting at IIT
Radix Sort Tutorial .
Sort Benchmark Home Page all you need to know on sort benchmark...
M. H. Albert and M. D. Atkinson, Sorting with a Forklift, Eighth Scandinavian Workshop on Algorithm Theory, July 2002.
Vladimir Estivill-Castro, Derick Wood, A survey of adaptive sorting algorithms, ACM Computing Surveys (CSUR), v.24 n.4, p.441-476, Dec. 1992
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.
Copyright © 1996-2015 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: October, 11, 2015