Ch. 10 Algorithm Efficiency & Sorting
O( ): Big-Oh
An algorithm is said to take O(f(n)) time if its running time is upper-bounded by c·f(n), e.g., O(n), O(n log n), O(n²), O(2ⁿ), …
Formal definition: O(f(n)) = { g(n) | ∃c > 0, n₀ ≥ 0 s.t. ∀n ≥ n₀, c·f(n) ≥ g(n) }
Strictly, g(n) ∈ O(f(n)) is correct, but conventionally we write g(n) = O(f(n)).
Intuitive meaning: g(n) = O(f(n)) ⇒ g grows no faster than f
Figure 10.1 Time requirements as a function of the problem size n
Figure 10.3a A comparison of growth-rate functions: a) in tabular form
Figure 10.3b A comparison of growth-rate functions: b) in graphical form
Types of Running-Time Analysis
Worst-case analysis: analysis for the worst-case input(s)
Average-case analysis: analysis over all inputs; more difficult to carry out
Best-case analysis: analysis for the best-case input(s); usually not meaningful
Running Time for Search in an Array
Sequential search: worst case O(n), average case O(n)
Binary search: worst case O(log n), average case O(log n) ← verify these yourself!
Sorting Algorithms: mostly between O(n²) and O(n log n)
If the input satisfies special properties, O(n) sorting is also possible, e.g., when the input consists of integers between −O(n) and O(n).
Selection Sort
An iteration:
Find the largest item
Swap it into the rightmost place
Exclude the rightmost item
Repeat this iteration until only one item remains
Running time (worst and average case): (n−1)+(n−2)+···+2+1 = O(n²)
selectionSort(theArray[], n) {
  for (last = n-1; last >= 1; last--) {
    largest = indexOfLargest(theArray, last+1);
    swap theArray[largest] and theArray[last];
  }
}

indexOfLargest(theArray, size) {
  largest = 0;
  for (i = 1; i < size; ++i) {
    if (theArray[i] > theArray[largest])
      largest = i;
  }
  return largest;
}

Running time is dominated by the total number of for-loop iterations in the two functions: indexOfLargest is called n−1 times, and each call is one step cheaper than the one before.
(n−1)+(n−2)+···+2+1 = O(n²)
Bubble Sort
Running time (worst and average case): (n−1)+(n−2)+···+2+1 = O(n²)
Insertion Sort
Running time: O(n²)
Worst case: 1+2+···+(n−2)+(n−1)
Average case: ½ (1+2+···+(n−2)+(n−1))
Figure 10.6 An insertion sort partitions the array into two regions
Mergesort
Algorithm mergeSort(S)
{ // Input: sequence S with n elements
  // Output: sorted sequence S
  if (S.size() > 1) {
    Let S1, S2 be the 1st half and 2nd half of S, respectively;
    mergeSort(S1);
    mergeSort(S2);
    S ← merge(S1, S2);
  }
}

Algorithm merge(S1, S2)
{ // Merge the two sorted sequences S1 and S2 into one sorted sequence S. }
Animation (Mergesort)
Example: 7 2 | 9 4 → (2 7) and (4 9) → 2 4 7 9
Running time: O(n log n)
Figure 10.8 A mergesort with an auxiliary temporary array
Figure 10.9 A mergesort of an array of six integers
Figure 10.10 A worst-case instance of the merge step in mergesort
Quicksort
Algorithm quickSort(S)
{ // Input: sequence S with n elements
  // Output: sorted sequence S
  if (S.size() > 1) {
    x ← pivot of S;
    (L, R) ← partition(S, x); // L: left partition, R: right partition
    quickSort(L);
    quickSort(R);
    return L • x • R; // concatenation
  }
}

Algorithm partition(S, x)
{ // Put the items of S smaller than x into partition L, and the items greater than or equal to x into partition R. }
Animation (Quicksort)
Average-case running time: O(n log n)
Worst-case running time: O(n²)
Figure 10.12 A partition with a pivot
There are many ways to partition; the textbook presents one of them.
Radix Sort
radixSort(theArray, d)
{ // Sort n d-digit integers in the array theArray
  for (j = d downto 1) {
    Do a stable sort on theArray by the jth digit;
  }
}

Stable sort: a sort in which items with equal values keep their original relative order after sorting.
Example (d = 4):
Input:             0123 2154 0222 0004 0283 1560 1061 2150
After 1s digit:    1560 2150 1061 0222 0123 0283 2154 0004
After 10s digit:   0004 0222 0123 2150 2154 1560 1061 0283
After 100s digit:  0004 1061 0123 2150 2154 0222 0283 1560
After 1000s digit: 0004 0123 0222 0283 1061 1560 2150 2154
Running time: O(n) ← d: a constant
Comparison of Sorting Efficiency

Algorithm        Worst Case   Average Case
Selection Sort   n²           n²
Bubble Sort      n²           n²
Insertion Sort   n²           n²
Mergesort        n log n      n log n
Quicksort        n²           n log n
Radix Sort       n            n
Heapsort         n log n      n log n