CSCE 310J: Data Structures & Algorithms
Chapter 2: Analysis of Algorithms

Dr. Steve Goddard
goddard@cse.unl.edu
http://www.cse.unl.edu/~goddard/Courses/CSCE310J

Giving credit where credit is due:
• Most of the lecture notes are based on the slides from the textbook's companion website – http://www.aw.com/cssuport/
• Several slides are from William Spears of the University of Wyoming
• I have modified them and added new slides


Analysis of Algorithms

Issues:
• Correctness
• Time efficiency
• Space efficiency
• Optimality

Approaches:
• Theoretical analysis
• Empirical analysis


Theoretical analysis of time efficiency

• Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.
• Basic operation: the operation that contributes most towards the running time of the algorithm.
• Running time: T(n) ≈ c_op · C(n), where T(n) is the running time, c_op is the execution time for the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.


Input size and basic operation examples

Problem                                          | Input size measure          | Basic operation
-------------------------------------------------|-----------------------------|----------------------------------------
Search for key in a list of n items              | Number of items in list, n  | Key comparison
Multiply two matrices of floating point numbers  | Dimensions of matrices      | Floating point multiplication
Compute a^n                                      | n                           | Floating point multiplication
Graph problem                                    | #vertices and/or edges      | Visiting a vertex or traversing an edge


Empirical analysis of time efficiency

• Select a specific (typical) sample of inputs.
• Use a physical unit of time (e.g., milliseconds) OR count the actual number of basic operations.
• Analyze the empirical data.


Best-case, average-case, worst-case

For some algorithms, efficiency depends on the type of input:
• Worst case: W(n) – maximum over inputs of size n
• Best case: B(n) – minimum over inputs of size n
• Average case: A(n) – "average" over inputs of size n
  – Number of times the basic operation will be executed on typical input
  – NOT the average of the worst and best cases
  – Expected number of basic-operation repetitions, treated as a random variable under some assumption about the probability distribution of all possible inputs of size n


Example: Sequential search

• Problem: Given a list of n elements and a search key K, find an element equal to K, if any.
• Algorithm: Scan the list and compare its successive elements with K until either a matching element is found (successful search) or the list is exhausted (unsuccessful search).
• Analyze the worst case, best case, and average case.


Types of formulas for basic operation count

• Exact formula, e.g., C(n) = n(n-1)/2
• Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5 n²
• Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ cn²


Order of growth

• Most important: order of growth within a constant multiple as n → ∞.
• Example questions:
  – How much faster will the algorithm run on a computer that is twice as fast?
  – How much longer does it take to solve a problem of double the input size? (A small counting sketch follows below.)
• See Table 2.1.
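The slides leave the doubling question as an exercise; as an illustrative aside (not part of the original deck), the Python sketch below counts the key comparisons made by sequential search on worst-case inputs and prints the ratio C(n)/C(n/2). The function name and the choice of worst-case input (a missing key) are assumptions made only for this example.

    def sequential_search(lst, key):
        # Scan the list, counting key comparisons (the basic operation).
        comparisons = 0
        for i, item in enumerate(lst):
            comparisons += 1
            if item == key:
                return i, comparisons        # successful search
        return -1, comparisons               # unsuccessful search: n comparisons

    if __name__ == "__main__":
        prev = None
        for n in (1000, 2000, 4000, 8000):
            # Worst case: the key is absent, so all n elements are compared.
            _, c = sequential_search(list(range(n)), -1)
            ratio = "n/a" if prev is None else f"{c / prev:.2f}"
            print(f"n = {n:5d}   C(n) = {c:5d}   C(n)/C(n/2) = {ratio}")
            prev = c

The ratio settles at 2, consistent with the linear order of growth of sequential search's worst case.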
Asymptotic growth rate

A way of comparing functions that ignores constant factors and small input sizes:
• O(g(n)): class of functions f(n) that grow no faster than g(n)
• Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)
• Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)
See Figures 2.1, 2.2, and 2.3 (big-oh, big-omega, and big-theta, respectively).


Establishing rate of growth: Method 1 – using limits

If lim_{n→∞} T(n)/g(n) =
• 0:      the order of growth of T(n) is less than the order of growth of g(n)
• c > 0:  the order of growth of T(n) equals the order of growth of g(n)
• ∞:      the order of growth of T(n) is greater than the order of growth of g(n)

Examples:
• 10n vs. 2n²
• n(n+1)/2 vs. n²
• log_b n vs. log_c n


L'Hôpital's rule

If lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞ and the derivatives f′ and g′ exist, then

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)


An Example

Let f(N) = 25N² + N and g(N) = N². Then lim_{N→∞} f(N)/g(N) = 25. So f(N) = Θ(N²).


Another Example

Let f(N) = N log N and g(N) = N^1.5. Then

    lim_{N→∞} f(N)/g(N) = lim_{N→∞} (N log N)/N^1.5 = lim_{N→∞} (log N)/N^0.5.

Now take the derivative of the top and the bottom (L'Hôpital's rule) to get

    (1/N) / (0.5 N^(-0.5)) = 2/N^0.5.

This approaches 0, so N log N = o(N^1.5).


Establishing rate of growth: Method 2 – using definition

• f(n) is O(g(n)) if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple).
• Formally, there exist a positive constant c and a non-negative integer n0 such that

    f(n) ≤ c·g(n) for every n ≥ n0.

Examples:
• 10n is O(2n²)
• 5n + 20 is O(10n)


Basic asymptotic efficiency classes

1        constant
log n    logarithmic
n        linear
n log n  n log n
n²       quadratic
n³       cubic
2^n      exponential
n!       factorial


Time efficiency of nonrecursive algorithms

Steps in the mathematical analysis of nonrecursive algorithms:
• Decide on a parameter n indicating input size.
• Identify the algorithm's basic operation.
• Determine the worst, average, and best cases for input of size n.
• Set up a summation for C(n) reflecting the algorithm's loop structure.
• Simplify the summation using standard formulas (see Appendix A).


Examples

• Matrix multiplication
• Selection sort
• Insertion sort
• Mystery algorithm

[The matrix multiplication, selection sort, and insertion sort worked examples appear as figures on the original slides; a counting sketch for selection sort follows below.]
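Since the selection sort worked example survives only as a figure reference, here is a small Python sketch (an addition, not from the original slides) that follows the analysis steps above: it treats the element comparison as the basic operation and counts it, confirming C(n) = n(n-1)/2.

    def selection_sort(a):
        # Sort list a in place; return the number of element comparisons.
        n = len(a)
        comparisons = 0
        for i in range(n - 1):
            min_idx = i
            for j in range(i + 1, n):
                comparisons += 1             # basic operation: a[j] < a[min_idx]
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]
        return comparisons

    if __name__ == "__main__":
        for n in (10, 100, 1000):
            data = list(range(n, 0, -1))     # any input gives the same count
            count = selection_sort(data)
            print(f"n = {n:4d}   C(n) = {count:6d}   n(n-1)/2 = {n * (n - 1) // 2}")

The count does not depend on the input order, which is why selection sort's best, average, and worst cases all coincide at Θ(n²).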
Mystery algorithm

for i := 1 to n - 1 do
    max := i;
    for j := i + 1 to n do
        if |A[j, i]| > |A[max, i]| then max := j;
    for k := i to n + 1 do
        swap A[i, k] with A[max, k];
    for j := i + 1 to n do
        for k := n + 1 downto i do
            A[j, k] := A[j, k] - A[i, k] * A[j, i] / A[i, i];


Programming with Recursion

Recursion is similar to a proof by induction:
• There must be a base (trivial) case.
• The recursion is assumed to hold for all k < N.
• The Nth case is built from the k < N cases.


Aside: Recall Proof by Induction

• Show the theorem is true for the trivial case(s).
• Then, assuming the theorem is true up to case N, show it is true for N+1.
• Thus it is true for all N.


Proof that T(N) ≥ F(N), by (strong) induction

Here F(N) is the Nth Fibonacci number (F(0) = F(1) = 1, F(N+1) = F(N) + F(N-1)), and T(N) is a cost function for which the slides give T(0) = T(1) = 1, T(2) = 4, and T(N+1) > T(N) + T(N-1).

Base cases:
    T(0) = 1 ≥ F(0) = 1
    T(1) = 1 ≥ F(1) = 1
    T(2) = 4 ≥ F(2) = 2

Assume the theorem holds for all k, 1 ≤ k ≤ N. Now prove it for the N+1 case:

    T(N+1) > T(N) + T(N-1) ≥ F(N) + F(N-1) = F(N+1).


Proof that F(N) ≥ (3/2)^N

Base cases:
    F(5) = 8 ≥ (3/2)^5 ≈ 7.6
    F(6) = 13 ≥ (3/2)^6 ≈ 11.4

Assume the theorem holds for all k, 1 ≤ k ≤ N. Now prove it for the N+1 case:

    F(N+1) = F(N) + F(N-1)
           ≥ (3/2)^N + (3/2)^(N-1)
           = (3/2)^N (1 + 2/3)
           = (3/2)^N (5/3)
           > (3/2)^N (3/2)
           = (3/2)^(N+1).

Taken together, the two proofs give T(N) ≥ F(N) ≥ (3/2)^N for N ≥ 5, so a cost function T satisfying these conditions grows at least exponentially in N. (A counting sketch follows below.)
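The slides do not show the recursive routine behind T(N); the Python sketch below assumes it is the naive recursive Fibonacci computation, a common choice for this argument, and counts the calls it makes so the totals can be compared against F(N) and (3/2)^N. The function name fib_calls and the mutable counter are illustrative choices, not from the slides.

    def fib_calls(n, counter):
        # Naive recursive Fibonacci; counter[0] tallies the total number of calls.
        counter[0] += 1
        if n <= 1:
            return 1                     # matches the slides' convention F(0) = F(1) = 1
        return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

    if __name__ == "__main__":
        print(" N    F(N)   calls T(N)    (3/2)^N")
        for n in range(5, 26, 5):
            counter = [0]
            f = fib_calls(n, counter)
            print(f"{n:2d}  {f:6d}   {counter[0]:10d}  {1.5 ** n:9.1f}")

With this routine the call count works out to 2·F(N) − 1, so the T(N) ≥ F(N) bound above is visible directly in the output.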
I Theorems logXAB = logXA + logXB logXA/B = logXA – logXB logX AB = B logXA Design and Analysis of Algorithms - Chapter 2 37 Logarithms… I logAB = logCB / logCA Proof: Let X = logCB, Y = logCA, and Z = logAB. By the definition of logarithm: CX = B, CY = A, and AZ = B. Thus CX = B = AZ = CYZ , X = YZ, Z = X/Y. Design and Analysis of Algorithms - Chapter 2 38 Series The notation for logs can be confusing. There are two alternatives: • log2(x) = log (log x) • log2(x) = (log x)2 N S = ∑i = or i =1 I Generally, we use the 2nd definition. I Note: log2(x) is not (log x2) I Note: log is not a multiplicative entity, it is a function. N ( N + 1) 2 Proof by Gauss when 9 years old (?!): I S = 1 + 2 + 3 + ... + ( N − 2 ) + ( N − 1 ) + N S = N + ( N − 1 ) + ( N − 2 ) + ... + 3 + 2 + 1 − − − − − − − − − − − − − − − − − − − − − − − 2 S = N ( N + 1) Design and Analysis of Algorithms - Chapter 2 39 Finite Series Design and Analysis of Algorithms - Chapter 2 40 Finite Series N S = ∑ ik ≈ i =1 I N S = ∑ i2 = i =1 N ( N + 1)(2 N + 1) 6 N k +1 | k +1| Proof: k i =1 1 41 ≈ ∫ x k dx = 1 x k +1 N |1 k +1 1 N k +1 = − k +1 k +1 3k 2k 1k Design and Analysis of Algorithms - Chapter 2 N N ∑i Nk 2 3 4 …… N N+1 Design and Analysis of Algorithms - Chapter 2 42 7 Design and Analysis of Algorithms Finite Series N S = ∑ Ai = i =0 Chapter 2 Finite Series A N +1 − 1 A −1 N N S = ∑ Ai = S = ∑ 2i = 2 N +1 − 1 i =1 i=0 Proof: A N +1 − A A −1 N S = ∑ 2i = 2 N +1 − 2 i =1 Proof: (1 + A + A2 + ... + A N )( A − 1) = ( A + A2 + ... + A N )( A − 1) = A + A2 + ... + A N +1 − 1 − A − A2 − ... − A N = A N +1 − 1 A2 + ... + A N +1 − A − A2 − ... − A N = A N +1 − A Design and Analysis of Algorithms - Chapter 2 43 Design and Analysis of Algorithms - Chapter 2 44 General Rules for Sums n n ∑ c = c∑1 = c(n − m + 1) i=m i =m ∑ (a + b ) = ∑ a + ∑ b i i i i i ∑ ca = c ∑ ai i i i n n+ k ∑a i=m i+k ∑a x i i = i+k i i ∑a i i =m + k = x k ∑ ai x i i Design and Analysis of Algorithms - Chapter 2 45 8