Computer Science 212: Data Structures and Algorithms
Instructor: Harry Plantinga
Text: Algorithm Design: Foundations, Analysis, and Internet Examples, by Goodrich and Tamassia

The Heart of Computer Science
- Data structures
- Study of important algorithms
- Algorithm analysis
- Algorithm design techniques
- Plus: intelligent systems

Course information: http://cs.calvin.edu/curriculum/cs/212/
Grades on Moodle

Why study DS & Algorithms?
- Some problems are difficult to solve, and good solutions are known
- Some "solutions" don't always work
- Some simple algorithms don't scale well
- Data structures and algorithms make good tools for addressing new problems
- Fun! Joy! Esthetic beauty!

Place in the curriculum
- Upper-level electives: the study of interesting areas of computer science
- 108, 112: basic tools (like algebra to mathematics)
- 212: more advanced tools; an introduction to the science of computing. (A little more mathematical than the other core CS courses, but not as rigorous as a real math course.)

Example: Search and Replace
Goal: replace one string with another. Which version is better?

Version 1:

    #!/usr/bin/perl
    my $a = shift;
    my $b = shift;
    my $input = "";
    while (<STDIN>) { $input .= $_; }
    # repeatedly replace the first match until none remain
    while ($input =~ m/$a/) {
        $input =~ s/$a/$b/;
    }

Version 2:

    #!/usr/bin/perl
    my $a = shift;
    my $b = shift;
    my $input = "";
    while (<STDIN>) { $input .= $_; }
    # replace all matches in one global pass
    $input =~ s/$a/$b/g;

Empirical Analysis
- Implement
- Gather runtime data for various inputs
- Analyze runtime vs. size

Example: suppose we test swap1 (the repeated-replace version).

swap1 runtime analysis

    Problem size (1000s)   Runtime (ms)
    1                      70
    2                      110
    4                      250
    8                      560
    16                     810
    32                     1750
    64                     6510
    128                    25680
    256                    102050

[Chart: swap1 runtime vs. problem size; a quadratic fit gives y = 1.555x^2 - 0.3518x + 232.72.]

swap2 runtime analysis

    Problem size (1000s)   Runtime (ms)
    4                      70
    8                      50
    16                     70
    32                     80
    64                     100
    128                    120
    256                    230
    512                    380
    1024                   580
    2048                   910
    4096                   1680
    8192                   3130
    16384                  6160

[Chart: swap2 runtime vs. problem size; the curve is close to a straight line.]

Runtime vs. Problem Size
[Chart: swap1 and swap2 runtimes on the same axes; swap1 dwarfs swap2.]

Runtime vs. Size, lg-lg plot
[Chart: lg(runtime) vs. lg(problem size). The swap1 points fit y = 1.9577x + 0.957; the swap2 points fit y = 0.9175x - 0.2822.]
- Note that the slope gives the growth rate of the runtime (a slope near 2 indicates quadratic growth, near 1 linear growth)
- Note that the first few points are inflated by startup costs

Empirical runtime analysis
What can we say about the runtimes of these two algorithms?
- We could give runtime as a function of input size, for a particular implementation
- But would that apply on different hardware? What about a different implementation, in C++ instead of Perl?
- Is there anything we can say that is true in general? Is version 2 better than version 1 in general? Why?

Empirical analysis: conclusions
- Empirical analyses are implementation-dependent
- You have to use representative sample data
- You can learn something about the growth rate from the slope or curve of the runtime graph
- Bad algorithms can't be fixed with faster computers
- Big companies (e.g. Microsoft) are not immune to bad algorithms
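Measurements like the swap1/swap2 tables above can be collected with a simple doubling harness. The sketch below is illustrative, not the course's actual test rig; swapTest is a hypothetical stand-in for whatever routine is being measured.

    #include <chrono>
    #include <iostream>

    // Hypothetical stand-in for the algorithm under test (e.g., swap1 or
    // swap2); replace the body with the real code being measured.
    void swapTest(int n) {
        volatile long sum = 0;
        for (int i = 0; i < n; ++i) sum += i;  // placeholder work
    }

    int main() {
        using namespace std::chrono;
        // Double the input size each round, as in the tables above.
        for (int n = 1000; n <= 256000; n *= 2) {
            auto start = steady_clock::now();
            swapTest(n);
            auto stop = steady_clock::now();
            auto ms = duration_cast<milliseconds>(stop - start).count();
            std::cout << n << "\t" << ms << " ms\n";
        }
    }

Plotting lg(runtime) against lg(n) from this output reproduces the log-log analysis above: the slope estimates the growth rate.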
What's the runtime?

    for i ← 0 to n−1 do
        for j ← 0 to n−1 do
            if A[i] > A[j] then
                A[j] ← A[i]

Theoretical Analysis
- Uses a high-level description of the algorithm instead of an implementation
- Characterizes running time as a function of the input size, n
- Takes into account all possible inputs, often analyzing the worst case
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment

Pseudocode (§1.1)
- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array:

    Algorithm arrayMax(A, n)
        Input: array A of n integers
        Output: maximum element of A
        currentMax ← A[0]
        for i ← 1 to n−1 do
            if A[i] > currentMax then
                currentMax ← A[i]
        return currentMax

Pseudocode Details
- Control flow: if … then … [else …]; while … do …; repeat … until …; for … do …; indentation replaces braces
- Method declaration: Algorithm method(arg [, arg…]), with Input … and Output … clauses
- Method call: var.method(arg [, arg…])
- Return value: return expression
- Expressions: ← for assignment (like = in Java); = for equality testing (like == in Java); superscripts such as n^2 and other mathematical formatting allowed

Questions…
- Can a program be asymptotically faster on one type of CPU vs. another?
- Do all CPU instructions take equally long?

The Random Access Machine (RAM) Model
- A CPU
- A potentially unbounded bank of memory cells (numbered 0, 1, 2, …), each of which can hold an arbitrary number or character
- Accessing any cell in memory takes unit time

Primitive Operations
- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent of any programming language
- Exact definition not important (a constant number of machine cycles per statement)
- Assumed to take a constant amount of time in the RAM model
- Examples: evaluating an expression, assigning a value to a variable, indexing into an array, calling a method, returning from a method

Counting Primitive Operations (§1.1)
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size:

    Algorithm arrayMax(A, n)              # operations
        currentMax ← A[0]                 2
        for i ← 1 to n−1 do               2 + n
            if A[i] > currentMax then     2(n−1)
                currentMax ← A[i]         2(n−1)
            { increment counter i }       2(n−1)
        return currentMax                 1
                                   Total: 7n − 1

Estimating Running Time
Algorithm arrayMax executes 7n − 1 primitive operations in the worst case. Define:
- a = time taken by the fastest primitive operation
- b = time taken by the slowest primitive operation
Let T(n) be the worst-case time of arrayMax. Then a(7n − 1) ≤ T(n) ≤ b(7n − 1). Hence, the running time T(n) is bounded by two linear functions.

Theoretical analysis: conclusion
- We can count the number of RAM-equivalent statements executed as a function of input size
- All remaining implementation dependencies amount to multiplicative constants, which we will ignore
- (But watch out for statements that take more than constant time, such as $input =~ s/$a/$b/g)
- Linear, quadratic, etc. runtime is an intrinsic property of an algorithm
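As a concrete reference point, here is one possible C++ rendering of the arrayMax pseudocode analyzed above — a minimal sketch, not the textbook's code:

    #include <iostream>

    // Direct translation of the arrayMax pseudocode: scan once,
    // remembering the largest element seen so far. O(n) time.
    int arrayMax(const int A[], int n) {
        int currentMax = A[0];
        for (int i = 1; i < n; ++i)
            if (A[i] > currentMax)
                currentMax = A[i];
        return currentMax;
    }

    int main() {
        int A[] = {12, 5, 31, 7, 26};
        std::cout << arrayMax(A, 5) << "\n";  // prints 31
    }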
What's the runtime?

    int n;
    cin >> n;
    for (int i=0; i<n; i++)
        for (int j=0; j<n; j++)
            for (int k=0; k<n; k++) {
                cout << "Hello, ";
                cout << "greetings, ";
                cout << "bonjour, ";
                cout << "guten tag, ";
                cout << "Здравствуйте, ";
                cout << "你好 ";
                cout << " world!";
            }

What's the runtime?

    int n;
    cin >> n;
    if (n<1000)
        for (int i=0; i<n; i++)
            for (int j=0; j<n; j++)
                for (int k=0; k<n; k++)
                    cout << "Hello\n";
    else
        for (int j=0; j<n; j++)
            for (int k=0; k<n; k++)
                cout << "world!\n";

Function Growth Rates
Growth rates of functions:
- Linear: n
- Quadratic: n^2
- Cubic: n^3
In a log-log chart, the slope of the line corresponds to the growth rate of the function.
[Chart: T(n) vs. n on log-log axes for cubic, quadratic, and linear functions.]

Constant factors
The growth rate is not affected by constant factors or lower-order terms. Examples:
- 10^2 n + 10^5 is a linear function
- 10^5 n^2 + 10^8 n is a quadratic function
[Chart: on log-log axes, the two quadratic functions appear as parallel lines, as do the two linear functions.]

Asymptotic (big-O) Notation (§1.2)
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for n ≥ n0.

Example: 2n + 10 is O(n):
- 2n + 10 ≤ cn
- (c − 2)n ≥ 10
- n ≥ 10/(c − 2)
- Pick c = 3 and n0 = 10
[Chart: 3n eventually dominates 2n + 10, which dominates n.]

Example
The function n^2 is not O(n):
- n^2 ≤ cn requires n ≤ c
- The above inequality cannot be satisfied, since c must be a constant
[Chart: n^2 outgrows 100n, 10n, and n.]

More Big-O Examples
- 7n − 2 is O(n): need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0; this is true for c = 7 and n0 = 1
- 3n^3 + 20n^2 + 5 is O(n^3): need c > 0 and n0 ≥ 1 such that 3n^3 + 20n^2 + 5 ≤ c·n^3 for n ≥ n0; this is true for c = 4 and n0 = 21
- 3 log n + log log n is O(log n): need c > 0 and n0 ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n0; this is true for c = 4 and n0 = 2

Asymptotic analysis of functions
Asymptotic analysis is equivalent to:
- ignoring multiplicative constants
- ignoring lower-order terms
- "for large enough inputs"

Big-O and growth rate
Big-O gives an upper bound on the growth rate of a function. Think of it as ≤, asymptotically speaking.
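The witness constants in the examples above can be sanity-checked numerically. A small sketch (mine, not from the slides) that verifies 3n^3 + 20n^2 + 5 ≤ 4n^3 over a finite range:

    #include <iostream>

    int main() {
        // Check the claimed witnesses c = 4, n0 = 21 for
        // 3n^3 + 20n^2 + 5 = O(n^3), for n up to 1000.
        for (long long n = 21; n <= 1000; ++n) {
            long long f  = 3*n*n*n + 20*n*n + 5;
            long long cg = 4*n*n*n;
            if (f > cg) {
                std::cout << "bound fails at n = " << n << "\n";
                return 1;
            }
        }
        std::cout << "3n^3 + 20n^2 + 5 <= 4n^3 for 21 <= n <= 1000\n";
    }

Of course a finite check is not a proof; the algebraic argument (20n^2 + 5 ≤ n^3 once n ≥ 21) is what establishes the bound for all n.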
Big-O Rules
- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  1. drop lower-order terms
  2. drop constant factors
- Use the smallest possible class of functions, if possible: say "2n is O(n)" instead of "2n is O(n^2)" (the former is a stronger statement)
- Use the simplest expression of the class: say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis
Asymptotic analysis: determine the runtime in big-O notation. To perform the asymptotic analysis:
- Find the worst-case number of primitive operations executed as a function of the input size
- Express this function with big-O notation
Example: we determined that algorithm arrayMax executes at most 7n − 1 primitive operations; we say that algorithm arrayMax "runs in O(n) time."
Since constant factors and lower-order terms are eventually dropped anyway, we can disregard them when counting primitive operations.

Computing Prefix Averages
Runtime analysis example: two algorithms for prefix averages. The i-th prefix average of an array X is the average of the first (i + 1) elements of X:

    A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)

Computing the array A of prefix averages of another array X has applications to financial analysis.
[Chart: bar chart of a small example array X and its prefix averages A.]

Prefix Averages (Quadratic)
The following algorithm computes prefix averages in quadratic time by applying the definition:

    Algorithm prefixAverages1(X, n)             # operations
        Input: array X of n integers
        Output: array A of prefix averages of X
        A ← new array of n integers             n
        for i ← 0 to n−1 do                     n
            s ← X[0]                            n
            for j ← 1 to i do                   1 + 2 + … + (n−1)
                s ← s + X[j]                    1 + 2 + … + (n−1)
            A[i] ← s / (i + 1)                  n
        return A                                1

Prefix Averages (Linear)
The following algorithm computes prefix averages in linear time by keeping a running sum:

    Algorithm prefixAverages2(X, n)             # operations
        Input: array X of n integers
        Output: array A of prefix averages of X
        A ← new array of n integers             n
        s ← 0                                   1
        for i ← 0 to n−1 do                     n
            s ← s + X[i]                        n
            A[i] ← s / (i + 1)                  n
        return A                                1

Algorithm prefixAverages2 runs in O(n) time.

Arithmetic Progression
- The running time of prefixAverages1 is O(1 + 2 + … + n)
- The sum of the first n integers is n(n + 1)/2 (there is a simple visual proof of this fact)
- Thus, algorithm prefixAverages1 runs in O(n^2) time
[Chart: staircase diagram giving the visual proof that 1 + 2 + … + n = n(n + 1)/2.]
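For comparison on real inputs, here is one way the two prefix-averages algorithms might look in C++ — a sketch following the pseudocode above, not code from the text:

    #include <iostream>
    #include <vector>

    // Quadratic version: recompute each prefix sum from scratch.
    std::vector<double> prefixAverages1(const std::vector<int>& X) {
        int n = (int)X.size();
        std::vector<double> A(n);
        for (int i = 0; i < n; ++i) {
            double s = X[0];
            for (int j = 1; j <= i; ++j)   // 1 + 2 + ... + (n-1) steps overall
                s += X[j];
            A[i] = s / (i + 1);
        }
        return A;
    }

    // Linear version: carry a running sum across iterations.
    std::vector<double> prefixAverages2(const std::vector<int>& X) {
        int n = (int)X.size();
        std::vector<double> A(n);
        double s = 0;
        for (int i = 0; i < n; ++i) {
            s += X[i];
            A[i] = s / (i + 1);
        }
        return A;
    }

    int main() {
        std::vector<int> X = {3, 7, 2, 8};
        std::vector<double> A1 = prefixAverages1(X), A2 = prefixAverages2(X);
        for (int i = 0; i < (int)X.size(); ++i)
            std::cout << A1[i] << " " << A2[i] << "\n";  // identical: 3 5 4 5
    }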
What's the runtime?

    int n;
    cin >> n;
    for (int i=0; i<n; i++)
        for (int j=0; j<n; j++)
            for (int k=0; k<n; k++)
                cout << "Hello world!\n";

2n^3 + n^2 + n + 2 operations? O(n^3) runtime. What if the last line is replaced by:

    string *s = new string("Hello world!\n");

Then: O(n^3) time and space.

What's the runtime?

    int n;
    cin >> n;
    for (int i=0; i<n; i++)
        for (int j=0; j<n; j++)
            for (int k=0; k<n; k++)
                cout << "Hello world!\n";
    for (int i=0; i<n; i++)
        for (int j=0; j<n; j++)
            for (int k=0; k<n; k++)
                cout << "Hello world!\n";

O(n^3) + O(n^3) = O(n^3). Statements or blocks in sequence: add.

What's the runtime?

    int n;
    cin >> n;
    for (int i=0; i<n; i++)
        for (int j=n; j>1; j/=2)
            cout << "Hello world!\n";

Loops: add up the cost of each iteration (multiply the loop cost by the number of iterations if they all take the same time). Here the inner loop runs about lg n times for each of the n outer iterations: O(n log n).

What's the runtime?

    int n;
    cin >> n;
    for (int i=1; i<=n; i++)
        for (int j=1; j<=i; j++)
            cout << "Hello world!\n";

Loops: add up the cost of each iteration: 1 + 2 + 3 + … + n = n(n+1)/2 = Θ(n^2).

What's the runtime?

    template <class Item>
    void insert(Item a[], int l, int r)
    {
        int i;
        // First pass: bubble the smallest element down to a[l], where it
        // serves as a sentinel for the while loop below. (compexch is an
        // assumed helper that exchanges its arguments if out of order.)
        for (i=r; i>l; i--) compexch(a[i-1], a[i]);
        // Insertion sort proper on a[l+1..r].
        for (i=l+2; i<=r; i++) {
            int j=i; Item v=a[i];
            while (v < a[j-1]) {
                a[j] = a[j-1]; j--;
            }
            a[j] = v;
        }
    }

Math you need to Review
- Summations (Sec. 1.3.1)
- Logarithms and exponents (Sec. 1.3.2)
- Proof techniques (Sec. 1.3.3)
- Basic probability (Sec. 1.3.4)

Properties of logarithms:
- log_b(xy) = log_b x + log_b y
- log_b(x/y) = log_b x − log_b y
- log_b(x^a) = a log_b x
- log_b a = log_x a / log_x b

Properties of exponentials:
- a^(b+c) = a^b · a^c
- a^(bc) = (a^b)^c
- a^b / a^c = a^(b−c)
- b = a^(log_a b)
- b^c = a^(c·log_a b)

Relatives of Big-Oh
- big-Omega: f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
- big-Theta: f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n0 ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n0
- little-o: f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n0 ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n0
- little-omega: f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n0 ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n0

Intuition for Asymptotic Notation
- Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
- little-o: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)
- little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)

Example Uses of the Relatives of Big-Oh
- 5n^2 is Ω(n^2): need c > 0 and n0 ≥ 1 such that 5n^2 ≥ c·n^2 for n ≥ n0; let c = 5 and n0 = 1
- 5n^2 is Ω(n): need c > 0 and n0 ≥ 1 such that 5n^2 ≥ c·n for n ≥ n0; let c = 1 and n0 = 1
- 5n^2 is ω(n): need, for any constant c > 0, an integer constant n0 ≥ 0 such that 5n0^2 ≥ c·n0; given c, any n0 ≥ c/5 satisfies this

Asymptotic Analysis: Review
What does it mean to say that an algorithm has runtime O(n log n)?
- n: problem size
- Big-O: upper bound over all inputs of size n
- "Ignore constant factors" (why?)
- "as n grows large"
- O: like ≤ for functions (asymptotically speaking); Ω: like ≥; Θ: like =

Asymptotic notation: examples
What is the asymptotic runtime, in terms of O, Ω, Θ? Suppose the runtime for a function is:
- n^2 + 2n log n + 40
- 0.0000001 n^2 + 1000000 n^1.999
- n^3 + n^2 log n
- n^2.0001 + n^2 log n
- 2^n + 100 n^2
- 1.00001^n + 100 n^97

Asymptotic comparisons
- 0.0000001 n^2 = O(1000000 n^1.999)? No: a polynomial with a higher power dominates one with a lower power
- n^1.000001 = O(n log n)? No: any polynomial (even n^0.000001) dominates any polylog (log n)
- 1.0001^n = O(n^943)? No: all exponentials dominate any polynomial
- lg n = Θ(ln n)? Yes: different bases are just a constant factor difference
(To compare two functions, evaluate the limit of the quotient of the functions.)

Estimate the runtime
- Suppose an algorithm has runtime Θ(n^3), and solving a problem of size 1000 takes 10 seconds. How long to solve a problem of size 10000? Runtime ≈ 10^-8 n^3; if n = 10000, runtime ≈ 10000 s ≈ 2.7 hr.
- Suppose an algorithm has runtime Θ(n log n), and solving a problem of size 1000 takes 10 seconds. How long to solve a problem of size 10000? Runtime ≈ 10^-3 n lg n; if n = 10000, runtime ≈ 133 s.
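A quick sketch of the extrapolation arithmetic in the last example, assuming a pure Θ(n^3) cost model with the constant fit from the single measured point:

    #include <cmath>
    #include <iostream>

    int main() {
        // Fit c from the measurement T(1000) = 10 s, assuming T(n) = c * n^3.
        double c = 10.0 / std::pow(1000.0, 3);   // c = 1e-8
        // Predict the time for n = 10000.
        double t = c * std::pow(10000.0, 3);
        std::cout << t << " s (~" << t / 3600 << " hr)\n";  // 10000 s, ~2.8 hr
    }

The same one-point fit with T(n) = c · n lg n reproduces the 133-second estimate.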
Worst vs. average case
- You might be interested in worst, best, or average case analysis of an algorithm
- You can have upper, lower, or tight bounds on each of those functions
- E.g., suppose that for each n, some problem instances of size n have runtime n and some have runtime n^2:
  - Worst case: Θ(n^2), Ω(n), Ω(log n), O(n^2), O(n^3)
  - Best case: Θ(n), Ω(n), Ω(log n), O(n^2)
  - Average case: Ω(n), Ω(log n), O(n^2), O(n^3)
- Average case: you need to know the distribution of inputs

Problem: Extensible Arrays
A data structure for a stack that is…
- As fast and easy as an array (usually)
- Not limited in size
How?

Extensible Array
- Usually works like a normal array
- When it fills, copy the entire contents into a new array of twice the size
- Runtime?

Analysis of Extensible Arrays
- Worst-case runtime for a push() operation? Is that the most useful way of describing the result?
- Amortized analysis: averaging out the cost over many operations
- Amortized runtime of a push() operation?
- Total runtime of m push operations:
  - m steps when the array doesn't grow
  - worst-case cost of the grow operations: 1 + 2 + 4 + 8 + … + m < 2m
  - total: O(m) time for m operations, so O(1) amortized runtime per operation

Theoretical and empirical analysis
Why do we ignore constants in analyzing algorithms theoretically? Wouldn't it be better if we knew the actual constants? Together, theoretical and empirical analysis give us:
- theoretical knowledge of asymptotic runtime
- empirical knowledge of actual constants for some implementation
Best of both worlds!

Analyze this…

    function fun (int n)
        if (n < 1000)
            for (i=1; i<=n; i++)
                for (j=1; j<=n; j++)
                    for (k=1; k<=n; k++)
                        cout << "Hello world!\n";
        else {
            for (i=1; i<=n; i++) {
                j = i;
                while (j >= 1)
                    j = j / 2;
                cout << "Hello world!\n";
            }
            if (n%2)
                for (i=1; i<=n; i++)
                    for (j=1; j<=n; j++)
                        cout << "Hello world!\n";
        }
        cout << "Have a nice day.\n";
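Looking back at the extensible-array discussion: a minimal sketch of the doubling push, assuming a bare-bones int stack (a real implementation would be templated and handle copying and errors):

    #include <iostream>

    // Bare-bones extensible array of ints illustrating the doubling strategy.
    class ExtArray {
        int* data;
        int size;      // elements in use
        int capacity;  // allocated slots
    public:
        ExtArray() : data(new int[1]), size(0), capacity(1) {}
        ~ExtArray() { delete[] data; }

        void push(int x) {
            if (size == capacity) {
                // Grow: copy everything into a new array of twice the size.
                // This costs O(size), but happens so rarely that the total
                // over m pushes is 1 + 2 + 4 + ... + m < 2m.
                capacity *= 2;
                int* bigger = new int[capacity];
                for (int i = 0; i < size; ++i) bigger[i] = data[i];
                delete[] data;
                data = bigger;
            }
            data[size++] = x;  // the common O(1) case
        }
    };

    int main() {
        ExtArray a;
        for (int i = 0; i < 1000; ++i) a.push(i);  // O(1) amortized per push
    }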