Algorithms: COMP3121/9101
Aleks Ignjatović, ignjat@cse.unsw.edu.au, office: 504 (CSE building K17)
Admin: Song Fang, cs3121@cse.unsw.edu.au
School of Computer Science and Engineering, University of New South Wales, Sydney

3. LARGE INTEGER MULTIPLICATION

Basics revisited: how do we multiply two numbers?

The primary school algorithm produces one partial product per digit of the second factor and then adds up the shifted partial products:

          X X X X      <- first input integer
        * X X X X      <- second input integer
        ---------
          X X X X   \
        X X X X      \   O(n^2) intermediate operations:
      X X X X        /   O(n^2) elementary multiplications
    X X X X         /    + O(n^2) elementary additions
    -------------
    X X X X X X X X    <- result, of length 2n

Can we do it faster than in n^2 many steps?

The Karatsuba trick

Take the two n-bit input numbers A and B and split each of them into two halves of n/2 bits:

A = A1 · 2^(n/2) + A0,   where A1 is the block of the n/2 more significant bits of A and A0 the block of the n/2 less significant bits;
B = B1 · 2^(n/2) + B0.

AB can now be calculated as follows:

AB = A1 B1 · 2^n + (A1 B0 + A0 B1) · 2^(n/2) + A0 B0
   = A1 B1 · 2^n + ((A1 + A0)(B1 + B0) − A1 B1 − A0 B0) · 2^(n/2) + A0 B0.

We have saved one multiplication: instead of four we now need only three, namely A0 B0, A1 B1 and (A1 + A0)(B1 + B0).

function Mult(A, B)
    if |A| = |B| = 1 then return AB
    else
        A1 ← MoreSignificantPart(A); A0 ← LessSignificantPart(A)
        B1 ← MoreSignificantPart(B); B0 ← LessSignificantPart(B)
        U ← A0 + A1; V ← B0 + B1
        X ← Mult(A0, B0)
        W ← Mult(A1, B1)
        Y ← Mult(U, V)
        return W · 2^n + (Y − X − W) · 2^(n/2) + X
    end if
end function

The Karatsuba trick

How many steps does this algorithm take? (Remember, addition runs in linear time!)

Recurrence: T(n) = 3 T(n/2) + c n.

Here a = 3, b = 2 and f(n) = c n, so n^{log_b a} = n^{log_2 3}, with 1.5 < log_2 3 < 1.6. Hence f(n) = c n = O(n^{log_2 3 − ε}) for any 0 < ε < 0.5, so the first case of the Master Theorem applies. Consequently

T(n) = Θ(n^{log_2 3}) = O(n^{1.585}),

without going through the messy calculations!
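To make the recursion concrete, here is a minimal Python sketch of the two-piece Karatsuba scheme just described. It is an illustration, not the lecture's reference implementation: it assumes nonnegative integers, and the cutoff below which it falls back to built-in multiplication is an arbitrary choice of mine.

```python
def karatsuba(a, b):
    """Multiply nonnegative integers a and b with the two-piece Karatsuba recursion.

    A sketch for illustration: below a small cutoff we simply use Python's
    built-in multiplication instead of recursing all the way down to single bits.
    """
    if a < 2**32 or b < 2**32:                     # base case (cutoff chosen arbitrarily)
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    a1, a0 = a >> half, a & ((1 << half) - 1)      # A = A1 * 2^half + A0
    b1, b0 = b >> half, b & ((1 << half) - 1)      # B = B1 * 2^half + B0
    x = karatsuba(a0, b0)                          # A0 * B0
    w = karatsuba(a1, b1)                          # A1 * B1
    y = karatsuba(a0 + a1, b0 + b1)                # (A0 + A1) * (B0 + B1)
    # AB = W * 2^(2*half) + (Y - X - W) * 2^half + X
    return (w << (2 * half)) + ((y - x - w) << half) + x

# quick sanity check against built-in multiplication
import random
a, b = random.getrandbits(1000), random.getrandbits(1000)
assert karatsuba(a, b) == a * b
```

In practice the cutoff matters: recursing down to single bits makes the linear overhead dominate, so real implementations switch to the schoolbook method (here, the built-in operator) once the operands are small.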
Generalizing Karatsuba's algorithm

Can we do better if we break the numbers into more than two pieces?

Let us try breaking the numbers A, B into 3 pieces; with k = n/3 we obtain three blocks of k bits each, A2, A1, A0 (and likewise B2, B1, B0), i.e.,

A = A2 · 2^{2k} + A1 · 2^k + A0
B = B2 · 2^{2k} + B1 · 2^k + B0.

So,

AB = A2 B2 · 2^{4k} + (A2 B1 + A1 B2) · 2^{3k} + (A2 B0 + A1 B1 + A0 B2) · 2^{2k} + (A1 B0 + A0 B1) · 2^k + A0 B0.

The Karatsuba trick

Writing AB = C4 · 2^{4k} + C3 · 2^{3k} + C2 · 2^{2k} + C1 · 2^k + C0, we need only 5 coefficients:

C4 = A2 B2
C3 = A2 B1 + A1 B2
C2 = A2 B0 + A1 B1 + A0 B2
C1 = A1 B0 + A0 B1
C0 = A0 B0

Can we get these with 5 multiplications only? Should we perhaps look at

(A2 + A1 + A0)(B2 + B1 + B0) = A0 B0 + A1 B0 + A2 B0 + A0 B1 + A1 B1 + A2 B1 + A0 B2 + A1 B2 + A2 B2 ???

It is not at all clear how to get C0, ..., C4 with 5 multiplications only ...

The Karatsuba trick: slicing into 3 pieces

We now look for a method for obtaining these coefficients without any guesswork!

Let A = A2 · 2^{2k} + A1 · 2^k + A0 and B = B2 · 2^{2k} + B1 · 2^k + B0.

We form the naturally corresponding polynomials:

PA(x) = A2 x^2 + A1 x + A0;    PB(x) = B2 x^2 + B1 x + B0.

Note that

A = A2 (2^k)^2 + A1 · 2^k + A0 = PA(2^k);    B = B2 (2^k)^2 + B1 · 2^k + B0 = PB(2^k).

If we manage to compute somehow the product polynomial

PC(x) = PA(x) PB(x) = C4 x^4 + C3 x^3 + C2 x^2 + C1 x + C0

with only 5 multiplications, we can then obtain the product of the numbers A and B simply as

A · B = PA(2^k) PB(2^k) = PC(2^k) = C4 · 2^{4k} + C3 · 2^{3k} + C2 · 2^{2k} + C1 · 2^k + C0.

Note that the right hand side involves only shifts and additions.

Since the product polynomial PC(x) = PA(x) PB(x) is of degree 4, we need 5 values to determine PC(x) uniquely. We choose the 5 integer values smallest in absolute value, i.e., −2, −1, 0, 1, 2. Thus, we compute

PA(−2), PA(−1), PA(0), PA(1), PA(2) and PB(−2), PB(−1), PB(0), PB(1), PB(2).

The Karatsuba trick: slicing into 3 pieces

For PA(x) = A2 x^2 + A1 x + A0 we have

PA(−2) = A2 (−2)^2 + A1 (−2) + A0 = 4A2 − 2A1 + A0
PA(−1) = A2 (−1)^2 + A1 (−1) + A0 = A2 − A1 + A0
PA(0)  = A0
PA(1)  = A2 + A1 + A0
PA(2)  = A2 · 2^2 + A1 · 2 + A0 = 4A2 + 2A1 + A0.

Similarly, for PB(x) = B2 x^2 + B1 x + B0 we have

PB(−2) = 4B2 − 2B1 + B0
PB(−1) = B2 − B1 + B0
PB(0)  = B0
PB(1)  = B2 + B1 + B0
PB(2)  = 4B2 + 2B1 + B0.

These evaluations involve only additions, because 2A = A + A and 4A = 2A + 2A.

Having obtained PA(−2), PA(−1), PA(0), PA(1), PA(2) and PB(−2), PB(−1), PB(0), PB(1), PB(2), we can now obtain PC(−2), PC(−1), PC(0), PC(1), PC(2) with only 5 multiplications of large numbers:

PC(−2) = PA(−2) PB(−2) = (A0 − 2A1 + 4A2)(B0 − 2B1 + 4B2)
PC(−1) = PA(−1) PB(−1) = (A0 − A1 + A2)(B0 − B1 + B2)
PC(0)  = PA(0) PB(0)   = A0 B0
PC(1)  = PA(1) PB(1)   = (A0 + A1 + A2)(B0 + B1 + B2)
PC(2)  = PA(2) PB(2)   = (A0 + 2A1 + 4A2)(B0 + 2B1 + 4B2)

Thus, if we represent the product PC(x) = PA(x) PB(x) in coefficient form as PC(x) = C4 x^4 + C3 x^3 + C2 x^2 + C1 x + C0, we get

C4 (−2)^4 + C3 (−2)^3 + C2 (−2)^2 + C1 (−2) + C0 = PC(−2) = PA(−2) PB(−2)
C4 (−1)^4 + C3 (−1)^3 + C2 (−1)^2 + C1 (−1) + C0 = PC(−1) = PA(−1) PB(−1)
C4 · 0^4 + C3 · 0^3 + C2 · 0^2 + C1 · 0 + C0 = PC(0) = PA(0) PB(0)
C4 · 1^4 + C3 · 1^3 + C2 · 1^2 + C1 · 1 + C0 = PC(1) = PA(1) PB(1)
C4 · 2^4 + C3 · 2^3 + C2 · 2^2 + C1 · 2 + C0 = PC(2) = PA(2) PB(2).

Simplifying the left hand sides we obtain

16C4 − 8C3 + 4C2 − 2C1 + C0 = PC(−2)
   C4 −  C3 +  C2 −  C1 + C0 = PC(−1)
                          C0 = PC(0)
   C4 +  C3 +  C2 +  C1 + C0 = PC(1)
16C4 + 8C3 + 4C2 + 2C1 + C0 = PC(2)

The Karatsuba trick: slicing into 3 pieces

Solving this system of linear equations for C0, C1, C2, C3, C4 produces (as an exercise, solve the system by hand using Gaussian elimination):

C0 = PC(0)
C1 = PC(−2)/12 − 2PC(−1)/3 + 2PC(1)/3 − PC(2)/12
C2 = −PC(−2)/24 + 2PC(−1)/3 − 5PC(0)/4 + 2PC(1)/3 − PC(2)/24
C3 = −PC(−2)/12 + PC(−1)/6 − PC(1)/6 + PC(2)/12
C4 = PC(−2)/24 − PC(−1)/6 + PC(0)/4 − PC(1)/6 + PC(2)/24

Note that these expressions do not involve any multiplications of TWO large numbers, and thus can be evaluated in linear time.

With the coefficients C0, C1, C2, C3, C4 obtained, we can now form the polynomial

PC(x) = C0 + C1 x + C2 x^2 + C3 x^3 + C4 x^4.
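As a sanity check on these interpolation formulas (an illustrative sketch of mine, not part of the lecture), the following Python snippet multiplies two random quadratics directly and then recovers the coefficients of the product from its values at −2, −1, 0, 1, 2 using the expressions above; fractions.Fraction keeps the arithmetic exact.

```python
from fractions import Fraction
import random

# random quadratics PA, PB with integer coefficients, stored as [A0, A1, A2]
A = [random.randint(0, 2**16) for _ in range(3)]
B = [random.randint(0, 2**16) for _ in range(3)]

def evaluate(P, x):
    return P[0] + P[1] * x + P[2] * x * x

# the five products PC(m) = PA(m) * PB(m) for m = -2, -1, 0, 1, 2
pc = {m: Fraction(evaluate(A, m) * evaluate(B, m)) for m in (-2, -1, 0, 1, 2)}

# interpolation formulas from the slide
C0 = pc[0]
C1 = pc[-2]/12 - 2*pc[-1]/3 + 2*pc[1]/3 - pc[2]/12
C2 = -pc[-2]/24 + 2*pc[-1]/3 - 5*pc[0]/4 + 2*pc[1]/3 - pc[2]/24
C3 = -pc[-2]/12 + pc[-1]/6 - pc[1]/6 + pc[2]/12
C4 = pc[-2]/24 - pc[-1]/6 + pc[0]/4 - pc[1]/6 + pc[2]/24

# direct coefficients of PA(x)*PB(x) for comparison
direct = [0] * 5
for i in range(3):
    for j in range(3):
        direct[i + j] += A[i] * B[j]

assert [C0, C1, C2, C3, C4] == direct
```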
The Karatsuba trick: slicing into 3 pieces

We can now compute

PC(2^k) = C0 + C1 · 2^k + C2 · 2^{2k} + C3 · 2^{3k} + C4 · 2^{4k}

in linear time, because computing PC(2^k) involves only binary shifts of the coefficients plus O(k) additions. Thus we have obtained

A · B = PA(2^k) PB(2^k) = PC(2^k)

with only 5 multiplications! Here is the complete algorithm:

function Mult(A, B)
    obtain A0, A1, A2 and B0, B1, B2 such that
        A = A2 · 2^{2k} + A1 · 2^k + A0;    B = B2 · 2^{2k} + B1 · 2^k + B0
    form the polynomials PA(x) = A2 x^2 + A1 x + A0; PB(x) = B2 x^2 + B1 x + B0
    PA(−2) ← 4A2 − 2A1 + A0        PB(−2) ← 4B2 − 2B1 + B0
    PA(−1) ← A2 − A1 + A0          PB(−1) ← B2 − B1 + B0
    PA(0) ← A0                     PB(0) ← B0
    PA(1) ← A2 + A1 + A0           PB(1) ← B2 + B1 + B0
    PA(2) ← 4A2 + 2A1 + A0         PB(2) ← 4B2 + 2B1 + B0
    PC(−2) ← Mult(PA(−2), PB(−2));  PC(−1) ← Mult(PA(−1), PB(−1));  PC(0) ← Mult(PA(0), PB(0))
    PC(1) ← Mult(PA(1), PB(1));     PC(2) ← Mult(PA(2), PB(2))
    C0 ← PC(0)
    C1 ← PC(−2)/12 − 2PC(−1)/3 + 2PC(1)/3 − PC(2)/12
    C2 ← −PC(−2)/24 + 2PC(−1)/3 − 5PC(0)/4 + 2PC(1)/3 − PC(2)/24
    C3 ← −PC(−2)/12 + PC(−1)/6 − PC(1)/6 + PC(2)/12
    C4 ← PC(−2)/24 − PC(−1)/6 + PC(0)/4 − PC(1)/6 + PC(2)/24
    form PC(x) = C4 x^4 + C3 x^3 + C2 x^2 + C1 x + C0
    compute PC(2^k) = C4 · 2^{4k} + C3 · 2^{3k} + C2 · 2^{2k} + C1 · 2^k + C0
    return PC(2^k) = A · B
end function
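Before analysing its running time, here is a compact Python sketch of this three-piece scheme, under simplifying assumptions of mine: nonnegative inputs, built-in multiplication below an arbitrary cutoff, and explicit sign handling for the evaluation points −2 and −1 (which the pseudocode glosses over). The integer divisions are exact because the true coefficients are integers.

```python
def mult3(a, b):
    """Three-piece (Toom-3 style) multiplication sketch for nonnegative integers."""
    if a < 2**64 or b < 2**64:                    # base case: fall back to built-in
        return a * b
    k = (max(a.bit_length(), b.bit_length()) + 2) // 3
    mask = (1 << k) - 1
    a0, a1, a2 = a & mask, (a >> k) & mask, a >> (2 * k)   # A = A2*2^(2k) + A1*2^k + A0
    b0, b1, b2 = b & mask, (b >> k) & mask, b >> (2 * k)
    # evaluate PA and PB at -2, -1, 0, 1, 2 (only additions/shifts are needed)
    pa = {-2: 4*a2 - 2*a1 + a0, -1: a2 - a1 + a0, 0: a0,
           1: a2 + a1 + a0,      2: 4*a2 + 2*a1 + a0}
    pb = {-2: 4*b2 - 2*b1 + b0, -1: b2 - b1 + b0, 0: b0,
           1: b2 + b1 + b0,      2: 4*b2 + 2*b1 + b0}
    # five recursive multiplications give PC at the five points;
    # PA(-2) and PA(-1) may be negative, so multiply magnitudes and restore the sign
    def signed_mult(x, y):
        s = -1 if (x < 0) != (y < 0) else 1
        return s * mult3(abs(x), abs(y))
    pc = {m: signed_mult(pa[m], pb[m]) for m in (-2, -1, 0, 1, 2)}
    # interpolate the coefficients C0..C4 (the divisions are exact)
    c0 = pc[0]
    c1 = (pc[-2] - 8*pc[-1] + 8*pc[1] - pc[2]) // 12
    c2 = (-pc[-2] + 16*pc[-1] - 30*pc[0] + 16*pc[1] - pc[2]) // 24
    c3 = (-pc[-2] + 2*pc[-1] - 2*pc[1] + pc[2]) // 12
    c4 = (pc[-2] - 4*pc[-1] + 6*pc[0] - 4*pc[1] + pc[2]) // 24
    # recombine: A*B = PC(2^k)
    return (c4 << 4*k) + (c3 << 3*k) + (c2 << 2*k) + (c1 << k) + c0

import random
a, b = random.getrandbits(2000), random.getrandbits(2000)
assert mult3(a, b) == a * b
```

The interpolation lines are the slide's formulas with each right hand side put over a common denominator, so that Python's integer division can be used instead of rationals.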
The Karatsuba trick: slicing into 3 pieces

How fast is this algorithm?

We have replaced one multiplication of two n-bit numbers by 5 multiplications of (roughly) n/3-bit numbers, with an overhead of additions, shifts and the like, all doable in linear time c n; thus

T(n) = 5 T(n/3) + c n.

We now apply the Master Theorem: we have a = 5, b = 3, so we consider n^{log_b a} = n^{log_3 5} ≈ n^{1.465}. Clearly, the first case of the MT applies and we get

T(n) = Θ(n^{log_3 5}) = O(n^{1.47}).

The Karatsuba trick: slicing into 3 pieces

Recall that the original Karatsuba algorithm runs in time Θ(n^{log_2 3}) ≈ n^{1.58}, and 1.58 > 1.47; thus we got a significantly faster algorithm.

Then why not slice the numbers A and B into an even larger number of pieces? Maybe we can get an even faster algorithm?

The answer is, in a sense, BOTH yes and no! One can show that slicing the numbers into a larger number of pieces does produce asymptotically faster algorithms, but the constants appearing in the asymptotic estimates explode (they involve p^p, where p + 1 is the number of slices).

This shows that asymptotic estimates are useful only if the sizes of the constants involved are small!

Generalizing Karatsuba's algorithm (not examinable)

The general case: slicing the input numbers A, B into p + 1 slices.

For simplicity, let us assume A and B have exactly (p + 1)k bits (otherwise one of the slices will have to be shorter).

Note: p is a fixed (smallish) number, a parameter of our design, and p + 1 is the number of slices we are going to make; but k depends on the input values A and B and can be arbitrarily large!

Slice A and B into p + 1 pieces each:

A = Ap · 2^{kp} + A_{p−1} · 2^{k(p−1)} + · · · + A0
B = Bp · 2^{kp} + B_{p−1} · 2^{k(p−1)} + · · · + B0

[Figure: A divided into p + 1 slices Ap, A_{p−1}, ..., A0, each slice of k bits, (p + 1)k bits in total.]

Generalizing Karatsuba's algorithm

We form the naturally corresponding polynomials:

PA(x) = Ap x^p + A_{p−1} x^{p−1} + · · · + A0
PB(x) = Bp x^p + B_{p−1} x^{p−1} + · · · + B0

As before, we have

A = PA(2^k);   B = PB(2^k);   AB = PA(2^k) PB(2^k) = (PA(x) · PB(x)) |_{x = 2^k}.

Since AB = (PA(x) · PB(x)) |_{x = 2^k}, we adopt the following strategy: we will first figure out how to multiply polynomials fast, obtaining PC(x) = PA(x) · PB(x); then we evaluate PC(2^k).

Note that PC(x) = PA(x) · PB(x) is of degree 2p:

PC(x) = Σ_{j=0}^{2p} Cj x^j.

Generalizing Karatsuba's algorithm

Example:

(A3 x^3 + A2 x^2 + A1 x + A0)(B3 x^3 + B2 x^2 + B1 x + B0)
  = A3 B3 x^6 + (A2 B3 + A3 B2) x^5 + (A1 B3 + A2 B2 + A3 B1) x^4
  + (A0 B3 + A1 B2 + A2 B1 + A3 B0) x^3 + (A0 B2 + A1 B1 + A2 B0) x^2
  + (A0 B1 + A1 B0) x + A0 B0.

In general, for PA(x) = Ap x^p + A_{p−1} x^{p−1} + · · · + A0 and PB(x) = Bp x^p + B_{p−1} x^{p−1} + · · · + B0 we have

PA(x) · PB(x) = Σ_{j=0}^{2p} ( Σ_{i+k=j} Ai Bk ) x^j = Σ_{j=0}^{2p} Cj x^j.

We need to find the coefficients Cj = Σ_{i+k=j} Ai Bk without performing the (p + 1)^2 multiplications necessary to get all products of the form Ai Bk.

A VERY IMPORTANT DIGRESSION:

If you have two sequences A = (A0, A1, ..., A_{p−1}, Ap) and B = (B0, B1, ..., B_{m−1}, Bm), and if you form the two corresponding polynomials

PA(x) = A0 + A1 x + ... + A_{p−1} x^{p−1} + Ap x^p
PB(x) = B0 + B1 x + ... + B_{m−1} x^{m−1} + Bm x^m

and if you multiply these two polynomials to obtain their product

PA(x) · PB(x) = Σ_{j=0}^{p+m} ( Σ_{i+k=j} Ai Bk ) x^j = Σ_{j=0}^{p+m} Cj x^j,

then the sequence C = (C0, C1, ..., C_{p+m}) of the coefficients of the product polynomial, with these coefficients given by

Cj = Σ_{i+k=j} Ai Bk,   for 0 ≤ j ≤ p + m,

is extremely important and is called the LINEAR CONVOLUTION of the sequences A and B, denoted by C = A ⋆ B.

AN IMPORTANT DIGRESSION:

For example, if you have an audio signal and you want to emphasise the bass, you pass the sequence of discrete samples of the signal through a digital filter which amplifies the low frequencies more than the medium and high audio frequencies.

This is accomplished by computing the linear convolution of the sequence of discrete samples of the signal with a sequence of values corresponding to that filter, called the impulse response of the filter.

This means that the samples of the output sound are simply the coefficients of the product of two polynomials:
1. the polynomial PA(x) whose coefficients Ai are the samples of the input signal;
2. the polynomial PB(x) whose coefficients Bk are the samples of the so-called impulse response of the filter (they depend on what kind of filtering you want to do).

Convolutions are the bread and butter of signal processing, and for that reason it is extremely important to find fast ways of multiplying two polynomials of possibly very large degrees. In signal processing these degrees can be greater than 1000. This is the main reason for us to study methods of fast computation of convolutions (aside from finding products of large integers, which is what we are doing at the moment).
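To make the definition concrete, here is a small Python sketch (illustrative only; the function name is mine) that computes the linear convolution of two sequences directly from the formula Cj = Σ_{i+k=j} Ai Bk. It is exactly polynomial multiplication in coefficient form, and it shows the quadratic cost we are trying to avoid.

```python
def linear_convolution(A, B):
    """Linear convolution of sequences A and B: C[j] = sum of A[i]*B[k] over all i + k = j.

    This is the same as multiplying the polynomials whose coefficient lists are A and B.
    The direct computation costs len(A) * len(B) elementary multiplications.
    """
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for k, b in enumerate(B):
            C[i + k] += a * b
    return C

# example: (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
assert linear_convolution([1, 2, 3], [4, 5]) == [4, 13, 22, 15]
```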
Coefficient vs value representation of polynomials

Every polynomial PA(x) of degree p is uniquely determined by its values at any p + 1 distinct inputs x0, x1, ..., xp:

PA(x) ↔ {(x0, PA(x0)), (x1, PA(x1)), ..., (xp, PA(xp))}

For PA(x) = Ap x^p + A_{p−1} x^{p−1} + ... + A0, these values can be obtained via a matrix multiplication:

    [ PA(x0) ]   [ 1  x0  x0^2 ... x0^p ] [ A0 ]
    [ PA(x1) ]   [ 1  x1  x1^2 ... x1^p ] [ A1 ]
    [  ...   ] = [ ...              ... ] [ ...]        (1)
    [ PA(xp) ]   [ 1  xp  xp^2 ... xp^p ] [ Ap ]

It can be shown that if the xi are all distinct then this matrix is invertible. Such a matrix is called a Vandermonde matrix.

Coefficient vs value representation of polynomials - ctd.

Thus, if the xi are all distinct, then given any values PA(x0), PA(x1), ..., PA(xp), the coefficients A0, A1, ..., Ap of the polynomial PA(x) are uniquely determined:

    [ A0 ]   [ 1  x0  x0^2 ... x0^p ]^(−1) [ PA(x0) ]
    [ A1 ]   [ 1  x1  x1^2 ... x1^p ]      [ PA(x1) ]
    [ ...] = [ ...              ... ]      [  ...   ]    (2)
    [ Ap ]   [ 1  xp  xp^2 ... xp^p ]      [ PA(xp) ]

Equations (1) and (2) show how we can commute between:
1. a representation of a polynomial PA(x) via its coefficients Ap, A_{p−1}, ..., A0, i.e. PA(x) = Ap x^p + ... + A1 x + A0;
2. a representation of PA(x) via its values, PA(x) ↔ {(x0, PA(x0)), (x1, PA(x1)), ..., (xp, PA(xp))}.

Coefficient vs value representation of polynomials - ctd.

If we fix the inputs x0, x1, ..., xp, then commuting between the coefficient representation of a polynomial PA(x) and its value representation at these points is done via the two matrix multiplications (1) and (2), with matrices made up entirely of constants (they depend only on the chosen points, not on the polynomial).

Thus, for fixed input values x0, ..., xp, switching between the two kinds of representations can be done in linear time!

Our strategy to multiply polynomials fast:

1. Given two polynomials of degree at most p,
   PA(x) = Ap x^p + ... + A0;    PB(x) = Bp x^p + ... + B0,
   convert them into value representation at 2p + 1 distinct points x0, x1, ..., x_{2p}:
   PA(x) ↔ {(x0, PA(x0)), (x1, PA(x1)), ..., (x_{2p}, PA(x_{2p}))}
   PB(x) ↔ {(x0, PB(x0)), (x1, PB(x1)), ..., (x_{2p}, PB(x_{2p}))}
   Note: since the product of the two polynomials will be of degree 2p, we need the values of PA(x) and PB(x) at 2p + 1 points, rather than just p + 1 points!

2. Multiply these two polynomials point-wise, using 2p + 1 multiplications only:
   PA(x) PB(x) ↔ {(x0, PA(x0) PB(x0)), (x1, PA(x1) PB(x1)), ..., (x_{2p}, PA(x_{2p}) PB(x_{2p}))},
   where each product PA(xi) PB(xi) is the value PC(xi).

3. Convert this value representation of PC(x) = PA(x) PB(x) back to coefficient form:
   PC(x) = C_{2p} x^{2p} + C_{2p−1} x^{2p−1} + ... + C1 x + C0.

Fast multiplication of polynomials - continued

What values should we choose for x0, x1, ..., x_{2p}?

Key idea: use the 2p + 1 integer values smallest in absolute value,

{−p, −(p − 1), ..., −1, 0, 1, ..., p − 1, p}.

So we find the values PA(m) and PB(m) for all m such that −p ≤ m ≤ p. (Remember that p + 1 is the number of slices into which we split the input numbers A and B.)

Multiplication of a large number with k bits by a constant integer d can be done in time linear in k, because it is reducible to d − 1 additions:

d · A = A + A + ... + A   (d summands).

Thus, all the values

PA(m) = Ap m^p + A_{p−1} m^{p−1} + · · · + A0,   −p ≤ m ≤ p,
PB(m) = Bp m^p + B_{p−1} m^{p−1} + · · · + B0,   −p ≤ m ≤ p,

can be found in time linear in the number of bits of the input numbers!
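The claim that each evaluation PA(m) needs only additions can be illustrated with a small sketch (the helper names and structure are mine): Horner's rule evaluates PA(m), and the multiplication by the small constant |m| at each step is replaced by |m| − 1 additions, with a sign flip when m is negative.

```python
def times_small_constant(A, d):
    """Multiply a (possibly huge) integer A by a small nonnegative constant d
    using repeated addition, as in the lecture's argument."""
    result = 0
    for _ in range(d):
        result += A
    return result

def evaluate_at_small_point(coeffs, m):
    """Evaluate P(x) = coeffs[0] + coeffs[1] x + ... at a small integer m by
    Horner's rule, using only additions (plus a sign flip when m < 0)."""
    result = 0
    for c in reversed(coeffs):
        step = times_small_constant(result, abs(m))
        result = (step if m >= 0 else -step) + c
    return result

# sanity check against direct evaluation, with huge coefficients
import random
coeffs = [random.getrandbits(1000) for _ in range(4)]   # degree p = 3
for m in range(-3, 4):
    assert evaluate_at_small_point(coeffs, m) == sum(c * m**i for i, c in enumerate(coeffs))
```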
Fast multiplication of polynomials - ctd.

We now perform 2p + 1 multiplications of large numbers to obtain

PA(−p)PB(−p), ..., PA(−1)PB(−1), PA(0)PB(0), PA(1)PB(1), ..., PA(p)PB(p).

For PC(x) = PA(x)PB(x) these products are 2p + 1 values of PC(x):

PC(−p) = PA(−p)PB(−p), ..., PC(0) = PA(0)PB(0), ..., PC(p) = PA(p)PB(p).

Let C0, C1, ..., C_{2p} be the coefficients of the product polynomial, i.e., let

PC(x) = C_{2p} x^{2p} + C_{2p−1} x^{2p−1} + · · · + C0.

We now have:

C_{2p} (−p)^{2p} + C_{2p−1} (−p)^{2p−1} + · · · + C0 = PC(−p)
C_{2p} (−(p−1))^{2p} + C_{2p−1} (−(p−1))^{2p−1} + · · · + C0 = PC(−(p−1))
...
C_{2p} (p−1)^{2p} + C_{2p−1} (p−1)^{2p−1} + · · · + C0 = PC(p−1)
C_{2p} p^{2p} + C_{2p−1} p^{2p−1} + · · · + C0 = PC(p)

Fast multiplication of polynomials - ctd.

This is just a system of linear equations that can be solved for C0, C1, ..., C_{2p}:

    [ 1  −p       (−p)^2        ...  (−p)^{2p}      ] [ C0       ]   [ PC(−p)      ]
    [ 1  −(p−1)   (−(p−1))^2    ...  (−(p−1))^{2p}  ] [ C1       ]   [ PC(−(p−1))  ]
    [ ...                                           ] [ ...      ] = [ ...         ]
    [ 1  p−1      (p−1)^2       ...  (p−1)^{2p}     ] [ C_{2p−1} ]   [ PC(p−1)     ]
    [ 1  p        p^2           ...  p^{2p}         ] [ C_{2p}   ]   [ PC(p)       ]

i.e., we can obtain C0, C1, ..., C_{2p} by multiplying the vector (PC(−p), ..., PC(p)) by the inverse of this matrix.

But the inverse matrix also involves only constants, depending on p alone; thus the coefficients Ci can be obtained in linear time.

So here is the algorithm we have just described:

function Mult(A, B)
    if |A| = |B| < p + 1 then return AB
    else
        obtain p + 1 slices A0, A1, ..., Ap and B0, B1, ..., Bp such that
            A = Ap · 2^{pk} + A_{p−1} · 2^{(p−1)k} + ... + A0
            B = Bp · 2^{pk} + B_{p−1} · 2^{(p−1)k} + ... + B0
        form the polynomials
            PA(x) = Ap x^p + A_{p−1} x^{p−1} + ... + A0
            PB(x) = Bp x^p + B_{p−1} x^{p−1} + ... + B0
        for m = −p to p do
            compute PA(m) and PB(m)
            PC(m) ← Mult(PA(m), PB(m))
        end for
        compute C0, C1, ..., C_{2p} by multiplying the vector (PC(−p), ..., PC(p)) by the
            inverse of the Vandermonde matrix of the points −p, ..., p
        form PC(x) = C_{2p} x^{2p} + ... + C0 and compute PC(2^k)
        return PC(2^k) = A · B
    end if
end function
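The polynomial-multiplication core of this algorithm (evaluate, multiply point-wise, interpolate) can be sketched in Python as follows. This is an illustration of mine, not the lecture's implementation: it works on coefficient lists of equal length, and for clarity it solves the Vandermonde system with exact rational Gaussian elimination on every call, whereas the algorithm above would use a precomputed constant inverse matrix.

```python
from fractions import Fraction

def poly_mult_by_evaluation(A, B):
    """Multiply two polynomials of degree p, given as coefficient lists A and B of
    equal length p+1, via the evaluate / pointwise-multiply / interpolate strategy.

    Evaluation points are the 2p+1 integers -p, ..., p.  Only the 2p+1 pointwise
    products are multiplications of "large" values."""
    p = len(A) - 1
    points = range(-p, p + 1)                         # 2p+1 evaluation points

    def ev(P, x):                                     # Horner evaluation
        r = 0
        for c in reversed(P):
            r = r * x + c
        return r

    values = [ev(A, m) * ev(B, m) for m in points]    # the pointwise products PC(m)

    # build the Vandermonde system V * C = values and solve it exactly (Gauss-Jordan)
    n = 2 * p + 1
    V = [[Fraction(m) ** j for j in range(n)] for m in points]
    y = [Fraction(v) for v in values]
    for col in range(n):
        piv = next(r for r in range(col, n) if V[r][col] != 0)
        V[col], V[piv] = V[piv], V[col]
        y[col], y[piv] = y[piv], y[col]
        for r in range(n):
            if r != col and V[r][col] != 0:
                f = V[r][col] / V[col][col]
                V[r] = [vr - f * vc for vr, vc in zip(V[r], V[col])]
                y[r] = y[r] - f * y[col]
    return [int(y[i] / V[i][i]) for i in range(n)]    # the coefficients C0, ..., C_2p

# the coefficients of (1 + 2x + 3x^2)(4 + 5x + 6x^2)
assert poly_mult_by_evaluation([1, 2, 3], [4, 5, 6]) == [4, 13, 28, 27, 18]
```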
How fast is our algorithm?

It is easy to see that the values of the two polynomials we are multiplying have at most k + s bits, where s is a constant which depends on p but does NOT depend on k. Consider

PA(m) = Ap m^p + A_{p−1} m^{p−1} + · · · + A0,   −p ≤ m ≤ p.

Each Ai has k bits, so Ai < 2^k, and |m^i| ≤ p^p; since there are p + 1 terms,

|PA(m)| < p^p (p + 1) · 2^k   and hence   log2 |PA(m)| < log2 (p^p (p + 1)) + k = s + k.

Thus we have reduced one multiplication of two (p + 1)k-bit numbers to 2p + 1 multiplications of (k + s)-bit numbers, plus a linear overhead (additions, splitting the numbers, etc.). So we get the following recurrence for the complexity of Mult(A, B):

T((p + 1)k) = (2p + 1) T(k + s) + c k.

Let n = (p + 1)k. Then

T(n) = (2p + 1) T( n/(p + 1) + s ) + c/(p + 1) · n,

where a = 2p + 1 and b = p + 1. Since s is a constant, its impact can be neglected.

How fast is our algorithm?

Since log_b a = log_{p+1}(2p + 1) > 1, we can choose a small ε such that log_b a − ε > 1 as well. Consequently, for such an ε we have

f(n) = c/(p + 1) · n = O(n^{log_b a − ε}).

Thus, with a = 2p + 1 and b = p + 1, the first case of the Master Theorem applies, and we get

T(n) = Θ(n^{log_b a}) = Θ(n^{log_{p+1}(2p+1)}).

Note that

n^{log_{p+1}(2p+1)} < n^{log_{p+1} 2(p+1)} = n^{log_{p+1} 2 + log_{p+1}(p+1)} = n^{1 + log_{p+1} 2} = n^{1 + 1/log_2(p+1)}.

Thus, by choosing a sufficiently large p, we can get a run time arbitrarily close to linear time!

How large does p have to be in order to get an algorithm which runs in time n^{1.1}?

n^{1.1} = n^{1 + 1/log_2(p+1)}   gives   1/log_2(p+1) = 1/10,   i.e.,   p + 1 = 2^{10}.

Thus, we would have to slice the input numbers into 2^{10} = 1024 pieces!

We would have to evaluate the polynomials PA(x) and PB(x), both of degree p, at values up to p in absolute value. However, p = 2^{10} − 1 = 1023, so evaluating PA(p) = Ap p^p + ... + A0 involves multiplying Ap by p^p = 1023^{1023} ≈ 1.27 × 10^{3079}.

Thus, while the evaluations of PA(x) and PB(x) for x = −p, ..., p can theoretically all be done in time linear in the number of bits of A and B, the constant in front of that linear bound is absolutely humongous.

Consequently, slicing the input numbers into more than just a few slices results in a hopelessly slow algorithm, despite the fact that the asymptotic bounds improve as we increase the number of slices!

The moral is: in practice, asymptotic estimates are useless if the sizes of the constants hidden by the O-notation are not estimated and found to be reasonably small!

Crucial question: Are there numbers x0, x1, ..., xp such that the size of x_i^p does not grow uncontrollably?

Answer: YES; they are the complex numbers z_i lying on the unit circle, i.e., such that |z_i| = 1!

This leads to something called the Fast Fourier Transform, about which you can read in the supplementary notes on Moodle (not examinable material).

PUZZLE!

The warden meets with 23 new prisoners when they arrive. He tells them: "You may meet today and plan a strategy. But after today, you will be in isolated cells and will have no communication with one another.

In the prison there is a switch room, which contains two light switches labelled A and B, each of which can be in either the on or the off position. I am not telling you their present positions. The switches are not connected to anything.

After today, from time to time, whenever I feel so inclined, I will select one prisoner at random and escort him to the switch room. This prisoner will select one of the two switches and reverse its position. He must flip one, but only one, of the switches: he cannot flip both, but he cannot flip neither. Then he will be led back to his cell. No one else will enter the switch room until I lead the next prisoner there, and he will be instructed to do the same thing. I am going to choose prisoners at random: I may choose the same guy three times in a row, or I may jump around and come back. But, given enough time, everyone will eventually visit the switch room many times.

At any time, any one of you may declare to me: 'We have all visited the switch room.' If it is true, then you will all be set free. If it is false, and somebody has not yet visited the switch room, you will be fed to the alligators."

What is the strategy the prisoners can devise to gain their freedom?