
Chapter 5
Divide and Conquer
Slides by Kevin Wayne.
Copyright © 2005 Pearson-Addison Wesley.
All rights reserved.
Divide-and-Conquer

Divide-and-conquer.
- Break up problem into several parts.
- Solve each part recursively.
- Combine solutions to sub-problems into overall solution.

Most common usage.
- Break up problem of size n into two equal parts of size ½n.
- Solve two parts recursively.
- Combine two solutions into overall solution in linear time.

Consequence.
- Brute force: n².
- Divide-and-conquer: n log n.

"Divide et impera. Veni, vidi, vici." - Julius Caesar
5.1 Mergesort
Sorting

Sorting. Given n elements, rearrange in ascending order.

Obvious sorting applications.
- List files in a directory.
- Organize an MP3 library.
- List names in a phone book.
- Display Google PageRank results.

Problems become easier once sorted.
- Find the median.

Non-obvious sorting applications.
- Data compression.
- Computer graphics.
- Interval scheduling.
- Computational biology.
- Minimum spanning tree.
- Supply chain management.
- Simulate a system of particles.
- Book recommendations on Amazon.
- Load balancing on a parallel computer.
Mergesort

Mergesort. [John von Neumann, 1945]
- Divide array into two halves.
- Recursively sort each half.
- Merge two halves to make sorted whole.

          A L G O R I T H M S
  divide  A L G O R | I T H M S    O(1)
  sort    A G L O R | H I M S T    2T(n/2)
  merge   A G H I L M O R S T      O(n)
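The three steps above can be sketched directly in Python; this is a minimal illustrative implementation (function names are ours, not from the slides):

```python
def merge(a, b):
    # Merge two pre-sorted lists with a linear number of
    # comparisons, writing into a temporary output list.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def mergesort(xs):
    # Divide, recursively sort each half, merge.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

# The slide's example: sorting the letters of "ALGORITHMS".
assert mergesort(list("ALGORITHMS")) == sorted("ALGORITHMS")
```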
Merging

Merging. Combine two pre-sorted lists into a sorted whole.

How to merge efficiently?
- Linear number of comparisons.
- Use temporary array.

(figure: the sorted halves A G L O R and H I M S T being merged through an auxiliary array)

Challenge for the bored. In-place merge, using only a constant amount of extra storage. [Kronrod, 1969]
A Useful Recurrence Relation

Def. T(n) = number of comparisons to mergesort an input of size n.

Mergesort recurrence.

  T(n) = 0                                    if n = 1
  T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + n              otherwise
         (solve left half) (solve right half) (merging)

Solution. T(n) = O(n log₂ n).

Assorted proofs. We describe several ways to prove this recurrence. Initially we assume n is a power of 2 and replace ≤ with =.
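As a quick sanity check on the claimed solution, the recurrence can be evaluated directly and compared against n log₂ n for powers of 2 (a small script of our own, not part of the slides):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 0; T(n) = T(ceil(n/2)) + T(floor(n/2)) + n otherwise.
    if n <= 1:
        return 0
    return T((n + 1) // 2) + T(n // 2) + n

# For n = 2^k the solution is exactly n * k = n log2(n).
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
```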
Proof by Recursion Tree

  T(n) = 0              if n = 1
  T(n) = 2T(n/2) + n    otherwise
         (sorting both halves, merging)

Recursion tree: level k has 2^k subproblems of size n/2^k, so each level contributes 2^k · (n/2^k) = n:

  level 0:  n
  level 1:  2(n/2)
  level 2:  4(n/4)
  ...
  level k:  2^k (n/2^k)
  ...
  bottom level (subproblems of size 2):  (n/2)(2)

With log₂ n levels of cost n each, T(n) = n log₂ n.
Proof by Telescoping

Claim. If T(n) satisfies this recurrence, then T(n) = n log₂ n. (assumes n is a power of 2)

  T(n) = 0              if n = 1
  T(n) = 2T(n/2) + n    otherwise
         (sorting both halves, merging)

Pf. For n > 1:

  T(n)/n = (2T(n/2) + n) / n
         = T(n/2)/(n/2) + 1
         = T(n/4)/(n/4) + 1 + 1
         ...
         = T(n/n)/(n/n) + 1 + ... + 1    (log₂ n ones)
         = log₂ n

so T(n) = n log₂ n. ▪
Proof by Induction

Claim. If T(n) satisfies this recurrence, then T(n) = n log₂ n. (assumes n is a power of 2)

  T(n) = 0              if n = 1
  T(n) = 2T(n/2) + n    otherwise
         (sorting both halves, merging)

Pf. (by induction on n)
- Base case: n = 1.
- Inductive hypothesis: T(n) = n log₂ n.
- Goal: show that T(2n) = 2n log₂(2n).

  T(2n) = 2T(n) + 2n
        = 2n log₂ n + 2n
        = 2n (log₂(2n) − 1) + 2n
        = 2n log₂(2n). ▪
Analysis of Mergesort Recurrence

Claim. If T(n) satisfies the following recurrence, then T(n) ≤ n ⌈lg n⌉.

  T(n) = 0                            if n = 1
  T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + n      otherwise
         (solve left half) (solve right half) (merging)

Pf. (by induction on n)
- Base case: n = 1.
- Define n₁ = ⌊n/2⌋, n₂ = ⌈n/2⌉.
- Induction step: assume true for 1, 2, ..., n−1.

  T(n) ≤ T(n₁) + T(n₂) + n
       ≤ n₁ ⌈lg n₁⌉ + n₂ ⌈lg n₂⌉ + n
       ≤ n₁ ⌈lg n₂⌉ + n₂ ⌈lg n₂⌉ + n
       = n ⌈lg n₂⌉ + n
       ≤ n (⌈lg n⌉ − 1) + n
       = n ⌈lg n⌉ ▪

  using  n₂ = ⌈n/2⌉ ≤ ⌈2^⌈lg n⌉ / 2⌉ = 2^(⌈lg n⌉ − 1)  ⇒  ⌈lg n₂⌉ ≤ ⌈lg n⌉ − 1.
5.3 Counting Inversions
Counting Inversions

Music site tries to match your song preferences with others.
- You rank n songs.
- Music site consults database to find people with similar tastes.

Similarity metric: number of inversions between two rankings.
- My rank: 1, 2, …, n.
- Your rank: a1, a2, …, an.
- Songs i and j inverted if i < j, but ai > aj.

  Songs   A  B  C  D  E
  Me      1  2  3  4  5
  You     1  3  4  2  5

  Inversions: 3-2, 4-2

Brute force: check all Θ(n²) pairs i and j.
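The brute-force method can be written directly from the definition (illustrative code, not from the slides):

```python
def count_inversions_brute(a):
    # Check all O(n^2) pairs i < j and count those with a[i] > a[j].
    n = len(a)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n)
                 if a[i] > a[j])

# The slide's example: your ranking of songs A-E against mine (1..5)
# has the two inversions 3-2 and 4-2.
assert count_inversions_brute([1, 3, 4, 2, 5]) == 2
```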
Applications

Applications.
- Voting theory.
- Collaborative filtering.
- Measuring the "sortedness" of an array.
- Sensitivity analysis of Google's ranking function.
- Rank aggregation for meta-searching on the Web.
- Nonparametric statistics (e.g., Kendall's Tau distance).
Counting Inversions: Divide-and-Conquer

Divide-and-conquer.

  1 5 4 8 10 2 6 9 12 11 3 7
Counting Inversions: Divide-and-Conquer

Divide-and-conquer.
- Divide: separate list into two pieces.

  1 5 4 8 10 2 | 6 9 12 11 3 7

  Divide: O(1).
Counting Inversions: Divide-and-Conquer

Divide-and-conquer.
- Divide: separate list into two pieces.
- Conquer: recursively count inversions in each half.

  1 5 4 8 10 2 | 6 9 12 11 3 7

  5 blue-blue inversions: 5-4, 5-2, 4-2, 8-2, 10-2
  8 green-green inversions: 6-3, 9-3, 9-7, 12-3, 12-7, 12-11, 11-3, 11-7

  Divide: O(1).
  Conquer: 2T(n / 2)
Counting Inversions: Divide-and-Conquer

Divide-and-conquer.
- Divide: separate list into two pieces.
- Conquer: recursively count inversions in each half.
- Combine: count inversions where ai and aj are in different halves, and return sum of three quantities.

  1 5 4 8 10 2 | 6 9 12 11 3 7

  5 blue-blue inversions
  8 green-green inversions
  9 blue-green inversions: 5-3, 4-3, 8-6, 8-3, 8-7, 10-6, 10-9, 10-3, 10-7

  Divide: O(1).
  Conquer: 2T(n / 2)
  Combine: ???

  Total = 5 + 8 + 9 = 22.
Counting Inversions: Combine

Combine: count blue-green inversions
- Assume each half is sorted.
- Count inversions where ai and aj are in different halves.
- Merge two sorted halves into sorted whole. (to maintain sorted invariant)

  left half:   3 7 10 14 18 19
  right half:  2 11 16 17 23 25

  blue-green inversions contributed as each right element is merged: 6 3 2 2 0 0
  13 blue-green inversions: 6 + 3 + 2 + 2 + 0 + 0

  merged: 2 3 7 10 11 14 16 17 18 19 23 25

  Count: O(n).  Merge: O(n).

  T(n) ≤ T(⌊n/2⌋) + T(⌈n/2⌉) + O(n)  ⇒  T(n) = O(n log n)
Counting Inversions: Implementation

Pre-condition. [Merge-and-Count] A and B are sorted.
Post-condition. [Sort-and-Count] L is sorted.

Sort-and-Count(L) {
   if list L has one element
      return 0 and the list L

   Divide the list into two halves A and B
   (rA, A) ← Sort-and-Count(A)
   (rB, B) ← Sort-and-Count(B)
   (r, L)  ← Merge-and-Count(A, B)

   return r = rA + rB + r and the sorted list L
}
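The pseudocode above can be rendered in Python as follows; the helper names mirror Sort-and-Count and Merge-and-Count, and details such as the split point are filled in as assumptions:

```python
def sort_and_count(L):
    # Returns (number of inversions in L, sorted copy of L).
    if len(L) <= 1:
        return 0, list(L)
    mid = len(L) // 2
    ra, A = sort_and_count(L[:mid])
    rb, B = sort_and_count(L[mid:])
    r, merged = merge_and_count(A, B)
    return ra + rb + r, merged

def merge_and_count(A, B):
    # Pre-condition: A and B are sorted. Counts pairs (a, b) with
    # a in A, b in B, a > b, while merging into a sorted whole.
    out, inversions = [], 0
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            out.append(A[i]); i += 1
        else:
            # B[j] is smaller than every remaining element of A.
            inversions += len(A) - i
            out.append(B[j]); j += 1
    out.extend(A[i:]); out.extend(B[j:])
    return inversions, out

# The running example: 5 + 8 + 9 = 22 inversions in total.
count, _ = sort_and_count([1, 5, 4, 8, 10, 2, 6, 9, 12, 11, 3, 7])
assert count == 22
```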
5.4 Closest Pair of Points
Closest Pair of Points

Closest pair. Given n points in the plane, find a pair with smallest Euclidean distance between them.

Fundamental geometric primitive.
- Graphics, computer vision, geographic information systems, molecular modeling, air traffic control.
- Special case of nearest neighbor, Euclidean MST, Voronoi. (fast closest pair inspired fast algorithms for these problems)

Brute force. Check all pairs of points p and q with Θ(n²) comparisons.

1-D version. O(n log n) easy if points are on a line.

Assumption. No two points have same x coordinate. (to make presentation cleaner)
Closest Pair of Points: First Attempt

Divide. Sub-divide region into 4 quadrants.
Closest Pair of Points: First Attempt

Divide. Sub-divide region into 4 quadrants.
Obstacle. Impossible to ensure n/4 points in each piece.
Closest Pair of Points

Algorithm.
- Divide: draw vertical line L so that roughly ½n points on each side.
Closest Pair of Points

Algorithm.
- Divide: draw vertical line L so that roughly ½n points on each side.
- Conquer: find closest pair in each side recursively.

(figure: closest-pair distance 21 on the left of L, 12 on the right)
Closest Pair of Points

Algorithm.
- Divide: draw vertical line L so that roughly ½n points on each side.
- Conquer: find closest pair in each side recursively.
- Combine: find closest pair with one point in each side. (seems like Θ(n²))
- Return best of 3 solutions.

(figure: left 21, right 12, split pair 8)
Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ.

  δ = min(12, 21)
Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ.
- Observation: only need to consider points within δ of line L.

  δ = min(12, 21)
Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ.
- Observation: only need to consider points within δ of line L.
- Sort points in 2δ-strip by their y coordinate.

  δ = min(12, 21)
Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ.
- Observation: only need to consider points within δ of line L.
- Sort points in 2δ-strip by their y coordinate.
- Only check distances of those within 11 positions in sorted list!

  δ = min(12, 21)
Closest Pair of Points

Def. Let sᵢ be the point in the 2δ-strip with the i-th smallest y-coordinate.

Claim. If |i − j| ≥ 12, then the distance between sᵢ and sⱼ is at least δ.

Pf.
- No two points lie in same ½δ-by-½δ box.
- Two points at least 2 rows apart have distance ≥ 2(½δ) = δ. ▪

Fact. Still true if we replace 12 with 7.
Closest Pair Algorithm

Closest-Pair(p1, …, pn) {
   Compute separation line L such that half the points     ← O(n log n)
   are on one side and half on the other side.

   δ1 = Closest-Pair(left half)                            ← 2T(n / 2)
   δ2 = Closest-Pair(right half)
   δ  = min(δ1, δ2)

   Delete all points further than δ from separation        ← O(n)
   line L.

   Sort remaining points by y-coordinate.                  ← O(n log n)

   Scan points in y-order and compare distance between
   each point and next 11 neighbors. If any of these       ← O(n)
   distances is less than δ, update δ.

   return δ.
}
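The algorithm above can be sketched in Python; this is the simpler variant analyzed on the next slide, which re-sorts the strip by y at every level (names and the small-case cutoff are our own choices):

```python
import math

def closest_pair(points):
    # Returns the smallest pairwise Euclidean distance;
    # assumes at least two points.
    pts = sorted(points)                       # sort by x once
    return _closest(pts)

def _closest(pts):
    n = len(pts)
    if n <= 3:                                 # brute-force small cases
        return min(math.dist(p, q)
                   for i, p in enumerate(pts) for q in pts[i + 1:])
    mid = n // 2
    x_line = pts[mid][0]                       # vertical separation line L
    d = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # Keep only points within d of L, sorted by y-coordinate.
    strip = sorted((p for p in pts if abs(p[0] - x_line) < d),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):              # compare to next 11 neighbors
        for q in strip[i + 1:i + 12]:
            d = min(d, math.dist(p, q))
    return d
```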
Closest Pair of Points: Analysis

Running time.

  T(n) ≤ 2T(n/2) + O(n log n)  ⇒  T(n) = O(n log² n)

Q. Can we achieve O(n log n)?
A. Yes. Don't sort points in strip from scratch each time.
- Each recursive call returns two lists: all points sorted by y coordinate, and all points sorted by x coordinate.
- Sort by merging two pre-sorted lists.

  T(n) ≤ 2T(n/2) + O(n)  ⇒  T(n) = O(n log n)
5.5 Integer Multiplication
Integer Arithmetic

Add. Given two n-digit integers a and b, compute a + b.
- O(n) bit operations.

Multiply. Given two n-digit integers a and b, compute a × b.
- Brute force solution: Θ(n²) bit operations.

(figure: grade-school binary multiplication of 11010101 × 01111101, i.e. 213 × 125 = 26625 = 110100000000001 in binary — one shifted partial product per multiplier bit, then a final addition)
Divide-and-Conquer Multiplication: Warmup

To multiply two n-digit integers:
- Multiply four ½n-digit integers.
- Add two ½n-digit integers, and shift to obtain result.

  x = 2^(n/2) · x₁ + x₀
  y = 2^(n/2) · y₁ + y₀
  xy = (2^(n/2) · x₁ + x₀) (2^(n/2) · y₁ + y₀)
     = 2^n · x₁y₁ + 2^(n/2) · (x₁y₀ + x₀y₁) + x₀y₀

  T(n) = 4T(n/2) + Θ(n)  ⇒  T(n) = Θ(n²)
         (recursive calls) (add, shift)       assumes n is a power of 2
Karatsuba Multiplication

To multiply two n-digit integers:
- Add two ½n-digit integers.
- Multiply three ½n-digit integers.
- Add, subtract, and shift ½n-digit integers to obtain result.

  x = 2^(n/2) · x₁ + x₀
  y = 2^(n/2) · y₁ + y₀
  xy = 2^n · x₁y₁ + 2^(n/2) · (x₁y₀ + x₀y₁) + x₀y₀
     = 2^n · x₁y₁ + 2^(n/2) · ((x₁ + x₀)(y₁ + y₀) − x₁y₁ − x₀y₀) + x₀y₀

  (only three multiplications: A = x₁y₁, B = (x₁ + x₀)(y₁ + y₀), C = x₀y₀)

Theorem. [Karatsuba-Ofman, 1962] Can multiply two n-digit integers in O(n^1.585) bit operations.

  T(n) ≤ T(⌊n/2⌋) + T(⌈n/2⌉) + T(1 + ⌈n/2⌉) + Θ(n)
         (recursive calls)                    (add, subtract, shift)
  ⇒  T(n) = O(n^(log₂ 3)) = O(n^1.585)
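Karatsuba's scheme can be sketched for Python integers by splitting on a bit boundary m (the base-case threshold and split choice are our own assumptions):

```python
def karatsuba(x, y):
    # Multiply nonnegative integers with three recursive multiplies.
    if x < 16 or y < 16:                   # small base case
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)    # x = 2^m * x1 + x0
    y1, y0 = y >> m, y & ((1 << m) - 1)    # y = 2^m * y1 + y0
    A = karatsuba(x1, y1)                  # high parts
    C = karatsuba(x0, y0)                  # low parts
    B = karatsuba(x1 + x0, y1 + y0) - A - C  # cross terms, one multiply
    return (A << (2 * m)) + (B << m) + C

# The slide's 8-bit example: 11010101 x 01111101.
assert karatsuba(213, 125) == 26625
```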
Karatsuba: Recursion Tree

  T(n) = 0              if n = 1
  T(n) = 3T(n/2) + n    otherwise

Recursion tree: level k has 3^k subproblems of size n/2^k, contributing 3^k (n/2^k) = n (3/2)^k:

  level 0:  n
  level 1:  3(n/2)
  level 2:  9(n/4)
  ...
  level k:  3^k (n/2^k)
  ...
  bottom level:  3^(lg n) (2)

  T(n) = n · Σ_{k=0}^{log₂ n} (3/2)^k
       = n · ((3/2)^(1 + log₂ n) − 1) / ((3/2) − 1)
       = 3 n^(log₂ 3) − 2n
Matrix Multiplication

Matrix multiplication. Given two n-by-n matrices A and B, compute C = AB.

  c_ij = Σ_{k=1}^{n} a_ik · b_kj

  [c11 c12 … c1n]   [a11 a12 … a1n]   [b11 b12 … b1n]
  [c21 c22 … c2n] = [a21 a22 … a2n] × [b21 b22 … b2n]
  [ ⋮          ⋮ ]   [ ⋮          ⋮ ]   [ ⋮          ⋮ ]
  [cn1 cn2 … cnn]   [an1 an2 … ann]   [bn1 bn2 … bnn]

Brute force. Θ(n³) arithmetic operations.

Fundamental question. Can we improve upon brute force?
Matrix Multiplication: Warmup

Divide-and-conquer.
- Divide: partition A and B into ½n-by-½n blocks.
- Conquer: multiply 8 pairs of ½n-by-½n matrices recursively.
- Combine: add appropriate products using 4 matrix additions.

  [C11 C12]   [A11 A12]   [B11 B12]
  [C21 C22] = [A21 A22] × [B21 B22]

  C11 = (A11 × B11) + (A12 × B21)
  C12 = (A11 × B12) + (A12 × B22)
  C21 = (A21 × B11) + (A22 × B21)
  C22 = (A21 × B12) + (A22 × B22)

  T(n) = 8T(n/2) + Θ(n²)  ⇒  T(n) = Θ(n³)
         (recursive calls) (add, form submatrices)
Matrix Multiplication: Key Idea

Key idea. Multiply 2-by-2 block matrices with only 7 multiplications.

  [C11 C12]   [A11 A12]   [B11 B12]
  [C21 C22] = [A21 A22] × [B21 B22]

  P1 = A11 × (B12 − B22)
  P2 = (A11 + A12) × B22
  P3 = (A21 + A22) × B11
  P4 = A22 × (B21 − B11)
  P5 = (A11 + A22) × (B11 + B22)
  P6 = (A12 − A22) × (B21 + B22)
  P7 = (A11 − A21) × (B11 + B12)

  C11 = P5 + P4 − P2 + P6
  C12 = P1 + P2
  C21 = P3 + P4
  C22 = P5 + P1 − P3 − P7

- 7 multiplications.
- 18 = 10 + 8 additions (or subtractions).
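The seven products can be checked mechanically. Here is a direct transcription for a 2-by-2 matrix of scalars (illustrative only; a full Strassen implementation would recurse on the ½n-by-½n blocks):

```python
def strassen_2x2(A, B):
    # A, B are 2-by-2 matrices given as nested tuples of numbers.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p1 = a11 * (b12 - b22)
    p2 = (a11 + a12) * b22
    p3 = (a21 + a22) * b11
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a22) * (b11 + b22)
    p6 = (a12 - a22) * (b21 + b22)
    p7 = (a11 - a21) * (b11 + b12)
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p5 + p1 - p3 - p7))

# Agrees with the ordinary 8-multiplication product.
assert strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))) == ((19, 22), (43, 50))
```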
Fast Matrix Multiplication

Fast matrix multiplication. (Strassen, 1969)
- Divide: partition A and B into ½n-by-½n blocks.
- Compute: 14 ½n-by-½n matrices via 10 matrix additions.
- Conquer: multiply 7 ½n-by-½n matrices recursively.
- Combine: 7 products into 4 terms using 8 matrix additions.

Analysis.
- Assume n is a power of 2.
- T(n) = # arithmetic operations.

  T(n) = 7T(n/2) + Θ(n²)  ⇒  T(n) = Θ(n^(log₂ 7)) = O(n^2.81)
         (recursive calls) (add, subtract)
Fast Matrix Multiplication in Practice

Implementation issues.
- Sparsity.
- Caching effects.
- Numerical stability.
- Odd matrix dimensions.
- Crossover to classical algorithm around n = 128.

Common misperception: "Strassen is only a theoretical curiosity."
- Advanced Computation Group at Apple Computer reports 8x speedup on G4 Velocity Engine when n ~ 2,500.
- Range of instances where it's useful is a subject of controversy.

Remark. Can "Strassenize" Ax = b, determinant, eigenvalues, and other matrix ops.
Fast Matrix Multiplication in Theory

Q. Multiply two 2-by-2 matrices with only 7 scalar multiplications?
A. Yes! [Strassen, 1969]   Θ(n^(log₂ 7)) = O(n^2.81)

Q. Multiply two 2-by-2 matrices with only 6 scalar multiplications?
A. Impossible. [Hopcroft and Kerr, 1971]   Θ(n^(log₂ 6)) = O(n^2.59)

Q. Two 3-by-3 matrices with only 21 scalar multiplications?
A. Also impossible.   Θ(n^(log₃ 21)) = O(n^2.77)

Q. Two 70-by-70 matrices with only 143,640 scalar multiplications?
A. Yes! [Pan, 1980]   Θ(n^(log₇₀ 143640)) = O(n^2.80)

Decimal wars.
- December, 1979: O(n^2.521813).
- January, 1980: O(n^2.521801).
Fast Matrix Multiplication in Theory

Best known. O(n^2.376) [Coppersmith-Winograd, 1987]

Conjecture. O(n^(2+ε)) for any ε > 0.

Caveat. Theoretical improvements to Strassen are progressively less practical.