Algorithm Design
Introduction
The design of good algorithms is more an art than a science.
Basic algorithm-design methods:
• Greedy method
• Divide and Conquer
• Dynamic Programming
• Backtracking
• Branch and Bound
Advanced methods:
• Linear Programming
• Integer Programming
• Neural Networks
• Genetic Algorithms
• Simulated Annealing
The Greedy Method
Outline
The Greedy Method Technique (§5.1)
Fractional Knapsack Problem (§5.1.1)
Task Scheduling (§5.1.2)
Minimum Spanning Trees (§7.3)
Text Compression
The Greedy Method Technique
The greedy method is a general algorithm design paradigm, built on the following elements:
• configurations: different choices, collections, or values to find
• objective function: a score assigned to configurations, which we want to either maximize or minimize
It works best when applied to problems with the greedy-choice property:
• a globally-optimal solution can always be found by a series of local improvements from a starting configuration.
Making Change
Problem: a dollar amount to reach and a collection of coin amounts to use to get there.
Configuration: a dollar amount yet to return to a customer, plus the coins already returned.
Objective function: minimize the number of coins returned.
Greedy solution: always return the largest coin you can.
Example 1: coins are valued $.32, $.08, $.01
• Has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08 but under $.32).
Example 2: coins are valued $.30, $.20, $.05, $.01
• Does not have the greedy-choice property, since $.40 is best made with two $.20's, but the greedy solution will pick three coins (one $.30 and two $.05's; see the sketch below).
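
A quick Python sketch (not from the slides) of this greedy change-maker, run on both coin systems (amounts in cents):

def greedy_change(amount, coins):
    """Repeatedly return the largest coin that still fits."""
    returned = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            returned.append(coin)
    return returned

print(greedy_change(40, [32, 8, 1]))      # [32, 8]: optimal, 2 coins
print(greedy_change(40, [30, 20, 5, 1]))  # [30, 5, 5]: 3 coins, but [20, 20] uses 2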
The Fractional Knapsack Problem
Given: a set S of n items, with each item i having
• bi - a positive benefit
• wi - a positive weight
Goal: choose items with maximum total benefit but with weight at most W.
If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
• In this case, we let xi denote the amount we take of item i
• Objective: maximize Σ_{i∈S} bi (xi / wi)
• Constraint: Σ_{i∈S} xi ≤ W
Example
Given: a set S of n items, with each item i having
• bi - a positive benefit
• wi - a positive weight
Goal: choose items with maximum total benefit but with weight at most W = 10 ml (the "knapsack").

  Items:              1      2      3      4      5
  Weight:             4 ml   8 ml   2 ml   6 ml   1 ml
  Benefit:            $12    $32    $40    $30    $50
  Value ($ per ml):   3      4      20     5      50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
The Fractional Knapsack Algorithm
Greedy choice: keep taking the item with the highest value (benefit-to-weight ratio),
• since Σ_{i∈S} bi (xi / wi) = Σ_{i∈S} (bi / wi) xi
Run time: O(n log n). Why?
Correctness: suppose there is a better solution. Then:
• there is an item i with higher value than a chosen item j (i.e., vi > vj), but xi < wi and xj > 0
• if we substitute some of j with i, we get a better solution
• how much of i: min{wi - xi, xj}
• thus, there is no better solution than the greedy one

Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; maximum weight W
  Output: amount xi of each item i to maximize benefit with weight at most W
  for each item i in S
    xi ← 0
    vi ← bi / wi       {value}
  w ← 0                {total weight}
  while w < W
    remove item i with highest vi
    xi ← min{wi, W - w}
    w ← w + min{wi, W - w}
Task Scheduling
Given: a set T of n tasks, each having:
• a start time, si
• a finish time, fi (where si < fi)
Goal: perform all the tasks using a minimum number of "machines."
[Figure: seven tasks scheduled without conflicts on Machines 1-3 over time slots 1-10]
Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why?
Correctness: suppose there is a better schedule:
• the algorithm uses k machines, but we can use k - 1 machines
• let i be the first task scheduled on machine k
• task i must conflict with k - 1 other tasks
• but that means there is no non-conflicting schedule using k - 1 machines

Algorithm taskSchedule(T)
  Input: set T of tasks with start time si and finish time fi
  Output: non-conflicting schedule with minimum number of machines
  m ← 0       {no. of machines}
  while T is not empty
    remove task i with smallest si
    if there's a machine j for i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
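
A runnable Python sketch of taskSchedule; tracking machines by the finish time of their last task in a min-heap is an assumption of this sketch (the pseudocode just says "a machine j for i"), and a machine is considered free for task i when its last finish time is at most si:

import heapq

def task_schedule(tasks):
    """tasks: list of (start, finish); returns the number of machines needed."""
    machines = []  # min-heap of finish times, one entry per machine
    for s, f in sorted(tasks):                  # consider tasks by start time
        if machines and machines[0] <= s:
            heapq.heapreplace(machines, f)      # reuse the machine that frees up earliest
        else:
            heapq.heappush(machines, f)         # open a new machine
    return len(machines)

# The example slide's tasks: 3 machines suffice.
print(task_schedule([(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]))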
Example
Given: a set T of n tasks, each having:
• a start time, si
• a finish time, fi (where si < fi)
• [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
Goal: perform all tasks on a minimum number of machines.
[Figure: the seven tasks packed onto Machines 1-3 over time slots 1-9]
Text Compression (§11.4)
Given a string X, efficiently encode X into a smaller string Y
• Saves memory and/or bandwidth
A good approach: Huffman encoding
• Compute the frequency f(c) for each character c
• Encode high-frequency characters with short code words
• No code word is a prefix of another code word
• Use an optimal encoding tree to determine the code words
Encoding Tree Example
A code is a mapping of each character of an alphabet to a binary code-word.
A prefix code is a binary code such that no code-word is the prefix of another code-word.
An encoding tree represents a prefix code:
• Each external node stores a character
• The code word of a character is given by the path from the root to the external node storing the character (0 for a left child and 1 for a right child)

  a → 00   b → 010   c → 011   d → 10   e → 11

[Figure: the encoding tree for this code, with external nodes a, b, c, d, e]
Encoding Tree Optimization
Given a text string X, we want to find a prefix code for the characters of X that yields a small encoding for X:
• Frequent characters should have short code-words
• Rare characters should have long code-words
Example:
• X = abracadabra
• T1 encodes X into 29 bits
• T2 encodes X into 24 bits
[Figure: two encoding trees T1 and T2 over the characters a, b, c, d, r]
Huffman's Algorithm
Given a string X, Huffman's algorithm constructs a prefix code that minimizes the size of the encoding of X.
It runs in time O(n + d log d), where n is the size of X and d is the number of distinct characters of X.
A heap-based priority queue is used as an auxiliary structure.

Algorithm HuffmanEncoding(X)
  Input: string X of size n
  Output: optimal encoding trie for X
  C ← distinctCharacters(X)
  computeFrequencies(C, X)
  Q ← new empty heap
  for all c ∈ C
    T ← new single-node tree storing c
    Q.insert(getFrequency(c), T)
  while Q.size() > 1
    f1 ← Q.minKey()
    T1 ← Q.removeMin()
    f2 ← Q.minKey()
    T2 ← Q.removeMin()
    T ← join(T1, T2)
    Q.insert(f1 + f2, T)
  return Q.removeMin()
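
A runnable Python sketch of HuffmanEncoding, with heapq as the heap-based priority queue; representing trees as nested tuples with characters at the leaves is an assumption of this sketch:

import heapq
from collections import Counter

def huffman(X):
    """Return an optimal encoding tree for string X."""
    freq = Counter(X)                    # computeFrequencies
    # Entries are (frequency, tiebreak, tree); the tiebreak keeps tuples comparable.
    Q = [(f, i, c) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(Q)
    i = len(Q)
    while len(Q) > 1:
        f1, _, T1 = heapq.heappop(Q)     # two least-frequent trees
        f2, _, T2 = heapq.heappop(Q)
        heapq.heappush(Q, (f1 + f2, i, (T1, T2)))   # join(T1, T2)
        i += 1
    return Q[0][2]

def code_words(T, prefix=""):
    """Read the code off the tree: 0 for a left child, 1 for a right child."""
    if isinstance(T, str):
        return {T: prefix or "0"}
    left, right = T
    return {**code_words(left, prefix + "0"), **code_words(right, prefix + "1")}

print(code_words(huffman("abracadabra")))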
Example
X = abracadabra

  Frequencies:   a: 5   b: 2   c: 1   d: 1   r: 2

[Figure: step-by-step construction of the Huffman tree - the two lowest-weight trees are repeatedly joined (c and d first), producing subtrees of weight 2, 4, and 6, and finally the complete tree of weight 11 with a as a child of the root]
Extended Huffman Tree Example
[Figure: extended Huffman tree example]
Divide-and-Conquer
[Figure: merge-sort recursion - 7 2 9 4 → 2 4 7 9; 7 2 → 2 7; 7 → 7; 2 → 2; 9 4 → 4 9; 9 → 9; 4 → 4]
Outline and Reading
Divide-and-conquer paradigm (§5.2)
Review of merge-sort (§4.1.1)
Recurrence equations (§5.2.1)
• Iterative substitution
• Recursion trees
• Guess-and-test
• The master method
Integer multiplication (§5.2.2)
Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
• Divide: divide the input data S into two or more disjoint subsets S1, S2, …
• Recur: solve the subproblems recursively
• Conquer: combine the solutions for S1, S2, …, into a solution for S
The base cases for the recursion are subproblems of constant size.
Analysis can be done using recurrence equations.
Merge-Sort Review
Merge-sort on an input sequence S with n elements consists of three steps:
• Divide: partition S into two sequences S1 and S2 of about n/2 elements each
• Recur: recursively sort S1 and S2
• Conquer: merge S1 and S2 into a unique sorted sequence

Algorithm mergeSort(S, C)
  Input: sequence S with n elements, comparator C
  Output: sequence S sorted according to C
  if S.size() > 1
    (S1, S2) ← partition(S, n/2)
    mergeSort(S1, C)
    mergeSort(S2, C)
    S ← merge(S1, S2)
Recurrence Equation Analysis
The conquer step of merge-sort, which merges two sorted sequences of n/2 elements each, takes at most bn steps when the sequences are implemented with doubly linked lists, for some constant b.
Likewise, the base case (n < 2) takes at most b steps.
Therefore, if we let T(n) denote the running time of merge-sort:

  T(n) = b                if n < 2
         2T(n/2) + bn     if n ≥ 2

We can therefore analyze the running time of merge-sort by finding a closed-form solution to the above equation.
• That is, a solution that has T(n) only on the left-hand side.
Iterative Substitution
In the iterative substitution, or "plug-and-chug," technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern:

  T(n) = 2T(n/2) + bn
       = 2(2T(n/2^2) + b(n/2)) + bn
       = 2^2 T(n/2^2) + 2bn
       = 2^3 T(n/2^3) + 3bn
       = 2^4 T(n/2^4) + 4bn
       = ...
       = 2^i T(n/2^i) + ibn

Note that the base case, T(n) = b, occurs when 2^i = n; that is, when i = log n. So

  T(n) = bn + bn log n

Thus, T(n) is O(n log n).
The Recursion Tree
Draw the recursion tree for the recurrence relation and look for a pattern:

  T(n) = b                if n < 2
         2T(n/2) + bn     if n ≥ 2

  depth   no. of T's   size     time
  0       1            n        bn
  1       2            n/2      bn
  ...     ...          ...      ...
  i       2^i          n/2^i    bn
  ...     ...          ...      ...

Total time = bn + bn log n
(last level plus all previous levels)
Guess-and-Test Method
In the guess-and-test method, we guess a closed form solution and then try to prove it is true by induction:

  T(n) = b                       if n < 2
         2T(n/2) + bn log n      if n ≥ 2

Guess: T(n) < cn log n.

  T(n) = 2T(n/2) + bn log n
       ≤ 2(c(n/2) log(n/2)) + bn log n
       = cn(log n - log 2) + bn log n
       = cn log n - cn + bn log n

Wrong: we cannot make this last line be less than cn log n.
Guess-and-Test Method, Part 2
Recall the recurrence equation:

  T(n) = b                       if n < 2
         2T(n/2) + bn log n      if n ≥ 2

Guess #2: T(n) < cn log^2 n.

  T(n) = 2T(n/2) + bn log n
       ≤ 2(c(n/2) log^2(n/2)) + bn log n
       = cn(log n - log 2)^2 + bn log n
       = cn log^2 n - 2cn log n + cn + bn log n
       ≤ cn log^2 n
if c > b.
So, T(n) is O(n log^2 n).
In general, to use this method, you need to have a good guess and you need to be good at induction proofs.
Master Method
Many divide-and-conquer recurrence equations have the form:

  T(n) = c                   if n < d
         aT(n/b) + f(n)      if n ≥ d

The Master Theorem:
1. If f(n) is O(n^(log_b a - ε)), then T(n) is Θ(n^(log_b a)).
2. If f(n) is Θ(n^(log_b a) log^k n), then T(n) is Θ(n^(log_b a) log^(k+1) n).
3. If f(n) is Ω(n^(log_b a + ε)), then T(n) is Θ(f(n)), provided af(n/b) ≤ δf(n) for some δ < 1.
Master Method, Example 1
Example: T(n) = 4T(n/2) + n
Solution: log_b a = 2, so case 1 says T(n) is O(n^2).
Master Method, Example 2
Example: T(n) = 2T(n/2) + n log n
Solution: log_b a = 1, so case 2 says T(n) is O(n log^2 n).
Master Method, Example 3
Example: T(n) = T(n/3) + n log n
Solution: log_b a = 0, so case 3 says T(n) is O(n log n).
Master Method, Example 4
Example: T(n) = 8T(n/2) + n^2
Solution: log_b a = 3, so case 1 says T(n) is O(n^3).
Master Method, Example 5
Example: T(n) = 9T(n/3) + n^3
Solution: log_b a = 2, so case 3 says T(n) is O(n^3).
Master Method, Example 6
Example: T(n) = T(n/2) + 1   (binary search)
Solution: log_b a = 0, so case 2 says T(n) is O(log n).
Master Method, Example 7
Example: T(n) = 2T(n/2) + log n   (heap construction)
Solution: log_b a = 1, so case 1 says T(n) is O(n).
Iterative "Proof" of the Master Theorem
Using iterative substitution, let us see if we can find a pattern:

  T(n) = aT(n/b) + f(n)
       = a(aT(n/b^2) + f(n/b)) + f(n)
       = a^2 T(n/b^2) + af(n/b) + f(n)
       = a^3 T(n/b^3) + a^2 f(n/b^2) + af(n/b) + f(n)
       = ...
       = a^(log_b n) T(1) + Σ_{i=0}^{(log_b n)-1} a^i f(n/b^i)
       = n^(log_b a) T(1) + Σ_{i=0}^{(log_b n)-1} a^i f(n/b^i)

We then distinguish the three cases as:
• the first term is dominant
• each part of the summation is equally dominant
• the summation is a geometric series
Integer Multiplication
Algorithm: multiply two n-bit integers I and J.
• Divide step: split I and J into high-order and low-order bits:

    I = Ih 2^(n/2) + Il
    J = Jh 2^(n/2) + Jl

• We can then define I*J by multiplying the parts and adding:

    I*J = (Ih 2^(n/2) + Il) * (Jh 2^(n/2) + Jl)
        = Ih Jh 2^n + Ih Jl 2^(n/2) + Il Jh 2^(n/2) + Il Jl

• So, T(n) = 4T(n/2) + n, which implies T(n) is O(n^2).
• But that is no better than the algorithm we learned in grade school.
An Improved Integer Multiplication Algorithm
Algorithm: multiply two n-bit integers I and J.
• Divide step: split I and J into high-order and low-order bits:

    I = Ih 2^(n/2) + Il
    J = Jh 2^(n/2) + Jl

• Observe that there is a different way to multiply parts:

    I*J = Ih Jh 2^n + [(Ih - Il)(Jl - Jh) + Ih Jh + Il Jl] 2^(n/2) + Il Jl
        = Ih Jh 2^n + [(Ih Jl - Il Jl - Ih Jh + Il Jh) + Ih Jh + Il Jl] 2^(n/2) + Il Jl
        = Ih Jh 2^n + (Ih Jl + Il Jh) 2^(n/2) + Il Jl

• So, T(n) = 3T(n/2) + n, which implies T(n) is O(n^(log_2 3)), by the Master Theorem.
• Thus, T(n) is O(n^1.585).
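
A runnable Python sketch of this three-multiplication scheme (Karatsuba's algorithm); handling the signs of (Ih - Il) and (Jl - Jh) explicitly is a detail this sketch adds that the slide leaves implicit:

def karatsuba(I, J, n):
    """Multiply n-bit non-negative integers I and J, n a power of two."""
    if n <= 16:                      # small base case: use built-in multiply
        return I * J
    half = n // 2
    Ih, Il = I >> half, I & ((1 << half) - 1)   # split into high/low bits
    Jh, Jl = J >> half, J & ((1 << half) - 1)
    hh = karatsuba(Ih, Jh, half)                # Ih*Jh
    ll = karatsuba(Il, Jl, half)                # Il*Jl
    a, b = Ih - Il, Jl - Jh                     # may be negative
    sign = (-1 if a < 0 else 1) * (-1 if b < 0 else 1)
    mid = sign * karatsuba(abs(a), abs(b), half) + hh + ll  # = Ih*Jl + Il*Jh
    return (hh << n) + (mid << half) + ll

print(karatsuba(1234567, 7654321, 32) == 1234567 * 7654321)  # True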
Dynamic Programming
Outline and Reading
Matrix Chain-Product
The General Technique
0-1 Knapsack Problem
Subsequences
Matrix Chain-Products
Dynamic programming is a general algorithm design paradigm.
• Rather than give the general structure, let us first give a motivating example: matrix chain-products.
Review: matrix multiplication.
• C = A*B
• A is d × e and B is e × f, so C is d × f
• C[i,j] = Σ_{k=0}^{e-1} A[i,k] * B[k,j]
• Computing C takes O(def) time
Matrix Chain-Products
Matrix chain-product:
• Compute A = A0*A1*…*An-1
• Ai is di × di+1
• Problem: how to parenthesize?
Example:
• B is 3 × 100
• C is 100 × 5
• D is 5 × 5
• (B*C)*D takes 1500 + 75 = 1575 ops
• B*(C*D) takes 1500 + 2500 = 4000 ops
Enumeration Approach
Matrix chain-product algorithm:
• Try all possible ways to parenthesize A = A0*A1*…*An-1
• Calculate the number of ops for each one
• Pick the one that is best
Running time:
• The number of parenthesizations is equal to the number of binary trees with n nodes
• This is exponential!
• It is called the Catalan number, and it is almost 4^n
• This is a terrible algorithm!
Greedy Approach
Idea #1: repeatedly select the product that uses (up) the most operations.
Counter-example:
• A is 10 × 5
• B is 5 × 10
• C is 10 × 5
• D is 5 × 10
• Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 1000 + 500 = 2000 ops
• A*((B*C)*D) takes 500 + 250 + 250 = 1000 ops
Another Greedy Approach
Idea #2: repeatedly select the product that uses the fewest operations.
Counter-example:
• A is 101 × 11
• B is 11 × 9
• C is 9 × 100
• D is 100 × 99
• Greedy idea #2 gives A*((B*C)*D), which takes 109989 + 9900 + 108900 = 228789 ops
• (A*B)*(C*D) takes 9999 + 89991 + 89100 = 189090 ops
The greedy approach is not giving us the optimal value.
"Recursive" Approach
Define subproblems:
• Find the best parenthesization of Ai*Ai+1*…*Aj.
• Let Ni,j denote the number of operations done by this subproblem.
• The optimal solution for the whole problem is N0,n-1.
Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems.
• There has to be a final multiplication (root of the expression tree) for the optimal solution.
• Say the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An-1).
• Then the optimal solution N0,n-1 is the sum of two optimal subproblems, N0,i and Ni+1,n-1, plus the time for the last multiply.
• If the global optimum did not have these optimal subproblems, we could define an even better "optimal" solution.
Characterizing Equation
The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is.
Let us consider all possible places for that final multiply:
• Recall that Ai is a di × di+1 dimensional matrix.
• So, a characterizing equation for Ni,j is the following:

  N_{i,j} = min_{i ≤ k < j} { N_{i,k} + N_{k+1,j} + d_i d_{k+1} d_{j+1} }

Note that the subproblems are not independent; the subproblems overlap.
Dynamic Programming Algorithm Visualization
The bottom-up construction fills in the N array by diagonals:

  N_{i,j} = min_{i ≤ k < j} { N_{i,k} + N_{k+1,j} + d_i d_{k+1} d_{j+1} }

• Ni,j gets values from previous entries in the i-th row and j-th column
• Filling in each entry in the N table takes O(n) time
• Total run time: O(n^3)
• The actual parenthesization can be recovered by remembering "k" for each N entry
[Figure: the n × n table N, filled diagonal by diagonal; the answer is in entry N_{0,n-1}]
Dynamic Programming Algorithm
Since subproblems overlap, we don't use recursion. Instead, we construct optimal subproblems "bottom-up":
• The Ni,i's are easy, so start with them
• Then do subproblems of "length" 2, 3, …, and so on
• Running time: O(n^3)

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n - 1 do
    Ni,i ← 0
  for b ← 1 to n - 1 do       { b = j - i is the length of the problem }
    for i ← 0 to n - b - 1 do
      j ← i + b
      Ni,j ← +∞
      for k ← i to j - 1 do
        Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di dk+1 dj+1}
  return N0,n-1
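
A runnable Python sketch of matrixChain; passing the dimension list d, so that matrix Ai is d[i] × d[i+1], is an assumption of this sketch:

def matrix_chain(d):
    """d: dimensions of the n matrices (len(d) == n + 1); returns min ops."""
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]          # N[i][i] = 0: a single matrix
    for b in range(1, n):                    # b = j - i, the subproblem length
        for i in range(n - b):
            j = i + b
            N[i][j] = min(N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                          for k in range(i, j))
    return N[0][n - 1]

# B (3x100), C (100x5), D (5x5) from the earlier slide: optimal is 1575 ops.
print(matrix_chain([3, 100, 5, 5]))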
The General Dynamic Programming Technique
Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
• Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on
• Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems
• Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, should be constructed bottom-up)
The 0/1 Knapsack Problem
Given: a set S of n items, with each item i having
• wi - a positive weight
• bi - a positive benefit
Goal: choose items with maximum total benefit but with weight at most W.
If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem.
• In this case, we let T denote the set of items we take
• Objective: maximize Σ_{i∈T} bi
• Constraint: Σ_{i∈T} wi ≤ W
Example
Given: a set S of n items, with each item i having
• bi - a positive "benefit"
• wi - a positive "weight"
Goal: choose items with maximum total benefit but with weight at most W (the "knapsack" - here a box of width 9 in).

  Items:    1      2      3      4      5
  Weight:   4 in   2 in   2 in   6 in   2 in
  Benefit:  $20    $3     $6     $25    $80

Solution:
• item 5 ($80, 2 in)
• item 3 ($6, 2 in)
• item 1 ($20, 4 in)
A 0/1 Knapsack Algorithm, First Attempt
Sk: set of items numbered 1 to k.
Define B[k] = best selection from Sk.
Problem: does not have subproblem optimality:
• Consider the set S = {(3,2), (5,4), (8,5), (4,3), (10,9)} of (benefit, weight) pairs and total weight W = 20
• Best for S4: {(3,2), (5,4), (8,5), (4,3)}, with weight 14 and benefit 20
• Best for S5: {(3,2), (5,4), (8,5), (10,9)}, with weight 20 and benefit 26 - it does not extend the best selection for S4
A 0/1 Knapsack Algorithm, Second Attempt
Sk: set of items numbered 1 to k.
Define B[k,w] to be the best selection from Sk with weight at most w.
Good news: this does have subproblem optimality:

  B[k,w] = B[k-1, w]                                  if wk > w
           max{ B[k-1, w], B[k-1, w - wk] + bk }      otherwise

That is, the best subset of Sk with weight at most w is either:
• the best subset of Sk-1 with weight at most w, or
• the best subset of Sk-1 with weight at most w - wk, plus item k
0/1 Knapsack Algorithm
Recall the definition of B[k,w]:

  B[k,w] = B[k-1, w]                                  if wk > w
           max{ B[k-1, w], B[k-1, w - wk] + bk }      otherwise

• Since B[k,w] is defined in terms of B[k-1,*], we can use two arrays instead of a matrix
• Running time: O(nW)
• Not a polynomial-time algorithm, since W may be large
• This is a pseudo-polynomial time algorithm

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of best subset of S with weight at most W
  let A and B be arrays of length W + 1
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    copy array B into array A
    for w ← wk to W do
      if A[w - wk] + bk > A[w] then
        B[w] ← A[w - wk] + bk
  return B[W]
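
A runnable Python sketch of 01Knapsack, with the same two-array trick (A holds the previous row, B the current one):

def knapsack_01(items, W):
    """items: list of (benefit, weight); returns best benefit with weight <= W."""
    B = [0] * (W + 1)
    for bk, wk in items:               # the k-th item
        A = B[:]                       # copy array B into array A
        for w in range(wk, W + 1):
            if A[w - wk] + bk > A[w]:  # better to take item k at capacity w?
                B[w] = A[w - wk] + bk
    return B[W]

# (benefit, weight) pairs from the subproblem-optimality example, W = 20: 26.
print(knapsack_01([(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)], 20))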
Subsequences (§11.5.1)
A subsequence of a character string x0x1x2…xn-1 is a string of the form xi1 xi2 … xik, where ij < ij+1.
Not the same as a substring!
Example string: ABCDEFGHIJK
• Subsequence: ACEGIJK
• Subsequence: DFGHK
• Not a subsequence: DAGH
The Longest Common Subsequence (LCS) Problem
Given two strings X and Y, the longest common subsequence (LCS) problem is to find a longest subsequence common to both X and Y.
Has applications to DNA similarity testing (the alphabet is {A, C, G, T}).
Example: ABCDEFG and XZACKDFWGH have ACDFG as a longest common subsequence.
A Poor Approach to the LCS Problem
A brute-force solution:
• Enumerate all subsequences of X
• Test which ones are also subsequences of Y
• Pick the longest one
Analysis:
• If X is of length n, then it has 2^n subsequences
• This is an exponential-time algorithm!
A Dynamic-Programming Approach to the LCS Problem
Define L[i,j] to be the length of the longest common subsequence of X[0..i] and Y[0..j].
Allow for -1 as an index, so L[-1,k] = 0 and L[k,-1] = 0, to indicate that the null part of X or Y has no match with the other.
Then we can define L[i,j] in the general case as follows:
1. If xi = yj, then L[i,j] = L[i-1,j-1] + 1 (we can add this match)
2. If xi ≠ yj, then L[i,j] = max{L[i-1,j], L[i,j-1]} (we have no match here)
An LCS Algorithm
Algorithm LCS(X, Y):
  Input: strings X and Y with n and m elements, respectively
  Output: for i = 0,…,n-1 and j = 0,…,m-1, the length L[i,j] of a longest string that is a subsequence of both X[0..i] = x0x1x2…xi and Y[0..j] = y0y1y2…yj
  for i ← -1 to n-1 do
    L[i,-1] ← 0
  for j ← 0 to m-1 do
    L[-1,j] ← 0
  for i ← 0 to n-1 do
    for j ← 0 to m-1 do
      if xi = yj then
        L[i,j] ← L[i-1,j-1] + 1
      else
        L[i,j] ← max{L[i-1,j], L[i,j-1]}
  return array L
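
A runnable Python sketch of LCS; instead of -1 indices, it pads the table with an extra zero row and column, so L[i+1][j+1] plays the role of the slide's L[i,j]:

def lcs_length(X, Y):
    n, m = len(X), len(Y)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # padded with a zero row/column
    for i in range(n):
        for j in range(m):
            if X[i] == Y[j]:
                L[i + 1][j + 1] = L[i][j] + 1                    # case 1: extend the match
            else:
                L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])  # case 2: no match here
    return L[n][m]

print(lcs_length("ABCDEFG", "XZACKDFWGH"))  # 5, for ACDFG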
Visualizing the LCS Algorithm
[Figure: the filled L table for an example pair of strings]
Analysis of the LCS Algorithm
We have two nested loops:
• The outer one iterates n times
• The inner one iterates m times
• A constant amount of work is done inside each iteration of the inner loop
• Thus, the total running time is O(nm)
The answer is contained in L[n-1, m-1] (and the subsequence can be recovered from the L table).
Backtracking and
Branch and Bound
n-Queens Problem
A queen placed on an n × n chessboard may attack any piece placed in the same column, row, or diagonal.
[Figure: an 8 × 8 chessboard showing the squares a queen attacks]
Can n queens be placed on an n × n chessboard so that no queen may attack another queen?
[Figure: solutions on 4 × 4 and 8 × 8 boards]
Difficult Problems
Many require you to find either a subset or a permutation that satisfies some constraints and (possibly also) optimizes some objective function.
Such problems may be solved by organizing the solution space into a tree and systematically searching this tree for the answer.
Subset Problems
The solution requires you to find a subset of n elements.
The subset must satisfy some constraints and possibly optimize some objective function.
Examples:
• Partition
• Subset sum
• 0/1 knapsack
• Satisfiability (find a subset of variables to be set to true so that the formula evaluates to true)
• Scheduling 2 machines
• Packing 2 bins
Permutation Problems
The solution requires you to find a permutation of n elements.
The permutation must satisfy some constraints and possibly optimize some objective function.
Examples:
• TSP
• n-queens:
  - Each queen must be placed in a different row and a different column.
  - Let queen i be the queen that is going to be placed in row i.
  - Let ci be the column in which queen i is placed.
  - c1, c2, c3, …, cn is a permutation of [1, 2, 3, …, n] such that no two queens attack each other.
Solution Space
A set that includes at least one solution to the problem.
Subset problem:
• n = 2: {00, 01, 10, 11}
• n = 3: {000, 001, 010, 100, 011, 101, 110, 111}
The solution space for a subset problem has 2^n members.
A nonsystematic search of the space for the answer takes O(p·2^n) time, where p is the time needed to evaluate each member of the solution space.
Solution Space
Permutation problem:
• n = 2: {12, 21}
• n = 3: {123, 132, 213, 231, 312, 321}
The solution space for a permutation problem has n! members.
A nonsystematic search of the space for the answer takes O(p·n!) time, where p is the time needed to evaluate a member of the solution space.
Subset & Permutation Problems
Subset problem of size n:
• A nonsystematic search of the space for the answer takes O(p·2^n) time, where p is the time needed to evaluate each member of the solution space.
Permutation problem of size n:
• A nonsystematic search of the space for the answer takes O(p·n!) time.
Backtracking and branch and bound perform a systematic search, often taking much less time than a nonsystematic search.
Tree Organization of the Solution Space
Set up a tree structure such that the leaves represent members of the solution space.
• For a size-n subset problem, this tree structure has 2^n leaves.
• For a size-n permutation problem, this tree structure has n! leaves.
The tree structure is too big to store in memory; it would also take too much time to create it in full.
Portions of the tree structure are created by the backtracking and branch and bound algorithms as needed.
Subset Tree for n = 4
[Figure: the binary subset tree; each level i branches on xi = 1 or xi = 0, and the 16 leaves spell out the subsets, e.g., 1110, 1011, 0111, 0001]
Permutation Tree for n = 3
[Figure: the permutation tree; level 1 branches on x1 ∈ {1, 2, 3}, lower levels branch on the remaining values, and the 6 leaves are 123, 132, 213, 231, 312, 321]
Backtracking
Search the solution space tree in a depth-first manner.
May be done recursively, or by using a stack to retain the path from the root to the current node in the tree.
The solution space tree exists only in your mind, not in the computer. A recursive sketch for n-queens is shown below.
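
A minimal Python sketch (not from the slides) of recursive backtracking on the n-queens permutation space, with 0-indexed columns; a branch is abandoned as soon as two queens attack:

def n_queens(n, c=()):
    """Yield all safe placements as tuples (c1, ..., cn) of column indices."""
    i = len(c)                       # next row to fill
    if i == n:
        yield c
        return
    for col in range(n):
        # Prune: same column or same diagonal as an earlier queen.
        if all(col != cj and abs(col - cj) != i - j for j, cj in enumerate(c)):
            yield from n_queens(n, c + (col,))   # forward move: go deeper
        # Otherwise backtrack: try the next column in this row.

print(next(n_queens(4)))            # (1, 3, 0, 2)
print(sum(1 for _ in n_queens(8)))  # 92 solutions on the 8x8 board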
Backtracking Depth-First Search
[Figure: five animation frames of a depth-first traversal of the subset tree for n = 2, branching on x1 ∈ {1, 0} and then x2 ∈ {1, 0}]
O(2^n) Subset Sum & Bounding Functions
[Figure: depth-first search of the subset tree for n = 2, with a bounding function cutting off subtrees]
Each forward and backward move takes O(1) time.
Backtracking
The space required is O(tree height).
With effective bounding functions, large instances can often be solved.
For some problems (e.g., 0/1 knapsack), the answer (or a very good solution) may be found quickly, but a lot of additional time is needed to complete the search of the tree.
Run backtracking for as much time as is feasible and use the best solution found up to that time.
Branch and Bound
Search the tree using a breadth-first search (FIFO branch and bound).
Or search the tree as in a BFS, but replace the FIFO queue with a stack (LIFO branch and bound).
Or replace the FIFO queue with a priority queue (least-cost, or max-priority, branch and bound).
The priority of a node p in the queue is based on an estimate of the likelihood that the answer node is in the subtree whose root is p. A least-cost sketch follows below.
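
A minimal Python sketch (not from the slides) of least-cost branch and bound for the 0/1 knapsack; using the fractional-knapsack value as the optimistic priority bound is an assumption of this sketch, tested on the (benefit, weight) set from the earlier subproblem-optimality slide:

import heapq

def bb_knapsack(items, W):
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)

    def bound(k, benefit, weight):
        """Optimistic benefit: fill the remaining capacity fractionally."""
        for b, w in items[k:]:
            take = min(w, W - weight)
            benefit += b * take / w
            weight += take
        return benefit

    best = 0
    # Max-priority queue via negated bounds: (-bound, level, benefit, weight).
    Q = [(-bound(0, 0, 0), 0, 0, 0)]
    while Q:
        neg_ub, k, benefit, weight = heapq.heappop(Q)
        if -neg_ub <= best or k == len(items):
            continue                       # this subtree cannot beat the best found
        b, w = items[k]
        if weight + w <= W:                # branch 1: take item k
            best = max(best, benefit + b)
            heapq.heappush(Q, (-bound(k + 1, benefit + b, weight + w),
                               k + 1, benefit + b, weight + w))
        # Branch 2: skip item k.
        heapq.heappush(Q, (-bound(k + 1, benefit, weight), k + 1, benefit, weight))
    return best

print(bb_knapsack([(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)], 20))  # 26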
Branch and Bound
The space required is O(number of leaves).
For some problems, solutions are at different levels of the tree (e.g., the 16 puzzle).
[Figure: a scrambled 16-puzzle configuration and the goal configuration with tiles 1-15 in order]
Branch and Bound
• FIFO branch and bound finds the solution closest to the root.
• Backtracking may never find a solution, because the tree depth is infinite (unless repeating configurations are eliminated).
• Least-cost branch and bound directs the search to the parts of the space most likely to contain the answer, so it can perform better than backtracking.