Dynamic Programming Algorithm

Analysis & Design of Algorithms (CSCE 321)
Prof. Amr Goneid
Department of Computer Science, AUC
Part 10. Dynamic Programming
Dynamic Programming
- Introduction
- What is Dynamic Programming?
- How To Devise a Dynamic Programming Approach
- The Sum of Subset Problem
- The Knapsack Problem
- Minimum Cost Path
- Coin Change Problem
- Optimal BST
- DP Algorithms in Graph Problems
- Comparison with Greedy and D&Q Methods

1. Introduction
We have demonstrated that:
- Sometimes, the divide and conquer approach seems appropriate but fails to produce an efficient algorithm.
- One of the reasons is that D&Q produces overlapping subproblems.

Introduction
- Solution: buy speed using space.
- Store previous instances to compute the current instance.
- Instead of dividing the large problem into two (or more) smaller problems and solving those problems (as we did in the divide and conquer approach), we start with the simplest possible problems.
- We solve them (usually trivially) and save their results. These results are then used to solve slightly larger problems, which are in turn saved and used to solve still larger problems.
- This method is called Dynamic Programming.

2. What is Dynamic Programming?
- An algorithm design method used when the solution is the result of a sequence of decisions (e.g. Knapsack, Optimal Search Trees, Shortest Path, etc.).
- Makes decisions one at a time, never making an erroneous decision.
- Solves a sub-problem by making use of previously stored solutions to other sub-problems.

Dynamic Programming
- Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.
- "Programming" here means "planning".

When is Dynamic Programming Applicable?
Two main properties suggest that a given problem can be solved using dynamic programming:
- Overlapping Subproblems
- Optimal Substructure

Overlapping Subproblems
- Like Divide and Conquer, Dynamic Programming combines solutions to subproblems. Dynamic Programming is mainly used when solutions to the same subproblems are needed again and again.
- Examples are computing the Fibonacci Sequence, Binomial Coefficients, etc.

Optimal Substructure: Principle of Optimality
- Dynamic programming uses the Principle of Optimality to avoid non-optimal decision sequences.
- For an optimal sequence of decisions, the remaining decisions must constitute an optimal sequence.
- Example: Shortest Path. Find the shortest path from vertex (i) to vertex (j).

Principle of Optimality
Let k be an intermediate vertex on a shortest i-to-j path (i, a, b, ..., k, l, m, ..., j). Then the path (i, a, b, ..., k) must be a shortest i-to-k path, and the path (k, l, m, ..., j) must be a shortest k-to-j path.

3. How To Devise a Dynamic Programming Approach
Given a problem that is solvable by a Divide & Conquer method:
- Prepare a table to store results of sub-problems.
- Replace the base case by filling the start of the table.
- Replace recursive calls by table lookups.
- Devise for-loops to fill the table with sub-problem solutions instead of returning values.
- The solution is at the end of the table.
- Notice that previous table locations also contain valid (optimal) sub-problem solutions.

Example (1): Fibonacci Sequence
- The Fibonacci call graph is not a tree, indicating an Overlapping Subproblem.
- Optimal Substructure: if F(n-2) and F(n-1) are optimal, then F(n) = F(n-2) + F(n-1) is optimal.

Fibonacci Sequence
Dynamic Programming Solution:
- Buy speed with space: a table F(n).
- Store previous instances to compute the current instance.

Fibonacci Sequence
Recursive (D&Q) version:

    Fib(n):
        if (n < 2) return 1;
        else return Fib(n-1) + Fib(n-2);

Dynamic Programming version (table F[0..n]):

    F[0] = F[1] = 1;
    if (n >= 2)
        for i = 2 to n
            F[i] = F[i-1] + F[i-2];
    return F[n];

Fibonacci Sequence
Dynamic Programming Solution:
- Space Complexity is O(n)
- Time Complexity is T(n) = O(n)

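The tabulated version above can be sketched as a small runnable C function (a sketch only: the function name `fib_dp` and the fixed table size are ours, not from the slides):

```c
#include <assert.h>

/* Tabulated Fibonacci as on the slide: F[0] = F[1] = 1,
   each entry computed from the two previous table entries. */
long fib_dp(int n) {
    long F[64];                 /* the table: buying speed with space */
    F[0] = F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i-1] + F[i-2];
    return F[n];
}
```

Each F[i] is computed once from stored values, giving the O(n) time bound instead of the exponential recursion.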
Example (2): Counting Combinations

    comb(n, m) = comb(n-1, m-1) + comb(n-1, m)   for 1 <= m < n
    with comb(n, 0) = comb(n, n) = 1
- Overlapping Subproblem: the recursion tree for comb(5,3) expands comb(4,2) and comb(4,3); the subproblem comb(3,2), among others, appears in both branches, so the same instances are recomputed.

Counting Combinations
- Optimal Substructure: the value of comb(n, m) can be recursively calculated using the standard formula for Binomial Coefficients:

    comb(n, m) = comb(n-1, m-1) + comb(n-1, m)
    comb(n, 0) = comb(n, n) = 1

Counting Combinations
Dynamic Programming Solution:
- Buy speed with space: Pascal's Triangle. Use a table T[0..n, 0..m].
- Store previous instances to compute the current instance.

Counting Combinations
Recursive (D&Q) version:

    comb(n, m):
        if ((m == 0) || (m == n)) return 1;
        else return comb(n-1, m-1) + comb(n-1, m);

Dynamic Programming version (table T[0..n, 0..m]):

    for (i = 0 to n-m) T[i, 0] = 1;
    for (i = 0 to m)   T[i, i] = 1;
    for (j = 1 to m)
        for (i = j+1 to n-m+j)
            T[i, j] = T[i-1, j-1] + T[i-1, j];
    return T[n, m];

Counting Combinations
Dynamic Programming Solution:
- Space Complexity is O(nm)
- Time Complexity is T(n) = O(nm)

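A runnable C sketch of the Pascal's-Triangle tabulation. For simplicity it fills a full (n+1) x (n+1) table rather than the slide's banded one; the function name and fixed array bounds are ours:

```c
#include <assert.h>

/* Tabulated binomial coefficient comb(n, m) via Pascal's Triangle:
   each entry is the sum of the two entries above it. */
long comb_dp(int n, int m) {
    long T[32][32];
    for (int i = 0; i <= n; i++) {
        T[i][0] = 1;                          /* comb(i, 0) = 1 */
        for (int j = 1; j <= i; j++)
            T[i][j] = (j == i) ? 1            /* comb(i, i) = 1 */
                               : T[i-1][j-1] + T[i-1][j];
    }
    return T[n][m];
}
```

The banded version on the slide saves space by only computing the entries actually needed for T[n, m]; the asymptotic bounds are the same.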
Exercise
Consider the following function:

    F(n) = sum over i = 1 to n-1 of F(i) * F(i-1),  for n > 1, with F(0) = F(1) = 2

Let T(n) be the number of arithmetic operations used:
- Show that a direct recursive algorithm would give an exponential complexity.
- Explain how, by not re-computing the same F(i) value twice, one can obtain an algorithm with T(n) = O(n^2).
- Give an algorithm for this problem that only uses O(n) arithmetic operations.

4. The Sum of Subset Problem
- Given a set of positive integers W = {w1, w2, ..., wn}
- The problem: is there a subset of W that sums exactly to m? i.e., is SumSub(w, n, m) true?
- Example: W = {11, 13, 27, 7}, m = 31. A possible subset that sums exactly to 31 is {11, 13, 7}. Hence, SumSub(w, 4, 31) is true.

The Sum of Subset Problem
- Consider the partial problem SumSub(w, i, j)
- SumSub(w, i, j) is true if either:
  - wi is not needed: {w1, ..., wi-1} has a subset that sums to (j), i.e., SumSub(w, i-1, j) is true, OR
  - wi is needed to fill the rest of (j): {w1, ..., wi-1} has a subset that sums to (j - wi)
- If there are no elements, i.e. (i = 0), then SumSub(w, 0, j) is true if (j = 0) and false otherwise

Divide & Conquer Approach
Algorithm:

    bool SumSub (w, i, j)
    {
        if (i == 0) return (j == 0);
        else if (SumSub (w, i-1, j)) return true;
        else if ((j - wi) >= 0)
            return SumSub (w, i-1, j - wi);
        else return false;
    }

Dynamic Programming Approach
Use a table t[i, j], i = 0..n, j = 0..m
- Base case: set t[0,0] = true and t[0,j] = false for (j != 0)
- Recursive calls are replaced by table lookups inside two loops (i = 1 to n, j = 0 to m):
  - the test on SumSub(w, i-1, j) is replaced by t[i,j] = t[i-1,j]
  - the return of SumSub(w, i-1, j - wi) is replaced by t[i,j] = t[i-1,j] OR t[i-1, j - wi]

Dynamic Programming Algorithm

    bool SumSub (w, n, m)
    {
    1.  t[0,0] = true;
        for (j = 1 to m) t[0,j] = false;
    2.  for (i = 1 to n)
    3.      for (j = 0 to m)
    4.      {   t[i,j] = t[i-1,j];
    5.          if ((j - wi) >= 0)
    6.              t[i,j] = t[i-1,j] || t[i-1, j - wi];
            }
    7.  return t[n,m];
    }

Dynamic Programming Algorithm
Analysis:
1. costs O(1) + O(m)
2. costs O(n)
3. costs O(m)
4. costs O(1)
5. costs O(1)
6. costs O(1)
7. costs O(1)
Hence, space complexity is O(nm)
Time complexity is T(n) = O(m) + O(nm) = O(nm)

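The tabulated algorithm above can be sketched in runnable C (a sketch: the 0-based array w[], the function name, and the fixed table bounds are our adaptations):

```c
#include <assert.h>
#include <stdbool.h>

/* Tabulated sum-of-subset: t[i][j] is true iff some subset of the
   first i weights sums exactly to j. With 0-based arrays, w[i-1]
   plays the role of wi on the slide. */
bool sumsub_dp(const int w[], int n, int m) {
    bool t[16][64];
    t[0][0] = true;
    for (int j = 1; j <= m; j++) t[0][j] = false;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            t[i][j] = t[i-1][j];                     /* wi not used */
            if (j - w[i-1] >= 0)
                t[i][j] = t[i-1][j] || t[i-1][j - w[i-1]];  /* wi used */
        }
    return t[n][m];
}
```

Running it on the slide's example W = {11, 13, 27, 7}, m = 31 yields true, matching the subset {11, 13, 7}.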
5. The (0/1) Knapsack Problem
- Given n indivisible objects with positive integer weights W = {w1, w2, ..., wn} and positive integer values V = {v1, v2, ..., vn}, and a knapsack of size (m)
- Find the highest-valued subset of objects with total weight at most (m)

The Decision Instance
- Assume that we have tried objects of type (1, 2, ..., i-1) to fill the sack up to a capacity (j) with a maximum profit of P(i-1, j)
- If j >= wi then P(i-1, j - wi) is the maximum profit if we remove the equivalent weight wi of an object of type (i).
- By trying to add object (i), we expect the maximum profit to change to P(i-1, j - wi) + vi

The Decision Instance
- If this change is better, we do it; otherwise we leave things as they were, i.e.,

    P(i, j) = max { P(i-1, j), P(i-1, j - wi) + vi }   for j >= wi
    P(i, j) = P(i-1, j)                                for j < wi

- The above instance can be solved for P(n, m) by initializing P(0, j) = 0 and successively computing P(1, j), P(2, j), ..., P(n, j) for all 0 <= j <= m

Divide & Conquer Approach
Algorithm:

    int Knapsackr (int w[ ], int v[ ], int i, int j)
    {
        if (i == 0) return 0;
        else
        {
            int a = Knapsackr (w, v, i-1, j);
            if ((j - w[i]) >= 0)
            {
                int b = Knapsackr (w, v, i-1, j - w[i]) + v[i];
                return (b > a ? b : a);
            }
            else return a;
        }
    }

Divide & Conquer Approach
Analysis:
T(n) = no. of calls to Knapsackr(w, v, n, m):
- For n = 0, one main call: T(0) = 1
- For n > 0, one main call plus two calls, each with n-1
- The recurrence relation is: T(n) = 2T(n-1) + 1 for n > 0, with T(0) = 1
- Hence T(n) = 2^(n+1) - 1 = O(2^n): exponential time

Dynamic Programming Approach
- The following approach will give the maximum profit, but not the collection of objects that produced this profit
- Initialize P(0, j) = 0 for 0 <= j <= m
- Initialize P(i, 0) = 0 for 0 <= i <= n
- Then:

    for each object i from 1 to n do
        for a capacity j from 0 to m do
            P(i, j) = P(i-1, j)
            if (j >= wi)
                if (P(i-1, j) < P(i-1, j - wi) + vi)
                    P(i, j) = P(i-1, j - wi) + vi

DP Algorithm

    int Knapsackdp (int w[ ], int v[ ], int n, int m)
    {
        int p[N][M];
        for (int j = 0; j <= m; j++) p[0][j] = 0;
        for (int i = 0; i <= n; i++) p[i][0] = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 0; j <= m; j++)
            {
                int a = p[i-1][j]; p[i][j] = a;
                if ((j - w[i]) >= 0)
                {
                    int b = p[i-1][j - w[i]] + v[i];
                    if (b > a) p[i][j] = b;
                }
            }
        return p[n][m];
    }

Hence, space complexity is O(nm)
Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm)

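The DP algorithm, adapted to 0-based arrays, can be checked against the 4-item example used in these slides (weights {2, 1, 3, 2}, values {12, 10, 20, 15}, capacity m = 5); the function name and fixed array bounds are ours:

```c
#include <assert.h>

/* 0/1 knapsack tabulation. With 0-based arrays, w[i-1] and v[i-1]
   correspond to wi and vi on the slides. Returns max profit P(n, m). */
int knapsack_dp(const int w[], const int v[], int n, int m) {
    int p[16][32];
    for (int j = 0; j <= m; j++) p[0][j] = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            p[i][j] = p[i-1][j];                       /* leave object i out */
            if (j >= w[i-1]) {
                int b = p[i-1][j - w[i-1]] + v[i-1];   /* take object i */
                if (b > p[i][j]) p[i][j] = b;
            }
        }
    return p[n][m];
}
```

For the example data the best subset is items 1, 2 and 4 (weight 2 + 1 + 2 = 5, value 12 + 10 + 15 = 37).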
Example
Example: Knapsack capacity m = 5

    item   weight   value ($)
      1      2        12
      2      1        10
      3      3        20
      4      2        15

Example
The table P(i, j) has rows i = 0..n and columns j = 0..m. Row 0 and column 0 are initialized to 0; each entry P(i, j) is computed from P(i-1, j) and P(i-1, j - wi) using (wi, vi). The goal is P(n, m).

Exercises
- Modify the previous Knapsack algorithm so that it can also list the objects contributing to the maximum profit.
- Explain how to reduce the space complexity of the Knapsack problem to only O(m). You need only find the maximum profit, not the actual collection of objects.

Exercise:
Longest Common Subsequence Problem
- Given two sequences A = {a1, ..., an} and B = {b1, ..., bm}.
- Find the longest sequence that is a subsequence of both A and B. For example, if A = {aaadebcbac} and B = {abcadebcbec}, then {adebcb} is a common subsequence of length 6.
- Give the recursive Divide & Conquer algorithm and the Dynamic Programming algorithm, together with their analyses.
- Hint: Let L(i, j) be the length of the longest common subsequence of {a1, ..., ai} and {b1, ..., bj}. If ai = bj then L(i, j) = L(i-1, j-1) + 1. Otherwise, L(i, j) = max(L(i, j-1), L(i-1, j)).

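The hint's recurrence tabulates directly; one possible C sketch (part of the exercise's answer, so treat it as one solution among several; the function name and fixed table bounds are ours). Note the example pair of strings has a common subsequence of length at least 6:

```c
#include <assert.h>
#include <string.h>

/* Tabulation of the hint's recurrence: L[i][j] is the length of the
   longest common subsequence of the first i chars of a and the
   first j chars of b. */
int lcs_len(const char *a, const char *b) {
    int n = (int)strlen(a), m = (int)strlen(b);
    int L[32][32];
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            if (i == 0 || j == 0)      L[i][j] = 0;
            else if (a[i-1] == b[j-1]) L[i][j] = L[i-1][j-1] + 1;
            else L[i][j] = L[i][j-1] > L[i-1][j] ? L[i][j-1] : L[i-1][j];
        }
    return L[n][m];
}
```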
6. Minimum Cost Path
- Given a cost matrix C[ ][ ] and a position (n, m) in C[ ][ ], find the cost of the minimum cost path to reach (n, m) from (0, 0).
- Each cell of the matrix represents a cost to traverse through that cell. The total cost of a path to reach (n, m) is the sum of all the costs on that path (including both source and destination).
- From a given cell (i, j), you can only traverse down to cell (i+1, j), right to cell (i, j+1), or diagonally to cell (i+1, j+1). Assume that all costs are positive integers.

Minimum Cost Path
Example: what is the minimum cost path to (2, 2)? For the cost matrix in the original figure, the path is (0, 0) -> (0, 1) -> (1, 2) -> (2, 2), and its cost is 8 (1 + 2 + 2 + 3).
- Optimal Substructure: the minimum cost to reach (n, m) is "the minimum of the 3 cells plus cost[n][m]", i.e.,

    minCost(n, m) = min(minCost(n-1, m-1), minCost(n-1, m), minCost(n, m-1)) + C[n][m]

Minimum Cost Path (D&Q)
- Overlapping Subproblems: the recursive definition suggests a D&Q approach with overlapping subproblems:

    int MinCost(int C[ ][M], int n, int m)
    {
        if (n < 0 || m < 0) return ∞;
        else if (n == 0 && m == 0) return C[n][m];
        else return C[n][m] + min( MinCost(C, n-1, m-1),
                                   MinCost(C, n-1, m), MinCost(C, n, m-1) );
    }

Analysis: for m = n, T(n) = 3T(n-1) + 3 for n > 0, with T(0) = 0.
Hence T(n) = O(3^n): exponential complexity.

Dynamic Programming Algorithm
In the Dynamic Programming (DP) algorithm, recomputation of the same subproblems is avoided by constructing a temporary array T[ ][ ] in a bottom-up manner.

    int minCost(int C[ ][M], int n, int m)
    {
        int i, j; int T[N][M];
        T[0][0] = C[0][0];
        /* Initialize first column */
        for (i = 1; i <= n; i++) T[i][0] = T[i-1][0] + C[i][0];
        /* Initialize first row */
        for (j = 1; j <= m; j++) T[0][j] = T[0][j-1] + C[0][j];
        /* Construct rest of the array */
        for (i = 1; i <= n; i++)
            for (j = 1; j <= m; j++)
                T[i][j] = min(T[i-1][j-1], T[i-1][j], T[i][j-1]) + C[i][j];
        return T[n][m];
    }

Space complexity is O(nm)
Time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm)

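A runnable C sketch of the tabulation. The slide's cost matrix lives in a figure not reproduced here, so the matrix below is a hypothetical one chosen so that the cheapest path to (2, 2) is (0, 0) -> (0, 1) -> (1, 2) -> (2, 2) with cost 8, matching the slide's example; the function names and fixed 3x3 dimensions are ours:

```c
#include <assert.h>

static int min3(int a, int b, int c) {
    int m = a < b ? a : b;
    return m < c ? m : c;
}

/* Bottom-up min-cost-path tabulation over a 3x3 cost matrix. */
int min_cost(int C[3][3], int n, int m) {
    int T[3][3];
    T[0][0] = C[0][0];
    for (int i = 1; i <= n; i++) T[i][0] = T[i-1][0] + C[i][0];
    for (int j = 1; j <= m; j++) T[0][j] = T[0][j-1] + C[0][j];
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            T[i][j] = min3(T[i-1][j-1], T[i-1][j], T[i][j-1]) + C[i][j];
    return T[n][m];
}
```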
7. Coin Change Problem
- We want to make change for N cents, and we have an infinite supply of each of S = {S1, S2, ..., Sm} valued coins. How many ways can we make the change? (For simplicity's sake, the order does not matter.)
- Mathematically, how many ways can we express N as

    N = sum over k = 1 to m of xk * Sk,  with xk >= 0 for all k in {1, ..., m}

- For example, for N = 4, S = {1, 2, 3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}.
- We are trying to count the number of distinct sets.
- Since order does not matter, we will impose that our solutions (sets) are all sorted in non-decreasing order (thus, we are looking at sorted-set solutions: collections).

Coin Change Problem
- With S1 < S2 < ... < Sm, the number of possible sets C(N, m) is composed of:
  - those sets that contain at least one Sm, i.e. C(N - Sm, m)
  - those sets that do not contain any Sm, i.e. C(N, m-1)
- Hence, the solution can be represented by the recurrence relation:

    C(N, m) = C(N, m-1) + C(N - Sm, m)

  with the base cases:

    C(N, m) = 1 for N = 0
    C(N, m) = 0 for N < 0
    C(N, m) = 0 for N >= 1, m <= 0

- Therefore, the problem has the optimal substructure property, as the problem can be solved using solutions to subproblems.
- It also has the property of overlapping subproblems.

D&Q Algorithm

    int count( int S[ ], int m, int n )
    {
        // If n is 0 then there is 1 solution (do not include any coin)
        if (n == 0) return 1;
        // If n is less than 0 then no solution exists
        if (n < 0) return 0;
        // If there are no coins and n is greater than 0, then no solution
        if (m <= 0 && n >= 1) return 0;
        // count is the sum of solutions (i) excluding S[m-1] (ii) including S[m-1]
        return count( S, m-1, n ) + count( S, m, n - S[m-1] );
    }

The algorithm has exponential complexity.

DP Algorithm

    int count( int S[ ], int m, int n )
    {
        int i, j, x, y;
        int table[n+1][m];   // n+1 rows to include the case (n = 0)
        for (i = 0; i < m; i++) table[0][i] = 1;   // Fill for the case (n = 0)
        // Fill the rest of the table entries bottom up
        for (i = 1; i < n+1; i++)
            for (j = 0; j < m; j++)
            {
                x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;  // solutions including S[j]
                y = (j >= 1) ? table[i][j-1] : 0;              // solutions excluding S[j]
                table[i][j] = x + y;                           // total count
            }
        return table[n][m-1];
    }

Space complexity is O(nm)
Time complexity is O(m) + O(nm) = O(nm)

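The tabulation above, checked against the slide's example N = 4, S = {1, 2, 3} (four ways); the function name and fixed table bounds are our adaptation:

```c
#include <assert.h>

/* Coin-change tabulation: table[i][j] counts the ways to make
   amount i using only the first j+1 coin values of S. */
int count_change(const int S[], int m, int n) {
    int table[32][8];
    for (int j = 0; j < m; j++) table[0][j] = 1;   /* amount 0: one way */
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < m; j++) {
            int x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0; /* use S[j]  */
            int y = (j >= 1) ? table[i][j-1] : 0;             /* skip S[j] */
            table[i][j] = x + y;
        }
    return table[n][m-1];
}
```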
8. Optimal Binary Search Trees
- Problem: given a set of keys K1, K2, ..., Kn and their corresponding search frequencies P1, P2, ..., Pn, find a binary search tree for the keys such that the total search cost is minimum.
- Remark: the problem is similar to that of optimal merge trees (Huffman Coding) but more difficult because now:
  - keys can exist in internal nodes
  - the binary search tree condition (Left < Parent < Right) is imposed

(a) Example
- A Binary Search Tree of 5 words, with frequencies out of a total of 100:

    Word:  a   am  and  if  two
    Freq:  22  18  20   30  10

- A Greedy Algorithm: insert words in the tree in order of decreasing frequency of search.

A Greedy BST
- Insert (if), (a), (and), (am), (two): "if" (30) becomes the root at level 1; "a" (22) and "two" (10) sit at level 2; "and" (20) at level 3; "am" (18) at level 4.
- Total Search Cost (100 searches) =
  1*30 + 2*22 + 2*10 + 3*20 + 4*18 = 226

Another Way to Compute Cost
For a tree containing n keys (K1, K2, ..., Kn) with root Ki, left subtree over keys 1..(i-1) and right subtree over keys (i+1)..n, the total cost C(tree) is:

    C(tree) = (sum of P over all keys) + C(left subtree) + C(right subtree)

For the previous example:
C(tree) = 100 + {60 + [0] + [38 + (18) + (0)]} + {10} = 226

An Optimal BST
- A Dynamic Programming Algorithm leads to the following BST: "and" (20) is the root; its children are "a" (22) and "if" (30); "am" (18) is the right child of "a", and "two" (10) is the right child of "if".

    Cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208
         = 100 + {40 + [0] + [18]} + {40 + [0] + [10]} = 208

(b) Dynamic Programming Method
- For n keys, we perform n-1 iterations, 1 <= j <= n-1.
- In each iteration (j), we compute the best way to build a sub-tree containing j+1 keys (Ki, Ki+1, ..., Ki+j), for all possible BST combinations of such j+1 keys (i.e. for 1 <= i <= n-j).
- Each sub-tree is tried with one of the keys as the root, and a minimum cost sub-tree is stored.
- For a given iteration (j), we use previously stored values to determine the current best sub-tree.

Simple Example
- For 3 keys (A < B < C), we perform 2 iterations: j = 1, j = 2.
- For j = 1, we build sub-trees using 2 keys. These come from i = 1 (A-B) and i = 2 (B-C).
- For each of these combinations, we compute the least cost sub-tree, i.e., the least cost of the two sub-trees (A*, B) and (A, B*), and the least cost of the two sub-trees (B*, C) and (B, C*), where (*) denotes the parent.

Simple Example (Cont.)
- For j = 2, we build trees using 3 keys. These come from i = 1 (A-C).
- For this combination, we compute the least cost tree among the trees (A*, (B-C)), (A, B*, C), ((A-B), C*). This is done using previously computed least cost sub-trees.

Simple Example (continued)
- j = 1: for i = 1 (A-B), try roots k = 1 (A) and k = 2 (B) and keep the minimum; for i = 2 (B-C), try roots k = 2 (B) and k = 3 (C) and keep the minimum.
- j = 2: for i = 1 (A-C), try roots k = 1 (A above B-C), k = 2 (B with children A and C), and k = 3 (C above A-B), reusing the stored minima for (B-C) and (A-B).

(c) Example Revisited: Optimal BST for 5 keys
- Iterations 1..2
Keys i = 1..5 and frequencies p (j = 0): a: 22, am: 18, and: 20, if: 30, two: 10

2 keys (j = 1): cost(root k) = sum(p) + cost(remaining key); keep the minimum:

    a..am:    sum(p) = 40; roots a / am give 58 / 62;  min = 58
    am..and:  sum(p) = 38; roots am / and give 58 / 56; min = 56
    and..if:  sum(p) = 50; roots and / if give 80 / 70; min = 70
    if..two:  sum(p) = 40; roots if / two give 50 / 70; min = 50

3 keys (j = 2): cost(root k) = sum(p) + cost(left sub-range) + cost(right sub-range):

    a..and:   sum(p) = 60; roots a / am / and give 116 / 102 / 118; min = 102
    am..if:   sum(p) = 68; roots am / and / if give 138 / 116 / 124; min = 116
    and..two: sum(p) = 60; roots and / if / two give 110 / 90 / 130; min = 90

Optimal BST for 5 keys
- Iterations 3 .. n-1

4 keys (j = 3):

    a..if:    sum(p) = 90; roots a / am / and / if give 206 / 182 / 178 / 192; min = 178
    am..two:  sum(p) = 78; roots am / and / if / two give 168 / 146 / 144 / 194; min = 144

5 keys (j = 4):

    a..two:   sum(p) = 100; roots a / am / and / if / two give 244 / 212 / 208 / 212 / 278; min = 208

Construction of The Tree
- Min BST at j = 4, i = 1, k = 3 (root "and"): cost = 100 + 58 + 50 = 208
- (a..am)min at j = 1, i = 1 (root "a", right child "am"): cost = 58
- (if..two)min at j = 1, i = 4 (root "if", right child "two"): cost = 50
- Final min BST: "and" at the root, children "a" and "if", with "am" under "a" and "two" under "if":
  cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208

(d) Complexity Analysis
- Skeletal Algorithm:

    for j = 1 to n-1 do
    {   // sub-tree has j+1 nodes
        for i = 1 to n-j do
        {   // for each of the n-j sub-tree combinations
            for k = i to i+j do
            {   find the cost of each of the j+1 configurations
                and determine the minimum cost }
        }
    }

- T(n) = sum over 1 <= j <= n-1 of (j+1)(n-j) = O(n^3),  S(n) = O(n^2)

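The skeletal algorithm above can be sketched as a runnable cost-only tabulation in C (a sketch: it returns the minimum total search cost, not the tree itself; the function name and fixed table bounds are ours). On the 5-word example it reproduces the optimal cost 208:

```c
#include <assert.h>

/* O(n^3) optimal-BST cost tabulation.
   c[i][j] = min total search cost of a BST over keys i..j, following
   the slides' recurrence: sum(p[i..j]) plus, minimized over roots k,
   the stored costs of the left and right sub-ranges. */
int opt_bst_cost(const int p[], int n) {
    int c[8][8] = {0};                    /* empty ranges cost 0 */
    for (int j = 0; j < n; j++)           /* j = range length - 1 */
        for (int i = 0; i + j < n; i++) {
            int sum = 0, best = -1;
            for (int k = i; k <= i + j; k++) sum += p[k];
            for (int k = i; k <= i + j; k++) {   /* try each root k */
                int left  = (k > i)     ? c[i][k-1]   : 0;
                int right = (k < i + j) ? c[k+1][i+j] : 0;
                if (best < 0 || left + right < best) best = left + right;
            }
            c[i][i+j] = sum + best;
        }
    return c[0][n-1];
}
```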
Exercise
- Find the optimal binary search tree for the following words with the associated frequencies:
  a (18), and (22), I (19), it (20), or (21)
- Answer: min cost = 20 + 2*43 + 3*37 = 217. "it" (20) is the root; its children are "and" (22) and "or" (21); "a" (18) and "I" (19) are the children of "and".

9. Dynamic Programming Algorithms for Graph Problems
Various optimization graph problems have been solved using Dynamic Programming algorithms. Examples are:
- Dijkstra's algorithm solves the single-source shortest path problem for a graph with nonnegative edge path costs
- The Floyd-Warshall algorithm finds all-pairs shortest paths in a weighted graph (with positive or negative edge weights) and also finds the transitive closure
- The Bellman-Ford algorithm computes single-source shortest paths in a weighted digraph for graphs with negative edge weights
These will be discussed later under "Graph Algorithms".

10. Comparison with Greedy and Divide & Conquer Methods
- Greedy vs. DP:
  - Both are optimization techniques, building solutions from a collection of choices of individual elements.
  - The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices.
  - DP computes its solution bottom up by synthesizing it from smaller subsolutions, and by trying many possibilities and choices before it arrives at the optimal set of choices.
  - There is no a priori test by which one can tell if the Greedy method will lead to an optimal solution.
  - By contrast, there is a test for DP, called the Principle of Optimality.

Comparison with Greedy and Divide & Conquer Methods
- D&Q vs. DP:
  - Both techniques split their input into parts, find subsolutions to the parts, and synthesize larger solutions from smaller ones.
  - D&Q splits its input at pre-specified deterministic points (e.g., always in the middle).
  - DP splits its input at every possible split point rather than at pre-specified points. After trying all split points, it determines which split point is optimal.