Tutorial Exercise (Dynamic Programming)
CS3381 Design and Analysis of Algorithms
Helena Wong, 2001

1.
For the equations of the assembly-line scheduling problem:
f* = min( f1[n] + x1 , f2[n] + x2 )

f1(j) = e1 + a1,1                                                  if j = 1
        min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j )            if j > 1

f2(j) = e2 + a2,1                                                  if j = 1
        min( f2[j-1] + a2,j , f1[j-1] + t1,j-1 + a2,j )            if j > 1
Let ri(j) be the number of references made to fi[j] in a recursive algorithm. We have
r1(n) = r2(n) = 1
r1(j) = r2(j) = r1(j+1) + r2(j+1)   for 1 <= j < n
Show that ri(j) = 2^(n-j).
[Hint: You may use the substitution method to prove ri(n-k) = 2^k first.]
We first prove that ri(n-k) = 2^k by the substitution method.
When k = 0: r1(n-k) = r2(n-k) = 1 = 2^0, so the claim is true for k = 0.
Assume it is true for k-1, i.e. ri(n-k+1) = 2^(k-1).
By substitution: ri(n-k) = r1(n-k+1) + r2(n-k+1) = 2 · 2^(k-1) = 2^k
=> it is also true for k.
=> we have proved that ri(n-k) = 2^k.
=> letting j = n-k, we get ri(j) = 2^(n-j).
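
As a quick check of this count, here is a minimal Python sketch (not part of the tutorial) that evaluates f1 and f2 by the naive recursion and tallies how many times each fi[j] is referenced; the tallies come out to exactly 2^(n-j). The instance data a, t, e, x below are made-up placeholders.

# Count how many times a naive recursive evaluation references each fi[j].
# Lines are indexed 0/1 here instead of 1/2; the instance data are arbitrary.
n = 4
a = [[0, 7, 9, 3, 4], [0, 8, 5, 6, 4]]    # a[i][j]: assembly cost at station j of line i
t = [[0, 2, 3, 1], [0, 2, 1, 2]]          # t[i][j]: transfer cost after station j of line i
e = [2, 4]                                # entry times
x = [3, 2]                                # exit times
refs = {}                                 # refs[(i, j)] = number of references to fi[j]

def f(i, j):
    refs[(i, j)] = refs.get((i, j), 0) + 1
    if j == 1:
        return e[i] + a[i][1]
    other = 1 - i
    return min(f(i, j - 1) + a[i][j],
               f(other, j - 1) + t[other][j - 1] + a[i][j])

fastest = min(f(0, n) + x[0], f(1, n) + x[1])   # f* = min(f1[n] + x1, f2[n] + x2)
for j in range(1, n + 1):
    assert refs[(0, j)] == refs[(1, j)] == 2 ** (n - j)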
2. Using the result of question 1, show that the total number of references to all fi[j] values is exactly 2^(n+1) - 2.
What does this imply about the running time of a recursive algorithm for the Assembly-Line Scheduling
Problem? Compare it with the running time of the “FASTEST-WAY” algorithm.
Total number of references to all fi[j] values
= ( r1(1) + r1(2) + r1(3) + ... + r1(n) ) + ( r2(1) + r2(2) + r2(3) + ... + r2(n) )
= ( 2^(n-1) + 2^(n-2) + 2^(n-3) + ... + 2^(n-n) ) + ( 2^(n-1) + 2^(n-2) + 2^(n-3) + ... + 2^(n-n) )
= 2 ( 1 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1) )
= 2 ( 1 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1) + 2^n - 2^n )
= 2 [ (2^(n+1) - 1) / (2 - 1) ] - 2 · 2^n
= 2 (2^(n+1) - 1) - 2^(n+1)
= 2^(n+1) - 2
Assuming each reference takes constant (Θ(1)) time, the running time of a recursive
algorithm for the problem is Θ(2^n).
The running time of the “FASTEST-WAY” algorithm is Θ(n)!!!
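
For contrast, here is a minimal Python sketch of the bottom-up FASTEST-WAY computation (variable names are my own, reusing the placeholder instance from the previous sketch): each fi[j] is computed exactly once, so the work is Θ(n).

# Bottom-up FASTEST-WAY on the same made-up instance: Θ(n) time.
n = 4
a = [[0, 7, 9, 3, 4], [0, 8, 5, 6, 4]]
t = [[0, 2, 3, 1], [0, 2, 1, 2]]
e = [2, 4]
x = [3, 2]

f = [[0] * (n + 1) for _ in range(2)]
for i in (0, 1):
    f[i][1] = e[i] + a[i][1]
for j in range(2, n + 1):
    for i in (0, 1):
        other = 1 - i
        f[i][j] = min(f[i][j - 1] + a[i][j],
                      f[other][j - 1] + t[other][j - 1] + a[i][j])

fastest = min(f[0][n] + x[0], f[1][n] + x[1])
print(fastest)    # same optimum as the exponential recursion, computed in Θ(n) time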
3. For the “MATRIX-CHAIN-ORDER” algorithm used in our Matrix-Chain
Multiplication problem:
a. By observation, state a tight upper bound on the running time.
b. Show that the total number of times line 9 is executed is (n^3 - n)/6.
   [Given: 1^2 + 2^2 + 3^2 + ... + n^2 = n(n+1)(2n+1)/6.]
c. Make a conclusion about the asymptotically tight bound on the running
   time of the algorithm.
MATRIX-CHAIN-ORDER
1   for i ← 1 to n
2       m[i,i] ← 0
3   for len ← 2 to n                       // len is the chain length
4       for i ← 1 to n - len + 1
5           j ← i + len - 1
6           m[i,j] ← ∞
7           for k ← i to j - 1
8               q ← m[i,k] + m[k+1,j] + p[i-1]·p[k]·p[j]
9               if q < m[i,j]
10                  then m[i,j] ← q
11                       s[i,j] ← k
a. O(n^3) (three nested loops, each iterating at most n times).

b. The ‘len’ loop executes n-1 times (len = 2, 3, 4, ..., n). Inside each iteration of the ‘len’ loop, the ‘i’ loop executes n - len + 1
times. Inside each iteration of the ‘i’ loop, the ‘k’ loop executes (j-1) - i + 1 = (i + len - 1 - 1) - i + 1 = len - 1 times.
So the number of times line 9 is executed is:
Σ_{len=2..n} (n - len + 1)(len - 1)
= Σ_{len=2..n} ( n·len - len^2 + len - n + len - 1 )
= Σ_{len=2..n} ( (n+2)·len - len^2 - n - 1 )
= (n+2)(2 + 3 + ... + n) - (2^2 + 3^2 + ... + n^2) - (n-1)·n - (n-1)·1
= (n+2)[(1 + 2 + 3 + ... + n) - 1] - [(1^2 + 2^2 + 3^2 + ... + n^2) - 1] - (n-1)(n+1)
= (n+2)[(1+n)(n/2) - 1] - [n(n+1)(2n+1)/6 - 1] - (n^2 - 1)
= (n+2)(1+n)(n/2) - (n+2) - [(2n^3 + 3n^2 + n)/6 - 1] - n^2 + 1
= (n^3 + 3n^2 + 2n)/2 - [(2n^3 + 3n^2 + n)/6 - 1] - (n+2) - n^2 + 1
= (1/6) { 3n^3 + 9n^2 + 6n - 2n^3 - 3n^2 - n + 6 - 6n - 12 - 6n^2 + 6 }
= (n^3 - n)/6
c. Part (b) implies that line 9 is executed Θ(n^3) times, so the running time is Ω(n^3). Together with the O(n^3) upper bound from (a), the running time is Θ(n^3).
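
The count above can be checked mechanically. Below is a minimal Python rendering of MATRIX-CHAIN-ORDER (the counter variable and function name are my own additions) that counts how often the line-9 comparison runs and compares it with (n^3 - n)/6.

# Bottom-up matrix-chain order with a counter on the innermost comparison (line 9).
import math

def matrix_chain_order(p):
    n = len(p) - 1                       # number of matrices; A_i is p[i-1] x p[i]
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    line9 = 0                            # executions of the "if q < m[i,j]" test
    for length in range(2, n + 1):       # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                line9 += 1
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s, line9

p = [30, 35, 15, 5, 10, 20, 25]          # dimensions of 6 matrices
m, s, line9 = matrix_chain_order(p)
n = len(p) - 1
print(m[1][n])                           # minimum number of scalar multiplications
print(line9, (n**3 - n) // 6)            # both are 35 for n = 6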
4.
Draw the recursion tree for the MERGE-SORT procedure on an array of 16 elements. Explain why memoization
is ineffective in speeding up a good divide-and-conquer algorithm such as MERGE-SORT.

MERGE-SORT does not have the overlapping-subproblems property: every subproblem in its recursion tree is distinct,
so no subproblem is ever re-visited and memoization saves no work.
It would also need a lot of space to store the solutions of the subproblems.
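
To see this concretely, here is a minimal Python sketch (my own, not part of the tutorial) that prints the MERGE-SORT recursion tree for 16 elements as indented index ranges and checks that no subrange ever occurs twice.

# Print the merge-sort recursion tree for a 16-element array as indented index ranges,
# and verify that every subproblem (subrange) appears exactly once.
seen = set()

def merge_sort_tree(lo, hi, depth=0):
    print("  " * depth + f"A[{lo}..{hi}]")
    assert (lo, hi) not in seen          # no overlapping subproblems
    seen.add((lo, hi))
    if lo < hi:
        mid = (lo + hi) // 2
        merge_sort_tree(lo, mid, depth + 1)
        merge_sort_tree(mid + 1, hi, depth + 1)
        # the MERGE step itself is omitted; only the tree structure matters here

merge_sort_tree(1, 16)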
5.
Show that the running time of the MEMOIZED-MATRIX-CHAIN algorithm is O(n^3).
(Textbook pp. 348-349)
MEMOIZED-MATRIX-CHAIN()
1   for i ← 1 to n
2       for j ← i to n
3           m[i,j] ← ∞
4   return LOOKUP-CHAIN(1,n)

LOOKUP-CHAIN(i,j)
1   if m[i,j] < ∞
2       return m[i,j]
3   if i = j
4       then m[i,j] ← 0
5       else for k ← i to j - 1
6                q ← LOOKUP-CHAIN(i,k) + LOOKUP-CHAIN(k+1,j) + p[i-1]·p[k]·p[j]
7                if q < m[i,j]
8                    then m[i,j] ← q
9   return m[i,j]
We categorize the calls of LOOKUP-CHAIN into two types:
(1) calls in which m[i,j] = ∞
(2) calls in which m[i,j] < ∞
There are Θ(n^2) calls of type (1), one per table entry.
All calls of type (2) are made as recursive calls by calls of type (1).
Whenever a given call of LOOKUP-CHAIN makes recursive calls, it makes O(n) of them. Therefore, there are
O(n^3) calls of type (2) in all.
Each call of the second type takes O(1) time, and each call of the first type takes O(n) time plus the time spent in its
recursive calls. => The total time is O(n^3).
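
For reference, a minimal Python rendering of the memoized version above (function names are my own); it returns the same optimum as the bottom-up algorithm and runs in O(n^3) time.

# Top-down memoized matrix-chain order, mirroring MEMOIZED-MATRIX-CHAIN / LOOKUP-CHAIN.
import math

def memoized_matrix_chain(p):
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]   # inf marks "not yet computed"
    return lookup_chain(m, p, 1, n)

def lookup_chain(m, p, i, j):
    if m[i][j] < math.inf:               # type-(2) call: answer already stored, O(1)
        return m[i][j]
    if i == j:                           # type-(1) call: fill the table entry
        m[i][j] = 0
    else:
        for k in range(i, j):
            q = (lookup_chain(m, p, i, k)
                 + lookup_chain(m, p, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            if q < m[i][j]:
                m[i][j] = q
    return m[i][j]

print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))   # same optimum as MATRIX-CHAIN-ORDER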
6.
A dynamic programming solution to the LCS problem, LCS-LENGTH, is illustrated below.
What is the asymptotic upper bound of its running time?
Write a Memoized version of LCS-LENGTH.
Compare the performance of the two solutions.
LCS-LENGTH(X,Y)
1   m = length of X
2   n = length of Y
3   for i = 1 to m
4       c[i,0] = 0
5   for j = 1 to n
6       c[0,j] = 0
7   for i = 1 to m
8       for j = 1 to n
9           if xi = yj
10              c[i,j] = c[i-1,j-1] + 1
11              b[i,j] = “use result of c[i-1,j-1] plus xi”
12          else if c[i-1,j] >= c[i,j-1]
13              c[i,j] = c[i-1,j]
14              b[i,j] = “use result of c[i-1,j]”
15          else
16              c[i,j] = c[i,j-1]
17              b[i,j] = “use result of c[i,j-1]”
18  return c and b

The asymptotic upper bound on the running time of LCS-LENGTH is O(mn).
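
As a concrete counterpart, here is a minimal Python rendering of the bottom-up LCS-LENGTH (the function name and the string pair are my own choices); it fills the whole c table, so it always does Θ(mn) work.

# Bottom-up LCS length; c[i][j] = length of an LCS of X[1..i] and Y[1..j].
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:               # xi = yj (Python strings are 0-based)
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"                   # use result of c[i-1,j-1] plus xi
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"                     # use result of c[i-1,j]
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"                   # use result of c[i,j-1]
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])                                     # 4 for this pair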
MEMOIZED-LCS-LENGTH()
1   for i = 1 to m
2       for j = 1 to n
3           c[i,j] = -1
4   return LOOKUP-LCS(m,n)

LOOKUP-LCS(i,j)
1   if i = 0 or j = 0
2       return 0
3   if c[i,j] <> -1
4       return c[i,j]
5   if xi = yj
6       c[i,j] = LOOKUP-LCS(i-1,j-1) + 1
7       b[i,j] = “use result of c[i-1,j-1] plus xi”
8   else if LOOKUP-LCS(i-1,j) >= LOOKUP-LCS(i,j-1)
9       c[i,j] = LOOKUP-LCS(i-1,j)          // already memoized (or 0 on the boundary), so O(1)
10      b[i,j] = “use result of c[i-1,j]”
11  else
12      c[i,j] = LOOKUP-LCS(i,j-1)
13      b[i,j] = “use result of c[i,j-1]”
14  return c[i,j]
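
A minimal Python rendering of this memoized version (function names are my own) for comparison with the bottom-up one; it only fills table entries that are actually needed, but in the worst case still takes O(mn) time.

# Top-down memoized LCS length, mirroring LOOKUP-LCS; -1 marks "not yet computed".
def memoized_lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[-1] * (n + 1) for _ in range(m + 1)]

    def lookup(i, j):
        if i == 0 or j == 0:
            return 0
        if c[i][j] != -1:
            return c[i][j]
        if X[i - 1] == Y[j - 1]:
            c[i][j] = lookup(i - 1, j - 1) + 1
        elif lookup(i - 1, j) >= lookup(i, j - 1):
            c[i][j] = lookup(i - 1, j)     # memoized, returns in O(1)
        else:
            c[i][j] = lookup(i, j - 1)
        return c[i][j]

    return lookup(m, n)

print(memoized_lcs_length("ABCBDAB", "BDCABA"))    # 4, same as the bottom-up version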
7. Repeat question 6 for the OPTIMAL-BST algorithm.

OPTIMAL-BST()
1   for i = 1 to n+1
2       e[i,i-1] = q[i-1]
3       w[i,i-1] = q[i-1]
4   for len = 1 to n
5       for i = 1 to n - len + 1
6           j = i + len - 1
7           e[i,j] = ∞
8           w[i,j] = w[i,j-1] + p[j] + q[j]
9           for r = i to j
10              t = e[i,r-1] + e[r+1,j] + w[i,j]
11              if t < e[i,j]
12                  e[i,j] = t
13                  root[i,j] = r
14  return e and root

The asymptotic upper bound on the running time of OPTIMAL-BST is O(n^3).
MEMOIZED-BST()
1   for i = 1 to n+1
2       for j = 0 to n
3           e[i,j] = ∞
4   return LOOKUP-BST(1,n)

LOOKUP-BST(i,j)
1   if e[i,j] <> ∞
2       return e[i,j]
3   if j = i-1
4       e[i,j] = q[i-1]
5       w[i,j] = q[i-1]
6   else
7       LOOKUP-BST(i,j-1)                   // ensure w[i,j-1] has been computed
8       w[i,j] = w[i,j-1] + p[j] + q[j]
9       for r = i to j
10          t = LOOKUP-BST(i,r-1) + LOOKUP-BST(r+1,j) + w[i,j]
11          if t < e[i,j]
12              e[i,j] = t
13              root[i,j] = r
14  return e[i,j]
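
For completeness, a minimal Python sketch of the memoized optimal-BST computation (the function name and variable layout are my own); p[1..n] are the key probabilities and q[0..n] the dummy-key probabilities, following the textbook's convention, and the sample values below are the textbook's example instance.

# Top-down memoized expected-search-cost computation for an optimal BST.
import math

def memoized_optimal_bst(p, q):
    n = len(p) - 1                                       # p[0] is unused padding
    e = [[math.inf] * (n + 1) for _ in range(n + 2)]     # e[i][j] for 1 <= i <= n+1, 0 <= j <= n
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 2)]

    def lookup(i, j):
        if e[i][j] < math.inf:
            return e[i][j]
        if j == i - 1:                                   # empty subtree: only dummy key d[i-1]
            e[i][j] = q[i - 1]
            w[i][j] = q[i - 1]
        else:
            lookup(i, j - 1)                             # ensure w[i][j-1] is available
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):
                t = lookup(i, r - 1) + lookup(r + 1, j) + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
        return e[i][j]

    return lookup(1, n), root

p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
cost, root = memoized_optimal_bst(p, q)
print(cost)      # expected search cost of an optimal BST for this instance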