Chapter 5
Fundamental Techniques
CS 5050
Goal: review complexity analysis and survey the categories used to
describe algorithms.
Algorithmic Frameworks
• Greedy
• Divide-and-conquer
• Discard-and-conquer
• Top-down Refinement
• Dynamic Programming
• Backtracking/Brute Force
• Branch-and-bound
• Local Search (Heuristic)
Greedy Algorithms
• A short-view, tactical approach to optimizing the value of an
objective function.
• Like eating whatever you want – no regard for health consequences
or for leaving some for others.
• Example: the cheapest meal – beverage, main course, dessert,
vegetable. Can we minimize cost by satisfying each category
separately?
Greedy Algorithms
• Solutions are often not globally optimal, but they can be when
problems satisfy the “greedy-choice” property.
• Example – making change with US money to minimize the number of
coins; not true with other money systems.
• Example – the knapsack problem. Given items’ weights and values, we
want the best value in the knapsack within a total weight limit. This
cannot be done using greedy methods; the problem doesn’t have the
greedy-choice property.
• Greedy works best when applied to problems with the greedy-choice
property:
– a globally-optimal solution can always be found by a series of
local improvements from a starting configuration.
The Greedy Method Technique
• The greedy method is a general algorithm design paradigm, built on
the following elements:
– configurations: different choices, collections, or values to find
– objective function: a score assigned to configurations, which we
want to either maximize or minimize
• Example: the best path to take, given that I can only use locations
{x1, … xn}. Does this have the greedy-choice property?
• Example: the best binary search tree (the best expected cost to
find each entry). Does this have the greedy-choice property? What if
I always selected the root with the highest weight? (a:6, b:5, c:5 –
should a be root?)
Making Change
• Problem: a dollar amount to reach and a collection of coin values
to use to get there.
• Objective function: minimize the number of coins returned.
• Greedy solution: always return the largest-value coin you can.
• Example 0: coins are valued $1.00, $.50, $.25, $.10, $.05, $.01.
• Example 1: coins are valued $.32, $.08, $.01.
– Has the greedy-choice property, since no amount over $.32 can be
made with a minimum number of coins by omitting a $.32 coin
(similarly for amounts over $.08 but under $.32).
– There would never be a reason NOT to use the highest-value coin.
• Example 2: coins are valued $.30, $.20, $.05, $.01.
– Does not have the greedy-choice property, since $.40 is best made
with two $.20’s, but the greedy solution picks three coins (which
ones?)
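
To see the difference concretely, here is a minimal C++ sketch (mine,
not the slides’; the function name greedyChange is hypothetical) of
the greedy rule. Run on 40 cents with the {30, 20, 5, 1} system of
Example 2, it returns three coins even though two $.20 coins suffice.

#include <iostream>
#include <vector>

// Greedy rule: repeatedly take the largest coin that still fits.
// coins must be sorted in decreasing order and should include 1
// so that every amount can be completed.
std::vector<int> greedyChange(int amount, const std::vector<int>& coins) {
    std::vector<int> used;
    for (int c : coins) {
        while (amount >= c) {
            amount -= c;
            used.push_back(c);
        }
    }
    return used;
}

int main() {
    // Example 2 above: the greedy choice is NOT optimal here.
    std::vector<int> coins = {30, 20, 5, 1};
    for (int c : greedyChange(40, coins))
        std::cout << c << ' ';                  // prints: 30 5 5
    std::cout << "\n(optimal: 20 20)\n";
}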
The Knapsack Problem
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
• Items are indivisible.
– In this case, we let xi denote {1 if we take item i, 0 otherwise}
– Objective: maximize Σ_{i∈S} bi·xi
– Constraint: Σ_{i∈S} wi·xi ≤ W
The Knapsack Problem
• Greedy will not work.
• Example: knapsack of weight 10
Tent: weight 6, benefit 7
Food: weight 5, benefit 5
Clothing: weight 5, benefit 5
The tent has (1) the best total benefit and (2) the best benefit per
pound, but it is not best to include.
The Fractional Knapsack Problem
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
• If we are allowed to take fractional amounts, then this is the
fractional knapsack problem.
– In this case, we let xi denote the amount we take of item i
– Objective: maximize Σ_{i∈S} bi·(xi / wi)
– Constraint: Σ_{i∈S} xi ≤ W
Example
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with
weight at most W.
“knapsack” of size 10 ml

Items:            1      2      3      4      5
Weight:           4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:          $12    $32    $40    $30    $50
Value ($ per ml): 3      4      20     5      50

Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
The Fractional Knapsack Algorithm
• Greedy choice: keep taking the item with the highest value
(benefit-to-weight ratio).
– Since Σ_{i∈S} bi·(xi / wi) = Σ_{i∈S} (bi / wi)·xi. Why?
– Run time: O(n log n).
• Correctness: Suppose some choice k was wrong. Then we should have
selected some amount of resource t. Let a be the smaller of the
possible amounts of k and t. Since k has the higher value, a units of
k must be worth more than a units of t. Hence, there was no mistake.

Algorithm fractionalKnapsack(S, W)
Input: set S of items w/ benefit bi and weight wi; max. weight W
Output: amount xi of each item i to maximize benefit with weight at
most W
for each item i in S
  xi ← 0
  vi ← bi / wi   {value}
w ← 0   {current weight}
while w < W
  remove item i with highest vi
  xi ← min{wi, W − w}
  w ← w + min{wi, W − w}
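
A C++ sketch of the same greedy choice (the struct and function names
are mine; sorting by benefit/weight replaces the “remove item with
highest vi” step):

#include <algorithm>
#include <iostream>
#include <vector>

struct Item { double benefit, weight; };

// Fractional knapsack: sort by benefit-to-weight ratio, then take as
// much of each best item as still fits. O(n log n) for the sort.
double fractionalKnapsack(std::vector<Item> items, double W) {
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) {
                  return a.benefit / a.weight > b.benefit / b.weight;
              });
    double total = 0, w = 0;
    for (const Item& it : items) {
        if (w >= W) break;
        double take = std::min(it.weight, W - w);   // xi
        total += it.benefit * (take / it.weight);
        w += take;
    }
    return total;
}

int main() {
    // The 10 ml example above: expect 50 + 40 + 30 + 4 = 124.
    std::vector<Item> items = {{12,4}, {32,8}, {40,2}, {30,6}, {50,1}};
    std::cout << fractionalKnapsack(items, 10) << '\n';   // 124
}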
Task Scheduling
• Given: a set T of n tasks, each having:
– A start time, si
– A finish time, fi (where si < fi)
• Notice – the start and finish times are fixed!!
• Goal: Perform all the tasks using a minimum number of
“machines.”
• How would you place tasks so you didn’t have to rethink?

[Figure: a schedule of the tasks on Machines 1–3 across time slots
1–9.]
Task Scheduling Algorithm
• Greedy choice: consider tasks by their start time and use as few
machines as possible with this order.
– Run time: O(n log n). Why? Must sort by start time.
• Correctness: Suppose there is a better schedule.
– Say it uses k − 1 machines while the algorithm uses k.
– Let i be the first task scheduled on machine k.
– Task i must conflict with k − 1 other tasks, one on each of the
other machines.
– But that means there is no non-conflicting schedule using k − 1
machines.

Algorithm taskSchedule(T)
Input: set T of tasks w/ start time si and finish time fi
Output: non-conflicting schedule with minimum number of machines
m ← 0   {no. of machines}
while T is not empty
  remove task i w/ smallest si
  if there’s a machine j free for i then
    schedule i on machine j
  else
    m ← m + 1
    schedule i on machine m
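
A runnable sketch of taskSchedule in C++ (my assumption: a machine is
free for task i when its earliest finish time is at most si, so a
min-heap of finish times stands in for “is there a machine j for i”):

#include <algorithm>
#include <functional>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

// Minimum number of machines to run all tasks without overlap.
// Greedy by start time; the heap holds each machine's finish time.
int minMachines(std::vector<std::pair<int,int>> tasks) {  // (start, finish)
    std::sort(tasks.begin(), tasks.end());                // by start time
    std::priority_queue<int, std::vector<int>, std::greater<int>> finish;
    for (auto [s, f] : tasks) {
        if (!finish.empty() && finish.top() <= s)
            finish.pop();        // reuse the machine that finished earliest
        finish.push(f);          // this task now occupies a machine
    }
    return (int)finish.size();
}

int main() {
    // The example from the next slide.
    std::vector<std::pair<int,int>> t =
        {{1,4}, {1,3}, {2,5}, {3,7}, {4,7}, {6,9}, {7,8}};
    std::cout << minMachines(t) << " machines\n";   // 3 machines
}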
Example
• Given: a set T of n tasks, each having:
– A start time, si
– A finish time, fi (where si < fi)
– [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
• Goal: Perform all tasks on min. number of machines
[Figure: the seven tasks placed on Machines 1–3 across time slots
1–9.]
Divide-and-Conquer
• Divide-and conquer is a general
algorithm design paradigm:
– Divide: divide the input data S in two or
more disjoint subsets S1, S2, …
– Recur: solve the subproblems
recursively
– Conquer: combine the solutions for S1,
S2, …, into a solution for S
• The base cases for the recursion are subproblems of constant size
• Analysis can be done using recurrence equations
Divide and Conquer Algorithms
• Divide the problem into smaller subproblems - these
are often equal sized
• Eventually get to a base case
• Examples – mergesort or quicksort
• Generally recursive
• Usually efficient, but sometimes not
• For example, Fibonacci numbers – much duplicate
work
• Desirability depends on work in splitting/combining
Merge-Sort Review
• Merge-sort on an input
sequence S with n
elements consists of three
steps:
– Divide: partition S into two
sequences S1 and S2 of
about n/2 elements each
– Recur: recursively sort S1
and S2
– Conquer: merge S1 and S2
into a unique sorted
sequence
Algorithm mergeSort(S, C)
Input: sequence S with n elements, comparator C
Output: sequence S sorted according to C
if S.size() > 1
  (S1, S2) ← partition(S, n/2)
  mergeSort(S1, C)
  mergeSort(S2, C)
  S ← merge(S1, S2)
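
A concrete C++ rendering of the three steps (a sketch; the comparator
is fixed to <= instead of being passed in as C):

#include <algorithm>
#include <iostream>
#include <vector>

// Conquer step: merge sorted halves [lo, mid) and [mid, hi) of v.
void merge(std::vector<int>& v, int lo, int mid, int hi) {
    std::vector<int> out;
    int i = lo, j = mid;
    while (i < mid && j < hi)
        out.push_back(v[i] <= v[j] ? v[i++] : v[j++]);
    while (i < mid) out.push_back(v[i++]);
    while (j < hi)  out.push_back(v[j++]);
    std::copy(out.begin(), out.end(), v.begin() + lo);
}

// Divide, recur, conquer: T(n) = 2T(n/2) + bn, i.e. O(n log n).
void mergeSort(std::vector<int>& v, int lo, int hi) {
    if (hi - lo <= 1) return;          // base case: constant size
    int mid = lo + (hi - lo) / 2;      // divide
    mergeSort(v, lo, mid);             // recur
    mergeSort(v, mid, hi);
    merge(v, lo, mid, hi);             // conquer
}

int main() {
    std::vector<int> v = {5, 2, 9, 1, 7};
    mergeSort(v, 0, (int)v.size());
    for (int x : v) std::cout << x << ' ';   // 1 2 5 7 9
}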
Recurrence Equation Analysis
• The conquer step of merge-sort consists of merging two sorted
sequences, each with n/2 elements; implemented by means of a doubly
linked list, it takes at most bn steps, for some constant b.
• Likewise, the base case (n < 2) takes at most b steps.
• Therefore, if we let T(n) denote the running time of merge-sort:

  T(n) = b               if n < 2
  T(n) = 2T(n/2) + bn    if n ≥ 2

• We can therefore analyze the running time of merge-sort by finding
a closed-form solution to the above equation.
– That is, a solution that has T(n) only on the left-hand side.
Iterative Substitution (Telescoping)
• In the iterative substitution, or “plug-and-chug,” technique, we
iteratively apply the recurrence equation to itself and see if we can
find a pattern:

  T(n) = 2T(n/2) + bn
       = 2(2T(n/2^2) + b(n/2)) + bn
       = 2^2·T(n/2^2) + 2bn
       = 2^3·T(n/2^3) + 3bn
       = 2^4·T(n/2^4) + 4bn
       = ...
       = 2^i·T(n/2^i) + ibn

• Note that the base case, T(n) = b, occurs when 2^i = n; that is,
when i = log n.
• So, T(n) = bn + bn log n.
• Thus, T(n) is O(n log n).
Other Ways to Solve Recurrences
• Recursion tree – bn effort at each of log n levels. Show pictures
of the work (like we did in class).
• Guess and test – eyeball a guess, then attempt an inductive proof.
If you can’t prove it, try something bigger or smaller. This is not a
good way for beginning students.
The Recursion Tree
• Draw the recursion tree for the recurrence relation and look for a
pattern:

  T(n) = b               if n < 2
  T(n) = 2T(n/2) + bn    if n ≥ 2

  depth   T's    size     time
  0       1      n        bn
  1       2      n/2      bn
  ...     ...    ...      ...
  i       2^i    n/2^i    bn

Total time = bn + bn log n
(last level plus all previous levels)
Master Method (different from text;
the method of the text can give tighter bounds)
• Of the form:
  T(n) = c                 if n < d
  T(n) = aT(n/b) + O(n^k)  if n ≥ d
– If a > b^k, then T(n) is O(n^(log_b a))
– If a = b^k, then T(n) is O(n^k log n)
– If a < b^k, then T(n) is O(n^k)
• You can work this out on your own using telescoping and math
skills.
Master Method, Example 1
• Example: T(n) = 4T(n/2) + n
• a = 4, b = 2, k = 1
• Solution: a = 4 > b^k = 2 and log_2 4 = 2, so case 1 says T(n) is
O(n^2).
Master Method, Example 2
• T(n) = 8T(n/2) + n^2
• a = 8, b = 2, k = 2
• a = 8 > b^k = 4, so case 1 says T(n) is O(n^(log_2 8)) = O(n^3).
Master Method, Example 3
• T(n) = T(n/2) + 1
• a = 1, b = 2, k = 0
• a = 1 = b^k, so case 2 says T(n) is O(log n).
Divide and Conquer
When Does It Help?
• Dividing the problem in half doesn’t always help. Example – canning
peaches. There is no improvement in dividing the problem in half (and
it may actually be more work because of the overhead of setup and
cleanup).
• There are two ways dividing in half may help:
– As Jennie said – you may be able to throw part away, as in a binary
search, where the bad half is simply discarded.
– If the original complexity is greater than n and the “dividing the
problem” or “putting back together” phase is linear, you can win.
This is what happens in quicksort. The division is linear, yet the
work in a small subproblem is greater than linear. Say it is n^2.
Then the work becomes:
  (n/2)^2 + (n/2)^2 + O(n) = 2(n^2/4) + n = n^2/2 + n
Thus a single round of dividing in half halves the coefficient on
n^2. If you do this recursively, the savings will be even greater.
Divide and Conquer
Integer Multiplication
• Algorithm: multiply two n-bit integers I and J.
– Divide step: split I and J into high-order and low-order bits:
  I = Ih·2^(n/2) + Il
  J = Jh·2^(n/2) + Jl
– We can then define I*J by multiplying the parts and adding:
  I*J = (Ih·2^(n/2) + Il) * (Jh·2^(n/2) + Jl)
      = Ih·Jh·2^n + Ih·Jl·2^(n/2) + Il·Jh·2^(n/2) + Il·Jl
– So, T(n) = 4T(n/2) + n, which implies T(n) is O(n^2).
– But that is no better than the algorithm we learned in grade
school: we have four multiplications, each ¼ as big.
An Improved Integer
Multiplication Algorithm
• Algorithm: multiply two n-bit integers I and J.
– Divide step: split I and J into high-order and low-order bits:
  I = Ih·2^(n/2) + Il
  J = Jh·2^(n/2) + Jl
– Observe that there is a different way to multiply the parts.
– Suppose we played around with other choices and just happened to
get the same answer, but could reuse some of the multiplications.
– Suppose we tried subtracting Ih and Il, and subtracting the J parts
the OPPOSITE way.
An Improved Integer
Multiplication Algorithm
• Algorithm: multiply two n-bit integers I and J.
– Divide step: split I and J into high-order and low-order bits:
  I = Ih·2^(n/2) + Il
  J = Jh·2^(n/2) + Jl
– Observe that there is a different way to multiply the parts:
  I*J = Ih·Jh·2^n + [(Ih − Il)(Jl − Jh) + Ih·Jh + Il·Jl]·2^(n/2) + Il·Jl
      = Ih·Jh·2^n + [(Ih·Jl − Il·Jl − Ih·Jh + Il·Jh) + Ih·Jh + Il·Jl]·2^(n/2) + Il·Jl
      = Ih·Jh·2^n + (Ih·Jl + Il·Jh)·2^(n/2) + Il·Jl
– So, T(n) = 3T(n/2) + n, which implies T(n) is O(n^(log_2 3)) by the
Master Theorem.
– Thus, T(n) is O(n^1.585).
Large Integer Multiplication
• Idea – how would you think of this? You want a way of reducing the
total number of pieces you need to compute.
• Observe: (Ih − Il)(Jl − Jh) = Ih·Jl − Il·Jl − Ih·Jh + Il·Jh
• The key: one multiplication gives us the two terms we need, if we
add in two terms we already have.
• So, instead of computing the four pieces shown earlier, we do this
one multiplication to get two of the pieces we need!
Large Integer Multiplication
• I·J = Ih·Jh·2^n + [(Ih − Il)(Jl − Jh) + Ih·Jh + Il·Jl]·2^(n/2) + Il·Jl
• Ta-da – three multiplications instead of four!
• By the master formula, with a = 3, b = 2, k = 1: O(n^1.585)
Try at Seats!
For example, multiply 75 * 53 (working in base 10, so 2^n becomes 100
and 2^(n/2) becomes 10):
  I: Ih = 7, Il = 5, Ih − Il = 2
  J: Jh = 5, Jl = 3, Jl − Jh = −2
  Products: Ih·Jh = 35, Il·Jl = 15, (Ih − Il)(Jl − Jh) = −4
  35*100 + 10*(−4 + 35 + 15) + 15 = 3975
  75*53 = 3975
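
A recursive C++ sketch of the three-multiplication scheme, using a
decimal split so it mirrors the 75*53 example (the digit-splitting
loop is my own simplification, not the text’s):

#include <cstdint>
#include <iostream>

// Karatsuba-style multiply: three recursive multiplications
// (hh, ll, and the mixed term) instead of four.
std::int64_t karatsuba(std::int64_t I, std::int64_t J) {
    if (I < 0) return -karatsuba(-I, J);     // pull signs out front
    if (J < 0) return -karatsuba(I, -J);
    if (I < 10 || J < 10) return I * J;      // base case: 1-digit operand
    std::int64_t m = 1;                      // m ~ 10^(half the digits)
    while (m * m <= I || m * m <= J) m *= 10;
    std::int64_t Ih = I / m, Il = I % m;     // I = Ih*m + Il
    std::int64_t Jh = J / m, Jl = J % m;     // J = Jh*m + Jl
    std::int64_t hh  = karatsuba(Ih, Jh);
    std::int64_t ll  = karatsuba(Il, Jl);
    std::int64_t mid = karatsuba(Ih - Il, Jl - Jh);  // one multiply, two terms
    return hh * m * m + (mid + hh + ll) * m + ll;
}

int main() {
    std::cout << karatsuba(75, 53) << '\n';   // 3975
}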
Matrix Multiplication
• Same idea. Breaking the matrices up into four parts doesn’t help by
itself: it yields 8 half-size multiplications.
• Strassen’s algorithm: complicated patterns of sum/difference
multiplies reduce the number of subproblems to 7 (from 8).
• We won’t go through the details, as not much is learned from the
struggle.
• T(n) = 7T(n/2) + bn^2
– Matrix multiplication is O(n^2.808)
Homework
• Using recursion to find the best expected cost:
Let expectedCost(a,b) return the integer representing the best
expected cost of placing nodes a through b (where a, b are subscripts
into the ordered array of values to be placed).
We need an auxiliary array to store the total weights between
subscripts; weight[a,b] below denotes that stored range total.

int expectedCost(a, b)
{ if (a > b) return 0;
  if (a == b) return weight[a];
  r = subscript of best root (trial and error to determine);
  cost1 = expectedCost(a, r-1);
  cost2 = expectedCost(r+1, b);
  theCost = weight[r] + cost1 + weight[a,r-1] + cost2 + weight[r+1,b];
  weight[a,b] = weight[r] + weight[a,r-1] + weight[r+1,b];
  return theCost;
}
Homework
• Notice how you repeatedly compute the same
problems over and over again.
• In memoizing, you record your results so you
don’t have to repeat the problems. Store
answers in bestCost[][]. Count the number of
times you can just look up the bestCost rather
than having to compute it.
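
A hedged C++ sketch of the memoized version (the key weights and the
trial-and-error root loop are my assumptions about the homework
setup; weights are assumed positive so 0 can mark “not yet
computed”):

#include <algorithm>
#include <climits>
#include <iostream>

const int N = 5;
int weight[N] = {6, 5, 5, 2, 4};  // hypothetical positive key weights
int memo[N][N];                   // memo[a][b] = cached best cost, 0 = unknown
int hits = 0;                     // lookups that avoided recomputation

int totalWeight(int a, int b) {   // sum of the weights a through b
    int s = 0;
    for (int i = a; i <= b; ++i) s += weight[i];
    return s;
}

int expectedCost(int a, int b) {
    if (a > b) return 0;
    if (a == b) return weight[a];
    if (memo[a][b] != 0) { ++hits; return memo[a][b]; }
    int best = INT_MAX;
    for (int r = a; r <= b; ++r)  // trial and error over every root
        best = std::min(best,
                        expectedCost(a, r - 1) + expectedCost(r + 1, b));
    return memo[a][b] = best + totalWeight(a, b);
}

int main() {
    std::cout << "best cost: " << expectedCost(0, N - 1)
              << ", lookups: " << hits << '\n';
}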
Discard and Conquer
• Similar to divide and conquer.
• Discard and conquer requires only that we solve one of several
subproblems.
– Corresponds to proof by cases
– Binary search is an example
– Finding the kth smallest is an example (quickselect)
Dynamic Programming
Outline and Reading
• Matrix Chain-Product (§5.3.1)
• The General Technique (§5.3.2)
• 0-1 Knapsack Problem (§5.3.3)
Dynamic Programming Algorithms
• The reverse of divide and conquer: we build solutions from the base
cases up.
• Avoids possible duplicate calls in the recursive solution.
• Implementation is usually iterative.
• Examples – the Fibonacci series, Pascal’s triangle, making change
with coins of different relatively prime denominations.
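
As a tiny illustration (mine, not the slides’), bottom-up Fibonacci
replaces an exponential recursion with a single iterative pass:

#include <iostream>

// Bottom-up Fibonacci: each value is computed exactly once, O(n),
// versus the exploding duplicate calls of the naive recursion.
long long fib(int n) {
    if (n < 2) return n;
    long long prev = 0, cur = 1;       // base cases F(0), F(1)
    for (int i = 2; i <= n; ++i) {
        long long next = prev + cur;   // build from smaller problems
        prev = cur;
        cur = next;
    }
    return cur;
}

int main() {
    std::cout << fib(50) << '\n';   // 12586269025
}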
Homework

[Figure: the bestCost and weight tables, with x’s marking the cells
filled in so far.]

Notice we fill in the main diagonal first.
To compute bestCost[1,3] we look at [2,3]; [1,1] with [3,3]; and
[1,2] (one combination of subranges for each choice of root).
Good Dynamic Programming Algorithm Attributes
• Simple subproblems
• Subproblem optimality: the optimal solution consists of optimal
subproblems.
• Can you think of a real-world example without subproblem
optimality? Round-trip discounts.
• Subproblem overlap (sharing)
The 0-1 Knapsack Problem
• 0-1 means take an item or leave it (no fractions).
• Now we are given units which we can take or leave.
• The obvious solution of enumerating all subsets is Θ(2^n).
• The difficulty is in characterizing subproblems:
– Find the best solution for the first k units – no good, as the
optimal solution doesn’t build on earlier solutions.
– Find the best solution for the first k units within a quantity
limit.
– Either use the previous best at this limit, or the new item plus
the previous best at a reduced limit – O(nW).
• Pseudo-polynomial – the running time depends on the parameter W,
which is a value in the input rather than the input’s size.
Solve using recursion:

int value[MAX];  // value of each item
int weight[MAX]; // weight of each item
// You can use item "item" or items with lower numbers.
// The maximum weight you can have is maxWeight.
// Return the best value possible in the knapsack.
int bestValue(int item, int maxWeight) {
  if (item < 0) return 0;
  if (maxWeight < weight[item])
    // current item can't be used, skip it
    return bestValue(item-1, maxWeight);
  int useIt = bestValue(item-1, maxWeight - weight[item]) + value[item];
  int dontUseIt = bestValue(item-1, maxWeight);
  return max(useIt, dontUseIt);
}
Price per Pound
• The constant “price-per-pound” knapsack problem is often called the
subset sum problem, because given a set of numbers, we seek a subset
that adds up to a specific target number, i.e., the capacity of our
knapsack.
• If we have a capacity of 10, consider a tree in which each level
corresponds to considering one item (in order). Notice, in this
simple example, about a third of the calls are duplicates.
• We would need to store whether a specific weight could be achieved
using only items 1 through k.
• possible[item][max] = given the current item (or earlier in the
list) and max value, can you achieve max?
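
A C++ sketch of that table (the name possible follows the slide; the
weights are the ones used in the tables below):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> w = {2, 2, 6, 5, 4};   // item weights
    const int W = 10;                       // capacity
    int n = (int)w.size();

    // possible[k][m]: can some subset of items 0..k sum to exactly m?
    std::vector<std::vector<bool>> possible(n, std::vector<bool>(W + 1));
    for (int k = 0; k < n; ++k) {
        possible[k][0] = true;              // the empty subset sums to 0
        for (int m = 1; m <= W; ++m) {
            bool without = k > 0 && possible[k-1][m];
            bool with    = (m == w[k]) ||
                           (k > 0 && m >= w[k] && possible[k-1][m - w[k]]);
            possible[k][m] = without || with;
        }
    }
    std::cout << (possible[n-1][W] ? "yes" : "no") << '\n';  // yes: 2+2+6
}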
• Consider the weights 2, 2, 6, 5, 4 with a limit of 10.
• We could compute such a table in an iterative fashion:

      1    2    3    4    5    6    7    8    9    10
  2   no   yes  no   no   no   no   no   no   no   no
  2
  6
  5
  4
• Consider the weights 2, 2, 6, 5, 4 with a limit of 10.
• We could compute such a table in an iterative fashion:

      1    2    3    4    5    6    7    8    9    10
  2   no   yes  no   no   no   no   no   no   no   no
  2   no   yes  no   yes  no   no   no   no   no   no
  6   no   yes  no   yes  no   yes  no   yes  no   yes
  5   no   yes  no   yes  yes  yes  yes  yes  yes  yes
  4   no   yes  no   yes  yes  yes  yes  yes  yes  yes
From the table
• Can you tell HOW to fill the knapsack?
• Let's compare the two strategies:
• Normal/forgetful: wait until you are asked for a
value before computing it. You may have to do
some things twice, but you will never do anything
you don't need.
• Compulsive/elephant: you do everything before
you are asked to do it. You may compute things
you never need, but you will never compute
anything twice (as you never forget).
• Which is better? At first it seems better to wait
until you need something, but in large recursions,
almost everything is needed somewhere and
many things are computed LOTS of times.
Consider the complexity for a max capacity of M and N different
items.
• Normal: for each item, try it two ways (useIt or dontUseIt):
O(2^N).
• Compulsive: fill in the array: O(M*N).
• Which is better depends on the values of M and N. Notice, the two
complexities depend on different variables.
Clever Observation
• Since only the previous row is ever accessed, we don’t really need
to store all the rows.
• However, we then couldn’t easily read back the optimal choices.
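
A sketch of that observation for the subset-sum table: one boolean
row, updated right to left so each item is counted at most once. As
noted above, the choices themselves can no longer be read back.

#include <iostream>
#include <vector>

int main() {
    std::vector<int> w = {2, 2, 6, 5, 4};
    const int W = 10;

    // One row instead of the whole table; reachable[m] = can the
    // items processed so far sum to exactly m?
    std::vector<bool> reachable(W + 1, false);
    reachable[0] = true;
    for (int wk : w)
        for (int m = W; m >= wk; --m)    // right to left: item used once
            if (reachable[m - wk]) reachable[m] = true;

    std::cout << (reachable[W] ? "yes" : "no") << '\n';   // yes (2+2+6)
}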
Matrix Multiplication

  | 1 2 3 |   | 10 12 |   | 1*10+2*20+3*30  1*12+2*22+3*32 |
  | 4 5 6 | * | 20 22 | = | 4*10+5*20+6*30  4*12+5*22+6*32 |
              | 30 32 |

# of Columns of A must = # of Rows of B
Matrix Multiplication
Matrices A and B can be multiplied if:
A is [r x c], B is [s x d], and c = s.
The work required is O(rcd).
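
A baseline triple loop (a sketch) makes the O(rcd) count visible: one
multiply-add per (row, column, inner index) triple.

#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Multiply an r x c matrix by a c x d matrix: O(r*c*d) multiply-adds.
Matrix multiply(const Matrix& A, const Matrix& B) {
    int r = (int)A.size(), c = (int)B.size(), d = (int)B[0].size();
    Matrix C(r, std::vector<int>(d, 0));
    for (int i = 0; i < r; ++i)
        for (int j = 0; j < d; ++j)
            for (int k = 0; k < c; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

int main() {
    // The 2x3 times 3x2 example above.
    Matrix A = {{1, 2, 3}, {4, 5, 6}};
    Matrix B = {{10, 12}, {20, 22}, {30, 32}};
    for (const auto& row : multiply(A, B)) {
        for (int x : row) std::cout << x << ' ';   // 140 152 / 320 350
        std::cout << '\n';
    }
}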
Adjacency Matrix of a Directed Graph

[Figure: a directed graph on vertices 1–6.]

At seats, try computing A*A. What do you have?
The number of ways you can get from one point to another in exactly
two steps.
Exercise 0: If A is the adjacency matrix of a graph, then
(A^k)_ij ≥ 1 iff there is a path of length k from i to j.
• Given a sequence <A1, A2, ..., An> of n matrices, we wish to
compute the product A1·A2·...·An.
• Matrix multiplication is associative, so the product does not
depend on how we parenthesize the matrices (but the cost of computing
it does).
At Seats
A1 is 10 x 100, A2 is 100 x 5, A3 is 5 x 50.
Which is better?
  A1*(A2*A3): 100*5*50 + 10*100*50 = 25,000 + 50,000 = 75,000 operations
  (A1*A2)*A3: 10*100*5 + 10*5*50 = 5,000 + 2,500 = 7,500 operations
We compute all possibilities and remember the best.

For n = 5, compute in terms of smaller problems:
P(5) = P(1)*P(4) + P(2)*P(3) + P(3)*P(2) + P(4)*P(1) = 14
Looks bad – it more than doubles when the problem size increases
by 1!!!
• Running time:
– The number of parenthesizations is equal to the number of binary
search trees with n external nodes (n-1 internal nodes). (The
matrices are the external nodes; the internal nodes represent the
matrix multiplications of their children.)
– This is exponential!
– It is called the Catalan number, and it grows almost as fast as
4^n.
– This is a terrible algorithm!
Matrix Multiplication
Too Many Overlapping Subproblems
At the top level decide which two pieces to multiply together
At Seats – What is the Algorithm?
• Each cell N(i,j) represents the best cost of computing the
multiplication of matrices i thru j.
– Look at each possible division
– Pick the best of the possibilities
– k is the division point
• N(i,k) + N(k+1,j) gives the cost of each piece
• Multiplying the two pieces costs di·dk+1·dj+1, as they are
di x dk+1 and dk+1 x dj+1
• Subscripting – remember, di is the row size of the ith matrix
Solution?
• How would you fix the problem of recomputing the same problems over and over?
Why
• Can you explain:
(1) What goes in each cell?
(2) Why we are most interested in the expected cost of subtrees
independent of where they are in the final tree?

Matrix Multiplication Parenthesization
Operation Count AND Division Point

  Matrix   Rows   Cols
  1        5      10
  2        10     100
  3        100    2
  4        2      15
  5        15     3
  6        3      40

Tell me (1) the meaning of the value in each cell, (2) how it is
computed, and (3) where the answer is.
“Recursive” Approach
• Define subproblems:
– Find the best parenthesization of Ai*Ai+1*…*Aj.
– Let Ni,j denote the number of operations done by this subproblem.
– The optimal solution for the whole problem is N0,n-1.
• Subproblem optimality: The optimal solution can be defined
in terms of optimal subproblems
– There has to be a final multiplication (root of the expression tree) for
the optimal solution.
– Say, the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An-1).
– Then the optimal solution N0,n-1 is the sum of two optimal
subproblems, N0,i and Ni+1,n-1 plus the time for the last multiply.
– If the global optimum did not have these optimal subproblems, we
could define an even better “optimal” solution.
Characterizing Equation
• The global optimum has to be defined in terms of optimal
subproblems, depending on where the final multiply is.
• Let us consider all possible places for that final multiply:
– Recall that Ai is a di × di+1 dimensional matrix.
– So, a characterizing equation for Ni,j is the following:

  N(i,j) = min over i ≤ k < j of { N(i,k) + N(k+1,j) + di·dk+1·dj+1 }

• Note that subproblems are not independent – the subproblems
overlap.
Dynamic Programming Algorithm Visualization
• The bottom-up construction fills in the N array by diagonals:

  N(i,j) = min over i ≤ k < j of { N(i,k) + N(k+1,j) + di·dk+1·dj+1 }

• Cells in the lower diagonal are not used.
• N(i,j) gets its values from previous entries in the i-th row and
j-th column.
• Filling in each entry in the N table takes O(n) time.
• Total run time: O(n^3).
• The actual parenthesization can be recovered by remembering “k” for
each N entry.

[Figure: the N table filled in diagonal by diagonal; the answer is
N(0, n-1) in the upper-right corner.]
Dynamic Programming Algorithm
• Since subproblems overlap, we don’t use recursion.
• Instead, we construct optimal subproblems “bottom-up.”
• The Ni,i’s are easy, so we start with them.
• Then we do subproblems of “length” 2, 3, …, and so on.
• Running time: O(n^3)

Algorithm matrixChain(S):
Input: sequence S of n matrices to be multiplied
Output: number of operations in an optimal parenthesization of S
for i ← 0 to n - 1 do
  Ni,i ← 0
for subLen ← 1 to n - 1 do   {subLen = j - i is the length of the problem}
  for i ← 0 to n - subLen - 1 do
    j ← i + subLen
    Ni,j ← +∞
    for k ← i to j - 1 do
      Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di·dk+1·dj+1}
return N0,n-1
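
A runnable C++ version of matrixChain (a sketch; my dims vector holds
d0..dn, so matrix i is dims[i] x dims[i+1]):

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// Bottom-up matrix chain: N[i][j] = fewest ops to multiply Ai..Aj.
// O(n^3) time, O(n^2) space.
long long matrixChain(const std::vector<long long>& dims) {
    int n = (int)dims.size() - 1;              // number of matrices
    std::vector<std::vector<long long>> N(n, std::vector<long long>(n, 0));
    for (int subLen = 1; subLen < n; ++subLen)      // problem "length"
        for (int i = 0; i + subLen < n; ++i) {
            int j = i + subLen;
            N[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k)             // every division point
                N[i][j] = std::min(N[i][j],
                    N[i][k] + N[k+1][j] + dims[i] * dims[k+1] * dims[j+1]);
        }
    return N[0][n-1];
}

int main() {
    // The six-matrix exercise above: 5x10, 10x100, 100x2, 2x15, 15x3, 3x40.
    std::cout << matrixChain({5, 10, 100, 2, 15, 3, 40}) << '\n';
}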
The General Dynamic
Programming Technique
• Applies to a problem that at first seems to require a
lot of time (possibly exponential), provided we
have:
– Simple subproblems: the subproblems can be defined in
terms of a few variables, such as j, k, l, m, and so on.
– Subproblem optimality: the global optimum value can be
defined in terms of optimal subproblems
– Subproblem repetition: the subproblems are not independent, but
instead they repeat (hence, they should be constructed bottom-up).
The 0/1 Knapsack Problem
• Given: A set S of n items, with each item i having
– wi - a positive weight
– bi - a positive benefit
• Goal: Choose items with maximum total benefit but with
weight at most W.
• If we are not allowed to take fractional amounts, then this is
the 0/1 knapsack problem.
– In this case, we let T denote the set of items we take
– Objective: maximize Σ_{i∈T} bi
– Constraint: Σ_{i∈T} wi ≤ W
Example
• Given: A set S of n items, with each item i having
– bi - a positive “benefit”
– wi - a positive “weight”
• Goal: Choose items with maximum total benefit but with
weight at most W.
“knapsack”: box of width 9 in

Items:     1      2      3      4      5
Weight:    4 in   2 in   2 in   6 in   2 in
Benefit:   $20    $3     $6     $25    $80

Solution:
• item 5 ($80, 2 in)
• item 3 ($6, 2 in)
• item 1 ($20, 4 in)
A 0/1 Knapsack Algorithm, First Attempt
• Sk: set of items numbered 1 to k.
• Define B[k] = best selection from Sk.
• Problem: this does not have subproblem optimality:
– Consider the set S = {(3,2), (5,4), (8,5), (4,3), (10,9)} of
(benefit, weight) pairs and total weight W = 20.
– The best selection from S4 is of no help in selecting the best
from S5.
A 0/1 Knapsack Algorithm, Second Attempt
• Sk: set of items numbered 1 to k (the first k items from the set).
• Define B[k,w] to be the best selection from Sk with weight at
most w.
• Good news: this does have subproblem optimality.

  B[k,w] = B[k-1, w]                              if wk > w
  B[k,w] = max{ B[k-1, w], B[k-1, w - wk] + bk }  else

• I.e., the best subset of Sk with weight at most w is either
– the best subset of Sk-1 with weight at most w, or
– the best subset of Sk-1 with weight at most w - wk, plus item k.
0/1 Knapsack Algorithm
• Recall the definition of B[k,w]:

  B[k,w] = B[k-1, w]                              if wk > w
  B[k,w] = max{ B[k-1, w], B[k-1, w - wk] + bk }  else

• Since B[k,w] is defined in terms of B[k-1,*], we can use two arrays
instead of a matrix.
• Running time: O(nW).
• Not a polynomial-time algorithm, since W may be large.
• This is a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
Input: set S of n items with benefit bi and weight wi; maximum
weight W
Output: benefit of best subset of S with weight at most W
let A and B be arrays of length W + 1
for w ← 0 to W do
  B[w] ← 0
for k ← 1 to n do
  copy array B into array A
  for w ← wk to W do
    if A[w - wk] + bk > A[w] then
      B[w] ← A[w - wk] + bk
return B[W]
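
A C++ sketch of 01Knapsack with the same two-array idea (A is the
previous row, B the current one); on the width-9 example that
follows, it returns 145, matching the filled-in table:

#include <algorithm>
#include <iostream>
#include <vector>

// 0/1 knapsack, O(nW) time and O(W) space.
int knapsack01(const std::vector<int>& w, const std::vector<int>& b, int W) {
    std::vector<int> A(W + 1, 0), B(W + 1, 0);
    for (size_t k = 0; k < w.size(); ++k) {
        A = B;                               // copy array B into array A
        for (int cap = w[k]; cap <= W; ++cap)
            B[cap] = std::max(A[cap], A[cap - w[k]] + b[k]);
    }
    return B[W];
}

int main() {
    std::vector<int> w = {1, 3, 2, 6, 2};    // widths in inches
    std::vector<int> b = {10, 30, 6, 75, 60};
    std::cout << knapsack01(w, b, 9) << '\n';   // 145
}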
Example
“knapsack”: box of width 9 in

Items:     1      2      3      4      5
Weight:    1 in   3 in   2 in   6 in   2 in
Benefit:   $10    $30    $6     $75    $60

Solution:
• item 5 ($60, 2 in)
• item 4 ($75, 6 in)
• item 1 ($10, 1 in)
At seats, knapsack problem

              1    2    3    4    5    6    7    8    9
(1 in/$10) 1  10   10   10   10   10   10   10   10   10
(3 in/$30) 2
(2 in/$6)  3
(6 in/$75) 4
(2 in/$60) 5
              1    2    3    4    5    6    7    8    9
(1 in/$10) 1  10   10   10   10   10   10   10   10   10
(3 in/$30) 2  10   10   30   40   40   40   40   40   40
(2 in/$6)  3  10   10   30   40   40   46   46   46   46
(6 in/$75) 4  10   10   30   40   40   75   85   85   105
(2 in/$60) 5  10   60   70   70   90   100  100  135  145