Chapter 8 Analysis of Algorithms

Analysis of Algorithms
• An algorithm is a step-by-step specification of how to perform
a certain task.
• Time complexity: the time it takes to execute an algorithm.
• Algorithm LARGEST1
1. Initially, place the number in register x1 in a register called
max.
2. For i = 2, 3, ..., n do
Compare the number in the register xi with the number in
register max. If the number in xi is larger than the number in
max, move the number in register xi to register max.
Otherwise, do nothing.
3. Finally, the number in register max is the largest of the n
numbers in registers x1, x2, ..., xn.
Time complexity: O(n - 1); exactly (n - 1)c, where c is the time for
comparing two numbers. The time it takes to place a number in
register max is either negligible or is included in the quantity c.
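A minimal Python sketch of LARGEST1, assuming the n numbers sit in a
list rather than in registers (the list index plays the role of the
register subscript):

def largest1(x):
    # Step 1: copy the first number into max.
    max_val = x[0]
    # Step 2: for i = 2, 3, ..., n, compare x[i] with max.
    for i in range(1, len(x)):
        if x[i] > max_val:      # one comparison per iteration: n - 1 in total
            max_val = x[i]
    # Step 3: max now holds the largest of the n numbers.
    return max_val

print(largest1([3, 7, 1, 9, 4]))   # 9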
• Algorithm LARGEST2
1. For i = 1, 2, ..., n-1 do: Compare the numbers in registers xi
and xi+1. Place the larger of the two numbers in register xi+1
and the smaller in register xi.
2. Finally, the number in register xn is the largest of the n
numbers.
Time complexity: O(n-1).
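A sketch of LARGEST2 in the same style; the list is modified in place,
mirroring the register exchanges, and again n - 1 comparisons are made:

def largest2(x):
    # For i = 1, 2, ..., n-1: compare x[i] and x[i+1] and put the larger
    # of the two in x[i+1], the smaller in x[i].
    for i in range(len(x) - 1):
        if x[i] > x[i + 1]:
            x[i], x[i + 1] = x[i + 1], x[i]
    # The last position now holds the largest of the n numbers.
    return x[-1]

nums = [3, 7, 1, 9, 4]
print(largest2(nums), nums)        # 9 [3, 1, 7, 4, 9]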
• Algorithm BUBBLESORT
1. For i = n, n-1, n-2, …, 4, 3, 2 do: Use algorithm LARGEST2
to place in register xi the largest of the i numbers in registers
x1, x2, ..., xi.
2. Finally, the numbers in registers x1, x2, …, xn are in
ascending order.
Time complexity: O(n(n-1)/2):
n - 1 comparisons for the 1st execution,
n - 2 comparisons for the 2nd execution,
…,
1 comparison for the (n-1)st execution.
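A sketch of BUBBLESORT built from the LARGEST2 sweep; the inner loop is
exactly one LARGEST2 pass over the first i numbers, giving
(n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 comparisons in total:

def bubble_sort(x):
    n = len(x)
    # For i = n, n-1, ..., 2: one LARGEST2 sweep over the first i numbers
    # leaves the largest of them in position i.
    for i in range(n, 1, -1):
        for j in range(i - 1):
            if x[j] > x[j + 1]:
                x[j], x[j + 1] = x[j + 1], x[j]
    # The numbers are now in ascending order.
    return x

print(bubble_sort([5, 2, 8, 1, 4]))   # [1, 2, 4, 5, 8]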
• Algorithm SHORTEST
1. Initially, let P = {a} and T = V-{a}. For every vertex t in T,
let l(t) = w(a, t).
2. Select the vertex in T that has the smallest index with respect
to P. Let x denote such a vertex.
3. If x is the vertex z we wish to reach from a, stop. If not, let P’
= P ∪ {x} and T’ = T - {x}. For every vertex t in T’, update its
index with respect to P’.
4. Repeat steps 2 and 3 using P’ as P and T’ as T.
Time complexity: O((n - 1)²):
• Enlarges the set P one vertex at a time and terminates
when set P contains vertex z. ⇒ at most n - 1 iterations.
• During each iteration, select the vertex in T that has the
smallest index and update the indices in T’. ⇒ O(n - 1) work per iteration.
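A sketch of the SHORTEST procedure, assuming non-negative weights and a
plain dictionary-of-dictionaries representation of w; both the
representation and the example graph are illustrative choices, not from
the text:

import math

def shortest(w, a, z):
    # w: dict of dicts of non-negative edge weights, e.g. w['a']['b'] = 1;
    # a missing entry counts as "no edge" (infinite weight).
    vertices = set(w)
    P = {a}                                        # step 1
    T = vertices - {a}
    l = {t: w[a].get(t, math.inf) for t in T}      # l(t) = w(a, t)
    while T:
        x = min(T, key=l.get)                      # step 2: smallest index in T
        if x == z:                                 # step 3: reached the target z
            return l[x]
        P.add(x)                                   # P' = P ∪ {x}
        T.remove(x)                                # T' = T - {x}
        for t in T:                                # update indices via x
            l[t] = min(l[t], l[x] + w[x].get(t, math.inf))
    return math.inf

w = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2, 'z': 6}, 'c': {'z': 1}, 'z': {}}
print(shortest(w, 'a', 'z'))   # 4, via a -> b -> c -> z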
Complexity of Problems
• Can we find an algorithm with time complexity lower than O(n)
to solve the problem of finding the largest of n numbers?
• The time complexity of a problem: The time complexity of a
“best” possible algorithm for solving the problem.
• The upper bound on the time complexity of the problem: A best
possible algorithm will have a time complexity less than or
equal to the bound.
Clearly, the time complexity of any algorithm that solves the
problem is an upper bound on the time complexity of the
problem.
• The lower bound on the time complexity of the problem: no
algorithm for solving the problem will have complexity lower
than the bound.
• The time complexity of a problem is pinned down when the upper
bound and the lower bound we obtain have the same order of
growth.
• A straightforward upper bound on the time complexity of the
LARGESMALL problem is O(2n - 3): use Algorithm LARGEST1 or
LARGEST2 to find the largest (n - 1 comparisons), and then again
on the remaining numbers to find the smallest (n - 2 comparisons).
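A sketch of that straightforward approach, with the comparison counts
in the comments: one LARGEST1-style scan for the largest, then a scan
over the remaining numbers for the smallest, 2n - 3 in total:

def large_small_naive(x):
    largest = x[0]
    for v in x[1:]:                 # n - 1 comparisons for the largest
        if v > largest:
            largest = v
    rest = list(x)
    rest.remove(largest)            # the largest cannot also be the smallest
    smallest = rest[0]
    for v in rest[1:]:              # n - 2 comparisons for the smallest
        if v < smallest:
            smallest = v
    return largest, smallest

print(large_small_naive([3, 7, 1, 9, 4]))   # (9, 1)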
• Algorithm LARGESMALL
1. For i = 1, 2, ..., n/2, compare the two numbers in registers xi
and xi+(n/2), and place the smaller number in register xi and the
larger in register xi+(n/2).
2. Use algorithm LARGEST1 to determine the largest of the n/2
numbers in registers x(n/2)+1, x(n/2)+2, …, xn. This is the largest
of the n given numbers.
3. Use an algorithm similar to LARGEST1 to determine the
smallest of the n/2 numbers x1, x2,…, xn/2. This is the smallest
of the n given numbers.
Time complexity: (3n/2) - 2:
Step 1: n/2 comparisons,
Step 2: (n/2) - 1 comparisons,
Step 3: (n/2) - 1 comparisons.
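A sketch of LARGESMALL for a list of even length n, with an explicit
counter to check the (3n/2) - 2 comparison count:

def large_small(x):
    n = len(x)                  # assumed even, as in the algorithm above
    half = n // 2
    x = list(x)
    comparisons = 0
    # Step 1: pair x[i] with x[i + n/2]; smaller to the lower half,
    # larger to the upper half.  n/2 comparisons.
    for i in range(half):
        comparisons += 1
        if x[i] > x[i + half]:
            x[i], x[i + half] = x[i + half], x[i]
    # Step 2: largest of the upper half.  n/2 - 1 comparisons.
    largest = x[half]
    for i in range(half + 1, n):
        comparisons += 1
        if x[i] > largest:
            largest = x[i]
    # Step 3: smallest of the lower half.  n/2 - 1 comparisons.
    smallest = x[0]
    for i in range(1, half):
        comparisons += 1
        if x[i] < smallest:
            smallest = x[i]
    return largest, smallest, comparisons

print(large_small([3, 7, 1, 9, 4, 6]))   # (9, 1, 7); 3*6/2 - 2 = 7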
• How do we find the lower bound of the LARGESMALL problem?
An adversary argument provides the answer.
• Let A denote the set of numbers that so far have not been
compared with any other number.
Let B denote the set of numbers that have already been confirmed
not to be the largest.
Let C denote the set of numbers that have already been confirmed
not to be the smallest.
• Initially, A = {a1, a2, …, an}, B = ∅, and C = ∅. At the end of
execution |A| = 0, |B| = n - 1, and |C| = n - 1.
• Notice that a number may appear in both B and C. On the other
hand, A and B are clearly disjoint, as are A and C.
• The general idea of the adversary argument: whenever the
algorithm compares two numbers, the adversary supplies the
outcome to the algorithm. The adversary selects the outcomes in
such a way that the algorithm cannot determine the largest and
the smallest of the n numbers until it has made at least
(3n/2) - 2 comparisons.
• The rules of the adversary argument for this problem:
1. If the algorithm compares two numbers ai and aj such that ai
is in B but not in C, then ai < aj. (If both ai and aj are in B but
not in C, the result is arbitrary as long as it is consistent.)
2. If the algorithm compares two numbers ai and aj such that ai
is in C but not in B, then ai > aj. (If both ai and aj are in C but
not in B, the result is arbitrary as long as it is consistent.)
3. For any other comparison, the result is arbitrary as long as it
is consistent.
• The responses of the adversary are always consistent. If
ai is in B but not in C, it means ai has never been compared with
a number that is smaller than ai. ⇒ Rule 1 will never yield
inconsistent answers. If ai is in C but not in B, it means ai has
never been compared with a number that is larger than ai. ⇒
Rule 2 will never yield inconsistent answers.
• If we compare two numbers in A, both numbers will be removed
from A, with the smaller of the two added to B and the larger
added to C. ⇒ Both |B| and |C| increase by 1. Any
other comparison will at most cause either |B| to increase by 1
or |C| to increase by 1.
• To increase |B| from 0 to n - 1 and to increase |C| from 0 to
n - 1, the best any algorithm can do is to use n/2 comparisons to
compare the n numbers that were initially in A and then use
2(n - 1) - n more comparisons so that |B| and |C| both reach n - 1
at the end.
• Thus, the total number of comparisons is at least
n/2 + [2(n - 1) - n] = (3n/2) - 2.
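A sketch of such an adversary, with elements identified by index
0, ..., n-1. The rules only require consistency; keeping tentative
concrete values and pushing an element that has only ever lost below
everything (or one that has only ever won above everything) is one way
to guarantee it, an implementation choice here rather than something
spelled out above:

class Adversary:
    def __init__(self, n):
        self.v = list(range(n))    # tentative values, adjusted as needed
        self.B = set()             # confirmed not to be the largest
        self.C = set()             # confirmed not to be the smallest
        self.comparisons = 0

    def compare(self, i, j):
        """Answer the question: is element i larger than element j?"""
        self.comparisons += 1
        if i in self.B and i not in self.C:     # rule 1: force i < j
            self.v[i] = min(self.v) - 1
        elif j in self.B and j not in self.C:   # rule 1: force j < i
            self.v[j] = min(self.v) - 1
        elif i in self.C and i not in self.B:   # rule 2: force i > j
            self.v[i] = max(self.v) + 1
        elif j in self.C and j not in self.B:   # rule 2: force j > i
            self.v[j] = max(self.v) + 1
        answer = self.v[i] > self.v[j]          # rule 3: use current values
        loser, winner = (j, i) if answer else (i, j)
        self.B.add(loser)                       # loser is not the largest
        self.C.add(winner)                      # winner is not the smallest
        return answer

# Each comparison of two numbers still in A grows both |B| and |C| by 1;
# any other comparison grows |B| + |C| by at most 1.
adv = Adversary(6)
for i in range(3):
    adv.compare(i, i + 3)
print(len(adv.B), len(adv.C), adv.comparisons)   # 3 3 3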
Tractable and Intractable Problems
• Brute force
• Traveling salesperson problem: (n - 1)! tours (69! ≈
1.71122 × 10^98; it takes about 5.42626 × 10^78 centuries if our
computer can examine 10^10 tours per second.)
• Knapsack problem: Suppose we have n integers a1, a2, ...,
an and a constant K. We want to find a subset of the integers
whose sum is less than or equal to K but as close to K as
possible. There are 2^n subsets. (For n = 100, 2^100 ≈ 1.26765 × 10^30;
it takes about 4.01969 × 10^12 years if our computer can examine
10^10 subsets per second.)
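A brute-force sketch of the knapsack version described above,
enumerating all 2^n subsets; the numbers and K are made up for
illustration. The doubling of the search space with each extra item is
what makes this approach hopeless at n = 100:

from itertools import combinations

def knapsack_brute_force(a, K):
    best_sum, best_subset = 0, ()
    n = len(a)
    for r in range(n + 1):
        for subset in combinations(a, r):   # all subsets of size r
            s = sum(subset)
            if best_sum < s <= K:           # closer to K, but not over it
                best_sum, best_subset = s, subset
    return best_sum, best_subset

print(knapsack_brute_force([5, 8, 3, 12], K=17))   # (17, (5, 12))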
• The table of growth rates
• A problem is considered tractable (computationally easy, solvable
in O(n^k) time) if it can be solved by an efficient algorithm, and
intractable (computationally difficult, its lower bound grows faster
than n^k) if there is no efficient algorithm for solving it.
• The class of NP-complete problems: There is a class of
problems, including TSP and Knapsack, for which no efficient
algorithm is currently known.
• We have not been able to prove that efficient algorithms
cannot exist for their solution.
• But it has been shown that these problems are either all
tractable or all intractable.
1. It takes an exponential number of steps to solve any
NP-complete problem in the worst case.
2. It takes a polynomial number of steps to solve any
NP-complete problem in the worst case.
3. Any NP-complete problem can be solved by a
polynomial-time deterministic algorithm on average if
and only if NP = P is proved.
4. If a problem is proved to be an NP-complete problem,
then at present it will always take an exponential number of
steps to solve this problem for any input.
5. The lower bound of an NP-complete problem is exponential
if and only if NP ≠ P is proved.