
Decrease-and-Conquer Algorithms: Lecture Notes

Lecture 4
Decrease-and-Conquer
Lecture Contents
1. Insertion Sort
2. Topological Sorting
3. Algorithms for Generating Combinatorial Objects
4. Decrease-by-a-Constant-Factor Algorithms
5. Variable-Size-Decrease Algorithms
Decrease-and-Conquer
1. Reduce a problem instance to a smaller instance of the same problem
2. Solve the smaller instance
3. Extend the solution of the smaller instance to obtain a solution to the original instance
3 Types of Decrease-and-Conquer
• Decrease by a constant (usually by 1):
- insertion sort
- graph traversal algorithms (DFS and BFS)
- algorithms for generating permutations, subsets
• Decrease by a constant factor (usually by half):
- binary search
- exponentiation by squaring (a^n = (a^2)^(n/2))
• Variable-size decrease:
- selection problem
- interpolation search
Example of Insertion Sort
Pseudocode of Insertion Sort
Algorithm InsertionSort(A[0..n – 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n – 1] of n orderable elements
//Output: Array A[0..n – 1] sorted in nondecreasing order
for i ← 1 to n – 1 do
    v ← A[i]
    j ← i – 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j – 1
    A[j + 1] ← v
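The pseudocode above translates directly into Python. A runnable sketch (the sample input in the test is mine, chosen for illustration):

```python
def insertion_sort(a):
    """Sort list a in nondecreasing order, in place, by insertion sort."""
    for i in range(1, len(a)):
        v = a[i]              # element to insert into the sorted prefix a[0..i-1]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift larger elements one position to the right
            j -= 1
        a[j + 1] = v
    return a
```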
Analysis of Insertion Sort
• The basic operation of the algorithm is the key comparison A[j] > v.
• Time efficiency:
- worst case (array sorted in decreasing order): A[j] > v is executed the largest number of times, i.e., for every j = i – 1, ..., 0. The number of key comparisons is
C_worst(n) = Σ_{i=1}^{n–1} i = n(n – 1)/2 ∈ Θ(n²)
- best case (array already sorted in ascending order): the comparison A[j] > v is executed only once on every iteration of the outer loop. The number of key comparisons is
C_best(n) = n – 1 ∈ Θ(n)
- average case: the number of key comparisons is C_avg(n) ≈ n²/4 ∈ Θ(n²). Do the details as an exercise.
Generating Permutations
Generate all n! permutations of the n-element set {1, 2, ..., n}.
First, generate all (n – 1)! permutations of {1, 2, ..., n – 1}. Then insert n into each of the n possible positions among the elements of every permutation of the n – 1 elements. Hence, we obtain all the permutations of {1, ..., n}.
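The bottom-up construction just described can be sketched in Python. This version inserts n into the positions of each permutation left to right only; the minimal-change order additionally alternates the insertion direction:

```python
def perms_by_insertion(n):
    """All permutations of {1..n}: insert n into every possible position
    of each permutation of {1..n-1} (decrease-by-one approach)."""
    if n == 1:
        return [[1]]
    result = []
    for p in perms_by_insertion(n - 1):
        # n can go into any of the n positions of a length-(n-1) permutation
        for pos in range(len(p) + 1):
            result.append(p[:pos] + [n] + p[pos:])
    return result
```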
• Minimal-change algorithm:
- We can insert n into the previously generated permutations of n – 1 elements either left to right or right to left.
- We start by inserting n into 12 ... (n – 1) moving right to left and then switch direction every time a new permutation of {1, ..., n – 1} needs to be processed.
• An algorithm that creates each new permutation from the previous one by exchanging only two neighboring elements is called a minimal-change algorithm.
• Exercise: Implement a minimal-change algorithm.
Generating Subsets
• Generate all 2^n subsets of an n-element set A = {a_1, ..., a_n}. The set of all subsets of a set is called its power set.
• All subsets of A = {a_1, ..., a_n} can be divided into two groups: those that do not contain a_n and those that do.
• The former group consists of all the subsets of {a_1, ..., a_{n–1}}, while every element of the latter can be obtained by adding a_n to a subset of {a_1, ..., a_{n–1}}. Thus, once we have a list of all subsets of {a_1, ..., a_{n–1}}, we can get all the subsets of {a_1, ..., a_n} by appending to the list all its elements with a_n added to each of them.
• There is a one-to-one correspondence between the 2^n subsets of an n-element set A = {a_1, ..., a_n} and the 2^n bit strings b_1 ... b_n of length n.
• The easiest way to establish such a correspondence is to assign to a subset the bit string in which b_i = 1 if a_i belongs to the subset and b_i = 0 if it does not.
• We can generate all bit strings of length n by generating the successive binary numbers from 0 to 2^n – 1.
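The binary-counting approach can be sketched as follows, with bit i of the counter deciding whether the i-th element belongs to the subset:

```python
def all_subsets(a):
    """Power set of list a via the binary numbers 0 .. 2^n - 1."""
    n = len(a)
    subsets = []
    for num in range(2 ** n):
        # keep a[i] exactly when bit i of num is 1
        subsets.append([a[i] for i in range(n) if num >> i & 1])
    return subsets
```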
• A sequence of bit strings in which every string differs from its immediate predecessor by only a single bit is called the binary reflected Gray code. For example, for n = 3, we get
000 001 011 010 110 111 101 100
• Exercise: Write a program to generate the binary
reflected Gray code of length n (e.g., n = 3).
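One way to approach the exercise is the standard reflect-and-prefix construction: take the code of length n – 1, prefix each string with 0, then append the reversed list with each string prefixed by 1. A sketch:

```python
def gray_code(n):
    """Binary reflected Gray code of length n: consecutive strings
    differ in exactly one bit."""
    if n == 0:
        return ['']
    prev = gray_code(n - 1)
    # first half: 0-prefixed codes in order; second half: 1-prefixed, reflected
    return ['0' + s for s in prev] + ['1' + s for s in reversed(prev)]
```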
Decrease-by-Constant-Factor Algorithms
• In this variation of decrease-and-conquer, the instance size is reduced by the same factor on each iteration (typically 2)
• Examples:
- Binary search
- Exponentiation by squaring
Binary Search
A very efficient algorithm for searching in a sorted array:
Pseudocode of Binary Search
Algorithm BinarySearch(A[0..n – 1], K)
//Implements nonrecursive binary search
//Input: An array A[0..n – 1] sorted in ascending order and a search key K
//Output: An index of the array’s element that is equal to K, or –1 if there is no such element
ℓ ← 0; r ← n – 1
while ℓ ≤ r do
    m ← ⌊(ℓ + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m – 1
    else ℓ ← m + 1
return –1
• Best case: K = A[m] on the first comparison
C_b(n) = 1 ∈ Θ(1)
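The pseudocode maps directly to Python; a runnable sketch:

```python
def binary_search(a, k):
    """Nonrecursive binary search in sorted list a.
    Returns an index of k, or -1 if k is not present."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2          # middle position, floor division
        if k == a[m]:
            return m
        elif k < a[m]:
            r = m - 1             # continue in the left half
        else:
            l = m + 1             # continue in the right half
    return -1
```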
Analysis of Binary Search
• Worst case (unsuccessful search): let the number of key comparisons be C_w(n); the recurrence relation for C_w(n) is
C_w(n) = C_w(⌊n/2⌋) + 1 for n > 1, C_w(1) = 1.
• Assuming n = 2^k and solving the recurrence by backward substitutions, we obtain
C_w(2^k) = k + 1 = log2 n + 1.
• In fact, there is a solution valid for an arbitrary positive integer n:
C_w(n) = ⌊log2 n⌋ + 1 = ⌈log2(n + 1)⌉.
• The worst-case efficiency of binary search is in Θ(log n).
• Average case: assume that n = 2^k – 1, so the binary decision tree has k levels.
- Successful search: C_a(n) = k – 1 + k/n ≈ log2(n + 1) – 1 ≈ log2 n ∈ Θ(log n).
- Unsuccessful search: C_a(n) ≈ log2(n + 1) – 1/2 ≈ log2 n ∈ Θ(log n).
• Exercise: do the average-case analysis for binary search.
Useful Formulas
Σ_{i=1}^{n} i·2^i = 1·2^1 + 2·2^2 + … + n·2^n = (n – 1)·2^{n+1} + 2
Σ_{i=0}^{n} 2^i = 1 + 2 + ... + 2^n = 2^{n+1} – 1
a^m × a^n = a^{m+n}
a^m / a^n = a^{m–n}
Master Theorem
• A problem instance of size n is divided into b instances of size n/b, with a of them needing to be solved, where a ≥ 1, b > 1, and n is a power of b.
• The recurrence for the running time T(n) is
T(n) = aT(n/b) + f(n),
where f(n) is the time spent on dividing and combining.
• The Master Theorem gives the solution to T(n)
Master Theorem: If f(n) ∈ Θ(n^d) with d ≥ 0, then
T(n) ∈ Θ(n^d) if a < b^d
T(n) ∈ Θ(n^d log n) if a = b^d
T(n) ∈ Θ(n^{log_b a}) if a > b^d
Note: Analogous results hold for O and Ω.
Exponentiation
• The problem: compute a^n, where n is a nonnegative integer and a is a positive number. Four methods to solve the problem are:
- Brute-force approach: a^n = a · a · … · a
M(n) = n – 1 ∈ Θ(n)
- Decrease-by-one approach: a^n = a^{n–1} · a (a^0 = 1)
M(n) = n – 1 ∈ Θ(n)
- Divide-and-conquer approach: a^n = a^{⌊n/2⌋} · a^{⌈n/2⌉}
• The recurrence for the number of multiplications is
M(n) = M(⌊n/2⌋) + M(⌈n/2⌉) + 1 for n > 1, M(1) = 0.
• For n = 2^k: M(n) = 2M(n/2) + 1 for n > 1, M(1) = 0.
• Master Theorem: a = 2, b = 2, d = 0, so a > b^d and M(n) ∈ Θ(n^{log_2 2}) = Θ(n)
Exponentiation by Squaring Algorithm
- Decrease-by-a-constant-factor approach:
• Example: a^4 = (a^2)^2, a^5 = (a^2)^2 · a
• The recurrence for the number of multiplications is
M(n) = M(⌊n/2⌋) + c for n > 1, where c = 1 or 2, M(0) = M(1) = 0.
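A Python sketch of this decrease-by-a-constant-factor scheme (one extra multiplication when n is odd, matching c = 1 or 2 in the recurrence):

```python
def power(a, n):
    """Compute a**n by squaring: a^n = (a^2)^(n//2), times a if n is odd."""
    if n == 0:
        return 1
    half = power(a * a, n // 2)   # one squaring multiplication per level
    return half * a if n % 2 else half
```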
• By the Master Theorem, a = 1, b = 2, d = 0, so a = b^d and M(n) ∈ Θ(n^0 log n) = Θ(log n).
• Exercise: solve the recurrence by backward substitutions to show that, for n = 2^k, M(n) = ck = c log2 n ∈ Θ(log n).
• Exercise: evaluate the time complexities of the four methods used to compute a^n.
Variable-Size-Decrease Algorithms
• In the variable-size-decrease variation of decrease-and-conquer, the instance size reduction varies from one iteration to another
• Examples:
- Selection problem
- Interpolation search
Selection Problem
• Find the k-th smallest element in a list of n numbers
- k = 1: find the smallest element (minimum)
- k = n: find the largest element (maximum)
- k = ⌈n/2⌉: find the median
• Example: 4, 1, 10, 9, 7, 12, 8, 2, 15; median = ?
Algorithms for the Selection Problem
• The sorting-based algorithm: sort the list and return the k-th element
• Efficiency (if sorted by mergesort): Θ(n log n)
• A faster algorithm is based on the quicksort-like partition of the list.
• Let s be the partition’s split position (the pivot’s position, counting from 1).
- k = s: the pivot p is the solution.
- k < s: look for the k-th smallest element in the left part.
- k > s: look for the (k – s)-th smallest element in the right part.
Recursive Implementation
Algorithm SelectionRec(A[ℓ..r], k)
// Solves the selection problem by the recursive partition-based algorithm
//Input: A subarray A[ℓ..r] of orderable elements and an integer k (1 ≤ k ≤ r – ℓ + 1)
//Output: The value of the k-th smallest element in A[ℓ..r]
s ← Partition(A[ℓ..r])
if k < s – ℓ + 1 // the pivot’s rank in A[ℓ..r] is s – ℓ + 1
    return SelectionRec(A[ℓ..s – 1], k)
else if k > s – ℓ + 1
    return SelectionRec(A[s + 1..r], k – (s – ℓ + 1))
else return A[s]
// initial call: SelectionRec(A[0..n – 1], k)
Hoare’s Partition Algorithm
Algorithm Partition(A[ℓ..r])
//Partitions a subarray using its first element as a pivot; ∈ Θ(n)
// (n + 1 comparisons when the scanning indices cross, i.e., i > j; n comparisons when i = j)
//Input: A subarray A[ℓ..r] of A[0..n – 1], defined by its left and right indices ℓ and r (ℓ < r)
//Output: Partition of A[ℓ..r], with the split position returned as this function’s value
p ← A[ℓ]
i ← ℓ; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p
    repeat j ← j – 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) // undo the last swap, made when i ≥ j
swap(A[ℓ], A[j])
return j
Lomuto’s Partition Algorithm
Algorithm Partition(A[ℓ..r])
//Partitions a subarray using its first element as a pivot
//Input: A subarray A[ℓ..r] of array A[0..n – 1], defined by its left and right indices ℓ and r (ℓ ≤ r)
//Output: Partition of A[ℓ..r] and the new position of the pivot
p ← A[ℓ]
s ← ℓ
for i ← ℓ + 1 to r do
    if A[i] < p
        s ← s + 1; swap(A[s], A[i])
swap(A[ℓ], A[s])
return s
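Combining Lomuto’s partition with the partition-based selection scheme gives the following Python sketch (function names are mine):

```python
def lomuto_partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the pivot's final index."""
    p = a[l]
    s = l                          # boundary of the "< p" segment
    for i in range(l + 1, r + 1):
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]
    a[l], a[s] = a[s], a[l]        # place the pivot at its final position
    return s

def quickselect(a, k):
    """Return the k-th smallest element of a (1 <= k <= len(a)); modifies a."""
    l, r = 0, len(a) - 1
    while True:
        s = lomuto_partition(a, l, r)
        if s == k - 1:             # pivot landed at rank k
            return a[s]
        elif s > k - 1:
            r = s - 1              # k-th smallest is in the left part
        else:
            l = s + 1              # k-th smallest is in the right part
```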
Example
• Find the median of the following list of nine numbers: 4, 1, 10, 9, 7, 12, 8, 2, 15. Here, k = ⌈9/2⌉ = 5.
After partitioning, s = k = 5, so we stop: the median is 8, which is greater than 1, 2, 4, 7 and less than 9, 10, 12, 15.
Non-Recursive Implementation
Algorithm Selection(A[0..n – 1], k)
//Solves the selection problem by the partition-based algorithm
//Input: An array A[0..n – 1] of orderable elements and an integer k (1 ≤ k ≤ n)
//Output: The value of the k-th smallest element in A[0..n – 1]
ℓ ← 0; r ← n – 1
A[n] ← ∞ // append a sentinel
while ℓ ≤ r do
    p ← A[ℓ] // the pivot
    i ← ℓ; j ← r + 1
    repeat
        repeat i ← i + 1 until A[i] ≥ p
        repeat j ← j – 1 until A[j] ≤ p
        swap(A[i], A[j])
    until i ≥ j
    swap(A[i], A[j]) // undo the last swap
    swap(A[ℓ], A[j]) // j is the partitioning position
    if k < j + 1
        r ← j – 1
    else if k > j + 1
        ℓ ← j + 1
    else return A[k – 1]
Efficiency Analysis
• Best and average cases: the splits happen roughly in the middle of the remaining array, so the recurrence for the number of comparisons is
C(n) = C(n/2) + Θ(n),
and the Master Theorem gives C(n) ∈ Θ(n).
• Worst case (e.g., the array is sorted in increasing order and k = n): C(n) is about the same as for quicksort, i.e., C(n) ∈ Θ(n²).
Interpolation Search
• Given a sorted array A[l..r] (with l = 0 and r = n – 1, i.e., A[0..n – 1]), determine whether a search key v is in A.
• Binary search (good for small arrays): C_w(n) = ⌊log2 n⌋ + 1, C_a(n) ≈ log2 n.
• Binary search always compares the search key v with the middle value of the given sorted array (and hence reduces the problem’s instance size by about half).
• Interpolation search (good for large arrays): C_w(n) ∈ Θ(n), C_a(n) < log2 log2 n + 1. Interpolation search chooses which of the array’s elements to compare with the search key v.
• The algorithm assumes that the array’s values increase roughly linearly (i.e., are fairly uniformly distributed), i.e., lie along the straight line through the points (l, A[l]) and (r, A[r]). The search key v is compared with the element A[x] whose index x (also called the probe position) is computed as
x = l + ⌊(v – A[l])(r – l) / (A[r] – A[l])⌋
Interpolation Search
• round-off operator ëû is used to guarantee that
index x is an integer. ëxû = m ↔ m ≤ x < m + 1
Exercise: Derive the above formula
54
Interpolation Search
55
- v ∉ [A[l], A[r]]: not found
- v = A[x]: stop
- v < A[x]: search for v in A[l..x – 1]
- v > A[x]: search for v in A[x + 1..r]
• Average case: C_a(n) < log2 log2 n + 1.
• Worst case (the array’s elements increase exponentially): C_w(n) ∈ Θ(n).
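A Python sketch of interpolation search under the stated assumptions (integer keys; the equal-endpoints guard is my addition, to avoid division by zero):

```python
def interpolation_search(a, v):
    """Search sorted list a for v; the probe position is estimated by
    linear interpolation between a[l] and a[r]. Returns an index or -1."""
    l, r = 0, len(a) - 1
    while l <= r and a[l] <= v <= a[r]:
        if a[l] == a[r]:                 # guard: all remaining keys equal
            return l if a[l] == v else -1
        # probe position: x = l + floor((v - a[l]) * (r - l) / (a[r] - a[l]))
        x = l + (v - a[l]) * (r - l) // (a[r] - a[l])
        if a[x] == v:
            return x
        elif a[x] < v:
            l = x + 1                    # search the right part
        else:
            r = x - 1                    # search the left part
    return -1
```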