
231ICS353 Basic Concepts in Algorithmic Analysis (1)

Chapter 1 - Basic Concepts in Algorithmic Analysis
Dr. Ahmed Al-Herz
KFUPM - ICS
August 30, 2022
1 / 95
What is an algorithm?
A sequence of unambiguous steps for solving a problem.
When an algorithm is given a valid input, it obtains the required
output in a finite amount of time.
2 / 95
Why study algorithms?
Algorithms are important for all areas of Computer Science:
Networking: efficient routing and switching.
Operating systems: resource allocation, handling deadlocks.
Bioinformatics: genome similarity.
Databases: efficient retrieval and load balancing.
etc.
Knowing the standard algorithms for solving different problems
allows you to select the right algorithm (function or method) in
your project.
Designing and analyzing new algorithms lets you solve a problem more
efficiently.
It also develops problem-solving and analytical skills.
3 / 95
Chapter 1
Reading assignment: sections 1-12, and 14.
4 / 95
Chapter 1
In Chapter 1, we will answer the following two questions:
What does it mean for an algorithm to be efficient?
How can one tell that one algorithm is more efficient than another?
We do so based on some easy-to-understand searching and sorting
algorithms that you may have seen earlier.
5 / 95
Searching Problem
Searching Problem
Assume A is an array with n elements A[1], A[2], ..., A[n]. For a given
element x, determine whether there is an index j such that 1 ≤ j ≤ n,
and x = A[j].
We will consider two basic algorithms for solving the searching
problem:
Linear Search.
Binary Search.
6 / 95
Linear Search Algorithm
Linear Search Algorithm
Input: An array A[1..n] of n elements and an element x
Output: j if x = A[j] where 1 ≤ j ≤ n, or 0 otherwise
1: for j ← 1 to n do
2:   if x = A[j] then
3:     return j
4:   end if
5: end for
6: return 0
7 / 95
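As an illustration (not from the slides), the pseudocode above maps almost line-for-line onto Python. Since the slides index arrays from 1, this sketch returns a 1-based index; the name `linear_search` is ours:

```python
def linear_search(A, x):
    """Return j (1-based) such that A[j] = x, or 0 if x is not in A."""
    for j in range(1, len(A) + 1):   # j = 1, 2, ..., n
        if A[j - 1] == x:            # A[j] in the 1-based pseudocode
            return j
    return 0
```

For example, `linear_search([30, 2, 6, 5, 21, 4], 5)` returns 4.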
Analyzing Linear Search Algorithm
One way to measure the efficiency of an algorithm is to count how
many instructions are executed before the algorithm terminates.
We will focus on instructions that are executed repeatedly.
Linear Search Algorithm
Input: An array A[1..n] of n elements and an element x
Output: j if x = A[j] where 1 ≤ j ≤ n, or 0 otherwise
1: for j ← 1 to n do
2:   if x = A[j] then
3:     return j
4:   end if
5: end for
6: return 0
8 / 95
Analyzing Linear Search Algorithm
[Figure: an array of n cells, positions 1, 2, …, n/2, …, n − 1, n]
What will be the number of element comparisons if x:
First appears in the first element of A: 1 comparison.
First appears in the middle element of A: n/2 comparisons.
First appears in the last element of A: n comparisons.
Does not appear in A: n comparisons.
9 / 95
Analyzing Linear Search Algorithm
Theorem
The number of comparisons performed by Linear Search Algorithm on
an array of size n is at most n.
10 / 95
Binary Search Algorithm
We can do “better” than linear search if we know that the
elements of A are sorted, say in non-decreasing order.
The idea is that you can compare x to the middle element of A,
say A[middle].
If x < A[middle] then you know that x cannot be an element from
A[middle+1], A[middle+2], ..., A[n].
If x > A[middle] then you know that x cannot be an element from
A[1], A[2], ..., A[middle-1].
11 / 95
Binary Search Algorithm
Binary Search Algorithm
Input: An array A[1..n] of n elements sorted in non-decreasing order
and an element x
Output: j if x = A[j] where 1 ≤ j ≤ n, and 0 otherwise
1: low ← 1, high ← n, j ← 0
2: while low ≤ high and j = 0 do
3:   mid ← ⌊(low + high)/2⌋
4:   if x = A[mid] then
5:     j ← mid
6:   else if x < A[mid] then
7:     high ← mid − 1
8:   else
9:     low ← mid + 1
10:  end if
11: end while
12: return j
12 / 95
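A hedged Python sketch of the same algorithm (names ours; 1-based index returned, 0 when x is absent). It also counts loop iterations, i.e. three-way comparisons of x with A[mid], the quantity bounded in the worst-case analysis:

```python
def binary_search(A, x):
    """Search sorted A for x; return (1-based index or 0, iteration count)."""
    low, high, j = 1, len(A), 0
    iterations = 0          # one comparison of x with A[mid] per iteration
    while low <= high and j == 0:
        iterations += 1
        mid = (low + high) // 2          # floor((low + high)/2)
        if x == A[mid - 1]:
            j = mid
        elif x < A[mid - 1]:
            high = mid - 1
        else:
            low = mid + 1
    return j, iterations
```

On the example array used in the slides, `binary_search([1, 4, 5, 7, 8, 9, 10, 12, 15, 20], 15)` returns `(9, 3)`.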
Binary Search Example
x = 15, A = [1, 4, 5, 7, 8, 9, 10, 12, 15, 20]
low = 1, high = 10
mid = ⌊(1 + 10)/2⌋ = 5
x > A[5] = 8
13 / 95
Binary Search Example
x = 15, A = [1, 4, 5, 7, 8, 9, 10, 12, 15, 20]
low = mid + 1 = 6, high = 10
mid = ⌊(6 + 10)/2⌋ = 8
x > A[8] = 12
14 / 95
Binary Search Example
x = 15, A = [1, 4, 5, 7, 8, 9, 10, 12, 15, 20]
low = mid + 1 = 9, high = 10
mid = ⌊(9 + 10)/2⌋ = 9
x = A[9] = 15, so j = 9
15 / 95
Worst Case Analysis Binary Search Algorithm
The goal is to find the maximum number of element comparisons.
Assume x is greater than the largest element in A.
The number of comparisons is equal to the number of iterations of the
while loop.
Note that the size of the sub-array (low to high) decreases by roughly half in
every iteration.
What is the size of the array (under consideration) in the:
first iteration: n.
second iteration: ⌊n/2⌋.
third iteration: ⌊⌊n/2⌋/2⌋ = ⌊n/2²⌋.
i-th iteration: ≈ n/2^(i−1).
Note that the last iteration occurs when the size is 1.
16 / 95
Worst Case Analysis Binary Search Algorithm
To find the number of iterations we need to find i when the size equals
1.
By using the floor function definition: ⌊x⌋ = m ⟺ m ≤ x < m + 1,
we have ⌊n/2^(i−1)⌋ = 1 ⟺ 1 ≤ n/2^(i−1) < 2
⟹ 2^(i−1) ≤ n < 2^i
⟹ i − 1 ≤ log n < i
⟹ i = ⌊log n⌋ + 1 (by the floor function definition)
Theorem
The number of comparisons performed by Binary Search Algorithm on
a sorted array of size n is at most blog nc + 1.
17 / 95
Sorting Problem
Sorting Problem
Assume A is an array with n elements A[1], A[2], ..., A[n]. Rearrange
the elements of the array in a specific order (usually in non-decreasing
order).
We will consider in this chapter three basic algorithms for solving
the sorting problem:
Selection Sort.
Insertion Sort.
Bottom-up Merge Sort.
18 / 95
Selection Sort Algorithm
Selection Sort Algorithm
Input: An array A[1..n] of n elements
Output: A[1..n] of n elements sorted in non-decreasing order
1: for i ← 1 to n − 1 do
2:   k ← i
3:   for j ← i + 1 to n do
4:     if A[j] < A[k] then
5:       k ← j
6:     end if
7:   end for
8:   if k ≠ i then
9:     Interchange A[i] and A[k]
10:  end if
11: end for
19 / 95
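A Python sketch of the algorithm (names ours), instrumented to count the element comparisons made in step 4 so the count can be checked against the analysis:

```python
def selection_sort(A):
    """Sort A in place; return the number of element comparisons."""
    n = len(A)
    comparisons = 0
    for i in range(n - 1):              # i = 1 .. n-1 in the pseudocode
        k = i                           # index of the current minimum
        for j in range(i + 1, n):
            comparisons += 1
            if A[j] < A[k]:
                k = j
        if k != i:
            A[i], A[k] = A[k], A[i]     # interchange A[i] and A[k]
    return comparisons
```

On A = [30, 2, 6, 5, 21, 4] this performs 15 = 6·5/2 comparisons, matching n(n − 1)/2.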
Selection Sort Example
Initial: A = [30, 2, 6, 5, 21, 4]  (i = 1; k scans for the minimum, found at position 2)
20 / 95
Selection Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 1, k = 2: interchange A[1] and A[2] → A = [2, 30, 6, 5, 21, 4]
21 / 95
Selection Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 1, k = 2 → A = [2, 30, 6, 5, 21, 4]
i = 2, k = 6 → A = [2, 4, 6, 5, 21, 30]
22 / 95
Selection Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 1, k = 2 → A = [2, 30, 6, 5, 21, 4]
i = 2, k = 6 → A = [2, 4, 6, 5, 21, 30]
i = 3, k = 4 → A = [2, 4, 5, 6, 21, 30]
23 / 95
Selection Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 1, k = 2 → A = [2, 30, 6, 5, 21, 4]
i = 2, k = 6 → A = [2, 4, 6, 5, 21, 30]
i = 3, k = 4 → A = [2, 4, 5, 6, 21, 30]
i = 4 and i = 5: i = k, no interchange. Final: A = [2, 4, 5, 6, 21, 30]
Analyzing Selection Sort Algorithm
We need to find the number of comparisons.
The outer loop iterates n − 1 times.
In the i-th outer iteration we make n − i comparisons in step 4.
The total number of comparisons is
Σ_{i=1}^{n−1} (n − i)
Note that
Σ_{i=1}^{n−1} (n − i) = (n − 1) + (n − 2) + … + 2 + 1 = Σ_{i=1}^{n−1} i = n(n − 1)/2
Theorem
The number of comparisons performed by Selection Sort Algorithm on
an array of size n is n(n − 1)/2.
25 / 95
Insertion Sort Algorithm
Insertion Sort Algorithm
Input: An array A[1..n] of n elements
Output: A[1..n] of n elements sorted in non-decreasing order
1: for i ← 2 to n do
2:   x ← A[i]
3:   j ← i − 1
4:   while j > 0 and A[j] > x do
5:     A[j + 1] ← A[j]
6:     j ← j − 1
7:   end while
8:   A[j + 1] ← x
9: end for
26 / 95
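A Python sketch (our naming), again counting element comparisons; already-sorted input should give n − 1 comparisons and reverse-sorted input n(n − 1)/2:

```python
def insertion_sort(A):
    """Sort A in place; return the number of element comparisons A[j] > x."""
    comparisons = 0
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while j >= 0:
            comparisons += 1        # the comparison A[j] > x
            if A[j] > x:
                A[j + 1] = A[j]     # shift right
                j -= 1
            else:
                break
        A[j + 1] = x
    return comparisons
```

For n = 6: `insertion_sort([1, 2, 3, 4, 5, 6])` returns 5 and `insertion_sort([6, 5, 4, 3, 2, 1])` returns 15.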
Insertion Sort Example
Initial: A = [30, 2, 6, 5, 21, 4]  (i = 2)
27 / 95
Insertion Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 2 (x = 2) → A = [2, 30, 6, 5, 21, 4]
28 / 95
Insertion Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 2 (x = 2) → A = [2, 30, 6, 5, 21, 4]
i = 3 (x = 6) → A = [2, 6, 30, 5, 21, 4]
29 / 95
Insertion Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 2 (x = 2) → A = [2, 30, 6, 5, 21, 4]
i = 3 (x = 6) → A = [2, 6, 30, 5, 21, 4]
i = 4 (x = 5) → A = [2, 5, 6, 30, 21, 4]
30 / 95
Insertion Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 2 (x = 2) → A = [2, 30, 6, 5, 21, 4]
i = 3 (x = 6) → A = [2, 6, 30, 5, 21, 4]
i = 4 (x = 5) → A = [2, 5, 6, 30, 21, 4]
i = 5 (x = 21) → A = [2, 5, 6, 21, 30, 4]
31 / 95
Insertion Sort Example
A = [30, 2, 6, 5, 21, 4]
i = 2 (x = 2) → A = [2, 30, 6, 5, 21, 4]
i = 3 (x = 6) → A = [2, 6, 30, 5, 21, 4]
i = 4 (x = 5) → A = [2, 5, 6, 30, 21, 4]
i = 5 (x = 21) → A = [2, 5, 6, 21, 30, 4]
i = 6 (x = 4) → A = [2, 4, 5, 6, 21, 30]  (sorted)
32 / 95
Analyzing Insertion Sort Algorithm
The minimum number of comparisons is n − 1 which occurs when the
elements of the array are already sorted in non-decreasing order.
The maximum number of comparisons occurs when the elements are
already sorted in decreasing order.
In the i-th iteration we have i − 1 comparisons, since we compare A[i]
with all elements in A[1...i − 1].
So the total is equal to Σ_{i=2}^{n} (i − 1).
Shift the summation index: 2 ≤ i ≤ n ⟹ 1 ≤ i − 1 = i′ ≤ n − 1, so
Σ_{i′=1}^{n−1} i′ = n(n − 1)/2
Theorem
The number of comparisons performed by Insertion Sort Algorithm on
an array of size n is at least n − 1 and at most n(n − 1)/2.
33 / 95
Merging Two Sorted Lists
Merging Two Sorted Lists Problem
Given two lists of elements sorted in non-decreasing order, merge them
into one list sorted in non-decreasing order.
Example
List 1: [1, 2, 5, 10, 21]
List 2: [3, 7, 9, 32]
Merged: [1, 2, 3, 5, 7, 9, 10, 21, 32]
34 / 95
Merge Algorithm
Merge Algorithm
Input: An array A[1..m] and three indices p, q and r with
1 ≤ p ≤ q < r ≤ m such that both subarrays A[p..q] and A[q + 1..r]
are sorted in non-decreasing order
Output: A[p..r] sorted in non-decreasing order
1: s ← p, t ← q + 1, k ← p
2: while s ≤ q and t ≤ r do
3:   if A[s] ≤ A[t] then
4:     B[k] ← A[s]
5:     s ← s + 1
6:   else
7:     B[k] ← A[t]
8:     t ← t + 1
9:   end if
10:  k ← k + 1
11: end while
12: if s = q + 1 then
13:   B[k..r] ← A[t..r]
14: else
15:   B[k..r] ← A[s..q]
16: end if
17: A[p..r] ← B[p..r]
35 / 95
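A Python sketch of Merge (our translation; 1-based inclusive boundaries p, q, r as in the pseudocode, with the auxiliary array B built as a list):

```python
def merge(A, p, q, r):
    """Merge sorted runs A[p..q] and A[q+1..r] (1-based, inclusive) in place."""
    B, s, t = [], p, q + 1
    while s <= q and t <= r:
        if A[s - 1] <= A[t - 1]:
            B.append(A[s - 1])
            s += 1
        else:
            B.append(A[t - 1])
            t += 1
    # copy whichever run is not yet exhausted
    B.extend(A[s - 1:q] if s <= q else A[t - 1:r])
    A[p - 1:r] = B            # write B back into A[p..r]
```

For instance, with the two runs stored side by side in one array, `merge([1, 2, 5, 10, 21, 3, 7, 9, 32], 1, 5, 9)` leaves the array fully sorted.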
Analyzing Merge Algorithm
Let the sizes of the two lists be n₁ and n₂.
The best case: all elements of the smaller sub-array are smaller
than or equal to all elements of the other sub-array.
In this case, the number of comparisons is min(n₁, n₂).
The worst case: all elements of one sub-array except its last element
are smaller than or equal to all elements of the other sub-array, while
that last element is greater than all elements of the other sub-array.
The number of comparisons in this case is n₁ + n₂ − 1.
Example of the worst case:
List 1: [1, 2, 50]
List 2: [4, 17, 19, 43]
36 / 95
Bottom-up Sort Algorithm
Bottom-up Sort Algorithm
Input: An array A[1..n] of n elements
Output: A[1..n] of n elements sorted in non-decreasing order
1: t ← 1
2: while t < n do
3:   s ← t, t ← 2t, i ← 0
4:   while i + t ≤ n do
5:     Merge(A, i + 1, i + s, i + t)
6:     i ← i + t
7:   end while
8:   if i + s < n then
9:     Merge(A, i + 1, i + s, n)
10:  end if
11: end while
37 / 95
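A Python sketch of the whole procedure (names ours); it reuses a Merge translated as before, repeated here so the snippet is self-contained:

```python
def merge(A, p, q, r):
    """Merge sorted runs A[p..q] and A[q+1..r] (1-based, inclusive) in place."""
    B, s, t = [], p, q + 1
    while s <= q and t <= r:
        if A[s - 1] <= A[t - 1]:
            B.append(A[s - 1]); s += 1
        else:
            B.append(A[t - 1]); t += 1
    B.extend(A[s - 1:q] if s <= q else A[t - 1:r])   # leftover run
    A[p - 1:r] = B

def bottom_up_sort(A):
    """Merge runs of length s = 1, 2, 4, ... exactly as in the pseudocode."""
    n, t = len(A), 1
    while t < n:
        s, t, i = t, 2 * t, 0
        while i + t <= n:               # merge full pairs of runs
            merge(A, i + 1, i + s, i + t)
            i += t
        if i + s < n:                   # one leftover partial pair
            merge(A, i + 1, i + s, n)
```

Running `bottom_up_sort` on [30, 2, 6, 5, 21, 4] reproduces the passes traced on the next slides.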
Bottom-up Sort Example
A = [30, 2, 6, 5, 21, 4]  (positions 1–6)
38 / 95
Bottom-up Sort Example
A = [30, 2, 6, 5, 21, 4]
s = 1, t = 2, i = 0
First iteration: merge i + 1 = 1, i + s = 1, i + t = 2
Second iteration: merge i + 1 = 3, i + s = 3, i + t = 4
Third iteration: merge i + 1 = 5, i + s = 5, i + t = 6
39 / 95
Bottom-up Sort Example
Initial: A = [30, 2, 6, 5, 21, 4]
After the t = 2 pass: A = [2, 30, 5, 6, 4, 21]
s = 2, t = 4, i = 0
First iteration: merge i + 1 = 1, i + s = 2, i + t = 4
second iteration: i + 1 = 5, i + s = 6, i + t = 8 > 6, so terminate the
inner loop.
Now check i + s = 6 ≮ 6 so the last sub-array is not merged.
40 / 95
Bottom-up Sort Example
A = [2, 30, 5, 6, 4, 21]
After the t = 4 pass: A = [2, 5, 6, 30, 4, 21]
s = 4, t = 8, i = 0
First iteration: i + 1 = 1, i + s = 4, i + t = 8 > 6, so terminate the inner
loop.
Now check i + s = 4 < 6, so the two sub-arrays with boundaries
i + 1 = 1, i + s = 4, and n = 6 are merged.
41 / 95
Bottom-up Sort Example
A = [30, 2, 6, 5, 21, 4]
→ A = [2, 30, 5, 6, 4, 21]
→ A = [2, 5, 6, 30, 4, 21]
→ A = [2, 4, 5, 6, 21, 30]  (sorted)
42 / 95
Analyzing Bottom-up Sort Algorithm
Assume n is a power of 2.
The outer loop is executed log n times (it terminates when t = 2^x = n;
taking the log of both sides gives x = log n).
In the first iteration there are n/2 merge operations, each costing one
comparison.
In the second iteration there are n/4 merge operations and each list has
two elements. The number of comparisons is 2 or 3 for each merge
operation.
In the third iteration there are n/8 merge operations on lists of four
elements. The number of comparisons is between 4 and 7 for each merge
operation.
In general, in the i-th iteration there are n/2^i merge operations on
lists of 2^(i−1) elements. The total number of comparisons in the i-th
iteration is between (n/2^i)·2^(i−1) and (n/2^i)(2^i − 1).
43 / 95
Analyzing Bottom-up Sort Algorithm
Let k = log n.
The number of comparisons is at least
Σ_{i=1}^{k} (n/2^i)·2^(i−1) = Σ_{i=1}^{k} n/2 = kn/2 = (n log n)/2
44 / 95
Analyzing Bottom-up Sort Algorithm
and at most:
Σ_{i=1}^{k} (n/2^i)(2^i − 1)
= Σ_{i=1}^{k} (n − n/2^i)
= kn − n Σ_{i=1}^{k} 1/2^i
We know that Σ_{i=0}^{k} 1/2^i = 2 − 1/2^k
(using Σ_{i=0}^{k} x^i = (x^(k+1) − 1)/(x − 1) with x = 1/2),
so Σ_{i=1}^{k} 1/2^i = 2 − 1/2^k − 1 = 1 − 1/2^k
Thus, we have kn − n(1 − 1/2^k) = n log n − n + 1
45 / 95
Analyzing Bottom-up Sort Algorithm
Theorem
The number of comparisons performed by Bottom-up Sort Algorithm
on an array of size n is at least (n log n)/2 and at most n log n − n + 1.
46 / 95
Time Complexity
One way to measure the performance of an algorithm is how fast it
executes.
The question is: how do we measure that?
47 / 95
Order of Growth
We look for an “objective” measure, i.e., the number of operations.
Since counting the exact number of operations is difficult, and
sometimes impossible, we can always use asymptotic analysis,
where constants and lower-order terms are ignored.
E.g., n³, 1000n³, and 10n³ + 10000n² + 5n − 1 are all “the same”.
The reason we can do this is that we are always interested in
comparing different algorithms for arbitrarily large input sizes.
48 / 95
Big Oh
Big-Oh definition
Let f(n) and g(n) be non-negative functions; f(n) is said to be
O(g(n)) if there exist two positive constants c and n₀ such that
f(n) ≤ cg(n) for all n ≥ n₀.
O() notation indicates an upper bound.
49 / 95
Big Oh
Find c and n₀ to show that f(n) = 10n² + 20n + 5 is in O(n²)
We want to show f(n) = 10n² + 20n + 5 ≤ cn² for all n ≥ n₀.
Dividing by n² we get 10 + 20/n + 5/n² ≤ c.
Clearly for c = 35 the inequality holds for all n ≥ 1.
50 / 95
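A quick numeric sanity check of the chosen constants (illustration only, not part of the proof):

```python
def f(n):
    return 10 * n**2 + 20 * n + 5

# With c = 35 and n0 = 1, f(n) <= 35 * n^2 holds on a large sample of n;
# equality occurs at n = 1, where f(1) = 35.
assert all(f(n) <= 35 * n**2 for n in range(1, 10_001))
```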
Big-Omega
Big-Omega definition
Let f(n) and g(n) be non-negative functions; f(n) is said to be Ω(g(n))
if there exist two positive constants c and n₀ such that f(n) ≥ cg(n)
for all n ≥ n₀.
Ω() notation indicates a lower bound.
51 / 95
Big Omega
Find c and n₀ to show that f(n) = 10n² + 20n + 5 is in Ω(n²)
We want to show f(n) = 10n² + 20n + 5 ≥ cn² for all n ≥ n₀.
Dividing by n² we get 10 + 20/n + 5/n² ≥ c.
Clearly for c = 10 the inequality holds for all n ≥ 1.
52 / 95
Big-Theta
Big-Theta definition
Let f(n) and g(n) be non-negative functions; f(n) is said to be
Θ(g(n)) if there exist three positive constants c₁, c₂ and n₀ such that
c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀.
Θ() notation indicates a tight bound.
If f (n) = O(g(n)) and f (n) = Ω(g(n)) then f (n) = Θ(g(n)).
53 / 95
Big Theta
Show that f(n) = 10n² + 20n + 5 is in Θ(n²)
Since f(n) = O(n²) and f(n) = Ω(n²), f(n) = Θ(n²).
54 / 95
Big Theta
Show that log n! = Θ(n log n).
Note that
log n! = log(n(n − 1)⋯(2)(1)) = log n + log(n − 1) + … + log 2 + log 1
= Σ_{j=1}^{n} log j
First we want to show that Σ_{j=1}^{n} log j = O(n log n).
Note that
log n + log(n − 1) + … + log 2 + log 1 ≤ log n + log n + … + log n  (n terms)
so Σ_{j=1}^{n} log j ≤ n log n, and for c = 1, n₀ = 2 we have
Σ_{j=1}^{n} log j = O(n log n)
55 / 95
Big Theta
Now we want to show that Σ_{j=1}^{n} log j = Ω(n log n).
Note that
log n + log(n − 1) + … + log 2 + log 1 ≥ log(n/2) + log(n/2) + … + log(n/2)  (n/2 terms)
Σ_{j=1}^{n} log j ≥ Σ_{j=1}^{n/2} log(n/2) = (n/2) log(n/2) ≥ cn log n
and for c = 1/4, n₀ = 4 we have Σ_{j=1}^{n} log j = Ω(n log n)
Thus, log n! = Θ(n log n).
56 / 95
little-Oh
little-Oh definition
Let f(n) and g(n) be non-negative functions; f(n) is said to be o(g(n))
if for every positive constant c there exists a constant n₀ such that
f(n) < cg(n) for all n ≥ n₀.
o() notation indicates that f(n) and g(n) belong to different
equivalence classes,
while Θ() indicates that the functions belong to the same equivalence
class.
n = o(n²) and n² ≠ o(n²)
n ≠ Θ(n²) and n² = Θ(n²)
57 / 95
Relational properties
Transitivity:
f (n) = Θ(g(n)) and g(n) = Θ(h(n)) =⇒ f (n) = Θ(h(n))
Reflexivity:
f (n) = Θ(f (n))
Symmetry:
f (n) = Θ(g(n)) ⇐⇒ g(n) = Θ(f (n))
Transpose symmetry:
f (n) = O(g(n)) ⇐⇒ g(n) = Ω(f (n))
58 / 95
Asymptotic functions and limits
Let f (n) and g(n) be non-negative functions, then
lim_{n→∞} f(n)/g(n) < ∞ ⟹ f(n) = O(g(n))
lim_{n→∞} f(n)/g(n) > 0 ⟹ f(n) = Ω(g(n))
lim_{n→∞} f(n)/g(n) = c, where 0 < c < ∞ ⟹ f(n) = Θ(g(n))
lim_{n→∞} f(n)/g(n) = 0 ⟹ f(n) = o(g(n))
59 / 95
Examples
Show that 20n² + 10n = Θ(n²)
Let f(n) = 20n² + 10n, g(n) = n²
lim_{n→∞} (20n² + 10n)/n²
= lim_{n→∞} 20n²/n² + lim_{n→∞} 10n/n²
= 20 + 0.
Since the limit is a positive constant, 20n² + 10n = Θ(n²).
This also implies 20n² + 10n = O(n²) and 20n² + 10n = Ω(n²).
60 / 95
Examples
Show that log n² = o(n)
Let f(n) = log n² = 2 log n, g(n) = n
lim_{n→∞} log n²/n = lim_{n→∞} 2 log n/n = ∞/∞, which is indeterminate.
Use L'Hôpital's rule (differentiate both functions):
f′(n) = 2/(n ln 2) and g′(n) = 1
lim_{n→∞} 2/(n ln 2) = 0
Thus, log n² = o(n); this also means log n² = O(n).
61 / 95
Examples
Show that n³ = Ω(n²)
Let f(n) = n³, g(n) = n²
lim_{n→∞} n³/n² = lim_{n→∞} n = ∞
⟹ n³ = Ω(n²)
This also implies n³ ≠ O(n²) and n³ ≠ Θ(n²).
62 / 95
Examples (Proof by induction)
Show that 2^n = O(n!)
We want to prove 2^n < n! for all n ≥ 4.
Base case: the statement is true when n = 4, since 2⁴ = 16 < 4! = 24.
Inductive hypothesis: assume 2^k < k! for some k ≥ 4.
Inductive step: we want to show the statement is true for k + 1.
Using the inductive hypothesis we have 2^k < k!.
Multiplying by 2 we get 2·2^k < 2·k!.
We know that 2 < k + 1 for any k ≥ 4,
so 2·2^k < 2·k! < (k + 1)k!,
i.e., 2^(k+1) < (k + 1)!.
Thus, 2^n = O(n!).
63 / 95
Space Complexity
Space complexity refers to the amount of memory space needed to
carry out the computational steps required in an algorithm
excluding memory space needed to hold the input.
As in time complexity we use asymptotic notations to express the
amount of space.
64 / 95
Space Complexity Examples
Space complexity of:
Linear Search Algorithm: O(1).
Binary Search Algorithm: O(1).
Selection Sort Algorithm: O(1).
Insertion Sort Algorithm: O(1).
Merge Algorithm: O(n).
Bottom-up Merge Sort Algorithm: O(n).
65 / 95
Optimal Algorithm
If one can show that no algorithm can solve a certain problem in
asymptotically less time than a certain algorithm A does, we call A an
optimal algorithm.
Example: it has been proved that no comparison-based algorithm can sort
with fewer than Ω(n log n) comparisons, so any algorithm that sorts
using O(n log n) comparisons is optimal.
66 / 95
Estimating the Running Time of an Algorithm
This is achieved by:
Counting the frequency of basic operations.
Recurrence Relations.
67 / 95
Counting the frequency of basic operations
We focus on an operation that is a good representative of the overall
time complexity of the algorithm.
The number of iterations in a while loop and/or a for loop is a
good indication of the total number of operations.
We represent the cost of a loop in summation form, then evaluate
the summation.
68 / 95
Useful Summation Formulas
Σ_{i=m}^{n} c = c(n − m + 1)
Σ_{i=1}^{n} i = n(n + 1)/2
Σ_{i=1}^{n} i² = n(n + 1)(2n + 1)/6
Σ_{i=1}^{n} i³ = (n(n + 1)/2)²
Σ_{i=0}^{n} aⁱ = (a^(n+1) − 1)/(a − 1),  a ≠ 1
69 / 95
Example
Example 1
1: sum ← 0
2: for j ← 1 to n do
3:   for i ← 1 to j do
4:     sum ← sum + 1
5:   end for
6: end for
7: for k ← 1 to n³ do
8:   A[k] ← k
9: end for
Question: What is the time complexity of the algorithm?
70 / 95
Example
Represent the cost of the loops in summation form:
Σ_{j=1}^{n} Σ_{i=1}^{j} 1  (outer and inner loop)  +  Σ_{k=1}^{n³} 1
= Σ_{j=1}^{n} j + n³
= n(n + 1)/2 + n³ = Θ(n³)
71 / 95
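The closed form can be checked by brute-force counting (a sketch; `count_ops` is our name):

```python
def count_ops(n):
    """Count body executions of the loops in Example 1."""
    count = 0
    for j in range(1, n + 1):
        for i in range(1, j + 1):   # inner loop: j executions
            count += 1
    count += n ** 3                 # the third loop runs n^3 times
    return count

# count_ops(n) equals n(n + 1)/2 + n^3
```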
Example
Example 2
1: for j ← 1 to √n do
2:   sum[j] ← 0
3:   for i ← 1 to j² do
4:     sum[j] ← sum[j] + i
5:   end for
6: end for
Question: How many times is line 4 executed?
72 / 95
Example
Represent the cost of the loops in summation form:
Σ_{j=1}^{√n} Σ_{i=1}^{j²} 1
= Σ_{j=1}^{√n} j²
= √n(√n + 1)(2√n + 1)/6
= √n(2n + 3√n + 1)/6
= n^1.5/3 + n/2 + √n/6 = Θ(n^1.5)
73 / 95
Example
Example 3
1: count ← 0
2: for i ← 1 to n do
3:   m ← ⌊n/i⌋
4:   for j ← 1 to m do
5:     count ← count + 1
6:   end for
7: end for
Question: How many times is line 5 executed?
74 / 95
Example
Represent the cost of the loops in summation form:
Σ_{i=1}^{n} Σ_{j=1}^{⌊n/i⌋} 1
= Σ_{i=1}^{n} ⌊n/i⌋
Using the floor function definition, n/i − 1 < ⌊n/i⌋ ≤ n/i, we get
Σ_{i=1}^{n} (n/i − 1) < Σ_{i=1}^{n} ⌊n/i⌋ ≤ Σ_{i=1}^{n} n/i
Using the following fact (Appendix A, page 513):
ln(n + 1) ≤ Σ_{i=1}^{n} 1/i ≤ ln n + 1, we get
Σ_{i=1}^{n} (n/i − 1) < Σ_{i=1}^{n} ⌊n/i⌋ ≤ Σ_{i=1}^{n} n/i ≈ n log n
Thus, line 5 is executed Θ(n log n) times.
75 / 95
Example
Example 4
1: sum ← 0
2: k ← 1
3: while k ≤ n do
4:   for j ← 1 to n do
5:     sum ← sum + 1
6:   end for
7:   k ← 2k
8: end while
Question: How many times is line 5 executed?
76 / 95
Example
Note the increment of k is not by 1:
k = 1, 2, 4, …, n
In powers of 2 (the exponent increases by 1):
k = 2⁰, 2¹, 2², …, 2^(log n), so k = 2ⁱ with i = log k.
Represent the cost of the loops in summation form in terms of i:
Σ_{i=0}^{log n} Σ_{j=1}^{n} 1
= Σ_{i=0}^{log n} n = n Σ_{i=0}^{log n} 1
= n(log n + 1) = n log n + n
= Θ(n log n)
77 / 95
Example
Example 5
1: sum ← 0
2: k ← 1
3: while k ≤ n do
4:   for j ← 1 to k do
5:     sum ← sum + 1
6:   end for
7:   k ← 2k
8: end while
Question: How many times is line 5 executed?
78 / 95
Example
Note the increment of k is not by 1:
k = 1, 2, 4, …, n
In powers of 2: k = 2⁰, 2¹, 2², …, 2^(log n), so k = 2ⁱ with i = log k.
Represent the cost of the loops in summation form in terms of i:
Σ_{i=0}^{log n} Σ_{j=1}^{2ⁱ} 1
= Σ_{i=0}^{log n} 2ⁱ = (2^(log n + 1) − 1)/(2 − 1)
= 2·2^(log n) − 1 = 2n − 1
= Θ(n)
79 / 95
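The closed form 2n − 1 can be checked by simulation (our helper; assumes n is a power of 2, as in the derivation):

```python
def count_line5(n):
    """Count executions of line 5 in Example 5 (n a power of 2)."""
    count, k = 0, 1
    while k <= n:
        count += k      # the inner for-loop body runs k times
        k *= 2
    return count
```

For example, `count_line5(16)` returns 31 = 2·16 − 1.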
Example
Example 6
1: sum ← 0
2: while n ≥ 1 do
3:   for j ← 1 to n do
4:     sum ← sum + 1
5:   end for
6:   n ← n/2
7: end while
Question: How many times is line 4 executed?
80 / 95
Example
Let k be the value of n at the start of each outer-loop iteration:
k = n, n/2, …, 2, 1
k = 2^(log n), 2^(log n − 1), …, 2¹, 2⁰
k = 2ⁱ, i = log k
Represent the cost of the loops in summation form in terms of i:
Σ_{i=0}^{log n} Σ_{j=1}^{2ⁱ} 1
= Σ_{i=0}^{log n} 2ⁱ = (2^(log n + 1) − 1)/(2 − 1)
= 2·2^(log n) − 1 = 2n − 1
= Θ(n)
81 / 95
Example
Example 7
1: sum ← 0
2: for i ← 1 to n do
3:   j ← 2
4:   while j ≤ n do
5:     sum ← sum + 1
6:     j ← j²
7:   end while
8: end for
Question: How many times is line 5 executed?
82 / 95
Example
In each outer iteration, j takes the values
j = 2^(2⁰), 2^(2¹), …, 2^(2^(log log n − 1)), 2^(2^(log log n))
so the inner-loop counter k = 0, 1, …, log log n − 1, log log n.
Represent the cost of the loops in summation form:
Σ_{i=1}^{n} Σ_{k=0}^{log log n} 1
= Σ_{i=1}^{n} (log log n + 1)
= n(log log n + 1)
= Θ(n log log n)
83 / 95
Recurrence Relations
The number of operations can be represented as a recurrence
relation.
There are well-known techniques, which we will study, for solving
these recurrences.
84 / 95
Example of Using Recurrence Relations
Recursive Binary Search Algorithm
Input: A[1..n] of n elements sorted in non-decreasing order and an element x
Output: j if x = A[j] where 1 ≤ j ≤ n, and 0 otherwise
1: procedure RecBinarySearch(A, low, high, x)
2:   if low > high then
3:     return 0
4:   else
5:     mid ← ⌊(low + high)/2⌋
6:     if x = A[mid] then
7:       return mid
8:     else if x < A[mid] then
9:       return RecBinarySearch(A, low, mid − 1, x)
10:    else
11:      return RecBinarySearch(A, mid + 1, high, x)
12:    end if
13:  end if
14: end procedure
85 / 95
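The same procedure in Python (a sketch; 1-based indices as in the slides, with an explicit `return` before each recursive call so the found index propagates back to the caller):

```python
def rec_binary_search(A, low, high, x):
    """Return the 1-based index of x in sorted A[low..high], or 0."""
    if low > high:
        return 0
    mid = (low + high) // 2
    if x == A[mid - 1]:
        return mid
    if x < A[mid - 1]:
        return rec_binary_search(A, low, mid - 1, x)
    return rec_binary_search(A, mid + 1, high, x)
```

For example, `rec_binary_search([1, 4, 5, 7, 8, 9, 10, 12, 15, 20], 1, 10, 15)` returns 9.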
Example of Using Recurrence Relations
The number of comparisons C(n) performed by the recursive binary
search algorithm satisfies:
C(n) ≤ 1, if n = 1
C(n) ≤ C(n/2) + 1, if n > 1
Solution:
C(n) ≤ C(n/2) + 1
C(n) ≤ C((n/2)/2) + 1 + 1 = C(n/4) + 1 + 1
⋮
C(n) ≤ C(n/2^k) + k  (k ones added after k steps)
Note that n/2^k = 1 when k = log n.
So, C(n) ≤ C(1) + log n ⟹ C(n) ≤ log n + 1
86 / 95
Worst Case Analysis
In worst case analysis of time complexity we select the maximum
cost among all possible inputs of size n.
One can do that for the O(), Ω() notations as well as the Θ()
notation.
87 / 95
Average Case Analysis
The probability of each input is an important piece of prior knowledge,
needed in order to compute the number of operations on average.
Usually, average case analysis is lengthy and complicated, even
with simplifying assumptions.
88 / 95
Computing the Average Running Time
The running time in this case is taken to be the average time over
all inputs of size n.
Assume we have k inputs, where input i costs C(i) operations and
occurs with probability P(i), 1 ≤ i ≤ k. The average running time is
given by Σ_{i=1}^{k} C(i)P(i).
89 / 95
Average Case Analysis of Linear Search
Assume that x is equally likely to appear at any position in the
array, or not to appear at all.
Since we have n + 1 possibilities (n positions, or not in A), the
probability that x = A[i], or that x does not exist in A, is 1/(n + 1).
What is the number of comparisons for each input? If x is in position i
then the algorithm makes i comparisons and if it is not in the array
then n comparisons are done.
Thus, the average number of comparisons is:
Σ_{i=1}^{n} (1/(n+1))·i + (1/(n+1))·n
= (1/(n+1)) Σ_{i=1}^{n} i + n/(n+1)
= (1/(n+1))·n(n+1)/2 + n/(n+1)
= n/2 + n/(n+1) = Θ(n)
90 / 95
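The closed form n/2 + n/(n + 1) can be verified exactly with rational arithmetic (a sketch under the slide's uniform-probability assumption; the function name is ours):

```python
from fractions import Fraction

def average_comparisons(n):
    """Expected comparisons of linear search when each of the n + 1 outcomes
    (x at position i, or x absent) has probability 1/(n + 1)."""
    p = Fraction(1, n + 1)
    return sum(p * i for i in range(1, n + 1)) + p * n
```

For example, `average_comparisons(10)` equals 5 + 10/11, i.e. n/2 + n/(n + 1) for n = 10.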
Average Case Analysis of Insertion Sort
Assume the array A contains distinct elements.
Assume all n! permutations are equally likely.
Consider inserting A[i] into its proper position in A[1..i]. If its proper
position is j, 1 ≤ j ≤ i, then the number of comparisons is i − j if j = 1,
and i − j + 1 if 2 ≤ j ≤ i.
The probability that a given location in A[1..i] is the proper one for
A[i] is 1/i.
Thus, the average number of comparisons to insert A[i] is:
(i − 1)/i + Σ_{j=2}^{i} (i − j + 1)/i
Shift the summation index: 2 ≤ j ≤ i ⟹ 1 ≤ j − 1 ≤ i − 1; let j′ = j − 1,
then i − j + 1 = i − (j′ + 1) + 1 = i − j′.
Also note Σ_{j′=1}^{i−1} (i − j′)/i = Σ_{j′=1}^{i−1} j′/i
91 / 95
Average Case Analysis of Insertion Sort
= (i − 1)/i + Σ_{j′=1}^{i−1} j′/i
= (i − 1)/i + i(i − 1)/(2i)
= i/2 + 1/2 − 1/i
Now sum over all 2 ≤ i ≤ n:
Σ_{i=2}^{n} (i/2 + 1/2 − 1/i)
= Σ_{i=2}^{n} i/2 + Σ_{i=2}^{n} 1/2 − Σ_{i=2}^{n} 1/i
≈ (1/2)(n(n + 1)/2 − 1) + (n − 1)/2 − (log n − 1)
= Θ(n²)
92 / 95
Input Size
The interpretation of input size depends on the problem for which
the algorithm is designed.
In sorting and searching problems, we use the number of elements
in an array.
In graph algorithms, the input size usually refers to the number of
vertices and edges.
In matrix operations, the input size is commonly taken to be the
dimensions of the input matrices.
If the input is a number, the input size is commonly taken to be
the number of bits.
93 / 95
Input Size
What is the time complexity of the following algorithm?
Algorithm 1
Input: An array A[1..n] and an integer n
Output: Σ_{j=1}^{n} A[j]
1: sum ← 0
2: for j ← 1 to n do
3:   sum ← sum + A[j]
4: end for
The input size is the number of elements in the array, which is n.
The loop iterates n times, so the time complexity is O(n).
94 / 95
Input Size
What is the time complexity of the following algorithm?
Algorithm 2
Input: An integer n
Output: Σ_{j=1}^{n} j
1: sum ← 0
2: for j ← 1 to n do
3:   sum ← sum + j
4: end for
The input size is the number of bits of n, k = ⌊log n⌋ + 1.
The loop iterates n times, and n = Θ(2^k), so the time complexity is O(2^k).
95 / 95