DAA Module I
Dr. A. K. Panda

INTRODUCTION
An algorithm is a set of steps to complete a task. It is a method that can be expressed within a finite amount of time and space, and it is a way to represent the solution of a particular problem in a simple and efficient manner.
[Figure: Input → Algorithm (the set of rules to obtain the expected output from the given input) → Output]
Characteristics of an algorithm:
Every algorithm must satisfy the following characteristics.
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each step is clear and unambiguous.
4. Finiteness: The algorithm must terminate after a finite time, or a finite number of steps.
5. Correctness: A correct set of output values must be produced from each set of inputs.
6. Effectiveness: Each step must be carried out in finite time.
7. Efficiency: An algorithm is efficient if it uses as little running time and memory space as possible.
8. Feasibility: It must be feasible to execute each individual instruction.
9. Independence: It must be independent of any language, i.e. an algorithm should focus only on what the inputs and outputs are and how the output is derived.
Pseudo code for Expressing Algorithms:
An algorithm is a procedure. It has two parts: the first part is the head and the second part is the body.
Head Section:
Algorithm name (p1, p2, …, pn)
//Problem Description:
//Input:
//Output:
Body Section:
In the body of an algorithm, various programming constructs like if, for and while, and statements like assignments, are used.
• Compound statements may be enclosed within { and } brackets. if, for and while blocks can be opened and closed by { and } respectively. Proper indentation is a must for blocks.
• Comments are written using // at the beginning.
• An identifier should begin with a letter, not a digit, and may contain alphanumeric characters after the first letter. There is no need to mention data types.
• "←" or ":=" is used as the assignment operator, e.g. a ← 5.
• Boolean values (TRUE, FALSE), logical operators (AND, OR, NOT) and relational operators (<, <=, >, >=, =, ≠, <>) are also used.
• Input is done using read, and output using write or print.
• Arrays ([]), if-then-else conditions, branches and loops can also be used in an algorithm.
Basic Steps of Problem Solving:
1. Understand the problem/requirement.
2. Prepare the complete flow chart of the problem.
3. Design and analyze an algorithm.
4. Test the algorithm.
5. Write a computer program for the algorithm.
Design of Algorithm
Algorithm design refers to a method or process of solving a problem. There are two ways
to design an algorithm. They are:
1. Top-down approach (Iterative algorithm): In the iterative algorithm, the function
repeatedly runs until the condition is met or it fails.
2. Bottom-up approach (Recursive algorithm): In the recursive approach, the function
calls itself until the condition is met.
For example, algorithms to find the factorial of a given number n are given below:
1. Iterative:
Fact(n)
{
    fact = 1;
    for i = 1 to n do
        fact = fact * i;
    return fact;
}
Here the factorial is calculated as 1 × 2 × 3 × . . . × n.
2. Recursive:
Fact(n)
{
    if n = 0 then return 1;
    else return n * Fact(n - 1);
}
Here the factorial is calculated as n × (n − 1) × . . . × 1.
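Both designs can be sketched as runnable code (Python is used here for illustration; the function names are ours):

```python
# Iterative (loop-based) and recursive factorial, as in the pseudocode above.

def fact_iterative(n):
    fact = 1                    # accumulator must start at 1
    for i in range(1, n + 1):   # 1 * 2 * 3 * ... * n
        fact = fact * i
    return fact

def fact_recursive(n):
    if n == 0:                  # base case: 0! = 1
        return 1
    return n * fact_recursive(n - 1)   # n * (n - 1)!

print(fact_iterative(5), fact_recursive(5))   # 120 120
```

The recursive version mirrors the recurrence n! = n × (n − 1)!, while the iterative one accumulates the product in a loop.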
Analysis of Algorithm:
The analysis is a process of estimating the efficiency of an algorithm. An algorithm is efficient if it uses minimum memory and has less running time.
There are two basic parameters used to find the efficiency of an algorithm:
• The amount of compute time consumed on any CPU (Time Complexity)
• The amount of memory used (Space Complexity)
Space Complexity & Time Complexity:
Space Complexity:
The space complexity of an algorithm is the quantity of storage required for running the algorithm, including the space of the input values.
The space required by an algorithm has the following two components:
• Auxiliary Space (Fixed or Static Part): temporary or extra space. It includes various types of spaces, such as instruction space (i.e., space for code), space for simple variables and fixed-size component variables, space for constants, etc.
• Space used by input values (Variable or Dynamic Part): space required by component variables whose size depends on the particular problem instance being solved at run-time.
Space Complexity = Auxiliary space + Space used by input values
So to find the space complexity, it is enough to calculate the space occupied by the variables used in an algorithm/program, since the first part is static.
Time Complexity: The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the input. Note that the time to run is a function of the length of the input, not the actual execution time of the machine on which the algorithm runs.
Time and space complexity depend on many things like machine hardware, operating system, etc. However, we do not consider any of these factors while analyzing an algorithm. We consider only the number of steps the algorithm executes.
Step Count Method:
This method is used to calculate time and space complexity.
Procedure:
• There is no count for "{" and "}".
• Each basic statement like assignment and return has a count of 1.
• If a basic statement is iterated, multiply its count by the number of times the loop runs.
• A loop statement iterated n times has a count of (n + 1): the loop condition is tested n times for the true case plus once for the loop exit (the false condition), hence the additional 1.
Example:
1. Sum of elements in an array

Statement                        Step count    Space
Algorithm Sum(a, n)              0
{                                0
    sum = 0;                     1             1 word for sum
    for i = 1 to n do            n + 1         1 word each for i and n
        sum = sum + a[i];        n             n words for the array a[]
    return sum;                  1
}                                0
Total                            2n + 3        (n + 3) words
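The step counts can be checked empirically by instrumenting the algorithm — a minimal sketch (Python for illustration; the counting scheme follows the step-count rules above):

```python
# Count the steps of Algorithm Sum(a, n):
# sum = 0 -> 1, loop header -> n + 1, loop body -> n, return -> 1.

def sum_with_steps(a):
    steps = 0
    total = 0; steps += 1             # sum = 0
    for x in a:
        steps += 1                    # loop header, true n times ...
        total = total + x; steps += 1 # body executes n times
    steps += 1                        # ... plus one failing test
    steps += 1                        # return sum
    return total, steps

print(sum_with_steps([3, 1, 4, 1, 5]))   # (14, 13): 2*5 + 3 = 13 steps
```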
2. Adding two matrices of order m × n

Statement                              Step count
Algorithm Add(a, b, c, m, n)           0
{                                      0
    for i = 1 to m do                  m + 1
        for j = 1 to n do              m(n + 1)
            c[i,j] = a[i,j] + b[i,j]   mn
}                                      0
Total                                  2mn + 2m + 1
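The nested-loop counts can be verified the same way; counting each loop header (iterations plus one exit test) and each body execution gives (m + 1) + m(n + 1) + mn = 2mn + 2m + 1. A sketch (`add_with_steps` is our name):

```python
# Step count for Algorithm Add(a, b, c, m, n).

def add_with_steps(a, b, m, n):
    c = [[0] * n for _ in range(m)]
    steps = 0
    for i in range(m):
        steps += 1                  # outer header, true m times
        for j in range(n):
            steps += 1              # inner header, true n times per i
            c[i][j] = a[i][j] + b[i][j]
            steps += 1              # assignment, m*n times in total
        steps += 1                  # inner header, false once per i
    steps += 1                      # outer header, false once
    return c, steps

c, steps = add_with_steps([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2, 2)
print(c, steps)   # [[6, 8], [10, 12]] 13   (2*2*2 + 2*2 + 1 = 13)
```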
3. To find the nth number in the Fibonacci series

Statement                      Step count
Algorithm Fibonacci(n)         0
{                              0
    if n ≤ 1 then              1
        write(n)               0
    else                       0
        f2 = 0;                1
        f1 = 1;                1
        for i = 2 to n do      n
        {                      0
            f = f1 + f2;       n − 1
            f2 = f1;           n − 1
            f1 = f;            n − 1
        }                      0
        write(f)               1
}                              0
Total                          4n + 1
Time Complexity of Algorithms:
The complexity of an algorithm A is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data. Mostly, the storage space required by an algorithm is simply a multiple of the data size n. Here, complexity refers to the running time of the algorithm.
The complexity function f(n) for certain cases:
1. Best Case: the minimum possible value of f(n).
2. Average Case: the expected value of f(n).
3. Worst Case: the maximum value of f(n) over all possible inputs.
Suppose that A is an algorithm and n is the size of the input data. Then the complexity f(n) of A increases as n increases. It is usually the rate of increase of f(n) that we want to examine. This is usually done by comparing f(n) with some standard functions. The most common computing times are:
O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n), O(n!) and O(n^n).
The execution time for six of the typical functions is given below:

n      log2 n   n log2 n   n^2     n^3      2^n
1      0        0          1       1        2
2      1        2          4       8        4
4      2        8          16      64       16
8      3        24         64      512      256
16     4        64         256     4096     65,536
32     5        160        1024    32,768   4,294,967,296
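The table can be regenerated programmatically (a small Python sketch):

```python
# Regenerate the growth-rate table above for n = 1, 2, 4, 8, 16, 32.
from math import log2

ns = [1, 2, 4, 8, 16, 32]
rows = [(n, int(log2(n)), int(n * log2(n)), n**2, n**3, 2**n) for n in ns]

print(f"{'n':>3} {'log2 n':>7} {'n log2 n':>9} {'n^2':>5} {'n^3':>7} {'2^n':>14}")
for r in rows:
    print(f"{r[0]:>3} {r[1]:>7} {r[2]:>9} {r[3]:>5} {r[4]:>7} {r[5]:>14}")
```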
The time complexity of an algorithm is commonly expressed using asymptotic notations.
Asymptotic Notations:
Asymptotic notations are the way to express time and space complexity; they represent the running time of an algorithm. If we have more than one algorithm with alternative steps, then the algorithm with lesser complexity should be selected. To represent these complexities, asymptotic notations are used.
There are five asymptotic notations:
• Big Oh − O(n)
• Big Theta − Θ(n)
• Big Omega − Ω(n)
• Little oh − o(n)
• Little omega − ω(n)
Big-oh (O) notation:
It is the formal method of expressing the upper bound of an algorithm's running time: a measure of the longest amount of time the algorithm could possibly take to complete.
Let f(n) and g(n) be asymptotically non-negative functions. We say that f(n) is in O(g(n)) if there exist a positive integer n0 and a real constant c > 0 such that for all integers n ≥ n0,
0 ≤ f(n) ≤ c g(n)
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0}.
Example 1:
f(n) = 13
f(n) ≤ 13 × 1
Here, c = 13 and n0 = 1
Hence, f(n) = O(1)
Example 2:
f(n) = 3n + 5
3n + 5 ≤ 3n + 5n = 8n
Here, c = 8 and n0 = 1
Hence, f(n) = O(n)
Example 3:
f(n) = 3n^2 + 5
3n^2 + 5 ≤ 3n^2 + 5n^2 = 8n^2
Here, c = 8 and n0 = 1
Hence, f(n) = O(n^2)
Example 4:
f(n) = 7n^2 + 5n
7n^2 + 5n ≤ 7n^2 + 5n^2 = 12n^2
Here, c = 12 and n0 = 1
Hence, f(n) = O(n^2)
Example 5:
f(n) = 2^n + 6n^2 + 3n
2^n + 6n^2 + 3n ≤ 2^n + 6n^2 + 3n^2 ≤ 2^n + 6·2^n + 3·2^n = 10·2^n
Here, c = 10 and n0 = 1
Hence, f(n) = O(2^n)
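The witnesses (c, n0) found above can be spot-checked numerically — a sketch (it samples a finite range, so it is a sanity check, not a proof; `holds_O` is our name):

```python
# Check 0 <= f(n) <= c*g(n) on a sampled range n0 <= n <= upto.

def holds_O(f, g, c, n0, upto=60):
    return all(0 <= f(n) <= c * g(n) for n in range(max(n0, 1), upto + 1))

assert holds_O(lambda n: 3*n + 5,             lambda n: n,    c=8,  n0=1)
assert holds_O(lambda n: 7*n**2 + 5*n,        lambda n: n**2, c=12, n0=1)
assert holds_O(lambda n: 2**n + 6*n**2 + 3*n, lambda n: 2**n, c=10, n0=1)
print("all Big-O witnesses hold on the sampled range")
```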
Big Theta (Θ) Notation:
It is the method of expressing the tight bound of an algorithm's running time.
Let f(n) and g(n) be asymptotically non-negative functions. We say that f(n) is Θ(g(n)) if there exist a positive integer n0 and positive constants c1 and c2 such that for all integers n ≥ n0,
0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n)
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}.
Example 1:
f(n) = 123
122 × 1 ≤ f(n) ≤ 123 × 1
Here, c1 = 122, c2 = 123 and n0 = 1
Hence, f(n) = Θ(1)
Example 2:
f(n) = 3n + 5
3n < 3n + 5 ≤ 4n for n ≥ 5
Here, c1 = 3, c2 = 4 and n0 = 5
Hence, f(n) = Θ(n)
Example 3:
f(n) = 3n^2 + 5
3n^2 < 3n^2 + 5 ≤ 4n^2 for n ≥ 5
Here, c1 = 3, c2 = 4 and n0 = 5
Hence, f(n) = Θ(n^2)
Example 4:
f(n) = 7n^2 + 5n
7n^2 < 7n^2 + 5n for all n, so c1 = 7
Also, 7n^2 + 5n ≤ 8n^2 for n ≥ n0 = 5, so c2 = 8
Hence, f(n) = Θ(n^2)
Example 5:
f(n) = 2^n + 6n^2 + 3n
2^n ≤ 2^n + 6n^2 + 3n ≤ 2^n + 6n^2 + 3n^2 ≤ 2^n + 6·2^n + 3·2^n = 10·2^n
Here, c1 = 1, c2 = 10 and n0 = 1
Hence, f(n) = Θ(2^n)
Big Omega (Ω) Notation:
Big Omega (Ω) is the method used for expressing the lower bound of an algorithm's running time: a measure of the smallest amount of time the algorithm could possibly take to complete.
Let f(n) and g(n) be asymptotically non-negative functions. We say that f(n) is Ω(g(n)) if there exist a positive integer n0 and a positive constant c such that for all integers n ≥ n0,
0 ≤ c g(n) ≤ f(n)
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0}.
Example 1:
f(n) = 13
f(n) ≥ 12 × 1
where c = 12 and n0 = 1
Hence, f(n) = Ω(1)
Example 2:
f(n) = 3n + 5
3n + 5 > 3n
where c = 3 and n0 = 1
Hence, f(n) = Ω(n)
Example 3:
f(n) = 3n^2 + 5
3n^2 + 5 > 3n^2
where c = 3 and n0 = 1
Hence, f(n) = Ω(n^2)
Example 4:
f(n) = 7n^2 + 5n
7n^2 + 5n > 7n^2
where c = 7 and n0 = 1
Hence, f(n) = Ω(n^2)
Example 5:
f(n) = 2^n + 6n^2 + 3n
2^n + 6n^2 + 3n > 2^n
where c = 1 and n0 = 1
Hence, f(n) = Ω(2^n)
Little-oh (o) Notation:
The asymptotic upper bound provided by Big-oh (O) notation may or may not be asymptotically tight. For example, 2n^2 = O(n^2) is asymptotically tight, but 2n = O(n^2) is not. We use little-oh (o) notation to denote an upper bound that is not asymptotically tight.
o(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c g(n) for all n ≥ n0}.
For example, 2n = o(n^2) but 2n^2 ≠ o(n^2).
The main difference between Big-oh (O) and little-oh (o) notation is that in f(n) = O(g(n)) the bound 0 ≤ f(n) ≤ c g(n) holds for some constant c > 0, while in f(n) = o(g(n)) the bound 0 ≤ f(n) < c g(n) holds for all constants c > 0.
Little omega (ω) Notation:
The asymptotic lower bound provided by Big Omega (Ω) notation may or may not be asymptotically tight. We use little omega (ω) notation to denote a lower bound that is not asymptotically tight.
ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c g(n) < f(n) for all n ≥ n0}.
For example, n^2/2 = ω(n), but n^2/2 ≠ ω(n^2).
Example:
an^2 + bn + c = ω(1)
an^2 + bn + c = ω(n)
an^2 + bn + c = Θ(n^2)
an^2 + bn + c = Ω(1)
an^2 + bn + c = Ω(n)
an^2 + bn + c = O(n^2)
an^2 + bn + c = O(n^6)
an^2 + bn + c = O(n^50)
an^2 + bn + c = o(n^19)
Also,
an^2 + bn + c ≠ o(n^2)
an^2 + bn + c ≠ ω(n^19)
an^2 + bn + c ≠ O(n)
Some more incorrect bounds are as follows:
7n + 5 ≠ O(1)
2n + 3 ≠ O(1)
3n^2 + 16n + 2 ≠ O(n)
5n^3 + n^2 + 3n + 2 ≠ O(n^2)
7n + 5 ≠ Ω(n^2)
2n + 3 ≠ Ω(n^3)
10n^2 + 7 ≠ Ω(n^4)
7n + 5 ≠ Θ(n^2)
2n^2 + 3 ≠ Θ(n^3)
Some more loose bounds are as follows:
2n + 3 = O(n^2)
4n^2 + 5n + 6 = O(n^4)
5n^2 + 3 = Ω(1)
2n^3 + 3n^2 + 2 = Ω(n^2)
Some correct bounds are as follows:
2n + 8 = O(n)
2n + 8 = O(n^2)
2n + 8 = Θ(n)
2n + 8 = Ω(n)
2n + 8 = o(n^2)
2n + 8 ≠ o(n)
2n + 8 ≠ ω(n)
4n^2 + 3n + 9 = O(n^2)
4n^2 + 3n + 9 = Ω(n^2)
4n^2 + 3n + 9 = Θ(n^2)
4n^2 + 3n + 9 = o(n^3)
4n^2 + 3n + 9 ≠ o(n^2)
4n^2 + 3n + 9 ≠ ω(n^2)
Asymptotic comparison of functions:
• f(n) = O(g(n)) ≈ f ≤ g
• f(n) = Θ(g(n)) ≈ f = g
• f(n) = Ω(g(n)) ≈ f ≥ g
• f(n) = o(g(n)) ≈ f < g
• f(n) = ω(g(n)) ≈ f > g
N.B.:
• f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)).
• f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).
Properties of Asymptotic Notations
Reflexivity
• f(n) = O(f(n))
• f(n) = Θ(f(n))
• f(n) = Ω(f(n))
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transitivity
• f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
• f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
• f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
• f(n) = o(g(n)) and g(n) = o(h(n)) ⇒ f(n) = o(h(n))
• f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))
Transpose Symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
f(n) = o(g(n)) if and only if g(n) = ω(f(n))
Analysis of Recursive Algorithms through Recurrence Relations
A recurrence relation for a sequence a0, a1, … is an equation that relates an to some of the terms a0, a1, …, an−1.
Initial conditions for the sequence a0, a1, … are explicitly given values for a finite number of the terms of the sequence.
Example:
Find the recurrence relation and the initial conditions of the Fibonacci sequence: 1, 1, 2, 3, 5, 8, …
Solution:
fn = fn−1 + fn−2 for all n ≥ 3, with the initial conditions f1 = 1 and f2 = 1.
Example:
A person invests Rs. 1000 at 8% interest compounded annually. If an represents the amount at the end of n years, find a recurrence relation and initial conditions that define the sequence.
At the end of the 0th year, the amount (i.e. the beginning amount) is a0 = 1000.
At the end of the 1st year, the amount is a1 = 1000 + 0.08 × 1000 = a0 + 0.08 × a0 = (1.08)a0.
At the end of the 2nd year, the amount is a2 = a1 + 0.08 × a1 = (1.08)a1.
Hence the recurrence relation is an = (1.08)an−1 for all n ≥ 1, with the initial condition a0 = 1000.
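The interest recurrence can be evaluated directly (a Python sketch; `amount` is our name):

```python
# a_n = (1.08) * a_{n-1}, a_0 = 1000: amount after n years at 8% per annum.

def amount(n):
    a = 1000.0            # initial condition a_0
    for _ in range(n):
        a = 1.08 * a      # recurrence a_n = (1.08) a_{n-1}
    return a

print(round(amount(2), 2))   # 1166.4, matching a_2 = (1.08)^2 * 1000
```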
Recursion:
A function or process that repeats its functionality by calling itself again and again is called a recursive function. An algorithm is called a recursive algorithm if it calls itself on smaller inputs.
The time complexity of an iteration can be found from the number of times the loop body is repeated.
Examples of Recursive Algorithms:
• Finding the factorial of a number
• Generating the Fibonacci series
• Finding the maximum element of an array
• Computing the sum of elements in an array
Factorial of a number:
n! = n × (n − 1) × (n − 2) × ⋯ × 2 × 1
This formula may be expressed recursively as:
n! = n × (n − 1)!   if n > 0
n! = 1              if n = 0
Recursive program to compute n!:
Fact(n)
{
    if (n == 0) return 1;
    else
    {
        temp = Fact(n − 1);
        return (n * temp);
    }
}
Fibonacci Series:
Fibo(n)
{
    if (n ≤ 2) return 1;
    else
    {
        return (Fibo(n − 1) + Fibo(n − 2));
    }
}
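A runnable version of the recursive Fibonacci pseudocode (Python for illustration):

```python
# Recursive Fibonacci; the series is 1, 1, 2, 3, 5, 8, ...

def fibo(n):
    if n <= 2:            # base cases: Fibo(1) = Fibo(2) = 1
        return 1
    return fibo(n - 1) + fibo(n - 2)

print([fibo(n) for n in range(1, 9)])   # [1, 1, 2, 3, 5, 8, 13, 21]
```

Note that each call spawns two further calls, so this naive recursion runs in exponential time; the iterative Fibonacci algorithm given earlier runs in θ(n).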
Divide and Conquer Method:
Divide and conquer is a design strategy well known for breaking down efficiency barriers.
The strategy is as follows: divide the problem instance into two or more smaller instances of the same problem, solve the smaller instances recursively, and assemble the solutions to form a solution of the original instance. The recursion stops when an instance is reached that is too small to divide.
A divide and conquer algorithm consists of two parts:
1. Divide: divide the problem into a number of subproblems. The subproblems are solved recursively.
2. Conquer: the solution to the original problem is then formed from the solutions to the subproblems (patching together the answers).
Different techniques to solve recurrence relations:
1. Substitution method
2. Recursion tree method
3. Master's theorem
The Substitution Method:
There are two different substitution strategies:
Strategy 1: Repeated substitution.
Strategy 2: Guess the solution and prove it correct by mathematical induction.
Strategy 1:
Consider the recurrence equation:
T(n) = 1,               n = 1
T(n) = 2T(n − 1) + 1,   n ≥ 2
Solution:
T(n) = 1 + 2T(n − 1)
     = 1 + 2(1 + 2T(n − 2))
     = 1 + 2 + 4T(n − 2)
     = 1 + 2 + 4(1 + 2T(n − 3))
     = 1 + 2 + 4 + 8T(n − 3)
     = 1 + 2 + 2^2 + 2^3 T(n − 3)
     ……………………………………
     = 1 + 2 + 2^2 + 2^3 + ⋯ + 2^(n−1) T(1)
     = 1 + 2 + 2^2 + 2^3 + ⋯ + 2^(n−1)        (since T(1) = 1)
     = (2^n − 1)/(2 − 1)                       (geometric sum formula)
     = 2^n − 1
Note: The repeated substitution method may not always be successful.
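The closed form 2^n − 1 can be checked by evaluating the recurrence directly (a sketch):

```python
# Evaluate T(n) = 2T(n-1) + 1 with T(1) = 1 and compare against 2^n - 1.

def T(n):
    if n == 1:
        return 1
    return 2 * T(n - 1) + 1

for n in range(1, 11):
    assert T(n) == 2**n - 1   # closed form from repeated substitution
print(T(10))   # 1023
```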
Strategy 2:
Suppose we guess the solution to be exponential, with some constants to be determined.
Guess: T(n) = A·2^n + B
We try to prove the solution form is correct by induction. If the induction is successful, we find the values of the constants A and B in the process.
Induction Proof:
Basis of Induction:
LHS: T(1) = 1 (from the initial condition)
RHS: A·2^1 + B = 2A + B
We need 2A + B = 1 …………(1)
Induction Hypothesis:
Assume that T(k) = A·2^k + B for k ≥ 1.
Induction Step:
To prove the solution is also correct for n = k + 1, i.e. to show T(k + 1) = A·2^(k+1) + B:
LHS: T(k + 1) = 2T(k) + 1           (from the recurrence equation)
             = 2[A·2^k + B] + 1     (by the inductive hypothesis)
             = A·2^(k+1) + (2B + 1)
             = A·2^(k+1) + B = RHS
We need 2B + 1 = B …………(2)
Solving (1) and (2): 2A + B = 1 and 2B + 1 = B
We get B = −1 and A = 1.
Hence, T(n) = A·2^n + B = 2^n − 1.
Algorithm Insertion Sort:
ISort(A, n)
{
    if (n == 1) return;
    ISort(A, n − 1);
    j = n − 1;
    while (j > 0 and A[j] < A[j − 1])
    {
        SWAP(A[j], A[j − 1]);
        j = j − 1;
    }
}
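The recursive insertion sort can be written out in runnable form, with a counter for the while-loop moves used in the worst-case analysis (Python; `isort` is our name):

```python
# Recursive insertion sort: sort the first n-1 elements, then sift the
# n-th element left into place. Returns the number of sift steps.

def isort(a, n=None):
    if n is None:
        n = len(a)
    if n <= 1:
        return 0
    moves = isort(a, n - 1)           # sort prefix a[0..n-2]
    j = n - 1
    while j > 0 and a[j] < a[j - 1]:  # sift a[n-1] toward the front
        a[j], a[j - 1] = a[j - 1], a[j]
        moves += 1
        j -= 1
    return moves

a = [5, 4, 3, 2, 1]                   # reversed input: worst case
print(isort(a), a)   # 10 [1, 2, 3, 4, 5]; 10 = n(n-1)/2 for n = 5
```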
Time Complexity Analysis
Let T(n) be the worst-case number of key comparisons to sort n elements. The while loop in the worst case makes (n − 1) key comparisons.
T(n) = 0,                  n = 1
T(n) = T(n − 1) + n − 1,   n ≥ 2
Solution by repeated substitution:
T(n) = (n − 1) + T(n − 1)
     = (n − 1) + (n − 2) + T(n − 2)
     = (n − 1) + (n − 2) + (n − 3) + T(n − 3)
     ……………………………
     = (n − 1) + (n − 2) + (n − 3) + ⋯ + 1 + T(1)
     = (n − 1) + (n − 2) + (n − 3) + ⋯ + 1 + 0    [since T(1) = 0]
     = (n − 1)n/2                                  (arithmetic sum formula)
     = (n^2 − n)/2
Solution by guessing the solution and proving correctness by induction:
Suppose we guess the solution to be O(n^2) and express it in terms of constants A, B, C to be determined: T(n) = An^2 + Bn + C
Basis of Induction:
LHS: T(1) = 0 (from the initial condition)
RHS: A·1^2 + B·1 + C = A + B + C
So we need A + B + C = 0 …………(1)
Induction Hypothesis:
Assume that T(k − 1) = A(k − 1)^2 + B(k − 1) + C for k ≥ 1.
Induction Step:
To prove the solution is also correct for n = k, i.e. to show T(k) = Ak^2 + Bk + C:
LHS: T(k) = T(k − 1) + (k − 1)                (from the recurrence equation)
          = A(k − 1)^2 + B(k − 1) + C + (k − 1)
          = Ak^2 − 2Ak + A + Bk − B + C + k − 1
          = Ak^2 + (−2A + B + 1)k + (A − B + C − 1)
          = Ak^2 + Bk + C
So we need −2A + B + 1 = B …………(2)
and A − B + C − 1 = C …………(3)
Solving (1), (2) and (3): A + B + C = 0, −2A + B + 1 = B, A − B + C − 1 = C
gives A = 1/2, B = −1/2, C = 0.
Therefore, T(n) = n^2/2 − n/2
Example:
Solve the following recurrence relation by using the substitution method.
T(n) = 0,             n = 1
T(n) = 2T(n/2) + 1,   n ≥ 2
Solution by repeated substitution:
T(n) = 2T(n/2) + 1
     = 1 + 2[1 + 2T(n/4)]
     = 1 + 2 + 4T(n/4)
     = 1 + 2 + 4 + 8T(n/8)
     ……………………………………
     = 1 + 2 + 4 + ⋯ + 2^(k−1) + 2^k T(n/2^k)   (where n/2^k = 1 ⇒ n = 2^k)
     = 1 + 2 + 4 + ⋯ + 2^(k−1) + 2^k T(1)
     = 1 + 2 + 4 + ⋯ + 2^(k−1) + 0
     = (2^k − 1)/(2 − 1)                         (geometric sum formula)
     = 2^k − 1 = n − 1
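Checking the closed form n − 1 for powers of two (a sketch):

```python
# T(n) = 2T(n/2) + 1 with T(1) = 0, evaluated for n = 2^k.

def T(n):
    if n == 1:
        return 0
    return 2 * T(n // 2) + 1

for k in range(0, 11):
    n = 2 ** k
    assert T(n) == n - 1   # closed form n - 1
print(T(1024))   # 1023
```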
Example:
Show that T(n) = O(n lg n) is the solution of the following recurrence relation by the substitution method.
T(n) = 1,             n = 1
T(n) = 2T(n/2) + n,   n > 1
Proof:
Step 1: Guess T(n) = O(n lg n).
Step 2: We have to prove that T(n) ≤ cn lg n for an appropriate choice of the constant c > 0.
Basis of Induction:
Our guess does not hold for n = 1, because 1 ≰ c · 1 lg 1 = 0. So we start the induction at n = 2.
LHS: T(2) = 2T(1) + 2 = 2 × 1 + 2 = 4 (from the initial condition)
RHS: c × 2 lg 2 = 2c
Hence, T(2) ≤ c × 2 lg 2 for c ≥ 2.
Induction Hypothesis:
Assume that T(n) ≤ cn lg n for 2 ≤ n < k.
Induction Step:
To prove the solution is also correct for n = k:
LHS: T(k) = 2T(k/2) + k             (from the recurrence equation)
          ≤ 2c(k/2) lg(k/2) + k
          = ck lg(k/2) + k
          = ck lg k − ck lg 2 + k
          = ck lg k − ck + k
          = ck lg k − k(c − 1)
          ≤ ck lg k for all c ≥ 1
Therefore, with c ≥ 2, T(n) = O(n lg n).
Recursion Tree Method:
It is a pictorial representation of a given recurrence relation, which shows how the recurrence is divided until the boundary conditions are reached. It is useful when a divide and conquer algorithm is used.
Basic steps to solve a recurrence relation using the recursion tree method:
1. Draw a recursion tree for the given recurrence relation. In general, we consider the second term in the recurrence as the root of the tree.
2. Determine:
   • the cost of each level
   • the total number of levels in the recursion tree
   • the number of nodes in the last level
   • the cost of the last level
3. Add the costs of all the levels of the recursion tree and simplify the expression so obtained in terms of asymptotic notation.
Example:
Solve the recurrence T(n) = 2T(n/2) + n using the recursion tree method.
Solution:
Step 1: First make a recursion tree of the given recurrence, where n is the root. The given recurrence relation shows:
• A problem of size n will get divided into 2 subproblems of size n/2.
• Each subproblem of size n/2 will then get divided into 2 subproblems of size n/4, and so on.
• At the bottom-most level, the size of the subproblems reduces to 1.
Step 2: Determine the cost of each level.
• Cost of level 0 = n
• Cost of level 1 = n/2 + n/2 = n
• Cost of level 2 = n/4 + n/4 + n/4 + n/4 = n, and so on.
Step 3: Determine the total number of levels in the recursion tree. Let the total number of levels be h.
• Size of a subproblem at level 0 = n/2^0
• Size of a subproblem at level 1 = n/2^1
• Size of a subproblem at level 2 = n/2^2
Continuing in a similar manner, the size of a subproblem at level i = n/2^i.
Suppose at level h (the last level) the size of the subproblems becomes 1. Then
n/2^h = 1 ⇒ n = 2^h
Taking log on both sides, h log 2 = log n ⇒ h = log2 n
∴ Total number of levels in the recursion tree = log2 n + 1
Step 4: Determine the number of nodes in the last level.
• Level 0 has 2^0 nodes, i.e. 1 node
• Level 1 has 2^1 nodes, i.e. 2 nodes
• Level 2 has 2^2 nodes, i.e. 4 nodes
…………………………………………
• Level h has 2^h = 2^(log2 n) = n^(log2 2) = n^1 = n nodes
Step 5: Determine the cost of the last level = θ(n).
Step 6: Add the costs of all the levels of the recursion tree and simplify the expression so obtained in terms of asymptotic notation:
T(n) = n + n + ⋯ + n + θ(n)
     = n × log2 n + θ(n)
     = θ(n log2 n)
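For powers of two the tree sum can be checked exactly: with T(1) = 1 there are log2 n levels of cost n, plus the leaf level, giving T(n) = n log2 n + n (a Python sketch):

```python
# T(n) = 2T(n/2) + n with T(1) = 1, evaluated for n = 2^k.
from math import log2

def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in range(0, 11):
    n = 2 ** k
    assert T(n) == n * int(log2(n)) + n   # tree sum: n*log2(n) levels + leaves
print(T(1024))   # 1024*10 + 1024 = 11264
```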
Example:
T(n) = T(n/4) + T(n/2) + n^2
Solution:
T(n) = n^2 + (5/16)n^2 + (5/16)^2 n^2 + (5/16)^3 n^2 + ⋯ = n^2 (1 + 5/16 + (5/16)^2 + ⋯)
By the geometric series,
1 + x + x^2 + ⋯ + x^n = (1 − x^(n+1))/(1 − x) for x ≠ 1
1 + x + x^2 + ⋯ = 1/(1 − x) for |x| < 1
So, 1 + 5/16 + (5/16)^2 + ⋯ = 1/(1 − 5/16) = 1/(11/16) = 16/11
Hence, T(n) = (16/11) n^2 = θ(n^2)
The Master Theorem Method:
The master theorem is the most useful method for solving recurrences of the form
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1, f is asymptotically positive, and n/b stands for either ⌊n/b⌋ or ⌈n/b⌉.
The solution can be obtained by using any one of the following three rules:
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = θ(n^(log_b a)).
2. If f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1, then T(n) = θ(f(n)).
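For recurrences whose driving function is a plain polynomial f(n) = n^d, the three rules reduce to comparing d with log_b a; a toy classifier along those lines (our construction, not a standard library API; note that for polynomial f the regularity condition in rule 3 holds automatically, since a·(n/b)^d = (a/b^d)·n^d with a/b^d < 1 whenever d > log_b a):

```python
# Classify T(n) = a*T(n/b) + n^d by the master theorem (polynomial f only).
from math import log

def master(a, b, d):
    p = log(a, b)                    # log_b a
    if abs(d - p) < 1e-9:            # f(n) = theta(n^{log_b a}): rule 2
        return f"theta(n^{d:g} log n)"
    if d < p:                        # f polynomially smaller: rule 1
        return f"theta(n^{p:g})"
    return f"theta(n^{d:g})"         # f polynomially larger: rule 3

print(master(9, 3, 1))   # theta(n^2)
print(master(4, 2, 2))   # theta(n^2 log n)
print(master(4, 2, 3))   # theta(n^3)
```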
Example:
Find the solution of the following problem by using the master theorem:
T(n) = 9T(n/3) + n
Solution:
Here, a = 9, b = 3, f(n) = n
log_b a = log3 9 = 2, so n^(log_b a) = n^2
By rule 1, f(n) = n = O(n^(2−1)) = O(n^(log3 9 − ε)) where ε = 1
Hence, T(n) = θ(n^(log_b a)) = θ(n^(log3 9)) = θ(n^2)
Example:
T(n) = 4T(n/2) + n
Solution:
Here, a = 4, b = 2, f(n) = n
log_b a = log2 4 = 2, so n^(log_b a) = n^2
By rule 1, f(n) = n = O(n^(2−1)) = O(n^(log2 4 − ε)) where ε = 1
Hence, T(n) = θ(n^(log_b a)) = θ(n^2)
Example:
T(n) = T(2n/3) + 1
Solution:
Here, a = 1, b = 3/2, f(n) = 1
n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1
By rule 2, f(n) = 1 = θ(1)
Hence, T(n) = θ(1 · lg n) = θ(lg n)
Example:
T(n) = 4T(n/2) + n^2
Solution:
Here, a = 4, b = 2, f(n) = n^2
n^(log_b a) = n^(log2 4) = n^2
By rule 2, f(n) = n^2 = θ(n^2)
Hence, T(n) = θ(n^2 lg n)
Example:
T(n) = 4T(n/2) + n^3
Solution:
Here, a = 4, b = 2, f(n) = n^3
n^(log_b a) = n^(log2 4) = n^2
f(n) = n^3 = Ω(n^(2+1)) = Ω(n^(log_b a + ε)) where ε = 1
We also have to check the second condition:
a·f(n/b) = 4f(n/2) = 4n^3/8 = n^3/2 ≤ c·n^3 where c = 1/2.
Hence, by rule 3, T(n) = θ(f(n)) = θ(n^3)
Example:
T(n) = 2T(n/2) + n^2
Solution:
Here, a = 2, b = 2, f(n) = n^2
n^(log_b a) = n^(log2 2) = n^1
f(n) = n^2 = Ω(n^(1+1)) = Ω(n^(log_b a + ε)) where ε = 1
And a·f(n/b) = 2f(n/2) = 2n^2/4 = n^2/2 ≤ c·n^2 where c = 1/2.
Hence, by rule 3, T(n) = θ(f(n)) = θ(n^2)
Fourth condition of the master theorem:
If f(n), the non-recursive cost, is not a polynomial but a polylogarithmic function, then the 4th condition of the master theorem is applicable:
If f(n) = θ(n^(log_b a) log^k n) for some k ≥ 0, then T(n) = θ(n^(log_b a) log^(k+1) n).
Example:
T(n) = 2T(n/2) + n log n
Solution:
Here, a = 2, b = 2, f(n) = n log n
n^(log_b a) = n^(log2 2) = n^1 = n
f(n) = θ(n log^1 n), so k = 1
T(n) = θ(n^(log_b a) log^(k+1) n) = θ(n log^(1+1) n) = θ(n log^2 n)
Master Method in Simpler Form:
If the recurrence relation is of the form
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1 and f(n) is an asymptotically positive function, then the solution is
T(n) = θ(n^(log_b a) × u(n))
where u(n) depends upon h(n), and h(n) = f(n)/n^(log_b a).

h(n)                    u(n)
n^k, k > 0              n^k
n^k, k < 0              1
(log n)^k, k ≥ 0        (log n)^(k+1)/(k + 1)
Example:
T(n) = 4T(n/2) + n^3
Solution:
Here, a = 4, b = 2, f(n) = n^3, n^(log_b a) = n^(log2 4) = n^2
h(n) = f(n)/n^(log_b a) = n^3/n^2 = n^1
By Case 1, u(n) = n^k = n^1 = n
∴ T(n) = θ(n^(log_b a) × u(n)) = θ(n^2 × n) = θ(n^3)
Example:
T(n) = 8T(n/2) + n^2
Solution:
Here, a = 8, b = 2, f(n) = n^2, n^(log_b a) = n^3
h(n) = f(n)/n^(log_b a) = n^2/n^3 = n^(−1)
By Case 2, u(n) = 1
∴ T(n) = θ(n^(log_b a) × u(n)) = θ(n^3 × 1) = θ(n^3)
Example:
T(n) = 4T(n/2) + n^2
Solution:
Here, a = 4, b = 2, f(n) = n^2, n^(log_b a) = n^(log2 4) = n^2
h(n) = f(n)/n^(log_b a) = n^2/n^2 = n^0 = 1, which falls neither in Case 1 nor in Case 2 (since k = 0)
h(n) = 1 = (log n)^0, so k = 0
By Case 3, u(n) = (log n)^(k+1)/(k + 1) = (log n)^(0+1)/(0 + 1) = log n
∴ T(n) = θ(n^(log_b a) × u(n)) = θ(n^2 × log n) = θ(n^2 log n)
Example:
T(n) = 2T(n/2) + n log n
Solution:
Here, a = 2, b = 2, f(n) = n log n, n^(log_b a) = n^(log2 2) = n^1 = n
h(n) = f(n)/n^(log_b a) = (n log n)/n = log n = (log n)^1, so k = 1
By Case 3, u(n) = (log n)^(k+1)/(k + 1) = (log n)^(1+1)/(1 + 1) = (1/2) log^2 n
∴ T(n) = θ(n^(log_b a) × u(n)) = θ(n × (1/2) log^2 n) = θ(n log^2 n)
Example:
T(n) = T(2n/3) + 1
Solution:
Here, a = 1, b = 3/2, f(n) = 1, n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1
h(n) = f(n)/n^(log_b a) = 1/1 = 1 = (log n)^0, so k = 0
By Case 3, u(n) = (log n)^(k+1)/(k + 1) = (log n)^(0+1)/(0 + 1) = log n
∴ T(n) = θ(n^(log_b a) × u(n)) = θ(1 × log n) = θ(log n)
Limitations of the Master Theorem:
The master theorem cannot be used if:
• T(n) is not monotone, e.g. T(n) = sin n
• f(n) is not a polynomial, e.g. f(n) = 2^n
• a is not a constant, e.g. a = 2^n
• a < 1
Problem Set
1. T(n) = 3T(n/2) + n^2            T(n) = θ(n^2) (Rule 3)
2. T(n) = 4T(n/2) + n^2            T(n) = θ(n^2 log n) (Rule 2)
3. T(n) = T(n/2) + 2^n             T(n) = θ(2^n) (Rule 3)
4. T(n) = 2^n T(n/2) + n^n         Master theorem not applicable because 2^n is not a constant.
5. T(n) = 16T(n/4) + n             T(n) = θ(n^2) (Rule 1)
6. T(n) = 2T(n/2) + n/log n        Master theorem not applicable because there is a non-polynomial difference between f(n) and n^(log_b a).
7. T(n) = 2T(n/4) + n^0.51         T(n) = θ(n^0.51) (Rule 3)
8. T(n) = 0.5T(n/2) + 1/n          Master theorem not applicable because a < 1.
9. T(n) = 16T(n/4) + n!            T(n) = θ(n!) (Rule 3)
10. T(n) = √2 T(n/2) + log n       T(n) = θ(√n) (Rule 1)
11. T(n) = 3T(n/2) + n             T(n) = θ(n^(log2 3)) (Rule 1)
12. T(n) = 3T(n/3) + √n            T(n) = θ(n) (Rule 1)
13. T(n) = 4T(n/2) + cn            T(n) = θ(n^2) (Rule 1)
14. T(n) = 3T(n/4) + n log n       T(n) = θ(n log n) (Rule 3)
15. T(n) = 3T(n/3) + n/2           T(n) = θ(n log n) (Rule 2)
16. T(n) = 6T(n/3) + n^2 log n     T(n) = θ(n^2 log n) (Rule 3)
17. T(n) = 4T(n/2) + n/log n       T(n) = θ(n^2) (Rule 1)
18. T(n) = 64T(n/8) − n^2 log n    Master theorem not applicable because f(n) is not positive.
19. T(n) = 7T(n/3) + n^2           T(n) = θ(n^2) (Rule 3)
20. T(n) = 4T(n/2) + log n         T(n) = θ(n^2) (Rule 1)