
Design and Analysis of Algorithms Lecture Notes: Time Complexity

Design and Analysis of Algorithms
Lecture Notes
Dr. Aditya Singh Sengar
Department of Communication and Computer Engineering
The LNM Institute of Information Technology, Jaipur
August 2025
1 Definition
The word 'algorithm' derives its roots from the title of a mathematics textbook named "Algoritmi de Numero Indorum", which is a Latin translation of an Arabic book (Kitab al-Hisab al-Hindi) written by Muhammad ibn Musa al-Khwarizmi as an account of numerous mathematical procedures developed by Indian mathematicians before him. His surname 'al-Khwarizmi' spawned not just the word 'algorithm'
but also the word 'algebra'. So, what exactly is an algorithm?
An algorithm is defined as a well-defined computational procedure that takes
some value, or a set of values, as input and produces some values, or a set of
values, as output. It is a tool for solving a well-defined computational problem [1].
In essence, an algorithm is the idea behind any reasonable computer program [2].
2 Desirable properties
1. Correctness: an algorithm must be able to solve the intended problem.
2. Finiteness: an algorithm must be able to accomplish its task in a finite number of steps.
3. Definiteness: each step of an algorithm must be objectively clear. There
should not be any doubts regarding what each step does.
4. Efficiency: an algorithm should strive to be better and more efficient in terms
of both time and space.
5. Implementational simplicity: an algorithm should strive to be easily implementable for wider acceptance.
The above list of desirable properties of an algorithm is arranged in order of importance. Hence, correctness is the most important characteristic of an algorithm. Correctness, however, also requires a definition of the scope of an algorithm.
For example, a sorting algorithm can very easily sort any list of real numbers; however, it would definitely fail for complex numbers, for which no natural ordering exists. Once it is established
that a particular algorithm is correct and provides the right output for its entire
bubble_sort(int arr[], int n) {            /* Operations                 */
    int i, j, temp;
    for (i = 0; i < n - 1; i++) {          /* n compare, n increment     */
        for (j = 0; j < n - i - 1; j++) {  /* n-1 compare, n-1 increment */
            if (arr[j] > arr[j + 1]) {     /* n(n-1) compare             */
                temp = arr[j];             /* n(n-1) assign              */
                arr[j] = arr[j + 1];       /* n(n-1) assign              */
                arr[j + 1] = temp;         /* n(n-1) assign              */
            }
        }
    }
}
Figure 1: Bubble sort with marked operations
scope, the second and third qualities come into focus. Each algorithm must be made up of a finite number of definite steps.
If there is more than one algorithm that accomplishes the same task, we have to pay attention to the question of efficiency and implementational simplicity.
The analysis of the time complexity of an algorithm invariably requires the estimation of the number of smaller indivisible tasks being performed by the algorithm. For example, consider the code for bubble sort provided in Fig. 1. The entire algorithm is comprised of a finite number of definite steps. Each of these steps entails a few basic operations like compare, increment, add, and assign. By counting the number of times each operation shall be carried out, one can, at a very rudimentary level, ascertain the time complexity of a certain algorithm.
However, the question of time complexity can only be answered on the basis of a base computational model, as the performance of an algorithm is primarily dependent on the speed and scale of its executor.
To give an analogy, consider yourself a tourist in a tea estate. As part of the tour experience, you are given a small tutorial on tea leaf picking.
Tea leaf picking is a simple exercise: you pick a few leaves from a shrub and
Figure 2: RAM Model of Computation (a CPU comprising a control unit and an ALU, connected to a memory unit, an input device, and an output device)
put them in the basket on your back. Now, if you compare your performance with that of a professional tea picker, the professional shall outperform you in terms of efficiency, as her actions have been honed with years of experience.
Hence, the same algorithm (pick up leaves, put them in the basket) is being executed by two different entities, albeit with different levels of efficiency.
Hence, comparing the efficiency of different algorithms requires a common base model of computation. One such model of computation shall be discussed in the next section.
3 RAM Model of Computation
In the RAM (Random Access Machine) model of computation, it is assumed that the algorithm is being run on a machine with a single processor. The processor is able to perform one operation in one unit of time. It is also assumed that the machine is able to conduct all its operations in the same amount of time, as well as access a
Figure 3: Different input instances affect the time complexity differently (best case: an already sorted array; average case: a partially shuffled array; worst case: a reverse-sorted array)
particular memory location in the same amount of time.
Now, a caveat: the precise calculation of the time complexity of an algorithm is incomplete without consideration of the particular input instance, as is evident from the three instances of sorting in Fig. 3. We may classify them as the following:
• Worst-case: a particular instance of input, where the algorithm requires the
execution of the maximum number of operations.
• Average-case: a random instance of input, where the algorithm requires the execution of an average number of operations.
• Best-case: a particular instance of input, where the algorithm requires the
execution of a minimum number of operations.
However, the dependence on the input instance becomes less significant with an increase in input size. Hence, at larger values of N, the prominent question to be asked is that of granularity: how precise does the calculation of time complexity have to be?
Typically, a very precise and accurate calculation of time complexity is not required. Rather, a rough estimate of time complexity is good enough. These rough estimates of time complexity shall be discussed in the next section.
4 Asymptotic Time Complexity
Before we begin our analysis of asymptotic time complexity, we need to understand two more terms: upper bound and lower bound.
If a function U(n) is an upper bound on a function f(n), then:

f(n) ≤ c U(n); c > 0; n → ∞    (1)

Hence, a function upper bounded by another function grows slower with an increase in n as compared to the upper bound. Similarly, one can define the concept of a lower bound: a function lower bounded by another function grows faster with n as compared with it. So, if a function L(n) is a lower bound on a function f(n), then:

f(n) ≥ c L(n); c > 0; n → ∞    (2)
Based on the concept of upper and lower bounds on functions, we may discuss the time complexity of an algorithm in terms of its upper and lower bounds, as a precise measure of the time complexity of an algorithm is rarely required. It is especially inessential at large values of the input size, N. So, instead of a very accurate description of the time complexity, algorithm designers rely on asymptotic measures.
The term 'asymptotic' refers to the behaviour of a function f(n) as n → ∞. Hence, all asymptotic measures of time complexity regard the input size N as a very large number. One may characterize the growth function of an algorithm in one of the following notations.
4.1 Small-oh, o
Small-oh or o is a loose upper bound. A function f(n) is said to be small-oh of i(n) iff:

lim_{n→∞} f(n)/i(n) = 0    (3)

then,

f(n) = o(i(n))    (4)
Figure 4: Upper and Lower Bounds
4.2 Big-oh, O
Big-oh or O is a tighter upper bound as compared to small-oh, and it is defined as:

f(n) = O(h(n))    (5)

iff,

f(n) ≤ c h(n); c > 0; n > n₀    (6)
4.3 Theta, Θ
Theta or Θ is a tight bound, both from above and below, and it is defined as:

f(n) = Θ(g(n))    (7)

iff,

0 ≤ c₁ g(n) ≤ f(n) ≤ c₂ g(n); c₁, c₂ > 0; n > n₀    (8)
4.4 Big-Omega, Ω
Big-Omega or Ω is a looser lower bound as compared to Θ:

f(n) = Ω(k(n))    (9)

iff,

0 ≤ c k(n) ≤ f(n); c > 0; n > n₀    (10)
4.5 Small-omega, ω
Small-omega or ω is a loose lower bound. It is mathematically defined as follows:

f(n) = ω(j(n))    (11)

iff,

lim_{n→∞} f(n)/j(n) = ∞    (12)

Figure 5: Different asymptotic notations

4.6 Several prominent properties
Symmetry
f(n) = Θ(g(n)) iff g(n) = Θ(f(n))
Transpose
f(n) = O(g(n)) iff g(n) = Ω(f(n))
f(n) = o(g(n)) iff g(n) = ω(f(n))
Transitivity
f (n) = O (g (n)) and g (n) = O (h (n)) then f (n) = O (h (n))
f (n) = Θ (g (n)) and g (n) = Θ (h (n)) then f (n) = Θ (h (n))
f (n) = Ω (g (n)) and g (n) = Ω (h (n)) then f (n) = Ω (h (n))
f (n) = o (g (n)) and g (n) = o (h (n)) then f (n) = o (h (n))
f (n) = ω (g (n)) and g (n) = ω (h (n)) then f (n) = ω (h (n))
Table 1: A few prominent time complexity functions

Sr. No.  Function               Equation
1        Constant function      f(n) = 1
2        Log function           f(n) = log(n)
3        Linear function        f(n) = n
4        Super-linear function  f(n) = n log(n)
5        Quadratic function     f(n) = n²
6        Cubic function         f(n) = n³
7        Exponential function   f(n) = cⁿ; c > 1
8        Factorial function     f(n) = n!
References
[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, 2022.
[2] Steven S. Skiena. The Algorithm Design Manual, 2nd edition. Springer, 2008.