# DESIGN OF ALGORITHMS - Lecture Slides Chapter 1

COT4400 – Design and Analysis of Algorithms: Course Information and Introduction
Asai Asaithambi, Spring 2023

## General Information

• Instructor: Asai Asaithambi, Ph.D., Professor, School of Computing
• Class Meetings: Tue, Thu: 03:05 pm – 04:20 pm
• Office Hours: Tue, Thu: 10:45 am – 12:00 pm and 01:45 pm – 03:00 pm
• Email: asai.asaithambi@unf.edu
• Textbook: No textbook required; material provided on Canvas
## What is an Algorithm? Why Study Algorithms?

• What is an algorithm?
  ◦ It is a sequence of well-defined steps to solve a problem.
  ◦ The word is named after the Persian mathematician Al-Khwarizmi.
• Why study algorithms?
  ◦ Many of our normal daily activities are examples of algorithms: brushing teeth, working out, driving to work, ...
  ◦ Humans can carry out not-so-well-defined steps, but computers cannot.
  ◦ An algorithm for solving a problem on a computer is characterized by the amount of time and space it needs.
  ◦ Coming up with well-defined steps is called design.
  ◦ Determining the time/space needed is called analysis.
  ◦ Studying and analyzing algorithms helps us design efficient algorithms.
## Application of Algorithms
• Subject Areas
– Artificial Intelligence; Machine Learning
– Robotics; Game Development; Animation
– Big data; Cybersecurity; Networks
– Biology; Chemistry; Physics
– Compilers; Databases; Hardware Design
– Anything else that needs computations
• Commonly Used Modern Apps
– Global Positioning Systems (GPS)
– Recommender Systems
◦ product recommendation on e-commerce websites
◦ movie/music recommendation on streaming services
– Search engines
– Speech to text translation
– Face/object detection in camera apps
## Course Overview
• What will I learn in this course?
◦ Methods for mathematically analyzing a given algorithm
◦ Methods for designing algorithms for a variety of problems
• What background knowledge is assumed?
◦ Programming (although you will not code anything)
◦ Basic data structures (from COP3530)
◦ Discrete mathematics (from COT3100)
• Do I need any other special skills?
◦ Willingness to trace algorithms by hand
◦ Patience with algebraic manipulations
## How to Succeed in this Course

• Make use of all the resources provided by the instructor:
  ◦ Instructor's Office Hours
  ◦ Instructor's Lecture Notes and Slides
• Study everything as the course builds on basic concepts introduced earlier and increases in difficulty as the course progresses
• Trace through algorithms by hand
• Attend class regularly
• Do not memorize problems/solutions
• Listen carefully in class
• Write down anything for which you need clarification
• Make sure to use correct notation
## Are There Other Resources Available?

The following are free resources available on the Internet:
• If you want to review data structures:
  ◦ https://opendatastructures.org/
• If you want to review discrete mathematics:
  ◦ https://courses.csail.mit.edu/6.042/spring18/mcs.pdf
• If you want a textbook on Algorithms:
  ◦ http://jeffe.cs.illinois.edu/teaching/algorithms/
• Note of caution: Most books on Algorithms tend to be
  ◦ aimed at the researcher or practitioner; and
  ◦ too terse, thick, or advanced.
## Learning Objectives
• Describe fundamental concepts in the design and analysis of
computer algorithms;
• Describe some modern algorithms for solving standard computing problems;
• Analyze some commonly used computer algorithms;
• Decide on the suitability of a specific algorithm design technique
for a given problem;
• Apply algorithm design and analysis techniques learned to solve
problems; and
• Describe complexity classes.
## Topics List
This course is aimed at introducing some fundamental techniques
for the analysis and design of algorithms. The following topics will
be covered:
• Preliminaries
• Algorithms with Numbers
• Applied Combinatorics
• Divide-and-Conquer Algorithms
• Graph Algorithms
• Dynamic Programming Algorithms
• Greedy Algorithms
• Intractable Problems
## Describing Algorithms

• We will use pseudocode to describe an algorithm.
• We will use the function notation of programming languages: each algorithm will have a header containing its name followed by the inputs as arguments, and will end with a statement that returns the output of the algorithm.
• Each line in the algorithm will be numbered for ease of reference when analyzing the algorithm.
• An example algorithm:

    1: procedure Max(A, n)
    2:   max ← A[1]
    3:   for i ← 2 to n do
    4:     if A[i] > max then
    5:       max ← A[i]
    6:   return max
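As a companion to the pseudocode, here is a direct Python rendering (my translation, not part of the slides; Python lists are 0-indexed, so the pseudocode's A[1] becomes A[0]):

```python
# Python sketch of the Max pseudocode above.
def max_element(A, n):
    max_val = A[0]              # max <- A[1]
    for i in range(1, n):       # for i <- 2 to n
        if A[i] > max_val:      # if A[i] > max
            max_val = A[i]      #     max <- A[i]
    return max_val              # return max

print(max_element([3, 1, 4, 1, 5, 9, 2, 6], 8))  # → 9
```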
## Why Analyze Algorithms?

We analyze algorithms to answer the following questions:
• Is my algorithm going to run forever?
  ◦ time complexity
• How much computer memory will my algorithm need?
  ◦ space complexity
• Since the running time of an algorithm grows with the input size, what happens as the "size" of the problem grows?
• Can I improve upon my algorithm?
  ◦ Can I solve my problem faster?
  ◦ Can I solve my problem with less memory?
• Do I know that my algorithm will produce the correct result?
  ◦ correctness
• What if it is not possible to determine the exact amount of computing time or memory? In such instances, could we estimate as closely as possible?
  ◦ tight bound/estimate
• Why should the analysis be performed before using the algorithm in real life?
## Why Can't I "Run" the Algorithm to Analyze It?

We do not want to "run" an algorithm to analyze it because:
• The measured performance of the algorithm will be:
  ◦ machine dependent;
  ◦ programming language and compiler dependent;
  ◦ operating system and implementation dependent.
• There is no guarantee that the algorithm has been tested on all possible inputs.
• Experimentation can be expensive and time consuming.
• Comparing several algorithms for the same problem will take even longer and be more expensive.
Conclusion: Analyze algorithms theoretically.
## Mathematical Preliminaries and Tools
• Algebra/Pre-Calculus Review
◦ Functions in Algorithm Analysis
◦ Laws of Exponents
◦ Polynomials
◦ Logarithms
◦ Floor and Ceiling functions
◦ Factorial
• Analysis of Simple Loops
• Tools for Algorithm Analysis
◦ Summations
◦ Asymptotic Notations
◦ Combinatorics
– Counting
– Recurrence Relations
– Generating Functions
## What is a Function?

Given two sets A and B, a function f from A to B is a rule that associates every element of A with an element of B.

(Figure: an arrow diagram of f, with A = {x, y, z}, B = {1, 5, 7}, and arrows x → 1, y → 7, z → 1.)

We say f maps A to B, and write f : A → B.
Also, since x is paired with 1, we write f(x) = 1.
Similarly, f(y) = 7 and f(z) = 1.
Note: A and B can be infinite sets.
## Functions (cont.)

• We are particularly interested in functions f : N → R.
• N is the set of natural numbers {0, 1, 2, · · · }.
• R is the set of real numbers.
• The rules for f are given as formulas, e.g., f(n) = 2n + 7.

(Figure: an arrow diagram from N to R with 1 → 9, 17 → 41, 21 → 49, illustrating f(n) = 2n + 7.)
## Some Useful Functions

Polynomial: A polynomial p of degree d (where d is a nonnegative integer) is the function defined by the rule
$$p(n) = a_0 + a_1 n + a_2 n^2 + \cdots + a_d n^d.$$
Example: $p(n) = 1 + 2n + 3n^2$

Floor: For any real number x, we define the floor of x as
$$\lfloor x \rfloor = \text{the greatest integer} \le x.$$
Example: ⌊3.2⌋ = 3, ⌊3.8⌋ = 3

Ceiling: For any real number x, we define the ceiling of x as
$$\lceil x \rceil = \text{the smallest integer} \ge x.$$
Example: ⌈2.2⌉ = 3, ⌈2.8⌉ = 3

Factorial: For any integer n ≥ 0, we define the factorial of n [denoted by n!] as
$$n! = \begin{cases} 1, & \text{if } n = 0; \\ n \cdot (n-1)!, & \text{otherwise.} \end{cases}$$
Example: 5! = 5·4! = 5·4·3! = 5·4·3·2! = 5·4·3·2·1! = 5·4·3·2·1·0! = 120
## Things to Remember (Properties of Exponents)

Let a, b, p, and q be real numbers. Then:
• $a^p \cdot a^q = a^{p+q}$
• $a^p / a^q = a^{p-q}$ (where a ≠ 0)
• $(a^p)^q = a^{pq}$
• $(ab)^p = a^p b^p$
• $(a/b)^p = a^p / b^p$ (where b ≠ 0)
• $a^{-p} = 1/a^p$ (where a ≠ 0)
• $1/a^{-p} = a^p$ (where a ≠ 0)
• $a^0 = 1$ (where a ≠ 0)
• $a^{p/q} = \sqrt[q]{a^p}$ (where q ≠ 0)
## Things to Remember (Properties of Logarithms)

If $a^x = n$, then we say that x is the logarithm of n to the base a, and write $x = \log_a n$.
Also, when the base is the number e ≈ 2.718, we call $\log_e x$ the natural logarithm of x, and write $\ln x$. For $f(x) = \ln x$, we have $f'(x) = 1/x$.

For all real numbers x, y, z > 0:
• $\log_z(x \cdot y) = \log_z x + \log_z y$
• $\log_z(x/y) = \log_z x - \log_z y$
• $\log_z(x^y) = y \cdot \log_z x$
• $x = z^{\log_z x}$
• $x^{\log_z y} = y^{\log_z x}$
• $\log_z 1 = 0$
• $\log_z(1/x) = -\log_z x$
• $\log_z x = \dfrac{\log_y x}{\log_y z}$
• $\log_z x = \dfrac{1}{\log_x z}$
## Analysis of Simple Loops

How many times is Hello printed?

    for (i = 1; i <= n; i++)
        print "Hello"

Represent the loop as a summation:
$$\sum_{i=1}^{n} 1 = n$$
Hello is printed n times.

Consider another example:

    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            print "Hello"

$$\sum_{i=1}^{n} \sum_{j=1}^{n} 1 = n^2$$
Hello is printed $n^2$ times.
## Analysis of Simple Loops (cont.)

How many times is Hello printed?

    for (i = 1; i <= n; i++)
        for (j = 1; j <= i; j++)
            print "Hello"

$$\sum_{i=1}^{n} \sum_{j=1}^{i} 1 = \sum_{i=1}^{n} i = \frac{n(n+1)}{2} = \frac{1}{2}n^2 + \frac{1}{2}n$$
Hello is printed $(n^2 + n)/2$ times.
Need to work with summations. Some basic ideas
related to simplifying sums may be helpful.
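The three loop counts above can also be checked empirically; a minimal sketch (the counting helpers are mine, not from the slides):

```python
# Count iterations of the three loop patterns instead of printing "Hello".
def count_single(n):
    return sum(1 for i in range(1, n + 1))

def count_square(n):
    return sum(1 for i in range(1, n + 1) for j in range(1, n + 1))

def count_triangular(n):
    return sum(1 for i in range(1, n + 1) for j in range(1, i + 1))

n = 10
print(count_single(n), count_square(n), count_triangular(n))  # → 10 100 55
assert count_triangular(n) == n * (n + 1) // 2
```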
## Evaluating Sums

• A function a : N → A is called a sequence, and the value of a(n) for any n ∈ N is denoted by $a_n$. Frequently, such a function (sequence) is denoted $\{a_n\}$.
• With A = Z or A = R, for the sequence $\{a_n\}$, an expression such as $a_3 + a_4 + \cdots + a_{10}$ is called a finite series, and is written in shorthand notation as a summation in the form
$$\sum_{k=3}^{10} a_k.$$
• In the expression $\sum_{k=3}^{10} a_k$:
  ◦ The symbol Σ stands for sum;
  ◦ The index k is an integer that takes on consecutive values from 3 through 10; and
  ◦ The quantities represented by $a_k$ in this range of values of k are summed.
• An equivalent way in which the same summation can be written is
$$\sum_{3 \le k \le 10} a_k.$$
## Evaluating Sums (cont.)

• Arithmetic Series:
  ◦ The sequence a, a + d, a + 2d, · · · is called an arithmetic progression.
  ◦ a is the first term, and d the common difference.
  ◦ The n-th term is given by a + (n − 1)d for n ≥ 1.
  ◦ $A_n = a + (a + d) + (a + 2d) + \cdots + (a + (n-1)d)$ is called an arithmetic series.
  ◦ $A_n = n\,[2a + (n-1)d]/2$
• Geometric Series:
  ◦ The sequence a, ar, ar², · · · is called a geometric progression.
  ◦ a is the first term, and r the common ratio.
  ◦ The n-th term is given by $ar^{n-1}$ for n ≥ 1.
  ◦ $G_n = a + ar + ar^2 + \cdots + ar^{n-1}$ is called a geometric series.
  ◦ $$G_n = \begin{cases} a \cdot \dfrac{r^n - 1}{r - 1}, & \text{if } |r| > 1; \\[4pt] a \cdot \dfrac{1 - r^n}{1 - r}, & \text{if } |r| < 1; \\[4pt] na, & \text{if } r = 1. \end{cases}$$
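The closed forms for $A_n$ and $G_n$ can be sanity-checked against the term-by-term sums (an illustrative check, not from the slides):

```python
# Compare the term-by-term series sums with the closed forms above.
def arithmetic_series(a, d, n):
    return sum(a + k * d for k in range(n))   # a + (a+d) + ... + (a+(n-1)d)

def geometric_series(a, r, n):
    return sum(a * r**k for k in range(n))    # a + ar + ... + ar^(n-1)

# A_n = n[2a + (n-1)d]/2, with a=3, d=4, n=10
assert arithmetic_series(3, 4, 10) == 10 * (2 * 3 + 9 * 4) // 2
# G_n = a(r^n - 1)/(r - 1), with a=2, r=3, n=8
assert geometric_series(2, 3, 8) == 2 * (3**8 - 1) // (3 - 1)
print("series formulas check out")
```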
## Some More Known Summation Results

$$\sum_{k=1}^{n} 1 = n$$
$$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$
$$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}$$
$$\sum_{k=1}^{n} k^3 = \frac{n^2(n+1)^2}{4}$$
$$\sum_{k=1}^{n} (a_k - a_{k+1}) = a_1 - a_{n+1}$$
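These identities are easy to verify numerically for any particular n (an illustrative check, not from the slides):

```python
# Verify the closed-form summation results for one value of n.
n = 25
ks = range(1, n + 1)
assert sum(1 for k in ks) == n
assert sum(ks) == n * (n + 1) // 2
assert sum(k**2 for k in ks) == n * (n + 1) * (2 * n + 1) // 6
assert sum(k**3 for k in ks) == n**2 * (n + 1)**2 // 4
# telescoping sum with a_k = k^2: sum of (a_k - a_{k+1}) collapses to a_1 - a_{n+1}
a = lambda k: k**2
assert sum(a(k) - a(k + 1) for k in ks) == a(1) - a(n + 1)
print("all summation identities hold for n =", n)
```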
## Analysis of Simple Loops (cont.)

How many times is Hello printed?

    for (i = 1; i <= n; i = i * 2)
        print "Hello"

$$\sum_{1 \le 2^k \le n} 1 = \sum_{k=0}^{\lfloor \log_2 n \rfloor} 1 = \lfloor \log_2 n \rfloor + 1$$
Hello is printed $\lfloor \log_2 n \rfloor + 1$ times.

Now, mimic the same argument and determine the number of times Hello is printed by the following code:

    for (i = n; i > 0; i = ⌊i/2⌋)
        print "Hello"
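The count for the doubling loop can be confirmed by simulation (illustrative, not from the slides):

```python
# Count iterations of: for (i = 1; i <= n; i = i * 2)
import math

def count_doubling(n):
    count, i = 0, 1
    while i <= n:
        count += 1
        i *= 2
    return count

for n in (1, 7, 8, 1000):
    assert count_doubling(n) == math.floor(math.log2(n)) + 1
print("doubling loop runs floor(log2 n) + 1 times")
```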
## Analysis of Simple Loops (cont.)

Assume that n is a power of 2, i.e., $n = 2^m$ for some integer m.

    for (i = n; i > 0; i = ⌊i/2⌋)
        for (j = 1; j <= i; j++)
            print "Hello"

Note that when n is an integer power of 2, the floor function may be removed, and the outer loop variable takes the values $i = n/2^k$ for $k = 0, 1, \ldots, m$. Therefore, in summation notation, the required answer is obtained as
$$\sum_{k=0}^{m} \sum_{j=1}^{n/2^k} 1 = \sum_{k=0}^{m} \frac{n}{2^k} = n\left(1 + \frac{1}{2} + \frac{1}{2^2} + \cdots + \frac{1}{2^m}\right) = n \cdot \frac{1 - \dfrac{1}{2^{m+1}}}{1 - \dfrac{1}{2}} = n\left(2 - \frac{1}{2^m}\right) = 2n - 1.$$
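Simulating the halving double loop confirms the 2n − 1 count (illustrative, not from the slides):

```python
# Count prints of the halving double loop: the inner loop runs i times
# for each value of the outer loop variable i.
def count_halving(n):
    count, i = 0, n
    while i > 0:                 # for (i = n; i > 0; i = floor(i/2))
        count += i               # inner loop contributes i iterations
        i //= 2
    return count

for m in range(6):
    n = 2**m
    assert count_halving(n) == 2 * n - 1
print("halving double loop prints 2n - 1 times when n is a power of 2")
```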
## Analysis of Simple Loops (cont.)

How many times is Hello printed?

    for (i = 1; i <= n; i++)
        for (j = i+20; j <= i+39; j++)
            print "Hello"

$$\sum_{i=1}^{n} \sum_{j=i+20}^{i+39} 1 = \sum_{i=1}^{n} 20 = 20n$$
Hello is printed 20n times.
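A brute-force count confirms that the shifted inner bounds do not matter, only their width (illustrative, not from the slides):

```python
# The inner loop always runs exactly 20 times, regardless of i.
def count_shifted(n):
    return sum(1 for i in range(1, n + 1) for j in range(i + 20, i + 40))

assert count_shifted(15) == 20 * 15
print(count_shifted(15))  # → 300
```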
## Analyzing a Simple Algorithm

Algorithm Max, annotated with the cost of each line and the number of times it is executed:

| Line | Pseudocode         | Cost | How many times |
|------|--------------------|------|----------------|
| 1    | max ← A[1]         | d₁   | 1              |
| 2    | for i ← 2 to n do  | d₂   | n − 1          |
| 3    | if A[i] > max then | d₃   | n − 1          |
| 4    | max ← A[i]         | d₄   | ≤ n − 1        |
| 5    | endif              |      |                |
| 6    | end for            |      |                |
| 7    | return max         | d₇   | 1              |

Let T(n) denote the time required by this algorithm to find the maximum element in an n-element array. Then:
$$\begin{aligned} T(n) &\le d_1 + d_2(n-1) + d_3(n-1) + d_4(n-1) + d_7 \\ &= (d_2 + d_3 + d_4)n + (d_1 + d_7) - (d_2 + d_3 + d_4) \\ &\le (d_2 + d_3 + d_4)n + (d_1 + d_7) = K_1 n + K_2 \end{aligned}$$
What we have done is a worst-case analysis.
Note that K₁ and K₂ may vary from machine to machine, or we might say they are hardware dependent.
We generally analyze algorithms in a machine-independent manner. For this purpose, we use asymptotic analysis.
## The Big-Oh Notation

• We wish to analyze algorithms to determine how their runtime requirements grow as the problem size grows.
• We use n to denote the size of the problem.
• We use T(n) to denote the running time of an algorithm.
• To express how T(n) grows as n grows, several asymptotic notations are used.
• The Big-Oh Notation: Provides an asymptotic upper bound.
• For any function g(n), O(g(n)) is defined as the set of functions f(n) for which there exist positive constants c and $n_0$ satisfying
$$0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0.$$
• For example, if $g(n) = n^2$, then $O(n^2)$ consists of any function f(n) for which there exist positive constants c and $n_0$ satisfying $f(n) \le c \cdot n^2$ for all $n \ge n_0$.
• Thus, by definition, $O(n^2)$ is an infinite set.
• Some members of $O(n^2)$: $1000,\; 257 \log_2 n,\; 2n^2,\; n^{1.59},\; 10^{10} n^2,\; 7n^2 + 2022n + 1700,\; \cdots$
## Big-Oh Notation Illustration

(Figure: a plot illustrating the Big-Oh definition; beyond $n_0$, the curve f(n) stays below c · g(n).)
## A Big-Oh Example

• $f(n) = 4n^3 + 3n^2 + 7n + 10 \in O(n^3)$ because, for all n ≥ 1,
$$f(n) = 4n^3 + 3n^2 + 7n + 10 \le (4 + 3 + 7 + 10)n^3 = 24n^3.$$
• In other words, $f(n) \le c \cdot n^3$ for all $n \ge n_0$, with c = 24 and $n_0 = 1$.
• Note that we may also say $f(n) \in O(n^4)$ for the same c and $n_0$, because $n^3 \le n^4$ for all n ≥ 1 as well.
• In fact, $f(n) \in O(n^k)$ for every k ≥ 3, but we say $n^3$ is the tightest bound; the bounds $n^k$ for k > 3 are not as tight.
• When concluding that $f(n) = 4n^3 + 3n^2 + 7n + 10$ is a member of $O(n^3)$, we have suppressed the constants and lower-order terms.
• Finally, note also that $f(n) \notin O(n^2)$.
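The witness pair c = 24, n₀ = 1 can be spot-checked numerically (an illustration, not a proof, and not from the slides):

```python
# Check f(n) <= 24 * n^3 on a range of n, and watch f(n)/n^2 grow,
# suggesting f is not in O(n^2).
def f(n):
    return 4 * n**3 + 3 * n**2 + 7 * n + 10

c, n0 = 24, 1
assert all(f(n) <= c * n**3 for n in range(n0, 1000))
print(f(10) / 10**2, f(1000) / 1000**2)  # the ratio f(n)/n^2 keeps growing
```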
## Analyzing a Simple Algorithm (cont.)

• To avoid worrying about machine-dependent constants, we use a model known as the Random Access Machine (RAM) model (not to be confused with Random Access Memory).
• In this model, memory is divided into cells.
• Each cell can hold a word.
• There is no limit on the size of a word.
• Any location within the memory can be accessed in constant time.
• The cost of every step in an algorithm is assumed to be 1 (instead of constants like d₁, d₂, etc.).
• Thus, we often count the number of operations instead of the time required.
• This is often called the operation count.
• Sometimes we distinguish between different operations that we may count within the same algorithm.
## Analyzing a Simple Algorithm (cont.)

Algorithm Max again, with every cost d replaced by 1 under the RAM model:

| Line | Pseudocode         | Cost | How many times |
|------|--------------------|------|----------------|
| 1    | max ← A[1]         | 1    | 1              |
| 2    | for i ← 2 to n do  | 1    | n − 1          |
| 3    | if A[i] > max then | 1    | n − 1          |
| 4    | max ← A[i]         | 1    | ≤ n − 1        |
| 5    | endif              |      |                |
| 6    | end for            |      |                |
| 7    | return max         | 1    | 1              |

Let T(n) denote the number of steps carried out by this algorithm to find the maximum element in an n-element array. Then:
$$T(n) \le 1 + (n-1) + (n-1) + (n-1) + 1 = 3n - 1 \le 3n, \quad \text{so } T(n) \in O(n).$$
Note that we have translated $T(n) \le 3n$ as $T(n) \in O(n)$ by letting c = 3 and $n_0 = 1$ in the definition of the Big-Oh notation we saw earlier.
Also, $T(n) \le 3n$ tells us the actual wall-clock time required by this algorithm will not exceed $m \cdot 3n$ for some constant m that depends on the hardware used.
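The operation count can be made concrete by instrumenting Max; the sketch below (mine, not from the slides) counts loop iterations, comparisons, and assignments as one step each, mirroring the cost table:

```python
# Count RAM-model steps of Max and check T(n) <= 3n.
def max_with_count(A):
    steps = 1                  # line 1: max <- A[1]
    max_val = A[0]
    for i in range(1, len(A)):
        steps += 1             # line 2: loop iteration
        steps += 1             # line 3: comparison
        if A[i] > max_val:
            steps += 1         # line 4: assignment (worst case: every time)
            max_val = A[i]
    steps += 1                 # line 7: return
    return max_val, steps

n = 100
_, steps = max_with_count(list(range(n)))   # ascending array = worst case
assert steps == 3 * n - 1
assert steps <= 3 * n
print(steps)  # → 299
```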
## Another Big-Oh Example

We show that $\log_2 n \in O(\sqrt{n})$.
We have
$$\log_2 n = 2 \cdot \tfrac{1}{2}\log_2 n = 2 \cdot \log_2 n^{1/2} \quad (\log_a x^b = b \log_a x)$$
$$\le 2 \cdot n^{1/2} \quad (\log_2 x < x \text{ for } x \ge 1)$$
Therefore, $\log_2 n \le 2\sqrt{n}$ for n ≥ 1.
The definition of Big-Oh holds with c = 2 and $n_0 = 1$.
Thus, $\log_2 n \in O(\sqrt{n})$.
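A quick numeric spot-check of the inequality (illustrative only; sampling n does not replace the proof above):

```python
# Check log2(n) <= 2*sqrt(n) on a sample of n values.
import math

assert all(math.log2(n) <= 2 * math.sqrt(n) for n in range(1, 10**6, 997))
print("log2 n <= 2*sqrt(n) holds on all sampled n")
```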
## Yet Another Big-Oh Example

We show that $\log_2(n!) \in O(n \log_2 n)$.
We have
$$\log_2(n!) = \log_2(n \cdot (n-1) \cdots 1) = \underbrace{\log_2 n + \log_2(n-1) + \cdots + \log_2 1}_{n \text{ terms}} \le \underbrace{\log_2 n + \log_2 n + \cdots + \log_2 n}_{n \text{ terms}} = n \log_2 n.$$
Therefore, $\log_2(n!) \le n \log_2 n$, which holds for all n ≥ 1.
The definition of Big-Oh holds with c = 1 and $n_0 = 1$.
Thus, $\log_2(n!) \in O(n \log n)$.
Note that the base of the logarithm may be dropped, since changing the base only multiplies the function by a constant.
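The bound can be verified numerically for small n (illustrative; the small epsilon guards against floating-point rounding):

```python
# Check log2(n!) <= n * log2(n) for n = 1..49.
import math

for n in range(1, 50):
    assert math.log2(math.factorial(n)) <= n * math.log2(n) + 1e-9
print("log2(n!) <= n log2 n for n = 1..49")
```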
## Growth of Functions in Algorithm Analysis

| T(n) Class   | T(n) in O(·) | Real Time    |
|--------------|--------------|--------------|
| Constant     | O(1)         | best/fastest |
| Logarithmic  | O(log n)     | super fast   |
| Linear       | O(n)         | fast         |
| Linearithmic | O(n log n)   | acceptable   |
| Quadratic    | O(n²)        | slow         |
| Cubic        | O(n³)        | slower       |
| Exponential* | O(kⁿ)        | —            |

*k > 1

On a problem of size n = 1 million, when an O(n²) algorithm takes roughly 70 hours, an O(n log n) algorithm takes roughly 40 seconds. Suppose the machine being used for both algorithms runs 100 times faster. Then the O(n²) algorithm would still need roughly 40 minutes, whereas the O(n log n) algorithm would need half a second.
Conclusion: Algorithm analysis is worth the effort!
## The Big-Omega (Ω) Notation

• The Big-Omega (Ω) Notation: Provides an asymptotic lower bound.
• For any function g(n), Ω(g(n)) is defined as the set of functions f(n) for which there exist positive constants c and $n_0$ satisfying
$$0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0.$$
• For example, if $g(n) = n^2$, then $\Omega(n^2)$ consists of any function f(n) for which there exist positive constants c and $n_0$ satisfying $0 \le c \cdot n^2 \le f(n)$ for all $n \ge n_0$.
• Thus, by definition, $\Omega(n^2)$ is an infinite set.
• Some members of $\Omega(n^2)$: $2n^2,\; 17n^{2.83},\; 10^{10} n^2,\; 7n^3 + 2022n^2 + 1700,\; 9000n^4,\; 3^n,\; n!,\; \cdots$
## Big-Omega Notation Illustration

(Figure: a plot illustrating the Big-Omega definition; beyond $n_0$, the curve f(n) stays above c · g(n).)
## Analyzing a Simple Algorithm (cont.)

Algorithm Max once more, now counting how many times each line executes at least:

| Line | Pseudocode         | Cost | At least how many times |
|------|--------------------|------|-------------------------|
| 1    | max ← A[1]         | 1    | 1                       |
| 2    | for i ← 2 to n do  | 1    | n − 1                   |
| 3    | if A[i] > max then | 1    | n − 1                   |
| 4    | max ← A[i]         | 1    | 0                       |
| 5    | endif              |      |                         |
| 6    | end for            |      |                         |
| 7    | return max         | 1    | 1                       |

If T(n) denotes the number of steps carried out by this algorithm, then:
$$T(n) \ge 1 + (n-1) + (n-1) + 0 + 1 = 2n.$$
Note that we may now conclude from $T(n) \ge 2n$ that $T(n) \in \Omega(n)$ by letting c = 2 and $n_0 = 1$ in the definition of the Big-Omega notation we saw earlier.
## The Big-Theta (Θ) Notation

• The Big-Theta (Θ) Notation: Provides an asymptotically tight bound.
• For any function g(n), Θ(g(n)) is defined as the set of functions f(n) for which there exist positive constants $c_1$, $c_2$, and $n_0$ satisfying
$$0 \le c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \text{ for all } n \ge n_0.$$
• For example, if $g(n) = n^2$, then $\Theta(n^2)$ consists of any function f(n) for which there exist positive constants $c_1$, $c_2$, and $n_0$ satisfying $0 \le c_1 \cdot n^2 \le f(n) \le c_2 \cdot n^2$ for all $n \ge n_0$.
• Thus, by definition, $\Theta(n^2)$ is an infinite set.
• Some members of $\Theta(n^2)$: $2n^2,\; 10^{10} n^2,\; 7n^2 + 2022n + 1700,\; 9000n^2,\; \cdots$
## Big-Theta Notation Illustration

For the Max algorithm, for n ≥ 1, T(n) ≤ 3n and T(n) ≥ 2n.
Equivalently, 2n ≤ T(n) ≤ 3n for n ≥ 1. Thus, we conclude that T(n) ∈ Θ(n).
For any two functions f(n) and g(n), it is true that f(n) ∈ Θ(g(n)) if and only if f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
## Further Insights Into Asymptotic Notation

• The Big-Oh notation is helpful in understanding the worst-case complexity.
• It is best if the complexity of an algorithm can be characterized using the Θ notation. However ...
• It is not always easy to obtain a tight bound (Θ) on T(n).
• An asymptotic upper-bound analysis (Big-Oh) works well most of the time.
• Remember that multiplicative constants are dropped when the Big-Oh notation is used.
• Asymptotic analysis with the Big-Oh makes sense for comparing the performance of different algorithms as the size of the problem increases.
• The Big-Oh also provides insights on what happens when the same algorithm is used on a problem with increasing sizes.
## Insights (cont.)

Use an O(n²) algorithm to solve a problem of size n and of size 2n.
◦ Since $T(n) \le k \cdot n^2$ and $T(2n) \le k \cdot (2n)^2$, we may conclude that $T(2n) \approx 4\,T(n)$.
◦ When the size of the problem is doubled, the time requirement for an O(n²) algorithm roughly quadruples.

Use an O(n) algorithm to solve a problem of size n and of size 2n.
◦ Since $T(n) \le k \cdot n$ and $T(2n) \le k \cdot (2n)$, we may conclude that $T(2n) \approx 2\,T(n)$.
◦ When the size of the problem is doubled, the time requirement for an O(n) algorithm is doubled as well.

Use an O(log n) algorithm to solve a problem of size n and of size 2n.
◦ Since $T(n) \le k \cdot \log n$ and $T(2n) \le k \cdot \log(2n)$, we may conclude (assuming the base of the logarithm is 2) that $T(2n) \approx T(n) + k$.
◦ When the size of the problem is doubled, the time requirement for an O(log n) algorithm just goes up by k, a constant.
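The doubling behavior above can be illustrated with the model run times $T(n) = n^2$, $T(n) = n$, and $T(n) = \log_2 n$ (illustrative only; real algorithms also carry lower-order terms):

```python
# Ratios/differences of T(2n) vs T(n) for the three model run times.
import math

for n in (10**3, 10**6):
    print(f"n={n}:",
          (2 * n)**2 / n**2,                    # → 4.0   (quadruples)
          (2 * n) / n,                          # → 2.0   (doubles)
          math.log2(2 * n) - math.log2(n))      # ≈ 1.0   (constant increase)
```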
## More on Asymptotic Analysis

• If $f(n) \in O(g(n))$, then $k f(n) \in O(g(n))$ for any constant k.
• If $f_1(n) \in O(g(n))$ and $f_2(n) \in O(g(n))$, then $f_1(n) + f_2(n) \in O(g(n))$.
• If $f_1(n) \in O(g(n))$ and $f_2(n) \in O(h(n))$, then $f_1(n) + f_2(n) \in O(g(n) + h(n))$.
• If $f_1(n) \in O(g(n))$ and $f_2(n) \in O(h(n))$, then $f_1(n) \cdot f_2(n) \in O(g(n) \cdot h(n))$.
• If $f(n) \in O(g(n))$ and $g(n) \in O(h(n))$, then $f(n) \in O(h(n))$.

Suppose after theoretical analysis you conclude that the runtime of an algorithm is O(n²), but when you actually run it, it is much faster than the O(n²) bound suggests. You may wish to look deeper into the analysis and see if a tighter bound such as O(n log n) can be obtained. A reason to study analysis of algorithms.

In another scenario, you may have an O(n²) algorithm and have proved that this is a tight bound. However, a competitor may claim that they have an algorithm that performs better. You may then wish to rethink your algorithm and see if its design can be improved. Another reason to study design and analysis of algorithms.
## The Little-oh (o) Notation

For any function g(n), o(g(n)) is defined as the set of functions f(n) such that for every constant c > 0 there exists a positive constant $n_0$ satisfying
$$0 \le f(n) < c \cdot g(n) \text{ for all } n \ge n_0.$$
Equivalently, $f(n) \in o(g(n))$ if
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0.$$
$f(n) \in O(g(n))$ means that f(n) grows no faster than g(n).
$f(n) \in o(g(n))$ means that f(n) grows strictly slower than g(n).
Note that $1000n \in o(n^2)$, but $1000n^2 \notin o(n^2)$.
On the other hand, $1000n \in O(n^2)$, and $1000n^2 \in O(n^2)$.
## The Little-omega (ω) Notation

For any function g(n), ω(g(n)) is defined as the set of functions f(n) such that for every constant c > 0 there exists a positive constant $n_0$ satisfying
$$0 \le c \cdot g(n) < f(n) \text{ for all } n \ge n_0.$$
Equivalently, $f(n) \in \omega(g(n))$ if
$$\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty.$$
$f(n) \in \Omega(g(n))$ means that f(n) grows no slower than g(n).
$f(n) \in \omega(g(n))$ means that f(n) grows strictly faster than g(n).
Note that $1000n^2 \in \omega(n)$, but $1000n \notin \omega(n)$.
On the other hand, $1000n^2 \in \Omega(n)$, and $1000n \in \Omega(n)$.
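The limit characterizations of o and ω can be watched numerically (a numeric illustration of the two examples above, not a proof):

```python
# f(n)/g(n) shrinks toward 0 when f is in o(g), and grows without bound
# when f is in omega(g).
f_o = lambda n: 1000 * n        # 1000n  ∈ o(n^2)
f_w = lambda n: 1000 * n**2     # 1000n^2 ∈ ω(n)
g_o = lambda n: n**2
g_w = lambda n: n

for n in (10**2, 10**4, 10**6):
    print(n, f_o(n) / g_o(n), f_w(n) / g_w(n))
# first ratio: 10.0, 0.1, 0.001 (→ 0); second ratio grows without bound
```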
## Problems with Algorithms of Known Time Complexities

• Constant or O(1):
  ◦ Given a sequence of numbers in increasing order, find the minimum and maximum elements
  ◦ Add the first 10 positive odd integers
• Logarithmic or O(log n):
  ◦ Binary search on a sorted array of size n
• Linear or O(n):
  ◦ Find the smallest element of an integer array of size n
  ◦ Linear search on an (unsorted) array of size n
• Linearithmic or O(n log n):
  ◦ Merge-sort an n-element array of comparable items
  ◦ The Fast Fourier Transform (FFT) of an n-vector
• Quadratic or O(n²):
  ◦ Bubble-sort an n-element array of comparable items
  ◦ Grade-school multiplication of two n-digit numbers
• Cubic or O(n³):
  ◦ Naive matrix multiplication
  ◦ Gaussian elimination for solving an n × n system of linear equations
## Exponential vs. Polynomial Complexities

• Exponential or O(kⁿ) with k > 1:
  ◦ List all subsets of an n-element set
  ◦ Given a Boolean function f of n variables, determine whether there is an assignment of truth values to the variables such that f evaluates to TRUE
  ◦ List all permutations of n objects
• Polynomial-Time Algorithm:
  ◦ An algorithm whose time complexity is $O(n^\alpha)$ for some real number α > 0