Time Complexity
Dr. Jicheng Fu
Department of Computer Science
University of Central Oklahoma
Objectives (Section 7.6)

- The concepts of space complexity and time complexity
- Using the step count to derive a function for the time complexity of a program
- Asymptotics and orders of magnitude
- The big-O and related notations
- Time complexity of recursive algorithms
Motivation

[Figure: a sorted array arr (values 3, 5, 10, 16, 20, 22, 28, 36, 60, ...), used to motivate analyzing how the work of an algorithm grows with the problem size n]
Evaluate an Algorithm

- Two important measures to evaluate an algorithm:
  - Space complexity
  - Time complexity

Space complexity

- The maximum storage space needed by an algorithm
- Expressed as a function of the problem size
- Relatively easy to evaluate

Time complexity

- Determining the number of steps (operations) needed as a function of the problem size
- Our focus
Step Count

- Count the exact number of steps needed by an algorithm, as a function of the problem size
- Each atomic operation is counted as one step:
  - Arithmetic operations
  - Comparison operations
  - Other operations, such as assignment and return
Algorithm 1

int count_1(int n)
{
    sum = 0                 // 1 step
    for i=1 to n {          // 2n steps
        for j=i to n {      // 2 * sum_{i=1..n}(n+1-i) steps
            sum++           // sum_{i=1..n}(n+1-i) steps
        }
    }
    return sum              // 1 step
}

The running time is

$$2 + 2n + 3\sum_{i=1}^{n}(n+1-i) = \frac{3}{2}n^2 + \frac{7}{2}n + 2$$

Note: $\sum_{i=1}^{n}(n+1-i) = \sum_{i=1}^{n} i = n(n+1)/2$
Algorithm 2

int count_2(int n)
{
    sum = 0            // 1 step
    for i=1 to n {     // 2n steps
        sum += n+1-i   // 3n steps
    }
    return sum         // 1 step
}

The running time is $5n + 2$.
Algorithm 3

int count_3(int n)
{
    sum = n(n+1)/2   // 4 steps
    return sum       // 1 step
}

The running time is 5 time units, a constant.
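As a quick sanity check, here is a minimal runnable C++ version of the three algorithms (the pseudocode above is language-neutral; the int types and the main driver are illustrative assumptions). All three return the same value, n(n+1)/2, but with quadratic, linear, and constant step counts respectively.

#include <iostream>

// Quadratic steps: increments sum once per pair (i, j) with i <= j.
int count_1(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        for (int j = i; j <= n; j++)
            sum++;
    return sum;
}

// Linear steps: adds the length of each inner run directly.
int count_2(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum += n + 1 - i;
    return sum;
}

// Constant steps: the closed-form formula.
int count_3(int n) {
    return n * (n + 1) / 2;
}

int main() {
    for (int n : {1, 5, 10})   // all three agree for each n
        std::cout << n << ": " << count_1(n) << " "
                  << count_2(n) << " " << count_3(n) << "\n";
}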
Asymptotics

- An exact step count is usually unnecessary
  - It is too dependent on the programming language and the programmer's style
  - Such details make little difference in whether the algorithm is feasible or not
- A change in the fundamental method, however, can make a vital difference
  - If the number of operations is proportional to n, then doubling n doubles the running time
  - If the number of operations is proportional to 2^n, then doubling n squares the number of operations
- Example:
  - Assume that a computation that takes 1 second involves 10^6 operations
  - Also assume that doubling the problem size raises this to 10^12 operations
  - The running time then increases from 1 second to about 11.5 days:
    10^12 operations / 10^6 operations per second = 10^6 seconds ≈ 11.5 days
- Instead of an exact step count, we want a notation that
  - accurately reflects the increase of computation time with the size, but
  - ignores details that have little effect on the total
- Asymptotics: the study of functions of a parameter n, as n becomes larger and larger without bound
Orders of Magnitude

- The idea:
  - Suppose a function f(n) measures the amount of work done by an algorithm on a problem of size n
  - Compare f(n) for large values of n with some well-known function g(n) whose behavior we already understand
- To compare f(n) against g(n):
  - take the quotient f(n) / g(n), and
  - take the limit of the quotient as n increases without bound
Definition

- If $\lim_{n\to\infty} f(n)/g(n) = 0$, then f(n) has strictly smaller order of magnitude than g(n).
- If $\lim_{n\to\infty} f(n)/g(n)$ is finite and nonzero, then f(n) has the same order of magnitude as g(n).
- If $\lim_{n\to\infty} f(n)/g(n) = \infty$, then f(n) has strictly greater order of magnitude than g(n).

Common choices for g(n):

- g(n) = 1        Constant function
- g(n) = log n    Logarithmic function
- g(n) = n        Linear function
- g(n) = n^2      Quadratic function
- g(n) = n^3      Cubic function
- g(n) = 2^n      Exponential function

Notes:

- The second case, when f(n) and g(n) have the same order of magnitude, includes all values of the limit except 0 and ∞
  - Hence changing the running time of an algorithm by any nonzero constant factor will not affect its order of magnitude
Polynomials

- If f(n) is a polynomial in n with degree r, then f(n) has the same order of magnitude as n^r
- If r < s, then n^r has strictly smaller order of magnitude than n^s

Example 1: f(n) = 3n^2 - 100n - 25, g(n) = n^3

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{3n^2 - 100n - 25}{n^3} = 0$$

so 3n^2 - 100n - 25 has strictly smaller order than n^3.

Example 2: f(n) = 3n^2 - 100n - 25, g(n) = n

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{3n^2 - 100n - 25}{n} = \infty$$

so 3n^2 - 100n - 25 has strictly greater order than n.

Example 3: f(n) = 3n^2 - 100n - 25, g(n) = n^2

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{3n^2 - 100n - 25}{n^2} = 3$$

so 3n^2 - 100n - 25 has the same order as n^2.
Logarithms

- The order of magnitude of a logarithm does not depend on the base of the logarithm
  - Let log_a n and log_b n be logarithms to two different bases a > 1 and b > 1. Since log_b n = log_b a · log_a n,

$$\lim_{n\to\infty} \frac{\log_b n}{\log_a n} = \lim_{n\to\infty} \frac{\log_b a \cdot \log_a n}{\log_a n} = \log_b a$$

  which is finite and nonzero, so the two logarithms have the same order of magnitude.
- Since the base of the logarithm makes no difference to the order of magnitude, we generally just write log without a base
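As a concrete instance (added here for illustration, not from the slides): with a = 2 and b = 10,

$$\frac{\log_2 n}{\log_{10} n} = \log_2 10 \approx 3.32 \quad \text{for every } n > 1,$$

a constant ratio, so log_2 n and log_10 n have the same order of magnitude.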
- Compare the order of magnitude of a logarithm log n with a power of n, say n^r (r > 0)
  - It is difficult to calculate the quotient log n / n^r directly
  - We need a mathematical tool: L'Hôpital's Rule
L'Hôpital's Rule

- Suppose that:
  - f(n) and g(n) are differentiable functions for all sufficiently large n, with derivatives f'(n) and g'(n), respectively
  - $\lim_{n\to\infty} f(n) = \infty$ and $\lim_{n\to\infty} g(n) = \infty$
  - $\lim_{n\to\infty} f'(n)/g'(n)$ exists
- Then $\lim_{n\to\infty} f(n)/g(n)$ exists, and

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{f'(n)}{g'(n)}$$
Use L'Hôpital's Rule

Let f(n) = ln n and g(n) = n^r with r > 0. Then

$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = \lim_{n\to\infty} \frac{\ln n}{n^r} = \lim_{n\to\infty} \frac{f'(n)}{g'(n)} = \lim_{n\to\infty} \frac{1/n}{r n^{r-1}} = \lim_{n\to\infty} \frac{1}{r n^r} = 0$$

Conclusion:

- log n has strictly smaller order of magnitude than any positive power n^r of n, r > 0
Exponential Functions

- Compare the order of magnitude of an exponential function a^n with a power of n, n^r (r > 0)
  - Use L'Hôpital's Rule again (pp. 308)
- Conclusion:
  - Any exponential function a^n, for any real number a > 1, has strictly greater order of magnitude than any power n^r of n, for any positive integer r
- Compare the order of magnitude of two exponential functions with different bases, a^n and b^n
  - Assume 0 ≤ a < b. Then

$$\lim_{n\to\infty} \frac{a^n}{b^n} = \lim_{n\to\infty} \left(\frac{a}{b}\right)^n = 0$$

- Conclusion:
  - If 0 ≤ a < b, then a^n has strictly smaller order of magnitude than b^n
Common Orders

- For most algorithm analyses, only a short list of functions is needed:
  - 1 (constant), log n (logarithmic), n (linear), n^2 (quadratic), n^3 (cubic), 2^n (exponential)
  - They are in strictly increasing order of magnitude
- One more important function: n log n (see pp. 309)
  - The order of some advanced sorting algorithms
  - n log n has strictly greater order of magnitude than n
  - n log n has strictly smaller order of magnitude than any power n^r for any r > 1
Growth Rate of Common Functions
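The original slide shows a plot of these growth rates. As a stand-in, the following minimal C++ sketch (an illustrative assumption, not from the slides) tabulates the common functions for a few values of n:

#include <cmath>
#include <cstdio>

int main() {
    std::printf("%6s %8s %10s %8s %8s %12s\n",
                "n", "log n", "n log n", "n^2", "n^3", "2^n");
    for (int n : {1, 2, 4, 8, 16, 32}) {
        double lg = std::log2(n);
        std::printf("%6d %8.1f %10.1f %8d %8d %12.0f\n",
                    n, lg, n * lg, n * n, n * n * n, std::pow(2.0, n));
    }
}

Even in this tiny table, 2^n overtakes every polynomial column almost immediately, which is the point of the plot.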
The Big-O and Related Notations

- f(n) is o(g(n)) ("little oh") if f(n) has strictly smaller order of magnitude than g(n)
- f(n) is O(g(n)) ("big Oh") if f(n) has order of magnitude less than or equal to that of g(n)
- f(n) is Θ(g(n)) ("big Theta") if f(n) has the same order of magnitude as g(n)
- f(n) is Ω(g(n)) ("big Omega") if f(n) has order of magnitude greater than or equal to that of g(n)
Examples

- On a list of length n, sequential search has running time Θ(n)
- On an ordered list of length n, binary search has running time Θ(log n)
- Retrieval from a contiguous list of length n has running time O(1)
- Retrieval from a linked list of length n has running time O(n)
- Any algorithm that uses comparisons of keys to search a list of length n must make Ω(log n) comparisons of keys
- If f(n) is a polynomial in n of degree r, then f(n) is Θ(n^r)
  - If r < s, then n^r is o(n^s)
- If a > 1 and b > 1, then log_a(n) is Θ(log_b(n))
  - log n is o(n^r) for any r > 0
- For any real number a > 1 and any positive integer r, n^r is o(a^n)
- If 0 ≤ a < b, then a^n is o(b^n)
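Tying these notations to the earlier polynomial example, the same function can be placed in all four (a worked instance added here for illustration):

$$3n^2 - 100n - 25 \ \text{is} \ \Theta(n^2), \quad O(n^3), \quad \Omega(n), \quad o(n^3)$$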
Algorithm 4

int count_0(int n)
{
    sum = 0               // O(1)
    for i=1 to n {        // O(n)
        for j=1 to n {    // O(n^2)
            if i<=j then  // O(n^2)
                sum++     // O(n^2)
        }
    }
    return sum            // O(1)
}

The running time is O(n^2).
Summary of Running Times

Algorithm     Running Time             Order of Running Time
Algorithm 1   (3/2)n^2 + (7/2)n + 2    n^2
Algorithm 2   5n + 2                   n
Algorithm 3   5                        Constant
Asymptotic Running Times

Algorithm     Running Time             Asymptotic Bound
Algorithm 1   (3/2)n^2 + (7/2)n + 2    O(n^2)
Algorithm 2   5n + 2                   O(n)
Algorithm 3   5                        O(1)
Algorithm 4   -                        O(n^2)
More Examples

Determine the asymptotic running time of each snippet. (Assume that the value of n is the size of the problem; a worked analysis follows below.)

1)
int x = 0;
for (int i = 0; i < 100; i++)
    x += i;

2)
int x = 0;
for (int i = 0; i < n * n; i++)
    x += i;

3)
int x = 0;
for (int i = 1; i < n; i *= 2)
    x += i;

4)
int x = 0;
for (int i = 1; i < n; i++)
    for (int j = 1; j < i; j++)
        x += i + j;

5)
int x = 0;
for (int i = 1; i < n; i++)
    for (int j = i; j < 100; j++)
        x += i + j;

6)
int x = 0;
for (int i = 1; i < n; i++)
    for (int j = n; j > i; j /= 3)
        x += i + j;

7)
int x = 0;
for (int i = 1; i < n * n; i++)
    for (int j = 1; j < i; j++)
        x += i + j;
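A worked analysis of the snippets (the answers are not given on the slide; they follow from the rules above):

1) The loop runs 100 times regardless of n: O(1).
2) The loop runs n^2 times: O(n^2).
3) i doubles each iteration, so the loop runs about log_2 n times: O(log n).
4) The inner loop runs i - 1 times; summing over i = 1, ..., n - 1 gives about n^2/2 steps: O(n^2).
5) The inner loop runs at most 100 times, and not at all once i reaches 100, so the total work is a constant amount plus the n outer-loop tests: O(n).
6) The inner loop divides j by 3 each pass, so it runs O(log n) times per outer iteration: O(n log n).
7) The outer loop runs n^2 times and the inner loop up to i times, giving about n^4/2 steps: O(n^4).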
Review: Arithmetic Sequences/Progressions

- An arithmetic sequence is a sequence of numbers such that the difference of any two successive members of the sequence is a constant
- If the first term of an arithmetic sequence is a_1 and the common difference of successive members is d, then the nth term a_n of the sequence is:

$$a_n = a_1 + (n - 1)d$$
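For example, with a_1 = 2 and d = 4 the sequence is 2, 6, 10, 14, ..., and a_n = 2 + 4(n - 1). Indexed from 0 instead of 1, the same formula reads T(n) = T(0) + nd, which is the form used in the factorial analysis below.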
Analyzing Recursive Algorithms

- Often a recurrence equation is used as the starting point for analyzing a recursive algorithm
  - In the recurrence equation, T(n) denotes the running time of the recursive algorithm for an input of size n
- We then try to convert the recurrence equation into a closed-form equation to better understand the time complexity
  - Closed form: no reference to T on the right side of the equation
  - Conversion to the closed-form solution can be very challenging
Example: Factorial

int factorial (int n)
/* Pre:  n is an integer no less than 0
   Post: The factorial of n (n!) is returned
   Uses: The function factorial recursively */
{
    if (n == 0)
        return 1;
    else
        return n * factorial (n - 1);
}

The time complexity of factorial(n) is:

$$T(n) = \begin{cases} 2 & \text{if } n = 0 \\ T(n-1) + 4 & \text{if } n > 0 \end{cases}$$

(The 4 is 3 + 1: the comparison is included.)

- T(n) is an arithmetic sequence with common difference d = 4 and first term T(0) = 2:

$$T(n) = T(0) + nd = 2 + 4n$$

- The time complexity of factorial is O(n)
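As a sanity check, the function can be instrumented with a step counter (the global counter and the charging scheme below are illustrative assumptions: one step each for the comparison, the subtraction, the multiplication, and the return):

#include <iostream>

long steps = 0;  // global step counter (illustrative)

int factorial(int n) {
    steps += 1;              // the comparison n == 0
    if (n == 0) {
        steps += 1;          // return 1
        return 1;
    }
    steps += 3;              // subtraction, multiplication, return
    return n * factorial(n - 1);
}

int main() {
    for (int n : {0, 1, 5, 10}) {
        steps = 0;
        factorial(n);
        std::cout << "n=" << n << "  steps=" << steps
                  << "  4n+2=" << 4 * n + 2 << "\n";  // the two columns match
    }
}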
Recurrence Equations Examples

- Divide and conquer: recursive merge sort

template <class Record>
void Sortable_list<Record> :: recursive_merge_sort(int low, int high)
/* Post: The entries of the sortable list between index low and high
         have been rearranged so that their keys are sorted into
         nondecreasing order.
   Uses: The contiguous List */
{
    if (high > low) {
        recursive_merge_sort(low, (high + low) / 2);
        recursive_merge_sort((high + low) / 2 + 1, high);
        merge(low, high);
    }
}

- The time complexity of recursive_merge_sort is:

$$T(n) = \begin{cases} 1 & \text{if } n \le 1 \\ T(\lceil n/2 \rceil) + T(\lfloor n/2 \rfloor) + cn & \text{if } n > 1 \end{cases}$$

- To obtain a closed-form equation for T(n), we assume n is a power of 2:

$$T(n) = 2T(n/2) + cn = 2(2T(n/2^2) + cn/2) + cn = 2^2 T(n/2^2) + 2cn = 2^3 T(n/2^3) + 3cn = \cdots = 2^i T(n/2^i) + icn$$

- When i = log_2 n, we have:

$$T(n) = 2^{\log n}\, T(n/2^{\log n}) + (\log n)cn = nT(1) + cn\log n = n + cn\log n$$

- The time complexity is O(n log n)
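A small numeric check of the closed form (the recurrence evaluator below and the choice c = 1 are illustrative assumptions):

#include <cmath>
#include <iostream>

// Evaluate the recurrence T(n) = 2T(n/2) + c*n with T(1) = 1 and c = 1,
// for n a power of 2.
long T(long n) {
    if (n <= 1) return 1;
    return 2 * T(n / 2) + n;
}

int main() {
    for (long n = 1; n <= 1024; n *= 2) {
        long closed = n + n * std::lround(std::log2(n));  // n + c*n*log2(n)
        std::cout << "n=" << n << "  T(n)=" << T(n)
                  << "  n + n*log2(n)=" << closed << "\n";  // the two agree
    }
}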
- Fibonacci numbers

int fibonacci(int n)
/* fibonacci: recursive version */
{
    if (n <= 0) return 0;
    else if (n == 1) return 1;
    else return fibonacci(n - 1) + fibonacci(n - 2);
}

- The time complexity of fibonacci is:

$$T(n) = \begin{cases} 2 & \text{if } n \le 0 \\ 3 & \text{if } n = 1 \\ T(n-1) + T(n-2) + 6 & \text{if } n > 1 \end{cases}$$

- Theorem (in Section A.4): If F(n) is defined by a Fibonacci sequence, then F(n) is $\Theta(g^n)$, where $g = (1 + \sqrt{5})/2$
- The time complexity is exponential: O(g^n)
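The exponential blow-up can be observed directly by counting calls (an illustrative sketch; it tracks the number of invocations rather than the slide's exact step count):

#include <cmath>
#include <iostream>

long calls = 0;  // number of invocations of fibonacci (illustrative)

int fibonacci(int n) {
    calls++;
    if (n <= 0) return 0;
    else if (n == 1) return 1;
    else return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    const double g = (1.0 + std::sqrt(5.0)) / 2.0;  // golden ratio
    for (int n : {10, 20, 30}) {
        calls = 0;
        fibonacci(n);
        // The call count grows roughly in proportion to g^n.
        std::cout << "n=" << n << "  calls=" << calls
                  << "  g^n=" << std::pow(g, n) << "\n";
    }
}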