Growth of Functions
How fast will your program run?

The running time of your program will depend upon:
- The algorithm
- The input
- Your implementation of the algorithm in a programming language
- The compiler you use
- The OS on your computer
- Your computer hardware
- Maybe other things: the temperature outside, other programs running on your computer, …

Our motivation:
- analyze the running time of an algorithm as a function of only simple parameters of the input.




Complexity

Complexity is the number of steps required to solve a problem.

The goal is
- to find the best algorithm to solve the problem with the fewest steps.

Complexity of Algorithms
- The size of the problem, n, is a measure of the quantity of the input data.
- The time needed by an algorithm, expressed as a function of the size of the problem it solves, is called the (time) complexity of the algorithm, T(n).



Measures of Algorithm Complexity

Let T(n) denote the number of operations required by an algorithm to solve a given class of problems.

Often T(n) depends on the particular input; in such cases one can talk about the
- Worst-case complexity,
- Best-case complexity, and
- Average-case complexity of an algorithm.


Measures of Algorithm Complexity

Worst-Case Running Time: the longest time for any input of size n
- provides an upper bound on running time for any input

Best-Case Running Time: the shortest time for any input of size n
- provides a lower bound on running time for any input

Average-Case Behavior: the expected performance averaged over all possible inputs
- generally better than worst-case behavior, but sometimes roughly as bad as the worst case
- often difficult to compute


Example: Sequential Search

Algorithm                                            Step Count

// Searches for x in array A of n items; returns
// index of found item, or n+1 if not found
Seq_Search( A[n]: array, x: item){                   0
    j = 1                                            1
    while ((j ≤ n) and (A[j] ≠ x)){                  n+1
        j = j + 1                                    n
    }                                                0
    return j                                         1
}                                                    0
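For readers who want to run it, here is a minimal Python version of the same algorithm (an illustrative sketch, not code from the slides; it is 0-indexed and returns len(A) when x is absent, the analogue of the slide's n+1 convention):

    def seq_search(A, x):
        # Sequential search: index of x in A, or len(A) if x is absent.
        j = 0
        while j < len(A) and A[j] != x:   # up to n+1 loop tests in the worst case
            j += 1                        # up to n increments in the worst case
        return j

    print(seq_search([4, 8, 15, 16, 23, 42], 23))   # -> 4
    print(seq_search([4, 8, 15, 16, 23, 42], 99))   # -> 6, i.e. len(A): not found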



Example: Sequential Search

Worst-case running time
- when x is not in the original array A
- in this case, the while loop needs 2(n + 1) comparisons + c other operations
- So, T(n) = 2(n + 1) + c  ⇒  Linear complexity

Best-case running time
- when x is found in A[1]
- in this case, the while loop needs 2 comparisons + c other operations
- So, T(n) = 2 + c  ⇒  Constant complexity




Example: Sequential Search

Average-case running time
- assume x is equally likely to equal A[1], A[2], …, A[n]
- in this case, Pr[x = A[i]] = 1/n, for 1 ≤ i ≤ n
- and i comparisons are needed if x is found in the ith position
- then, the average-case running time is

$$\sum_{i=1}^{n} \Pr[x = A[i]] \cdot i = \sum_{i=1}^{n} \frac{i}{n} = \frac{1}{n}\sum_{i=1}^{n} i = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{n+1}{2}$$

So, T(n) = (n + 1)/2  ⇒  Linear complexity
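This result is easy to check empirically. A small simulation sketch (illustrative, not from the slides), placing the target uniformly at random exactly as the analysis assumes:

    import random

    def comparisons_to_find(A, x):
        # Count of elements inspected until x is found (1-indexed count).
        for count, value in enumerate(A, start=1):
            if value == x:
                return count
        return len(A) + 1

    # Average over many trials with x equally likely to be any element of A.
    n, trials = 100, 100_000
    A = list(range(n))
    avg = sum(comparisons_to_find(A, random.choice(A)) for _ in range(trials)) / trials
    print(avg, (n + 1) / 2)   # both close to 50.5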


Order of Growth

For very large input sizes, it is the rate of growth, or order of growth, that matters asymptotically.

We can ignore the lower-order terms, since they are relatively insignificant for very large n.

We can also ignore the leading term's constant coefficient, since it is not as important for the rate of growth in computational efficiency for very large n.

Lower-order functions of n are normally considered more efficient.



Asymptotic Notation

By now we should have an intuitive feel for asymptotic (big-O) notation:
- What does O(n) running time mean? O(n²)? O(n lg n)?

Our first task is to define this notation more formally and completely.

Big-O notation
(Upper Bound – Worst Case)

For a given function g(n), we denote by O(g(n)) the set of functions
- O(g(n)) = {f(n) : there exist positive constants c > 0 and n₀ > 0, such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}

We say g(n) is an asymptotic upper bound for f(n):

$$0 \le \lim_{n\to\infty} \frac{f(n)}{g(n)} < \infty$$

O(g(n)) means that as n → ∞, the execution time f(n) is at most c·g(n) for some positive constant c.

What does O(g(n)) running time mean?
- The worst-case running time (upper bound) is within a constant factor of g(n)
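To build intuition for the constants c and n₀, one can spot-check the defining inequality numerically. A minimal sketch (the helper below is an illustrative assumption; a finite scan is evidence, not a proof):

    def check_big_o(f, g, c, n0, n_max=10_000):
        # Check 0 <= f(n) <= c*g(n) for all n0 <= n <= n_max (finite evidence only).
        return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

    # Witnesses c = 3, n0 = 7 for the claim 2n + 7 = O(n) (see Example 1 below):
    print(check_big_o(lambda n: 2 * n + 7, lambda n: n, c=3, n0=7))   # True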


Big-O notation
(Upper Bound – Worst Case)

[Figure: f(n) = O(g(n)) — a plot of time versus n; beyond n₀, the curve f(n) stays at or below c·g(n).]

Big-O notation
(Upper Bound – Worst Case)

This is a mathematically formal way of ignoring constant factors and looking only at the "shape" of the function.

f(n) = O(g(n)) should be read as saying that "f(n) is at most g(n), up to constant factors".

We usually have f(n) be the running time of an algorithm and g(n) a nicely written function.
- E.g., the running time of the insertion sort algorithm is O(n²).


Big-O notation
(Upper Bound – Worst Case)

Example 1:
Is 2n + 7 = O(n)?
- Let T(n) = 2n + 7
- T(n) = n(2 + 7/n)
- Note that for n ≥ 7: 2 + 7/n ≤ 2 + 7/7 = 3
- So T(n) ≤ 3·n for all n ≥ 7  (c = 3, n₀ = 7)
- Then T(n) = O(n)
- Limit check: limₙ→∞ [T(n)/n] = 2, and 0 ≤ 2 < ∞  ⇒  T(n) = O(n)


Big-O notation
(Upper Bound – Worst Case)

Example 2:
Is 5n³ + 2n² + n + 10⁶ = O(n³)?
- Let T(n) = 5n³ + 2n² + n + 10⁶
- T(n) = n³(5 + 2/n + 1/n² + 10⁶/n³)
- Note that for n ≥ 100: 5 + 2/n + 1/n² + 10⁶/n³ ≤ 5 + 2/100 + 1/10000 + 1 ≈ 6.02 ≤ 6.05
- So T(n) ≤ 6.05·n³ for all n ≥ 100  (c = 6.05, n₀ = 100)
- Then T(n) = O(n³)
- Limit check: limₙ→∞ [T(n)/n³] = 5, and 0 ≤ 5 < ∞  ⇒  T(n) = O(n³)


Big-O notation
(Upper Bound – Worst Case)

Express the execution time as a function of the input size n.
Since only the growth rate matters, we can ignore the multiplicative constants and the lower-order terms, e.g.,
- n, n + 1, n + 80, 40n, n + lg n   are all O(n)
- n^1.1 + 10000000000n   is O(n^1.1)
- n² + 10⁶n   is O(n²)
- 3n² + 6n + lg n + 24.5   is O(n²)

O(1) ⊂ O(lg n) ⊂ O((lg n)³) ⊂ O(n) ⊂ O(n²) ⊂ O(n³) ⊂ O(n^lg n) ⊂ O(2^√n) ⊂ O(2^n) ⊂ O(n!) ⊂ O(n^n)

Constant ⊂ Logarithmic ⊂ Linear ⊂ Quadratic ⊂ Cubic ⊂ Exponential ⊂ Factorial
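To see the hierarchy numerically, a short script (illustrative, not from the slides) can tabulate a few of these functions side by side:

    import math

    funcs = [
        ("lg n",   lambda n: math.log2(n)),
        ("n",      lambda n: n),
        ("n lg n", lambda n: n * math.log2(n)),
        ("n^2",    lambda n: n ** 2),
        ("2^n",    lambda n: 2 ** n),
    ]

    for n in (10, 20, 40):
        print(f"n={n}: " + "  ".join(f"{name}={f(n):,.0f}" for name, f in funcs))
    # Even at n = 40, 2^n dwarfs every polynomial column.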

Ω-notation (Omega)
(Lower Bound – Best Case)

For a given function g(n), we denote by Ω(g(n)) the set of functions
- Ω(g(n)) = {f(n) : there exist positive constants c > 0 and n₀ > 0, such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}

We say g(n) is an asymptotic lower bound for f(n):

$$0 < \lim_{n\to\infty} \frac{f(n)}{g(n)} \le \infty$$

Ω(g(n)) means that as n → ∞, the execution time f(n) is at least c·g(n) for some positive constant c.

What does Ω(g(n)) running time mean?
- The best-case running time (lower bound) is within a constant factor of g(n)

Ω-notation
(Lower Bound – Best Case)

[Figure: f(n) = Ω(g(n)) — a plot of time versus n; beyond n₀, the curve f(n) stays at or above c·g(n).]

Ω-notation (Omega)
(Lower Bound – Best Case)

We say insertion sort's running time T(n) is Ω(n)
Proof:
- Suppose the best-case running time is T(n) = a·n + |b|
- Let g(n) = n
- a·g(n) = a·n ≤ a·n + |b| = T(n) for all n ≥ 1
- Hence T(n) = Ω(g(n)) = Ω(n)

For example:
- the worst-case running time of insertion sort is O(n²), and
- the best-case running time of insertion sort is Ω(n)
- Its running time therefore falls anywhere between a linear function of n and a quadratic function of n
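For reference, a minimal insertion sort sketch consistent with these bounds (an illustrative implementation; the slides themselves give no code). On sorted input the inner loop never iterates (the linear best case); on reverse-sorted input it iterates up to i times (the quadratic worst case):

    def insertion_sort(A):
        # Sort A in place: Omega(n) best case, O(n^2) worst case.
        for i in range(1, len(A)):
            key = A[i]
            j = i - 1
            while j >= 0 and A[j] > key:   # 0 iterations if already sorted
                A[j + 1] = A[j]            # up to i shifts if reverse-sorted
                j -= 1
            A[j + 1] = key
        return A

    print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]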

Ω-notation (Omega)
(Lower Bound – Best Case)

Examples:
- n, n + 1, n + 80, 40n   are all Ω(n)
- n^1.1 + 10000000000n   is Ω(n^1.1)
- n   is not Ω(n²), since limₙ→∞ [n/n²] = 0
- 3n² + 6n + lg n + 24.5   is Ω(n²)

Θ notation (Theta)
(Tight Bound)

In some cases,
- f(n) = O(g(n)) and f(n) = Ω(g(n))
- This means that the worst and best cases require the same amount of time, to within a constant factor
- In this case we use a new notation called "theta Θ"

For a given function g(n), we denote by Θ(g(n)) the set of functions
- Θ(g(n)) = {f(n) : there exist positive constants c₁ > 0, c₂ > 0, and n₀ > 0, such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}

Θ notation (Theta)
(Tight Bound)

We say g(n) is an asymptotic tight bound for f(n):

$$0 < \lim_{n\to\infty} \frac{f(n)}{g(n)} < \infty$$

Theta notation
- Θ(g(n)) means that as n → ∞, the execution time f(n) is at most c₂·g(n) and at least c₁·g(n) for some positive constants c₁ and c₂.

f(n) = Θ(g(n)) if and only if
- f(n) = O(g(n)) and f(n) = Ω(g(n))

Written set-theoretically, Θ(g(n)) ⊆ O(g(n)) and Θ(g(n)) ⊆ Ω(g(n))

Θ notation (Theta)
(Tight Bound)

[Figure: f(n) = Θ(g(n)) — a plot of time versus n; beyond n₀, the curve f(n) stays between c₁·g(n) and c₂·g(n).]

Θ notation (Theta)
(Tight Bound)

Example 1:
Show that n²/2 – 3n = Θ(n²) by providing positive constants c₁, c₂, and n₀ such that
- c₁n² ≤ n²/2 – 3n ≤ c₂n² for all n ≥ n₀
- Dividing by n² yields c₁ ≤ 1/2 – 3/n ≤ c₂
- 1/2 – 3/n ≤ c₂ is true for all n ≥ 1 by choosing c₂ = 1/2
- c₁ ≤ 1/2 – 3/n is true for all n ≥ 7 by choosing c₁ = 1/14

Thus by choosing
- c₁ = 1/14, c₂ = 1/2, n₀ = 7

we can verify that n²/2 – 3n = Θ(n²).

Limit check: limₙ→∞ [(n²/2 – 3n)/n²] = limₙ→∞ [1/2 – 3/n] = 1/2 – 0 = 1/2
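A finite spot-check of these witnesses in exact rational arithmetic (an illustrative sketch; scanning a range supports the chosen constants but is not itself a proof):

    from fractions import Fraction

    c1, c2, n0 = Fraction(1, 14), Fraction(1, 2), 7
    ok = all(c1 * n**2 <= Fraction(n**2, 2) - 3 * n <= c2 * n**2
             for n in range(n0, 100_001))
    print(ok)   # True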

Θ notation (Theta)
(Tight Bound)

Example 2:
Show that 6n³ ≠ Θ(n²)
- Suppose, for the purpose of contradiction, that c₂ and n₀ exist such that 6n³ ≤ c₂n² for all n ≥ n₀
- Dividing by n² yields n ≤ c₂/6, which cannot possibly hold for arbitrarily large n, since c₂ is constant
- Also, limₙ→∞ [6n³/n²] = limₙ→∞ [6n] = ∞, which is not finite, so the Θ limit test fails


Properties

Transitivity
- f(n) = Θ(g(n)) & g(n) = Θ(h(n))  ⇒  f(n) = Θ(h(n))
- f(n) = O(g(n)) & g(n) = O(h(n))  ⇒  f(n) = O(h(n))
- f(n) = Ω(g(n)) & g(n) = Ω(h(n))  ⇒  f(n) = Ω(h(n))

Symmetry
- f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

Transpose Symmetry
- f(n) = O(g(n)) if and only if g(n) = Ω(f(n))



Comparison of Functions

The asymptotic relations between functions f and g behave like comparisons between numbers a and b:

    functions              numbers
    f(n) = O(g(n))    ≈    a ≤ b
    f(n) = Ω(g(n))    ≈    a ≥ b
    f(n) = Θ(g(n))    ≈    a = b



Examples

f(n) = 5n² + 100n,  g(n) = 3n² + 2.  Is f = Θ(g)?
- Solution: f = Θ(n²) and n² = Θ(g), so f = Θ(g)

f(n) = log₃(n²),  g(n) = log₂(n³).  Is f = Θ(g)?
- Solution: using log_b(a) = log_c(a)/log_c(b),
  f = 2 lg n / lg 3 and g = 3 lg n, so f/g = 2/(3 lg 3), a non-zero constant
- Hence f = Θ(g)



Some Common Names for Complexity

    O(1)                Constant time
    O(lg n)             Logarithmic time
    O(lg² n)            Log-squared time
    O(n)                Linear time
    O(n²)               Quadratic time
    O(n³)               Cubic time
    O(n^i) for some i   Polynomial time
    O(2^n)              Exponential time


Growth Rates of Some Functions

Polynomial functions:
O(log n) ⊂ O(log² n) ⊂ O(√n) ⊂ O(n) ⊂ O(n log n) ⊂ O(n log² n) ⊂ O(n^1.5) ⊂ O(n²) ⊂ O(n³) ⊂ O(n⁴) ⊂ …

Exponential functions:
O(n^log n) = O(2^(log² n)) ⊂ O(2^n) ⊂ O(3^n) ⊂ O(4^n) ⊂ O(n!) ⊂ O(n^n)

Note: O(n^c) = O(2^(c log n)) for any constant c.


Floors & Ceilings

For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋
- read "the floor of x"

For any real number x, we denote the least integer greater than or equal to x by ⌈x⌉
- read "the ceiling of x"

For all real x,
- x – 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1

For any integer n,
- ⌈n/2⌉ + ⌊n/2⌋ = n


Polynomials

Given a positive integer d, a polynomial in n of degree d is a function P(n) of the form

$$P(n) = \sum_{i=0}^{d} a_i n^i$$

where a₀, a₁, …, a_d are the coefficients of the polynomial and a_d ≠ 0.

A polynomial is asymptotically positive if and only if a_d > 0.
- In that case, P(n) = Θ(n^d)



Exponents

- x^0 = 1
- x^1 = x
- x^a · x^b = x^(a+b)
- x^a / x^b = x^(a−b)
- (x^a)^b = (x^b)^a = x^(ab)
- x^n + x^n = 2x^n ≠ x^(2n)
- 2^n + 2^n = 2·2^n = 2^(n+1)
- x^(−1) = 1/x


Logarithms (1)

In computer science, all logarithms are to base 2 unless specified otherwise.

- x^a = b  iff  log_x(b) = a
- lg(n) = log₂(n)
- ln(n) = log_e(n)
- lg^k(n) = (lg(n))^k
- log_a(b) = log_c(b) / log_c(a), for any base c > 0, c ≠ 1
- lg(ab) = lg(a) + lg(b)
- lg(a/b) = lg(a) – lg(b)
- lg(a^b) = b · lg(a)


Logarithms (2)

- a = b^(log_b(a))
- a^(log_b(n)) = n^(log_b(a))
- lg(1/a) = –lg(a)
- log_b(a) = 1/log_a(b)
- lg(n) < n for all n > 0
- log_a(a) = 1
- lg(1) = 0, lg(2) = 1, lg(1024) = lg(2^10) = 10, lg(1048576) = lg(2^20) = 20
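These identities are easy to sanity-check numerically with Python's math module (an illustrative sketch):

    import math

    a, b, n = 3.0, 2.0, 5.0
    print(math.isclose(a, b ** math.log(a, b)))                     # a = b^(log_b a)
    print(math.isclose(a ** math.log(n, b), n ** math.log(a, b)))   # a^(log_b n) = n^(log_b a)
    print(math.isclose(math.log(a, b), 1 / math.log(b, a)))         # log_b a = 1/log_a b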



Summation

Why do we need to know this?
We need it for computing the running time of a given algorithm.

Example: Maximum Sub-vector
Given an array a[1…n] of numeric values (which can be positive, zero, or negative), determine the sub-vector a[i…j] (1 ≤ i ≤ j ≤ n) whose sum of elements is maximum over all sub-vectors.



Summation

Max-Subvector(A, n) {
    max-sum = 0;
    for i = 1 to n {
        for j = i to n {
            sum = 0;
            for k = i to j { sum += A[k] }
            max-sum = max(sum, max-sum);
        }
    }
    return max-sum;
}

Counting the innermost statement once per iteration gives

$$T(n) = \sum_{i=1}^{n} \sum_{j=i}^{n} \sum_{k=i}^{j} 1$$
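Evaluating the triple sum with the series formulas on the following slides shows the algorithm is cubic. A worked sketch of the simplification (the intermediate steps here are additions, not from the slide):

$$T(n) = \sum_{i=1}^{n} \sum_{j=i}^{n} (j - i + 1) = \sum_{i=1}^{n} \frac{(n-i+1)(n-i+2)}{2} = \sum_{\ell=1}^{n} \frac{\ell(\ell+1)}{2} = \frac{n(n+1)(n+2)}{6} = \Theta(n^3)$$

(substituting ℓ = n − i + 1 in the middle step).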



Summation

Arithmetic series:

$$\sum_{k=1}^{n} k = 1 + 2 + \dots + n = \frac{n(n+1)}{2} = \Theta(n^2)$$

Geometric series (x ≠ 1):

$$\sum_{k=0}^{n} x^k = 1 + x + x^2 + \dots + x^n = \frac{x^{n+1} - 1}{x - 1}$$

Linearity:

$$\sum_{k=1}^{n} (c\,a_k + b_k) = c \sum_{k=1}^{n} a_k + \sum_{k=1}^{n} b_k$$

Telescoping, for a₀, a₁, …, aₙ:

$$\sum_{k=1}^{n} (a_k - a_{k-1}) = a_n - a_0 \qquad \sum_{k=0}^{n-1} (a_k - a_{k+1}) = a_0 - a_n$$


Summation

Constant series: for integers a and b with b ≥ a ≥ 0,

$$\sum_{i=a}^{b} 1 = b - a + 1$$

Quadratic series: for n ≥ 0,

$$\sum_{i=1}^{n} i^2 = 1^2 + 2^2 + \dots + n^2 = \frac{2n^3 + 3n^2 + n}{6} = \frac{n(n+1)(2n+1)}{6}$$

Linear-geometric series: for n ≥ 0 and c ≠ 1,

$$\sum_{i=1}^{n} i\,c^i = c + 2c^2 + \dots + n c^n = \frac{n\,c^{n+2} - (n+1)\,c^{n+1} + c}{(c-1)^2}$$






Proof of Geometric series

A geometric series is one in which each term is a fixed multiple of the previous one; when the ratio has absolute value less than 1, the sum approaches a fixed number as n tends to infinity.
Proofs for geometric series are done by cancellation, as demonstrated below.
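A minimal sketch of the cancellation argument, consistent with the geometric-series closed form above:

Let $S = \sum_{k=0}^{n} x^k = 1 + x + \dots + x^n$. Then

$$xS - S = (x + x^2 + \dots + x^{n+1}) - (1 + x + \dots + x^n) = x^{n+1} - 1,$$

since every other term cancels. So for x ≠ 1, S = (x^(n+1) − 1)/(x − 1). When |x| < 1, x^(n+1) → 0 as n → ∞, and the infinite series sums to 1/(1 − x).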






Factorials

n! ("n factorial") is defined for integers n ≥ 0 as

$$n! = \begin{cases} 1 & \text{if } n = 0, \\ n \cdot (n-1)! & \text{if } n > 0 \end{cases}$$

- n! = 1 · 2 · 3 · … · n
- n! < n^n for n ≥ 2
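The recursive definition translates directly into code; a minimal sketch (illustrative, not from the slides):

    def factorial(n: int) -> int:
        # n! for integers n >= 0, following the recursive definition above.
        if n < 0:
            raise ValueError("n! is defined only for n >= 0")
        return 1 if n == 0 else n * factorial(n - 1)

    print(factorial(5))             # 120
    print(factorial(4) < 4 ** 4)    # True: n! < n^n for n >= 2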


Proofs

We have already proved formulas mathematically. There are three other ways to prove a theorem:

Counterexample:
- By providing an example in which the theorem does not hold, you prove the theorem to be false.
- For example: "All multiples of 5 are even." However, 3 × 5 is 15, which is odd. The theorem is false.

Contradiction:
- Assume the statement to be true. If the assumption implies that some known property is false, then the statement cannot be true.


Proof by Induction

Proof by induction has three standard parts:
- The first step is proving a base case, that is, establishing that the theorem is true for some small (usually degenerate) value(s); this step is almost always trivial.
- Next, an inductive hypothesis is assumed. Generally this means that the theorem is assumed to be true for all cases up to some limit k.
- Using this assumption, the theorem is then shown to be true for the next value, which is typically k + 1. Together these establish the theorem for every finite n.

Induction Example:
Gaussian Closed Form

Prove 1 + 2 + 3 + … + n = n(n + 1)/2
- Basis:
  If n = 0, then 0 = 0(0 + 1)/2
- Inductive hypothesis:
  Assume 1 + 2 + 3 + … + n = n(n + 1)/2
- Step (show true for n + 1):
  1 + 2 + … + n + (n + 1) = (1 + 2 + … + n) + (n + 1)
  = n(n + 1)/2 + (n + 1) = [n(n + 1) + 2(n + 1)]/2
  = (n + 1)(n + 2)/2 = (n + 1)((n + 1) + 1)/2

Induction Example:
Geometric Closed Form

Prove a^0 + a^1 + … + a^n = (a^(n+1) – 1)/(a – 1) for all a ≠ 1
- Basis: show that a^0 = (a^(0+1) – 1)/(a – 1)
  a^0 = 1 = (a^1 – 1)/(a – 1)
- Inductive hypothesis:
  Assume a^0 + a^1 + … + a^n = (a^(n+1) – 1)/(a – 1)
- Step (show true for n + 1):
  a^0 + a^1 + … + a^(n+1) = (a^0 + a^1 + … + a^n) + a^(n+1)
  = (a^(n+1) – 1)/(a – 1) + a^(n+1)
  = [a^(n+1) – 1 + a^(n+1)(a – 1)]/(a – 1)
  = (a^(n+2) – 1)/(a – 1) = (a^((n+1)+1) – 1)/(a – 1)
