02 Analysis


COSC 3100

Fundamentals of Analysis of Algorithm Efficiency

Instructor: Tanvir

Outline (Ch 2)

• Input size, orders of growth, worst-case, best-case, average-case (Sec 2.1)
• Asymptotic notations and basic efficiency classes (Sec 2.2)
• Analysis of nonrecursive algorithms (Sec 2.3)
• Analysis of recursive algorithms (Sec 2.4)
• Example: computing the n-th Fibonacci number (Sec 2.5)
• Empirical analysis, algorithm visualization (Sec 2.6, 2.7)

Analysis of Algorithms

• An investigation of an algorithm’s efficiency with respect to two resources:

– Running time (time efficiency or time complexity)

– Memory space (space efficiency or space complexity)

Analysis Framework

• Measuring an input’s size

• Measuring Running Time

• Orders of Growth (of algorithm’s efficiency function)

• Worst-case, best-case, and average-case efficiency

Measuring Input Size

• Efficiency is defined as a function of input size

• Input size depends on the problem

– Example 1: what is the input size for sorting n numbers?

– Example 2: what is the input size for evaluating p(x) = a_n x^n + a_(n-1) x^(n-1) + ⋯ + a_1 x + a_0?

– Example 3: what is the input size for multiplying two n×n matrices?

Units of measuring running time

• Measure running time in milliseconds, seconds, etc.

– Depends on which computer

• Count the number of times each operation is executed

– Difficult and unnecessary

• Count the number of times an algorithm’s

“basic operation” is executed

Measure running time in terms of # of basic operations

Basic operation: the operation that contributes the most to the total running time of an algorithm

• Usually the most time consuming operation in the algorithm’s innermost loop

Input size and basic operation examples

Problem Measure of input size

Basic operation

Search for a key in a list of n items

# of items in the list

Key comparison

Addition Add two n×n matrices

Polynomial evaluation

Dimensions of the matrices, n

Order of the polynomial

Multiplication

Theoretical Analysis of Time Efficiency

• Count the number of times the algorithm's basic operation is executed on inputs of size n: C(n)

T(n) ≈ c_op × C(n)

where
  T(n)  = running time
  n     = input size
  c_op  = execution time of the basic operation
  C(n)  = # of times the basic operation is executed

Ignore c_op; focus on orders of growth.

If C(n) = (1/2)n(n−1), how much longer does the algorithm run if the input size is doubled?
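One way to answer the doubling question is to take the ratio of the two counts; a short derivation:

```latex
\frac{C(2n)}{C(n)}
  = \frac{\tfrac{1}{2}\,(2n)(2n-1)}{\tfrac{1}{2}\,n(n-1)}
  = \frac{2(2n-1)}{n-1}
  \;\longrightarrow\; 4 \quad \text{as } n \to \infty
```

So doubling the input size makes the algorithm run roughly 4 times longer, and the answer does not depend on the constant c_op.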

Orders of Growth

• Why do we care about the order of growth of an algorithm’s efficiency function, i.e., the total number of basic operations?

Algorithm                       gcd(60, 24)   gcd(31415, 14142)   gcd(218922995834555169026, 135301852344706746049)
Euclid's                        3             10                  97
Consecutive integer checking    13            14142               > 10^20

How fast efficiency function grows as n gets larger and larger…

Orders of Growth (contd.)

n       lg n    n×lg n     n^2     n^3      2^n         n!
10      3.3     3.3×10     10^2    10^3     10^3        3.6×10^6
10^2    6.6     6.6×10^2   10^4    10^6     1.3×10^30   9.3×10^157
10^3    10      10×10^3    10^6    10^9
10^4    13      13×10^4    10^8    10^12
10^5    17      17×10^5    10^10   10^15
10^6    20      20×10^6    10^12   10^18

Orders of Growth (contd.)

• Plots of growth…

• Consider only the leading term

• Ignore the constant coefficients

Worst, Best, Average Cases

• Efficiency depends on input size n

• For some algorithms, efficiency depends on the type of input

• Example: Sequential Search

– Given a list of n elements and a search key k, find if k is in the list

– Scan list, compare elements with k until either found a match (success), or list is exhausted (failure)

Sequential Search Algorithm

ALGORITHM SequentialSearch(A[0..n-1], k)
//Input: An array A[0..n-1] and a search key k
//Output: Index of the first match or -1 if no match is found
i <- 0
while i < n and A[i] ≠ k do
    i <- i+1
if i < n
    return i    //A[i] = k
else
    return -1
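A direct Python transcription of the pseudocode, as a sanity check (the function name is mine; Python is used here only for illustration):

```python
def sequential_search(a, k):
    """Return the index of the first element equal to k, or -1 if absent."""
    i = 0
    # Basic operation: the key comparison a[i] != k
    while i < len(a) and a[i] != k:
        i += 1
    return i if i < len(a) else -1
```

In the worst case (key absent or in the last position) the comparison runs n times; in the best case it runs once.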

Different cases

• Worst case efficiency

– Efficiency (# of times the basic op. will be executed) for the worst case input of size n

– Runs longest among all possible inputs of size n

• Best case efficiency

– Runs fastest among all possible inputs of size n

• Average case efficiency

– Efficiency for a typical/random input of size n

– NOT the average of worst and best cases

– How do we find average case efficiency?

Average Case of Sequential Search

• Two assumptions:

– Probability of a successful search is p (0 ≤ p ≤ 1)

– The search key can be at any index with equal probability (uniform distribution)

C_avg(n) = expected # of comparisons
         = (expected # of comparisons for a successful search)
         + (expected # of comparisons when k is not in the list)
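Carrying the two assumptions through (a successful search ending at index i−1 costs i comparisons and has probability p/n; an unsuccessful search costs n comparisons and has probability 1−p) gives the standard result:

```latex
C_{avg}(n)
  = \underbrace{\sum_{i=1}^{n} i \cdot \frac{p}{n}}_{\text{successful search}}
  + \underbrace{n \cdot (1-p)}_{\text{unsuccessful search}}
  = \frac{p\,(n+1)}{2} + n\,(1-p)
```

Sanity checks: p = 1 gives (n+1)/2 (key always present, found halfway on average); p = 0 gives n (always scan the whole list).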

Summary of Analysis Framework

• Time and space efficiencies are functions of input size

• Time efficiency is # of times basic operation is executed

• Space efficiency is # of extra memory units consumed

• Efficiencies of some algorithms depend on type of input: requiring worst, best, average case analysis

• Focus is on order of growth of running time (or extra memory units) as input size goes to infinity

Asymptotic Growth Rate

• 3 notations used to compare orders of growth of an algorithm’s basic operation count

– O(g(n)): Set of functions that grow no faster than g(n)

– Ω(g(n)): Set of functions that grow at least as fast as g(n)

– Θ(g(n)): Set of functions that grow at the same rate as g(n)

O(big oh)-Notation

[Figure: t(n) stays below c × g(n) for all n ≥ n₀; what happens before n₀ doesn't matter]

t(n) є O(g(n))

O-Notation (contd.)

Definition: A function t(n) is said to be in O(g(n)), denoted t(n) є O(g(n)), if t(n) is bounded above by some positive constant multiple of g(n) for sufficiently large n.

• If we can find positive constants c and n₀ such that: t(n) ≤ c × g(n) for all n ≥ n₀

O-Notation (contd.)

• Is 100n+5 є O(n^2)?

• Is 2^(n+1) є O(2^n)? Is 2^(2n) є O(2^n)?

• Is (1/2)n(n−1) є O(n^2)?

Ω(big omega)-Notation

[Figure: t(n) stays above c × g(n) for all n ≥ n₀; what happens before n₀ doesn't matter]

t(n) є Ω(g(n))

Ω-Notation (contd.)

Definition: A function t(n) is said to be in Ω(g(n)) denoted t(n) є Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all sufficiently large n.

• If we can find positive constants c and n₀ such that: t(n) ≥ c × g(n) for all n ≥ n₀

Ω-Notation (contd.)

• Is n^3 є Ω(n^2)?

• Is 100n+5 є Ω(n^2)?

• Is (1/2)n(n−1) є Ω(n^2)?

Θ(big theta)-Notation

[Figure: t(n) stays between c₂ × g(n) and c₁ × g(n) for all n ≥ n₀; what happens before n₀ doesn't matter]

t(n) є Θ(g(n))

Θ-Notation (contd.)

• Definition: A function t(n) is said to be in Θ(g(n)) denoted t(n) є Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all sufficiently large n.

• If we can find positive constants c₁, c₂, and n₀ such that: c₂ × g(n) ≤ t(n) ≤ c₁ × g(n) for all n ≥ n₀

Θ-Notation (contd.)

• Is (1/2)n(n−1) є Θ(n^2)?

• Is n^2 + sin(n) є Θ(n^2)?

• Is an^2 + bn + c є Θ(n^2) for a > 0?

• Is (n+a)^b є Θ(n^b) for b > 0?

O, Θ, and Ω

(≥) Ω(g(n)): functions that grow at least as fast as g(n)

(=) Θ(g(n)): functions that grow at the same rate as g(n)

(≤) O(g(n)): functions that grow no faster than g(n)

Theorem

• If t₁(n) є O(g₁(n)) and t₂(n) є O(g₂(n)), then t₁(n) + t₂(n) є O(max{g₁(n), g₂(n)})

• Analogous assertions are true for Ω and Θ notations.

Implication: if sorting makes no more than n^2 comparisons and then binary search makes no more than log₂(n) comparisons, then the overall efficiency is O(max{n^2, log₂(n)}) = O(n^2)

Theorem (contd.)

If t₁(n) є O(g₁(n)) and t₂(n) є O(g₂(n)), then t₁(n) + t₂(n) є O(max{g₁(n), g₂(n)}).

Proof: t₁(n) ≤ c₁ g₁(n) for n ≥ n₀₁, and t₂(n) ≤ c₂ g₂(n) for n ≥ n₀₂.

For n ≥ max{n₀₁, n₀₂}:

t₁(n) + t₂(n) ≤ c₁ g₁(n) + c₂ g₂(n)
             ≤ max{c₁, c₂} g₁(n) + max{c₁, c₂} g₂(n)
             = max{c₁, c₂} [g₁(n) + g₂(n)]
             ≤ 2 max{c₁, c₂} max{g₁(n), g₂(n)}

So t₁(n) + t₂(n) є O(max{g₁(n), g₂(n)}), with c = 2 max{c₁, c₂} and n₀ = max{n₀₁, n₀₂}.

Using Limits for Comparing Orders of Growth

• lim_{n→∞} t(n)/g(n) =

    0  implies that t(n) grows slower than g(n)
    c  (a positive constant) implies that t(n) grows at the same order as g(n)
    ∞  implies that t(n) grows faster than g(n)

1. The first two cases (0 and c) mean t(n) є O(g(n))

2. The last two cases (c and ∞) mean t(n) є Ω(g(n))

3. The second case (c) means t(n) є Θ(g(n))

Taking Limits

L’Hôpital’s rule: if the limits of both t(n) and g(n) are ∞ as n goes to ∞, then

lim_{n→∞} t(n)/g(n) = lim_{n→∞} t′(n)/g′(n)

Stirling’s formula: for large n, we have n! ≈ √(2πn) (n/e)^n

Stirling’s Formula

• n! = n (n−1) (n−2) … 1

• ln(n!) = Σ_{k=1}^{n} ln(k)

• ∫₁ⁿ ln(x) dx = n ln(n) − n + 1

• Trapezoidal rule: ∫₁ⁿ ln(x) dx ≈ (1/2)ln(1) + ln(2) + … + ln(n−1) + (1/2)ln(n) = Σ_{k=1}^{n} ln(k) − (1/2)ln(n)

• Add (1/2)ln(n) to both sides: n ln(n) − n + 1 + (1/2)ln(n) ≈ Σ_{k=1}^{n} ln(k) = ln(n!)

• Exponentiate both sides: n! ≈ C n^n e^(−n) n^(1/2) = C √n (n/e)^n, and the constant works out to C = √(2π), giving n! ≈ √(2πn) (n/e)^n

Taking Limits (contd.)

• Compare the orders of growth of 2^(2n) and 3^n

• Compare the orders of growth of lg(n) and √n

• Compare the orders of growth of n! and 2^n
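As a worked instance of the limit technique for the first pair:

```latex
\lim_{n\to\infty} \frac{2^{2n}}{3^{n}}
  = \lim_{n\to\infty} \frac{4^{n}}{3^{n}}
  = \lim_{n\to\infty} \left(\frac{4}{3}\right)^{n}
  = \infty
```

So 2^(2n) grows faster than 3^n; in particular 2^(2n) ∉ O(3^n), even though both are "exponential".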

Summary of How to Establish Order of Growth of Basic Operation Count

Method 1: Using limits

Method 2: Using the theorem

Method 3: Using the definitions of O-, Ω-, and Θ-notations

Basic Efficiency Classes

Class    Name          Comments
1        constant      May be in best cases
lg n     logarithmic   Halving problem size at each iteration
n        linear        Scan a list of size n
n×lg n   linearithmic  Divide-and-conquer algorithms, e.g., mergesort
n^2      quadratic     Two embedded loops, e.g., selection sort
n^3      cubic         Three embedded loops, e.g., matrix multiplication
2^n      exponential   All subsets of an n-element set
n!       factorial     All permutations of an n-element set

Important Summation Formulas

• Σ_{i=l}^{u} 1 = u − l + 1

• Σ_{i=1}^{n} i = n(n+1)/2 ≈ (1/2)n^2

• Σ_{i=1}^{n} i^2 = n(n+1)(2n+1)/6 ≈ (1/3)n^3

• Σ_{i=0}^{n} a^i = (a^(n+1) − 1)/(a − 1) for a ≠ 1

• Σ_{i=1}^{n} i a^i = a · d/da[(a^(n+1) − 1)/(a − 1)]

• Σ_{i=l}^{u} c a_i = c Σ_{i=l}^{u} a_i

• Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i

Analysis of Nonrecursive Algorithms

ALGORITHM MaxElement(A[0..n-1])
//Determines the value of the largest element
maxval <- A[0]
for i <- 1 to n-1 do
    if A[i] > maxval
        maxval <- A[i]
return maxval

Input size: n
Basic operation: the comparison > (or the assignment <-)

C(n) = Σ_{i=1}^{n-1} 1 = (n−1) − 1 + 1 = n − 1 є Θ(n)
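A Python transcription with an explicit counter for the basic operation makes C(n) = n − 1 visible at runtime (function name and counter are mine, for illustration):

```python
def max_element(a):
    """Return (largest element, # of comparisons) for a non-empty list."""
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1          # basic operation: a[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons
```

For any input of size n the counter is exactly n − 1: the comparison count does not depend on the type of input, only on its size.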

General Plan for Analysis of Nonrecursive Algorithms

• Decide on a parameter indicating an input's size

• Identify the basic operation

• Does C(n) depend only on n, or does it also depend on the type of input?

• Set up a sum expressing the # of times the basic operation is executed

• Find a closed form for the sum, or at least establish its order of growth

Analysis of Nonrecursive (contd.)

ALGORITHM UniqueElements(A[0..n-1])
//Determines whether all elements are distinct
for i <- 0 to n-2 do
    for j <- i+1 to n-1 do
        if A[i] = A[j]
            return false
return true

Input size: n
Basic operation: A[i] = A[j]

Does C(n) depend on the type of input?

UniqueElements (contd.)

for i <- 0 to n-2 do
    for j <- i+1 to n-1 do
        if A[i] = A[j]
            return false
return true

C_worst(n) = Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1 = n(n−1)/2 ≈ (1/2)n^2 є Θ(n^2)

Why is saying C_worst(n) є Θ(n^2) better than saying C_worst(n) є O(n^2)?
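A Python sketch with a comparison counter shows why this needs worst-case analysis: the count depends on where (and whether) a duplicate occurs (names are mine, for illustration):

```python
def unique_elements(a):
    """Return (all_distinct, # of comparisons) for the brute-force check."""
    n = len(a)
    count = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            count += 1            # basic operation: a[i] == a[j]
            if a[i] == a[j]:
                return False, count
    return True, count
```

In the worst case (all elements distinct) every pair is compared: n(n−1)/2 comparisons. In the best case the very first comparison finds a duplicate.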

Analysis of Nonrecursive (contd.)

ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
//Output: C = AB
for i <- 0 to n-1 do
    for j <- 0 to n-1 do
        C[i, j] <- 0.0
        for k <- 0 to n-1 do
            C[i, j] <- C[i, j] + A[i, k]×B[k, j]
return C

Input size: n
Basic operation: ×

M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1 = n^3

T(n) ≈ c_m M(n) + c_a A(n) = c_m n^3 + c_a n^3 = (c_m + c_a) n^3
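A direct Python transcription of the triple loop, counting multiplications, confirms M(n) = n^3 (names are mine, for illustration):

```python
def matrix_multiply(a, b):
    """Multiply two n×n matrices by the definition; count multiplications."""
    n = len(a)
    mults = 0
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1        # basic operation: ×
    return c, mults
```

For n = 2 the counter reads 2^3 = 8, and in general n^3, regardless of the matrix contents.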

Analysis of Nonrecursive (contd.)

ALGORITHM Binary(n)
// Output: # of digits in binary
// representation of n
count <- 1
while n > 1 do
    count <- count+1
    n <- ⌊n/2⌋
return count

Input size: n
Basic operation: the comparison n > 1

C(n) = ?

Each iteration halves n.
Let m be such that 2^m ≤ n < 2^(m+1).
Take lg: m ≤ lg(n) < m+1, so m = ⌊lg(n)⌋, and
C(n) = m+1 = ⌊lg(n)⌋ + 1 є Θ(lg(n))

Analysis of Nonrecursive (contd.)

ALGORITHM Mystery(n)
S <- 0
for i <- 1 to n do
    S <- S + i×i
return S

What does this algorithm compute?
What is the basic operation?
How many times is the basic operation executed?
What's its efficiency class?
Can you improve it further, or can you prove that no improvement is possible?
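One way to explore the questions is to transcribe the pseudocode and compare it against a conjectured closed form, here the summation formula Σ i^2 = n(n+1)(2n+1)/6 from earlier in these notes (function names are mine, for illustration):

```python
def mystery(n):
    """Sum i*i for i = 1..n, exactly as the pseudocode does (n mults)."""
    s = 0
    for i in range(1, n + 1):
        s += i * i                # basic operation: ×, executed n times
    return s

def mystery_closed_form(n):
    """Constant number of operations via n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6
```

The loop performs n multiplications (Θ(n)); the closed form needs only a constant number of operations.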

Analysis of Nonrecursive (contd.)

ALGORITHM Secret(A[0..n-1])
r <- A[0]
s <- A[0]
for i <- 1 to n-1 do
    if A[i] < r
        r <- A[i]
    if A[i] > s
        s <- A[i]
return s-r

What does this algorithm compute?
What is the basic operation?
How many times is the basic operation executed?
What's its efficiency class?
Can you improve it further, or could you prove that no improvement is possible?

Analysis of Recursive Algorithms

ALGORITHM F(n)
// Output: n!
if n = 0
    return 1
else
    return n×F(n-1)

Input size: n
Basic operation: ×

M(n) = M(n-1) + 1 for n > 0
(M(n-1) to compute F(n-1), plus 1 to multiply n and F(n-1))

Analysis of Recursive (contd.)

M(n) = M(n-1) + 1, M(0) = 0   (recurrence relation)

By backward substitution:

M(n) = M(n-1) + 1
     = [M(n-2) + 1] + 1
     = …
     = M(n-n) + (1 + … + 1)   (n 1's)
     = 0 + n = n

We could compute n! nonrecursively instead, which saves the function-call overhead.

General Plan for Recursive Algorithms

• Decide on an input size parameter

• Identify the basic operation

• Does C(n) also depend on the type of input?

• Set up a recurrence relation

• Solve the recurrence, or at least establish the order of growth of its solution

Analysis of Recursive (contd.): Tower of Hanoi

Tower of Hanoi (contd.)

M(n) = M(n-1) + 1 + M(n-1)
M(1) = 1

Tower of Hanoi (contd.)

ALGORITHM ToH(n, s, m, d)
// Move n disks from source peg s to destination peg d, using middle peg m
if n = 1
    print "move from s to d"        // M(1) = 1
else
    ToH(n-1, s, d, m)               // M(n-1)
    print "move from s to d"        // 1
    ToH(n-1, m, s, d)               // M(n-1)

Tower of Hanoi (contd.)

M(n) = 2M(n-1) + 1 for n > 1
M(1) = 1

M(n) = 2M(n-1) + 1                         [backward substitution…]
     = 2[2M(n-2) + 1] + 1 = 2^2 M(n-2) + 2 + 1
     = 2^2 [2M(n-3) + 1] + 2 + 1 = 2^3 M(n-3) + 2^2 + 2 + 1
     = …
     = 2^(n-1) M(n-(n-1)) + 2^(n-2) + 2^(n-3) + … + 2^2 + 2 + 1
     = 2^(n-1) + 2^(n-2) + … + 2 + 1 = Σ_{i=0}^{n-1} 2^i = 2^n − 1

M(n) є Θ(2^n)

Be careful! A recursive algorithm may look simple but can easily be exponential in complexity.
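The count M(n) = 2^n − 1 can be checked by actually running the algorithm and recording the moves; a Python sketch (the function name and move-list representation are mine):

```python
def hanoi(n, s, m, d, moves=None):
    """Solve Tower of Hanoi for n disks; return the list of moves made."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((s, d))               # M(1) = 1
    else:
        hanoi(n - 1, s, d, m, moves)       # M(n-1)
        moves.append((s, d))               # 1
        hanoi(n - 1, m, s, d, moves)       # M(n-1)
    return moves
```

The length of the returned list equals 2^n − 1 for every n, matching the solved recurrence; even for modest n (say 30) the move count is already over a billion.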

Tower of Hanoi (contd.)

• Recursion tree (# of function calls): 1 call at the top level, 2 at the next, 4 at the next, and so on down to level n−1

C(n) = Σ_{l=0}^{n-1} 2^l = 2^n − 1

Analysis of Recursive (contd.)

ALGORITHM BinRec(n)
//Output: # of binary digits in n's
//binary representation
if n = 1
    return 1
else
    return BinRec(⌊n/2⌋) + 1

Input size: n = 2^k
Basic operation: +

A(n) = A(2^k) = A(2^(k-1)) + 1 = [A(2^(k-2)) + 1] + 1
     = … = A(2^(k-k)) + k = A(1) + k = k = lg(n)

A(n) є Θ(lg(n))
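A Python transcription, checked against the bit length of n for sizes that are not powers of 2 as well (the function name is mine):

```python
def bin_rec(n):
    """# of binary digits of n (n >= 1), mirroring the recursive pseudocode."""
    if n == 1:
        return 1
    # Basic operation: the + 1 after the recursive call
    return bin_rec(n // 2) + 1
```

For every n ≥ 1 this agrees with ⌊lg(n)⌋ + 1 (Python exposes the same quantity as `n.bit_length()`), so the Θ(lg n) count holds for general n, not just n = 2^k.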

Analysis of Recursive (contd.)

• Maximum how many slices of pizza can a person obtain by making n straight cuts with a pizza knife?

L₀ = 1, L₁ = 2, L₂ = 4, L₃ = 7

Sequence 1, 2, 4, 7, …; differences 1, 2, 3, …

Lₙ = Lₙ₋₁ + n for n > 0, L₀ = 1

Bonus problem in HW2

EXAMPLE: nth Fibonacci Number

• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

F(n) = F(n-1) + F(n-2) for n > 1
F(0) = 0, F(1) = 1

Let's try backward substitution…

F(n) = F(n-1) + F(n-2)
     = [F(n-2) + F(n-3)] + [F(n-3) + F(n-4)] = F(n-2) + 2F(n-3) + F(n-4)
     = F(n-3) + 3F(n-4) + 3F(n-5) + F(n-6)
     = …

nth Fibonacci (contd.)

ax(n) + bx(n-1) + cx(n-2) = 0

Homogeneous linear second-order recurrence with constant coefficients

Guess: x(n) = r^n
a r^n + b r^(n-1) + c r^(n-2) = 0
a r^2 + b r + c = 0   (characteristic equation)

THEOREM: If r₁, r₂ are two real distinct roots of the characteristic equation, then x(n) = α r₁^n + β r₂^n, where α, β are arbitrary constants.

nth Fibonacci (contd.)

F(n) − F(n-1) − F(n-2) = 0
r^2 − r − 1 = 0  =>  r_{1,2} = (1 ± √(1 − 4·(−1)))/2 = (1 ± √5)/2

F(n) = α ((1+√5)/2)^n + β ((1−√5)/2)^n

F(0) = α + β = 0  =>  β = −α
F(1) = α (1+√5)/2 + β (1−√5)/2 = 1  =>  α = 1/√5, β = −1/√5

F(n) = (1/√5) ((1+√5)/2)^n − (1/√5) ((1−√5)/2)^n = (1/√5)(φ^n − φ̂^n)

where φ = (1+√5)/2 ≈ 1.61803 and φ̂ = (1−√5)/2 = −1/φ

nth Fibonacci (contd.)

• F(n) = (1/√5)(φ^n − φ̂^n), where φ ≈ 1.61803 and φ̂ ≈ −0.61803

• F(n) є Θ(φ^n)

ALGORITHM F(n)
if n ≤ 1
    return n
else
    return F(n-1)+F(n-2)

Basic op: +

A(n) = A(n-1) + A(n-2) + 1, A(0) = 0, A(1) = 0

[A(n)+1] − [A(n-1)+1] − [A(n-2)+1] = 0
Let B(n) = A(n) + 1:
B(n) − B(n-1) − B(n-2) = 0, B(0) = 1, B(1) = 1
Notice B(n) = F(n+1), so
A(n) = B(n) − 1 = F(n+1) − 1 = (1/√5)(φ^(n+1) − φ̂^(n+1)) − 1

A(n) є Θ(φ^n)

nth Fibonacci (contd.)

Recursion tree for F(5): F(5) calls F(4) and F(3), F(4) calls F(3) and F(2), and so on; F(3) is computed twice, F(2) three times, F(1) five times, and F(0) three times.

ALGORITHM Fib(n)
F[0] <- 0
F[1] <- 1
for i <- 2 to n do
    F[i] <- F[i-1]+F[i-2]
return F[n]

Only n−1 additions, Θ(n)!

ALGORITHM Fib(n)
f <- 0
fnext <- 1
for i <- 2 to n do
    tmp <- fnext
    fnext <- fnext+f
    f <- tmp
return fnext
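Both versions transcribe directly to Python; comparing them makes the redundancy of the recursive version easy to demonstrate (function names are mine, and the `n >= 1` guard in the iterative version is my addition to cover n = 0, which the slide's pseudocode leaves implicit):

```python
def fib_recursive(n):
    """Θ(φ^n) version: recomputes the same subproblems over and over."""
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """Θ(n) version: two variables instead of the whole array F[0..n]."""
    f, fnext = 0, 1
    for _ in range(2, n + 1):
        f, fnext = fnext, fnext + f   # n - 1 additions in total
    return fnext if n >= 1 else 0
```

The two agree on every n, but `fib_recursive(40)` already takes noticeable time while the iterative version is instantaneous.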

Empirical Analysis

• Mathematical Analysis

– Advantages

• Machine and input independence

– Disadvantages

• Average case analysis is hard

• Empirical Analysis

– Advantages

• Applicable to any algorithm

– Disadvantages

• Machine and input dependency

Empirical Analysis (contd.)

• General Plan

– Understand the experiment's purpose:

• What is the efficiency class?

• Or: compare two algorithms for the same problem

– Choose an efficiency metric: operation count vs. time units

– Decide the characteristics of the input sample (range, size, etc.)

– Write a program

– Generate sample inputs

– Run on sample inputs and record data

– Analyze data

Empirical Analysis (contd.)

• Insert counters into the program to count C(n)

• Or, time the program (t_finish − t_start); in Java: System.currentTimeMillis()

– Typically not very accurate, may vary

– Remedy: make several measurements and take the mean or median

– May give 0 time; remedy: run the code in a loop and take the mean

Empirical Analysis (contd.)

• Increase sample size systematically, like

1000, 2000, 3000, …, 10000

– Impact is easier to analyze

• Better: generate random sizes within the desired range

– Because odd sizes may affect the results

• Pseudorandom number generators

– In Java Math.random() gives uniform random number in [0, 1)

– If you need number in [min, max]:

• min + (int)( Math.random() * ( (max-min) + 1 ) )

Empirical Analysis (contd.)

• Pseudorandom in [min, max]

– Math.random() * (max-min) gives [0, max-min)

– min + Math.random() * (max-min) gives [min, max)

– (int)(Math.random() * (max-min+1)) gives [0, max-min]

– min + (int)(Math.random() * (max-min+1)) gives [min, max]

• To get [5, 10]

– (int)(Math.random() * (10-5+1)) gives [0, 10-5], i.e., [0, 5]

– 5 + (int)(Math.random() * (10-5+1)) gives [5, 10]

(Note the parentheses around the whole product: the cast binds tighter than *, so (int)Math.random() * 6 would always be 0.)

Empirical Analysis (contd.)

• Profiling: find the most time-consuming portions of the program; Eclipse and NetBeans include profilers

• Tabulate the data in columns n, M(n), g(n), M(n)/g(n)

• If M(n)/g(n) converges to a positive constant, we know M(n) є Θ(g(n))

• Or look at the ratio M(2n)/M(n); e.g., if M(n) є Θ(lg n), then M(2n)/M(n) stays close to 1, since lg(2n) = lg(n) + 1
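The doubling-ratio idea can be sketched in a few lines; here the "measured" count is a hypothetical quadratic count n(n−1)/2 standing in for real experimental data (all names are mine, for illustration):

```python
def count_ops(n):
    """Hypothetical operation count of a quadratic algorithm: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling ratios M(2n)/M(n): for Θ(n^2) they should approach 2^2 = 4
# (a Θ(lg n) count would instead give ratios approaching 1).
ratios = [count_ops(2 * n) / count_ops(n) for n in (10, 100, 1000, 10000)]
```

Seeing the ratios settle near 4 identifies the quadratic class without knowing the constant factor; in a real experiment `count_ops` would be replaced by measured counts or times.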

Empirical Analysis (contd.)

• Scatterplot of count/time vs. n: the shape of the point cloud suggests the class (lg n, n, n×lg n, n^2, …)

• For an exponential count M(n) ≈ c a^n, plot lg M(n) vs. n instead: lg M(n) ≈ n lg a + lg c, a straight line

Algorithm Visualization

(contd.)

• Sequence of still pictures or animation

• Sorting Out Sorting on YouTube

– Nice animation of 9 sorting algorithms

Summary

• Time and Space Efficiency

• C(n): Count of # of times the basic operation is executed for input of size n

• C(n) may depend on type of input and then we need worst, average, and best case analysis

• Usually order of growth (O, Ω, Θ) is all that matters: logarithmic, linear, linearithmic, quadratic, cubic, and exponential

• Input size, basic op., worst case?, sum or recurrence, solve it…

• For empirical analysis, we run the algorithm on a computer and measure

Homework 2

• DUE ON FEBRUARY 07 (TUESDAY)

IN CLASS
