BIG-OH ANALYSIS OF ALGORITHMS

Analysis of algorithms does not just mean running them on a computer to see which
one is faster. Rather, it means being able to look at an algorithm and determine how it
will perform. This is done by examining the algorithm's order of magnitude: as the
number of items (N) changes, what effect does that have on the number of operations
needed to execute (the running time)? This method of classification is referred to as
Big-O notation.
SIMPLE ANALYSIS
(darkest statement)
How can we analyze the algorithm without using a computer? Imagine that each time a
line of code is run, the statement turns a darker shade of grey. The darkest statement
would be the one that is executed the most times. Analyzing this statement would allow
us to determine the running time of the algorithm.
public static void swap(int[] nums, int a, int b)
{
    int temp = nums[a];
    nums[a] = nums[b];
    nums[b] = temp;
}
public static void selSort(int[] nums)
{
    int min, size = nums.length;
    for (int i = 0; i < size; i++)
    {
        min = i;
        for (int j = i + 1; j < size; j++)
        {
            if (nums[j] < nums[min])
                min = j;
        }
        swap(nums, i, min);
    }
}
The worst (darkest) statement, the one that executes the most times, is
if (nums[j] < nums[min]). The first time through the outer loop it executes N-1
times, then N-2, then N-3, down to 1. So the total number of times the statement
executes is the sum of the numbers 1 to N-1,
which is equal to

    (N - 1)N / 2  =  (N² - N) / 2  =  ½N² - ½N
This would be considered order O(N²), since only the most significant term is
considered. This is a quadratic-time sorting algorithm.
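One way to check the (N² - N)/2 count is to instrument the sort with a comparison counter. This is a sketch; the counter and the class name are our own additions, not part of the original algorithm:

```java
// Instrumented selection sort: counts how many times the "darkest"
// statement (the comparison) actually executes.
public class SelSortCount {
    public static long countComparisons(int[] nums) {
        long count = 0;
        int size = nums.length;
        for (int i = 0; i < size; i++) {
            int min = i;
            for (int j = i + 1; j < size; j++) {
                count++;                 // one execution of the comparison
                if (nums[j] < nums[min])
                    min = j;
            }
            int temp = nums[i];          // swap, as in the original
            nums[i] = nums[min];
            nums[min] = temp;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        int[] nums = new int[n];
        for (int i = 0; i < n; i++) nums[i] = n - i;  // 10, 9, ..., 1
        // (n*n - n) / 2 = (100 - 10) / 2 = 45
        System.out.println(countComparisons(nums));   // prints 45
    }
}
```

Note that the count is the same regardless of the data: selection sort always makes every comparison.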
Here’s the skinny on finding the efficiency of an algorithm:
For every nested loop which depends on the size of the data, and whose
loop control variable changes by means of addition or subtraction, add a
factor of n to the efficiency. O(1) -> O(n) -> O(n²) -> O(n³) -> O(n⁴)
i.e. Selection Sort is O(n²) because it has:
    for (int i = 0; i < list.length; i++)
    { …
        for (int j = i + 1; j < list.length; j++)
        { …
________________________________________________________________
However, you would only have O(n) given:
    for (int i = 0; i < list.length; i++)
    { …
        for (int j = 0; j < 10; j++)
        { …
because the inside loop always goes 10 times. Only one loop depends on the
size of the input.
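This rule can be checked by counting how many times the innermost statement runs for a given n. A sketch (the class and method names are our own):

```java
public class LoopCount {
    // Both loops depend on n: the inner statement runs n*n times.
    public static long bothDependOnN(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                count++;
        return count;
    }

    // The inner loop is a fixed 10 iterations: the inner statement runs
    // 10*n times, which is still O(n).
    public static long innerIsConstant(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < 10; j++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(bothDependOnN(100));   // 10000 -- quadratic growth
        System.out.println(innerIsConstant(100)); // 1000  -- linear growth
    }
}
```

Doubling n quadruples the first count but only doubles the second.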
________________________________________________________________
If a loop control variable changes by multiplication or division, consider
O(log n).
i.e. for (int j = 1; j < n; j *= 2) is O(log n)
    { …
________________________________________________________________
However, you would have O(n log n) given:
    for (int i = 0; i < n; i++)
    { …
        for (int j = 1; j < n; j *= 2)
        { …
because the inside loop is logarithmic (j *= 2 means the loop control variable
doubles each time, so it reaches the terminating case quickly), but it is nested
inside another loop that goes n times, i.e. n times log n.
________________________________________________________________
If a recursive method calls itself once per case and the input sent to the
recursive call changes by addition or subtraction, consider O(n).
i.e., recursive factorial:
public static long fact(long x)
{
    if (x == 0) return 1;
    return x * fact(x - 1);
}
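Counting the calls makes the O(n) claim concrete: fact(x) calls itself once per value from x down to 0, so there are x + 1 calls in total. The call counter below is our own addition:

```java
// Recursive factorial with a call counter bolted on for illustration.
public class FactCalls {
    static long calls;   // number of times fact is entered

    public static long fact(long x) {
        calls++;
        if (x == 0) return 1;
        return x * fact(x - 1);
    }

    public static void main(String[] args) {
        calls = 0;
        System.out.println(fact(10));  // 3628800
        System.out.println(calls);     // 11 calls: one per value 10 down to 0
    }
}
```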
________________________________________________________________
For each time a recursive method calls itself in a particular case, tack on an
exponential factor. O(2ⁿ) -> O(3ⁿ) -> O(4ⁿ) -> O(5ⁿ)
i.e., fibonacci is O(2ⁿ):
public static long fib(long x)
{
    if (x < 2) return 1;
    return fib(x - 1) + fib(x - 2);
}
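Counting the calls shows the exponential blow-up: each call to fib spawns two more until it reaches the base case. The counter is our own addition:

```java
// Same fibonacci as above, with a call counter added for illustration.
public class FibCalls {
    static long calls;   // number of times fib is entered

    public static long fib(long x) {
        calls++;
        if (x < 2) return 1;
        return fib(x - 1) + fib(x - 2);
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 25; n += 5) {
            calls = 0;
            fib(n);
            System.out.println("fib(" + n + ") made " + calls + " calls");
        }
        // fib(10) alone makes 177 calls; each +5 in n multiplies the
        // number of calls by roughly 11x.
    }
}
```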
________________________________________________________________
Only consider the part of an algorithm or method that is the most inefficient.
i.e.,
    for (int i = 0; i < n; i++)              // O(n)
        System.out.println("Boo!");

    for (int i = 0; i < n; i++)              // O(n²)
        for (int j = 0; j < n; j++)
            System.out.println("Hiss");

is O(n) + O(n²), which is just considered O(n²).
Constant time.
O(1): System.out.println("Boo!");
// Takes the same amount of time every time it is run.
Linear time.
O(n): for (int i = 0; i < n; i++)
          System.out.println("Boo!");
// The amount of time it takes to run depends on how big n is.
// If it takes 1 nanosecond to do one println("Boo!") and n happens
// to be 100, then it will take 100 nanoseconds to go through the loop.
Quadratic time.
O(n²): for (int i = 0; i < n; i++)
           for (int j = 0; j < n; j++)
               System.out.println("Boo!");
// As the size of the input increases, the amount of time it takes is squared.
Value of n      # times the nested loop writes "Boo!"
     1                 1
     2                 4
     3                 9
     4                16
// A triple-nested loop where all loops depend on the input would be
// O(n³).
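As a quick check of the cubic case, counting the innermost statement shows the n³ growth (the class and method names are our own):

```java
public class CubicCount {
    // Three nested loops that all depend on n: the body runs n*n*n times.
    public static long tripleLoop(int n) {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(tripleLoop(4));   // 64
        System.out.println(tripleLoop(10));  // 1000
    }
}
```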
Exponential time.
O(2ⁿ): public static void madness(int n)
{
    System.out.println("Boo!");
    if (n > 0)
    {
        madness(n - 1);
        madness(n - 1);
    }
}
//As the size of the input increases, the amount of time it takes grows
//exponentially.
Value of n      # times madness writes "Boo!"
madness(0)             1
madness(1)             3
madness(2)             7
madness(3)            15
madness(4)            31
madness(5)            63
// If madness called itself recursively 3 times per case, it would be
// O(3ⁿ). This behavior is to be avoided whenever possible.
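The table values follow the pattern 2ⁿ⁺¹ - 1. A small sketch (our own addition) that counts the println calls instead of printing them confirms this:

```java
// Counts the println calls in madness instead of printing them.
public class MadnessCount {
    public static long boos(int n) {
        long count = 1;              // the println at the top of madness
        if (n > 0) {
            count += boos(n - 1);    // first recursive call
            count += boos(n - 1);    // second recursive call
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 5; n++)
            System.out.println("madness(" + n + ") -> " + boos(n));
        // prints 1, 3, 7, 15, 31, 63 -- one more than double each time
    }
}
```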
Logarithmic time.
O(log₂ n):
public static void clarity(int n)
{
    System.out.println("Boo!");
    if (n > 1)
        clarity(n / 2);
}
// The size of the input can double but the amount of time it takes to run
// only increments by one.
// The amount of "work" you have to do gets cut in half with each step.
Value of n      # times clarity writes "Boo!"
clarity(0)             1
clarity(2)             2
clarity(4)             3
clarity(8)             4

O(n log n):
for (int i = 0; i < n; i++)
    clarity(i);
// logarithmically efficient code run n times.
Value of n      # times the nested loop writes "Boo!"
     0                 0
     1                 1
     2                 2
     3                 4
     4                 6
     5                 9
     6                12
     7                15
     8                18
     9                22
    10                26
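As a rough check of these numbers, the sketch below (our own addition, assuming the loop passes i to clarity on each iteration) counts the prints directly:

```java
// Counts "Boo!" prints without printing anything inside the counters.
public class NLogNCount {
    // clarity(n) prints once, then once more per halving while n > 1.
    public static int clarityBoos(int n) {
        int count = 1;
        while (n > 1) {
            n /= 2;
            count++;
        }
        return count;
    }

    // Total prints for:  for (int i = 0; i < n; i++) clarity(i);
    public static int totalBoos(int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += clarityBoos(i);
        return total;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++)
            System.out.println(n + " -> " + totalBoos(n));
        // 1, 2, 4, 6, 9, 12, 15, 18, 22, 26
    }
}
```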