
Algorithm exam

Question 5
Explanation of the difference between big O and big Omega notation:
Big O notation provides an upper bound on the run time of an algorithm. If an algorithm's run time is O(N), its asymptotic run time grows no faster than N, up to a constant factor.
Big Omega notation provides a lower bound on the run time of an algorithm: it asserts that the program's order of growth is never asymptotically better than the given function.
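Formally (a standard textbook definition, added here for completeness rather than taken from the original answer):

```latex
\begin{align*}
f(N) = O(g(N)) &\iff \exists\, c > 0,\ N_0 > 0 \ \text{such that}\ f(N) \le c \cdot g(N) \ \text{for all}\ N \ge N_0,\\
f(N) = \Omega(g(N)) &\iff \exists\, c > 0,\ N_0 > 0 \ \text{such that}\ f(N) \ge c \cdot g(N) \ \text{for all}\ N \ge N_0.
\end{align*}
```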
Uses of big O and big Omega notations
Big O notation should be used to guarantee that a particular algorithm's worst-case asymptotic run time is no worse than the given function.
Big Omega notation is used to state a lower bound on how fast a problem can be solved: it shows that, in the worst case, no algorithm can do asymptotically better than the given function.
Meaning of asymptotic notation:
Asymptotic notation means that the value given describes how the program scales, i.e. the order of growth of the method's cost as the size of the inputs tends to infinity.
Question 6
Run time cost:
The analysis performed is worst-case analysis in asymptotic notation.
The method Pythagorean has a run time that depends on the value of its parameter int c. The analysis is made easier by splitting the code into sections: section A is line 2, section B is lines 3-7 and section C is line 8. The worst case does not depend on the actual number of pairs present.
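The exam listing itself is not reproduced in this answer, so the following Java sketch is only a plausible reconstruction consistent with the analysis; the method names come from the question, but the exact bodies, parameter names and behaviour are assumptions (search is sketched further below).

```java
// Hypothetical reconstruction (assumption): counts how many integers b in
// 1..c-1 pair with some integer a such that a*a + b*b == c*c.
// Int overflow in c*c is ignored for clarity.
static int Pythagorean(int c) {
    int count = 0;                       // line 2: declare and assign count
    for (int b = 1; b < c; b++) {        // lines 3-7: c - 1 iterations
        int target = c * c - b * b;      // line 4: declare and assign
        if (search(target, 1, c)) {      // line 5: O(log d) search
            count++;                     // constant-time incrementation
        }
    }
    return count;                        // line 8: constant-time return
}
```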
Section A involves the declaration and assignment of an integer variable ‘count’. Both are constant-time operations, so the total cost of section A is T_A(c) = 2 * O(1) = O(1).
Section B involves a for-loop that iterates over the integers from 1 to c. This gives c - 1 iterations, which in asymptotic notation can be taken as c. Line 4 involves the declaration and assignment of an integer variable; these are all constant-time operations, so its cost is a constant multiple of O(1). Line 5 calls the search method, which the analysis below shows has worst-case asymptotic run time O(log d), where d is the size of the range of integers passed as arguments. Using big O notation to provide an upper bound, d is at most c in this case: the searched range never contains more than c values. The count incrementation costs O(1), which has no effect in asymptotic notation because O(log c) is the more rapidly increasing function; if x is the total number of incrementations across the loop, with 0 <= x <= c, their combined cost x * O(1) = O(c) is dominated by O(c log c). Hence the cost of each iteration is O(1) + O(log c) + O(1) = O(log c), and the total cost of section B is T_B(c) = O(log c) * O(c) = O(c log c).
Section C involves a constant-time return statement: T_C(c) = O(1).
So T_P(c) = O(1) + O(c log c) + O(1) = O(c log c).
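Equivalently, the same total can be written as a sum over the loop iterations:

```latex
T_P(c) \;=\; \underbrace{O(1)}_{\text{section A}}
\;+\; \underbrace{\sum_{b=1}^{c-1}\bigl(O(1) + O(\log c)\bigr)}_{\text{section B}}
\;+\; \underbrace{O(1)}_{\text{section C}}
\;=\; O(c \log c).
```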
The method search has a run time that depends on the number of integers to be examined; call this parameter d = end - start. The code can be split into sections: section A is line 11, section B is lines 12-13, section C is line 14, section D is line 15, section E is lines 16-17, section F is lines 18-19, and section G is line 20.
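Again the original listing is not shown, so this Java sketch is an assumed reconstruction of search matching the section breakdown above: a ternary split of the range [start, end] that tests whether any integer in it squares to exactly n.

```java
// Hypothetical reconstruction (assumption): true iff some integer m in
// [start, end] satisfies m * m == n, recursively splitting into thirds.
static boolean search(int n, int start, int end) {
    if (start > end) return false;               // line 11: base case (section A)
    int mid1 = start + (end - start) / 3;        // lines 12-13: the two split
    int mid2 = end - (end - start) / 3;          // points (section B)
    if (mid1 * mid1 == n) return true;           // line 14 (section C)
    if (mid2 * mid2 == n) return true;           // line 15 (section D)
    if (n < mid1 * mid1)                         // lines 16-17 (section E)
        return search(n, start, mid1 - 1);
    if (n < mid2 * mid2)                         // lines 18-19 (section F)
        return search(n, mid1 + 1, mid2 - 1);
    return search(n, mid2 + 1, end);             // line 20 (section G)
}
```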
The run time depends on whether a value that squares to make n exists and, if so, on its location in the sequence of integers. In the worst case there is no value that squares to make n.
In the worst case the body of line 11 does not execute until the final call, so section A's cost in each call is just that of checking the if-conditional, a constant-time comparison: T_A(d) = Θ(1).
Section B involves the declaration and assignment of two variables using simple mathematical operations. All of these operations run in constant time Θ(1), so the total cost of this section is T_B(d) = k * Θ(1) = Θ(1) for some constant k, since constant factors can be ignored in asymptotic notation.
Section C involves checking an if-conditional that is false in the worst case. The comparison it performs is a constant-time mathematical operation, as is the guarded constant-time return of a Boolean, so this section's worst-case asymptotic cost is T_C(d) = Θ(1).
Section D is identical to section C apart from the variable values, and hence also has T_D(d) = Θ(1).
Without loss of generality, we assume that n > mid2 * mid2 and n > mid1 * mid1 in each recursion.
Then section E involves checking an if-conditional consisting of a mathematical operation. This is a constant multiple of constant-time operations, and hence T_E(d) = Θ(1), since constant factors can be ignored in asymptotic notation.
Section F is identical to section E except for the variable values, and hence T_F(d) = Θ(1).
Section G involves a recursive call to the search method. In the worst case this executes until the base case is reached. Each call splits the sequence of integers into thirds, which can happen at most log_3(d) times, so the maximum number of recursions is log_3(d).
To find the total cost of the method, find the cost of each recursive call and multiply by the number of recursions. Each recursive call has cost T_X(d) = k * Θ(1) = Θ(1) for some constant k, found by adding up the costs of each section of code. Multiplying by the number of recursions gives T_S(d) = Θ(1) * Θ(log d) = Θ(log d). The base 3 of the logarithm can be dropped because, by the change-of-base rule for logarithms, log_3(d) is a constant multiple of log(d) in any base.
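The same result can be read off from the worst-case recurrence, using the change-of-base identity:

```latex
T_S(d) = T_S\!\left(\tfrac{d}{3}\right) + \Theta(1)
\;\implies\;
T_S(d) = \Theta(\log_3 d) = \Theta\!\left(\frac{\log d}{\log 3}\right) = \Theta(\log d).
```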
Space cost:
The search method makes Θ(log d) recursive calls before returning, so the call stack requires Θ(log d) memory. The code does not allocate any new objects, so the heap requirement is Θ(1). Hence the total memory requirement is Θ(log d) + Θ(1) = Θ(log d).