Divide and Conquer – A Top Down Approach
Subhajit Mondal
The Department of
Computing and Information Sciences
Kansas State University
Abstract
The paper derives an algorithm to compute the values of integers a, b and d satisfying the
equation au + bv = d, where u and v are positive integers and d = gcd(u, v), using the
Divide and Conquer technique. The same strategy is further employed to solve for s in the
equation ns mod Φ = 1, where gcd(n, Φ) = 1 [2].
1. Introduction:
Let u and v be positive integers and let d be their greatest common divisor. The paper
aims to prove that for any such pair there exist integers a and b which satisfy the equation
au + bv = d.
The algorithm to generate the initial set of values u, v and d is obtained from
Euclid’s algorithm, which employs the divide and conquer strategy. The algorithm is further
refined to generate an integer value s satisfying the equation ns mod Φ = 1,
where n and Φ are integer values such that gcd(n, Φ) = 1.
2. Euclid’s Algorithm:
Euclid’s algorithm employs the divide and conquer strategy in the following manner to
obtain the gcd of two positive integers:
gcd(u, v)
    if (u mod v = 0)
        return v
    else
        return gcd(v, u mod v)
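For concreteness, a minimal Python sketch of this recursive gcd (the function name euclid_gcd is an illustrative choice, not from the paper):

def euclid_gcd(u, v):
    """Euclid's algorithm: gcd of positive integers u and v with v <= u."""
    if u % v == 0:                  # base case: v divides u exactly
        return v
    return euclid_gcd(v, u % v)     # recurse on the smaller sub-problem

# Example: euclid_gcd(42, 12) returns 6.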
A top-down approach is used: the original problem is decomposed into simpler
problems by progressively reducing the size of the integers the algorithm works on. The
mod operator keeps reducing the size of the inputs until a solution is reached. This
approach is possible because gcd(u, v) = gcd(v, u mod v) when v ≠ 0, for v ≤ u.
This reduces the original problem instance into smaller sub-problem instances whose
solutions yield the solution to the original problem. As the algorithm successively
reduces larger instances to smaller fractions of the original size of the numbers, it is
said to employ the Divide and Conquer strategy.
2.1. Theorem 1:
gcd(u, v) = gcd(v, u mod v) when v ≠ 0, for v ≤ u.
Definition of the modulus operator, mod, in u mod v:
Let z = u mod v. As per the definition of the modulus operator, z can be
expressed as the remainder of the integer division u ÷ v, i.e. z = u − (u ÷ v) v, where the ÷
operator refers to integer division.
Proof:
Let gcd(u, v) = d. Hence d | u and d | v.
Therefore u = xd and v = yd, for some integers x and y.
z = u mod v
  = u − (u ÷ v) v
  = xd − (xd ÷ yd) yd
  = d(x − (xd ÷ yd) y)
Hence d | z.
Let us assume there exists an integer c > d such that c divides both v and z exactly.
Since u is a linear combination of v and z, expressed as u = (u ÷ v) v + z, c
must also divide u exactly. This contradicts the assumption, since c would then be
the gcd of u and v instead of d. Hence there can exist no integer c with c > d,
where d = gcd(u, v), which divides both v and z. This proves that d is the greatest common
divisor of v and z.
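As an illustrative instance of the identity (the numbers 42 and 12 are chosen here for illustration and do not appear in the paper):

\[
42 \bmod 12 = 42 - (42 \div 12)\cdot 12 = 42 - 36 = 6,
\qquad
\gcd(42, 12) = \gcd(12, 6) = 6.
\]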
2.2. Claim:
For any positive integers u and v such that v ≤ u, there exist integers a and b
which satisfy the equation au + bv = gcd(u,v).
Base Case:
When u is a multiple of v, i.e. u = mv for some integer m: u mod v = 0 and d = v, where d = gcd(u, v).
The equation au + bv = d is satisfied by the values a = 0 and b = 1.
Induction Hypothesis:
For a given u, for any u′ < u and v′ ≤ u′ there exist integers a1 and b1 which
satisfy the equation a1u′ + b1v′ = gcd(u′, v′).
Induction Step:
When v < u and u mod v ≠ 0:
Since i) v < u and
ii) u mod v < v,
the pair (v, u mod v) satisfies the Induction Hypothesis, so there exist integers a1 and b1
such that the following equation is satisfied:
a1v + b1 (u mod v) = gcd(v, u mod v)
...(1)
Now by definition of mod:
u mod v = u – (u ÷ v) v
Substituting for u mod v in equation (1):
a1v + b1 (u − (u ÷ v) v) = gcd(v, u mod v)
⇒ b1u + (a1 − (u ÷ v) b1) v = gcd(v, u mod v)
Substituting:
a = b1
b = a1 − (u ÷ v) b1
gives:
au + bv = gcd(v, u mod v)
        = gcd(u, v)
as per Theorem 1, i.e. gcd(u, v) = gcd(v, u mod v).
Hence it is proved by induction that for any positive integers u, v and d such
that d = gcd(u, v), there exist two integers a and b such that au + bv = d.
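As a concrete trace of the construction (the pair 42, 12 is chosen purely for illustration): the sub-problem (12, 6) is solved by a1 = 0, b1 = 1, and the substitutions a = b1, b = a1 − (u ÷ v) b1 then give

\[
a = 1, \qquad b = 0 - (42 \div 12)\cdot 1 = -3, \qquad
1\cdot 42 + (-3)\cdot 12 = 6 = \gcd(42, 12).
\]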
3. Algorithm:
The algorithm for computing values u, v, d, a and b such that they satisfy the
equation au+bv=d is as follows:
Pre-condition:
Input integers --> u and v such that:
i) v ≤ u and
ii) v > 0 and u > 0
function(u, v)
    if (u mod v = 0)
        return (v, 0, 1)
    else
        (d, a, b) = function(v, u mod v)
        return (d, b, a − (u ÷ v) b)
Post-condition:
Output integers --> d, a and b such that:
i) d = gcd( u,v )
ii) a and b satisfy the equation au + bv = d.
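A runnable Python sketch of the same algorithm (the name extended_gcd and the example below are illustrative additions; the pseudocode above is the paper’s):

def extended_gcd(u, v):
    """Return (d, a, b) with d = gcd(u, v) and a*u + b*v == d.
    Pre-condition: u and v are positive integers with v <= u."""
    if u % v == 0:                       # base case: d = v, a = 0, b = 1
        return (v, 0, 1)
    d, a, b = extended_gcd(v, u % v)     # solve the smaller instance
    return (d, b, a - (u // v) * b)      # lift its solution back to (u, v)

# Example: extended_gcd(42, 12) returns (6, 1, -3), and 1*42 + (-3)*12 == 6.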
Proof of Correctness:
As the algorithm directly implements the construction used in the proof of the claim
above, its correctness follows from the same proof by mathematical induction.
Time Complexity:
The execution time of the algorithm depends on the number of recursive calls executed,
assuming all arithmetic operations are elementary. In the following section it is proved
that substantial progress is made every two recursive calls of the algorithm.
Theorem 2:
For any positive integers u and v such that u ≥ v, the inequality u mod v < u/2 is always satisfied.
Proof:
The value of v falls into one of the following ranges:

Case I: 0 < v < u/2
As per the definition of modulo, u mod v in this case ranges
from 0 to v − 1.
Since v < u/2, we have u mod v ≤ v − 1 < u/2.
Hence u mod v < u/2 always holds.

Case II: u/2 < v < u
Let y = u/v.
As v ranges between u/2 and u when u > v > u/2,
y satisfies 1 < y < 2, so the integer quotient u ÷ v equals 1.
Hence, by the definition of modulo, the value of
u mod v for u > v > u/2 is u − v.
As v > u/2, therefore u − v < u/2.

Case III: v = u/2 or v = u
Consider the scenario v = u/2 (possible only when u is even).
As u is then a multiple of u/2, the value of
u mod v = u mod (u/2) = 0.
The same applies when v = u.
Hence the inequality u mod v < u/2 always holds.
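A brute-force sanity check of Theorem 2 in Python (a sketch over small inputs only, not part of the paper’s proof):

# Check that u mod v < u/2 holds for every pair with 1 <= v <= u <= 1000.
for u in range(1, 1001):
    for v in range(1, u + 1):
        assert u % v < u / 2, (u, v)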
Let the initial values of u and v be u0 and v0 respectively.
After the first recursive call the value of v equals u0 mod v0. After the second recursive call u
takes on this value. As per the explanation provided above for the inequality u mod v <
u/2, the current value of u is then less than u0/2, i.e. u < u0/2. It can be
inferred from this that the value of u is at least halved every two successive recursive calls, u
being the larger of the two numbers.
Let f(n) represent the total number of function calls on inputs u and v such that v ≤ u ≤ n.
Then f(n) is bounded by the following recurrence relation:
f(n) ≤ 1                for n ≤ 2
f(n) ≤ f(n/2) + 2       otherwise
The explanation for the recurrence relation is as follows:
If u ≤ 2, then v ≤ 2, as v ≤ u, and the maximum number of function calls is 1. Hence we
have f(n) ≤ 1 when n ≤ 2, where n is the upper bound on u.
If u > 2, the number of function calls is less than two when u mod v = 0;
otherwise there will be a minimum of two function calls. Considering the latter case, the value of
u is at least halved every two recursive calls, as explained above, and is therefore at
most n/2. The same explanation also holds for v, and hence the corresponding value of
v is no larger than the new value of u. Thus it takes no more than f(n/2) additional calls to
complete the computation. Hence we have f(n) ≤ 2 + f(n/2) for n > 2. Note that n is the
upper bound on u.
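Before applying the theorem cited below, the bound can also be seen directly by unrolling the recurrence (a sketch, assuming for simplicity that n is a power of 2):

\[
f(n) \le 2 + f(n/2) \le 4 + f(n/4) \le \cdots \le 2(\lg n - 1) + f(2) \le 2\lg n - 1,
\]

so f(n) ∈ O(log n), in agreement with the result obtained below.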
Solving the above recurrence relation as per Theorem 3.32 [1]:
The function is of the type
f(n) ∈ a f(n/b) + X(n^q g(n)),
satisfying the following condition:
f(n) ∈ X(n^q g(n) lg n) if a = b^q,
where X is either O, Ω, or Θ.
Here a = 1, b = 2, q = 0 and g(n) = 1.
As 1 = 2^0, the equality a = b^q is satisfied.
From the above it can be concluded that the time complexity of the
algorithm is f(n) ∈ O(log n).
Space Complexity:
Space complexity for the function depends on the number of recursive calls made
by the algorithm for computing the result. This is the case because the total space
required by the algorithm is primarily the stack space used to store the arguments of the
successive recursive calls and the local variables. As proved above, the total number of recursive
calls is of the order of O(log n). Hence the space complexity of the algorithm is also of
the order of O(log n).
4. Algorithm to determine an integer s for integers n and Φ such that ns mod Φ = 1,
where gcd(n, Φ) = 1:
As per the definition of mod, the equation ns mod Φ = 1 can be written as
ns − (ns ÷ Φ) Φ = 1. This equation can be brought to the form au + bv = d upon
performing the following substitutions:
s = a, n = u, (−(ns ÷ Φ)) = b, Φ = v and 1 = d.
The algorithm described above can be modified to compute the value of s. As d is a
constant value of 1, it need not be recomputed. The modified algorithm is as shown
below:
gcd(n, Φ)
    if (n mod Φ = 0)
        return (0, 1)
    else
        (s, t) = gcd(Φ, n mod Φ)
        return (t, s − (n ÷ Φ) t)
Time complexity for this algorithm is the same as that of the previous algorithm.
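A minimal Python sketch of this modular-inverse computation (the name mod_inverse, the explicit gcd check and the final reduction of s into the range 0..Φ−1 are illustrative additions, not part of the paper):

def mod_inverse(n, phi):
    """Return s such that (n * s) mod phi == 1, assuming gcd(n, phi) == 1."""
    def ext_gcd(u, v):
        # Returns (d, a, b) with a*u + b*v == d == gcd(u, v).
        if u % v == 0:
            return (v, 0, 1)
        d, a, b = ext_gcd(v, u % v)
        return (d, b, a - (u // v) * b)

    d, a, _ = ext_gcd(n, phi)
    if d != 1:
        raise ValueError("an inverse exists only when gcd(n, phi) == 1")
    return a % phi                       # reduce s into the range 0..phi-1

# Example: mod_inverse(7, 40) returns 23, since 7*23 = 161 = 4*40 + 1.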
5. References:
1. Rodney R. Howell, Algorithms: A Top-Down Approach.
2. G. Brassard and P. Bratley, Fundamentals of Algorithmics, Prentice Hall, 1996.