The Euclidean Algorithm
The Euclidean algorithm to find d = gcd(a, b), with a > b, computes a sequence of remainders {r_j} by

    a = q_1 b + r_1,
    b = q_2 r_1 + r_2,
    ...
    r_{k-2} = q_k r_{k-1} + r_k,
    r_{k-1} = q_{k+1} r_k.

We can also write the last line as r_{k-1} = q_{k+1} r_k + r_{k+1}, where r_{k+1} = 0. Then r_k = gcd(a, b).
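As a sketch of this recurrence, here is a minimal Python version that also counts the division steps; the name gcd_steps and the step counter are illustrative additions, not part of the notes.

    def gcd_steps(a, b):
        """Euclidean algorithm: return (gcd(a, b), number of division steps).

        Assumes a > b > 0 as in the notes; each step replaces (a, b)
        by (b, a mod b) until the remainder reaches 0.
        """
        steps = 0
        while b != 0:
            a, b = b, a % b   # next remainder: r_{j+1} = r_{j-1} mod r_j
            steps += 1
        return a, steps

    # Example: gcd(1071, 462) = 21, reached in 3 division steps.
    print(gcd_steps(1071, 462))   # -> (21, 3)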
Rate of convergence: The scheme must converge: each q_i ≥ 1, so the sequence {r_j} is strictly decreasing, and since the remainders are nonnegative integers it must terminate, with final value d = r_k ≥ 1. An important question is how many steps this takes.
Based only on the fact that the remainders decrease, we get the very crude estimate that the algorithm terminates in at most b steps, which, if it were the best bound available, would render the algorithm useless for anything but relatively small numbers. We can actually do much better. In fact, it is quite easy to show that after any two iterations the remainder must have been cut down by at least one half (since r_{j-2} = q_j r_{j-1} + r_j ≥ r_{j-1} + r_j > 2 r_j). Thus if m is the smallest integer such that b < 2^m, then the maximum number of steps is at most 2m + 1 ≈ 2 log_2 b + 1. For a 1000-digit (binary) number this is about 2000 steps.
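As a quick sanity check of this bound, the following sketch compares the actual step count against 2m + 1 for a few random inputs of roughly 1000 bits; the helper name euclid_steps and the sampling choices are illustrative.

    import random

    def euclid_steps(a, b):
        """Number of division steps the Euclidean algorithm takes on (a, b)."""
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return steps

    random.seed(1)
    for _ in range(5):
        b = random.randrange(2, 2**1000)          # roughly 1000-bit number
        a = random.randrange(b + 1, 2**1001)      # ensure a > b
        m = b.bit_length()                        # smallest m with b < 2^m
        print(euclid_steps(a, b), "<=", 2 * m + 1)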
But we can do better still. The worst case is when each q_i = 1, which is essentially the case of consecutive Fibonacci numbers, and this gives the improved bound: maximum number of steps ≈ 1.505 log b (Fibonacci sequence, golden ratio, . . .).
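To see this worst case concretely, the quotients produced for a pair of consecutive Fibonacci numbers can be listed directly; the helper names fib_pair and euclid_quotients below are illustrative.

    def fib_pair(n):
        """Return (F_n, F_{n+1}), a pair of consecutive Fibonacci numbers."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a, b

    def euclid_quotients(a, b):
        """The quotients q_1, q_2, ... produced by the Euclidean algorithm on (a, b)."""
        qs = []
        while b:
            qs.append(a // b)
            a, b = b, a % b
        return qs

    f_n, f_n1 = fib_pair(10)             # F_10 = 55, F_11 = 89
    print(euclid_quotients(f_n1, f_n))   # [1, 1, 1, 1, 1, 1, 1, 1, 2]: all 1s until the last step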
These estimates are all worst-case bounds; we are guaranteed to take no more steps than they allow. However, another question is: what is the expected number of iterations for a random choice of a and b? A further question is how often the average case and the worst case occur; more precisely, what is the distribution of the running times of this algorithm?
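One way to get a feel for the average case is a quick simulation; the sketch below is only illustrative, and the sampling range and sample size are arbitrary choices.

    import random

    def euclid_steps(a, b):
        """Number of division steps of the Euclidean algorithm on (a, b)."""
        steps = 0
        while b:
            a, b = b, a % b
            steps += 1
        return steps

    random.seed(0)
    N = 10_000
    samples = []
    for _ in range(N):
        b = random.randrange(2, 10**6)
        a = random.randrange(b + 1, 10**6 + 1)   # random pair with a > b
        samples.append(euclid_steps(a, b))

    print("average steps:", sum(samples) / N)
    print("maximum steps:", max(samples))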