
Time complexity of Euclid's Algorithm

I am having difficulty deciding what the time complexity of Euclid's greatest common divisor algorithm is. This algorithm, in pseudocode, is:

function gcd(a, b)
    while b ≠ 0
       t := b
       b := a mod b
       a := t
    return a

It seems to depend on a and b. My thinking is that the time complexity is O(a % b). Is that correct? Is there a better way to write that?
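
For reference, here is a direct Python translation of the pseudocode above, runnable as-is:

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21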



1 Reply


One trick for analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations:

a', b' := a % b, b % (a % b)
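
As a quick sanity check (an illustrative snippet, not part of the original argument), two passes of the loop body really do produce this pair whenever a % b is nonzero:

import random

def two_steps(a, b):
    # Apply the loop body (a, b) -> (b, a mod b) twice.
    a, b = b, a % b
    a, b = b, a % b
    return a, b

for _ in range(1000):
    a = random.randint(1, 10**6)
    b = random.randint(1, 10**6)
    if a % b != 0:  # guard: the closed form would compute b % 0 here
        assert two_steps(a, b) == (a % b, b % (a % b))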

Now a and b will both decrease, instead of only one, which makes the analysis easier. You can divide it into cases:

  • Tiny A: 2a <= b
  • Tiny B: 2b <= a
  • Small A: 2a > b but a < b
  • Small B: 2b > a but b < a
  • Equal: a == b

Now we'll show that every single case decreases the total a+b by at least a quarter:

  • Tiny A: b % (a % b) < a and 2a <= b, so b is decreased by at least half, so a+b decreased by at least 25%
  • Tiny B: a % b < b and 2b <= a, so a is decreased by at least half, so a+b decreased by at least 25%
  • Small A: b will become b-a, which is less than b/2, decreasing a+b by at least 25%.
  • Small B: a will become a-b, which is less than a/2, decreasing a+b by at least 25%.
  • Equal: a % b is 0, so the loop terminates on the very next step; a+b effectively drops to 0, which is certainly a decrease of at least 25%.

Therefore, by case analysis, every double-step decreases a+b by at least 25%, i.e., leaves at most 3/4 of it. While the loop is still running, a+b is at least 1, so after S double-steps we need (3/4)^S * (A+B) >= 1, which rearranges to (4/3)^S <= A+B. Now just work it:

(4/3)^S <= A+B
S <= lg[4/3](A+B)
S is O(lg[4/3](A+B))
S is O(lg(A+B))        // change of base is only a constant factor
S is O(lg(A*B))        // because A*B >= A+B for A, B >= 2
S is O(lg(A) + lg(B))
S is O(N)              // the input size N is lg(A) + lg(B) bits

So the number of iterations is linear in the number of input digits. For numbers that fit into CPU registers, it's reasonable to model each iteration as taking constant time and to treat the total running time of the gcd as linear.
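
To make that concrete, here is a small empirical check (my own snippet; the constant 2 comes from counting two iterations per double-step, plus slack for one leftover iteration):

import math
import random

def gcd_steps(a, b):
    # Count the loop iterations of Euclid's algorithm.
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

for _ in range(10000):
    a = random.randint(1, 10**18)
    b = random.randint(1, 10**18)
    bound = 2 * math.log(a + b, 4 / 3) + 2
    assert gcd_steps(a, b) <= bound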

Of course, if you're dealing with big integers, you must account for the fact that the modulus operation inside each iteration does not have constant cost. Roughly speaking, the total asymptotic runtime is n^2 times a polylogarithmic factor, something like n^2 lg(n) 2^O(log* n). The polylogarithmic factor can be avoided by using a binary gcd instead.
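
The answer only names the binary gcd, so here is a sketch of its standard formulation (Stein's algorithm), which replaces the modulus with shifts and subtraction; treat it as illustrative, not as the answerer's exact method:

def binary_gcd(a, b):
    # Binary GCD (Stein's algorithm) for non-negative integers.
    if a == 0:
        return b
    if b == 0:
        return a
    # Common factor of 2: the number of trailing zero bits shared by a and b.
    shift = ((a | b) & -(a | b)).bit_length() - 1
    a >>= (a & -a).bit_length() - 1  # strip a's factors of 2, making it odd
    while b != 0:
        b >>= (b & -b).bit_length() - 1  # make b odd
        if a > b:
            a, b = b, a  # keep a <= b
        b -= a  # odd - odd = even, so the next shift makes progress
    return a << shift

print(binary_gcd(12, 18))  # prints 6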

