Miller-Rabin primality test

The Miller-Rabin primality test or Rabin-Miller primality test is a primality test: an algorithm which determines whether a given number is prime, similar to the Fermat primality test and the Solovay-Strassen primality test. Its original version, due to Gary L. Miller, is deterministic, but it relies on the unproven generalized Riemann hypothesis; Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm.

Concepts

Like the Fermat and Solovay-Strassen tests, the Miller-Rabin test relies on a set of equalities that hold for prime values, and then checks whether they hold for the number we want to test for primality.

First, we'll need a lemma about square roots of unity in the finite field <math>\mathbb{Z}_p</math>, where p is prime. Certainly 1 and -1 always yield 1 when squared mod p; we call these the trivial square roots of 1. We claim that there are no nontrivial square roots of 1. To show this, suppose that x is a square root of 1 mod p with <math>x \not\equiv \pm 1 \pmod{p}</math>. Then:

<math>x^2 \equiv 1\pmod{p}</math>
<math>(x - 1)(x + 1) \equiv 0\pmod{p}</math>

Since x is not 1 or -1 mod p, neither <math>x-1</math> nor <math>x+1</math> is divisible by p. But a prime that divides neither of two integers cannot divide their product, so p cannot divide <math>(x-1)(x+1)</math>, contradicting the congruence above. Hence no nontrivial square root of 1 exists mod p.
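A brute-force check in Python makes the lemma concrete (an illustrative sketch only): a prime modulus admits just the trivial roots, while a composite modulus can admit more.

    # List all square roots of 1 modulo m by exhaustive search.
    def square_roots_of_one(m):
        return [x for x in range(1, m) if (x * x) % m == 1]

    print(square_roots_of_one(13))  # [1, 12] -- only the trivial roots 1 and -1
    print(square_roots_of_one(15))  # [1, 4, 11, 14] -- 4 and 11 are nontrivial

The nontrivial roots modulo 15 come from its factorization 3 · 5: by the Chinese remainder theorem, each combination of ±1 mod 3 and ±1 mod 5 yields a square root of 1 mod 15.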

Now, let n be an odd prime. We can write n − 1 as <math>2^s \cdot d</math>, where s is a positive integer and d is odd -- this is the same as repeatedly factoring 2 out of n − 1. Then, for every <math>a\in \left(\mathbb{Z}/n\mathbb{Z}\right)^*</math>, one of the following must be true:

<math>a^{d} \equiv 1\pmod{n}</math> or

<math>a^{2^r\cdot d} \equiv -1\pmod{n}</math> for some <math>0 \le r \le s-1</math>
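The decomposition <math>n - 1 = 2^s \cdot d</math> is simple to compute; a minimal Python sketch (the function name is ours):

    def decompose(n):
        """Write n - 1 as 2**s * d with d odd, factoring out powers of 2."""
        s, d = 0, n - 1
        while d % 2 == 0:
            s += 1
            d //= 2
        return s, d

    print(decompose(221))  # (2, 55), since 220 = 2**2 * 55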

To show that one of these must be true, recall Fermat's little theorem (which only applies for prime moduli):

<math>a^{n-1} \equiv 1\pmod{n}</math>

By the lemma above, if we keep taking square roots of <math>a^{n-1}</math>, we will either get 1 or −1. If we get −1, then the second equality holds and we are done.

If we take out every power of 2 without the second equality ever holding, we are left with <math>a^d</math>, which must itself equal 1 or −1, as it too is a square root of 1 in this chain. But if the second equality never held, then in particular it did not hold for r = 0, meaning that

<math>a^{2^0\cdot d} = a^d \not\equiv -1\pmod{n}</math>

Thus when the second equality fails for every r, the first equality must hold: <math>a^d \equiv 1 \pmod{n}</math>.
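As a concrete example (ours, for illustration), take n = 13, so <math>n - 1 = 12 = 2^2 \cdot 3</math> with s = 2 and d = 3. For a = 2 we get <math>2^3 = 8 \not\equiv \pm 1 \pmod{13}</math>, but squaring once gives <math>2^6 = 64 \equiv -1 \pmod{13}</math>, so the second condition holds with r = 1, just as the claim predicts for a prime.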

The Miller-Rabin primality test is based on the contrapositive of the above claim. That is, if we can find an a such that

<math>a^{d} \not\equiv 1\pmod{n}</math> and

<math>a^{2^rd} \not\equiv -1\pmod{n}</math> for all <math>0 \le r \le s-1</math>

then a is a witness for the compositeness of n (sometimes misleadingly called a strong witness, although it is a certain proof of this fact). Otherwise a is called a strong liar, and n is a strong probable prime to base a. The term "strong liar" refers to the case where n is composite but nevertheless the equations hold as they would for a prime.

For every composite n, there are many witnesses a. However, no simple way of generating such an a is known. The solution is to make the test probabilistic: we choose <math>a\in\mathbb{Z}/n\mathbb{Z}</math> randomly and check whether it is a witness for the compositeness of n. If n is composite, most choices of a are witnesses, so the test will detect n as composite with high probability. There is nevertheless a small chance that we are unlucky and hit an a which is a strong liar for n. We may reduce the probability of such an error by repeating the test for several independently chosen values of a.

Algorithm and running time

The algorithm can be written as follows:

Inputs: n: a value to test for primality; k: a parameter that determines the accuracy of the test
Output: composite if n is composite, otherwise probably prime

write n − 1 as <math>2^s \cdot d</math> by factoring powers of 2 from n − 1
repeat k times:
   pick a randomly in the range [1, n − 1]
   if <math>a^d</math> mod n ≠ 1 and <math>a^{2^rd}</math> mod n ≠ −1 for all r in the range [0, s − 1] then return composite
return probably prime
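A direct Python rendering of this pseudocode might look as follows. It is a sketch rather than a hardened implementation: it relies on Python's built-in pow for fast modular exponentiation, and it draws bases from [2, n − 2], since a = 1 and a = n − 1 can never be witnesses.

    import random

    def miller_rabin(n, k):
        """Return 'composite' if n is composite, else 'probably prime'.

        k is the number of random bases tried; for composite n the
        probability of a wrong answer is at most 4**-k.
        """
        if n < 2 or (n > 2 and n % 2 == 0):
            return 'composite'
        if n < 4:
            return 'probably prime'          # 2 and 3 are prime
        # Write n - 1 as 2**s * d with d odd.
        s, d = 0, n - 1
        while d % 2 == 0:
            s += 1
            d //= 2
        for _ in range(k):
            a = random.randrange(2, n - 1)   # random base in [2, n - 2]
            x = pow(a, d, n)                 # a^d mod n
            if x == 1 or x == n - 1:
                continue                     # a is a strong liar, try another
            for _ in range(s - 1):
                x = pow(x, 2, n)             # a^(2^r * d) for r = 1, ..., s - 1
                if x == n - 1:
                    break                    # second condition holds
            else:
                return 'composite'           # a is a witness: n is composite
        return 'probably prime'

    print(miller_rabin(221, 10))   # 'composite' (221 = 13 * 17)
    print(miller_rabin(1009, 10))  # 'probably prime' (1009 is prime)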

Using modular exponentiation by repeated squaring, the running time of this algorithm is <math>O(k \log^3 n)</math>, where k is the number of different values of a we test; thus this is an efficient, polynomial-time algorithm. Fast FFT-based multiplication can push the running time down to <math>\tilde{O}(k \log^2 n)</math>.

Additional information

The more bases a we test, the better the accuracy of the test. It can be shown that for any odd composite n, at least 3/4 of the bases a are witnesses for the compositeness of n. The Miller-Rabin test is strictly stronger than the Solovay-Strassen primality test in the sense that the set of strong liars of the Miller-Rabin test is a subset of the set of Euler liars of the Solovay-Strassen test. If n is composite, then the Miller-Rabin primality test declares n probably prime with probability at most <math>4^{-k}</math>, whereas the Solovay-Strassen primality test does so with probability at most <math>2^{-k}</math>.

On average, the probability that a composite number is declared a probable prime is significantly smaller than <math>4^{-k}</math>. Damgård, Landrock and Pomerance compute some explicit bounds. Such bounds can, for example, be used to generate primes, but they should not be used to verify primes of unknown origin: in cryptographic applications especially, an adversary might try to send you a pseudoprime in a place where a prime number is required, and then only the bound <math>4^{-k}</math> is valid.

Deterministic variants of the test

The Miller-Rabin algorithm can be made deterministic by trying all possible a below a certain limit. The problem in general is to set the limit so that the test is still reliable.

If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group <math>\left(\mathbb{Z}/n\mathbb{Z}\right)^*</math>, which means that if we test all a from a set which generates <math>\left(\mathbb{Z}/n\mathbb{Z}\right)^*</math>, one of them must be a witness for the compositeness of n. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than <math>O((\ln n)^2)</math>, which was already noted by Miller. The constant involved in the big O notation was reduced to 2 by Bach (1990). This leads to the following conditional primality testing algorithm:

Input: n: a value to test for primality
Output: composite if n is composite, otherwise prime

write n − 1 as <math>2^s \cdot d</math> by factoring powers of 2 from n − 1
for all a in the range <math>[2,\lfloor2(\ln n)^2\rfloor]</math>:
   if <math>a^d</math> mod n ≠ 1 and <math>a^{2^rd}</math> mod n ≠ −1 for all r in the range [0, s − 1] then return composite
return prime
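Under the same assumption, the conditional test can be sketched in Python as below; the witness check is identical to the randomized version above, and only the choice of bases changes. The cap at n − 2 simply avoids trivial bases when n is very small.

    import math

    def miller_rabin_grh(n):
        """Deterministic Miller-Rabin; correct for all n if the GRH holds."""
        if n < 2 or (n > 2 and n % 2 == 0):
            return 'composite'
        if n < 4:
            return 'prime'
        s, d = 0, n - 1
        while d % 2 == 0:
            s += 1
            d //= 2
        # Bach's bound: testing all bases up to 2 (ln n)^2 suffices under GRH.
        limit = min(n - 2, math.floor(2 * math.log(n) ** 2))
        for a in range(2, limit + 1):
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return 'composite'
        return 'prime'

    print(miller_rabin_grh(1009))  # 'prime'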

The running time of the algorithm is <math>\tilde{O}((\log n)^4)</math>. The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.

This algorithm is not used in practice, as it is much slower than the randomized version of the Miller-Rabin test. For theoretical purposes, it was superseded by the AKS primality test, which does not rely on unproven assumptions.

When the number n we want to test is small, trying all <math>a < 2(\ln n)^2</math> is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, it has been verified that

  • if n < 4,759,123,141, it is enough to test a = 2, 7, and 61,
  • if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.

These results give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
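For illustration, here is a Python sketch of an unconditional test built on the verified base sets above, valid for all n < 341,550,071,728,321 (the function name and the small trial-division step are ours):

    def is_prime_small(n):
        """Deterministic Miller-Rabin for n < 341,550,071,728,321."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17):
            if n % p == 0:
                return n == p
        if n < 361:
            return True   # no prime factor <= 17 and n < 19**2, so n is prime
        s, d = 0, n - 1
        while d % 2 == 0:
            s += 1
            d //= 2
        # Verified witness sets for the two ranges quoted above.
        bases = (2, 7, 61) if n < 4759123141 else (2, 3, 5, 7, 11, 13, 17)
        for a in bases:
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    print(is_prime_small(2147483647))  # True: 2**31 - 1 is prime
    print(is_prime_small(4759123141))  # False: 4759123141 = 48781 * 97561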

However, no finite set of bases is sufficient for all composite numbers. Alford, Granville and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least <math>(\ln n)^{1/(3\ln\ln\ln n)}\,</math>. They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order <math>\Theta(\ln n\,\ln\ln n)</math>.

References

  • Michael O. Rabin, Probabilistic algorithm for testing primality, Journal of Number Theory 12 (1980), no. 1, pp. 128–138.
  • Gary L. Miller, Riemann's Hypothesis and Tests for Primality, Journal of Computer and System Sciences 13 (1976), no. 3, pp. 300–317.
  • I. Damgård, P. Landrock and C. Pomerance (1993), Average case error estimates for the strong probable prime test, Math. Comp. 61(203) p.177–194.
  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0262032937. Pages 890–896 of section 31.8, Primality testing.
  • René Schoof, Four primality testing algorithms, to appear in: Surveys in algorithmic number theory, Cambridge University Press.
  • Eric Bach, Explicit bounds for primality testing and related problems, Mathematics of Computation 55 (1990), no. 191, pp. 355–380.
  • W. R. Alford, A. Granville and C. Pomerance, On the difficulty of finding reliable witnesses, in: Algorithmic Number Theory, First International Symposium, Proceedings (L. M. Adleman, M.-D. Huang, eds.), LNCS 877, Springer-Verlag, 1994, pp. 1–16.
