# Prime number theorem

*This article uses technical mathematical notation for logarithms. All instances of log(x) without a subscript base should be interpreted as the natural logarithm, commonly written ln(x) or log_e(x).*

In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function).

The first such distribution found is π(N) ~ N / log(N), where π(N) is the prime-counting function (the number of primes less than or equal to N) and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1 / log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N).[1]

## Statement

*Figure: ratio of the prime-counting function π(x) to two of its approximations, x / log x and Li(x). As x increases (note that the x-axis is logarithmic), both ratios tend towards 1. The ratio for x / log x converges from above very slowly, while the ratio for Li(x) converges more quickly from below.*

*Figure: log-log plot of the absolute error of x / log x and Li(x), two approximations to the prime-counting function π(x). Unlike the ratio, the difference between π(x) and x / log x increases without bound as x increases. On the other hand, Li(x) − π(x) switches sign infinitely many times.*
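The average-gap heuristic (average gap ≈ log N among the first N integers) is easy to probe numerically. The following is a minimal Python sketch using a basic sieve of Eratosthenes; all names are illustrative, not from any library:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10 ** 6
primes = primes_up_to(N)
# Average gap between consecutive primes up to N; the PNT predicts roughly log(N).
avg_gap = (primes[-1] - primes[0]) / (len(primes) - 1)
print(f"pi({N}) = {len(primes)}, average gap = {avg_gap:.2f}, log(N) = {log(N):.2f}")
```

At N = 10^6 the average gap is already within about 8% of log N; the agreement improves slowly as N grows.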

Let π(x) be the prime-counting function, defined as the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / log x is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:

$$\lim_{x\to\infty}\frac{\pi(x)}{x/\log x}=1,$$

known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as

$$\pi(x)\sim\frac{x}{\log x}.$$

This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x increases without bound. Instead, the theorem states that x / log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound.
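The slow convergence of the ratio can be observed directly. A minimal Python sketch (illustrative names; a simple sieve tabulating π(x)):

```python
from math import log

def prime_counts(limit):
    """Sieve of Eratosthenes; return a list c with c[x] = pi(x)."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    counts, c = [0] * (limit + 1), 0
    for i in range(limit + 1):
        c += sieve[i]
        counts[i] = c
    return counts

counts = prime_counts(10 ** 6)
# The ratio pi(x) log x / x stays above 1 and tends toward 1 very slowly.
ratios = [counts[10 ** k] * log(10 ** k) / 10 ** k for k in range(2, 7)]
for k, r in zip(range(2, 7), ratios):
    print(f"x = 10^{k}: pi(x) log x / x = {r:.4f}")
```

Even at x = 10^6 the ratio is still around 1.08, illustrating how slowly x / log x approaches π(x) in relative terms.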

The prime number theorem is equivalent to the statement that the nth prime number p_n satisfies

$$p_{n}\sim n\log n,$$

the asymptotic notation meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, the 2×10^17th prime number is 8512677386048191063,[2] and (2×10^17) log(2×10^17) rounds to 7967418752291744388, a relative error of about 6.4%.
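The same comparison can be run at a more modest scale (where the relative error is still noticeably larger). A Python sketch, with illustrative names:

```python
from math import log

def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

n = 100_000
primes = primes_up_to(1_500_000)   # comfortably large enough to contain the n-th prime
p_n = primes[n - 1]
approx = n * log(n)
rel_err = (p_n - approx) / p_n
print(f"p_{n} = {p_n}, n log n = {approx:.0f}, relative error = {rel_err:.1%}")
```

At n = 10^5 the relative error is around 11%, compared with the 6.4% quoted above at n = 2×10^17, consistent with the error shrinking (slowly) as n grows.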

On the other hand, the following asymptotic relations are logically equivalent:[3]

$$\lim_{x\to\infty}\frac{\pi(x)\log x}{x}=1, \qquad \lim_{x\to\infty}\frac{\pi(x)\log\pi(x)}{x}=1.$$

As outlined below, the prime number theorem is also equivalent to

$$\lim_{x\to\infty}\frac{\vartheta(x)}{x}=\lim_{x\to\infty}\frac{\psi(x)}{x}=1,$$

where ϑ and ψ are the first and the second Chebyshev functions respectively, and to

$$\lim_{x\to\infty}\frac{M(x)}{x}=0,$$[4]

where $M(x)=\sum_{n\le x}\mu(n)$ is the Mertens function.
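The Mertens-function form is easy to probe numerically: M(x) grows far more slowly than x. A minimal Python sketch (the Möbius sieve below is illustrative, not from the article):

```python
def mobius_up_to(n):
    """Sieve computing the Moebius function mu(1..n)."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1          # flip sign once per distinct prime factor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # kill integers with a squared factor
    return mu

x = 10 ** 5
mu = mobius_up_to(x)
M = sum(mu[1:])          # Mertens function M(x) = sum of mu(n) for n <= x
print(f"M({x}) = {M}, M(x)/x = {M / x:.5f}")
```

Already at x = 10^5 the ratio M(x)/x is tiny, in line with the equivalence above.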

## History of the proof of the asymptotic law of prime numbers

Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a / (A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849.[5] In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.

In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for its use of the zeta function ζ(s), for real values of the argument s, as in the works of Leonhard Euler as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as x goes to infinity of π(x) / (x / log(x)) exists at all, then it is necessarily equal to one.[6] He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1, for all sufficiently large x.[7] Although Chebyshev's paper did not prove the prime number theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.
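Bertrand's postulate is cheap to verify directly for small n. A Python sketch (exhaustive only over the range checked; names illustrative):

```python
def prime_sieve(limit):
    """Boolean sieve of Eratosthenes: sieve[m] is True iff m is prime."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return sieve

N = 10_000
is_prime = prime_sieve(2 * N)
# Bertrand's postulate: for every n >= 2 there is a prime p with n < p <= 2n.
ok = all(any(is_prime[m] for m in range(n + 1, 2 * n + 1)) for n in range(2, N + 1))
print(f"Bertrand's postulate holds for 2 <= n <= {N}: {ok}")
```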

An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, chiefly that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending Riemann's ideas, two proofs of the asymptotic law of the distribution of prime numbers were found independently by Jacques Hadamard and Charles Jean de la Vallée Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is nonzero for all complex values of the variable s that have the form s = 1 + it with t > 0.[8] During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). Hadamard's and de la Vallée Poussin's original proofs are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by the American mathematician Donald J. Newman.[9][10] Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis.

## Proof sketch

Here is a sketch of the proof referred to in one of Terence Tao's lectures.[11] Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by

$$\psi(x)=\sum_{\substack{p^{k}\le x \\ p\text{ prime}}}\log p\;.$$

This is sometimes written as

$$\psi(x)=\sum_{n\le x}\Lambda(n)\;,$$

where Λ(n) is the von Mangoldt function, namely

$$\Lambda(n)=\begin{cases}\log p & \text{if } n=p^{k} \text{ for some prime } p \text{ and integer } k\ge 1,\\ 0 & \text{otherwise.}\end{cases}$$

It is now relatively easy to check that the PNT is equivalent to the claim that

$$\lim_{x\to\infty}\frac{\psi(x)}{x}=1\;.$$

Indeed, this follows from the easy estimates

$$\psi(x)=\sum_{\substack{p\le x \\ p\text{ prime}}}\log p\left\lfloor\frac{\log x}{\log p}\right\rfloor\le\sum_{\substack{p\le x \\ p\text{ prime}}}\log x=\pi(x)\log x$$

and (using big O notation) for any ε > 0,

$$\psi(x)\ge\sum_{\substack{x^{1-\varepsilon}\le p\le x \\ p\text{ prime}}}\log p\ge\sum_{\substack{x^{1-\varepsilon}\le p\le x \\ p\text{ prime}}}(1-\varepsilon)\log x=(1-\varepsilon)\left(\pi(x)+O\left(x^{1-\varepsilon}\right)\right)\log x\;.$$

The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function.
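The sandwich above says ψ(x) is close to, and never exceeds, π(x) log x. A Python sketch computing ψ(x) by summing log p over prime powers (illustrative code, not from the lecture):

```python
from math import log

def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

x = 10 ** 6
primes = primes_up_to(x)

# psi(x) = sum of log p over prime powers p^k <= x (the von Mangoldt weights).
psi = 0.0
for p in primes:
    pk = p
    while pk <= x:
        psi += log(p)
        pk *= p

upper = len(primes) * log(x)      # the upper bound pi(x) log x
print(f"psi(x)/x = {psi / x:.4f}, pi(x) log(x)/x = {upper / x:.4f}")
```

At x = 10^6, ψ(x)/x is already very close to 1, while π(x) log x / x is still around 1.08; the two are equivalent only in the limit.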
It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation

$$-\frac{\zeta'(s)}{\zeta(s)}=\sum_{n=1}^{\infty}\Lambda(n)\,n^{-s}\;.$$

A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x the equation

$$\psi(x)=x-\log(2\pi)-\sum_{\rho:\,\zeta(\rho)=0}\frac{x^{\rho}}{\rho}$$

holds, where the sum is over all zeros (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms.

The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: their total contribution to the sum is

$$\sum_{n=1}^{\infty}\frac{1}{2n\,x^{2n}}=-\frac{1}{2}\log\left(1-\frac{1}{x^{2}}\right),$$

which vanishes for large x. The nontrivial zeros, namely those in the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1.
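The closed form for the trivial-zero contribution is an elementary logarithm series, which can be checked numerically (Python sketch):

```python
from math import log

def trivial_zero_sum(x, terms=60):
    """Partial sum of sum_{n>=1} 1/(2n x^(2n))."""
    return sum(1.0 / (2 * n * x ** (2 * n)) for n in range(1, terms + 1))

for x in (2.0, 5.0, 10.0):
    closed = -0.5 * log(1 - 1 / x ** 2)
    assert abs(trivial_zero_sum(x) - closed) < 1e-12
    # The contribution shrinks rapidly as x grows:
    print(f"x = {x}: trivial-zero contribution = {closed:.6f}")
```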

### Non-vanishing on Re(s) = 1

To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula

$$\zeta(s)=\prod_{p}\frac{1}{1-p^{-s}}$$

for Re(s) > 1. This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and

$$\log\zeta(s)=-\sum_{p}\log\left(1-p^{-s}\right)=\sum_{p,n}\frac{p^{-ns}}{n}\;.$$

Write s = x + iy; then

$$\bigl|\zeta(x+iy)\bigr|=\exp\left(\sum_{n,p}\frac{\cos(ny\log p)}{np^{nx}}\right)\;.$$

Now observe the identity

$$3+4\cos\varphi+\cos 2\varphi=2(1+\cos\varphi)^{2}\ge 0\;,$$

so that

$$\left|\zeta(x)^{3}\zeta(x+iy)^{4}\zeta(x+2iy)\right|=\exp\left(\sum_{n,p}\frac{3+4\cos(ny\log p)+\cos(2ny\log p)}{np^{nx}}\right)\ge 1$$

for all x > 1. Suppose now that ζ(1 + iy) = 0. Certainly y is not zero, since ζ(s) has a simple pole at s = 1. Suppose that x > 1 and let x tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(x + 2iy) stays analytic, the left-hand side of the previous inequality tends to 0: the assumed zero makes ζ(x + iy)^4 vanish to fourth order, which outweighs the third-order blow-up of ζ(x)^3. This contradicts the inequality.
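The trigonometric identity at the heart of this argument follows from the double-angle formula cos 2φ = 2cos²φ − 1, and can be sanity-checked numerically (Python sketch):

```python
from math import cos

# 3 + 4 cos(phi) + cos(2 phi) = 2 (1 + cos(phi))^2 >= 0 for every real phi.
for k in range(2001):
    phi = -10.0 + 0.01 * k
    lhs = 3 + 4 * cos(phi) + cos(2 * phi)
    rhs = 2 * (1 + cos(phi)) ** 2
    assert abs(lhs - rhs) < 1e-9
    assert lhs >= -1e-9
print("identity verified on a grid of 2001 points")
```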

Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for ψ(x) does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book[12] provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D.J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove.

## Newman's proof of the prime number theorem

D. J. Newman gives a quick proof of the prime number theorem (PNT). The proof is "non-elementary" by virtue of relying on complex analysis, but uses only elementary techniques from a first course in the subject: Cauchy's integral formula, Cauchy's integral theorem and estimates of complex integrals. Here is a brief sketch of this proof. See [10] for the complete details.

The proof uses the same preliminaries as in the previous section, except instead of the function ψ, the Chebyshev function

$$\vartheta(x)=\sum_{p\le x}\log p$$

is used, which is obtained by dropping some of the terms from the series for ψ. It is easy to show that the PNT is equivalent to lim_{x→∞} ϑ(x)/x = 1. Likewise, instead of −ζ′(s)/ζ(s) the function

$$\Phi(s)=\sum_{p}\log p\;p^{-s}$$

is used (convergent for Re(s) > 1), which is obtained by dropping some terms in the series for −ζ′(s)/ζ(s). The functions Φ(s) and −ζ′(s)/ζ(s) differ by a function holomorphic on Re(s) = 1. Since, as was shown in the previous section, ζ(s) has no zeros on the line Re(s) = 1, Φ(s) − 1/(s − 1) has no singularities on Re(s) = 1.
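The equivalent statement ϑ(x)/x → 1 can also be observed numerically. A Python sketch with a basic sieve (illustrative names):

```python
from math import log

def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(10 ** 6)
theta, i = 0.0, 0
for k in range(3, 7):
    x = 10 ** k
    while i < len(primes) and primes[i] <= x:
        theta += log(primes[i])
        i += 1
    # Chebyshev function theta(x) = sum of log p over primes p <= x.
    print(f"x = 10^{k}: theta(x)/x = {theta / x:.4f}")
```

The ratios climb steadily toward 1 as x grows through the powers of ten.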

One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that {displaystyle vartheta (x)/x} is bounded. This is proved using an ingenious and easy method due to Chebyshev.
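A classical Chebyshev-type bound (proved via binomial coefficients, not reproduced here) gives ϑ(x) < x log 4 for all x ≥ 1. The sketch below only checks this bound, and hence boundedness of ϑ(x)/x, empirically over a finite range:

```python
from math import log

def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

limit = 10 ** 5
primes = primes_up_to(limit)
theta, j, worst = 0.0, 0, 0.0
for x in range(2, limit + 1):
    while j < len(primes) and primes[j] <= x:
        theta += log(primes[j])
        j += 1
    worst = max(worst, theta / x)
# Empirically theta(x)/x stays well below log 4 ~ 1.386 on this range.
print(f"max of theta(x)/x for 2 <= x <= {limit}: {worst:.4f}")
assert worst < log(4)
```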

Integration by parts shows how ϑ(x) and Φ(s) are related. For Re(s) > 1,

$$\Phi(s)=\int_{1}^{\infty}x^{-s}\,d\vartheta(x)=s\int_{1}^{\infty}\vartheta(x)x^{-s-1}\,dx=s\int_{0}^{\infty}\vartheta(e^{t})e^{-st}\,dt.$$

Newman's method proves the PNT by showing that the integral

$$I=\int_{0}^{\infty}\left(\frac{\vartheta(e^{t})}{e^{t}}-1\right)dt$$

converges, and therefore the integrand goes to zero as t → ∞, which is the PNT. In general, the convergence of an improper integral does not imply that the integrand goes to zero at infinity, since it may oscillate, but since ϑ is increasing, it is easy to show in this case.

To show the convergence of I, for Re(z) > 0 let

$$g_{T}(z)=\int_{0}^{T}f(t)e^{-zt}\,dt \quad\text{and}\quad g(z)=\int_{0}^{\infty}f(t)e^{-zt}\,dt, \qquad\text{where}\quad f(t)=\frac{\vartheta(e^{t})}{e^{t}}-1.$$

Then

$$\lim_{T\to\infty}g_{T}(z)=g(z)=\frac{\Phi(s)}{s}-\frac{1}{s-1} \qquad\text{where}\quad z=s-1,$$

which is equal to a function holomorphic on the line Re(z) = 0.

The convergence of the integral I, and thus the PNT, is proved by showing that lim_{T→∞} g_T(0) = g(0). This involves a change of order of limits, since it can be written

$$\lim_{T\to\infty}\lim_{z\to 0}g_{T}(z)=\lim_{z\to 0}\lim_{T\to\infty}g_{T}(z),$$

and is therefore classified as a Tauberian theorem.

The difference g(0) − g_T(0) is expressed using Cauchy's integral formula and then shown to be small for T large by estimating the integrand. Fix R > 0 and δ > 0 such that g(z) is holomorphic in the region where |z| ≤ R and Re(z) ≥ −δ, and let C be the boundary of this region. Since 0 is in the interior of the region, Cauchy's integral formula gives

$$g(0)-g_{T}(0)=\frac{1}{2\pi i}\int_{C}\left(g(z)-g_{T}(z)\right)\frac{dz}{z}=\frac{1}{2\pi i}\int_{C}\left(g(z)-g_{T}(z)\right)F(z)\frac{dz}{z},$$

where

$$F(z)=e^{zT}\left(1+\frac{z^{2}}{R^{2}}\right)$$

is the factor introduced by Newman, which does not change the integral since F is entire and F(0) = 1.

To estimate the integral, break the contour C into two parts, C = C₊ + C₋, where C₊ = C ∩ {z | Re(z) > 0} and C₋ = C ∩ {z | Re(z) ≤ 0}. Then

$$g(0)-g_{T}(0)=\int_{C_{+}}\int_{T}^{\infty}H(t,z)\,dt\,dz-\int_{C_{-}}\int_{0}^{T}H(t,z)\,dt\,dz+\int_{C_{-}}g(z)F(z)\frac{dz}{2\pi iz},$$

where H(t, z) = f(t)e^{−tz}F(z)/(2πi). Since ϑ(x)/x, and hence f(t), is bounded, let B be an upper bound for the absolute value of f(t). This bound, together with the estimate |F| ≤ 2 exp(T Re(z)) |Re(z)|/R for |z| = R, gives that the first integral is at most B/R in absolute value. The integrand over C₋ in the second integral is entire, so by Cauchy's integral theorem the contour C₋ can be modified to a semicircle of radius R in the left half-plane without changing the integral, and the same argument as for the first integral gives that the absolute value of the second integral is at most B/R. Finally, letting T → ∞, the third integral goes to zero, since e^{zT}, and hence F, goes to zero on the contour. Combining the two estimates and the limit gives

$$\limsup_{T\to\infty}|g(0)-g_{T}(0)|\le\frac{2B}{R}.$$

This holds for any R, so lim_{T→∞} g_T(0) = g(0), and the PNT follows.