# PCP theorem

*Not to be confused with the Post correspondence problem.*

In computational complexity theory, the PCP theorem (also known as the PCP characterization theorem) states that every decision problem in the complexity class NP has probabilistically checkable proofs: proofs that can be verified by a randomized algorithm using only a constant number of queries to the proof and a logarithmic number of random bits.

The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof.
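The spot-checking idea can be illustrated with a toy sketch. This is only an illustration of constant query complexity and randomized verification, not an actual PCP construction (a real PCP verifier checks a locally testable encoding of an NP witness); the function names and the repetition-code "proof" format are hypothetical choices for this example.

```python
import random

def verify(proof, k=3, seed=None):
    """Toy spot-check verifier: accepts iff k randomly sampled
    positions of the proof all carry the same symbol.

    A valid proof here is any pure repetition of one symbol; a
    corrupted proof is caught with probability depending only on
    the corrupted fraction, not on the proof length.
    """
    rng = random.Random(seed)
    samples = [proof[rng.randrange(len(proof))] for _ in range(k)]
    return all(s == samples[0] for s in samples)

# A valid "proof" is always accepted, regardless of its length.
assert verify("1" * 100, k=3)

# A proof with half its symbols corrupted is rejected with constant
# probability (here 3/4 per trial); repeating the test a constant
# number of times drives the error below any fixed threshold.
corrupted = "1" * 50 + "0" * 50
rejections = sum(not verify(corrupted, k=3, seed=s) for s in range(1000))
print(rejections / 1000)  # roughly 0.75
```

The key point mirrored from the theorem: the number of queries (`k`) is a constant independent of the proof length, yet the verifier still detects substantially corrupted proofs with constant probability.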

The PCP theorem is the cornerstone of the theory of computational hardness of approximation, which investigates the inherent difficulty of designing efficient approximation algorithms for various optimization problems. It has been described by Ingo Wegener as "the most important result in complexity theory since Cook's theorem"[1] and by Oded Goldreich as "a culmination of a sequence of impressive works […] rich in innovative ideas".[2]

## Formal statement

The PCP theorem states that

> NP = PCP[O(log n), O(1)].

## PCP and hardness of approximation

An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximate within some constant factor.
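The quantity being approximated, the maximum fraction of simultaneously satisfiable constraints, can be made concrete with a brute-force sketch on a tiny constraint satisfaction instance. The clause encoding and function names below are hypothetical choices for illustration; the exhaustive search is exponential in the number of variables and is only meant to define the quantity, not to compute it efficiently.

```python
from itertools import product

def satisfied_fraction(assignment, clauses):
    """Fraction of clauses satisfied by a Boolean assignment.
    Each clause is a tuple of literals (variable_index, polarity);
    a clause holds if at least one of its literals is satisfied."""
    sat = sum(any(assignment[v] == p for v, p in clause) for clause in clauses)
    return sat / len(clauses)

def max_satisfiable_fraction(num_vars, clauses):
    """Maximum over all 2^num_vars assignments (illustration only)."""
    return max(
        satisfied_fraction(a, clauses)
        for a in product([False, True], repeat=num_vars)
    )

# Tiny unsatisfiable system of unit clauses: x0, not x0, x1, not x1.
# Any assignment satisfies exactly half of the constraints.
clauses = [((0, True),), ((0, False),), ((1, True),), ((1, False),)]
print(max_satisfiable_fraction(2, clauses))  # 0.5
```

Under this formulation of the theorem, no polynomial-time algorithm can approximate this maximum fraction within some constant factor for general constraint satisfaction problems unless P = NP.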