Kraft–McMillan inequality
In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code[1] (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths. Its applications to prefix codes and trees often find use in computer science and information theory.
Kraft's inequality was published in Kraft (1949). However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The result was independently discovered in McMillan (1956). McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob.
Applications and intuitions

Kraft's inequality limits the lengths of codewords in a prefix code: if one takes an exponential of the length of each valid codeword (that is, $r^{-\ell}$ for a codeword of length $\ell$ over an alphabet of size $r$), the resulting set of values must look like a probability mass function; in other words, it must have total measure less than or equal to one. Kraft's inequality can also be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive. Among the useful properties following from the inequality are the following statements:

- If Kraft's inequality holds with strict inequality, the code has some redundancy.
- If Kraft's inequality holds with equality, the code in question is a complete code.[2]
- If Kraft's inequality does not hold, the code is not uniquely decodable.
- For every uniquely decodable code, there exists a prefix code with the same length distribution.

Formal statement

Let each source symbol from the alphabet

  $S = \{\, s_1, s_2, \ldots, s_n \,\}$

be encoded into a uniquely decodable code over an alphabet of size $r$ with codeword lengths $\ell_1, \ell_2, \ldots, \ell_n$. Then

  $\sum_{i=1}^{n} r^{-\ell_i} \leqslant 1.$

Conversely, for a given set of natural numbers $\ell_1, \ell_2, \ldots, \ell_n$ satisfying the above inequality, there exists a uniquely decodable code over an alphabet of size $r$ with those codeword lengths.
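Both directions of the formal statement can be illustrated concretely. The following Python sketch (helper names are ours, not from the source) computes the Kraft sum exactly for a list of codeword lengths and, when the inequality holds, builds a prefix code with exactly those lengths via the standard greedy construction: each codeword is taken to be the first $\ell_i$ $r$-ary digits of the running total of the "budget" already spent.

from fractions import Fraction

def kraft_sum(lengths, r=2):
    """Exact Kraft sum: sum of r**(-l) over the codeword lengths."""
    return sum(Fraction(1, r ** l) for l in lengths)

def build_prefix_code(lengths, r=2):
    """Greedy converse construction: assign codewords in order of
    increasing length, treating each codeword as an r-ary fraction.
    Returns None if the lengths violate the Kraft inequality."""
    if kraft_sum(lengths, r) > 1:
        return None
    code = {}
    next_value = Fraction(0)              # left end of the next free interval
    for i, l in sorted(enumerate(lengths), key=lambda p: p[1]):
        # the codeword is the first l r-ary digits of next_value
        digits = []
        v = next_value
        for _ in range(l):
            v *= r
            d = int(v)
            digits.append(str(d))
            v -= d
        code[i] = "".join(digits)
        next_value += Fraction(1, r ** l)  # reserve an interval of width r**(-l)
    return code

lengths = [2, 2, 3, 3, 3]
print(kraft_sum(lengths))          # 7/8 <= 1, so a prefix code exists
print(build_prefix_code(lengths))  # {0: '00', 1: '01', 2: '100', 3: '101', 4: '110'}

For the lengths 2, 2, 3, 3, 3 the Kraft sum is 7/8, and the construction returns the prefix code 00, 01, 100, 101, 110.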
Example: binary trees

[Figure: 9, 14, 19, 67 and 76 are leaf nodes at depths of 3, 3, 3, 3 and 2, respectively.]
Any binary tree can be viewed as defining a prefix code for the leaves of the tree. Kraft's inequality states that

  $\sum_{\ell \in \text{leaves}} 2^{-\text{depth}(\ell)} \leqslant 1.$

Here the sum is taken over the leaves of the tree, i.e. the nodes without any children, and the depth is the distance to the root node. For the tree in the figure above, this sum is

  $\frac{1}{4} + 4\left(\frac{1}{8}\right) = \frac{3}{4} \leqslant 1.$
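As a small check of this computation, the sketch below builds one plausible binary tree with the figure's leaf depths (the exact shape of the figure's tree is not recoverable here, so the shape is an assumption), enumerates its leaves with their depths, and evaluates the sum.

from fractions import Fraction

# One plausible tree with the figure's leaf depths: leaves 9, 14, 19, 67
# at depth 3 and 76 at depth 2. A node is either an integer (a leaf) or a
# dict with optional "left"/"right" children.
tree = {
    "left":  {"left": 76, "right": {"left": 9, "right": 14}},
    "right": {"left": {"left": 19, "right": 67}},
}

def leaf_depths(node, depth=0):
    """Yield (leaf_label, depth) pairs for every leaf of the tree."""
    if isinstance(node, int):
        yield node, depth
        return
    for child in ("left", "right"):
        if child in node:
            yield from leaf_depths(node[child], depth + 1)

depths = dict(leaf_depths(tree))
print(depths)                                             # {76: 2, 9: 3, 14: 3, 19: 3, 67: 3}
print(sum(Fraction(1, 2 ** d) for d in depths.values()))  # 3/4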
Proof

Proof for prefix codes

[Figure: Example for a binary tree. Red nodes represent a prefix tree. The method for calculating the number of descendant leaf nodes in the full tree is shown.]

First, let us show that the Kraft inequality holds whenever the code for $S$ is a prefix code.
Suppose that $\ell_1 \leqslant \ell_2 \leqslant \cdots \leqslant \ell_n$. Let $A$ be the full $r$-ary tree of depth $\ell_n$ (thus, every node of $A$ at level $\ell_n$ is a leaf, and $A$ has $r^{\ell_n}$ leaves in total). Each codeword of length $\ell_i$ corresponds to a node $v_i$ of $A$ at depth $\ell_i$, and $v_i$ has exactly $r^{\ell_n - \ell_i}$ descendant leaves at depth $\ell_n$. Because the code is a prefix code, no $v_i$ is an ancestor of another $v_j$, so their sets of descendant leaves are pairwise disjoint. Counting leaves therefore gives

  $\sum_{i=1}^{n} r^{\ell_n - \ell_i} \leqslant r^{\ell_n},$

and dividing both sides by $r^{\ell_n}$ yields the Kraft inequality $\sum_{i=1}^{n} r^{-\ell_i} \leqslant 1$.
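The disjointness argument can be checked mechanically for a concrete binary prefix code. The sketch below (helper name assumed, not from the source) expands each codeword to its descendant leaves at the maximum depth and verifies that these sets do not overlap, so their total size is at most $2^{\ell_n}$.

from itertools import product

def descendant_leaves(codeword, max_len, alphabet="01"):
    """All strings of length max_len that have `codeword` as a prefix,
    i.e. the leaf descendants of the codeword's node in the full tree."""
    suffixes = product(alphabet, repeat=max_len - len(codeword))
    return {codeword + "".join(s) for s in suffixes}

code = ["00", "01", "100", "101", "110"]   # a prefix code with lengths 2, 2, 3, 3, 3
max_len = max(len(w) for w in code)

sets = [descendant_leaves(w, max_len) for w in code]
total = sum(len(s) for s in sets)          # sum of 2**(max_len - l_i)
union = set().union(*sets)

print(total == len(union))                 # True: the descendant sets are pairwise disjoint
print(total, "<=", 2 ** max_len)           # 7 <= 8, i.e. sum of 2**(-l_i) <= 1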
Notes

1. Cover, Thomas M.; Thomas, Joy A. (2006), "Data Compression", Elements of Information Theory (2nd ed.), John Wiley & Sons, pp. 108–109, doi:10.1002/047174882X.ch5, ISBN 978-0-471-24195-9.
2. De Rooij, Steven; Grünwald, Peter D. (2011), "Luckiness and Regret in Minimum Description Length Inference", Philosophy of Statistics (1st ed.), Elsevier, p. 875, ISBN 978-0-080-93096-1.

References

Kraft, Leon G. (1949), A device for quantizing, grouping, and coding amplitude modulated pulses, MS thesis, Electrical Engineering Department, Massachusetts Institute of Technology, Cambridge, MA, hdl:1721.1/12390.
McMillan, Brockway (1956), "Two inequalities implied by unique decipherability", IEEE Trans. Inf. Theory, 2 (4): 115–116, doi:10.1109/TIT.1956.1056818.

See also

Chaitin's constant
Canonical Huffman code