Information theory



The topic of this article is distinct from the topics of Library and information science and Information technology.

Information theory is a field of mathematics concerning the storage and transmission of data, and it includes the fundamental concepts of source coding and channel coding. Source coding involves the compression of data in such a manner that another person can recover either an identical copy of the uncompressed data (lossless data compression, which uses the concept of entropy) or an approximate copy of the uncompressed data (lossy data compression, which uses the theory of rate distortion). Channel coding considers how to transmit such data, asking at how high a rate data can be communicated to someone else through a noisy medium with an arbitrarily small probability of error.

These topics are rigorously addressed using mathematics introduced by Claude Shannon in 1948. His papers spawned the field of information theory, which goes beyond the above questions to extensions and combinations of them, such as those studied in network information theory, and to related problems in portfolio theory and cryptography. The impact of information theory has been crucial to the success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, the development of the Internet and broadband Internet access, the analysis of DNA, and numerous other fields.


Overview

Information theory is the mathematical theory of data communication and storage, generally considered to have been founded in 1948 by Claude E. Shannon. The central paradigm of classic information theory is the engineering problem of the transmission of information over a noisy channel. The most fundamental results of this theory are Shannon's source coding theorem, which establishes that on average the number of bits needed to represent the result of an uncertain event is given by the entropy; and Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold called the channel capacity. The channel capacity is achieved with appropriate encoding and decoding systems.

Information theory is closely associated with a collection of pure and applied disciplines that have been carried out under a variety of banners in different parts of the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, chief among them coding theory.

Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and fidelity of data communication over a noisy channel up to the limit that Shannon proved is the best possible. These codes can be roughly subdivided into data compression and error-correction codes. It took many years to find the good codes whose existence Shannon proved. A third class of codes are cryptographic ciphers; concepts from coding theory and information theory are much used in cryptography and cryptanalysis. See the article deciban for an interesting historical application.

Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even musical composition and whale songs.

History

The decisive event which established the subject of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's (1916–2001) classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October of 1948.

In this revolutionary and groundbreaking paper, work that Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process, which underlies information theory; and with it the ideas of the information entropy and redundancy of a source, and their relevance through the source coding theorem; the mutual information and the channel capacity of a noisy channel, as underwritten by the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon-Hartley law for the channel capacity of a Gaussian channel; and of course the bit, a new common currency of information.

Before 1948

Quantitative ideas of information

The most direct antecedents of Shannon's work were two papers published in the 1920s by Harry Nyquist and Ralph Hartley, who were both still very much research leaders at Bell Labs when Shannon arrived there in the early 1940s.

Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, is mostly concerned with some detailed engineering aspects of telegraph signals. But a more theoretical section discusses quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation

<math>W = K \log m \,</math>

where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant.

Hartley's 1928 paper, called simply Transmission of Information, went further by introducing the word information, and making explicitly clear the idea that information in this context was a measurable quantity, reflecting only that the receiver was able to distinguish that one sequence of symbols had been sent rather than any other -- quite regardless of any associated meaning or other psychological or semantic aspect the symbols might represent. This amount of information he quantified as

<math>H = \log S^n \,</math>

where S was the number of possible symbols, and n the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit of information. The Hartley information, H0, is also still very much used as a quantity for the logarithm of the total number of possibilities.

A similar unit of log10 probability, the ban, and its derived unit the deciban (one tenth of a ban), were introduced by Alan Turing in 1940 as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. The decibannage represented the reduction in (the logarithm of) the total number of possibilities (similar to the change in the Hartley information); and also the log-likelihood ratio (or change in the weight of evidence) that could be inferred for one hypothesis over another from a set of observations. The expected change in the weight of evidence is equivalent to what was later called the Kullback discrimination information.

But underlying these notions was still the assumption of equal a priori probabilities: the information content of events of unequal probability had not yet been addressed, nor had any underlying picture emerged of the questions raised by the communication of such varied outcomes.

Entropy in statistical mechanics

One area where unequal probabilities were indeed well known was statistical mechanics, where Ludwig Boltzmann had, in the context of his H-theorem of 1872, first introduced the quantity

<math>H = - \sum f_i \log f_i </math>

as a measure of the breadth of the spread of states available to a single particle in a gas of like particles, where <math>f_i</math> represented the relative frequency of each possible state. Boltzmann argued mathematically that the effect of collisions between the particles would cause the H-function to increase inevitably from any initial configuration until equilibrium was reached, and identified it as an underlying microscopic rationale for the macroscopic thermodynamic entropy of Clausius.

(The H-theorem of Boltzmann subsequently led to no end of controversy; and can still cause lively debates to the present day, often aggravated by protagonists not realising that they are arguing at cross-purposes. The theorem relies on a hidden assumption, that useful information is destroyed by the collisions, which can be questioned; also, it relies on a non-equilibrium state being singled out as the initial state (not the final state), which breaks time symmetry; also, strictly it applies only in a statistical sense, namely that an average H-function would be non-decreasing).

Boltzmann's definition was soon reworked by the American mathematical physicist J. Willard Gibbs into a general formula for the statistical-mechanical entropy, no longer requiring identical and non-interacting particles, but instead based on the probability distribution pi for the complete microstate i of the total system:

<math>S = -k_B \sum p_i \ln p_i \,</math>

This (Gibbs) entropy from statistical mechanics directly corresponds to Clausius's classical thermodynamic definition, as explored further in the article Thermodynamic entropy.

The connection between entropy and information was also drawn before Shannon by Leo Szilard, in his 1929 analysis of Maxwell's demon, and by Gilbert N. Lewis, who wrote in 1930 that "Gain in entropy always means loss of information, and nothing more."

Shannon himself was apparently not particularly aware of the close similarity between his new quantity and the earlier work in thermodynamics; but John von Neumann was. The story goes that when Shannon was deciding what to call his new quantity, fearing that 'information' was already over-used, von Neumann told him firmly: "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."


Development since 1948

The publication of Shannon's 1948 paper, "A Mathematical Theory of Communication", in the Bell System Technical Journal was the founding of information theory as we know it today. Many developments and applications of the theory have taken place since then, which have made many modern devices for data communication and storage such as CD-ROMs and mobile phones possible.

Mathematical theory of information

For a more thorough discussion of these basic equations, see Information entropy.

The abstract idea of what "information" really is must be made more concrete so that mathematicians can analyze it.

Self-information

Shannon defined a measure of information content called the self-information or surprisal of a message m:

<math> I(m) = - \log p(m),\,</math>

where <math>p(m) = Pr(M=m)</math> is the probability that message m is chosen from all possible choices in the message space <math>M</math>.

This equation causes messages with lower probabilities to contribute more to the overall value of I(m). In other words, infrequently occurring messages are more valuable. (This is a consequence of the property of logarithms that <math>-\log p(m)</math> is very large when <math>p(m)</math> is near 0 for unlikely messages and very small when <math>p(m)</math> is near 1 for almost certain messages.)

For example, if John says "See you later, honey" to his wife every morning before leaving for the office, that message holds little "content" or "value". But if he shouts "Get lost" at his wife one morning, then that message holds much more value or content (because, supposedly, the probability of him choosing that message is very low).
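A minimal numerical sketch of this idea follows; the two message probabilities are invented purely for illustration.

 import math

 def self_information(p, base=2):
     """Self-information -log p(m) of a message with probability p (bits by default)."""
     return -math.log(p, base)

 # Hypothetical probabilities for John's two possible messages (illustrative values only).
 p_routine = 0.49     # "See you later, honey": said nearly every morning
 p_unusual = 0.0001   # "Get lost": almost never said

 print(self_information(p_routine))   # ~1.0 bit: little surprise, little content
 print(self_information(p_unusual))   # ~13.3 bits: highly surprising, high content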

Entropy

The entropy of a discrete message space <math>M</math> is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message <math>m</math> from that message space:

<math> H(M) = \mathbb{E} \{I(m)\} = \sum_{m \in M} p(m) I(m) = -\sum_{m \in M} p(m) \log p(m).</math>

The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case <math>H(M) = \log |M|</math>.
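As a quick illustration (the two distributions below are arbitrary examples), a fair coin attains the maximum entropy of one bit over a two-message space, while a biased coin is more predictable and has lower entropy:

 import math

 def entropy(probs, base=2):
     """Shannon entropy H(M) = -sum p log p, skipping zero-probability messages."""
     return -sum(p * math.log(p, base) for p in probs if p > 0)

 print(entropy([0.5, 0.5]))   # 1.0 bit: the equiprobable (maximum-entropy) case
 print(entropy([0.9, 0.1]))   # ~0.47 bits: a biased, more predictable source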

Joint entropy

The joint entropy of two discrete random variables <math>X</math> and <math>Y</math> is defined as the entropy of the joint distribution of <math>X</math> and <math>Y</math>:

<math>H(X, Y) = \mathbb{E}_{X,Y} [-\log p(x,y)] = - \sum_{x, y} p(x, y) \log p(x, y) \,</math>

If <math>X</math> and <math>Y</math> are independent, then the joint entropy is simply the sum of their individual entropies.

(Note: The joint entropy is not to be confused with the cross entropy, despite similar notation.)

Conditional entropy (equivocation)

Given a particular value of a random variable <math>Y</math>, the conditional entropy of <math>X</math> given <math>Y=y</math> is defined as:

<math> H(X|y) = \mathbb{E}_{X} [-\log p(x|y)] = -\sum_{x \in X} p(x|y) \log p(x|y)</math>

where <math>p(x|y) = \frac{p(x,y)}{p(y)}</math> is the conditional probability of <math>x</math> given <math>y</math>.

The conditional entropy of <math>X</math> given <math>Y</math>, also called the equivocation of <math>X</math> about <math>Y</math>, is then given by:

<math> H(X|Y) = \mathbb E_Y \{H(X|y)\} = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = \sum_{x,y} p(x,y) \log \frac{p(y)}{p(x,y)}.</math>

A basic property of the conditional entropy is that:

<math> H(X|Y) = H(X,Y) - H(Y) .\,</math>
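This chain rule can be checked numerically; the joint distribution below is an arbitrary example chosen only for illustration.

 import math

 def H(probs):
     """Entropy in bits of a finite distribution given as a collection of probabilities."""
     return -sum(p * math.log2(p) for p in probs if p > 0)

 # An illustrative joint distribution p(x, y) over X, Y in {0, 1}.
 p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
 p_y = {y: sum(p for (_, y2), p in p_xy.items() if y2 == y) for y in (0, 1)}

 H_XY = H(p_xy.values())              # joint entropy H(X, Y)
 H_Y = H(p_y.values())                # marginal entropy H(Y)

 # H(X|Y) computed directly from the definition ...
 H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items())
 # ... agrees with the chain rule H(X|Y) = H(X, Y) - H(Y).
 print(H_X_given_Y, H_XY - H_Y)       # both ~0.722 bits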

Mutual information (transinformation)

It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The transinformation of <math>X</math> relative to <math>Y</math> (which represents conceptually the amount of information about <math>X</math> that can be gained by observing <math>Y</math>) is given by:

<math>I(X;Y) = \sum_{x,y} p(y)\, p(x|y) \log \frac{p(x|y)}{p(x)} = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}.</math>

A basic property of the transinformation is that:

<math>I(X;Y) = H(X) - H(X|Y)\,</math>

Mutual information is symmetric:

<math>I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y),\,</math>

Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution, and to Pearson's χ² test: mutual information can be considered a statistic for assessing independence between a pair of variables, and it has a well-specified asymptotic distribution. Mutual information can also be expressed as the Kullback-Leibler divergence between the actual joint distribution and the product of the marginal distributions:

<math>I(X; Y) = D_{KL}\left(p(X,Y) \| p(X)p(Y)\right)\,</math>
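Continuing the arbitrary example distribution used for the conditional entropy above, the following sketch computes the mutual information both as a Kullback-Leibler divergence and through the entropy identity, and confirms that the two agree:

 import math

 def H(probs):
     return -sum(p * math.log2(p) for p in probs if p > 0)

 p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
 p_x = {x: sum(p for (x2, _), p in p_xy.items() if x2 == x) for x in (0, 1)}
 p_y = {y: sum(p for (_, y2), p in p_xy.items() if y2 == y) for y in (0, 1)}

 # I(X;Y) as the divergence of p(x, y) from the product p(x) p(y) ...
 I = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items() if p > 0)
 # ... and via I(X;Y) = H(X) + H(Y) - H(X, Y).
 print(I, H(p_x.values()) + H(p_y.values()) - H(p_xy.values()))   # both ~0.278 bits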

Continuous equivalents of entropy

See main article: Differential entropy.

Shannon information is appropriate for measuring uncertainty over a discrete space. Its basic measures have been extended by analogy to continuous spaces. The sums can be replaced with integrals and densities are used in place of probability mass functions. By analogy with the discrete case, entropy, joint entropy, conditional entropy, and mutual information can be defined as follows:

<math> h(X) = -\int_X f(x) \log f(x) \,dx </math>
<math> h(X,Y) = -\int_Y \int_X f(x,y) \log f(x,y) \,dx \,dy</math>
<math> h(X|y) = -\int_X f(x|y) \log f(x|y) \,dx </math>
<math> h(X|Y) = -\int_Y \int_X f(x,y) \log \frac{f(x,y)}{f(y)} \,dx \,dy</math>
<math> I(X;Y) = -\int_Y \int_X f(x,y) \log \frac{f(x,y)}{f(x)f(y)} \,dx \,dy </math>

where <math>f(x,y)</math> is the joint density function, <math>f(x)</math> and <math>f(y)</math> are the marginal distributions, and <math>f(x|y)</math> is the conditional distribution.
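As a rough numerical check (the standard deviation below is an arbitrary choice), the differential entropy of a Gaussian estimated by direct numerical integration agrees with the well-known closed form <math>\tfrac{1}{2}\log_2 (2 \pi e \sigma^2)</math>:

 import math

 sigma = 2.0   # arbitrary standard deviation for this sketch

 def f(x):
     """Density of a zero-mean Gaussian with standard deviation sigma."""
     return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

 # Riemann-sum estimate of h(X) = -integral of f(x) log2 f(x) dx over a wide range.
 dx = 0.001
 xs = [-10 * sigma + i * dx for i in range(int(20 * sigma / dx))]
 h_numeric = -sum(f(x) * math.log2(f(x)) * dx for x in xs)
 h_closed = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

 print(h_numeric, h_closed)   # both ~3.05 bits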

Channel capacity

Let us return to our consideration of the communication process over a discrete channel. A simple model of the process is helpful here:

                        o---------o
                        |  Noise  |
                        o---------o
                             |
                             V
o-------------o    X    o---------o    Y    o----------o
| Transmitter |-------->| Channel |-------->| Receiver |
o-------------o         o---------o         o----------o

Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let <math>p(y|x)</math> be the conditional probability distribution function of Y given X. We will consider <math>p(y|x)</math> to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of <math>f(x)</math>, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the amount of information, or the signal, we can communicate over the channel. The appropriate measure for this is the transinformation, and this maximum transinformation is called the channel capacity and is given by:

<math> C = \max_f I(X;Y).\, </math>
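For the binary symmetric channel this maximization can be carried out directly. The sketch below assumes a crossover probability of 0.1, finds the capacity by a simple grid search over input distributions, and compares it with the known closed form 1 - H2(f), which is attained by a uniform input:

 import math

 def H2(p):
     """Binary entropy function in bits."""
     return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

 def I_bsc(q, f):
     """I(X;Y) for a binary symmetric channel with crossover probability f
     when the input X equals 1 with probability q."""
     p_y1 = q * (1 - f) + (1 - q) * f    # P(Y = 1)
     return H2(p_y1) - H2(f)             # I(X;Y) = H(Y) - H(Y|X)

 f = 0.1   # assumed crossover probability for this sketch
 capacity = max(I_bsc(q / 1000, f) for q in range(1001))   # grid search over inputs
 print(capacity, 1 - H2(f))   # both ~0.531 bits per channel use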

Source theory

Any process that generates successive messages can be considered a source of information. Sources can be classified in order of increasing generality as memoryless, ergodic, stationary, and stochastic (with each class strictly containing the previous one). The term "memoryless" as used here has a slightly different meaning than it normally does in probability theory. Here a memoryless source is defined as one that generates successive messages independently of one another and with a fixed probability distribution. However, the position of the first occurrence of a particular message or symbol in a sequence generated by a memoryless source is actually a memoryless random variable. The other terms have fairly standard definitions and are well studied in their own right outside information theory.

Rate

The rate of a source of information is (in the most general case) <math>r=\mathbb E H(M_t|M_{t-1},M_{t-2},M_{t-3}, \cdots)</math>, the expected, or average, conditional entropy per message (i.e. per unit time) given all the previous messages generated. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply <math>H(M)</math>, since by definition there is no interdependence of the successive messages of a memoryless source. The rate of a source of information is related to its redundancy and how well it can be compressed.
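For a source with memory the rate is lower than the entropy of its marginal distribution, because past symbols help predict the next one. The sketch below uses an assumed two-state Markov source and the standard formula for the entropy rate of a stationary Markov chain: the stationary-probability-weighted average of the per-state conditional entropies.

 import math

 def H(probs):
     return -sum(p * math.log2(p) for p in probs if p > 0)

 # Assumed toy transition matrix: P[i][j] = probability that symbol j follows symbol i.
 P = [[0.9, 0.1],
      [0.4, 0.6]]

 # Stationary distribution of a two-state chain, proportional to (P[1][0], P[0][1]).
 pi0 = P[1][0] / (P[0][1] + P[1][0])
 pi = [pi0, 1 - pi0]

 rate = sum(pi[i] * H(P[i]) for i in range(2))   # entropy rate of the source
 print(rate)    # ~0.569 bits per symbol
 print(H(pi))   # ~0.722 bits: the rate of a memoryless source with the same marginal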

Fundamental theorem

See main article: Noisy channel coding theorem.
Statement (noisy-channel coding theorem)
1. For every discrete memoryless channel, the channel capacity
<math>C = \max_{P_X} \,I(X;Y)</math>
has the following property. For any ε > 0 and R < C, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε.
2. If a probability of bit error pb is acceptable, rates up to R(pb) are achievable, where
<math>R(p_b) = \frac{C}{1-H_2(p_b)} .</math>
3. For any pb, rates greater than R(pb) are not achievable.

(MacKay (2003), p. 162; cf. Gallager (1968), ch. 5; Cover and Thomas (1991), p. 198; Shannon (1948), thm. 11)
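As a small numerical illustration of statement 2, taking C ≈ 0.531 bits from the binary symmetric channel example above as an assumed capacity, the achievable rate rises as a larger bit-error probability is tolerated:

 import math

 def H2(p):
     return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

 def R(C, p_b):
     """Maximal achievable rate when a bit-error probability p_b is acceptable."""
     return C / (1 - H2(p_b))

 C = 0.531   # e.g. the capacity of a binary symmetric channel with f = 0.1
 print(R(C, 0.0), R(C, 0.01), R(C, 0.1))   # ~0.531, ~0.578, ~1.0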

Channel capacity of particular model channels

Related concepts

Measure theory

If to arbitrary discrete random variables X and Y we associate the existence of sets <math>\tilde X</math> and <math>\tilde Y</math>, somehow representing the information borne by X and Y, respectively, such that:

  • <math>\mu(\tilde X \cap \tilde Y) = 0 </math> whenever X and Y are independent, and
  • <math>\tilde X = \tilde Y</math> whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);

where <math>\mu</math> is a measure over these sets, and we set:

<math>H(X) = \mu(\tilde X),</math>
<math>H(Y) = \mu(\tilde Y),</math>
<math>H(X,Y) = \mu(\tilde X \cup \tilde Y),</math>
<math>H(X|Y) = \mu(\tilde X \,\backslash\, \tilde Y),</math>
<math>I(X;Y) = \mu(\tilde X \cap \tilde Y);</math>

we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal measure over sets. This can be a handy mnemonic device in some situations. Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106-108 for an informal but rather complete discussion.) Namely <math>H(X,Y,Z,\cdots)</math> needs to be defined in the obvious way as the entropy of a joint distribution, and an extended transinformation <math>I(X;Y;Z;\cdots)</math> defined in a suitable manner (left as an exercise for the ambitious reader) so that we can set:

<math>H(X,Y,Z,\cdots) = \mu(\tilde X \cup \tilde Y \cup \tilde Z \cup \cdots),</math>
<math>I(X;Y;Z;\cdots) = \mu(\tilde X \cap \tilde Y \cap \tilde Z \cap \cdots);</math>

in order to define the (signed) measure over the whole σ-algebra. (It is interesting to note that the mutual information of three or more random variables can be negative as well as positive: Let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then <math>I(X;Y;Z) = - 1</math> bit.)
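The exclusive-or example can be verified numerically. The sketch below evaluates the triple mutual information through the inclusion-exclusion expression implied by the signed measure, I(X;Y;Z) = H(X) + H(Y) + H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z):

 import math
 from itertools import product

 def H(probs):
     return -sum(p * math.log2(p) for p in probs if p > 0)

 # X and Y are independent fair coin flips; Z is their exclusive or.
 joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

 def marginal_entropy(indices):
     """Entropy of the marginal distribution over the given coordinates."""
     m = {}
     for outcome, p in joint.items():
         key = tuple(outcome[i] for i in indices)
         m[key] = m.get(key, 0.0) + p
     return H(m.values())

 I_XYZ = (marginal_entropy([0]) + marginal_entropy([1]) + marginal_entropy([2])
          - marginal_entropy([0, 1]) - marginal_entropy([0, 2]) - marginal_entropy([1, 2])
          + marginal_entropy([0, 1, 2]))
 print(I_XYZ)   # -1.0 bit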

This connection is important for two reasons: first, it reiterates and clarifies the fundamental properties of these basic concepts of information theory, and second, it justifies, in a certain formal sense, the practice of calling Shannon's entropy a "measure" of information.

Applications

Coding theory

Coding theory is the most important and direct application of information theory. It can be subdivided into data compression theory and error correction theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data. There are two formulations for the compression problem — in lossless data compression the data must be reconstructed exactly, whereas lossy data compression examines how many bits are needed to reconstruct the data to within a specified fidelity level. This fidelity level is measured by a function called a distortion function. In information theory this is called rate distortion theory. Both lossless and lossy source codes produce bits at the output which can be used as the inputs to the channel codes mentioned above.

The idea is to first compress the data, i.e. remove as much of its redundancy as possible, and then add just the right kind of redundancy (i.e. error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
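A toy sketch of this two-stage idea follows: a general-purpose compressor removes redundancy, and a deliberately crude five-fold repetition code adds it back in controlled form before the bits cross a simulated binary symmetric channel. The 0.01 crossover probability and the repetition code are assumptions of the sketch; repetition codes are far from the limit promised by the noisy-channel coding theorem, and practical systems use much more efficient error-correcting codes.

 import random
 import zlib

 message = b"the quick brown fox jumps over the lazy dog " * 20
 compressed = zlib.compress(message)               # source coding: remove redundancy

 bits = [(byte >> i) & 1 for byte in compressed for i in range(8)]
 encoded = [b for b in bits for _ in range(5)]                # channel coding: repeat each bit
 noisy = [b ^ (random.random() < 0.01) for b in encoded]      # binary symmetric channel

 # Decode by majority vote and reassemble the compressed bytes.
 decoded = [int(sum(noisy[5 * i:5 * i + 5]) >= 3) for i in range(len(bits))]
 recovered = bytes(sum(decoded[8 * j + i] << i for i in range(8))
                   for j in range(len(compressed)))

 print(len(message), len(compressed), len(encoded) // 8)      # sizes at each stage
 print(recovered == compressed and zlib.decompress(recovered) == message)  # almost always True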

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source-channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.

Cryptography and Cryptanalysis

Information theoretic concepts are widely used in making and breaking cryptographic ciphers. For an interesting historical example, see the article on deciban. Shannon himself defined an important concept called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
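As a rough numerical illustration of Shannon's estimate U ≈ H(K)/D for a simple substitution cipher on English plaintext, where D is the per-letter redundancy (the figure of about 1.5 bits per letter for the entropy of English is an assumed ballpark value, not a measured one):

 import math

 # Unicity distance of a simple substitution cipher on English text.
 key_entropy = math.log2(math.factorial(26))   # ~88.4 bits in the choice of key
 redundancy = math.log2(26) - 1.5              # ~3.2 bits of redundancy per letter (assumed)
 print(key_entropy / redundancy)               # ~28 letters of ciphertext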

Gambling

Information theory is also important in gambling and (with some ethical reservations) investing. An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly). The so-called equation of ill-gotten gains can be expressed in logarithmic form as

<math> \mathbb E \log K_t = \log K_0 + \sum_{i=1}^t H_i </math>

for an optimal betting strategy, where <math>K_0</math> is the initial capital, <math>K_t</math> is the capital after the tth bet, and <math>H_i</math> is the amount of side information obtained concerning the ith bet (in particular, the mutual information relative to the outcome of each bettable event). This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin.

This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce).
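A minimal simulation of this relation, under assumed toy conditions (even-money bets on a fair coin, helped by a tip that is correct 90% of the time), shows the average log-growth of the capital approaching the mutual information between the tip and the outcome:

 import math
 import random

 def H2(p):
     return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

 p = 0.9                 # assumed probability that the tip names the winning side
 fraction = 2 * p - 1    # log-optimal (Kelly) stake on the tipped side at even money

 log2_capital, n_bets = 0.0, 100_000
 for _ in range(n_bets):
     if random.random() < p:                     # the tip was right: win the stake
         log2_capital += math.log2(1 + fraction)
     else:                                       # the tip was wrong: lose the stake
         log2_capital += math.log2(1 - fraction)

 print(log2_capital / n_bets)   # average log-growth per bet, ~0.53 bits
 print(1 - H2(p))               # the side information per bet, also ~0.53 bits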

Intelligence

Shannon's theory of information is extremely important in intelligence work, much more so than its use in cryptography would indicate. The theory is applied by intelligence agencies to keep classified information secret, and to discover as much information as possible about an adversary. The fundamental theorem leads us to believe it is much more difficult to keep secrets than it might first appear. In general it is not possible to stop the leakage of classified information, only to slow it. Furthermore, the more people that have access to the information, and the more those people have to work with and belabor that information, the greater the redundancy of that information becomes. It is extremely hard to contain the flow of information that has such a high redundancy. This inevitable leakage of classified information is due to the psychological fact that what people know does influence their behavior somewhat, however subtle that influence might be.

The premier example of the application of information theory to covert signaling is the design of the Global Positioning System signal encoding. The system uses a pseudorandom encoding that places the radio signal below the noise floor. Thus, an unsuspecting radio listener would not even be aware that there was a signal present, as it would be drowned out by atmospheric and antenna noise. However, if one integrates the signal over long periods of time, using the "secret" (but known to the listener) pseudorandom sequence, one can eventually detect a signal, and then discern modulations of that signal. In GPS, the C/A signal has been publicly disclosed to be a 1023-bit sequence, but the pseudorandom sequence used in the P(Y) signal remains a secret. The same technique can be used to transmit and receive covert intelligence from short-range, extremely low power systems, without the enemy even being aware of the existence of a radio signal. This is analogous to steganography.

Music

Composer James Tenney, among others such as his teacher Lejaren Hiller, has used information theory in the composition of musical works such as Ergodos.

References

The classic paper

  • C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, July and October 1948

Other journal articles

  • R.V.L. Hartley, "Transmission of Information," Bell System Technical Journal, July 1928
  • J. L. Kelly, Jr., "New Interpretation of Information Rate," Bell System Technical Journal, Vol. 35, July 1956, pp. 917-26
  • R. Landauer, "Information is Physical", Proc. Workshop on Physics and Computation PhysComp '92 (IEEE Comp. Sci. Press, Los Alamitos, 1993) pp. 1-4.
  • R. Landauer, "Irreversibility and Heat Generation in the Computing Process" IBM J. Res. Develop. Vol. 5, No. 3, 1961

Textbooks on information theory

  • Thomas M. Cover and Joy A. Thomas, Elements of Information Theory, New York: Wiley, 1991
  • Robert G. Gallager, Information Theory and Reliable Communication, New York: Wiley, 1968
  • David J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge: Cambridge University Press, 2003
  • Fazlollah M. Reza, An Introduction to Information Theory, New York: McGraw-Hill, 1961 (reprinted by Dover, 1994)

Other books

  • James Bamford, The Puzzle Palace, Penguin Books, 1983. ISBN 0140067485
  • Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. ISBN 0486439186
  • W. B. Johnson and J. Lindenstrauss, editors, Handbook of the Geometry of Banach Spaces, Vol. 1. Amsterdam: Elsevier 2001. ISBN 0444828427
  • A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. ISBN 0486604349
  • H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, NJ (1990). ISBN 069108727X

See also

Applications

History

Theory


Concepts


External links
