Conjugate prior

In Bayesian probability theory, a conjugate prior is a family of prior probability distributions with the property that, for a given likelihood function, the posterior probability distribution also belongs to the same family. The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory.[1] A similar concept had been discovered independently by George Alfred Barnard.[2]

Consider the general problem of inferring a distribution for a parameter θ given some datum or data x. From Bayes' theorem, the posterior distribution is calculated from the prior p(θ) and the likelihood function <math>\theta \mapsto p(x\mid\theta)\!</math> as

<math> p(\theta|x) = \frac{p(x|\theta) \, p(\theta)}
 {\int p(x|\theta) \, p(\theta) \, d\theta}. \!</math>

Consider the likelihood function to be fixed; it is usually well-determined from a statement of the data-generating process. Different choices of the prior distribution p(θ) may make the integral more or less difficult to calculate, and the product p(x|θ) × p(θ) may take one algebraic form or another. For certain choices of the prior, the posterior has the same algebraic form as the prior (generally with different parameter values). Such a choice is a conjugate prior.

A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise a difficult numerical integration may be necessary.
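As an illustration of the contrast (a minimal sketch in Python; the Beta(2, 3) prior and the 6-successes-in-10-trials data are invented for this example, and anticipate the beta–Bernoulli case worked out below), the posterior can be computed either by brute-force numerical normalisation or, because the prior is conjugate, in closed form:

 import numpy as np
 from scipy import stats
 
 # Invented example: Beta(2, 3) prior on a Bernoulli parameter, 6 successes and 4 failures observed.
 alpha, beta_, s, f = 2.0, 3.0, 6, 4
 
 # Numerical route: evaluate prior x likelihood on a grid and normalise by a Riemann sum.
 theta = np.linspace(1e-6, 1 - 1e-6, 100_000)
 dx = theta[1] - theta[0]
 unnormalised = stats.beta.pdf(theta, alpha, beta_) * theta**s * (1 - theta)**f
 posterior_numeric = unnormalised / (unnormalised.sum() * dx)
 
 # Conjugate route: the posterior is simply Beta(alpha + s, beta + f).
 posterior_exact = stats.beta.pdf(theta, alpha + s, beta_ + f)
 
 print(np.max(np.abs(posterior_numeric - posterior_exact)))  # ~0, up to grid error

The two routes agree up to discretisation error; the conjugate route skips the integral entirely.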

Conjugate priors are known for many standard problems; see Gelman et al.[3] for a catalog.

All members of the exponential family have conjugate priors.
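In outline (a standard result, included here for completeness; the parameterisation below is one common convention rather than a quotation from a source): if the likelihood can be written in exponential-family form

<math>p(x\mid\theta) = h(x)\, g(\theta) \exp\!\left(\eta(\theta)^\top T(x)\right),\!</math>

then the family of priors

<math>p(\theta\mid\chi,\nu) \propto g(\theta)^{\nu} \exp\!\left(\eta(\theta)^\top \chi\right)\!</math>

is conjugate to it: multiplying prior by likelihood leaves the posterior in the same form, with hyperparameters updated to <math>\chi + T(x)</math> and <math>\nu + 1</math>.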

Example

For a random variable which is a Bernoulli trial with unknown probability of success q in [0,1], the usual conjugate prior is the beta distribution, with density

<math>p(q=x) = {x^{\alpha-1}(1-x)^{\beta-1} \over \Beta(\alpha,\beta)}</math>

where <math>\alpha</math> and <math>\beta</math> are chosen to reflect any existing belief or information (<math>\alpha = 1</math> and <math>\beta = 1</math> would give a uniform distribution) and <math>\Beta(\alpha,\beta)</math> is the beta function acting as a normalising constant.
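As a quick numerical check (a sketch with invented hyperparameters α = 2.5, β = 4.0), the beta function indeed equals the integral of the unnormalised kernel, which is exactly what makes the density integrate to one:

 from scipy.integrate import quad
 from scipy.special import beta as B
 
 a, b = 2.5, 4.0  # invented hyperparameters for illustration
 
 # B(a, b) equals the integral of x^(a-1) (1-x)^(b-1) over [0, 1].
 kernel_integral, _ = quad(lambda x: x**(a - 1) * (1 - x)**(b - 1), 0, 1)
 print(kernel_integral, B(a, b))  # the two values agree
 
 # With a = b = 1 the kernel is constant, so the prior is uniform on [0, 1].
 print(quad(lambda x: x**0 * (1 - x)**0, 0, 1)[0])  # 1.0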

If we then sample this random variable and get s successes and f failures, we have

<math>P(s,f|q=x) = {s+f \choose s} x^s(1-x)^f, </math>
<math>p(q=x|s,f) = {{{s+f \choose s} x^{s+\alpha-1}(1-x)^{f+\beta-1} / \Beta(\alpha,\beta)} \over \int_{y=0}^1 \left({s+f \choose s} y^{s+\alpha-1}(1-y)^{f+\beta-1} / \Beta(\alpha,\beta)\right) dy} = {x^{s+\alpha-1}(1-x)^{f+\beta-1} \over \Beta(s+\alpha,f+\beta)} , </math>

which is another beta distribution, with parameters updated from (<math>\alpha</math>, <math>\beta</math>) to (<math>\alpha + s</math>, <math>\beta + f</math>). This posterior distribution could then be used as the prior for further samples, the hyperparameters simply accumulating each extra piece of information as it arrives.
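To see the update rule in code (a minimal sketch; the true success probability 0.7 and the run of 100 trials are invented for illustration), each observation just increments one of the two hyperparameters, so the posterior after any number of samples is available without any integration:

 import numpy as np
 
 rng = np.random.default_rng(0)
 q_true = 0.7             # invented "unknown" success probability, used only to simulate data
 alpha, beta_ = 1.0, 1.0  # uniform prior: alpha = beta = 1
 
 # Sequential conjugate updating: yesterday's posterior is today's prior.
 for success in rng.random(100) < q_true:
     if success:
         alpha += 1   # one more success
     else:
         beta_ += 1   # one more failure
 
 # The posterior is Beta(alpha, beta); its mean should be near q_true.
 print(alpha, beta_, alpha / (alpha + beta_))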

Notes

  1. Howard Raiffa and Robert Schlaifer. Applied Statistical Decision Theory. Division of Research, Graduate School of Business Administration, Harvard University, 1961.
  2. Jeff Miller et al. Earliest Known Uses of Some of the Words of Mathematics, "conjugate prior distributions". Electronic document, revision of November 13, 2005, retrieved December 2, 2005.
  3. Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis, 2nd edition. CRC Press, 2003. ISBN 158488388X.