Beta distribution

{{Probability distribution|
 name       =Beta|
 pdf        =<math>\frac{x^{\alpha-1}(1-x)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}\!</math>|
 cdf        =<math>I_x(\alpha,\beta)\!</math>|
 mean       =<math>\frac{\alpha}{\alpha+\beta}\!</math>|
 median     =|
 mode       =<math>\frac{\alpha-1}{\alpha+\beta-2}\!</math> for <math>\alpha>1, \beta>1</math>|
 variance   =<math>\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}\!</math>|
 skewness   =<math>\frac{2\,(\beta-\alpha)\sqrt{\alpha+\beta+1}}{(\alpha+\beta+2)\sqrt{\alpha\beta}}</math>|
 kurtosis   =see text|
 entropy    =|
 mgf        =<math>1+\sum_{k=1}^{\infty} \left( \prod_{r=0}^{k-1} \frac{\alpha+r}{\alpha+\beta+r} \right) \frac{t^k}{k!}</math>|
 char       =<math>{}_1F_1(\alpha; \alpha+\beta; i\,t)\!</math>|

}}

In probability theory and statistics, the beta distribution is a continuous probability distribution with the probability density function (pdf) defined on the interval [0, 1]:

<math> f(x;\alpha,\beta) = \frac{1}{\mathrm{B}(\alpha,\beta)} x^{\alpha-1}(1-x)^{\beta-1}</math>

where α and β are parameters that must be greater than zero and B is the beta function.

The beta function B serves as a normalization constant, ensuring that the pdf integrates to unity:

<math> f(x;\alpha,\beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{\int_0^1 u^{\alpha-1} (1-u)^{\beta-1}\, du} \!</math>
<math>= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}\!</math>
<math>= \frac{1}{\mathrm{B}(\alpha,\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}\!</math>

where Γ is the gamma function.
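The density is straightforward to evaluate numerically. The following minimal sketch (illustrative only; Python with SciPy is assumed) computes the pdf directly from the gamma-function form above and checks it against scipy.stats.beta:

<source lang="python">
# Minimal illustrative sketch: evaluate the Beta(a, b) density directly from
# the gamma-function form above and compare with SciPy's implementation.
from math import gamma

from scipy.stats import beta as beta_dist  # SciPy is an assumed dependency

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x, using Gamma(a+b)/(Gamma(a)Gamma(b)) as 1/B(a, b)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

a, b, x = 2.0, 5.0, 0.3
print(beta_pdf(x, a, b))        # direct evaluation of the formula (about 2.1609)
print(beta_dist.pdf(x, a, b))   # the same value from scipy.stats.beta
</source>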

The special case of the beta distribution when α = 1 and β = 1 is the standard uniform distribution.

The expected value and variance of a beta random variable X with parameters α and β are given by the formulae:

<math> \operatorname{E}(X) = \frac{\alpha}{\alpha+\beta} </math>
<math> \operatorname{var}(X) = \frac{\alpha \beta}{(\alpha+\beta)^2(\alpha+\beta+1)}</math>

The kurtosis excess is:

<math>6\,\frac{\alpha^3-\alpha^2(2\beta-1)+\beta^2(\beta+1)-2\alpha\beta(\beta+2)}{\alpha \beta (\alpha+\beta+2) (\alpha+\beta+3)}\!</math>
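As a quick numerical sanity check (illustrative, assuming SciPy), the closed-form mean, variance, skewness and kurtosis excess can be compared with the values reported by scipy.stats.beta.stats:

<source lang="python">
# Illustrative check of the closed-form moments against scipy.stats.beta.stats,
# which returns (mean, variance, skewness, kurtosis excess) for moments='mvsk'.
from math import sqrt

from scipy.stats import beta as beta_dist

a, b = 2.0, 5.0
mean = a / (a + b)
var = a * b / ((a + b)**2 * (a + b + 1))
skew = 2 * (b - a) * sqrt(a + b + 1) / ((a + b + 2) * sqrt(a * b))
kurt_excess = (6 * (a**3 - a**2*(2*b - 1) + b**2*(b + 1) - 2*a*b*(b + 2))
               / (a * b * (a + b + 2) * (a + b + 3)))

print(mean, var, skew, kurt_excess)
print(beta_dist.stats(a, b, moments='mvsk'))  # should agree with the line above
</source>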

Parameter estimation

Using the method of moments, when the expected value and variance of a beta random variable X are known, the parameters α and β can be recovered from the formulae:

<math>\alpha = \operatorname{E}(X) \left( \frac{\operatorname{E}(X) (1 - \operatorname{E}(X))}{\operatorname{var}(X)} - 1 \right),</math>

<math>\beta = (1-\operatorname{E}(X)) \left( \frac{\operatorname{E}(X) (1 - \operatorname{E}(X))}{\operatorname{var}(X)} - 1 \right).</math>

For any two numbers u, v such that 0 < u < 1 and 0 < v < u(1 − u) there is a beta distribution having expected value E(X) = u and variance var(X) = v.
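A minimal sketch of these moment-matching formulae, assuming Python with NumPy (the parameter values and sample size below are arbitrary):

<source lang="python">
# Illustrative method-of-moments estimator: recover (alpha, beta) from the
# sample mean and variance of simulated Beta(2, 5) data.
import numpy as np

def beta_method_of_moments(samples):
    m, v = np.mean(samples), np.var(samples)
    common = m * (1 - m) / v - 1          # shared factor E(X)(1 - E(X))/var(X) - 1
    return m * common, (1 - m) * common   # (alpha_hat, beta_hat)

rng = np.random.default_rng(0)
data = rng.beta(2.0, 5.0, size=100_000)
print(beta_method_of_moments(data))       # should be close to the true (2, 5)
</source>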

Cumulative distribution function

The cumulative distribution function is

<math>F(x;\alpha,\beta) = \frac{\mathrm{B}_x(\alpha,\beta)}{\mathrm{B}(\alpha,\beta)} = I_x(\alpha,\beta) \!</math>

where <math>\mathrm{B}_x(\alpha,\beta)</math> is the incomplete beta function and <math>I_x(\alpha,\beta)</math> is the regularized incomplete beta function.
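Numerically, the regularized incomplete beta function is available directly in SciPy; the following sketch (illustrative, SciPy assumed) evaluates the cdf both ways:

<source lang="python">
# Illustrative cdf evaluation: scipy.special.betainc is the regularized
# incomplete beta function I_x(a, b), which equals the Beta(a, b) cdf.
from scipy.special import betainc
from scipy.stats import beta as beta_dist

a, b, x = 2.0, 5.0, 0.3
print(betainc(a, b, x))        # I_x(a, b)
print(beta_dist.cdf(x, a, b))  # the same value from the distribution object
</source>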

Shapes

The beta density function can take on different shapes depending on the values of the two parameters:

  • <math>\alpha < 1,\ \beta < 1</math> is U-shaped (red plot)
  • <math>\alpha < 1,\ \beta \geq 1</math> or <math>\alpha = 1,\ \beta > 1</math> is strictly decreasing (blue plot)
    • <math>\alpha = 1,\ \beta > 2</math> is strictly convex
    • <math>\alpha = 1,\ \beta = 2</math> is a straight line
    • <math>\alpha = 1,\ 1 < \beta < 2</math> is strictly concave
  • <math>\alpha = \beta = 1</math> is the uniform distribution
  • <math>\alpha = 1,\ \beta < 1</math> or <math>\alpha > 1,\ \beta \leq 1</math> is strictly increasing (green plot)
    • <math>\alpha > 2,\ \beta = 1</math> is strictly convex
    • <math>\alpha = 2,\ \beta = 1</math> is a straight line
    • <math>1 < \alpha < 2,\ \beta = 1</math> is strictly concave
  • <math>\alpha > 1,\ \beta > 1</math> is unimodal (purple & black plots)

Moreover, if <math>\alpha = \beta</math> then the density function is symmetric about 1/2 (red & purple plots).
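A short plotting sketch of these cases (illustrative only; one representative parameter pair is chosen here for each case, and Matplotlib and SciPy are assumed):

<source lang="python">
# Illustrative plot of the shape cases listed above; one representative
# (alpha, beta) pair is chosen for each case.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta as beta_dist

x = np.linspace(0.01, 0.99, 200)
cases = [
    ("U-shaped",            0.5, 0.5),
    ("strictly decreasing", 1.0, 3.0),
    ("uniform",             1.0, 1.0),
    ("strictly increasing", 3.0, 1.0),
    ("unimodal, symmetric", 2.0, 2.0),
]
for label, a, b in cases:
    plt.plot(x, beta_dist.pdf(x, a, b), label=f"{label}: Beta({a}, {b})")
plt.legend()
plt.show()
</source>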

Related distributions

  • If <math>X \sim \mathrm{Beta}(\alpha = 1, \beta = 1)</math>, then <math>X \sim \mathrm{Uniform}(0,1)</math>; that is, Beta(1, 1) is the standard uniform distribution.
  • If <math>X \sim \mathrm{Gamma}(\alpha,1)</math> and <math>Y \sim \mathrm{Gamma}(\beta,1)</math> are independent gamma variates, then <math>X/(X+Y) \sim \mathrm{Beta}(\alpha,\beta)</math> (a simulation check of this construction appears after this list).
  • If <math>X \sim \mathrm{Beta}(\alpha,\beta)</math> is a beta variate and <math>Y \sim \mathrm{F}(2\beta,2\alpha)</math> is an independent F variate, then <math>\Pr[X \leq \alpha/(\alpha+x\,\beta)] = \Pr[Y > x]</math> for all <math>x>0</math>.
  • The Dirichlet distribution is the multivariate generalization of the beta distribution.
  • The Kumaraswamy distribution resembles the beta distribution.
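The gamma-ratio construction above can be checked by simulation; the sketch below (illustrative, assuming NumPy and SciPy) compares the simulated ratio against Beta(α, β) with a Kolmogorov–Smirnov test:

<source lang="python">
# Illustrative simulation check: if X ~ Gamma(alpha, 1) and Y ~ Gamma(beta, 1)
# are independent, then X / (X + Y) should be distributed as Beta(alpha, beta).
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
alpha, beta = 2.0, 5.0
x = rng.gamma(alpha, 1.0, size=100_000)
y = rng.gamma(beta, 1.0, size=100_000)
ratio = x / (x + y)
print(kstest(ratio, 'beta', args=(alpha, beta)))  # a large p-value is expected
</source>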

Applications

Beta distributions are used extensively in Bayesian statistics, since the beta distribution is the conjugate prior distribution to the binomial distribution.
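As a small illustration of this conjugacy (the prior and data below are hypothetical, and SciPy is assumed): a Beta(α, β) prior on a binomial success probability combined with k observed successes in n trials yields a Beta(α + k, β + n − k) posterior.

<source lang="python">
# Illustrative beta-binomial conjugate update: Beta(a, b) prior, k successes
# observed in n trials, Beta(a + k, b + n - k) posterior.
from scipy.stats import beta as beta_dist

a, b = 1.0, 1.0                 # Beta(1, 1) prior, i.e. uniform on [0, 1]
k, n = 7, 10                    # hypothetical data: 7 successes in 10 trials
posterior = beta_dist(a + k, b + n - k)
print(posterior.mean())         # posterior mean (a + k)/(a + b + n) = 8/12
print(posterior.interval(0.95)) # central 95% credible interval
</source>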

The beta distribution can be used to model events that are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution, along with the triangular distribution, is used extensively in PERT, CPM and other project management and control systems to describe the time to completion of a task.
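For example, a beta variable can be rescaled to such an interval; the sketch below (illustrative; the shape parameters and time limits are made up, not a prescribed PERT recipe, and SciPy is assumed) uses SciPy's loc/scale arguments:

<source lang="python">
# Illustrative rescaling of a beta variable to a task-duration interval
# [t_min, t_max] using scipy's loc/scale parameters.
from scipy.stats import beta as beta_dist

t_min, t_max = 2.0, 10.0   # hypothetical minimum and maximum completion times
duration = beta_dist(2.0, 5.0, loc=t_min, scale=t_max - t_min)
print(duration.mean())           # expected completion time
print(duration.interval(0.9))    # central 90% interval for the duration
</source>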
