Gamma distribution
(Infobox summary: excess kurtosis <math>\frac{6}{k}</math>; moment-generating function <math>(1 - \theta\,t)^{-k}</math> for <math>t < 1/\theta</math>; characteristic function <math>(1 - \theta\,i\,t)^{-k}</math>; the entropy is given under Properties below.)
In probability theory and statistics, the gamma distribution is a continuous probability distribution. For integer values of the shape parameter k it is also known as the Erlang distribution.
Probability density function
The probability density function of the gamma distribution can be expressed in terms of the gamma function:
- <math> f(x;k,\theta) = x^{k-1} \frac{e^{-x/\theta}}{\theta^k \, \Gamma(k)}
\ \mathrm{for}\ x > 0 \,\!</math>
where <math>k > 0</math> is the shape parameter and <math>\theta > 0</math> is the scale parameter of the gamma distribution. (Note: this is the parameterization used in the infobox summary above.)
Alternatively, the gamma distribution can be parameterized in terms of a shape parameter <math>\alpha = k</math> and an inverse scale parameter <math>\beta = 1/\theta</math>, called a rate parameter:
- <math> g(x;\alpha,\beta) = x^{\alpha-1} \frac{\beta^{\alpha} \, e^{-\beta\,x} }{\Gamma(\alpha)} \ \mathrm{for}\ x > 0 \,\!</math>
Both parameterizations are common: the shape–scale form arises naturally in waiting-time problems, while the shape–rate form is often more convenient, for example in Bayesian statistics.
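As a quick check, the two forms give the same density when <math>\alpha = k</math> and <math>\beta = 1/\theta</math>. The following minimal Python sketch (the function names are ours, used only for illustration) evaluates both:

```python
import math

def gamma_pdf_shape_scale(x, k, theta):
    """Gamma density f(x; k, theta) in the shape-scale parameterization."""
    return x ** (k - 1) * math.exp(-x / theta) / (theta ** k * math.gamma(k))

def gamma_pdf_shape_rate(x, alpha, beta):
    """Gamma density g(x; alpha, beta) in the shape-rate parameterization."""
    return x ** (alpha - 1) * beta ** alpha * math.exp(-beta * x) / math.gamma(alpha)

# With alpha = k and beta = 1/theta the two parameterizations agree.
k, theta = 2.5, 1.5
for x in (0.5, 1.0, 3.0):
    assert abs(gamma_pdf_shape_scale(x, k, theta)
               - gamma_pdf_shape_rate(x, k, 1.0 / theta)) < 1e-12
```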
Properties
The cumulative distribution function can be expressed in terms of the lower incomplete gamma function,
- <math> F(x;k,\theta) = \int_0^x f(u;k,\theta)\,du
= \frac{\gamma(k, x/\theta)}{\Gamma(k)} \,\!</math>
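Assuming SciPy is available, this is exactly what scipy.special.gammainc computes (the regularized form <math>\gamma(k, y)/\Gamma(k)</math>); a brief sketch, with arbitrary example values:

```python
from scipy.special import gammainc   # regularized lower incomplete gamma: gammainc(k, y) = gamma(k, y) / Gamma(k)
from scipy.stats import gamma

k, theta = 2.0, 3.0
x = 4.5
cdf_via_incomplete = gammainc(k, x / theta)               # F(x; k, theta)
print(cdf_via_incomplete, gamma.cdf(x, k, scale=theta))   # the two values agree
```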
The differential entropy is given by:
- <math>S=k+\ln(\theta)+\ln(\Gamma(k))+(1-k)\psi(k)\,</math>
where <math>\psi(k)</math> is the digamma function.
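As an illustration, and assuming SciPy is available, the closed form can be checked against SciPy's own entropy computation for a few arbitrary parameter values:

```python
import numpy as np
from scipy.special import gammaln, digamma
from scipy.stats import gamma

def gamma_entropy(k, theta):
    """Differential entropy of Gamma(k, theta): k + ln(theta) + ln Gamma(k) + (1 - k) psi(k)."""
    return k + np.log(theta) + gammaln(k) + (1.0 - k) * digamma(k)

k, theta = 3.0, 2.0
print(gamma_entropy(k, theta))              # closed form
print(gamma(k, scale=theta).entropy())      # SciPy's value; the two agree
```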
If <math>X_i \sim \mathrm{Gamma}(\alpha_i, \beta)</math> for <math>i=1, 2, \ldots, N</math> and <math>\bar{\alpha} = \sum_{i=1}^N \alpha_i</math>, then
- <math>\left[ Y = \sum_{i=1}^N X_i \right] \sim \mathrm{Gamma} \left( \bar{\alpha}, \beta \right)</math>
provided all <math>X_i</math> are independent. The gamma distribution exhibits infinite divisibility.
If <math>X \sim \operatorname{Gamma}(k, \theta)</math>, then <math>\frac X \theta \sim \operatorname{Gamma}(k, 1)</math>. More generally, for any <math>t > 0</math> it holds that <math>tX \sim \operatorname{Gamma} (k, t \theta)</math>. This is what it means for θ to be a scale parameter (equivalently, for β = 1/θ to be a rate parameter).
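Both the summation and the scaling property are easy to check by simulation. A minimal sketch using NumPy's shape-scale gamma sampler, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                        # common scale parameter
shapes = [0.5, 1.5, 3.0]           # shape parameters k_i of the independent summands

# Sum of independent Gamma(k_i, theta) variables with a common scale is Gamma(sum k_i, theta).
samples = sum(rng.gamma(k, theta, size=100_000) for k in shapes)
print(samples.mean(), sum(shapes) * theta)        # mean of Gamma(sum k_i, theta) is (sum k_i) * theta
print(samples.var(), sum(shapes) * theta ** 2)    # variance is (sum k_i) * theta^2

# Scaling: t * X with X ~ Gamma(k, theta) is distributed as Gamma(k, t * theta).
t, k = 3.0, 1.5
scaled = t * rng.gamma(k, theta, size=100_000)
print(scaled.mean(), k * t * theta)               # mean of Gamma(k, t * theta)
```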
Parameter estimation
The likelihood function is
- <math>L=\prod_{i=1}^N f(x_i;k,\theta)</math>
from which we calculate the log-likelihood function
- <math>\ell=(k-1)\sum_{i=1}^N\ln(x_i)-\sum x_i/\theta-Nk\ln(\theta)-N\ln\Gamma(k)</math>
Finding the maximum with respect to <math>\theta</math> by taking the derivative and setting it equal to zero yields the maximum likelihood estimate of the <math>\theta</math> parameter:
- <math>\theta=\frac{1}{kN}\sum_{i=1}^N x_i</math>
Substituting this into the log-likelihood function gives:
- <math>\ell=(k-1)\sum_{i=1}^N\ln(x_i)-Nk-Nk\ln\left(\frac{\sum x_i}{kN}\right)-N\ln\Gamma(k)</math>
Finding the maximum with respect to <math>k</math> by taking the derivative and setting it equal to zero yields:
- <math>\ln(k)-\psi(k)=\ln\left(\frac{1}{N}\sum_{i=1}^N x_i\right)-\frac{1}{N}\sum_{i=1}^N\ln(x_i)</math>
where <math>\psi(k)=\frac{\Gamma'(k)}{\Gamma(k)}</math> is the digamma function.
There is no closed-form solution for <math>k</math>. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using Newton's method. An initial value of <math>k</math> can be found either using the method of moments, or using the approximation:
- <math>\ln(k)-\psi(k) \approx \frac{1}{k}\left(\frac{1}{2} + \frac{1}{12k+2}\right)</math>
If we let <math>s = \ln\left(\frac{1}{N}\sum_{i=1}^N x_i\right)-\frac{1}{N}\sum_{i=1}^N\ln(x_i),</math> then <math>k</math> is approximately
- <math>k \approx \frac{3-s+\sqrt{(s-3)^2 + 24s}}{12s}</math>
which is within 1.5% of the correct value.
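Putting the pieces together, a sketch of the estimation procedure described above might look as follows (the helper name fit_gamma_mle is ours; SciPy supplies the digamma and trigamma functions used in the Newton step):

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_gamma_mle(x, newton_steps=5):
    """Maximum-likelihood estimates (k_hat, theta_hat) for a gamma-distributed sample x."""
    x = np.asarray(x, dtype=float)
    s = np.log(x.mean()) - np.log(x).mean()        # s = ln(mean of x) - mean of ln(x)
    # Closed-form starting value for k, accurate to within a few percent.
    k = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    # Newton's method on ln(k) - psi(k) - s = 0; the derivative is 1/k - psi'(k).
    for _ in range(newton_steps):
        k -= (np.log(k) - digamma(k) - s) / (1.0 / k - polygamma(1, k))
    theta = x.mean() / k                           # theta_hat = (1/(kN)) * sum(x_i)
    return k, theta

# Example: recover the parameters from simulated data.
rng = np.random.default_rng(1)
data = rng.gamma(2.5, 1.8, size=50_000)            # shape 2.5, scale 1.8
print(fit_gamma_mle(data))                         # roughly (2.5, 1.8)
```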
Generating gamma random variables
Given the scaling property above, it is enough to generate Gamma variables with <math>\beta = 1</math> as we can later convert to any value of β with simple division.
Using the fact that if <math>X \sim \operatorname{Gamma}(1, 1)</math>, then also <math>X \sim \operatorname {Exp} (1)</math>, and the inverse transform method of generating exponential variables, we conclude that if U is uniformly distributed on (0, 1], then <math>-\ln U \sim \operatorname{Gamma} (1, 1)</math>. Now, using the "α-addition" property of the gamma distribution, we expand this result:
- <math>\sum _{k=1} ^n {-\ln U_k} \sim \operatorname{Gamma} (n, 1)</math>,
where <math>U_k</math> are all uniformly distributed on (0, 1 ] and independent.
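For an integer shape this already gives a complete sampler. A minimal NumPy sketch (sample size and shape are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                          # integer shape parameter
u = 1.0 - rng.uniform(size=(100_000, n))       # uniforms on (0, 1], so the logarithm is finite
samples = (-np.log(u)).sum(axis=1)             # each row: sum of n Exp(1) draws, i.e. Gamma(n, 1)
print(samples.mean(), samples.var())           # both close to n for Gamma(n, 1)
```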
All that is left now is to generate a variable distributed as <math>\operatorname{Gamma} (\delta, 1)</math> for <math>0 < \delta < 1</math> and to apply the "α-addition" property once more. This is the most difficult step.
The following acceptance-rejection algorithm, stated here without proof, accomplishes it:
- Let m be 1.
- Generate <math>V_{2m - 1}</math> and <math>V_{2m}</math>, independent variables uniformly distributed on (0, 1].
- If <math>V_{2m - 1} \le v_0</math>, where <math>v_0 = \frac e {e + \delta}</math>, then go to step 4, else go to step 5.
- Let <math>\xi_m = \left( \frac {V_{2m - 1}} {v_0} \right) ^{\frac 1 \delta}, \ \eta_m = V_{2m} \xi _m^ {\delta - 1}</math>. Go to step 6.
- Let <math>\xi_m = 1 - \ln {\frac {V_{2m - 1} - v_0} {1 - v_0}}, \ \eta_m = V_{2m} e^{-\xi_m}</math>.
- If <math>\eta_m > \xi_m^{\delta - 1} e^{-\xi_m}</math>, then increment m and go to step 2.
- Take <math>\xi = \xi_m</math> as a realization of <math>\operatorname {Gamma} (\delta, 1)</math>.
Now, to summarize,
- <math>\frac 1 \beta \left( \xi - \sum _{k=1} ^{[\alpha]} {\ln U_k} \right) \sim \operatorname{Gamma}(\alpha, \beta)</math> ,
where <math>[\alpha]</math> is the integer part of α, ξ has been generated using the algorithm above with <math>\delta = \{\alpha\}</math> (the fractional part of α), and <math>U_k</math> and <math>V_l</math> are all independent and distributed as explained above. A sketch of the complete procedure in code is given below.
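The following plain-Python sketch (function names are ours; written for clarity rather than speed) implements the acceptance-rejection step and the summary formula above:

```python
import math
import random

def gamma_frac(delta, rng=random):
    """One draw from Gamma(delta, 1), 0 < delta < 1, via the acceptance-rejection steps above."""
    v0 = math.e / (math.e + delta)
    while True:
        # Two independent uniforms on (0, 1]; 1 - random() avoids an (unlikely) exact zero.
        v1, v2 = 1.0 - rng.random(), 1.0 - rng.random()
        if v1 <= v0:
            xi = (v1 / v0) ** (1.0 / delta)              # candidate from the x^(delta-1) part
            eta = v2 * xi ** (delta - 1.0)
        else:
            xi = 1.0 - math.log((v1 - v0) / (1.0 - v0))  # candidate from the e^(-x) part
            eta = v2 * math.exp(-xi)
        if eta <= xi ** (delta - 1.0) * math.exp(-xi):   # accept; otherwise draw again
            return xi

def gamma_rvs(alpha, beta=1.0, rng=random):
    """One draw from Gamma(alpha, beta) in the shape-rate parameterization used above."""
    n = int(alpha)                                       # integer part [alpha]
    delta = alpha - n                                    # fractional part {alpha}
    xi = gamma_frac(delta, rng) if delta > 0 else 0.0
    exp_sum = -sum(math.log(1.0 - rng.random()) for _ in range(n))  # Gamma(n, 1) as a sum of Exp(1)
    return (xi + exp_sum) / beta                         # divide by the rate beta

# Quick check: the sample mean of Gamma(alpha, beta) is alpha / beta.
draws = [gamma_rvs(2.7, 1.5) for _ in range(50_000)]
print(sum(draws) / len(draws))   # roughly 1.8
```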
Related distributions
- <math>X \sim \mathrm{Exponential}(\theta)</math> is an exponential distribution if <math>X \sim \mathrm{Gamma}(1, \theta)</math>.
- <math>cX \sim \mathrm{Gamma}(k, c\theta)</math> if <math>X \sim \mathrm{Gamma}(k, \theta)</math> for any c > 0 .
- <math>Y \sim \mathrm{Gamma}(N, \theta)</math> is a gamma distribution if <math>Y = X_1 + \cdots + X_N</math> and if the <math>X_i \sim \mathrm{Exponential}(\theta)</math> are all independent and share the same parameter <math>\theta</math>.
- <math>X \sim \chi^2(\nu)</math> is a chi-square distribution if <math>X \sim \mathrm{Gamma}(k=\nu/2, \theta = 2)</math>.
- If <math>k</math> is an integer, the gamma distribution is an Erlang distribution (so named in honor of A. K. Erlang) and is the probability distribution of the waiting time until the <math>k</math>-th "arrival" in a one-dimensional Poisson process with intensity <math>1/\theta</math>.
- If <math>X \sim \mathrm{Gamma}(k, \theta)</math>, then <math>1/X \sim \mathrm{InvGamma}(k, \theta^{-1})</math>, where <math>\mathrm{InvGamma}</math> is the inverse-gamma distribution.
- <math>Y = X_1/(X_1+X_2) \sim \mathrm{Beta}(\alpha_1, \alpha_2)</math> is a beta distribution if <math>X_1 \sim \mathrm{Gamma}(\alpha_1, \theta)</math> and <math>X_2 \sim \mathrm{Gamma}(\alpha_2, \theta)</math> are independent and share the same scale parameter (see the numerical sketch after this list).
- If <math>X^2 \sim \mathrm{Gamma}(3/2,2a^2)</math>, then <math>X \sim \mathrm{Maxwell}(a)</math> is a Maxwell-Boltzmann distribution.
- <math>X \sim \mathrm{Gamma}(\alpha, \beta)</math> is approximately normal, <math>N(\mu = \alpha \beta, \sigma^2 = \alpha \beta^2)</math>, for large <math>\alpha</math> (here <math>\beta</math> denotes the scale parameter).
- The real vector <math>(X_1/S,\ldots,X_n/S)\sim \operatorname{Dirichlet}(\alpha_1,\ldots,\alpha_n)</math> follows a Dirichlet distribution if <math>X_i\sim\operatorname{Gamma}(\alpha_i,\theta)</math> are independent, and <math>S=X_1+\cdots+X_n</math>. This holds true for any θ.
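A couple of these relations are easy to verify by simulation; a minimal NumPy sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta from two independent gammas with a common scale.
a, b, theta = 2.0, 5.0, 1.3
x1 = rng.gamma(a, theta, size=100_000)
x2 = rng.gamma(b, theta, size=100_000)
ratio = x1 / (x1 + x2)
print(ratio.mean(), a / (a + b))         # mean of Beta(a, b) is a / (a + b)

# Chi-square with nu degrees of freedom as Gamma(nu/2, theta = 2).
nu = 6
g = rng.gamma(nu / 2.0, 2.0, size=100_000)
print(g.mean(), g.var())                 # close to nu and 2*nu, the chi-square mean and variance
```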
References
- R. V. Hogg and A. T. Craig. Introduction to Mathematical Statistics, 4th edition. New York: Macmillan, 1978. (See Section 3.3.)