Standard deviation

In probability and statistics, the standard deviation is the most common measure of statistical dispersion. Simply put, the standard deviation measures how spread out the values in a data set are. More precisely, it measures how far the values in the set typically lie from their mean. If the data points are all close together, then the standard deviation will be low (closer to zero). If the data points are highly variable, then the standard deviation will be high (further from zero).

The standard deviation is defined as the square root of the variance. This means it is the root mean square (RMS) deviation from the average. The standard deviation is always a positive number and is always measured in the same units as the original data. For example, if the data are distance measurements in meters, the standard deviation will also be measured in meters.

A distinction is made between the standard deviation σ (sigma) of a whole population or of a random variable, and the standard deviation s of a sample drawn from a population. The formulae are given below.

The term standard deviation was introduced to statistics by Karl Pearson (On the dissection of asymmetrical frequency curves, 1894).

Definition and calculation

Standard deviation of a random variable

The standard deviation of a random variable X is defined as:

<math>\sigma = \sqrt{\operatorname{E}((X-\operatorname{E}(X))^2)} = \sqrt{\operatorname{E}(X^2) - (\operatorname{E}(X))^2}</math>

Not all random variables have a standard deviation, since these expected values need not exist. For example, the standard deviation of a random variable which follows a Cauchy distribution is undefined.

If the random variable X takes on the values <math>x_1, \dots, x_N</math> (which are real numbers) with equal probability, then its standard deviation can be computed as follows. First, the mean of X is defined as:

<math>\overline{x} = \frac{1}{N}\sum_{i=1}^N x_i = \frac{x_1+x_2+\cdots+x_N}{N}</math>

(see sigma notation). Next, the standard deviation simplifies to:

<math>\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2}</math>

In other words, the standard deviation of a discrete uniform random variable X can be calculated as follows:

  1. For each value <math>x_i</math> calculate the difference <math>x_i - \overline{x}</math> between <math>x_i</math> and the average value <math>\overline{x}</math>.
  2. Calculate the squares of these differences.
  3. Find the average of the squared differences. This quantity is the variance <math>\sigma^2</math>.
  4. Take the square root of the variance.
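
As an illustration, these four steps translate directly into a short Python sketch (the function name population_sd is our own):

<source lang="python">
import math

def population_sd(values):
    """Population standard deviation, following the four steps above."""
    n = len(values)
    mean = sum(values) / n                  # the average value
    diffs = [x - mean for x in values]      # step 1: differences from the mean
    squares = [d ** 2 for d in diffs]       # step 2: squares of the differences
    variance = sum(squares) / n             # step 3: the variance sigma^2
    return math.sqrt(variance)              # step 4: square root of the variance

print(population_sd([0, 6, 8, 14]))  # 5.0
</source>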

Estimating standard deviation from a sample

Given only a sample of values <math>x_1, \dots, x_N</math> from some larger population, many authors define the sample (or estimated) standard deviation by:

<math>s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_i - \overline{x})^2}</math>

The reason for this definition is that <math>s^2</math> is an unbiased estimator for the variance <math>\sigma^2</math> of the underlying population. (The derivation of this equation assumes only that the samples are uncorrelated and makes no assumption as to their distribution.) However, s is not an unbiased estimator for the standard deviation σ; it tends to underestimate the population standard deviation. Although an unbiased estimator for σ is known when the random variable is normally distributed, the formula is complicated and amounts to a minor correction. Moreover, unbiasedness, in this sense of the word, is not always desirable; see bias (statistics). Some have argued that even the difference between N and N − 1 in the denominator is overly complex and insignificant. Dropping it yields the simpler expression:

<math>s = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2}</math>

This form has the desirable property of being the maximum-likelihood estimate when the population (or the random variable X) is normally distributed.
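
Both conventions are available in Python's standard statistics module: stdev uses the N − 1 denominator, while pstdev uses N. For example:

<source lang="python">
import statistics

data = [0, 6, 8, 14]

# Sample standard deviation: divides by N - 1 (the unbiased-variance convention).
print(statistics.stdev(data))   # 5.7735...

# Population standard deviation: divides by N (the maximum-likelihood form).
print(statistics.pstdev(data))  # 5.0
</source>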

Example

We will show how to calculate the standard deviation of a population. Our example will use the ages of four young children: { 5, 6, 8, 9 }.

Step 1. Calculate the mean/average <math>\overline{x}</math>:

<math>\overline{x}=\frac{1}{N}\sum_{i=1}^N x_i</math>

We have N = 4 because there are four data points:

<math>x_1 = 5\,\!</math>
<math>x_2 = 6\,\!</math>
<math>x_3 = 8\,\!</math>
<math>x_4 = 9\,\!</math>
<math>\overline{x}=\frac{1}{4}\sum_{i=1}^4 x_i</math>       Replacing N with 4
<math>\overline{x}=\frac{1}{4} \left ( x_1 + x_2 + x_3 +x_4 \right ) </math>
<math>\overline{x}=\frac{1}{4} \left ( 5 + 6 + 8 + 9 \right ) </math>
<math>\overline{x}= 7</math>   This is the mean.

Step 2. Calculate the standard deviation <math>\sigma\,\!</math>:

<math>\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2}</math>
<math>\sigma = \sqrt{\frac{1}{4} \sum_{i=1}^4 (x_i - \overline{x})^2}</math>       Replacing N with 4
<math>\sigma = \sqrt{\frac{1}{4} \sum_{i=1}^4 (x_i - 7)^2}</math>       Replacing <math>\overline{x}</math> with 7
<math>\sigma = \sqrt{\frac{1}{4} \left [ (x_1 - 7)^2 + (x_2 - 7)^2 + (x_3 - 7)^2 + (x_4 - 7)^2 \right ] }</math>
<math>\sigma = \sqrt{\frac{1}{4} \left [ (5 - 7)^2 + (6 - 7)^2 + (8 - 7)^2 + (9 - 7)^2 \right ] }</math>
<math>\sigma = \sqrt{\frac{1}{4} \left ( (-2)^2 + (-1)^2 + 1^2 + 2^2 \right ) }</math>
<math>\sigma = \sqrt{\frac{1}{4} \left ( 4 + 1 + 1 + 4 \right ) }</math>
<math>\sigma = \sqrt{\frac{10}{4}}</math>
<math>\sigma \approx 1.5811\,\!</math>   This is the standard deviation.

Were this set a sample drawn from a larger population of children, and the question at hand was the standard deviation of the population, convention would replace the N (or 4) here with N−1 (or 3).
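
The worked example can be checked with the same standard-library functions:

<source lang="python">
import statistics

ages = [5, 6, 8, 9]
print(statistics.pstdev(ages))  # 1.5811..., the population figure computed above
print(statistics.stdev(ages))   # 1.8257..., the sample (N - 1) convention
</source>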

Interpretation and application

A large standard deviation indicates that the data points are far from the mean and a small standard deviation indicates that they are clustered closely around the mean.

For example, each of the three data sets (0, 0, 14, 14), (0, 6, 8, 14), and (6, 6, 8, 8) has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third set has a much smaller standard deviation than the other two because its values are all close to 7. In a loose sense, the standard deviation tells us how far from the mean the data points tend to be. It will have the same units as the data points themselves. If, for instance, the data set (0, 6, 8, 14) represents the ages of four siblings, the standard deviation is 5 years. The data set (1000, 1006, 1008, 1014) may represent the distances traveled by four athletes in 2 minutes, measured in meters. It has a mean of 1007 meters and a standard deviation of 5 meters. In the age example, a standard deviation of 5 may be considered large; in the distance example, 5 may be considered small.

Standard deviation may serve as a measure of uncertainty. In the physical sciences, for example, the reported standard deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then we consider the measurements as contradicting the prediction. This makes sense since they fall outside the range of values that could reasonably be expected to occur if the prediction were correct and the standard deviation appropriately quantified. See prediction interval.

Geometric interpretation

To gain some geometric insights, we will start with a population of three values, <math>x_1, x_2, x_3</math>. This defines a point <math>P = (x_1, x_2, x_3)</math> in <math>\mathbb{R}^3</math>. Consider the line <math>L = \{(r, r, r) : r \in \mathbb{R}\}</math>. This is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and P would lie on L. So it is not unreasonable to assume that the standard deviation is related to the distance of P to L. And that is indeed the case. Moving orthogonally from P to the line L, one hits the point:

<math>R = (\overline{x},\overline{x},\overline{x})</math>

whose coordinates are the mean of the values we started out with. A little algebra shows that the distance between P and R (which is the same as the distance between P and the line L) is given by <math>\sigma\sqrt{3}</math>. An analogous formula (with 3 replaced by N) is also valid for a population of N values; we then have to work in <math>\mathbb{R}^N</math>.
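
This relationship is easy to verify numerically; the sketch below (using NumPy) compares the distance from P to the line L with σ√N for a small example:

<source lang="python">
import numpy as np

x = np.array([5.0, 6.0, 8.0, 9.0])  # N = 4 values: the point P in R^4
R = np.full_like(x, x.mean())       # foot of the perpendicular from P to L

print(np.linalg.norm(x - R))        # 3.1622..., the distance from P to L
print(x.std() * np.sqrt(x.size))    # the same value: sigma * sqrt(N)
</source>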

Rules for normally distributed data

[Image: Standard deviation diagram]

In practice, one often assumes that the data are from an approximately normally distributed population. If that assumption is justified, then about 68.27% of the values lie within one standard deviation of the mean, about 95.45% within two standard deviations, and about 99.73% within three standard deviations. This is known as the "68-95-99.7 rule". As a word of caution, this assumption typically becomes less accurate in the tails.
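
These percentages can be reproduced from the normal cumulative distribution function, since the fraction within k standard deviations of the mean equals erf(k/√2):

<source lang="python">
import math

# Fraction of a normal population within k standard deviations of the mean.
for k in (1, 2, 3):
    print(k, math.erf(k / math.sqrt(2)))
# 1 0.6827...
# 2 0.9545...
# 3 0.9973...
</source>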

For normal distributions, the two points of the curve which are one standard deviation from the mean are also the inflection points.

If the distribution is unknown, one can use Chebyshev's inequality to bound the probability that a value lies more than a given number of standard deviations from the mean.
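
As a rough illustration, the following sketch draws from a markedly non-normal (exponential) distribution and compares the observed tail fractions with Chebyshev's bound of 1/k²:

<source lang="python">
import math
import random

random.seed(0)
data = [random.expovariate(1.0) for _ in range(100000)]  # skewed, non-normal data

mu = sum(data) / len(data)
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

for k in (2, 3):
    outside = sum(1 for x in data if abs(x - mu) > k * sigma) / len(data)
    print(k, outside, "<=", 1 / k ** 2)  # observed tail fraction vs. the bound
</source>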

Relationship between standard deviation and mean

The mean and the standard deviation of a set of data are usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the root mean square deviation taken about the mean is smaller than about any other point. The precise statement is the following: suppose <math>x_1, \dots, x_N</math> are real numbers and define the function:

<math>\sigma(r) = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - r)^2}</math>

Using calculus, it is not difficult to show that σ(r) has a unique minimum at the mean:

<math>r = \overline{x}</math>

(this can also be done with fairly simple algebra alone, since, as a function of r, the variance σ(r)² is a quadratic polynomial).
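
A quick numerical check of this minimum:

<source lang="python">
import math

def sigma_r(values, r):
    """Root mean square deviation of the values about the point r."""
    return math.sqrt(sum((x - r) ** 2 for x in values) / len(values))

data = [5, 6, 8, 9]
mean = sum(data) / len(data)  # 7.0

for r in (mean - 1, mean, mean + 1):
    print(r, sigma_r(data, r))  # smallest at r = mean
</source>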

The coefficient of variation of a sample is the ratio of the standard deviation to the mean. It is a dimensionless number that can be used to compare the amount of dispersion between populations with different means.

Chebyshev's inequality shows that in any data set, most of the values lie close to the mean, where "close to" is measured in standard deviations: the fraction of values more than k standard deviations from the mean is at most 1/k².

Rapid calculation methods

A slightly faster way to compute the population standard deviation (significantly faster when maintaining a running standard deviation) is given by the following formula, though it can exacerbate round-off error:

<math>\sigma = \sqrt{\frac{1}{N}\left(\sum_{i=1}^N{x_i^2} - \frac{\left(\sum_{i=1}^N{x_i}\right)^2}{N}\right)} = \sqrt{\frac{1}{N}\left(\sum_{i=1}^N{x_i^2}\right) - \overline{x}^2}</math>

Similarly for sample standard deviation:

<math>s = \sqrt{\frac{\sum_{i=1}^N{x_i^2} - N\overline{x}^2}{N-1}}</math>

Or from running sums:

<math>s = \sqrt{\frac{N\sum_{i=1}^N{x_i^2} - \left(\sum_{i=1}^N{x_i}\right)^2}{N(N-1)}}</math>

See also algorithms for calculating variance.
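
As a sketch, the running-sums form translates into a small accumulator (keeping in mind the round-off caveat above):

<source lang="python">
import math

class RunningStd:
    """Sample standard deviation from running sums, per the last formula above.

    Subtracting two large, nearly equal sums can lose precision; see
    algorithms for calculating variance for numerically stabler methods.
    """
    def __init__(self):
        self.n = 0
        self.sum_x = 0.0   # running sum of the values
        self.sum_x2 = 0.0  # running sum of the squared values

    def add(self, x):
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x

    def sample_std(self):
        # Requires at least two values.
        return math.sqrt((self.n * self.sum_x2 - self.sum_x ** 2)
                         / (self.n * (self.n - 1)))

acc = RunningStd()
for x in [5, 6, 8, 9]:
    acc.add(x)
print(acc.sample_std())  # 1.8257..., i.e. sqrt(10/3)
</source>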

An axiomatic approach

Let <math> X=(X_1,X_2, \dots ,X_n) </math> be a vector of real numbers.

<math> X\in \mathbb{R}^n, \qquad n\in \mathbb{N} </math>

We write:

<math> X \approx \mu \pm \sigma </math>

meaning that <math>\ X </math> is estimated by the mean value <math>\ \mu </math>, and the standard deviation is <math>\ \sigma </math>.

<math>\ \mu </math> is a real number, and <math>\ \sigma </math> is a signless real number, meaning that <math>\ \sigma </math> and <math>\ -\sigma </math> are considered equivalent.

<math> \mu \in \mathbb{R}, \qquad \sigma \in \mathbb{R}/\{+1, -1\} </math>

The case <math> n=2 </math> is, by definition:

<math> X \approx \frac{X_1+X_2}{2} \pm \frac{|X_1-X_2|}{2} </math>

Note the special case <math>\ X_1=X_2 </math>:

<math> X \approx X_1 \pm 0 </math>

The case:

<math> (+1,-1) \approx 0 \pm 1 = \pm 1</math>

justifies the use of the sign <math>\pm</math>.

A few rules apply. If <math>\ X=(X_1,X_2)\approx \mu \pm \sigma </math> then

  1. Symmetry: <math>\ (X_2,X_1)\approx \mu \pm \sigma </math>
  2. Addition: <math>\ a+X=(a+X_1,a+X_2)\approx (a+\mu) \pm \sigma </math>
  3. Multiplication: <math>\ aX=(aX_1,aX_2)\approx a \mu \pm a\sigma </math>

The addition rule looks like a rule of associativity,

<math>\ a+(\mu \pm \sigma)= (a+\mu) \pm \sigma </math>

and the multiplication rule looks like a rule of distributivity,

<math>\ a(\mu \pm \sigma) = a \mu \pm a\sigma </math>

So:

<math> X \approx \mu \pm \sigma = \mu+\sigma (\pm 1)</math>

Consider the power sums:

<math>\ s_j=\sum_k{X_k^j}, \quad j\in \mathbb N_0</math>

(Note that <math>s_0 = n, \quad s_1 = X_1+\cdots+X_n, \quad s_2 = X_1^2+\cdots+X_n^2</math>.)

The power sums <math>\ s_j</math> are symmetric functions of the vector <math>\ X </math>, and the symmetric functions <math>\ \mu </math> and <math>\ \sigma </math> are written in terms of these like this:

<math>\ \mu=s_1s_0^{-1} </math>
<math>\ \sigma=(s_0s_2-s_1^2)^{1/2}s_0^{-1} </math>

or

<math>\ X \approx \frac{s_1 \pm \sqrt{s_0s_2-s_1^2}}{s_0}</math>

This formula is readily checked for the special case <math> n=2 </math>, and it generalizes the definition to <math> n\in \mathbb{N}</math> preserving the rules.
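
A short Python check of the power-sum formula, including the n = 2 case:

<source lang="python">
import math

def mu_sigma(X):
    """mu and sigma computed from the power sums s0, s1, s2 defined above."""
    s0 = len(X)
    s1 = sum(X)
    s2 = sum(x * x for x in X)
    return s1 / s0, math.sqrt(s0 * s2 - s1 ** 2) / s0

# For n = 2 this agrees with ((X1 + X2)/2, |X1 - X2|/2):
print(mu_sigma([1, -1]))         # (0.0, 1.0)
print(mu_sigma([1, 1, -1, -1]))  # (0.0, 1.0)
</source>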

The case <math> n=1 </math> is:

<math> X \approx X_1 \pm 0 </math>

Examples:

<math> (1) \approx 1 \pm 0 </math>
<math> (1,1) \approx 1 \pm 0 </math>
<math> (1,-1) \approx 0 \pm 1 </math>
<math> (1,1,1) \approx 1 \pm 0 </math>
<math> (1,1,-1,-1) \approx 0 \pm 1 </math>

When the standard deviation is zero, <math> \approx </math> is replaced by <math> = </math>, and <math> \pm 0 </math> is omitted.
