Correlation
- This article is about the correlation coefficient between two random variables. The term correlation can also mean the cross-correlation of two functions or electron correlation in molecular systems.
In probability theory and statistics, correlation, also called the correlation coefficient, indicates the strength and direction of a linear relationship between two random variables. In general statistical usage, correlation or co-relation refers to the departure of two variables from independence. In this broad sense there are several coefficients measuring the degree of correlation, each adapted to the nature of the data.
A number of different coefficients are used for different situations. The best known is the Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. Despite its name, it was first introduced by Francis Galton.
Pearson's product-moment coefficient
Mathematical properties
The correlation <math>\rho_{X,Y}</math> between two random variables X and Y with expected values <math>\mu_X</math> and <math>\mu_Y</math> and standard deviations <math>\sigma_X</math> and <math>\sigma_Y</math> is defined as:
- <math>\rho_{X,Y}=\frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} =\frac{E((X-\mu_X)(Y-\mu_Y))}{\sigma_X\sigma_Y},</math>
where E is the expected value operator and cov means covariance. Since <math>\mu_X = E(X)</math>, <math>\sigma_X^2 = E(X^2) - E^2(X)</math>, and likewise for Y, we may also write
- <math>\rho_{X,Y}=\frac{E(XY)-E(X)E(Y)}{\sqrt{E(X^2)-E^2(X)}~\sqrt{E(Y^2)-E^2(Y)}}</math>
The correlation is defined only if both standard deviations are finite and both of them are nonzero. It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value.
The correlation is 1 in the case of an increasing linear relationship, −1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of linear dependence between the variables. The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.
If the variables are independent then the correlation is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. Here is an example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X2. Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, independence is equivalent to uncorrelatedness.
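As a quick numerical check of this example, here is a minimal sketch using NumPy (the sample size and random seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=100_000)  # X uniform on [-1, 1]
    y = x ** 2                                # Y is completely determined by X
    print(np.corrcoef(x, y)[0, 1])            # close to 0: dependent yet uncorrelated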
A correlation between two variables is diluted in the presence of measurement error around estimates of one or both variables, in which case disattenuation provides a more accurate coefficient.
The sample correlation
If we have a series of n measurements of X and Y written as <math>x_i</math> and <math>y_i</math>, where i = 1, 2, ..., n, then the Pearson product-moment correlation coefficient can be used to estimate the correlation of X and Y. The Pearson coefficient is also known as the "sample correlation coefficient". It is especially appropriate when X and Y are both normally distributed, in which case it is the best estimate of the correlation of X and Y. The Pearson correlation coefficient is written:
- <math>r_{xy}=\frac{\sum (x_i-\bar{x})(y_i-\bar{y})}{(n-1) s_x s_y}</math>
where <math>\bar{x}</math> and <math>\bar{y}</math> are the sample means of <math>x_i</math> and <math>y_i</math>, <math>s_x</math> and <math>s_y</math> are the sample standard deviations of <math>x_i</math> and <math>y_i</math>, and the sum runs from i = 1 to n. As with the population correlation, we may rewrite this as
- <math>r_{xy}=\frac{n\sum x_iy_i-\sum x_i\sum y_i}{\sqrt{n\sum x_i^2-(\sum x_i)^2}~\sqrt{n\sum y_i^2-(\sum y_i)^2}}.</math>
Again, as is true with the population correlation, the absolute value of the sample correlation must be less than or equal to 1. Though the above formula conveniently suggests a single-pass algorithm for calculating sample correlations, it is notorious for its numerical instability (see below for something more accurate).
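The first formula above transcribes directly into code. The following is a minimal sketch (the helper name sample_r and the data are ours, invented for illustration; NumPy's corrcoef serves only as a cross-check):

    import numpy as np

    def sample_r(x, y):
        # Pearson's r via the deviation-product formula, using the
        # sample (n - 1) convention for s_x and s_y.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        n = len(x)
        return (((x - x.mean()) * (y - y.mean())).sum()
                / ((n - 1) * x.std(ddof=1) * y.std(ddof=1)))

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.0, 4.1, 5.9, 8.2, 9.8]
    print(sample_r(x, y), np.corrcoef(x, y)[0, 1])  # the two values agree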
The square of the sample correlation coefficient is the fraction of the variance in <math>y_i</math> that is accounted for by a linear fit of <math>y_i</math> to <math>x_i</math>. This is written
- <math>r_{xy}^2=1-\frac{\sigma_{y|x}^2}{\sigma_y^2}</math>
where <math>\sigma_{y|x}^2</math> is the sum of squared errors of the linear fit of <math>y_i</math> to <math>x_i</math> by the equation y = a + bx:
- <math>\sigma_{y|x}^2=\sum_{i=1}^n (y_i-a-bx_i)^2</math>
and <math>\sigma_y^2</math> is the total sum of squared deviations of y about its mean (proportional to its variance; the factor of n cancels in the ratio):
- <math>\sigma_y^2=\sum_{i=1}^n (y_i-\bar{y})^2</math>
Note that since the sample correlation coefficient is symmetric in <math>x_i</math> and <math>y_i</math>, we will get the same value for a fit of <math>x_i</math> to <math>y_i</math>:
- <math>r_{xy}^2=1-\frac{\sigma_{x|y}^2}{\sigma_x^2}</math>
This equation also gives an intuitive idea of the correlation coefficient for higher dimensions. Just as the above described sample correlation coefficient is the fraction of variance accounted for by the fit of a 1-dimensional linear submanifold to a set of 2-dimensional vectors (<math>x_i</math>, <math>y_i</math>), so we can define a correlation coefficient for a fit of an m-dimensional linear submanifold to a set of n-dimensional vectors. For example, if we fit a plane z = a + bx + cy to a set of data (<math>x_i</math>, <math>y_i</math>, <math>z_i</math>), then the correlation coefficient of z to x and y is
- <math>r^2=1-\frac{\sigma_{z|xy}^2}{\sigma_z^2}.\,</math>
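The same idea can be seen in code. The sketch below (with made-up data) fits the plane z = a + bx + cy by ordinary least squares and compares the residual sum of squares with the total sum of squares of z:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = rng.normal(size=200)
    z = 1.0 + 2.0 * x - 0.5 * y + rng.normal(scale=0.3, size=200)

    A = np.column_stack([np.ones_like(x), x, y])  # design matrix for z = a + bx + cy
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares fit
    resid = z - A @ coef
    r2 = 1.0 - (resid ** 2).sum() / ((z - z.mean()) ** 2).sum()
    print(r2)  # fraction of the variance of z accounted for by the plane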
Geometric interpretation of correlation
The correlation coefficient can also be viewed as the cosine of the angle between the two vectors that represent the two random variables, after each vector has been centered about its mean.
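Concretely (a sketch with invented data; the key point is the centering step before taking the cosine):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([1.5, 3.1, 4.4, 6.2])
    xc, yc = x - x.mean(), y - y.mean()      # center both vectors
    cosine = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
    print(cosine, np.corrcoef(x, y)[0, 1])   # identical values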
Interpretation of the size of a correlation
Several authors have offered guidelines for the interpretation of a correlation coefficient. Cohen (1988), for example, has suggested the following interpretations for correlations in psychological research:
Correlation | Negative | Positive
Small | −0.29 to −0.10 | 0.10 to 0.29
Medium | −0.49 to −0.30 | 0.30 to 0.49
Large | −0.50 to −1.00 | 0.50 to 1.00
As Cohen himself observed, however, all such criteria are in some ways arbitrary and should not be applied too strictly, because the interpretation of a correlation coefficient depends on the context and purpose. A correlation of 0.9 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences, where there may be a greater contribution from complicating factors.
Non-parametric correlation coefficients
Pearson's correlation coefficient is a parametric statistic, and it may be less useful if the underlying assumption of normality is violated. Non-parametric correlation methods, such as Spearman's ρ and Kendall's τ, may be useful when distributions are not normal; they are a little less powerful than parametric methods when the assumptions underlying the latter are met, but are less likely to give distorted results when those assumptions fail.
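For example, with SciPy (a sketch; the data are invented so that the relationship is monotone but strongly nonlinear):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(size=500)
    y = np.exp(x) + rng.normal(scale=0.1, size=500)  # monotone but nonlinear in x

    print(stats.pearsonr(x, y))    # r well below 1, depressed by the nonlinearity
    print(stats.spearmanr(x, y))   # Spearman's rho near 1: the ranks line up
    print(stats.kendalltau(x, y))  # Kendall's tau, also rank-based and near 1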
Other measures of dependence among random variables
To capture more general dependencies in the data (including nonlinear ones), it is better to use the correlation ratio, which can detect almost any functional dependency, or mutual information, which detects even more general dependencies.
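As an illustration of the mutual-information approach, here is a crude histogram ("plug-in") estimate. This is a sketch only: the helper name, the bin count, and the estimator itself are naive choices of ours, and better estimators exist.

    import numpy as np

    def mutual_information(x, y, bins=20):
        # Naive plug-in estimate of I(X; Y) in nats from a 2-D histogram.
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()                          # joint probabilities
        px = pxy.sum(axis=1, keepdims=True)       # marginal of X
        py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
        nz = pxy > 0                              # avoid log(0)
        return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

    rng = np.random.default_rng(3)
    x = rng.uniform(-1, 1, size=20_000)
    print(mutual_information(x, x ** 2))                           # clearly positive
    print(mutual_information(x, rng.uniform(-1, 1, size=20_000)))  # near 0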
Copulas and correlation
It is often erroneously believed that the information given by a correlation coefficient is enough to define the dependence structure between random variables. To fully capture the dependence between random variables, however, we must consider the copula between them. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distributions are elliptical (as with the multivariate normal distribution).
Correlation matrices
The correlation matrix of n random variables X1, ..., Xn is the n × n matrix whose i,j entry is corr(Xi, Xj). If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables Xi /SD(Xi) for i = 1, ..., n. Consequently it is necessarily a non-negative definite matrix.
The correlation matrix is symmetric (the correlation between <math>X_i</math> and <math>X_j</math> is the same as the correlation between <math>X_j</math> and <math>X_i</math>).
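For example (a sketch; note that np.corrcoef treats each row of its argument as one variable):

    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.normal(size=(3, 1000))  # three variables, 1000 observations each
    data[2] += 0.8 * data[0]           # make X3 correlate with X1

    C = np.corrcoef(data)              # the 3 x 3 correlation matrix
    print(np.allclose(C, C.T))                      # True: symmetric
    print(np.all(np.linalg.eigvalsh(C) >= -1e-12))  # True: non-negative definite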
"Correlation does not imply causation"
The conventional dictum that "correlation does not imply causation" is treated in the article titled spurious relationship. See also correlation implies causation (logical fallacy). Note, however, that correlations are not presumed to be acausal: a correlation may well reflect a genuine causal relationship even when the causes are not known.
Computing correlation accurately in a single pass
The following algorithm (here given as runnable Python rather than pseudocode) estimates the correlation with good numerical stability:
    from math import sqrt

    def correlation(x, y):
        # Single-pass, numerically stable computation of the population
        # correlation coefficient, using Welford-style incremental updates.
        n = len(x)
        mean_x, mean_y = x[0], y[0]
        sum_sq_x = sum_sq_y = sum_coproduct = 0.0
        for i in range(1, n):
            sweep = i / (i + 1.0)
            delta_x = x[i] - mean_x
            delta_y = y[i] - mean_y
            sum_sq_x += delta_x * delta_x * sweep
            sum_sq_y += delta_y * delta_y * sweep
            sum_coproduct += delta_x * delta_y * sweep
            mean_x += delta_x / (i + 1.0)
            mean_y += delta_y / (i + 1.0)
        pop_sd_x = sqrt(sum_sq_x / n)
        pop_sd_y = sqrt(sum_sq_y / n)
        cov_x_y = sum_coproduct / n
        return cov_x_y / (pop_sd_x * pop_sd_y)
For an enlightening experiment, check the correlation of {900,000,000 + i for i=1...100} with {900,000,000 - i for i=1...100}, perhaps with a few values modified. Poor algorithms will fail.
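Here is a sketch of that experiment, reusing the correlation function defined above. Since y is an exactly decreasing linear function of x, the true correlation is −1; the naive single-pass sums, however, agree to about fifteen significant digits, and their differences lose nearly every digit to cancellation:

    x = [900000000.0 + i for i in range(1, 101)]
    y = [900000000.0 - i for i in range(1, 101)]

    # Naive single-pass sums for the textbook formula.
    n = float(len(x))
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(u * v for u, v in zip(x, y))

    dx = n * sxx - sx * sx   # true value is 8,332,500; the float result is far off
    dy = n * syy - sy * sy   # and may even come out negative,
    dxy = n * sxy - sx * sy  # making the subsequent square roots fail
    print(dx, dy, dxy)

    print(correlation(x, y))  # about -1.0, as expected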
References
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
External links
- Understanding Correlation - Introductory material by a University of Hawaii professor
- Statsoft Electronic Textbook
- Pearson's Correlation Coefficient - How to calculate it fast
- Learning by Simulations - The distribution of the correlation coefficient