Schur complement


In linear algebra and the theory of matrices, the Schur complement (named after Issai Schur) of a block of a matrix is defined as follows. Suppose A, B, C and D are respectively p×p, p×q, q×p and q×q matrices, and that D is invertible. Let

<math>M=\left[\begin{matrix} A & B \\ C & D \end{matrix}\right]</math>

so that M is a (p+q)×(p+q) matrix.

Then the Schur complement of the block D of the matrix M is the p×p matrix

<math>A-BD^{-1}C.</math>

The Schur complement arises as the result of performing a "partial" Gaussian elimination by multiplying the matrix M from the right with the "lower triangular" block matrix

<math>LT=\left[\begin{matrix} E_p & 0 \\ -D^{-1}C & D^{-1} \end{matrix}\right].</math>

Here E<sub>p</sub> denotes the p×p identity matrix. After multiplying by LT, the Schur complement of D appears in the upper-left p×p block. The product matrix is

<math>M\cdot LT=\left[\begin{matrix} A-BD^{-1}C & BD^{-1} \\ 0 & E_q \end{matrix}\right].</math>
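This block elimination can be checked numerically. A minimal sketch with NumPy (matrix sizes and entries chosen arbitrarily; D is shifted to keep it invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 4 * np.eye(q)  # shift keeps D invertible

M = np.block([[A, B], [C, D]])
Dinv = np.linalg.inv(D)
LT = np.block([[np.eye(p), np.zeros((p, q))],
               [-Dinv @ C, Dinv]])

prod = M @ LT
schur = A - B @ Dinv @ C  # Schur complement of D in M

# The upper-left block of the product is the Schur complement,
# the lower-left block is zero, and the lower-right is the identity.
assert np.allclose(prod[:p, :p], schur)
assert np.allclose(prod[p:, :p], np.zeros((q, p)))
assert np.allclose(prod[p:, p:], np.eye(q))
```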

The inverse of M thus may be expressed in terms of <math>D^{-1}</math> and the inverse of the Schur complement only, as

<math> \left[ \begin{matrix} A & B \\ C & D \end{matrix}\right]^{-1} = \left[ \begin{matrix} \left(A-B D^{-1} C \right)^{-1} & -\left(A-B D^{-1} C \right)^{-1} B D^{-1} \\ -D^{-1}C\left(A-B D^{-1} C \right)^{-1} & D^{-1}+ D^{-1} C \left(A-B D^{-1} C \right)^{-1} B D^{-1} \end{matrix} \right], </math>

or more simply put,

<math> \left[ \begin{matrix} A & B \\ C & D \end{matrix}\right]^{-1} =

\left[ \begin{matrix} I & 0 \\ -D^{-1}C & I \end{matrix}\right] \left[ \begin{matrix} (A-BD^{-1}C)^{-1} & 0 \\ 0 & D^{-1} \end{matrix}\right] \left[ \begin{matrix} I & -BD^{-1} \\ 0 & I \end{matrix}\right]. </math>
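The block-inverse formula above can likewise be verified directly. A sketch in NumPy (diagonal shifts are added only to make the random blocks safely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 2
A = rng.standard_normal((p, p)) + 3 * np.eye(p)
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 3 * np.eye(q)

M = np.block([[A, B], [C, D]])
Dinv = np.linalg.inv(D)
Sinv = np.linalg.inv(A - B @ Dinv @ C)  # inverse of the Schur complement

# Assemble the inverse block by block, following the formula in the text.
Minv = np.block([
    [Sinv,             -Sinv @ B @ Dinv],
    [-Dinv @ C @ Sinv, Dinv + Dinv @ C @ Sinv @ B @ Dinv],
])

assert np.allclose(M @ Minv, np.eye(p + q))
```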

If M is a positive definite symmetric matrix, then so is the Schur complement of D in M.
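This inheritance of positive definiteness can be illustrated numerically. A sketch that builds a symmetric positive-definite M by construction and checks the eigenvalues of the Schur complement:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 2
G = rng.standard_normal((p + q, p + q))
M = G.T @ G + np.eye(p + q)  # symmetric positive definite by construction

A, B = M[:p, :p], M[:p, p:]
C, D = M[p:, :p], M[p:, p:]
schur = A - B @ np.linalg.inv(D) @ C

# The Schur complement is symmetric with strictly positive eigenvalues.
assert np.allclose(schur, schur.T)
assert np.all(np.linalg.eigvalsh(schur) > 0)
```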

Application to solving linear equations

The Schur complement arises naturally in solving a system of linear equations such as

<math>Ax + By = a</math>
<math>Cx + Dy = b</math>

where x, a are p-dimensional column vectors, y, b are q-dimensional column vectors, and A, B, C, D are as above. Multiplying the bottom equation by <math>BD^{-1}</math> on the left and then subtracting it from the top equation, one obtains

<math>(A - BD^{-1} C) x = a - BD^{-1} b.\,</math>

Thus if one can invert D as well as the Schur complement of D, one can solve for x, and then by using the equation <math>Cx + Dy = b</math> one can solve for y. This reduces the problem of inverting a <math>(p+q) \times (p+q)</math> matrix to that of inverting a p×p matrix and a q×q matrix. In practice one needs D to be well-conditioned in order for this algorithm to be accurate.
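The two-stage solve described above can be sketched in NumPy; the result is checked against solving the full (p+q)×(p+q) system directly (diagonal shifts again only ensure invertibility of the random test blocks):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 3
A = rng.standard_normal((p, p)) + 3 * np.eye(p)
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 3 * np.eye(q)
a = rng.standard_normal(p)
b = rng.standard_normal(q)

# Solve (A - B D^{-1} C) x = a - B D^{-1} b for x,
# then back-substitute into C x + D y = b for y.
Dinv_b = np.linalg.solve(D, b)
Dinv_C = np.linalg.solve(D, C)
x = np.linalg.solve(A - B @ Dinv_C, a - B @ Dinv_b)
y = np.linalg.solve(D, b - C @ x)

# Compare against solving the assembled block system in one shot.
M = np.block([[A, B], [C, D]])
xy = np.linalg.solve(M, np.concatenate([a, b]))
assert np.allclose(np.concatenate([x, y]), xy)
```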

Applications to probability theory and statistics

Suppose the random column vectors X, Y take values in <math>\mathbb{R}^n</math> and <math>\mathbb{R}^m</math> respectively, and the vector (X, Y) in <math>\mathbb{R}^{n+m}</math> has a multivariate normal distribution whose covariance is the symmetric positive-definite matrix

<math>V=\left[\begin{matrix} A & B \\ B^T & C \end{matrix}\right].</math>

Then the conditional variance of X given Y is the Schur complement of C in V:

<math>\operatorname{var}(X\mid Y)=A-BC^{-1}B^T.</math>
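This can be checked against a standard identity: the conditional covariance of X given Y equals the inverse of the X-block of the precision matrix <math>V^{-1}</math>. A sketch in NumPy with an arbitrarily constructed positive-definite covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 2, 3
G = rng.standard_normal((n + m, n + m))
V = G @ G.T + np.eye(n + m)  # symmetric positive-definite covariance

A, B = V[:n, :n], V[:n, n:]
C = V[n:, n:]
cond_var = A - B @ np.linalg.inv(C) @ B.T  # Schur complement of C in V

# Identity: var(X | Y) is the inverse of the X-block of the precision matrix.
precision = np.linalg.inv(V)
assert np.allclose(cond_var, np.linalg.inv(precision[:n, :n]))
```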

If we take the matrix V above to be, not the covariance of a random vector, but a sample covariance, then it may have a Wishart distribution. In that case, the Schur complement of C in V also has a Wishart distribution.