Levinson recursion
Levinson recursion is a mathematical procedure which recursively calculates the solution to a system of linear equations whose coefficient matrix is a Toeplitz matrix.
It was proposed by Norman Levinson in 1947, improved by Durbin in 1960, and given a matrix formulation requiring <math>4n^2</math> multiplications by W. F. Trench in 1964. S. Zohar refined Trench's algorithm in 1969, reducing the cost to <math>3n^2</math> multiplications.
In most applications where Toeplitz matrices appear, we are dealing with a problem of the form <math>\mathbf{Ta=b}</math>, where <math>\mathbf T</math> is an <math>n\times n</math> Toeplitz matrix and <math>\mathbf a</math> and <math>\mathbf b</math> are vectors. The problem is to find <math>\mathbf a</math> when <math>\mathbf T</math> and <math>\mathbf b</math> are known. Straightforward application of Gaussian elimination is rather inefficient here, with complexity <math>O(n^3)</math>, since it does not exploit the strong structure of the Toeplitz system.
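For concreteness, this <math>O(n^3)</math> baseline is easy to sketch. The following minimal NumPy example (the values of <math>t</math> and <math>b</math> are purely illustrative) builds the full symmetric Toeplitz matrix from its first column and solves <math>\mathbf{Ta=b}</math> with a general-purpose dense solver that ignores the Toeplitz structure.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative data: first column of a symmetric Toeplitz matrix and a right-hand side.
t = np.array([4.0, 2.0, 1.0, 0.5])   # t_0, t_1, ..., t_{n-1}
b = np.array([1.0, 0.0, 0.0, 0.0])

n = len(t)
# Build the full matrix T[i, j] = t_|i-j|; this already needs O(n^2) storage.
T = t[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]

# General-purpose dense solve (LU factorisation): O(n^3) operations.
a = np.linalg.solve(T, b)
print(a)
</syntaxhighlight>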
A first improvement over Gaussian elimination is the Levinson recursion, which can be applied to symmetric Toeplitz systems. To illustrate the basics of the Levinson algorithm, first define the <math>(p+1)\times(p+1)</math> principal sub-matrix <math>{\mathbf T}_p</math> as the upper left block of <math>\mathbf T</math>. Further, assume that we have the order <math>p</math> solution <math>{\mathbf a}_p</math> to the equation
- <math>
\begin{bmatrix} t_0 & t_1 & t_2 & \dots & t_p \\ t_{1} & t_0 & t_1 & \ddots & t_{p-1} \\ t_{2} & t_{1} & t_0 & \ddots & t_{p-2} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ t_{p} & t_{p-1} & t_{p-2} & \dots & t_{0} \\ \end{bmatrix} \begin{bmatrix} 1 \\ a^{(p)}_1 \\ \vdots \\ a^{(p)}_p \end{bmatrix} = \begin{bmatrix} \epsilon_p \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
</math> Extension of <math>{\mathbf a}_p</math> with a zero yields
- <math>
\begin{bmatrix} t_0 & t_1 & t_2 & \dots & t_{p+1} \\ t_{1} & t_0 & t_1 & \ddots & t_{p } \\ t_{2} & t_{1} & t_0 & \ddots & t_{p-1} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ t_{p+1}& t_{p } & t_{p-1} & \dots & t_{0} \\ \end{bmatrix} \begin{bmatrix} 1 \\ a^{(p)}_1 \\ \vdots \\ a^{(p)}_p \\ 0 \end{bmatrix} = \begin{bmatrix} \epsilon_p \\ 0 \\ \vdots \\ 0 \\ \eta_p \end{bmatrix}
</math> where <math>\eta_p=\sum_{i=0}^pa_i^{(p)}t_{p-i+1}</math> and <math>a_0^{(p)}=1</math>. The salient step comes through the realisation that, since <math>{\mathbf T}_p{\mathbf a}_p={\mathbf u}_p</math> with <math>{\mathbf u}_p=[\epsilon_p\,0\,\dots\,0]^T</math>, and since <math>{\mathbf T}_p</math> is both symmetric and Toeplitz (hence persymmetric), we have <math>{\mathbf T}_p{\mathbf a}_p^\#={\mathbf u}_p^\#</math>, where the superscript # denotes reversal of the order of the elements. By defining a reflection coefficient <math>\Gamma_p</math>, we obtain
- <math>
{\mathbf T}_{p+1} \left( \begin{bmatrix} 1 \\ a^{(p)}_1 \\ \vdots \\ a^{(p)}_p \\ 0 \end{bmatrix} + \Gamma_p \begin{bmatrix} 0 \\ a^{(p)}_p \\ a^{(p)}_{p-1} \\ \vdots \\ 1 \end{bmatrix} \right) = \left( \begin{bmatrix} \epsilon_p \\ 0 \\ \vdots \\ 0 \\ \eta_p \end{bmatrix} +\Gamma_p \begin{bmatrix} \eta_p \\ 0 \\ \vdots \\ 0 \\ \epsilon_p \end{bmatrix} \right).</math>
Obviously, choosing <math>\Gamma_p=-\eta_p/\epsilon_p</math>, so that <math>\eta_p+\Gamma_p\epsilon_p=0</math>, yields the order <math>p+1</math> solution of the same form, with <math>\epsilon_{p+1}=\epsilon_p+\Gamma_p\eta_p=\epsilon_p(1-\Gamma_p^2)</math>, as
- <math>
{\mathbf a}_{p+1}=\begin{bmatrix}{\mathbf a}_p\\0\end{bmatrix} +\Gamma_p\begin{bmatrix}0\\{\mathbf a}_p^\#\end{bmatrix}. </math> Consequently, with a suitable choice of initial values (<math>{\mathbf a}_0=1</math>, <math>\epsilon_0=t_0</math>), this procedure can be used to recursively solve equations of type <math>\mathbf{Ta}=[\sigma^2\,0\,0\dots0]^T</math>. Furthermore, using the intermediate values <math>{\mathbf a}_p</math>, it is straightforward to solve arbitrary problems of type <math>\mathbf{Ta=b}</math>. The former algorithm is often called the Levinson-Durbin recursion, and the latter, the solution of arbitrary Toeplitz equations, the Levinson recursion.
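The recursion above translates almost directly into code. The sketch below (NumPy; the function name and test values are illustrative, and a symmetric, suitably non-singular Toeplitz matrix given by its first column <math>t</math> is assumed) implements the Levinson-Durbin recursion for the special right-hand side <math>[\sigma^2\,0\,0\dots0]^T</math> and, using the intermediate vectors <math>{\mathbf a}_p</math>, the Levinson recursion for an arbitrary right-hand side <math>\mathbf b</math>.

<syntaxhighlight lang="python">
import numpy as np

def levinson_solve(t, b):
    """Solve T x = b, where T is the symmetric Toeplitz matrix with first column t."""
    n = len(t)
    a = np.array([1.0])          # a_0 = 1
    eps = t[0]                   # epsilon_0 = t_0
    x = np.array([b[0] / t[0]])  # order-0 solution of T_0 x = b_0
    for p in range(n - 1):
        # Levinson-Durbin step: extend a_p to a_{p+1}.
        eta = np.dot(a, t[p + 1:0:-1])    # eta_p = sum_i a_i^(p) t_{p+1-i}
        gamma = -eta / eps                # reflection coefficient Gamma_p
        a = np.concatenate((a, [0.0])) + gamma * np.concatenate(([0.0], a[::-1]))
        eps += gamma * eta                # epsilon_{p+1} = epsilon_p (1 - Gamma_p^2)
        # Levinson step: extend the solution of T_p x = b[0..p] to order p+1.
        delta = np.dot(x, t[p + 1:0:-1])  # last entry of T_{p+1} [x; 0]
        mu = (b[p + 1] - delta) / eps
        x = np.concatenate((x, [0.0])) + mu * a[::-1]
    return x

# Quick check against a dense solver on illustrative data.
t = np.array([4.0, 2.0, 1.0, 0.5])
b = np.array([1.0, -1.0, 2.0, 0.0])
n = len(t)
T = t[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]
print(levinson_solve(t, b))
print(np.linalg.solve(T, b))   # should agree up to round-off
</syntaxhighlight>

For practical use, SciPy's scipy.linalg.solve_toeplitz offers a tested implementation of the same Levinson-type approach.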
While the Levinson-Durbin recursion has complexity <math>O(n^2)</math>, the number of multiplications can be reduced by roughly one half. The resulting algorithm, called the split Levinson-Durbin algorithm, uses a three-term recursion instead of the two-term recursion of the conventional Levinson recursion: either the symmetric or the antisymmetric parts of two consecutive order solutions, <math>{\mathbf a}_{p-1}</math> and <math>{\mathbf a}_{p}</math>, are used to obtain the next order solution <math>{\mathbf a}_{p+1}</math>.
Several other methods exist for the solution of Toeplitz systems, such as those based on the Schur algorithm or the Cholesky decomposition, but the split Levinson-Durbin algorithm requires the fewest operations. The alternatives are sometimes preferred when round-off errors cause numerical instability in the Levinson-type recursions.
References
- Bunch, J. R. (1985). "Stability of methods for solving Toeplitz systems of equations." SIAM J. Sci. Stat. Comput., v. 6, pp. 349-364.
- Trench, W. F. (1964). "An algorithm for the inversion of finite Toeplitz matrices." J. Soc. Indust. Appl. Math., v. 12, pp. 515-522.