Transpose

See transposition for meanings of this term in telecommunication and music.

In mathematics, and in particular linear algebra, the transpose of a matrix is another matrix, produced by turning rows into columns and vice versa. Informally, the transpose of a square matrix is obtained by reflecting at the main diagonal (that runs from the top left to bottom right of the matrix). The transpose of the matrix A is written as Atr, tA, A′, or AT.

Formally, the transpose of the m-by-n matrix A is the n-by-m matrix AT defined by AT[i, j] = A[j, i] for 1 ≤ i ≤ n and 1 ≤ j ≤ m.

For example,

<math>\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} \quad\quad \mbox{and} \quad\quad \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}</math>
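As a quick check of the definition, the two examples above can be reproduced with a short Python sketch (using plain nested lists rather than any particular matrix library):

```python
def transpose(A):
    """Return the transpose of a matrix given as a list of equal-length rows."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

print(transpose([[1, 2], [3, 4]]))          # [[1, 3], [2, 4]]
print(transpose([[1, 2], [3, 4], [5, 6]]))  # [[1, 3, 5], [2, 4, 6]]
```

Note that the 3-by-2 input becomes a 2-by-3 output, as the formal definition requires.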

Properties

For any two m-by-n matrices A and B and every scalar c, we have (A + B)T = AT + BT and (cA)T = c(AT). This shows that the transpose is a linear map from the space of all m-by-n matrices to the space of all n-by-m matrices.

The transpose operation is self-inverse, i.e., taking the transpose of the transpose leaves a matrix unchanged: (AT)T = A.
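The linearity and self-inverse properties can be verified on concrete matrices; here is a small sketch with list-based helper functions (the helper names are our own, not from any library):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [0, 1, 2]]

assert transpose(add(A, B)) == add(transpose(A), transpose(B))  # (A + B)^T = A^T + B^T
assert transpose(scale(5, A)) == scale(5, transpose(A))         # (cA)^T = c(A^T)
assert transpose(transpose(A)) == A                             # (A^T)^T = A
```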

If A is an m-by-n and B an n-by-k matrix, then we have (AB)T = (BT)(AT). Note that the order of the factors switches. From this one can deduce that a square matrix A is invertible if and only if AT is invertible, and in this case we have (A-1)T = (AT)-1.
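The reversal of factors in (AB)T = BT AT can likewise be checked on a concrete pair of rectangular matrices (a sketch; `matmul` is an ad-hoc helper, not a library call):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    """Multiply an m-by-n matrix A by an n-by-k matrix B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]      # 2-by-3
B = [[1, 0], [2, 1], [0, 3]]    # 3-by-2

# (AB)^T = B^T A^T; note the shapes only compose in the reversed order.
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```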

The dot product of two vectors expressed as columns of their coordinates can be computed as

<math> \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathrm{T}} \mathbf{b} \,</math>

where the product on the right is the ordinary matrix multiplication.
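Treating each vector as a column (an n-by-1 matrix), the 1-by-1 matrix product aT b has the dot product as its single entry; a brief sketch:

```python
a = [[1], [2], [3]]   # column vector, a 3-by-1 matrix
b = [[4], [5], [6]]

aT = [list(row) for row in zip(*a)]              # 1-by-3 row vector
aTb = sum(aT[0][k] * b[k][0] for k in range(3))  # the single entry of a^T b
dot = sum(u[0] * v[0] for u, v in zip(a, b))     # ordinary dot product

assert aTb == dot == 32
```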

If A is an arbitrary m-by-n matrix with real entries, then ATA is a positive semidefinite matrix.
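The reason is that xT(ATA)x = (Ax)T(Ax) = ‖Ax‖², which is a sum of squares and hence nonnegative for every x. A numerical sketch of this identity (with arbitrarily chosen A and x):

```python
A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # an arbitrary real 3-by-2 matrix
x = [-2.0, 1.0]                             # an arbitrary real vector

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
quad_form = sum(v * v for v in Ax)          # x^T (A^T A) x = ||Ax||^2

assert quad_form >= 0
```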

If A is an n-by-n matrix over some field, then A is similar to AT.

Further nomenclature

A square matrix whose transpose is equal to itself is called a symmetric matrix, i.e. A is symmetric iff:

<math>\ A = A^{\mathrm{T}}</math>

A square matrix whose transpose is also its inverse is called an orthogonal matrix, i.e. G is orthogonal iff

<math>G\, G^{\mathrm{T}} = G^{\mathrm{T}} G = I_n ,</math>

where In is the n-by-n identity matrix.
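Plane rotations are a standard family of orthogonal matrices; the following sketch checks G GT = I for one rotation, up to floating-point rounding:

```python
import math

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

t = math.pi / 3
G = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # rotation by 60 degrees

GGt = matmul(G, transpose(G))
# G G^T should equal the 2-by-2 identity matrix, up to rounding error.
assert all(abs(GGt[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```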

A square matrix whose transpose is equal to its negative is called skew-symmetric, i.e. A is skew-symmetric iff:

<math>\ A = - A^{\mathrm{T}}</math>
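Both the symmetric and skew-symmetric conditions are easy to test entrywise; a small sketch:

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

S = [[1, 7], [7, 2]]     # symmetric: equal to its transpose
K = [[0, 3], [-3, 0]]    # skew-symmetric: equal to the negative of its transpose

assert transpose(S) == S
assert transpose(K) == [[-x for x in row] for row in K]
```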

The conjugate transpose of the complex matrix A, written as A*, is obtained by taking the transpose of A and then taking the complex conjugate of each entry.
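Python's built-in complex numbers make this two-step definition direct to express (a sketch; the function name is our own):

```python
def conjugate_transpose(A):
    """Transpose A, then take the complex conjugate of each entry (A*)."""
    return [[complex(x).conjugate() for x in row] for row in zip(*A)]

A = [[1 + 2j, 3], [4j, 5 - 1j]]
assert conjugate_transpose(A) == [[1 - 2j, -4j], [3, 5 + 1j]]
```

For a matrix with real entries, the conjugation step does nothing and A* coincides with AT.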

Transpose of linear maps

If f: V→W is a linear map between vector spaces V and W with nondegenerate bilinear forms, we define the transpose of f to be the linear map tf : W→V determined by

<math>B_V(v,{}^tf(w))=B_W(f(v),w)</math>

for all v in V and w in W. Here, BV and BW are the bilinear forms on V and W respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
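Taking V = R³ and W = R² with the standard dot products as the bilinear forms (so the standard bases are orthonormal), the defining identity can be checked numerically for the transposed matrix; a sketch:

```python
M = [[1, 2, 3], [4, 5, 6]]              # matrix of f : R^3 -> R^2
Mt = [list(row) for row in zip(*M)]     # matrix of the transpose map tf

def apply(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, w):
    return sum(p * q for p, q in zip(u, w))

v = [1, -1, 2]
w = [3, 4]
# B_V(v, tf(w)) = B_W(f(v), w) with the standard dot products:
assert dot(v, apply(Mt, w)) == dot(apply(M, v), w)
```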

Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The transpose of a map between such spaces is defined similarly, and the matrix of the transpose map is given by the conjugate transpose matrix if the bases are orthonormal. In this case, the transpose is also called the Hermitian adjoint.