Discrete Fourier transform


In mathematics, the discrete Fourier transform (DFT), sometimes called the finite Fourier transform, is a Fourier transform widely employed in signal processing and related fields to analyze the frequencies contained in a sampled signal, to solve partial differential equations, and to perform other operations such as convolutions. The DFT can be computed efficiently in practice using a fast Fourier transform (FFT) algorithm.

The sequence of N complex numbers x0, ..., xN−1 is transformed into the sequence of N complex numbers X0, ..., XN−1 by the DFT according to the formula:

<math>X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2 \pi i}{N} k n} \quad \quad k = 0, \dots, N-1</math>

where e is the base of the natural logarithm, i is the imaginary unit (<math>i^2=-1</math>), and π is pi. The transform is sometimes denoted by the symbol <math>\mathcal{F}</math>, as in <math>\mathbf{X} = \mathcal{F}(\mathbf{x})</math> or <math>\mathcal{F} \mathbf{x}</math>.

The inverse discrete Fourier transform (IDFT) is given by

<math>x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{\frac{2\pi i}{N} k n} \quad \quad n = 0,\dots,N-1.</math>

Note that the normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and the signs of the exponents are merely conventions, and differ in some treatments. The only requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be 1/N. A normalization of <math>1/\sqrt{N}</math> for both the DFT and IDFT makes the transforms unitary, which has some theoretical advantages, but it is often more practical in numerical computation to perform the scaling all at once as above (and a unit scaling can be convenient in other ways).

(The convention of a negative sign in the exponent is often convenient because it means that <math>X_k</math> is the amplitude of a "positive frequency" <math>2\pi k/N</math>. Equivalently, the DFT is often thought of as a matched filter: when looking for a frequency of +1, one correlates the incoming signal with a frequency of −1.)
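
To make these conventions concrete, the following minimal NumPy sketch (the function names dft and idft are ours, chosen for illustration) implements the two formulas directly and checks them against numpy.fft, which uses the same sign and scaling convention:

  import numpy as np

  def dft(x):
      """Naive O(N^2) DFT following the definition above (negative exponent, no scaling)."""
      N = len(x)
      n = np.arange(N)
      k = n.reshape((N, 1))
      return np.exp(-2j * np.pi * k * n / N) @ x

  def idft(X):
      """Naive inverse DFT (positive exponent, 1/N scaling)."""
      N = len(X)
      n = np.arange(N)
      k = n.reshape((N, 1))
      return (np.exp(2j * np.pi * k * n / N) @ X) / N

  x = np.random.randn(8) + 1j * np.random.randn(8)
  assert np.allclose(dft(x), np.fft.fft(x))      # same convention as numpy.fft
  assert np.allclose(idft(dft(x)), x)            # round trip recovers x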

In the following discussion the terms "sequence" and "vector" will be considered interchangeable.

Properties

Completeness

The discrete Fourier transform is an invertible, linear transformation

<math>\mathcal{F}:\mathbf{C}^N \to \mathbf{C}^N</math>

with C denoting the set of complex numbers. In other words, for any N > 0, an N-dimensional complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors.

Orthogonality

The vectors <math>u_k</math> with components <math>e^{ \frac{2\pi i}{N} kn}</math>, <math>n = 0, \dots, N-1</math>, form an orthogonal basis for the set of N-dimensional complex vectors:

<math>\sum_{n=0}^{N-1} \left(e^{ \frac{2\pi i}{N} kn}\right) \left(e^{-\frac{2\pi i}{N} k'n}\right) = N~\delta_{kk'}</math>

where δkk′ is the Kronecker delta.

The Plancherel theorem and Parseval's theorem

If Xk and Yk are the DFTs of xn and yn respectively then we have the Plancherel theorem:

<math>\sum_{n=0}^{N-1} x_n y^*_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k Y^*_k</math>

where the star denotes complex conjugation. Parseval's theorem is a special case of the Plancherel theorem and states:

<math>\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2.</math>
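
Both identities are easy to verify numerically under the convention used here (an unscaled forward transform and a 1/N inverse); a small sketch with arbitrary test vectors:

  import numpy as np

  N = 16
  x = np.random.randn(N) + 1j * np.random.randn(N)
  y = np.random.randn(N) + 1j * np.random.randn(N)
  X, Y = np.fft.fft(x), np.fft.fft(y)

  # Plancherel: sum x_n y_n* == (1/N) sum X_k Y_k*
  assert np.allclose(np.sum(x * np.conj(y)), np.sum(X * np.conj(Y)) / N)
  # Parseval (special case y = x): sum |x_n|^2 == (1/N) sum |X_k|^2
  assert np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)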

The shift theorem

Multiplying <math>x_n</math> by a linear phase <math>\exp(2\pi i n m/N)</math> for some integer <math>m</math> corresponds to a circular shift of the output <math>X_k</math>: <math>X_k</math> is replaced by <math>X_{k-m}</math>, where the subscript is interpreted modulo <math>N</math> (i.e. periodically). Similarly, a circular shift of the input <math>x_n</math> corresponds to multiplying the output <math>X_k</math> by a linear phase. Mathematically, if <math>\{x_n\}</math> represents the vector x then

if <math>\mathcal{F}(\{x_n\})_k=X_k</math>
then <math>\mathcal{F}(\{ x_n e^{\frac{2\pi i}{N}n m} \})_k=X_{k-m}</math>
and <math>\mathcal{F}(\{x_{n-m}\})_k=X_k e^{-\frac{2\pi i}{N}k m}</math>
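
A brief numerical illustration of both directions of the shift theorem (a sketch; np.roll performs the circular shift, with indices interpreted modulo N):

  import numpy as np

  N, m = 16, 3
  x = np.random.randn(N) + 1j * np.random.randn(N)
  n = np.arange(N)
  k = np.arange(N)

  # Linear phase on the input <-> circular shift of the output: F{x_n e^{2 pi i n m/N}}_k = X_{k-m}
  lhs = np.fft.fft(x * np.exp(2j * np.pi * n * m / N))
  rhs = np.roll(np.fft.fft(x), m)          # X_{k-m}, indices modulo N
  assert np.allclose(lhs, rhs)

  # Circular shift of the input <-> linear phase on the output: F{x_{n-m}}_k = X_k e^{-2 pi i k m/N}
  lhs2 = np.fft.fft(np.roll(x, m))         # x_{n-m}
  rhs2 = np.fft.fft(x) * np.exp(-2j * np.pi * k * m / N)
  assert np.allclose(lhs2, rhs2)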

Periodicity

As shown in the Discrete-time Fourier transform (DTFT) article, the Fourier transform of a discrete-time sequence is periodic. A finite-length sequence is just a special case: an infinite sequence of zeros containing a region (a window) in which non-zero values may occur. So <math>X(\omega)\,</math>, the DTFT of the finite sequence <math>x[n]\,</math>, is periodic. Not surprisingly, the DFT is also periodic: <math>X[k+N] = X[k]\,</math>. Less obvious, perhaps, is that the inverse DFT is periodic as well: <math>x[n+N] = x[n]\,</math>; it is a periodically extended version of the finite sequence.

The DTFT of the periodically extended sequence is zero-valued except at the discrete set of frequencies sampled by the DFT; that is, it is effectively identical to the DFT. The DTFT of the finite sequence has other non-zero values, but it is still identical to the DFT at the frequencies sampled by the DFT. So the approximation error of <math>X[k]\,</math>, as an approximation to <math>X(\omega)\,</math>, lies in the missing non-zero values, not in the <math>X[k]\,</math> coefficients. In terms of the inverse DFT, that approximation error becomes the periodic extension of the finite sequence.

  • Commonly, <math>x[n]\,</math> is a modification of a longer, perhaps infinite, sequence, whose DTFT is only approximated by <math>X(\omega)\,</math>. In that case, of course, <math>X[k]\,</math> too is only an approximation to [samples of] the original DTFT.
  • The shift theorem, above, is also an expression of the implicit periodicity of the inverse DFT, because it shows that the DFT amplitudes <math>|X[k]|\,</math> are unaffected by a circular (periodic) shift of the inputs, which is simply a choice of origin and therefore only affects the phase. This implicit periodicity plays an important role in many applications of the DFT; for example, when solving differential equations it allows periodic boundary conditions to be satisfied automatically, which can be a useful property. See also the applications section below.

Aliasing

Clearly a discrete-time sequence cannot preserve as much detail as a continuous-time function. The frequency-domain manifestation of that fact is the periodicity of <math>X(\omega)\,</math> and <math>X[k]\,</math>, in contrast to the continuous-time Fourier transform, which assigns a unique value to every frequency. The fact that a particular frequency component appears periodically at <math>k\,</math>, <math>k\pm N</math>, <math>k\pm 2N</math>, etc. only tells us the possible frequencies of the original source. Usually only one of them is the original, and the rest are appropriately called aliases. Collateral information is generally needed to resolve the ambiguity (analogous to interpreting the two roots of a quadratic equation). An example of collateral information is that the <math>x[n]\,</math> sequence represents the digitized output of a lowpass anti-aliasing filter.

A time-domain representation of the frequency components listed above is:

<math>x[n] = e^{i \frac{2\pi}{N}(k + L\cdot N)n}, \quad L = 0, \pm 1, \pm 2, \dots</math>

All values of <math>L\,</math> produce the same <math>x[n]\,</math> sequence. It is impossible to determine just from the sequence what the original <math>L\,</math>-value was. So the DFT reveals them all, just as the quadratic formula reveals the ambiguous roots of an equation.
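
Numerically, this ambiguity is easy to exhibit: the sampled exponentials for <math>k\,</math> and <math>k + L\cdot N</math> are identical arrays for every integer <math>L\,</math> (a small sketch):

  import numpy as np

  N, k = 8, 3
  n = np.arange(N)
  # The discrete-time samples are identical for every alias k + L*N of the frequency:
  for L in (0, 1, -1, 2):
      assert np.allclose(np.exp(2j * np.pi * (k + L * N) * n / N),
                         np.exp(2j * np.pi * k * n / N))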

Circular convolution theorem and cross-correlation theorem

The cyclic convolution x*y of two vectors x = {xn} and y = {yn} is the vector with components

<math>(\mathbf{x*y})_n = \sum_{m=0}^{N-1} x_m y_{n-m} \quad \quad n = 0,\dots,N-1</math>

where we continue y cyclically so that

<math>y_{-m} = y_{N-m} \quad\quad m = 1, \dots, N-1.</math>

The discrete Fourier transform turns cyclic convolutions into component-wise multiplication. That is, if <math>z_n = (\mathbf{x*y})_n</math> then

<math>Z_k = X_k Y_k \quad \quad k = 0,\dots,N-1</math>

where capital letters (X, Y, Z) represent the DFTs of sequences represented by small letters (x, y, z). Note that if a different normalization convention is adopted for the DFT (e.g., the unitary normalization), then there will in general be a constant factor multiplying the above relation.

The direct evaluation of the convolution summation, above, would require <math>O(N^2)</math> operations, but the DFT (via an FFT) provides an <math>O(N\log N)</math> method to compute the same thing. Conversely, convolutions can be used to efficiently compute DFTs via Rader's FFT algorithm and Bluestein's FFT algorithm.
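
As an illustration, the following sketch (the helper names are ours) compares the direct <math>O(N^2)</math> evaluation of the cyclic convolution with the <math>O(N \log N)</math> route through the DFT:

  import numpy as np

  def cyclic_convolve_direct(x, y):
      """Direct O(N^2) evaluation of the cyclic convolution defined above."""
      N = len(x)
      return np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

  def cyclic_convolve_fft(x, y):
      """Same result in O(N log N) via the circular convolution theorem."""
      return np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))

  x = np.random.randn(8)
  y = np.random.randn(8)
  assert np.allclose(cyclic_convolve_direct(x, y), cyclic_convolve_fft(x, y))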

See also: Convolution theorem

In an analogous manner, it can be shown that if <math>z_n</math> is the cross-correlation of <math>x_n</math> and <math>y_n</math>:

<math>z_n=(\mathbf{x\star y})_n = \sum_{m=0}^{N-1}x_m^*\,y_{m+n}</math>

where the sum is again cyclic in m, then the discrete Fourier transform of <math>z_n</math> is:

<math>Z_k = X_k^*\,Y_k</math>

where capital letters are again used to signify the discrete Fourier transform.
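
A corresponding sketch for the cross-correlation theorem (the helper name is ours), checking that the DFT of the cyclic cross-correlation equals <math>X_k^* Y_k</math>:

  import numpy as np

  def cyclic_cross_correlate_direct(x, y):
      """Direct evaluation of z_n = sum_m x_m^* y_{m+n}, with indices taken modulo N."""
      N = len(x)
      return np.array([sum(np.conj(x[m]) * y[(m + n) % N] for m in range(N)) for n in range(N)])

  x = np.random.randn(8) + 1j * np.random.randn(8)
  y = np.random.randn(8) + 1j * np.random.randn(8)
  Z = np.conj(np.fft.fft(x)) * np.fft.fft(y)               # Z_k = X_k^* Y_k
  assert np.allclose(np.fft.fft(cyclic_cross_correlate_direct(x, y)), Z)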

Relationship to trigonometric interpolation polynomials

The function

<math>p(t) = \frac{X_0}{N} + \frac{X_1}{N} e^{it} + \frac{X_2}{N} e^{2it} + \cdots + \frac{X_{N-1}}{N} e^{(N-1)it}</math>

whose coefficients Xk /N are given by the DFT of xn, above, is called the trigonometric interpolation polynomial of degree N − 1. It is the unique function of this form that satisfies the property p(2πn/N) = xn for n = 0, ..., N − 1.

Because of aliasing, however, the form of the trigonometric interpolation polynomial is not unique, in that any of the frequencies can be shifted by any multiple of N while maintaining the property p(2πn/N) = xn . In particular, the following form is often preferred:

<math>p(t) = \frac{X_0}{N} + \frac{X_1}{N} e^{it} + \cdots + \frac{X_{N/2}}{N} \cos(Nt/2) + \frac{X_{N/2+1}}{N} e^{(-N/2+1)it} + \cdots + \frac{X_{N-1}}{N} e^{-it}</math>

for even <math>N</math> (where the Nyquist amplitude <math>X_{N/2}</math> should be handled specially) or, for odd <math>N</math>:

<math>p(t) = \frac{X_0}{N} + \frac{X_1}{N} e^{it} + \cdots + \frac{X_{\lfloor N/2 \rfloor}}{N} e^{\lfloor N/2 \rfloor it} + \frac{X_{\lfloor N/2 \rfloor+1}}{N} e^{(-\lceil N/2 \rceil+1)it} + \cdots + \frac{X_{N-1}}{N} e^{-it}</math>

These latter two forms have the useful property that, if the <math>x_n</math> are all real numbers, then <math>p(t)</math> will be real for all <math>t</math> as well. They also use the smallest possible frequencies of the interpolating sinusoids (a balance of positive and negative frequencies instead of all positive frequencies), and consequently minimize the mean-square slope <math>\int |p'(t)|^2 dt</math> of the interpolated function.
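
As a sketch (for odd <math>N</math>, so that no special Nyquist term is needed; the function name trig_interp is ours), the balanced-frequency interpolant can be evaluated by mapping the DFT indices above <math>N/2</math> to negative frequencies; at the sample points it reproduces <math>x_n</math>, and for real input it is real everywhere:

  import numpy as np

  def trig_interp(x, t):
      """Evaluate the balanced-frequency trigonometric interpolant p(t) of the samples x
      (sketch for odd N, so no special Nyquist term is needed)."""
      N = len(x)
      X = np.fft.fft(x)
      k = np.arange(N)
      k = np.where(k <= N // 2, k, k - N)           # map k > N/2 to negative frequencies
      return np.sum(X * np.exp(1j * np.outer(t, k)), axis=1) / N

  N = 7                                             # odd, for simplicity
  x = np.random.randn(N)
  n = np.arange(N)
  p = trig_interp(x, 2 * np.pi * n / N)
  assert np.allclose(p.real, x)                     # interpolates the samples
  assert np.allclose(p.imag, 0, atol=1e-10)         # real output for real input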

The unitary DFT

Another way of looking at the DFT is to note that, in the above discussion, the DFT can be expressed as multiplication by a Vandermonde matrix:

<math>\mathbf{F} =
\begin{bmatrix}
\omega_N^{0 \cdot 0}     & \omega_N^{0 \cdot 1}     & \ldots & \omega_N^{0 \cdot (N-1)}     \\
\omega_N^{1 \cdot 0}     & \omega_N^{1 \cdot 1}     & \ldots & \omega_N^{1 \cdot (N-1)}     \\
\vdots                   & \vdots                   & \ddots & \vdots                       \\
\omega_N^{(N-1) \cdot 0} & \omega_N^{(N-1) \cdot 1} & \ldots & \omega_N^{(N-1) \cdot (N-1)} \\
\end{bmatrix}</math>

where

<math>\omega_N = e^{-2 \pi i/N}\,</math>

is a primitive Nth root of unity. The inverse transform is then given by the inverse of the above matrix:

<math>\mathbf{F}^{-1}=\frac{1}{N}\mathbf{F}^*</math>

With unitary normalization constants <math>1/\sqrt{N}</math>, the DFT becomes a unitary transformation, defined by a unitary matrix:

<math>\mathbf{U}=\mathbf{F}/\sqrt{N}</math>
<math>\mathbf{U}^{-1}=\mathbf{U}^*</math>
<math>|\det(\mathbf{U})|=1</math>

where det()  is the determinant function. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.

The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):

<math>\sum_{m=0}^{N-1}U_{km}U_{mn}^*=\delta_{kn}</math>

If <math>\mathbf{X}</math> is defined as the unitary DFT of the vector <math>\mathbf{x}</math> then

<math>X_k=\sum_{n=0}^{N-1} U_{kn}x_n</math>

and the Plancherel theorem is expressed as:

<math>\sum_{n=0}^{N-1}x_n y_n^* = \sum_{k=0}^{N-1}X_k Y_k^*</math>

If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case <math>\mathbf{x} = \mathbf{y}</math>, this implies that the length of a vector is preserved as well—this is just Parseval's theorem:

<math>\sum_{n=0}^{N-1}|x_n|^2 = \sum_{k=0}^{N-1}|X_k|^2</math>
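
A short numerical check of these matrix relations (constructing <math>\mathbf{F}</math> explicitly, which is practical only for small <math>N</math>):

  import numpy as np

  N = 8
  n = np.arange(N)
  F = np.exp(-2j * np.pi * np.outer(n, n) / N)      # the DFT matrix defined above
  U = F / np.sqrt(N)                                # unitary normalization

  assert np.allclose(np.linalg.inv(F), np.conj(F) / N)          # F^{-1} = F^*/N
  assert np.allclose(U @ np.conj(U.T), np.eye(N))               # U is unitary
  assert np.isclose(abs(np.linalg.det(U)), 1.0)                 # |det(U)| = 1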

Expressing the inverse DFT in terms of the DFT

A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)

First, we can compute the inverse DFT by reversing the inputs:

<math>\mathcal{F}^{-1}(\{x_n\}) = \mathcal{F}(\{x_{N - n}\}) / N</math>

(As usual, the subscripts are interpreted modulo <math>N</math>; thus, for <math>n=0</math>, we have <math>x_{N-0}=x_N=x_0</math>.)

Second, one can also conjugate the inputs and outputs:

<math>\mathcal{F}^{-1}(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*)^* / N</math>

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define swap(<math>x_n</math>) as <math>x_n</math> with its real and imaginary parts swapped—that is, if <math>x_n = a + b i</math> then swap(<math>x_n</math>) is <math>b + a i</math>. Equivalently, swap(<math>x_n</math>) equals <math>i x_n^*</math>. Then

<math>\mathcal{F}^{-1}(\mathbf{x}) = \textrm{swap}(\mathcal{F}(\textrm{swap}(\mathbf{x}))) / N</math>

That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).
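
All three tricks are easy to verify against a library inverse transform; a sketch using numpy.fft (whose forward transform follows the same convention as this article):

  import numpy as np

  x = np.random.randn(8) + 1j * np.random.randn(8)
  N = len(x)
  ifft_ref = np.fft.ifft(x)

  # Trick 1: reverse the inputs (indices modulo N), forward-transform, and scale.
  reversed_x = x[(N - np.arange(N)) % N]
  assert np.allclose(np.fft.fft(reversed_x) / N, ifft_ref)

  # Trick 2: conjugate inputs and outputs.
  assert np.allclose(np.conj(np.fft.fft(np.conj(x))) / N, ifft_ref)

  # Trick 3: swap real and imaginary parts on input and output (swap(z) = i * conj(z)).
  swap = lambda z: z.imag + 1j * z.real
  assert np.allclose(swap(np.fft.fft(swap(x))) / N, ifft_ref)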

The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory—that is, which is its own inverse. In particular, <math>T(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*) / \sqrt{N}</math> is clearly its own inverse: <math>T(T(\mathbf{x})) = \mathbf{x}</math>. A closely related involutory transformation (by a factor of (1+i)/√2) is <math>H(\mathbf{x}) = \mathcal{F}((1+i) \mathbf{x}^*) / \sqrt{2N}</math>, since the <math>(1+i)</math> factors in <math>H(H(\mathbf{x}))</math> cancel the 2. For real inputs <math>\mathbf{x}</math>, the real part of <math>H(\mathbf{x})</math> is none other than the discrete Hartley transform, which is also involutory.

The real DFT

If <math>x_0, \ldots, x_{N-1}</math> are real numbers, as they often are in practical applications, then the DFT obeys the symmetry:

<math>X_k = X_{N-k}^* ,</math>

where the star denotes complex conjugation and the subscripts are interpreted modulo N.

Therefore, the DFT output for real inputs is half redundant, and one obtains the complete information by only looking at roughly half of the outputs <math>X_0, \ldots, X_{N-1}</math>. In this case, the "DC" element <math>X_0</math> is purely real, and for even N the "Nyquist" element <math>X_{N/2}</math> is also real, so there are exactly N non-redundant real numbers in the first half + Nyquist element of the complex output X.

Using Euler's formula, the interpolating trigonometric polynomial can then be interpreted as a sum of sine and cosine functions.
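
A small sketch of this symmetry, also showing that NumPy's rfft (one common "real DFT" routine) returns exactly the <math>N/2+1</math> non-redundant outputs for even <math>N</math>:

  import numpy as np

  N = 8
  x = np.random.randn(N)                 # real input
  X = np.fft.fft(x)

  # Hermitian symmetry X_k = X_{N-k}^* (subscripts modulo N)
  assert np.allclose(X, np.conj(X[(N - np.arange(N)) % N]))
  assert np.isclose(X[0].imag, 0) and np.isclose(X[N // 2].imag, 0)   # DC and Nyquist are real

  # numpy's rfft returns only the N/2 + 1 non-redundant outputs
  assert np.allclose(np.fft.rfft(x), X[:N // 2 + 1])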

Generalized DFT

It is possible to shift the transform sampling in time and/or frequency domain by some real shifts a and b, respectively. This is sometimes known as a generalized DFT (or GDFT) and has analogous properties to the ordinary DFT:

<math>X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2 \pi i}{N} (k+b) (n+a)} \quad \quad k = 0, \dots, N-1</math>

Most often, shifts of <math>1/2</math> (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, <math>a=1/2</math> produces a signal that is anti-periodic in frequency domain (<math>X_{k+N} = - X_k</math>) and vice-versa for <math>b=1/2</math>. Thus, the specific case of <math>a = b = 1/2</math> is known as an odd-time odd-frequency discrete Fourier transform (or O2 DFT). Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.

The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane.

Multidimensional DFT

The ordinary DFT computes the transform of a "one-dimensional" dataset: a sequence (or array) <math>x_n</math> that is a function of one discrete variable <math>n</math>. More generally, one can define the multidimensional DFT of a multidimensional array <math>x_{n_1, n_2, \cdots, n_d}</math> that is a function of <math>d</math> discrete variables <math>n_\ell = 0, 1, \cdots, N_\ell-1</math> for <math>\ell</math> in <math>1, 2, \cdots, d</math>:

<math>X_{k_1, k_2, \cdots, k_d} = \sum_{n_1=0}^{N_1-1} \omega_{N_1}^{~k_1 n_1} \cdots \sum_{n_d=0}^{N_d-1} \omega_{N_d}^{~k_d n_d} x_{n_1, n_2, \cdots, n_d} \, , </math>

where <math>\omega_{N_\ell} = \exp(-2\pi i/N_\ell)</math> as above and the <math>d</math> output indices run from <math>k_\ell = 0, 1, \cdots, N_\ell-1</math>. This is more compactly expressed in vector notation, where <math>\mathbf{n} \equiv (n_1, n_2, \cdots, n_d)</math> and <math>\mathbf{k} \equiv (k_1, k_2, \cdots, k_d)</math> are <math>d</math>-dimensional vectors of indices from 0 to <math>\mathbf{N} - 1 \equiv (N_1 - 1, N_2 - 1, \cdots, N_d - 1)</math>:

<math>X_\mathbf{k} = \sum_{\mathbf{n}=0}^{\mathbf{N}-1} e^{-2\pi i \mathbf{k} \cdot (\mathbf{n} / \mathbf{N})} x_\mathbf{n} \, ,</math>

where the division <math>\mathbf{n} / \mathbf{N} \equiv (n_1/N_1, \cdots, n_d/N_d)</math> is performed element-wise, and the sum denotes the set of nested summations above.

The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by:

<math>x_\mathbf{n} = \frac{1}{\prod_{\ell=1}^d N_\ell} \sum_{\mathbf{k}=0}^{\mathbf{N}-1} e^{2\pi i \mathbf{n} \cdot (\mathbf{k} / \mathbf{N})} X_\mathbf{k} \, .</math>

The multidimensional DFT has a simple interpretation. Just as the one-dimensional DFT expresses the input <math>x_n</math> as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or sinusoids oscillating along the direction <math>\mathbf{k} / \mathbf{N}</math> in space and having amplitude <math>X_\mathbf{k}</math>. Such a decomposition is of great importance for everything from digital image processing (<math>d</math>=2) to solving partial differential equations in three dimensions (<math>d</math>=3) by breaking the solution up into plane waves.

Computationally, the multidimensional DFT is simply the composition of a sequence of one-dimensional DFTs along each dimension. For example, in the two-dimensional case <math>x_{n_1,n_2}</math> one can first compute the <math>N_1</math> independent DFTs of the rows (i.e., along <math>n_2</math>) to form a new array <math>y_{n_1,k_2}</math>, and then compute the <math>N_2</math> independent DFTs of <math>y</math> along the columns (along <math>n_1</math>) to form the final result <math>X_{k_1,k_2}</math>. Or, one can transform the columns and then the rows—the order is immaterial because the nested summations above commute.

Because of this, given a way to compute a one-dimensional DFT (e.g. an ordinary one-dimensional FFT algorithm), one immediately has a way to efficiently compute the multidimensional DFT. This is known as a row-column algorithm, although there are also intrinsically multi-dimensional FFT algorithms.
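
A minimal sketch of the row-column algorithm in two dimensions, checked against a library 2-D transform:

  import numpy as np

  x = np.random.randn(4, 6) + 1j * np.random.randn(4, 6)

  # Row-column algorithm: 1-D DFTs along one axis, then along the other.
  step1 = np.fft.fft(x, axis=1)          # transform each row
  step2 = np.fft.fft(step1, axis=0)      # then each column
  assert np.allclose(step2, np.fft.fft2(x))

  # The order is immaterial:
  assert np.allclose(np.fft.fft(np.fft.fft(x, axis=0), axis=1), np.fft.fft2(x))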

Applications

The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform.

Spectral analysis

When the DFT is used for spectral analysis, the <math>\{x_n\}\,</math> sequence usually represents a finite set of uniformly spaced time-samples of some signal <math>x(t)\,</math>, where t represents time. The conversion from continuous time to samples (discrete time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample rate (see Nyquist frequency) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (also known as resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and the time to process it) exceed the amount needed to attain the desired clarity, a standard technique is to perform multiple DFTs. If the desired result is a power spectrum, averaging the magnitude components of the multiple DFTs is often an effective use of the extra data; this technique is commonly referred to as Welch's method.

A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. That procedure is illustrated in the discrete-time Fourier transform article.

  • The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT.
  • As already noted, leakage imposes a limit on the inherent resolution of the DTFT. So there is a practical limit to the benefit that can be obtained from a fine-grained DFT.

Data compression

The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform).

Partial differential equations

Discrete Fourier transforms, especially in more than one dimension, are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite N). The reason is that it expands the signal in complex exponentials einx, which are eigenfunctions of differentiation: d/dx einx = in einx. Thus, in the Fourier representation, a linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.
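
As a toy example of the idea (not a full solver), the derivative of a smooth periodic function can be computed by multiplying its DFT coefficients by <math>ik</math> and transforming back:

  import numpy as np

  N = 64
  x = 2 * np.pi * np.arange(N) / N                  # periodic grid on [0, 2*pi)
  u = np.sin(3 * x) + 0.5 * np.cos(5 * x)           # a smooth periodic test function
  du_exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)

  k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers ..., -2, -1, 0, 1, 2, ...
  du_spectral = np.fft.ifft(1j * k * np.fft.fft(u)).real
  assert np.allclose(du_spectral, du_exact)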

Multiplication of large integers

The fastest known algorithms for the multiplication of large integers or polynomials are based on the discrete Fourier transform: the sequences of digits or coefficients are interpreted as vectors whose convolution needs to be computed; in order to do this, they are first Fourier-transformed, then multiplied component-wise, then transformed back.

Outline of DFT polynomial multiplication algorithm

Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d > deg(a(x)) + deg(b(x)). Then,

<math>\mathbf{c} = \mathbf{a} * \mathbf{b}</math>

where c is the vector of coefficients for c(x), and the convolution operator <math>*\,</math> is defined so

<math>c_n = \sum_{m=0}^{d-1}a_m b_{n-m\ \mathrm{mod}\ d} \qquad\qquad\qquad n=0,1,...,d-1</math>

But convolution becomes multiplication under the DFT:

<math>\mathcal{F}(\mathbf{c}) = \mathcal{F}(\mathbf{a})\mathcal{F}(\mathbf{b})</math>

Here the vector product is taken elementwise. Thus the coefficients of the product polynomial c(x) are just the terms 0, ..., deg(a(x)) + deg(b(x)) of the coefficient vector

<math>\mathbf{c} = \mathcal{F}^{-1}(\mathcal{F}(\mathbf{a})\mathcal{F}(\mathbf{b}))</math>

With a fast Fourier transform, the resulting algorithm takes O(N log N) arithmetic operations. Due to its simplicity and speed, the Cooley-Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).
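
A sketch of the whole procedure (the helper name poly_multiply_fft is ours), checked against a direct linear convolution:

  import numpy as np

  def poly_multiply_fft(a, b):
      """Multiply polynomials given by coefficient lists (constant term first) via the DFT.
      Zero-pad to length d > deg(a) + deg(b) so the cyclic convolution equals the linear one."""
      d = len(a) + len(b) - 1
      A = np.fft.fft(a, n=d)                         # fft(..., n=d) zero-pads to length d
      B = np.fft.fft(b, n=d)
      return np.fft.ifft(A * B).real                 # real inputs -> real coefficients

  a = [1, 2, 3]            # 1 + 2x + 3x^2
  b = [4, 5]               # 4 + 5x
  c = poly_multiply_fft(a, b)
  assert np.allclose(c, np.convolve(a, b))           # 4 + 13x + 22x^2 + 15x^3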

Some discrete Fourier transform pairs

Some DFT pairs
<math>x_n\equiv\frac{1}{N}\sum_{k=0}^{N-1}X_k \cdot e^{i 2 \pi kn/N}</math>   |   <math>X_k\equiv\sum_{n=0}^{N-1}x_n \cdot e^{-i 2 \pi kn/N}</math>   |   Note
<math>x_n \cdot e^{i 2 \pi n m/N}\,</math>   |   <math>X_{k-m}\,</math>   |   Shift theorem
<math>x_{n-m}\,</math>   |   <math>X_k \cdot e^{-i 2 \pi k m/N}</math>   |
<math>x_n \in \mathbf{R}</math>   |   <math>X_k=X_{N-k}^*\,</math>   |   Real DFT
<math>a^n\,</math>   |   <math>\frac{1-a^N}{1-a \cdot e^{-i 2 \pi k/N}}</math>   |
<math>{N-1 \choose n}\,</math>   |   <math>\left(1+e^{-i 2 \pi k/N} \right)^{N-1}\,</math>   |

See also

  • Derivation of the discrete Fourier transform: the DFT can be derived as the continuous Fourier transform of infinite periodic sequences of impulses.

References

  • {{cite book
| last = Brigham | first = E. Oran
| title=The fast Fourier transform and its applications
| location = Englewood Cliffs, N.J.
| publisher = Prentice Hall
| year=1988
| id=ISBN 0133075052
}}
  • {{cite book
| author = Oppenheim, Alan V.; Schafer, R. W.; and Buck, J. R.
| title = Discrete-time signal processing
| location = Upper Saddle River, N.J.
| publisher = Prentice Hall
| year = 1999
| id = ISBN 0137549202
}}
  • {{cite book
| last = Smith | first = Steven W.
| url = http://www.dspguide.com/pdfbook.htm
| title = The Scientist and Engineer's Guide to Digital Signal Processing
| location = San Diego, Calif.
| publisher = California Technical Publishing
| year=1997
| id=ISBN 0966017633
}}
  • {{cite book
| first = Thomas H. | last = Cormen | authorlink = Thomas H. Cormen
| coauthors = Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
| year = 2001
| title = Introduction to Algorithms
| edition = Second Edition
| publisher = MIT Press and McGraw-Hill
| id = ISBN 0262032937
| chapter = Chapter 30: Polynomials and the FFT
| pages = 822–848
}} esp. section 30.2: The DFT and FFT, pp.830–838.
  • {{cite journal
| author = P. Duhamel, B. Piron, and J. M. Etcheto
| title = On computing the inverse DFT
| journal = IEEE Trans. Acoust., Speech and Sig. Processing
| volume = 36 | issue = 2 | pages = 285–286 | year = 1988
}}