DTFS

Table of Contents

1. Discrete-Time Fourier Series
2. Matrix-Vector Form of DTFS

The key idea of the discrete-time Fourier series, or DTFS, is that we can break a periodic signal down into a set of fundamental frequencies.

1.1. Synthesis Equation

In general, for a \(p\)-periodic signal, we can write:

\begin{align} \boxed{x[n] = \sum_{k=0}^{p-1} X_k e^{ik\omega_0 n}} \end{align}

where \(\omega_0 = \frac{2\pi}{p}\). This equation is known as the synthesis equation for DTFS, which takes us from a set of frequencies back to the time domain.

Each \(\psi_k[n] = e^{ik\omega_0 n}\) is one of the frequencies that we are working with. Our goal now is to figure out what the coefficients \(X_k\) are, so we know how much of each frequency is present in our signal.
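As a quick numerical sketch of the synthesis equation, we can pick some coefficients \(X_k\) for \(p=4\) (the values here are made up for illustration) and sum the weighted frequencies:

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
n = np.arange(p)

# Hypothetical DTFS coefficients; X_1 and X_3 are a conjugate pair, so x[n] is real
X = np.array([1.0, 0.5, 0.0, 0.5])

# Synthesis: x[n] = sum_k X_k e^{i k w0 n}
x = sum(X[k] * np.exp(1j * k * w0 * n) for k in range(p))
```

With these coefficients, one period of the reconstructed signal is \(x[n] = 1 + \cos(\tfrac{\pi}{2}n)\), i.e. \([2, 1, 0, 1]\).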

1.2. Vector Representation

Note that we’re working in the vector space of \(p\)-periodic DT signals. This means that we can actually express our signals as vectors! For example, for \(p=4\), the signal \(x[n]\) can be represented as:

\begin{align} \mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ x[2] \\ x[3] \end{bmatrix} \notag \end{align}

Similarly, for each \(\psi_k\), we have:

\begin{align} \boldsymbol{\psi_0} = \begin{bmatrix} \psi_0[0] \\ \psi_0[1] \\ \psi_0[2] \\ \psi_0[3] \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \quad \boldsymbol{\psi_1} = \begin{bmatrix} \psi_1[0] \\ \psi_1[1] \\ \psi_1[2] \\ \psi_1[3] \end{bmatrix} = \begin{bmatrix} 1 \\ i \\ -1 \\ -i \end{bmatrix} \quad \boldsymbol{\psi_2} = \begin{bmatrix} \psi_2[0] \\ \psi_2[1] \\ \psi_2[2] \\ \psi_2[3] \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix} \quad \boldsymbol{\psi_3} = \begin{bmatrix} \psi_3[0] \\ \psi_3[1] \\ \psi_3[2] \\ \psi_3[3] \end{bmatrix} = \begin{bmatrix} 1 \\ -i \\ -1 \\ i \end{bmatrix} \notag \end{align}
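These basis vectors are easy to check numerically; a short sketch that generates the \(p=4\) vectors above:

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
n = np.arange(p)

# psi[k] is the vector (psi_k[0], ..., psi_k[p-1])
psi = [np.exp(1j * k * w0 * n) for k in range(p)]
```

For instance, `psi[1]` evaluates to \((1, i, -1, -i)\), matching the table above.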

1.3. Orthogonality

We now wish to establish the orthogonality of each of the basis functions/frequencies \(\psi_k\). More specifically, we want to show that:

\begin{align} \langle \psi_k, \psi_l \rangle = \begin{cases} p & k = l \\ 0 & k \neq l\end{cases} \end{align}

Case (i): \(k=l\)

Algebraically reducing the inner product, we find that:

\begin{align} \langle \psi_k, \psi_k \rangle &= \sum_{n=0}^{p-1} \psi_k[n]\psi_k^*[n] \notag \\ &= \sum_{n=0}^{p-1}e^{ik\omega_0 n}e^{-ik\omega_0 n} \notag \\ &= \sum_{n=0}^{p-1}1 \notag \\ &= p \notag \end{align}

Case (ii): \(k \neq l\)

We find that:

\begin{align} \langle \psi_k, \psi_l \rangle &= \sum_{n=0}^{p-1} \psi_k[n]\psi_l^*[n] \notag \\ &= \sum_{n=0}^{p-1}e^{ik\omega_0 n}e^{-il\omega_0 n} \notag \\ &= \sum_{n=0}^{p-1}\left[e^{i(k-l)\omega_0}\right]^n \notag \end{align}

By the formula for geometric series, we have:

\begin{align} \langle \psi_k, \psi_l \rangle = \frac{e^{i(k-l)\omega_0 p} - 1}{e^{i(k-l)\omega_0} - 1} \notag \end{align}

Since \(\omega_0 = \frac{2\pi}{p}\), the numerator becomes \(e^{i(k-l)2\pi} - 1 = 0\), while the denominator is nonzero because \(0 < |k-l| < p\). Therefore:

\begin{align} \langle \psi_k, \psi_l \rangle = \frac{e^{i(k-l)2\pi}-1}{e^{i(k-l)\omega_0}-1} = 0 \quad \square \notag \end{align}
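The orthogonality relation can also be verified numerically. The sketch below computes all pairwise inner products for \(p=8\), using the same convention as above (\(\langle \psi_k, \psi_l \rangle = \sum_n \psi_k[n]\psi_l^*[n]\)):

```python
import numpy as np

p = 8
w0 = 2 * np.pi / p
n = np.arange(p)

def inner(k, l):
    # <psi_k, psi_l> = sum_n psi_k[n] psi_l*[n]
    return np.sum(np.exp(1j * k * w0 * n) * np.conj(np.exp(1j * l * w0 * n)))

# Gram matrix of all pairwise inner products; should equal p * I
G = np.array([[inner(k, l) for l in range(p)] for k in range(p)])
```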

1.4. Analysis Equation

Now that we have proven that \(\psi_k\) are mutually orthogonal, consider the linear combination of vectors:

\begin{align} \mathbf{x} = X_0\boldsymbol{\psi_0} + \cdots + X_k\boldsymbol{\psi_k} + \cdots + X_{p-1}\boldsymbol{\psi_{p-1}} \notag \end{align}

To find the coefficient \(X_k\), we take the inner product of both sides with \(\boldsymbol{\psi_k}\); by orthogonality, every term drops out except one. Equivalently, \(X_k\) is the coefficient of the orthogonal projection of \(\mathbf{x}\) onto \(\boldsymbol{\psi_k}\), which yields:

\begin{align} X_k = \frac{\langle \mathbf{x}, \boldsymbol{\psi_k} \rangle}{\langle \boldsymbol{\psi_k}, \boldsymbol{\psi_k} \rangle} \notag \end{align}

By (2), \(\langle\boldsymbol{\psi_k},\boldsymbol{\psi_k}\rangle = p\), so in signal form this is:

\begin{align} X_k = \frac{1}{p}\sum_{n=0}^{p-1}x[n]\psi_k^*[n] \notag \end{align}

This gives us the analysis equation, which takes us from the time domain to the frequency domain:

\begin{align} \boxed{X_k = \frac{1}{p}\sum_{n=0}^{p-1}x[n]e^{-ik\omega_0 n}} \end{align}
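A numerical sketch of the analysis equation, applied to a made-up \(p=4\) signal. Note that this is the same sum as NumPy's FFT up to the \(\frac{1}{p}\) factor, since `np.fft.fft` uses the kernel \(e^{-2\pi i kn/N}\):

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
n = np.arange(p)

# One period of a hypothetical p-periodic signal
x = np.array([2.0, 1.0, 0.0, 1.0])

# Analysis: X_k = (1/p) sum_n x[n] e^{-i k w0 n}
X = np.array([np.sum(x * np.exp(-1j * k * w0 * n)) / p for k in range(p)])

# Agrees with the FFT up to the 1/p normalization
assert np.allclose(X, np.fft.fft(x) / p)
```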

1.5. Periodicity of the Basis Functions

Observe that the following holds:

\begin{align} \psi_k[n+p]=e^{ik\omega_0(n+p)}=e^{ik\omega_0n}e^{ik\omega_0p}=e^{ik\omega_0n}=\psi_k[n] \notag \\ \psi_{k+p}[n]=e^{i(k+p)\omega_0n}=e^{ik\omega_0n}e^{ip\omega_0n}=e^{ik\omega_0n}=\psi_k[n] \notag \end{align}

The basis functions \(\psi_k[n]\) are therefore periodic in both \(n\) and \(k\). As a result, we can write the synthesis and analysis equations in a more general form:

\begin{align} x[n] &= \sum_{k\in\langle p \rangle} X_k e^{ik\omega_0 n} \\ X_k &= \frac{1}{p}\sum_{n\in\langle p \rangle}x[n]e^{-ik\omega_0 n} \end{align}

where \(\langle p \rangle\) refers to a contiguous interval of \(p\) integers.
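Both periodicity claims can be checked numerically; a sketch with a made-up \(p=4\) signal, showing that \(X_{k+p} = X_k\) and that synthesis over any contiguous interval of \(p\) integers recovers the signal:

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
n = np.arange(p)
x = np.array([2.0, 1.0, 0.0, 1.0])  # hypothetical p-periodic signal

def Xk(k):
    # Analysis equation for a single coefficient
    return np.sum(x * np.exp(-1j * k * w0 * n)) / p

# The coefficients are p-periodic in k
assert np.allclose(Xk(1), Xk(1 + p))

# Synthesizing over k = 3, 4, 5, 6 (any window of p integers) recovers x
x_rec = sum(Xk(k) * np.exp(1j * k * w0 * n) for k in range(3, 3 + p))
assert np.allclose(x_rec, x)
```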

2. Matrix-Vector Form of DTFS

2.1. Matrix-Vector Form of Synthesis Equation

We know that we can write our signals as a linear combination of basis functions:

\begin{align} x[n] = \sum_{k=0}^{p-1}X_k\psi_k[n] \notag \end{align}

where \(\psi_k[n] = e^{ik\omega_0n}\). We can rewrite this as a product of two vectors:

\begin{align} x[n] = \begin{bmatrix}\psi_0[n] & \psi_1[n] & \cdots & \psi_{p-1}[n]\end{bmatrix}\begin{bmatrix}X_0 \\ \vdots \\ X_{p-1}\end{bmatrix} \notag \end{align}

Since \(x[n]\) is periodic, we can also represent it as a vector. Thus, we can write:

\begin{align} \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[p-1] \end{bmatrix} = \begin{bmatrix} \psi_0[0] & \cdots & \psi_{p-1}[0] \\ \psi_0[1] & \cdots & \psi_{p-1}[1] \\ \vdots & & \vdots \\ \psi_0[p-1] & \cdots & \psi_{p-1}[p-1] \end{bmatrix} \begin{bmatrix} X_0 \\ X_1 \\ \vdots \\ X_{p-1} \end{bmatrix} \notag \end{align}

So, the synthesis equation in matrix-vector form is:

\begin{align} \boxed{\mathbf{x} = \Psi \mathbf{X}} \end{align}
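In code, the matrix \(\Psi\) with entries \(\Psi_{nk} = \psi_k[n] = e^{ik\omega_0 n}\) can be built with an outer product of indices; a sketch for \(p=4\) with made-up coefficients:

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
idx = np.arange(p)

# Psi[n, k] = e^{i k w0 n}: column k is the basis vector psi_k
Psi = np.exp(1j * w0 * np.outer(idx, idx))

X = np.array([1.0, 0.5, 0.0, 0.5])  # hypothetical DTFS coefficients
x = Psi @ X                          # synthesis: x = Psi X
```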

2.2. Matrix-Vector Form of Analysis Equation

Now, we know that the following facts are true:

\begin{align} \langle \boldsymbol{\psi_k}, \boldsymbol{\psi_l} \rangle = \boldsymbol{\psi_k}^T\boldsymbol{\psi_l}^* = \begin{cases} p & k = l \\ 0 & k \neq l \end{cases} \notag \\ \langle \boldsymbol{\psi_k}, \boldsymbol{\psi_l} \rangle^* = \boldsymbol{\psi_k}^H\boldsymbol{\psi_l} = \begin{cases} p & k = l \\ 0 & k \neq l \end{cases} \notag \end{align}

where the superscript \(H\) denotes the Hermitian transpose, defined as taking both the transpose and the complex conjugate. Now, consider what happens when we take \(\Psi^H\Psi\):

\begin{align} \Psi^H\Psi = \begin{bmatrix} \boldsymbol{\psi_0}^H \\ \boldsymbol{\psi_1}^H \\ \vdots \\ \boldsymbol{\psi_{p-1}}^H \end{bmatrix} \begin{bmatrix} \boldsymbol{\psi_0} & \boldsymbol{\psi_1} & \cdots & \boldsymbol{\psi_{p-1}} \end{bmatrix} = \begin{bmatrix} \boldsymbol{\psi_0}^H\boldsymbol{\psi_0} & \cdots & \boldsymbol{\psi_0}^H\boldsymbol{\psi_{p-1}} \\ \boldsymbol{\psi_1}^H\boldsymbol{\psi_0} & \cdots & \boldsymbol{\psi_1}^H\boldsymbol{\psi_{p-1}} \\ \vdots & & \vdots \\ \boldsymbol{\psi_{p-1}}^H\boldsymbol{\psi_0} & \cdots & \boldsymbol{\psi_{p-1}}^H\boldsymbol{\psi_{p-1}} \\ \end{bmatrix} \notag \end{align}

Since \(\boldsymbol{\psi_k}^H\boldsymbol{\psi_l} = 0\) if \(k \neq l\), and \(p\) otherwise, we get:

\begin{align} \Psi^H\Psi &= pI \notag \\ \frac{1}{p}\Psi^H\Psi &= I \notag \end{align}

Thus, the inverse of \(\Psi\) is:

\begin{align} \Psi^{-1} = \frac{1}{p} \Psi^H \end{align}

Multiplying the synthesis equation by \(\Psi^{-1}\) then gives us the analysis equation in matrix-vector form:

\begin{align} \boxed{\mathbf{X} = \frac{1}{p}\Psi^H\mathbf{x}} \end{align}
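Putting the matrix-vector forms together, a round-trip sketch for \(p=4\) with a made-up signal: analysis via \(\frac{1}{p}\Psi^H\), then synthesis via \(\Psi\) to recover the signal.

```python
import numpy as np

p = 4
w0 = 2 * np.pi / p
idx = np.arange(p)
Psi = np.exp(1j * w0 * np.outer(idx, idx))  # Psi[n, k] = e^{i k w0 n}

x = np.array([2.0, 1.0, 0.0, 1.0])  # hypothetical p-periodic signal

# Analysis: X = (1/p) Psi^H x, using Psi^{-1} = (1/p) Psi^H
X = Psi.conj().T @ x / p

# Synthesis recovers the original signal
assert np.allclose(Psi @ X, x)
```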
Last modified: 2026-02-18 21:00