Adjacency Matrices

Table of Contents

1. PageRank
2. Stochastic Matrices
3. Left Eigenvectors
4. State-Space Models

PageRank is an algorithm used by Google Search to rank webpages in search engine results. The idea is that we can represent webpages and the links between them in a graph:

We can define an adjacency matrix where the rows are sink nodes and the columns are the source nodes:

\begin{align} A = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ \end{bmatrix} \notag \end{align}

However, we wish to normalize the votes so that each source node's votes sum to one, dividing each column of \(A\) by the number of outgoing links of the corresponding source node:

This gives us the connectivity matrix:

\begin{align} C = \begin{bmatrix} \frac{1}{4} & \frac{1}{3} & 0 & 0 \\ \frac{1}{4} & 0 & 0 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{3} & 0 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{3} & 1 & 0 \\ \end{bmatrix} \notag \end{align}
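Numerically, \(C\) can be built from \(A\) by dividing each column by its sum (a sketch in NumPy, using the matrices from the example above):

```python
import numpy as np

# Adjacency matrix: rows are sink nodes, columns are source nodes.
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)

# Normalize each column by the source node's total number of outgoing
# links (its column sum) to obtain the connectivity matrix C.
C = A / A.sum(axis=0)
```

Each column of `C` now sums to one, matching the matrix written out above.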

However, not all websites are created equal. Each website should also be weighted differently, with websites receiving more links having higher importance (and therefore higher scores). In general, the score for a webpage \(l\) is the weighted sum of the scores of each webpage linking to it:

\begin{align} s_l = \sum_{p \to l} \frac{1}{d_p} s_p \notag \end{align}

where the sum runs over the webpages \(p\) that link to \(l\), \(d_p\) is the number of outgoing links that website \(v_p\) has, and \(s_p\) is the score of website \(v_p\). We can define the score vector to be:

\begin{align} \mathbf{s} = \begin{bmatrix}s_1 \\ s_2 \\ s_3 \\ s_4 \end{bmatrix} \notag \end{align}

Then, we can just write:

\begin{align} \boxed{C\mathbf{s} = \mathbf{s}} \end{align}

Thus, \(\mathbf{s}\) is the eigenvector for \(C\) associated with eigenvalue \(1\).
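One way to compute the score vector is to extract the eigenvector of \(C\) associated with eigenvalue \(1\) (a sketch with NumPy; normalizing so the scores sum to one is a convention, since any positive scaling of an eigenvector is still an eigenvector):

```python
import numpy as np

C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])

# Find the eigenvector whose eigenvalue is (numerically) 1.
eigvals, eigvecs = np.linalg.eig(C)
idx = np.argmin(np.abs(eigvals - 1))
s = np.real(eigvecs[:, idx])

# Scale so the scores sum to one.
s = s / s.sum()
```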

2. Stochastic Matrices

A nonnegative matrix \(M\) is called column stochastic if each of its columns sums to \(1\) — that is, if \(\mathbb{1}^TM = \mathbb{1}^T\).

A nonnegative matrix \(M\) is called row stochastic if each of its rows sums to \(1\) — that is, if \(M\mathbb{1} = \mathbb{1}\).

A nonnegative matrix \(M\) is called doubly stochastic if both its columns and its rows sum to \(1\).

The connectivity matrix is an example of a column stochastic matrix.
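These definitions translate directly into numerical checks (a sketch; the helper names here are my own):

```python
import numpy as np

def is_column_stochastic(M):
    """Nonnegative and every column sums to 1, i.e. 1^T M = 1^T."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=0), 1.0))

def is_row_stochastic(M):
    """Nonnegative and every row sums to 1, i.e. M 1 = 1."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=1), 1.0))

def is_doubly_stochastic(M):
    return is_column_stochastic(M) and is_row_stochastic(M)

# The connectivity matrix from the PageRank example.
C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])
```

Note that \(C\) is column stochastic but not row stochastic (its first row sums to \(\frac{7}{12}\)).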

3. Left Eigenvectors

\(\mathbf{w}\) is called a left eigenvector of \(C\) with eigenvalue \(\lambda\) if it satisfies:

\begin{align} \mathbf{w}^TC = \lambda\mathbf{w}^T \end{align}

Taking the transpose, we get:

\begin{align} C^T\mathbf{w} = \lambda\mathbf{w} \notag \end{align}

In other words, \(\mathbf{w}\) is a left eigenvector of \(C\) if and only if it is a right eigenvector of \(C^T\) with the same eigenvalue. Thus, since \(\mathbb{1}^TC = \mathbb{1}^T\), a column stochastic matrix has \(\mathbb{1}\) as a left eigenvector with eigenvalue \(1\).

Additionally, we claim that there also exists a right eigenvector with the same eigenvalue. Since \(C\) and \(C^T\) have the same characteristic polynomial, they share eigenvalues; because \(\mathbb{1}\) is a right eigenvector of \(C^T\) with eigenvalue \(1\), there must also exist a right eigenvector of \(C\) with eigenvalue \(1\).
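Both claims can be verified numerically for the connectivity matrix (a sketch; the eigenvalues are compared as sorted lists):

```python
import numpy as np

C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])

ones = np.ones(4)

# The all-ones vector is a left eigenvector with eigenvalue 1.
assert np.allclose(ones @ C, ones)

# C and C^T share the same eigenvalues (same characteristic polynomial).
ev_C  = np.sort_complex(np.linalg.eigvals(C))
ev_CT = np.sort_complex(np.linalg.eigvals(C.T))
assert np.allclose(ev_C, ev_CT)
```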

4. State-Space Models

Suppose we have a model of a system like so:

We can keep track of the state, which is the minimum information needed at time \(n\) to determine all future states of the system. We define the state vector to be:

\begin{align} \mathbf{s}[n] = \begin{bmatrix} s_1[n] \\ s_2[n] \\ s_3[n] \\ s_4[n] \end{bmatrix} \notag \end{align}

where \(s_p[n]\) is the fraction of the population at node \(p\) at time \(n\). Then the entries of \(\mathbf{s}[n]\) sum to one, and \(\mathbf{s}[n]\) is called a stochastic vector. The connectivity matrix \(C\) transforms one state to the next state. In other words, if \(\mathbf{s}[0]\) is the initial state, then:

\begin{align} \boxed{\mathbf{s}[n] = C^n\mathbf{s}[0]} \end{align}

Oftentimes, we represent the initial state with each entry represented as a fraction of the total population: in other words, the initial state vector is stochastic. It can then be seen that if the initial state vector is stochastic, then all future state vectors are stochastic:

\begin{align} \mathbb{1}^T\mathbf{s}[n] = \mathbb{1}^TC^n\mathbf{s}[0] = \mathbb{1}^T\mathbf{s}[0] = 1 \notag \end{align}
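A short simulation confirms that stochastic state vectors stay stochastic (a sketch; the initial state below is an arbitrary choice):

```python
import numpy as np

C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])

s = np.array([0.7, 0.1, 0.1, 0.1])  # arbitrary stochastic initial state

for n in range(50):
    s = C @ s                        # s[n+1] = C s[n]
    assert np.isclose(s.sum(), 1.0)  # remains stochastic at every step
```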

4.1. State as a Linear Combination of Eigenvectors

We can represent the diagonalization of \(C\) as:

\begin{align} C = V\Lambda V^{-1} \notag \end{align}

where \(V\) is the eigenvector matrix \(\begin{bmatrix} \mathbf{v_1} & \mathbf{v_2} \end{bmatrix}\), \(\Lambda\) is the diagonal matrix with the eigenvalues on its diagonal, and the rows of \(V^{-1}\) are the transposed left eigenvectors:

\begin{align} W = V^{-1} = \begin{bmatrix} \mathbf{w_1}^T \\ \mathbf{w_2}^T \end{bmatrix} \notag \end{align}

From (3), we know that \(\mathbf{s}[n] = C^n\mathbf{s}[0]\), and \(C^n = V\Lambda^n V^{-1}\), so we can write:

\begin{align} \mathbf{s}[n] &= \begin{bmatrix} \mathbf{v_1} & \mathbf{v_2} \end{bmatrix} \begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix} \begin{bmatrix}\mathbf{w_1}^T \\ \mathbf{w_2}^T \end{bmatrix} \mathbf{s}[0] \notag \\ &= \begin{bmatrix} \lambda_1^n\mathbf{v_1} & \lambda_2^n\mathbf{v_2} \end{bmatrix} \begin{bmatrix} \mathbf{w_1}^T\mathbf{s}[0] \\ \mathbf{w_2}^T\mathbf{s}[0] \end{bmatrix} \notag \\ &= \mathbf{w_1}^T\mathbf{s}[0]\lambda_1^n\mathbf{v_1} + \mathbf{w_2}^T\mathbf{s}[0]\lambda_2^n\mathbf{v_2} \notag \end{align}

Then call \(\alpha_k = \mathbf{w_k}^T\mathbf{s}[0]\). Our equation becomes:

\begin{align} \boxed{\mathbf{s}[n] = \alpha_1\lambda_1^n\mathbf{v_1} + \alpha_2\lambda_2^n\mathbf{v_2} + \cdots} \end{align}

If \(C\) has a full set of eigenvectors, then they form a basis. Therefore, we can represent the initial state vector \(\mathbf{s}[0]\) as a linear combination of the eigenvectors of \(C\), with \(\alpha_k\) being their coefficients. Then, we can view (4) as simply a linear combination of the eigenvectors of \(C\).
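This decomposition can be checked numerically: the rows of \(V^{-1}\) give the coefficients \(\alpha_k\), and summing \(\alpha_k\lambda_k^n\mathbf{v_k}\) reproduces \(C^n\mathbf{s}[0]\) (a sketch, assuming \(C\) is diagonalizable, which holds for this example since its eigenvalues are distinct):

```python
import numpy as np

C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])
s0 = np.array([0.7, 0.1, 0.1, 0.1])

lam, V = np.linalg.eig(C)  # eigenvalues, right eigenvectors (columns of V)
W = np.linalg.inv(V)       # rows of V^{-1} are the left eigenvectors w_k^T
alpha = W @ s0             # coefficients alpha_k = w_k^T s[0]

# s[n] as a linear combination of eigenvectors, per equation (4).
n = 10
s_n = sum(alpha[k] * lam[k]**n * V[:, k] for k in range(4))
```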

4.2. Equilibrium State

We say \(\mathbf{s}^*\) is an equilibrium state of the system if, whenever \(\mathbf{s}[n] = \mathbf{s}^*\), the state remains there:

\begin{align} \mathbf{s}[n+1] &= \mathbf{s}^* \notag \\ C\mathbf{s}^* &= \mathbf{s}^* \notag \end{align}

Realize that this is exactly equation (1), which is the equation the score vector for PageRank satisfies. In other words, we can view the PageRank algorithm as a model of population movement in a closed state system, and the score vector as the equilibrium state of that system.

By the Perron-Frobenius Theorem, every eigenvalue of a column stochastic matrix (in this case, the connectivity matrix \(C\)) has magnitude at most \(1\). Therefore, if \(\lambda = 1\) is the system's only eigenvalue with magnitude \(1\), the system must reach a steady state, since all of the other terms in (4) go to zero as time goes to infinity.
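Power iteration illustrates this: repeatedly applying \(C\) drives any stochastic initial state to the equilibrium, which is exactly the PageRank score vector (a sketch; 100 iterations is an arbitrary cutoff, more than enough here since the other eigenvalues have magnitude well below \(1\)):

```python
import numpy as np

C = np.array([[1/4, 1/3, 0, 0  ],
              [1/4, 0,   0, 1/2],
              [1/4, 1/3, 0, 1/2],
              [1/4, 1/3, 1, 0  ]])

s = np.array([1.0, 0.0, 0.0, 0.0])  # start with everyone at node 1

for _ in range(100):
    s = C @ s

# s has (numerically) converged to the equilibrium state: C s* = s*.
```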

4.3. State-Space with Input

Sometimes, our state-space model takes an input (often some kind of control). Then, our state evolution equation becomes:

\begin{align} \mathbf{q}[n+1] = A\mathbf{q}[n] + B\mathbf{x}[n] \end{align}

where \(A\) is our state transition matrix and \(\mathbf{x}[n]\) is our input vector at time step \(n\). Unrolling the recursion for a few terms, it can be seen that in general, the state can be written as:

\begin{align} \mathbf{q}[n] = A^n\mathbf{q}[0] + \sum_{k=0}^{n-1} A^{n-1-k}B\mathbf{x}[k] \end{align}
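The closed form (6) can be checked against the recursion (5) on a small example (a sketch; the matrices \(A\), \(B\) and the inputs below are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.6]])   # state transition matrix (arbitrary example)
B = np.array([[1.0],
              [0.5]])        # input matrix
q0 = np.array([1.0, -1.0])   # initial state
x = [np.array([0.3]), np.array([-0.2]), np.array([0.5])]  # inputs x[0..2]

# Recursion (5): q[n+1] = A q[n] + B x[n]
q = q0
for k in range(3):
    q = A @ q + B @ x[k]

# Closed form (6): q[n] = A^n q[0] + sum_{k=0}^{n-1} A^{n-1-k} B x[k]
n = 3
q_closed = np.linalg.matrix_power(A, n) @ q0 + sum(
    np.linalg.matrix_power(A, n - 1 - k) @ B @ x[k] for k in range(n)
)
```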
Last modified: 2026-03-19 10:50