Modular Arithmetic
Table of Contents
1. Modular Arithmetic
Modular arithmetic is arithmetic limited to a bounded range of integers \(\{0, 1, 2, \dots, m-1\}\) for some \(m\). We call two numbers \(a\) and \(b\) congruent modulo \(m\) if \(m\) divides \(a-b\):
\begin{align} a \equiv b \pmod{m} \iff m | (a-b) \end{align}All of the integers congruent to one another modulo \(m\) form what is known as a residue class (a modulus \(m\) has a total of \(m\) residue classes, one for each remainder).
Addition and multiplication are simple because we can always reduce intermediate results modulo \(m\). In other words, before doing an addition/subtraction or multiplication operation, we can first reduce each operand modulo \(m\), allowing us to potentially work with much smaller numbers. More formally:
\begin{align} a \pm b &\equiv (a \text{ mod } m) \pm (b \text{ mod } m) \pmod{m} \\ ab &\equiv (a \text{ mod } m) \times (b \text{ mod } m) \pmod{m} \end{align}1.1. Exponentiation
Addition and multiplication are efficient: the number of operations needed scales with the number of bits, not with the size of the number itself.
However, for exponentiation, the naive method to compute \(x^m\) requires \(m-1\) multiplication operations, which scales according to the size of \(m\). A faster way is to use repeated squaring, where we express \(x^m\) as a product of several \(x^p\), where \(p\) is some power of \(2\). We can then repeatedly square \(x\) starting from \(x^2\) to obtain the necessary powers of \(2\) to then compute \(x^m\).
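As a sketch, the repeated-squaring idea can be written in Python (the function name `mod_exp` is my own; Python's built-in `pow(x, m, p)` computes the same thing):

```python
def mod_exp(x, m, p):
    """Compute x^m mod p by repeated squaring over the bits of m."""
    result = 1
    base = x % p
    while m > 0:
        if m & 1:                     # this bit of m is set: multiply in x^(2^i)
            result = (result * base) % p
        base = (base * base) % p      # square: x^(2^i) -> x^(2^(i+1))
        m >>= 1
    return result
```

For instance, `mod_exp(3, 42, 53)` returns `38`, matching the worked example below.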
Example: Repeated squaring
We want to use repeated squaring to compute \(3^{42} \pmod{53}\). First, we note that \(42\) is \(101010\) in binary, so we can represent \(3^{42} = 3^{32} \times 3^8 \times 3^2\). Then, we repeatedly square \(3^2\) to find:
\begin{align} 3^2 &= 9 \notag \\ 3^4 &= 9^2 = 81 \equiv 28 \notag \\ 3^8 &= 28^2 = 784 \equiv 42 \notag \\ 3^{16} &= 42^2 = 1764 \equiv 15 \notag \\ 3^{32} &= 15^2 = 225 \equiv 13 \notag \end{align}Thus,
\begin{align} 3^{42} = 13 \cdot 42 \cdot 9 = 13 \cdot 378 \equiv 13 \cdot 7 \equiv 91 \equiv \boxed{38} \notag \end{align}1.2. Inverses
Division is slightly different in modular arithmetic because we are limited to integers. Instead, we multiply by the inverse of \(x\), denoted \(x^{-1}\), which is the unique number modulo \(m\) such that:
\begin{align} x(x^{-1}) \equiv 1 \pmod{m} \end{align}For example, \(3\) is the inverse of \(12\) modulo \(35\), since \(12 \cdot 3 = 36 \equiv 1 \pmod{35}\). But it is also possible for a number to have no inverse; for example, it can be shown that \(10\) has no inverse modulo \(35\).
Theorem: \(x\) has an inverse modulo \(m\) if and only if \(x\) and \(m\) are coprime.
Proof: We first prove the forward direction: if \(x\) has an inverse modulo \(m\), then \(x\) and \(m\) are coprime. Suppose \(x\) has an inverse \(a\) modulo \(m\). Then \(ax \equiv 1 \pmod{m}\), i.e., \(ax=km+1\) for some \(k \in \mathbb{Z}\).
Let \(d=\text{gcd}(x,m)\). Then \(d|ax\) and \(d|km\), hence \(d|ax-km \Rightarrow d|1\), so \(d=1\) and \(x\) and \(m\) must be coprime.
For the reverse direction, we want to prove that if \(x\) and \(m\) are coprime, then there exists an inverse for \(x\) modulo \(m\). Consider the sequence of multiples of \(x\): \(\{0, x, 2x, \dots, (m-1)x\}\). If we prove that all \(m\) of these multiples are different modulo \(m\), then they cover every residue class, so exactly one of them must be \(1\) modulo \(m\).
Suppose for contradiction that \(ax \equiv bx \pmod{m}\) with \(0 \leq b < a \leq m-1\). Then \(m|ax-bx\), or equivalently \((a-b)x = km\) for some \(k \in \mathbb{Z}\). Since \(x\) and \(m\) are coprime, \(m|a-b\). But \(0 < a-b < m\), so \(m\) cannot divide \(a-b\), a contradiction! \(\square\)
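The existence half of this proof suggests a brute-force way to find inverses: scan the multiples of \(x\) until one is \(1\) modulo \(m\). A minimal sketch (the function name is my own):

```python
def brute_force_inverse(x, m):
    """Find x^{-1} mod m by scanning 0, x, 2x, ..., (m-1)x, as in the proof.

    Returns None when x and m are not coprime, i.e. no inverse exists.
    """
    for a in range(m):
        if (a * x) % m == 1:
            return a
    return None
```

`brute_force_inverse(12, 35)` returns `3`, while `brute_force_inverse(10, 35)` returns `None`, matching the examples above.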
Example: Linear equations
Inverses can be used to solve linear equations in modular arithmetic. For example, here we want to solve \(12x \equiv 11 \pmod{35}\).
Since the inverse of \(12\) modulo \(35\) is \(3\), we can multiply by \(3\) to get:
\begin{align} 12(3)x &\equiv 11 \cdot 3 \pmod{35} \notag \\ x &\equiv 33 \pmod{35} \notag \end{align}2. Euclid’s Algorithm
Euclid’s algorithm is a way to quickly find the greatest common divisor of two numbers. To understand how it works, we must first understand the following claim:
Claim: If \(x \geq y > 0\), then \(\text{gcd}(x,y) = \text{gcd}(y, x\text{ mod }y)\).
Proof: It suffices to prove that \(d | x \land d | y \iff d | y \land d | x\text{ mod } y\). To see this, write \(x\) as:
\begin{align} x = (x \text{ div } y)y + (x \text{ mod } y) \notag \end{align}where \(x \text{ div }y\) is the integer part of \(\frac{x}{y}\). From this equation it is clear that, given \(d|y\), if \(d|x\) then also \(d|x \text{ mod }y\), and vice versa. \(\square\)
Euclid’s algorithm uses this fact to recursively simplify the \(\text{gcd}\) function until one of the numbers is zero:
ALGORITHM gcd(x, y)
CONDITION x >= y >= 0 AND x > 0
IF y == 0 THEN OUTPUT x
ELSE OUTPUT gcd(y, x mod y)
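The same recursion as a Python sketch (the standard library's `math.gcd` provides this too):

```python
def gcd(x, y):
    """Euclid's algorithm; assumes x >= y >= 0 and x > 0."""
    if y == 0:
        return x
    return gcd(y, x % y)  # gcd(x, y) = gcd(y, x mod y)
```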
2.1. Extended Euclid’s Algorithm
The extended Euclid’s algorithm provides a way to compute inverses. Given positive integers \(x\) and \(y\), suppose we could compute integers \(a, b\) such that:
\begin{align} \text{gcd}(x, y) = ax + by \end{align}Then if \(\text{gcd}(x,y)=1\), we get \(ax+by=1\), and we have:
\begin{align} by \equiv 1 \pmod{x} \notag \\ ax \equiv 1 \pmod{y} \notag \end{align}So \(b = y^{-1} \pmod{x}\) and \(a = x^{-1} \pmod{y}\).
We can adapt Euclid’s algorithm to compute \(a\) and \(b\). Suppose that our new function, egcd(x, y), instead returns (d, a, b), where d is the GCD and a, b are the coefficients defined above. In other words, from (d, a, b) = egcd(y, x mod y) we would like to compute the coefficients A, B for egcd(x, y).
We know that:
\begin{align} d &= ay + b(x \text{ mod } y) \notag \\ d &= ay + b\left( x - y(x \text{ div } y)\right) \notag \\ d &= bx + \left[ a - (x \text{ div } y)b \right]y \notag \end{align}So, we set:
\begin{align} A &= b \notag \\ B &= a- (x \text{ div } y)b \notag \end{align}Then, the extended Euclid’s algorithm is:
ALGORITHM egcd(x, y)
CONDITION x >= y >= 0 AND x > 0
IF y == 0 THEN
OUTPUT (x, 1, 0)
ELSE
(d, a, b) = egcd(y, x mod y)
OUTPUT (d, b, a - (x div y)*b)
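A Python version of egcd, plus a small helper (named `inverse` here, my own name) that uses it to compute modular inverses when the arguments are coprime:

```python
def egcd(x, y):
    """Extended Euclid: returns (d, a, b) with d = gcd(x, y) and d = a*x + b*y."""
    if y == 0:
        return (x, 1, 0)
    d, a, b = egcd(y, x % y)
    return (d, b, a - (x // y) * b)  # A = b, B = a - (x div y)*b

def inverse(x, m):
    """x^{-1} mod m; raises if x and m are not coprime (no inverse exists)."""
    d, a, _ = egcd(x, m)
    if d != 1:
        raise ValueError("x and m are not coprime, no inverse exists")
    return a % m
```

For the earlier linear-equation example, `inverse(12, 35)` returns `3`, and `(3 * 11) % 35 == 33` recovers the solution.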
3. Chinese Remainder Theorem
Theorem (CRT): Let \(n_1, n_2 , \dots , n_k\) be pairwise coprime, and let \(N = \prod_{i=1}^k n_i\). Then there is a unique \(x \pmod{N}\) that satisfies the following system of modular equations:
\begin{align} \begin{cases} x \equiv a_1 \pmod{n_1} & \\ x \equiv a_2 \pmod{n_2} & \\ \vdots & \\ x \equiv a_k \pmod{n_k} & \end{cases} \notag \end{align}Proof: Suppose we can construct numbers \(u_1, u_2, \dots, u_k\) such that \(u_i \equiv 1 \pmod{n_i}\) and \(u_i \equiv 0 \pmod{n_j}\) for every \(j \neq i\). In other words, we want to construct:
\begin{align} u_i \equiv \begin{cases} 0 \pmod{n_1} & \\ \vdots & \\ 1 \pmod{n_i} & \\ \vdots & \\ 0 \pmod{n_k} & \\ \end{cases} \notag \end{align}Then, \(x = a_1u_1 + a_2u_2 + \cdots + a_ku_k\) satisfies the system of equations.
Now, let:
\begin{align} u_i = \frac{N}{n_i}\left[\left(\frac{N}{n_i}\right)^{-1}\text{ mod } n_i\right] \end{align}The factor \(\frac{N}{n_i}\) is a multiple of every \(n_j\) with \(j \neq i\), which means that \(u_i \equiv 0\) modulo each such \(n_j\). Next, realize that we are multiplying \(\frac{N}{n_i}\) by its inverse modulo \(n_i\) (which exists because \(\frac{N}{n_i}\) and \(n_i\) are coprime), so \(u_i \equiv 1 \pmod{n_i}\). This satisfies the properties that we want \(u_i\) to have, and all that remains is for us to prove that this solution is unique modulo \(N\).
Suppose that \(x\) and \(y\) are two solutions to our system. Then \(x \equiv y \pmod{n_i}\) for every \(i\), so each \(n_i\) divides \(x-y\). Since the \(n_i\) are pairwise coprime, this means \(N | (x-y)\), and so \(x \equiv y \pmod{N}\). \(\square\)
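The construction in the proof translates directly into code. A sketch in Python (using the built-in `pow(b, -1, n)`, available since Python 3.8, for the modular inverse):

```python
def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise-coprime moduli via the u_i construction."""
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for a, n in zip(residues, moduli):
        Ni = N // n                 # multiple of every modulus except n
        u = Ni * pow(Ni, -1, n)     # u = 1 (mod n), u = 0 (mod every other modulus)
        x += a * u
    return x % N
```

For example, `crt([2, 3, 2], [3, 5, 7])` returns `23`, the unique solution modulo \(105\).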
4. Fermat’s Little Theorem
Fermat’s Little Theorem: For any prime \(p\), the following is true:
\begin{align} a^{p-1} \equiv 1 \pmod{p} \; \forall a \in \{1, 2, \dots, p-1\} \end{align}Corollary: For any prime \(p\), we also have:
\begin{align} a^p \equiv a \pmod{p} \; \forall a \in \{0, 1, 2, \dots, p-1\} \end{align}Proof: Consider the set of numbers \(S' = \{a, 2a, \dots, (p-1)a\} \text{ mod } p\). These numbers are all different (since \(\text{gcd}(a,p)=1\), the factor \(a\) can be cancelled) and none of them is \(0\), so they must cover the set \(S = \{1, 2, \dots, p-1\}\). Hence, the sets \(S\) and \(S'\) are the same set.
But, we know that:
\begin{align} \prod_{x \in S} x \equiv 1 \cdot 2 \cdot \cdots \cdot (p - 1) \equiv (p-1)! \pmod{p} \notag \end{align}and
\begin{align} \prod_{x \in S'} x \equiv a \cdot 2a \cdot \cdots \cdot (p-1)a \equiv (p-1)!a^{p-1} \pmod{p} \notag \end{align}Since \((p-1)!\) has an inverse modulo \(p\) (each of its factors is coprime to \(p\)), we can cancel it and conclude that \(a^{p-1} \equiv 1 \pmod{p}\). \(\square\)
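Both statements are easy to check numerically with Python's built-in three-argument `pow` (the choice of \(p = 53\) here is just an example):

```python
p = 53  # any prime works
for a in range(1, p):
    assert pow(a, p - 1, p) == 1   # Fermat's little theorem
for a in range(p):
    assert pow(a, p, p) == a       # the corollary, which also covers a = 0
```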
4.1. Euler’s Totient Function
For a positive integer \(n\), let \(\mathbb{Z}_n^*\) denote the set of integers \(x \text{ mod }n\) such that \(\text{gcd}(x, n)=1\). For example, \(\mathbb{Z}_{10}^* = \{1, 3 ,7, 9\}\). Define Euler’s totient function as:
\begin{align} \phi(n) = |\mathbb{Z}_n^*| \end{align}Euler’s Theorem: For any \(a \in \mathbb{Z}_n^*\), we have:
\begin{align} a^{\phi(n)} \equiv 1 \pmod{n} \end{align}Realize that Fermat’s little theorem is just the special case where \(n\) is a prime number \(p\), since then \(\phi(p) = p-1\).
Proof: For \(a \in \mathbb{Z}_n^*\), consider the set \(S = \{ax \text{ mod }n: x \in \mathbb{Z}_n^*\}\). These numbers are all different (since \(a\) is invertible modulo \(n\), it can be cancelled), and all of them are in \(\mathbb{Z}_n^*\). Hence \(S\) is the same as the set \(\mathbb{Z}_n^*\).
Thus, we have:
\begin{align} \prod_{x \in \mathbb{Z}_n^*} x &\equiv \prod_{x\in S}x \pmod{n} \notag \\ &\equiv a^{\phi(n)} \prod_{x\in\mathbb{Z}_n^*} x \pmod{n} \notag \end{align}Since \(\prod_{x\in\mathbb{Z}_n^*}x\) is invertible modulo \(n\), we can cancel it to conclude \(a^{\phi(n)} \equiv 1 \pmod{n}\), and we are done. \(\square\)
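A direct (if slow) way to compute \(\phi\) is to count coprime residues, which also lets us sanity-check Euler's theorem. A minimal sketch:

```python
from math import gcd

def phi(n):
    """Euler's totient: the size of Z_n^*, counted directly."""
    return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)
```

For example, `phi(10)` returns `4`, matching \(\mathbb{Z}_{10}^* = \{1, 3, 7, 9\}\), and `pow(a, phi(10), 10) == 1` for each of those \(a\).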