Linear Differential Equations
Say \(y\) is a function of \(t\). Then
\begin{align} a(t)y'' + b(t)y' + c(t)y = f(t) \end{align}with \(a(t) \neq 0\), and \(a(t),\: b(t),\: c(t),\: f(t)\) otherwise any functions of \(t\), is a second-order linear differential equation. Note that linear means linear in \(y\): there can be nonlinear terms in \(t\).
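For instance, \(t^2y'' + (\sin t)y' + y = e^t\) is linear even though its coefficients are nonlinear functions of \(t\), while \(y'' + y^2 = 0\) is nonlinear because of the \(y^2\) term.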
1. Constant Coefficients
We start with a simplified case of the second-order linear differential equation:
\begin{align} ay'' + by' + cy = f(t) \end{align}where \(a,\: b,\: c\) are constants, and \(a \neq 0\). This is known as a second-order linear differential equation with constant coefficients.
1.1. Homogeneous Equations
First, consider the special case where the function \(f(t)\) is zero:
\begin{align} ay'' + by' + cy = 0 \end{align}This is known as a homogeneous second-order linear differential equation with constant coefficients.
To figure out how to solve this, we observe that in (3), a solution must have the property that its second derivative is expressible as a linear combination of its first and zeroth derivatives. This suggests the possibility of a solution in the form \(y = e^{rt}\), since derivatives of \(e^{rt}\) are just constants times \(e^{rt}\). Substituting into (3), we get:
\begin{align} ar^2e^{rt} + bre^{rt} + ce^{rt} &= 0 \notag \\ e^{rt}(ar^2 + br + c) &= 0 \notag \\ ar^2 + br + c &= 0 \end{align}(4) is known as the characteristic equation (also called the auxiliary equation). \(y = e^{rt}\) is a solution if and only if \(r\) satisfies (4). There are three cases for the roots of this equation: two distinct real roots, a repeated real root, or two nonreal roots.
1.1.1. Two Real Roots
If this equation has two distinct real roots \(r_1\) and \(r_2\), then we can form an infinite number of solutions from linear combinations of \(e^{r_1t}\) and \(e^{r_2t}\):
\begin{align} y = c_1e^{r_1t} + c_2e^{r_2t} \end{align}For this to be the general solution, the two solutions must be linearly independent. Linear independence means that for a set of functions \(f_1, f_2, \dots, f_n\), if \(c_1f_1 + c_2f_2 + \cdots + c_nf_n = 0\) for all \(t\), then \(c_1 = c_2 = \cdots = c_n = 0\). To check whether a set of \(n\) functions \(f_1, f_2, \dots, f_n\) is linearly independent, we can check whether the Wronskian is not identically zero:
\begin{align} W = \det\begin{bmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{bmatrix} \notag \end{align}For just two functions, however, it is usually simpler to check whether they are scalar multiples of each other.
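For a concrete example, consider \(y'' - 3y' + 2y = 0\). The characteristic equation factors as \(r^2 - 3r + 2 = (r-1)(r-2) = 0\), giving roots \(r_1 = 1\) and \(r_2 = 2\), so the general solution is \begin{align} y = c_1e^{t} + c_2e^{2t} \notag \end{align}Here the Wronskian is \(e^t \cdot 2e^{2t} - e^t \cdot e^{2t} = e^{3t}\), which is never zero, confirming independence (equivalently, \(e^t\) and \(e^{2t}\) are not scalar multiples of each other).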
1.1.2. Repeated Real Roots
If the characteristic equation has a repeated real root \(r\), then one solution is \(e^{rt}\), and the second turns out to be \(te^{rt}\). Then, the general solution is:
\begin{align} y = c_1e^{rt} + c_2te^{rt} \end{align}More generally, if \(r\) is a root of the characteristic equation with multiplicity \(m\), then we get the \(m\) solutions \(e^{rt},\: te^{rt},\: \dots,\: t^{m-1}e^{rt}\). To see why this is, let \(D\) represent the derivative operator (i.e. \(Dy = y'\)). Note that \(D\) fulfills the requirements to be a linear transformation, as it behaves well under addition and scalar multiplication.
We want to find the solutions to \(y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0y = 0\). If the roots of the characteristic equation are \(r_1, r_2, \dots, r_n\) (they can be repeated), then we can rewrite the differential equation as:
\begin{align} (D-r_1)(D-r_2) \cdots (D-r_n)y=0 \notag \end{align}We will proceed by induction. First, the base case: when \(r\) is a real root of multiplicity two, the equation contains the factor \((D-r)^2\), so we must solve:
\begin{align} (D-r)\left[(D-r)y\right] = 0 \notag \end{align}To find both solutions, we need \((D-r)y\) to be in the nullspace of \(D-r\). Finding the nullspace:
\begin{align} (D-r)y &= 0 \notag \\ y' - ry &= 0 \notag \\ y &= Ce^{rt} \notag \end{align}This is our first solution, and the nullspace of \(D-r\). Then, since we want \((D-r)y\) to be in the nullspace, we can try:
\begin{align} (D-r)y &= e^{rt} \notag \\ y'-ry &= e^{rt} \notag \end{align}Proceeding with the integrating factor technique:
\begin{align} e^{-rt}(y'-ry) &= 1 \notag \\ [ye^{-rt}]' &= 1 \notag \\ ye^{-rt} &= t + C \notag \\ y &= te^{rt} + Ce^{rt} \notag \end{align}We see that for a root of multiplicity two, the second solution is \(te^{rt}\). Now, for the inductive step: we assume that for a real root \(r\) of multiplicity \(n+1\), solutions take the form \((c_nt^n + c_{n-1}t^{n-1} + \cdots + c_0)e^{rt}\). Then, for multiplicity \(n+2\), we can write the equation as \((D-r)^{n+2}y = 0\); we just need \((D-r)y\) to be in the nullspace of \((D-r)^{n+1}\). But that is just the solution set for multiplicity \(n+1\), so we can write:
\begin{align} (D-r)y = (c_nt^n + c_{n-1}t^{n-1} + \cdots + c_0)e^{rt} \notag \end{align}Solving this for \(y\) with an integrating factor multiplies both sides by \(e^{-rt}\) and integrates the polynomial, raising its degree by one; the new \(t^{n+1}\) term finishes the inductive step. Therefore, each additional repetition of a root contributes one more power of \(t\).
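As an example, \((D-1)^3y = 0\), i.e. \(y''' - 3y'' + 3y' - y = 0\), has characteristic equation \((r-1)^3 = 0\) with the root \(r = 1\) of multiplicity three, so its general solution is \begin{align} y = (c_1 + c_2t + c_3t^2)e^{t} \notag \end{align}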
1.1.3. Two Nonreal Roots
If a characteristic equation has two nonreal roots \(\alpha \pm \beta i\) for some \(\beta \neq 0\), then we can write a general solution as:
\begin{align} y = Ce^{(\alpha + \beta i)t} + De^{(\alpha - \beta i)t} \notag \end{align}However, we usually want real-valued solutions. Applying Euler's formula, \(e^{it} = \cos t + i\sin t\), and choosing the constants so that the imaginary parts cancel, we can write the general solution instead as:
\begin{align} y = c_1e^{\alpha t}\cos(\beta t) + c_2 e^{\alpha t} \sin(\beta t) \end{align}where \(c_1\) and \(c_2\) are real.
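For example, \(y'' + 2y' + 5y = 0\) has characteristic equation \(r^2 + 2r + 5 = 0\) with roots \(r = -1 \pm 2i\), so \(\alpha = -1\), \(\beta = 2\), and the general solution is \begin{align} y = e^{-t}\left(c_1\cos(2t) + c_2\sin(2t)\right) \notag \end{align}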
1.2. Nonhomogeneous Equations
Now, we would like to consider the case when \(f(t)\) is any function. The key is to first find a particular solution \(y_p\) to the equation. Then, denote the general solution to the complementary equation \(ay'' + by' + cy = 0\) by \(y_c\). Finally, the general solution to the nonhomogeneous equation is:
\begin{align} y = y_c + y_p \end{align}The remaining task is to find the particular solution \(y_p\), and the approach varies depending on what \(f(t)\) is.
1.2.1. Method of Undetermined Coefficients
To do this, we can employ some educated guesses as to what the solution might be:
- If \(f(t)\) is a polynomial, try a polynomial \(y_p\) of the same degree
- If \(f(t) = e^{kt}\), try \(y_p = Ae^{kt}\)
- If \(f(t) = \cos(kt)\) or \(\sin(kt)\), try \(y_p = A\cos(kt) + B\sin(kt)\)
Then, we can plug them into our differential equation and solve for the unknown constants. This is known as the method of undetermined coefficients.
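For example, consider \(y'' + y = e^{2t}\). The complementary solution is \(y_c = c_1\cos t + c_2\sin t\). Trying \(y_p = Ae^{2t}\) and substituting gives \(4Ae^{2t} + Ae^{2t} = e^{2t}\), so \(A = \frac{1}{5}\), and the general solution is \begin{align} y = c_1\cos t + c_2\sin t + \frac{1}{5}e^{2t} \notag \end{align}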
Sometimes, however, the default "try" of the method of undetermined coefficients fails. This happens when the guess overlaps with the homogeneous solution, so that substituting the guess into the left-hand side yields zero rather than \(f(t)\).
To fix this, we multiply the guess by \(t\), or, if the overlapping term comes from a double root of the characteristic equation, by \(t^2\). A guess overlaps when one of its terms is already a solution of the complementary equation, so the differential operator annihilates it.
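For example, for \(y'' - y = e^{t}\), the complementary solution \(c_1e^t + c_2e^{-t}\) already contains \(e^t\), so the guess \(Ae^{t}\) gives zero on the left-hand side. Trying \(y_p = Ate^{t}\) instead, we get \(y_p'' - y_p = 2Ae^{t} = e^{t}\), so \(A = \frac{1}{2}\) and \(y_p = \frac{1}{2}te^{t}\).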
1.2.2. Variation of Parameters
Sometimes, \(f(t)\) is in a form where we cannot use the method of undetermined coefficients (e.g. \(f(t) = \tan t\)). Then, we have to use a different method, variation of parameters.
Say \(y_1\) and \(y_2\) are the two linearly independent solutions of the complementary equation. Now, for the particular solution we try:
\begin{align} y_p = u_1(t)y_1(t) + u_2(t)y_2(t) \notag \end{align}where \(u_1\) and \(u_2\) are unknown functions. Taking the first derivative, we get:
\begin{align} y'_p = u'_1y_1+u_1y'_1+u'_2y_2+u_2y'_2 \notag \end{align}The key idea here is that we assume \(u'_1y_1 + u'_2y_2=0\). This gives us an extra condition to fulfill when we solve for \(u_1\) and \(u_2\) in the end, but it also allows us to simplify the derivatives greatly. Then:
\begin{align} y'_p &= u_1y'_1 + u_2y'_2 \notag \\ y''_p &= u'_1y'_1+u_1y''_1+u'_2y'_2+u_2y''_2 \notag \end{align}Plugging this into the original equation, we get:
\begin{align} f &= a(u'_1y'_1+u_1y''_1+u'_2y'_2+u_2y''_2) + b(u_1y'_1+u_2y'_2) + c(u_1y_1+u_2y_2) \notag \\ &= a(u'_1y'_1+u'_2y'_2) + u_1(ay''_1+by'_1+cy_1) + u_2(ay''_2+by'_2+cy_2) \notag \\ &= a(u'_1y'_1+u'_2y'_2) \notag \end{align}Thus, we end up with the following linear system:
\begin{align} \begin{bmatrix} y_1 & y_2 \\ y'_1 & y'_2 \end{bmatrix} \begin{bmatrix}u'_1 \\ u'_2\end{bmatrix} = \begin{bmatrix} 0 \\ \frac{f}{a} \end{bmatrix} \notag \end{align}We can then solve the system with Cramer's Rule to get:
\begin{align} \boxed{u'_1 = \frac{\begin{vmatrix}0 & y_2 \\ \frac{f}{a} & y'_2\end{vmatrix}}{\begin{vmatrix}y_1 & y_2 \\ y'_1 & y'_2\end{vmatrix}} = \frac{-fy_2}{a(y_1y'_2-y'_1y_2)}} \\ \boxed{u'_2 = \frac{\begin{vmatrix}y_1 & 0 \\y'_1 & \frac{f}{a}\end{vmatrix}}{\begin{vmatrix}y_1 & y_2 \\ y'_1 & y'_2\end{vmatrix}} = \frac{fy_1}{a(y_1y'_2-y'_1y_2)}} \end{align}Notice that the denominator is just \(a\) times the Wronskian \(W = y_1y'_2 - y'_1y_2\). \(u_1\) and \(u_2\) can now be determined by integrating.
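For example, consider \(y'' + y = \tan t\) on \(-\frac{\pi}{2} < t < \frac{\pi}{2}\). Here \(a = 1\), \(y_1 = \cos t\), \(y_2 = \sin t\), and \(W = \cos^2 t + \sin^2 t = 1\). Then: \begin{align} u'_1 &= -\tan t\sin t = \cos t - \sec t \implies u_1 = \sin t - \ln|\sec t + \tan t| \notag \\ u'_2 &= \tan t\cos t = \sin t \implies u_2 = -\cos t \notag \end{align}so \(y_p = u_1\cos t + u_2\sin t = -\cos t\,\ln|\sec t + \tan t|\) (the \(\sin t\cos t\) terms cancel).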
1.3. Superposition Principle
The superposition principle is a simple observation that allows us to extend the set of nonhomogeneous equations we can solve. If \(ay''+by'+cy=f_1(t)\) has particular solution \(y_1\), and \(ay''+by'+cy=f_2(t)\) has particular solution \(y_2\), then for \(ay''+by'+cy=k_1f_1(t) + k_2f_2(t)\),
\begin{align} y(t) = k_1y_1 + k_2y_2 \end{align}is a solution, where \(k_1\) and \(k_2\) are constants.
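For example, to solve \(y'' + y = 2t + 3e^{2t}\): the equation \(y'' + y = t\) has particular solution \(t\), and \(y'' + y = e^{2t}\) has particular solution \(\frac{1}{5}e^{2t}\) (from the earlier example), so by superposition \(y_p = 2t + \frac{3}{5}e^{2t}\).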
2. Variable Coefficients
Say we want to solve the following differential equation with non-constant coefficients:
\begin{align} a(t)y'' + b(t)y' + c(t)y = f(t) \notag \end{align}If we are given \(y_1\) and \(y_2\), the two linearly independent solutions of the homogeneous case, then we can still use variation of parameters, because nothing in that derivation depended on the coefficients being constant. We get:
\begin{align} \boxed{u'_1 = \frac{\begin{vmatrix}0 & y_2 \\ \frac{f(t)}{a(t)} & y'_2\end{vmatrix}}{\begin{vmatrix}y_1 & y_2 \\ y'_1 & y'_2\end{vmatrix}}}\\ \boxed{u'_2 = \frac{\begin{vmatrix}y_1 & 0 \\y'_1 & \frac{f(t)}{a(t)}\end{vmatrix}}{\begin{vmatrix}y_1 & y_2 \\ y'_1 & y'_2\end{vmatrix}}} \end{align}
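For example, consider \(t^2y'' - 2y = t^3\) for \(t > 0\), where the homogeneous solutions (verifiable by substitution) are \(y_1 = t^2\) and \(y_2 = t^{-1}\). The Wronskian is \(y_1y'_2 - y'_1y_2 = -1 - 2 = -3\), and \(a(t) = t^2\), so: \begin{align} u'_1 = \frac{-t^3 \cdot t^{-1}}{t^2(-3)} = \frac{1}{3} \implies u_1 = \frac{t}{3}, \qquad u'_2 = \frac{t^3 \cdot t^2}{t^2(-3)} = -\frac{t^3}{3} \implies u_2 = -\frac{t^4}{12} \notag \end{align}giving \(y_p = \frac{t}{3} \cdot t^2 - \frac{t^4}{12} \cdot t^{-1} = \frac{t^3}{4}\), which can be checked by substitution.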