In this chapter we will discuss the Laplace transform\(^{1}\). The Laplace transform turns out to be a very efficient method to solve certain ODE problems. In particular, the transform can take a differential equation and turn it into an algebraic equation. If the algebraic equation can be solved, applying the inverse transform gives us our desired solution. The Laplace transform also has applications in the analysis of electrical circuits, NMR spectroscopy, signal processing, and elsewhere. Finally, understanding the Laplace transform will also help with understanding the related Fourier transform, which, however, requires more understanding of complex numbers.

The Laplace transform also gives a lot of insight into the nature of the equations we are dealing with. It can be seen as converting between the time and the frequency domain. For example, take the standard equation \[mx''(t) + cx'(t) + kx(t) = f(t). \nonumber \] We can think of \(t\) as time and \(f(t)\) as an incoming signal. The Laplace transform will convert the equation from a differential equation in time to an algebraic (no derivatives) equation, where the new independent variable \(s\) is the frequency.

We can think of the Laplace transform as a black box that eats functions and spits out functions in a new variable. We write \(\mathcal{L}\{f(t)\} = F(s)\) for the Laplace transform of \(f(t)\). It is common to write lower case letters for functions in the time domain and upper case letters for functions in the frequency domain. We use the same letter to denote that one function is the Laplace transform of the other. For example, \(F(s)\) is the Laplace transform of \(f(t)\). Let us define the transform. \[\mathcal{L}\{f(t)\} = F(s) \overset{\text{def}}{=} \int_0^{\infty} e^{-st} f(t)\,dt. \nonumber \] We note that we are only considering \(t \ge 0\) in the transform. Of course, if we think of \(t\) as time, there is no problem; we are generally interested in finding out what will happen in the future (the Laplace transform is one place where it is safe to ignore the past). Let us compute some simple transforms.
Suppose \(f(t)=1\), then \[\mathcal{L}\{1\} = \int_0^{\infty} e^{-st}\,dt = \left[ \dfrac{e^{-st}}{-s} \right]_{t=0}^{\infty} = \dfrac{1}{s}. \nonumber \] The limit (the improper integral) only exists if \(s>0\), so \(\mathcal{L}\{1\}\) is only defined for \(s>0\).
Suppose \(f(t)=e^{-at}\), then \[\mathcal{L}\{e^{-at}\} = \int_0^{\infty} e^{-st} e^{-at}\,dt = \int_0^{\infty} e^{-(s+a)t}\,dt = \left[ \dfrac{e^{-(s+a)t}}{-(s+a)} \right]_{t=0}^{\infty} = \dfrac{1}{s+a}. \nonumber \] The limit only exists if \(s+a>0\), so \(\mathcal{L}\{e^{-at}\}\) is only defined for \(s>-a\).
Suppose \(f(t)=t\), then using integration by parts \[\begin{aligned} \mathcal{L}\{t\} &= \int_0^{\infty} e^{-st} t\,dt \\ &= \left[ \dfrac{-te^{-st}}{s} \right]_{t=0}^{\infty} + \dfrac{1}{s} \int_0^{\infty} e^{-st}\,dt \\ &= 0 + \dfrac{1}{s} \left[ \dfrac{e^{-st}}{-s} \right]_{t=0}^{\infty} \\ &= \dfrac{1}{s^2}. \end{aligned} \nonumber \] Again, the limit only exists for \(s>0\).
A common function is the unit step function, which is sometimes called the Heaviside function\(^{2}\). This function is generally given as \[u(t)= \begin{cases} 0 & \text{if } t<0, \\ 1 & \text{if } t \geq 0.\end{cases} \nonumber \] Let us find the Laplace transform of \(u(t-a)\), where \(a \ge 0\) is some constant. That is, the function that is \(0\) for \(t < a\) and \(1\) for \(t \ge a\). \[\mathcal{L}\{u(t-a)\} = \int_0^{\infty} e^{-st} u(t-a)\,dt = \int_a^{\infty} e^{-st}\,dt = \left[ \dfrac{e^{-st}}{-s} \right]_{t=a}^{\infty} = \dfrac{e^{-as}}{s}, \nonumber \] where of course \(s>0\) (and \(a \geq 0\) as we said).
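These computations can be double-checked with a computer algebra system. The following is a minimal sketch using the SymPy library, assuming it is installed; the helper name `laplace_from_definition` is ours, not a standard SymPy function, and it simply applies the definition integral directly.

```python
from sympy import symbols, integrate, exp, oo

# Declare t, s, a positive so the improper integrals converge symbolically.
t, s, a = symbols('t s a', positive=True)

def laplace_from_definition(f):
    """Apply the definition: integral of e^(-s t) f(t) for t from 0 to infinity."""
    return integrate(exp(-s*t) * f, (t, 0, oo))

print(laplace_from_definition(1))          # 1/s
print(laplace_from_definition(exp(-a*t)))  # 1/(a + s)
print(laplace_from_definition(t))          # s**(-2)

# For u(t-a) the integrand is 0 until t = a, so integrate from a instead:
print(integrate(exp(-s*t), (t, a, oo)))    # exp(-a*s)/s
```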
By applying similar procedures we can compute the transforms of many elementary functions. Many basic transforms are listed in Table \(\PageIndex{1}\).
| \(f(t)\) | \(\mathcal{L}\{f(t)\}\) |
|---|---|
| \(C\) | \(\dfrac{C}{s}\) |
| \(t\) | \(\dfrac{1}{s^2}\) |
| \(t^2\) | \(\dfrac{2}{s^3}\) |
| \(t^3\) | \(\dfrac{6}{s^4}\) |
| \(t^n\) | \(\dfrac{n!}{s^{n+1}}\) |
| \(e^{-at}\) | \(\dfrac{1}{s+a}\) |
| \(\sin(\omega t)\) | \(\dfrac{\omega}{s^2+\omega^2}\) |
| \(\cos(\omega t)\) | \(\dfrac{s}{s^2+\omega^2}\) |
| \(\sinh(\omega t)\) | \(\dfrac{\omega}{s^2-\omega^2}\) |
| \(\cosh(\omega t)\) | \(\dfrac{s}{s^2-\omega^2}\) |
| \(u(t-a)\) | \(\dfrac{e^{-as}}{s}\) |

Table \(\PageIndex{1}\): Some Laplace transforms (\(C\), \(\omega\), and \(a\) are constants).
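If SymPy is available, its built-in `laplace_transform` can be used to reproduce rows of the table; the following sketch is only an illustrative check, and the printed forms may differ by simple algebra from those in Table \(\PageIndex{1}\).

```python
from sympy import symbols, laplace_transform, sin, cos, exp

t, s = symbols('t s', positive=True)
w, a = symbols('omega a', positive=True)
n = symbols('n', positive=True, integer=True)

# noconds=True drops the region-of-convergence information and returns only F(s).
for f in (t**3, t**n, exp(-a*t), sin(w*t), cos(w*t)):
    print(f, '->', laplace_transform(f, t, s, noconds=True))

# Expected, up to equivalent forms:
#   6/s**4,  gamma(n+1)/s**(n+1)  (that is, n!/s**(n+1)),  1/(a+s),
#   omega/(omega**2 + s**2),  s/(omega**2 + s**2)
```

The hyperbolic entries can be checked the same way.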
Since the transform is defined by an integral, we can use the linearity properties of the integral. For example, suppose \(C\) is a constant, then \[ \mathcal{L}\{Cf(t)\} = \int_0^{\infty} e^{-st} C f(t)\,dt = C \int_0^{\infty} e^{-st} f(t)\,dt = C \mathcal{L}\{f(t)\}. \nonumber \] So we can "pull out" a constant from the transform. Similarly we have linearity, which we state as a theorem since it is so important.
Linearity of the Laplace Transform Suppose that \(A\), \(B\), and \(C\) are constants, then \[ \mathcal{L}\{Af(t) + Bg(t)\} = A\mathcal{L}\{f(t)\} + B\mathcal{L}\{g(t)\}, \nonumber \] and in particular \[\mathcal{L}\{Cf(t)\} = C\mathcal{L}\{f(t)\}. \nonumber \]
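As a quick illustration of linearity, here is a small SymPy sketch (assuming the library is available) transforming \(3t + 5e^{-2t}\); by the theorem the answer should be \(\frac{3}{s^2} + \frac{5}{s+2}\).

```python
from sympy import symbols, laplace_transform, exp, expand

t, s = symbols('t s', positive=True)

# L{3t + 5 e^{-2t}} = 3 L{t} + 5 L{e^{-2t}} = 3/s^2 + 5/(s+2)
F = laplace_transform(3*t + 5*exp(-2*t), t, s, noconds=True)
print(expand(F))  # 3/s**2 + 5/(s + 2)
```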
These rules together with Table \(\PageIndex{1}\) make it easy to find the Laplace transform of a whole lot of functions already. But be careful. It is a common mistake to think that the Laplace transform of a product is the product of the transforms. In general
\[\mathcal{L}\{f(t)g(t)\} \neq \mathcal{L}\{f(t)\}\,\mathcal{L}\{g(t)\}. \nonumber \] It must also be noted that not all functions have a Laplace transform; for some functions the defining integral diverges for every \(s\).
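A symbolic check makes the warning concrete. The following SymPy sketch (an illustration added here, with an arbitrary constant \(a>0\)) compares \(\mathcal{L}\{te^{-at}\}\) with \(\mathcal{L}\{t\}\,\mathcal{L}\{e^{-at}\}\):

```python
from sympy import symbols, laplace_transform, exp, simplify

t, s, a = symbols('t s a', positive=True)

lhs = laplace_transform(t*exp(-a*t), t, s, noconds=True)   # 1/(a + s)**2
rhs = (laplace_transform(t, t, s, noconds=True)
       * laplace_transform(exp(-a*t), t, s, noconds=True))  # 1/(s**2*(a + s))
print(lhs, rhs, simplify(lhs - rhs) == 0)  # the difference is not zero
```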
Let us consider when the Laplace transform exists in more detail. First let us consider functions of exponential order. The function \(f(t)\) is of exponential order as \(t\) goes to infinity if \[ \left|f(t)\right| \le Me^{ct}, \nonumber \] for some constants \(M\) and \(c\), for sufficiently large \(t\) (say for all \(t > t_0\) for some \(t_0\)). The simplest way to check this condition is to try and compute \[ \lim_{t \to \infty} \dfrac{f(t)}{e^{ct}}. \nonumber \] If the limit exists and is finite (usually zero), then \(f(t)\) is of exponential order.
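For instance, the limit test can be carried out symbolically. A small SymPy sketch (illustrative only, using \(\cos t\) and \(e^{t^2}\) as sample functions, with \(c=1\) and \(c=5\) respectively):

```python
from sympy import symbols, limit, exp, cos, oo

t = symbols('t', positive=True)

# cos(t) is of exponential order: cos(t)/e^t -> 0 as t -> infinity (take c = 1).
print(limit(cos(t) / exp(t), t, oo))       # 0

# e^(t^2) is not of exponential order: e^(t^2)/e^(c t) -> infinity for any fixed c
# (shown here with c = 5).
print(limit(exp(t**2) / exp(5*t), t, oo))  # oo
```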
Use L'Hopital's rule from calculus to show that a polynomial is of exponential order. Hint: Note that a sum of two exponential order functions is also of exponential order. Then show that \(t^n\) is of exponential order for any \(n\) .
For an exponential order function we have existence and uniqueness of the Laplace transform. Existence: Let \(f(t)\) be continuous and of exponential order for a certain constant \(c\). Then \(F(s) = \mathcal{L}\{f(t)\}\) is defined for all \(s>c\).
The existence is not difficult to see. Let \(f(t)\) be of exponential order, that is \(|f(t)| \leq Me^{ct}\) for all \(t > 0\) (for simplicity \(t_0 = 0\)). Let \(s > c\), or in other words \(s - c > 0\). By the comparison theorem from calculus, the improper integral defining \(\mathcal{L}\{f(t)\}\) exists if the following integral exists: \[ \int_0^{\infty} e^{-st} \left( Me^{ct} \right) dt = M \int_0^{\infty} e^{-(s-c)t}\,dt = M \left[ \dfrac{e^{-(s-c)t}}{-(s-c)} \right]_{t=0}^{\infty} = \dfrac{M}{s-c}. \nonumber \] The uniqueness theorem we state without proof.
Uniqueness: Let \(f(t)\) and \(g(t)\) be continuous and of exponential order. Suppose that there exists a constant \(C\) such that \(F(s) = G(s)\) for all \(s > C\). Then \(f(t)=g(t)\) for all \(t \ge 0\).
Both theorems hold for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points where it has jump discontinuities, like the Heaviside function. Uniqueness, however, does not “see” values at the discontinuities. So we can only conclude that \(f(t) = g(t)\) outside of discontinuities. For example, the unit step function is sometimes defined using \(u(0)=1/2\). This new step function, however, has the exact same Laplace transform as the one we defined earlier, where \(u(0)=1\).
As we said, the Laplace transform will allow us to convert a differential equation into an algebraic equation. Once we solve the algebraic equation in the frequency domain we will want to get back to the time domain, as that is what we are interested in. If we have a function \(F(s)\), to be able to find \(f(t)\) such that \(\mathcal{L}\{f(t)\} = F(s)\), we need to first know if such a function is unique. It turns out we are in luck by the uniqueness theorem above, so we can without fear make the following definition. If \(F(s) = \mathcal{L}\{f(t)\}\) for some function \(f(t)\), we define the inverse Laplace transform as \[\mathcal{L}^{-1}\{F(s)\} \overset{\text{def}}{=} f(t). \nonumber \] There is an integral formula for the inverse, but it is not as simple as the transform itself; it requires complex numbers and path integrals. For us it will suffice to compute the inverse using Table \(\PageIndex{1}\).
Find the inverse Laplace transform of \(F(s) = \dfrac{1}{s+1}\). Solution We look at the table to find \[ \mathcal{L}^{-1} \left\{ \dfrac{1}{s+1} \right\} = e^{-t}. \nonumber \]
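This example can also be checked with SymPy, assuming it is available. Note that SymPy typically attaches a Heaviside factor to the result, reflecting that the transform ignores \(t<0\).

```python
from sympy import symbols, inverse_laplace_transform

s, t = symbols('s t')

# Expect e^{-t} for t >= 0; SymPy writes this as exp(-t)*Heaviside(t).
print(inverse_laplace_transform(1/(s + 1), s, t))
```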
As the Laplace transform is linear, the inverse Laplace transform is also linear. That is, \[ \mathcal{L}^{-1}\{AF(s) + BG(s)\} = A\mathcal{L}^{-1}\{F(s)\} + B\mathcal{L}^{-1}\{G(s)\}. \nonumber \]
Another useful property is the so-called shifting property or the first shifting property \[ \mathcal{L}\{e^{-at} f(t)\} = F(s+a), \nonumber \] where \(F(s)\) is the Laplace transform of \(f(t)\) and \(a\) is a constant.
The shifting property can be used, for example, when the denominator is a more complicated quadratic that may come up in the method of partial fractions. We complete the square and write such quadratics as \((s+a)^2 + b\) and then use the shifting property.
Find \[ \mathcal{L}^{-1} \left\{ \dfrac{1}{s^2+4s+8} \right\}. \nonumber \] Solution First we complete the square to make the denominator \((s+2)^2+4\). Next we find \[ \mathcal{L}^{-1} \left\{ \dfrac{1}{s^2+4} \right\} = \dfrac{1}{2} \sin(2t). \nonumber \] Putting it all together with the shifting property, we find \[ \mathcal{L}^{-1} \left\{ \dfrac{1}{s^2+4s+8} \right\} = \mathcal{L}^{-1} \left\{ \dfrac{1}{(s+2)^2+4} \right\} = \dfrac{1}{2} e^{-2t} \sin(2t). \nonumber \]
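The same inverse can be checked with SymPy, assuming it is available; the expected answer is the one computed above, possibly multiplied by a Heaviside factor.

```python
from sympy import symbols, inverse_laplace_transform

s, t = symbols('s t')

# 1/(s^2 + 4s + 8) = 1/((s+2)^2 + 4); expect (1/2) e^{-2t} sin(2t) for t >= 0
# (SymPy typically includes a Heaviside(t) factor).
print(inverse_laplace_transform(1/(s**2 + 4*s + 8), s, t))
```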
In general, we want to be able to apply the inverse Laplace transform to rational functions, that is, functions of the form \[\dfrac{F(s)}{G(s)}, \nonumber \] where \(F(s)\) and \(G(s)\) are polynomials. Since normally, for the functions that we are considering, the Laplace transform goes to zero as \(s \rightarrow \infty\), it is not hard to see that the degree of \(F(s)\) must be smaller than that of \(G(s)\). Such rational functions are called proper rational functions, and we can always apply the method of partial fractions. Of course this means we need to be able to factor the denominator into linear and quadratic terms, which involves finding the roots of the denominator.
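For example, the partial fraction step and the term-by-term inversion can be checked symbolically. The rational function below is our own illustration (assuming SymPy is available), not one taken from the text.

```python
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t')

F = (s**2 + s + 1) / (s**3 + s)
print(apart(F, s))  # partial fractions: 1/s + 1/(s**2 + 1)

# Inverting term by term with linearity and the table gives 1 + sin(t) for t >= 0
# (SymPy typically writes this with a Heaviside(t) factor).
print(inverse_laplace_transform(F, s, t))
```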
[1] Just like the Laplace equation and the Laplacian, the Laplace transform is also named after Pierre-Simon, marquis de Laplace (1749 – 1827). [2] The function is named after the English mathematician, engineer, and physicist Oliver Heaviside (1850–1925). Only by coincidence is the function “heavy” on “one side.”
This page titled 6.1: The Laplace Transform is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Jiří Lebl via source content that was edited to the style and standards of the LibreTexts platform.