DyingLoveGrape.


Differential Equations: The Laplace Transform, Part 2: Two Other Important Functions.


Some more neat functions.

There are two other main types of functions that work well with the Laplace transform: the Dirac $\delta$ function and the Heaviside function. Let's talk about the $\delta$ function first.

Dirac $\delta$ "Function".

Strictly speaking, the $\delta$ function is not a "function", but it's a nice tool to use with functions. The idea is to look at a limit of "normal curves" like this:

Notice that these functions get thinner and thinner, steeper and steeper, but each of them has total area under it equal to 1. That is, the integral of each of these functions from $-\infty$ to $\infty$ is equal to 1. What happens when we take the limit of these functions? We get something infinitely high at the point $x = 0$ and equal to 0 for $x\neq 0$; we define this limit to be the Dirac $\delta$ function. The "point at infinity" is why we can't call the $\delta$ function a "true" function: a function must have a finite value at each $x$. Moreover, we define the total integral of the delta function to be 1. That's kind of weird. Let's define this formally:
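Before the formal definition, here's a minimal numerical sketch of this limiting process, assuming we use normal (Gaussian) curves centered at 0 with shrinking standard deviation $\sigma$: the area stays at 1 while the height at 0 blows up.

```python
import numpy as np
from scipy.integrate import quad

def normal_curve(x, sigma):
    """Normal density centered at 0 with standard deviation sigma."""
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

for sigma in [1.0, 0.1, 0.01]:
    # Essentially all of the area lives within a few standard deviations of 0.
    area, _ = quad(normal_curve, -10 * sigma, 10 * sigma, args=(sigma,))
    print(f"sigma = {sigma}: area = {area:.6f}, height at 0 = {normal_curve(0.0, sigma):.2f}")
```

The printed areas all come out to about 1, while the heights at 0 grow without bound as $\sigma$ shrinks; the $\delta$ function is what this process is heading toward.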

Definition (Dirac $\delta$ Function). The $\delta$ function $\delta_{a}(x)$ is defined to be infinitely large at $x = a$ and equal to 0 elsewhere. Moreover, \[\int_{-\infty}^{\infty}\delta_{a}(x)\,dx = 1.\]

The idea behind the Dirac function is that it stands for an "impulse" at some point; it is almost like a brief, strong jolt. It's a bit like the feeling right when you wake up from a terrible dream in a cold sweat, except in function form.

What can we do with $\delta_{a}(x)$? Well, since this post is about Laplace transforms, maybe we ought to try that. By definition, \[{\mathcal L}(\delta_{a})(s) = \int_{-\infty}^{\infty} \delta_{a}(t)e^{-st}\,dt.\] But note that $\delta_{a}(t)e^{-st} = 0$ for every $t\neq a$. You might expect the integral to be 0, but because of how strange the $\delta$ function is (and because its total integral is equal to 1), the integral picks up exactly the value of $e^{-st}$ at the one point where $\delta_{a}$ is non-zero; that is, it's equal to $e^{-sa}$: \[{\mathcal L}(\delta_{a})(s) = \int_{-\infty}^{\infty} \delta_{a}(t)e^{-st}\,dt = e^{-sa}.\] Notice that ${\mathcal L}(\delta_{0})(s) = e^{0} = 1$. That's pretty neat. It means that the inverse Laplace transform of the constant function 1 is just the $\delta$ function $\delta_{0}$.
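If you want to double-check this with a computer algebra system, here's a minimal sketch using SymPy (its laplace_transform integrates from $0$ to $\infty$, which matches the integral above when $a > 0$ since $\delta_{a}$ vanishes everywhere else); the expected output is an assumption based on recent SymPy versions.

```python
from sympy import symbols, laplace_transform, DiracDelta

t, s, a = symbols('t s a', positive=True)

# SymPy writes delta_a as DiracDelta(t - a).
F = laplace_transform(DiracDelta(t - a), t, s, noconds=True)
print(F)   # expected: exp(-a*s)
```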

[Small Note: sometimes we make the $\delta$ function slightly "more" of a function by defining the point where it is infinitely tall to be 1 instead of $\infty$. We still have the property that the total integral is equal to 1, but this makes the $\delta$ function at least plot-able.]

The Heaviside Function.

This function was constructed by Oliver Heaviside, but I like to think that it's called the Heaviside function because it's "heavy" on one side. You'll see what I mean.

Definition (Heaviside Function). Define the Heaviside function by \[ H_{a}(x) = \left\{ \begin{array}{lr} 0 & x \lt a\\ 1 & x\geq a \end{array} \right.\]

The Heaviside function looks pretty boring, but it turns out to be quite useful. Here's a picture of the Heaviside function $H_{0}(x)$:

Most of the time, when we multiply something by the Heaviside function, it just sends part of the function to 0 and leaves the rest unchanged. For example, with the function $f(x) = e^{-x}\sin(2x)$ and the Heaviside function $H_{0}(x)$, their product is:

The dotted part of the graph is the rest of $e^{-x}\sin(2x)$ that the Heaviside function "sent to 0." The Heaviside function is nice if you want to cut off useless data: if, for example, you make $\$4$ an hour as a graduate student and you plot your earnings as a function of hours worked, you don't want the plot to extend to the negative axis, since there's no such thing as a negative hour.
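Here's a quick numerical sketch of that "cutting off" behavior, assuming we code $H_{a}$ straight from the definition above and sample the product $H_{0}(x)e^{-x}\sin(2x)$ at a few points:

```python
import numpy as np

def heaviside(x, a=0.0):
    """H_a(x): 0 for x < a, 1 for x >= a."""
    return np.where(x < a, 0.0, 1.0)

x = np.linspace(-2, 2, 9)
f = np.exp(-x) * np.sin(2 * x)
product = heaviside(x) * f

for xi, fi, pi in zip(x, f, product):
    print(f"x = {xi:5.2f}   f(x) = {fi:8.4f}   H_0(x) f(x) = {pi:8.4f}")
```

Every value with $x < 0$ gets sent to 0, and every value with $x \geq 0$ passes through untouched.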

Let's look at what happens when we take the Laplace transform of the Heaviside function (assuming $s > 0$, so that $e^{-st}\to 0$ as $t\to\infty$): \[\begin{align*}{\mathcal L}(H_{a})(s) &= \int_{-\infty}^{\infty} H_{a}(t)e^{-st}\,dt\\ &= \int_{a}^{\infty}e^{-st}\,dt\\ &= \frac{-1}{s}\left[e^{-st}\right]_{a}^{\infty}\\ &= \frac{-1}{s}(0 - e^{-sa}) = \frac{e^{-sa}}{s}.\end{align*}\]
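As a sanity check on that computation, we can compare the integral $\int_{a}^{\infty} e^{-st}\,dt$ against the closed form $e^{-sa}/s$ numerically; the particular values of $s$ and $a$ below are arbitrary positive test numbers chosen just for illustration.

```python
import numpy as np
from scipy.integrate import quad

s, a = 1.5, 2.0   # arbitrary positive test values

# H_a(t) e^{-st} is 0 below t = a, so integrate e^{-st} from a to infinity.
numerical, _ = quad(lambda t: np.exp(-s * t), a, np.inf)
closed_form = np.exp(-s * a) / s

print(numerical, closed_form)   # both about 0.0332
```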

To sum up,...

Let's jot all of this down in a box, just so we can refer back to it.

Facts. \[{\mathcal L}(\delta_{a})(s) = e^{-as},\] \[{\mathcal L}(H_{a})(s) = \frac{e^{-as}}{s}.\]

Next time...?

There are a number of different ways to combine these kinds of functions, but, generally speaking, there isn't a nice way, given functions $f,g$, to find ${\mathcal L}(f(t)g(t))(s)$ in terms of ${\mathcal L}(f)(s)$ and ${\mathcal L}(g)(s)$. One would expect the Laplace transform of the product to be the product of the Laplace transforms of each function, but this isn't true. However, if we make one slight alteration and take something called the convolution of $f$ and $g$, denoted $f\ast g$, then the Laplace transform of the convolution is equal to the product of the Laplace transforms of each function: ${\mathcal L}(f\ast g)(s)={\mathcal L}(f)(s){\mathcal L}(g)(s)$. We'll talk a bit more about convolutions in general next time, and then apply them to Laplace transforms.
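As a tiny preview (the convolution itself gets defined properly next time), here's a hedged SymPy check of that last identity for one simple pair of functions, assuming the usual convolution $(f\ast g)(t) = \int_{0}^{t} f(\tau)g(t-\tau)\,d\tau$:

```python
from sympy import symbols, integrate, laplace_transform, exp, simplify

t, s, tau = symbols('t s tau', positive=True)

f = t            # two simple test functions
g = exp(-t)

# (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) dtau
conv = integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

lhs = laplace_transform(conv, t, s, noconds=True)
rhs = laplace_transform(f, t, s, noconds=True) * laplace_transform(g, t, s, noconds=True)
print(simplify(lhs - rhs))   # 0, so L(f * g) = L(f) L(g) for this pair
```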