Circuit Theory/Laplace Transform

Laplace Transform
The Laplace Transform is a powerful tool that is very useful in electrical engineering. The transform allows equations in the "time domain" to be transformed into equivalent equations in the complex s domain. The Laplace transform is an integral transform, although the reader does not need a knowledge of integral calculus, because all results will be provided. This page will discuss the Laplace transform simply as a tool for solving and manipulating ordinary differential equations.

Laplace transformations of circuit elements are similar to phasor representations, but they are not the same. Laplace transformations are more general than phasors, and can be easier to use in some instances. Also, do not confuse the "complex s domain" with the complex power ideas discussed earlier. Complex power uses the variable $$\mathbb{S}$$, while the Laplace transform uses the variable s. The Laplace variable s has nothing to do with power.

The transform is named after the mathematician Pierre Simon Laplace (1749-1827). The transform itself did not become popular until Oliver Heaviside, a famous electrical engineer, began using a variation of it to solve electrical circuits.

Laplace Domain
The Laplace domain, or the "complex s domain", is the domain into which the Laplace transform maps a time-domain equation. s is a complex variable, composed of real and imaginary parts:


 * $$s = \sigma + j\omega$$

The Laplace domain graphs the real part ($$\sigma$$) on the horizontal axis and the imaginary part ($$\omega$$) on the vertical axis. The real and imaginary parts of s can be considered as independent quantities.

The similarity of this notation with the notation used in Fourier transform theory is no coincidence; for $$\sigma=0$$, the Laplace transform is the same as the Fourier transform if the signal is causal.

The Transform
The mathematical definition of the Laplace transform is as follows:


 * $$F(s) = \mathcal{L} \left\{f(t)\right\} = \int_{0^-}^\infty e^{-st} f(t)\,dt$$

The transform, by virtue of the definite integral, removes all t from the resulting equation, leaving instead the new variable s, a complex number that is normally written as $$s=\sigma+j\omega$$. In essence, this transform takes the function f(t), and "transforms it" into a function in terms of s, F(s). As a general rule the transform of a function f(t) is written as F(s). Time-domain functions are written in lower-case, and the resultant s-domain functions are written in upper-case.
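For readers who want to experiment, the transform can be computed symbolically. The sketch below uses Python's SymPy library (an illustrative tool choice, not part of this text's method) to transform the decaying exponential $$e^{-at}$$:

```python
# A sketch of computing a Laplace transform symbolically with SymPy.
# The test function e^{-a t} is an illustrative choice.
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# L{e^{-a t}} = 1/(s + a), one of the standard table entries
F = sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True)
print(F)  # 1/(a + s)
```

Note that the lower-case time function `exp(-a*t)` transforms into the upper-case s-domain function `F`, matching the notation convention above.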

We will use the following notation to show the transform of a function:


 * $$f(t) \Leftrightarrow F(s)$$

We use this notation because we can convert F(s) back into f(t) using the inverse Laplace transform.

The Inverse Transform
The inverse Laplace transform converts a function in the complex s domain to its counterpart in the time domain. Its mathematical definition is as follows:



 * $$f(t) = \mathcal{L}^{-1} \left\{F(s)\right\} = {1 \over {2\pi j}}\int_{c-j\infty}^{c+j\infty} e^{st} F(s)\,ds$$

where $$c$$ is a real constant such that all of the poles $$s_1,s_2,...,s_n$$ of $$F(s)$$ fall in the region $$\mathfrak{R}\{s_i\} < c$$. In other words, $$c$$ is chosen so that all of the poles of $$F(s)$$ are to the left of the vertical line intersecting the real axis at $$s=c$$.

The inverse transform is more difficult mathematically than the transform itself. However, luckily for us, extensive tables of Laplace transforms and their inverses have been computed and are available for easy browsing.
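Computer algebra systems can also look up inverses for us. As a sketch (again using Python's SymPy, an illustrative tool choice), inverting $$F(s) = \frac{1}{s^2+1}$$ recovers the familiar table entry $$\sin(t)$$:

```python
# Sketch: recovering a time-domain function from its transform with SymPy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Inverse transform of 1/(s^2 + 1); the table entry is sin(t)
f = sp.inverse_laplace_transform(1/(s**2 + 1), s, t)
print(f)  # sin(t)
```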

Transform Properties
The most important property of the Laplace Transform (for now) is as follows:


 * $$\mathcal{L} \left\{ f'(t) \right\} = sF(s) - f(0)$$

Likewise, we can express higher-order derivatives in a similar manner:


 * $$\mathcal{L} \left\{f''(t)\right\} = s^2F(s) - s f(0) - f'(0) $$

Or for an arbitrary derivative:


 * $$\mathcal{L} \left\{f^{(n)}(t)\right\} = s^nF(s) - \sum_{i=0}^{n-1} s^{(n-1-i)} f^{(i)}(0) $$

where the notation $$ f^{(n)}(t) $$ means the nth derivative of the function $$ f $$ at the point $$ t $$, and $$ f^{(0)}(t) $$ means $$ f(t) $$.

In plain English, the Laplace transform converts differentiation in the time domain into multiplication by s in the s domain. The only important thing to remember is that we must add in the initial conditions of the time-domain function, but for most circuits the initial conditions are zero, leaving us with nothing to add.
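The derivative property can be checked symbolically. A minimal sketch, using Python's SymPy and an arbitrary test function chosen for illustration:

```python
# Sketch: verifying L{f'(t)} = s F(s) - f(0) for one test function.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2*t)*sp.cos(3*t)  # illustrative test function

F = sp.laplace_transform(f, t, s, noconds=True)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s*F - f.subs(t, 0)

print(sp.simplify(lhs - rhs))  # 0
```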

For integrals, we get the following:


 * $$\mathcal{L}\left\{ \int_0^t f(\tau)\, d\tau \right\} = {1 \over s}F(s)$$

Initial Value Theorem
The Initial Value Theorem of the Laplace transform states the following:


 * $$f(0) \Leftrightarrow \lim_{s \to \infty} sF(s)$$

This is useful for finding the initial conditions of a function, which are needed when we transform a differentiation operation (see above).

Final Value Theorem
Similar to the Initial Value Theorem, the Final Value Theorem states that we can find the value of a function f, as t approaches infinity, from the Laplace domain, as such:


 * $$\lim_{t \to \infty} f(t) \Leftrightarrow \lim_{s \to 0} sF(s)$$

This is useful for finding the steady state response of a circuit. The final value theorem may only be applied to stable systems.
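Both theorems reduce to taking limits of sF(s). A sketch, using SymPy and the illustrative function $$f(t) = 2 - e^{-3t}$$, whose transform is $$F(s) = \frac{2}{s} - \frac{1}{s+3}$$:

```python
# Sketch: initial and final value theorems as limits of s*F(s).
import sympy as sp

s = sp.symbols('s')

# Transform of f(t) = 2 - e^{-3t}
F = 2/s - 1/(s + 3)

initial = sp.limit(s*F, s, sp.oo)  # f(0) = 2 - 1 = 1
final = sp.limit(s*F, s, 0)        # f(t) -> 2 as t -> oo
print(initial, final)  # 1 2
```

The pole of F(s) at s = -3 lies in the left half-plane, so the system is stable and the Final Value Theorem applies.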

Transfer Function
If we have a circuit with impulse response h(t) in the time domain, with input x(t) and output y(t), we can find the Transfer Function of the circuit in the Laplace domain by transforming all three elements:



In this situation, H(s) is known as the "Transfer Function" of the circuit. It can be defined as both the transform of the impulse response, or the ratio of the circuit output to its input in the Laplace domain:


 * $$H(s) = \mathcal{L} \left\{h(t) \right\} = \frac{Y(s)}{X(s)}$$

Transfer functions are powerful tools for analyzing circuits. If we know the transfer function of a circuit, we have all the information we need to understand the circuit, and we have it in a form that is easy to work with. When we have obtained the transfer function, we can say that the circuit has been "solved" completely.
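As an illustrative sketch (the RC low-pass circuit is an assumed example, not taken from the text), the transfer function of a series resistor feeding a capacitor, with the output voltage taken across the capacitor, follows from the s-domain voltage divider:

```python
# Sketch: transfer function of an assumed RC low-pass voltage divider.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
R, C = sp.symbols('R C', positive=True)

# H(s) = Z_C / (Z_R + Z_C), with Z_R = R and Z_C = 1/(sC)
H = sp.simplify((1/(s*C)) / (R + 1/(s*C)))
print(H)  # 1/(C*R*s + 1)

# The impulse response h(t) is the inverse transform of H(s)
h = sp.inverse_laplace_transform(H, s, t)
print(h)
```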

Convolution Theorem
Earlier it was mentioned that we could compute the output of a system from the input and the impulse response by using the convolution operation. As a reminder, given a system with input x(t) and impulse response h(t), we can calculate the output using the convolution operation, as such:


 * $$y(t) = x(t) * h(t)$$

Where the asterisk denotes convolution, not multiplication. However, in the s domain this operation becomes much easier, because of a property of the Laplace transform:


 * $$\mathcal{L} \left\{ a(t) * b(t) \right\} = A(s)B(s)$$

Where the asterisk operator denotes the convolution operation. This leads us to an English statement of the convolution theorem: convolution in the time domain corresponds to multiplication in the s domain.

Now, if we have a system in the Laplace domain, we can compute the output Y(s) from the input X(s) and the Transfer Function H(s):


 * $$Y(s) = X(s)H(s)$$

Notice that this property is very similar to phasors, where the output can be determined by multiplying the input by the network function. The network function and the transfer function then, are very similar quantities.
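The convolution property can be verified symbolically for a concrete pair of signals. A sketch, using SymPy and two exponentials chosen for illustration:

```python
# Sketch: L{a(t) * b(t)} = A(s)B(s) for a(t) = e^{-t}, b(t) = e^{-2t}.
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

a = sp.exp(-t)
b = sp.exp(-2*t)

# Time-domain convolution over [0, t]
conv = sp.integrate(a.subs(t, tau) * b.subs(t, t - tau), (tau, 0, t))

A = sp.laplace_transform(a, t, s, noconds=True)
B = sp.laplace_transform(b, t, s, noconds=True)
lhs = sp.laplace_transform(conv, t, s, noconds=True)

print(sp.simplify(lhs - A*B))  # 0
```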

Resistors
The Laplace transform can be used independently on different circuit elements, and then the circuit can be solved entirely in the s domain (which is often much easier). Let's take a look at some of the circuit elements:

Resistors are time and frequency invariant. Therefore, the transform of a resistor is the same as the resistance of the resistor:


 * $$R(s) = r$$

Compare this result to the phasor impedance value for a resistance r:


 * $$Z_r = r \angle 0$$

You can see very quickly that resistance values are very similar between phasors and Laplace transforms.

Ohm's Law
If we transform Ohm's law, we get the following equation:


 * $$V(s) = I(s)R$$

Now, following Ohm's law, the resistance of the circuit element is the ratio of the voltage to the current. So, we will solve for the quantity $$\frac{V(s)}{I(s)}$$, and the result will be the resistance of our circuit element:


 * $$R = \frac{V(s)}{I(s)}$$

This ratio, the input/output ratio of our resistor, is an important quantity, and we will find this quantity for all of our circuit elements. We can say that the transform of a resistor with resistance r is given by:


 * $$\mathcal{L}\{\text{resistor}\} = R = r$$

Capacitors
Let us look at the relationship between voltage, current, and capacitance, in the time domain:


 * $$i(t) = C\frac{dv(t)}{dt}$$

Solving for voltage (assuming the capacitor is initially uncharged), we get the following integral:


 * $$v(t) = \frac{1}{C}\int_{0}^{t} i(\tau)\,d\tau$$

Then, transforming this equation into the Laplace domain, we get the following:


 * $$V(s) = \frac{1}{C} \frac{1}{s} I(s)$$

Again, if we solve for the ratio $$\frac{V(s)}{I(s)}$$, we get the following:


 * $$\frac{V(s)}{I(s)} = \frac{1}{sC}$$

Therefore, the transform for a capacitor with capacitance C is given by:


 * $$\mathcal{L}\{\text{capacitor}\} = \frac{1}{sC}$$

Inductors
Let us look at our equation for inductance:


 * $$v(t) = L \frac{di(t)}{dt}$$

Putting this into the Laplace domain (assuming zero initial current), we get the formula:


 * $$V(s) = sLI(s)$$

And solving for our ratio $$\frac{V(s)}{I(s)}$$, we get the following:


 * $$\frac{V(s)}{I(s)} = sL$$

Therefore, the transform of an inductor with inductance L is given by:


 * $$\mathcal{L}\{\text{inductor}\} = sL$$

Impedance
Since all the load elements can be combined into a single format dependent on s, we call the effect of all load elements impedance, the same as we call it in phasor representation. We denote impedance values with a capital Z (but not a phasor $$\mathbb{Z}$$).
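The three element transforms combine exactly like resistances. A sketch, assuming a series RLC connection (an illustrative topology, not from the text):

```python
# Sketch: series impedance in the s domain; element impedances simply add.
import sympy as sp

s = sp.symbols('s')
R, L, C = sp.symbols('R L C', positive=True)

Z_R = R          # resistor
Z_L = s*L        # inductor
Z_C = 1/(s*C)    # capacitor

Z = Z_R + Z_L + Z_C   # series combination
print(sp.together(Z))
```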

Determining electric current in circuits


In the network shown, determine the character of the currents $$I_1(t)$$, $$I_2(t)$$, and $$I_3(t)$$, assuming that each current is zero at the instant the switch is closed.

Solution

Current flow at a junction in the circuit
Since the algebraic sum of the currents at any junction is zero,

$$ I_1(t)-I_2(t)-I_3(t) = 0 $$.........(182)

Voltage balance on a circuit
Applying the voltage law to the circuit on the left we get

$$ I_1(t)R_1 + L_2 \frac{dI_2(t)}{dt}\ =E(t) $$......... (182-1)

Applying the voltage law again to the outer loop, and given that E is constant, we get

$$ I_1(t)R_1+I_3(t)R_3 + L_3 \frac{dI_3(t)}{dt}\ =E(t) $$......... (182-2)

Laplace Transforms of current and voltage equations
Transforming (182), (182-1) and (182-2), we get

$$ i_1(s)-i_2(s)-i_3(s)=0$$.........(182-3)

$$ i_1(s)R_1 +sL_2i_2(s)=\frac{E}{s}\ $$......... (182-4)

$$ i_1(s)R_1 +( R_3 + sL_3 ) i_3(s)= \frac{E}{s}\ $$......... (182-5)

Review of implementing the Laplace transformation
The three Laplace-transformed equations (182-3), (182-4), and (182-5) show the benefit of integral transformation: it converts differential equations into linear algebraic equations that can be solved for the dependent variables (the three currents in this case), then inverse-transformed to yield the required solution.


 * In equation (182-3), we utilized the sum property of Laplace transforms.
 * In equation (182-4), we utilized the transform of the derivative, as follows.

$$ si_2(s)-I_2(0)= \mathcal{L}\left\lbrace\frac {dI_2}{dt}\right\rbrace $$.........(182-4.1), where we substituted the given initial condition $$ I_2(0)=0$$.


 * In equation (182-5), we also utilized the transform of the derivative:

$$ si_3(s)-I_3(0)= \mathcal{L}\left\lbrace\frac {dI_3}{dt}\right\rbrace $$.........(182-5.2)

Again, we substituted the given initial condition $$ I_3(0)=0$$.

The fact that the applied voltage was a step function implied the use of the Laplace transform of a step function, as follows:

$$ \frac {E}{s}= \mathcal{L}\left\lbrace E \right\rbrace $$.........(182-5.3)

Solution of the linear simultaneous equations
The three linear simultaneous equations (182-3), (182-4), and (182-5) have the three unknowns $$i_1(s)$$, $$i_2(s)$$, and $$i_3(s)$$, and can be solved by Cramer's rule, among other simple methods of elimination, as follows.

$$ i_1(s) = \frac{ \begin{vmatrix} 0 & -1 & -1 \\ \frac{E}{s}\ & sL_2 & 0 \\ \frac{E}{s}\ & 0 &R_3 + sL_3 \\ \end{vmatrix}}{ \Delta}= \frac{E}{s}\frac{R_3+s(L_2+L_3)}{\Delta} $$ .........    (182-6)

Where, the determinant ∆ for the matrix is determined as follows $$ \Delta = \begin{vmatrix} 1 & -1 & -1 \\ R_1 & sL_2 & 0 \\ R_1 & 0 &R_3 + sL_3 \\ \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ R_1 & sL_2+R_1 & R_1 \\ R_1 & R_1 & R_1+R_3 + sL_3 \\ \end{vmatrix} $$

$$ \Delta = s^2L_2L_3+s(R_1L_2+R_3L_2+R_1L_3)+R_1R_3$$......... (182-6.1)

Since we are interested in the factors of Δ, we consider the equation Δ = 0. Since all coefficients of this equation are positive, it cannot have any positive roots. Its discriminant is

$$(R_1L_2+R_3L_2+R_1L_3)^2-4L_2L_3R_1R_3$$ ........ (182-6.1.1)

which can be written


 * $$R^2_1L^2_2+2R_1L_2(R_3L_2+R_1L_3)+(R_3L_2-R_1L_3)^2$$........ (182-6.1.2)

which is positive. Hence the equation Δ = 0 has two distinct negative roots, $$-\alpha_1$$ and $$-\alpha_2$$, say.

Therefore, $$\Delta = L_2L_3(s+\alpha_1)(s+\alpha_2)$$ ......... (182-6.2)

where $$-\alpha_1$$ and $$-\alpha_2$$ are the roots of the quadratic equation (182-6.1), as follows:

$$\alpha_1=\frac{1}{2}\left\lbrace\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}+\sqrt {\left(\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}\right)^2-4\frac{R_1R_3}{L_2L_3}}\right\rbrace $$ ......... (182-6.2.1)

$$\alpha_2=\frac{1}{2}\left\lbrace\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}-\sqrt {\left(\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}\right)^2-4\frac{R_1R_3}{L_2L_3}}\right\rbrace $$ ......... (182-6.2.2)

Therefore, equations (182-6) and (182-6.2) give

$$i_1(s)=\frac{E}{s}.\frac{R_3+s(L_2+L_3)}{L_2L_3(s+\alpha_1)(s+\alpha_2)} $$

$$i_1(s)=\frac{A_0}{s}+\frac{A_1}{s+\alpha_1}+\frac{A_2}{s+\alpha_2} $$.........(182-7)

The constants $$A_0$$, $$A_1$$, and $$A_2$$ are obtained in terms of $$R_1$$, $$L_2$$, $$L_3$$, and $$R_3$$ and are given as:

$$A_0=\frac{ER_3}{L_2L_3\alpha_1\alpha_2}$$ .........(182-7.1)

$$A_1=E\frac{R_3\alpha_2-\alpha_1\alpha_2(L_2+L_3)}{L_2L_3\alpha_1\alpha_2(\alpha_1-\alpha_2)}$$.........(182-7.2)

$$A_2=E\frac{\alpha_2(L_2+L_3)-R_3}{L_2L_3\alpha_2(\alpha_1-\alpha_2)}$$.........(182-7.3)

Inverse Laplace Transforms of current equations
The inverse Laplace transform of (182-7) is therefore $$I_1(t)=\mathcal{L}^{-1}\left\lbrace\frac{A_0}{s}+\frac{A_1}{s+\alpha_1}+\frac{A_2}{s+\alpha_2}\right\rbrace =A_0+A_1e^{-\alpha_1t}+A_2e^{-\alpha_2t} $$.........(182-8)

The remaining variables $$I_2(t)$$ and $$I_3(t)$$, and the corresponding voltages, are determined by equations (182), (182-1), and (182-2).
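To make the result concrete, the coefficients and the current $$I_1(t)$$ can be evaluated numerically. The component values below are hypothetical (the original problem leaves them symbolic); the sketch checks that $$I_1(0)=0$$ and that the steady-state current is $$E/R_1$$, as the final value theorem predicts:

```python
# Sketch: numeric evaluation of I_1(t) for hypothetical component values.
import numpy as np

R1, R3 = 10.0, 5.0   # ohms (assumed values)
L2, L3 = 1.0, 2.0    # henries (assumed values)
E = 20.0             # volts (assumed value)

# Delta = L2*L3*s^2 + (R1*L2 + R3*L2 + R1*L3)*s + R1*R3 = 0 gives the poles
b = R1*L2 + R3*L2 + R1*L3
alpha1, alpha2 = -np.roots([L2*L3, b, R1*R3])

# Partial-fraction constants (residues) of i1(s) in (182-7)
A0 = E*R3 / (L2*L3*alpha1*alpha2)                                 # = E/R1
A1 = E*(R3 - alpha1*(L2 + L3)) / (L2*L3*alpha1*(alpha1 - alpha2))
A2 = E*(alpha2*(L2 + L3) - R3) / (L2*L3*alpha2*(alpha1 - alpha2))

def I1(t):
    return A0 + A1*np.exp(-alpha1*t) + A2*np.exp(-alpha2*t)

print(I1(0.0))    # ~0: current starts from rest
print(I1(100.0))  # ~E/R1 = 2.0: steady state
```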

Analysis of circuit dynamics
The electric current $$I_1(t)$$ in equation (182-8) shows a time-independent component $$A_0$$ and two decaying exponential terms, which vanish as t approaches ∞. In other words, the currents in the three branches lack sinusoidal oscillation, mainly because: (1) the applied voltage is constant, and (2) the circuit has no capacitive components.

Note: This example could be modified in various ways to involve a voltage impulse, a sinusoidal voltage source, capacitance, and various boundary and initial conditions on charges and currents.

Generalization of the method
In the above example, the following modifications can be made:

(1) The applied voltage in Kirchhoff's voltage equation can take many forms, such as


 * $$E(t)=E_o\delta(t)$$
 * $$E(t)=E_o\sin(\omega t)$$
 * $$E(t)=E_of(t)$$

(2) Capacitance adds an integral term of the current over time, as


 * $$ \frac{1}{C}\int_0^t I(\tau)\,d\tau$$