Signals and Systems/Time Domain Analysis

There are many tools available to analyze a system in the time domain, although many of these tools are very complicated and involved. Nonetheless, these tools are invaluable for use in the study of linear signals and systems, so they will be covered here.

Linear Time-Invariant (LTI) Systems
This page defines an LTI system, and that definition will be used to motivate the definition of convolution as the output of an LTI system in the next section. We begin by defining a system and listing the LTI properties. It can then be shown that, for a given input, the output of an LTI system is the convolution of the input with the system's impulse response, motivating the definition of convolution.

Consider a system for which an input of xi(t) results in an output of yi(t), for i = 1, 2.

Linearity
There are 2 requirements for linearity. A function must satisfy both to be called "linear".


 * 1) Additivity: An input of $$x_3(t) = x_1(t) + x_2(t)$$ results in an output of $$y_3(t) = y_1(t) + y_2(t)$$.
 * 2) Homogeneity: An input of $$ax_1(t)$$ results in an output of $$ay_1(t)$$.

Being linear is also known in the literature as "satisfying the principle of superposition". Superposition is a fancy term for saying that the system is additive and homogeneous. The terms linearity and superposition can be used interchangeably, but in this book we will prefer to use the term linearity exclusively.

We can combine the two requirements into a single equation: In a linear system, an input of $$a_1x_1(t)+a_2x_2(t)$$ results in an output of $$a_1y_1(t)+a_2y_2(t)$$.

Additivity
A system is said to be additive if a sum of inputs results in a sum of outputs. To test for additivity, we need to create two arbitrary inputs, x1(t) and x2(t). We then use these inputs to produce two respective outputs:


 * $$y_1(t) = f(x_1(t))$$
 * $$y_2(t) = f(x_2(t))$$

Now, we apply the sum of the inputs and check that the system output is the sum of the previous outputs:


 * $$f(x_1(t) + x_2(t)) = y_1(t) + y_2(t)$$

If this relationship is not satisfied for every possible pair of inputs, then the system is not additive.

Homogeneity
Similar to additivity, a system is homogeneous if a scaled input (multiplied by a constant) results in a scaled output. If we have two inputs to a system:


 * $$y_1(t) = f(x_1(t))$$
 * $$y_2(t) = f(x_2(t))$$

where


 * $$x_1(t) = cx_2(t)$$

for an arbitrary constant c. If this is the case, then the system is homogeneous if


 * $$y_1(t) = cy_2(t)$$

for any such constant c.
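
The two tests above can be run numerically. The following is a minimal sketch (not from the text, and limited to memoryless systems acting sample-by-sample on Python lists); the example systems — an amplifier, a squarer, and a constant-offset system — are our own illustrations.

```python
def is_linear(f, x1, x2, c=3.0, tol=1e-9):
    """Test a memoryless system f for additivity and homogeneity
    on the given sample inputs (a spot check, not a proof)."""
    y1 = [f(v) for v in x1]
    y2 = [f(v) for v in x2]
    # Additivity: f(x1 + x2) must equal f(x1) + f(x2), sample by sample
    additive = all(abs(f(a + b) - (p + q)) < tol
                   for a, b, p, q in zip(x1, x2, y1, y2))
    # Homogeneity: f(c * x1) must equal c * f(x1)
    homogeneous = all(abs(f(c * a) - c * p) < tol for a, p in zip(x1, y1))
    return additive and homogeneous

x1 = [0.0, 1.0, -2.0, 0.5]
x2 = [3.0, -1.0, 4.0, 2.0]

print(is_linear(lambda v: 3 * v, x1, x2))   # amplifier: linear -> True
print(is_linear(lambda v: v ** 2, x1, x2))  # squarer: nonlinear -> False
print(is_linear(lambda v: v + 1, x1, x2))   # constant offset -> False
```

Note that the constant-offset system fails both tests: adding a constant makes the system affine, not linear.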

Time Invariance
If the input signal x(t) produces an output y(t), then any time-shifted input x(t + δ) results in a time-shifted output y(t + δ).

This property is satisfied when the system's defining relationship does not depend explicitly on time: time may enter only through the input and output signals themselves.
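
Time invariance can likewise be checked numerically. A minimal discrete-time sketch (assuming signals stored as Python lists; the example systems are our own choices): delaying the input by k samples should simply delay the output by k samples.

```python
def moving_average(x):
    """Time-invariant system: average of current and previous sample."""
    return [(x[n] + (x[n - 1] if n > 0 else 0.0)) / 2 for n in range(len(x))]

def time_ramp(x):
    """Time-varying system: the gain grows with the sample index n."""
    return [n * x[n] for n in range(len(x))]

def is_time_invariant(system, x, k=2):
    shifted_in = [0.0] * k + x            # delay the input by k samples
    y_then_shift = [0.0] * k + system(x)  # delay the output by k samples
    return system(shifted_in)[:len(y_then_shift)] == y_then_shift

x = [1.0, 2.0, 3.0, 4.0]
print(is_time_invariant(moving_average, x))  # True
print(is_time_invariant(time_ramp, x))       # False
```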

Linear Time Invariant (LTI) Systems
A system is linear time-invariant (LTI) if it satisfies both the property of linearity and the property of time invariance. This book will study LTI systems almost exclusively, because they are the easiest systems to work with and the most amenable to analysis and design.

Other Function Properties
Besides being linear, or time-invariant, there are a number of other properties that we can identify in a function:

Memory
A system is said to have memory if the output from the system is dependent on past inputs (or future inputs) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications.

Causality
Causality is a property that is very similar to memory. A system is called causal if its output depends only on past or current inputs, and non-causal if the output depends on future inputs. Most practical systems are causal.

Stability
Stability is a very important concept in systems, but it is also one of the hardest function properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if we apply 5 volts to the input terminals of a given circuit, we would like it if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO.

Studying BIBO stability is a relatively complicated course of study, and later books on the Electrical Engineering bookshelf will attempt to cover the topic.
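
The BIBO idea can be illustrated with a discrete sketch (the two example systems below are our own illustrations): a two-point averager keeps a bounded input bounded, while a running accumulator does not — feed it a constant and its output grows without limit for as long as the input continues.

```python
def averager(x):
    """BIBO stable: output never exceeds the input's bound."""
    return [(x[n] + (x[n - 1] if n > 0 else 0.0)) / 2 for n in range(len(x))]

def accumulator(x):
    """Not BIBO stable: a running sum grows without bound
    for a constant input."""
    total, y = 0.0, []
    for v in x:
        total += v
        y.append(total)
    return y

x = [1.0] * 100                              # bounded input, |x[n]| <= 1
print(max(abs(v) for v in averager(x)))      # 1.0   (stays bounded)
print(max(abs(v) for v in accumulator(x)))   # 100.0 (grows with input length)
```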

Linear Operators
Mathematical operators that satisfy the property of linearity are known as linear operators. Here are some common linear operators:


 * 1) Derivative
 * 2) Integral
 * 3) Fourier Transform
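
The linearity of the first of these can be spot-checked numerically. A sketch using a finite-difference approximation of the derivative (the test functions and coefficients are arbitrary choices): D(a·f + b·g) should match a·D(f) + b·D(g) sample for sample.

```python
def diff(x, dt=0.01):
    """Forward finite-difference approximation of the derivative."""
    return [(x[n + 1] - x[n]) / dt for n in range(len(x) - 1)]

ts = [n * 0.01 for n in range(100)]
f = [t ** 2 for t in ts]
g = [3 * t for t in ts]
a, b = 2.0, -5.0

# Derivative of the combination vs. combination of the derivatives
lhs = diff([a * u + b * v for u, v in zip(f, g)])
rhs = [a * u + b * v for u, v in zip(diff(f), diff(g))]
print(max(abs(p - q) for p, q in zip(lhs, rhs)) < 1e-9)  # True
```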

Impulse Response
The impulse response tells us how a system reacts when we hit it with an impulse signal (also called the Dirac delta function, δ(t)). The impulse response is a very important tool in analyzing the behaviour of systems.
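
As a discrete-time sketch (the first-order difference equation below is our own illustrative choice, not from the text), feeding a unit impulse into a system reveals its impulse response directly:

```python
def system(x):
    """First-order difference equation: y[n] = 0.5*y[n-1] + x[n]."""
    y, prev = [], 0.0
    for v in x:
        prev = 0.5 * prev + v
        y.append(prev)
    return y

impulse = [1.0] + [0.0] * 5   # discrete unit impulse delta[n]
h = system(impulse)           # impulse response: h[n] = 0.5**n
print(h)                      # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```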

Zero-Input Response
The zero-input response is the part of the system output due to the initial conditions alone; it is found by setting the input to zero. For example, one might work with a unit-step input and a first-order impulse response:

 * $$x(t)=u(t)$$
 * $$h(t)=e^{-t}u(t)$$

Zero-State Response
The zero-state response, also called the forced response, is the system response y(t) to an input f(t) when the system is in the zero state; that is, when all initial conditions are zero.
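
As a concrete sketch of how the two responses fit together (a standard first-order example, not taken from the text), consider the system $$\frac{dy}{dt} + ay(t) = f(t)$$ with initial condition y(0). Its total response splits into the two pieces defined above:


 * Zero-input response (initial conditions only): $$y_{zi}(t) = y(0)e^{-at}$$
 * Zero-state response (zero initial conditions): $$y_{zs}(t) = \int_0^t e^{-a(t-\tau)}f(\tau)\,d\tau$$

The complete output is the sum $$y(t) = y_{zi}(t) + y_{zs}(t)$$.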

Second-Order Solution

 * Example. Finding the total response of a driven RLC circuit.

Convolution
Convolution ("folding together") is an operation that combines two signals by time-shifting one of them, multiplying the two together, and integrating the product.

The convolution a * b of two functions a and b is defined as the function:


 * $$(a*b)(t) = \int_{-\infty}^\infty a(\tau)b(t - \tau)d\tau$$

The Greek letter τ (tau) is used as the integration variable, because the letter t is already in use. τ is used as a "dummy variable" because we use it merely to calculate the integral.

In the convolution integral, function b is first time-inverted: b(τ) becomes b(-τ). Graphically, this moves everything from the right side of the y axis to the left side and vice versa; time inversion turns the function into a mirror image of itself.

Next, function b is time-shifted by the variable t, giving b(t - τ). Because the integration runs over τ, we are now working in the τ domain rather than directly in the time domain, and t can be used as a shift parameter.

We then multiply the two functions together and take the area under the resulting curve at each value of the shift t. As t increases, the two functions overlap by increasing amounts until some point, after which they overlap less and less. Wherever the two functions overlap, the convolution has a value; over any range where one (or both) of the functions is zero, the product, and hence the contribution to the convolution, is zero.

After the integration, the dummy variable τ disappears and the result is once again a function of t. It is important to remember that the resulting function is a combination of the two input functions, and shares some properties of both.
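
The integral above has a discrete analogue, the convolution sum, which makes the flip-shift-multiply-sum procedure concrete. A minimal sketch (the test sequences are arbitrary):

```python
def convolve(a, b):
    """Discrete convolution sum: y[n] = sum over k of a[k] * b[n - k]."""
    n_out = len(a) + len(b) - 1
    y = []
    for n in range(n_out):
        total = 0.0
        for k in range(len(a)):          # k plays the role of tau
            if 0 <= n - k < len(b):      # b[n - k]: flipped and shifted
                total += a[k] * b[n - k]
        y.append(total)
    return y

print(convolve([1, 2, 3], [0, 1, 0.5]))
# [0.0, 1.0, 2.5, 4.0, 1.5]
```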

Properties of Convolution
The convolution function satisfies certain conditions:


 * Commutativity:
 * $$f * g = g * f \,$$


 * Associativity:
 * $$f * (g  * h) = (f  * g)  * h \,$$


 * Distributivity:
 * $$f * (g + h) = (f  * g) + (f  * h) \,$$


 * Associativity With Scalar Multiplication:
 * $$a (f * g) = (a f)  * g = f  * (a g) \,$$

for any real (or complex) number a.


 * Differentiation Rule
 * $$(f * g)' = f' * g = f * g' \,$$
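
These identities can be spot-checked numerically with a discrete convolution sum (a sketch, not a proof; the test sequences are arbitrary):

```python
def convolve(a, b):
    """Discrete convolution sum."""
    y = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            y[i + j] += av * bv
    return y

f, g, h = [1.0, -2.0], [3.0, 0.5, 2.0], [4.0, 1.0, 0.0]  # g, h same length

# Commutativity: f * g == g * f
print(convolve(f, g) == convolve(g, f))                   # True

# Distributivity: f * (g + h) == (f * g) + (f * h)
lhs = convolve(f, [a + b for a, b in zip(g, h)])
rhs = [a + b for a, b in zip(convolve(f, g), convolve(f, h))]
print(lhs == rhs)                                         # True
```

A numerical check at a handful of sequences is of course only a sanity test; the algebraic proofs hold for all functions.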

Correlation
Akin to convolution is a technique called correlation, which combines two functions in the time domain into a single resultant function in the time domain. Correlation is not as important to our study as convolution is, but it has a number of properties that will be useful nonetheless.

The correlation of two functions, g(t) and h(t) is defined as such:


 * $$R_{gh}(t) = \int_{-\infty}^\infty g(\tau)h(t + \tau) d\tau$$

Where the capital R is the Correlation Operator, and the subscripts to R are the arguments to the correlation operation.

We notice immediately that correlation is similar to convolution, except that we don't time-invert the second argument before we shift and integrate. Because of this, we can define correlation in terms of convolution, as such:


 * $$R_{gh}(t) = g(-t) * h(t)$$

Uses of Correlation
Correlation is used in many places because it demonstrates one important fact: correlation measures how much similarity there is between the two argument functions. The greater the area under the correlation curve, the greater the similarity between the two signals.
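
As a discrete sketch of this idea (the pulse signals are our own illustrative choice), correlating a pulse against a matching template produces a larger peak than correlating it against a dissimilar signal:

```python
def correlate(g, h):
    """Discrete correlation sum: R[n] = sum over k of g[k] * h[n + k]
    (no time flip, unlike convolution)."""
    out = []
    for n in range(-(len(g) - 1), len(h)):   # all lags n
        total = 0.0
        for k in range(len(g)):
            if 0 <= n + k < len(h):
                total += g[k] * h[n + k]
        out.append(total)
    return out

pulse = [0.0, 1.0, 1.0, 0.0]
match = [0.0, 1.0, 1.0, 0.0]                 # identical pulse
other = [1.0, -1.0, 1.0, -1.0]               # dissimilar signal

print(max(correlate(pulse, match)))          # 2.0 -- strong similarity
print(max(correlate(pulse, other)))          # 1.0 -- weak similarity
```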

Autocorrelation
The term "autocorrelation" is the name of the operation when a function is correlated with itself. The autocorrelation is denoted when both of the subscripts to the Correlation operator are the same:


 * $$R_{xx}(t) = x(t) * x(-t)$$

While it might seem ridiculous to correlate a function with itself, there are a number of uses for autocorrelation that will be discussed later. Autocorrelation satisfies several important properties:


 * 1) The maximum value of the autocorrelation always occurs at t = 0; that is, $$|R_{xx}(t)| \le R_{xx}(0)$$. As t approaches infinity, the autocorrelation decreases, stays constant, or (if the signal is periodic) fluctuates.
 * 2) Autocorrelation is an even function, symmetric about the vertical axis: $$R_{xx}(-t) = R_{xx}(t)$$.
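
Both properties can be spot-checked numerically with a discrete version of the autocorrelation (a sketch, using an arbitrary real test signal):

```python
def autocorrelate(x):
    """Discrete autocorrelation: R[n] = sum over k of x[k] * x[n + k],
    evaluated at every lag n."""
    lags = range(-(len(x) - 1), len(x))
    return [sum(x[k] * x[n + k] for k in range(len(x)) if 0 <= n + k < len(x))
            for n in lags]

x = [1.0, 3.0, -2.0, 0.5]
r = autocorrelate(x)
mid = len(x) - 1             # index corresponding to lag t = 0

print(r[mid] == max(r))      # peak at t = 0 -> True
print(r == r[::-1])          # even symmetry -> True
```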

Crosscorrelation
Crosscorrelation is every instance of correlation that is not autocorrelation; in general, it occurs when the two function arguments to the correlation are not equal. Crosscorrelation is used to measure the similarity between two different signals.


 * $$R_{gh}(t) = \int_{-\infty}^\infty g(\tau)h(t + \tau) d\tau$$
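
A common practical sketch of crosscorrelation (our own example, not from the text): the lag at which the correlation peaks recovers the delay between a signal and a shifted copy of it.

```python
def correlate(g, h):
    """Discrete correlation sum, returning (lags, values)."""
    lags = list(range(-(len(g) - 1), len(h)))
    vals = [sum(g[k] * h[n + k] for k in range(len(g)) if 0 <= n + k < len(h))
            for n in lags]
    return lags, vals

g = [0.0, 2.0, 5.0, 2.0, 0.0, 0.0, 0.0]
h = [0.0, 0.0, 0.0, 0.0, 2.0, 5.0, 2.0]   # g delayed by 3 samples

lags, vals = correlate(g, h)
best = lags[vals.index(max(vals))]        # lag of the correlation peak
print(best)  # 3 -- the peak lag recovers the delay
```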