Talk:Control Systems/Transfer Functions

In the "impulse response" section it says h(t) = y(t)/x(t). This is wrong. The relationship between h(t), y(t), and x(t) is shown further down as y(t) = the convolution of h(t) and x(t).

See, in the Laplace domain convolution becomes multiplication, which is why Y(s) = H(s)X(s), and hence H(s) = Y(s)/X(s). Again, to say h(t) = y(t)/x(t) is inaccurate.
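A small numeric check of this point (my own sketch, with a made-up h and x): the convolution y = h * x holds sample by sample, but dividing y(t) by x(t) pointwise does not recover h(t).

```python
def convolve(h, x):
    """Discrete convolution: y[n] = sum_k h[k] * x[n - k]."""
    y = [0] * (len(h) + len(x) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

h = [1, 2]       # impulse response (assumed, for illustration)
x = [1, 1, 1]    # input signal

y = convolve(h, x)
print(y)                                      # [1, 3, 3, 2]
print([y[n] / x[n] for n in range(len(x))])   # [1.0, 3.0, 3.0] -- not h
```

The pointwise quotient matches h only at the first sample; the ratio Y/X is legitimate only after transforming to the Laplace (or frequency) domain.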

Just wanted to let someone know who can hopefully edit this page.

-Asaf


Eigenvector treatment
It is very sad that all books just repeat certain mantras without explaining anything. Saying that the Fourier transform is just a change of variables does not explain anything to me either. Why a change of variables? How does going from the time domain to the frequency domain help us find the response? Why is linear algebra important?

My first guess is that the impulse and frequency responses are two alternative ways to get the response of a system to an arbitrary signal. They exploit the fact that any signal can be decomposed into a linear combination of basis signals. You can determine in advance how the system reacts to these basis signals (either a single pulse, or every frequency for the frequency response). Now you decompose your signal into these basis signals, determine the responses, and sum them up (linear systems allow doing this). Voila. That is the first point that I believe everybody must understand.
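The superposition idea above can be sketched in code (my own illustration, with an assumed impulse response h): write the input as a weighted sum of shifted unit pulses, pass each pulse through the system separately, and add the scaled responses back up.

```python
h = [1.0, 0.5, 0.25]    # assumed impulse response of some LTI system

def respond(x):
    """Response built by superposition of shifted, scaled impulse responses."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):        # x = sum_i  xi * delta(t - i)
        for k, hk in enumerate(h):    # response to delta(t - i) is h shifted by i
            y[i + k] += xi * hk
    return y

x = [2.0, 0.0, 1.0]
print(respond(x))   # [2.0, 1.0, 1.5, 0.5, 0.25] -- same as conv(h, x)
```

Summing shifted copies of h, each scaled by a sample of x, is exactly the convolution sum; linearity and time invariance are what make this decomposition legal.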

The impulse response approach considers your signal as consisting of pulses. This implies convolution, because to know the response at time t we need to sum the responses of the system to the pulses applied at times t-1, t-2, t-3, .... That is, you sum input(t-i) * h(i), where h(i) is the response of the system i seconds after a pulse, i.e. the response at time t to an input applied at t-i. Since you want to know the output y at all times t, you create a convolution matrix

$$Y = \begin{bmatrix}y_t \\ y_{t-1} \\ y_{t-2} \\ \vdots \end{bmatrix} = \begin{bmatrix}h_0 & h_1 & h_2 & \cdots \\ 0 & h_0 & h_1 & \cdots \\ 0 & 0 & h_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \, \begin{bmatrix}x_t \\ x_{t-1} \\ x_{t-2} \\ \vdots \end{bmatrix}$$
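A quick numeric check of the matrix form (my own sketch, with arbitrary small h and x, and samples beyond the window treated as zero): multiplying by the lower-triangular Toeplitz matrix built from h reproduces the convolution sums described above.

```python
h = [1, 2, 3]    # h_0, h_1, h_2
x = [4, 5, 6]    # x_t, x_{t-1}, x_{t-2}  (most recent sample first)

# Row r computes y_{t-r} = sum_i h_i * x_{t-r-i}; entry (r, c) holds
# h_{c-r} when that index exists, and 0 otherwise.
T = [[h[c - r] if 0 <= c - r < len(h) else 0 for c in range(len(x))]
     for r in range(len(x))]

y = [sum(T[r][c] * x[c] for c in range(len(x))) for r in range(len(x))]
print(T)   # [[1, 2, 3], [0, 1, 2], [0, 0, 1]]
print(y)   # [32, 17, 6]:  y_t = 1*4 + 2*5 + 3*6, y_{t-1} = 5 + 2*6, y_{t-2} = 6
```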

If you now recall that an impulse at the origin is equivalent to a constant 1 in the frequency spectrum, you will understand that these two are basically the same thing! You do not need to determine how your system responds to every particular frequency if you know the impulse response -- the impulse response contains all that information. In the time domain, the impulse response shows you the output of the system t seconds after the pulse is applied; the frequency response is just the Fourier transform of this time plot. It shows the spectrum you have at the output of the system after a single impulse is applied -- how every frequency, equal to 1 at the moment of the pulse, is scaled by the system. This spectrum is a series of eigenvalues, one attached to every frequency (eigenvector) of your system. This is the most fundamental characterization of your system.
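The "impulse has a flat spectrum" claim can be verified directly with a small hand-rolled DFT (a sketch of mine; h below is an assumed impulse response): every frequency bin of a unit impulse equals 1, so transforming h gives the frequency response with no information lost.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

impulse = [1, 0, 0, 0]
print([abs(c) for c in dft(impulse)])   # [1.0, 1.0, 1.0, 1.0] -- flat spectrum

h = [1.0, 0.5, 0.25, 0.0]   # assumed impulse response
H = dft(h)                  # frequency response: how each frequency is scaled
```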

Impulse vs. frequency response approaches differ only in the choice of basis vectors. The impulse response represents every vector [a b c d ...] using the basis [1 0 0 0 ...], [0 1 0 0 ...], ... (it looks like the identity matrix, i.e. the "standard basis") -- you see the shifted pulses in the time domain; the frequency response decomposes your vector into sines (complex exponentials, in general). Although you need to undergo a Fourier transform to get into this basis (whereas the decomposition [a b c ...] = a[1 0 0 ...] + b[0 1 0 ...] + ... is pretty trivial), the exponentials are more advantageous for LTI systems, since they are eigenvectors of the Toeplitz (convolution) operators. Derivative (difference) operators, which look like $$\begin{bmatrix}1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix},$$ are instances of convolution. Apply this matrix to a geometric column, e.g. $$[x^3 \; x^2 \; x \; 1]^T$$, and you will see that the response is $$(x-1)\,[x^2 \; x \; 1]^T$$. That is, the exponential column behaves as an eigenvector with eigenvalue (x-1). That is why a differential equation $$a_n \frac{d^n y}{dt^n} + a_{n-1} \frac{d^{n-1} y}{dt^{n-1}} + \ldots + a_0 y = b_m \frac{d^m x}{dt^m} + b_{m-1} \frac{d^{m-1} x}{dt^{m-1}} + \ldots + b_0 x$$ reduces to a simple algebraic "characteristic equation" in the Fourier domain. The fact that the exponential is an eigenvector of the differential operator d/dt is also reflected in the relationship $$\frac{d}{dt}e^{at} = ae^{at}.$$ It is noteworthy that LTI engineers denote the differentiation operator d/dt by the variable "s"; hence, scaling by 1/s is integration.
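The eigenvector claim can be checked numerically (my own sketch, taking x = 3): the first-difference matrix maps the geometric column [x^3, x^2, x, 1] to (x - 1) times the shorter geometric column [x^2, x, 1].

```python
x = 3
v = [x**3, x**2, x, 1]    # [27, 9, 3, 1] -- a geometric (exponential) column

D = [[1, -1, 0, 0],       # first-difference (convolution-type) operator
     [0, 1, -1, 0],
     [0, 0, 1, -1]]

Dv = [sum(D[r][c] * v[c] for c in range(4)) for r in range(3)]
print(Dv)                                    # [18, 6, 2]
print([(x - 1) * w for w in [x**2, x, 1]])   # [18, 6, 2] -- same vector
```

So differencing a geometric sequence just rescales it by (x - 1), which is the discrete analogue of d(e^{at})/dt = a e^{at}.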

Taking the Fourier transform of a vector means decomposing it into the exponential basis. This is the eigenbasis for convolution, and that is why a costly convolution (complexity n x n for an n-component matrix/vector) is replaced with a simple multiplication. You have basically diagonalized your system, and convolution in one domain is multiplication in the other. With the advent of the fast Fourier transform, it makes sense to filter (i.e. convolve) very long vectors in the frequency domain.
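The diagonalization claim is exactly the convolution theorem, and it can be verified on a small example (a sketch of mine, using circular convolution, which is the version the DFT diagonalizes): the DFT of a convolution equals the pointwise product of the DFTs.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circ_conv(h, x):
    """Circular convolution: y[n] = sum_k h[k] * x[(n - k) mod N]."""
    N = len(x)
    return [sum(h[k] * x[(n - k) % N] for k in range(N)) for n in range(N)]

h = [1, 2, 0, 0]
x = [3, 4, 5, 6]

lhs = dft(circ_conv(h, x))                    # transform of the convolution
rhs = [a * b for a, b in zip(dft(h), dft(x))] # product of the transforms
print(all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs)))   # True
```

The O(n^2) convolution on the left becomes an O(n) pointwise product on the right; the FFT makes the change of basis itself cheap, which is the whole practical payoff.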

I am far from understanding these matters completely, and the lack of a comprehensive treatise is one reason for that. But I am sure that this is the key. Yet these insights are not only silenced, they are removed from the convolution theorem (see my recent complaint in). It seems that our dons want us to repeat stupid mantras. --Javalenok (discuss • contribs) 11:02, 25 July 2014 (UTC)