Control Systems/Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors
The eigenvalues and eigenvectors of the system matrix play a key role in determining the response of the system. It is important to note that only square matrices have eigenvalues and eigenvectors associated with them. Non-square matrices cannot be analyzed using the methods below.

The word "eigen" comes from German and means "own" as in "characteristic", so this chapter could also be called "Characteristic values and characteristic vectors". The terms "Eigenvalues" and "Eigenvectors" are most commonly used. Eigenvalues and Eigenvectors have a number of properties that make them valuable tools in analysis, and they also have a number of valuable relationships with the matrix from which they are derived. Computing the eigenvalues and the eigenvectors of the system matrix is one of the most important things that should be done when beginning to analyze a system matrix, second only to calculating the matrix exponential of the system matrix.

The eigenvalues and eigenvectors of the system determine the relationship between the individual system state variables (the members of the x vector), the response of the system to inputs, and the stability of the system. Also, the eigenvalues and eigenvectors can be used to calculate the matrix exponential of the system matrix through spectral decomposition. The remainder of this chapter will discuss eigenvalues, eigenvectors, and the ways that they affect their respective systems.

Characteristic Equation
The eigenvalues and eigenvectors of the system matrix A satisfy the relationship:


 * $$Av = \lambda v$$

Where the values &lambda; are scalars called the eigenvalues, and the vectors v are the corresponding eigenvectors. The eigenvalues are the roots of the characteristic equation of A, which we obtain by setting the following determinant to zero:


 * $$|A - \lambda I| = 0$$

To solve for the eigenvectors, we substitute each eigenvalue back into the equation and solve for v:


 * $$(A - \lambda I)v = 0$$

Other values worth finding are the left eigenvectors of a system, defined as the row vectors w that satisfy:


 * $$wA = \lambda w$$
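As a quick numerical sketch (the 2 &times; 2 matrix here is a made-up example, not one taken from this chapter), the eigenvalues and right eigenvectors can be computed with NumPy's `numpy.linalg.eig`; the left eigenvectors are then the right eigenvectors of the transpose:

```python
import numpy as np

# Hypothetical 2x2 system matrix, used only for illustration.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Right eigenvectors: A v = lambda v
lam, V = np.linalg.eig(A)          # columns of V are the eigenvectors

# Left eigenvectors: w A = lambda w, i.e. right eigenvectors of A^T
lam_left, W = np.linalg.eig(A.T)   # columns of W are the left eigenvectors

# Verify A v = lambda v for the first eigenpair
assert np.allclose(A @ V[:, 0], lam[0] * V[:, 0])
```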

For more information about eigenvalues, eigenvectors, and left eigenvectors, read the appropriate sections in the following books:


 * Linear Algebra
 * Engineering Analysis

Diagonalization
If the matrix A has a complete set of distinct eigenvalues, the matrix can be diagonalized. A diagonal matrix is a matrix whose only nonzero entries lie on the main diagonal; all other entries are zero. We can define a transformation matrix, T, that satisfies the diagonalization transformation:


 * $$A = TDT^{-1}$$

Which in turn will satisfy the relationship:


 * $$e^{At} = Te^{Dt}T^{-1}$$

The right-hand side of the equation may look more complicated, but because D is a diagonal matrix here (not to be confused with the feed-forward matrix from the output equation), the calculations are much easier.

We can define the transition matrix, and the inverse transition matrix in terms of the eigenvectors and the left eigenvectors:


 * $$ T = \begin{bmatrix} v_1 & v_2 & v_3 & \cdots & v_n\end{bmatrix}$$


 * $$ T^{-1} = \begin{bmatrix} w_1' \\w_2' \\ w_3' \\\vdots \\ w_n'\end{bmatrix}$$

We will further discuss the concept of diagonalization later in this chapter.
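The diagonalization relationships above can be checked numerically. The sketch below uses an assumed 2 &times; 2 example matrix and verifies $e^{At} = Te^{Dt}T^{-1}$ against a truncated Taylor series of the matrix exponential:

```python
import math
import numpy as np

# Hypothetical example matrix (not from this chapter).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, T = np.linalg.eig(A)

# A = T D T^{-1}, with D the diagonal matrix of eigenvalues
D = np.diag(lam)
assert np.allclose(A, T @ D @ np.linalg.inv(T))

# e^{At} = T e^{Dt} T^{-1}; e^{Dt} only needs scalar exponentials
t = 0.5
eAt = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)

# Cross-check against a truncated Taylor series sum_k (At)^k / k!
series = sum(np.linalg.matrix_power(A * t, k) / math.factorial(k)
             for k in range(20))
assert np.allclose(eAt, series)
```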

Exponential Matrix Decomposition
A matrix exponential can be decomposed into a sum of the eigenvectors, eigenvalues, and left eigenvectors, as follows:


 * $$e^{At} = \sum_{i = 1}^n e^{\lambda_i t}v_i w_i'$$

Notice that this equation only holds in this form if the matrix A has a complete set of n distinct eigenvalues. Since w_i' is a row vector and x(t_0) is a column vector of the initial system states, their product w_i'x(t_0) is a scalar, which we can fold into a scalar coefficient &alpha;_i:


 * $$e^{At} x(t_0) = \sum_{i = 1}^n \alpha_i e^{\lambda_i t} v_i $$

Since the state transition matrix determines how the system responds to an input, we can see that the system eigenvalues and eigenvectors are a key part of the system response. Let us plug this decomposition into the general solution to the state equation:


 * $$x(t) = \sum_{i = 1}^n \alpha_i e^{\lambda_i t} v_i + \sum_{i = 1}^n \int_0^t e^{\lambda_i (t-\tau)}v_i w_i' Bu(\tau) d\tau$$

We will talk about this equation in the following sections.
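The spectral decomposition of the matrix exponential can be sketched numerically as follows (the matrix is again an assumed illustrative example); the rows of the inverse eigenvector matrix serve as the normalized left eigenvectors w_i':

```python
import numpy as np

# Hypothetical example matrix with distinct eigenvalues.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)   # rows of V^{-1} are the left eigenvectors w_i'

t = 0.3
# e^{At} = sum_i e^{lambda_i t} v_i w_i'
eAt = sum(np.exp(lam[i] * t) * np.outer(V[:, i], W[i, :]) for i in range(2))

# Same result as the diagonalized product form T e^{Dt} T^{-1}
assert np.allclose(eAt, V @ np.diag(np.exp(lam * t)) @ W)
```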

State Relationship
As we can see from the above equation, the individual elements of the state vector x(t) cannot take arbitrary values; instead, they are related as weighted sums of the system's right eigenvectors.

Decoupling
If a system can be designed such that the following relationship holds true:


 * $$w_i'B = 0$$

then the mode associated with that particular eigenvalue will not be excited by the system input u, and we say that mode of the system has been decoupled. Such a thing is difficult to achieve in practice.
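A minimal sketch of this condition, using an assumed example matrix: choosing B along the second eigenvector forces w_1'B = 0, so the input cannot excite the first mode.

```python
import numpy as np

# Hypothetical example system matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)          # rows are the left eigenvectors w_i'

# Pick B in the span of v_2 so that w_1' B = 0:
# the first mode is then decoupled from the input u.
B = V[:, 1].reshape(-1, 1)
assert np.isclose((W[0, :] @ B)[0], 0.0)
```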

Condition Number
With every eigenvalue of a matrix there is an associated number called the condition number of that eigenvalue. The condition number indicates how sensitive the eigenvalue is to perturbations, and it is worth calculating. The condition number, k, of the i-th eigenvalue is defined as:


 * $$k = \frac{\|w_i\|\|v_i\|}{|w_i'v_i|}$$

Systems with smaller condition numbers are better, for a number of reasons:
 * 1) Large condition numbers can lead to a large transient response of the system.
 * 2) Large condition numbers make the system eigenvalues more sensitive to changes in the system.

We will discuss the issue of eigenvalue sensitivity more in a later section.
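The eigenvalue condition numbers can be computed directly from the definition above; this sketch (with an assumed example matrix) again uses the rows of the inverse eigenvector matrix as the left eigenvectors:

```python
import numpy as np

# Hypothetical example matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)          # rows are the left eigenvectors w_i'

# k_i = ||w_i|| ||v_i|| / |w_i' v_i|; by the Cauchy-Schwarz inequality this
# is always >= 1, with k_i = 1 for perfectly conditioned eigenvalues.
k = [np.linalg.norm(W[i]) * np.linalg.norm(V[:, i]) / abs(W[i] @ V[:, i])
     for i in range(2)]
```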

Stability
We will talk about stability at length in later chapters, but this is a good time to point out a simple fact concerning the eigenvalues of the system. Notice that if any of the eigenvalues of the system matrix A are positive, or (if they are complex) have positive real parts, the system state (and therefore the system output, scaled by the C matrix) will approach infinity as time t approaches infinity. In essence, if any eigenvalue has a positive real part, the system will not satisfy the condition of BIBO stability, and will therefore be unstable.
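This eigenvalue test is easy to automate; the helper below is a hypothetical sketch, not a standard library function:

```python
import numpy as np

def is_stable(A):
    """Hypothetical helper: x' = Ax is asymptotically stable when every
    eigenvalue of A has a strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

assert is_stable(np.array([[0.0, 1.0], [-2.0, -3.0]]))     # eigenvalues -1, -2
assert not is_stable(np.array([[1.0, 0.0], [0.0, -1.0]]))  # eigenvalue +1
```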

Another factor worth mentioning is that a manufactured system never exactly matches the system model: there will always be inaccuracies in the specifications of the component parts used, within a certain tolerance. As such, the system matrix will be slightly different from the mathematical model of the system (although good systems will not be severely different), and therefore the eigenvalues and eigenvectors of the physical system will not be exactly the values derived from the model. These facts give rise to several results:


 * 1) Systems with high condition numbers may have eigenvalues that differ by a large amount from those derived from the mathematical model. This means that the system response of the physical system may be very different from the intended response of the model.
 * 2) Systems with high condition numbers may become unstable simply as a result of inaccuracies in the component parts used in the manufacturing process.

For those reasons, the system eigenvalues and the condition number of the system matrix are highly important variables to consider when analyzing and designing a system. We will discuss the topic of stability in more detail in later chapters.

Non-Unique Eigenvalues
The decomposition above only works if the matrix A has a full set of n distinct eigenvalues (and corresponding eigenvectors). If A does not have n distinct eigenvectors, then a set of generalized eigenvectors must be determined. The generalized eigenvectors produce a similar matrix that is in Jordan canonical form, not the diagonal form we were using earlier.

Generalized Eigenvectors
Generalized eigenvectors can be generated using the following equation:


 * $$(A - \lambda I) v_{n+1} = v_n$$

If d is the number of times that a given eigenvalue is repeated, and p is the number of unique eigenvectors derived from those eigenvalues, then there will be q = d - p generalized eigenvectors. Generalized eigenvectors are developed by plugging the regular eigenvectors into the equation above (as v_n). Some regular eigenvectors might not produce any non-trivial generalized eigenvectors. Generalized eigenvectors may also be plugged into the equation above to produce additional generalized eigenvectors. It is important to note that the generalized eigenvectors form an ordered series, and they must be kept in order during analysis or the results will not be correct.
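A small numerical sketch of this procedure, using an assumed defective matrix; the generalized eigenvector is found as the least-squares solution of the singular system (A - &lambda;I)v_2 = v_1:

```python
import numpy as np

# Hypothetical defective matrix: eigenvalue 2 is repeated (d = 2) but has
# only one regular eigenvector (p = 1), so q = d - p = 1 generalized one.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
v1 = np.array([1.0, 0.0])      # regular eigenvector: (A - 2I) v1 = 0

# Generalized eigenvector: solve (A - lam*I) v2 = v1
v2, *_ = np.linalg.lstsq(A - lam * np.eye(2), v1, rcond=None)
assert np.allclose((A - lam * np.eye(2)) @ v2, v1)
```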

Jordan Canonical Form
If a matrix has a complete set of distinct eigenvectors, the transition matrix T can be defined as the matrix of those eigenvectors, and the resultant transformed matrix will be a diagonal matrix. However, if the eigenvectors are not unique, and there are a number of generalized eigenvectors associated with the matrix, the transition matrix T will consist of the ordered set of the regular eigenvectors and generalized eigenvectors. The regular eigenvectors that did not produce any generalized eigenvectors (if any) should be first in the order, followed by the eigenvectors that did produce generalized eigenvectors, and the generalized eigenvectors that they produced (in appropriate sequence).

Once the T matrix has been produced, the matrix A can be transformed by it and its inverse:


 * $$A = TJT^{-1}$$

The J matrix will be a Jordan block matrix. The format of the Jordan block matrix will be as follows:


 * $$J = \begin{bmatrix} D & 0 & \cdots & 0 \\ 0 & J_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_n \end{bmatrix}$$

Where D is the diagonal block produced by the regular eigenvectors that are not associated with generalized eigenvectors (if any). The J_n blocks are standard Jordan blocks with a size corresponding to the number of eigenvectors/generalized eigenvectors in each sequence. In each J_n block, the eigenvalue associated with the regular eigenvector of the sequence is on the main diagonal, and there are 1's on the super-diagonal.
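Building the T matrix from an ordered eigenvector/generalized-eigenvector pair can be sketched as follows, for an assumed defective 2 &times; 2 example:

```python
import numpy as np

# Hypothetical defective matrix with eigenvalue 2 repeated and only one
# regular eigenvector.
A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
v1 = np.array([1.0, -1.0])   # regular eigenvector: (A - 2I) v1 = 0
v2 = np.array([1.0, 0.0])    # generalized eigenvector: (A - 2I) v2 = v1

# T holds the ordered sequence [v1, v2]
T = np.column_stack([v1, v2])
J = np.linalg.inv(T) @ A @ T
# J is a 2x2 Jordan block: eigenvalue on the diagonal, 1 on the super-diagonal
assert np.allclose(J, [[2.0, 1.0], [0.0, 2.0]])
```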

Equivalence Transformations
If we have a non-singular n &times; n matrix P, we can define a transformed vector "x bar" as:


 * $$\bar{x} = Px$$

We can transform the entire state-space equation set as follows:


 * $$\bar{x}'(t) = \bar{A}\bar{x}(t) + \bar{B}u(t)$$
 * $$\bar{y}(t) = \bar{C}\bar{x}(t) + \bar{D}u(t)$$

Where:


 * $$\bar{A} = PAP^{-1}$$
 * $$\bar{B} = PB$$
 * $$\bar{C} = CP^{-1}$$
 * $$\bar{D} = D$$

We call the matrix P the equivalence transformation between the two sets of equations.

It is important to note that the eigenvalues of the matrix A (which are of primary importance to the system) do not change under the equivalence transformation. The eigenvectors of A and the eigenvectors of $$\bar{A}$$ are related by the matrix P.
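The invariance of the eigenvalues is easy to verify numerically; the matrices below are assumed examples:

```python
import numpy as np

# Hypothetical system matrix and an arbitrary nonsingular transformation P.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

A_bar = P @ A @ np.linalg.inv(P)

# The eigenvalues are unchanged by the equivalence transformation.
assert np.allclose(sorted(np.linalg.eigvals(A).real),
                   sorted(np.linalg.eigvals(A_bar).real))
```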

Lyapunov Transformations
The transformation matrix P is called a Lyapunov Transformation if the following conditions hold:


 * P(t) is nonsingular.
 * P(t) and P'(t) are continuous.
 * P(t) and the inverse transformation matrix P^{-1}(t) are finite for all t.

If a system is time-variant, it can frequently be useful to use a Lyapunov transformation to convert the system to an equivalent system with a constant A matrix. This is not possible in general; however, it is possible if the A(t) matrix is periodic.

System Diagonalization
If the A matrix is time-invariant and diagonalizable, we can construct the matrix V from the eigenvectors of A; specifically, V is the inverse of the matrix whose columns are the eigenvectors (the T^{-1} matrix from earlier in this chapter). The V matrix can be used to transform the A matrix to a diagonal matrix. Our new system becomes:


 * $$Vx'(t) = VAV^{-1}Vx(t) + VBu(t)$$
 * $$y(t) = CV^{-1}Vx(t) + Du(t)$$

Since our system matrix is now diagonal (or in Jordan canonical form), the calculation of the state-transition matrix is simplified:


 * $$VAV^{-1} = \Lambda$$

Where &Lambda; is a diagonal matrix of the eigenvalues, so e^{&Lambda;t} can be computed by exponentiating each diagonal entry individually.

MATLAB Transformations
The MATLAB function ss2ss can be used to apply an equivalence transformation to a system. If we have a set of matrices A, B, C and D, we can create equivalent matrices as such:

[Ap, Bp, Cp, Dp] = ss2ss(A, B, C, D, p);

Where p is the equivalence transformation matrix.