Control Systems/State-Space Equations

Time-Domain Approach
The "Classical" method of controls (what we have been studying so far) has been based mostly in the transform domain. When we want to control the system in general, we represent it using the Laplace transform (Z-Transform for digital systems) and when we want to examine the frequency characteristics of a system we use the Fourier Transform. The question arises, why do we do this?

Let's look at a basic second-order Laplace Transform transfer function:


 * $$\frac{Y(s)}{X(s)} = G(s) = \frac{1 + s}{1 + 2s + 5s^2}$$

We can decompose this equation in terms of the system inputs and outputs:


 * $$(1 + 2s + 5 s^2)Y(s) = (1 + s)X(s)$$

Now, when we take the inverse Laplace transform of our equation, we can see that:


 * $$y(t) + 2\frac{d y(t)}{dt} + 5\frac{d^2y(t)}{dt^2} = x(t) + \frac{dx(t)}{dt}$$

The Laplace transform is hiding the fact that we are dealing with second-order differential equations. The Laplace transform moves a system out of the time domain and into the complex frequency domain so that we can study and manipulate our systems as algebraic polynomials instead of linear ODEs. Given the complexity of differential equations, why would we ever want to work in the time domain?

It turns out that if we decompose our higher-order differential equations into multiple first-order equations, we can find a new method for easily manipulating the system without having to use integral transforms. The solution to this problem is state variables. By taking our multiple first-order differential equations and analyzing them in vector form, we can not only do the same things we were doing previously with integral transforms, but we can also easily account for systems with multiple inputs and outputs without adding much unnecessary complexity. This demonstrates why the "modern" state-space approach to controls has become popular.
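To make this concrete, the second-order transfer function above can be decomposed into two first-order state equations and simulated with nothing more than repeated matrix multiplication and addition. The following is an illustrative Python/NumPy sketch (the book's own code examples use MATLAB); the particular state-space form used here, the controllable canonical form, is derived later in this chapter:

```python
import numpy as np

# G(s) = (1 + s) / (1 + 2s + 5s^2), rewritten with a monic denominator:
# G(s) = (0.2 + 0.2s) / (s^2 + 0.4s + 0.2)
# One possible pair of first-order state equations (controllable canonical form):
A = np.array([[0.0, 1.0],
              [-0.2, -0.4]])   # system matrix
B = np.array([[0.0], [1.0]])   # control matrix
C = np.array([[0.2, 0.2]])     # output matrix

dt, t_end = 0.001, 60.0
x = np.zeros((2, 1))           # initial state
for _ in range(int(t_end / dt)):
    u = 1.0                    # unit step input
    x = x + dt * (A @ x + B * u)   # forward-Euler step of x' = Ax + Bu
y = (C @ x)[0, 0]              # output y = Cx

# The DC gain of G(s) is G(0) = 1, so the step response settles near 1.
print(y)
```

Notice that no transform was needed: the entire computation stays in the time domain, and generalizing it to multiple inputs and outputs only changes the matrix dimensions.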

State-Space
In a state-space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state, and the current system input, through the output equation. These two equations form a system of equations known collectively as state-space equations. The state-space is the vector space that consists of all the possible internal states of the system.

For a system to be modeled using the state-space method, the system must meet this requirement:


 * 1) The system must be "lumped"

"Lumped" in this context, means that we can find a finite-dimensional state-space vector which fully characterises all such internal states of the system.

This text mostly considers linear state-space systems where the state and output equations satisfy the superposition principle. However, the state-space approach is equally valid for nonlinear systems although some specific methods are not applicable to nonlinear systems.

State
Central to the state-space notation is the idea of a state. A state of a system is the current value of the internal elements of the system, which change separately from (but are not completely unrelated to) the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components.

State Variables
When modeling a system using a state-space equation, we first need to define three vectors:


 * Input variables: A SISO (Single-Input Single-Output) system will only have one input value, but a MIMO (Multiple-Input Multiple-Output) system may have multiple inputs. We need to define all the inputs to the system and arrange them into a vector.
 * Output variables: This is the system output value, and in the case of MIMO systems we may have several. Output variables should be independent of one another, and only dependent on a linear combination of the input vector and the state vector.
 * State Variables: The state variables represent values from inside the system that can change over time. In an electric circuit for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.

We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:


 * $$y = f(x, u)$$

Where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:


 * $$x' = g(x, u)$$

Where x' is the rate of change of the state variables. We will define f(x, u) and g(x, u) in the next chapter.

Multi-Input, Multi-Output
In the Laplace domain, if we want to account for systems with multiple inputs and multiple outputs, we are going to need to rely on the principle of superposition to create a system of simultaneous Laplace equations for each input and output. For such systems, the classical approach not only doesn't simplify the situation, but because the systems of equations need to be transformed into the frequency domain first, manipulated, and then transformed back into the time domain, they can actually be more difficult to work with. However, the Laplace domain technique can be combined with the State-Space techniques discussed in the next few chapters to bring out the best features of both techniques. We will discuss MIMO systems in the MIMO Systems Chapter.

State-Space Equations
In a state-space system representation, we have a system of two equations: an equation for determining the state of the system, and another equation for determining the output of the system. We will use the variable y(t) as the output of the system, x(t) as the state of the system, and u(t) as the input of the system. We use the notation x'(t) (note the prime) for the first derivative of the state vector of the system, as dependent on the current state of the system and the current input. Symbolically, we say that there are transforms g and h, that display this relationship:


 * $$x'(t) = g[t_0, t, x(t), x(0), u(t)]$$
 * $$y(t) = h[t, x(t), u(t)]$$

The first equation shows that the system state change is dependent on the current system state, the initial state of the system, the time, and the system inputs. The second equation shows that the system output is dependent on the current system state, the system input, and the current time.

If the system state change x'(t) and the system output y(t) are linear combinations of the system state and input vectors, then we can say the systems are linear systems, and we can rewrite them in matrix form:


 * $$x'(t) = A(t)x(t) + B(t)u(t)$$


 * $$y(t) = C(t)x(t) + D(t)u(t)$$

If the systems themselves are time-invariant, we can re-write this as follows:


 * $$x'(t) = Ax(t) + Bu(t)$$
 * $$y(t) = Cx(t) + Du(t)$$

The State Equation shows the relationship between the system's current state and its input, and the future state of the system. The Output Equation shows the relationship between the system state and its input, and the output. These equations show that in a given system, the current output is dependent on the current input and the current state. The future state is also dependent on the current state and the current input.

It is important to note at this point that the state space equations of a particular system are not unique, and there are an infinite number of ways to represent these equations by manipulating the A, B, C and D matrices using row operations. There are a number of "standard forms" for these matrices, however, that make certain computations easier. Converting between these forms will require knowledge of linear algebra.

Matrices: A B C D
Our system has the form:


 * $$\mathbf{x}'(t) = \mathbf{g}[t_0, t, \mathbf{x}(t), x(0), \mathbf{u}(t)]$$
 * $$\mathbf{y}(t) = \mathbf{h}[t, \mathbf{x}(t), \mathbf{u}(t)]$$

We've bolded several quantities to reinforce the fact that they can be vectors, not just scalar quantities. If these systems are time-invariant, we can simplify them by removing the time variables:


 * $$\mathbf{x}'(t) = \mathbf{g}[\mathbf{x}(t), x(0), \mathbf{u}(t)]$$
 * $$\mathbf{y}(t) = \mathbf{h}[\mathbf{x}(t), \mathbf{u}(t)]$$

Now, if we take the partial derivatives of these functions with respect to the input and the state vector at time t0, we get our system matrices:


 * $$A = \mathbf{g}_x[x(0), x(0), u(0)]$$
 * $$B = \mathbf{g}_u[x(0), x(0), u(0)]$$
 * $$C = \mathbf{h}_x[x(0), u(0)]$$
 * $$D = \mathbf{h}_u[x(0), u(0)]$$
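For a nonlinear system, these partial derivatives are Jacobian matrices, and they can be approximated numerically by finite differences. As a hedged illustration (the damped-pendulum model and the helper name `jacobians` below are assumptions for this sketch, not taken from this chapter), consider linearizing x' = g(x, u) about the equilibrium x = 0, u = 0:

```python
import numpy as np

def g(x, u):
    # Nonlinear pendulum-like state equations: x1' = x2, x2' = -sin(x1) - 0.1*x2 + u
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u])

def jacobians(g, x0, u0, eps=1e-6):
    """Central finite-difference approximations of A = g_x and B = g_u at (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (g(x0 + dx, u0) - g(x0 - dx, u0)) / (2 * eps)
    Bcol = (g(x0, u0 + eps) - g(x0, u0 - eps)) / (2 * eps)
    return A, Bcol.reshape(-1, 1)

A, B = jacobians(g, np.zeros(2), 0.0)
print(A)  # approximately [[0, 1], [-1, -0.1]], since cos(0) = 1
print(B)  # approximately [[0], [1]]
```

Near the equilibrium, sin(x1) ≈ x1, so the numerical Jacobians match the hand-linearized matrices.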

In our time-invariant state space equations, we write these matrices and their relationships as:


 * $$x'(t) = Ax(t) + Bu(t)$$
 * $$y(t) = Cx(t) + Du(t)$$

We have four constant matrices: A, B, C, and D. We will explain these matrices below:
 * Matrix A: Matrix A is the system matrix, and relates how the current state affects the state change x'. If the state change is not dependent on the current state, A will be the zero matrix. The matrix exponential of the system matrix, $e^{At}$, is called the state transition matrix, and is an important function that we will describe below.
 * Matrix B: Matrix B is the control matrix, and determines how the system input affects the state change. If the state change is not dependent on the system input, then B will be the zero matrix.
 * Matrix C: Matrix C is the output matrix, and determines the relationship between the system state and the system output.
 * Matrix D: Matrix D is the feed-forward matrix, and allows the system input to affect the system output directly. The basic feedback systems we have previously considered do not have a feed-forward element, so for most of the systems we have already considered, the D matrix is the zero matrix.

Matrix Dimensions
Because we are adding and multiplying multiple matrices and vectors together, we need to be absolutely certain that the matrices have compatible dimensions, or else the equations will be undefined. For integer values p, q, and r, the dimensions of the system matrices and vectors are defined as follows:


Vectors:
 * $$x: p \times 1$$
 * $$x': p \times 1$$
 * $$u: q \times 1$$
 * $$y: r \times 1$$

Matrices:
 * $$A: p \times p$$
 * $$B: p \times q$$
 * $$C: r \times p$$
 * $$D: r \times q$$

If the matrix and vector dimensions do not agree with one another, the equations are invalid and the results will be meaningless. Matrices and vectors must have compatible dimensions or they cannot be combined using matrix operations.

For the rest of the book, we will use this table as a reminder of the matrix dimensions, so that we can keep a consistent notation throughout.
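These compatibility rules are mechanical enough to automate. The following is a minimal Python/NumPy sketch (the function name `check_dimensions` is a hypothetical helper, not an API from this book) that verifies the shapes of a candidate (A, B, C, D) set and recovers p, q, and r:

```python
import numpy as np

def check_dimensions(A, B, C, D):
    """Verify that (A, B, C, D) have compatible shapes for
    x' = Ax + Bu, y = Cx + Du. Returns (p, q, r) = (#states, #inputs, #outputs)."""
    p, p2 = A.shape
    assert p == p2, "A must be square (p x p)"
    assert B.shape[0] == p, "B must have p rows (p x q)"
    q = B.shape[1]
    r, pc = C.shape
    assert pc == p, "C must have p columns (r x p)"
    assert D.shape == (r, q), "D must be r x q"
    return p, q, r

# Example: a system with 2 states, 1 input, and 1 output
A = np.zeros((2, 2))
B = np.zeros((2, 1))
C = np.zeros((1, 2))
D = np.zeros((1, 1))
print(check_dimensions(A, B, C, D))  # (2, 1, 1)
```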

Notational Shorthand
The state equations and the output equations of systems can be expressed in terms of matrices A, B, C, and D. Because the form of these equations is always the same, we can use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to denote a complete state-space representation. Also, because the state equation is very important for our later analysis, we can write an ordered pair (A, B) to refer to the state equation:


 * $$(A, B) \to x' = Ax + Bu$$
 * $$(A, B, C, D) \to \left\{\begin{matrix}x' = Ax + Bu \\ y = Cx + Du \end{matrix}\right.$$

Obtaining the State-Space Equations
The beauty of state equations is that they can be used to transparently describe systems that are both continuous and discrete in nature. Some texts will differentiate notation between discrete and continuous cases, but this text will not make such a distinction. Instead we will opt to use the generic coefficient matrices A, B, C, and D for both continuous and discrete systems. Occasionally this book may employ the subscript C to denote a continuous-time version of a matrix, and the subscript D to denote the discrete-time version of the same matrix. Other texts may use the letters F, H, and G for continuous systems and Γ and Θ for discrete systems. However, if we keep track of our time-domain system, we don't need to worry about such notations.

From Transfer Functions
The method of obtaining the state-space equations from the Laplace-domain transfer function is very similar to the method of obtaining them from the time-domain differential equations. We call the process of converting a system description from the Laplace domain to the state-space domain realization. We will discuss realization in more detail in a later chapter. In general, let's say that we have a transfer function of the form:


 * $$T(s) = \frac{s^m+a_{m-1}s^{m-1} +\cdots+a_0}{s^n+b_{n-1}s^{n-1}+\cdots+b_0}$$

We can write our A, B, C, and D matrices as follows:


 * $$A = \begin{bmatrix}0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -b_0 & -b_1 & -b_2 & \cdots & -b_{n-1}\end{bmatrix}$$
 * $$B = \begin{bmatrix}0 \\ 0 \\ \vdots \\ 1\end{bmatrix}$$
 * $$C = \begin{bmatrix}a_0 & a_1 & \cdots & a_{m-1}\end{bmatrix}$$
 * $$D = 0$$

This form of the equations is known as the controllable canonical form of the system matrices, and we will discuss this later.

Notice that to perform this method, the denominator and numerator polynomials must be monic; that is, the coefficient of the highest-order term must be 1. If the coefficient of the highest-order term is not 1, you must divide your equation by that coefficient to make it 1.
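The companion-matrix construction above is easy to express in code. The following is a hedged Python/NumPy sketch (the helper name `controllable_canonical` is an assumption for this illustration); as a sanity check, the eigenvalues of the resulting A matrix should equal the poles of the transfer function, i.e. the roots of the monic denominator:

```python
import numpy as np

def controllable_canonical(b):
    """Build the controllable-canonical A and B matrices for a monic
    denominator s^n + b[n-1] s^(n-1) + ... + b[0].  `b` lists b_0 .. b_(n-1)."""
    n = len(b)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # ones on the superdiagonal
    A[-1, :] = -np.array(b)          # last row: -b_0, -b_1, ..., -b_(n-1)
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    return A, B

# Denominator s^2 + 0.4s + 0.2 (the chapter's example transfer function, made monic)
A, B = controllable_canonical([0.2, 0.4])
# The eigenvalues of A are the poles of the transfer function: -0.2 +/- 0.4j
print(np.sort_complex(np.linalg.eigvals(A)))
```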

State-Space Representation
As an important note, remember that the state variables x are user-defined and therefore are arbitrary. There are any number of ways to define x for a particular problem, each of which are going to lead to different state space equations.

Consider the previous continuous-time example. We can rewrite the equation in the form


 * $$ \frac{d}{dt}\left[\frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_1y(t)\right] + a_0y(t)=u(t) $$.

We now define the state variables


 * $$ x_1 = y(t) $$


 * $$ x_2 = \frac{dy(t)}{dt} $$


 * $$ x_3 = \frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_1y(t) $$

with first-order derivatives


 * $$ x_1' = \frac{dy(t)}{dt} = x_2 $$


 * $$ x_2' = \frac{d^2y(t)}{dt^2} = -a_1x_1 - a_2x_2 + x_3 $$

(This follows from solving the definition of $x_3$ above for $\frac{d^2y(t)}{dt^2}$, giving $\frac{d^2y(t)}{dt^2} = x_3 - a_2x_2 - a_1x_1$.)


 * $$ x_3' = -a_0y(t) + u(t) = -a_0x_1 + u(t) $$

The state-space equations for the system will then be given by


 * $$ x'(t) = \begin{bmatrix} 0 & 1 & 0 \\ -a_1 & -a_2 & 1 \\ -a_0 & 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u(t) $$


 * $$ y(t) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x(t) $$
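We can sanity-check this realization numerically: the characteristic polynomial of the A matrix should match the characteristic polynomial of the original differential equation, $s^3 + a_2s^2 + a_1s + a_0$. The following Python/NumPy check uses arbitrary sample coefficients (chosen for this illustration, not taken from the chapter):

```python
import numpy as np

# Arbitrary sample coefficients for the third-order example
a0, a1, a2 = 2.0, 3.0, 4.0

A = np.array([[0.0,  1.0, 0.0],
              [-a1, -a2,  1.0],
              [-a0,  0.0, 0.0]])

# np.poly(A) returns the coefficients of det(sI - A), highest order first.
coeffs = np.poly(A)
print(coeffs)  # approximately [1, a2, a1, a0] = [1, 4, 3, 2]
```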

x may also be used in any number of variable transformations, as a matter of mathematical convenience. However, the variables y and u correspond to physical signals, and may not be arbitrarily selected, redefined, or transformed as x can be.

Discretization
If we have a system (A, B, C, D) that is defined in continuous time, we can discretize the system so that an equivalent process can be performed using a digital computer. We can use the definition of the derivative, as such:


 * $$x'(t) = \lim_{T\to 0} \frac{x(t + T) - x(t)}{T}$$

And substituting this into the state equation with some approximation (and ignoring the limit for now) gives us:


 * $$\lim_{T\to 0} \frac{x(t + T) - x(t)}{T} = Ax(t) + Bu(t)$$


 * $$x(t + T) = x(t) + Ax(t)T + Bu(t)T$$


 * $$x(t + T) = (I + AT)x(t) + (BT)u(t)$$

We are able to remove the limit because in a discrete system, the time interval between samples is positive and non-negligible. By definition, a discrete system is only defined at certain time points, and not at all time points as the limit would have indicated. In a discrete system, we are interested only in the value of the system at discrete points. If those points are evenly spaced by every T seconds (the sampling time), then the samples of the system occur at t = kT, where k is an integer. Substituting kT for t into our equation above gives us:


 * $$x(kT + T) = (I + AT)x(kT) + TBu(kT)$$

Or, using the square-bracket shorthand that we developed earlier, we can write:


 * $$x[k+1] = (I + AT)x[k] + TBu[k]$$

In this form, the state-space system can be implemented quite easily into a digital computer system using software, not complicated analog hardware. We will discuss this relationship and digital systems more specifically in a later chapter.
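As a hedged Python/NumPy sketch of this forward-Euler discretization (a first-order approximation; more accurate discretizations use the matrix exponential), we can compare the discrete recursion against the known exact solution of a simple scalar system:

```python
import numpy as np

# Scalar continuous system x' = -x + u with u = 1 and x(0) = 0.
# Its exact solution is x(t) = 1 - exp(-t).
A, B = np.array([[-1.0]]), np.array([[1.0]])
T = 0.001                        # sampling time
I = np.eye(1)
Ad, Bd = I + A * T, B * T        # forward-Euler discretization: (I + AT), TB

x = np.zeros((1, 1))
for k in range(1000):            # simulate up to t = 1 second
    x = Ad @ x + Bd * 1.0        # x[k+1] = (I + AT)x[k] + TBu[k]

exact = 1.0 - np.exp(-1.0)
print(x[0, 0], exact)            # the two values should agree closely
```

Shrinking T reduces the discretization error, at the cost of more iterations per second of simulated time.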

We will write out the discrete-time state-space equations as:


 * $$x[n+1] = A_dx[n] + B_du[n]$$
 * $$y[n] = C_dx[n] + D_du[n]$$

Note on Notations
The variable T is a common variable in control systems, especially when talking about the beginning and end points of a continuous-time system, or when discussing the sampling time of a digital system. However, another common use of the letter T is to signify the transpose operation on a matrix. To alleviate this ambiguity, we will denote the transpose of a matrix with a prime:


 * $$A^T \to A'$$

Where A'  is the transpose of matrix A.

The prime notation is also frequently used to denote the time-derivative. Most of the matrices that we will be talking about are time-invariant; there is no ambiguity because we will never take the time derivative of a time-invariant matrix. However, for a time-variant matrix we will use the following notations to distinguish between the time-derivative and the transpose:


 * $$A(t)'$$ the transpose.


 * $$A'(t)$$ the time-derivative.

Note that certain variables which are time-variant are not written with the (t) postscript, such as the variables x, y, and u. For these variables, the default behavior of the prime is the time-derivative, such as in the state equation. If the transpose needs to be taken of one of these vectors, the (t)'  postfix will be added explicitly to correspond to our notation above.

For instances where we need to use the Hermitian transpose, we will use the notation:


 * $$A^H$$

This notation is common in other literature, and raises no obvious ambiguities here.

MATLAB Representation
State-space systems can be represented in MATLAB using the 4 system matrices, A, B, C, and D. We can create a system data structure using the ss function:

sys = ss(A, B, C, D);

Systems created in this way can be manipulated in the same way that the transfer function descriptions (described earlier) can be manipulated. To convert a transfer function to a state-space representation, we can use the tf2ss function:

[A, B, C, D] = tf2ss(num, den);

And to perform the opposite operation, we can use the ss2tf function:

[num, den] = ss2tf(A, B, C, D);
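For readers working outside MATLAB, SciPy provides analogous functions, `scipy.signal.tf2ss` and `scipy.signal.ss2tf` (note that both normalize the denominator to be monic, so the returned coefficients may be scaled relative to your input). A quick round-trip check using the transfer function from the start of this chapter:

```python
import numpy as np
from scipy import signal

# G(s) = (s + 1) / (5s^2 + 2s + 1), coefficients entered highest power first
num, den = [1.0, 1.0], [5.0, 2.0, 1.0]

A, B, C, D = signal.tf2ss(num, den)    # realization of the transfer function
num2, den2 = signal.ss2tf(A, B, C, D)  # convert the realization back

# The round trip must preserve the transfer function itself, so evaluate
# G(s) at a test point: G(1) = (1 + 1) / (5 + 2 + 1) = 0.25.
G1 = np.polyval(num2[0], 1.0) / np.polyval(den2, 1.0)
print(G1)  # 0.25
```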
