Control Systems/Digital Control Systems

Digital Systems
Digital systems, expressed previously as difference equations or Z-Transform transfer functions, can also be used with the state-space representation. All of the techniques for dealing with analog systems can be applied to digital systems with only minor changes.

For digital systems, we can write similar equations, using discrete data sets:


 * $$x[k + 1] = Ax[k] + Bu[k]$$


 * $$y[k] = Cx[k] + Du[k]$$
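
As a quick sketch of how these equations are iterated in practice, the following Python fragment simulates the discrete state equations for a unit-step input. The matrices are illustrative values, not taken from the text:

```python
import numpy as np

# Illustrative 2-state discrete system (values chosen for demonstration).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate(n_steps):
    """Iterate x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k] for u[k] = 1."""
    x = np.zeros((2, 1))
    y = []
    for _ in range(n_steps):
        u = np.array([[1.0]])
        y.append((C @ x + D @ u)[0, 0])
        x = A @ x + B @ u
    return y

y = simulate(50)
```

Because both eigenvalues of A lie inside the unit circle, the output settles toward the steady-state value (I - A)⁻¹B scaled by C.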

Zero-Order Hold Derivation
If we have a continuous-time state equation:


 * $$x'(t) = Ax(t) + Bu(t)$$

We can derive the digital version of this equation that we discussed above. We take the Laplace transform of our equation:


 * $$X(s) = (sI - A)^{-1}x(0) + (sI - A)^{-1}BU(s)$$

Now, taking the inverse Laplace transform gives us our time-domain system, keeping in mind that the inverse Laplace transform of the (sI - A)^{-1} term is our state-transition matrix, Φ:


 * $$x(t) = \mathcal{L}^{-1}(X(s)) = \Phi(t - t_0)x(0) + \int_{t_0}^t\Phi(t - \tau)Bu(\tau)d\tau$$

Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t0 = kT, because we are only interested in the behavior of our system during a single sample period:


 * $$u(t) = u(kT), \quad kT \le t < (k+1)T$$


 * $$x(t) = \Phi(t, kT)x(kT) + \int_{kT}^t \Phi(t, \tau)Bd\tau u(kT)$$

We were able to move u(kT) outside the integral because it does not depend on τ. We now define a new function, Γ, as follows:


 * $$\Gamma(t, t_0) = \int_{t_0}^t \Phi(t, \tau)Bd\tau$$

Inserting this new expression into our equation, and setting t = (k + 1)T gives us:


 * $$x((k + 1)T) = \Phi((k+1)T, kT)x(kT) + \Gamma((k+1)T, kT)u(kT)$$

Now Φ((k+1)T, kT) and Γ((k+1)T, kT) depend only on the sample period T, so they are constant matrices, and we can give them new names. The d subscript denotes that they are digital versions of the coefficient matrices:


 * $$A_d = \Phi((k+1)T, kT)$$
 * $$B_d = \Gamma((k+1)T, kT)$$

We can use these values in our state equation, converting to our bracket notation instead:


 * $$x[k + 1] = A_dx[k] + B_du[k]$$
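
For a scalar system this recurrence can be checked numerically. In the sketch below (the values of a, b, and T are illustrative), A_d and B_d are computed from the closed-form scalar expressions, and one step of the recurrence is compared against a fine Euler integration of the continuous equation with the input held constant:

```python
import math

# Scalar system x'(t) = a x(t) + b u(t); a, b, T are illustrative values.
a, b, T = -2.0, 3.0, 0.1

# Closed-form zero-order-hold discretization for the scalar case:
# A_d = e^{aT},  B_d = Γ(T) = ∫_0^T e^{aτ} dτ · b = (e^{aT} - 1) b / a
A_d = math.exp(a * T)
B_d = (math.exp(a * T) - 1.0) / a * b

# One step of the discrete recurrence x[k+1] = A_d x[k] + B_d u[k].
x0, u0 = 1.0, 0.5
x_next = A_d * x0 + B_d * u0

# Reference: integrate the continuous equation over one sample period
# with the input held constant at u0 (forward Euler, small step).
def euler_step(x, u, T, n=100000):
    dt = T / n
    for _ in range(n):
        x += dt * (a * x + b * u)
    return x

x_euler = euler_step(x0, u0, T)
```

The two results agree to within the Euler integration error, confirming that the discrete recurrence reproduces the continuous state exactly at the sample instants.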

Relating Continuous and Discrete Systems
Continuous and discrete systems that perform similarly can be related through a set of relationships. It should come as no surprise that a discrete system and a continuous system will have different characteristics and different coefficient matrices. If we consider that a discrete system is the same as a continuous system, except that it is sampled with a sampling time T, then the relationships below will hold. The process of converting an analog system for use with digital hardware is called discretization. We've given a basic introduction to discretization already, but we will discuss it in more detail here.

Discrete Coefficient Matrices
Of primary importance in discretization is the computation of the associated coefficient matrices from the continuous-time counterparts. If we have the continuous system (A, B, C, D), we can use the relationship t = kT to transform the state-space solution into a sampled system:


 * $$x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$
 * $$x[k] = e^{AkT}x[0] + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$

Now, if we want to analyze the k+1 term, we can solve the equation again:


 * $$x[k+1] = e^{A(k+1)T}x[0] + \int_0^{(k+1)T} e^{A((k+1)T - \tau)}Bu(\tau)d\tau$$

Separating out the variables, and breaking the integral into two parts gives us:


 * $$x[k+1] = e^{AT}e^{AkT}x[0] + \int_0^{kT}e^{AT}e^{A(kT - \tau)}Bu(\tau)d\tau + \int_{kT}^{(k+1)T} e^{A(kT + T - \tau)}Bu(\tau)d\tau$$

In the last integral, we apply the zero-order hold assumption that u(τ) = u[k] over the sample interval kT ≤ τ < (k+1)T, and substitute the new variable α = (k + 1)T - τ. In the first two terms, we recognize the expression for x[k]:


 * $$e^{AT}\left(e^{AkT}x[0] + \int_0^{kT}e^{A(kT - \tau)}Bu(\tau)d\tau\right) = e^{AT}x[k]$$

We get our final result:


 * $$x[k+1] = e^{AT}x[k] + \left(\int_0^T e^{A\alpha}d\alpha\right)Bu[k]$$

Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system.


 * $$A_d = e^{A_cT}$$
 * $$B_d = \int_0^Te^{A_c\tau}d\tau B_c$$
 * $$C_d = C_c$$
 * $$D_d = D_c$$

If the Ac matrix is nonsingular, so that its inverse exists, we can instead compute Bd as:


 * $$B_d = A_c^{-1}(A_d - I)B_c$$
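
Both expressions for Bd can be checked numerically. In the sketch below, the matrices are illustrative, and the truncated-series matrix exponential is a stand-in for a library routine such as scipy.linalg.expm. Ad and Bd are computed together using the augmented-matrix identity exp([[Ac, Bc], [0, 0]]·T) = [[Ad, Bd], [0, I]]:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential by truncated Taylor series -- a stand-in for a
    library routine such as scipy.linalg.expm; adequate for small ||M||."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discretize_zoh(Ac, Bc, T):
    """Compute A_d = e^{Ac T} and B_d = ∫_0^T e^{Ac τ} dτ · Bc via the
    augmented-matrix identity exp([[Ac, Bc],[0, 0]]·T) = [[A_d, B_d],[0, I]]."""
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ac * T
    M[:n, n:] = Bc * T
    E = expm_series(M)
    return E[:n, :n], E[:n, n:]

# Illustrative continuous system (poles at -1 and -2).
Ac = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
Bc = np.array([[0.0],
               [1.0]])
Ad, Bd = discretize_zoh(Ac, Bc, 0.1)
```

Since this Ac is nonsingular, the integral form of Bd and the inverse-based form Ac⁻¹(Ad - I)Bc agree.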

The differences between the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are different. Continuous-time systems are described by linear differential equations, while digital systems are described by difference equations. Higher-order terms in a difference equation are delayed copies of the signal, while higher-order terms in a differential equation are derivatives of the analog signal.

If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm.

Notation
Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are used in place of A and B frequently:


 * $$\Omega = A_d$$
 * $$R = B_d$$

These substitutions would give us a system defined by the ordered quadruple (Ω, R, C, D) for representing our equations.

As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book.

Solving for x[n]
We can find a general time-invariant solution for the discrete time difference equations. Let us start working up a pattern. We know the discrete state equation:


 * $$x[n+1] = Ax[n] + Bu[n]$$

Starting from time n = 0, we can start to create a pattern:


 * $$x[1] = Ax[0] + Bu[0]$$
 * $$x[2] = Ax[1] + Bu[1] = A^2x[0] + ABu[0] + Bu[1]$$
 * $$x[3] = Ax[2] + Bu[2] = A^3x[0] + A^2Bu[0] + ABu[1] + Bu[2]$$

With a little algebraic trickery, we can reduce this pattern to a single equation:


 * $$x[n] = A^nx[0] + \sum_{m=0}^{n-1}A^{n-1-m}Bu[m]$$

Substituting this result into the output equation gives us:


 * $$y[n] = CA^nx[0] + \sum_{m=0}^{n-1}CA^{n-1-m}Bu[m] + Du[n]$$
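
The closed-form solution can be verified against the recursion directly. The system, initial state, and input sequence below are illustrative:

```python
import numpy as np

# Illustrative system, input sequence, and initial state.
A = np.array([[0.5, 0.2],
              [0.0, 0.7]])
B = np.array([[1.0],
              [0.5]])
x0 = np.array([[1.0],
               [-1.0]])
n = 10
u = [np.array([[float(m % 2)]]) for m in range(n)]  # arbitrary input sequence

# Direct recursion of x[k+1] = A x[k] + B u[k].
x = x0.copy()
for m in range(n):
    x = A @ x + B @ u[m]
x_recursive = x

# Closed form: x[n] = A^n x[0] + Σ_{m=0}^{n-1} A^{n-1-m} B u[m].
x_closed = np.linalg.matrix_power(A, n) @ x0
for m in range(n):
    x_closed = x_closed + np.linalg.matrix_power(A, n - 1 - m) @ B @ u[m]
```

Both paths yield the same state, as the pattern above predicts.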

Time Variant Solutions
If the system is time-variant, we have a general solution that is similar to the continuous-time case:


 * $$x[n] = \phi[n, n_0]x[n_0] + \sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m]$$


 * $$y[n] = C[n]\phi[n, n_0]x[n_0] + C[n]\sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m] + D[n]u[n]$$

Where φ, the state transition matrix, is defined in a manner similar to the state-transition matrix in the continuous case. However, some of its properties in discrete time are different. For instance, the inverse of the state-transition matrix need not exist, and in many systems it does not.

State Transition Matrix
The discrete time state transition matrix is the unique solution of the equation:


 * $$\phi[k+1, k_0] = A[k] \phi[k, k_0]$$

Where the following restriction must hold:


 * $$\phi[k_0, k_0] = I$$

From this definition, an obvious way to calculate this state transition matrix presents itself:


 * $$\phi[k, k_0] = A[k - 1]A[k-2]A[k-3]\cdots A[k_0]$$

Or,


 * $$\phi[k, k_0] = \prod_{m = 1}^{k-k_0}A[k-m]$$
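
A direct implementation of this product, for an illustrative time-varying A[k] (note that the factors do not commute, so the newest factor must multiply on the left):

```python
import numpy as np

# Illustrative time-varying coefficient matrix A[k].
def A_of(k):
    return np.array([[1.0, 0.1 * k],
                     [0.0, 0.9]])

def phi(k, k0):
    """State transition matrix φ[k, k0] = A[k-1] A[k-2] ... A[k0]."""
    out = np.eye(2)
    for m in range(k0, k):
        out = A_of(m) @ out   # newest factor multiplies on the left
    return out
```

This satisfies both defining properties: φ[k0, k0] = I, and φ[k+1, k0] = A[k] φ[k, k0].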

MATLAB Calculations
MATLAB is a digital computer program, and therefore simulates all systems using discrete-time numerical methods. The MATLAB function lsim is used to simulate a continuous system with a specified input. This function works by calling c2d, which converts the system (A, B, C, D) into the equivalent discrete system. Once the system model is discretized, the function passes control to the dlsim function, which simulates discrete-time systems with the specified input.

Because of this, simulation programs like MATLAB are subject to the round-off errors associated with the discretization process.
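
An analogous chain exists in Python's SciPy library (assuming SciPy is available): cont2discrete plays the role of c2d, and dlsim simulates the resulting discrete system. The continuous system below is illustrative:

```python
import numpy as np
from scipy.signal import cont2discrete, dlsim

# Illustrative continuous system: y'' + 3y' + 2y = u, in state-space form.
Ac = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
Bc = np.array([[0.0],
               [1.0]])
Cc = np.array([[1.0, 0.0]])
Dc = np.array([[0.0]])
T = 0.05

# The c2d step: zero-order-hold discretization.
Ad, Bd, Cd, Dd, dt = cont2discrete((Ac, Bc, Cc, Dc), T, method='zoh')

# The dlsim step: simulate the discrete equivalent with a unit step input.
t = np.arange(0, 10, T)
u = np.ones_like(t)
t_out, y, x = dlsim((Ad, Bd, Cd, Dd, dt), u, t=t)
```

For this system the DC gain is 1/2, so the simulated step response settles near 0.5.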

Sampler Systems
Let's say that we introduce a sampler into our system:



Notice that after the sampler, we must introduce a reconstruction circuit (described elsewhere) so that we may continue to keep the input, output, and plant in the Laplace domain. Notice that we denote the reconstruction circuit with the symbol Gr(s).

The preceding was a particularly simple example. However, the reader is encouraged to solve for the transfer function of a system with a sampler (and its associated reconstructor) in the following places:


 * Before the feedback system
 * In the forward path, after the plant
 * In the reverse path
 * After the feedback loop