LMIs in Control/Matrix and LMI Properties and Tools/Convexity of LMIs

Definition
A set, $$\mathcal{S}$$, in a real inner product space is convex if for all $$x, y\in\mathcal{S}$$ and $$\alpha\in\mathbb{R}$$, where $$0\leq\alpha\leq1$$, it holds that $$\alpha x+(1-\alpha)y\in\mathcal{S}$$.

Lemma 1.1
The set of solutions to an LMI is convex.

That is, the set $$\mathcal{S}=\{x\in\mathbb{R}^{m}\mid F(x)\leq0\}$$ is a convex set, where $$F:\mathbb{R}^{m}\longrightarrow\mathbb{S}^{n}$$ is an LMI.

Definition 1.2
An LMI, $$F:\mathbb{R}^{m}\longrightarrow\mathbb{S}^{n}$$, in the variable $$x\in\mathbb{R}^{m}$$ is an expression of the form

$$F(x)=F_{0}+\sum_{i=1}^m x_{i}F_{i}\leq0$$

where $$x^{T}=[x_{1}\cdots x_{m}]$$ and $$F_{i}\in\mathbb{S}^{n}$$, $$i=0,\ldots,m$$.

Proof
Consider $$x,y\in\mathbb{R}^{m}$$ and $$\alpha\in[0,1]$$, and suppose that $$x$$ and $$y$$ satisfy the LMI in Definition 1.2, that is, $$F(x)\leq0$$ and $$F(y)\leq0$$.

The set $$\mathcal{S}$$ is then convex, since

$$\begin{alignat}{2} F(\alpha x+(1-\alpha)y) & = F_{0}+\sum_{i=1}^m (\alpha x_{i}+(1-\alpha)y_{i})F_{i} \\ & = F_{0}-\alpha F_{0}+\alpha F_{0}+\alpha\sum_{i=1}^m x_{i}F_{i}+(1-\alpha)\sum_{i=1}^m y_{i}F_{i} \\ & = \alpha F_{0}+\alpha\sum_{i=1}^m x_{i}F_{i}+(1-\alpha)F_{0}+(1-\alpha)\sum_{i=1}^m y_{i}F_{i} \\ & = \alpha F(x)+(1-\alpha)F(y) \\ & \leq 0, \end{alignat}$$

where the final inequality holds because $$\alpha\geq0$$, $$1-\alpha\geq0$$, $$F(x)\leq0$$, and $$F(y)\leq0$$. Hence $$\alpha x+(1-\alpha)y\in\mathcal{S}$$, which proves Lemma 1.1.
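As a quick numerical sanity check of Lemma 1.1, the following Python sketch (using NumPy; the matrices $$F_{0},F_{1},F_{2}$$ and the test points are hypothetical choices, not part of the text above) verifies that convex combinations of two feasible points remain feasible:

```python
import numpy as np

# Hypothetical LMI data: F(x) = F0 + x1*F1 + x2*F2, with each Fi symmetric.
rng = np.random.default_rng(0)
n, m = 4, 2
F0 = -np.eye(n)  # chosen so that x = 0 is strictly feasible
Fs = []
for _ in range(m):
    A = 0.1 * rng.standard_normal((n, n))
    Fs.append(A + A.T)  # symmetrize so that Fi lies in S^n

def F(x):
    """Evaluate the affine matrix function F(x) = F0 + sum_i x_i * F_i."""
    return F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))

def feasible(x, tol=1e-9):
    """F(x) <= 0 holds iff the largest eigenvalue of F(x) is non-positive."""
    return np.max(np.linalg.eigvalsh(F(x))) <= tol

x, y = np.array([0.1, -0.2]), np.array([-0.3, 0.05])
assert feasible(x) and feasible(y)  # two feasible points ...

# ... and every convex combination of them is also feasible (Lemma 1.1).
for alpha in np.linspace(0.0, 1.0, 11):
    assert feasible(alpha * x + (1 - alpha) * y)
print("all convex combinations feasible")
```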

Convexity of LMIs
From Lemma 1.1, it follows that an optimization problem with a convex objective function and LMI constraints is convex.

The following is a non-exhaustive list of scalar convex objective functions involving matrix variables that can be minimized subject to LMI constraints to yield a semidefinite programming (SDP) problem; a solver-level sketch of one such problem follows the list.


 * $$\mathcal{J}(x)=\frac{1}{2}x^{T}Px+q^{T}x+r$$, where $$x,q\in\mathbb{R}^{n}$$, $$P\in\mathbb{S}^{n}$$, $$P>0$$, and $$r\in\mathbb{R}$$.


 * 1) Special case when $$q=0$$ and $$r=0$$: $$\mathcal{J}(x)=\frac{1}{2}x^{T}Px$$, where $$x\in\mathbb{R}^{n}$$, $$P\in\mathbb{S}^{n}$$, and $$P>0$$.
 * 2) Special case when $$P=2\cdot1$$, $$q=0$$, and $$r=0$$: $$\mathcal{J}(x)=x^{T}x=\|x\|^{2}_{2}$$, where $$x\in\mathbb{R}^{n}$$.


 * $$\mathcal{J}(X)=tr(X^{T}PX+Q^{T}X+X^{T}R+S)$$, where $$X$$, $$Q$$, $$R\in\mathbb{R}^{n \times m}$$, $$P\in\mathbb{S}^{n}$$, $$S\in\mathbb{R}^{m \times m}$$, and $$P\geq0$$.


 * 1) Special case when $$Q=R=0$$ and $$S=0$$: $$\mathcal{J}(X)=tr(X^{T}PX)$$, where $$X\in\mathbb{R}^{n \times m}$$, $$P\in\mathbb{S}^{n}$$, and $$P\geq0$$.
 * 2) Special case when $$P=1$$, $$Q=R=0$$, and $$S=0$$: $$\mathcal{J}(X)=tr(X^{T}X)=\|X\|^{2}_{F}$$, where $$X\in\mathbb{R}^{n \times m}$$.
 * 3) Special case when $$P=0$$, $$R=0$$, and $$S=0$$: $$\mathcal{J}(X)=tr(Q^{T}X)$$, where $$X$$, $$Q\in\mathbb{R}^{n \times m}$$.
 * 4) Special case when $$P=1$$, $$Q=R=0$$, $$S=0$$, and $$X\in\mathbb{S}^{n}$$: $$\mathcal{J}(X)=tr(X^{2})$$, where $$X\in\mathbb{S}^{n}$$.


 * $$\mathcal{J}(X)=\log(\det(X^{-1}))=-\log(\det(X))$$, where $$X\in\mathbb{S}^{n}$$ and $$X>0$$.
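As noted above the list, pairing any of these objectives with LMI constraints yields an SDP. The sketch below is a minimal CVXPY formulation of the linear-objective case (special case 3), with a hypothetical data matrix $$Q$$ and a trace normalization added so that the problem has a finite optimum; over this particular feasible set the optimal value equals the smallest eigenvalue of $$Q$$:

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: a symmetric matrix Q.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Q = M + M.T

# Decision variable: a symmetric matrix X.
X = cp.Variable((4, 4), symmetric=True)

# SDP: minimize the linear objective tr(Q^T X) subject to the LMI X >= 0
# (">>" is CVXPY's semidefinite ordering) and the affine normalization
# tr(X) = 1.
problem = cp.Problem(
    cp.Minimize(cp.trace(Q.T @ X)),
    [X >> 0, cp.trace(X) == 1],
)
problem.solve()

# Over this feasible set the optimum equals the smallest eigenvalue of Q.
print(problem.value, np.linalg.eigvalsh(Q).min())
```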

Relative Definiteness of a Matrix
The definiteness of a matrix can be found relative to another matrix.

For example,

Consider the matrices $$A\in\mathbb{S}^{n}$$ and $$B\in\mathbb{S}^{n}$$. The matrix inequality $$A\geq B$$ implies $$A>0$$, when we know that $$B>0$$.

This follows from $$0<B\leq A$$. Similarly, the strict inequality $$A>0$$ is implied by the non-strict inequality $$A\geq\epsilon1$$, where $$\epsilon\in\mathbb{R}_{>0}$$, and $$B<0$$ is implied by $$B\leq-\epsilon1$$, where $$\epsilon\in\mathbb{R}_{>0}$$.

Converting a strict matrix inequality into a non-strict matrix inequality is useful when working with LMI solvers that cannot handle strict constraints.
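For instance, in CVXPY (whose `>>` operator encodes the non-strict semidefinite ordering), the strict constraint $$A>0$$ is typically approximated by $$A\geq\epsilon1$$ as above. A minimal sketch, with hypothetical data $$B$$ and margin $$\epsilon$$:

```python
import cvxpy as cp
import numpy as np

n, eps = 3, 1e-6  # eps is a hypothetical user-chosen margin

A = cp.Variable((n, n), symmetric=True)
B = np.diag([1.0, 2.0, 3.0])  # hypothetical data

# Non-strict surrogate for the strict constraint A > 0:
# require A - eps*I to be positive semidefinite.
constraints = [A >> eps * np.eye(n), A >> B]

# e.g., find the minimum-trace A satisfying both matrix inequalities.
problem = cp.Problem(cp.Minimize(cp.trace(A)), constraints)
problem.solve()

# The optimal A is positive definite with margin at least eps
# (up to solver tolerance).
print(np.linalg.eigvalsh(A.value).min())
```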

Concatenation of LMIs
A useful property of LMIs is that multiple LMIs can be concatenated together to form a single LMI.

For example,

Satisfying the LMIs $$A<0$$ and $$B<0$$ is equivalent to satisfying the concatenated LMI

$$\begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}<0$$

More generally, satisfying the LMIs $$A_{i}<0$$, $$i=1,\ldots,n$$, is equivalent to satisfying the concatenated LMI $$\text{diag}\{A_{1},\ldots,A_{n}\}<0$$.
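A brief numerical illustration of this equivalence (a sketch assuming NumPy and SciPy; the blocks $$A$$ and $$B$$ are hypothetical): the eigenvalues of the block-diagonal matrix are exactly the union of the blocks' eigenvalues, so the concatenated LMI holds if and only if each individual LMI holds.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical negative definite blocks A and B.
A = -np.array([[2.0, 0.5], [0.5, 1.0]])
B = -np.array([[3.0, 1.0], [1.0, 2.0]])

# diag{A, B} stacks the blocks on the diagonal; its eigenvalues are the
# union of the eigenvalues of A and of B.
C = block_diag(A, B)

print(np.linalg.eigvalsh(A).max())  # negative, so A < 0
print(np.linalg.eigvalsh(B).max())  # negative, so B < 0
print(np.linalg.eigvalsh(C).max())  # negative, so diag{A, B} < 0
```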