
Singular Value Decomposition
Given any $$m \times n$$ matrix $$A$$, the singular value decomposition (SVD) is $$A = U\Sigma V^H$$ where $$U$$ is an $$m \times m$$ unitary matrix, $$V$$ is an $$n \times n$$ unitary matrix, and $$\Sigma$$ is an $$m \times n$$ rectangular diagonal matrix: all off-diagonal entries are 0, and the diagonal entries are non-negative real numbers. The diagonal entries of $$\Sigma$$ are referred to as the "singular values" of $$A$$.

As an example, consider the shear transformation $$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ \end{pmatrix}$$. The singular value decomposition of $$A$$ is:

$$\begin{pmatrix} 1 & 2 \\ 0 & 1 \\ \end{pmatrix} = \begin{pmatrix} 0.9239 & -0.3827 \\ 0.3827 & 0.9239 \\ \end{pmatrix} \begin{pmatrix} 2.4142 & 0      \\ 0      & 0.4142 \\ \end{pmatrix} \begin{pmatrix} 0.3827 & 0.9239 \\ -0.9239 & 0.3827 \\ \end{pmatrix}$$
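This decomposition can be checked numerically with NumPy (an illustrative check only; `numpy.linalg.svd` returns the factors as `U`, the singular values, and $$V^H$$, and the columns of $$U$$ and $$V$$ are determined only up to sign, so they may differ in sign from the matrices above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# numpy.linalg.svd returns U, the singular values, and V^H
U, s, Vh = np.linalg.svd(A)

print(s)                                    # [2.4142..., 0.4142...], i.e. 1 + sqrt(2) and sqrt(2) - 1
print(np.allclose(U @ np.diag(s) @ Vh, A))  # True: A = U Σ V^H
print(np.allclose(U @ U.T, np.eye(2)))      # True: U is unitary (real here, hence orthogonal)
```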

The set of all vectors $$\vec{v}$$ with unit length $$|\vec{v}| = 1$$ forms a sphere of radius 1 around the origin. When $$A$$ is applied to this sphere, its image is an ellipsoid. The principal radii of this ellipsoid are the singular values, and the directions of those radii are the columns of $$U$$.
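This picture can be checked numerically: sample points on the unit circle, apply the shear matrix from the example above, and measure how far the image points land from the origin. The longest and shortest distances recover the two singular values (a sketch using NumPy; the tolerance reflects the finite sampling of the circle):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# sample the unit circle |v| = 1 densely
theta = np.linspace(0.0, 2.0 * np.pi, 100_000)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # shape (2, 100000)

# apply A; the image points trace out an ellipse
lengths = np.linalg.norm(A @ circle, axis=0)

# the longest and shortest radii of the ellipse are the singular values
print(lengths.max())   # ≈ 2.4142 = 1 + sqrt(2)
print(lengths.min())   # ≈ 0.4142 = sqrt(2) - 1
```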

Existence of the singular value decomposition
One fact that is not immediately obvious is that the singular value decomposition always exists:

In essence, any linear transformation is a rotation, followed by stretching or shrinking parallel to each axis (with some dimensions added or zeroed out of existence), followed by another rotation.

The following proof demonstrates that the singular value decomposition always exists. An outline of the proof is given first:

Proof outline

We need to prove that an arbitrary linear transformation $$A$$ is a rotation: $$V^H$$, followed by scaling parallel to each axis: $$\Sigma$$, followed by another rotation: $$U$$, giving $$A = U\Sigma V^H$$.

If the columns of $$A$$ are already mutually orthogonal, then the first rotation is not necessary: $$V = I$$. The entries of $$\Sigma$$ are the lengths of the vectors formed by the columns of $$A$$, and $$U$$ is a rotation that rotates the standard basis vectors of $$\Complex^m$$ to be parallel with the columns of $$A$$.
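This special case can be illustrated numerically (an assumed example matrix, chosen so that its columns are orthogonal; only the columns of $$U$$ that pair with the columns of $$A$$ are built here, since the remaining columns of a full $$U$$ do not affect the product):

```python
import numpy as np

# a matrix whose columns are already mutually orthogonal
A = np.array([[2.0, -1.0],
              [2.0,  2.0],
              [1.0, -2.0]])

sigma = np.linalg.norm(A, axis=0)       # lengths of the columns: the singular values
U_cols = A / sigma                      # unit vectors parallel to the columns of A

print(A[:, 0] @ A[:, 1])                # 0.0: the columns really are orthogonal
print(np.allclose(U_cols * sigma, A))   # True: A = U Σ with V = I
```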

In most cases, however, the columns of $$A$$ are not mutually orthogonal, and the rotation $$V^H$$ is non-trivial. Since $$A = (AV)V^H$$, $$V$$ must be chosen so that the columns of $$AV$$ are mutually orthogonal. Let $$V = \begin{pmatrix} \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_n \end{pmatrix}$$. We need to choose orthonormal vectors $$\vec{v}_1, \vec{v}_2, \dots, \vec{v}_n$$ so that $$A\vec{v}_1, A\vec{v}_2, \dots, A\vec{v}_n$$ are all mutually orthogonal. This can be done iteratively. Suppose we have chosen $$\vec{v}_1$$ so that for every vector $$\vec{v}$$ orthogonal to $$\vec{v}_1$$, $$A\vec{v}_1$$ is orthogonal to $$A\vec{v}$$. The task then reduces to finding an orthonormal set of vectors $$\vec{v}_2, \vec{v}_3, \dots, \vec{v}_n$$, confined to the space of vectors perpendicular to $$\vec{v}_1$$, such that $$A\vec{v}_2, A\vec{v}_3, \dots, A\vec{v}_n$$ are mutually orthogonal.
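The defining property of $$V$$ described here can be observed numerically: taking $$V$$ from a computed SVD of a random matrix, the columns of $$AV$$ come out mutually orthogonal (a sketch with NumPy; the Gram matrix test below is one convenient way to check all pairs of columns at once):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# the SVD returns V^H; its conjugate transpose gives V
_, _, Vh = np.linalg.svd(A)
AV = A @ Vh.conj().T

# the Gram matrix (AV)^H (AV) is diagonal exactly when the
# columns of AV are mutually orthogonal
gram = AV.conj().T @ AV
print(np.allclose(gram, np.diag(np.diag(gram))))   # True
```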

Let $$V_1$$ be a unitary matrix with $$\vec{v}_1$$ as the first column. Factoring $$V_1$$ from the left side of $$V$$ to get $$V = V_1V_\text{new}$$ results in a new set of orthonormal vectors that are the columns of $$V_\text{new}$$. The goal of having the columns of $$AV$$ be mutually orthogonal is converted to having the columns of $$(AV_1)V_\text{new}$$ be mutually orthogonal with $$AV_1$$ effectively replacing $$A$$. $$\vec{v}_1$$ transforms to $$\vec{e}_1$$, and the space of vectors orthogonal to $$\vec{v}_1$$ transforms to the space spanned by the standard basis vectors $$\vec{e}_2, \vec{e}_3, \dots, \vec{e}_n$$. The first column of $$AV_1$$ is $$A\vec{v}_1$$ and so is orthogonal to all other columns.

If $$U_1$$ is a unitary matrix whose first column is $$A\vec{v}_1$$ normalized to unit length, then factoring $$U_1$$ from the left side of $$AV_1$$ to get $$A_1 = U_1^HAV_1$$ results in a matrix whose first column is parallel to the standard basis vector $$\vec{e}_1$$. The first column of $$AV_1$$ is orthogonal to all other columns, so the first column of $$A_1$$ is orthogonal to all other columns; hence the first row of $$A_1$$ contains all 0s except for the first entry.

$$\vec{v}_2, \vec{v}_3, \dots, \vec{v}_n$$ can now be determined recursively, with the dimension reduced to $$n-1$$ and $$A$$ replaced by $$A_1$$ with its first row and column removed. This forms the inductive component of the coming proof.
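One step of this construction can be sketched numerically: take $$\vec{v}_1$$ to be a top right singular vector, complete it (and the normalized $$A\vec{v}_1$$) to unitary matrices, and observe that $$A_1 = U_1^H A V_1$$ has zeros in its first row and first column away from the $$(1,1)$$ entry, so the recursion can continue on the trailing submatrix. This is an illustrative sketch with NumPy; the QR-based completion used below is just one of several ways to build $$V_1$$ and $$U_1$$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))

# v1: a unit vector maximizing |A v| (a top right singular vector)
_, _, Vh = np.linalg.svd(A)
v1 = Vh[0].conj()

# complete v1 to a unitary V1: QR of [v1 | random columns] yields an
# orthonormal basis whose first column is parallel to v1
V1, _ = np.linalg.qr(np.column_stack([v1, rng.standard_normal((n, n - 1))]))

# likewise complete u1 = A v1 / |A v1| to a unitary U1
u1 = A @ v1 / np.linalg.norm(A @ v1)
U1, _ = np.linalg.qr(np.column_stack([u1, rng.standard_normal((m, m - 1))]))

A1 = U1.conj().T @ A @ V1

# first row and first column vanish away from the (1,1) entry,
# so the recursion continues on A1[1:, 1:]
print(np.allclose(A1[0, 1:], 0.0))   # True
print(np.allclose(A1[1:, 0], 0.0))   # True
```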

Lastly, how do we know that there exists a $$\vec{v}_1$$ such that for every vector $$\vec{v}$$ orthogonal to $$\vec{v}_1$$, $$A\vec{v}_1$$ is orthogonal to $$A\vec{v}$$? The answer will be that any unit length $$\vec{v}$$ that maximizes $$|A\vec{v}|$$ is a valid $$\vec{v}_1$$.
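This claim can be tested numerically by approximating the maximizing unit vector with power iteration on $$A^H A$$ and then checking the orthogonality (a sketch; power iteration is one simple way to find this maximizer, assuming the largest singular value is not repeated):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# power iteration on A^H A converges to the unit v maximizing |A v|
v1 = rng.standard_normal(3)
v1 /= np.linalg.norm(v1)
for _ in range(1000):
    v1 = A.conj().T @ (A @ v1)
    v1 /= np.linalg.norm(v1)

# take any v orthogonal to v1 and check that A v1 is orthogonal to A v
v = rng.standard_normal(3)
v -= (v @ v1) * v1          # remove the component along v1
v /= np.linalg.norm(v)

print(abs((A @ v1) @ (A @ v)))   # ≈ 0
```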

We are now ready to give the proof in full detail: