Singular Value Decomposition
SVD - Overview
A technique for handling matrices (sets of equations) that do not
have an inverse. This includes square matrices whose
determinant is zero and all rectangular matrices.
A = U W V^T

Where:

U is an m-by-n matrix of the orthonormal eigenvectors of A A^T

V^T is the transpose of an n-by-n matrix containing the orthonormal eigenvectors of A^T A

W is an n-by-n diagonal matrix of the singular values, which are the square roots of the eigenvalues of A^T A
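The decomposition A = U W V^T and the eigenvalue relationship can be checked numerically; a small sketch (the 4-by-3 matrix is a made-up example, and numpy returns the singular values as a vector rather than a diagonal W):

```python
import numpy as np

# Hypothetical rectangular example matrix; any matrix works.
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.0],
              [1.0, 0.0, 5.0],
              [2.0, 1.0, 1.0]])

# full_matrices=False gives the "thin" SVD: U is m-by-n, w has n values.
U, w, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A = U W V^T.
W = np.diag(w)
A_rebuilt = U @ W @ Vt
print(np.allclose(A, A_rebuilt))          # True

# Singular values are the square roots of the eigenvalues of A^T A.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(w, np.sqrt(eigvals)))   # True
```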
The Algorithm
Derivation of the SVD can be broken down into two major steps [2]:
1. Reduce the initial matrix to bidiagonal form using Householder transformations
2. Diagonalize the resulting matrix using QR transformations
[Diagram: alternating Householder transformations S_2, ..., P_m, ..., S_n progressively zero out rows and columns, taking M through M_3, ..., M_m, ..., M_n until only the bidiagonal matrix B remains.]
Householder Derivation
Now that we’ve seen how Householder matrices are used, how do we get one? Going back to its definition:

H = I - 2ww^T

w = (x - y) / ||x - y||,  so that  Hx = y  whenever  ||x|| = ||y||

To make the Householder matrix useful, w must be derived from the column (or row) we want to transform (here || || is the length operator, and s = ||x||).
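A minimal sketch of this construction (the helper name `householder` and the sample vectors are my own), assuming ||x|| = ||y|| as the definition requires:

```python
import numpy as np

def householder(x, y):
    """H = I - 2 w w^T with w = (x - y)/||x - y||, so that H x = y.

    Requires ||x|| == ||y||; H is symmetric and orthogonal.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = (x - y) / np.linalg.norm(x - y)
    return np.eye(len(x)) - 2.0 * np.outer(w, w)

# Reflect a column onto a multiple of e1: y = (||x||, 0, 0).
x = np.array([4.0, 2.0, 4.0])   # ||x|| = sqrt(36) = 6
y = np.array([6.0, 0.0, 0.0])
H = householder(x, y)
print(np.allclose(H @ x, y))    # True
```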
Householder Example

To derive S1 for the given matrix M, we would have x = (4, 2, 4) and y = (6, 0, 0), with s = sqrt(4^2 + 2^2 + 4^2) = sqrt(36) = 6.

This leads to:

w = (x - y) / ||x - y|| = (0, -0.314814, 0.169789, 0.933843)

Then:

S1 = I - 2ww^T,  with  w^T = (0, -0.314814, 0.169789, 0.933843)
Example con’t

Finally:

             0.0   0.0        0.0        0.0
S1 = I - 2 * 0.0   0.099108  -0.053452  -0.293987
             0.0  -0.053452   0.028828   0.158556
             0.0  -0.293987   0.158556   0.872063

       1.0   0.0        0.0        0.0
    =  0.0   0.801784   0.106904   0.587974
       0.0   0.106904   0.942344  -0.317112
       0.0   0.587974  -0.317112  -0.744126

With that:

M2 = M1 S1

[The entries of M1 and of the product M2 are garbled in the source; of M2, only the rows (0, 1.05, 1.36, 0.51) and (0, 0.34, 1.15, 0.67) are recoverable.]
The next step takes B and converts it to the final diagonal form
using successive QR transformations.
QR Decompositions
The QR decomposition is defined as :
M = QR
Where Q is an orthogonal matrix (such that QT = Q-1, QTQ = QQT = I)
And R is an upper triangular matrix:

     x x x x
R =    x x x
         x x
           x

Which can be written as: M_{x+1} = Q_{x+1}^T M_x Q_{x+1}
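A small numpy sketch of both properties and of one iteration step (the 3-by-3 matrix is a made-up example):

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])

Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q is orthogonal
print(np.allclose(R, np.triu(R)))        # True: R is upper triangular
print(np.allclose(Q @ R, A))             # True: A = QR

# One step of the QR iteration: A_next = R Q = Q^T A Q.
A_next = Q.T @ A @ Q
print(np.allclose(A_next, R @ Q))        # True
```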
              1   0   0   0
G(i, j, θ) =  0   c  -s   0        c = cos(θ)
              0   s   c   0        s = sin(θ)
              0   0   0   1

with c in positions (i, i) and (j, j), -s in position (i, j), and s in position (j, i).

[1] shows the transpose on pre-multiply, but the examples do not appear to be transposed (i.e. -s is still located at (i, j)).
Givens rotation
The zeroing of an element is performed by computing the c and s in the following system:

[ c   s ] [ a ]   [ sqrt(a^2 + b^2) ]
[ -s  c ] [ b ] = [ 0               ]

Where b is the element being zeroed and a is the element next to b in the preceding column / row.

This results in:

c = a / sqrt(a^2 + b^2),  s = b / sqrt(a^2 + b^2)
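These formulas can be checked directly; a small sketch (the helper name `givens` and the sample values a = 3, b = 4 are my own):

```python
import numpy as np

def givens(a, b):
    """Compute c, s that rotate (a, b) onto (r, 0), r = sqrt(a^2 + b^2)."""
    r = np.hypot(a, b)
    return a / r, b / r

a, b = 3.0, 4.0
c, s = givens(a, b)
G = np.array([[c, s],
              [-s, c]])

# The rotation zeroes b and leaves the length in the first slot.
print(G @ np.array([a, b]))   # [5. 0.]
```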
Givens rotation and the Bidiagonal matrix
The application of Givens rotations on a bidiagonal matrix looks
like the following and results in its implicit QR
decomposition.
[Diagram: a sweep of Givens rotations J1, J2, ..., J8, applied alternately from the right and the left, chases the off-bidiagonal bulge down the matrix through B1, B2, ..., B8 to B`.]
Givens and Bidiagonal
With the exception of J1, Jx is the Givens matrix computed from
the element being zeroed.
J1 is computed from the following:

[ c   s ] [ d1^2  ]   [ x ]
[ -s  c ] [ d1 f1 ] = [ 0 ]

where the d_i are the diagonal and the f_i the superdiagonal entries of B, so that:

        d1^2     d1 f1
        d1 f1    d2^2 + f1^2   d2 f2
B^T B =          d2 f2         d3^2 + f2^2   ...
                               ...           d_{n-1}^2 + f_{n-2}^2   d_{n-1} f_{n-1}
                                             d_{n-1} f_{n-1}         d_n^2 + f_{n-1}^2
Bidiagonal and QR
This computation of J1 causes the implicit formation of B^T B, which gives:

B` = (J_i ... J_4 J_2) B (J_1 J_3 ... J_k)

so that:

B`^T B` = (J_1 J_3 ... J_k)^T B^T B (J_1 J_3 ... J_k) = Q^T (B^T B) Q
SVD Applications - Solving Mx = b
Case 1 : b = 0
x is known as the null space of M, which is defined as the set of all vectors that
satisfy the equation Mx = 0. A basis for it is given by the columns of V (rows of V^T)
associated with a singular value (in W) equal to 0.
Case 2 : b != 0
Then we have: Mx = b

Which can be re-written as: M^-1 M x = M^-1 b, i.e. x = M^-1 b

From the previous slide we know: M^-1 = V W^-1 U^T

Hence: x = V W^-1 U^T b, which is easily solvable
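A small numpy sketch of this solve (the 2-by-2 system is a made-up example); `np.linalg.pinv` forms V W^-1 U^T internally, inverting only the nonzero singular values:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 4.0])

# Explicitly: x = V W^-1 U^T b.
U, w, Vt = np.linalg.svd(M)
x = Vt.T @ np.diag(1.0 / w) @ U.T @ b

print(np.allclose(M @ x, b))                  # True
print(np.allclose(x, np.linalg.pinv(M) @ b))  # True
```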
SVD Applications con’t
Rank, Range, and Null space
• The rank of matrix A can be calculated from the SVD as the number of nonzero singular values.
• The range of matrix A is spanned by the left singular vectors (columns of U) corresponding to the nonzero singular values.
• The null space of matrix A is spanned by the right singular vectors (columns of V) corresponding to the zero singular values.
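These three facts can be sketched in numpy (the rank-deficient matrix is a made-up example whose third row is the sum of the first two; the tolerance choice is a common convention, not from the slides):

```python
import numpy as np

# Rank-deficient example: third row = first row + second row.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

U, w, Vt = np.linalg.svd(A)

# Treat singular values below a small tolerance as zero.
tol = max(A.shape) * np.finfo(float).eps * w[0]
rank = int(np.sum(w > tol))
print(rank)                              # 2

# Null-space basis: right singular vectors for the zero singular values.
null_space = Vt[rank:].T
print(np.allclose(A @ null_space, 0))    # True
```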
SVD Applications con’t
Condition number
• SVD can tell how close a square matrix A is to being singular.
• The ratio of the largest singular value to the smallest singular value, c = σ_max / σ_min, tells us how close a matrix is to being singular:
• A is singular if c is infinite.
• A is ill-conditioned if c is too large (machine dependent).
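The ratio c = σ_max / σ_min can be computed directly from the singular values (the nearly-singular matrix is a made-up example; `np.linalg.cond` computes the same quantity by default):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])    # nearly singular: det = 1e-4

w = np.linalg.svd(A, compute_uv=False)
c = w[0] / w[-1]                 # condition number = sigma_max / sigma_min

print(c > 1e4)                           # True: ill-conditioned
print(np.isclose(c, np.linalg.cond(A)))  # True
```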
SVD Applications con’t
Data Fitting Problem
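The details of this slide are not in the source; as a sketch of the usual approach, linear least-squares fitting is solved through the SVD by `np.linalg.lstsq` (the line model and sample points are made up):

```python
import numpy as np

# Fit y = a + b*x to sample points lying exactly on y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

# Design matrix for the model y = a + b*x.
A = np.column_stack([np.ones_like(x), x])

# lstsq minimizes ||A p - y|| via the SVD.
p, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(p, [1.0, 2.0]))   # True
```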
SVD Applications con’t
Image processing
[U,W,V] = svd(A)
NewImg = U(:,1)*W(1,1)*V(:,1)'
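The MATLAB lines above keep only the first singular triplet; the same rank-k approximation in numpy (k = 1 here, on a made-up "image" matrix) is the best approximation of that rank, with spectral-norm error equal to the first discarded singular value:

```python
import numpy as np

img = np.array([[8.0, 6.0, 4.0],
                [4.0, 3.0, 2.0],
                [2.0, 1.0, 1.0]])

U, w, Vt = np.linalg.svd(img)

# Keep only the k largest singular triplets.
k = 1
approx = U[:, :k] @ np.diag(w[:k]) @ Vt[:k, :]

# The 2-norm error equals the first discarded singular value.
err = np.linalg.norm(img - approx, ord=2)
print(np.isclose(err, w[k]))    # True
```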
SVD Applications con’t
Digital Signal Processing (DSP)
• SVD is used as a method for noise reduction.
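One common noise-reduction sketch (my own illustration of the idea, not taken from the slides): arrange the signal's samples in a matrix, keep only the dominant singular values, and rebuild; the discarded components are treated as noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-1 "clean" signal matrix plus small additive noise.
clean = np.outer(np.sin(np.linspace(0, np.pi, 20)), np.ones(10))
noisy = clean + 0.01 * rng.standard_normal(clean.shape)

U, w, Vt = np.linalg.svd(noisy, full_matrices=False)

# Keep only the dominant singular triplet.
k = 1
denoised = U[:, :k] @ np.diag(w[:k]) @ Vt[:k, :]

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)   # True: truncation removed most noise
```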