
Topic:

DIAGONALIZATION,
EIGEN VALUE,
EIGEN VECTOR,
ORTHOGONAL,
CAYLEY HAMILTON THEOREM

By Rajesh Goswami
 DIAGONALIZATION
 EIGEN VALUE
 EIGEN VECTOR
 ORTHOGONAL
 CAYLEY HAMILTON THEOREM

 A square matrix M is called diagonalizable if we
can find an invertible matrix, say P, such that the
product P^-1 M P is a diagonal matrix.
 A diagonalizable matrix can be raised to a high
power easily.
› Suppose that P^-1 M P = D, with D diagonal.
› Then M = P D P^-1.
› M^n = (P D P^-1) (P D P^-1) (P D P^-1) … (P D P^-1)
  = P D^n P^-1.
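As a quick illustration (not from the original slides), the NumPy sketch below checks the identity M^n = P D^n P^-1 on a small matrix chosen only for this example:

import numpy as np

# Sketch: computing a matrix power via diagonalization, M^n = P D^n P^-1.
# M is an illustrative matrix (not one from the slides).
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigen-decomposition: columns of P are eigenvectors, d holds the eigenvalues.
d, P = np.linalg.eig(M)
D = np.diag(d)

n = 10
# Raise only the diagonal matrix to the n-th power, then change basis back.
M_power = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)

# Compare with direct repeated multiplication of M.
print(np.allclose(M_power, np.linalg.matrix_power(M, n)))   # True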

 Example: a matrix A is diagonalizable when we can
exhibit a matrix P such that P^-1 A P is a diagonal
matrix.
 By definition, a matrix M is diagonalizable if
P^-1 M P = D
for some invertible matrix P and diagonal matrix D,
or equivalently, M = P D P^-1.
 Given a square matrix A, a non-zero vector v is
called an eigenvector of A if we can find a real
number λ (which may be zero) such that
A v = λ v
(a matrix-vector product on the left, a scalar
multiple of a vector on the right).

 This number λ is called an eigenvalue of A,
corresponding to the eigenvector v.

 If v is an eigenvector of A with eigenvalue λ,
then any non-zero scalar multiple of v also
satisfies the definition of an eigenvector:
A (k v) = k (A v) = k (λ v) = λ (k v), for any k ≠ 0.

First eigenvalue = 2, with eigenvector

where k is any nonzero real number.

Second eigenvalue = -5, with eigenvector

where k is any nonzero real number.
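The matrix for this example did not survive the conversion, so the NumPy sketch below uses a stand-in 2 x 2 matrix that also happens to have eigenvalues 2 and -5, just to show how the eigenpairs are computed numerically:

import numpy as np

# Sketch: eigenvalues/eigenvectors with NumPy.
# A is a stand-in matrix (not the one from the slide); its eigenvalues are 2 and -5.
A = np.array([[0.0, 10.0],
              [1.0, -3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 2 and -5 (the order may vary)

# Each column of `eigenvectors` is an eigenvector v with A v = lambda v,
# and any nonzero scalar multiple k*v is also an eigenvector.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))               # True
    print(np.allclose(A @ (3 * v), lam * (3 * v)))   # True for the multiple 3*v as well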

 Matrix addition/subtraction
› Matrices must be of the same size.

 Matrix multiplication
› Multiplying an m x n matrix by a q x p matrix gives
an m x p matrix.
› Condition: n = q (so, for example, a 2 x 2 matrix
cannot multiply a 3 x 3 matrix).

 n x n diagonal matrix

 The inverse A^-1 of a matrix A has the property:
A A^-1 = A^-1 A = I

 A^-1 exists only if det(A) ≠ 0.

 Terminology
› Singular matrix: A^-1 does not exist.
› Ill-conditioned matrix: A is close to being singular.

 Properties of the inverse:
› (A^-1)^-1 = A
› (A B)^-1 = B^-1 A^-1
› (A^T)^-1 = (A^-1)^T

 The pseudo-inverse A^+ of a matrix A (which could be
non-square, e.g., m x n) is given by:
A^+ = (A^T A)^-1 A^T
(when A has full column rank, so that A^T A is invertible).

 It can be shown that:
A^+ A = I

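A short NumPy sketch of the inverse and pseudo-inverse (the matrices below are illustrative choices; np.linalg.pinv computes the pseudo-inverse, which matches the (A^T A)^-1 A^T formula when the matrix has full column rank):

import numpy as np

# Sketch: inverse and pseudo-inverse in NumPy (illustrative matrices).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # det(A) = 5 != 0, so A^-1 exists
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))     # A A^-1 = I

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])       # 3 x 2, non-square, full column rank
B_pinv = np.linalg.pinv(B)                   # pseudo-inverse B^+
# For full column rank, B^+ equals (B^T B)^-1 B^T and B^+ B = I.
print(np.allclose(B_pinv, np.linalg.inv(B.T @ B) @ B.T))   # True
print(np.allclose(B_pinv @ B, np.eye(2)))                  # True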

 Rank of a matrix A: equal to the dimension of the
largest square sub-matrix of A that has a non-zero
determinant.
› Example: a matrix whose largest such sub-matrix is
3 x 3 has rank 3.

 Alternative definition: the maximum number of
linearly independent columns (or rows) of A.
› Example: if one row (or column) of a 4 x 4 matrix is a
linear combination of the others, its rank is not 4.
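For instance (with an illustrative matrix, not the one from the slide), NumPy reports the rank directly:

import numpy as np

# Sketch: rank of a matrix via NumPy (illustrative 4 x 4 matrix).
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],   # = 2 * (row 1), so the rows are linearly dependent
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 2.0, 0.0]])

# One row is a multiple of another, so the rank is less than 4.
print(np.linalg.matrix_rank(A))   # 3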
• Notation: A = [a1 a2 ... an], where ai is the i-th column of A.

• A is orthogonal if its columns are mutually orthogonal:
ai · aj = 0 for i ≠ j

• A is orthonormal if its columns are mutually orthogonal unit vectors:
ai · aj = 0 for i ≠ j, and ai · ai = 1 for every i

• Note that if A is orthonormal, it is easy to find its inverse:
A^-1 = A^T

Property: A^T A = A A^T = I
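A quick numerical check of the orthonormal case (Q below is an illustrative rotation matrix, not taken from the slides):

import numpy as np

# Sketch: for an orthonormal matrix Q, Q^T Q = Q Q^T = I and Q^-1 = Q^T.
theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns are orthogonal unit vectors

print(np.allclose(Q.T @ Q, np.eye(2)))      # True
print(np.allclose(Q @ Q.T, np.eye(2)))      # True
print(np.allclose(np.linalg.inv(Q), Q.T))   # True: the inverse is just the transpose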
Example:
Diagonalize the matrix
A = [ 2 0 4
      0 6 0
      4 0 2 ]
by means of an orthogonal transformation.

Solution:
The characteristic equation of A is

|A - λI| = | 2-λ   0    4  |
           |  0   6-λ   0  | = 0
           |  4    0   2-λ |

⇒ (2-λ)(6-λ)(2-λ) - 16(6-λ) = 0
⇒ λ = -2, 6, 6
When λ = -2, let X1 = [x1, x2, x3]^T be the eigenvector.
Then (A + 2I) X1 = 0:

[ 4 0 4 ] [x1]   [0]
[ 0 8 0 ] [x2] = [0]
[ 4 0 4 ] [x3]   [0]

⇒ 4x1 + 4x3 = 0   ...(1)
  8x2 = 0         ...(2)
  4x1 + 4x3 = 0   ...(3)
⇒ x1 = k1, x2 = 0, x3 = -k1
⇒ X1 = k1 [1, 0, -1]^T

When λ = 6, let X2 = [x1, x2, x3]^T be the eigenvector.
Then (A - 6I) X2 = 0:

[ -4 0  4 ] [x1]   [0]
[  0 0  0 ] [x2] = [0]
[  4 0 -4 ] [x3]   [0]

⇒ -4x1 + 4x3 = 0
   4x1 - 4x3 = 0
⇒ x1 = x3, and x2 is arbitrary.
Since λ = 6 is a repeated eigenvalue, x2 must be chosen so that
X2 and X3 are orthogonal to each other, and each of them is
orthogonal to X1.
Let X2 = [1, 0, 1]^T and X3 = [α, β, γ]^T.
Since X3 is orthogonal to X1:
α - γ = 0   ...(4)
Since X3 is orthogonal to X2:
α + γ = 0   ...(5)
Solving (4) and (5), we get α = γ = 0, with β arbitrary.

Taking β = 1, X3 = [0, 1, 0]^T.

The modal matrix is
M = [  1 1 0
       0 0 1
      -1 1 0 ]
The normalised modal matrix is
N = [  1/√2  1/√2  0
        0     0    1
      -1/√2  1/√2  0 ]

D = N'AN
  = [ 1/√2  0  -1/√2 ]   [ 2 0 4 ]   [  1/√2  1/√2  0 ]
    [ 1/√2  0   1/√2 ] x [ 0 6 0 ] x [   0     0    1 ]
    [  0    1    0   ]   [ 4 0 2 ]   [ -1/√2  1/√2  0 ]

  = [ -2 0 0
       0 6 0
       0 0 6 ]

which is the required diagonal matrix.

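The whole computation can be cross-checked numerically; the NumPy sketch below (not part of the original solution) rebuilds N from the normalised eigenvectors and confirms that N'AN is the diagonal matrix above:

import numpy as np

# Sketch: numerical check of the orthogonal diagonalization D = N' A N.
A = np.array([[2.0, 0.0, 4.0],
              [0.0, 6.0, 0.0],
              [4.0, 0.0, 2.0]])

s = 1.0 / np.sqrt(2.0)
N = np.array([[  s,   s, 0.0],
              [0.0, 0.0, 1.0],
              [ -s,   s, 0.0]])   # columns: normalised X1, X2, and X3

print(np.allclose(N.T @ N, np.eye(3)))   # True: N is orthonormal
print(np.round(N.T @ A @ N, 10))         # diag(-2, 6, 6)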
Cayley-Hamilton theorem: every square matrix satisfies its
own characteristic equation.

Let
A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...          ...
      an1 an2 ... ann ]
be an n x n matrix, with characteristic polynomial

φ(λ) = |A - λI| = | a11-λ  a12   ...  a1n  |
                  | a21   a22-λ  ...  a2n  |
                  | ...                ... |
                  | an1    an2   ... ann-λ |

The characteristic equation is |A - λI| = 0,
i.e.  p0 λ^n + p1 λ^(n-1) + p2 λ^(n-2) + ... + pn = 0.

We are to prove that
p0 A^n + p1 A^(n-1) + p2 A^(n-2) + ... + pn I = 0   ...(1)
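As an aside (not part of the proof), the identity (1) can be spot-checked numerically: np.poly returns the characteristic-polynomial coefficients of a matrix, and substituting the matrix into that polynomial should give the zero matrix. The matrix below is an illustrative choice.

import numpy as np

# Sketch: numerical spot-check of the Cayley-Hamilton theorem.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # illustrative matrix

p = np.poly(A)     # characteristic-polynomial coefficients, highest power first
n = A.shape[0]

# Evaluate p0*A^n + p1*A^(n-1) + ... + pn*I.
result = sum(coeff * np.linalg.matrix_power(A, n - k) for k, coeff in enumerate(p))
print(np.allclose(result, np.zeros((n, n))))   # True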

Multiplying (1) by A^-1 (assuming A is invertible, so that pn = |A| ≠ 0) gives
0 = p0 A^(n-1) + p1 A^(n-2) + p2 A^(n-3) + ... + pn-1 I + pn A^-1

⇒ A^-1 = - (1/pn) [ p0 A^(n-1) + p1 A^(n-2) + p2 A^(n-3) + ... + pn-1 I ]
Example 1:
Verify the Cayley-Hamilton theorem for the matrix

A = [  2 -1  1
      -1  2 -1
       1 -1  2 ]

and use it to find A^-1.

The characteristic equation is |A - λI| = 0, i.e.

| 2-λ  -1    1  |
| -1   2-λ  -1  | = 0
|  1   -1   2-λ |

or λ^3 - 6λ^2 + 9λ - 4 = 0 (on simplification).

A^2 = [  2 -1  1 ] [  2 -1  1 ]   [  6 -5  5 ]
      [ -1  2 -1 ] [ -1  2 -1 ] = [ -5  6 -5 ]
      [  1 -1  2 ] [  1 -1  2 ]   [  5 -5  6 ]

A^3 = A^2 · A = [  6 -5  5 ] [  2 -1  1 ]   [  22 -21  21 ]
                [ -5  6 -5 ] [ -1  2 -1 ] = [ -21  22 -21 ]
                [  5 -5  6 ] [  1 -1  2 ]   [  21 -21  22 ]

Then A^3 - 6A^2 + 9A - 4I

= [  22 -21  21 ]     [  6 -5  5 ]     [  2 -1  1 ]     [ 1 0 0 ]
  [ -21  22 -21 ] - 6 [ -5  6 -5 ] + 9 [ -1  2 -1 ] - 4 [ 0 1 0 ]
  [  21 -21  22 ]     [  5 -5  6 ]     [  1 -1  2 ]     [ 0 0 1 ]

= [ 0 0 0 ]
  [ 0 0 0 ]  = 0
  [ 0 0 0 ]

which verifies the Cayley-Hamilton theorem for A.
Now, pre-multiplying both sides of this relation by A^-1, we have
A^2 - 6A + 9I - 4A^-1 = 0
⇒ 4A^-1 = A^2 - 6A + 9I

4A^-1 = [  6 -5  5 ]     [  2 -1  1 ]     [ 1 0 0 ]   [  3  1 -1 ]
        [ -5  6 -5 ] - 6 [ -1  2 -1 ] + 9 [ 0 1 0 ] = [  1  3  1 ]
        [  5 -5  6 ]     [  1 -1  2 ]     [ 0 0 1 ]   [ -1  1  3 ]

⇒ A^-1 = (1/4) [  3  1 -1 ]
               [  1  3  1 ]
               [ -1  1  3 ]

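The NumPy sketch below (a numerical cross-check, not part of the original solution) confirms both the identity A^3 - 6A^2 + 9A - 4I = 0 and the value of A^-1 obtained above:

import numpy as np

# Sketch: numerical check of the Cayley-Hamilton example.
A = np.array([[ 2.0, -1.0,  1.0],
              [-1.0,  2.0, -1.0],
              [ 1.0, -1.0,  2.0]])
I = np.eye(3)

# A satisfies its own characteristic equation: A^3 - 6A^2 + 9A - 4I = 0.
lhs = np.linalg.matrix_power(A, 3) - 6 * np.linalg.matrix_power(A, 2) + 9 * A - 4 * I
print(np.allclose(lhs, np.zeros((3, 3))))      # True

# A^-1 from the theorem: A^-1 = (A^2 - 6A + 9I) / 4.
A_inv = (np.linalg.matrix_power(A, 2) - 6 * A + 9 * I) / 4
print(np.allclose(A_inv, np.linalg.inv(A)))    # True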