
Paper No.: 108
Paper Title: Linear Algebra

Hardik M Pandya
Department of Mathematics,
M. K. Bhavnagar University, Bhavnagar
Unit – 4

 Angle between two non-zero vectors
 Orthogonality
 Pythagoras Theorem

 Angle between two non-zero vectors
Definition
An angle θ between two non-zero vectors x and y is given by
cos θ = ⟨x, y⟩ / (‖x‖‖y‖), 0 ≤ θ ≤ π.
Example 1
Find the angle between x and y in each of the following cases:
(i) 𝑥 = (3,0), 𝑦 = (5,5) (ii) 𝑥 = (1,0), 𝑦 = (0,1)
Solution:
(i) 𝑥 = (3,0), 𝑦 = (5,5)
Here 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦
= (3,0) ∙ (5,5)
= (3)(5) + (0)(5)
= 15
‖x‖ = √(3² + 0²) = 3
‖y‖ = √(5² + 5²) = √50 = 5√2
Now, cos θ = ⟨x, y⟩ / (‖x‖‖y‖)
= 15 / ((3)(5√2))
= 15 / (15√2)
= 1/√2
∴ θ = cos⁻¹(1/√2)
∴ θ = π/4
(ii) 𝑥 = (1,0), 𝑦 = (0,1)
Here 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦
= (1)(0) + (0)(1)
=0
‖x‖ = √(1² + 0²) = 1
‖y‖ = √(0² + 1²) = 1
Now, cos θ = ⟨x, y⟩ / (‖x‖‖y‖)
= 0 / ((1)(1))
= 0
∴ θ = cos⁻¹(0)
∴ θ = π/2
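Both worked cases can be checked numerically. The following Python sketch (the helper name `angle` is ours, not from the notes) implements the defining formula cos θ = ⟨x, y⟩ / (‖x‖‖y‖):

```python
import math

def angle(x, y):
    """Angle between two non-zero vectors via cos θ = ⟨x, y⟩ / (‖x‖‖y‖)."""
    dot = sum(a * b for a, b in zip(x, y))     # ⟨x, y⟩, the dot product
    nx = math.sqrt(sum(a * a for a in x))      # ‖x‖
    ny = math.sqrt(sum(b * b for b in y))      # ‖y‖
    return math.acos(dot / (nx * ny))

print(angle((3, 0), (5, 5)))   # ≈ π/4, as in case (i)
print(angle((1, 0), (0, 1)))   # ≈ π/2, as in case (ii)
```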
 Orthogonality
Definition
Let 𝑉 be an inner product space and 𝑥, 𝑦 ∈ 𝑉. We say 𝑥
and 𝑦 are orthogonal to each other if 〈𝑥, 𝑦〉 = 0. In this
case we write 𝑥 ⊥ 𝑦 or 𝑦 ⊥ 𝑥.
Example 2
Show that the vectors 𝑥 = (−3,4) and 𝑦 = (6,4.5) in ℝ2
are orthogonal to each other.
Solution
Here 〈𝑥, 𝑦〉 = 𝑥 ⋅ 𝑦
= (−3,4) ⋅ (6,4.5)
= (−3)(6) + (4)(4.5)
= −18 + 18
=0
So, 𝑥 and 𝑦 are orthogonal to each other.
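A quick numerical check of Example 2 (the helper `inner` is our name for the standard dot product on ℝ²):

```python
def inner(x, y):
    # Standard inner product on ℝⁿ (the dot product).
    return sum(a * b for a, b in zip(x, y))

# x = (−3, 4) and y = (6, 4.5) from Example 2:
print(inner((-3, 4), (6, 4.5)))  # 0.0, so x ⊥ y
```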
Theorem 1
Let 𝑉 be an inner product space and let 𝑥 ∈ 𝑉 be fixed.
Then the set {𝑣 ∈ 𝑉 ∶ 〈𝑥, 𝑣 〉 = 0} is a subspace of 𝑉. This
set is denoted by 𝑥 ⊥ .
Proof
Let 𝑥 ∈ 𝑉.
We have to prove that the subset
𝑥 ⊥ = {𝑣 ∈ 𝑉 ∶ 〈𝑥, 𝑣 〉 = 0} of 𝑉 is a vector subspace.
Let 𝑣1 , 𝑣2 ∈ 𝑥 ⊥ and 𝛼, 𝛽 ∈ ℝ.
To show: 𝛼𝑣1 + 𝛽𝑣2 ∈ 𝑥 ⊥
𝑣1 ∈ 𝑥 ⊥ ⟹ 〈𝑥, 𝑣1 〉 = 0
𝑣2 ∈ 𝑥 ⊥ ⟹ 〈𝑥, 𝑣2 〉 = 0
Now, ⟨x, αv₁ + βv₂⟩ = ⟨x, αv₁⟩ + ⟨x, βv₂⟩
= α⟨x, v₁⟩ + β⟨x, v₂⟩
= α(0) + β(0)
= 0
It shows that 𝛼𝑣1 + 𝛽𝑣2 ∈ 𝑥 ⊥ .
Hence 𝑥 ⊥ is a subspace of 𝑉.
Theorem 2
Let 𝑣 and 𝑤 be two non-zero vectors in an inner product
space 𝑉. If 𝑣 ⊥ 𝑤 then the set {𝑣, 𝑤} is linearly
independent.
Proof:
Given that 𝑣 ⊥ 𝑤.
∴ 〈𝑣, 𝑤〉 = 0.
We have to prove that {v, w} is a linearly independent set.
Let 𝛼, 𝛽 ∈ ℝ be such that 𝛼𝑣 + 𝛽𝑤 = 𝜃𝑉 .
Then,
‖𝛼𝑣 + 𝛽𝑤‖ = 0
∴ ‖𝛼𝑣 + 𝛽𝑤‖2 = 0
∴ 〈𝛼𝑣 + 𝛽𝑤, 𝛼𝑣 + 𝛽𝑤〉 = 0
∴ ‖𝛼𝑣 ‖2 + 2〈𝛼𝑣, 𝛽𝑤〉 + ‖𝛽𝑤‖2 = 0
∴ 𝛼 2 ‖𝑣 ‖2 + 2𝛼𝛽 〈𝑣, 𝑤〉 + 𝛽2 ‖𝑤‖2 = 0
∴ 𝛼 2 ‖𝑣 ‖2 + 2𝛼𝛽 (0) + 𝛽2 ‖𝑤‖2 = 0
∴ 𝛼 2 ‖𝑣 ‖2 + 𝛽2 ‖𝑤‖2 = 0
Here both the terms α²‖v‖² and β²‖w‖² are non-negative, and their sum is zero; therefore each of them must be zero.
∴ 𝛼 2 ‖𝑣 ‖2 = 0
and
𝛽2 ‖𝑤‖2 = 0
But 𝑣 and 𝑤 are non-zero vectors,
so ‖𝑣‖ ≠ 0, ‖𝑤‖ ≠ 0.
∴ 𝛼 2 = 0 and 𝛽2 = 0.
Which implies 𝛼 = 0 and 𝛽 = 0.
It shows that the set {𝑣, 𝑤} is linearly independent.
Theorem 3
Let 𝑉 be an inner product space and 𝑥 ∈ 𝑉. If 𝑥 is
orthogonal to all the vectors in 𝑉 then 𝑥 = 𝜃𝑉 .
Proof:
Given that x ∈ V is orthogonal to each vector in V.
In particular, x is orthogonal to itself, which gives
⟨x, x⟩ = 0
⟹ ‖𝑥 ‖2 = 0
⟹ 𝑥 = 𝜃𝑉 .
Definition
Let 𝑉 be an inner product space and let 𝑆 be any
nonempty subset of 𝑉. A vector 𝑣 ∈ 𝑉 is said to be
orthogonal to 𝑆 if 〈𝑣, 𝑠〉 = 0, ∀𝑠 ∈ 𝑆.

Definition
Let 𝑉 be an inner product space and let 𝑆 be any
nonempty subset of V. Then the set {v ∈ V : ⟨v, s⟩ = 0, ∀s ∈ S} is called the orthogonal complement of S. It is denoted by S⊥.
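The membership condition for S⊥ is easy to test numerically. In this sketch the set S and the vectors v₁, v₂ are sample choices of ours, not from the notes; the closure of S⊥ under linear combinations (proved in Theorem 4 below) is visible in the last check:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def in_S_perp(v, S, tol=1e-12):
    """Check membership in S⊥ = {v : ⟨v, s⟩ = 0 for all s ∈ S}."""
    return all(abs(inner(v, s)) < tol for s in S)

S = [(2, -3, 0)]                  # sample subset of ℝ³
v1, v2 = (3, 2, 0), (0, 0, 1)     # both satisfy 2a − 3b = 0, so both lie in S⊥
combo = tuple(2 * a + 5 * b for a, b in zip(v1, v2))   # αv₁ + βv₂ with α = 2, β = 5
print(in_S_perp(v1, S), in_S_perp(v2, S), in_S_perp(combo, S))  # True True True
```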
Theorem 4
Let 𝑉 be an inner product space and let 𝑆 be any
nonempty subset of 𝑉. Then 𝑆 ⊥ is a subspace of 𝑉.
Proof
We have to prove that the subset 𝑆 ⊥ = {𝑣 ∈ 𝑉 ∶ 〈𝑣, 𝑠〉 =
0, ∀𝑠 ∈ 𝑆} of 𝑉 is a vector subspace.
Let 𝑣1 , 𝑣2 ∈ 𝑆 ⊥ and 𝛼, 𝛽 ∈ ℝ.
To show: 𝛼𝑣1 + 𝛽𝑣2 ∈ 𝑆 ⊥
𝑣1 ∈ 𝑆 ⊥ ⟹ 〈𝑣1 , 𝑠〉 = 0, ∀𝑠 ∈ 𝑆
𝑣2 ∈ 𝑆 ⊥ ⟹ 〈𝑣2 , 𝑠〉 = 0, ∀𝑠 ∈ 𝑆
Now for any 𝑠 ∈ 𝑆,
〈𝛼𝑣1 + 𝛽𝑣2 , 𝑠〉 = 〈𝛼𝑣1 , 𝑠〉 + 〈𝛽𝑣2 , 𝑠〉
= 𝛼 〈𝑣1 , 𝑠〉 + 𝛽〈𝑣2 , 𝑠〉
= 𝛼 (0) + 𝛽 (0)
=0
It shows that 𝛼𝑣1 + 𝛽𝑣2 ∈ 𝑆 ⊥ .
Hence 𝑆 ⊥ is a subspace of 𝑉.
Theorem 5 Pythagoras Theorem
Let 𝑉 be an inner product space and 𝑥, 𝑦 ∈ 𝑉. Then 𝑥 ⊥
𝑦 if and only if ‖𝑥 + 𝑦‖2 = ‖𝑥 ‖2 + ‖𝑦‖2 .
Proof
Suppose x ⊥ y; then ⟨x, y⟩ = 0.
Now ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖²
= ‖x‖² + 2(0) + ‖y‖²
= ‖x‖² + ‖y‖²
Conversely, suppose ‖x + y‖² = ‖x‖² + ‖y‖².
Then,
⟨x + y, x + y⟩ = ‖x‖² + ‖y‖²
∴ ‖x‖² + 2⟨x, y⟩ + ‖y‖² = ‖x‖² + ‖y‖²
∴ 2⟨x, y⟩ = 0
∴ ⟨x, y⟩ = 0
∴ x ⊥ y
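The identity ‖x + y‖² = ‖x‖² + ‖y‖² can be checked on a concrete orthogonal pair; the pair below is a sample choice of ours:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm_sq(x):
    # ‖x‖² = ⟨x, x⟩
    return inner(x, x)

x, y = (1, 0), (0, 1)                 # an orthogonal pair
s = tuple(a + b for a, b in zip(x, y))  # x + y
print(norm_sq(s), norm_sq(x) + norm_sq(y))   # 2 2
```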
Theorem 6
Let 𝑉 be an inner product space and 𝑥, 𝑦 ∈ 𝑉. Then
𝑥 − 𝑦 ⊥ 𝑥 + 𝑦 if and only if ‖𝑥 ‖ = ‖𝑦‖.
Proof
We have
x − y ⊥ x + y ⟺ ⟨x − y, x + y⟩ = 0
⟺ 〈𝑥, 𝑥 〉 + 〈𝑥, 𝑦〉 − 〈𝑦, 𝑥 〉 − 〈𝑦, 𝑦〉 = 0
⟺ 〈𝑥, 𝑥 〉 − 〈𝑦, 𝑦〉 = 0
⟺ ‖𝑥 ‖2 − ‖𝑦‖2 = 0
⟺ ‖𝑥 ‖2 = ‖𝑦‖2
⟺ ‖𝑥 ‖ = ‖𝑦‖
Definition
Let 𝑉 be an inner product space. A subset 𝑆 of 𝑉 is said
to be an orthogonal set if
(i) 𝜃𝑉 ∉ 𝑆
(ii) 〈𝑥, 𝑦〉 = 0, for every 𝑥, 𝑦 ∈ 𝑆 with 𝑥 ≠ 𝑦.
Example 3
Show that a subset 𝑆 = {(2, −3,0), (6,4,3)} of ℝ3 is an
orthogonal set.
Solution:
Say 𝑠1 = (2, −3,0) and 𝑠2 = (6,4,3).
Here 𝑠1 ≠ 𝜃ℝ3 and 𝑠2 ≠ 𝜃ℝ3 therefore, 𝜃ℝ3 ∉ 𝑆.
Now 〈𝑠1 , 𝑠2 〉 = (2, −3,0) ⋅ (6,4,3)
= (2)(6) + (−3)(4) + (0)(3)
= 12 − 12 = 0
So, 𝑆 is orthogonal set in ℝ3 .
Definition
A basis of an inner product space is said to be an
orthogonal basis if it is an orthogonal set.

Theorem 7
Any orthogonal set in an inner product space is linearly
independent.
Proof
Let 𝑉 be an inner product space.
Let 𝑆 = {𝑣1 , 𝑣2 , … … … , 𝑣𝑘 } be an orthogonal set in 𝑉.
Then,
(1) 𝑣𝑖 ≠ 𝜃𝑉 , ∀ 1 ≤ 𝑖 ≤ 𝑘 and
(2) 〈𝑣𝑖 , 𝑣𝑗 〉 = 0, ∀ 1 ≤ 𝑖, 𝑗 ≤ 𝑘 with 𝑖 ≠ 𝑗.
We have to show that 𝑆 is linearly independent set.
Assume that ∑𝑘𝑖=1 𝛼𝑖 𝑣𝑖 = 𝜃𝑉 where 𝛼𝑖 ∈ ℝ, 1 ≤ 𝑖 ≤ 𝑘.
Then for any 𝑣𝑗 ∈ 𝑆 we have
⟨∑ᵢ₌₁ᵏ αᵢvᵢ, vⱼ⟩ = ⟨θ_V, vⱼ⟩ = 0
But
⟨∑ᵢ₌₁ᵏ αᵢvᵢ, vⱼ⟩ = ⟨α₁v₁ + α₂v₂ + ⋯ + αⱼvⱼ + ⋯ + αₖvₖ, vⱼ⟩
∴ 0 = α₁⟨v₁, vⱼ⟩ + α₂⟨v₂, vⱼ⟩ + ⋯ + αⱼ⟨vⱼ, vⱼ⟩ + ⋯ + αₖ⟨vₖ, vⱼ⟩
= αⱼ⟨vⱼ, vⱼ⟩   by (2)
Now by (1), 𝑣𝑗 ≠ 𝜃𝑉
∴ 〈𝑣𝑗 , 𝑣𝑗 〉 ≠ 0
∴ 𝛼𝑗 = 0
Since j is arbitrary, we have αᵢ = 0, ∀ 1 ≤ i ≤ k. It means that the set S is linearly independent.
Example 4
Let 𝑉 be an inner product space.
Let 𝐸 = {𝑣1 , 𝑣2 , … … … , 𝑣𝑛 } be an orthogonal basis of 𝑉.
If v = ∑ᵢ₌₁ⁿ αᵢvᵢ then αᵢ = ⟨v, vᵢ⟩ / ⟨vᵢ, vᵢ⟩, 1 ≤ i ≤ n.
Solution
Given that 𝐸 is an orthogonal basis of 𝑉. So,
(1) 𝑣𝑖 ≠ 𝜃𝑉 , ∀ 1 ≤ 𝑖 ≤ 𝑛 and
(2) 〈𝑣𝑖 , 𝑣𝑗 〉 = 0, ∀ 1 ≤ 𝑖, 𝑗 ≤ 𝑛 with 𝑖 ≠ 𝑗.
Now, for 1 ≤ 𝑖 ≤ 𝑛
⟨v, vᵢ⟩ = ⟨∑ⱼ₌₁ⁿ αⱼvⱼ, vᵢ⟩
= α₁⟨v₁, vᵢ⟩ + α₂⟨v₂, vᵢ⟩ + ⋯ + αᵢ⟨vᵢ, vᵢ⟩ + ⋯ + αₙ⟨vₙ, vᵢ⟩
= αᵢ⟨vᵢ, vᵢ⟩   by (2)
And by (1), vᵢ ≠ θ_V
∴ ⟨vᵢ, vᵢ⟩ ≠ 0
∴ αᵢ = ⟨v, vᵢ⟩ / ⟨vᵢ, vᵢ⟩, 1 ≤ i ≤ n
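The coefficient formula αᵢ = ⟨v, vᵢ⟩ / ⟨vᵢ, vᵢ⟩ can be verified numerically; the orthogonal (not orthonormal) basis of ℝ² below is a sample choice of ours:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

# An orthogonal basis of ℝ², chosen for illustration:
basis = [(1, 1), (1, -1)]
v = (3, 5)

# αᵢ = ⟨v, vᵢ⟩ / ⟨vᵢ, vᵢ⟩ for each basis vector:
coeffs = [inner(v, b) / inner(b, b) for b in basis]
# Reconstruct v = Σ αᵢ vᵢ from the coefficients:
recon = tuple(sum(c * b[k] for c, b in zip(coeffs, basis)) for k in range(2))
print(coeffs, recon)   # [4.0, -1.0] (3.0, 5.0)
```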

 Orthonormality
 Gram–Schmidt Orthonormalization Process
 Orthonormality
Definition
The Kronecker delta δᵢⱼ is defined as
δᵢⱼ = 0 if i ≠ j, and δᵢⱼ = 1 if i = j.

Definition
A subset E = {v₁, v₂, ………, vₙ} of an inner product space V is said to be an orthonormal set if
⟨vᵢ, vⱼ⟩ = δᵢⱼ, ∀vᵢ, vⱼ ∈ E.
Definition
A basis E of an inner product space V is said to be an orthonormal basis if E is an orthonormal set.

Example 5
Show that the basis E = {(e₁ − e₂)/√2, (e₁ + e₂)/√2}, where e₁ = (1,0), e₂ = (0,1), is an orthonormal basis of ℝ².
Solution
Say v₁ = (e₁ − e₂)/√2 and v₂ = (e₁ + e₂)/√2.
We have to show that the set {v₁, v₂} is an orthonormal set.
Given that e₁ = (1,0), e₂ = (0,1),
∴ e₁ − e₂ = (1, −1) and e₁ + e₂ = (1,1)
∴ (e₁ − e₂)/√2 = (1/√2, −1/√2) and (e₁ + e₂)/√2 = (1/√2, 1/√2)
∴ v₁ = (1/√2, −1/√2) and v₂ = (1/√2, 1/√2).
Now, ⟨v₁, v₁⟩ = (1/√2, −1/√2) ⋅ (1/√2, −1/√2)
= (1/√2)² + (−1/√2)²
= 1/2 + 1/2
= 1
⟨v₂, v₂⟩ = (1/√2, 1/√2) ⋅ (1/√2, 1/√2)
= (1/√2)² + (1/√2)²
= 1/2 + 1/2
= 1
And ⟨v₁, v₂⟩ = (1/√2, −1/√2) ⋅ (1/√2, 1/√2)
= (1/√2)(1/√2) + (−1/√2)(1/√2)
= 1/2 − 1/2
= 0
So, we get ⟨vᵢ, vⱼ⟩ = 0 if i ≠ j and ⟨vᵢ, vⱼ⟩ = 1 if i = j, for 1 ≤ i, j ≤ 2.
That is, ⟨vᵢ, vⱼ⟩ = δᵢⱼ, 1 ≤ i, j ≤ 2.
So the set {𝑣1 , 𝑣2 } is an orthonormal set.
Hence the given basis 𝐸 is an orthonormal basis of ℝ2 .
Exercise
Show that the standard basis of ℝⁿ is an orthonormal basis.
Theorem 8
Any orthonormal set in an inner product space is
orthogonal.
Proof
Let 𝑉 be an inner product space.
Suppose 𝐸 = {𝑒1 , 𝑒2 , … … … , 𝑒𝑛 } is an orthonormal subset
of 𝑉.
Then, 〈𝑒𝑖 , 𝑒𝑗 〉 = 𝛿𝑖𝑗 , 1 ≤ 𝑖, 𝑗 ≤ 𝑛.
That is, ∀ 𝑒𝑖 , 𝑒𝑗 ∈ 𝐸,
we have
〈𝑒𝑖 , 𝑒𝑗 〉 = 0, 𝑖𝑓 𝑖 ≠ 𝑗 (1)
and 〈𝑒𝑖 , 𝑒𝑗 〉 = 1, 𝑖𝑓 𝑖 = 𝑗 (2)
Also 𝜃𝑉 ∉ 𝐸 (3)
because if θ_V ∈ E then from Equation (2) we get ⟨θ_V, θ_V⟩ = 1, which is a contradiction since ⟨θ_V, θ_V⟩ = 0.
From Equations (1) and (3),
we can say that 𝐸 is orthogonal set in 𝑉.
Example 6
Let 𝑉 be an inner product space. Let 𝐸 =
{𝑣1 , 𝑣2 , … … … , 𝑣𝑛 } be an orthonormal basis of 𝑉. If 𝑣 =
∑𝑛𝑖=1 𝛼𝑖 𝑣𝑖 then 𝛼𝑖 = 〈𝑣, 𝑣𝑖 〉 , 1 ≤ 𝑖 ≤ 𝑛.
Solution
Given that 𝐸 = {𝑣1 , 𝑣2 , … , 𝑣𝑛 } is orthonormal basis of 𝑉.
By Theorem 8, it is an orthogonal basis of 𝑉. Also given
that 𝑣 = ∑𝑛𝑖=1 𝛼𝑖 𝑣𝑖 .
So by Example 4, αᵢ = ⟨v, vᵢ⟩ / ⟨vᵢ, vᵢ⟩, 1 ≤ i ≤ n.
But E is an orthonormal set, so ⟨vᵢ, vᵢ⟩ = 1, 1 ≤ i ≤ n.
∴ αᵢ = ⟨v, vᵢ⟩, 1 ≤ i ≤ n
Theorem 9
Let 𝑉 be an inner product space.
If E = {v₁, v₂, ………, vₙ} is an orthogonal basis of V then the set E′ = {e₁, e₂, ………, eₙ}, where eᵢ = vᵢ/‖vᵢ‖, 1 ≤ i ≤ n, is an orthonormal basis of V.
Proof
Given that the basis 𝐸 is an orthogonal basis of 𝑉. So,
(i) 𝑣𝑖 ≠ 𝜃𝑉 , 1 ≤ 𝑖 ≤ 𝑛 and
(ii) 〈𝑣𝑖 , 𝑣𝑗 〉 = 0, 1 ≤ 𝑖, 𝑗 ≤ 𝑛 with 𝑖 ≠ 𝑗
We have to show that the set 𝐸 ′ is an orthonormal basis
of 𝑉.
For any 1 ≤ i, j ≤ n with i ≠ j,
⟨eᵢ, eⱼ⟩ = ⟨vᵢ/‖vᵢ‖, vⱼ/‖vⱼ‖⟩
= (1/(‖vᵢ‖‖vⱼ‖)) ⟨vᵢ, vⱼ⟩
= 0   by (ii)
And for any 1 ≤ i, j ≤ n with i = j,
⟨eᵢ, eᵢ⟩ = ⟨vᵢ/‖vᵢ‖, vᵢ/‖vᵢ‖⟩
= (1/‖vᵢ‖²) ⟨vᵢ, vᵢ⟩
= ‖vᵢ‖²/‖vᵢ‖²
= 1
Hence E′ is an orthonormal basis of V.
Theorem 10 Gram – Schmidt Orthonormalization
Process
Every inner product space has an orthonormal basis.
Proof
Let 𝑉 be an inner product space.
Let 𝐸 = {𝑣1 , 𝑣2 , … … … , 𝑣𝑛 } be a basis of 𝑉.
In order to prove the theorem, we have to obtain an
orthonormal set of 𝑛 vectors. In view of Theorem 9, it is
enough to produce the orthogonal set of 𝑛 vectors.
Since 𝐸 is a basis, it is linearly independent set and
hence it does not contain 𝜃𝑉 .
Set u₁ = v₁; then u₁ ≠ θ_V.
Now, take u₂ = v₂ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) u₁; then
⟨u₂, u₁⟩ = ⟨v₂ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) u₁, u₁⟩
= ⟨v₂, u₁⟩ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) ⟨u₁, u₁⟩
= ⟨v₂, u₁⟩ − ⟨v₂, u₁⟩
= 0   (1)
Also, u₂ ≠ θ_V, because if u₂ = θ_V then
v₂ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) u₁ = θ_V
⟹ v₂ − αu₁ = θ_V, where α = ⟨v₂, u₁⟩/⟨u₁, u₁⟩
⟹ v₂ − αv₁ = θ_V
⟹ v₂ = αv₁
⟹ the set {v₁, v₂} is linearly dependent
⟹ the set E is linearly dependent,
which is a contradiction because E is a linearly independent set.
∴ u₂ ≠ θ_V
Thus, {𝑢1 , 𝑢2 } is an orthogonal set of two vectors in 𝑉.
Now, take u₃ = v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂; then
⟨u₃, u₁⟩ = ⟨v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂, u₁⟩
= ⟨v₃, u₁⟩ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) ⟨u₁, u₁⟩ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) ⟨u₂, u₁⟩
= ⟨v₃, u₁⟩ − ⟨v₃, u₁⟩ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩)(0)   by using Equation (1)
= 0
and
⟨u₃, u₂⟩ = ⟨v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂, u₂⟩
= ⟨v₃, u₂⟩ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) ⟨u₁, u₂⟩ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) ⟨u₂, u₂⟩
= ⟨v₃, u₂⟩ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩)(0) − ⟨v₃, u₂⟩   by using Equation (1)
= 0
Also, u₃ ≠ θ_V, because if u₃ = θ_V then
v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂ = θ_V
⟹ v₃ − α₁u₁ − α₂u₂ = θ_V, where α₁ = ⟨v₃, u₁⟩/⟨u₁, u₁⟩ and α₂ = ⟨v₃, u₂⟩/⟨u₂, u₂⟩
⟹ v₃ − α₁v₁ − α₂(v₂ − α₃v₁) = θ_V, where α₃ = ⟨v₂, u₁⟩/⟨u₁, u₁⟩
⟹ v₃ − α₁v₁ − α₂v₂ + α₂α₃v₁ = θ_V
⟹ v₃ = (α₁ − α₂α₃)v₁ + α₂v₂
⟹ the set {v₁, v₂, v₃} is linearly dependent
⟹ the set E is linearly dependent,
which is a contradiction because E is a linearly independent set.
∴ u₃ ≠ θ_V
Thus, {𝑢1 , 𝑢2 , 𝑢3 } is an orthogonal set of three vectors in
𝑉.
In general, for 1 ≤ k ≤ n, if we take
uₖ = vₖ − ∑ᵢ₌₁ᵏ⁻¹ (⟨vₖ, uᵢ⟩/⟨uᵢ, uᵢ⟩) uᵢ
then the set {u₁, u₂, ………, uₙ} becomes an orthogonal set of n vectors.
Therefore the set {𝑢1 , 𝑢2 , … … … , 𝑢𝑛 } becomes an
orthogonal basis of 𝑉.
And the set {u₁/‖u₁‖, u₂/‖u₂‖, ………, uₙ/‖uₙ‖} becomes an orthonormal basis of V.
This completes the proof.

Note:
The above process of obtaining an orthonormal set (or orthonormal basis) from a given set (or basis) is called the Gram–Schmidt Orthonormalization Process.
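The process can be sketched in code. The function `gram_schmidt` below is our illustrative implementation of the formula uₖ = vₖ − ∑ᵢ₌₁ᵏ⁻¹ (⟨vₖ, uᵢ⟩/⟨uᵢ, uᵢ⟩) uᵢ followed by normalization, applied here to the set from Example 7 below:

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Orthogonalize via u_k = v_k − Σ (⟨v_k, u_i⟩/⟨u_i, u_i⟩) u_i, then normalize."""
    us = []
    for v in vectors:
        u = list(v)
        for w in us:
            c = inner(v, w) / inner(w, w)          # projection coefficient
            u = [a - c * b for a, b in zip(u, w)]  # subtract the projection
        us.append(u)
    # Normalize each orthogonal vector to get an orthonormal set:
    return [[a / math.sqrt(inner(u, u)) for a in u] for u in us]

E = gram_schmidt([(1, 2, 0), (2, -3, 1), (0, 2, 2)])
ok = all(abs(inner(E[i], E[j]) - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(3) for j in range(3))
print(ok)   # True: the result satisfies ⟨eᵢ, eⱼ⟩ = δᵢⱼ
```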
Example 7
Use the Gram–Schmidt process to obtain an orthonormal set from the set {(1,2,0), (2, −3,1), (0,2,2)} in ℝ³.
Solution
Let 𝑣1 = (1,2,0), 𝑣2 = (2, −3,1), 𝑣3 = (0,2,2).
As per Gram – Schmidt process
𝑢1 = 𝑣1
∴ 𝑢1 = (1,2,0)
u₂ = v₂ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) u₁
But 〈𝑣2 , 𝑢1 〉 = (2, −3,1) ⋅ (1,2,0)
= (2)(1) + (−3)(2) + (1)(0)
=2−6
= −4
⟨u₁, u₁⟩ = (1, 2, 0) ⋅ (1, 2, 0)
= 1² + 2² + 0²
= 1 + 4
= 5
∴ u₂ = (2, −3, 1) − (−4/5)(1, 2, 0)
= (2, −3, 1) + (4/5, 8/5, 0)
= (2 + 4/5, −3 + 8/5, 1 + 0)
= (14/5, −7/5, 1)
u₃ = v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂
But ⟨v₃, u₁⟩ = (0, 2, 2) ⋅ (1, 2, 0)
= (0)(1) + (2)(2) + (2)(0)
= 4
⟨v₃, u₂⟩ = (0, 2, 2) ⋅ (14/5, −7/5, 1)
= (0)(14/5) + (2)(−7/5) + (2)(1)
= −14/5 + 2
= −4/5
⟨u₂, u₂⟩ = (14/5, −7/5, 1) ⋅ (14/5, −7/5, 1)
= (14/5)² + (−7/5)² + 1²
= 196/25 + 49/25 + 1
= (196 + 49 + 25)/25
= 270/25
= 54/5
∴ u₃ = (0, 2, 2) − (4/5)(1, 2, 0) − ((−4/5)/(54/5))(14/5, −7/5, 1)
= (0, 2, 2) − (4/5, 8/5, 0) + (28/135, −14/135, 2/27)
= (0 − 4/5 + 28/135, 2 − 8/5 − 14/135, 2 − 0 + 2/27)
= (−80/135, 40/135, 56/27)
= (−16/27, 8/27, 56/27)
And ⟨u₃, u₃⟩ = (−16/27, 8/27, 56/27) ⋅ (−16/27, 8/27, 56/27)
= (−16/27)² + (8/27)² + (56/27)²
= 256/729 + 64/729 + 3136/729
= 3456/729
= 128/27
Now ‖u₁‖ = √⟨u₁, u₁⟩ = √5
‖u₂‖ = √⟨u₂, u₂⟩ = √(54/5) = 3√6/√5
‖u₃‖ = √⟨u₃, u₃⟩ = √(128/27) = 8√2/(3√3)
∴ u₁/‖u₁‖ = (1/√5)(1, 2, 0)
= (1/√5, 2/√5, 0)
u₂/‖u₂‖ = (1/√(54/5))(14/5, −7/5, 1)
= (√5/(3√6))(14/5, −7/5, 1)
= (1/(3√6))(14/√5, −7/√5, √5)
= (14/(3√30), −7/(3√30), √5/(3√6))
u₃/‖u₃‖ = (1/(8√2/(3√3)))(−16/27, 8/27, 56/27)
= (3√3/(8√2))(−16/27, 8/27, 56/27)
= (√3/√2)(−2/9, 1/9, 7/9)
= (−2/(3√6), 1/(3√6), 7/(3√6))
As per the Gram–Schmidt process the set
{u₁, u₂, u₃} = {(1, 2, 0), (14/5, −7/5, 1), (−16/27, 8/27, 56/27)}
is an orthogonal set and the set
{u₁/‖u₁‖, u₂/‖u₂‖, u₃/‖u₃‖}
= {(1/√5, 2/√5, 0), (14/(3√30), −7/(3√30), √5/(3√6)), (−2/(3√6), 1/(3√6), 7/(3√6))}
is an orthonormal set in ℝ³ obtained from the given set {(1,2,0), (2, −3,1), (0,2,2)}.
Example 8
Use the Gram–Schmidt process to obtain an orthonormal set from the set {(1, −1,1, −1), (5,1,1,1), (2,3,4, −1)} in ℝ⁴.
Solution
Let
𝑣1 = (1, −1,1, −1), 𝑣2 = (5,1,1,1), 𝑣3 = (2,3,4, −1).
As per Gram – Schmidt process
𝑢1 = 𝑣1
∴ 𝑢1 = (1, −1,1, −1)
u₂ = v₂ − (⟨v₂, u₁⟩/⟨u₁, u₁⟩) u₁
But ⟨v₂, u₁⟩ = (5, 1, 1, 1) ⋅ (1, −1, 1, −1)
= (5)(1) + (1)(−1) + (1)(1) + (1)(−1)
= 4
⟨u₁, u₁⟩ = (1, −1, 1, −1) ⋅ (1, −1, 1, −1)
= 1² + (−1)² + 1² + (−1)²
= 4
∴ u₂ = (5, 1, 1, 1) − (4/4)(1, −1, 1, −1)
= (5,1,1,1) + (−1,1, −1,1)
= (4,2,0,2)
u₃ = v₃ − (⟨v₃, u₁⟩/⟨u₁, u₁⟩) u₁ − (⟨v₃, u₂⟩/⟨u₂, u₂⟩) u₂
But ⟨v₃, u₁⟩ = (2, 3, 4, −1) ⋅ (1, −1, 1, −1)
= (2)(1) + (3)(−1) + (4)(1) + (−1)(−1)
= 4
〈𝑣3 , 𝑢2 〉 = (2,3,4, −1) ⋅ (4,2,0,2)
= (2)(4) + (3)(2) + (4)(0) + (−1)(2)
= 12
⟨u₂, u₂⟩ = (4, 2, 0, 2) ⋅ (4, 2, 0, 2)
= 4² + 2² + 0² + 2²
= 24
∴ u₃ = (2, 3, 4, −1) − (4/4)(1, −1, 1, −1) − (12/24)(4, 2, 0, 2)
= (2,3,4, −1) + (−1,1, −1,1) + (−2, −1,0, −1)
= (2 − 1 − 2,3 + 1 − 1,4 − 1 + 0, −1 + 1 − 1)
= (−1,3,3, −1)
And ⟨u₃, u₃⟩ = (−1, 3, 3, −1) ⋅ (−1, 3, 3, −1)
= (−1)² + 3² + 3² + (−1)²
= 20
Now ‖𝑢1 ‖ = √〈𝑢1 , 𝑢1 〉
= √4
=2
‖𝑢2 ‖ = √〈𝑢2 , 𝑢2 〉
= √24
= 2√6
‖𝑢3 ‖ = √〈𝑢3 , 𝑢3 〉
= √20
= 2√5
∴ u₁/‖u₁‖ = (1/2)(1, −1, 1, −1)
= (1/2, −1/2, 1/2, −1/2)
u₂/‖u₂‖ = (1/(2√6))(4, 2, 0, 2)
= (2/√6, 1/√6, 0, 1/√6)
u₃/‖u₃‖ = (1/(2√5))(−1, 3, 3, −1)
= (−1/(2√5), 3/(2√5), 3/(2√5), −1/(2√5))
As per the Gram–Schmidt process the set
{u₁, u₂, u₃} = {(1, −1, 1, −1), (4, 2, 0, 2), (−1, 3, 3, −1)}
is an orthogonal set and the set
{u₁/‖u₁‖, u₂/‖u₂‖, u₃/‖u₃‖}
= {(1/2, −1/2, 1/2, −1/2), (2/√6, 1/√6, 0, 1/√6), (−1/(2√5), 3/(2√5), 3/(2√5), −1/(2√5))}
is an orthonormal set in ℝ⁴ obtained from the given set {(1, −1,1, −1), (5,1,1,1), (2,3,4, −1)}.
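The orthonormality of the set obtained in Example 8 can be checked numerically; all pairwise inner products should follow the Kronecker delta pattern:

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

r5 = math.sqrt(5)
r6 = math.sqrt(6)
# The orthonormal set obtained in Example 8:
E = [(1/2, -1/2, 1/2, -1/2),
     (2/r6, 1/r6, 0, 1/r6),
     (-1/(2*r5), 3/(2*r5), 3/(2*r5), -1/(2*r5))]
ok = all(abs(inner(E[i], E[j]) - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(3) for j in range(3))
print(ok)   # True
```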
Online Education by
Department of Mathematics
M K Bhavnagar University
For M.Sc. (Mathematics) Sem-2
Paper No.: 108
Linear Algebra
Instructor: Dr. P. I. Andharia
Syllabus: Unit-4
• Gram-Schmidt Orthonormalization
• Linear Functional and Adjoints
• Hermitian, Self-Adjoint, Unitary and Normal Operators
• Spectral Theorem for Normal Operators
Theorem: Riesz Representation Theorem
Let V be a finite dimensional inner product space and let f be a linear functional on V. Then there exists a unique y ∈ V such that f(x) = ⟨x, y⟩, for all x ∈ V.
Proof
Given that f : V → ℝ is a linear functional.
Suppose {e₁, e₂, ………, eₙ} is an orthonormal basis of V. Then
⟨eᵢ, eⱼ⟩ = δᵢⱼ, 1 ≤ i, j ≤ n.
Let x ∈ V; then there exist αᵢ ∈ ℝ, 1 ≤ i ≤ n, such that
x = ∑ᵢ₌₁ⁿ αᵢeᵢ
Since f is a linear map,
f(x) = f(∑ᵢ₌₁ⁿ αᵢeᵢ)
= ∑ᵢ₌₁ⁿ αᵢ f(eᵢ)
Take y = ∑ⱼ₌₁ⁿ f(eⱼ) eⱼ; then y ∈ V.
And ⟨x, y⟩ = ⟨∑ᵢ₌₁ⁿ αᵢeᵢ, ∑ⱼ₌₁ⁿ f(eⱼ) eⱼ⟩
= ∑ᵢ,ⱼ₌₁ⁿ αᵢ f(eⱼ) ⟨eᵢ, eⱼ⟩
= ∑ᵢ,ⱼ₌₁ⁿ αᵢ f(eⱼ) δᵢⱼ
= ∑ᵢ₌₁ⁿ αᵢ f(eᵢ)
= f(x)
Here x is an arbitrary vector of V, so
f(x) = ⟨x, y⟩, ∀x ∈ V.
To prove the uniqueness of such y ∈ V, suppose there exists z ∈ V such that
f(x) = ⟨x, z⟩, ∀x ∈ V.
Then,
⟨x, y⟩ = ⟨x, z⟩, ∀x ∈ V
⟹ ⟨x, y⟩ − ⟨x, z⟩ = 0, ∀x ∈ V
⟹ ⟨x, y − z⟩ = 0, ∀x ∈ V
For the particular case x = y − z, we have
⟨y − z, y − z⟩ = 0
⟹ y − z = θ_V
⟹ y = z
Hence such y is unique.
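The construction y = ∑ⱼ f(eⱼ) eⱼ from the proof can be sketched numerically. The functional f below is a sample choice of ours, and the standard basis of ℝ³ serves as the orthonormal basis:

```python
def f(x):
    # A sample linear functional on ℝ³ (chosen for illustration): f(x) = 2x₁ − x₂ + 3x₃
    return 2 * x[0] - x[1] + 3 * x[2]

# Standard (orthonormal) basis of ℝ³:
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# The representing vector from the proof: y = Σ f(eⱼ) eⱼ
y = tuple(f(e) for e in basis)
x = (4, 5, 6)
print(y, f(x), sum(a * b for a, b in zip(x, y)))   # (2, -1, 3) 21 21
```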
Definition:
Let V be a finite dimensional inner product space. A linear map T : V → V is called a linear operator on V.
Theorem:
For any linear operator T on a finite dimensional inner product space V, there exists a unique linear operator T* on V such that
⟨Tx, y⟩ = ⟨x, T*y⟩, ∀x, y ∈ V.
Proof:
Given that T is a linear operator on V, so T : V → V is a linear map.
Let y ∈ V be fixed.
Define f : V → ℝ as f(x) = ⟨Tx, y⟩, ∀x ∈ V. Then
f(x₁ + x₂) = ⟨T(x₁ + x₂), y⟩, ∀x₁, x₂ ∈ V
= ⟨Tx₁ + Tx₂, y⟩   ∵ T is linear
= ⟨Tx₁, y⟩ + ⟨Tx₂, y⟩
= f(x₁) + f(x₂)
and f(αx) = ⟨T(αx), y⟩, ∀x ∈ V, α ∈ ℝ
= ⟨αTx, y⟩   ∵ T is linear
= α⟨Tx, y⟩
= αf(x)
Thus, f : V → ℝ is linear and so f is a linear functional on V.
Hence, by the Riesz Representation Theorem, ∃ a unique y′ ∈ V such that
f(x) = ⟨x, y′⟩, ∀x ∈ V.
So, corresponding to each y ∈ V, we get a unique y′ ∈ V.
Define T* : V → V as T*(y) = y′, ∀y ∈ V. Then,
⟨Tx, y⟩ = f(x) = ⟨x, y′⟩ = ⟨x, T*y⟩, ∀x, y ∈ V.
Claim: T* is linear.
Let x, y₁, y₂ ∈ V and α ∈ ℝ. Then,
⟨x, T*(αy₁ + y₂)⟩ = ⟨Tx, αy₁ + y₂⟩
= ⟨Tx, αy₁⟩ + ⟨Tx, y₂⟩
= α⟨Tx, y₁⟩ + ⟨Tx, y₂⟩
= α⟨x, T*y₁⟩ + ⟨x, T*y₂⟩
= ⟨x, αT*y₁⟩ + ⟨x, T*y₂⟩
= ⟨x, αT*y₁ + T*y₂⟩
∴ T*(αy₁ + y₂) = αT*y₁ + T*y₂
Hence, T* is linear. Uniqueness of T* is clear from the uniqueness of y′.
Definition:
Let T be a linear operator on an inner product space V. We say that T has an adjoint on V if ∃ a linear operator T* on V such that
⟨Tx, y⟩ = ⟨x, T*y⟩, ∀x, y ∈ V.
Notes:
(1) Every linear operator on a finite dimensional inner product space V has an adjoint on V, but for an infinite dimensional inner product space it is not always true.
(2) T* depends not only on T but also on the inner product.
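On ℝⁿ with the dot product, the adjoint of the operator x ↦ Ax is given concretely by the transpose matrix (the conjugate transpose over ℂ). A quick sketch; the matrix A and the vectors are sample choices of ours:

```python
def matvec(A, x):
    return tuple(sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[1, 2], [3, 4]]                               # a sample operator on ℝ²
At = [[A[j][i] for j in range(2)] for i in range(2)]  # its transpose, the adjoint here
x, y = (1, -2), (5, 7)
# The defining property ⟨Ax, y⟩ = ⟨x, Aᵀy⟩:
print(inner(matvec(A, x), y), inner(x, matvec(At, y)))  # -50 -50
```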
Theorem:
Let V be a finite dimensional inner product space. If T and U are linear operators on V and α is a scalar, then
(1) (T + U)* = T* + U*
(2) (αT)* = ᾱT*
(3) (TU)* = U*T*
(4) (T*)* = T
Proof:
Let x, y ∈ V.
(1) ⟨x, (T + U)*y⟩ = ⟨(T + U)x, y⟩
= ⟨Tx + Ux, y⟩
= ⟨Tx, y⟩ + ⟨Ux, y⟩
= ⟨x, T*y⟩ + ⟨x, U*y⟩
= ⟨x, T*y + U*y⟩
= ⟨x, (T* + U*)y⟩
∴ (T + U)* = T* + U*
(2) ⟨x, (αT)*y⟩ = ⟨(αT)x, y⟩
= ⟨αTx, y⟩
= α⟨Tx, y⟩
= α⟨x, T*y⟩
= ⟨x, ᾱT*y⟩
∴ (αT)* = ᾱT* (over ℝ this is simply αT*)
(3) ⟨x, (TU)*y⟩ = ⟨(TU)x, y⟩
= ⟨T(Ux), y⟩
= ⟨Ux, T*y⟩
= ⟨x, U*(T*y)⟩
= ⟨x, (U*T*)y⟩
∴ (TU)* = U*T*
(4) ⟨x, (T*)*y⟩ = ⟨T*x, y⟩, and ⟨T*x, y⟩ is the conjugate of ⟨y, T*x⟩ = ⟨Ty, x⟩, whose conjugate is ⟨x, Ty⟩.
∴ ⟨x, (T*)*y⟩ = ⟨x, Ty⟩, i.e. (T*)* = T
Definitions:
(1) A linear operator T is called a self-adjoint or Hermitian operator if T* = T.
(2) Let V and W be two vector spaces. A linear transformation T : V → W is said to be an isomorphism if T is a one-one and onto function.
(3) A unitary operator on an inner product space V is an isomorphism of V onto itself, i.e. a linear operator T : V → V is called unitary if it is an isomorphism.
(4) Let V be a finite dimensional inner product space. Let T be a linear operator on V. Then T is said to be a normal operator if TT* = T*T.
(5) Let V be a vector space and T a linear operator on V. Let λ be a characteristic value (Eigen value) of T. Then the collection of all vectors x such that Tx = λx is called the characteristic space (Eigen space) associated with λ.
Result without proof:
Let V be an IPS. A linear operator T : V → V is one-one and onto if and only if ⟨Tx, Ty⟩ = ⟨x, y⟩, ∀x, y ∈ V.
Theorem:
Let T be a linear operator on an inner product space V. Then T is unitary if and only if T* exists and TT* = T*T = I.
Proof:
Suppose T is a unitary operator on the IPS V; then T : V → V is an isomorphism, i.e. T is one-one and onto.
T is one-one and onto ⟹ T⁻¹ exists and ⟨Tx, Ty⟩ = ⟨x, y⟩, ∀x, y ∈ V.
Now, ⟨Tx, y⟩ = ⟨Tx, Iy⟩
= ⟨Tx, (TT⁻¹)y⟩
= ⟨Tx, T(T⁻¹y)⟩
= ⟨x, T⁻¹y⟩
∴ T* = T⁻¹
Now, T⁻¹ exists gives us TT⁻¹ = T⁻¹T = I.
Substituting T⁻¹ = T*, we get
TT* = T*T = I.
Conversely, suppose T* exists and TT* = T*T = I. Then,
clearly, T* = T⁻¹. Let x, y ∈ V.
Then, ⟨Tx, Ty⟩ = ⟨x, T*(Ty)⟩
= ⟨x, (T*T)y⟩
= ⟨x, y⟩
So, T is one-one and onto, i.e. T is an isomorphism.
Hence, T is a unitary operator.
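A rotation of ℝ² illustrates the theorem: its adjoint (here the transpose) is its inverse, so TT* = T*T = I. A sketch, with the angle π/3 chosen arbitrarily:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

t = math.pi / 3
# Rotation by t: a unitary (orthogonal) operator on ℝ²
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
Rt = [[R[j][i] for j in range(2)] for i in range(2)]   # adjoint = transpose here
P = matmul(R, Rt)
print([[round(v, 10) for v in row] for row in P])   # [[1.0, 0.0], [0.0, 1.0]]
```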
Recall:
(1) If a linear operator T is self-adjoint then T* = T.
(2) If a linear operator T is normal then TT* = T*T.
Theorem:
Let V be an inner product space and let T be a self-adjoint linear operator on V. Then each characteristic value (Eigen value) of T is real, and Eigen vectors of T associated with distinct Eigen values are orthogonal.
Proof:
Given that T is a self-adjoint linear operator on an inner product space V.
∴ T* = T
Suppose λ is an Eigen value of T and x ≠ θ_V is an Eigen vector of T corresponding to λ. Then,
Tx = λx
Now, λ⟨x, x⟩ = ⟨λx, x⟩
= ⟨Tx, x⟩
= ⟨x, T*x⟩
= ⟨x, Tx⟩   ∵ T* = T
= ⟨x, λx⟩
= λ̄⟨x, x⟩
∴ λ = λ̄   ∵ ⟨x, x⟩ ≠ 0
Hence, λ is real. So, each Eigen value of T is real.
Suppose μ is another Eigen value of T with μ ≠ λ, and the Eigen vector associated with μ is x′. Then,
Tx′ = μx′
Now, λ⟨x, x′⟩ = ⟨λx, x′⟩
= ⟨Tx, x′⟩
= ⟨x, T*x′⟩
= ⟨x, Tx′⟩   ∵ T* = T
= ⟨x, μx′⟩
= μ̄⟨x, x′⟩
= μ⟨x, x′⟩   ∵ μ is real
∴ λ⟨x, x′⟩ − μ⟨x, x′⟩ = 0
∴ (λ − μ)⟨x, x′⟩ = 0
But λ ≠ μ ⟹ (λ − μ) ≠ 0. Hence, we must have ⟨x, x′⟩ = 0.
Thus, x and x′ are orthogonal. Hence, Eigen vectors associated with distinct Eigen values are orthogonal.
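A concrete self-adjoint operator illustrates both conclusions. The symmetric matrix below and its Eigen pairs, found by hand from det(A − λI) = 0, are sample choices of ours:

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

# A sample self-adjoint (symmetric) operator on ℝ²:
A = [[2, 1], [1, 2]]
# Its Eigen pairs: λ = 1 with v = (1, −1), and λ = 3 with v = (1, 1); both λ are real.
pairs = [(1, (1, -1)), (3, (1, 1))]
for lam, v in pairs:
    assert matvec(A, v) == tuple(lam * a for a in v)   # checks Tv = λv
print(inner(pairs[0][1], pairs[1][1]))   # 0: the Eigen vectors are orthogonal
```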
Theorem:
Let V be a finite dimensional inner product space and let T be a normal operator on V. Suppose x ∈ V. Then x is an Eigen vector for T with Eigen value λ if and only if x is an Eigen vector for T* with Eigen value λ̄.
Proof:
Given that T is normal. So, TT* = T*T ……….. (1)
Suppose U is any normal operator on V. Then,
UU* = U*U
Now, ‖Ux‖² = ⟨Ux, Ux⟩
= ⟨x, U*Ux⟩
= ⟨x, UU*x⟩   ∵ U*U = UU*
= ⟨U*x, U*x⟩
= ‖U*x‖²
∴ ‖Ux‖ = ‖U*x‖ ………………….. (2)
Thus, any normal operator satisfies Eq. (2).
Now, given an Eigen value λ of the normal operator T, we have
(T − λI)(T − λI)* = (T − λI)(T* − λ̄I)
= TT* − λ̄T − λT* + λλ̄I
= T*T − λT* − λ̄T + λλ̄I   From Eq. (1)
= (T* − λ̄I)(T − λI)
= (T − λI)*(T − λI)
So, if T is a normal operator then T − λI is also a normal operator.
Hence, T − λI satisfies Eq. (2):
‖(T − λI)x‖ = ‖(T − λI)*x‖ ………………. (3)
Now, x is an Eigen vector for T corresponding to Eigen value λ
⇔ Tx = λx
⇔ (T − λI)x = θ_V
⇔ ‖(T − λI)x‖ = 0
⇔ ‖(T − λI)*x‖ = 0   Using Eq. (3)
⇔ (T* − λ̄I)x = θ_V
⇔ T*x = λ̄x
⇔ x is an Eigen vector for T* corresponding to Eigen value λ̄
Theorem: Spectral Theorem
Let T be a normal operator on a finite dimensional complex inner product space V. Let λ₁, λ₂, ……, λₖ be the distinct characteristic values of T. Let Wᵢ be the characteristic space associated with λᵢ. Then Wᵢ is orthogonal to Wⱼ when i ≠ j.
Proof:
Let x ∈ Wᵢ and y ∈ Wⱼ with i ≠ j.
i.e. x is an Eigen vector corresponding to λᵢ and y is an Eigen vector corresponding to λⱼ, with λᵢ ≠ λⱼ.
Then, Tx = λᵢx, Ty = λⱼy and, by the previous theorem,
T*x = λ̄ᵢx, T*y = λ̄ⱼy
Now, λᵢ⟨x, y⟩ = ⟨λᵢx, y⟩
= ⟨Tx, y⟩
= ⟨x, T*y⟩
= ⟨x, λ̄ⱼy⟩
= λⱼ⟨x, y⟩
∴ (λᵢ − λⱼ)⟨x, y⟩ = 0
But λᵢ ≠ λⱼ ⟹ (λᵢ − λⱼ) ≠ 0. Hence, we must have ⟨x, y⟩ = 0.
Thus, x and y are orthogonal ∀x ∈ Wᵢ and ∀y ∈ Wⱼ with i ≠ j.
Hence, Wᵢ and Wⱼ are orthogonal when i ≠ j.
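A small normal operator on ℂ² illustrates the theorem. The skew-symmetric matrix below and its Eigen pairs (λ = i with v = (1, −i), λ = −i with w = (1, i)) are sample choices of ours; the complex inner product is ⟨x, y⟩ = ∑ xᵢ ȳᵢ:

```python
def cinner(x, y):
    # Complex inner product ⟨x, y⟩ = Σ xᵢ · conj(yᵢ)
    return sum(a * b.conjugate() for a, b in zip(x, y))

def matvec(A, x):
    return tuple(sum(A[i][j] * x[j] for j in range(2)) for i in range(2))

# A normal (in fact skew-symmetric) operator on ℂ²:
A = [[0, -1], [1, 0]]
v, w = (1, -1j), (1, 1j)
assert matvec(A, v) == (1j * v[0], 1j * v[1])     # Tv = i·v
assert matvec(A, w) == (-1j * w[0], -1j * w[1])   # Tw = (−i)·w
print(cinner(v, w))   # prints a zero complex number: the Eigen spaces are orthogonal
```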