

Vector Space
Let F be any field and V be any non-empty set; then V is a vector space over F, denoted by V(F),
if the following axioms hold:
(V, +) is an abelian group.

(i.e. there exists a map "·" : F × V → V, called scalar multiplication)

such that the following conditions are satisfied for all a, b ∈ F and u, v ∈ V:
a·(u + v) = a·u + a·v
(a + b)·u = a·u + b·u
(ab)·u = a·(b·u)
1·u = u, where 1 is the multiplicative identity of F.

Remember that elements of a vector space are called vectors and those of the field are called scalars.

Examples:
Let F be any field; then F is a vector space over itself.
Q(√2) and Q(√3) are vector spaces over Q.
Rⁿ(R) is a vector space, called Euclidean space.
M_{m×n}(F) is a vector space, where M_{m×n}(F) is the set of all m×n matrices over F.
Let F[x] denote the set of all polynomials with coefficients from F; then F[x] is a vector space over F.
Let F_n[x] denote the set of all polynomials of degree less than or equal to n with coefficients from F; then F_n[x] is
a vector space over F.
Let V = R⁺ = {x ∈ R : x > 0}; then V is a vector space over R under the addition
and scalar multiplication defined by x ⊕ y = xy and a ⊙ x = xᵃ.

Non-Examples:

R² is not a vector space under the scalar multiplication defined by
a(x, y) = (ax, 0),
since 1(x, y) = (x, 0) ≠ (x, y) whenever y ≠ 0.
Q is not a vector space over R, as scalar multiplication does not stay inside Q.
Let H be the first quadrant in the xy-plane, that is, the set
H = {(x, y) : x ≥ 0, y ≥ 0}; then H is not a vector space over R, as for
u = (1, 1) ∈ H the scalar multiple (−1)u = (−1, −1) ∉ H.

Let H be the union of the first and third quadrants in the xy-plane, that is, the set
H = {(x, y) : xy ≥ 0}; then H is not a vector space over R, as for
u = (1, 0) ∈ H and v = (0, −1) ∈ H the sum u + v = (1, −1) ∉ H.

Theorem:
If V is a vector space over F, then for all a ∈ F and v ∈ V:
0·v = 0, a·0 = 0, (−a)·v = a·(−v) = −(a·v), and a·v = 0 implies a = 0 or v = 0.

Subspace:
Let V be a vector space over F and W ⊆ V; then we say W is a subspace of V if W is itself a
vector space over F under the operations inherited from V.

Trivial Subspaces:
The zero subspace {0} and V itself are called the trivial subspaces of any vector space V.

Non-Trivial Subspaces:
All other subspaces of V are called non-trivial subspaces of V.

Examples:
R is a non-trivial subspace of C(R).
Q is a non-trivial subspace of Q(√2) over Q.
Q(√2) is a subspace of R(Q).
F_n[x] is a subspace of F[x].

Subspace Criteria:
Let V be a vector space over F and let W be a non-empty subset of V; then W is a subspace of V iff
au + bw ∈ W for all u, w ∈ W and a, b ∈ F.
Remember that first of all you should check whether the additive identity of V is in W. If it is not in W,
then there is no need to apply the subspace criteria.
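The criterion can be spot-checked numerically. A minimal sketch in Python (assuming NumPy; the subspace W = {(x, y, 0)} ⊂ R³ and the random samples are illustrative choices, not from the text):

```python
import numpy as np

# Spot-check of the subspace criterion au + bw in W for the illustrative
# subspace W = {(x, y, 0)} of R^3, using random samples.
rng = np.random.default_rng(1)

def in_W(v):
    # membership test for W = {(x, y, 0)}
    return np.isclose(v[2], 0.0)

u = np.array([rng.standard_normal(), rng.standard_normal(), 0.0])
w = np.array([rng.standard_normal(), rng.standard_normal(), 0.0])
a, b = rng.standard_normal(2)
assert in_W(a * u + b * w)  # closed under linear combinations
```

A passing check is evidence, not proof; the criterion itself is verified algebraically as above.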

Example:1

Consider H = {(a, b, 0) : a, b ∈ R}; then H is clearly a subspace of R³ (apply the subspace criteria).


P a g e | 228

Example:2
A plane in R³ not through the origin is not a subspace of R³; similarly, a line in R² not through
the origin is not a subspace of R², as neither contains the additive identity.

Example:3
Let } then is not a subspace of as .

Example:4
Let } then is a subspace of as
.

Example:5
Let H be the set of points inside and on the unit circle in the xy-plane, that is, the set
H = {(x, y) : x² + y² ≤ 1}; then H is not a subspace of R², as for
u = (1, 0) ∈ H the scalar multiple 2u = (2, 0) ∉ H.
Example:6
The only subspaces of R² are {0}, lines through the origin, and R² itself.

Example:7
The only subspaces of R³ are {0}, lines through the origin, planes through the origin, and R³ itself.

Example:8
Let V be the vector space of all n×n real matrices; then the sets of all diagonal, upper triangular,
lower triangular, symmetric, and skew-symmetric matrices are subspaces of V.

Union of Two subspaces:

Let W₁, W₂ be subspaces of a vector space V; then W₁ ∪ W₂ need not be a subspace of V.
Let V = R² and consider W₁ = {(x, 0) : x ∈ R} and W₂ = {(0, y) : y ∈ R}; then W₁ ∪ W₂ is not a
subspace of V: W₁ ∪ W₂ must contain (1, 0) and (0, 1), but it is not
closed under addition, as (1, 0) + (0, 1) = (1, 1) ∉ W₁ ∪ W₂.
W₁ ∪ W₂ is a subspace of V iff either W₁ ⊆ W₂ or W₂ ⊆ W₁.

Intersection of subspaces:
The intersection of two subspaces is a subspace of V. In fact, the intersection of any collection of
subspaces is again a subspace.

Sum of two subspaces:

Let W₁, W₂ be subspaces of a vector space V; then
W₁ + W₂ = {w₁ + w₂ : w₁ ∈ W₁, w₂ ∈ W₂} is also a subspace of V.
W₁ and W₂ are both subspaces of W₁ + W₂.

Result:
If W₁, W₂, W₃ are three subspaces of a vector space V such that W₁ ⊆ W₂, W₁ + W₃ = W₂ + W₃
and W₁ ∩ W₃ = W₂ ∩ W₃, then W₁ = W₂.

Example:
The above result is not true if W₁ ⊄ W₂. For this consider V = R²,
W₁ = {(x, 0) : x ∈ R}, W₂ = {(0, y) : y ∈ R}, W₃ = {(x, x) : x ∈ R}.
Then W₁ ∩ W₃ = W₂ ∩ W₃ = {0} and W₁ + W₃ = W₂ + W₃ = R², but W₁ ≠ W₂.

Direct Sum of subspaces:

A vector space V is the direct sum of its subspaces W₁ and W₂, denoted by V = W₁ ⊕ W₂, if
V = W₁ + W₂ and
W₁ ∩ W₂ = {0}.
Example:
For any field F, consider the vector space V = F²; then V is the direct sum of its two
subspaces W₁ and W₂, where
W₁ = {(a, 0) : a ∈ F} and W₂ = {(0, b) : b ∈ F}.

Example:
Let V be the vector space of all real-valued functions; then V is the direct sum of
W₁ = {f : f(−x) = f(x)} (the even functions) and W₂ = {f : f(−x) = −f(x)} (the odd functions).

Example:
Let M be the vector space of all n×n real matrices. Let W₁ be the set of all symmetric matrices in M and
W₂ be the set of all skew-symmetric matrices in M; then M = W₁ ⊕ W₂.
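This decomposition is easy to verify computationally. A minimal sketch in Python (assuming NumPy; the matrix entries are illustrative):

```python
import numpy as np

# Any square real matrix A splits as the sum of a symmetric and a
# skew-symmetric part, illustrating M = W1 (+) W2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

S = (A + A.T) / 2  # symmetric part
K = (A - A.T) / 2  # skew-symmetric part

assert np.allclose(S, S.T)
assert np.allclose(K, -K.T)
assert np.allclose(A, S + K)  # the decomposition A = S + K
```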

Q:1 Which of the following is not a vector space?

(a) ( ) (b) ( ) (c) ( ) (d) ( )

Explanation:
Three of the given structures are vector spaces, while the remaining one is not a vector space under the scalar
multiplication defined by
a(x, y) = (ax, 0), since 1(x, y) = (x, 0) ≠ (x, y) whenever y ≠ 0.

Q:2 Which of the following is not a subspace of the given space?

(a) } (b) }
(c) } (d) }

Explanation:
Clearly, in option (c) the zero vector does not belong to the given set, so we don't need to apply the subspace
criteria, and the set in (c) is not a subspace.
For the sake of completeness, we check option (d).
Let }

Consider

if

and then
Using
we can see that
Hence is a subspace of
In a similar fashion we can check
} and } are subspaces of

Q:3 Let V be the vector space of all 2×2 real matrices; then which of the following is a subspace of V?

(a) | | } (b) }
(c) for some } (d) All of these

Explanation:
The sum of two singular matrices need not be singular: for example, the singular matrices
A = [1 0; 0 0] and B = [0 0; 0 1]

have the non-singular sum A + B = [1 0; 0 1], so the set in (a) is not closed under addition.

For two idempotent matrices, such as
A = [1 0; 0 0] and B = [1 1; 0 0], the sum A + B is not idempotent.
For option (c) we may proceed as follows: take A₁, A₂ in the given set; then each satisfies the defining
condition, and for any scalars a, b the combination aA₁ + bA₂
satisfies the same condition, so the set in (c) is a
subspace.

Q:4 Which of the following is false?

(a) is a vector space. (b) A subspace is also a vector space


(c) for some elements of (d) A vector is an element in vector space

Explanation:
Options (a), (b), (d) are correct statements, while the statement in (c) holds good for all
elements of a vector space, not merely for some.

Q:5 Let U and W be subspaces of V; then which of the following is not a subspace of V?

(a) (b) (c) (d) None of These.

Explanation:
As the sum and intersection of two subspaces are subspaces, while the union of two subspaces need
not be a subspace of a vector space (example already discussed).

Q:6 Let then which is a subspace of ?

(a) (b) (c) (d)



Explanation:
Two of the given sets are not vector spaces due to failure of a scalar multiplication axiom, while another
is not a subset of the given space and so cannot be a subspace of it.
The remaining option is a trivial subspace of the space.

Q:7 Let } } then is


(a) plane (b) (c) (d) plane

Explanation:
Since then and
then elements of are of the form ( ) and hence sum has elements of the form
( ).

Rewriting in matrix form [ ]

Now applying row operations we get 0 1 so

Q:8 Let } } be subspaces of then

(a) (b) (c) (d) None

Explanation:
We can observe that } and we can show that .
Thus the space is the direct sum of its two subspaces.

Q:9 Let be vector space of matrices over field of real numbers and and be
subspaces of symmetric and skew-symmetric matrices respectively then

(a) (b) } (c) (d)



Explanation:
Since every real square matrix A can be written as A = (A + Aᵀ)/2 + (A − Aᵀ)/2,
where (A + Aᵀ)/2 is symmetric and (A − Aᵀ)/2 is skew-symmetric, and the null matrix is the only matrix
which is both symmetric and skew-symmetric,
the space is the direct sum of the subspaces of symmetric and skew-symmetric matrices.

Q:10 Consider the following subsets of


} }
} }
Which of the following pair have the property of being a subspace of

(a) (b) (c) (d)

Explanation:
For we proceed as and then
. For consider and but .
and are subspaces of (Using Subspace criteria).

Q:11 Consider the following subsets of


} }
} }
Which of the following pair is subspace of

(a) (b) (c) (d)

Explanation:
Since and are in but
Also and are in but
and are subspaces of
Verify, using subspace criteria.

Q:12 Let V be the set of all complex Hermitian matrices; then V is a vector space over

(a) (b) but not (c) Both and (d) but not

Explanation:
V is a vector space over R but not over C: for a non-zero Hermitian matrix H,
the scalar multiple iH satisfies (iH)* = −iH ≠ iH,

so iH is not Hermitian.

Q:13 Which of the following is not a vector space over the field of real numbers?

(a) }
(b) , | [ ] ( ) -
(c) | [ ] }
(d) , | [ ] ( ) -

Explanation:
is a vector space over as .
For we will check that is subspace of all real valued functions by using subspace
criteria.
Let and be in then ( ) ( )
Consider ( ) ( ) ( )
Now for any ( ) ( )
So is subspace of vector space of all real valued functions.
In a similar fashion you can check .
For take and consider ( ) ( ) ( ) .
Hence is not closed under addition.

Q:14 The set } is not a vector subspace of because

(a) zero element does not exist. (b) is not closed under scalar multiplication.
(c) is not closed under vector addition (d) multiplicative inverses do not exist.

Explanation:
For the given set the zero element exists.
The set is closed under scalar multiplication, since for any element of the set and any admissible scalar
the scalar multiple remains in the set.
The inverse of each element exists.
But the set is not closed under vector addition, since the sum of two suitably chosen elements
lies outside the set.

Q:15 Let V be the vector space of all real-valued functions; then pick the set which is not a
subspace of V.

(a) } (b) }
(c) } (d) }

Explanation:
For option (a),
Take
Then ,

Also for
Option (a) is a subspace of
For option (b),
Take then
But the set is not closed under vector
addition.
For option (c), take and
Then must exist as both of
these limits exist separately.
Similarly, for option (d).

Answers:
Q:1 d Q:2 c Q:3 c Q:4 c Q:5 c

Q:6 d Q:7 c Q:8 c Q:9 c Q:10 d

Q:11 d Q:12 b Q:13 d Q:14 c Q:15 b



Linear Combination:
Let V(F) be a vector space, v₁, v₂, …, vₙ ∈ V and a₁, a₂, …, aₙ ∈ F;
then any vector of the form
v = a₁v₁ + a₂v₂ + ⋯ + aₙvₙ is a linear combination of v₁, v₂, …, vₙ.

Examples:
In R³, take any (x, y, z); then (x, y, z) can be written as a linear combination of the three
vectors e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1):
(x, y, z) = x e₁ + y e₂ + z e₃. We say (x, y, z) is a linear combination of
e₁, e₂ and e₃.

Spanning Set:
Let S ⊆ V; then the set consisting of all linear combinations of the elements of S is called
Span(S) and is denoted by ⟨S⟩.
Span(S) is a subspace of V, and it is the smallest subspace containing S.
Span(S) is said to be spanned (generated) by S, and S is called a spanning set for Span(S).

Example:
} then and is a subspace of

As [ ] [ ] [ ] where [ ], [ ]

Example:

Let be set of all vectors of the form 0 1 then =span{ } where thus is a

subspace of

Example:

Let be set of all vectors of the form [ ] where are arbitrary. Then

} and hence is a subspace of



Example:
The polynomials 1, x, x², …, xⁿ span the vector space of all polynomials of degree less than
or equal to n. So F_n[x] = span{1, x, x², …, xⁿ}.

Theorems:

The linear span of the empty set is {0}.
If W is a subspace of V then Span(W) = W, and conversely.
If S = {v₁, …, vₘ} and T = {w₁, …, wₙ} are two sets of vectors in any vector space V, then
span(S) = span(T) if and only if each vector in S is a linear combination of those in T and each
vector in T is a linear combination of those in S.

Null Space of a Matrix:

Consider a system of homogeneous equations written

in matrix form as Ax = 0, where A = [aᵢⱼ]; the set of all solutions of Ax = 0 is the

null space of the matrix A.
The null space of an m×n matrix A, written as Nul(A), is the set of all solutions of the
homogeneous equation Ax = 0:
Nul(A) = {x : x ∈ Rⁿ and Ax = 0}; or we can say Nul(A) is the set of all x in Rⁿ that are
mapped into the zero vector of Rᵐ via the linear transformation x ↦ Ax.

Example:

Let =* + and let [ ] then is in Nul( as

Example:

Let =* + and let [ ] then is not in Nul( as * +.

Theorem:
The null space of an m×n matrix A is a subspace of Rⁿ.

Example:
Find a spanning set for the null space of the matrix

[ ]

Solution:

First of all we will find the general solution of in terms of free variables.
After reducing the matrix in reduced row echelon form we have

[ ]

The general solution is written with the free variables as parameters.


Now decompose the vector giving the general solution into a linear combination of the vectors
where the weights are the free variables.

[ ] [ ] [ ] [ ] [ ]
Every linear combination of these vectors is an element of Nul(A). Thus the set of vectors obtained is a spanning
set for Nul(A).
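A computer algebra system can produce such a spanning set directly. A minimal sketch in Python (assuming SymPy; the matrix is illustrative, not the one from the worked example):

```python
import sympy as sp

# nullspace() returns a spanning set for Nul(A), read off from the free
# variables of the reduced row echelon form.
A = sp.Matrix([[1, 2, 0, -1],
               [0, 0, 1,  3]])

basis = A.nullspace()  # list of column vectors spanning Nul(A)
for v in basis:
    assert A * v == sp.zeros(2, 1)  # each spanning vector solves Ax = 0
print(len(basis))  # 2 free variables, so 2 spanning vectors
```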

Column Space of a Matrix:

The column space of an m×n matrix A, written as Col(A), is the set of all linear combinations
of the columns of A. If A = [a₁ ⋯ aₙ] then
Col(A) = Span{a₁, …, aₙ}
Or
Col(A) = {b : b = Ax for some x in Rⁿ}.

Example:
Find a Matrix such that

{[ ] }

Since

{ [ ] [ ] } {[ ] [ ] }.

Let =[ ] then
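A minimal sketch in Python (assuming SymPy; illustrative entries) showing that the pivot columns returned by columnspace() form a basis for Col(A):

```python
import sympy as sp

# columnspace() returns the pivot columns of A, a basis for Col(A).
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [0, 1, 1]])

basis = A.columnspace()
assert len(basis) == A.rank()  # dim Col(A) = rank(A)
```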

Theorems:
• The column space of an m×n matrix A is a subspace of Rᵐ.
• The column space of an m×n matrix A is all of Rᵐ iff the equation Ax = b has a
solution for each b in Rᵐ.

Contrast Between Nul(A) and Col(A) for an m×n matrix A.

• Nul(A) is implicitly defined, i.e. we have only a condition which must be satisfied by
the vectors in Nul(A), while Col(A) is explicitly defined, i.e. we know a method
to build vectors in Col(A).
• Row operations on A are required to find vectors in Nul(A); on the other hand, finding
vectors in Col(A) is quite easy, as the columns of A are given and others are formed from
them.
• There is no obvious relation between Nul(A) and the entries in A, while there is an
obvious relationship between Col(A) and the entries in A.
• A vector x in Nul(A) has the property that Ax = 0, whereas a vector b in Col(A) has the
property that the equation Ax = b is consistent.
• Nul(A) = {0} iff the equation Ax = 0 has only the trivial solution, while Col(A) = Rᵐ
iff the equation Ax = b has a solution for each b in Rᵐ.
• Nul(A) = {0} iff the linear transformation x ↦ Ax is one-to-one,
whereas Col(A) = Rᵐ iff the linear transformation x ↦ Ax maps Rⁿ onto Rᵐ.

Linear Transformation:
A linear transformation T from a vector space V into a vector space W is a rule that assigns to
each vector v in V a unique vector T(v) in W such that
T(u + v) = T(u) + T(v) for all u, v in V,
T(cv) = cT(v) for all v in V and all scalars c.
Equivalently, T(au + bv) = aT(u) + bT(v).
The Kernel (Null Space) of such a T is the set of all v in V such that T(v) = 0, and it is clearly a
subspace of V.
The Range of T is the set of all vectors in W of the form T(v) for some v in V; it is a subspace of W.
If T is a matrix transformation, say T(x) = Ax, then the kernel and the range of T are just the null space
and the column space of the matrix A.

Examples:

1. The identity map I : V → V such that I(v) = v, and the zero map O : V → W defined by
O(v) = 0,

are linear transformations.
2. Let V be the vector space of all real-valued functions f defined on [a, b] with the property that
f is differentiable and f′ is continuous on [a, b]; take W = C[a, b] to be the vector space of all
continuous functions, and let T be the transformation that changes f into its
derivative f′ in W. Then T is a linear transformation, as
(f + g)′ = f′ + g′ and (cf)′ = cf′.

The kernel of this T is the set of constant functions defined on [a, b].


3. defined by then is clearly a linear transformation
Whereas }
4. A map T defined as indicated is not a linear transformation, as the additivity condition fails.
Also, since a linear transformation is always a group homomorphism, the identity should map to the
identity, but here it does not.
5. Let Hom(V, W) be the set of all linear transformations from the vector space V to W;
then Hom(V, W) is clearly a vector space over F (just like the group of automorphisms).
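The two linearity conditions, and the failure of a shifted map to satisfy them, can be checked numerically. A minimal sketch in Python (assuming NumPy; the matrix A and shift b are illustrative):

```python
import numpy as np

# T(x) = A x satisfies both linearity conditions; the shifted map
# S(x) = A x + b fails, since S(0) != 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
b = np.array([1.0, 0.0, 2.0])

T = lambda x: A @ x
S = lambda x: A @ x + b

u, v, c = rng.standard_normal(2), rng.standard_normal(2), 2.5
assert np.allclose(T(u + v), T(u) + T(v))             # additivity
assert np.allclose(T(c * u), c * T(u))                # homogeneity
assert not np.allclose(S(np.zeros(2)), np.zeros(3))   # identity must map to identity
```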

Notations:
Hom(V, W) = {T : V → W such that T is linear}.
For any linear transformation T, T(0) = 0 and T(−v) = −T(v).

Result:
Let T be a linear transformation; then Ker(T) = {0} iff T is one-one.

Linearly independent set:

An indexed set of vectors {v₁, v₂, …, vₙ} is said to be linearly independent over F iff
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0 has only the trivial solution, i.e. a₁ = a₂ = ⋯ = aₙ = 0.
If any one of the aᵢ can be taken non-zero then the given set is linearly dependent.

Examples:
The zero vector is linearly dependent.
A finite set containing the zero vector is linearly dependent.
A singleton set {v} is linearly independent iff v ≠ 0.
A set consisting of two vectors is linearly dependent iff one of the vectors is a multiple of the other.
A set S with two or more vectors is linearly dependent iff at least one of the vectors in S is
expressible as a linear combination of the other vectors in S.
Let v₁, v₂ be any vectors and v₃ = v₁ + v₂; then the set {v₁, v₂, v₃} is linearly dependent,
as v₁ + v₂ − v₃ = 0.
The set } is linearly independent in [ ] as there is no scalar c exists such that
[ ]
The set } is linearly dependent in [ ] as [ ]
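Linear independence of finitely many vectors in Rⁿ reduces to a rank computation. A minimal sketch in Python (assuming NumPy; illustrative vectors):

```python
import numpy as np

# A finite set of vectors is linearly independent iff the matrix having
# those vectors as columns has rank equal to the number of vectors.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2  # deliberately a linear combination of v1 and v2

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))  # 2 < 3, so {v1, v2, v3} is dependent
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2: independent
```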

Geometrical interpretation of linear independence in R² and R³:

• In R² or R³, a set of two vectors is linearly independent if and only if the vectors do not lie
on the same line when they are placed with their initial points at the origin.
This result follows from the fact that two vectors are linearly independent iff neither vector is a
scalar multiple of the other.

• In R³, a set of three vectors is linearly independent iff the vectors do not lie in the same
plane when they are placed with their initial points at the origin.
This result follows from the fact that three vectors are linearly independent iff none of the
vectors is a linear combination of the other two.

Theorems:
• An indexed set {v₁, …, vₚ} of two or more vectors with v₁ ≠ 0 is linearly dependent iff
some vⱼ (with j > 1) is a linear combination of the preceding vectors v₁, …, vⱼ₋₁.
• If S is a linearly independent set then every subset of S is linearly independent.
• If S is a linearly dependent set then every superset of S is linearly dependent.
• If {v₁, v₂, …, vₙ} is a linearly independent subset of a vector space V,
then the set {v₁, v₁ + v₂, …, v₁ + v₂ + ⋯ + vₙ} is also linearly independent.

Linear Dependence of Functions:

If f₁, f₂ are two differentiable functions on (−∞, ∞) then the determinant

W(x) = det [ f₁(x)  f₂(x) ; f₁′(x)  f₂′(x) ]

is called the Wronskian. We can extend this determinant to n
functions, using derivatives up to order n − 1.

Theorem:
If the functions f₁, f₂, …, fₙ have n − 1 continuous derivatives on (−∞, ∞),
and if the Wronskian of these functions is not identically zero on (−∞, ∞), then these
functions form a linearly independent set of vectors in C^(n−1)(−∞, ∞).
Remark: The converse of the above theorem is false. If the Wronskian of f₁, f₂, …, fₙ
is identically zero on (−∞, ∞), then no conclusion can be drawn
about the linear independence of {f₁, f₂, …, fₙ}. This set of vectors may be linearly dependent or
independent.

Convention:
C^n(−∞, ∞) represents the vector space of all functions with n continuous derivatives on
(−∞, ∞).

Example:
A linearly independent set in C¹(−∞, ∞):
The functions f₁(x) = x and f₂(x) = sin x form a linearly independent set of vectors in C¹(−∞, ∞),
as their Wronskian is x cos x − sin x, which is not identically zero on (−∞, ∞).

Example:
A linearly independent set in C²(−∞, ∞):
The functions f₁(x) = 1, f₂(x) = eˣ and f₃(x) = e²ˣ form a linearly independent set of vectors in C²(−∞, ∞),

as their Wronskian is 2e³ˣ, which is non-zero for every x in (−∞, ∞).
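A minimal sketch in Python (assuming SymPy; the functions 1, eˣ, e²ˣ match the example above) computing a Wronskian symbolically:

```python
import sympy as sp

# The Wronskian of 1, e^x, e^(2x) is 2e^(3x), never zero, so the three
# functions are linearly independent.
x = sp.symbols('x')
W = sp.wronskian([1, sp.exp(x), sp.exp(2 * x)], x)
print(sp.simplify(W))  # 2*exp(3*x)
```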

Basis and Dimension:

Let V be any vector space and B be any non-empty subset of V; then B is a basis of V iff
(i) Span(B) = V and (ii) B is linearly independent.
The dimension of a vector space is the number of elements in its basis set.

Examples:
1. V = R², B = {(1, 0), (0, 1)}.
Take any arbitrary vector (x, y) ∈ R²; then (x, y) = x(1, 0) + y(0, 1), thus Span(B) = R².
Clearly the given set is linearly independent, hence B is a basis for R² and dim(R²) = 2.
The set B is called the standard basis for R². Note that every linearly independent set of 2
elements is a basis set for R².
2. V = Rⁿ, B = {e₁, e₂, …, eₙ};
then B is the standard basis for Rⁿ and dim(Rⁿ) = n. Generalizing, dim Fⁿ(F) = n.
3. V = C over R, B = {1, i};
then B is a basis set for C(R) and dim C(R) = 2.
4. V = M₂(R) is a vector space over R, and the set of matrix units (each with a single entry 1 and zeros elsewhere) is a
basis.
5. For V = M₂(R), dim V = 4 and the basis elements are
the four matrix units E₁₁, E₁₂, E₂₁, E₂₂.
Generalizing, dim M_{m×n}(F) = mn.
6. Let V be the vector space of all symmetric matrices of order n over F; then
dim V = n(n + 1)/2.
7. Let V be the vector space of all skew-symmetric matrices of order n over F;

then dim V = n(n − 1)/2.
8. Q(√2) is a 2-dimensional vector space over Q with basis set {1, √2}.
9. R(R) and C(C) are 1-dimensional vector spaces.
10. C(R) is a 2-dimensional vector space whose basis set is {1, i}.
11. Let V = F_n[x]; then dim V = n + 1 with basis [1, x, x², …, xⁿ].

Theorems:
• Let S = {v₁, …, vₚ} be a set in V and let H = Span(S).
If one of the vectors in S, say vₖ, is a linear combination of the remaining vectors in S, then
the set formed from S by removing vₖ still spans H.
• If H ≠ {0}, some subset of S is a basis for H.
(The above theorem is known as the Spanning Set Theorem.)
• If a vector space V has a basis {b₁, …, bₙ}, then any set in V containing more than n
vectors is linearly dependent.
• If a vector space V has a basis of n vectors, then any basis of V consists of exactly n vectors.
• A one-to-one linear transformation preserves basis and dimension.
• A one-to-one linear transformation maps a linearly independent set to a linearly independent set.
• If S is linearly independent then S can be extended to form a basis of V.
• If a set S spans a vector space V, then some subset of S is a basis for V.

Infinite Dimensional Vector Spaces:

If the basis of a vector space V is infinite then V is called an infinite dimensional vector space.

Examples:
F[x], the vector space of all real-valued functions, and R as a vector space over Q.

An application of Spanning Set Theorem:

{ | } where is a subspace of .

Solution:
Any vector where and

[ ] [ ] [ ] [ ] then set } spans .

But so is linearly dependent and the set } is independent hence

Basis for Null and Col :


Find a basis for for Col where

[ ] where [ ]
P a g e | 244

Solution:
As and so by spanning set theorem we may discard and and
} will still span .

Let } {[ ] [ ] [ ]} since each column is non-zero and none of them is

linear combination of others so is linearly independent and is basis for

Results on Dimension of a vector space:

• Let H be a subspace of a finite-dimensional vector space V. Any linearly independent set

in H can be expanded, if necessary, to a basis for H. Also H is finite dimensional and
dim H ≤ dim V.
• Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly

p elements in V is automatically a basis for V.
• Any finite dimensional vector space contains a basis.
• Let V be an n-dimensional vector space. A set of n vectors {v₁, …, vₙ} is a basis for V
iff each vector in V is uniquely expressed as a linear combination of v₁, …, vₙ.
• If U and W are finite-dimensional subspaces of a vector space V then
dim(U + W) = dim U + dim W − dim(U ∩ W).
• Let U and W be subspaces of a vector space V; then U ∩ W is a subspace of U and of W,
and dim(U ∩ W) ≤ min(dim U, dim W).

Also dim(U + W) ≥ max(dim U, dim W).

• Let W be an m-dimensional subspace of an n-dimensional vector space V;

then dim(V/W) = n − m.
• Let U and W be subspaces of a vector space V; if dim(U + W) = dim U + dim W, then
U ∩ W = {0}.
• Let V be a finite-dimensional vector space and let B = {v₁, …, vₙ} be any basis.
If a set has more than n vectors then it is linearly dependent.
If a set has fewer than n vectors then it does not span V.
• If V is an n-dimensional vector space over F then V is isomorphic to Fⁿ as a vector
space.

The Row Space:

If A is an m×n matrix, each row of A has n entries, so each row is a vector in Rⁿ.
The set of all linear combinations of the row vectors is called the row space of A and is denoted
by Row(A). As the rows of A are the columns of Aᵀ, Col(Aᵀ) is the row space of A.
Before an example to understand row space, we state a theorem:
If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon
form, the non-zero rows of B form a basis for the row space of B as well as for that of A.
P a g e | 245

Example:

[ ].

Solution: To find bases for the row space and the column space, row reduce A to echelon form.

As [ ] by above theorem the first three rows of form a basis for

the row space of Thus basis for row space of A is


}
For the column space, observe that the pivots are in columns 1, 2, 4; thus columns 1, 2, 4 of
A form a basis for Col(A). For the null space of A we need the reduced echelon form. The reduced

echelon form is [ ] so and are free variables.

Thus basis for

{[ ] [ ]}
(Keep in mind that row operations may change linear dependence relations among rows of a
matrix).
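A minimal sketch in Python (assuming SymPy; illustrative matrix): rowspace() returns the non-zero rows of an echelon form, which form a basis for Row(A):

```python
import sympy as sp

# rowspace() returns the non-zero rows of an echelon form of A,
# a basis for Row(A) = Col(A^T).
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

basis = A.rowspace()
assert len(basis) == A.rank()  # dim Row(A) = rank(A) = 2 here
```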

Rank Theorem:
The rank of A is the dimension of the column space of A.
Since Row(A) is the same as Col(Aᵀ), the dimension of the row space of A equals the rank of A.
The Rank Theorem, or Rank-Nullity Theorem, is stated as:
If A is an m×n matrix then rank(A) + dim Nul(A) = n.
The pivot columns of a matrix A form a basis for Col(A).
rank(A) = number of pivot columns = number of non-zero rows in the row
reduced echelon form of the matrix A.
dim Nul(A) = number of free variables (number of non-pivot columns).
If A is an m×n matrix then rank(A) ≤ min(m, n).
Or:
number of pivot columns + number of free variables = n.
Now, we will see an application of the Rank Theorem:
P a g e | 246

Example:

[ ]

Solution:
First we will convert given matrix in reduced echelon form

[ ]

after applying row operations , , we get

[ ] so , , while is arbitrary. Thus

[ ] [ ] [ ] = Thus every linear combination of is an element of

And } spans thus by Rank Theorem


Number of pivot columns = 3 , dim 1,
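The Rank Theorem itself is easy to confirm on any example. A minimal sketch in Python (assuming SymPy; illustrative matrix):

```python
import sympy as sp

# Rank Theorem check: rank(A) + dim Nul(A) = n for an m x n matrix.
A = sp.Matrix([[1, 2, 1, 0],
               [2, 4, 0, 2],
               [3, 6, 1, 2]])

n = A.cols
assert A.rank() + len(A.nullspace()) == n  # pivot cols + free vars = n
```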

Invertible Matrix Theorem:

Let A be an n×n matrix; then the following statements are each equivalent to the statement that
A is an invertible matrix:
The columns of A form a basis of Rⁿ.
Col(A) = Rⁿ.
dim Col(A) = n.
rank(A) = n.
Nul(A) = {0}.
dim Nul(A) = 0.

Quotient Spaces:

Let W be a subspace of a vector space V; then the quotient space V/W consists of all cosets of the
form v + W, v ∈ V. It is a vector space over F under the addition and scalar multiplication defined
as (u + W) + (v + W) = (u + v) + W and a(v + W) = av + W.

Theorems:

• Let T : V → W be a linear transformation; then V/Ker(T) ≅ Range(T).

(Fundamental theorem of homomorphism for vector spaces)
• If T is onto then V/Ker(T) ≅ W.
• If U and W are two subspaces of a vector space V then (U + W)/W ≅ U/(U ∩ W).
• Let W be a subspace of a finite dimensional vector space V; then
dim(V/W) = dim(V) − dim(W).

Q:1 The set of all solutions of is

(a) Null space (b) Column space (c) Row space (d) Rank

Explanation:
The set of all solutions of homogeneous system is called Null space.

Q:2 Dimension of when and

(a) (b) (c) (d)

Explanation:
Since ( ) and ( ) so dimension of quotient space is

Q:3 Which of the following is the linear transformation?

(a) defined by
(b) and
(c) where
(d) defined by | |

Explanation:
Consider then but
so is not linear
Also but
So is not a linear transformation.
For take such that and
so is not linear transformation.
is linear transformation. Check

Q:4 If } is a basis of then

(a) (b) (c) (d)



Explanation:
Since the set } is a basis of so for any, ,
We have such that .
By comparing real and imaginary parts we get, , .
Converting in matrix form
* +* + * + since given set is basis so * + is linearly independent and

Q:5 Let be space of all solutions of linear homogeneous differential equation


then

(a) (b) (c) (d)

Explanation:
The set of solutions listed spans the solution space of the given D.E.
and, being a linearly independent set of solutions, is a basis.
Hence

Q:6 Let } then

(a) 2 (b) 3 (c) 4 (d) 1

Explanation:

Since, [ ] [ ] [ ] [ ]

So the set {[ ] [ ]} spans the space and, being a linearly independent set, is a basis.

Hence

Q:7 The dimension of } subspace of is

(a) 2 (b) 3 (c) 4 (d) 1

Explanation:
As and

[ ] [ ] [ ] [ ] [ ] [ ]

Thus the set is basis of and

{[ ] [ ] [ ] [ ]}

Q:8 Let be linear such that then

(a) (b) (c) (d)

Explanation:

but is correct.

Q:9 Let { } then


_____

(a) (b) (c) (d)

Explanation:
Given that
Now,

Since, there are 5 dependent variables.


So,

Q:10 Let be linear transformation such that


then ( ) and ( ) are

(a) (b) (c) (d)

Explanation:
As is generated by
So the image will be generated by the images of the generators.
Now we will check whether this set of images is linearly
independent or not.
As
So the set is linearly dependent and after removing from we get
} as linearly independent set.
Thus } spans and being linearly independent set forms a basis of
hence ( ) .
For ( ) consider then
on comparing we get
, ,

So [ ] [ ] [ ] Thus the vector generates and being

linearly independent set is basis of ,.


So ( )

Q:11 Pick up the basis set for a subspace ,* + - of the


vector space of all real matrices.

(a) ,* + * + * + * +- (b) ,* + * +-

(c) ,* +- (d) ,* + * +-

Explanation:
Using relation,
So, ,* + - , * + -

Q:12 The dimension of null space of the matrix [ ] is?

(a) (b) (c) (d)

Explanation:
We know that dim Nul(A) = n − rank(A).
For this purpose, we find the rank of the given matrix.

[ ] [ ] [ ] [ ]

{
Thus,

Q:13 Let defined by a linear


transformation then ?

(a) (b) (c) (d)

Explanation:

Matrix of linear transformation is [ ]

[ ] {

[ ] {

Thus,

Q:14 A linear transformation defined by then


( ) ?

(a) (b) (c) (d)

Explanation:
We know that ( )
Matrix of linear transformation is * + which is already in echelon form.
Thus, ( )

Q:15 Let defined by be a


linear transformation then ?

(a) (b) (c) (d)

Explanation:
We know that }

On solving above system of equations, we have


Thus, } }

In matrix notation, [ ] [ ] {

Thus,

Q:16 The order of matrix of linear transformation is?

(a) (b) (c) (d)

Explanation:
Order of matrix of linear transformation
Thus, order of given matrix of linear transformation is

Q:17 Let be a linear transformation,


and be the matrix of linear transformation with respect to standard basis then eigenvalues
of are?

(a) (b) (c) (d)

Explanation:
The matrix of linear transformation with respect to standard basis

is [ ]; now the characteristic equation of the matrix is


The given value is one of its roots.
On solving, the remaining roots are obtained.

Q:18 Let be a linear transformation defined by


where are standard basis of then pick up the
correct one.

(a) (b) forms a non-invertible matrix of linear transformation.


(c) (d) is a non-diagonalizable matrix.

Explanation:

The matrix of linear transformation is [ ] having

Clearly, is diagonalizable as well as an invertible matrix.

Q:19 Which of the following triplets of functions is linearly independent?

(a) (b) (c) (d) All of these



Explanation:
Three differentiable functions defined on some interval are linearly
independent if their Wronskian is not identically zero:

| |

| |

| |

Q:20 Which of the following must be basis of ?

(a) (b)
(c) (d)

Explanation:
A basis of R³ consists of exactly three elements, so option (d) is discarded.
Three vectors will form a basis for R³ if and only if they are linearly independent.
For option (a) matrix form is given as

[ ] [ ] [ ] {

Since there exists a zero row in the echelon form, the vectors are not linearly independent.
Similarly, the vectors in option (b) are not linearly independent (verify it, using the matrix
echelon form). Now, we check for (c).

[ ] [ ] [ ] {

Answers:

Q:1 a Q:2 c Q:3 c Q:4 d Q:5 d


Q:6 a Q:7 c Q:8 c Q:9 c Q:10 a
Q:11 c Q:12 b Q:13 b Q:14 c Q:15 b
Q:16 c Q:17 b Q:18 c Q:19 a Q:20 c

Definition: (Inner Product)

Let V be a vector space over the field F of real or complex numbers.
A mapping ⟨·,·⟩ : V × V → F is said to be an inner product on V
if the following conditions are satisfied:
⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 iff v = 0;
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ and ⟨av, w⟩ = a⟨v, w⟩;
⟨u, v⟩ equals the complex conjugate of ⟨v, u⟩.

The pair (V, ⟨·,·⟩) is called an inner product space.

Rⁿ is an inner product space where the inner product is the dot product, that is,
⟨u, v⟩ = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ for u, v ∈ Rⁿ.
R is an inner product space where the inner product is the usual multiplication of real numbers.
Cⁿ is an inner product space where ⟨u, v⟩ = u₁v̄₁ + ⋯ + uₙv̄ₙ for u, v ∈ Cⁿ.
Let V be the vector space of all m×n matrices over R. Then V is an inner product space under the
inner product defined as
⟨A, B⟩ = Σᵢ,ⱼ aᵢⱼbᵢⱼ, where A = [aᵢⱼ] and
B = [bᵢⱼ].
F_n[x] is an inner product space with the inner product, for polynomials p and q, defined by
⟨p, q⟩ = p(t₀)q(t₀) + p(t₁)q(t₁) + ⋯ + p(tₙ)q(tₙ), where t₀, t₁, …, tₙ are distinct real numbers.
Let V be the vector space of all real-valued continuous functions on the interval [a, b].
Then for f, g ∈ V, ⟨f, g⟩ = ∫ₐᵇ f(t)g(t) dt is an inner product on V.
Let V be the vector space of all n×n matrices over R.
Then V is an inner product space under the inner product defined as
⟨A, B⟩ = trace(BᵀA) for A, B ∈ V.
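Two of the inner products above, sketched in Python (assuming NumPy; the vectors and matrices are illustrative):

```python
import numpy as np

# The dot product on R^3 and the trace inner product <A, B> = tr(B^T A)
# on square matrices.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])
print(np.dot(u, v))       # standard inner product on R^3

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[3.0, 1.0], [1.0, 2.0]])
print(np.trace(B.T @ A))  # trace inner product on matrices
```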

Definition:
Let V be an inner product space and v ∈ V; then √⟨v, v⟩ = ‖v‖ is called the norm or length of the
vector v.

Cauchy Schwarz Inequality:

Let u and v be elements of an inner product space V over F. Then |⟨u, v⟩| ≤ ‖u‖ ‖v‖.
Theorem: The norm in an inner product space satisfies the following axioms for all u, v ∈ V and all scalars a:

‖v‖ ≥ 0, and ‖v‖ = 0 iff v = 0;
‖av‖ = |a| ‖v‖;
‖u + v‖ ≤ ‖u‖ + ‖v‖.
Definition:

Let V be a real inner product space and u, v ∈ V non-zero; then cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖)

defines the angle θ between
the two vectors u and v, where 0 ≤ θ ≤ π.
Two vectors u and v are said to be orthogonal iff ⟨u, v⟩ = 0; so u and v are orthogonal iff θ = π/2.

Definition:
A set S = {v₁, …, vₙ} of vectors in an inner product space V over F is said to be an
orthogonal system if its distinct vectors are orthogonal, i.e.
⟨vᵢ, vⱼ⟩ = 0 if i ≠ j.
S is said to be an orthonormal system if, in addition, ⟨vᵢ, vᵢ⟩ = 1 for each i.

Definition:
A square matrix A over R for which AᵀA = I is called an orthogonal matrix.

Theorems:
• Every orthonormal system {v₁, …, vₙ} is linearly independent.
• The following conditions for a square matrix A are equivalent:
  • A is orthogonal.
  • The rows of A form an orthonormal set.
  • The columns of A form an orthonormal set.
• If A is an orthogonal matrix then |det A| = 1.
• The set of n×n orthogonal matrices forms a group under multiplication.
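A minimal sketch in Python (assuming NumPy) checking the defining property QᵀQ = I and |det Q| = 1 for a rotation matrix:

```python
import numpy as np

# A rotation matrix is orthogonal: Q^T Q = I and |det Q| = 1.
theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q.T @ Q, np.eye(2))
assert np.isclose(abs(np.linalg.det(Q)), 1.0)
```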

Eigen Values and Eigen Vector:

If A is an n×n matrix over F then a scalar λ is called an eigenvalue of A if there exists a
non-zero column vector x such that Ax = λx, where x is an eigenvector of A.
Now Ax = λx gives (A − λI)x = 0, where I is the identity matrix of order n.
Since x ≠ 0, the system has a non-trivial solution iff A − λI is singular, i.e.
|A − λI| = 0.

Remarks:
For an n×n matrix A, the equation |A − λI| = 0 is a polynomial equation of degree n and so has n
roots, where some of the roots can be repeated.
The polynomial can be written as
(−1)ⁿ[λⁿ − (trace A)λⁿ⁻¹ + ⋯ + (−1)ⁿ det A].
For a 2×2 matrix, |A − λI| = λ² − (trace A)λ + det A.
The sum of the eigenvalues equals trace(A) and their product equals det(A).
The equation |A − λI| = 0 is called the characteristic equation of A.

Example:

Check whether * + and * + are eigen vectors of * +


Since * +* + * + * + so is an eigen vector of .

Also * +* + * + * + so is not an eigen vector of

Example:

Find Eigen values and Eigen vector of * +

Solution:
The characteristic equation is |A − λI| = 0; its roots
are the eigenvalues of A.
Now we will find eigenvector corresponding to
Let * + be eigenvector then implies that * +* + * +

on solving we get so * + 0 1 0 1

So eigenvector corresponding to is 0 1. Using the same approach we get * + as an


eigenvector corresponding to

Example:

Find eigenvalues and Eigen vector of [ ]

The characteristic equation for is and roots of the equation are


The eigenvalue is of multiplicity

For eigenvector corresponding to the where 0 1

[ ]0 1 [ ] applying row operations on coefficient matrix we get

[ ]0 1 [ ]

Let then then 0 1 [ ] [ ] hence [ ]

Scalar multiples of [ ] are all eigenvectors corresponding to this eigenvalue, and they form a subspace

of R³ generated by [ ], which is the eigenspace of A corresponding to this eigenvalue.
For eigenvector corresponding to consider then

[ ]0 1 [ ] let and then



And 0 1 0 1 [ ] [ ] so there are two linearly independent eigen vectors

[ ] [ ] corresponding to .

Any linear combination of [ ] [ ] is also an eigenvector corresponding to . Thus

{[ ] [ ]} is a basis of the eigenspace of A corresponding to this eigenvalue.
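Such computations can be checked numerically. A minimal sketch in Python (assuming NumPy; the 2×2 symmetric matrix is illustrative, not the matrix from the example):

```python
import numpy as np

# eig returns the eigenvalues and a matrix whose columns are the
# corresponding eigenvectors; here the eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)  # A v = lambda v for each pair
```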

Cayley-Hamilton Theorem:
Every square matrix is a root of its characteristic polynomial, i.e. it satisfies its characteristic
equation.

Example:

Let us consider a 2×2 matrix A = [ ]. We find its characteristic polynomial as:

|A − λI| = λ² − (trace A)λ + det A.
We verify the Cayley-Hamilton theorem, i.e. that A is a root of its characteristic polynomial:
A² − (trace A)A + (det A)I = 0.
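A minimal numerical check of the theorem in Python (assuming NumPy; the 2×2 matrix is illustrative):

```python
import numpy as np

# Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A) A + det(A) I = 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
assert np.allclose(p_of_A, np.zeros((2, 2)))  # A satisfies its own equation
```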

Remember!
For a 2×2 matrix, the characteristic equation is λ² − (trace A)λ + det A = 0.
For a 3×3 matrix, the characteristic equation is λ³ − (trace A)λ² + (sum of principal 2×2 minors)λ − det A = 0.
For a triangular matrix the characteristic polynomial is (a₁₁ − λ)(a₂₂ − λ)⋯(aₙₙ − λ).

Theorems:
• Non-zero eigenvectors of a matrix corresponding to distinct eigenvalues are linearly
independent.
• If λ is an eigenvalue of an orthogonal matrix, then |λ| = 1.
• Any two eigenvectors corresponding to two distinct eigenvalues of an orthogonal matrix
are orthogonal.
• The eigenvalues of a diagonal matrix are its diagonal elements, and the eigenvectors are the
standard basis vectors.
• A matrix A and its transpose Aᵀ have the same eigenvalues.
• An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.
• If λ is an eigenvalue of a non-singular matrix A then 1/λ is an eigenvalue of A⁻¹.
• If A and B are square matrices of the same order then AB and BA have the same eigenvalues.

• If λ₁, λ₂, …, λₙ are the eigenvalues of a square matrix A of order n, then

kλ₁, kλ₂, …, kλₙ, where k is a scalar, are the eigenvalues of kA.
• Suppose x is an eigenvector of matrices A and B; then x is also an eigenvector of
aA + bB, where a, b are any scalars.

Definition:
If A and B are two n×n matrices over F then B is said to be similar to A if there exists a
nonsingular matrix P such that B = P⁻¹AP.

Theorems:
• Similarity of matrices is an equivalence relation on the set of all n×n matrices.
• Similar matrices have the same eigenvalues.
• An n×n matrix A has n linearly independent eigenvectors if and only if A is similar to a
diagonal matrix.
• The eigenvalues of a real symmetric matrix are all real.
• Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are
orthogonal.

Definition:
An n×n matrix A is said to be diagonalizable if it is similar to a diagonal matrix, i.e. A is
diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix. The
matrix P is said to diagonalize A.
If P is an orthogonal matrix and PᵀAP is a diagonal matrix then A is called
orthogonally diagonalizable, and P is said to orthogonally diagonalize A.
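A minimal sketch in Python (assuming NumPy; illustrative symmetric matrix): the matrix of eigenvectors P returned by eig diagonalizes A:

```python
import numpy as np

# The eigenvector matrix P diagonalizes A: P^{-1} A P is diagonal.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

vals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(vals))  # similar to a diagonal matrix
```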

Q:1 Let be a matrix with eigen values Then which can be the eigen value of
.

(a) (b) (c) (d)

Explanation:
If λ is an eigenvalue of a matrix A then 1/λ is an eigenvalue of A⁻¹.
So the reciprocal of the given eigenvalue must be an eigenvalue of A⁻¹.

Q:2 If * + is an eigenvector of * + then ___

(a) (b) (c) (d) None of these

Explanation:
* +* + * + on solving we get

Q:3 The minimum and maximum eigenvalue of [ ] are and Then other

eigenvalue is

(a) (b) (c) (d)

Explanation:
Let λ be the required eigenvalue; since the sum of all the eigenvalues of a matrix equals its trace,
λ = trace − (minimum + maximum).

Q:4 Let [ ] and be one of its eigenvalue then which of the following

must be another eigenvalue of ?

(a) (b) (c) (d)

Explanation:
Let λ₁, λ₂, λ₃ be the eigenvalues of A.
We know that λ₁ + λ₂ + λ₃ = trace(A).
Recall that if a matrix has the property "the sum of all entries in each row (column) is zero", then one
of its eigenvalues must be zero.

Q:5 Let [ ] then

(a) (b) (d) (d)

Explanation:
We know that if λ is an eigenvalue of a matrix A, then the corresponding eigenvalue of a polynomial
in A is that polynomial evaluated at λ.
Also, we know that the eigenvalues of a triangular matrix are its main diagonal entries.
Using this concept,

1+1+

Q:6 Pick up the equation satisfied by * +?

(a) (b)
(c) (d) None of these

Explanation:
For any 2×2 matrix, the characteristic equation is
λ² − (trace A)λ + det A = 0.
Computing the trace and the determinant, the equation becomes as given.
According to the Cayley-Hamilton theorem, A must satisfy this equation.

Q:7 Let [ ] then choose the equation whose root is ?

(a) (b)
(c) (d)

Explanation:
According to the Cayley-Hamilton theorem, every matrix is a root of its characteristic
polynomial. Also we know that the characteristic polynomial of a triangular matrix is
(a₁₁ − λ)(a₂₂ − λ)⋯(aₙₙ − λ).

The characteristic equation follows, and it is satisfied by A.

Q:8 Consider a matrix [ ] then which of the following must be eigenvalue of

(a) (b) (c) (d)

Explanation:
We know that

Then at least one of the eigenvalues must be zero.

Q:9 Let * + then ?

(a) (b) (c) (d)

Explanation:
According to Cayley-Hamilton Theorem,

Since, so

Q:10 Let and be eigenvectors of a matrix with two eigenvalues


and then ?

(a) (b) (c) (d)

Explanation:
Since, is matrix. So, it has total eigenvalues
Now, are linearly independent (verify it).
So, eigenvalue has multiplicity at least
Since,
Now,

Q:11 In an inner product space then


choose appropriate value for ?

(a) (b) – (c) (d)

Explanation:
Given that,
̅ ̅ ̅̅̅

Q:12 Let be a invertible matrix with real entries such that then choose
the best option.

(a) All eigenvalues of are non-zero (b) At least one non-zero eigenvalue of
(c) All eigenvalues of are zero. (d) All eigenvalues of are same.

Explanation:
Given that

[ ]
Eigenvalues of a zero-matrix are all zero.

Q:13 Let [ ] and then ?

(a) (b) (c) (d)

Explanation:
According to Cayley-Hamilton Theorem, must satisfy characteristic equation of

To find

| |

Q:14 Two eigenvalues of a matrix * + have ratio for What is another value
of for which eigenvalues have same ratio?

(a) (b) (c) ⁄ (d) ⁄

Explanation:
Let be eigenvalues such that
Also,
Also,
From

Q:15 For a matrix * +, if * + is an eigenvector then corresponding eigenvalue is?

(a) (b) (c) (d)

Explanation:
Let * + and be its corresponding eigenvalue then

* +* + * +

Q:16 Pick up the best option.

(a) zero can never be eigenvalue of an invertible matrix.


(b) All eigenvalues of a non-invertible matrix are zero.
(c) There exists at most one zero eigenvalue of a non-invertible matrix.
(d) A non-invertible matrix has at least one zero eigenvalue.

Explanation:
An invertible matrix has non-zero determinant, and the determinant is the product of the eigenvalues,
so zero can never be an eigenvalue of an invertible matrix.
All eigenvalues of a non-invertible matrix need not be zero:
for example, the matrix [1 0; 0 0] is non-invertible but its eigenvalues are 1 and 0.
The null matrix is non-invertible with all eigenvalues zero, but in general a non-invertible matrix
is only guaranteed to have at least one zero eigenvalue.
Option (d) is the best one.

Q:17 The number of linearly independent eigenvectors of the matrix * + is/are

(a) (b) (c) (d) None

Explanation:
We know that corresponding to distinct eigenvalues, eigenvectors are linearly independent.
So, we find the eigenvalues of given matrix.
The characteristic equation is

Q:18 Let be eigenvalues corresponding to eigenvectors * + * + of a matrix What


will be matrix ?

(a) * + (b) * + (c) * + (d) * +



Explanation:
Let be required matrix.
Using, * +* + * +

* + * + ,

Again, * +* + * +

* + * + ,
On solving we get
Thus matrix * +

Q:19 Consider the following statements for a matrix [ ]

(i) is an invertible matrix. (ii) has eigenvalue of multiplicity


(iii) must satisfy (iv) is diagonalizable.
Choose the correct statements.

(a) only (i) and (iv) are correct. (b) only (i), (iii) and (iv) are false.
(c) only (i), (iii) and (iv) are correct. (d) only (iii) and (iv) are correct.

Explanation:
The given matrix is non-invertible, as its determinant is zero.
The characteristic equation of A follows from its trace, principal minors and determinant.

According to the Cayley-Hamilton theorem, A must satisfy this equation.

On solving, the eigenvalues are obtained.
We know that a matrix of order n is diagonalizable if the number of its linearly
independent eigenvectors is equal to n.
Corresponding to distinct eigenvalues there exist linearly independent eigenvectors.
Thus the matrix is diagonalizable.

Q:20 Consider a matrix * + then ?

(a) (b) (c) (d)

Explanation:
The characteristic equation of A is λ² − (trace A)λ + det A = 0.
According to the Cayley-Hamilton theorem, A satisfies it.

Q:21 Which of the following matrix is orthogonal?

(a) Null matrix (b) Diagonal matrix (c) Symmetric matrix (d) None

Explanation:
A null matrix can never be orthogonal, as its inverse does not exist.
A diagonal matrix need not be orthogonal:
for example, the diagonal matrix shown is not orthogonal, as AᵀA ≠ I. A
symmetric matrix need not be orthogonal either, by the same example.

Q:22 Consider a matrix [ ] where Then are two real and distinct
eigenvalues of if

(a) (b) (c) (d)

Explanation:
The characteristic equation of A is

So,
Now, we check for
Take then
Now,

Q:23 Let be two similar matrices of order such that are eigenvalues of
then ?

(a) (b) (c) (d)

Explanation:
Similar matrices have the same eigenvalues.
Also,

Q:24 Let [ ] if then ?

(a) (b) (c) (d) None of these

Explanation:
We find the characteristic equation of A using the Cayley-Hamilton theorem and compare it
with the given relation to find the required value.

compare it with
We have,

Q:25 Let be a matrix with an eigenvalue Also, and


then eigenvalues of are?

(a) (b)
(c) (d)

Explanation:
Let be eigenvalues of and
Then

Now, eigenvalues of are



Q:26 Let [ ] be a real matrix with eigenvalues and If eigenvectors

corresponding to eigenvalues and are [ ] and [ ] then value of is?

(a) (b) (c) (d)

Explanation:
We know that

Using, [ ][ ] [ ] {

Using, [ ][ ] [ ] {

From equation
Putting
Putting
On solving

So,

Q:27 Which of the following matrix is diagonalizable?

(a) * + (b) * + (c) * + (d) * +

Explanation:
We know that a matrix is diagonalizable if it has a full set of linearly independent eigenvectors.
From the above options we see that only one choice fulfills this condition (it's an exercise, verify yourself).

Q:28 Eigenvectors of a matrix ______ are orthogonal.

(a) * + (b) * + (c) * + (d) All of these

Explanation:
Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
All above are symmetric matrices with distinct eigenvalues.

Q:29 Let be a linear transformation as


then eigenvalues of matrix of are?

(a) (b) (c) (d)

Explanation:

Matrix of linear transformation is [ ]

Now, the characteristic equation of the above matrix is


eigenvalues are

Q:30 Choose the correct one.

(a) All eigenvalues of triangular matrices are real.


(b) An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.
(c) Eigenvalues of Eigenvalues of
(d) Eigenvalues of scalar matrices are distinct.

Explanation:
The eigenvalues of a scalar matrix (which is also diagonal) are not distinct: they are all equal to the
diagonal element.
For two square matrices A and B of the same order, the eigenvalues of AB and BA are the same.
All the eigenvalues of a triangular matrix need not be real: for a triangular matrix with complex
entries, the diagonal entries, and hence the eigenvalues, may be non-real.
An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.
