Vector Space
Let F be any field and V be any non-empty set; then V is a vector space over F, denoted by V(F),
if the following axioms hold:
(V, +) is an abelian group, and for all α, β ∈ F and u, v ∈ V:
α(u + v) = αu + αv,  (α + β)u = αu + βu,  (αβ)u = α(βu),  1u = u.
Examples:
Let F be any field; then F is a vector space over itself.
ℚ(√2) and ℚ(√3) are vector spaces over ℚ.
ℝⁿ(ℝ) is a vector space, called Euclidean space.
M_{m×n}(F) is a vector space, where M_{m×n}(F) is the set of all m×n matrices over F.
Let F[x] denote the set of all polynomials with coefficients from F; then F[x] is a vector space over F.
Let Pₙ[x] denote the set of all polynomials of degree less than or equal to n with coefficients from F; Pₙ[x] is
a vector space over F.
Let V = ℝ⁺ = {x ∈ ℝ : x > 0}; then V is a vector space over ℝ under the addition
and scalar multiplication defined by u ⊕ v = uv and c ⊙ u = uᶜ.
Non-Examples:
Let H be the union of the first and third quadrants in the xy-plane, that is, the set
H = {(x, y) : xy ≥ 0}; then H is not a vector space over ℝ, as for
u = (1, 2) ∈ H and v = (−2, −1) ∈ H we have u + v = (−1, 1) ∉ H.
Theorem:
If V is a vector space over F, then for all v ∈ V and α ∈ F:
0·v = 0, α·0 = 0, (−1)v = −v, and αv = 0 implies α = 0 or v = 0.
Subspace:
Let V be a vector space over F and W ⊆ V; then we say W is a subspace of V if W is itself a
vector space over F under the operations of V.
Trivial Subspaces:
The zero subspace {0} and V itself are called the trivial subspaces of any vector space V.
Non-Trivial Subspaces:
All other subspaces of V are called non-trivial subspaces of V.
Examples:
ℚ is a non-trivial subspace of ℝ(ℚ).
ℚ is a non-trivial subspace of ℚ(√2).
ℚ(√2) is a subspace of ℝ(ℚ).
Pₙ[x] is a subspace of F[x].
Subspace Criteria:
Let V be a vector space over F and ∅ ≠ W ⊆ V; then W is a subspace of V iff
αu + βv ∈ W for all u, v ∈ W and all α, β ∈ F.
Remember that first of all you should check whether the additive identity of V is in W. If it is not in W,
then there is no need to apply the subspace criteria.
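As a quick numerical illustration of the criteria (the set W below is a hypothetical example, not one taken from the text): the plane W = {(x, y, z) ∈ ℝ³ : x + y + z = 0} contains the zero vector, and closure under linear combinations can be spot-checked in Python.

```python
import numpy as np

# Hypothetical subspace W = {(x, y, z) in R^3 : x + y + z = 0}
def in_W(v, tol=1e-9):
    return abs(v.sum()) < tol

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.normal(size=3); u[2] = -u[0] - u[1]   # force u into W
    v = rng.normal(size=3); v[2] = -v[0] - v[1]   # force v into W
    a, b = rng.normal(size=2)
    assert in_W(a * u + b * v)                    # closure: a*u + b*v stays in W

# the additive identity belongs to W, so the criteria are worth applying
assert in_W(np.zeros(3))
```

Sampling random combinations does not prove closure, but it is a useful sanity check before attempting a proof.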
Example:1
Example:2
A plane in ℝ³ not through the origin is not a subspace of ℝ³; similarly, a line in ℝ² not through
the origin is not a subspace of ℝ², as neither contains the additive identity.
Example:3
Let } then is not a subspace of as .
Example:4
Let } then is a subspace of as
.
Example:5
Let H be the set of points inside and on the unit circle in the xy-plane, that is, the set
H = {(x, y) : x² + y² ≤ 1}; then H is not a subspace of ℝ², as for u = (1, 0) ∈ H we have 2u = (2, 0) ∉ H.
Example:6
The only subspaces of ℝ² are {0}, lines through the origin, and ℝ² itself.
Example:7
The only subspaces of ℝ³ are {0}, lines through the origin, planes through the origin, and ℝ³ itself.
Example:8
Let V be the vector space of all n×n real matrices; then the sets of all diagonal, upper triangular,
lower triangular, symmetric, and skew-symmetric matrices are subspaces of V.
Intersection of subspaces:
The intersection of two subspaces is a subspace of V. In fact, the intersection of any collection of
subspaces is again a subspace.
Result:
If W₁, W₂, W₃ are three subspaces of a vector space V such that W₁ ⊆ W₃,
then (W₁ + W₂) ∩ W₃ = W₁ + (W₂ ∩ W₃).
Example:
The above result is not true if W₁ ⊄ W₃. For this, consider V = ℝ²,
W₁ = {(x, 0) : x ∈ ℝ}, W₂ = {(0, y) : y ∈ ℝ}, W₃ = {(x, x) : x ∈ ℝ}.
Then W₂ ∩ W₃ = {0} and W₁ + (W₂ ∩ W₃) = W₁, but
(W₁ + W₂) ∩ W₃ = W₃ ≠ W₁.
Example:
For any field F, consider the vector space F²; then F² is the direct sum of its two
subspaces W₁ and W₂, where
W₁ = {(a, 0) : a ∈ F} and W₂ = {(0, b) : b ∈ F}.
Example:
Let V be the vector space of all real-valued functions; then V is the direct sum of
W₁ = {f ∈ V : f is even} and W₂ = {f ∈ V : f is odd}.
Example:
Let V be the vector space of all n×n real matrices, let W₁ be the set of all symmetric matrices in V, and
let W₂ be the set of all skew-symmetric matrices in V; then V = W₁ ⊕ W₂.
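This direct sum can be verified numerically: every square matrix splits as A = (A + Aᵀ)/2 + (A − Aᵀ)/2. A short sketch in Python (the test matrix is an arbitrary random one):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))          # arbitrary real square matrix

S = (A + A.T) / 2                    # symmetric part, lies in W1
K = (A - A.T) / 2                    # skew-symmetric part, lies in W2

assert np.allclose(S, S.T)           # S is symmetric
assert np.allclose(K, -K.T)          # K is skew-symmetric
assert np.allclose(S + K, A)         # V = W1 + W2: the parts sum back to A

# W1 ∩ W2 = {0}: only the null matrix is both symmetric and skew-symmetric
M = np.zeros((4, 4))
assert np.allclose(M, M.T) and np.allclose(M, -M.T)
```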
Explanation:
( ) , ( ) , ( ) are vector spaces while is not a vector space under the scalar
multiplication defined by
where .
(a) } (b) }
(c) } (d) }
Explanation:
Clearly, in option (c) the zero vector does not belong to the given set, so there is no need to apply the subspace
criteria, and the set is not a subspace.
For the sake of our convenience we check option (d)
Let }
Consider
if
and then
Using
we can see that
Hence is a subspace of
In a similar fashion we can check
} and } are subspaces of
(a) | | } (b) }
(c) for some } (d) All of these
Explanation:
The sum of two singular matrices need not be singular, as for
* + and * +
Explanation:
Options (a), (b), and (d) are correct statements, while the remaining statement holds for all
elements of a vector space.
Explanation:
The sum and intersection of two subspaces are subspaces, while the union of two subspaces need
not be a subspace of a vector space (an example was already discussed).
Explanation:
The first two sets are not vector spaces due to the failure of a scalar multiplication axiom, while the third
is not a subset of the given space and so cannot be a subspace of it.
The remaining set is a trivial subspace.
Explanation:
Since then and
then elements of are of the form ( ) and hence sum has elements of the form
( ).
Explanation:
We can observe that W₁ ∩ W₂ = {0}, and we can show that V = W₁ + W₂.
Thus the space V is the direct sum of its subspaces W₁ and W₂.
Q:9 Let V be the vector space of n×n matrices over the field of real numbers, and let W₁ and W₂ be the
subspaces of symmetric and skew-symmetric matrices respectively; then
Explanation:
Since every real square matrix A can be written as
A = (A + Aᵀ)/2 + (A − Aᵀ)/2,
where (A + Aᵀ)/2 is symmetric, (A − Aᵀ)/2 is skew-symmetric, and the null matrix is the only matrix
which is both symmetric and skew-symmetric,
V is the direct sum of W₁ and W₂.
Explanation:
For we proceed as and then
. For consider and but .
and are subspaces of (Using Subspace criteria).
Explanation:
Since and are in but
Also and are in but
and are subspaces of
Verify, using subspace criteria.
Q:12 Let V be the set of all complex Hermitian matrices; then V is a vector space over
(a) ℂ (b) ℝ but not ℂ (c) Both ℝ and ℂ (d) ℂ but not ℝ
Explanation:
V is a vector space over ℝ but not over ℂ, as multiplying a non-zero Hermitian matrix by the scalar i
gives a skew-Hermitian matrix, which is not in V.
Q:13 Which of the following is not a vector space over the field of real numbers?
(a) }
(b) , | [ ] ( ) -
(c) | [ ] }
(d) , | [ ] ( ) -
Explanation:
is a vector space over as .
For the next part we will check that the given set is a subspace of the vector space of all real-valued functions by using the
subspace criteria.
Let and be in then ( ) ( )
Consider ( ) ( ) ( )
Now for any ( ) ( )
So is subspace of vector space of all real valued functions.
In a similar fashion you can check .
For the remaining option, a direct check on suitable functions shows that the sum fails the defining condition.
Hence the set is not closed under addition.
(a) zero element does not exist. (b) is not closed under scalar multiplication.
(c) is not closed under vector addition (d) multiplicative inverses does not exist.
Explanation:
For the given set, the zero element exists.
The set is closed under scalar multiplication: if we take
an element of the set and a scalar, then the product remains in the set.
The inverse of each element exists.
But the set is not closed under vector addition: if we take
two suitable elements, then their sum leaves the set.
Q:15 Let V be the vector space of all real-valued functions; then pick the set which is not a
subspace of V.
(a) } (b) }
(c) } (d) }
Explanation:
For option (a),
Take
Then ,
Also for
Option (a) is a subspace of
For option (b),
Take then
But vector
addition does not hold.
For option (c), take and
Then must exist as both of
these limits exist separately.
Similarly, for option (d).
Answers:
Q:1 d Q:2 c Q:3 c Q:4 c Q:5 c
Linear Combination:
Let V(F) be a vector space and v₁, v₂, …, vₙ ∈ V; then a vector of the form
v = α₁v₁ + α₂v₂ + ⋯ + αₙvₙ,
where αᵢ ∈ F, is called a linear combination of v₁, v₂, …, vₙ.
Examples:
In ℝ³, take any vector (a, b, c); then it can be written as a linear combination of the three
vectors e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1):
(a, b, c) = a e₁ + b e₂ + c e₃. We say (a, b, c) is a linear combination of
e₁, e₂, and e₃.
Spanning Set:
Let S ⊆ V; then the set consisting of all linear combinations of the elements of S is called
the span of S and is denoted by span(S).
span(S) is a subspace of V, and it is the smallest subspace containing S.
span(S) is said to be spanned (generated) by S, and S is called a spanning set for span(S).
Example:
} then and is a subspace of
As [ ] [ ] [ ] where [ ], [ ]
Example:
Let be set of all vectors of the form 0 1 then =span{ } where thus is a
subspace of
Example:
Let be set of all vectors of the form [ ] where are arbitrary. Then
Example:
The polynomials 1, x, x², …, xⁿ span the vector space Pₙ of all polynomials of degree less than
or equal to n. So Pₙ = span{1, x, x², …, xⁿ}.
Theorems:
The linear span of S is the smallest subspace of V containing S.
If W is a subspace of V, then span(W) = W, and conversely.
If S = {v₁, …, vₚ} and T = {w₁, …, w_q} are two sets of vectors in a vector space V, then
span(S) = span(T) if and only if each vector in S is a linear combination of those in T and each
vector in T is a linear combination of those in S.
Example:
Example:
Theorem:
The null space of an m×n matrix A is a subspace of ℝⁿ.
Example:
Find a spanning set for the null space of the matrix A.
[ ]
Solution:
First of all, we will find the general solution of Ax = 0 in terms of the free variables.
After reducing the matrix to reduced row echelon form, we have
[ ]
[ ] [ ] [ ] [ ] [ ]
Every linear combination of the resulting vectors is an element of Nul(A). Thus they form a spanning
set for Nul(A).
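Since the matrix itself did not survive in these notes, here is a hedged sketch of the same computation on an illustrative matrix, using the SVD to produce a spanning set for the null space:

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns of the returned matrix span Nul(A), computed via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T      # right singular vectors for the zero singular values

# illustrative matrix (not the one from the text): rank 1, so nullity 2
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
N = null_space_basis(A)
assert N.shape == (3, 2)        # two spanning vectors for Nul(A)
assert np.allclose(A @ N, 0)    # every column solves Ax = 0
```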
Example:
Find a matrix A such that W = Col(A).
{[ ] }
Since
{ [ ] [ ] } {[ ] [ ] }.
Let =[ ] then
Theorems:
The column space of an m×n matrix A is a subspace of ℝᵐ.
The column space of an m×n matrix A is all of ℝᵐ iff the equation Ax = b has a
solution for each b ∈ ℝᵐ.
Linear Transformation:
A linear transformation T from a vector space V into a vector space W is a rule that assigns to
each vector x in V a unique vector T(x) in W such that
T(u + v) = T(u) + T(v) for all u, v in V, and
T(cu) = cT(u) for all u in V and all scalars c.
Equivalently, T(0) = 0 and T(αu + βv) = αT(u) + βT(v).
The kernel (null space) of such a T is the set of all u in V such that T(u) = 0, and it is clearly a
subspace of V.
The range of T is the set of all vectors in W of the form T(x) for some x in V; it is a subspace of W.
If T is a matrix transformation, say T(x) = Ax, then the kernel and the range of T are just the null space
and column space of the matrix A.
Examples:
Notations:
Ker(T) = {v ∈ V : T(v) = 0}.
For any linear transformation T, T(0) = 0 and T(−v) = −T(v).
Result:
Let T : V → W be a linear transformation; then Ker(T) = {0} iff T is one-one.
Examples:
The zero vector is linearly dependent.
A finite set containing the zero vector is linearly dependent.
A singleton set {v} is linearly independent iff v ≠ 0.
A set consisting of two vectors is linearly dependent iff one of the vectors is a multiple of the other.
A set S with two or more vectors is linearly dependent iff at least one of the vectors in S is
expressible as a linear combination of the other vectors in S.
Let , , then set } is linearly dependent
as
The set } is linearly independent in [ ] as there is no scalar c exists such that
[ ]
The set } is linearly dependent in [ ] as [ ]
In ℝ³, a set of three vectors is linearly independent iff the vectors do not lie on the same
plane when they are placed with their initial points at the origin.
This result follows from the fact that three vectors are linearly independent iff none of the
vectors is a linear combination of the other two.
Theorems:
An indexed set {v₁, …, vₚ} of two or more vectors, with v₁ ≠ 0, is linearly dependent iff
some vⱼ (with j > 1) is a linear combination of the preceding vectors v₁, …, vⱼ₋₁.
If S is a linearly independent set, then every subset of S is linearly independent.
If S is a linearly dependent set, then every superset of S is linearly dependent.
If } is a linearly independent subset of a vector space ,
then the set } is also linearly independent.
Theorem:
If the functions f₁, f₂, …, fₙ have n − 1 continuous derivatives on the interval (−∞, ∞),
and if the Wronskian of these functions is not identically zero on (−∞, ∞), then these
functions form a linearly independent set of vectors in C⁽ⁿ⁻¹⁾(−∞, ∞).
Remark: The converse of the above theorem is false. If the Wronskian of f₁, f₂, …, fₙ
is identically zero on (−∞, ∞), then no conclusion can be drawn
about the linear independence of {f₁, f₂, …, fₙ}. This set of vectors may be linearly dependent or
independent.
Convention:
C⁽ⁿ⁾(−∞, ∞) represents the vector space of all functions with n continuous derivatives on
(−∞, ∞).
Example:
A linearly independent set in C¹(−∞, ∞):
the functions f₁ and f₂ form a linearly independent set of vectors in C¹(−∞, ∞),
as their Wronskian W(x) is not identically zero on (−∞, ∞).
Example:
A linearly independent set in C²(−∞, ∞):
the functions f₁, f₂, and f₃ form a linearly independent set of vectors in C²(−∞, ∞).
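As the original example functions are not recoverable here, the Wronskian test can be illustrated with the assumed pair sin x and cos x, whose Wronskian is identically −1:

```python
import numpy as np

# W(x) = | sin x   cos x |
#        | cos x  -sin x |  =  -sin^2 x - cos^2 x  =  -1  (never zero)
def wronskian_sin_cos(x):
    M = np.array([[np.sin(x),  np.cos(x)],
                  [np.cos(x), -np.sin(x)]])
    return np.linalg.det(M)

# the Wronskian is -1 everywhere, so {sin x, cos x} is linearly independent
for x in np.linspace(0.0, 6.0, 25):
    assert abs(wronskian_sin_cos(x) + 1.0) < 1e-9
```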
Examples:
1. In ℝ², let S = {e₁ = (1, 0), e₂ = (0, 1)}.
Take any arbitrary vector (a, b) ∈ ℝ²; then (a, b) = a e₁ + b e₂, thus span(S) = ℝ².
Clearly the given set is linearly independent, hence S is a basis for ℝ² and dim ℝ² = 2.
The set S is called the standard basis for ℝ². Note that every linearly independent set of 2
elements is a basis set for ℝ².
2. In ℝⁿ, let S = {e₁, e₂, …, eₙ};
then S is the standard basis for ℝⁿ and dim ℝⁿ = n. On generalizing, dim Fⁿ = n.
3. ℂ over ℝ, with S = {1, i}:
then S is a basis set for ℂ(ℝ) and dim ℂ(ℝ) = 2.
4. M_{2×2}(F) is a vector space over F, and the four matrix units (each with a single entry 1 and 0 elsewhere) form a
basis.
5. dim M_{2×2}(F) = 4, and the basis elements are
E₁₁, E₁₂, E₂₁, E₂₂.
On generalizing, dim M_{m×n}(F) = mn.
6. Let V be the vector space of all symmetric matrices of order n over F; then dim V = n(n + 1)/2.
Theorems:
Let S = {v₁, …, vₚ} be a set in V and let W = span(S).
If one of the vectors in S, say vₖ, is a linear combination of the remaining vectors in S, then
the set formed from S by removing vₖ still spans W.
If W ≠ {0}, some subset of S is a basis for W.
(The above theorem is known as the spanning set theorem.)
If a vector space V has a basis B = {b₁, …, bₙ}, then any set in V containing more than n
vectors is linearly dependent.
If a vector space V has a basis of n vectors, then any basis of V consists of exactly n vectors.
A one-to-one linear transformation preserves bases and dimension.
A one-to-one linear transformation maps a linearly independent set to a linearly independent set.
If S is linearly independent, then S can be extended to form a basis of V.
If S = {v₁, …, vₚ} spans a vector space V, then some subset of S is a basis for V.
Examples:
, vector space of all real valued functions ,
{ | } where is a subspace of .
Solution:
Any vector where and
[ ] where [ ]
Solution:
As two of the vectors are linear combinations of the others, by the spanning set theorem we may discard them, and
the remaining set will still span the space.
Example:
[ ].
Solution: To find bases for the row space and column space, row reduce A to echelon form.
{[ ] [ ]}
(Keep in mind that row operations may change the linear dependence relations among the rows of a
matrix.)
Rank Theorem:
The rank of A is the dimension of the column space of A.
Since Row(A) = Col(Aᵀ), the dimension of the row space of A is the rank of Aᵀ, which equals the rank of A.
The Rank Theorem, or Rank-Nullity Theorem, is stated as:
If A is an m×n matrix, then rank(A) + dim Nul(A) = n.
The pivot columns of a matrix A form a basis for Col(A).
rank(A) = number of pivot columns = number of non-zero rows in the row
reduced echelon form of the matrix A.
dim Nul(A) = number of free variables (the non-pivot columns).
If A is an m×n matrix, then rank(A) + nullity(A) = n,
or:
number of pivot columns + number of free variables = n.
Now, we will see an application of the Rank Theorem:
Example:
[ ]
Solution:
First we will convert the given matrix to reduced echelon form:
[ ]
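The Rank Theorem itself is easy to spot-check numerically (the matrix below is an illustrative stand-in for the one in the example):

```python
import numpy as np

def nullity(A, tol=1e-10):
    """dim Nul(A), computed from the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    rank = int((s > tol).sum())
    return A.shape[1] - rank

# third row = row1 + row2, so the rank is 2 and the nullity is 4 - 2 = 2
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(A) + nullity(A) == A.shape[1]   # rank + nullity = n
```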
Quotient Spaces:
Let W be a subspace of a vector space V; then the quotient space V/W consists of all cosets of the
form v + W, v ∈ V. It is a vector space over F under the addition and scalar multiplication defined
as (u + W) + (v + W) = (u + v) + W and α(v + W) = αv + W.
Theorems:
(a) Null space (b) Column space (c) Row space (d) Rank
Explanation:
The set of all solutions of the homogeneous system Ax = 0 is called the null space of A.
Explanation:
Since dim(V/W) = dim(V) − dim(W), subtracting the two given dimensions gives the dimension of the quotient space.
(a) defined by
(b) and
(c) where
(d) defined by | |
Explanation:
Consider then but
so is not linear
Also but
So is not a linear transformation.
For take such that and
so is not linear transformation.
The remaining map is a linear transformation (check!).
Explanation:
Since the set } is a basis of so for any, ,
We have such that .
By comparing real and imaginary parts we get, , .
Converting in matrix form
* +* + * + since given set is basis so * + is linearly independent and
Explanation:
The given set spans the solution space of the given differential equation,
and, being a linearly independent set of solutions, it is a basis.
Hence
Explanation:
Since, [ ] [ ] [ ] [ ]
Hence
Explanation:
As and
[ ] [ ] [ ] [ ] [ ] [ ]
{[ ] [ ] [ ] [ ]}
Explanation:
but is correct.
Explanation:
Given that
Now,
Explanation:
As is generated by
So will be generated by
. Now we will check whether the given set is linearly
independent or not.
As
So the set is linearly dependent, and after removing that vector from it we get
a linearly independent set.
Thus this set spans the space and, being a linearly independent set, forms a basis of it,
hence the dimension follows.
For ( ) consider then
on comparing we get
, ,
(a) ,* + * + * + * +- (b) ,* + * +-
(c) ,* +- (d) ,* + * +-
Explanation:
Using relation,
So, ,* + - , * + -
Explanation:
We know that
For this purpose, we find the rank of the given matrix.
[ ] [ ] [ ] [ ]
{
Thus,
Explanation:
[ ] {
[ ] {
Thus,
Explanation:
We know that rank(T) + nullity(T) = dim(domain of T).
The matrix of the linear transformation is * +, which is already in echelon form.
Thus, the rank and nullity can be read off directly.
Explanation:
We know that }
In matrix notation, [ ] [ ] {
Thus,
Explanation:
For a linear transformation T : ℝⁿ → ℝᵐ, the order of its matrix is m × n.
Thus, the order of the given matrix of the linear transformation follows from the dimensions of the domain and codomain.
Explanation:
The matrix of linear transformation with respect to standard basis
∑
is one of its root.
On solving remaining roots are
Explanation:
Explanation:
Three differentiable functions f₁, f₂, f₃ defined on some closed interval are linearly
independent if their Wronskian
| f₁   f₂   f₃  |
| f₁′  f₂′  f₃′ |
| f₁″  f₂″  f₃″ |
is not identically zero on that interval.
(a) (b)
(c) (d)
Explanation:
A basis of the space consists of exactly three elements, so option (d) is discarded.
Three vectors will form a basis if and only if they are linearly independent.
For option (a) the matrix form is given as
[ ] [ ] [ ] {
Since there exists a zero row in the echelon form, the vectors are not linearly independent.
Similarly, the vectors in option (b) are not linearly independent (verify it, using the matrix
echelon form). Now, we check for (c).
[ ] [ ] [ ] {
Answers:
Definition:
Let V be an inner product space and v ∈ V; then ‖v‖ = √⟨v, v⟩ is called the norm or length of the
vector v.
‖v‖ ≥ 0, and ‖v‖ = 0 iff v = 0.
‖αv‖ = |α| ‖v‖.
‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality).
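These norm properties can be spot-checked for the standard inner product on ℝⁿ (random vectors, fixed seed):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.normal(size=5), rng.normal(size=5)
a = -3.7

norm = np.linalg.norm                               # ||v|| = sqrt(<v, v>)
assert np.isclose(norm(v), np.sqrt(v @ v))
assert norm(v) > 0 and norm(np.zeros(5)) == 0       # positivity
assert np.isclose(norm(a * v), abs(a) * norm(v))    # homogeneity: ||a v|| = |a| ||v||
assert norm(u + v) <= norm(u) + norm(v) + 1e-12     # triangle inequality
```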
Definition:
Definition:
A set S = {v₁, v₂, …, vₙ} of vectors in an inner product space V over F is said to be an
orthogonal system if its distinct vectors are orthogonal, i.e.
⟨vᵢ, vⱼ⟩ = 0 if i ≠ j.
S is said to be an orthonormal system if, in addition, ⟨vᵢ, vᵢ⟩ = 1 for each i.
Definition:
A square matrix A over ℝ for which AᵀA = AAᵀ = I (equivalently, A⁻¹ = Aᵀ) is called an orthogonal matrix.
Theorems:
Every orthonormal system {v₁, v₂, …, vₙ} is linearly independent.
The following conditions for a square matrix A are equivalent:
A is orthogonal;
the rows of A form an orthonormal set;
the columns of A form an orthonormal set.
If A is an orthogonal matrix, then |det(A)| = 1.
The set of n×n orthogonal matrices forms a group under multiplication.
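A rotation matrix gives a concrete check of these facts (an illustrative example, not one from the text):

```python
import numpy as np

t = 0.6
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])       # 2D rotation: orthogonal

assert np.allclose(Q.T @ Q, np.eye(2))        # Q^T Q = I, i.e. Q^-1 = Q^T
assert np.allclose(Q @ Q.T, np.eye(2))        # rows are orthonormal too
assert np.isclose(abs(np.linalg.det(Q)), 1.0) # |det Q| = 1

# the product of orthogonal matrices is orthogonal (group under multiplication)
P = Q @ Q
assert np.allclose(P.T @ P, np.eye(2))
```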
Remarks:
For an n×n matrix A, the equation det(A − λI) = 0 is a polynomial equation of degree n in λ and so has n
roots, where some of the roots can be repeated.
The characteristic polynomial can be written as
(−1)ⁿ(λ − λ₁)(λ − λ₂)⋯(λ − λₙ).
For the roots: λ₁λ₂⋯λₙ = det(A) and
λ₁ + λ₂ + ⋯ + λₙ = trace(A).
The equation det(A − λI) = 0 is called the characteristic equation of A.
Example:
Solution:
The characteristic equation is det(A − λI) = 0; solving it, the roots of the equation
are the eigenvalues of A.
Now we will find the eigenvector corresponding to the first eigenvalue:
let v be an eigenvector; then Av = λv, and
on solving the resulting homogeneous system we get the eigenvector.
Example:
[ ]0 1 [ ]
[ ] [ ] corresponding to .
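Since the matrices in these examples did not survive extraction, here is a hedged stand-in showing the same eigenvalue and eigenvector computation with numpy:

```python
import numpy as np

# illustrative symmetric matrix; its characteristic equation is
# (2 - λ)^2 - 1 = 0, so λ = 1, 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
assert np.allclose(np.sort(vals), [1.0, 3.0])

# each column of vecs is an eigenvector: A v = λ v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```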
Cayley-Hamilton Theorem:
Every square matrix is a root of its characteristic polynomial, i.e. it satisfies its characteristic
equation.
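For a 2×2 matrix the theorem says A² − trace(A)·A + det(A)·I = 0, which can be verified directly (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# characteristic equation of a 2x2 matrix: λ^2 - trace(A) λ + det(A) = 0
# Cayley-Hamilton: substituting A gives the zero matrix
residual = A @ A - tr * A + det * np.eye(2)
assert np.allclose(residual, 0)
```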
Remember!
For a 2×2 matrix A, the characteristic equation is λ² − trace(A)λ + det(A) = 0.
For a 3×3 matrix A, the characteristic equation is λ³ − trace(A)λ² + (sum of the principal 2×2 minors)λ − det(A) = 0.
For a triangular matrix, the characteristic polynomial is (a₁₁ − λ)(a₂₂ − λ)⋯(aₙₙ − λ).
Theorems:
Non-zero eigenvectors of a matrix corresponding to distinct eigenvalues are linearly
independent.
If λ is an eigenvalue of an orthogonal matrix, then |λ| = 1.
Any two eigenvectors corresponding to two distinct eigenvalues of an orthogonal matrix
are orthogonal.
Eigenvalues of a diagonal matrix are its diagonal elements, and the eigenvectors are the
standard basis vectors.
A matrix A and its transpose Aᵀ have the same eigenvalues.
An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.
If λ is an eigenvalue of a non-singular matrix A, then 1/λ is an eigenvalue of A⁻¹.
If A and B are square matrices, then AB and BA have the same eigenvalues.
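The last two facts about Aᵀ and about AB versus BA can be spot-checked by comparing characteristic polynomials (random matrices with a fixed seed; `np.poly` of a square matrix returns its characteristic polynomial coefficients):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))

# equal characteristic polynomials imply equal eigenvalue multisets
assert np.allclose(np.poly(A), np.poly(A.T))        # A and A^T
assert np.allclose(np.poly(A @ B), np.poly(B @ A))  # AB and BA
```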
Definition:
If A and B are two n×n matrices over F, then B is said to be similar to A if there exists a
nonsingular matrix P such that B = P⁻¹AP.
Theorems:
Similarity of matrices is an equivalence relation on the set of all n×n matrices.
Similar matrices have the same eigenvalues.
An n×n matrix A has n linearly independent eigenvectors if and only if A is similar to a
diagonal matrix.
The eigenvalues of a real symmetric matrix are all real.
Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are
orthogonal.
Definition:
An n×n matrix A is said to be diagonalizable if it is similar to a diagonal matrix, i.e. A is
diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix. The
matrix P is said to diagonalize A.
If P is an orthogonal matrix and PᵀAP is a diagonal matrix, then A is called
orthogonally diagonalizable and P is said to orthogonally diagonalize A.
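A sketch of diagonalization for a symmetric matrix (illustrative example; `eigh` returns orthonormal eigenvectors for symmetric input):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])            # symmetric, eigenvalues 3 and 5
vals, P = np.linalg.eigh(A)           # columns of P are orthonormal eigenvectors

D = np.linalg.inv(P) @ A @ P          # P^-1 A P
assert np.allclose(D, np.diag(vals))  # similar to a diagonal matrix

# since A is symmetric, P is orthogonal: P^-1 = P^T (orthogonal diagonalization)
assert np.allclose(P.T @ P, np.eye(2))
assert np.allclose(P.T @ A @ P, np.diag(vals))
```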
Q:1 Let be a matrix with eigen values Then which can be the eigen value of
.
Explanation:
If λ is an eigenvalue of a matrix A, then f(λ) is an eigenvalue of f(A) for any polynomial f.
So applying f to each of the given eigenvalues yields the eigenvalues of f(A).
Explanation:
* +* + * + on solving we get
Q:3 The minimum and maximum eigenvalue of [ ] are and Then other
eigenvalue is
Explanation:
Let λ be the required eigenvalue; since the sum of all eigenvalues of a matrix equals its trace,
λ = trace − (minimum + maximum eigenvalue).
Q:4 Let [ ] and be one of its eigenvalue then which of the following
Explanation:
Let λ₁, λ₂, λ₃ be the eigenvalues of A.
We know that λ₁ + λ₂ + λ₃ = trace(A) and λ₁λ₂λ₃ = det(A).
Recall that if a matrix has the property "the sum of all entries in each row (column) is zero", then one
of its eigenvalues must be zero.
Explanation:
We know that if λ is an eigenvalue of a matrix A, then the corresponding eigenvalue of
f(A) is f(λ).
Also, we know that the eigenvalues of a triangular matrix are its main diagonal entries.
Using this concept,
1+1+
(a) (b)
(c) (d) None of these
Explanation:
For any 2×2 matrix A, the characteristic equation is
λ² − trace(A)λ + det(A) = 0.
Computing the trace and determinant, the equation becomes as given.
According to the Cayley-Hamilton Theorem, A must satisfy this equation.
(a) (b)
(c) (d)
Explanation:
According to the Cayley-Hamilton Theorem, every matrix is a root of its characteristic
polynomial. Also, we know that the characteristic polynomial of a triangular matrix is
(a₁₁ − λ)(a₂₂ − λ)⋯(aₙₙ − λ).
Explanation:
We know that
Explanation:
According to the Cayley-Hamilton Theorem, A satisfies its characteristic equation.
Substituting into that equation gives the required expression.
Explanation:
Since A is an n×n matrix, it has n eigenvalues in total.
Now, the given eigenvectors are linearly independent (verify it).
So the corresponding eigenvalue has multiplicity at least equal to their number.
Since,
Now,
Explanation:
Given that,
̅ ̅ ̅̅̅
Q:12 Let be a invertible matrix with real entries such that then choose
the best option.
(a) All eigenvalues of are non-zero (b) At least one non-zero eigenvalue of
(c) All eigenvalues of are zero. (d) All eigenvalues of are same.
Explanation:
Given that
[ ]
Eigenvalues of a zero-matrix are all zero.
Explanation:
According to Cayley-Hamilton Theorem, must satisfy characteristic equation of
∑
To find
| |
Q:14 Two eigenvalues of the matrix * + have a given ratio for one parameter value. What is another value
of the parameter for which the eigenvalues have the same ratio?
Explanation:
Let be eigenvalues such that
Also,
Also,
From
⁄
Explanation:
Let * + and be its corresponding eigenvalue then
* +* + * +
Explanation:
An invertible matrix cannot have zero as an eigenvalue, since its determinant, being the product of its
eigenvalues, is non-zero.
All eigenvalues of a non-invertible matrix need not be zero:
for example, a non-invertible matrix can have eigenvalues 0 and 1.
A null matrix is a zero matrix, and all its eigenvalues are zero.
Option (d) is the best one.
Explanation:
We know that eigenvectors corresponding to distinct eigenvalues are linearly independent.
So, we find the eigenvalues of the given matrix.
The characteristic equation is
Explanation:
Let A be the required matrix.
Using, * +* + * +
* + * + ,
Again, * +* + * +
* + * + ,
On solving we get
Thus matrix * +
(a) only (i) and (iv) are correct. (b) only (i), (iii) and (iv) are false.
(c) only (i), (iii) and (iv) are correct. (d) only (iii) and (iv) are correct.
Explanation:
The given matrix is non-invertible, as its determinant is zero.
The characteristic equation of A is
Explanation:
The characteristic equation of A is
According to the Cayley-Hamilton Theorem, A satisfies this equation.
(a) Null matrix (b) Diagonal matrix (c) Symmetric matrix (d) None
Explanation:
A null matrix can never be orthogonal, as its inverse does not exist.
A diagonal matrix need not be orthogonal:
for example, * + is not orthogonal as * +* + * +. A
symmetric matrix need not be orthogonal either, as the same example shows.
Q:22 Consider a matrix [ ] where Then are two real and distinct
eigenvalues of if
Explanation:
The characteristic equation of A is
So,
Now, we check for
Take then
Now,
Q:23 Let be two similar matrices of order such that are eigenvalues of
then ?
Explanation:
Similar matrices have the same eigenvalues.
Also, trace(A) = trace(B) and det(A) = det(B).
Explanation:
We find the characteristic equation of A using the Cayley-Hamilton Theorem and compare it
with the given relation to find the value of the unknown.
compare it with
We have,
(a) (b)
(c) (d)
Explanation:
Let λ₁, λ₂ be the eigenvalues of A;
then their sum is trace(A) and their product is det(A).
Explanation:
We know that
Using, [ ][ ] [ ] {
Using, [ ][ ] [ ] {
From equation
Putting
Putting
On solving
So,
Explanation:
We know that an n×n matrix is diagonalizable if it has n linearly independent eigenvectors.
From the above options we see that only one fulfills this condition (it's an exercise, verify it yourself).
Explanation:
Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
All of the above are symmetric matrices with distinct eigenvalues.
Explanation:
Explanation:
Eigenvalues of a scalar matrix (which is also diagonal) are not distinct: they are all equal to the diagonal
element.
For two square matrices A and B, the eigenvalues of AB and BA are the same.
All eigenvalues of a triangular matrix need not be real; for example, a triangular matrix with a
non-real diagonal entry has non-real eigenvalues.
An eigenvector of a square matrix cannot correspond to two distinct eigenvalues.