
Vector Space

Observation:
In this book we will distinguish vectors from scalars. For scalars we will use Greek letters α, β, λ, μ, etc., and for the vectors of V we will use letters x, y, u, v, w, s, etc., from our own alphabet. When the context could cause confusion we will place a dash on top; for example, for the zero vector we will always write 0̄, so that it is distinguished from the scalar 0 of the field K. Context is important for deducing which type of operation we are applying. For example, when we write λ·x, or simply λx, we know it is the external law, which operates a scalar of the field K with a vector of the group V. When we write λ·μ, or simply λμ, we mean the product of two scalars of K. For addition we will always use +, whether we are adding scalars or vectors.

VECTOR SPACE
VECTOR SPACE EXAMPLES:
1. In the set F of real-valued functions of a real variable, given any two functions f and g in F, we define the sum f + g as the function that, for every element x of ℝ, gives the number f(x) + g(x) in ℝ.
2. Let Pₙ(x) be the set of polynomials of degree less than or equal to n in the variable x, with real coefficients. For the addition of any two polynomials

    p(x) = a₀ + a₁x + a₂x² + ... + aₙxⁿ  (where aᵢ ∈ ℝ, for each i from 0 to n)

and

    q(x) = b₀ + b₁x + b₂x² + ... + bₙxⁿ  (where bᵢ ∈ ℝ, for each i from 0 to n)

we define the polynomial

    p(x) + q(x) = (a₀ + b₀) + (a₁ + b₁)x + (a₂ + b₂)x² + ... + (aₙ + bₙ)xⁿ.

For every λ in ℝ and for every polynomial p(x) = a₀ + a₁x + a₂x² + ... + aₙxⁿ in Pₙ(x) we define λp(x) = λa₀ + λa₁x + λa₂x² + ... + λaₙxⁿ. It is left as an exercise for the reader to prove that (Pₙ(x), +, ℝ, +, ·) is a vector space. (A small computational sketch of this example is given after the list of examples.)

3. Let Mₘₓₙ(ℝ) be the set of all real matrices with m rows and n columns. The sum of two matrices and the product of a real number by a matrix give another example. Addition is an internal operation in Mₘₓₙ(ℝ) and, as we have seen, (Mₘₓₙ(ℝ), +) is an abelian (commutative) group. The product of a real number by a matrix is an external operation which satisfies the properties (1). We have thus defined the vector space (Mₘₓₙ(ℝ), +, ℝ, +, ·) of matrices with m rows and n columns over the real numbers, which from now on we will simply call Mₘₓₙ(ℝ).
4. The zero vector space: let V consist of a single object, which we denote by 0̄, and define

    0̄ + 0̄ = 0̄  and  k0̄ = 0̄

for all scalars k.

VECTOR SUBSPACE
A subset H (different from the empty set) of a vector space (V, +, K, +, ·) will also be a vector space with respect to the operations inherited from the space V (both the internal operation and the external operation over the field K) if these are closed on H. This means that the internal operation of V is also internal in H, and that the external operation of K over V is also external of K over H. We need to verify that:
1. H ⊆ V, H ≠ ∅
2. ∀ x, y ∈ H: x + y ∈ H
3. ∀ λ ∈ K and ∀ x ∈ H: λx ∈ H
CHARACTERIZATION OF A VECTOR SUBSPACE
If V is a vector space over a field K, and H is a non-empty subset of V, then:

    H is a vector subspace of V ⇔ αx + βy ∈ H, ∀ α, β ∈ K, ∀ x, y ∈ H

Observation: The proof that this is equivalent to the three points previously shown is simple enough to be left to the reader.
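As an illustration (mine, not from the text), the following Python sketch checks the characterization numerically on a hypothetical subspace of ℝ³, the plane x + y + z = 0, by sampling vectors of H and scalars α, β at random.

    import numpy as np

    # Hypothetical subspace H of R^3: the plane x + y + z = 0.
    normal = np.array([1.0, 1.0, 1.0])

    def in_H(v, tol=1e-9):
        # A vector belongs to H when it is orthogonal to the normal vector.
        return abs(normal @ v) < tol

    rng = np.random.default_rng(0)
    for _ in range(1000):
        # Build two vectors of H by projecting random vectors onto the plane.
        x = rng.normal(size=3); x -= (normal @ x) / (normal @ normal) * normal
        y = rng.normal(size=3); y -= (normal @ y) / (normal @ normal) * normal
        alpha, beta = rng.normal(size=2)
        assert in_H(x) and in_H(y) and in_H(alpha * x + beta * y)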

LINEAR COMBINATION
If V is a vector space over a field K, we say that a vector x ∈ V is a linear combination of the vectors of the set {v₁, v₂, ..., vₘ} ⊆ V if there exist scalars λ₁, λ₂, ..., λₘ ∈ K such that

    x = λ₁v₁ + λ₂v₂ + ... + λₘvₘ

LINEAR DEPENDENCY
If a vector x ∈ V can be expressed as a linear combination of a set of vectors v₁, v₂, ..., vₘ of V, we say that the vector x is linearly dependent on them.
The vectors of the set {v₁, v₂, ..., vₘ} ⊆ V are said to be linearly independent if none of them is linearly dependent on the rest.

CHARACTERIZATION OF LINEARLY INDEPENDENT VECTORS
The vectors of the subset {v₁, v₂, ..., vₘ} ⊆ V are linearly independent ⇔ the only linear combination λ₁v₁ + λ₂v₂ + ... + λₘvₘ whose result is the zero vector is the one in which all the λᵢ are equal to zero.

Proof.
If there existed some λᵢ ≠ 0 for which

    λ₁v₁ + ... + λᵢvᵢ + ... + λₘvₘ = 0̄,

we could divide everything by λᵢ, obtaining

    vᵢ = −(λ₁/λᵢ)v₁ − ... − (λₘ/λᵢ)vₘ  (with the i-th term absent),

so that vᵢ could be expressed as a linear combination of the rest, and then the vectors v₁, v₂, ..., vₘ would not be linearly independent. The converse is shown in the same way.
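In coordinates this characterization is easy to test on a computer: the vectors, placed as the rows of a matrix, are linearly independent exactly when the rank of that matrix equals the number of vectors. A minimal sketch (my own, assuming vectors of ℝⁿ stored as numpy arrays):

    import numpy as np

    def linearly_independent(vectors, tol=1e-10):
        # Independent iff the only combination giving the zero vector is the
        # trivial one, i.e. the rank equals the number of vectors.
        A = np.vstack(vectors)
        return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

    v1 = np.array([1.0, 0.0, 2.0])
    v2 = np.array([0.0, 1.0, 1.0])
    v3 = v1 + 2 * v2   # deliberately dependent on v1 and v2

    print(linearly_independent([v1, v2]))      # True
    print(linearly_independent([v1, v2, v3]))  # False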

SYSTEMS OF GENERATORS
If V is a vector space over a field K, then, given a set of vectors S = {v₁, v₂, ..., vₘ} ⊆ V, the set

    L(S) = { x ∈ V / x = λ₁v₁ + λ₂v₂ + ... + λₘvₘ ; λ₁, λ₂, ..., λₘ ∈ K }

(that is, the subset of all possible linear combinations of the vectors v₁, v₂, ..., vₘ) is a vector subspace of V, called the vector space generated (or spanned) by S.
The vectors v₁, v₂, ..., vₘ are called a system of generators of this subspace, and we say that L(S) is the space generated by the vectors v₁, v₂, ..., vₘ. We can also easily prove that L(S) is the smallest vector subspace that contains the vectors v₁, v₂, ..., vₘ.
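Working in coordinates, deciding whether a given vector x belongs to L(S) amounts to solving a linear system for the scalars λᵢ. A possible sketch (mine, using numpy's least-squares solver and checking the residual):

    import numpy as np

    def in_span(x, generators, tol=1e-9):
        # Return the coefficients lambda_i if x is a linear combination of
        # the generators (i.e. x belongs to L(S)), and None otherwise.
        A = np.column_stack(generators)   # columns are v_1, ..., v_m
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        return coeffs if np.allclose(A @ coeffs, x, atol=tol) else None

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    print(in_span(np.array([2.0, 3.0, 5.0]), [v1, v2]))   # [2. 3.]: in L(S)
    print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))   # None: outside L(S)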

DIMENSION
The vector space V is said to have dimension n if in it we can find n linearly independent vectors, but it is impossible to find more than n linearly independent vectors.
The dimension of a vector space is therefore the maximum number of linearly independent vectors that the vector space contains. We will denote the dimension of the vector space V by dim V. The dimension of the vector space ℝ² is equal to two; the dimension of the vector space ℝ³ is equal to three. Vector spaces that have a finite dimension are called finite-dimensional. A space in which we can find as many linearly independent vectors as we want is called infinite-dimensional. One example of an infinite-dimensional vector space is the set P(x) of all polynomials with real coefficients. The set of all real functions continuous on a given interval (or on the whole real line) is also infinite-dimensional.

BASIS
Any collection of n linearly independent vectors of an n-dimensional vector space V is called a basis of this space.
Observation:
By the vector space's own properties it is obvious that any linear combination of elements of the basis belongs to the space; additionally, there cannot be any vector of the space that cannot be expressed as a linear combination of the vectors of the basis. If this were false, then the n linearly independent vectors of the basis, together with such a vector, would give n + 1 linearly independent vectors in a space of dimension n, which is impossible. This means that every basis is a system of generators of the space formed by linearly independent vectors.
Common Bases
- In the vector space ℝⁿ, the usual basis, called the canonical basis, is
  B = { (1, 0, ..., 0), (0, 1, ..., 0), ..., (0, 0, ..., 1) }.
- In the vector space Pₙ(x) of the polynomials of degree equal to or less than n, with real coefficients, the usual basis is B = { 1, x, x², ..., xⁿ }.
- In the set Mₘₓₙ(ℝ) of all real matrices with m rows and n columns, the usual basis is
  B = { E₁₁, E₁₂, ..., E₁ₙ, E₂₁, ..., Eₘₙ },
  where Eᵢⱼ denotes the matrix whose entry in row i, column j is 1 and whose remaining entries are 0.

Proposition:
Let B = {e₁, e₂, ..., eₙ} be a basis for V. Then every vector x of the space can be represented as a linear combination of the vectors of the basis and, furthermore, this representation is unique.

Proof.
Let B = {e₁, e₂, ..., eₙ} be any basis of an n-dimensional space V, and let x ∈ V. Given that any n + 1 vectors are linearly dependent, in particular the vectors e₁, e₂, ..., eₙ, x are also linearly dependent; we mean by this that there exist numbers α₁, α₂, ..., αₙ, β, not all equal to 0, such that

    α₁e₁ + α₂e₂ + ... + αₙeₙ + βx = 0̄.

Furthermore, we have that β ≠ 0 since, otherwise, we would have α₁e₁ + α₂e₂ + ... + αₙeₙ = 0̄ with some of the coefficients α₁, α₂, ..., αₙ different from zero and, consequently, the vectors e₁, e₂, ..., eₙ would result linearly dependent (false, since they form a basis). Then β ≠ 0 and, so, we can write

    x = −(α₁/β)e₁ − (α₂/β)e₂ − ... − (αₙ/β)eₙ

and, setting xᵢ = −αᵢ/β, we have:

    x = x₁e₁ + x₂e₂ + ... + xₙeₙ

This representation of x is unique because, if we also had x = y₁e₁ + y₂e₂ + ... + yₙeₙ, subtracting both expressions we would have

    (x₁ − y₁)e₁ + (x₂ − y₂)e₂ + ... + (xₙ − yₙ)eₙ = 0̄

and, due to the linear independence of the vectors e₁, e₂, ..., eₙ, we get

    x₁ = y₁, x₂ = y₂, ..., xₙ = yₙ

COORDINATES
Let B = {e₁, e₂, ..., eₙ} be a basis for V. For every x ∈ V, the numbers x₁, x₂, ..., xₙ such that x = x₁e₁ + x₂e₂ + ... + xₙeₙ are called the coordinates of x in the basis B. The previous proposition establishes that, given a basis of an n-dimensional vector space V, every vector of V has coordinates in this basis. It is also clear that, if the coordinates of two vectors x and y match, these vectors are equal, and, in this case,

    x = x₁e₁ + x₂e₂ + ... + xₙeₙ = y

Because of this, once we set a basis for V, to determine a vector x it is only necessary to indicate its coordinates x₁, x₂, ..., xₙ, expressed by

    x = (x₁, x₂, ..., xₙ)

Given a vector space (V, +, K, +, ·) of finite dimension, with B = {e₁, e₂, ..., eₙ} as a basis of it, the following isomorphism can be established:

    φ : V → ℝⁿ
    x ↦ (x₁, x₂, ..., xₙ)

The isomorphism is defined in such a way that every vector x of V has as its image the n-tuple (x₁, x₂, ..., xₙ) of ℝⁿ, made up of the coordinates of x in the basis B. It is left to the reader to prove that φ is an isomorphism, that is, to prove that φ is bijective and that:

    φ(x + y) = φ(x) + φ(y), ∀ x, y ∈ V
    φ(λx) = λφ(x), ∀ x ∈ V, ∀ λ ∈ ℝ
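Computing the coordinates of a vector in a given basis is, once everything is written in ℝⁿ, just the resolution of a linear system. A short sketch (my own example basis of ℝ³, using numpy):

    import numpy as np

    # A non-canonical basis of R^3, written as the columns of the matrix E.
    e1, e2, e3 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 0.0, 1.0])
    E = np.column_stack([e1, e2, e3])

    x = np.array([2.0, 3.0, 5.0])

    # The coordinates solve E @ coords = x; the proposition above guarantees
    # that the solution exists and is unique (E is invertible).
    coords = np.linalg.solve(E, x)
    print(coords)
    assert np.allclose(coords[0] * e1 + coords[1] * e2 + coords[2] * e3, x)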

ROW AND COLUMN SPACES OF A MATRIX
Given a matrix with m rows and n columns (an m × n matrix) with elements of ℝ:

    ⎛ a₁₁  a₁₂  ⋯  a₁ₙ ⎞
    ⎜ a₂₁  a₂₂  ⋯  a₂ₙ ⎟
    ⎜  ⋮    ⋮        ⋮ ⎟
    ⎝ aₘ₁  aₘ₂  ⋯  aₘₙ ⎠

each i-th row can be considered as a vector (aᵢ₁, aᵢ₂, ..., aᵢₙ) ∈ ℝⁿ. The set A of all the row vectors generates a vector space L(A) of all their possible linear combinations. The number r of linearly independent vectors in A (at most m, because there are only m rows) is the dimension of the space L(A), which is called the row space of the matrix.
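Numerically, the dimension r of the row space is exactly the rank of the matrix, which numpy estimates as follows (a sketch of mine):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0]])   # third row = first row + second row

    r = np.linalg.matrix_rank(A)      # dimension of the row space L(A)
    print(r)                          # 2: only two rows are independent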

RANK OF A SET OF VECTORS
Given a vector space (V, +, K, +, ·) of finite dimension, and a basis B = {e₁, e₂, ..., eₙ} of the space, the rank r of the set S = {v₁, v₂, ..., vₘ} ⊆ V is the largest number of linearly independent vectors that can be extracted from S.
If each vector vᵢ ∈ S is expressed in the basis B as

    vᵢ = aᵢ₁e₁ + aᵢ₂e₂ + ... + aᵢₙeₙ,

then, working in coordinates, we can say that the space L(S) generated by the vectors of S is the subspace of ℝⁿ generated by the rows of the matrix

    A = ⎛ a₁₁  a₁₂  ⋯  a₁ₙ ⎞
        ⎜ a₂₁  a₂₂  ⋯  a₂ₙ ⎟
        ⎜  ⋮    ⋮        ⋮ ⎟
        ⎝ aₘ₁  aₘ₂  ⋯  aₘₙ ⎠

This means that L(S) is the row space of the matrix A and, in consequence, the rank r of the set of vectors S is equal to the rank of A. To obtain a basis for the space L(S), we only need to obtain the row echelon form of the matrix A.
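The last remark translates directly into a computation; for instance with sympy (an illustrative sketch, not part of the text), the nonzero rows of the reduced row echelon form give a basis of L(S) and their number gives the rank:

    import sympy as sp

    # Rows of A are the coordinates of the vectors of S in the basis B.
    A = sp.Matrix([[1, 2, 1],
                   [2, 4, 2],    # dependent on the first row
                   [0, 1, 3]])

    echelon, pivots = A.rref()                       # reduced row echelon form
    basis_rows = [echelon.row(i) for i in range(len(pivots))]

    print(len(pivots))   # rank r of S (here 2)
    print(basis_rows)    # nonzero rows: a basis of the row space L(S)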

IMPLICIT AND PARAMETRIC EQUATIONS OF A SUBSPACE
Given a finite-dimensional vector space (V, +, K, +, ·), and a basis B = {e₁, e₂, ..., eₙ} of it, the implicit equations of the space generated by the set of vectors S = {v₁, v₂, ..., vₘ} ⊆ V form the homogeneous system whose solution space coincides with the row space of the matrix A defined previously. Then, if r is the rank of S, the matrix A, in row echelon form, can be written as:

    ⎛ b₁₁  b₁₂  b₁₃  ⋯  b₁,ₙ₋₁    b₁ₙ    ⎞
    ⎜ 0    b₂₂  b₂₃  ⋯  b₂,ₙ₋₁    b₂ₙ    ⎟
    ⎜ 0    0    b₃₃  ⋯  b₃,ₙ₋₁    b₃ₙ    ⎟
    ⎜ ⋮                  ⋮         ⋮     ⎟
    ⎜ 0    0    0    ⋯  bᵣ₋₁,ₙ₋₁  bᵣ₋₁,ₙ ⎟
    ⎝ 0    0    0    ⋯  0         bᵣₙ    ⎠

Then the row space of A would be the space generated by the set

    { (b₁₁, b₁₂, ..., b₁,ₙ₋₁, b₁ₙ), (0, b₂₂, ..., b₂,ₙ₋₁, b₂ₙ), ..., (0, 0, ..., 0, bᵣₙ) },

and the vector (x₁, x₂, ..., xₙ) will be part of the row space if and only if

    (x₁, x₂, ..., xₙ) = λ₁(b₁₁, b₁₂, ..., b₁ₙ) + λ₂(0, b₂₂, ..., b₂ₙ) + ... + λᵣ(0, 0, ..., 0, bᵣₙ),

which, written component by component, gives:

    xₙ   = b₁ₙλ₁ + b₂ₙλ₂ + ... + bᵣ₋₁,ₙλᵣ₋₁ + bᵣₙλᵣ
    xₙ₋₁ = b₁,ₙ₋₁λ₁ + b₂,ₙ₋₁λ₂ + ... + bᵣ₋₁,ₙ₋₁λᵣ₋₁
      ⋮
    x₁   = b₁₁λ₁

These are the parametric equations of the space generated by the set of vectors S = {v₁, v₂, ..., vₘ} ⊆ V. Eliminating the r parameters, n − r homogeneous equations are obtained:

    a₁₁x₁ + a₁₂x₂ + ... + a₁ₙxₙ = 0
    a₂₁x₁ + a₂₂x₂ + ... + a₂ₙxₙ = 0
      ⋮
    aₙ₋ᵣ,₁x₁ + aₙ₋ᵣ,₂x₂ + ... + aₙ₋ᵣ,ₙxₙ = 0

which are the implicit equations of the space generated by the set of vectors S = {v₁, v₂, ..., vₘ} ⊆ V.
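A quick way to obtain a set of implicit equations by computer, for a subspace of ℝⁿ given by generators, is to note that over ℝ the row space of A is the orthogonal complement of its null space, so each null-space vector yields one homogeneous equation (the text eliminates parameters instead; both routes give an equivalent system). A sympy sketch with a toy example of mine:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')

    # S spanned by (1, 1, 0) and (0, 1, 1) inside R^3: n = 3, r = 2,
    # so we expect n - r = 1 implicit equation.
    A = sp.Matrix([[1, 1, 0],
                   [0, 1, 1]])

    equations = [sp.Eq(sp.Matrix([x1, x2, x3]).dot(nvec), 0) for nvec in A.nullspace()]
    print(equations)   # [Eq(x1 - x2 + x3, 0)]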

EXTENDING TO A BASIS
In a finite-dimensional vector space, every set of linearly independent vectors can be included as part of a basis.

Proof:
Let e₁, e₂, ..., eₖ be linearly independent vectors of V. If all the remaining vectors of V can be expressed linearly in terms of the vectors e₁, e₂, ..., eₖ, these already constitute a basis. If there is a vector eₖ₊₁ that cannot be expressed linearly in terms of e₁, e₂, ..., eₖ, then the k + 1 vectors e₁, e₂, ..., eₖ, eₖ₊₁ are linearly independent. Indeed, if an equality

    α₁e₁ + α₂e₂ + ... + αₖeₖ + αₖ₊₁eₖ₊₁ = 0̄

held with some αᵢ ≠ 0, then αₖ₊₁ ≠ 0 (due to the linear independence of the vectors e₁, e₂, ..., eₖ) and, in consequence, the vector eₖ₊₁ could be expressed linearly in terms of e₁, e₂, ..., eₖ, against the way it was chosen.
Add the vector eₖ₊₁ to e₁, e₂, ..., eₖ. If all the vectors of V are linearly expressed in terms of e₁, e₂, ..., eₖ, eₖ₊₁, these already constitute a basis. If there exists a vector eₖ₊₂ that cannot be expressed linearly in terms of e₁, e₂, ..., eₖ, eₖ₊₁, by the previous argument it can be added to form a new linearly independent set e₁, e₂, ..., eₖ, eₖ₊₁, eₖ₊₂, and so on. This process cannot be extended indefinitely, because V is, by hypothesis, finite-dimensional, and in consequence the space cannot hold an infinite set of linearly independent vectors. When the process finishes, a linearly independent set of vectors, in terms of which every other vector of V can be linearly expressed, is obtained. This system will be a basis for the space V that contains the given vectors.
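The proof is in fact an algorithm. A possible coordinate version (my own sketch, assuming the space is ℝⁿ and using the canonical vectors as candidates):

    import numpy as np

    def extend_to_basis(independent_vectors, dim):
        # Keep adjoining canonical vectors of R^dim that are not linear
        # combinations of the current set, until dim independent vectors.
        basis = list(independent_vectors)
        for j in range(dim):
            candidate = np.eye(dim)[j]
            if np.linalg.matrix_rank(np.vstack(basis + [candidate])) > len(basis):
                basis.append(candidate)
            if len(basis) == dim:
                break
        return basis

    e1 = np.array([1.0, 1.0, 0.0])
    e2 = np.array([0.0, 1.0, 1.0])
    for v in extend_to_basis([e1, e2], dim=3):
        print(v)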

THEOREM OF BASIS
Every vector space V ≠ {0̄}, with a finite system of generators, possesses at least one basis.

Proof:
Let Sₘ = {v₁, v₂, ..., vₘ} be a system of generators of V. If Sₘ is linearly independent, then B = Sₘ is a basis for V. Otherwise there will be a vector, say vₘ, that is a linear combination of the rest, so that, if a vector x is a linear combination of the vectors of Sₘ, substituting vₘ by its linear combination in terms of the vectors of Sₘ₋₁ = {v₁, v₂, ..., vₘ₋₁}, x becomes a linear combination of the vectors of Sₘ₋₁ and, as such,

    V = L(Sₘ) = L(Sₘ₋₁).

If Sₘ₋₁ is linearly independent, then B = Sₘ₋₁ is a basis for V. Otherwise, the previous reasoning is repeated until obtaining some Sᵢ = {v₁, v₂, ..., vᵢ} that is linearly independent and therefore a basis.
The end of the process is ensured because, in the worst case, after m − 1 steps Sᵢ = {v₁} is reached, with v₁ ≠ 0̄ (because V ≠ {0̄}), and this will be the basis.

OPERATIONS ON SUBSPACES

INTERSECTION
Given a finite-dimensional vector space V, if S and T are two vector subspaces of V, then the intersection S ∩ T is also a subspace of V.

Proof:
The following needs to be proved:

    ∀ x, y ∈ S ∩ T, ∀ α, β ∈ K:  αx + βy ∈ S ∩ T

To prove it, if x, y ∈ S ∩ T, then x, y ∈ S and, because S is a vector subspace, αx + βy ∈ S; analogous reasoning demonstrates the same for αx + βy ∈ T. Therefore αx + βy ∈ S ∩ T.
Given a finite-dimensional vector space (V, +, K, +, ·) and a basis B = {e₁, e₂, ..., eₙ}, if S and T are two vector subspaces of V, to find the equations of S ∩ T the best approach is to express S and T by their implicit equations: the equations state the conditions that a vector's components have to satisfy in order for it to belong to the space S, and the conditions that these same components have to satisfy in order to belong to T. Obviously, if we join all the conditions, that is, all the equations, then the vectors subject to these conditions will belong to both S and T, and they define the intersection.
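Joining the implicit equations is literally what one does on a computer: stack the coefficient rows of both homogeneous systems and solve the combined system. A sympy sketch (my own, using planes like those in the observation below):

    import sympy as sp

    # Implicit equations of S (x = 0) and of T (z = 0) in R^3, written as
    # the rows of coefficient matrices of homogeneous systems.
    S_eqs = sp.Matrix([[1, 0, 0]])   # x = 0
    T_eqs = sp.Matrix([[0, 0, 1]])   # z = 0

    joined = S_eqs.col_join(T_eqs)   # joining all the conditions

    # The solutions of the joined system are exactly the vectors of S ∩ T.
    print(joined.nullspace())        # [Matrix([[0], [1], [0]])] -> the y-axis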
Observation:
Even though the intersection of subspaces is a subspace, the same is not true for the union (see figure).
Take the space ℝ³ and the subspaces S (of equation x = 0) and T (of equation z = 0). The union is constituted by all the vectors that belong to either of the two gray planes of the figure. It is observed that (1, 0, 0) ∈ S ∪ T and (0, 0, 1) ∈ S ∪ T but, however, (1, 0, 0) + (0, 0, 1) = (1, 0, 1) ∉ S ∪ T. Then S ∪ T is not a vector space, because addition is not an internal operation in S ∪ T.
However, if the vectors that result from adding vectors of S to vectors of T are selected, the set of vectors obtained will form a vector space.
SUM
Given a vector space V, if S and T are two vector subspaces of V, the sum of S and T is defined as

    S + T = { x ∈ V / ∃ u ∈ S and ∃ v ∈ T / x = u + v }.

The sum S + T is also a subspace of V.
Looking at the previous figure, it is easily deduced that S + T is the whole of ℝ³. Additionally, as S is the space defined by the equation x = 0 and T is defined by the equation z = 0, it is clear that any (x, y, z) ∈ ℝ³ can be expressed as the sum of a vector of S and another of T, since (x, y, z) = (0, y, z) + (x, 0, 0). It can also be observed that there is no unique way to express (x, y, z), because it is also true that (x, y, z) = (0, y − 1, z) + (x, 1, 0).
However, different subspaces could be used to obtain ℝ³. For example (see the figure above), the subspaces S of equation x = 0 and T of equations y = 0, z = 0 also verify that S + T = ℝ³, since any (x, y, z) ∈ ℝ³ can be expressed as (x, y, z) = (0, y, z) + (x, 0, 0); but in this case there is only one way to express (x, y, z) as the sum of a vector of S and another of T, while in the first example this was not the case.
This is what will distinguish the sum from the so-called direct sum: the fact that every element of V can be expressed in a unique way as the sum of an element of S and another of T is what characterizes the direct sum.
DIRECT SUM
Given a vector space V, if S and T are two vector subspaces of V, then the direct sum

    S ⊕ T = { x ∈ V / ∃! u ∈ S and ∃! v ∈ T / x = u + v }

(where ∃! means "there exists one and only one") is also a subspace of V.

DIMENSION FORMULA
Given a finite-dimensional vector space V, if S and T are two vector subspaces of V, then

    dim(S + T) + dim(S ∩ T) = dim S + dim T

Proof:
Let dim S = n, dim T = m and dim(S ∩ T) = r. If {e₁, e₂, ..., eᵣ} is a basis for S ∩ T, then, by extending to a basis, {e₁, e₂, ..., eᵣ} can be enlarged in such a way that

    B₁ = {e₁, e₂, ..., eᵣ, eᵣ₊₁, ..., eₙ} and B₂ = {e₁, e₂, ..., eᵣ, vᵣ₊₁, ..., vₘ}

are bases for S and T respectively. It will be proved that B = {e₁, e₂, ..., eᵣ, eᵣ₊₁, ..., eₙ, vᵣ₊₁, ..., vₘ} is a basis for S + T. For this, in the first place, it is necessary to prove that its vectors are linearly independent. In effect, if

    α₁e₁ + α₂e₂ + ... + αᵣeᵣ + αᵣ₊₁eᵣ₊₁ + ... + αₙeₙ + βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ = 0̄,

it is verified that

    βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ = −(α₁e₁ + α₂e₂ + ... + αᵣeᵣ + αᵣ₊₁eᵣ₊₁ + ... + αₙeₙ) ∈ S     (*)

and, since βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ also belongs to T, it belongs to S ∩ T, so it can be written as a linear combination of the elements of its basis {e₁, e₂, ..., eᵣ}; that is, there exist scalars γ₁, γ₂, ..., γᵣ such that

    βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ = γ₁e₁ + γ₂e₂ + ... + γᵣeᵣ,

which is equivalent to saying

    γ₁e₁ + γ₂e₂ + ... + γᵣeᵣ − βᵣ₊₁vᵣ₊₁ − ... − βₘvₘ = 0̄

and, since B₂ = {e₁, e₂, ..., eᵣ, vᵣ₊₁, ..., vₘ} is a basis of T, its vectors are linearly independent, so necessarily βᵣ₊₁ = ... = βₘ = 0. Substituting in the expression (*), we would have

    α₁e₁ + α₂e₂ + ... + αᵣeᵣ + αᵣ₊₁eᵣ₊₁ + ... + αₙeₙ = 0̄

and, as B₁ = {e₁, e₂, ..., eᵣ, eᵣ₊₁, ..., eₙ} is a basis of S, its vectors are linearly independent, so α₁ = ... = αₙ = 0. In consequence

    α₁ = ... = αₙ = βᵣ₊₁ = ... = βₘ = 0

and the vectors of B = {e₁, e₂, ..., eᵣ, eᵣ₊₁, ..., eₙ, vᵣ₊₁, ..., vₘ} are linearly independent.
Additionally, the vectors of B generate the space S + T, since

    x ∈ S + T ⇒ ∃ s ∈ S and ∃ t ∈ T / x = s + t

and, expressing the vectors s and t in the bases B₁ = {e₁, e₂, ..., eᵣ, eᵣ₊₁, ..., eₙ} and B₂ = {e₁, e₂, ..., eᵣ, vᵣ₊₁, ..., vₘ} of the spaces S and T respectively:

    x = s + t = (α₁e₁ + α₂e₂ + ... + αᵣeᵣ + αᵣ₊₁eᵣ₊₁ + ... + αₙeₙ) + (β₁e₁ + β₂e₂ + ... + βᵣeᵣ + βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ)
      = (α₁ + β₁)e₁ + ... + (αᵣ + βᵣ)eᵣ + αᵣ₊₁eᵣ₊₁ + ... + αₙeₙ + βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ,

which means that x is generated by the vectors of the basis B. Furthermore, any vector x generated by the basis B can be decomposed as the sum of a vector s of S and another vector t of T, such as

    x = α₁e₁ + α₂e₂ + ... + αᵣeᵣ + ... + αₙeₙ + βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ
      = (α₁e₁ + α₂e₂ + ... + αᵣeᵣ + ... + αₙeₙ) + (0e₁ + 0e₂ + ... + 0eᵣ + βᵣ₊₁vᵣ₊₁ + ... + βₘvₘ)
      = s + t.

Then the vectors of B generate the space S + T and, counting the number of vectors in the bases of S + T, of S ∩ T, of S and of T, it has been proved that

    dim(S + T) + dim(S ∩ T) = dim S + dim T,

since dim(S + T) = r + (n − r) + (m − r) = n + m − r and dim(S ∩ T) = r, so both sides equal n + m.
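The formula can also be verified numerically on concrete subspaces; a sketch of mine, assuming each subspace of ℝⁿ is given by generators stored as the columns of a matrix (scipy is used only for the null space):

    import numpy as np
    from scipy.linalg import null_space

    # Subspaces of R^4: S = span{e1, e2}, T = span{e2, e3}, so S ∩ T = span{e2}.
    S = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
    T = np.array([[0, 0], [1, 0], [0, 1], [0, 0]], dtype=float)

    dim_S, dim_T = np.linalg.matrix_rank(S), np.linalg.matrix_rank(T)
    dim_sum = np.linalg.matrix_rank(np.hstack([S, T]))      # dim(S + T)

    # x lies in S ∩ T when x = S @ a = T @ b, i.e. [S | -T] @ (a, b) = 0,
    # so the kernel of [S | -T] parametrizes the intersection.
    kernel = null_space(np.hstack([S, -T]))
    dim_int = np.linalg.matrix_rank(S @ kernel[:S.shape[1], :]) if kernel.size else 0

    print(dim_sum + dim_int, dim_S + dim_T)   # 4 4: the dimension formula holds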
