AMS/MAA TEXTBOOKS

Linear Algebra

Jaime Aranguren
Contents

1 Matrices Over the Real Field
1.1 The concept of a Matrix
1.2 Binary operations with matrices
1.3 Unary operations with matrices
1.4 Inverse
1.5 Methods for finding the inverse of a matrix
2 Linear Spaces
2.1 Linear space structure
2.2 Linear subspaces
2.3 Complementary subspaces
2.4 Linear algebra
3 Linear Operators
3.1 Space of linear operators
3.2 Isomorphism in linear spaces
4 Spaces with Inner Product
5 Eigenvalues and Diagonalization
Bibliography
1 Matrices Over the Real Field

1.1 The concept of a Matrix


A matrix over a field $K$ is a rectangular array, arranged in rows $i$ and columns $j$, of elements $a_{ij} \in K$, with $i = 1, \dots, m$ and $j = 1, \dots, n$.

Field K. When referring to a field $K$ in this document, one of the following is meant:

Real field: $(\mathbb{R}, \text{addition}, \text{multiplication}) \equiv (\mathbb{R}, +, \cdot)$
Complex field: $(\mathbb{C}, \text{addition}, \text{multiplication}) \equiv (\mathbb{C}, +, \cdot)$
Note: the discussion will mostly concern the real field; however, in some cases the complex field will be referred to.

1.1.1 Notation. Matrices of size $m \times n$ over $\mathbb{R}$:


 
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \Rightarrow A = (a_{ij}) \in M_{mn}(\mathbb{R})$$
Matrix $A$ as a column vector of row vectors:
$$A = \begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_m \end{pmatrix} \quad \text{with } R_i = (a_{i1}, a_{i2}, \dots, a_{in}) \in \mathbb{R}^n$$
Matrix $A$ as a row vector of column vectors:
$$A = (C_1, C_2, \dots, C_n) \quad \text{with } C_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix} \in \mathbb{R}^m$$

1.1.1.1 Particular cases.


(1) Vectors:
$$A = (a_{11}, a_{12}, \dots, a_{1n}) \equiv (a_1, a_2, \dots, a_n) \in M_{1n}(\mathbb{R}) \equiv \mathbb{R}^n$$
$$A = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} \equiv \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} \in M_{m1}(\mathbb{R}) \equiv \mathbb{R}^m$$

(2) Extreme case:
$$(a_{11}) \in M_{11}(\mathbb{R}) \equiv \mathbb{R}$$

1.1.2 Types of matrices.

1.1.2.1 Rectangular matrix. The set $M_{mn}(\mathbb{R})$ with $m \neq n$. As an example:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

1.1.2.2 Square matrix. The set $M_{mn}(\mathbb{R})$ with $m = n \Rightarrow M_n(\mathbb{R}) \equiv M_m(\mathbb{R})$. As an example:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Within the square matrices, there are:
(1) Diagonal matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$. Then $(a_{ij})$ is diagonal if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i = j \\ 0, & \text{otherwise} \end{cases}$$
A diagonal matrix is often also written as $(a_{ij}) = \mathrm{Diag}(a_{11}, a_{22}, \dots, a_{nn})$. As an example:
$$A = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$$

(2) Upper triangular matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$. Then $(a_{ij})$ is upper triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i \le j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}$$

(3) Strictly upper triangular matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$. Then $(a_{ij})$ is strictly upper triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i < j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} 0 & a_{12} & a_{13} \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{pmatrix}$$

(4) Lower triangular matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$. Then $(a_{ij})$ is lower triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i \ge j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

(5) Strictly lower triangular matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$. Then $(a_{ij})$ is strictly lower triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i > j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} 0 & 0 & 0 \\ a_{21} & 0 & 0 \\ a_{31} & a_{32} & 0 \end{pmatrix}$$

(6) Bands of a matrix: Let $(a_{ij}) \in M_n(\mathbb{R})$.

(a) Upper band: all $a_{ij}$ such that $i = j - r$, with $r \in \mathbb{N}$ and $r \le n - 1$, and 0 otherwise. As an example:
$$A = \begin{pmatrix} 0 & a_{12} & 0 \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{pmatrix}$$
(b) Lower band: all $a_{ij}$ such that $i = j + r$, with $r \in \mathbb{N}$ and $r \le n - 1$, and 0 otherwise. As an example:
$$A = \begin{pmatrix} 0 & 0 & 0 \\ a_{21} & 0 & 0 \\ 0 & a_{32} & 0 \end{pmatrix}$$
Both types of band, upper and lower, can be combined. As an example:
$$A = \begin{pmatrix} 0 & a_{12} & 0 \\ a_{21} & 0 & a_{23} \\ 0 & a_{32} & 0 \end{pmatrix}$$

1.1.3 Matrix equality. Let $(a_{ij}), (b_{ij}) \in M_{mn}(\mathbb{R})$. Then:
$$A = B \Leftrightarrow a_{ij} = b_{ij}, \ \forall i = 1, \dots, m \wedge \forall j = 1, \dots, n$$
This is an equivalence relation: $\forall A, B, C \in M_{mn}(\mathbb{R})$:
(1) Reflexive: $A = A$
(2) Symmetric: $A = B \Leftrightarrow B = A$
(3) Transitive: $A = B \wedge B = C \Rightarrow A = C$

1.2 Binary operations with matrices


1.2.1 Addition. $+ : M_{mn}(\mathbb{R}) \times M_{mn}(\mathbb{R}) \to M_{mn}(\mathbb{R})$, an internal binary operation (IBO) such that:
$$((a_{ij}), (b_{ij})) \to (a_{ij}) + (b_{ij}) = (a_{ij} + b_{ij})$$

1.2.1.1 Properties. $\forall A, B, C \in M_{mn}(\mathbb{R})$ with $A = (a_{ij})$:

(1) Associative: $A + (B + C) = (A + B) + C$
(2) Additive neutral: $A + 0 = 0 + A = A$, with $0 = (d_{ij}) \in M_{mn}(\mathbb{R})$ such that $d_{ij} = 0$, $\forall i, j$. $0$ is called the null matrix.
(3) Additive inverse: $A + E = E + A = 0$, with $E = (e_{ij}) \in M_{mn}(\mathbb{R})$ such that $e_{ij} = -a_{ij}$, $\forall i, j$.
(4) Commutative: $A + B = B + A$

1.2.1.2 Algebraic structure. $(M_{mn}(\mathbb{R}), \text{addition}) \equiv (M_{mn}(\mathbb{R}), +)$ is an abelian group.

1.2.2 Multiplication by a scalar. $\cdot : \mathbb{R} \times M_{mn}(\mathbb{R}) \to M_{mn}(\mathbb{R})$, an external binary operation (EBO) such that:
$$(\lambda, (a_{ij})) \to \lambda(a_{ij}) = (\lambda a_{ij})$$

1.2.2.1 Properties. $\forall A, B \in M_{mn}(\mathbb{R}) \wedge \forall \varphi, \lambda \in \mathbb{R}$:

(1) $1A = A$
(2) $\lambda(A + B) = \lambda A + \lambda B$
(3) $(\lambda + \varphi)A = \lambda A + \varphi A$
(4) $(\lambda\varphi)A = \lambda(\varphi A) = \varphi(\lambda A)$

1.2.2.2 Algebraic structure. $(M_{mn}(\mathbb{R}), \text{addition}, \text{multiplication by scalar}) \equiv (M_{mn}(\mathbb{R}), \text{IBO}, \text{EBO})$ is a linear space over $\mathbb{R}$.
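As a quick numerical illustration of these two operations and their properties, here is a minimal sketch in Python (assuming numpy is available; the matrices and scalars are arbitrary examples, not taken from the text):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])      # A, B in M_{32}(R)
B = np.array([[0.5, -1.0], [2.0, 0.0], [1.0, 3.0]])
lam, phi = 2.0, -3.0

assert np.allclose(A + B, B + A)                        # commutativity of the IBO
assert np.allclose(lam * (A + B), lam * A + lam * B)    # EBO distributes over matrix addition
assert np.allclose((lam + phi) * A, lam * A + phi * A)  # EBO distributes over scalar addition
assert np.allclose(1.0 * A, A)                          # scalar identity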

1.2.3 Matrix multiplication.

1.2.3.1 Rectangular matrices. $M_{mn}(\mathbb{R}) \times M_{nr}(\mathbb{R}) \to M_{mr}(\mathbb{R})$, such that:
$$((a_{ij}), (b_{ij})) \to (a_{ij})(b_{ij}) = \left( \sum_{k=1}^{n} a_{ik} b_{kj} \right), \ \forall i = 1, \dots, m \wedge \forall j = 1, \dots, r$$

Properties. For all matrices $A, B, C, I_m, I_n$ over $\mathbb{R}$ for which the operation is conformable (i.e. is defined):
(1) Associative: A(BC) = (AB)C

Proof. Let $A = (a_{ij}) \in M_{mn}(\mathbb{R})$, $B = (b_{ij}) \in M_{np}(\mathbb{R})$, $C = (c_{ij}) \in M_{pq}(\mathbb{R})$. Then, we want to demonstrate that:
$$(a_{ij}) \left[ (b_{ij})(c_{ij}) \right] = \left[ (a_{ij})(b_{ij}) \right] (c_{ij})$$
By the definition of matrix multiplication:
$$(a_{ij}) \left( \sum_{t=1}^{p} b_{it} c_{tj} \right) = \left( \sum_{k=1}^{n} a_{ik} b_{kj} \right) (c_{ij})$$
Also by the definition of matrix multiplication (notice the changes in the indexes):
$$\left( \sum_{k=1}^{n} a_{ik} \left( \sum_{t=1}^{p} b_{kt} c_{tj} \right) \right) = \left( \sum_{t=1}^{p} \left( \sum_{k=1}^{n} a_{ik} b_{kt} \right) c_{tj} \right)$$
By associativity of addition in $\mathbb{R}$:
$$\left( \sum_{k=1}^{n} \sum_{t=1}^{p} a_{ik} b_{kt} c_{tj} \right) = \left( \sum_{t=1}^{p} \sum_{k=1}^{n} a_{ik} b_{kt} c_{tj} \right)$$

(2) Distributive: A(B + C) = AB + AC ∧ (A + B)C = AC + BC


(3) Identity element: Let $I_n \in M_n(\mathbb{R})$ be defined as:
$$I_n = (\delta_{ij}) \quad \text{with } \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \quad \text{known as Kronecker's delta}$$
$I_n \in M_n(\mathbb{R})$ is the identity matrix.
Then $A I_n = A \wedge I_m A = A$ with $A \in M_{mn}(\mathbb{R})$.
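The defining sum $c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$ translates directly into code. The following is a minimal sketch in Python (the function name matmul and the nested-list representation are illustrative choices, not from the text):

def matmul(A, B):
    """Product of an m x n matrix A and an n x r matrix B (nested lists)."""
    m, n, r = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "conformability: cols(A) must equal rows(B)"
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

# Identity property: A I_n = A
A = [[1, 2, 3], [4, 5, 6]]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(A, I3) == A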

1.2.3.2 Square matrices. $M_n(\mathbb{R}) \times M_n(\mathbb{R}) \to M_n(\mathbb{R})$ is an internal binary operation (IBO) such that:
$$((a_{ij}), (b_{ij})) \to (a_{ij})(b_{ij}) = \left( \sum_{k=1}^{n} a_{ik} b_{kj} \right), \ \forall i, j = 1, \dots, n$$

Properties. $\forall A, B, C \in M_n(\mathbb{R})$:

(1) Associative: $A(BC) = (AB)C$
(2) Distributive: $A(B + C) = AB + AC \wedge (A + B)C = AC + BC$
(3) Identity element: $A I_n = A \wedge I_n A = A$

Algebraic structure. $(M_n(\mathbb{R}), \text{addition}, \text{multiplication}) \equiv (M_n(\mathbb{R}), +, \cdot)$ is a ring with unity.

Commutation.
(1) If $AB = BA$, then $A$ and $B$ are said to commute.
(2) If $AB = -BA$, then $A$ and $B$ are said to anti-commute.

1.3 Unary operations with matrices


1.3.1 Transposition. $^T : M_{mn}(\mathbb{R}) \to M_{nm}(\mathbb{R})$, such that:
$$(a_{ij}) \to (a_{ij})^T = (a_{ji})$$

1.3.1.1 Properties. For each case, let $A, B$ be matrices over the real field such that the operation is conformable, and let $\lambda \in \mathbb{R}$:
(1) $(A^T)^T = A$
(2) $(A + B)^T = A^T + B^T$
(3) $(\lambda A)^T = \lambda A^T$
(4) $(AB)^T = B^T A^T$.

Proof. Let $A = (a_{ij}) \in M_{mn}(\mathbb{R})$, $B = (b_{ij}) \in M_{np}(\mathbb{R})$. Then, we want to demonstrate that:
$$\left[ (a_{ij})(b_{ij}) \right]^T = (b_{ij})^T (a_{ij})^T$$
By definition of matrix multiplication on the left side, and definition of transposition on the right side:
$$\left( \sum_{k=1}^{n} a_{ik} b_{kj} \right)^T = (b_{ji})(a_{ji})$$
By definition of transposition on the left side, and definition of matrix multiplication on the right side:
$$\left( \sum_{k=1}^{n} a_{ki} b_{jk} \right) = \left( \sum_{k=1}^{n} b_{jk} a_{ki} \right)$$
By commutativity of the product in $\mathbb{R}$ on the right side:
$$\left( \sum_{k=1}^{n} a_{ki} b_{jk} \right) = \left( \sum_{k=1}^{n} a_{ki} b_{jk} \right)$$

1.3.1.2 Types of matrices generated by the transposition of square matrices.
Let $A \in M_n(\mathbb{R})$; then it can be said that $A$ is:
(1) Symmetric: if $A = A^T$.
$A + A^T$, $AA^T$, $A^T A$ are symmetric $\forall A \in M_n(\mathbb{R})$.
(2) Anti-symmetric: if $A = -A^T$.
$A - A^T$ is anti-symmetric $\forall A \in M_n(\mathbb{R})$.

Proof. Take the transpose of the mentioned matrix:
$$(A - A^T)^T = A^T - A = -(A - A^T) \therefore (A - A^T)^T = -(A - A^T)$$
As shown, $A - A^T$ is always anti-symmetric.

Property. $\forall A \in M_n(\mathbb{R})$, there exist $S_s \in M_n(\mathbb{R})$ symmetric and $S_a \in M_n(\mathbb{R})$ anti-symmetric, such that:
$$A = S_s + S_a$$
To prove that, take $S_s = \frac{1}{2}(A + A^T)$ and $S_a = \frac{1}{2}(A - A^T)$.
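A minimal numerical sketch of this decomposition (assuming numpy; the matrix $A$ is an arbitrary example):

import numpy as np

A = np.array([[1.0, 7.0, 3.0],
              [2.0, 5.0, 8.0],
              [4.0, 6.0, 9.0]])
Ss = 0.5 * (A + A.T)            # symmetric part
Sa = 0.5 * (A - A.T)            # anti-symmetric part

assert np.allclose(Ss, Ss.T)    # Ss = Ss^T
assert np.allclose(Sa, -Sa.T)   # Sa = -Sa^T
assert np.allclose(A, Ss + Sa)  # A = Ss + Sa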


1.3.2 Conjugation. $\bar{\ } : M_{mn}(\mathbb{C}) \to M_{mn}(\mathbb{C})$, such that:
$$(a_{ij} + i b_{ij}) \to \overline{(a_{ij} + i b_{ij})} = (\overline{a_{ij} + i b_{ij}}) = (a_{ij} - i b_{ij})$$

1.3.2.1 Properties. For each case, let $A, B$ be matrices over the complex field such that the operation is conformable, and let $\lambda \in \mathbb{C}$:
(1) $\overline{(\overline{A})} = A$
(2) $\overline{A + B} = \overline{A} + \overline{B}$
(3) $\overline{\lambda A} = \overline{\lambda}\, \overline{A}$
(4) $\overline{(AB)} = \overline{A}\, \overline{B}$

1.3.2.2 Types of matrices generated by the conjugation of complex matrices.
Let $A \in M_n(\mathbb{C})$; then it can be said that $A$ is:
(1) Real: if $\overline{A} = A$
(2) Imaginary: if $\overline{A} = -A$
1.3.3 Conjugate transposition. $\overline{\ }^{\,T} : M_{mn}(\mathbb{C}) \to M_{nm}(\mathbb{C})$, such that:
$$(a_{ij} + i b_{ij}) \to \overline{(a_{ij} + i b_{ij})}^{\,T} = \overline{(a_{ij} + i b_{ij})^T} = (\overline{a_{ji} + i b_{ji}}) = (a_{ij} - i b_{ij})^T = (a_{ji} - i b_{ji})$$

1.3.3.1 Properties. For each case, let $A, B$ be matrices over the complex field such that the operation is conformable, and let $\lambda \in \mathbb{C}$:
(1) $\overline{\left( \overline{A}^{\,T} \right)}^{\,T} = A$
(2) $\overline{(A + B)}^{\,T} = \overline{A}^{\,T} + \overline{B}^{\,T}$
(3) $\overline{(\lambda A)}^{\,T} = \overline{\lambda}\, \overline{A}^{\,T}$
(4) $\overline{(AB)}^{\,T} = \overline{B}^{\,T}\, \overline{A}^{\,T}$

1.3.3.2 Types of matrices generated by the conjugate transposition of square complex matrices. Let $A \in M_n(\mathbb{C})$; then it can be said that $A$ is:
(1) Hermitian: if $\overline{A}^{\,T} = A$.
$A + \overline{A}^{\,T}$, $A\overline{A}^{\,T}$, $\overline{A}^{\,T}A$ are Hermitian $\forall A \in M_n(\mathbb{C})$.
If $A \in M_n(\mathbb{C})$ and $A$ is Hermitian, then all of the elements on the diagonal are real numbers.
(2) Anti-Hermitian: if $\overline{A}^{\,T} = -A$.
$A - \overline{A}^{\,T}$ is anti-Hermitian $\forall A \in M_n(\mathbb{C})$.
If $A \in M_n(\mathbb{C})$ and $A$ is anti-Hermitian, then all of the elements on the diagonal are imaginary numbers.

Property. $\forall A \in M_n(\mathbb{C})$, there exist $H_h \in M_n(\mathbb{C})$ Hermitian and $H_a \in M_n(\mathbb{C})$ anti-Hermitian, such that:
$$A = H_h + H_a$$
To prove that, take $H_h = \frac{1}{2}(A + \overline{A}^{\,T})$ and $H_a = \frac{1}{2}(A - \overline{A}^{\,T})$.
1.3.4 Exponentiation. $^r : M_n(\mathbb{R}) \to M_n(\mathbb{R})$, such that:
$$(a_{ij}) \to (a_{ij})^r = \prod_{t=1}^{r} (a_{ij}), \quad \text{with } r \in \mathbb{N}$$
It is defined that $A^0 = I_n$, $\forall A \in M_n(\mathbb{R})$ regular.

1.3.4.1 Properties. Let $A, B, C \in M_n(\mathbb{R})$ and $p, r, t \in \mathbb{N}$:

(1) $(A^r)^t = A^{rt}$
(2) $A^r A^t = A^{r+t}$
(3) $(A^r B^p) C^t = A^r (B^p C^t)$
(4) If $A$ and $B$ commute, then $A^r$ and $B^r$ also commute
(5) If $A$ and $B$ commute, then $(AB)^r = A^r B^r$, $r \ge 2$

Proof. Let $A, B \in M_n(\mathbb{R})$. Then, by mathematical induction:
(a) Veracity check of $(AB)^r = A^r B^r$ for $r = 2$:
$$(AB)^2 = (AB)(AB) = A(BA)B = A(AB)B = (AA)(BB) \therefore (AB)^2 = A^2 B^2$$
(b) Inductive hypothesis: for $r = k$, $(AB)^r = A^r B^r$ is assumed to be true: $(AB)^k = A^k B^k$.
(c) Veracity check of $(AB)^r = A^r B^r$ for $r = k + 1$:
$$(AB)^{k+1} = (AB)^k (AB) = (A^k B^k)(AB) = (A^k B^{k-1})(BA)B = (A^k B^{k-1})(AB)B = (A^k B^{k-1})(AB^2)$$
$$= (A^k B^{k-2})(BA)B^2 = (A^k B^{k-2})(AB)B^2 = (A^k B^{k-2})(AB^3) = \dots = (A^k B)(AB^k)$$
$$= A^k (BA) B^k = A^k (AB) B^k = (A^k A)(B^k B) \therefore (AB)^{k+1} = A^{k+1} B^{k+1}$$

1.3.4.2 Types of matrices generated by the exponentiation of square matrices. Let $A \in M_n(\mathbb{R})$; then it can be said that $A$ is:
(1) Involutory: if $A^2 = I_n$.
(2) Idempotent: if $A^2 = A$.
(3) Periodic, with period $r$: if $A^r = A$.
(4) Nilpotent, with nilpotency index $r$: if $A^r = 0$.

Properties.
(1) If $A$ is idempotent, then $A^r = A$ for $r \ge 2$.
(2) If $A$ is involutory, then $A^{2r} = I_n$ and $A^{2r+1} = A$.

1.3.5 Matrix polynomials.

(1) $P_n(A) = \sum_{i=0}^{n} \lambda_i A^i$ with coefficients $\lambda_i \in \mathbb{R}$, $\forall i = 0, \dots, n$, and variable $A \in M_r(\mathbb{R})$.
(2) $P_n(x) = \sum_{i=0}^{n} B_i x^i$ with coefficients $B_i \in M_{mr}(\mathbb{R})$, $\forall i = 0, \dots, n$, and variable $x \in \mathbb{R}$.
(3) $P_n(A) = \sum_{i=0}^{n} B_i A^i$ with coefficients $B_i \in M_{mr}(\mathbb{R})$, $\forall i = 0, \dots, n$, and variable $A \in M_r(\mathbb{R})$.
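As an illustration of form (1), a matrix polynomial can be evaluated with Horner's scheme. A minimal sketch (assuming numpy; the function name poly_at_matrix is an illustrative choice):

import numpy as np

def poly_at_matrix(coeffs, A):
    """Evaluate P(A) = sum_i coeffs[i] * A**i by Horner's scheme.
    coeffs = [lambda_0, lambda_1, ..., lambda_n]."""
    n = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    for lam in reversed(coeffs):  # ((l_n A + l_{n-1} I) A + ...) A + l_0 I
        result = result @ A + lam * np.eye(n)
    return result

A = np.array([[2.0, 1.0], [0.0, 3.0]])
# P(A) = A^2 - 5A + 6I is the characteristic polynomial of this A,
# so P(A) = 0 by the Cayley-Hamilton theorem (a fact beyond this section)
assert np.allclose(poly_at_matrix([6.0, -5.0, 1.0], A), np.zeros((2, 2)))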

1.4 Inverse
1.4.1 Inverse of a rectangular matrix. Let $A \in M_{mn}(\mathbb{R})$ with $m \neq n$. If there exist:
• $B \in M_{nm}(\mathbb{R})$ such that $AB = I_m$, then $B$ is called a right inverse of $A$.
• $C \in M_{nm}(\mathbb{R})$ such that $CA = I_n$, then $C$ is called a left inverse of $A$.
For a rectangular matrix ($m \neq n$) the lateral inverse matrices may differ, may not exist, or one may exist while the other does not.

1.4.2 Inverse of a square matrix. Let $A \in M_n(\mathbb{R})$. If there exist:

• $D \in M_n(\mathbb{R})$ such that $AD = I_n$, then $D$ is called a right inverse of $A$.
• $E \in M_n(\mathbb{R})$ such that $EA = I_n$, then $E$ is called a left inverse of $A$.
Although $AD = I_n$ and $EA = I_n$, and in consequence $AD = EA$, this does not imply that $D = E$. But in case that $D = E$, then $D = E = A^{-1}$, and $A^{-1}$ is called the inverse of $A$, which implies that:
$$AA^{-1} = A^{-1}A = I_n$$
If $A^{-1}$ exists, then $A$ is said to be regular, invertible, or non-singular.

1.4.2.1 Negative powers. If $A \in M_n(\mathbb{R})$ is regular, it is defined that $A^{-r} = (A^{-1})^r$, with $r \in \mathbb{N}$.

1.4.2.2 Properties.
(1) If $A \in M_n(\mathbb{R})$ is regular, then $A^{-1}$ is unique.

Proof. By contradiction. Assume that $A_1, A_2 \in M_n(\mathbb{R})$ are both inverses of $A$ with $A_1 \neq A_2$. Then the following holds:
Since $A_1$ is an inverse of $A$:
$$A_1 A = A A_1 = I \quad (1.1)$$
Since $A_2$ is an inverse of $A$:
$$A_2 A = A A_2 = I$$
Pre-multiplying (1.1) by $A_2$: $(A_2 A) A_1 = A_2 I_n \therefore A_1 = A_2$ ⇒⇐
Post-multiplying (1.1) by $A_2$: $A_1 (A A_2) = I_n A_2 \therefore A_1 = A_2$ ⇒⇐
Both statements contradict the initial assumption that $A_1 \neq A_2$; in consequence the inverse of $A$, if it exists, is unique.

(2) If $A \in M_n(\mathbb{R})$ is regular, then $A^{-1}$ is regular, and $(A^{-1})^{-1} = A$.
(3) If $A \in M_n(\mathbb{R})$ is regular and $\lambda \in \mathbb{R} - \{0\}$, then $\lambda A$ is regular, and $(\lambda A)^{-1} = \lambda^{-1} A^{-1}$.
(4) If $A, B \in M_n(\mathbb{R})$ are regular, then $AB$ is regular, and $(AB)^{-1} = B^{-1} A^{-1}$.

Proof. If $A, B \in M_n(\mathbb{R})$ are regular, then $\det(A) \neq 0$ and $\det(B) \neq 0$ (see 1.5.1).
Since $\det(AB) = \det(A)\det(B) \neq 0$, $AB$ is regular.
Since the inverse of $AB$ is $(AB)^{-1}$:
$$(AB)^{-1} AB = I_n$$
Post-multiplying by $B^{-1}$:
$$\left[ (AB)^{-1} AB \right] B^{-1} = I_n B^{-1} \therefore (AB)^{-1} A (BB^{-1}) = B^{-1} \therefore (AB)^{-1} A = B^{-1}$$
Post-multiplying by $A^{-1}$:
$$\left[ (AB)^{-1} A \right] A^{-1} = B^{-1} A^{-1} \therefore (AB)^{-1} (AA^{-1}) = B^{-1} A^{-1}$$
From which:
$$(AB)^{-1} = B^{-1} A^{-1}$$
If $A_i \in M_n(\mathbb{R})$ with $i = 1, \dots, r$ are regular, then $\prod_{i=1}^{r} A_i$ is regular and:
$$\left( \prod_{i=1}^{r} A_i \right)^{-1} = \prod_{i=1}^{r} (A_{r+1-i})^{-1}$$

(5) If $A \in M_n(\mathbb{R})$ is regular, then $A^T$ is regular, and $(A^T)^{-1} = (A^{-1})^T$.
(6) If $A \in M_n(\mathbb{C})$ is regular, then $\overline{A}^{\,T}$ is regular, and $(\overline{A}^{\,T})^{-1} = \overline{(A^{-1})}^{\,T}$.

1.4.2.3 Algebraic structure. (Mn (R) regular, multiplication) is a group.

1.5 Methods for finding the inverse of a matrix
1.5.1 Determinant.

1.5.1.1 Definition. $\det : M_n(\mathbb{R}) \to \mathbb{R}$, such that:
$$(a_{ij}) \to \det(a_{ij}) = |(a_{ij})| = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} |M_{ij}|, \quad \text{by constant column } j \text{ between 1 and } n$$
$$= \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |M_{ij}|, \quad \text{by constant row } i \text{ between 1 and } n$$

$M_{ij}$ results from the original matrix $A$ by suppressing row $i$ and column $j$.

Minor of $a_{ij}$: $|M_{ij}| = \det M_{ij} \in \mathbb{R}$.

The process is repeated until the calculation reduces to a determinant of size 2 or 3:
$$\det \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$
$$\det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} - a_{12}a_{21}$$

Cofactor of $a_{ij}$: $c_{ij} = (-1)^{i+j} |M_{ij}|$. There exist $n^2$ cofactors of $A \in M_n(\mathbb{R})$.

Cofactor matrix of $A$: $\mathrm{cof}\, A = (c_{ij}) \in M_n(\mathbb{R})$.

Adjunct (adjugate) matrix of $A$: $\mathrm{adj}\, A = (\mathrm{cof}\, A)^T \in M_n(\mathbb{R})$.
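A minimal sketch of the cofactor expansion, the cofactor matrix, and the adjunct matrix in Python (recursive Laplace expansion is O(n!), so this is only illustrative for small n; the function names are illustrative choices):

def minor(A, i, j):
    """M_ij: the matrix A with row i and column j suppressed (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Laplace (cofactor) expansion along row 0."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adj(A):
    """Adjunct matrix: transpose of the cofactor matrix (c_ij) = ((-1)^(i+j) |M_ij|)."""
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3  # a11*a22 - a12*a21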

1.5.1.2 Properties of the cofactors. Let $A \in M_n(\mathbb{R})$. Then:

(1) The value of the determinant of $A$ is the sum of the products obtained by multiplying each element of a row (column) by its cofactor.

Proof. Let $A = (a_{ij}) \in M_n(\mathbb{R})$. Then, it is known that:
$$\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |M_{ij}|, \quad \text{with } i \text{ constant.}$$
But the cofactors of $A$ are $c_{ij} = (-1)^{i+j} |M_{ij}|$, $\forall i \wedge \forall j$. Therefore:
$$\det A = \sum_{j=1}^{n} a_{ij} c_{ij}, \quad \text{with } i \text{ constant.}$$

(2) The sum of the products formed by multiplying the elements of a row (column) of $A$ by the corresponding cofactors of another row (column) of $A$ is zero.

Proof. Let $A = (a_{ij}) \in M_n(\mathbb{R})$. It shall be proven that:
$$\sum_{j=1}^{n} a_{rj} c_{ij} = 0, \quad r \neq i$$
Take the matrix $A$, which has rows $i$ and $r$. Substitute row $i$ with row $r$, and calculate the determinant along row $i$, obtaining:
$$\begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ a_{r1} & \cdots & a_{rn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} \to \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{r1} & \cdots & a_{rn} \\ a_{r1} & \cdots & a_{rn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} = \sum_{j=1}^{n} a_{rj} c_{ij} = 0$$
because the determinant of a matrix with two equal rows is zero. See 1.5.1.3.

From the previous two properties, it can be concluded that:
$$\sum_{j=1}^{n} a_{rj} c_{ij} = \delta_{ir} \det A$$
Analogously, by columns:
$$\sum_{i=1}^{n} a_{ih} c_{ij} = \delta_{jh} \det A$$

1.5.1.3 Properties of the determinant. Let $A, B \in M_n(\mathbb{R})$ and $\lambda \in \mathbb{R} - \{0\}$. Then:

(1) $\det(\lambda A) = \lambda^n \det A$
(2) $\det(AB) = \det A \det B$
(3) $\det A^T = \det A$
(4) If $A \in M_n(\mathbb{C})$, then $\det \overline{A} = \overline{\det A}$
(5) If $A \in M_n(\mathbb{C})$, then $\det \overline{A}^{\,T} = \overline{\det A}$
(6) $\det(A^{-1}) = (\det A)^{-1}$
(7) $\det\left( \prod_{i=1}^{r} A_i \right) = \prod_{i=1}^{r} \det A_i$, with $A_i \in M_n(\mathbb{R})$, $\forall i = 1, \dots, r$
(8) If a matrix $B$ is obtained by exchanging two rows or two columns of $A$, then $\det B = -\det A$
(9) If a row (or column) of $A$ is a scalar multiple of another row (or column) of $A$, then $\det A = 0$
(10) If a matrix $B$ is obtained by multiplying a row (or column) of $A$ by $\lambda$, then $\det B = \lambda \det A$
(11) If a matrix $B$ is obtained by adding to a row (or column) of $A$ a scalar multiple of another row (or column) of $A$, then $\det B = \det A$

Proof. Another way to state the above property is: Let $A, B \in M_n(\mathbb{R})$ with $a_{ij} = b_{ij}$, except that there exists one row $i$ in $B$ such that $b_{ij} = a_{ij} + \lambda a_{kj}$, with $k \neq i$ and $1 \le k \le n$. Then the determinant of $B$ along row $i$ is obtained as:
$$\det B = \sum_{j=1}^{n} b_{ij} c_{ij}, \quad \text{with } c_{ij} \text{ the cofactors of } B \text{ and } A$$
$$= \sum_{j=1}^{n} (a_{ij} + \lambda a_{kj}) c_{ij} = \sum_{j=1}^{n} a_{ij} c_{ij} + \lambda \sum_{j=1}^{n} a_{kj} c_{ij}$$
$$= \det A + \lambda \sum_{j=1}^{n} a_{kj} c_{ij}, \quad \text{but } \sum_{j=1}^{n} a_{kj} c_{ij} = 0$$
$$\therefore \det B = \det A$$

(12) If $A$ is triangular or diagonal, then $\det A = \prod_{i=1}^{n} a_{ii}$

1.5.1.4 Properties of the adjunct matrix.

(1) For any matrix $A \in M_n(\mathbb{R})$, it is true that $A(\mathrm{adj}\, A) = (\mathrm{adj}\, A)A = (\det A)\, I_n$

Proof. The matrix of cofactors of $A$ is $\mathrm{cof}\, A = (c_{ij})$. Therefore, the adjunct matrix of $A$ is $\mathrm{adj}\, A = (\mathrm{cof}\, A)^T = (c_{ji})$. Developing the products $A(\mathrm{adj}\, A)$ and $(\mathrm{adj}\, A)A$:
$$A(\mathrm{adj}\, A) = \left( \sum_{k=1}^{n} a_{ik} c_{jk} \right) = (\delta_{ij} \det A) = \det A\, (\delta_{ij}) = (\det A)\, I_n$$
$$(\mathrm{adj}\, A)A = \left( \sum_{h=1}^{n} c_{hi} a_{hj} \right) = (\delta_{ij} \det A) = \det A\, (\delta_{ij}) = (\det A)\, I_n$$
$$\therefore A(\mathrm{adj}\, A) = (\mathrm{adj}\, A)A = (\det A)\, I_n$$

(2) If $A$ is symmetric, then the adjunct of $A$ is symmetric.
(3) If $A \in M_n(\mathbb{C})$ is Hermitian, then the adjunct of $A$ is Hermitian.

1.5.1.5 Inverse of a matrix. From property (1) of the adjunct matrix, if $A$ is regular, then $A^{-1} = (\det A)^{-1}\, \mathrm{adj}\, A$. In consequence, $A$ has an inverse if and only if $\det A \neq 0$.
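A minimal sketch of this formula in Python (assuming numpy; inv_by_adjunct is an illustrative name, and the cofactors are computed with numpy's determinant rather than the recursive expansion sketched earlier):

import numpy as np

def inv_by_adjunct(A):
    """A^{-1} = (det A)^{-1} adj A; raises if A is singular."""
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det A = 0: A has no inverse")
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)  # minor M_ij
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(M)     # cofactor c_ij
    return cof.T / d  # adj A = (cof A)^T, divided by det A

A = np.array([[2.0, 1.0], [5.0, 3.0]])
assert np.allclose(inv_by_adjunct(A) @ A, np.eye(2))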

1.5.2 Matrix equivalence.

1.5.2.1 Elementary operations. $e_k : M_{mn}(\mathbb{R}) \to M_{mn}(\mathbb{R})$, such that:
$$(a_{ij}) \to e_k(a_{ij})$$
• $e_1$: Exchanges two rows (or columns).
• $e_2$: Multiplies row $i$ (or column $j$) by $\lambda \in \mathbb{R} - \{0\}$.
• $e_3$: Adds to row $i$ (or column $j$) $\lambda$ times row $k$ (or column $k$), with $\lambda \in \mathbb{R} - \{0\}$, $k \neq i$, and $1 \le k \le m$ ($k \neq j$, and $1 \le k \le n$).

Inverse elementary operations. There exist $e_k^{-1}$ such that:
$$(e_k e_k^{-1})(a_{ij}) = (e_k^{-1} e_k)(a_{ij}) = (a_{ij}), \ \forall (a_{ij}) \in M_{mn}(\mathbb{R})$$

Behavior of the determinant.

(1) With $e_1$: changes sign. See property (8) of the determinant.
(2) With $e_2$: is multiplied by $\lambda$. See property (10) of the determinant.
(3) With $e_3$: does not change. See property (11) of the determinant.

1.5.2.2 Types of matrices generated by the elementary operations.

Elementary. These are obtained from the identity matrix by applying a single elementary operation to it. All elementary matrices are regular.

Echelon form.
(1) Row echelon (RE) matrices are those which meet the following conditions:
(a) The first non-null element of each row is 1.
(b) The first non-null elements of the rows are ordered from left to right.
(c) If there are any null rows, they are located at the bottom.
Every matrix $A \in M_{mn}(\mathbb{R}) - \{0\}$ can be transformed into row echelon form by means of the Gauss elimination method.
(2) Column echelon (CE) matrices are those which meet the following conditions:
(a) The first non-null element of each column is 1.
(b) The first non-null elements of the columns are ordered from top to bottom.
(c) If there are any null columns, they are located at the right.
Every matrix $A \in M_{mn}(\mathbb{R}) - \{0\}$ can be transformed into column echelon form by means of the Gauss elimination method.

Reduced echelon form.

(1) Reduced row echelon (RRE) matrices are those which meet the following conditions:
(a) They are in row echelon form.
(b) For the first non-null element of a row, the remaining elements of the column are 0.
Every matrix $A \in M_{mn}(\mathbb{R}) - \{0\}$ can be transformed into reduced row echelon form by means of the Gauss-Jordan reduction method.
(2) Reduced column echelon (RCE) matrices are those which meet the following conditions:
(a) They are in column echelon form.
(b) For the first non-null element of a column, the remaining elements of the row are 0.
Every matrix $A \in M_{mn}(\mathbb{R}) - \{0\}$ can be transformed into reduced column echelon form by means of the Gauss-Jordan reduction method.

1.5.2.3 Equivalence.

Equivalence by rows. The matrix $A \in M_{mn}(\mathbb{R})$ is row equivalent to $B \in M_{mn}(\mathbb{R})$ if there exists a finite sequence of elementary row operations that transforms $A$ into $B$:
$$A \overset{R}{\equiv} B \Leftrightarrow A \xrightarrow{r_1} A_1 \xrightarrow{r_2} A_2 \xrightarrow{r_3} \dots \xrightarrow{r_k} B$$
with $r_h$ elementary row operations for $h = 1, \dots, k$.
In other words: the matrix $A \in M_{mn}(\mathbb{R})$ is row equivalent to $B \in M_{mn}(\mathbb{R})$ if there exist elementary row matrices $R_1, R_2, \dots, R_k \in M_m(\mathbb{R})$ such that:
$$B = R_k R_{k-1} \cdots R_1 A = \left( \prod_{h=1}^{k} R_{k-h+1} \right) A = PA$$
with $P \in M_m(\mathbb{R})$ regular (a product of regular matrices).

Equivalence by columns. The matrix $A \in M_{mn}(\mathbb{R})$ is column equivalent to $B \in M_{mn}(\mathbb{R})$ if there exists a finite sequence of elementary column operations that transforms $A$ into $B$:
$$A \overset{C}{\equiv} B \Leftrightarrow A \xrightarrow{c_1} A_1 \xrightarrow{c_2} A_2 \xrightarrow{c_3} \dots \xrightarrow{c_r} B$$
with $c_h$ elementary column operations for $h = 1, \dots, r$.
In other words: the matrix $A \in M_{mn}(\mathbb{R})$ is column equivalent to $B \in M_{mn}(\mathbb{R})$ if there exist elementary column matrices $C_1, C_2, \dots, C_r \in M_n(\mathbb{R})$ such that:
$$B = A C_1 C_2 \cdots C_r = A \prod_{h=1}^{r} C_h = AQ$$
with $Q \in M_n(\mathbb{R})$ regular (a product of regular matrices).

Equivalence of rectangular matrices. Let $A, B \in M_{mn}(\mathbb{R})$. Then $A \equiv B$ if there exists a finite sequence of elementary row and column operations that transforms $A$ into $B$. In other words:
$$A \equiv B \Leftrightarrow \exists P \in M_m(\mathbb{R}) \wedge \exists Q \in M_n(\mathbb{R}),\ P, Q \text{ regular, such that } B = PAQ$$
The above definitions of equivalence determine an equivalence relation.

1.5.2.4 Rank of a matrix. Since the equivalence of matrices is an equivalence relation, there exist equivalence classes.
All row equivalent matrices are in the same class, whose best representatives are the row echelon and reduced row echelon matrices, in which there is a certain number of non-null rows.
That number of non-null rows identifies all of the elements of the class and is called the row rank of the mentioned class:
$$\text{row rank of } A \in M_{mn}(\mathbb{R}) = \text{number of non-null rows of the RE or RRE form of } A = t, \quad t \in \mathbb{N}$$

By a similar analysis for the equivalence by columns:
$$\text{column rank of } A \in M_{mn}(\mathbb{R}) = \text{number of non-null columns of the CE or RCE form of } A = r, \quad r \in \mathbb{N}$$
In the case of equivalence by rows and columns, $A \in M_{mn}(\mathbb{R})$ is equivalent to RE or RRE, and $A \in M_{mn}(\mathbb{R})$ is equivalent to CE or RCE; therefore RE or RRE is equivalent to CE or RCE, which implies that:
$$\text{row rank } A = \text{column rank } A$$
In other words, in such case RE, RRE, CE and RCE are in the same class as $A$, and in consequence there is the following:

Property. For every $A \in M_{mn}(\mathbb{R})$:
$$\text{row rank } A = \text{column rank } A = \text{rank } A \le \min\{m, n\}$$
It is defined that:
$$\text{rank}\, 0 = 0$$
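A minimal sketch of computing the rank by row reduction (assuming sympy, whose rref() method returns the reduced row echelon form together with the pivot columns; the matrix is an arbitrary example):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],   # 2 times row 1: a redundant row
            [1, 0, 1]])
rre, pivots = A.rref()   # reduced row echelon form and pivot column indices
print(rre)               # two non-null rows remain
assert len(pivots) == A.rank() == 2  # rank A = 2 <= min{3, 3}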

1.5.2.5 Relationships between square matrices.

Equivalence. If $B = PAQ$ with $P, Q$ regular, then $B$ is equivalent to $A$.

Similarity. If $P = Q^{-1}$, then $B = Q^{-1}AQ$ and it is said that $B$ is similar to $A$.

Congruence. If $P = Q^T$, then $B = Q^T A Q$ and it is said that $B$ is congruent to $A$.

Orthogonal transformation. If $P = Q^T = Q^{-1}$, then $B = Q^T A Q = Q^{-1} A Q$ and it is said that $B$ is orthogonally similar to $A$.

Unitary transformation. If $\overline{Q}^{\,T} = Q^{-1}$, then $B = \overline{Q}^{\,T} A Q = Q^{-1} A Q$ and it is said that $B$ is unitarily similar to $A \in M_n(\mathbb{C})$.

1.5.2.6 The inverse. $A \in M_n(\mathbb{R})$ is regular if and only if it can be expressed as a product of elementary row matrices or of elementary column matrices. In consequence:
$$A \in M_n(\mathbb{R}) \text{ is regular} \Leftrightarrow A \equiv I_n$$
In other words: $A \in M_n(\mathbb{R})$ is regular if and only if it can be transformed into $I_n$ through the application of elementary operations.

Process.
(1) By the application of elementary row operations:
$$[\,A \mid I_n\,] \xrightarrow{\text{elementary row operations}} [\,I_n \mid A^{-1}\,]$$
(2) By the application of elementary column operations:
$$\left[ \frac{A}{I_n} \right] \xrightarrow{\text{elementary column operations}} \left[ \frac{I_n}{A^{-1}} \right]$$

If these processes are possible, then $A$ is a regular matrix. If these processes are not possible, then $A$ is a non-regular (singular) matrix.
Regarding transposition, the column process on the stacked block corresponds to the row process on its transpose, since $(A^T)^{-1} = (A^{-1})^T$:
$$\left[ \frac{A}{I_n} \right]^T = [\,A^T \mid I_n\,] \xrightarrow{\text{elementary row operations}} [\,I_n \mid (A^T)^{-1}\,] = \left[ \frac{I_n}{A^{-1}} \right]^T$$
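A minimal sketch of process (1), row-reducing the augmented block $[A \mid I_n]$ to $[I_n \mid A^{-1}]$ (assuming numpy; partial pivoting is added for numerical stability, a practical detail beyond the text's description):

import numpy as np

def inverse_gauss_jordan(A):
    """Row-reduce [A | I_n] to [I_n | A^{-1}] with elementary row operations."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # augmented matrix [A | I_n]
    for col in range(n):
        p = col + np.argmax(np.abs(M[col:, col]))  # pick the largest available pivot
        if np.isclose(M[p, col], 0.0):
            raise ValueError("A is singular: the process is not possible")
        M[[col, p]] = M[[p, col]]                  # e1: exchange two rows
        M[col] /= M[col, col]                      # e2: scale the pivot row to get a 1
        for r in range(n):                         # e3: annihilate the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                # the right block is now A^{-1}

A = np.array([[4.0, 7.0], [2.0, 6.0]])
assert np.allclose(inverse_gauss_jordan(A) @ A, np.eye(2))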

1.5.3 Linear systems.

1.5.3.1 Definition.

Notation. A linear system is defined by means of a system of $m$ linear equations with $n$ unknowns:
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &= b_2 \\ &\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &= b_m \end{aligned}$$

Matrix representation. $A\mathbf{x} = \mathbf{b}$, where:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \in M_{mn}(\mathbb{R}) \text{ is the matrix of coefficients}$$
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in M_{n1}(\mathbb{R}) \text{ is the vector of unknowns}$$
$$\mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} \in M_{m1}(\mathbb{R}) \text{ is the vector of independent terms}$$

Vector representation.
$$C_1 x_1 + C_2 x_2 + \dots + C_n x_n = \mathbf{b} \Leftrightarrow \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} x_1 + \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix} x_2 + \dots + \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix} x_n = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
As can be observed, each column of the matrix $A$ determines the behavior of the corresponding unknown in the system.

Solution. $\mathbf{x}_0 = (x_{01}, x_{02}, \dots, x_{0n}) \in \mathbb{R}^n$ is a solution of $A\mathbf{x} = \mathbf{b}$ if and only if $A\mathbf{x}_0 = \mathbf{b}$.

Equivalence property. The equivalence of matrices keeps the solution of the system invariant.

Types of linear system according to the vector of independent terms.

Homogeneous systems: have the form $A\mathbf{x} = 0$.

Non-homogeneous systems: have the form $A\mathbf{x} = \mathbf{b}$, with $\mathbf{b} \neq 0$.

Types of linear system according to the solution.

Compatible or consistent systems: have at least one solution.

Incompatible or inconsistent systems: have no solution. They occur if:
$$\text{rank } A < \text{rank } [\,A \mid \mathbf{b}\,], \quad \text{with } \mathbf{b} \neq 0$$

1.5.3.2 Homogeneous systems: $A\mathbf{x} = 0$.

Properties.
(1) There always exists a solution, called the trivial solution: $\mathbf{x}_0 = 0$.
(2) The solution is unique if and only if $A \in M_n(\mathbb{R})$ is regular, in which case the trivial solution $\mathbf{x}_0 = 0$ is the only solution.
(3) Additivity: if $\mathbf{x}_1, \mathbf{x}_2$ are solutions of $A\mathbf{x} = 0$, then $\mathbf{x}_1 + \mathbf{x}_2$ is also a solution of $A\mathbf{x} = 0$.

Proof. If $\mathbf{x}_1, \mathbf{x}_2$ are solutions of $A\mathbf{x} = 0$, then $A\mathbf{x}_1 = 0$ and $A\mathbf{x}_2 = 0$. Moreover,
$$A(\mathbf{x}_1 + \mathbf{x}_2) = A\mathbf{x}_1 + A\mathbf{x}_2 = 0 + 0 = 0$$
Thus, clearly $\mathbf{x}_1 + \mathbf{x}_2$ is also a solution of $A\mathbf{x} = 0$.

(4) Homogeneity: if $\lambda \in \mathbb{R}$ and $\mathbf{x}_1$ is a solution of $A\mathbf{x} = 0$, then $\lambda\mathbf{x}_1$ is also a solution of $A\mathbf{x} = 0$.
(5) If $\mathbf{x}_i$ are solutions of $A\mathbf{x} = 0$ for $i = 1, \dots, t$, and $\lambda_i \in \mathbb{R}$ for $i = 1, \dots, t$, then the linear combination of solutions $\sum_{i=1}^{t} \lambda_i \mathbf{x}_i$ is also a solution of $A\mathbf{x} = 0$.
(6) If $\text{rank } A \le m < n$, then the system has multiple solutions:
• There are $\text{rank } A$ principal (basic) unknowns.
• There are $n - \text{rank } A$ free (non-basic) unknowns, also called parameters.

Scheme for finding the solution of $A\mathbf{x} = 0$. Let $A \in M_{mn}(\mathbb{R})$ with $m < n$. Transform the matrix $A$ by means of elementary operations into:
• Row echelon (RE) form, and obtain the solution by means of backward substitution.
• Reduced row echelon (RRE) form, from which the general solution is obtained. The following cases can happen:
(1) No null rows appear in the RRE:
$$A \xrightarrow{\text{elementary row operations}} \text{RRE} = [\,I_m \mid R\,]$$
Since matrix equivalence does not alter the solution:
$$A\mathbf{x} = 0 \Leftrightarrow (\text{RRE})\mathbf{x} = 0 \Leftrightarrow [\,I_m \mid R\,] \begin{pmatrix} \mathbf{x}_b \\ \mathbf{x}_1 \end{pmatrix} = 0 \Leftrightarrow I_m \mathbf{x}_b + R\mathbf{x}_1 = 0 \Rightarrow \mathbf{x}_b = -R\mathbf{x}_1$$
where:
• $\mathbf{x}_b$: $m$ principal (basic) unknowns, associated to the canonical base: matrix $I_m$.
• $\mathbf{x}_1$: $n - m$ free (non-basic) unknowns: matrix $R$. The general solution is given in terms of these.
$\text{rank } A = m < n$. The system is non-redundant.
(2) Null rows appear in the RRE:
$$A \xrightarrow{\text{elementary row operations}} \text{RRE} = \begin{bmatrix} I_r & R \\ 0 & 0 \end{bmatrix}$$
Since matrix equivalence does not alter the solution:
$$A\mathbf{x} = 0 \Leftrightarrow (\text{RRE})\mathbf{x} = 0 \Leftrightarrow \begin{bmatrix} I_r & R \\ 0 & 0 \end{bmatrix} \begin{pmatrix} \mathbf{x}_b \\ \mathbf{x}_1 \end{pmatrix} = 0 \Leftrightarrow \begin{cases} I_r \mathbf{x}_b + R\mathbf{x}_1 = 0 \\ 0\mathbf{x}_b + 0\mathbf{x}_1 = 0 \end{cases} \Rightarrow \mathbf{x}_b = -R\mathbf{x}_1$$
where:
• $\mathbf{x}_b$: $r$ principal (basic) unknowns, associated to the canonical base: matrix $I_r$.
• $\mathbf{x}_1$: $n - r$ free (non-basic) unknowns: matrix $R$. The general solution is given in terms of these.
$\text{rank } A = r < m < n$. The system is redundant. There are $m - r$ redundant equations.

1.5.3.3 Non-homogeneous systems: $A\mathbf{x} = \mathbf{b}$ with $\mathbf{b} \neq 0$.

Solution for regular matrices. $A\mathbf{x} = \mathbf{b}$ has a unique solution if and only if $A$ is regular.
(1) Method of the inverse: $\mathbf{x} = A^{-1}\mathbf{b}$
(2) Method of Cramer: let $\det A = \Delta$ be the determinant of the system. Let $\det A_i = \Delta_i$ be the determinant of the matrix obtained by replacing column $i$ of $A$ with $\mathbf{b}$, i.e.:
$$\Delta_i = \det A_i = \begin{vmatrix} a_{11} & \cdots & b_1 & \cdots & a_{1n} \\ a_{21} & \cdots & b_2 & \cdots & a_{2n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & b_n & \cdots & a_{nn} \end{vmatrix}$$
Then, the solution of the system is given by:
$$x_i = \frac{\Delta_i}{\Delta}, \quad \forall i = 1, \dots, n$$
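A minimal sketch of Cramer's method (assuming numpy; cramer is an illustrative name):

import numpy as np

def cramer(A, b):
    """Solve Ax = b for regular A via x_i = Delta_i / Delta."""
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("Delta = det A = 0: A is not regular")
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i of A with b
        x[i] = np.linalg.det(Ai) / delta
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))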


Augmented matrix of the system. $[\,A \mid \mathbf{b}\,] \in M_{m(n+1)}(\mathbb{R})$

Non-regular matrices.
• The system has no solution if and only if:
$$\text{rank } A < \text{rank } [\,A \mid \mathbf{b}\,]$$
• The system has multiple solutions if and only if:
$$\text{rank } A = \text{rank } [\,A \mid \mathbf{b}\,] \le m < n$$
– There are $\text{rank } A$ principal unknowns.
– There are $n - \text{rank } A$ free unknowns.

Solution for non-regular matrices.

(1) No null rows appear in the RRE:
$$[\,A \mid \mathbf{b}\,] \xrightarrow{\text{elementary row operations}} \text{RRE} = [\,I_m \mid R \mid \mathbf{b}_0\,]$$
Since matrix equivalence does not alter the solution:
$$A\mathbf{x} = \mathbf{b} \Leftrightarrow [\,I_m \mid R\,] \begin{pmatrix} \mathbf{x}_b \\ \mathbf{x}_1 \end{pmatrix} = \mathbf{b}_0 \Leftrightarrow I_m \mathbf{x}_b + R\mathbf{x}_1 = \mathbf{b}_0 \Rightarrow \mathbf{x}_b = \mathbf{b}_0 - R\mathbf{x}_1$$
where:
• $\mathbf{x}_b$: $m$ principal (basic) unknowns, associated to the canonical base: matrix $I_m$.
• $\mathbf{x}_1$: $n - m$ free (non-basic) unknowns: matrix $R$.
$\text{rank } A = m = \text{rank } [\,A \mid \mathbf{b}\,]$. There is no redundancy.
(2) Null rows appear in the RRE:
$$[\,A \mid \mathbf{b}\,] \xrightarrow{\text{elementary row operations}} \text{RRE} = \begin{bmatrix} I_r & R & \mathbf{b}_1 \\ 0 & 0 & \mathbf{b}_2 \end{bmatrix}$$
Since matrix equivalence does not alter the solution:
$$A\mathbf{x} = \mathbf{b} \Leftrightarrow \begin{bmatrix} I_r & R \\ 0 & 0 \end{bmatrix} \begin{pmatrix} \mathbf{x}_b \\ \mathbf{x}_1 \end{pmatrix} = \begin{pmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \end{pmatrix} \Leftrightarrow \begin{cases} I_r \mathbf{x}_b + R\mathbf{x}_1 = \mathbf{b}_1 \\ 0\mathbf{x}_b + 0\mathbf{x}_1 = \mathbf{b}_2 \end{cases} \Rightarrow \begin{cases} \mathbf{x}_b = \mathbf{b}_1 - R\mathbf{x}_1 \\ 0 = \mathbf{b}_2 \end{cases}$$
(so the system is consistent only if $\mathbf{b}_2 = 0$), where:
• $\mathbf{x}_b$: $r$ principal (basic) unknowns, associated to the canonical base: matrix $I_r$.
• $\mathbf{x}_1$: $n - r$ free (non-basic) unknowns: matrix $R$. The general solution is given in terms of these.
$\text{rank } A = r < m < n$. The system is redundant. There are $m - r$ redundant equations.

1.5.3.4 About the solution of non-homogeneous systems. The solution $\mathbf{x}_{nh}$ of a non-homogeneous system $A\mathbf{x} = \mathbf{b}$ can be obtained from the solution $\mathbf{x}_h$ of the associated homogeneous system $A\mathbf{x} = 0$ and a particular solution of the non-homogeneous system, which can be expressed as:
$$\mathbf{x}_{nh} = \mathbf{x}_0 + \mathbf{x}_h, \quad \text{with } \mathbf{x}_0 \in \mathbb{R}^n \text{ such that } A\mathbf{x}_0 = \mathbf{b}$$

1.5.3.5 About the identity matrix in the RRE. The identity matrix does not always appear as such, but it is possible to select column vectors of the RRE that form an identity matrix, which determine the basic unknowns in the general solution.

1.5.3.6 About the pivot method. In different application fields of linear systems, for example in linear programming, one exchanges basic unknowns for free unknowns. For that purpose, the so-called pivot method is used.
The basic unknown to be taken out and the free unknown to be put in are selected. The column vector associated to the free unknown is transformed into a unit vector replacing the corresponding one of the basic unknown, thus preserving the general solution scheme.

1.5.3.7 About the regularity (invertibility) of a matrix. The fact that a matrix is regular allows the statement of the equivalence of the following properties, previously discussed:
(1) $A \in M_n(\mathbb{R})$ is regular
(2) $\det A \neq 0$
(3) $A \equiv I_n$
(4) $\text{rank } A = n$
(5) The system $A\mathbf{x} = 0$ has a unique solution, $\mathbf{x} = 0$
(6) The system $A\mathbf{x} = \mathbf{b}$ has a unique solution, $\mathbf{x} = A^{-1}\mathbf{b}$
2 Linear Spaces

2.1 Linear space structure


2.1.1 Definition. Let $\mathbb{R}$ be the real field and let $V \neq \emptyset$ be a set whose elements in general depend on $\mathbb{R}$. Then $V$ is a linear (vector) space over $\mathbb{R}$ if:
(1) There exists in $V$ an equivalence relation, equality, such that for all $x, y, z \in V$ the following properties hold:
Reflexive: $x = x$
Symmetric: $x = y \Leftrightarrow y = x$
Transitive: $x = y \wedge y = z \Rightarrow x = z$
(2) There exists in $V$ an internal binary operation (IBO), usually the addition, expressed as:
$$\text{IBO: } + : V \times V \to V, \quad (x, y) \to x + y$$
such that for all $x, y, z \in V$ the following properties hold:
Associative: $(x + y) + z = x + (y + z)$
Neutral element: $x + \theta = \theta + x = x$, where $\theta \in V$ is the neutral element.
Inverse element: $x + (-x) = (-x) + x = \theta$, where $-x \in V$ is the inverse element of $x \in V$.
Commutativity: $x + y = y + x$

Algebraic structure. $(V, \text{IBO addition}) \equiv (V, +)$ is an abelian group.

(3) There exists in $V$ an external binary operation (EBO), usually the multiplication by a scalar, expressed as:
$$\text{EBO: } \cdot : \mathbb{R} \times V \to V, \quad (\lambda, x) \to \lambda x$$


such that for all $x, y \in V$ and $\lambda, \varphi \in \mathbb{R}$, the following properties hold:

Distributive over the addition of scalars: $(\lambda + \varphi)x = \lambda x + \varphi x$

Distributive over the addition of vectors: $\lambda(x + y) = \lambda x + \lambda y$

Associative / commutative: $(\lambda\varphi)x = \lambda(\varphi x) = (\varphi\lambda)x$

Scalar identity: $1x = x$

Algebraic structure. $(V, \text{IBO addition}, \text{EBO multiplication by scalar})$ is a linear space over $\mathbb{R}$, also called a real linear space.

2.1.2 Examples of real linear spaces.

(1) Let $M_{mn}(\mathbb{R})$ be the set of all real matrices (see Chapter 1):
• Take the definition of equality in $M_{mn}(\mathbb{R})$.
• IBO: $+ : M_{mn}(\mathbb{R}) \times M_{mn}(\mathbb{R}) \to M_{mn}(\mathbb{R})$ and its 4 properties.
• EBO: $\mathbb{R} \times M_{mn}(\mathbb{R}) \to M_{mn}(\mathbb{R})$ and its 4 properties.

Algebraic structure. $(M_{mn}(\mathbb{R}), \text{IBO}, \text{EBO})$ is the linear space of the real matrices.

(2) Let $\mathbb{R}^n$ be the set of real $n$-tuples (a special case of the real matrices):
• Define an equality in $\mathbb{R}^n$. Let $a = (a_1, a_2, \dots, a_n), b = (b_1, b_2, \dots, b_n) \in \mathbb{R}^n$. Then:
$$a = b \Leftrightarrow a_i = b_i, \ \forall i = 1, \dots, n$$
• Define the addition as IBO in $\mathbb{R}^n$. Let $a = (a_1, a_2, \dots, a_n), b = (b_1, b_2, \dots, b_n) \in \mathbb{R}^n$. Then:
$$\text{IBO: } + : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n, \quad (a, b) \to a + b = (a_1 + b_1, a_2 + b_2, \dots, a_n + b_n)$$
and its 4 properties.

Algebraic structure. $(\mathbb{R}^n, \text{IBO})$ is an abelian group.

• Define the multiplication by a scalar as EBO in $\mathbb{R}^n$. Let $a = (a_1, a_2, \dots, a_n) \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$. Then:
$$\text{EBO: } \cdot : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n, \quad (\lambda, a) \to \lambda a = (\lambda a_1, \lambda a_2, \dots, \lambda a_n)$$
and its 4 properties.

Algebraic structure. $(\mathbb{R}^n, \text{IBO}, \text{EBO})$ is the linear space $\mathbb{R}^n$.
(3) Let $\mathcal{F}(A)$ be the set of the real functions of a real variable with domain $A$:
$$\mathcal{F}(A) = \{f : A \subseteq \mathbb{R} \to \mathbb{R} \mid f \text{ is a function}\}$$
where $A$ is the domain of $f$, denoted $\mathrm{dom}\, f = A$. Then:
• Define an equality in $\mathcal{F}(A)$. Let $f, g \in \mathcal{F}(A)$. Then:
$$f = g \Leftrightarrow f(x) = g(x), \ \forall x \in A$$
• Define the addition as IBO in $\mathcal{F}(A)$:
$$\text{IBO: } + : \mathcal{F}(A) \times \mathcal{F}(A) \to \mathcal{F}(A), \quad (f, g) \to (f + g)(x) = f(x) + g(x), \ \forall x \in A$$
such that for all $f, g, h \in \mathcal{F}(A)$ the following properties hold:
Associative: $(f + g) + h = f + (g + h)$
Neutral element (additive neutral): $f + \theta = \theta + f = f$, where $\theta \in \mathcal{F}(A)$ is the neutral element such that $\theta(x) = 0$, $\forall x \in A$.
Inverse element (additive inverse): $f + e = e + f = \theta$, where $e \in \mathcal{F}(A)$ is the inverse element of $f \in \mathcal{F}(A)$ such that $e(x) = -f(x)$, $\forall x \in A$.
Commutativity: $f + g = g + f$

Algebraic structure. $(\mathcal{F}(A), \text{IBO})$ is an abelian group.

• Define the multiplication by a scalar as EBO in $\mathcal{F}(A)$:
$$\text{EBO: } \cdot : \mathbb{R} \times \mathcal{F}(A) \to \mathcal{F}(A), \quad (\lambda, f) \to (\lambda f)(x) = \lambda f(x)$$
with the following properties: for all $f, g \in \mathcal{F}(A)$ and for all $\lambda, \varphi \in \mathbb{R}$:
(a) $(\lambda + \varphi)f = \lambda f + \varphi f$
(b) $\lambda(f + g) = \lambda f + \lambda g$
(c) $(\lambda\varphi)f = \lambda(\varphi f) = \varphi(\lambda f)$
(d) $1f = f$, with $1 \in \mathbb{R}$

Algebraic structure. $(\mathcal{F}(A), \text{IBO}, \text{EBO})$ is the linear space of the functions of a real variable.
(4) Let $P_n^*(x)$ be the set of polynomials of degree less than or equal to $n$:
$$P_n^* = \left\{ \sum_{k=0}^{n} a_k x^k \;\middle|\; a_k \in \mathbb{R}, \ \forall k = 0, \dots, n \right\}$$
Then:
• Define an equality in $P_n^*(x)$. Let $P_1(x) = \sum_{k=0}^{n} a_k x^k$, $P_2(x) = \sum_{k=0}^{n} b_k x^k \in P_n^*(x)$. Then:
$$P_1(x) = P_2(x) \Leftrightarrow a_k = b_k, \ \forall k = 0, \dots, n$$
• Define the addition as IBO in $P_n^*(x)$. Let $P_1(x) = \sum_{k=0}^{n} a_k x^k$, $P_2(x) = \sum_{k=0}^{n} b_k x^k \in P_n^*(x)$. Then:
$$\text{IBO: } + : P_n^*(x) \times P_n^*(x) \to P_n^*(x), \quad (P_1(x), P_2(x)) \to P_1(x) + P_2(x) = \sum_{k=0}^{n} (a_k + b_k) x^k$$
and its 4 properties.

Algebraic structure. $(P_n^*(x), \text{IBO})$ is an abelian group.

• Define the multiplication by a scalar as EBO in $P_n^*(x)$. Let $P_0(x) = \sum_{k=0}^{n} c_k x^k \in P_n^*(x)$. Then:
$$\text{EBO: } \cdot : \mathbb{R} \times P_n^*(x) \to P_n^*(x), \quad (\lambda, P_0(x)) \to \lambda P_0(x) = \sum_{k=0}^{n} \lambda c_k x^k$$
with its 4 properties.

Algebraic structure. $(P_n^*(x), \text{IBO}, \text{EBO})$ is the linear space of the real polynomials of degree less than or equal to $n$.

2.2 Linear subspaces


2.2.1 Definition. Let $V$ be a linear space over $\mathbb{R}$, and let $U \subseteq V$ with $U \neq \emptyset$. Then $U$ is a linear subspace of $V$ if $U$, with the internal binary operation (IBO) of $V$ and with the external binary operation (EBO) of $V$, is a linear space. Equivalently, $U$ is a linear subspace of $V$ if:
(1) $\theta \in U \Rightarrow U \neq \emptyset$
(2) Additivity: $\forall x_1, x_2 \in U \Rightarrow (x_1 + x_2) \in U$
(3) Homogeneity: $\forall x_1 \in U$ and $\forall \lambda \in \mathbb{R} \Rightarrow (\lambda x_1) \in U$

Algebraic structure. $(U, \text{IBO}, \text{EBO})$ is a linear subspace of $V$.

2.2.1.1 Examples of linear subspaces.


(1) In the linear space of the square matrices Mn (R):
• Subspace of the diagonal matrices.
• Subspace of the triangular matrices.
• Subspace of the symmetric matrices.
• Subspace of the anti-symmetric matrices.
(2) In the linear space of the real functions of real variable F(A):
• Subspace of the even functions.
• Subspace of the odd functions.
• Subspace of the continuous functions.

2.2.2 Linear varieties. Let $V$ be a linear space over $\mathbb{R}$. A linear variety in $V$ is defined as:
$$L = X_0 + U, \quad \text{with } X_0 \in V \text{ and } U \text{ a subspace of } V$$
where:
• $X_0$: support of $L$
• $U$: base space (subspace) of $L$
2.2.2.1 Properties.
(1) If $U = \{\theta\}$, then $L = \{X_0\}$ with $X_0 \in V$, which means that every element of $V$ is a linear variety.
(2) If $X_0 = \theta$, then $L = U$, which means that every subspace of $V$ is a linear variety.
(3) A linear variety is a subspace of $V$ if and only if $\theta \in L$.
(4) Every element of $L$ can be taken as a support.
(5) Every linear variety is uniquely determined by its base space (subspace).
(6) The intersection of two linear varieties $L_1$ and $L_2$ of $V$ is either the empty set or a linear variety having as base subspace the intersection of the base subspaces.

2.2.2.2 Linear systems as linear varieties. The homogeneous system $A\mathbf{x} = 0$ is associated to the linear system $A\mathbf{x} = \mathbf{b}$. If $\mathbf{x}_0$ is a particular solution of $A\mathbf{x} = \mathbf{b}$, and if $\mathbf{s}$ is the general solution of $A\mathbf{x} = 0$, then $\mathbf{s} + \mathbf{x}_0$ is the general solution of $A\mathbf{x} = \mathbf{b}$, where:
• $\mathbf{s} + \mathbf{x}_0$ is a linear variety.
• $\mathbf{s}$ is the base subspace of the variety.
• $\mathbf{x}_0$ is the support of the variety.
Therefore, it can be concluded that the solution of a non-homogeneous linear system is a linear variety, and the solution of a homogeneous linear system is a linear subspace.

2.2.3 Obtaining linear subspaces. Let $V$ be a linear space over $\mathbb{R}$.

2.2.3.1 Generating subspace. Let $S = \{V_1, \dots, V_n\} \subset V$ and $\lambda_i \in \mathbb{R}$. The set of the linear combinations of elements of $S$ forms a subspace of $V$. In other words:
$$g(S) = \left\{ \sum_{i=1}^{n} \lambda_i V_i \right\} \text{: the linear combinations of the elements of } S$$
where:
• $g(S)$: subspace generated by $S$
• $S$: generating set
Proof. (1) $\theta \in g(S)$.
Proof. $\theta = \sum_{i=1}^{n} \lambda_i V_i$ with $\lambda_i = 0$ for $i = 1, \dots, n$; therefore $\theta \in g(S)$.
(2) Additivity: Let $X = \sum_{i=1}^{n} \lambda_i V_i$, $Y = \sum_{i=1}^{n} \varphi_i V_i \in g(S)$. Then $(X + Y) \in g(S)$.

Proof.
$$X + Y = \sum_{i=1}^{n} \lambda_i V_i + \sum_{i=1}^{n} \varphi_i V_i = \sum_{i=1}^{n} (\lambda_i + \varphi_i) V_i$$
$$\therefore X + Y = \sum_{i=1}^{n} \psi_i V_i \quad \text{with } \psi_i = \lambda_i + \varphi_i \in \mathbb{R} \text{ for } i = 1, \dots, n$$
$$\therefore (X + Y) \in g(S)$$

(3) Homogeneity: Let $X = \sum_{i=1}^{n} \lambda_i V_i \in g(S)$ and $\omega \in \mathbb{R}$. Then $(\omega X) \in g(S)$.

Proof.
$$\omega X = \omega \sum_{i=1}^{n} \lambda_i V_i = \sum_{i=1}^{n} \omega \lambda_i V_i$$
$$\therefore \omega X = \sum_{i=1}^{n} \psi_i V_i \quad \text{with } \psi_i = \omega\lambda_i \in \mathbb{R} \text{ for } i = 1, \dots, n$$
$$\therefore (\omega X) \in g(S)$$
Therefore, as has been shown, $g(S) = \left\{ \sum_{i=1}^{n} \lambda_i V_i \right\}$ is a subspace of $V$.

2.2.3.2 Operations with subspaces. Let $V$ be a linear space over $\mathbb{R}$, and let $U_1$ and $U_2$ be subspaces of $V$.

Intersection of subspaces.
$$U_1 \cap U_2 = \{X \mid X \in U_1 \wedge X \in U_2\}$$
is a subspace of $V$. Generalizing, let $U_i$ be subspaces of $V$ for $i = 1, \dots, m$. Then:
$$\bigcap_{i=1}^{m} U_i = \{X \mid X \in U_i, \ \forall i = 1, \dots, m\}$$
is a subspace of $V$.

Addition of subspaces.
$$U_1 + U_2 = \{X \mid X = X_1 + X_2 \text{ with } X_1 \in U_1 \wedge X_2 \in U_2\}$$
is a subspace of $V$. Generalizing, let $U_i$ be subspaces of $V$ for $i = 1, \dots, m$. Then:
$$\sum_{i=1}^{m} U_i = \left\{ X \;\middle|\; X = \sum_{i=1}^{m} X_i \text{ with } X_i \in U_i, \ \forall i = 1, \dots, m \right\}$$
is a subspace of $V$.

Direct addition of subspaces.

$$U = U_1 \oplus U_2 = \{X \mid X = X_1 + X_2 \text{ with } X_1 \in U_1 \wedge X_2 \in U_2 \text{ unique for each } X \in U\}$$
is a subspace of $V$. In other words:
$$U = U_1 \oplus U_2 \Leftrightarrow U = U_1 + U_2 \wedge U_1 \cap U_2 = \{\theta\}$$
Proof. (1) $U_1 + U_2$ is a subspace of $V$.

Proof. (a) $\theta \in U_1 + U_2$.

Proof. $\theta \in U_1$ and $\theta \in U_2$ since $U_1$ and $U_2$ are subspaces.
$$\therefore \theta = \theta + \theta \Rightarrow \theta \in U_1 + U_2$$
(b) Additivity: $X, Y \in (U_1 + U_2) \Rightarrow (X + Y) \in (U_1 + U_2)$.

Proof.
$X = X_1 + X_2$ with $X_1 \in U_1$ and $X_2 \in U_2$
$Y = Y_1 + Y_2$ with $Y_1 \in U_1$ and $Y_2 \in U_2$
$X + Y = (X_1 + X_2) + (Y_1 + Y_2) \therefore X + Y = (X_1 + Y_1) + (X_2 + Y_2)$
But $(X_1 + Y_1) \in U_1$ and $(X_2 + Y_2) \in U_2$, which implies $(X + Y) \in (U_1 + U_2)$.

(c) Homogeneity: $X \in (U_1 + U_2) \wedge \alpha \in \mathbb{R} \Rightarrow (\alpha X) \in (U_1 + U_2)$.

Proof.
$X = X_1 + X_2$ with $X_1 \in U_1$ and $X_2 \in U_2$
$\alpha X = \alpha X_1 + \alpha X_2$
But $(\alpha X_1) \in U_1$ and $(\alpha X_2) \in U_2$, which implies $(\alpha X) \in (U_1 + U_2)$.

Therefore $U_1 + U_2$ is a subspace of $V$.

(2) $U_1 \cap U_2 = \{\theta\}$.

Proof. By contradiction. Assume $U_1 \cap U_2 \neq \{\theta\}$. Then there exists at least one $X \in (U_1 \cap U_2)$ with $X \in (U_1 + U_2)$ and $X \neq \theta$. Therefore,
$X = X_1 + \theta$, with $X_1 \in U_1$ and $\theta \in U_2$, since $X \in U_1$
$X = \theta + X_2$, with $\theta \in U_1$ and $X_2 \in U_2$, since $X \in U_2$
Therefore $X$ can be written in two different forms, which implies that the sum is not direct; in consequence the assumption $U_1 \cap U_2 \neq \{\theta\}$ is false, therefore $U_1 \cap U_2 = \{\theta\}$.
Therefore, $U = U_1 \oplus U_2 \Leftrightarrow U = U_1 + U_2 \wedge U_1 \cap U_2 = \{\theta\}$ is a subspace of $V$.

Generalizing, let $U_i$ be subspaces of $V$ for $i = 1, \dots, m$. Then
$$\bigoplus_{i=1}^{m} U_i = \left\{ X \;\middle|\; X = \sum_{i=1}^{m} X_i \text{ with } X_i \in U_i \text{ unique for each } X \in U, \ \forall i = 1, \dots, m \right\}$$
is a subspace of $V$. This can also be expressed as:
$$U = \bigoplus_{i=1}^{m} U_i \Leftrightarrow U = \sum_{i=1}^{m} U_i \wedge \bigcap_{i=1}^{m} U_i = \{\theta\}$$

Examples of direct addition of subspaces.

• $M_n(\mathbb{R}) = (\text{subspace of symmetric matrices}) \oplus (\text{subspace of anti-symmetric matrices})$
• $\mathcal{F}(A) = (\text{subspace of even functions}) \oplus (\text{subspace of odd functions})$

Cartesian product of subspaces. Let $V$ be a linear space over $\mathbb{R}$, and let $U_1$ and $U_2$ be subspaces of $V$:
$$U = U_1 \times U_2 = \{(X_1, X_2) \mid X_1 \in U_1 \wedge X_2 \in U_2\}$$
Then $U$ is a subspace of $V \times V \equiv V^2$. Generalizing, let $U_i$ be subspaces of $V$ for $i = 1, \dots, m$:
$$U = \times_{i=1}^{m} U_i = \{(X_1, \dots, X_m) \mid X_i \in U_i, \ \forall i = 1, \dots, m\}$$
Then $U$ is a subspace of $V^m$.
Proof. (1) $\theta_m = (\theta, \dots, \theta) \in U$.

Proof. Since $\theta \in U_i$, $\forall i = 1, \dots, m$, because the $U_i$ are subspaces, then $\theta_m = (\theta, \dots, \theta) \in U$.

(2) Additivity: $X, Y \in U \Rightarrow (X + Y) \in U$.

Proof. Let:
$X = (X_1, \dots, X_m)$ with $X_i \in U_i$, $\forall i = 1, \dots, m$
$Y = (Y_1, \dots, Y_m)$ with $Y_i \in U_i$, $\forall i = 1, \dots, m$
Then:
$$X + Y = (X_1, \dots, X_m) + (Y_1, \dots, Y_m) = (X_1 + Y_1, \dots, X_m + Y_m)$$
Since $(X_i + Y_i) \in U_i$, $\forall i = 1, \dots, m$, because the $U_i$ are subspaces, then $(X + Y) \in U$.

(3) Homogeneity: $X \in U \wedge \alpha \in \mathbb{R} \Rightarrow (\alpha X) \in U$.

Proof. Let:
$X = (X_1, \dots, X_m)$ with $X_i \in U_i$, $\forall i = 1, \dots, m$
Then:
$$\alpha X = (\alpha X_1, \dots, \alpha X_m)$$
Since $\alpha X_i \in U_i$, $\forall i = 1, \dots, m$, because the $U_i$ are subspaces, then $\alpha X \in U$.
Therefore, as has been shown, $U$ is a subspace of $V^m$.

2.2.4 Dimension and base of a linear space. Let $V$ be a linear space over $\mathbb{R}$. Let $S = \{V_1, \dots, V_n\} \subset V$ with $S \neq \emptyset$, and let $\lambda_i \in \mathbb{R}$, $i = 1, \dots, n$. Then, by the properties of a linear space in $V$, it is possible to postulate:
$$\sum_{i=1}^{n} \lambda_i V_i = \theta$$
which always leads to a homogeneous system, where:
• $V_i \in S$: are known and determine (form) the matrix of coefficients.
• $\lambda_i \in \mathbb{R}$: are the variables of the system.

2.2.4.1 Linear independence. If the system $\sum_{i=1}^{n} \lambda_i V_i = \theta$ has a unique solution, i.e. $\lambda_i = 0$ for $i = 1, \dots, n$, then $S = \{V_1, \dots, V_n\}$ is linearly independent in $V$.

Properties.
(1) Every subset of a linearly independent set is linearly independent.
(2) Let $S \subset V$ be linearly independent. Then $S \cup \{X\}$ is linearly independent if and only if $X \notin g(S)$, with $X \in V$.
2.2.4.2 Linear dependence. If the system $\sum_{i=1}^{n} \lambda_i V_i = \theta$ has multiple solutions, i.e. there exists $\lambda_t \neq 0$ with $1 \le t \le n$, then $S = \{V_1, \dots, V_n\}$ is linearly dependent in $V$.

Properties.
(1) Every subset $S$ of $V$ for which $\theta \in S$ is linearly dependent.
(2) Every set containing a linearly dependent set is linearly dependent.
(3) In a linearly dependent set, at least one of the elements can be expressed as a linear combination of the other elements in the set. In other words, if $\sum_{i=1}^{n} \lambda_i V_i = \theta$ with $\lambda_t \neq 0$ for $1 \le t \le n$, then:
$$\sum_{i=1}^{n} \lambda_i V_i = \theta \therefore \sum_{i \neq t}^{n} \lambda_i V_i + \lambda_t V_t = \theta \Rightarrow V_t = \sum_{i \neq t}^{n} \omega_i V_i \quad \text{with } \omega_i = -\frac{\lambda_i}{\lambda_t} \in \mathbb{R}$$

2.2.5 Wronski matrix. Let $C^n(I)$ be the set of the real continuous functions with continuous derivatives up to order $n$ on the open and finite interval $I \subset \mathbb{R}$:
$$C^n(I) = \{f \in \mathcal{F}(A) \mid f \text{ is continuous and has continuous derivatives up to order } n \text{ in } I \subset \mathbb{R}\}$$
Then $C^n(I)$ is a subspace of $\mathcal{F}(A)$, since from differential calculus it is known that:
(1) $\theta(x) \in C^n(I) \Rightarrow C^n(I) \neq \emptyset$.
(2) Additivity: $f_1, f_2 \in C^n(I) \Rightarrow (f_1 + f_2) \in C^n(I)$.
(3) Homogeneity: $f \in C^n(I) \wedge \lambda \in \mathbb{R} \Rightarrow (\lambda f) \in C^n(I)$.
It is of relevance to determine whether $\{f_1, \dots, f_n\} \subset C^{n-1}(I)$ is linearly dependent or independent in the subspace $C^{n-1}(I)$. In other words, it is desired to determine whether:
$$\sum_{i=1}^{n} \lambda_i f_i = \theta \Rightarrow \lambda_i = 0, \ \forall i = 1, \dots, n, \quad \text{with } \lambda_i \in \mathbb{R} \text{ and } f_i \in C^{n-1}(I), \ \forall i = 1, \dots, n$$
In order to achieve that, the following system is formed, where $f^{(i)}$ denotes the $i$-th derivative and $f^{(0)} = f$:
$$\begin{aligned} \lambda_1 f_1^{(0)} + \lambda_2 f_2^{(0)} + \dots + \lambda_n f_n^{(0)} &= \theta \\ \lambda_1 f_1^{(1)} + \lambda_2 f_2^{(1)} + \dots + \lambda_n f_n^{(1)} &= \theta \\ &\ \vdots \\ \lambda_1 f_1^{(n-1)} + \lambda_2 f_2^{(n-1)} + \dots + \lambda_n f_n^{(n-1)} &= \theta \end{aligned}$$
This is a system of $n$ equations with $n$ variables, formed by real continuous functions with continuous derivatives up to order $n - 1$. If the functions and each of their derivatives are evaluated at $x = x_0 \in I$, it becomes a real homogeneous system which can be expressed in matrix form as:
$$W_{f(x_0)}\, \mathbf{a} = \theta$$
where
$$W_{f(x_0)} = \begin{pmatrix} f_1^{(0)}(x_0) & f_2^{(0)}(x_0) & \cdots & f_n^{(0)}(x_0) \\ f_1^{(1)}(x_0) & f_2^{(1)}(x_0) & \cdots & f_n^{(1)}(x_0) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x_0) & f_2^{(n-1)}(x_0) & \cdots & f_n^{(n-1)}(x_0) \end{pmatrix}$$
is the so-called Wronski matrix:
$$W_{f(x_0)} = \left( f_j^{(i)}(x_0) \right) \in M_n(\mathbb{R}), \quad \text{where } i = 0, \dots, n-1 \text{ refers to the derivative order and } j = 1, \dots, n \text{ refers to the set order}$$
The determinant of the Wronski matrix is called the Wronskian of the set.
$$\mathbf{a} = \begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n \end{pmatrix} \in M_{n1}(\mathbb{R}) \quad \text{vector of variables}$$
Therefore, the system $W_{f(x_0)}\, \mathbf{a} = \theta$ determines the linear dependence or independence of the set of functions.

Possible cases.
(1) If $W_{f(x_0)}$ is regular, i.e. if $W_{f(x_0)}^{-1}$ exists, or equivalently if $\det W_{f(x_0)} \neq 0$ (i.e. the Wronskian of the set does not vanish), then the system has a unique solution and the set of functions is linearly independent in $C^n(I)$. In other words,
$$\det(W_{f(x_0)}) \neq 0 \Rightarrow \mathbf{a} = \theta$$
$$\therefore \lambda_j = 0, \ \forall j = 1, \dots, n \Rightarrow \{f_1, \dots, f_n\} \text{ are linearly independent in } C^n(I)$$
Therefore, if there exists an $x_0 \in I$ such that $W_{f(x_0)}$ is regular, then the function set is linearly independent in $C^n(I)$, which can also be expressed as:
$$\exists x_0 \in I \mid \det(W_{f(x_0)}) \neq 0 \Rightarrow \{f_1, \dots, f_n\} \text{ are linearly independent in } C^n(I)$$
(2) If $W_{f(x_0)}$ is not regular for any $x_0 \in I$, i.e. $\det(W_{f(x_0)}) = 0$ for every $x_0 \in I$, then nothing can be stated about the linear dependence or independence of the function set.
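A minimal sketch of building the Wronski matrix symbolically (assuming sympy; the set {sin x, cos x} is an arbitrary example):

from sympy import Matrix, simplify, sin, cos, symbols

x = symbols('x')
fs = [sin(x), cos(x)]  # the set {f_1, ..., f_n}
n = len(fs)
# Entry (i, j): the i-th derivative of f_{j+1}; row = derivative order, column = set order
W = Matrix(n, n, lambda i, j: fs[j].diff(x, i))
wronskian = simplify(W.det())
print(wronskian)  # -sin(x)**2 - cos(x)**2 = -1, non-null for every x0 in I
# det W_f(x0) != 0 for some x0, hence {sin x, cos x} is linearly independent.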

2.2.6 Base of a linear space. The set $S = \{V_1, V_2, \dots, V_n\} \subset V$ is a base of the linear space $V$ if and only if:

(1) $S$ is a generator of $V$, i.e. $g(S) = V \Leftrightarrow \forall x \in V\ \exists \lambda_i \in \mathbb{R}$ such that $x = \sum_{i=1}^{n} \lambda_i V_i$.
(2) $S$ is linearly independent in $V$, i.e. $\sum_{i=1}^{n} \lambda_i V_i = \theta \Rightarrow \lambda_i = 0$, $\forall i = 1, \dots, n$.

2.2.6.1 Properties.
(1) If $S = \{V_1, V_2, \dots, V_n\}$ is a base of $V$, then every element of $V$ can be uniquely expressed as a linear combination of the elements of $S$.

Proof. Uniqueness problem, proven by contradiction. Suppose that $x \in V$ can be expressed in at least two different forms as linear combinations of the elements of $S$. Let the following be:
$$x = \sum_{i=1}^{n} \lambda_i V_i, \quad \text{with } \lambda_i \in \mathbb{R}, \ \forall i = 1, \dots, n \quad (2.1)$$
$$x = \sum_{i=1}^{n} \varphi_i V_i, \quad \text{with } \varphi_i \in \mathbb{R}, \ \forall i = 1, \dots, n \quad (2.2)$$
with $\lambda_i \neq \varphi_i$ for some $i$. From (2.1) and (2.2):
$$\sum_{i=1}^{n} (\lambda_i - \varphi_i) V_i = x - x = \theta$$
Since $S = \{V_1, V_2, \dots, V_n\}$ is linearly independent:
$$\lambda_i - \varphi_i = 0 \therefore \lambda_i = \varphi_i, \ \forall i = 1, \dots, n$$
which contradicts the assumption that $\lambda_i \neq \varphi_i$ for some $i$.

(2) If $S = \{V_1, V_2, \dots, V_n\}$ is a base of $V$, then every element of $V$ is a linear combination of the elements of $S$, since $g(S) = V$.

2.2.6.2 Vector of coordinates. The coefficients of the above-mentioned linear combination determine an element of $\mathbb{R}^n$ called the vector of coordinates of $x$ in the base $S$, which can be written as:
$$(x_S) = (\lambda_1, \lambda_2, \dots, \lambda_n)$$
which is unique for each $x \in V$.

2.2.7 Dimension of a linear space. Let $V$ be a linear space over $\mathbb{R}$ with $V \neq \emptyset$. If $S = \{V_1, V_2, \dots, V_n\}$ is a base of $V$, then:
$$\dim V = n, \quad \text{with } n \in \mathbb{N} \text{ the cardinal of a finite base of } V$$
$\dim V = \infty$ means that there does not exist $n \in \mathbb{N}$ such that the cardinal of a base of $V$ is $n$.
By definition $\dim\{\theta\} = 0$, since there does not exist a linearly independent set that generates the null subspace.

2.2.8 Examples of dimension and base.

(1) A (canonical) base of $M_{mn}(\mathbb{R})$ is obtained by setting 1 in each position $ij$ and zeros elsewhere, i.e.:
$$B_{M_{mn}(\mathbb{R})} = \left\{ \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \dots, \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \right\}$$
thus obtaining $mn$ different matrices that are linearly independent and generate $M_{mn}(\mathbb{R})$, i.e.
$$\dim M_{mn}(\mathbb{R}) = mn$$
(2) A (canonical) base of the diagonal matrices $D_n(\mathbb{R})$ is obtained by setting 1 in each position $ii$ and zeros elsewhere, i.e.:
$$B_{D_n(\mathbb{R})} = \left\{ \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, \dots, \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \right\}$$
thus obtaining $n$ different matrices that are linearly independent and generate $D_n(\mathbb{R})$, i.e.
$$\dim D_n(\mathbb{R}) = n$$
(3) A (canonical) base of $P_n^*(x)$ is given by:
$$B_{P_n^*(x)} = \{1, x, x^2, \dots, x^n\}$$
Therefore,
$$\dim P_n^*(x) = n + 1$$
(4) The space $\mathcal{F}(A)$ is of infinite (non-denumerable) dimension.

Property. Let $V$ be a linear space over $\mathbb{R}$ with $\dim V = n$. Let $P \subset V$ with $m$ elements and $m \neq n$. Then:
(1) If $m > n$, then $P$ is linearly dependent.
(2) If $m < n$, then $P$ does not generate $V$.

Property. Every base of V has n elements.

Property. The dimension of a space (subspace) is unique.

2.3 Complementary subspaces


2.3.1 Base completion. Let $V$ be a linear space over $\mathbb{R}$ with $\dim V = n$ finite. If $S = \{V_1, V_2, \dots, V_t\}$ is a linearly independent set in $V$ with $t < n$, then $S$ is not a base of $V$, i.e. it does not generate $V$. It is possible, however, to generate a base for $V$ from $S$, by the following procedure:
• $g(S) = \left\{ \sum_{i=1}^{t} \lambda_i V_i \right\}$ is a subspace of $V$.
• $\dim g(S) = t$, and a base for $g(S)$ is $S$, since every linearly independent generator is a base of the generated subspace.
• Take $V_{t+1} \notin g(S)$; then $S \cup \{V_{t+1}\}$ is linearly independent in $V$.
• If $S \cup \{V_{t+1}\}$ is maximal linearly independent, then the procedure is finished. If not, then:
• Take $V_{t+2} \notin g(S \cup \{V_{t+1}\})$, and so on, until obtaining a maximal linearly independent set in $V$. Maximal means that if another element is added to the already linearly independent set, it turns linearly dependent.
Thus, $S' = \{V_{t+1}, V_{t+2}, \dots, V_n\}$ is the set of elements that should be added to $S$ to complete a base for the space $V$, and it contains $n - t$ elements.
From the above procedure, it can be concluded that:
(1) Since $S$ and $S'$ are linearly independent in $V$:
• $g(S)$ has $S$ as base, and $\dim g(S) = t$.
• $g(S')$ has $S'$ as base, and $\dim g(S') = n - t$.
(2) Since $S \cup S'$ is a base completion of $V$:
• $V$ has $S \cup S'$ as base, and $\dim V = n = \dim g(S) + \dim g(S')$.
• $S \cap S' = \emptyset$ and $g(S) \cap g(S') = \{\theta\}$, and as such, $V = g(S) \oplus g(S')$.
• $g(S)$ and $g(S')$ are complementary subspaces of $V$. This can be written as:
$$C_V\, g(S) = g(S') \quad \text{and} \quad C_V\, g(S') = g(S)$$

2.3.2 Dimension of subspaces. If $U_1, U_2$ are subspaces of $V$, with $\dim V = n$, and $n$ is finite, then:
• $\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$.

Proof. $U_1 \cap U_2$ is a subspace of $U_1$ and of $U_2$, and therefore the complement with respect to each subspace exists. Let $U'$ be the complement of $U_1 \cap U_2$ with respect to $U_2$, i.e. $C_{U_2}(U_1 \cap U_2) = U'$. From the base completion procedure it is obtained that:
$$\dim U' + \dim(U_1 \cap U_2) = \dim U_2 \quad (2.3)$$

But $U'$ is also the complement of $U_1$ with respect to $U_1 + U_2$, i.e. $C_{(U_1 + U_2)}\, U_1 = U'$, therefore:
$$\dim U' + \dim U_1 = \dim(U_1 + U_2) \quad (2.4)$$
From equations (2.3) and (2.4):
$$\dim(U_1 + U_2) - \dim U_1 = \dim U_2 - \dim(U_1 \cap U_2)$$
therefore:
$$\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2)$$

• $\dim(U_1 \oplus U_2) = \dim U_1 + \dim U_2$.

Proof. If $U_1 \cap U_2 = \{\theta\}$, then $\dim(U_1 \cap U_2) = \dim\{\theta\} = 0$, and $U_1 + U_2$ becomes $U_1 \oplus U_2$. Therefore:
$$\dim(U_1 + U_2) = \dim(U_1 \oplus U_2) \therefore \dim(U_1 \oplus U_2) = \dim U_1 + \dim U_2$$

• $\dim(U_1 \times U_2) = \dim U_1 + \dim U_2$.

• $\dim L = \dim(X_0 + U_1) = \dim U_1$.

2.3.3 Subspaces associated to a matrix. Let $A \in M_{mn}(\mathbb{R})$. Then $A$ can be expressed as:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_m \end{pmatrix} = \begin{pmatrix} C_1 & C_2 & \cdots & C_n \end{pmatrix}$$
with $R_i \in \mathbb{R}^n$ for $i = 1, \dots, m$, and $C_j \in \mathbb{R}^m$ for $j = 1, \dots, n$.

2.3.3.1 Row subspace of the matrix. Corresponds to the linear combinations of the rows of the matrix, i.e.:
$$g(\text{row}) = \left\{ \sum_{i=1}^{m} \lambda_i R_i \right\}, \quad \text{with } \lambda_i \in \mathbb{R}, \text{ for } i = 1, \dots, m$$
$$\dim g(\text{row}) = \text{row rank of } A = \text{rank } A$$
A base for this subspace is given by the non-null rows of the row echelon form or of the reduced row echelon form of $A$, and is obtained by bringing the matrix to its row echelon or its reduced row echelon form.

2.3.3.2 Right-null subspace of the matrix. The solution of the homogeneous system $A\mathbf{x} = \theta$ has dimension:
$$\dim \text{sol}(A\mathbf{x} = \theta) = n - \text{rank } A$$
which gives the number of free variables in the system. A base for this subspace is given by the vectors associated to the free variables in the general solution of the system, and is obtained by solving the homogeneous matrix system $A\mathbf{x} = \theta$ and expressing the basic variables in terms of the free variables.
The row subspace and the right-null subspace of $A$ are (orthogonal) complements in $\mathbb{R}^n$, i.e.:
$$\mathbb{R}^n = g(\text{row}) \oplus \text{sol}(A\mathbf{x} = \theta)$$

2.3.3.3 Column subspace of the matrix. Corresponds to the linear combinations of the columns of the matrix, i.e.:
$$g(\text{col}) = \left\{ \sum_{j=1}^{n} \varphi_j C_j \right\}, \quad \text{with } \varphi_j \in \mathbb{R}, \text{ for } j = 1, \dots, n$$
$$\dim g(\text{col}) = \text{column rank of } A = \text{rank } A$$
A base for this subspace is given by the non-null columns of the column echelon form or of the reduced column echelon form of $A$, and is obtained by bringing the matrix to its column echelon or its reduced column echelon form.

2.3.3.4 Left-null subspace of the matrix. Consider the solution of the homogeneous system $\mathbf{y}^T A = \theta^T$. The following holds:
$$\mathbf{y}^T A = \theta^T \therefore (\mathbf{y}^T A)^T = (\theta^T)^T \therefore A^T \mathbf{y} = \theta$$
Therefore, it is equivalent to the solution of the system $A^T \mathbf{y} = \theta$, which has dimension:
$$\dim \text{sol}(A^T \mathbf{y} = \theta) = m - \text{rank } A$$
which gives the number of free variables in the system. A base for this subspace is given by the vectors associated to the free variables in the general solution of the system, and is obtained by solving the homogeneous matrix system $A^T \mathbf{y} = \theta$ and expressing the basic variables in terms of the free variables.
The column subspace and the left-null subspace of $A$ are (orthogonal) complements in $\mathbb{R}^m$, i.e.:
$$\mathbb{R}^m = g(\text{col}) \oplus \text{sol}(\mathbf{y}^T A = \theta^T) = g(\text{col}) \oplus \text{sol}(A^T \mathbf{y} = \theta)$$
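A minimal sketch checking these dimension relations (assuming sympy, whose nullspace() method returns a basis of the right-null subspace; the matrix is an arbitrary example):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])        # m = 2, n = 3; the rows are proportional
m, n = A.shape
r = A.rank()                   # rank A = 1
right_null = A.nullspace()     # basis of sol(Ax = theta)
left_null = A.T.nullspace()    # basis of sol(A^T y = theta)

assert len(right_null) == n - r == 2  # dim sol(Ax = theta) = n - rank A
assert len(left_null) == m - r == 1   # dim sol(A^T y = theta) = m - rank A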

2.4 Linear algebra

Let $V$ be a finite dimensional linear space over $\mathbb{R}$. Define in $V$ a new internal binary operation, multiplication, as:
$$\text{IBO: } * : V \times V \to V, \quad (x, y) \to x * y$$
with the following properties for all $x, y, z \in V$, and for all $\lambda \in \mathbb{R}$:

Associative: $(x * y) * z = x * (y * z)$

Distributive with respect to the IBO in $V$: $x * (y + z) = (x * y) + (x * z)$

Distributive with respect to the EBO in $V$: $\lambda(x * y) = (\lambda x) * y = x * (\lambda y)$

With the above properties $V$ is said to be a linear algebra, which is written as $\mathcal{A}(V)$.

Algebraic structures.
• (V, IBO, EBO) ≡ (V, +, scalar multiplication) is a linear space.
• (A(V), ∗) ≡ (A(V), multiplication) is a linear algebra.
A(V) depends on the multiplication given in V (∗), and its dimension is the di-
mension of V over R.

2.4.1 Examples of linear algebras.


(1) If in the linear space Mn (R) the matrix multiplication is taken as the mul-
tiplication (∗), then (A(Mn (R)), matrix multiplication) is the linear algebra
of the square matrices of size n × n over R, and dim A(Mn (R)) = n2 , since
dim Mn (R) = n2 .
(2) If in the linear space Pn∗ (x) the function composition is taken as the multipli-
cation (∗), then (A(Pn∗ (x)), function composition) is the linear algebra of the
polynomials of degree less than or equal to n, and dim A(Pn∗ (x)) = n + 1, since
dim Pn∗ (x) = n + 1.

2.4.2 Linear subalgebras. Let A(V) be a linear algebra, and let A0 (V) ⊆ A(V)
be. Then, A0 (V) is a linear subalgebra if for all x, y ∈ A0 (V) and for all λ1 , λ2 ∈ R,
the following holds:
(1) (λ1 x + λ2 y) ∈ A0 (V)
(2) (x ∗ y) ∈ A0 (V)

2.4.2.1 Example of linear subalgebra. If in A(Mn (R)) the diagonal matrices Dn (R)
are taken, then A0 (Dn (R)) is a linear subalgebra of A(Mn (R)) and dim A0 (Dn (R)) =
n, since dim Dn (R) = n.
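
As an illustrative aside, the two closure conditions of a subalgebra can be checked directly. The following is a minimal Python sketch, assuming the sympy library; D1 and D2 are hypothetical elements of D3 (R).

    from sympy import diag

    D1 = diag(1, 2, 3)   # hypothetical elements of D3(R)
    D2 = diag(4, 5, 6)

    # (1) closure under linear combinations, (2) closure under the product (∗):
    assert (2*D1 - D2).is_diagonal()
    assert (D1 * D2).is_diagonal()
    assert D1 * D2 == diag(4, 10, 18)  # the product acts entrywise on the diagonal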
3
Linear Operators

3.1 Space of linear operators


3.1.1 Definition. Let V, W be finite dimensional linear spaces over R. Let T be
a function such that:
T :V→W
x → T (x)
which is usually called a vector function.
Then, T is a linear function (transformation, application, operator) if for all x,
y ∈ V, and for all λ ∈ R, the following properties hold:
Additivity:
T (x + y) = T (x) + T (y)
where the + on the left-hand side is the IBO in V and the + on the right-hand side
is the IBO in W: T preserves the respective IBO of each space.

Homogeneity:
T (λx) = λT (x)
where the scalar multiplication on the left-hand side is the EBO in V and on the
right-hand side is the EBO in W: T preserves the respective EBO of each space.
In general, T is a linear operator if and only if for all xi ∈ V, and for all λi ∈ R, the
following holds:

T (∑_{i=1}^{n} λi xi ) = ∑_{i=1}^{n} λi T (xi )

where the argument on the left is a linear combination of elements in V and the
result on the right is a linear combination of elements in W. In other words, T
transforms a linear combination of elements in V into a linear combination of
elements in W.
T is a homomorphism of the space V into the space W; when it is bijective, it is
an isomorphism (see Section 3.2).
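
As an illustrative aside, linearity is easy to check numerically for an operator given by a matrix. The following is a minimal Python sketch, assuming the sympy library; the matrix A and the vectors are hypothetical.

    from sympy import Matrix, Rational

    A = Matrix([[1, 0, 2], [0, 1, -1]])  # hypothetical T: R^3 → R^2, T(x) = A*x
    T = lambda x: A * x

    x = Matrix([1, 2, 3])
    y = Matrix([0, 1, -1])
    lam = Rational(5, 2)

    assert T(x + y) == T(x) + T(y)   # additivity
    assert T(lam*x) == lam*T(x)      # homogeneity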

3.1.2 Space of linear operators. Let L(V, W) be the set of the linear operators
from V to W, defined as:
L(V, W) = {T : V → W | T is linear}

3.1.2.1 Building an algebraic structure for L(V, W).


Equivalence: Let T1 , T2 ∈ L(V, W); then:
T1 = T2 ⇔ T1 (x) = T2 (x), ∀x ∈ V
determines an equivalence relation on L(V, W), namely equality in L(V, W).

IBO addition:
+ : L(V, W) × L(V, W) → L(V, W)
(T1 , T2 ) → T1 + T2 , with (T1 + T2 )(x) = T1 (x) + T2 (x)
The addition of linear operators is itself a linear operator.

Proof. Let T1 , T2 ∈ L(V, W ), x1 , x2 , x ∈ V, and α ∈ R be, then:


(1) Additivity: (T1 + T2 )(x1 + x2 ) = (T1 + T2 )(x1 ) + (T1 + T2 )(x2 ).

Proof.
(T1 + T2 )(x1 + x2 ) = T1 (x1 + x2 ) + T2 (x1 + x2 )
= T1 (x1 ) + T1 (x2 ) + T2 (x1 ) + T2 (x2 )
= T1 (x1 ) + T2 (x1 ) + T1 (x2 ) + T2 (x2 )
∴ (T1 + T2 )(x1 + x2 ) = (T1 + T2 )(x1 ) + (T1 + T2 )(x2 )

(2) Homogeneity: (T1 + T2 )(αx) = α(T1 + T2 )(x).

Proof.
(T1 + T2 )(αx) = T1 (αx) + T2 (αx)
= αT1 (x) + αT2 (x)
= α (T1 (x) + T2 (x))
∴ (T1 + T2 )(αx) = α(T1 + T2 )(x)

Thus, as it has been shown, the addition of linear operators is itself a linear
operator.

EBO multiplication by a scalar:
· : R × L(V, W) → L(V, W)
(α, T ) → αT , with (αT )(x) = αT (x)
The multiplication of a linear operator by a scalar is itself a linear operator.

Proof. Let T ∈ L(V, W ), x1 , x2 , x ∈ V, and φ, λ ∈ R be, then:


(1) Additivity: (λT )(x1 + x2 ) = (λT )(x1 ) + (λT )(x2 ).

Proof.
(λT )(x1 + x2 ) = λT (x1 + x2 )
= λ(T (x1 ) + T (x2 ))
= λT (x1 ) + λT (x2 )
∴ (λT )(x1 + x2 ) = (λT )(x1 ) + (λT )(x2 )

(2) Homogeneity: (λT )(φx) = φ(λT )(x).


Proof.
(λT )(φx) = λT (φx)
= λφT (x)
= φλT (x)
= φ(λT (x))
∴ (λT )(φx) = φ(λT )(x)

Thus, as it has been shown, the multiplication of a linear operator by a scalar
is itself a linear operator.

Null function:
Θ:V→W
x → Θ(x) = θW , ∀x ∈ V
The null function is a linear operator.

Proof. Let x, y ∈ V, and α ∈ R be, then:


(1) Additivity: Θ(x + y) = Θ(x) + Θ(y).
Proof.
Θ(x + y) = θW
= θW + θW
∴ Θ(x + y) = Θ(x) + Θ(y)

(2) Homogeneity: Θ(αx) = αΘ(x).


Proof.
Θ(αx) = θW
= αθW
∴ Θ(αx) = αΘ(x)

Therefore, as it has been shown, the null function is a linear operator.

3.1.2.2 Algebraic structure. L(V, W), with the above defined IBO and EBO, con-
stitutes the linear space of the linear operators from V to W.

3.1.2.3 Properties.
(1) Every linear operator transforms θV into θW .

Proof. It is known that for all x ∈ V, it holds that x + (−1)x = θV . From which:
T (θV ) = T (x + (−1)x)
= T (x) + T ((−1)x)
= T (x) + (−1)T (x)
∴ T (θV ) = θW

(2) For all x ∈ V, it holds that T (−x) = −T (x).

Proof. It is known that for all x ∈ V, there exists −x ∈ V such that x+(−x) =
θV . From which T (x + (−x)) = T (θV ), and in consequence, T (x) + T (−x) =
θW . Therefore, T (−x) = −T (x).

(3) Every linear operator T : V → W is fully determined by its action over a basis
of V.

Proof. Let V, W be spaces of finite dimension over R, BV = {V1 , V2 , . . . , Vn }
a basis of V, and S = {W1 , W2 , . . . , Wn } a set of elements of W. Then, the aim
is to demonstrate that there exists a unique T ∈ L(V, W) such that T (Vi ) = Wi ,
∀i = 1, . . . , n.
Since BV is a basis of V, for all x ∈ V there exist αi ∈ R such that
x = ∑_{i=1}^{n} αi Vi .
Additionally, it is defined:
T : V → W
x → T (x) = ∑_{i=1}^{n} αi Wi
(a) T ∈ L(V, W). Let x = ∑_{i=1}^{n} αi Vi , y = ∑_{i=1}^{n} λi Vi ∈ V, and φ ∈ R.
Additivity:
x + y = ∑_{i=1}^{n} αi Vi + ∑_{i=1}^{n} λi Vi = ∑_{i=1}^{n} (αi + λi )Vi
∴ T (x + y) = ∑_{i=1}^{n} (αi + λi )Wi = ∑_{i=1}^{n} αi Wi + ∑_{i=1}^{n} λi Wi
∴ T (x + y) = T (x) + T (y)
Homogeneity:
φx = φ ∑_{i=1}^{n} αi Vi = ∑_{i=1}^{n} (φαi )Vi
∴ T (φx) = ∑_{i=1}^{n} (φαi )Wi = φ ∑_{i=1}^{n} αi Wi
∴ T (φx) = φT (x)

(b) Uniqueness of T .
Proof. By contradiction. Let T1 , T2 ∈ L(V, W) be, with T1 (Vi ) = T2 (Vi ) = Wi
for all i = 1, . . . , n, but T1 ≠ T2 . Then, for any x = ∑_{i=1}^{n} αi Vi ∈ V:
T1 (x) = T1 (∑_{i=1}^{n} αi Vi ) = ∑_{i=1}^{n} αi Wi
and
T2 (x) = T2 (∑_{i=1}^{n} αi Vi ) = ∑_{i=1}^{n} αi Wi
Therefore T1 (x) = T2 (x) for all x ∈ V, which implies T1 = T2 , contradicting the
premise T1 ≠ T2 . Therefore T is unique.
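
As an illustrative aside, the construction in the proof is computable. The following is a minimal Python sketch, assuming the sympy library; the basis {V1 , V2 } of R2 and the prescribed images W1 , W2 in R3 are hypothetical. It recovers the unique matrix of T .

    from sympy import Matrix

    V1, V2 = Matrix([1, 0]), Matrix([1, 1])        # hypothetical basis of R^2
    W1, W2 = Matrix([1, 0, 2]), Matrix([0, 1, 1])  # prescribed images in R^3

    B = Matrix.hstack(V1, V2)
    W = Matrix.hstack(W1, W2)
    M = W * B.inv()            # the unique matrix with M*Vi = Wi

    assert M*V1 == W1 and M*V2 == W2
    # For any x = a1*V1 + a2*V2, it follows that T(x) = a1*W1 + a2*W2:
    x = Matrix([3, 5])
    a = B.solve(x)             # coordinates of x in the basis
    assert M*x == a[0]*W1 + a[1]*W2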

3.1.2.4 Linear forms. Let V be a finite dimensional linear space over R. Let T ∈
L(V, R) be. Then, T is a linear form. L(V, R) is called the dual space of V, denoted
by V∗ . In other words:
V∗ = {T : V → R | T is linear}

Property.
dim V = dim V∗
Proof. Let BV = {V1 , V2 , . . . , Vn } be a basis of V, therefore dim V = n. If
B∗ = {φj : V → R | φj (Vi ) = δij , ∀i, j = 1, . . . , n}
then B∗ is a basis of V∗ , the dual basis of BV , and therefore dim V = dim V∗ .
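
As an illustrative aside, for V = R2 the dual basis can be computed explicitly: if B is the matrix whose columns are the basis vectors, the rows of B⁻¹ represent the functionals φj , since B⁻¹Vi = ei . A minimal Python sketch, assuming the sympy library and a hypothetical basis:

    from sympy import Matrix

    V1, V2 = Matrix([1, 0]), Matrix([1, 1])   # hypothetical basis of R^2
    Binv = Matrix.hstack(V1, V2).inv()

    phi = [Binv.row(j) for j in range(2)]     # rows of B⁻¹ represent φ1, φ2
    assert [phi[0].dot(V1), phi[0].dot(V2)] == [1, 0]   # φ1(Vi) = δ1i
    assert [phi[1].dot(V1), phi[1].dot(V2)] == [0, 1]   # φ2(Vi) = δ2i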

3.1.3 Subspaces associated to a linear operator. Let T ∈ L(V, W) be. To T
there are associated (T determines) two subspaces, as follows:

3.1.3.1 Kernel of T . Let N(T ) be the set of the elements of V whose image is the
null element of W, i.e.
N(T ) = {x ∈ V|T (x) = θW }
N(T ) is a subspace of V.
Proof. (1) θV ∈ N(T ).

Proof. Indeed, T (θV ) = θW , therefore θV ∈ N(T ).

(2) Additivity: ∀x1 , x2 ∈ N(T ) ⇒ (x1 + x2 ) ∈ N(T ).

Proof. It is known that x1 ∈ N(T ) ⇒ T (x1 ) = θW , and x2 ∈ N(T ) ⇒


T (x2 ) = θW . Therefore,
T (x1 ) + T (x2 ) = θW + θW = θW
∴ T (x1 + x2 ) = θW ⇒ (x1 + x2 ) ∈ N(T )

(3) Homogeneity: ∀x ∈ N(T ) ∧ ∀α ∈ R ⇒ (αx) ∈ N(T ).



Proof. It is known that x ∈ N(T ) ⇒ T (x) = θW . Therefore,


αT (x) = αθW
∴ T (αx) = θW ⇒ (αx) ∈ N(T )

Thus, as it has been shown, N(T ) is a subspace of V.


Additionally, N(T ) has dimension and basis. dim N(T ) is called the nullity of T ,
and the following holds:
dim N(T ) ≤ dim V
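
As an illustrative aside, when T is given by a matrix A, the kernel N(T ) is exactly the right-null subspace of A (Section 2.3.3.2). A minimal Python sketch, assuming the sympy library; A is a hypothetical example:

    from sympy import Matrix, zeros

    A = Matrix([[1, 1, 0], [0, 1, 1]])   # hypothetical T: R^3 → R^2
    kernel = A.nullspace()               # basis of N(T)

    assert len(kernel) == A.cols - A.rank()  # dim N(T) = 1 ≤ dim V = 3
    for v in kernel:
        assert A*v == zeros(2, 1)            # each basis vector maps to θ_W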

3.1.3.2 Range of T . Let R(T ) be the set of the elements of W which are the
image of at least one element of V, i.e.
R(T ) = {y ∈ W | ∃x ∈ V with T (x) = y}
R(T ) is a subspace of W.
Proof. (1) θW ∈ R(T ).

Proof. There exists θV ∈ V such that T (θV ) = θW . Therefore, θW ∈ R(T ).

(2) Additivity: ∀y1 , y2 ∈ R(T ) ⇒ (y1 + y2 ) ∈ R(T ).

Proof. It is known that:


y1 ∈ R(T ) ⇒ ∃x1 ∈ V such that T (x1 ) = y1
and
y2 ∈ R(T ) ⇒ ∃x2 ∈ V such that T (x2 ) = y2
Therefore,
T (x1 ) + T (x2 ) = y1 + y2
∴ T (x1 + x2 ) = y1 + y2 ⇒ (y1 + y2 ) ∈ R(T )

Thus, as it has been shown, R(T ) is a subspace of W.


Additionally, R(T ) has dimension and basis; dim R(T ) is called the rank of T ,
and the following holds:
dim R(T ) ≤ dim W

Property. If V and W are finite dimensional linear spaces over R and T ∈


L(V, W ), then the following holds:
dim V = dim N(T ) + dim R(T )
Proof. Let B = {V1 , . . . , Vr } be a basis of N(T ), with dim V = n. Then, dim N(T ) =
r ≤ n. By means of basis completion, a set B∗ = {Vr+1 , . . . , Vn } can be obtained
such that B ∪ B∗ = {V1 , . . . , Vr , Vr+1 , . . . , Vn } is a basis of V. Now, it will be shown
that T (B∗ ) is a basis for R(T ).
(1) Linear independence of T (B∗ ).

Proof. By contradiction. Suppose that T (B∗ ) is linearly dependent in R(T ).
In other words, suppose:

∑_{i=r+1}^{n} αi T (Vi ) = θW with αi ≠ 0 for at least one i, r + 1 ≤ i ≤ n

Since T is linear:

θW = ∑_{i=r+1}^{n} αi T (Vi ) = T (∑_{i=r+1}^{n} αi Vi ) ⇒ ∑_{i=r+1}^{n} αi Vi ∈ N(T )

In consequence, ∑_{i=r+1}^{n} αi Vi is a linear combination of the basis B =
{V1 , . . . , Vr } of N(T ), which produces a non-trivial linear dependence among the
elements of B ∪ B∗ , contradicting the fact that B ∪ B∗ is a basis of V. Therefore
αi = 0 for i = r + 1, . . . , n, which contradicts the initial assumption of linear
dependence. Thus, T (B∗ ) is linearly independent in W.

(2) T (B∗ ) generates R(T ).

Proof. For all x ∈ V there exist αi ∈ R such that x = ∑_{i=1}^{n} αi Vi , since B ∪ B∗
is a basis of V. Therefore,

T (x) = T (∑_{i=1}^{n} αi Vi )
= T (∑_{i=1}^{r} αi Vi + ∑_{i=r+1}^{n} αi Vi )
= T (∑_{i=1}^{r} αi Vi ) + T (∑_{i=r+1}^{n} αi Vi )
= ∑_{i=1}^{r} αi T (Vi ) + ∑_{i=r+1}^{n} αi T (Vi )
= ∑_{i=1}^{r} αi θW + ∑_{i=r+1}^{n} αi T (Vi )
∴ T (x) = ∑_{i=r+1}^{n} αi T (Vi )

In consequence, T (B∗ ) generates R(T ).

Therefore, T (B∗ ) is a basis for R(T ), from which:

dim V = dim N(T ) + dim R(T )
n = r + (n − r)
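
As an illustrative aside, for an operator given by a matrix A the theorem reads n = dim null(A) + rank A. A minimal Python sketch, assuming the sympy library; A is a hypothetical example with V = R4 and W = R3 :

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 0], [1, 3, 1, 1]])  # hypothetical

    dim_V = A.cols
    nullity = len(A.nullspace())      # dim N(T)
    rank = len(A.columnspace())       # dim R(T)

    assert dim_V == nullity + rank    # 4 == 2 + 2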

3.2 Isomorphism in linear spaces


3.2.1 Types of linear operators.
Injective (1-to-1): T ∈ L(V, W) is injective or 1-to-1 if and only if N(T ) = {θV },
or equivalently, dim N(T ) = 0.
Onto (exhaustive): T ∈ L(V, W) is onto or exhaustive if and only if R(T ) = W,
or equivalently, dim R(T ) = dim W.
Bijective (regular): T ∈ L(V, W) is bijective or regular if and only if it is both
injective and onto; a bijective linear operator is an isomorphism between V and W.
A numerical check of these conditions is sketched below.
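
As an illustrative aside, for an operator given by a square matrix these conditions reduce to rank computations. A minimal Python sketch, assuming the sympy library; A is a hypothetical example on R2 :

    from sympy import Matrix

    A = Matrix([[1, 1], [0, 1]])             # hypothetical T(x) = A*x on R^2

    injective = len(A.nullspace()) == 0      # N(T) = {θ_V}
    onto = A.rank() == A.rows                # R(T) = W
    assert injective and onto                # T is bijective: an isomorphism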
4
Spaces with Internal Product

5
Eigenvalues and
Diagonalization

