AMS/MAA TEXTBOOKS
Linear Algebra
Jaime Aranguren
1
Matrices Over the Real Field
1.1.2.2 Square matrix. The set Mmn(R) with m = n is written Mn(R) ≡ Mm(R). As an example:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Within the square matrices, there are:
(1) Diagonal matrix: Let (aij) ∈ Mn(R) be. Then, (aij) is diagonal if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i = j \\ 0, & \text{otherwise} \end{cases}$$
The diagonal matrix is often also expressed as
$$(a_{ij}) = \operatorname{Diag}(a_{11}, a_{22}, \dots, a_{nn})$$
As an example:
$$A = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$$
(2) Upper triangular matrix: Let (aij) ∈ Mn(R) be. Then, (aij) is upper triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i \le j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}$$
(3) Strictly upper triangular matrix: Let (aij) ∈ Mn(R) be. Then, (aij) is strictly upper triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i < j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} 0 & a_{12} & a_{13} \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{pmatrix}$$
(4) Lower triangular matrix: Let (aij) ∈ Mn(R) be. Then, (aij) is lower triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i \ge j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
(5) Strictly lower triangular matrix: Let (aij) ∈ Mn(R) be. Then, (aij) is strictly lower triangular if:
$$a_{ij} = \begin{cases} a_{ij}, & \text{if } i > j \\ 0, & \text{otherwise} \end{cases}$$
As an example:
$$A = \begin{pmatrix} 0 & 0 & 0 \\ a_{21} & 0 & 0 \\ a_{31} & a_{32} & 0 \end{pmatrix}$$
(a) Upper band: all aij such that i = j − r, for a fixed r ∈ N with r ≤ n − 1, and 0 otherwise. As an example (r = 1):
$$A = \begin{pmatrix} 0 & a_{12} & 0 \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{pmatrix}$$
(b) Lower band: all aij such that i = j + r, for a fixed r ∈ N with r ≤ n − 1, and 0 otherwise. As an example (r = 1):
$$A = \begin{pmatrix} 0 & 0 & 0 \\ a_{21} & 0 & 0 \\ 0 & a_{32} & 0 \end{pmatrix}$$
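All of the special square-matrix forms above can be extracted with standard NumPy helpers; a minimal sketch (the sample matrix A is arbitrary illustration data, not taken from the text):

```python
import numpy as np

# Arbitrary 3x3 sample matrix (illustration data only).
A = np.arange(1.0, 10.0).reshape(3, 3)

D = np.diag(np.diag(A))              # diagonal: keep a_ij only when i = j
U = np.triu(A)                       # upper triangular: i <= j
SU = np.triu(A, k=1)                 # strictly upper triangular: i < j
L = np.tril(A)                       # lower triangular: i >= j
SL = np.tril(A, k=-1)                # strictly lower triangular: i > j
B1 = np.diag(np.diag(A, k=1), k=1)   # upper band at offset r = 1

# Any square matrix splits into strictly-lower + diagonal + strictly-upper parts.
assert np.array_equal(SL + D + SU, A)
```

The `k` offset in `np.triu`/`np.tril`/`np.diag` corresponds directly to the shift r in the band definitions.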
1.1.3 Matrix equality. Let (aij), (bij) ∈ Mmn(R) be. Then:
$$A = B \Leftrightarrow a_{ij} = b_{ij}, \quad \forall i = 1,\dots,m \ \wedge\ \forall j = 1,\dots,n$$
It is an equivalence relation: for all A, B, C ∈ Mmn(R):
(1) Reflexive: A = A
(2) Symmetric: A = B ⇔ B = A
(3) Transitive: A = B ∧ B = C ⇒ A = C
1.2.2.2 Algebraic structure. (Mmn (R), addition, multiplication by scalar) ≡ (Mmn (R), IBO, EBO)
is a linear space over R.
1.2.3.1 Rectangular matrices. · : Mmn(R) × Mnr(R) → Mmr(R), such that:
$$((a_{ij}), (b_{ij})) \to (a_{ij})(b_{ij}) = \left(\sum_{k=1}^{n} a_{ik} b_{kj}\right), \quad \forall i = 1,\dots,m \ \wedge\ \forall j = 1,\dots,r$$
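The entrywise formula translates directly into code; a sketch of the definition (a teaching aid, not an efficient implementation), checked against NumPy's `@` operator:

```python
import numpy as np

def matmul_def(A, B):
    """Multiply A (m x n) by B (n x r) straight from the definition:
    c_ij = sum over k of a_ik * b_kj."""
    m, n = A.shape
    n2, r = B.shape
    assert n == n2, "conformability: columns of A must equal rows of B"
    C = np.zeros((m, r))
    for i in range(m):
        for j in range(r):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # 2 x 3
B = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # 3 x 2
assert np.allclose(matmul_def(A, B), A @ B)
```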
Proof. Let A = (aij) ∈ Mmn(R), B = (bij) ∈ Mnp(R), C = (cij) ∈ Mpq(R) be. Then, we want to demonstrate that:
$$(a_{ij})\left[(b_{ij})(c_{ij})\right] = \left[(a_{ij})(b_{ij})\right](c_{ij})$$
By the definition of matrix multiplication:
$$(a_{ij})\left(\sum_{t=1}^{p} b_{it} c_{tj}\right) = \left(\sum_{k=1}^{n} a_{ik} b_{kj}\right)(c_{ij})$$
Also by definition of matrix multiplication (notice the changes in the indexes):
$$\left(\sum_{k=1}^{n} a_{ik}\left(\sum_{t=1}^{p} b_{kt} c_{tj}\right)\right) = \left(\sum_{t=1}^{p}\left(\sum_{k=1}^{n} a_{ik} b_{kt}\right) c_{tj}\right)$$
By associativity and commutativity of addition in R:
$$\left(\sum_{k=1}^{n}\sum_{t=1}^{p} a_{ik} b_{kt} c_{tj}\right) = \left(\sum_{t=1}^{p}\sum_{k=1}^{n} a_{ik} b_{kt} c_{tj}\right)$$
Since both sides coincide, A(BC) = (AB)C.
Commutation.
(1) If AB = BA, then it is said that A and B commute.
(2) If AB = −BA, then it is said that A and B anti-commute.
1.3.1.1 Properties. For each case, let A, B be matrices over the real field such that the operation is conformable, and for all λ ∈ R:
(1) $(A^{T})^{T} = A$
(2) $(A + B)^{T} = A^{T} + B^{T}$
(3) $(\lambda A)^{T} = \lambda A^{T}$
(4) $(AB)^{T} = B^{T} A^{T}$
Proof. Let A = (aij) ∈ Mmn(R), B = (bij) ∈ Mnp(R) be. Then, we want to demonstrate that:
$$\left[(a_{ij})(b_{ij})\right]^{T} = (b_{ij})^{T}(a_{ij})^{T}$$
By definition of matrix multiplication on the left side, and definition of transposition on the right side:
$$\left(\sum_{k=1}^{n} a_{ik} b_{kj}\right)^{T} = (b_{ji})(a_{ji})$$
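The reversal of factors under transposition is easy to confirm numerically; a quick sketch with arbitrary rectangular matrices (note that with these shapes the un-reversed product is not even conformable):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # A in M_{2x3}(R)
B = rng.standard_normal((3, 4))   # B in M_{3x4}(R)

# (AB)^T = B^T A^T: a (4 x 2) matrix on both sides.
assert np.allclose((A @ B).T, B.T @ A.T)
```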
1.3.2 Conjugation. $\overline{\,\cdot\,} : M_{mn}(\mathbb{C}) \to M_{mn}(\mathbb{C})$, such that:
$$(a_{ij} + i b_{ij}) \to \overline{(a_{ij} + i b_{ij})} = \left(\overline{a_{ij} + i b_{ij}}\right) = (a_{ij} - i b_{ij})$$
1.3.2.1 Properties. For each case, let A, B be matrices over the complex field such that the operation is conformable, and for all λ ∈ C:
(1) $\overline{(\overline{A})} = A$
(2) $\overline{A + B} = \overline{A} + \overline{B}$
(3) $\overline{\lambda A} = \overline{\lambda}\,\overline{A}$
(4) $\overline{AB} = \overline{A}\,\overline{B}$
1.3.3.1 Properties. For each case, let A, B be matrices over the complex field such that the operation is conformable, and for all λ ∈ C:
(1) $\overline{\left(\overline{A}^{T}\right)}^{T} = A$
(2) $\overline{(A + B)}^{T} = \overline{A}^{T} + \overline{B}^{T}$
(3) $\overline{(\lambda A)}^{T} = \overline{\lambda}\,\overline{A}^{T}$
(4) $\overline{(AB)}^{T} = \overline{B}^{T}\,\overline{A}^{T}$
Properties.
(1) If A is idempotent, then $A^{r} = A$ for all r ≥ 2.
(2) If A is involutory, then $A^{2r} = I_n$ and $A^{2r+1} = A$.
1.4 Inverse
1.4.1 Inverse of a rectangular matrix. Let A ∈ Mmn (R) with m ̸= n be. If
there exist:
• B ∈ Mnm(R) such that AB = Im, then B is called a right inverse of A.
• C ∈ Mnm(R) such that CA = In, then C is called a left inverse of A.
For a rectangular matrix (m ≠ n), the lateral inverses may exist and be different, only one of them may exist, or neither may exist.
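For a wide matrix A (m < n) with full row rank, one standard candidate for a right inverse is B = Aᵀ(AAᵀ)⁻¹; this sketch illustrates that construction (it is a common textbook device, not a procedure stated in this text):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])        # 2 x 3, full row rank

B = A.T @ np.linalg.inv(A @ A.T)       # 3 x 2 candidate right inverse

assert np.allclose(A @ B, np.eye(2))   # AB = I_m, so B is a right inverse
# BA is 3 x 3 with rank at most 2, so it cannot equal I_3:
# this wide matrix has a right inverse but no left inverse.
assert np.linalg.matrix_rank(B @ A) == 2
```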
1.4.2.2 Properties.
(1) If A ∈ Mn (R) is regular, then A−1 is unique.
(4) If A, B ∈ Mn (R) are regular, then AB is regular, and (AB)−1 = B−1 A−1 .
Proof. Since $(AB)^{-1}(AB) = I_n$, post-multiplying by $B^{-1}$ gives $(AB)^{-1}A = B^{-1}$. Post-multiplying by $A^{-1}$:
$$(AB)^{-1} A A^{-1} = B^{-1} A^{-1} \ \therefore\ (AB)^{-1}(AA^{-1}) = B^{-1} A^{-1}$$
From which:
$$(AB)^{-1} = B^{-1} A^{-1}$$
If Ai ∈ Mn(R) with i = 1, . . . , r are regular, then $\prod_{i=1}^{r} A_i$ is regular and:
$$\left(\prod_{i=1}^{r} A_i\right)^{-1} = \prod_{i=1}^{r} (A_{r+1-i})^{-1}$$
Mij results from the original matrix (aij) by suppressing row i and column j.
(2) The sum of the products formed by multiplying the elements of a row (column) of A by the corresponding cofactors of another row (column) of A is zero.
Take the matrix A, which has the rows i and r. Substitute row i with row r, and expand the determinant along row i, obtaining:
$$\begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \dots & a_{in} \\ \vdots & & \vdots \\ a_{r1} & \dots & a_{rn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} \;\to\; \begin{vmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{r1} & \dots & a_{rn} \\ \vdots & & \vdots \\ a_{r1} & \dots & a_{rn} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{vmatrix} = \sum_{j=1}^{n} a_{rj} c_{ij} = 0$$
because the determinant of a matrix with two equal rows is zero. See 1.5.1.3.
Analogously, by columns:
$$\sum_{i=1}^{n} a_{ih} c_{ij} = \delta_{jh} \det A$$
(9) If a row (or column) of A is a scalar multiple of another row (or column) of A, then det A = 0.
Proof. Another way to state the above property, is: Let A, B ∈ Mn (R) be,
and aij = bij , except for the fact that there exists one row in B such that
(12) If A is triangular or diagonal, then $\det A = \prod_{i=1}^{n} a_{ii}$.
Proof. The matrix of cofactors of A is cof A = (cij). Therefore, the adjunct matrix of A is adj A = (cof A)^T = (cji). Developing the products A(adj A) and (adj A)A:
$$A(\operatorname{adj} A) = \left(\sum_{k=1}^{n} a_{ik} c_{jk}\right) = (\delta_{ij} \det A) = \det A\,(\delta_{ij}) = \det A\, I_n$$
$$(\operatorname{adj} A)A = \left(\sum_{h=1}^{n} a_{hi} c_{hj}\right) = (\delta_{ij} \det A) = \det A\,(\delta_{ij}) = \det A\, I_n$$
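The cofactor construction of the adjunct matrix can be checked directly; a sketch that builds adj A from the minors Mij (the 3x3 matrix is illustration data, and NumPy's determinant is used for the minors):

```python
import numpy as np

def adjugate(A):
    """Adjunct (adjugate) of A: transpose of the cofactor matrix,
    with cofactor c_ij = (-1)^(i+j) * det(M_ij)."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # M_ij: suppress row i and column j of A.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

# A (adj A) = (adj A) A = det(A) I_n
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))
```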
Elemental matrices. These are obtained from the identity matrix by applying a single elemental operation to it. All elemental matrices are regular.
Echelon form.
(1) Row echelon (RE) matrices are those which meet the following conditions:
(a) The first non-null element on each row is 1.
(b) The first non-null elements of the rows are ordered from left to right.
(c) If there are any null rows, they are located at the bottom.
Every matrix A ∈ Mmn (R) − {0} can be transformed into row echelon form
by means of the Gauss elimination method.
(2) Column echelon (CE) matrices are those which meet the following conditions:
(a) The first non-null element on each column is 1.
(b) The first non-null elements of the columns are ordered from top to bottom.
(c) If there are any null columns, they are located at the right.
Every matrix A ∈ Mmn (R) − {0} can be transformed into column echelon form
by means of the Gauss elimination method.
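A sketch of Gauss elimination producing a row echelon form with the properties listed above (leading entries scaled to 1, pivots ordered left to right, null rows at the bottom); the partial pivoting used here is an implementation choice for numerical stability, not part of the definition:

```python
import numpy as np

def row_echelon(A, tol=1e-12):
    """Return a row echelon form of A via Gauss elimination."""
    M = A.astype(float).copy()
    m, n = M.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = row + np.argmax(np.abs(M[row:, col]))   # partial pivoting
        if abs(M[pivot, col]) < tol:
            continue                        # no pivot in this column
        M[[row, pivot]] = M[[pivot, row]]   # swap rows
        M[row] /= M[row, col]               # scale leading entry to 1
        M[row + 1:] -= np.outer(M[row + 1:, col], M[row])  # eliminate below
        row += 1
    return M

A = np.array([[0.0, 2.0, 4.0],
              [1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])
R = row_echelon(A)
# R has leading 1s stepping to the right and the null row at the bottom.
```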
1.5.2.3 Equivalence.
Process.
(1) By the application of elementary row operations:
$$\left[\,A \mid I_n\,\right] \xrightarrow{\text{elementary row operations}} \left[\,I_n \mid A^{-1}\,\right]$$
If this process is possible, then A is a regular matrix. If it is not possible, then A is a non-regular (singular) matrix.
Regarding transposition, since $(A^{T})^{-1} = (A^{-1})^{T}$, the same process applied to $A^{T}$ yields the transpose of the inverse:
$$\left[\,A^{T} \mid I_n\,\right] \xrightarrow{\text{elementary row operations}} \left[\,I_n \mid (A^{-1})^{T}\,\right]$$
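The [A | In] → [In | A⁻¹] process can be sketched with a minimal Gauss–Jordan implementation (raising an error exactly when A is non-regular):

```python
import numpy as np

def gauss_jordan_inverse(A, tol=1e-12):
    """Invert A by row-reducing the augmented matrix [A | I_n]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < tol:
            raise ValueError("A is singular (non-regular)")
        M[[col, pivot]] = M[[pivot, col]]   # bring the pivot into place
        M[col] /= M[col, col]               # scale the pivot row
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # clear the column elsewhere
    return M[:, n:]                         # right half is now A^{-1}

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
```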
1.5.3.1 Definition.
Vector representation.
$$C_1 x_1 + C_2 x_2 + \dots + C_n x_n = b \;\Leftrightarrow\; \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} x_1 + \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix} x_2 + \dots + \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix} x_n = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
As can be observed, each column in matrix A determines the behavior of the corresponding unknown in the system.
Equivalence property. The equivalence of matrices keeps the solution of the sys-
tem invariant.
Properties.
(1) There always exists a solution, called the trivial solution: x0 = 0.
(2) The solution is unique if and only if A ∈ Mn(R) is regular, in which case the trivial solution, x0 = 0, is the only solution.
Scheme for finding the solution of Ax = 0. Let A ∈ Mmn(R) with m < n be.
Transform the matrix A by means of elemental operations into:
• Row echelon (RE) form, and obtain the solution by means of backwards substitution.
• Reduced row echelon (RRE) form, and from this the general solution is obtained.
The following cases can happen:
Non-regular matrices.
• The system does not have a solution if and only if:
$$\operatorname{rank} A < \operatorname{rank}\,[\,A \mid b\,]$$
• The system has multiple solutions if and only if:
$$\operatorname{rank} A = \operatorname{rank}\,[\,A \mid b\,] \le m < n$$
– There are rank A principal unknowns.
– There are n − rank A free unknowns.
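The rank conditions above can be tested directly with NumPy; a sketch of a Rouché–Capelli style classifier (the example systems are illustration data):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing rank A with rank [A | b]."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return "no solution"
    n = A.shape[1]
    return "unique solution" if rA == n else "multiple solutions"

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])          # m = 2 < n = 3
assert classify_system(A, np.array([1.0, 1.0])) == "multiple solutions"

# An inconsistent system: the same equation repeated with a different rhs.
A2 = np.array([[1.0, 1.0], [1.0, 1.0]])
assert classify_system(A2, np.array([1.0, 2.0])) == "no solution"
```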
1.5.3.5 About the identity matrix in the RRE. The identity matrix does not always appear as such, but it is possible to select column vectors in the RRE forming an identity matrix, which determine the basic unknowns in the general solution.
1.5.3.6 About the pivot method. In different application fields of linear systems, for example in linear programming, one often needs to exchange basic unknowns for free unknowns. For that purpose, the so-called pivot method is used.
The basic unknown to be taken out and the free unknown to be put in are selected. The column vector associated to the free unknown is transformed into a unit vector, replacing the corresponding one of the basic unknown, thus preserving the general solution scheme.
1.5.3.7 About the regularity (invertibility) of a matrix. The fact that a matrix
is regular allows for the statement of the equivalence of the following properties,
previously discussed:
(1) A ∈ Mn (R) is regular
(2) det A ̸= 0
(3) A ≡ In (A is equivalent to the identity matrix)
(4) rank A = n
(5) The system Ax = 0 has a unique solution, x = 0
(6) The system Ax = b has a unique solution, x = A−1 b
2
Linear Spaces
Scalar identity: 1x = x
Algebraic structure. (Mmn(R), IBO, EBO) is the linear space of the real matrices.
(2) Let Rn be the set of real n-vectors (a special case of the real matrices):
a = b ⇔ ai = bi , ∀i = 1, . . . , n
IBO:+ : Rn × Rn → Rn
(a, b) → a + b = (a1 + b1 , a2 + b2 , . . . , an + bn )
EBO: : R × Rn → Rn
(λ, a) → λa = (λa1 , λa2 , . . . , λan )
Algebraic structure. (Rn, IBO, EBO) is the linear space of real n-vectors.
(3) Let F(A) be the set of the real functions of real variable with domain A:
F(A) = {f : A ⊆ R → R | f is a function}
where A is the domain of f , which is denoted as dom f = A. Then:
• Define an equality in F(A). Let f , g ∈ F(A) be. Then:
f = g ⇔ f (x) = g(x), ∀x ∈ A
• Define as IBO in F(A) the addition. Then:
IBO:+ : F(A) × F(A) → F(A)
(f, g)(x) → (f + g)(x) = f (x) + g(x), ∀x ∈ A
such that for all f , g, h ∈ F(A) the following properties hold:
Associative: (f + g) + h = f + (g + h)
Neutral element (additive neutral): f + θ = θ + f = f , where θ ∈
F(A) is the neutral element such that θ(x) = 0, ∀x ∈ A.
Inverse element (additive inverse): f + e = e + f = θ, where e ∈ F(A) is the inverse element of f ∈ F(A) such that e(x) = −f(x), ∀x ∈ A.
Commutativity: f + g = g + f
Algebraic structure. (F(A), IBO, EBO) is the linear space of the functions of
real variable.
(4) Let Pn∗(x) be the set of polynomials of degree less than or equal to n:
$$P_n^{*} = \left\{ \sum_{k=0}^{n} a_k x^k \,\middle|\, a_k \in \mathbb{R},\ \forall k = 0,\dots,n \right\}$$
Then:
• Define in Pn∗(x) an equality. Let $P_1(x) = \sum_{k=0}^{n} a_k x^k$, $P_2(x) = \sum_{k=0}^{n} b_k x^k \in P_n^{*}(x)$ be. Then:
$$P_1(x) = P_2(x) \Leftrightarrow a_k = b_k,\ \forall k = 0,\dots,n$$
• Define as IBO in Pn∗(x) the addition. Let $P_1(x) = \sum_{k=0}^{n} a_k x^k$, $P_2(x) = \sum_{k=0}^{n} b_k x^k \in P_n^{*}(x)$ be. Then:
$$+ : P_n^{*}(x) \times P_n^{*}(x) \to P_n^{*}(x)$$
$$(P_1(x), P_2(x)) \to P_1(x) + P_2(x) = \sum_{k=0}^{n} (a_k + b_k) x^k$$
which satisfies the same four properties (associative, neutral element, inverse element, commutative).
Algebraic structure. (Pn∗(x), IBO, EBO) is the linear space of the real polynomials of degree less than or equal to n.
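Representing an element of Pn∗(x) by its coefficient list (a0, …, an), the IBO is just componentwise addition of coefficients; a small sketch:

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists (a_0, ..., a_n).
    Both must live in the same P_n*, i.e. have the same length n + 1."""
    assert len(p) == len(q), "both polynomials must belong to the same P_n*"
    return [a + b for a, b in zip(p, q)]

# (1 + 2x + 3x^2) + (4 - 2x + x^2) = 5 + 0x + 4x^2
assert poly_add([1, 2, 3], [4, -2, 1]) == [5, 0, 4]
```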
2.2.2.1 Properties.
(1) If U = {θ}, then L = {X0} with X0 ∈ V, which means that every single element in V is a linear variety.
(2) If X0 = θ, then L = U, where U is a subspace of V, which means that every subspace of V is a linear variety.
(3) A linear variety is a subspace of V if and only if θ ∈ L.
(4) Every element of L can be taken as a support.
(5) Every linear variety is uniquely determined by its base space (subspace).
(6) The intersection of two linear varieties L1 and L2 of V is either the empty set or a linear variety having as base subspace the intersection of the base subspaces.
Proof.
$$X + Y = \sum_{i=1}^{n} \lambda_i V_i + \sum_{i=1}^{n} \varphi_i V_i = \sum_{i=1}^{n} (\lambda_i + \varphi_i) V_i$$
$$\therefore X + Y = \sum_{i=1}^{n} \psi_i V_i \quad \text{with } \psi_i = \lambda_i + \varphi_i \in \mathbb{R} \text{ for } i = 1,\dots,n$$
$$\therefore (X + Y) \in g(S)$$
(3) Homogeneity: Let $X = \sum_{i=1}^{n} \lambda_i V_i \in g(S)$ and ω ∈ R be. Then, (ωX) ∈ g(S).
Proof.
$$\omega X = \omega \sum_{i=1}^{n} \lambda_i V_i = \sum_{i=1}^{n} \omega \lambda_i V_i$$
$$\therefore \omega X = \sum_{i=1}^{n} \psi_i V_i \quad \text{with } \psi_i = \omega \lambda_i \in \mathbb{R} \text{ for } i = 1,\dots,n$$
$$\therefore (\omega X) \in g(S)$$
Therefore, as it has been shown, $g(S) = \left\{ \sum_{i=1}^{n} \lambda_i V_i \,\middle|\, \lambda_i \in \mathbb{R} \right\}$ is a subspace of V.
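Deciding whether a given X belongs to g(S) is a linear system in the λi; a sketch that settles it by comparing ranks, with the Vi placed as columns of the coefficient matrix (the vectors are illustration data):

```python
import numpy as np

def in_span(S, X):
    """True iff X is a linear combination of the vectors in S.
    S is a list of equal-length 1-D arrays."""
    M = np.column_stack(S)
    # X is in g(S) iff appending it as a column does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([M, X])) == np.linalg.matrix_rank(M)

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
assert in_span(S, np.array([2.0, 3.0, 5.0]))       # 2*V1 + 3*V2
assert not in_span(S, np.array([0.0, 0.0, 1.0]))   # outside the plane g(S)
```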
2.2.3.2 Operations with subspaces. Let V be a linear space over R, and let U1
and U2 be subspaces of V.
Intersection of subspaces.
U1 ∩ U2 = {X|X ∈ U1 ∧ X ∈ U2 }
is a subspace of V. Generalizing, let Ui be subspaces of V for i = 1, . . . , m. Then:
$$\bigcap_{i=1}^{m} U_i = \{ X \mid X \in U_i\ \forall i = 1,\dots,m \}$$
is a subspace of V.
Addition of subspaces.
U1 + U2 = {X|X = X1 + X2 with X1 ∈ U1 ∧ X2 ∈ U2 }
is a subspace of V. Generalizing, let Ui be subspaces of V for i = 1, . . . , m. Then:
$$\sum_{i=1}^{m} U_i = \left\{ X \,\middle|\, X = \sum_{i=1}^{m} X_i \text{ with } X_i \in U_i\ \forall i = 1,\dots,m \right\}$$
is a subspace of V.
Proof. (a) θ ∈ U1 + U2.
Since U1 and U2 are subspaces, θ ∈ U1 and θ ∈ U2.
∴ θ = θ + θ ⇒ θ ∈ U1 + U2
(b) Additivity: X, Y ∈ U1 + U2 ⇒ (X + Y) ∈ U1 + U2.
X = X1 + X2 with X1 ∈ U1 and X2 ∈ U2
Y = Y1 + Y2 with Y1 ∈ U1 and Y2 ∈ U2
X + Y = (X1 + X2) + (Y1 + Y2) ∴ X + Y = (X1 + Y1) + (X2 + Y2)
But (X1 + Y1) ∈ U1 and (X2 + Y2) ∈ U2, which implies (X + Y) ∈ (U1 + U2).
(c) Homogeneity: X ∈ U1 + U2 and α ∈ R ⇒ (αX) ∈ U1 + U2.
X = X1 + X2 with X1 ∈ U1 and X2 ∈ U2
αX = αX1 + αX2
But (αX1) ∈ U1 and (αX2) ∈ U2, which implies (αX) ∈ (U1 + U2).
Therefore U1 + U2 is a subspace of V.
(2) U1 ∩ U2 = {θ}.
Cartesian product of subspaces. Let V be a linear space over R, and let U1 and U2 be subspaces of V:
$$U = U_1 \times U_2 = \{ (X_1, X_2) \mid X_1 \in U_1 \wedge X_2 \in U_2 \}$$
then U is a subspace of V × V ≡ V². Generalizing, let Ui be subspaces of V for i = 1, . . . , m:
$$U = \times_{i=1}^{m} U_i = \{ (X_1, \dots, X_m) \mid X_i \in U_i\ \forall i = 1,\dots,m \}$$
then U is a subspace of V^m.
Proof. (1) θm = (0, . . . , 0) ∈ U
(2) Additivity: X, Y ∈ U ⇒ (X + Y ) ∈ U.
2.2.4 Dimension and base of a linear space. Let V be a linear space over R. Let S = {V1, . . . , Vn} ⊂ V be, with S ≠ ∅. Let λi ∈ R be, with i = 1, . . . , n. Then, by the properties of a linear space in V, it is possible to postulate:
$$\sum_{i=1}^{n} \lambda_i V_i = \theta$$
which always leads to a homogeneous system, where:
• Vi ∈ S: are known and determine (form) the matrix of coefficients.
• λi ∈ R: are the variables of the system.
2.2.4.1 Linear independence. If the system $\sum_{i=1}^{n} \lambda_i V_i = \theta$ has a unique solution, i.e. λi = 0 for i = 1, . . . , n, then S = {V1, . . . , Vn} is linearly independent in V.
Properties.
(1) Every subset of a linearly independent set is linearly independent.
(2) Let S ⊂ V be linearly independent. Then, S ∪ {X} is linearly independent if and only if X ∉ g(S), with X ∈ V.
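Placing the Vi as the columns of the coefficient matrix, S is linearly independent exactly when the homogeneous system has only the trivial solution, i.e. when the rank equals the number of vectors; a sketch (the vectors are illustration data):

```python
import numpy as np

def is_independent(S):
    """S: list of equal-length 1-D arrays. True iff sum(lambda_i V_i) = theta
    forces every lambda_i = 0, i.e. rank of [V_1 ... V_n] equals n."""
    M = np.column_stack(S)
    return np.linalg.matrix_rank(M) == len(S)

V1, V2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 1.0])
assert is_independent([V1, V2])
assert not is_independent([V1, V2, 2 * V1 - V2])  # dependent by construction
```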
2.2.4.2 Linear dependence. If the system $\sum_{i=1}^{n} \lambda_i V_i = \theta$ has multiple solutions, i.e. there exists λt ≠ 0 with 1 ≤ t ≤ n, then S = {V1, . . . , Vn} is linearly dependent in V.
Properties.
(1) Every subset S of V for which θ ∈ S is linearly dependent.
(2) Every set containing a linearly dependent set is linearly dependent.
(3) In a linearly dependent set, at least one of the elements can be expressed as the linear combination of the other elements in the set. In other words, if $\sum_{i=1}^{n} \lambda_i V_i = \theta$ with λt ≠ 0 for 1 ≤ t ≤ n, then:
$$\sum_{i=1}^{n} \lambda_i V_i = \theta$$
$$\therefore \sum_{i \ne t}^{n} \lambda_i V_i + \lambda_t V_t = \theta$$
$$\Rightarrow V_t = \sum_{i \ne t}^{n} \omega_i V_i \quad \text{with } \omega_i = -\frac{\lambda_i}{\lambda_t} \in \mathbb{R}$$
2.2.5 Wronski matrix. Let Cn(I) be the set of the real continuous functions with continuous derivatives up to order n in the open and finite interval I ⊂ R:
$$C^{n}(I) = \{ f \in \mathcal{F}(I) \mid f \text{ is continuous and has continuous derivatives up to order } n \text{ in } I \subset \mathbb{R} \}$$
Then, Cn(I) is a subspace of F(I), since from differential calculus it is known that:
(1) θ(x) ∈ C n (I) ⇒ C n (I) ̸= ∅.
Possible cases.
(1) If Wf(x0) is regular, i.e., if Wf(x0)⁻¹ exists, or equivalently, if det Wf(x0) ≠ 0 (i.e. the Wronskian of the set does not vanish), then the system has a unique solution and the set of functions is linearly independent in Cn(I). In other words,
$$\det(W_{f(x_0)}) \ne 0 \Rightarrow \lambda = \theta$$
$$\therefore \lambda_j = 0\ \forall j = 1,\dots,n \Rightarrow \{f_1,\dots,f_n\} \text{ are linearly independent in } C^n(I)$$
Therefore, if there exists an x0 ∈ I such that Wf(x0) is regular, then the function set is linearly independent in Cn(I), which can also be expressed as:
$$\exists\, x_0 \in I \mid \det(W_{f(x_0)}) \ne 0 \Rightarrow \{f_1,\dots,f_n\} \text{ are linearly independent in } C^n(I)$$
(2) If Wf(x0) is not regular for all x0 ∈ I, i.e. det(Wf(x0)) = 0 for every x0 ∈ I, then nothing can be stated about the linear dependence or independence of the function set.
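For example, with f1 = sin and f2 = cos the Wronski matrix at any x0 has determinant −sin²x0 − cos²x0 = −1 ≠ 0, so the set is linearly independent; a numerical sketch (the derivatives are entered analytically, and the 2x2 layout with functions in columns follows the usual convention):

```python
import numpy as np

def wronskian_sin_cos(x0):
    """Determinant of the Wronski matrix of {sin, cos} at x0:
    row 0 holds f_1(x0), f_2(x0); row 1 holds f_1'(x0), f_2'(x0)."""
    W = np.array([[np.sin(x0),  np.cos(x0)],
                  [np.cos(x0), -np.sin(x0)]])
    return np.linalg.det(W)

# det W = -sin^2(x0) - cos^2(x0) = -1 for every x0,
# so {sin, cos} is linearly independent.
for x0 in (0.0, 0.7, 2.5):
    assert np.isclose(wronskian_sin_cos(x0), -1.0)
```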
2.2.6.1 Properties.
(1) If S = {V1 , V2 , . . . , Vn } is a base of V, then every element of V can be uniquely
expressed as a linear combination of the elements of S.
2.2.6.2 Vector of coordinates. The coefficients of the above mentioned linear combination determine an element in Rn called the vector of coordinates of x in the base S, which can be written as:
$$(x_S) = (\lambda_1, \lambda_2, \dots, \lambda_n)$$
which is unique for each x ∈ V.
thus obtaining mn different matrices that are linearly independent and generate Mmn(R), i.e.
$$\dim M_{mn}(\mathbb{R}) = mn$$
(2) A (canonical) base of the diagonal matrices Dn(R) is obtained by setting 1 in each position ii and zeros elsewhere, i.e.:
$$B_{D_n(\mathbb{R})} = \left\{ \begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}, \dots, \begin{pmatrix} 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix} \right\}$$
thus obtaining n different matrices that are linearly independent and generate Dn(R), i.e.
$$\dim D_n(\mathbb{R}) = n$$
(3) A (canonical) base of Pn∗(x) is given by:
$$B_{P_n^{*}(x)} = \{ 1, x, x^2, \dots, x^n \}$$
Therefore,
$$\dim P_n^{*}(x) = n + 1$$
(4) The space F(A) is of infinite (non-denumerable) dimension.
2.3.3 Subspaces associated to a matrix. Let A ∈ Mmn(R) be. Then, A can be expressed as:
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} = \begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_m \end{pmatrix} = \begin{pmatrix} C_1 & C_2 & \dots & C_n \end{pmatrix}$$
2.3.3.2 Right-null subspace of the matrix. The solution of the homogeneous system Ax = θ has dimension:
$$\dim \operatorname{sol}(Ax = \theta) = n - \operatorname{rank} A$$
which gives the number of free variables in the system. A base for this subspace is given by the vectors associated to the free variables in the general solution of the system, and is obtained by solving the homogeneous matrix system Ax = θ and expressing the basic variables in terms of the free variables.
The row subspace and the right-null subspace of A are (orthogonal) complements in Rn, i.e.:
$$\mathbb{R}^n = g(\text{row}) \oplus \operatorname{sol}(Ax = \theta)$$
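A basis of the right-null subspace can be read off numerically from the singular value decomposition (the rows of Vᵀ beyond rank A span sol(Ax = θ)); a sketch, with an illustrative rank-1 matrix:

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns of the result form an orthonormal basis of sol(Ax = 0)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T          # shape: n x (n - rank A)

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so dim sol(Ax = 0) = 3 - 1 = 2
N = null_space_basis(A)
assert N.shape == (3, 2)          # n - rank A columns, as the formula predicts
assert np.allclose(A @ N, 0.0)    # every basis vector solves Ax = 0
```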
2.3.3.4 Left-null subspace of the matrix. Consider the solution of the homogeneous system yᵀA = θᵀ. The following holds:
$$y^{T} A = \theta^{T} \ \therefore\ (y^{T} A)^{T} = (\theta^{T})^{T} \ \therefore\ A^{T} y = \theta$$
Therefore, it is equivalent to the solution of the system Aᵀy = θ, which has dimension:
$$\dim \operatorname{sol}(A^{T} y = \theta) = m - \operatorname{rank} A$$
Algebraic structures.
• (V, IBO, EBO) ≡ (V, +, scalar multiplication) is a linear space.
• (A(V), ∗) ≡ (A(V), multiplication) is a linear algebra.
A(V) depends on the multiplication (∗) given in V, and its dimension is the dimension of V over R.
2.4.2 Linear subalgebras. Let A(V) be a linear algebra, and let A0 (V) ⊆ A(V)
be. Then, A0 (V) is a linear subalgebra if for all x, y ∈ A0 (V) and for all λ1 , λ2 ∈ R,
the following holds:
(1) (λ1 x + λ2 y) ∈ A0 (V)
(2) (x ∗ y) ∈ A0 (V)
2.4.2.1 Example of linear subalgebra. If in A(Mn (R)) the diagonal matrices Dn (R)
are taken, then A0 (Dn (R)) is a linear subalgebra of A(Mn (R)) and dim A0 (Dn ) =
n, since dim Dn (R) = n.
3
Linear Operators
Homogeneity:
$$T(\lambda x) = \lambda T(x)$$
where λx uses the EBO in V and λT(x) uses the EBO in W; that is, T preserves the respective EBO in each space.
In general, if T is a linear operator, then for all xi ∈ V and for all λi ∈ R the following holds:
$$T\left(\sum_{i=1}^{n} \lambda_i x_i\right) = \sum_{i=1}^{n} \lambda_i T(x_i)$$
where the left side acts on a linear combination of elements in V and the right side is a linear combination of elements in W. In other words, T transforms a linear combination of elements in V into a linear combination of elements in W.
T is an isomorphism of the space V into the space W.
3.1.2 Space of linear operators. Let L(V, W) be the set of the linear operators from V into W, defined as:
L(V, W) = {T : V → W | T is linear}
IBO addition:
+ : L(V, W ) × L(V, W ) → L(V, W )
(T1 , T2 )(x) → (T1 + T2 )(x) = T1 (x) + T2 (x)
The addition of linear operators is itself a linear operator.
Proof.
(T1 + T2 )(x1 + x2 ) = T1 (x1 + x2 ) + T2 (x1 + x2 )
= T1 (x1 ) + T1 (x2 ) + T2 (x1 ) + T2 (x2 )
= T1 (x1 ) + T2 (x1 ) + T1 (x2 ) + T2 (x2 )
∴ (T1 + T2 )(x1 + x2 ) = (T1 + T2 )(x1 ) + (T1 + T2 )(x2 )
Proof.
(T1 + T2 )(αx) = T1 (αx) + T2 (αx)
= αT1 (x) + αT2 (x)
= α (T1 (x) + T2 (x))
∴ (T1 + T2 )(αx) = α(T1 + T2 )(x)
Thus, as it has been shown, the addition of linear operators is itself a linear
operator.
Proof.
(λT )(x1 + x2 ) = λT (x1 + x2 )
= λ(T (x1 ) + T (x2 ))
= λT (x1 ) + λT (x2 )
∴ (λT )(x1 + x2 ) = (λT )(x1 ) + (λT )(x2 )
Null function:
Θ:V→W
x → Θ(x) = θW , ∀x ∈ V
The null function is a linear operator.
3.1.2.2 Algebraic structure. L(V, W) with the above defined IBO and EBO constitutes the linear space of the linear operators from V into W.
3.1.2.3 Properties.
(1) Every linear operator transforms θV into θW .
(2) Every linear operator satisfies T(−x) = −T(x).
Proof. It is known that for all x ∈ V, there exists −x ∈ V such that x + (−x) = θV. From which T(x + (−x)) = T(θV) = θW, and in consequence, T(x) + T(−x) = θW. Therefore, T(−x) = −T(x).
(3) Every linear operator T : V → W is fully defined by its action over a base of
V.
(b) Uniqueness of T
Proof. By contradiction. Let T1, T2 ∈ L(V, W) with T1 ≠ T2 be. Then, for any x ∈ V:
$$T_1(x) = T_1\left(\sum_{i=1}^{n} \alpha_i V_i\right) = \sum_{i=1}^{n} \alpha_i W_i$$
and
$$T_2(x) = T_2\left(\sum_{i=1}^{n} \alpha_i V_i\right) = \sum_{i=1}^{n} \alpha_i W_i$$
Therefore T1(x) = T2(x) for all x ∈ V, which implies T1 = T2, contradicting the premise T1 ≠ T2. Therefore T is unique.
3.1.2.4 Linear forms. Let V be a finite dimensional linear space over R. Let T ∈ L(V, R) be. Then, T is a linear form. L(V, R) is called the dual space of V, denoted by V∗. In other words:
$$V^{*} = \{ T : V \to \mathbb{R} \mid T \text{ is linear} \}$$
Property.
dim V = dim V∗
Proof. Let BV = {V1, V2, . . . , Vn} be a base of V, therefore dim V = n. If
$$B^{*} = \vartheta = \{ \varphi_j : V \to \mathbb{R} \mid V_i \to \varphi_j(V_i) = \delta_{ij},\ \forall j = 1,\dots,n \}$$
then B∗ = ϑ is a base of V∗, the dual base of BV, and therefore dim V = dim V∗.
3.1.3.1 Kernel of T . Let N(T ) be the set of the elements of V whose image is the
null element of W, i.e.
N(T ) = {x ∈ V|T (x) = θW }
N(T ) is a subspace of V.
Proof. (1) θV ∈ N(T ).
3.1.3.2 Range of T. Let R(T) be the set of the elements of W which are the image of at least one element of V, i.e.
R(T ) = {y ∈ W|∃x ∈ V with T (x) = y}
R(T ) is a subspace of W.
Proof. (1) θW ∈ R(T ).
Therefore:
$$\theta_W = \alpha_t T(V_t) + \sum_{i=r+1,\, i \ne t}^{n} \alpha_i T(V_i)$$
$$= T\left(\alpha_t V_t + \sum_{i=r+1,\, i \ne t}^{n} \alpha_i V_i\right) \;\Rightarrow\; \left(\alpha_t V_t + \sum_{i=r+1,\, i \ne t}^{n} \alpha_i V_i\right) \in N(T)$$
5
Eigenvalues and Diagonalization
Bibliography