Mat2324 1 PDF
Applied Mathematics
Linear Algebra
1 Linear Algebra
Matrices and Linear Systems
Linear Systems
Gauss Elimination
Vector Spaces
Determinants
Cramer’s Rule
Each entry in (1) has two subscripts. The first one stands for the row
number and the second one stands for the column number. That is, a21
is the entry in the second row and first column.
If m = n, then the matrix A is called a square matrix. In this
case, a11 , a22 , · · · , ann are called the main diagonal entries of the
matrix A.
Example 1
Readily, we compute that
\[
\begin{pmatrix} 0 & 1 & 2 \\ 9 & 8 & 7 \end{pmatrix}
+
\begin{pmatrix} 6 & 5 & 4 \\ 3 & 4 & 5 \end{pmatrix}
=
\begin{pmatrix} 0+6 & 1+5 & 2+4 \\ 9+3 & 8+4 & 7+5 \end{pmatrix}
=
\begin{pmatrix} 6 & 6 & 6 \\ 12 & 12 & 12 \end{pmatrix}.
\]
Example 2
Obviously, we compute that
\[
3 \begin{pmatrix} 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
=
\begin{pmatrix} 3 \cdot 4 & 3 \cdot 5 & 3 \cdot 6 \\ 3 \cdot 7 & 3 \cdot 8 & 3 \cdot 9 \end{pmatrix}
=
\begin{pmatrix} 12 & 15 & 18 \\ 21 & 24 & 27 \end{pmatrix}.
\]
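The two computations above can be checked in plain Python. The helper names `mat_add` and `mat_scale` below are illustrative choices, not from the notes:

```python
# Entrywise matrix addition (Example 1) and scalar multiplication
# (Example 2), using plain Python lists of rows.

def mat_add(A, B):
    """Return A + B, computed entry by entry; A and B must have equal sizes."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    """Return cA, i.e. every entry of A multiplied by the scalar c."""
    return [[c * a for a in row] for row in A]

S = mat_add([[0, 1, 2], [9, 8, 7]], [[6, 5, 4], [3, 4, 5]])
T = mat_scale(3, [[4, 5, 6], [7, 8, 9]])
print(S)  # [[6, 6, 6], [12, 12, 12]]
print(T)  # [[12, 15, 18], [21, 24, 27]]
```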
The condition n = p means that the second factor B must have as many
rows as the first factor A has columns. As a diagram of sizes, we can
give the following:
\[
\underset{[m \times n]}{A} \; \underset{[n \times q]}{B} = \underset{[m \times q]}{C}.
\]
The entry in row i and column k of the product is obtained by multiplying
row i of A into column k of B:
\[
c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}, \qquad i = 1, \dots, m, \quad k = 1, \dots, q.
\]
Example 3
Using Definition 4, we compute that
\[
\begin{pmatrix} 3 & 2 \\ 4 & -2 \end{pmatrix}
\begin{pmatrix} 2 & -1 & 3 \\ 5 & 3 & 2 \end{pmatrix}
=
\begin{pmatrix}
3 \times 2 + 2 \times 5 & 3 \times (-1) + 2 \times 3 & 3 \times 3 + 2 \times 2 \\
4 \times 2 + (-2) \times 5 & 4 \times (-1) + (-2) \times 3 & 4 \times 3 + (-2) \times 2
\end{pmatrix}
=
\begin{pmatrix} 16 & 3 & 13 \\ -2 & -10 & 8 \end{pmatrix}.
\]
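The row-into-column rule can be written out directly in plain Python; `mat_mul` is our name for the sketch:

```python
# c_ik = sum over j of a_ij * b_jk: row i of A times column k of B.

def mat_mul(A, B):
    n = len(B)                       # rows of B must equal columns of A
    assert all(len(row) == n for row in A)
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(len(B[0]))]
            for i in range(len(A))]

C = mat_mul([[3, 2], [4, -2]], [[2, -1, 3], [5, 3, 2]])
print(C)  # [[16, 3, 13], [-2, -10, 8]]
```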
4. (AB)^T = B^T A^T.
Example 6
Consider that
\[
\begin{pmatrix} 3 & 2 & 7 \\ 5 & 4 & 3 \\ 7 & 6 & 4 \end{pmatrix}
=
\begin{pmatrix} 3 & \tfrac{7}{2} & 7 \\ \tfrac{7}{2} & 4 & \tfrac{9}{2} \\ 7 & \tfrac{9}{2} & 4 \end{pmatrix}
+
\begin{pmatrix} 0 & -\tfrac{3}{2} & 0 \\ \tfrac{3}{2} & 0 & -\tfrac{3}{2} \\ 0 & \tfrac{3}{2} & 0 \end{pmatrix},
\]
where the first and the second matrices on the right-hand side are
symmetric and skew-symmetric, respectively.
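Any square matrix splits this way as S = (A + A^T)/2 plus K = (A − A^T)/2. A small plain-Python check (the helper names are ours), using exact rationals so the halves come out exactly:

```python
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def symmetric_skew_parts(A):
    """Split A into S = (A + A^T)/2 and K = (A - A^T)/2, so that A = S + K."""
    At = transpose(A)
    n = len(A)
    S = [[Fraction(A[i][j] + At[i][j], 2) for j in range(n)] for i in range(n)]
    K = [[Fraction(A[i][j] - At[i][j], 2) for j in range(n)] for i in range(n)]
    return S, K

A = [[3, 2, 7], [5, 4, 3], [7, 6, 4]]
S, K = symmetric_skew_parts(A)
assert S == transpose(S)                                  # S is symmetric
assert K == [[-x for x in row] for row in transpose(K)]   # K is skew-symmetric
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(3) for j in range(3))
print("decomposition verified")
```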
\[
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ & & & a_{nn} \end{pmatrix}
\qquad
\begin{pmatrix} a_{11} & & & \\ a_{21} & a_{22} & & \\ \vdots & \vdots & \ddots & \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\]
Figure 2: Upper and lower triangular matrices, respectively. All the entries in
the red triangles are zero.
\[
\begin{pmatrix} a_{11} & & & \\ & a_{22} & & \\ & & \ddots & \\ & & & a_{nn} \end{pmatrix}
\qquad
\begin{pmatrix} 1 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix}
\]
Figure 3: Diagonal and the identity matrices, respectively. All the entries in the
red triangles are zero.
Example 7
Show that
\[
\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
\quad\text{and}\quad
\frac{1}{3} \begin{pmatrix} 2 & -2 & 1 \\ 1 & 2 & 2 \\ 2 & 1 & -2 \end{pmatrix}
\]
are orthogonal matrices.
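A matrix Q is orthogonal exactly when Q^T Q = I. A plain-Python check of the second matrix (the first involves the irrational 1/√2, so we use the rational one here; `is_orthogonal` is our name for the sketch):

```python
from fractions import Fraction

def is_orthogonal(Q):
    """Q is orthogonal iff Q^T Q = I, i.e. its columns are orthonormal."""
    n = len(Q)
    QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return QtQ == [[1 if i == j else 0 for j in range(n)] for i in range(n)]

t = Fraction(1, 3)
Q = [[2 * t, -2 * t, 1 * t],
     [1 * t, 2 * t, 2 * t],
     [2 * t, 1 * t, -2 * t]]
print(is_orthogonal(Q))  # True
```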
Ax = b,
We assume that aij are not all zero, so that A is not a zero matrix. Note
that x has n components and b has m components.
The matrix
\[
\tilde{A} :=
\left(\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{array}\right)
\]
is called the augmented matrix of the system (3). The vertical line can
be omitted (as we shall do later). It is merely a reminder that the last
column of \tilde{A} does not belong to A.
Gaussian Elimination
Note: The leading entry of a row of a matrix is the first nonzero entry
in that row.
Example: Determine which of the following are in reduced row
echelon form, in row echelon form, or neither.
\[
\begin{pmatrix} 1 & 2 & 0 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 2 \end{pmatrix},
\qquad
\begin{pmatrix} 1 & 0 & 3 & 4 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 1 & 1 \end{pmatrix},
\qquad
\begin{pmatrix} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
\begin{align}
x_1 + 2x_2 + x_3 &= 0 &\quad &E(1) \nonumber\\
2x_1 + 2x_2 + 3x_3 &= 3 & &E(2) \tag{4}\\
-x_1 - 3x_2 &= 2 & &E(3) \nonumber
\end{align}
Step 1: Eliminate x_1 from E(2) and E(3). Subtract 2 times E(1) from E(2),
and subtract (−1) times E(1) from E(3). This results in the new
system:
\begin{align*}
x_1 + 2x_2 + x_3 &= 0 &\quad &E(1)\\
-2x_2 + x_3 &= 3 & &E(2)\\
-x_2 + x_3 &= 2 & &E(3)
\end{align*}
Step 2: Eliminate x_2 from E(3). Subtract \tfrac{1}{2} times E(2) from E(3). This
yields
\begin{align}
x_1 + 2x_2 + x_3 &= 0 &\quad &E(1) \nonumber\\
-2x_2 + x_3 &= 3 & &E(2) \tag{5}\\
\tfrac{1}{2} x_3 &= \tfrac{1}{2} & &E(3) \nonumber
\end{align}
Step 3: Working from E(3) back up to E(1), we find
\begin{align*}
x_3 &= 1,\\
-2x_2 + 1 = 3 &\implies x_2 = -1,\\
x_1 + 2(-1) + 1 = 0 &\implies x_1 = 1.
\end{align*}
Steps 1 and 2 are the elimination steps, resulting in (5), which is called
an upper triangular system of linear equations. This system has exactly
the same solutions as (4), but (5) is in a form that is easier to solve.
Step 3 is called solution by back substitution. The entire process is called
Gaussian elimination, and it is generally the most efficient means of
solving a linear system of small to medium size, say, of order up to
several hundred.
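The elimination and back-substitution steps above can be sketched in plain Python. This is a minimal version assuming every pivot that arises is non-zero (as in system (4)); `gauss_solve` is our name, and exact rationals keep the arithmetic clean:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Forward elimination followed by back substitution.
    Assumes every pivot that arises is non-zero (no row swaps)."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    for k in range(n - 1):                # eliminate the k-th unknown below row k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]         # multiplier, as in Step 1 / Step 2
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n                 # back substitution (Step 3)
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_solve([[1, 2, 1], [2, 2, 3], [-1, -3, 0]], [0, 3, 2])
print([int(v) for v in x])  # [1, -1, 1]
```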
The elimination steps are more conveniently carried out using matrix
notation and operations. For this purpose, we form the augmented
matrix of the system (4):
\[
(A \mid b) = \left(\begin{array}{ccc|c} 1 & 2 & 1 & 0 \\ 2 & 2 & 3 & 3 \\ -1 & -3 & 0 & 2 \end{array}\right) \tag{6}
\]
Applying the same elementary row operations to the matrix (6), we
reduce the original system to the upper triangular system
\[
\begin{pmatrix} 1 & 2 & 1 \\ 0 & -2 & 1 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 3 \\ \tfrac{1}{2} \end{pmatrix} \tag{7}
\]
which is the same as (5). Then the backward substitution procedure can
be applied to solve (7).
\[
a_{ij}^{(2)} = a_{ij} - m_{i1} a_{1j}, \quad i, j = 2, 3,
\qquad
b_i^{(2)} = b_i - m_{i1} b_1, \quad i = 2, 3.
\]
Step 2: Eliminate x_2 from E(3). Again assume temporarily that a_{22}^{(2)} \neq 0.
Define
\[
m_{32} = \frac{a_{32}^{(2)}}{a_{22}^{(2)}}.
\]
Subtract m_{32} times E(2) from E(3). This yields
\[
a_{33}^{(3)} x_3 = b_3^{(3)} \quad E(3).
\]
Generally,
\begin{align}
a_{11}^{(1)} x_1 + \cdots + a_{1n}^{(1)} x_n &= b_1^{(1)} &\quad &E(1) \nonumber\\
&\;\;\vdots & & \tag{10}\\
a_{n1}^{(1)} x_1 + \cdots + a_{nn}^{(1)} x_n &= b_n^{(1)} & &E(n) \nonumber
\end{align}
Assume a_{kk}^{(k)} \neq 0, and define the multipliers
\[
m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}, \quad i = k + 1, \dots, n.
\]
Subtracting m_{ik} times E(k) from E(i) updates the entries as
\[
a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik} a_{kj}^{(k)},
\qquad
b_i^{(k+1)} = b_i^{(k)} - m_{ik} b_k^{(k)}, \quad i = k + 1, \dots, n.
\]
After the last step, the final equation of the resulting upper triangular
system reads
\[
u_{nn} x_n = g_n.
\]
x1 − x2 + x3 =0
−x1 + x2 − x3 =0
10x2 + 25x3 =90
20x1 + 10x2 =80.
Step 1. Elimination of the first unknown. We mark the first row as the
pivot row and its first entry as the pivot. To eliminate x1 in the other
equations, we add 1 times the pivot row to the second row and add
(−20) times the pivot row to the fourth row. This yields
\[
\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \end{pmatrix}.
\]
Step 2. Elimination of the second unknown. First, move the zero row to
the end of the matrix:
\[
\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
Then, mark the second row as the pivot row and its first non-zero entry
as the pivot. To eliminate x2 in the other equations, we add (−3) times
the pivot row to the third row. Hence, the result is
\[
\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
Step 3. Back substitution. Now, working backwards from the last
non-zero row of this triangular matrix to the first, we can readily find
x_3, x_2 and x_1 as follows.
\begin{align*}
-95 x_3 = -190 &\implies x_3 = \frac{-190}{-95} = 2,\\
10 x_2 + 25 x_3 = 90 &\implies x_2 = \frac{1}{10}(90 - 25 \times 2) = 4,\\
x_1 - x_2 + x_3 = 0 &\implies x_1 = 4 - 2 = 2.
\end{align*}
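When a pivot position holds a zero, as in Step 2 above, the elimination needs a row exchange. A plain-Python sketch (`row_echelon` is our name) that searches for a non-zero pivot and swaps, reproducing the final matrix of this example:

```python
from fractions import Fraction

def row_echelon(M):
    """Reduce an augmented matrix to row echelon form; a zero pivot row is
    swapped with a lower row, which also pushes zero rows to the bottom."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):             # last column is the right-hand side
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue                      # no pivot in this column
        M[r], M[p] = M[p], M[r]           # move the pivot row up
        for i in range(r + 1, rows):
            m = M[i][c] / M[r][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

M = row_echelon([[1, -1, 1, 0],
                 [-1, 1, -1, 0],
                 [0, 10, 25, 90],
                 [20, 10, 0, 80]])
print([[int(v) for v in row] for row in M])
# [[1, -1, 1, 0], [0, 10, 25, 90], [0, 0, -95, -190], [0, 0, 0, 0]]
```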
Definition 11
We now call a linear system S1 row-equivalent to another linear system
S2 if S1 can be obtained from S2 by applying finitely many elementary
row operations.
Thus, we can state the following theorem, which also justifies the Gauss
elimination.
Theorem 1
Row-equivalent linear systems have the same set of solutions.
At the end of the Gauss elimination the form of the coefficient matrix,
the augmented matrix or the system itself is called row-reduced form.
In it, rows of zeroes, if present, are the last rows, and in each non-zero
row the leftmost non-zero entry is farther to the right than in the
previous row. For instance, (see Example 8) the following coefficient
matrix and its augmented matrix are in the row-reduced form.
\[
\begin{pmatrix} 1 & -1 & 1 \\ 0 & 10 & 25 \\ 0 & 0 & -95 \\ 0 & 0 & 0 \end{pmatrix}
\quad\text{and}\quad
\left(\begin{array}{ccc|c} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{array}\right).
\]
Note that we do not require the leftmost non-zero entries to be 1, since
this has no theoretical or numerical advantage.
Example 9
Transform the following matrices into row-reduced forms.
\[
\begin{pmatrix} -1 & 4 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} -2 & 1 & 3 \\ 0 & 1 & 1 \\ 2 & 0 & 1 \end{pmatrix}.
\]
• Exactly one solution,
• Infinitely many solutions, or
• No solution.
\[
\left(\begin{array}{ccccc|c}
a_{11} & a_{12} & \cdots & \cdots & a_{1n} & b_1 \\
 & a_{22} & \cdots & \cdots & a_{2n} & b_2 \\
 & & \ddots & & \vdots & \vdots \\
 & & & a_{rr} & \cdots\; a_{rn} & b_r \\
 & & & & & b_{r+1} \\
 & & & & & \vdots \\
 & & & & & b_m
\end{array}\right)
\]
Figure 4: Row-reduced form of the augmented matrix at the end of the Gauss
elimination. Here, r ≤ m and a_{11}, a_{22}, · · · , a_{rr} \neq 0, and all the entries in the red
triangle and the blue rectangle are zero.
From this, we see that with respect to the solutions of the system with
augmented matrix in Figure 4 (and thus, with respect to the originally
given system) there are three possible cases.
Figure 5: All the entries in the red triangle and the blue rectangles are zero.
Figure 6: All the entries in the red triangle and the blue rectangles are zero.
No Solution
Figure 7: All the entries in the red triangle and the blue rectangle are zero, but
there exist non-zero entries in the orange rectangle.
Example: Solve
x +y +z = 0
2x − y + 3z = 1
x + y − 2z = 1
Example: Solve
x +y +z = 0
2x − y + 3z = 1
3x + 3y + 3z = 1
Example: Solve
x +y +z = 0
2x − y + 3z = 1
3x + 3y + 3z = 0
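The three systems above illustrate the three cases. A plain-Python sketch (helper names ours) that classifies a system by comparing rank(A) with rank of the augmented matrix and with the number of unknowns:

```python
from fractions import Fraction

def rank(M):
    """Rank = number of non-zero rows after row reduction with pivot search."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            m = M[i][c] / M[r][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, b):
    aug = [row + [bi] for row, bi in zip(A, b)]
    rA, rAug, n = rank(A), rank(aug), len(A[0])
    if rA < rAug:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

top = [[1, 1, 1], [2, -1, 3]]
print(classify(top + [[1, 1, -2]], [0, 1, 1]))  # unique solution
print(classify(top + [[3, 3, 3]], [0, 1, 1]))   # no solution
print(classify(top + [[3, 3, 3]], [0, 1, 0]))   # infinitely many solutions
```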
Vector Spaces
Example 10
Some examples of vector spaces are listed below.
1. The singleton {0}.
2. Rn (the set of vectors).
3. Rm×n (the set of m × n matrices).
4. Pn [x] (the set of polynomials).
5. F[a, b] (the set of functions on [a, b]).
6. C1 [a, b] (the set of continuously differentiable functions on [a, b]).
A linear combination of the vectors u_1, u_2, . . . , u_n has the form
\[
\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_n u_n.
\]
The vectors u_1, u_2, . . . , u_n are called linearly independent provided that
\[
\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_n u_n = 0
\]
implies \alpha_1 = \alpha_2 = \cdots = \alpha_n = 0. Otherwise, i.e., if
\[
\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_n u_n = 0,
\]
holds for scalars that are not all zero, they are called linearly dependent.
Example 11
1. Show that (1, 1), (−3, 2) are linearly independent.
2. Show that (1, 1), (−3, 2), (2, 4) are linearly dependent.
Figure 8: The figure shows that 16(1, 1) + 2(−3, 2) + (−5)(2, 4) = (0, 0), i.e.,
(1, 1), (−3, 2), (2, 4) are linearly dependent.
Example 12
Find the rank of the matrix
\[
A := \begin{pmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{pmatrix}.
\]
Clearly, adding 2 times the first row to the second row, subtracting 7
times the first row from the third row, and then adding \tfrac{1}{2} times the
new second row to the new third row, we have
\[
A \sim \begin{pmatrix} 3 & 0 & 2 & 2 \\ 0 & 42 & 28 & 58 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
Thus, rank(A) = 2.
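The same row reduction can be run in plain Python (`rank` is our name for the sketch), using exact rationals so no rounding can disturb the zero rows:

```python
from fractions import Fraction

def rank(M):
    """Count the non-zero rows left after forward elimination with row swaps."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            m = M[i][c] / M[r][c]
            M[i] = [a - m * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[3, 0, 2, 2], [-6, 42, 24, 54], [21, -21, 0, -15]]
print(rank(A))  # 2
```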
Example 14
The set {(1, 2, 1), (−2, −3, 1), (3, 5, 0)} is linearly dependent since the
matrix
\[
\begin{pmatrix} 1 & 2 & 1 \\ -2 & -3 & 1 \\ 3 & 5 & 0 \end{pmatrix}
\]
has rank 2, which is less than 3.
Example 15
Show that Span{(1, 1), (−3, 2), (2, 4)} = R2 .
Example 16
Show that {(1, 1), (−3, 2)} is a basis for R2 . Hence, dim(R2 ) = 2.
Finally, for a given matrix A, the solution set of the homogeneous system
Ax = 0 is a vector space, which is called the null space of A, and its
dimension is called the nullity of A.
rank(A) + nullity(A) = n.
When (11) admits solutions, one can apply Gauss elimination to find
those solutions.
where a_{11}, a_{12}, · · · , a_{mn} are scalars and x_1, x_2, · · · , x_n are unknowns. Note
that x_1 = x_2 = · · · = x_n = 0 is always a solution, which is called the trivial
solution.
According to Theorem 6, (12) admits non-trivial solutions if r < n, where
r := rank(A). If r < n, then these solutions, together with x = 0, form a
vector space of dimension (n − r), which is called the solution space
of (12).
The solution space of (12) is also called the null space of A because
Ax = 0 for every x in the solution space of (12). Its dimension is
called the nullity of A. Hence, by Theorem 5, we have
rank(A) + nullity(A) = n,
Example 17
Show that the solution space (null space) of
\[
\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 7 & 8 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \end{pmatrix}
\]
is
\[
\left\{ \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} : x = -2y - 4w \text{ and } z = 0 \right\}
\]
or equivalently
\[
\left\{ y \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \end{pmatrix} + w \begin{pmatrix} -4 \\ 0 \\ 0 \\ 1 \end{pmatrix} : y, w \in \mathbb{R} \right\}.
\]
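As a quick plain-Python check, the two spanning vectors from Example 17 are indeed mapped to zero by the coefficient matrix (`apply_matrix` is our helper name):

```python
# The coefficient matrix of Example 17 and the two claimed basis vectors
# of its null space.
A = [[1, 2, 3, 4], [2, 4, 7, 8]]

def apply_matrix(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

for v in ([-2, 1, 0, 0], [-4, 0, 0, 1]):
    print(apply_matrix(A, v))  # [0, 0] both times
```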
Theorem 7
A homogeneous linear system with fewer equations than unknowns
always has non-trivial solutions.
Determinants
Example 18
We compute that
\[
\begin{vmatrix} 3 & 2 \\ -4 & 1 \end{vmatrix} = 3 \cdot 1 - 2 \cdot (-4) = 11.
\]
\[
\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
= a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32}
- a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33}.
\]
(Sarrus rule: copy the first two columns to the right of the determinant; the
three (+) products run along the diagonals from upper left to lower right, and
the three (−) products along the diagonals from upper right to lower left.)
Example 19
We compute by the Sarrus rule that
\begin{align*}
\begin{vmatrix} 1 & 5 & -3 \\ 2 & 3 & -4 \\ -1 & 6 & 2 \end{vmatrix}
&= [1 \cdot 3 \cdot 2 + 2 \cdot 6 \cdot (-3) + (-1) \cdot 5 \cdot (-4)]\\
&\quad - [(-3) \cdot 3 \cdot (-1) + (-4) \cdot 6 \cdot 1 + 2 \cdot 5 \cdot 2]\\
&= [6 + (-36) + 20] - [9 + (-24) + 20]\\
&= (-10) - 5 = -15.
\end{align*}
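The Sarrus rule is short enough to write out directly; `det3_sarrus` is our name for this sketch:

```python
def det3_sarrus(A):
    """Sarrus rule: three 'down-right' products minus three 'down-left' ones."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

print(det3_sarrus([[1, 5, -3], [2, 3, -4], [-1, 6, 2]]))  # -15
```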
\[
\begin{pmatrix}
a_{11} & \cdots & a_{1(j-1)} & a_{1j} & a_{1(j+1)} & \cdots & a_{1n} \\
\vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{(i-1)1} & \cdots & a_{(i-1)(j-1)} & a_{(i-1)j} & a_{(i-1)(j+1)} & \cdots & a_{(i-1)n} \\
a_{i1} & \cdots & a_{i(j-1)} & a_{ij} & a_{i(j+1)} & \cdots & a_{in} \\
a_{(i+1)1} & \cdots & a_{(i+1)(j-1)} & a_{(i+1)j} & a_{(i+1)(j+1)} & \cdots & a_{(i+1)n} \\
\vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{n1} & \cdots & a_{n(j-1)} & a_{nj} & a_{n(j+1)} & \cdots & a_{nn}
\end{pmatrix}
\]
Example 20
We compute, expanding along the second column, that
\begin{align*}
\begin{vmatrix} 6 & 0 & -3 & 5 \\ 4 & 13 & 6 & -8 \\ -1 & 0 & 7 & 4 \\ 8 & 6 & 0 & 2 \end{vmatrix}
&= 0 (-1)^{1+2} \begin{vmatrix} 4 & 6 & -8 \\ -1 & 7 & 4 \\ 8 & 0 & 2 \end{vmatrix}
+ 13 (-1)^{2+2} \begin{vmatrix} 6 & -3 & 5 \\ -1 & 7 & 4 \\ 8 & 0 & 2 \end{vmatrix}\\
&\quad + 0 (-1)^{3+2} \begin{vmatrix} 6 & -3 & 5 \\ 4 & 6 & -8 \\ 8 & 0 & 2 \end{vmatrix}
+ 6 (-1)^{4+2} \begin{vmatrix} 6 & -3 & 5 \\ 4 & 6 & -8 \\ -1 & 7 & 4 \end{vmatrix}\\
&= 0 \cdot (-1) \cdot 708 + 13 \cdot 1 \cdot (-298) + 0 \cdot (-1) \cdot 48 + 6 \cdot 1 \cdot 674\\
&= (-3874) + 4044 = 170.
\end{align*}
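Cofactor (Laplace) expansion translates directly into a short recursive function; this sketch (`det` is our name) expands along the first row rather than the second column, which gives the same value:

```python
def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j in range(len(A)):
        # Minor: delete row 1 and column j+1.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

M = [[6, 0, -3, 5], [4, 13, 6, -8], [-1, 0, 7, 4], [8, 6, 0, 2]]
print(det(M))  # 170
```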
Theorem 8
Let A and B be two square matrices of the same size. Then,
det(AB) = det(A) det(B).
Cramer’s Rule
Theorem 10 (Cramer’s Rule)
Denote by A and b the coefficient matrix and the right-hand side vector,
respectively. For k = 1, 2, · · · , n, denote by Ak the matrix formed by
replacing k-th column of A by b. Then, we have the following.
1. If det(A) \neq 0, then (13) has a unique solution given by x_k = \dfrac{\det(A_k)}{\det(A)}.
2. If det(A) = 0 and det(Ak ) = 0 for all k, then (13) has infinitely many
solutions.
3. If det(A) = 0 and det(A_k) \neq 0 for some k, then (13) has no solutions.
Then, (14) has only the trivial solution if and only if det(A) \neq 0.
Note that in this case, det(A_k) = 0 for all k by the second case of
Corollary 5, and thus the last case of Theorem 10 does not appear.
Example 21
The system
\begin{align*}
x + 2y &= -3\\
2x + 6y + 4z &= -6\\
-x + 2z &= 1
\end{align*}
has a unique solution since the determinant of the coefficient matrix is
(−4). Thus, the solution is given by
\[
x = \frac{\begin{vmatrix} -3 & 2 & 0 \\ -6 & 6 & 4 \\ 1 & 0 & 2 \end{vmatrix}}{-4} = \frac{-4}{-4} = 1,
\qquad
y = \frac{\begin{vmatrix} 1 & -3 & 0 \\ 2 & -6 & 4 \\ -1 & 1 & 2 \end{vmatrix}}{-4} = \frac{8}{-4} = -2,
\]
and similarly z = 1.
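Cramer's rule is mechanical enough to sketch in a few lines of plain Python (helper names ours); each A_k is A with column k replaced by b:

```python
from fractions import Fraction

def det(A):
    """Recursive cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "Cramer's rule needs det(A) != 0"
    xs = []
    for k in range(len(A)):
        # A_k: replace column k of A by b.
        Ak = [row[:k] + [bk] + row[k + 1:] for row, bk in zip(A, b)]
        xs.append(Fraction(det(Ak), d))
    return xs

A = [[1, 2, 0], [2, 6, 4], [-1, 0, 2]]
sol = cramer(A, [-3, -6, 1])
print([int(v) for v in sol])  # [1, -2, 1]
```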
Example 22
Find the inverse matrix of the matrix A given below.
\[
A := \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 5 & 6 & 0 \end{pmatrix}.
\]
\[
\left(\begin{array}{ccc|ccc} 1 & 2 & 0 & 16 & -12 & -3 \\ 0 & 1 & 0 & 20 & -15 & -4 \\ 0 & 0 & 1 & -5 & 4 & 1 \end{array}\right)
\sim
\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -24 & 18 & 5 \\ 0 & 1 & 0 & 20 & -15 & -4 \\ 0 & 0 & 1 & -5 & 4 & 1 \end{array}\right).
\]
Hence,
\[
A^{-1} = \begin{pmatrix} -24 & 18 & 5 \\ 20 & -15 & -4 \\ -5 & 4 & 1 \end{pmatrix}.
\]
\[
A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)
\]
Example 23
Clearly, for
\[
A := \begin{pmatrix} 3 & 1 \\ 2 & 4 \end{pmatrix}
\]
we have det(A) = 10 and
\[
\operatorname{adj}(A) = \begin{pmatrix} 4 & -2 \\ -1 & 3 \end{pmatrix}^{T} = \begin{pmatrix} 4 & -1 \\ -2 & 3 \end{pmatrix}.
\]
Therefore, we have
\[
A^{-1} = \frac{1}{10} \begin{pmatrix} 4 & -1 \\ -2 & 3 \end{pmatrix}.
\]
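For a 2×2 matrix the adjugate formula is explicit: swap the diagonal entries and negate the off-diagonal ones, then divide by the determinant. A plain-Python sketch (`inverse2` is our name):

```python
from fractions import Fraction

def inverse2(A):
    """2x2 inverse via A^{-1} = adj(A)/det(A)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

Ainv = inverse2([[3, 1], [2, 4]])
assert Ainv == [[Fraction(4, 10), Fraction(-1, 10)],
                [Fraction(-2, 10), Fraction(3, 10)]]
print("inverse verified")
```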
Example 24
For the matrix
\[
A := \begin{pmatrix} 1 & -1 & -2 \\ 3 & -1 & 1 \\ 1 & -3 & -4 \end{pmatrix},
\]
we compute that det(A) = 10. Also, we have
\[
\operatorname{adj}(A) = \begin{pmatrix} 7 & 2 & -3 \\ 13 & -2 & -7 \\ -8 & 2 & 2 \end{pmatrix}.
\]
Theorem 13
Let A and B be two square matrices of the same size. Then,
(AB)−1 = B −1 A−1 .
Ax = b.
Once A−1 is known, we can multiply both sides of the above system by
A−1 and get
x = A−1 b.
Example 25
Show that
\begin{align*}
x - y - 2z &= -10\\
3x - y + z &= 5\\
x - 3y - 4z &= 20
\end{align*}
has the solution (x, y, z) = (−12, −28, 13).
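The coefficient matrix here is the matrix of Example 24, so x = A^{-1} b = (1/det A) adj(A) b can be evaluated directly; a plain-Python check using those values:

```python
from fractions import Fraction

# det(A) = 10 and adj(A) are taken from Example 24; then x = A^{-1} b.
adj = [[7, 2, -3], [13, -2, -7], [-8, 2, 2]]
b = [-10, 5, 20]
x = [Fraction(sum(r * v for r, v in zip(row, b)), 10) for row in adj]
print([int(v) for v in x])  # [-12, -28, 13]

# Substitute back into the original equations.
A = [[1, -1, -2], [3, -1, 1], [1, -3, -4]]
assert [sum(a * v for a, v in zip(row, x)) for row in A] == b
```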
Remark 1
Some remarks on matrix multiplication are given below.
1. Matrix multiplication is not commutative, i.e., we have in general that
AB \neq BA.
Example 26
Let (u_1, u_2), (v_1, v_2) ∈ R², and define
\[
\langle (u_1, u_2), (v_1, v_2) \rangle = u_1 v_1 + u_2 v_2.
\]
The inner product axioms can be verified directly; for instance,
\begin{align*}
\langle \alpha (u_1, u_2), (v_1, v_2) \rangle &= \langle (\alpha u_1, \alpha u_2), (v_1, v_2) \rangle\\
&= (\alpha u_1) v_1 + (\alpha u_2) v_2 = \alpha (u_1 v_1 + u_2 v_2)\\
&= \alpha \langle (u_1, u_2), (v_1, v_2) \rangle.
\end{align*}
Thus, R² is an inner product space on R.
Example 27
Two examples of inner product spaces are listed below.
1. R^n is an inner product space with
\[
\left\langle \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix}, \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} \right\rangle := u_1 v_1 + \cdots + u_n v_n.
\]
Definition 27 (Orthogonality)
Let V be an inner product space on R, and u, v ∈ V. The vectors u and
v are called orthogonal provided that \langle u, v \rangle = 0.
Example 28
1. In R3 , u := (2, 1, −1) and v := (1, −1, 1) are orthogonal.
2. In C[a, b], f(x) := x - \tfrac{a+b}{2} and g(x) := x^2 - (a+b)x + \left(\tfrac{a+b}{2}\right)^2 are
orthogonal.
Definition 28 (Norm)
Let V be an inner product space on R, and u ∈ V. The norm of u is
defined by
\[
\|u\| := \sqrt{\langle u, u \rangle}.
\]
Further, u is said to be a unit vector provided that \|u\| = 1.
Example 29
1. On R^n, a norm is given by
\[
\left\| \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} \right\| := \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2}.
\]
Property 6
Inner products and norms satisfy the following properties.
1. Cauchy–Schwarz Inequality: |\langle u, v \rangle| \leq \|u\| \|v\| for all u, v ∈ V.
2. Triangle Inequality: \|u + v\| \leq \|u\| + \|v\| for all u, v ∈ V.
3. Parallelogram Equality: \|u + v\|^2 + \|u - v\|^2 = 2 \left( \|u\|^2 + \|v\|^2 \right) for
all u, v ∈ V.
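All three properties can be spot-checked numerically for the standard inner product on R³; the vectors below are just sample inputs, and the small tolerances guard against floating-point rounding:

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u))

u, v = [2.0, 1.0, -1.0], [1.0, -1.0, 1.0]
s = [a + b for a, b in zip(u, v)]
d = [a - b for a, b in zip(u, v)]
assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12          # Cauchy-Schwarz
assert norm(s) <= norm(u) + norm(v) + 1e-12                   # triangle
assert abs(norm(s) ** 2 + norm(d) ** 2
           - 2 * (norm(u) ** 2 + norm(v) ** 2)) < 1e-9        # parallelogram
print("all three properties hold")
```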
Linear Transformations
Example 30
Some examples of linear transforms are listed below.
1. Zero transform, i.e., f (u) = 0.
2. Identity transform, i.e., f (u) = u.
3. Reflection transform, i.e., f (u) = −u.
4. Scaling transform, i.e., f (u) = λu, where λ ∈ R.
5. Projection transform.
6. Rotation transform.
7. Differential transform.
8. Integral transform.
Representation Matrix
u = α1 e 1 + α2 e 2 + · · · + αn e n .
f (u) = α1 f (e 1 ) + α2 f (e 2 ) + · · · + αn f (e n ).
Example 31
Consider the basis \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\} of R². Let us find the rule of
the linear transform f : R² → R³ satisfying
\[
f \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ -1 \\ 11 \end{pmatrix}
\quad\text{and}\quad
f \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ -5 \\ 1 \end{pmatrix}.
\]
Null(f) := {u : f(u) = 0} and Range(f) := {v : v = f(u) for some u}.
Further,
nullity(f) := dim(Null(f)) and rank(f) := dim(Range(f)).
Example 32
Find the rank and the nullity of the transform f : R³ → R² defined by
\[
f \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 16x + y - 3z \\ 8x - 3y + z \end{pmatrix}.
\]
We see that
\[
\mathrm{Null}(f) = \left\{ x \begin{pmatrix} 1 \\ 5 \\ 7 \end{pmatrix} : x \in \mathbb{R} \right\}
= \mathrm{Span}\left\{ \begin{pmatrix} 1 \\ 5 \\ 7 \end{pmatrix} \right\},
\]
so that nullity(f) = 1 and, by the rank–nullity relation, rank(f) = 3 − 1 = 2.
Ax = λx. (15)
This example illustrates the general case as follows. Expanding (15) in its
components, we get
\begin{align*}
a_{11} x_1 + \cdots + a_{1n} x_n &= \lambda x_1\\
&\;\;\vdots\\
a_{n1} x_1 + \cdots + a_{nn} x_n &= \lambda x_n.
\end{align*}
Moving the terms on the right-hand side to the left-hand side gives us
\[
(A - \lambda I)x = 0.
\]
Theorem 16 (Eigenvalues)
The eigenvalues of a square matrix A are the roots of the characteristic
equation det(A − λ I) = 0. Hence, an n × n matrix has at least 1
eigenvalue and at most n numerically different eigenvalues.
Example 33
The characteristic polynomial of the matrix A := \begin{pmatrix} 2 & 2 \\ 1 & 3 \end{pmatrix} is
\[
D(\lambda) := \begin{vmatrix} 2 - \lambda & 2 \\ 1 & 3 - \lambda \end{vmatrix} = \lambda^2 - 5\lambda + 4 = (\lambda - 1)(\lambda - 4).
\]
Example 34
Let us consider the matrix
\[
A := \begin{pmatrix} 1 & 2 & 1 \\ 0 & 3 & 2 \\ -1 & 1 & 1 \end{pmatrix}.
\]
The characteristic equation of A is D(λ) := −λ³ + 5λ² − 6λ = −λ(λ − 2)(λ − 3). Thus, A
has the eigenvalues λ₁ := 0, λ₂ := 2 and λ₃ := 3. Further, we get the
eigenvector
\[
\begin{pmatrix} 1 \\ -2 \\ 3 \end{pmatrix} \text{ for } \lambda_1 = 0,
\quad
\begin{pmatrix} 3 \\ 2 \\ -1 \end{pmatrix} \text{ for } \lambda_2 = 2
\quad\text{and}\quad
\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \text{ for } \lambda_3 = 3.
\]
Example 35
Consider the matrix
\[
A := \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & -2 & 3 \end{pmatrix}.
\]
The characteristic equation of A is D(λ) := −λ³ + 5λ² − 7λ + 3 = −(λ − 1)²(λ − 3). Thus, A has
the eigenvalues λ₁,₂ := 1 and λ₃ := 3. Further, we get the eigenvector
\[
\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \text{ for the eigenvalue } 1
\quad\text{and}\quad
\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \text{ for the eigenvalue } 3.
\]
In this example, we can find only two linearly independent eigenvectors.
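The eigenpairs in Examples 34 and 35 can be verified in plain Python simply by checking Av = λv (`apply_matrix` is our helper name):

```python
def apply_matrix(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Example 34: three distinct eigenvalues, three eigenvectors.
A = [[1, 2, 1], [0, 3, 2], [-1, 1, 1]]
for lam, v in [(0, [1, -2, 3]), (2, [3, 2, -1]), (3, [1, 1, 0])]:
    assert apply_matrix(A, v) == [lam * vi for vi in v]

# Example 35: a repeated eigenvalue with only two eigenvector directions.
B = [[1, 0, 0], [2, 1, 0], [1, -2, 3]]
assert apply_matrix(B, [0, 1, 1]) == [1 * x for x in [0, 1, 1]]
assert apply_matrix(B, [0, 0, 1]) == [3 * x for x in [0, 0, 1]]
print("all eigenpairs verified")
```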
B = P^{-1} A P.
D := X^{-1} A X
D^m = X^{-1} A^m X
or equivalently
A^m = X D^m X^{-1}.
Example 36
Consider the matrix A := \begin{pmatrix} 4 & 1 \\ -8 & -5 \end{pmatrix}. We see that the eigenvalues
of A are (−4) and 3, which yield the eigenvectors \begin{pmatrix} 1 \\ -8 \end{pmatrix} and
\begin{pmatrix} 1 \\ -1 \end{pmatrix}, respectively. Let us define
\[
X := \begin{pmatrix} 1 & 1 \\ -8 & -1 \end{pmatrix},
\quad\text{which yields}\quad
X^{-1} = \frac{1}{7} \begin{pmatrix} -1 & -1 \\ 8 & 1 \end{pmatrix}.
\]
Then, we get the diagonal matrix
\[
D := X^{-1} A X = \begin{pmatrix} -4 & 0 \\ 0 & 3 \end{pmatrix}.
\]
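Both the diagonalization D = X^{-1}AX and the power formula A^m = X D^m X^{-1} can be checked exactly in plain Python (`mat_mul` is our helper name; m = 3 is just a sample exponent):

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[4, 1], [-8, -5]]
X = [[1, 1], [-8, -1]]
Xinv = [[Fraction(-1, 7), Fraction(-1, 7)], [Fraction(8, 7), Fraction(1, 7)]]

D = mat_mul(Xinv, mat_mul(A, X))
assert D == [[-4, 0], [0, 3]]            # D = X^{-1} A X is diagonal

# A^m = X D^m X^{-1}, checked for m = 3.
A3 = mat_mul(A, mat_mul(A, A))
D3 = [[(-4) ** 3, 0], [0, 3 ** 3]]
assert mat_mul(X, mat_mul(D3, Xinv)) == A3
print("diagonalization verified")
```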