
Lecture 5 - Systems of Linear Equations - Part 1

COM1033 Foundations of Computing II

Prof. Ferrante Neri


Learning outcomes

By the end of this lecture, we will have learned:


⊚ Definition of a System of Linear Equations
⊚ Cramer’s Theorem
⊚ Cramer’s Method
⊚ Gaussian Elimination

Learning Resources
Chapter 3, Sections 3.1 and 3.3: F. Neri, Linear Algebra for
Computational Sciences and Engineering, 2019

Definition of a System of Linear Equations
Linear equation
Definition
A linear equation in $\mathbb{R}$ in the variables $x_1, x_2, \dots, x_n$ is an
equation of the form:

$$a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b$$

where, for every index $i$:

⊚ $a_i$ is the $i^{th}$ coefficient of the equation,
⊚ $a_i x_i$ is the $i^{th}$ term of the equation, and
⊚ $b$ is the known term.

The coefficients and the known term are constant, known
numbers in $\mathbb{R}$, while the variables are unknown numbers in
$\mathbb{R}$ that satisfy the equality.

Systems of Linear Equations

Definition
Consider $m$ (with $m > 1$) linear equations in the variables
$x_1, x_2, \dots, x_n$. These equations compose a system of linear
equations, indicated as:

$$
\begin{cases}
a_{1,1} x_1 + a_{1,2} x_2 + \dots + a_{1,n} x_n = b_1 \\
a_{2,1} x_1 + a_{2,2} x_2 + \dots + a_{2,n} x_n = b_2 \\
\quad \vdots \\
a_{m,1} x_1 + a_{m,2} x_2 + \dots + a_{m,n} x_n = b_m
\end{cases}
$$

Every ordered $n$-tuple of real numbers substituted for
$x_1, x_2, \dots, x_n$ that makes the system of linear equations true is
said to be a solution.

Systems of Linear Equations

A system can be written as a matrix equation Ax = b where

$$
\mathbf{A} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \dots & a_{1,n} \\
a_{2,1} & a_{2,2} & \dots & a_{2,n} \\
\dots & \dots & \dots & \dots \\
a_{m,1} & a_{m,2} & \dots & a_{m,n}
\end{pmatrix},
\qquad
\mathbf{x} =
\begin{pmatrix}
x_1 \\ x_2 \\ \vdots \\ x_n
\end{pmatrix},
\qquad
\mathbf{b} =
\begin{pmatrix}
b_1 \\ b_2 \\ \vdots \\ b_m
\end{pmatrix}
$$
If the number of equations is the same as the number of
variables, then the matrix A is square, and the system is said to
be square.
Complete Matrix

The coefficient matrix A is called the incomplete matrix. The
matrix $\mathbf{A}^{c} \in \mathbb{R}_{m,n+1}$, whose first $n$ columns are those of the
matrix A and whose $(n+1)^{th}$ column is the vector b, is called the
complete matrix:

$$
\mathbf{A}^{c} = (\mathbf{A}|\mathbf{b}) =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \dots & a_{1,n} & b_1 \\
a_{2,1} & a_{2,2} & \dots & a_{2,n} & b_2 \\
\dots & \dots & \dots & \dots & \dots \\
a_{m,1} & a_{m,2} & \dots & a_{m,n} & b_m
\end{pmatrix}.
$$
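As a small illustration (an addition, not part of the original slides), the complete matrix can be assembled in NumPy by appending b to A as an extra column; the 2×2 system used here is the one that appears later in the row-transformation example.

```python
import numpy as np

# System 4x - y = 2, 2x + 4y = 1 (used later in the E1/E2/E3 example)
A = np.array([[4.0, -1.0],
              [2.0,  4.0]])      # incomplete (coefficient) matrix
b = np.array([2.0, 1.0])         # known term vector

# Complete matrix Ac = (A | b): b becomes the (n+1)-th column
Ac = np.column_stack((A, b))
print(Ac)
```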

System of Linear Equations

Example
Consider the following system of linear equations:

$$
\begin{cases}
2x - y + z = 3 \\
x + 2z = 3 \\
x - y = 1
\end{cases}
$$

The system can be re-written as Ax = b where

$$
\mathbf{A} =
\begin{pmatrix}
2 & -1 & 1 \\
1 & 0 & 2 \\
1 & -1 & 0
\end{pmatrix},
\quad
\mathbf{b} =
\begin{pmatrix}
3 \\ 3 \\ 1
\end{pmatrix},
\quad
\mathbf{A}^{c} =
\begin{pmatrix}
2 & -1 & 1 & 3 \\
1 & 0 & 2 & 3 \\
1 & -1 & 0 & 1
\end{pmatrix}
$$
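As a quick numerical aside (an addition, not in the slides), NumPy can solve this example directly; the result matches the solution obtained later with Cramer's Method.

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [1.0,  0.0, 2.0],
              [1.0, -1.0, 0.0]])
b = np.array([3.0, 3.0, 1.0])

x = np.linalg.solve(A, b)   # solves Ax = b for a square, non-singular A
print(x)                    # expected: [1. 0. 1.]
```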

Cramer’s Theory
Cramer’s Theorem

Theorem

Let us consider a square system Ax = b. If A is non-singular, there
is only one solution that simultaneously satisfies all the equations.

Proof.
If A is non-singular, then A⁻¹ exists. Hence,
Ax = b ⇒ x = A⁻¹b. Since A⁻¹ is unique, the vector x is also
unique. □

Cramer’s Theory: Definitions and Observations

Definition
A system of linear equations that has only one solution is
called determined or Cramer’s system.

Observation
Solving a system of linear equations is equivalent to inverting a
matrix and multiplying the inverse by the known term vector.
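To make the observation concrete, here is a minimal NumPy sketch (an addition, not from the slides) that solves the earlier 3×3 example by explicit inversion. Dedicated solvers such as np.linalg.solve are preferred numerically; the inversion is shown only to mirror the observation.

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [1.0,  0.0, 2.0],
              [1.0, -1.0, 0.0]])
b = np.array([3.0, 3.0, 1.0])

# Invert the matrix, then multiply the inverse by the known term vector
x = np.linalg.inv(A) @ b
print(x)   # expected: [1. 0. 1.]
```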

Hybrid Matrix

Definition
Let us consider a system of linear equations Ax = b as defined
above.
The hybrid matrix with respect to the $i^{th}$ column is the matrix
$\mathbf{A_i}$ obtained from A by substituting the $i^{th}$ column with b:

$$
\mathbf{A_i} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \dots & b_1 & \dots & a_{1,n} \\
a_{2,1} & a_{2,2} & \dots & b_2 & \dots & a_{2,n} \\
\dots & \dots & \dots & \dots & \dots & \dots \\
a_{n,1} & a_{n,2} & \dots & b_n & \dots & a_{n,n}
\end{pmatrix}.
$$

Hybrid Matrix

Equivalently, if we write A as a vector of column vectors:

$$
\mathbf{A} = \begin{pmatrix} \mathbf{a_1} & \mathbf{a_2} & \dots & \mathbf{a_{i-1}} & \mathbf{a_i} & \mathbf{a_{i+1}} & \dots & \mathbf{a_n} \end{pmatrix}
$$

the hybrid matrix $\mathbf{A_i}$ would be

$$
\mathbf{A_i} = \begin{pmatrix} \mathbf{a_1} & \mathbf{a_2} & \dots & \mathbf{a_{i-1}} & \mathbf{b} & \mathbf{a_{i+1}} & \dots & \mathbf{a_n} \end{pmatrix}.
$$
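A minimal sketch (an addition, not from the slides) of this column substitution in NumPy; note that the index i is zero-based here.

```python
import numpy as np

def hybrid_matrix(A, b, i):
    """Return the hybrid matrix A_i: a copy of A whose i-th column is replaced by b."""
    Ai = np.array(A, dtype=float)   # copy, so A itself is left untouched
    Ai[:, i] = b
    return Ai

A = [[2.0, -1.0, 1.0],
     [1.0,  0.0, 2.0],
     [1.0, -1.0, 0.0]]
b = [3.0, 3.0, 1.0]
print(hybrid_matrix(A, b, 0))   # first column replaced by b
```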

Cramer’s Method

The following theorem shows a method alternative to matrix
inversion, albeit computationally equivalent to it, to solve
determined systems of linear equations.
Theorem

The solution of a determined system of linear equations Ax = b is a
vector x whose elements $x_i$ are given by:

$$x_i = \frac{\det \mathbf{A_i}}{\det \mathbf{A}}$$

where $\mathbf{A_i}$ is the hybrid matrix with respect to the $i^{th}$ column.
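A compact sketch of Cramer's Method (an addition, not from the slides) using NumPy determinants; it assumes a square, non-singular A.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's Method: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("A is singular: Cramer's Method does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # hybrid matrix w.r.t. column i
        x[i] = np.linalg.det(Ai) / det_A   # Cramer's formula
    return x

print(cramer_solve([[2, -1, 1], [1, 0, 2], [1, -1, 0]], [3, 3, 1]))   # [1. 0. 1.]
```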

Cramer’s Method

Proof.
Let us consider a system of linear equations Ax = b. We can
compute x = A⁻¹b, where $A_{i,j}$ denotes the cofactor of the element $a_{i,j}$:

$$
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= \frac{1}{\det \mathbf{A}}
\begin{pmatrix}
A_{1,1} & A_{2,1} & \dots & A_{n,1} \\
A_{1,2} & A_{2,2} & \dots & A_{n,2} \\
\dots & \dots & \dots & \dots \\
A_{1,n} & A_{2,n} & \dots & A_{n,n}
\end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
$$

$$
\Rightarrow
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= \frac{1}{\det \mathbf{A}}
\begin{pmatrix}
A_{1,1} b_1 + A_{2,1} b_2 + \dots + A_{n,1} b_n \\
A_{1,2} b_1 + A_{2,2} b_2 + \dots + A_{n,2} b_n \\
\vdots \\
A_{1,n} b_1 + A_{2,n} b_2 + \dots + A_{n,n} b_n
\end{pmatrix}
$$
Cramer’s Method

Proof (continued).
By the I Laplace Theorem, the vector of solutions can be
written as:

$$
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= \frac{1}{\det \mathbf{A}}
\begin{pmatrix} \det \mathbf{A_1} \\ \det \mathbf{A_2} \\ \vdots \\ \det \mathbf{A_n} \end{pmatrix}. \qquad \square
$$
Example: Cramer’s Method
Example
Let us consider

$$
\begin{cases}
2x - y + z = 3 \\
x + 2z = 3 \\
x - y = 1
\end{cases}
$$

The system can be re-written as a matrix equation Ax = b
where

$$
\mathbf{A} =
\begin{pmatrix}
2 & -1 & 1 \\
1 & 0 & 2 \\
1 & -1 & 0
\end{pmatrix},
\quad
\mathbf{b} =
\begin{pmatrix}
3 \\ 3 \\ 1
\end{pmatrix}
$$

continued...
Example: Cramer’s Method

Example
So det A = 0 − 2 − 1 − 0 + 4 − 0 = 1.
Using Cramer’s Method:

$$
x = \frac{\det \mathbf{A_1}}{\det \mathbf{A}}
= \frac{\det \begin{pmatrix} 3 & -1 & 1 \\ 3 & 0 & 2 \\ 1 & -1 & 0 \end{pmatrix}}{1} = 1,
$$

$$
y = \frac{\det \begin{pmatrix} 2 & 3 & 1 \\ 1 & 3 & 2 \\ 1 & 1 & 0 \end{pmatrix}}{1} = 0
\quad \text{and} \quad
z = \frac{\det \begin{pmatrix} 2 & -1 & 3 \\ 1 & 0 & 3 \\ 1 & -1 & 1 \end{pmatrix}}{1} = 1.
$$

Gaussian Elimination
Computational Cost of a Linear System

⊚ The application of Cramer’s Method requires the
calculation of one determinant of an 𝑛-order matrix and 𝑛
determinants of 𝑛-order matrices.
⊚ This means 𝑛! + 𝑛(𝑛!) elementary operations.
⊚ If 𝑛 = 6, the solution of the system requires 5040
mathematical operations, which a modern computer can
perform in a fraction of a second.
⊚ If 𝑛 = 50, a modern computer needs 1.75 × 10⁴⁶ millennia
to solve it using Cramer’s Method (more than the age of the
universe).
⊚ Hence the need for alternative methods; a rough estimate of
these operation counts is sketched below.
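A back-of-the-envelope check of these counts (an addition, not in the slides), assuming a machine performing on the order of 10⁹ elementary operations per second:

```python
import math

def cramer_ops(n):
    """Rough operation count for Cramer's Method: det(A) plus n hybrid determinants."""
    return math.factorial(n) + n * math.factorial(n)

OPS_PER_SECOND = 1e9                                 # assumed speed of a modern computer
SECONDS_PER_MILLENNIUM = 1000 * 365.25 * 24 * 3600

print(cramer_ops(6))                                 # 5040 operations
millennia = cramer_ops(50) / OPS_PER_SECOND / SECONDS_PER_MILLENNIUM
print(f"{millennia:.2e}")                            # ~5e46 millennia, same order as the slide
```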

Equivalent Systems

Definition
Two systems of linear equations in the same variables Ax = b
and Cx = d are equivalent if they have the same solutions.

Theorem
The following transformations on the complete matrix Ac transform
a system into an equivalent one.

⊚ E1: swap of two rows ai and aj .
⊚ E2: multiplication of a row ai by a scalar 𝜆 ∈ ℝ, 𝜆 ≠ 0: ai ← 𝜆ai .
⊚ E3: substitution of a row ai by the sum of the row ai and another
row aj : ai ← ai + aj .

Example: Complete Matrix Transformations

Example
⊚ E1: $\begin{cases} 4x - y = 2 \\ 2x + 4y = 1 \end{cases} \Leftrightarrow \begin{cases} 2x + 4y = 1 \\ 4x - y = 2 \end{cases}$
⊚ E2: $\begin{cases} 4x - y = 2 \\ 2x + 4y = 1 \end{cases} \Leftrightarrow \begin{cases} 4x - y = 2 \\ 4x + 8y = 2 \end{cases}$
⊚ E3: $\begin{cases} 4x - y = 2 \\ 2x + 4y = 1 \end{cases} \Leftrightarrow \begin{cases} 4x - y = 2 \\ 6x + 3y = 3 \end{cases}$

The solution of all these systems is $\mathbf{x} = \begin{pmatrix} 1/2 \\ 0 \end{pmatrix}$.
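A minimal sketch (an addition, not from the slides) of the three transformations applied to the complete matrix of this example; each transformed system still has the solution (1/2, 0).

```python
import numpy as np

# Complete matrix of 4x - y = 2, 2x + 4y = 1
Ac = np.array([[4.0, -1.0, 2.0],
               [2.0,  4.0, 1.0]])

E1 = Ac[[1, 0], :]          # E1: swap the two rows

E2 = Ac.copy()
E2[1] = 2.0 * E2[1]         # E2: multiply row 2 by the scalar 2

E3 = Ac.copy()
E3[1] = E3[1] + E3[0]       # E3: replace row 2 with row 2 + row 1

for M in (E1, E2, E3):
    A, b = M[:, :2], M[:, 2]
    print(np.linalg.solve(A, b))   # each prints [0.5 0. ]
```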

Gaussian Elimination - General Idea

From Ax = b, the transformations E1, E2, E3 are applied to Ac
to obtain an equivalent triangular system of equations:

$$
\begin{cases}
a_{1,1} x_1 + a_{1,2} x_2 + \dots + a_{1,n} x_n = b_1 \\
a_{2,2} x_2 + \dots + a_{2,n} x_n = b_2 \\
\quad \vdots \\
a_{n,n} x_n = b_n
\end{cases}
$$

The system can be solved sequentially:

$$
\begin{cases}
x_n = \dfrac{b_n}{a_{n,n}} \\[2ex]
x_{n-1} = \dfrac{b_{n-1} - a_{n-1,n} x_n}{a_{n-1,n-1}} \\[2ex]
\quad \vdots \\
x_i = \dfrac{b_i - \sum_{j=i+1}^{n} a_{i,j} x_j}{a_{i,i}}
\end{cases}
$$
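A small back-substitution sketch (an addition, not from the slides) implementing the sequential formulas above for an upper-triangular coefficient matrix with non-zero diagonal:

```python
import numpy as np

def back_substitution(U, c):
    """Solve Ux = c where U is upper triangular with non-zero diagonal entries."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                        # from x_n down to x_1
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Upper-triangular system taken from the worked example later in this lecture
U = np.array([[1.0, -1.0, -1.0,  1.0],
              [0.0,  2.0,  4.0, -2.0],
              [0.0,  0.0,  1.0,  1.0],
              [0.0,  0.0,  0.0, -1.0]])
c = np.array([0.0, 8.0, 7.0, -4.0])
print(back_substitution(U, c))                            # expected: [1. 2. 3. 4.]
```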
Gaussian Elimination - description of the method

Let us consider a system of linear equations in a matrix
formulation:
$$\mathbf{A}\mathbf{x} = \mathbf{b}$$
and let us write the complete matrix Ac in terms of its row
vectors

$$
\mathbf{A}^{c} =
\begin{pmatrix}
\mathbf{r_1} \\ \mathbf{r_2} \\ \vdots \\ \mathbf{r_n}
\end{pmatrix}
$$

and, to emphasize that we are working at step one,

$$
\mathbf{A}^{c(1)} =
\begin{pmatrix}
\mathbf{r_1}^{(1)} \\ \mathbf{r_2}^{(1)} \\ \vdots \\ \mathbf{r_n}^{(1)}
\end{pmatrix}.
$$
Gaussian Elimination - description of the method

The Gaussian transformations to obtain the matrix at step (2)
are:

$$
\begin{aligned}
\mathbf{r_1}^{(2)} &= \mathbf{r_1}^{(1)} \\
\mathbf{r_2}^{(2)} &= \mathbf{r_2}^{(1)} + \frac{-a_{2,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)} \\
\mathbf{r_3}^{(2)} &= \mathbf{r_3}^{(1)} + \frac{-a_{3,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)} \\
&\;\;\vdots \\
\mathbf{r_n}^{(2)} &= \mathbf{r_n}^{(1)} + \frac{-a_{n,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)}.
\end{aligned}
$$
Gaussian Elimination - description of the method

After the application of these steps, the complete matrix can be
written as

$$
\mathbf{A}^{c(2)} =
\begin{pmatrix}
a_{1,1}^{(2)} & a_{1,2}^{(2)} & \dots & a_{1,n}^{(2)} & b_1^{(2)} \\
0 & a_{2,2}^{(2)} & \dots & a_{2,n}^{(2)} & b_2^{(2)} \\
\dots & \dots & \dots & \dots & \dots \\
0 & a_{n,2}^{(2)} & \dots & a_{n,n}^{(2)} & b_n^{(2)}
\end{pmatrix}.
$$
Gaussian Elimination - description of the method

The Gaussian transformations to obtain the matrix at step (3)
are:

$$
\begin{aligned}
\mathbf{r_1}^{(3)} &= \mathbf{r_1}^{(2)} \\
\mathbf{r_2}^{(3)} &= \mathbf{r_2}^{(2)} \\
\mathbf{r_3}^{(3)} &= \mathbf{r_3}^{(2)} + \frac{-a_{3,2}^{(2)}}{a_{2,2}^{(2)}}\, \mathbf{r_2}^{(2)} \\
&\;\;\vdots \\
\mathbf{r_n}^{(3)} &= \mathbf{r_n}^{(2)} + \frac{-a_{n,2}^{(2)}}{a_{2,2}^{(2)}}\, \mathbf{r_2}^{(2)}
\end{aligned}
$$
Gaussian Elimination - description of the method

which leads to the following complete matrix

$$
\mathbf{A}^{c(3)} =
\begin{pmatrix}
a_{1,1}^{(3)} & a_{1,2}^{(3)} & \dots & a_{1,n}^{(3)} & b_1^{(3)} \\
0 & a_{2,2}^{(3)} & \dots & a_{2,n}^{(3)} & b_2^{(3)} \\
0 & 0 & \dots & a_{3,n}^{(3)} & b_3^{(3)} \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & a_{n,n}^{(3)} & b_n^{(3)}
\end{pmatrix}.
$$
Gaussian Elimination - description of the method

At the generic step (𝑘 + 1), the Gaussian transformation
formulas are

$$
\begin{aligned}
\mathbf{r_1}^{(k+1)} &= \mathbf{r_1}^{(k)} \\
\mathbf{r_2}^{(k+1)} &= \mathbf{r_2}^{(k)} \\
&\;\;\vdots \\
\mathbf{r_k}^{(k+1)} &= \mathbf{r_k}^{(k)} \\
\mathbf{r_{k+1}}^{(k+1)} &= \mathbf{r_{k+1}}^{(k)} + \frac{-a_{k+1,k}^{(k)}}{a_{k,k}^{(k)}}\, \mathbf{r_k}^{(k)} \\
\mathbf{r_{k+2}}^{(k+1)} &= \mathbf{r_{k+2}}^{(k)} + \frac{-a_{k+2,k}^{(k)}}{a_{k,k}^{(k)}}\, \mathbf{r_k}^{(k)} \\
&\;\;\vdots \\
\mathbf{r_n}^{(k+1)} &= \mathbf{r_n}^{(k)} + \frac{-a_{n,k}^{(k)}}{a_{k,k}^{(k)}}\, \mathbf{r_k}^{(k)}
\end{aligned}
$$
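Putting the step formulas together, here is a compact forward-elimination sketch in NumPy (an addition, not from the slides). It assumes every pivot $a_{k,k}^{(k)}$ is non-zero; otherwise a row swap (E1), as in the worked example that follows, would be needed first. Combined with the back-substitution sketch shown earlier, it solves the whole system.

```python
import numpy as np

def forward_elimination(A, b):
    """Reduce the complete matrix (A|b) to upper-triangular form.

    Assumes every pivot encountered on the diagonal is non-zero;
    otherwise a row swap (transformation E1) would be required first.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    Ac = np.column_stack((A, b))             # complete matrix at step (1)
    for k in range(n - 1):                   # produces steps (2), (3), ..., (n)
        for i in range(k + 1, n):
            factor = -Ac[i, k] / Ac[k, k]    # -a_{i,k}^{(k)} / a_{k,k}^{(k)}
            Ac[i] = Ac[i] + factor * Ac[k]   # r_i^{(k+1)} = r_i^{(k)} + factor * r_k^{(k)}
    return Ac[:, :n], Ac[:, n]               # triangular matrix and updated known terms

U, c = forward_elimination([[2, -1, 1], [1, 0, 2], [1, -1, 0]], [3, 3, 1])
print(U)
print(c)
```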
Example: Gaussian Elimination

Example
Let us apply Gaussian elimination to solve the following
system of linear equations:

$$
\begin{cases}
x_1 - x_2 - x_3 + x_4 = 0 \\
2x_1 + 2x_3 = 8 \\
-x_2 - 2x_3 = -8 \\
3x_1 - 3x_2 - 2x_3 + 4x_4 = 7
\end{cases}
$$
Example: Gaussian Elimination

Example
The associated complete matrix is

$$
\mathbf{A}^{c(1)} = (\mathbf{A}|\mathbf{b}) =
\begin{pmatrix}
1 & -1 & -1 & 1 & 0 \\
2 & 0 & 2 & 0 & 8 \\
0 & -1 & -2 & 0 & -8 \\
3 & -3 & -2 & 4 & 7
\end{pmatrix}.
$$
Example: Gaussian Elimination

Example
Let us apply the Gaussian transformations to move to step (2)
$$
\begin{aligned}
\mathbf{r_1}^{(2)} &= \mathbf{r_1}^{(1)} \\
\mathbf{r_2}^{(2)} &= \mathbf{r_2}^{(1)} + \frac{-a_{2,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)} = \mathbf{r_2}^{(1)} - 2\mathbf{r_1}^{(1)} \\
\mathbf{r_3}^{(2)} &= \mathbf{r_3}^{(1)} + \frac{-a_{3,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)} = \mathbf{r_3}^{(1)} + 0\,\mathbf{r_1}^{(1)} \\
\mathbf{r_4}^{(2)} &= \mathbf{r_4}^{(1)} + \frac{-a_{4,1}^{(1)}}{a_{1,1}^{(1)}}\, \mathbf{r_1}^{(1)} = \mathbf{r_4}^{(1)} - 3\mathbf{r_1}^{(1)}
\end{aligned}
$$
Example: Gaussian Elimination

Example
thus obtaining the following complete matrix

$$
\mathbf{A}^{c(2)} =
\begin{pmatrix}
1 & -1 & -1 & 1 & 0 \\
0 & 2 & 4 & -2 & 8 \\
0 & -1 & -2 & 0 & -8 \\
0 & 0 & 1 & 1 & 7
\end{pmatrix}.
$$
Example: Gaussian Elimination

Example
Let us apply the Gaussian transformations to move to step (3)
$$
\begin{aligned}
\mathbf{r_1}^{(3)} &= \mathbf{r_1}^{(2)} \\
\mathbf{r_2}^{(3)} &= \mathbf{r_2}^{(2)} \\
\mathbf{r_3}^{(3)} &= \mathbf{r_3}^{(2)} + \frac{-a_{3,2}^{(2)}}{a_{2,2}^{(2)}}\, \mathbf{r_2}^{(2)} = \mathbf{r_3}^{(2)} + \tfrac{1}{2}\mathbf{r_2}^{(2)} \\
\mathbf{r_4}^{(3)} &= \mathbf{r_4}^{(2)} + \frac{-a_{4,2}^{(2)}}{a_{2,2}^{(2)}}\, \mathbf{r_2}^{(2)} = \mathbf{r_4}^{(2)} + 0\,\mathbf{r_2}^{(2)}
\end{aligned}
$$
Example: Gaussian Elimination

Example
thus obtaining the following complete matrix

$$
\mathbf{A}^{c(3)} =
\begin{pmatrix}
1 & -1 & -1 & 1 & 0 \\
0 & 2 & 4 & -2 & 8 \\
0 & 0 & 0 & -1 & -4 \\
0 & 0 & 1 & 1 & 7
\end{pmatrix}.
$$
Example: Gaussian Elimination

Example
In general, one more step would be needed to obtain a
triangular matrix. In this case, however, it is enough to swap
the third and fourth rows (transformation E1) to obtain

$$
\begin{pmatrix}
1 & -1 & -1 & 1 & 0 \\
0 & 2 & 4 & -2 & 8 \\
0 & 0 & 1 & 1 & 7 \\
0 & 0 & 0 & -1 & -4
\end{pmatrix}.
$$

By back substitution, the solutions are $x_4 = 4$, $x_3 = 3$, $x_2 = 2$, and $x_1 = 1$.
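As a quick check (an addition, not from the slides), the same solution is obtained numerically:

```python
import numpy as np

A = np.array([[1.0, -1.0, -1.0, 1.0],
              [2.0,  0.0,  2.0, 0.0],
              [0.0, -1.0, -2.0, 0.0],
              [3.0, -3.0, -2.0, 4.0]])
b = np.array([0.0, 8.0, -8.0, 7.0])

print(np.linalg.solve(A, b))   # expected: [1. 2. 3. 4.]
```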

Complexity of Gaussian Elimination

The number of elementary operations involved in Gaussian
Elimination is on the order of 𝑛³. Hence, a system of 50
equations requires about 125,000 operations and can be solved
by a computer in just a few milliseconds.

Summary and Next Lecture

Summary
⊚ Square systems of linear equations with non-singular
matrices have only one solution.
⊚ The solution is calculated by inverting the system’s matrix
or applying Cramer’s method.
⊚ Alternatively, Gaussian elimination manipulates the
matrix to produce a triangular system. This method is
computationally less expensive and can be applied to
larger systems of equations.

Next Lecture
We will learn how to approach systems of linear equations
with singular matrices.
Mock Test Session
Question 1

Apply, if possible, Cramer’s Theory to solve the following
system of linear equations

$$
\begin{cases}
3x - 2y + z = 2 \\
2z = 2 \\
x + y = 2
\end{cases}
$$
Question 2

Apply, if possible, Cramer’s Theory to solve the following
system of linear equations

$$
\begin{cases}
3x - 2y + z = 2 \\
2x + 2z = 2 \\
x + y + 2z = 2
\end{cases}
$$
Question 3

Solve the following system of linear equations by Gaussian
elimination

$$
\begin{cases}
x + y = 1 \\
4x + y - z = 0 \\
2x - y + z = 2
\end{cases}
$$
Question 4

Solve, if possible, the following system of linear equations by
Gaussian elimination.

$$
\begin{cases}
x + y + 2z = 4 \\
2x + y + z = 4 \\
x - z = 0
\end{cases}
$$
Question 5

Consider the following system of linear equations

$$
\begin{cases}
a_{12}\, y + a_{13}\, z = b_1 \\
a_{21}\, x + a_{23}\, z = b_2 \\
a_{31}\, x + a_{32}\, y = b_3
\end{cases}
$$

Can you apply Gaussian elimination to this system? Justify
your answer.
THANK YOU
