
“AZƏRBAYCAN HAVA YOLLARI” QSC

NATIONAL AVIATION ACADEMY

Individual Work № 1

Topic: Cramer's rule and the Gauss elimination method for systems of linear algebraic
equations.

Subject: Higher Mathematics Teacher: Elvin Əzizbəyov

Group: 1419i Student: Hidayət Köçərli

Date: 20.12.2020 Signature: Köçərli

Baku 2020
What is Gaussian elimination

Gaussian elimination is the name of the method we use to perform the three types of elementary row
operations on an augmented matrix coming from a linear system of equations in order to find the
solutions of such a system. This technique is also called row reduction, and it consists of two
stages: forward elimination and back substitution.

These two stages of the Gaussian elimination method are differentiated not by the operations you can
use in them, but by the result they produce. The forward elimination step refers to the row
reduction needed to simplify the matrix in question into its echelon form. The purpose of this stage is
to show whether the system of equations portrayed in the matrix has a unique solution, infinitely
many solutions, or no solution at all. If the system is found to have no solution, there is no reason
to continue row reducing the matrix through the next stage.

If it is possible to obtain solutions for the variables involved in the linear system, then the back
substitution stage is carried out. This last step produces the reduced echelon form of the matrix,
which in turn provides the general solution to the system of linear equations.

The Gaussian elimination rules are the same as the rules for the three elementary row operations;
in other words, you can algebraically operate on the rows of a matrix in the following three ways (or
a combination of them):

1. Interchanging two rows
2. Multiplying a row by a constant (any constant which is not zero)
3. Adding a row to another row

And so, solving a linear system with matrices using Gaussian elimination happens to be a structured,
organized and quite efficient method.
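
To make the two stages concrete, here is a minimal Python sketch of the procedure described above. The 3×3 system in it is a made-up illustration, not one of the examples from this assignment, and the code assumes no row interchanges are needed (every pivot is nonzero); it is meant as a sketch, not a production solver.

```python
# A minimal sketch of Gaussian elimination: forward elimination to echelon
# form, then back substitution. The 3x3 system below is a made-up example.

def gaussian_elimination(A, b):
    """Solve A x = b for a small square system (no pivoting / row swaps)."""
    n = len(A)
    # Build the augmented matrix [A | b] so both stages work on one object.
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]

    # Stage 1: forward elimination -- zero out the entries below each pivot.
    for k in range(n):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]           # assumes a nonzero pivot
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]      # row_i <- row_i - factor * row_k

    # Stage 2: back substitution -- solve from the last row upwards.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Made-up system:  x + y + z = 1,  x + 2y + 3z = -1,  2x + y + z = 2
A = [[1.0, 1.0, 1.0],
     [1.0, 2.0, 3.0],
     [2.0, 1.0, 1.0]]
b = [1.0, -1.0, 2.0]

print(gaussian_elimination(A, b))   # [1.0, 2.0, -2.0]
```

The inner loops of the first stage are exactly the "add a multiple of one row to another row" operation from the list above; the second loop then reads the unknowns off the triangular (echelon-form) system from the bottom row upwards.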

Gaussian elimination examples

As our last section, let us work through some more exercises on Gaussian elimination (row reduction)
so you can get more practice with this methodology. Throughout many future lessons in this
course for Linear Algebra, you will find that row reduction is one of the most important tools
when working with matrix equations. Therefore, make sure you understand all of the steps involved in
the solution of the next problems.

• If we were to have the following system of linear equations containing three equations for three
unknowns:

Equation 1: System of linear equations to solve


• We know from our lesson on representing a linear system as a matrix that we can represent such a
system as an augmented matrix like the one below:

Equation 2: Transcribing the linear system into an augmented matrix

• Let us row-reduce (use Gaussian elimination) so we can simplify the matrix:

Equation 3: Row reducing (applying the Gaussian elimination method to) the augmented matrix

• Resulting in the matrix:

Equation 4: The matrix reduced to its echelon form



Notice that at this point we can observe that this system of linear equations is solvable, with a unique
solution for each of its variables. What we have performed so far is the first stage of row reduction:
forward elimination. We could continue simplifying this matrix even further (which would take us to the
second stage, back substitution), but we really don't need to, since at this point the system is easily
solvable. Thus, we look at the resulting system and solve it directly:


• Equation 5: Resulting linear system of equations to solve
From this set, we can immediately observe that the value of the variable z is z = -2. We use this
knowledge to substitute it into the second equation to solve for y, and then substitute both the y and z
values into the first equation to solve for x:

• Applying the values of y and z to the first equation

Equation 6: Solving the resulting linear system of equations

• And the final solution for the system is:

Equation 7: Final solution to the system of linear equations for example 1


More Gaussian elimination problems have been added to this lesson in its last section. Make sure to
work through them in order to practice.
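
Since the matrices behind Equations 1–7 are not reproduced in this text, here is a quick way to cross-check a Gaussian-elimination result: the snippet below feeds the same made-up system from the earlier sketch to NumPy's built-in solver, `numpy.linalg.solve`. The system is only an illustrative stand-in, not the lesson's actual example.

```python
# Cross-check the hand-rolled Gaussian elimination against NumPy's solver.
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [2.0, 1.0, 1.0]])
b = np.array([1.0, -1.0, 2.0])

x = np.linalg.solve(A, b)   # library routine, itself based on elimination
print(x)                    # [ 1.  2. -2.]
```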

Cramer's rule
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as
many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in
terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one
column by the column vector of right-hand-sides of the equations. It is named after Gabriel Cramer (1704–
1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also
published special cases of the rule in 1748 (and possibly knew of it as early as 1729).
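
In symbols (the standard statement of the rule, added here for reference): for a square system A x = b with det(A) ≠ 0,

```latex
% Cramer's rule for the n x n system A x = b with det(A) != 0.
% A_i denotes A with its i-th column replaced by the right-hand-side vector b.
x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \dots, n.
```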

The proof for Cramer's rule


The proof of Cramer's rule uses the following properties of determinants: linearity with respect to any
given column, and the fact that the determinant is zero whenever two columns are equal (which is implied
by the property that the sign of the determinant flips if you switch two columns).

Given a system of linear equations, Cramer's Rule is a handy way to solve for just one of the
variables without having to solve the whole system of equations. They don't usually teach Cramer's
Rule this way, but this is supposed to be the point of the Rule: instead of solving the entire system of
equations, you can use Cramer's to solve for just one single variable.

Let's use the following system of equations:

2x +   y + z = 3
  x –   y – z = 0
  x + 2y + z = 0

We have the left-hand side of the system with the variables (the "coefficient matrix") and the right-
hand side with the answer values. Let D be the determinant of the coefficient matrix of the above
system, and let Dx be the determinant formed by replacing the x-column values with the answer-
column values:

The system of equations (with every coefficient written explicitly), the coefficient matrix's
determinant D, and Dx (the coefficient determinant with the answer-column values in the x-column) are:

system of equations:

2x + 1y + 1z = 3
1x – 1y – 1z = 0
1x + 2y + 1z = 0

D (determinant of the coefficient matrix):

| 2   1   1 |
| 1  –1  –1 |
| 1   2   1 |

Dx (answer-column values replacing the x-column):

| 3   1   1 |
| 0  –1  –1 |
| 0   2   1 |

Similarly, Dy and Dz would then be the determinants formed by replacing the y-column and the z-column,
respectively, with the answer-column values:

Dy:

| 2   3   1 |
| 1   0  –1 |
| 1   0   1 |

Dz:

| 2   1   3 |
| 1  –1   0 |
| 1   2   0 |

Evaluating each determinant (for instance by cofactor expansion), we get:

D = 3,  Dx = 3,  Dy = –6,  Dz = 9


Cramer's Rule says that x = Dx  ÷ D, y = Dy  ÷ D, and z = Dz  ÷ D. That is:

x = 3/3 = 1,  y = –6/3 = –2,  and  z = 9/3 = 3

That's all there is to Cramer's Rule. To find whichever variable you want (call it "β" or "beta"), just
evaluate the determinant quotient Dβ ÷ D.
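
As a quick cross-check of the worked example above, the following NumPy snippet evaluates D, Dx, Dy and Dz with `numpy.linalg.det` and forms the three quotients; it should reproduce x = 1, y = –2, z = 3 up to floating-point rounding.

```python
# Verify Cramer's rule for the example system:
#   2x +  y + z = 3
#    x -  y - z = 0
#    x + 2y + z = 0
import numpy as np

A = np.array([[2.0,  1.0,  1.0],
              [1.0, -1.0, -1.0],
              [1.0,  2.0,  1.0]])
b = np.array([3.0, 0.0, 0.0])

D = np.linalg.det(A)
solution = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                      # replace the i-th column with the answer column
    solution.append(np.linalg.det(Ai) / D)

print(D)          # approximately 3.0
print(solution)   # approximately [1.0, -2.0, 3.0]
```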
