Numerical 05
Theory
Linear equations are foundational in mathematics and find extensive application across various
fields, including physics, engineering, economics, and computer science. A system of linear
equations involves multiple equations with multiple variables, and finding solutions to such
systems is a fundamental problem in algebra. Several methods exist for solving systems of linear
equations, each with its advantages and applications. Here, we discuss some of the primary
solution methods:
1. Graphical Method:
This method involves plotting each equation on a graph and identifying the
point(s) of intersection, which represent the solution(s) of the system.
Suitable for systems with two variables, allowing for visual interpretation of
solutions.
Impractical for systems with more than two variables.
2. Substitution Method:
In this method, one variable is solved for in terms of the other variables in one of
the equations. The resulting expression is then substituted into the other
equations, reducing the system to one with fewer variables.
The process is repeated until all variables are solved for, yielding the solution(s)
of the system.
Effective for systems with relatively simple equations and a small number of
variables.
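The substitution step can be made concrete with a small worked example. The 2x2 system below is hypothetical (not from the exercises), chosen so the arithmetic is easy to follow by hand:

```python
# Substitution method on a small 2x2 example (hypothetical system):
#   x + y = 5
#   2x - y = 1
# Step 1: solve the first equation for y:  y = 5 - x
# Step 2: substitute into the second:      2x - (5 - x) = 1  ->  3x = 6
x = (1 + 5) / 3   # x = 2
y = 5 - x         # back-substitute to recover y = 3
print(x, y)
```

The same two steps (isolate one variable, substitute, then back-substitute) repeat for each additional variable in a larger system.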
3. Elimination Method (Gaussian Elimination):
This method involves transforming the system of equations into an equivalent
triangular form through a series of row operations, such as adding or subtracting
multiples of one equation from another.
Once the system is in triangular form, back substitution is used to find the values
of the variables.
Gaussian elimination is versatile and applicable to systems of any size, making it
one of the most widely used methods.
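The row-operation procedure described above can be sketched in a few lines of Python. This is a minimal plain-list implementation (the name `gauss_solve` is illustrative); it includes a row swap to avoid zero pivots, and is tried on a small 3x3 system:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with row swaps,
    followed by back substitution on the triangular system."""
    n = len(A)
    # Build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for j in range(n - 1):
        # Swap in the row with the largest pivot candidate
        k = max(range(j, n), key=lambda i: abs(M[i][j]))
        M[j], M[k] = M[k], M[j]
        # Eliminate entries below the pivot
        for i in range(j + 1, n):
            factor = M[i][j] / M[j][j]
            M[i] = [M[i][c] - factor * M[j][c] for c in range(n + 1)]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][c] * x[c] for c in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 3x3 example: 0.3y + 5z = -30, -7x + 4y - 9z = -2.5, 6x + 20y - 3z = 7.5
print(gauss_solve([[0, 0.3, 5], [-7, 4, -9], [6, 20, -3]],
                  [-30, -2.5, 7.5]))
```

Note that the leading zero in the first row forces a row swap: without it, the very first pivot would be zero and the elimination would fail.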
4. Matrix Method (Matrix Algebra):
Systems of linear equations can be represented in matrix form Ax = b, where
A is the coefficient matrix, x is the column vector of variables, and b is the
column vector of constants.
Solution methods include Gaussian elimination, matrix inversion, and matrix
decomposition techniques such as LU decomposition or QR decomposition.
Matrix methods are computationally efficient and particularly suited for
implementation in computer algorithms.
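For the 2x2 case, the matrix form Ax = b can be solved directly with the closed-form inverse. A minimal sketch (the system is hypothetical, chosen so the arithmetic is easy to check; `solve_2x2` is an illustrative name):

```python
def solve_2x2(A, b):
    """Solve A x = b for a 2x2 system via the closed-form inverse:
    A^-1 = (1/det A) * [[a22, -a12], [-a21, a11]]."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular; no unique solution")
    x1 = (a22 * b[0] - a12 * b[1]) / det
    x2 = (-a21 * b[0] + a11 * b[1]) / det
    return [x1, x2]

# x + y = 5, 2x - y = 1  ->  x = 2, y = 3
print(solve_2x2([[1, 1], [2, -1]], [5, 1]))
```

For larger systems the explicit inverse is rarely computed; LU or QR factorization of A is the standard route.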
5. Cramer's Rule:
Cramer's rule provides a formulaic approach for solving systems of linear
equations using determinants.
It expresses the solutions as ratios of determinants involving matrices derived
from the coefficient matrix and the constant vector.
While conceptually elegant, Cramer's rule can be computationally intensive and
less practical for large systems due to its reliance on determinants.
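A direct transcription of Cramer's rule, with determinants computed by cofactor expansion, illustrates both the formula and its cost (the expansion grows factorially, so this sketch is only sensible for small systems):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Cramer's rule: x_j = det(A_j) / det(A), where A_j is A with
    column j replaced by the constant vector b."""
    d = det(A)
    return [det([row[:j] + [b[i]] + row[j + 1:]
                 for i, row in enumerate(A)]) / d
            for j in range(len(A))]

# The same 3x3 system that appears in the exercises below:
print(cramer([[0, 0.3, 5], [-7, 4, -9], [6, 20, -3]],
             [-30, -2.5, 7.5]))
```

The result agrees with the elimination-based solutions, but each x_j costs a fresh n x n determinant, which is why elimination is preferred in practice.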
6. Iterative Methods:
Iterative methods, such as the Jacobi method, the Gauss-Seidel method, and
Successive Over-Relaxation (SOR), approximate the solution of a system of
linear equations rather than computing it directly.
These methods start with an initial guess and iteratively refine the solution until
convergence criteria are met.
Iterative methods are useful for large, sparse systems and are often employed in
numerical simulations and scientific computing.
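A minimal Jacobi sketch shows the iterate-and-refine pattern. The system here is hypothetical and chosen to be diagonally dominant, a standard sufficient condition for Jacobi to converge:

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration: each sweep recomputes every x_i from the
    previous sweep's values of the other variables."""
    n = len(A)
    x = [0.0] * n  # initial guess
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant example: 10x + y = 12, x + 10y = 21 -> x = 1, y = 2
print(jacobi([[10, 1], [1, 10]], [12, 21]))
```

In practice the loop would stop when successive iterates agree to within a tolerance instead of running a fixed number of sweeps; the fixed count keeps the sketch short.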
% Augmented matrix
G = [A B];

% Back substitution
[r, c] = size(G);
X = zeros(1, r);
for i = r:-1:1
    X(i) = (G(i,c) - sum(G(i,1:r) .* X)) / G(i,i);
end

% Solutions
X
2. Solve the following linear equations using the Gauss Elimination Method
with partial pivoting:

   0.3b + 5c + 30 = 0
   -7a + 4b - 9c = -2.5
   6a + 20b - 3c + 7.5 = 0

clear all;
close all;
clc;

% Coefficient matrix
A = [0 0.3 5;
     -7 4 -9;
     6 20 -3];

% Constant matrix
b = [-30 -2.5 -7.5]';

% Augmented matrix
G = [A, b];
[r, c] = size(G);

% Gauss elimination with partial pivoting
for j = 1:r-1
    % Partial pivoting: move the largest pivot candidate into row j
    [~, k] = max(abs(G(j:end,j)));
    k = k + j - 1;
    temp = G(j,:);
    G(j,:) = G(k,:);
    G(k,:) = temp;
    for i = j+1:r
        factor = G(i,j) / G(j,j);
        G(i,:) = G(i,:) - factor * G(j,:);
    end
end

% Back substitution
X = zeros(1,r);
for i = r:-1:1
    X(i) = (G(i,c) - sum(G(i,1:r) .* X)) / G(i,i);
end

% Solutions
X

Output:
X =
    6.0795   -3.0712   -5.8157
Home Assignments
% Constant matrix
B = [-30; -2.5; -7.5];

% Augmented matrix
AB = [A, B];

% Gaussian elimination
[n, m] = size(AB);
for i = 1:n
    % Pivot operation
    pivot = AB(i, i);
    AB(i, :) = AB(i, :) / pivot;
    for j = i+1:n
        factor = AB(j, i) / AB(i, i);
        AB(j, :) = AB(j, :) - factor * AB(i, :);
    end
end

% Back substitution
x = zeros(n, 1);
for i = n:-1:1
    x(i) = (AB(i, m) - AB(i, 1:n) * x) / AB(i, i);
end

% Solutions
x
2. Solve the following linear equations using the Gauss Elimination Method
with partial pivoting:

   0.3b + 5c + 30 = 0
   -7a + 4b - 9c = -2.5
   6a + 20b - 3c + 7.5 = 0

clear all;
close all;
clc;

% Coefficient matrix
A = [0 0.3 5;
     -7 4 -9;
     6 20 -3];

% Constant matrix
b = [-30;
     -2.5;
     7.5];

% Augmented matrix
AB = [A, b];
[r, c] = size(AB);

% Gauss elimination with partial pivoting
for j = 1:r-1
    % Partial pivoting: move the largest pivot candidate into row j
    [~, k] = max(abs(AB(j:end,j)));
    k = k + j - 1;
    temp = AB(j,:);
    AB(j,:) = AB(k,:);
    AB(k,:) = temp;
    % Elimination
    for i = j+1:r
        factor = AB(i,j) / AB(j,j);
        AB(i,:) = AB(i,:) - factor * AB(j,:);
    end
end

% Back substitution
x = zeros(r, 1);
x(r) = AB(r,c) / AB(r,r);
for i = r-1:-1:1
    x(i) = (AB(i,c) - AB(i,i+1:r) * x(i+1:r)) / AB(i,i);
end

% Solutions
x

Output:
x =
    6.4837
   -2.4481
   -5.8531
3. Solve the following linear equations using the Gauss-Jordan Method:

   0.3b + 5c + 30 = 0
   -7a + 4b - 9c = -2.5
   6a + 20b - 3c + 7.5 = 0

clear all;
close all;
clc;

% Coefficient matrix
A = [0 0.3 5;
     -7 4 -9;
     6 20 -3];

% Constant matrix
b = [-30;
     -2.5;
     7.5];

% Augmented matrix
AB = [A, b];
[r, c] = size(AB);

% Gauss-Jordan elimination
for j = 1:r
    % Partial pivoting
    [~, k] = max(abs(AB(j:end,j)));
    k = k + j - 1;
    temp = AB(j,:);
    AB(j,:) = AB(k,:);
    AB(k,:) = temp;
    % Normalization
    AB(j,:) = AB(j,:) / AB(j,j);
    % Elimination above and below the pivot
    for i = 1:r
        if i ~= j
            factor = AB(i,j) / AB(j,j);
            AB(i,:) = AB(i,:) - factor * AB(j,:);
        end
    end
end

% Solutions
x = AB(:,end);
x

Output:
x =
    6.4837
   -2.4481
   -5.8531