Download as pdf or txt
Download as pdf or txt
You are on page 1of 17

Computational Methods in Electrical and Electronic Engineering

Report 2
DO LE DUY・ 1026322038
ECS-ID: a0192284

Introduction
We use Python with Jupyter Notebook to solve the tasks this report. The program is edited and ran on Visual Studio Code. We
only use matplotlib for general plotting; otherwise, no other third-party numerical functions is used.

In [ ]:
import matplotlib.pyplot as plt
import copy

We determine the Machine Epsilon of the environment to be as follow:

In [ ]:
import sys
EPS_env = sys.float_info.epsilon
print ("The EPS value is: ", EPS_env)

The EPS value is: 2.220446049250313e-16


This means the number of digits in the mantissa is 53. Python's float uses 64 bits as it is double precision. We also defined several
helper functions in as follow:

In [ ]:
def mul_mat_vec(A, x):
rs = len(A)
cs = len(A[0])
y = [0]*rs
for i in range(rs):
for j in range(cs):
y[i] += A[i][j] * x[j]
return y

def dot(v1, v2):


return sum(x*y for x, y in zip(v1, v2))

def transpose(A):
return list(zip(*A))

def res_norm(v1, v2):


val = 0
for i in range(len(v1)):
val += abs(v1[i] - v2[i])
return val

def norm(v):
import math
val = 0
for i in range(len(v)):
val += v[i]**2
return math.sqrt(val)

def mul_mat_mat(A, B):


rs = len(A)
cs = len(B)
C_T = []
for b_cl in transpose(B):
C_T.append(mul_mat_vec(A, b_cl))

return transpose(C_T)
The constants used in the tasks are defined in the snippet below:

In [ ]:
A = '''12 1 5 1 1 2 -4 1 2
1 16 -1 -4 -5 -2 1 2 3
5 -1 15 -5 3 1 -2 1 -4
1 -4 -5 10 3 -3 -1 4 1
1 -5 3 3 11 -1 4 1 1
2 -2 1 -3 -1 15 -5 2 5
-4 1 -2 -1 4 -5 15 4 -4
1 2 1 4 1 2 4 11 -1
2 3 -4 1 1 5 -4 -1 15'''

A = [[float(num) for num in line.split(" ")] for line in A.split("\n")]

alpha1 = [1.0]*9
alpha2 = [1e10]*9
alpha3 = [1e-10]*9

b1 = mul_mat_vec(A,alpha1)
b2 = mul_mat_vec(A,alpha2)
b3 = mul_mat_vec(A,alpha3)

Task 2.1
Task 2.1.1
This task requires to solve the linear system A ⋅ x = b using LU decomposition. The first step in this process is using Crout's
Method to find the L and U matrices such that A = LU , L is lower triangular and U is unit upper triangular. The defining
equations for Crout's Method are

i−1

lij = aij − ∑ lik ukj ,  where i ≥ j

k=1

and

i−1
aij − ∑ lik ukj
k=1
uij = ,  where i < j
lii

Then:

A ⋅ x = (L ⋅ U )x = L ⋅ (U x) = b

L ⋅ y = b

U ⋅ x = y

We will forward substitution to solve for U x and then backward substitution x. The functions are defined as follow:

In [ ]:
def crout(A):
n = len(A)
L = [[0.0]*n for i in range(n)]
U = [[0.0]*n for i in range(n)]

for i in range(n):
U[i][i] = 1.0

for j in range(i, n):


sum0 = sum(L[j][k] * U[k][i] for k in range(i))

L[j][i] = A[j][i] - sum0


for j in range(i, n):
sum1 = sum(L[i][k] * U[k][j] for k in range(i))

U[i][j] = (A[i][j] - sum1) / L[i][i]


return L, U

def forward_subs(L, b):


rs = len(L)
x=[0]*rs
for i in range(len(b)):
x[i] = b[i]
for j in range(i):
x[i] -= (L[i][j]*x[j])
x[i]=x[i]/L[i][i]
return x

def back_subs(U, Ux):


rs = len(U)
x=[0]*rs
for i in range(rs,0,-1):
x[i-1]=(Ux[i-1]-dot(U[i-1][i:],x[i:]))/U[i-1][i-1]

return x

def LU_sys_solver(L, U, b):


Ux = forward_subs(L, b)
x = back_subs(U, Ux)
return x

Consider the linear system:

Ax1 = b1

Using the process detail before, we could determine the vector x to be: 1

In [ ]:
L, U = crout(A)
x1 = LU_result1 = LU_sys_solver(L, U, b1)
print(x1)

[1.0000000000000007, 0.9999999999999983, 0.9999999999999978, 0.9999999999999958, 1.0000000000000009, 0.9999999999


99998, 0.999999999999998, 1.000000000000003, 1.0000000000000002]
We could confirm x is similar to α - the true solution. The different between the two is as follow:
1 1

In [ ]:
print([x1[i] - alpha1[i] for i in range(len(x1))])

[6.661338147750939e-16, -1.6653345369377348e-15, -2.220446049250313e-15, -4.218847493575595e-15, 8.88178419700125


2e-16, -1.9984014443252818e-15, -1.9984014443252818e-15, 3.1086244689504383e-15, 2.220446049250313e-16]
We could observe that the error is about the magnitude of the machine epsilon EPS, with the smallest error value equal to the
EPS.

Task 2.1.2
This task requires to use iterative methods to solve the linear system. The three iterative methods to be implement are Jacobi
Method, Gauss–Seidel Method, and SOR(successive over relaxation) Method.

Jacobi Method

In the Jacobi Method, the solution is obtained iteratively via

(k+1) −1 (k)
x = D (b − (L + U )x ).

We could rewrite it as an element-based formula for each row i:

(k+1) 1 ⎛ (k)

x = bi − ∑ aij x , i = 1, 2, … , n.
i j
aii ⎝ ⎠
j≠i

We implemented the The Jacobi Method and applied it to solve the task as follow:

In [ ]:
def jacobi(A, b, alpha, EPS = 1e-10, N = 10000, verbose = True):
# Initialize matrices, and vectors
rs = len(A)
D = [0.0] * rs
R = copy.deepcopy(A)
for i in range(rs):
D[i] = A[i][i]
R[i][i] = 0.0
x = [0.0] * rs
x_temp = [0.0] * rs

res_norm_lst = [] # Residal Norm List for Plot


error_list = [] # Error List for Plot
for k in range(N):
for i in range(rs):
x_temp[i] = (b[i] - dot(R[i], x)) / D[i]
x = x_temp
res_norm_lst.append(res_norm(b, mul_mat_vec(A, x)))
error_list.append(res_norm(alpha, x))
if res_norm_lst[-1] < EPS:
break
if verbose:
print("Iter count: ", k+1)
print("Solution: ", x)
plt.plot(range(k+1), res_norm_lst, label = "Residal Norm")
plt.plot(range(k+1), error_list, label = "Error Norm")
plt.yscale("log")
plt.legend(loc='best')
plt.title("Jacobi Method")
plt.show()
return x

jacobi_result = jacobi(A, b1, alpha1)

Iter count: 3295


Solution: [1.0000000000923475, 0.99999999983575, 0.999999999678591, 0.9999999994385916, 1.0000000002060625, 0.99
9999999740783, 0.9999999997076513, 1.0000000003888914, 0.9999999999928898]

Gauss–Seidel Method

The defining element-based equation for Gauss–Seidel Method is shown below:

i−1 n
(k+1) 1 (k+1) (k)
x = (bi − ∑ aij x − ∑ aij x )
i j j
aii
j=1 j=i+1

We will implement the Gauss–Seidel Method based on the above equation and applied it to solve the linear system:

In [ ]:
def ga_se(A, b, alpha, EPS = 1e-10, N = 10000, verbose = True):
# Initialize matrices, and vectors
rs = len(A)
D = [0.0] * rs
R = copy.deepcopy(A)
for i in range(rs):
D[i] = A[i][i]
R[i][i] = 0.0
x = [0.0] * rs

res_norm_lst = [] # Residal Norm List for Plot


error_list = [] # Error List for Plot
for k in range(N):
for i in range(rs):
x[i] = (b[i] - dot(R[i], x)) / D[i]

res_norm_lst.append(res_norm(b, mul_mat_vec(A, x)))


error_list.append(res_norm(alpha, x))
if res_norm_lst[-1] < EPS:
break
if verbose:
print("Iter count: ", k+1)
print("Solution: ", x)
plt.plot(range(k+1), res_norm_lst, label = "Residal Norm")
plt.plot(range(k+1), error_list, label = "Error Norm")
plt.yscale("log")
plt.legend(loc='best')
plt.title("Gauss Seidel Method")
plt.show()
return x

ga_se_result = ga_se(A, b1, alpha1)

Iter count: 3026


Solution: [1.000000000092233, 0.9999999998359536, 0.9999999996789889, 0.9999999994392867, 1.0000000002058076, 0.
9999999997411039, 0.9999999997080132, 1.0000000003884097, 0.9999999999928985]

SOR Method

Similarly, the defining element-based equation for SOR Method is shown below:

(k+1) (k) ω (k+1) (k)


x = (1 − ω)x + (bi − ∑ aij x − ∑ aij x ), i = 1, 2, … , n
i i j j
aii
j<i j>i

We choose the value of relxation coefficient to be w = 1.5 and implement it as follow:

In [ ]:
def sor(A, b, w, alpha, EPS = 1e-10, N = 10000, verbose = True):
# Initialize matrices, and vectors
rs = len(A)
D = [0.0] * rs
R = copy.deepcopy(A)
for i in range(rs):
D[i] = A[i][i]
R[i][i] = 0.0
x = [0.0] * rs
x_temp = [0.0] * rs

res_norm_lst = [] # Residal Norm List for Plot


error_list = [] # Error List for Plot
for k in range(N):
for i in range(rs):
x[i] = w * (b[i] - dot(R[i], x)) / D[i] + (1-w) * x[i]

res_norm_lst.append(res_norm(b, mul_mat_vec(A, x)))


error_list.append(res_norm(alpha, x))
if res_norm_lst[-1] < EPS:
break
if verbose:
print("Iter count: ", k+1)
print("Solution: ", x)
plt.plot(range(k+1), res_norm_lst, label = "Residal Norm")
plt.plot(range(k+1), error_list, label = "Error Norm")
plt.yscale("log")
plt.legend(loc='best')
plt.title("SOR Method")
plt.show()
return x

sor_result = sor(A, b1, 1.5, alpha1)

Iter count: 1116


Solution: [0.9999999999477229, 1.0000000000912115, 1.0000000001777134, 1.0000000003088279, 0.9999999998876647,
1.0000000001423095, 1.0000000001593543, 0.9999999997878721, 1.0000000000039089]

By studying the plots of residal norm over iterations, we could conclude that the error converges linearly as the difference
decreases exponentially over time.

Task 2.1.3
The error norm also decrease exponentially i.e converge linearly, and with the same rate as residal norm. As a result, the error
norm is always proportional to the residal norm after sufficient time has passed. We could conclude that for this problem, we
could reliably increase the accuracy of the solution by decreasing the condition for residal norm.

Task 2.1.4
Conjugate gradient method

The conjugate gradient method is an algorithm for solving Ax = b where A is a real, symmetric, positive-definite matrix. We
input an approximate initial solution vector x .
0

r0 := b − Ax0

p0 := r0

k := 0

 repeat 


r rk
k
αk :=

p Apk
k

xk+1 := xk + αk pk

rk+1 := rk − αk Apk

if rk+1 is sufficiently small, then exit loop


r rk+1
k+1
βk :=

r rk
k

pk+1 := rk+1 + βk pk

k := k + 1

end repeat
The algorithm is implemented as the subroutine linear_cg . We confirm the correctness of the subroutine by solving for the
solution of Ax 1
= b1 .

In [ ]:
def linear_cg(A, b, e = 1e-10, N = 1000):
n = len(A)
xk = [0.0] * n
rk = [mul_mat_vec(A, xk)[i] - b[i] for i in range(n)]
pk = [-r for r in rk]

num_iter = 0
x = [xk]
for iter in range(N):
apk = mul_mat_vec(A, pk)
rkrk = dot(rk, rk)

alpha = rkrk / dot(pk, apk)


xk = [xk[i] + alpha * pk[i] for i in range(n)]
rk = [rk[i] + alpha * apk[i] for i in range(n)]
beta = dot(rk, rk) / rkrk
pk = [-rk[i] + beta * pk[i] for i in range(n)]

num_iter += 1
x.append(xk)
print('Iteration: {}; x = {}; residual = {}'.format(num_iter, xk, norm(rk)))
if norm(rk) < e:
break

return x

x = linear_cg(A, b1, e = 1e-10)

Iteration: 1; x = [1.1976790319381074, 0.6273556833961516, 0.7414203531045427, 0.34219400912517356, 1.02658202737


55206, 0.7984526879587384, 0.45625867883356475, 1.42580837135489, 1.0265820273755206]; residual = 7.9322843846943
885
Iteration: 2; x = [1.1239633353811769, 0.7333152276550705, 0.6698437722024668, 0.37598994163348876, 1.12796982490
04662, 0.6669082635265703, 0.6941213885987522, 1.5106243308748601, 1.012768948698846]; residual = 1.1188689253687
654
Iteration: 3; x = [1.109299282154137, 0.7758355078662453, 0.6546165280016074, 0.35210031799115715, 1.157849277042
3459, 0.6759281675067013, 0.694852741205554, 1.4695685915700991, 1.0476937869624943]; residual = 0.53592423066330
16
Iteration: 4; x = [1.1043911573254652, 0.7793635555241604, 0.6621923868098872, 0.351521151632305, 1.2018946538098
345, 0.6830606584369792, 0.6915049325783603, 1.4460050456230744, 1.0320500615690578]; residual = 0.34280745972201
376
Iteration: 5; x = [1.0990605362514667, 0.8174095864190228, 0.6424052657640149, 0.3746339811530472, 1.233074711092
6898, 0.7081545539919184, 0.6690217834006685, 1.437490540944326, 0.9939318443210646]; residual = 0.07286030762967
691
Iteration: 6; x = [1.0997071267786025, 0.8199530643614047, 0.6444800027709047, 0.3739457415819264, 1.231408362209
6672, 0.7113260382311992, 0.6718484918750219, 1.4356318677864865, 0.9909052534842518]; residual = 0.0567700840007
4896
Iteration: 7; x = [1.1023061235273899, 0.8208131418981773, 0.6459348020928561, 0.38659228471586093, 1.22633417599
12314, 0.723236418901717, 0.6840974639617289, 1.4235064665940833, 0.9893506707939286]; residual = 0.0907379190976
6897
Iteration: 8; x = [1.1086726197534653, 0.8378618500417478, 0.6716432961391207, 0.4387478729269256, 1.206610921616
1214, 0.7406657119597907, 0.7123109984802011, 1.3815510439987146, 0.9876354900608108]; residual = 0.1628759736648
7355
Iteration: 9; x = [0.9999999999999895, 0.9999999999999929, 0.9999999999999889, 0.9999999999999944, 1.000000000000
011, 0.9999999999999745, 1.0000000000000175, 1.000000000000013, 0.9999999999999826]; residual = 1.051337881839387
6e-12
Firstly, we could observe that the subroutine start to converge from iteration 9, as the residal r 8 = 0.581 decreases to
r9 = 1.64e − 6 . As the CG method pick eigen-orthogonal search directions move to get closer to the solution at that direction,
after 9 iterations, the method has searched for 9 directions - the number is equal to the usual number of the eigenvectors of A.

We will quickly confirm if the change Δx after the kth iteration for the first 9 iteration is orthogonal to x with the snippet
k k

below:

In [ ]:
check_orth = [] # dot product of Delta x_k and x_k
for i in range(0, 9):
check_orth.append(
dot([x[i+1][j] - x[i][j] for j in range(len(x[i]))], x[i]) )

print(check_orth)

[0.0, 0.1510118739182188, 0.009093987297143058, 0.004522607268372308, 0.006203029905614638, 0.0002669372207960698


7, 0.0007352339432302895, 0.004078865917930044, 0.11493821419029535]

Task 2.2
Task 2.2.1
We use the function LU_sys_solver to solve for the solution of two prolems Ax 2 = b2 and Ax 3 = b3 . We also find the error
vectors of the two solutions x and x by comparing with the true solution α and α . The programs and results are shown
2 3 1 2

below:

In [ ]:
# Solving for Ax2 = b2
x2 = LU_sys_solver(L, U, b2)

print("x2: ", x2)


print("Error Vector: ", [x2[i] - alpha2[i] for i in range(len(x1))])

x2: [10000000000.00002, 9999999999.999958, 9999999999.999935, 9999999999.999878, 10000000000.00004, 9999999999.9


9994, 9999999999.99994, 10000000000.000088, 10000000000.000006]
Error Vector: [1.9073486328125e-05, -4.1961669921875e-05, -6.4849853515625e-05, -0.0001220703125, 4.005432128906
25e-05, -5.91278076171875e-05, -5.91278076171875e-05, 8.7738037109375e-05, 5.7220458984375e-06]

In [ ]:
# Solving for Ax3 = b3
x3 = LU_sys_solver(L, U, b3)

print("x3: ", x3)


print("Error Vector: ", [x3[i] - alpha3[i] for i in range(len(x1))])

x3: [1.0000000000000033e-10, 9.999999999999937e-11, 9.999999999999888e-11, 9.999999999999796e-11, 1.000000000000


007e-10, 9.999999999999906e-11, 9.999999999999894e-11, 1.0000000000000146e-10, 1e-10]
Error Vector: [3.2311742677852644e-25, -6.333101564859118e-25, -1.124448645189272e-24, -2.042102137240287e-24,
6.979336418416171e-25, -9.435028861932972e-25, -1.0598251598335667e-24, 1.4604907690389395e-24, 0.0]

Task 2.2.2
We solve the problem Ax 2 = b2 and Ax 3 = b3 with iteration methods as follow:

In [ ]:
# Solve Ax2 = b2

jacobi_result2 = jacobi(A, b2, alpha2)


ga_se_result2 = ga_se(A, b2, alpha2)
sor_result2 = sor(A, b2, 1.5, alpha2)

Iter count: 10000


Solution: [10000000000.000086, 9999999999.999847, 9999999999.999702, 9999999999.99948, 10000000000.00019, 999999
9999.99976, 9999999999.999727, 10000000000.000364, 9999999999.999994]
Iter count: 10000
Solution: [10000000000.000086, 9999999999.999847, 9999999999.999702, 9999999999.99948, 10000000000.00019, 999999
9999.99976, 9999999999.999727, 10000000000.000364, 9999999999.999994]

Iter count: 10000


Solution: [9999999999.999943, 10000000000.0001, 10000000000.000195, 10000000000.000341, 9999999999.999874, 10000
000000.00016, 10000000000.00018, 9999999999.99976, 10000000000.000004]

With true solution being much larger, the stopping condition before could not be satisfied. The residal norm and error norm still
follow the same trend, as the error norm is always larger than the residal norm.

To have the methods converge with the same number of iterations as when the true solution is α , we increase the EPS with the
1

same magnitude that the true solution increases.

In [ ]:
jacobi_result2 = jacobi(A, b2, alpha2, EPS=1)
ga_se_result2 = ga_se(A, b2, alpha2, EPS=1)
sor_result2 = sor(A, b2, 1.5, alpha2, EPS=1)

Iter count: 3295


Solution: [10000000000.923477, 9999999998.357496, 9999999996.7859, 9999999994.385899, 10000000002.06063, 9999999
997.407824, 9999999997.076506, 10000000003.888924, 9999999999.928898]

Iter count: 3026


Solution: [10000000000.922327, 9999999998.359539, 9999999996.789898, 9999999994.392883, 10000000002.058067, 9999
999997.411047, 9999999997.080141, 10000000003.884087, 9999999999.928986]

Iter count: 1116


Solution: [9999999999.477234, 10000000000.91211, 10000000001.777122, 10000000003.088257, 9999999998.876654, 1000
0000001.42308, 10000000001.593527, 9999999997.878736, 10000000000.039087]

In [ ]:
# Solve Ax2 = b2

jacobi_result3 = jacobi(A, b3, alpha3)


ga_se_result3 = ga_se(A, b3, alpha3)
sor_result3 = sor(A, b3, 1.5, alpha3)

Iter count: 5
Solution: [1.1372609791556192e-10, 7.106843444617866e-11, 4.301136802719157e-11, -2.086449420041737e-12, 1.38044
3040580933e-10, 5.225468869901263e-11, 4.6143830511130064e-11, 1.7114612293138368e-10, 9.932520798545972e-11]

Iter count: 5
Solution: [1.0438763710531638e-10, 9.760438668771425e-11, 8.709176688022779e-11, 8.252579887288617e-11, 1.097842
0847132347e-10, 9.37897809163082e-11, 8.964807713650862e-11, 1.1111487993368873e-10, 9.701512732007388e-11]
Iter count: 9
Solution: [8.55424714210626e-11, 1.2660170589580515e-10, 1.5308160895207729e-10, 1.9145600312007618e-10, 6.66175
8780508508e-11, 1.4231453750933576e-10, 1.473041747832121e-10, 3.682827764786573e-11, 1.0158445822959387e-10]

With true solution becoming much smaller, the routine quickly reach the stopping condition, however the error is very high. As
there are only a few iteration, not much could be observed about the change of residal and error norm.

To have the methods converge with the same number of iterations as when the true solution is α and decrease the error, we
1

decrease the EPS by the ratio of α and α .


1 3

In [ ]:
jacobi_result3 = jacobi(A, b3, alpha3, EPS=1e-20)
ga_se_result3 = ga_se(A, b3, alpha3, EPS=1e-20)
sor_result3 = sor(A, b3, 1.5, alpha3, EPS=1e-20)

Iter count: 3295


Solution: [1.0000000000923471e-10, 9.999999998357505e-11, 9.999999996785919e-11, 9.99999999438593e-11, 1.0000000
002060622e-10, 9.999999997407835e-11, 9.999999997076519e-11, 1.0000000003888906e-10, 9.999999999928899e-11]

Iter count: 3026


Solution: [1.0000000000922325e-10, 9.999999998359546e-11, 9.99999999678991e-11, 9.999999994392904e-11, 1.0000000
002058062e-10, 9.999999997411057e-11, 9.99999999708015e-11, 1.0000000003884073e-10, 9.999999999928986e-11]
Iter count: 1116
Solution: [9.999999999477239e-11, 1.0000000000912101e-10, 1.0000000001777105e-10, 1.000000000308823e-10, 9.99999
9998876665e-11, 1.0000000001423078e-10, 1.0000000001593516e-10, 9.999999997878755e-11, 1.0000000000039089e-10]

Task 2.2.3
From the element-based equation of the three iteration methods, we speculate that the magnitude of solution could be
approximated by the value of b and a . To minimize error, we want to account for the element that has the smallest magnitude.
ii

Therefore, we would set the tolerance as follow:

min(b)
EPS0
max(aii )

, whereas EPS is the default epsilon. Apply this to the solve Ax


0 2 = b2 and Ax 3 = b3 as follow:

In [ ]:
# Solve Ax2 = b2

D = [0.0]*len(A)
for i in range(len(A)):
D[i] = A[i][i]

EPS = min(b2) / max(D) * 1e-10

jacobi_result3 = jacobi(A, b2, alpha2, EPS=EPS)


ga_se_result3 = ga_se(A, b2, alpha2, EPS=EPS)
sor_result3 = sor(A, b2, 1.5, alpha2, EPS=EPS)

Iter count: 3447


Solution: [10000000000.344757, 9999999999.386812, 9999999998.800098, 9999999997.904118, 10000000000.769283, 9999
999999.032276, 9999999998.908588, 10000000001.451836, 9999999999.973455]
Iter count: 3177
Solution: [10000000000.346567, 9999999999.383596, 9999999998.7938, 9999999997.893122, 10000000000.773323, 999999
9999.027199, 9999999998.90286, 10000000001.45945, 9999999999.973316]

Iter count: 1165


Solution: [9999999999.80096, 10000000000.347282, 10000000000.676626, 10000000001.175837, 9999999999.572296, 1000
0000000.541832, 10000000000.606726, 9999999999.19234, 10000000000.01488]

In [ ]:
# Solve Ax3 = b3

D = [0.0]*len(A)
for i in range(len(A)):
D[i] = A[i][i]

EPS = min(b3) / max(D) * 1e-10

jacobi_result2 = jacobi(A, b3, alpha3, EPS=EPS)


ga_se_result2 = ga_se(A, b3, alpha3, EPS=EPS)
sor_result2 = sor(A, b3, 1.5, alpha3, EPS=EPS)

Iter count: 3447


Solution: [1.0000000000344754e-10, 9.999999999386823e-11, 9.999999998800115e-11, 9.999999997904146e-11, 1.000000
0000769277e-10, 9.999999999032291e-11, 9.999999998908603e-11, 1.0000000001451811e-10, 9.999999999973456e-11]
Iter count: 3177
Solution: [1.0000000000346568e-10, 9.999999999383594e-11, 9.999999998793798e-11, 9.999999997893112e-11, 1.000000
0000773325e-10, 9.999999999027196e-11, 9.999999998902856e-11, 1.0000000001459456e-10, 9.999999999973315e-11]

Iter count: 1165


Solution: [9.999999999800956e-11, 1.0000000000347286e-10, 1.000000000067664e-10, 1.0000000001175849e-10, 9.99999
9999572292e-11, 1.0000000000541839e-10, 1.0000000000606736e-10, 9.999999999192333e-11, 1.0000000000014884e-10]

Task 2.3
Task 2.3.1
Consider an n × n matrix A that has n linearly independent real eigenvalues such that |λ 1| ≥ |λ2 | ≥ ⋯ ≥ |λn | and
corresponding eigenvector v .
i

Let x be an arbitrary vector with n elements, we could write:


0

Ax0 = c1 Av1 + c2 Av2 + ⋯ + cn Avn

= c1 λ1 v 1 + c2 λ2 v 2 + ⋯ + cn λn v n

c2 λ2 cn λn
= c1 λ1 [v1 + v2 + ⋯ + v n ] = c1 λ1 x 1
c1 λ1 c1 λ1
And, similarly:

2 2
c2 λ2 cn λn
Ax1 = λ1 v1 + v2 + ⋯ + vn
c1 λ1 c1 λ1
2
2
c2 λ2 cn λn
= λ1 [v1 + v2 + ⋯ + v n ] = λ1 x2
c1 λ2 c1 λ
2
1 1

As λ is the largest eigenvalue, therefore, the ratio for all i . Thus when we repeat multiplying A for sufficiently large
λi
1 < 1 > 1
λ1

amount of times, all the terms that contain will approach zero. Then, the equation could be simplified to be:
λi

λ1

Axk = λ1 v1

To avoid large value, we also need to apply normalization after each iteration, which is usually done by factoring out the largest
element in the vector, resulting in the largest element in the vector becoming 1. As a result, after convergence, the factoring out
number will equal the largest eigenvalue and the resulting vector is the corresponding eigenvector.

We will set the stopping condition to assess convergence to be the norm of the residual vector being sufficiently small.

Based on the explaination above, we will implement the power method as follow:

In [ ]:
def power(A, N = 1000, e = 1e-10):
x = [1.0]*len(A)
m = 0
for iter in range(N):
x = mul_mat_vec(A, x)
m_old = m
m = max([comp for comp in x], key=abs)
x = [comp * 1.0/m for comp in x]

if abs(m_old - m) < e:
break

return m, x, iter

ma1, xa1, itera1 = power(A)

print("Largest Eigenvalue: {} after {} iter".format( ma1, itera1))

Largest Eigenvalue: 26.628503774804813 after 119 iter

Because the eigenvalues of A −1


are reciprocals of the eigenvalues of A, we can find the smallest eigenvalue of A by finding the
largest eigenvalue λ ′
max = 1/λmin of A −1
. Thus, in the inverse power method, instead of multiplying A as implemented in the
power method, we will repeatingly multiply A −1
.

k+1 −1 k k+1 k
x = A x ⟺ Ax = x

As a replacement to multiple the inverse matrix, we will solve the linear equation Ax k+1
= x
k
. The inverse power method is
implemented and applied as follow:

In [ ]:
def inverse_power(L, U, N = 1000, e = 1e-10):
x = [1.0]*len(L)
m = 0
for iter in range(N):
x = LU_sys_solver(L, U, x)
m_old = m
m = max([comp for comp in x], key=abs)
x = [comp * 1.0/m for comp in x]

if abs(m_old - m) < e:
break

return m, x, iter

N = 80
La, Ua = crout(A)
ma2, xa2, itera2 = inverse_power(La, Ua, N)
print("Smallest Eigenvalue: {} after {} iter".format(1/ma2, itera2))
print("Condition Number:", ma1*ma2)

Smallest Eigenvalue: 0.03903066558111274 after 7 iter


Condition Number: 682.245700357479
Thus, we found the largest absolute value eigenvalue to be 26.628500847635596 and smallest absolute value eigenvalue to be
0.03903066558112205. The condition number of the system is found to be: 682.

Task 2.3.2
The routine gen_sym_mat generate a symmetric matrix with n rows and n columns and elements ranging from 0 to 100. We will
create the symmetrix matrix B with the routine.

In [ ]:
import random

def gen_sym_mat(n):
AS = [[0.0] * n for r in range(n)]

for r in range(n):
for c in range(r, n):
AS[r][c] = random.randint(0,100)
AS[c][r] = AS[r][c]

return AS

B = gen_sym_mat(6)
B

[[45, 18, 75, 86, 46, 41],


Out[ ]:
[18, 72, 41, 35, 24, 57],
[75, 41, 9, 13, 60, 85],
[86, 35, 13, 98, 98, 44],
[46, 24, 60, 98, 59, 14],
[41, 57, 85, 44, 14, 20]]
We find the condition number of B as follow:

In [ ]:
mb1, xb1, iterb1 = power(B, N)
Lb, Ub = crout(B)
mb2, xb2, iterb2 = inverse_power(Lb, Ub, N)

print("Largest Eigenvalue: {} after {} iter".format( mb1, iterb1))


print("Smallest Eigenvalue: {} after {} iter".format( 1/mb2, iterb2))
print("Condition Number:", mb1*mb2)

Largest Eigenvalue: 303.84838114316045 after 25 iter


Smallest Eigenvalue: 3.7584224809688025 after 9 iter
Condition Number: 80.84465827929965

Task 2.3.3
In a iteration k of the QR method to find eigenvalues of a matrix A, we first use Gram Schmidt to find the QR decomposition of
matrix A , getting Q - an orthogonal matrix and R - an upper triangular matrix. Then we fine the matrix A
k k k k+1
= R k Qk and
continue to the next iteration. As k increases, the matrix A will finally converge to an upper triangular matr form, the diagonal
i

entries of such matrix will also be its eigenvalues.

However, as

−1 −1
Ak+1 = Rk Qk = Q Qk R k Qk = Q A k Qk
k k

Ak and A k+1
, and consequently all A are similar and have similar eigenvalues.
i

We will implement Gram Schmidt Method to find the QR decomposition of A and find all eigenvalues of A. In this section we will
the class numpy.array for faster acces to the column vector.
In [ ]:
import numpy as np
def gram_schmidt(A):

n, m = A.shape # get the shape of A

Q = np.array([[0.0] * n for i in range(n)]) # initialize matrix Q


u = np.array([[0.0] * n for i in range(n)]) # initialize matrix u

u[:, 0] = A[:, 0]
Q[:, 0] = u[:, 0] / norm(u[:, 0])

for i in range(1, n):

u[:, i] = A[:, i]
for j in range(i):
u[:, i] -= dot(A[:, i], Q[:, j]) * Q[:, j] # get each u vector

Q[:, i] = u[:, i] / norm(u[:, i]) # compute each e vetor

R = np.zeros((n, m))
for i in range(n):
for j in range(i, m):
R[i, j] = dot(A[:, j] , Q[:, i])

return Q, R

A_ = A
for i in range(40):
q, r = gram_schmidt(np.array(A_))
A_ = mul_mat_mat(r, q)

print("Eigenvalues: ")
for i in range(len(A_)):
print(A_[i][i])

Eigenvalues:
26.628501077264783
22.324107306017662
20.498455520616453
16.56935069903095
13.5118585429233
11.736032639984458
5.712657306865459
2.9800062417161004
0.03903066558110365

You might also like