Problem Set 1


Math 171 Problem Set I

Pambid Ericka F.
October 2016

Part I

Linear Systems
1
1.1

TRUE OR FALSE
FALSE

The accuracy of a computed solution depends on both the stability of the algorithm and the conditioning of the problem being solved. Accuracy is the closeness of a computed solution to the true solution of the problem. If a problem is ill-conditioned, the relative change in the solution is large compared to the relative change in the input, so even a stable algorithm applied to such a problem can produce output with a very large deviation from the true solution.

1.2

TRUE

cond(A) = ||A|| ||A^(-1)||                                                (1)

cond(A^(-1)) = ||A^(-1)|| ||(A^(-1))^(-1)|| = ||A^(-1)|| ||A|| = cond(A)  (2)

1.3

FALSE

A matrix with no zero entries may still be nearly singular, which can make Ax = b ill-conditioned. Take for example the Hilbert matrix: its condition number is very high even though it has no zero in its main diagonal (or anywhere else). Therefore, having no zero entry on the diagonal does not necessarily make Ax = b a well-conditioned problem.
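The Hilbert-matrix claim can be checked numerically. The report's computations are in Scilab; the following is a minimal Python sketch of the same check, assuming NumPy is available (the `hilbert` helper is my own, not from the report):

```python
import numpy as np

def hilbert(n):
    """Build the n x n Hilbert matrix: H[i, j] = 1 / (i + j + 1) (0-based indices)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H4 = hilbert(4)
print(np.all(H4 != 0))        # every entry is nonzero
print(np.linalg.cond(H4))     # yet the condition number is already on the order of 1e4
```

Even at n = 4, with no zero entries anywhere, the condition number is already in the tens of thousands.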

1.4

FALSE

Matrix multiplication is not commutative. Take the following counterexample: let

        [0 1 0]          [1 2 1]
    P = [1 0 0]      A = [3 8 1]
        [0 0 1]          [0 4 1]

Then

         [3 8 1]              [2 1 1]
    PA = [1 2 1]   =/=   AP = [8 3 1]
         [0 4 1]              [4 0 1]

1.5

FALSE

It may also fail for a nonsingular matrix when a zero entry appears in the pivot position on the main diagonal.
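For instance, a Python sketch of elimination without pivoting (the `naive_forward_elimination` helper and the 2 x 2 matrix are my own illustrative choices, not from the report) hitting a zero pivot on a perfectly nonsingular matrix:

```python
import numpy as np

def naive_forward_elimination(A):
    """Gaussian elimination without pivoting; raises on a zero pivot."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError(f"zero pivot at position {k}")
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return U

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # nonsingular (det = -1), but A[0, 0] = 0
try:
    naive_forward_elimination(A)
except ZeroDivisionError as e:
    print("elimination failed:", e)
```

A single row swap (i.e., pivoting) would fix this, which is exactly why partial pivoting is used in the next part.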

2
2.1

ILL-CONDITIONED or WELL-CONDITIONED
Ill-Conditioned

Figure 1: Test I 2a: The condition number is very high, as computed in Scilab

2.2

Well-Conditioned

Figure 2: Test I 2b: The condition number is 1, which is the minimal, and therefore optimal, condition number for a matrix.

2.3

Well-conditioned

Figure 3: Test I 2c: The condition number is 1, which is very small, making the matrix well-conditioned

3

Gaussian Elimination Method with Partial Pivoting

Matrix A is transformed into an upper triangular matrix whose entries stay more or less within the same range of values as the original matrix, except of course for the zeroed entries below the main diagonal. There is a slight error because one of the entries under the main diagonal did not zero out exactly, but it is almost zero.

Figure 4: The Code of Gaussian Elimination with partial Pivoting

Figure 5: The results of each iteration

Figure 6: The Transformed Matrix
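The procedure shown in Figures 4-6 can be sketched outside Scilab as well. The following is a minimal, illustrative Python reimplementation (the function name and the test matrix are my own choices, not the report's):

```python
import numpy as np

def gauss_partial_pivot(A):
    """Reduce A to upper triangular form, swapping in the largest
    remaining pivot (in absolute value) at each elimination step."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # row holding the largest pivot
        U[[k, p]] = U[[p, k]]                 # partial pivoting: swap rows k and p
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return U

A = np.array([[1.0, 2.0, 1.0], [3.0, 8.0, 1.0], [0.0, 4.0, 1.0]])
U = gauss_partial_pivot(A)
print(U)
```

As observed above, entries below the diagonal may come out as tiny nonzero values (around machine epsilon) rather than exact zeros, because (U[i,k]/U[k,k])*U[k,k] need not equal U[i,k] exactly in floating point.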

4
4.1

Hilbert Matrix
LU Factorization

This is the generalized code for finding the LU factorization of the Hilbert matrix; only the value of n needs to be changed. The following are the LU factorizations for n = 4, 8, 16. When checking whether the product LU really equals A, there is a very small discrepancy, up to about six decimal places, which

Figure 7: Scilab code for LU Factorization for a Hilbert Matrix

might be due to rounding or truncation errors.

Figure 8: LU Factorization of Hilbert Matrix for n=4

Figure 9: LU Factorization of a Hilbert Matrix with n=8.

4.2

Solving a Linear System

At n = 5, there are no significant digits anymore. As n increases, the condition number and the absolute error also increase, which means we are diverging from the true solution. We cannot use the residual vector as a measure of accuracy because the Hilbert matrix is ill-conditioned:

Figure 10: A 16 x 16 Hilbert Matrix

Figure 11: LU factorization of a 16 by 16 Hilbert Matrix

as n approaches infinity, the values of the entries approach zero, making the Hilbert matrix nearly singular.
The following is the Scilab code for solving the linear system:

n=4
//n=8
//n=16
A=zeros(n,n)
x=zeros(n,1)
t=ones(n,1)                  // the true solution, a vector of ones
// Build the n x n Hilbert matrix A(i,j) = 1/(i+j-1)
for i=1:n
    for j=1:n
        A(i,j)=1/(i+j-1)
    end
end
// LU factorization: U = B(n-1)*...*B(1)*A and L = D(1)*...*D(n-1),
// where each B(k) eliminates column k and D(k) is its inverse
U=A
L=eye(n,n)
for k=1:n-1
    B=eye(n,n)
    D=eye(n,n)
    for h=k:n-1
        B(h+1,k)=-U(h+1,k)/U(k,k)
        D(h+1,k)=-B(h+1,k)   // the multiplier, stored below the diagonal of L
    end
    U=B*U
    L=L*D                    // accumulate every D(k), including the last one
end
// Solve Ax=b with b=A*t using the LU factorization
b=A*t
// Forward substitution: solve Ly=b
y=zeros(n,1)
y(1)=b(1)/L(1,1)
for i=2:n
    m=0
    for j=1:i-1
        m=m+L(i,j)*y(j)
    end
    y(i)=(b(i)-m)/L(i,i)
end
// Back substitution: solve Ux=y
x(n)=y(n)/U(n,n)
for i=n-1:-1:1
    p=0
    for j=i+1:n
        p=p+U(i,j)*x(j)
    end
    x(i)=(y(i)-p)/U(i,i)
end
disp("The True Solution:")
disp(t)
disp("The Approximate Solution:")
disp(x)
disp("Condition Number:")
disp(cond(A))
disp("Absolute Error:")
er=abs(x-t)
disp(norm(er,%inf))
r=b-A*x
disp("Residual vector:")
disp(r)
disp("The norm of the residual vector:")
disp(norm(r,%inf))


Figure 12: The results for n=4


Figure 13: The Results for n=8


Figure 14: The Results for n=16
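The trend in Figures 12-14 (condition number and absolute error both growing with n) can be cross-checked with a short Python sketch using NumPy's built-in solver; the exact digits will differ from the Scilab runs, but the growth pattern is the same:

```python
import numpy as np

def hilbert(n):
    """n x n Hilbert matrix, 0-based: H[i, j] = 1 / (i + j + 1)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

for n in (4, 8, 16):
    A = hilbert(n)
    t = np.ones(n)                   # true solution
    b = A @ t
    x = np.linalg.solve(A, b)
    # condition number and infinity-norm absolute error both grow with n
    print(n, np.linalg.cond(A), np.max(np.abs(x - t)))
```

At n = 16 the condition number is near the reciprocal of machine epsilon, so essentially no digits of the solution can be trusted.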


Part II

Non-Linear Systems
5
5.1

TRUE OR FALSE
FALSE

A small residual guarantees accuracy only for a well-conditioned problem. For an ill-conditioned problem, the error in the solution can still be large even when the residual is small.
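A hypothetical two-by-two illustration in Python (the matrix and candidate vector are my own construction, chosen for the purpose): a candidate solution with a tiny residual that is nonetheless far from the true solution of an ill-conditioned system.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # cond(A) ~ 4e4: ill-conditioned
b = np.array([2.0, 2.0001])
x_true = np.array([1.0, 1.0])               # exact solution of Ax = b
x_hat = np.array([2.0, 0.0])                # a candidate "solution"

residual = b - A @ x_hat
error = x_hat - x_true
print(np.linalg.norm(residual, np.inf))     # 1e-4: tiny residual
print(np.linalg.norm(error, np.inf))        # 1.0: large error
```

The residual is four orders of magnitude smaller than the error, so judging x_hat by its residual alone would be badly misleading here.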

6
6.1

Convergence Rates and Constant C


r = 1, c = 0.5: Linear Convergence

||e_2|| / ||e_1|| = 1/2
||e_3|| / ||e_2|| = 1/2
||e_4|| / ||e_3|| = 1/2
||e_5|| / ||e_4|| = 1/2
||e_6|| / ||e_5|| = 1/2

Since the ratios are constant, we have r = 1 and c = 0.5 < 1, giving us a linear convergence.

6.2

r = 2, c = 1: Quadratic Convergence

||e_2|| / ||e_1|| = 1/4
||e_3|| / ||e_2|| = 1/16
||e_4|| / ||e_3|| = 1/256

For r = 1 we will not have a constant c. On the other hand, for r = 2:

||e_2|| / ||e_1||^2 = ||e_3|| / ||e_2||^2 = ||e_4|| / ||e_3||^2 = 1

Therefore r = 2 and c = 1, a quadratic convergence.
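The two ratio computations above can be reproduced exactly in Python. The error sequences below are my own reconstructions consistent with the tabulated ratios: e_k = (1/2)^k for the linear case, and e_{k+1} = e_k^2 with e_1 = 1/4 for the quadratic case.

```python
from fractions import Fraction

# Linear case: e_k = (1/2)^k, so e_{k+1} / e_k is the constant c = 1/2 (r = 1).
e_lin = [Fraction(1, 2) ** k for k in range(1, 7)]
lin_ratios = [e_lin[k + 1] / e_lin[k] for k in range(5)]
print(lin_ratios)        # five ratios, all equal to 1/2

# Quadratic case: e_{k+1} = e_k^2 with e_1 = 1/4.
e_quad = [Fraction(1, 4)]
for _ in range(3):
    e_quad.append(e_quad[-1] ** 2)
quad_ratios = [e_quad[k + 1] / e_quad[k] for k in range(3)]
print(quad_ratios)       # 1/4, 1/16, 1/256: not constant, so r = 1 fails
quad_c = [e_quad[k + 1] / e_quad[k] ** 2 for k in range(3)]
print(quad_c)            # all equal to 1: constant c = 1 for r = 2
```

Using exact rationals (`fractions.Fraction`) avoids any floating-point noise in the ratios.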

7
7.1

The Equivalent fixed point problems


Divergent

g_1(x) = (x^2 + 2) / 3
g_1'(x) = 2x / 3
g_1'(2) = 4/3 > 1

7.2

Convergent

g_2(x) = sqrt(3x - 2)
g_2'(x) = 3 / (2 sqrt(3x - 2))
g_2'(2) = 3/4 < 1

7.3

Convergent

g_3(x) = 3 - 2/x
g_3'(x) = 2 / x^2
g_3'(2) = 1/2 < 1

7.4

Convergent

g_4(x) = (x^2 - 2) / (2x - 3)
g_4'(x) = (2x^2 - 6x + 4) / (2x - 3)^2
g_4'(2) = 0 < 1
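The four schemes can also be compared empirically. The following Python sketch iterates each g from a starting point near the fixed point x* = 2 (starting value and step count are my own choices); the derivative tests above predict which iterations shrink the error:

```python
def fixed_point(g, x0, steps):
    """Iterate x_{k+1} = g(x_k) and return the final error |x - 2|."""
    x = x0
    for _ in range(steps):
        x = g(x)
    return abs(x - 2.0)

g1 = lambda x: (x * x + 2.0) / 3.0             # |g1'(2)| = 4/3 > 1: divergent
g2 = lambda x: (3.0 * x - 2.0) ** 0.5          # |g2'(2)| = 3/4 < 1: convergent
g3 = lambda x: 3.0 - 2.0 / x                   # |g3'(2)| = 1/2 < 1: convergent
g4 = lambda x: (x * x - 2.0) / (2.0 * x - 3.0) # g4'(2) = 0: fastest (Newton-like)

for name, g in [("g1", g1), ("g2", g2), ("g3", g3), ("g4", g4)]:
    print(name, fixed_point(g, 2.2, 10))
```

After ten iterations, g1's error has blown up, g2's and g3's have shrunk roughly like (3/4)^k and (1/2)^k respectively, and g4's is already near machine precision.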
