
AMATH242/CS371 Spring 2024 Assignment III

Release date: Friday, June 28th


Due date: Monday, July 15th, 11:59 pm
Due date with your one time extension: Thursday, July 18th, 11:59 pm

• Questions below are either theoretical or computational. For the computational questions you may use any programming language you prefer except symbolic ones like Maple or Mathematica.
• This assignment should be submitted to Crowdmark. Besides your written/typed solutions you must also submit
your code. Please make sure your upload has the correct orientation and ordering.
• If you want to use your one time 3-day extension, please email the instructor before the original due date.
• You are not allowed to post this assignment on sites like stackexchange.com, chegg.com, etc. This will be checked.
Offenders will face penalties from the Math Faculty (e.g. suspension).

Total points: 20

1. (0 points) Please sign the Academic Integrity Checklist on the last page of this pdf. If you do not sign the Academic
Integrity Checklist you will receive a 0 for this assignment.

2. (Theoretical, 3 points) When applying Gauss-Seidel iteration to solve a linear system 𝐴𝑥⃗ = 𝑏⃗, and when 𝐴 is one
of the two matrices below, we expect the iteration to converge. You are asked to argue why it converges, using
two different arguments for the convergence of Gauss-Seidel iteration applied to 𝐴1 and to 𝐴2 . (You may use
software or a calculator to compute, for example, the inverse of a matrix, if needed.)

$$
A_1 = \begin{bmatrix}
 4 & -1 & -1 &  0 &  0 & -1 \\
-1 &  4 & -1 &  0 &  0 &  0 \\
-1 & -1 &  4 & -1 &  0 &  0 \\
 0 &  0 & -1 &  4 & -2 &  0 \\
 0 &  0 &  0 & -2 &  4 &  0 \\
-1 &  0 &  0 &  0 &  0 &  4
\end{bmatrix}, \qquad
A_2 = \begin{bmatrix}
 6 & -2 & -1 \\
-1 &  2 & -1 \\
-1 & -2 &  6
\end{bmatrix}.
$$

3. (Computational, 7 points) We will be implementing Jacobi and Gauss-Seidel methods in this question, along
with exploring the vectorization∗ technique to accelerate both iterations.
(1) First, implement the Jacobi and Gauss-Seidel methods based on their pseudocode (Algorithms 3.5 and 3.6 in [YW] Lec
12). This also means you are required NOT to use the matrix-based iteration. For example, for Jacobi's method,
you should NOT create the matrices 𝐷, 𝐿, 𝑈 from 𝐴 and write a one-line code for the following:

$$\vec{x}^{(\text{new})} = -D^{-1}(L + U)\,\vec{x}^{(\text{old})} + D^{-1}\vec{b}.$$

Instead, you should update the components of the vector with loops:

$$x_i^{(\text{new})} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j=1,\,j\neq i}^{n} a_{ij}\, x_j^{(\text{old})}\Bigr),$$
and this is exactly what Algorithm 3.5 is doing.
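As an illustration of the loop-based update, a Jacobi iteration in the spirit of Algorithm 3.5 might be sketched in Python/NumPy as below. The function name, the 1-D shape of b, and the choice to return both the final iterate and the iteration count are assumptions of this sketch, not requirements of the assignment:

```python
import numpy as np

def jacobi_itr(A, b, guess, tol):
    """Loop-based Jacobi iteration (sketch): no matrix splitting, explicit j-loop."""
    n = len(b)
    x = guess.astype(float).copy()
    r0 = np.linalg.norm(b - A @ x)          # initial residual, vector 2-norm
    k = 0
    while np.linalg.norm(b - A @ x) / r0 > tol:
        x_new = np.empty(n)
        for i in range(n):
            s = 0.0
            for j in range(n):              # the j-loop (vectorized in a later subquestion)
                if j != i:
                    s += A[i, j] * x[j]
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new                           # Jacobi: all components use the old vector
        k += 1
    return x, k
```

Note that `x_new` is kept separate from `x` inside the i-loop, since Jacobi must use only old components on the right-hand side.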


The following requirements apply to your implementation throughout this question:
∗ See the MATLAB article on vectorization here: https://www.mathworks.com/help/matlab/matlab_prog/vectorization.html

• Use vector 2-norm to evaluate the residuals;†
• Use relative tolerance $\tau_{\text{rel}} = 10^{-3}$, i.e., the stopping criterion should be $\|\vec{r}^{(k)}\| \,/\, \|\vec{r}^{(0)}\| \le 10^{-3}$;
• Use the zero vector (a vector with all its components being zeros) as the initial guess;
• Output the final iteration count when stopping criterion is met.
Apply your Jacobi and Gauss-Seidel codes to the two linear systems 𝐴𝑥⃗ = 𝑏⃗ and 𝐴bigger 𝑥⃗ = 𝑏⃗bigger , with their
data provided on LEARN (A.txt, b.txt, A_bigger.txt, b_bigger.txt‡ ).
Here is how you load matrix/vector data into MATLAB:
A = load('A.txt');
b = load('b.txt');

And here is how you load them into Python:


import numpy as np
A = np.loadtxt('A.txt')
b = np.loadtxt('b.txt', ndmin=2)

You are also asked to record the execution time of your code when solving each system. Here is how you do
it if, for example, your Jacobi iteration is written as a function jacobi_itr(A,b,guess,tol) in MATLAB§ :
% The execution of code between tic and toc is timed by MATLAB
% and the elapsed time is printed out (see the documentation for examples)
tic
number_itr = jacobi_itr(A, b, guess, tol); % the final iteration count is the output
toc

And here is how you do it in Python¶ :


import time

# ......
# other code here
# ......

start = time.time()
number_itr = jacobi_itr(A, b, guess, tol)
T = time.time() - start
print('Elapsed time is', T, 'seconds.')

Finally, tabulate your results as below and make a brief comment comparing your results between Jacobi
and Gauss-Seidel: which iteration is faster? And how much faster?
                     |          𝐴𝑥⃗ = 𝑏⃗                   |     𝐴bigger 𝑥⃗ = 𝑏⃗bigger
                     | Elapsed time (s) | Iteration count | Elapsed time (s) | Iteration count
Jacobi method        |                  |                 |                  |
Gauss-Seidel method  |                  |                 |                  |
(2) Show that we can rewrite the Jacobi iteration formula as follows:
$$x_i^{(\text{new})} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j=1,\,j\neq i}^{n} a_{ij}\, x_j^{(\text{old})}\Bigr) \;\Longrightarrow\; x_i^{(\text{new})} = x_i^{(\text{old})} + \frac{1}{a_{ii}}\Bigl(b_i - A_i\,\vec{x}^{(\text{old})}\Bigr),$$

where 𝐴𝑖 denotes the 𝑖-th row of 𝐴. (This is all you need to do for this subquestion. The remaining information
is for you to read and is useful for the next subquestion.)
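As an optional numerical sanity check (not part of the subquestion), the snippet below compares the two forms of the Jacobi update componentwise on a random matrix with a dominant diagonal; all names in it are illustrative:

```python
import numpy as np

# Illustrative check: the original and rewritten Jacobi updates agree componentwise.
rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)) + n * np.eye(n)   # boost the diagonal
b = rng.random(n)
x_old = rng.random(n)

# Original form: explicit sum over j != i
x1 = np.array([(b[i] - sum(A[i, j] * x_old[j] for j in range(n) if j != i)) / A[i, i]
               for i in range(n)])

# Rewritten form: x_i + (b_i - A_i x_old) / a_ii
x2 = np.array([x_old[i] + (b[i] - A[i, :] @ x_old) / A[i, i] for i in range(n)])

print(np.allclose(x1, x2))  # True
```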
For the Gauss-Seidel iteration, we have (see [YW] Lec 12, above Algorithm 3.6)
$$x_i^{(\text{new})} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij}\, x_j^{(\text{new})} - \sum_{j>i} a_{ij}\, x_j^{(\text{old})}\Bigr).$$
† This is the default when you use norm in MATLAB and numpy.linalg.norm in Python.
‡ The two systems result from discretizing the Poisson equation on a structured grid using finite difference method.
§ See MATLAB’s stopwatch timer: https://www.mathworks.com/help/matlab/ref/tic.html.
¶ See Python’s time module: https://docs.python.org/3/library/time.html#time.time.

Since Gauss-Seidel successively updates the components of the vector and hence works with only one vector, the
formula above becomes (dropping the superscripts "(new)" and "(old)"):

$$x_i = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j<i} a_{ij}\, x_j - \sum_{j>i} a_{ij}\, x_j\Bigr) = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j\neq i} a_{ij}\, x_j\Bigr).$$

This formula is precisely what Algorithm 3.6 implements. Now, with steps identical to those used to
rewrite Jacobi's formula, we can show:

$$x_i = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j=1,\,j\neq i}^{n} a_{ij}\, x_j\Bigr) \;\Longrightarrow\; x_i \leftarrow x_i + \frac{1}{a_{ii}}\bigl(b_i - A_i\,\vec{x}\bigr).$$

(3) In the previous subquestion, we obtained a vectorized form of the 𝑗-loop in Algorithms 3.5 and 3.6. In
MATLAB (and NumPy), converting loops that operate on scalars into their vectorized form operating on
matrices/vectors often leads to much faster code. This is because much of the development effort in MATLAB
(and NumPy) focuses on optimizing operations involving matrices and vectors. With the 𝑗-loop vectorized, the
pseudocode in Algorithm 3.5 becomes:
Initial guess: 𝑥⃗(0)
𝑘 = 0; 𝑟⃗(0) = 𝑏⃗ − 𝐴𝑥⃗(0)
while ‖⃗𝑟(𝑘) ‖ / ‖⃗𝑟(0) ‖ > 𝜏rel do
for 𝑖 = 1 ∶ 𝑛 do
𝑥𝑖(𝑘+1) = 𝑥𝑖(𝑘) + (𝑏𝑖 − 𝐴𝑖 𝑥⃗(𝑘) )/𝑎𝑖𝑖
end
𝑟⃗(𝑘+1) = 𝑏⃗ − 𝐴𝑥⃗(𝑘+1) ; 𝑘 = 𝑘 + 1
end
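A sketch of this vectorized-Jacobi pseudocode in Python/NumPy might look as follows; the interface (returning both the final iterate and the iteration count) is an assumption of the sketch:

```python
import numpy as np

def jacobi_itr_vec(A, b, guess, tol):
    """Jacobi with the j-loop vectorized as a row-vector product (sketch)."""
    n = len(b)
    x = guess.astype(float).copy()
    r0 = np.linalg.norm(b - A @ x)
    k = 0
    while np.linalg.norm(b - A @ x) / r0 > tol:
        x_new = np.empty(n)
        for i in range(n):
            # A_i x computed as one dot product instead of a scalar j-loop
            x_new[i] = x[i] + (b[i] - A[i, :] @ x) / A[i, i]
        x = x_new
        k += 1
    return x, k
```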

Similarly, the pseudocode in Algorithm 3.6 becomes:


Initial guess: 𝑥⃗
𝑟⃗(0) = 𝑏⃗ − 𝐴𝑥⃗
𝑟⃗ = 𝑟⃗(0)
while ‖⃗𝑟‖ / ‖⃗𝑟(0) ‖ > 𝜏rel do
for 𝑖 = 1 ∶ 𝑛 do
𝑥𝑖 = 𝑥𝑖 + (𝑏𝑖 − 𝐴𝑖 𝑥⃗)/𝑎𝑖𝑖
end
𝑟⃗ = 𝑏⃗ − 𝐴𝑥⃗
end
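Similarly, the vectorized Gauss-Seidel pseudocode might be sketched as below; note that, unlike Jacobi, the update works in place on a single vector, so already-updated components are used immediately:

```python
import numpy as np

def gauss_seidel_itr_vec(A, b, guess, tol):
    """Gauss-Seidel with the j-loop vectorized; updates x in place (sketch)."""
    x = guess.astype(float).copy()
    r0 = np.linalg.norm(b - A @ x)
    k = 0
    while np.linalg.norm(b - A @ x) / r0 > tol:
        for i in range(len(b)):
            # x already holds new entries for j < i and old entries for j > i
            x[i] = x[i] + (b[i] - A[i, :] @ x) / A[i, i]
        k += 1
    return x, k
```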

Now, modify your codes from the first subquestion and run them for the same tests. Tabulate your results
as below:
                     |          𝐴𝑥⃗ = 𝑏⃗                   |     𝐴bigger 𝑥⃗ = 𝑏⃗bigger
                     | Elapsed time (s) | Iteration count | Elapsed time (s) | Iteration count
Jacobi method        |                  |                 |                  |
Gauss-Seidel method  |                  |                 |                  |
The iteration counts should be the same as in the previous table (we didn’t change the iterations; we only
changed the way they are calculated). Make a brief observation on how much of an acceleration you gained
by vectorizing the 𝑗-loop.
4. (Theoretical, 4 points) In this question, we interpolate and approximate the natural logarithm function ln(𝑥).
When evaluating ln(𝑥), you may use a calculator. When presenting your final results, keep three significant digits.

(1) Find the quadratic interpolating polynomial of ln(𝑥) in the Lagrange form. The data points are (1, ln(1)),
(2, ln(2)), (3, ln(3)). Compute an approximation of ln(2.9) by evaluating the interpolating polynomial at
𝑥 = 2.9 and compute the relative error of the approximation.

(2) Find the Hermite interpolating polynomial of the function ln(𝑥). The 𝑥-values of the three data points are
𝑥 = 1, 2, 3 (same as the previous subquestion). Compute an approximation of ln(2.9) by evaluating the
Hermite interpolating polynomial at 𝑥 = 2.9 and compute the relative error of the approximation.
5. (Computational, 6 points) In this question, you're asked to interpolate the Runge function $f(x) = \dfrac{1}{1 + 25x^2}$ on
[−1, 1] with equidistant nodes and Chebyshev nodes, respectively:

• The 𝑛 + 1 equidistant nodes on [−1, 1]: $x_k = \frac{2k}{n} - 1$, with 𝑘 = 0, 1, 2, … , 𝑛.
• The 𝑛 + 1 Chebyshev nodes on (−1, 1): $x_k = \cos\bigl(\frac{2k+1}{2(n+1)}\,\pi\bigr)$, with 𝑘 = 0, 1, 2, … , 𝑛.
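For reference, the two node families could be generated as in the following NumPy sketch; the function names are illustrative, not prescribed by the assignment:

```python
import numpy as np

def equidistant_nodes(n):
    # x_k = 2k/n - 1, k = 0, 1, ..., n
    return 2 * np.arange(n + 1) / n - 1

def chebyshev_nodes(n):
    # x_k = cos((2k+1) / (2(n+1)) * pi), k = 0, 1, ..., n
    k = np.arange(n + 1)
    return np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
```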

(1) Given 𝑛 + 1 nodes $\{x_k\}_{k=0}^{n}$ (be they equidistant or Chebyshev), write down the Lagrange form of the ℙ𝑛 interpolating polynomial for the Runge function. You need to explicitly write out all the Lagrange polynomials, i.e.,
the 𝑙𝑘(𝑥)'s, by their definitions.
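A direct (unoptimized) evaluation of the Lagrange form could be sketched as follows; the function name and the vectorized evaluation points are assumptions of this sketch:

```python
import numpy as np

def lagrange_eval(nodes, fvals, x):
    """Evaluate the Lagrange-form interpolant at the points x (sketch).

    Uses l_k(x) = prod over j != k of (x - x_j) / (x_k - x_j).
    """
    x = np.atleast_1d(x).astype(float)
    p = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        lk = np.ones_like(x)                 # build l_k(x) factor by factor
        for j, xj in enumerate(nodes):
            if j != k:
                lk *= (x - xj) / (xk - xj)
        p += fvals[k] * lk
    return p
```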
(2) Implement the ℙ𝑛 interpolating polynomial in subquestion 1 with the following requirements:
• Your program should take as an input a set of 𝑛 + 1 nodes;
• For a given set of 𝑛 + 1 nodes, your program should evaluate the resulting interpolating polynomial at
400 equidistant nodes∥ in [−1, 1];
• Your program should also evaluate the Runge function at these 400 nodes;
• Your program should compute the errors between the interpolating polynomial and the Runge function
at these 400 nodes and find the maximum error (in terms of absolute value);
• Your program should generate a figure with two plots: one for the Runge function, the other for the interpolating polynomial. You may use the data you obtained on the 400 nodes for plotting as well. Here is
an example of how the figure should look:

[Figure: the Runge function and its interpolating polynomial plotted together on [−1, 1]]

Feed your program with equidistant and Chebyshev nodes for 𝑛 = 4, 8, 12. Submit the six plots and tabulate
the maximum errors as below (with three significant digits):
        | equidistant | Chebyshev
𝑛 = 4   |             |
𝑛 = 8   |             |
𝑛 = 12  |             |
Make the following observations on the tabulated results:
• Between the two choices of nodes, which one is better when it comes to the maximum error of the
interpolating polynomial?
• As 𝑛 grows, how does the maximum error behave for each set of nodes? Briefly explain why we would
expect the behavior for each case.

∥ This vector of nodes can be created using, for example, linspace(-1,1,400)

Academic Integrity Checklist
Please read the checklist below. Once you have verified these points, sign the checklist and submit it with your assignment or test.

• I understand that I am responsible for being honest and ethical in this assessment as per Policy 71 Student
Discipline∗∗ ;
• I have included in-text citations or footnotes when referencing words, ideas, or other intellectual property
from other sources in the completion of this assessment, if applicable;

• I have included a proper bibliography or works cited, which includes acknowledgement of all sources used to
complete this assessment, if applicable;
• The assessment was completed by my own efforts and I did not collaborate with any other person for ideas or
answers;
• This is the first time I have submitted this assessment (either partially or entirely) for academic evaluation.

Student Name (by signing or typing my name here I affirm my agreement to the foregoing statements)

Student I.D. Number

Date

∗∗ https://uwaterloo.ca/secretariat/policies-procedures-guidelines/policy-71
