Writing Your PhD Thesis in LaTeX2e Using the CUED Template
Krishna Kumar
Department of Engineering
University of Cambridge
Contents

1 Getting started
  1.1 Steady-State
    1.1.1 Discretization of the Laplacian Operator
    1.1.2 Error Analysis
  1.2 Iterative Methods
    1.2.1 GMRES
  1.3 Conclusion
2 My second chapter
  2.1 Non Steady-State
  2.2 Crank-Nicolson
Chapter 1
Getting started
where we have considered the diffusion coefficients constant and the source time independent. The equation will be considered in polar coordinates, so we will have:

$$u_t = k_r \left( u_{rr} + \frac{1}{r} u_r \right) + k_\theta \, \frac{1}{r^2} u_{\theta\theta} + f(r, \theta)$$

We chose Ω = [0, 1] × [0, 2π] as domain and f(r, θ) = sin(r cos(θ)). The objective of this work is the analysis and numerical solution of this equation. The study is divided into two parts.
1.1 Steady-State
Since we are interested in the stationary case, we set u_t = 0. Choosing k_r = k_θ = 1 we obtain a 2D Poisson problem, whose exact solution is sin(r cos(θ)). We will have the following:

$$u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} = -f(r, \theta)$$
The evaluation of the exact solution on ∂Ω gives us the boundary condition necessary for the uniqueness of the solution:

$$u(r, \theta) = \sin(r \cos(\theta)), \qquad (r, \theta) \in \partial\Omega$$
What is considered:
- the problem discretization using the 5-point stencil and the analysis of the matrix A deriving from the discretization;
- the study of the error made with respect to the exact solution for different values of the discretization step;
- the solution with two different iterative methods, with and without preconditioner, and the comparison with respect to time and number of iterations necessary to reach a prescribed precision.
$$r_i = \left( i - \tfrac{1}{2} \right) \Delta r, \qquad \theta_j = (j - 1)\, \Delta\theta$$

where Δr = 2/(2N+1), Δθ = 2π/M and i = 1, 2, ..., N; j = 1, 2, ..., M+1. Note that, by the choice of the radial mesh width, the boundary values are defined on the grid points. Let the discrete values be denoted by u_{ij} ≈ u(r_i, θ_j).
Among the above representations, the boundary values are given by u_{N+1,j} = g_j, and u_{i,0} = u_{i,M}, u_{i,1} = u_{i,M+1}, because u is 2π-periodic in θ.
At i = 1 we have λ_1 = 1, so the coefficient (1 − λ_1) multiplying the fictitious value u_{0,j} vanishes and no extra condition at the origin is needed.
Let us order the unknowns u_{ij} by first grouping those on the same ray, then moving counterclockwise to cover the whole domain. Thus, the unknown vector v is defined by
$$v = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_M \end{pmatrix}, \qquad u_j = \begin{pmatrix} u_{1j} \\ u_{2j} \\ \vdots \\ u_{Nj} \end{pmatrix}$$
The remaining problem is to solve a large sparse linear system Av = b, where b is the known vector that accounts for the values f_{ij} and the boundary data, and A is an NM × NM matrix that can be written as
$$A = \begin{pmatrix}
T - 2D & D & & D \\
D & \ddots & \ddots & \\
& \ddots & \ddots & D \\
D & & D & T - 2D
\end{pmatrix} \qquad (1.1)$$
where D = diag(β_1, β_2, ..., β_N) with

$$\beta_i = \frac{1}{(i - 1/2)^2 \, (\Delta\theta)^2}, \qquad 1 \le i \le N,$$

and

$$T = \begin{pmatrix}
-2 & 1 + \lambda_1 & & \\
1 - \lambda_1 & \ddots & \ddots & \\
& \ddots & \ddots & 1 + \lambda_{N-1} \\
& & 1 - \lambda_N & -2
\end{pmatrix} \qquad (1.2)$$

with λ_i = 1/(2(i − 1/2)), 1 ≤ i ≤ N. The known vector b is defined by
$$b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{pmatrix}, \qquad
b_j = \begin{pmatrix} -(\Delta r)^2 f_{1j} \\ \vdots \\ -(\Delta r)^2 f_{N-1,j} \\ -(\Delta r)^2 f_{Nj} - (1 + \lambda_N)\, g_j \end{pmatrix}$$
The linear system so obtained is sparse. Every equation involves at most 5 unknowns, so every row of the matrix A has at most 5 elements different from zero. To exploit the sparsity I will use MATLAB's commands for sparse matrices, which avoid storing the null elements.
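As a small illustration of sparse storage (sketched here in Python/SciPy, while the report itself uses MATLAB; the size n is arbitrary), a tridiagonal matrix of order n stores only its 3n − 2 nonzeros instead of n² entries:

```python
import numpy as np
import scipy.sparse as sp

n = 1000
# tridiagonal matrix held in sparse (CSR) format: only nonzeros are stored
T = sp.diags([np.ones(n - 1), -2 * np.ones(n), np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")

print(T.nnz)   # 2998 stored entries, versus n^2 = 1,000,000 for a dense matrix
```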
In order to construct the matrix A the following code was used:

I = speye(M);                 % M-by-M identity
I2 = ones(N,1);
D = sparse(diag(beta));       % beta(i) = 1/((i-1/2)^2*(dtheta)^2)
d = ones(M,1);
T = spdiags([1-[lambda(2:end);0], -2*I2, 1+[0;lambda(1:end-1)]], -1:1, N, N);
Ap = spdiags([d,d,d,d], [1,-1,M-1,-(M-1)], M, M);   % periodic coupling in theta
A = kron(I,T) - 2*kron(I,D) + kron(Ap,D);
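For reference, the same construction can be sketched in Python with SciPy (an illustrative translation, not part of the original report; beta and lambda are as defined above, and M = 2N as in the experiments):

```python
import numpy as np
import scipy.sparse as sp

def build_A(N, M):
    """Assemble the NM x NM matrix A = kron(I,T) - 2*kron(I,D) + kron(Ap,D)."""
    dtheta = 2 * np.pi / M
    i = np.arange(1, N + 1)
    beta = 1.0 / ((i - 0.5) ** 2 * dtheta ** 2)   # angular weights beta_i
    lam = 1.0 / (2 * (i - 0.5))                    # radial weights lambda_i
    # tridiagonal radial block T (eq. 1.2)
    T = sp.diags([1 - lam[1:], -2 * np.ones(N), 1 + lam[:-1]], [-1, 0, 1])
    D = sp.diags(beta)
    # Ap: periodic nearest-neighbour coupling in theta
    ones = np.ones(M - 1)
    Ap = sp.diags([ones, ones, [1.0], [1.0]],
                  [1, -1, M - 1, -(M - 1)], shape=(M, M))
    I = sp.identity(M)
    return (sp.kron(I, T) - 2 * sp.kron(I, D) + sp.kron(Ap, D)).tocsr()

A = build_A(16, 32)   # the case shown in the spy(A) figure
```

Every row of the assembled matrix has at most 5 nonzeros, matching the 5-point stencil.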
Using the command spy(A) we obtain the sparsity patterns of A and T, shown in Fig. 1.1.

Fig. 1.1 Sparsity pattern of A and T for N = 16, M = 32; nz is the number of nonzeros of the matrices.
We can find the condition number of A using condest(A). Below are the condition numbers for different values of N, Δr and Δθ (in this work M is taken as 2N).
N       Δr       Δθ       K(A)
2^2     0.2222   0.7854   143.0244
2^3     0.1176   0.3927   1.9156e+03
2^4     0.0606   0.1963   2.8402e+04
2^5     0.0308   0.0982   4.3896e+05
2^6     0.0155   0.0491   6.9086e+06
2^7     0.0078   0.0245   1.0965e+08
2^8     0.0039   0.0123   1.7475e+09
2^9     0.0020   0.0061   2.7906e+10
2^10    0.0010   0.0031   4.4605e+11
As we can see, K(A) increases as the steps Δr and Δθ decrease, by roughly a factor of 16 per refinement, consistent with K(A) = O(h⁻²). The system was first solved directly:

$$u = A^{-1} f$$
At this point I compared the computed vector with the vector of the exact solution, evaluated at the internal points of the grid, in order to calculate the global error (in max norm). Below is the evolution of the error for different values of N, Δr and Δθ.
N       Δr       Δθ       Error
2^2     0.2222   0.7854   0.0106
2^3     0.1176   0.3927   0.0023
2^4     0.0606   0.1963   5.5380e-04
2^5     0.0308   0.0982   1.3493e-04
2^6     0.0155   0.0491   3.3325e-05
2^7     0.0078   0.0245   8.2860e-06
2^8     0.0039   0.0123   2.0655e-06
2^9     0.0020   0.0061   5.1564e-07
2^10    0.0010   0.0031   1.2882e-07
We see a decrease of the error as the steps decrease: the scheme shows second-order convergence.
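The second-order behaviour can be checked numerically. The following Python/SciPy sketch (an illustrative translation of the MATLAB experiment, not the report's own code) assembles the scheme for f = sin(r cos(θ)), solves it directly, and returns the max-norm error:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_error(N):
    """Solve the polar 5-point scheme and return the max-norm global error."""
    M = 2 * N
    dr, dth = 2.0 / (2 * N + 1), 2 * np.pi / M
    i = np.arange(1, N + 1)
    r, th = (i - 0.5) * dr, np.arange(M) * dth
    beta = 1.0 / ((i - 0.5) ** 2 * dth ** 2)
    lam = 1.0 / (2 * (i - 0.5))
    # radial block with the -2*beta angular diagonal folded in (T - 2D)
    T = sp.diags([1 - lam[1:], -2 * np.ones(N) - 2 * beta, 1 + lam[:-1]],
                 [-1, 0, 1])
    D = sp.diags(beta)
    ones = np.ones(M - 1)
    Ap = sp.diags([ones, ones, [1.0], [1.0]],
                  [1, -1, M - 1, -(M - 1)], shape=(M, M))
    A = (sp.kron(sp.identity(M), T) + sp.kron(Ap, D)).tocsc()
    R, TH = np.meshgrid(r, th)                # row j holds the ray theta_j
    exact = np.sin(R * np.cos(TH))
    b = -dr ** 2 * np.sin(R * np.cos(TH))     # -(dr)^2 f_ij
    g = np.sin(np.cos(th))                    # boundary data at r = 1
    b[:, -1] -= (1 + lam[-1]) * g             # move boundary term to the rhs
    v = spla.spsolve(A, b.ravel())
    return np.max(np.abs(v - exact.ravel()))

e1, e2 = poisson_error(16), poisson_error(32)
print(e1, e2, e1 / e2)   # the ratio of successive errors approaches 4
```

Halving the step reduces the error by a factor close to 4, i.e. second-order convergence.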
1.2 Iterative Methods

In practical use the method stops when it reaches a prescribed number of iterations, or when some quantities (depending on the chosen method) fall below a certain requested threshold.
In this case GMRES (Generalized Minimal Residual) is used.
Preconditioners are used to improve these methods: they transform the system into a mathematically equivalent one that is more efficient to solve.
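For instance, with a left preconditioner M ≈ A that is cheap to apply, one solves the equivalent system

$$M^{-1} A \, v = M^{-1} b,$$

whose matrix M^{-1}A is closer to the identity, and hence better conditioned, than A.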
1.2.1 GMRES
GMRES at every iteration determines the approximation of the solution such that the two-norm of the residual r = b − Ax is minimal. However, the number of necessary iterations depends on the condition number of the matrix A, which as we have seen is large. A way to reduce the computational cost is to interrupt the cycle after a prescribed number of iterations m, impose r_0 = r_m and x_0 = x_m, and restart the cycle. In this case the method is called restarted GMRES.
[U1,FLAG1,RELRES1,iter1,RESVEC1]=gmres(A,F,M,1e-6,M*N)
The gmres command takes as input, in addition to the matrix A and the vector F, the number of iterations before the restart, the requested tolerance and the maximum number of restarts. It outputs the solution of the system, information on convergence, the relative residual (final residual with respect to the initial one), the number of restarts, the number of iterations of the last cycle, and the vector of the relative residuals at every step of the last cycle. To decrease the number of iterations I used the ILU and ILUT preconditioners (the latter with drop tolerances 0.01 and 0.001), which compute an incomplete LU factorization of A; the factors L and U were passed to gmres as the preconditioner M = LU.
GMRES ILU

setup.type = 'nofill';
[L,U] = ilu(A, setup);
[U2,FLAG2,RELRES2,iter2,RESVEC2] = gmres(A,F,M,1e-6,M*N,L,U);

GMRES ILUT (0.01)

setup.type = 'ilutp';
setup.droptol = 0.01;
[L,U] = ilu(A, setup);
[U3,FLAG3,RELRES3,iter3,RESVEC3] = gmres(A,F,M,1e-6,M*N,L,U);

GMRES ILUT (0.001)

setup.type = 'ilutp';
setup.droptol = 0.001;
[L,U] = ilu(A, setup);
[U4,FLAG4,RELRES4,iter4,RESVEC4] = gmres(A,F,M,1e-6,M*N,L,U);
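The effect of incomplete-LU preconditioning can be sketched in Python, where scipy.sparse.linalg.spilu plays the role of MATLAB's ilu (an illustrative experiment on a stand-in sparse matrix, not the report's code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# a sparse test system (2D 5-point Laplacian, standing in for the matrix A above)
n = 24
L1 = sp.diags([np.ones(n - 1), -2 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])
A = (sp.kron(sp.identity(n), L1) + sp.kron(L1, sp.identity(n))).tocsc()
b = np.ones(A.shape[0])

def run_gmres(M_op=None):
    """Restarted GMRES; the callback counts inner iterations."""
    its = [0]
    def cb(rk):
        its[0] += 1
    x, info = spla.gmres(A, b, restart=30, maxiter=300, M=M_op, callback=cb)
    return x, info, its[0]

x0, info0, it0 = run_gmres()                 # no preconditioner

# incomplete LU factorization used as preconditioner (counterpart of ilu/ilut)
ilu = spla.spilu(A, drop_tol=0.01)
M_op = spla.LinearOperator(A.shape, ilu.solve)
x1, info1, it1 = run_gmres(M_op)

print(it0, it1)   # the preconditioned run needs far fewer iterations
```

The same qualitative behaviour is observed in the MATLAB experiments above: the ILU-type preconditioners cut the iteration count dramatically.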
We conclude this section with a comparison between the methods used with respect to execution time (measured for N = 2^7). We notice that, as expected, plain GMRES is the slowest. ILUT(0.01) is faster than ILUT(0.001), even if the number of iterations of the former is higher (the difference, however, is negligible). Both are greatly superior to the other two methods.
1.3 Conclusion
In conclusion we solved the Poisson problem using the 5-point stencil and ILUT (0.01), for N = 2^9. The error obtained is 5.1564e-07 and the execution time is 3.4366 seconds.
Chapter 2

My second chapter
$$u(r, \theta, 0) = 1, \qquad (r, \theta) \in \Omega, \; t = 0$$
Differently from the steady-state case, where we imposed k_r = k_θ = 1, we will consider k_r and k_θ arbitrary, treating both the symmetric and the non-symmetric problem. This choice changes the composition of the matrices D = diag(k_θ β_1, k_θ β_2, ..., k_θ β_N) and T composing A:
$$T = \begin{pmatrix}
-2(k_r + k_\theta) & (1 + \lambda_1) k_r & & \\
(1 - \lambda_1) k_r & \ddots & \ddots & \\
& \ddots & \ddots & (1 + \lambda_{N-1}) k_r \\
& & (1 - \lambda_N) k_r & -2(k_r + k_\theta)
\end{pmatrix} \qquad (2.1)$$
Having time derivatives, in addition to the spatial ones, it is necessary to have a time discretization on top of the spatial one. The idea is to determine, starting from the initial data, the approximations at the successive times. Let u′(t) = f(u(t), t) with given initial condition, let k be the time step and U^n the approximation of the solution at time t_n (U^n ≈ u(t_n)). One possibility to determine the approximation at time t_{n+1} is to use the trapezoid method:

$$\frac{U^{n+1} - U^n}{k} = \frac{1}{2}\left( f(U^n) + f(U^{n+1}) \right)$$

In this case we have to find a way to merge the two discretizations, that is, a relation that links U^{n+1}_{ij} to the previously computed U^n_{ij}.
2.2 Crank-Nicolson
We can operate the spatial discretization using the 5-point stencil, as in the previous chapter. Discretizing time with the trapezoid method we obtain the two-dimensional version of the Crank-Nicolson method:

$$U^{n+1}_{ij} = \left( I - \tfrac{k}{2} \nabla^2_h \right)^{-1} \left[ \left( I + \tfrac{k}{2} \nabla^2_h \right) U^n_{ij} + kF \right]$$

where ∇²_h is equal to the matrix A of the previous chapter, with T updated in order to account for k_r and k_θ.
Also in this case we used iterative methods (GMRES, with and without preconditioners):
Method        k sym         k asym
GMRES         5.7089e-04    4.9668e-04
ILU           8.9029e-04    7.6740e-04
ILUT(0.01)    9.3025e-04    8.1998e-04
ILUT(0.001)   0.0011        7.6409e-04
This table shows execution times for the different methods, with symmetric diffusion coefficients (k_r = k_θ) or asymmetric ones (k_r = 0.1, k_θ = 0.4).
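A single step of the scheme above can be sketched as follows (Python/SciPy, illustrative only; here A stands for the discrete operator ∇²_h and F for the source term):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def crank_nicolson_step(A, U, F, k):
    """One Crank-Nicolson step for the semi-discrete system U' = A U + F."""
    n = A.shape[0]
    I = sp.identity(n, format="csc")
    rhs = (I + (k / 2) * A) @ U + k * F
    return spla.spsolve((I - (k / 2) * A).tocsc(), rhs)

# sanity check: a steady state (A U + F = 0) is left unchanged by the step
n = 50
A = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])
U = np.sin(np.linspace(0.0, np.pi, n))
F = -(A @ U)
U1 = crank_nicolson_step(A, U, F, k=0.1)
print(np.max(np.abs(U1 - U)))   # near machine precision
```

At each time step a sparse linear system with matrix I − (k/2)∇²_h must be solved, which is where the iterative methods of the previous chapter enter.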