Writing your PhD thesis in LaTeX2e
Using the CUED template

Krishna Kumar

Department of Engineering
University of Cambridge

This dissertation is submitted for the degree of


Doctor of Philosophy

King's College, October 2017


Abstract

This is where you write your abstract ...


Table of contents

1 Getting started
  1.1 Steady-State
    1.1.1 Discretization of the Laplacian Operator
    1.1.2 Error Analysis
  1.2 Iterative Methods
    1.2.1 GMRES
  1.3 Conclusion

2 My second chapter
  2.1 Non Steady-State
  2.2 Crank-Nicolson
Chapter 1

Getting started

Let us consider the generic 2D Poisson-type equation:

u_t + k_x u_{xx} + k_y u_{yy} = f(x, y)

where the diffusion coefficients are taken constant and the source time-independent. The equation will be considered in polar coordinates, so we will have:

u_t + k_r\left(u_{rr} + \frac{1}{r} u_r\right) + k_\theta \frac{1}{r^2} u_{\theta\theta} = f(r, \theta)

We chose Ω = [0, 1] × [0, 2π] as domain and f(r, θ) = sin(r cos θ). The objective of this work is the analysis and numerical resolution of this equation.
The study is divided into two parts:

- the steady-state case, considering a Poisson problem in polar coordinates (k_r = k_θ = 1);

- the non-steady-state case, both symmetric (k_r = k_θ) and asymmetric (k_r ≠ k_θ).

1.1 Steady-State
Since we are interested in the stationary case, we set u_t = 0. Choosing k_r = k_θ = 1 we obtain a 2D Poisson problem, of which we know the exact solution sin(r cos θ). We will have the following:

u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2} u_{\theta\theta} = f(r, \theta),   (r, θ) ∈ Ω

u = sin(r cos θ),   (r, θ) ∈ Ω

The evaluation of the exact solution on ∂Ω gives the boundary condition needed for the uniqueness of the solution:

u = sin(r cos θ),   (r, θ) ∈ ∂Ω

What is considered:

- the discretization of the problem using the 5-point stencil and the analysis of the matrix A deriving from the discretization;

- the study of the error made with respect to the exact solution for different values of the discretization step;

- the solution with two different iterative methods, with and without preconditioner, and their comparison in terms of time and number of iterations needed to reach a prescribed precision.

1.1.1 Discretization of the Laplacian Operator


The equation, in polar coordinates, has an apparent singularity at the origin r = 0. It is important to realize that this singularity is due only to the representation of the governing equation in the polar coordinate system; the solution itself is in no way singular at the origin if f is smooth enough. It will be shown below that this singularity does not need special treatment.
We choose a grid whose points are half-integer in the radial direction and integer in the azimuthal direction, that is,

r_i = \left(i - \frac{1}{2}\right)\Delta r, \qquad \theta_j = (j - 1)\Delta\theta

where \Delta r = \frac{2}{2N+1}, \Delta\theta = \frac{2\pi}{M}, i = 1, 2, ..., N and j = 1, 2, ..., M+1. Note that, by this choice of the radial mesh width, the boundary values fall on grid points. Let the discrete values be denoted by u_{ij} = u(r_i, \theta_j), f_{ij} = f(r_i, \theta_j), and g_j = g(\theta_j). Using the centered difference method to discretize the equation, for i = 2, 3, ..., N and j = 1, 2, ..., M, we have

\frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{(\Delta r)^2} + \frac{1}{r_i}\,\frac{u_{i+1,j} - u_{i-1,j}}{2\Delta r} + \frac{1}{r_i^2}\,\frac{u_{i,j+1} - 2u_{ij} + u_{i,j-1}}{(\Delta\theta)^2} = f_{ij}

Among these relations, the boundary values are given by u_{N+1,j} = g_j, and u_{i,0} = u_{i,M}, u_{i,1} = u_{i,M+1}, because u is 2π-periodic in θ.
At i = 1 we have

\frac{u_{2,j} - 2u_{1,j} + u_{0,j}}{(\Delta r)^2} + \frac{1}{r_1}\,\frac{u_{2,j} - u_{0,j}}{2\Delta r} + \frac{1}{r_1^2}\,\frac{u_{1,j+1} - 2u_{1,j} + u_{1,j-1}}{(\Delta\theta)^2} = f_{1,j}

Because r_1 = \Delta r / 2, we immediately observe that the coefficient of u_{0,j} in the preceding equation is zero. It turns out that the scheme does not need any extrapolation for u_{0,j}, so no pole condition is needed.
This finite difference scheme can be represented by the 5-point stencil shown in the figure.
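As a concrete illustration, here is a minimal MATLAB sketch of the half-integer polar grid above and of the evaluation of f and of the boundary data on it. The variable and function names (ffun, gfun, R, TH) are illustrative and not taken from the original code; g is obtained from the exact solution at r = 1.

% Half-integer polar grid of Section 1.1.1 (illustrative sketch)
N = 16;  M = 2*N;                     % radial and azimuthal resolution (M = 2N as in the text)
dr     = 2/(2*N + 1);                 % radial step: r_{N+1} = (N + 1/2)*dr lies on the boundary r = 1
dtheta = 2*pi/M;                      % azimuthal step
r      = ((1:N)' - 0.5) * dr;         % r_i = (i - 1/2) dr, interior points only
theta  = ((1:M) - 1) * dtheta;        % theta_j = (j - 1) dtheta

ffun = @(rr, th) sin(rr .* cos(th));  % source term f(r, theta)
gfun = @(th) sin(cos(th));            % boundary data g(theta) = u(1, theta) = sin(cos(theta))

[R, TH] = ndgrid(r, theta);           % R(i,j) = r_i, TH(i,j) = theta_j
F = ffun(R, TH);                      % f_ij on the interior grid, one column per ray j
g = gfun(theta);                      % g_j on the boundary r = 1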

Let us order the unknowns u_{ij} by first grouping the points on the same ray, then moving counterclockwise to cover the whole domain. Thus, the unknown vector v is defined by

v = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_M \end{pmatrix}, \qquad u_j = \begin{pmatrix} u_{1j} \\ u_{2j} \\ \vdots \\ u_{Nj} \end{pmatrix}

The remaining problem is to solve a large sparse linear system Av = b, where b is the known vector that collects the values f_{ij} and the boundary data, and A is an NM × NM matrix that can be written as

A = \begin{pmatrix}
T - 2D & D      &        & D      \\
D      & \ddots & \ddots &        \\
       & \ddots & \ddots & D      \\
D      &        & D      & T - 2D
\end{pmatrix}    (1.1)

where D = diag(β_1, β_2, ..., β_N) with β_i = \frac{1}{(i - 1/2)^2 (\Delta\theta)^2}, 1 ≤ i ≤ N, and

T = \begin{pmatrix}
-2            & 1 + \lambda_1 &               &                   \\
1 - \lambda_2 & -2            & \ddots        &                   \\
              & \ddots        & \ddots        & 1 + \lambda_{N-1} \\
              &               & 1 - \lambda_N & -2
\end{pmatrix}    (1.2)

with λ_i = \frac{1}{2(i - 1/2)}, 1 ≤ i ≤ N. The known vector b is defined by

b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{pmatrix}, \qquad
b_j = \begin{pmatrix} (\Delta r)^2 f_{1j} \\ \vdots \\ (\Delta r)^2 f_{N-1,j} \\ (\Delta r)^2 f_{Nj} - (1 + \lambda_N) g_j \end{pmatrix}

The linear system so obtained is sparse: every equation involves at most 5 unknowns, so every row of the matrix A has at most 5 nonzero elements. To exploit this sparsity I use MATLAB's sparse-matrix commands, which avoid storing the zero elements.
The following code was used to construct the matrix A:

% Coefficients of Section 1.1.1: lambda_i = 1/(2(i-1/2)), beta_i = 1/((i-1/2)^2 dtheta^2)
lambda = 1./(2*((1:N)' - 0.5));
beta   = 1./(((1:N)' - 0.5).^2 * dtheta^2);
I  = speye(M);
I2 = ones(N,1);
D  = sparse(diag(beta));
d  = ones(M,1);
% T: sub-diagonal 1 - lambda_i, diagonal -2, super-diagonal 1 + lambda_i
T  = spdiags([1-[lambda(2:end);0], -2*I2, 1+[0;lambda(1:end-1)]], -1:1, N, N);
% Ap: coupling between neighbouring rays, periodic in theta (corner entries)
Ap = spdiags([d,d,d,d], [1,-1,M-1,-(M-1)], M, M);
A  = kron(I,T) - 2*kron(I,D) + kron(Ap,D);
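The right-hand side b defined above can be assembled in the same column-wise ordering. The following is only a sketch, reusing the quantities dr, lambda, F and g from the previous snippets (names are illustrative):

% Sketch: assemble b following the block definition of b_j above
B = dr^2 * F;                              % (Delta r)^2 * f_ij, one column per ray j
B(N, :) = B(N, :) - (1 + lambda(N)) * g;   % last entry of each block carries the boundary term
b = B(:);                                  % stack the blocks b_1, ..., b_M into a single vector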
Using the command spy(A) we obtain the sparsity patterns of A and T, shown in Fig. 1.1.

Fig. 1.1 Sparsity patterns of A and T for N = 16, M = 32; nz is the number of nonzero entries of each matrix

We can estimate the condition number of A using condest(A). The table below reports the condition number for different values of N, Δr and Δθ (in this work M is taken as 2N).

N      Δr       Δθ       K(A)
2^2    0.2222   0.7854   143.0244
2^3    0.1176   0.3927   1.9156e+03
2^4    0.0606   0.1963   2.8402e+04
2^5    0.0308   0.0982   4.3896e+05
2^6    0.0155   0.0491   6.9086e+06
2^7    0.0078   0.0245   1.0965e+08
2^8    0.0039   0.0123   1.7475e+09
2^9    0.0020   0.0061   2.7906e+10
2^10   0.0010   0.0031   4.4605e+11

As we can see, the condition number K(A) grows as the steps Δr and Δθ decrease.
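A minimal sketch of how such a table can be produced, assuming a hypothetical helper build_A(N, M) that wraps the construction code above for given N and M:

% Sketch: condition number estimate of A for increasing N (M = 2N)
for p = 2:10
    N = 2^p;  M = 2*N;
    A = build_A(N, M);               % hypothetical helper wrapping the construction above
    fprintf('N = 2^%d   dr = %.4f   dtheta = %.4f   K(A) = %.4e\n', ...
            p, 2/(2*N+1), 2*pi/M, condest(A));
end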

1.1.2 Error Analysis


Since the matrix A is invertible, the system is first solved directly for different values of N and M, computing the solution vector as

u = A^{-1} f

At this point I compared the computed vector with the vector of the exact solution, evaluated at the interior points of the grid, in order to obtain the global error (in the maximum norm). The evolution of the error for different values of N, Δr and Δθ is reported below.

N      Δr       Δθ       Error
2^2    0.2222   0.7854   0.0106
2^3    0.1176   0.3927   0.0023
2^4    0.0606   0.1963   5.5380e-04
2^5    0.0308   0.0982   1.3493e-04
2^6    0.0155   0.0491   3.3325e-05
2^7    0.0078   0.0245   8.2860e-06
2^8    0.0039   0.0123   2.0655e-06
2^9    0.0020   0.0061   5.1564e-07
2^10   0.0010   0.0031   1.2882e-07

The error decreases as the steps decrease: the scheme shows second-order convergence.
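A sketch of the error computation described in this section, assuming the matrix A, the right-hand side b and the grid arrays R and TH from the previous snippets:

% Sketch: direct sparse solve and max-norm error against the exact solution
u   = A \ b;                         % solve A u = b (backslash, rather than forming A^{-1})
Ue  = sin(R .* cos(TH));             % exact solution at the interior grid points
err = norm(u - Ue(:), inf);          % global error in the maximum norm
fprintf('max-norm error = %.4e\n', err);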

1.2 Iterative Methods


The linear system to solve is the one generated by the discretization described above. Iterative methods are methods for the solution of linear systems that are useful for matrices of large dimension or sparse matrices. An iterative method starts from an arbitrary initial value x_0 and generates a sequence of approximations of the exact solution. If x is the exact solution of the system, we have

\lim_{k \to +\infty} \| x_k - x \| = 0

In practice the method stops when it reaches a prescribed number of iterations or when some quantities (depending on the chosen method) fall below a requested threshold.
In this case GMRES (Generalized Minimal RESidual) is used.
Preconditioners are used to improve these methods: they transform the system into a mathematically equivalent one that is more efficient to solve.
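For instance, with a nonsingular preconditioner P one solves the mathematically equivalent left-preconditioned system

P^{-1} A x = P^{-1} b

where P is chosen so that P^{-1}A is better conditioned than A and systems with P are cheap to solve; for the incomplete factorizations used below, P = LU.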

1.2.1 GMRES
At every iteration GMRES determines the approximation of the solution that minimizes the 2-norm of the residual r = b − Ax. However, the number of iterations required depends on the condition number of the matrix A, which as we have seen is large. A way to reduce the computational cost is to interrupt the cycle after a prescribed number of iterations, set r_0 = r_m and x_0 = x_m, and restart the cycle. This variant is called restarted GMRES.

[U1,FLAG1,RELRES1,iter1,RESVEC1]=gmres(A,F,M,1e-6,M*N)

The gmres command takes as input, in addition to the matrix A and the vector F, the number of iterations before a restart, the requested tolerance and the maximum number of restarts. It outputs the solution of the system, a convergence flag, the relative residual (final residual with respect to the initial one), the number of restarts, the number of iterations of the last cycle and the vector of the relative residuals at every step of the last cycle. To decrease the number of iterations I used the ILU and ILUT preconditioners, with drop tolerances 0.01 and 0.001, which compute an incomplete LU factorization of A. The factors L and U were passed to gmres as the two preconditioner matrices.

GMRES ILU

setup.type = 'nofill';
[L,U] = ilu(A, setup);
[U2,FLAG2,RELRES2,iter2,RESVEC2] = gmres(A,F,M,1e-6,M*N,L,U)

GMRES ILUT (0.01)

setup.type = 'ilutp';
setup.droptol = 0.01;
[L,U] = ilu(A, setup);
[U3,FLAG3,RELRES3,iter3,RESVEC3] = gmres(A,F,M,1e-6,M*N,L,U)

GMRES ILUT (0.001)

setup.type = 'ilutp';
setup.droptol = 0.001;
[L,U] = ilu(A, setup);
[U4,FLAG4,RELRES4,iter4,RESVEC4] = gmres(A,F,M,1e-6,M*N,L,U)

The command ilu generates an incomplete factorization; setup.type specifies the type of factorization (with or without a drop tolerance) and setup.droptol specifies the tolerance value. As the following table of iteration counts shows, preconditioning accelerates the convergence of the method, especially for large N (with increasing N the condition number of A grows).

N      GMRES   ILU   ILUT(0.01)   ILUT(0.001)
2^2      8       2        4            3
2^3     13      12        6            4
2^4     20      18       11            6
2^5     12      43       21            9
2^6     73      67       38           16
2^7    140     136       76           30

We conclude this section with a comparison of the methods in terms of execution time (measured for N = 2^7).

Method        Execution time (s)
GMRES         612.53
ILU           1.2324
ILUT (0.01)   0.6007
ILUT (0.001)  0.6355

As expected, plain GMRES is the slowest. Note that ILUT(0.01) is faster than ILUT(0.001), even though the former requires more iterations (the time difference, however, is negligible). Both ILUT variants are greatly superior to the other two methods.

1.3 Conclusion
In conclusion, we solved the Poisson problem using the 5-point stencil and ILUT(0.01) with N = 2^9. The error obtained is 5.1564e-07 and the execution time is 3.4366 seconds.

Fig. 1.2 Solution computed with ILUT(0.01) for N = 2^9


Chapter 2

My second chapter

2.1 Non Steady-State


Since we are no longer in the steady state, we do not impose u_t = 0. We do not have the exact solution, so we must impose boundary conditions (the difference with respect to the steady state is that there, having the exact solution, the boundary condition was already fixed) and an initial condition. The choice was to consider a heat source at the boundary and an initial condition equal to one. We will solve the following system:

u_t + k_r\left(u_{rr} + \frac{1}{r} u_r\right) + k_\theta \frac{1}{r^2} u_{\theta\theta} = f(r, \theta),   (r, θ) ∈ Ω,  t ∈ [0, 1]

u(r, \theta, t) = 2 k_\theta \left(\frac{2N+1}{2}\right)^2,   (r, θ) ∈ ∂Ω,  t ∈ [0, 1]

u(r, \theta, 0) = 1,   (r, θ) ∈ Ω,  t = 0

Unlike the steady-state case, where we imposed k_r = k_θ = 1, we now consider k_r and k_θ arbitrary, treating both the symmetric and the asymmetric problem. This choice changes the matrices D = diag(k_θ β_1, k_θ β_2, ..., k_θ β_N) and T that compose A:


T = \begin{pmatrix}
-2(k_r + k_\theta)   & (1 + \lambda_1) k_r &                     &                         \\
(1 - \lambda_2) k_r  & \ddots              & \ddots              &                         \\
                     & \ddots              & \ddots              & (1 + \lambda_{N-1}) k_r \\
                     &                     & (1 - \lambda_N) k_r & -2(k_r + k_\theta)
\end{pmatrix}    (2.1)

Since there are time derivatives in addition to the spatial ones, a time discretization is needed on top of the spatial one. The idea is to determine, starting from the initial data, approximations at successive times. Let u'(t) = f(u(t), t) with a given initial condition, let k be the time step and U^n the approximation of the solution at time t_n (U^n ≈ u(t_n)). One possibility for determining the approximation at time t_{n+1} is the trapezoidal method:

\frac{U^{n+1} - U^n}{k} = \frac{1}{2}\left( f(U^n) + f(U^{n+1}) \right)

In this case we have to find a way to merge the two discretizations and obtain a relation that links U^{n+1}_{ij} to the previously computed U^n_{ij}.

2.2 Crank-Nicolson
We apply the spatial discretization using the 5-point stencil, as in the previous chapter. Discretizing time with the trapezoidal method we obtain the two-dimensional version of the Crank-Nicolson method:

U^{n+1}_{ij} = \left(I - \frac{k}{2}\Delta_{2h}\right)^{-1} \left(I + \frac{k}{2}\Delta_{2h}\right) U^{n}_{ij} + kF

where Δ_{2h} is equal to the matrix A of the previous chapter, with T updated to account for k_r and k_θ.
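A minimal sketch of the resulting time-stepping loop. It assumes that A2h is the operator described above (the Chapter 1 construction with D and T updated for k_r and k_θ) and that Fv is the discretized source/boundary vector; the names, the time step value and the direct solve are illustrative. The source term is folded into the right-hand side of the solve, which is the standard Crank-Nicolson form; in the report the linear system at each step is solved with GMRES, with and without preconditioning.

% Sketch: Crank-Nicolson time stepping over t in [0, 1]
% Assumes: A2h (NM x NM spatial operator), Fv (discrete source/boundary vector)
dt     = 0.01;                       % time step k (illustrative value)
nsteps = round(1/dt);
U  = ones(size(A2h, 1), 1);          % initial condition u(r, theta, 0) = 1
Id = speye(size(A2h, 1));
Aminus = Id - (dt/2)*A2h;            % matrix applied to U^{n+1}
Aplus  = Id + (dt/2)*A2h;            % matrix applied to U^{n}
for n = 1:nsteps
    rhs = Aplus*U + dt*Fv;           % right-hand side of the update
    U   = Aminus \ rhs;              % solved directly here; GMRES (+ ILU/ILUT) is used in the report
end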
Also in this case we used iterative methods (GMRES, with and without preconditioners).

Method        Symmetric (k_r = k_θ)   Asymmetric (k_r ≠ k_θ)
GMRES         5.7089e-04               4.9668e-04
ILU           8.9029e-04               7.6740e-04
ILUT(0.01)    9.3025e-04               8.1998e-04
ILUT(0.001)   0.0011                   7.6409e-04

This table shows the execution times for the different methods with symmetric diffusion coefficients (k_r = k_θ) and asymmetric ones (k_r = 0.1, k_θ = 0.4).
