Acknowledgement

I take immense pleasure in thanking Dr. Yogesh M. Desai, Professor, Department of Civil Engineering, for having permitted me to carry out this seminar work.
I wish to express my sincere gratitude to all my teachers for their able guidance and
valuable suggestions, which enabled this seminar work to assume its present form. I also
take this opportunity to thank all my friends for extending their support and helping me
complete this seminar work.

Abstract
A brief study of numerical methods for finding the solution of nonlinear time-dependent equations is presented in this article. Analytical theories provide
only a limited account of the array of complex phenomena governed by nonlinear
PDEs; hence numerical methods are found to be the best alternative. In this
paper the focus is on the following numerical methods: finite difference methods such as
Forward Time Centered Space (FTCS), Backward Time Centered Space (BTCS), the
MacCormack method and a Crank-Nicolson type method; finite element methods; the
variational iteration method; and spectral methods such as the Fourier-Galerkin and
Fourier-collocation methods. The numerical solutions are compared with the exact solutions.

Contents
1 Introduction

2 Finite Difference Method
  2.1 Forward Time Centered Space (FTCS) Method
  2.2 Backward Time Centered Space (BTCS) Method
  2.3 MacCormack Method
    2.3.1 Predictor Step
    2.3.2 Corrector Step

3 Finite Element Method
  3.1 Method of Solution
  3.2 Results

4 Variational Iteration Method

5 Crank-Nicolson Type Method

6 Spectral Methods
  6.1 Fourier-Galerkin methods
  6.2 Fourier-Collocation methods

7 Summary and Conclusions

References

List of Figures
1 Burgers equation, FTCS, h = 0.2, k = 0.01, T = 0.1, 0.4, 0.7 and 1.0 [1]
2 Burgers equation, BTCS, h = 0.2, k = 0.01, T = 0.1, 0.4, 0.7 and 1.0 [1]
3 Burgers equation, MacCormack, h = 0.2, k = 0.01, T = 0.1, 0.4, 0.7 and 1.0 [1]
4 Burgers equation, exact, FTCS, BTCS and MacCormack, h = 0.2, k = 0.01, T = 0.4 [1]
5 The numerical solution obtained at different times for ε = 1, N = 40, Δt = 0.0001 [4]
6 The numerical solution obtained at different times for ε = 0.1, N = 40, Δt = 0.0005 [4]
7 The numerical solution obtained at different times for ε = 0.01, N = 40, Δt = 0.0005 [4]
8 The numerical solution obtained at different times for ε = 0.001, N = 40, Δt = 0.0001 [4]
9 The numerical solution obtained at different times for ε = 0.00001, N = 40, Δt = 0.0001 [4]
10 Exact and numerical solution with k = 0.1, Δt = 0.001, Δx = 0.0125 at t = 0.2, 0.4 and 0.6 [5]

List of Tables
1 Comparison of the numerical solutions obtained with various values of Δt for ε = 1, N = 80 at t = 0.1 with the exact solution [4]
2 Comparison of the numerical solutions obtained with different numbers of basis elements for Δt = 0.00001, ε = 0.1, at t = 0.1 with the exact solution [4]
3 Comparison of the numerical solutions obtained for various values of ε, N = 40, Δt = 0.0002, at different times with the exact solution [4]

1 Introduction
Partial differential equations (PDEs) provide a quantitative description for many central
models in physical, biological, and social sciences. The description is furnished in terms
of unknown functions of two or more independent variables, and the relation between
partial derivatives with respect to those variables.
A PDE is said to be nonlinear if the relations between the unknown functions and
their partial derivatives involved in the equation are nonlinear. Despite the apparent
simplicity of the underlying differential relations, nonlinear PDEs govern a vast array
of complex phenomena of motion, reaction, diffusion, equilibrium and more. Due to
their pivotal role in science and engineering, PDEs are studied extensively by specialists
and practitioners. Indeed, these studies found their way into many entries throughout
the scientific literature. They reflect a rich development of mathematical theories and
analytical techniques to solve PDEs and illuminate the phenomena they govern. Yet,
analytical theories provide only a limited account for the array of complex phenomena
governed by nonlinear PDEs.
Over the past sixty years, scientific computation has emerged as the most versatile
tool to complement theory and experiments. Modern numerical methods, in particular
those for solving nonlinear PDEs, are at the heart of many of these advanced scientific
computations. Numerical solutions of nonlinear PDEs were first put into use in practical
problems, by John von Neumann, in the mid-1940s as part of the war effort. Since
then, the advent of powerful computers combined with the development of sophisticated
numerical algorithms has revolutionized science and technology.
Powered by modern numerical methods for solving nonlinear PDEs, a whole new
discipline of numerical weather prediction was formed. Numerical methods replaced wind
tunnels in the design of new airplanes. The impact of numerical models of nonlinear PDEs
can be understood from a quote by Eitan Tadmor[9]: "Numerical solutions of nonlinear
PDEs found their way from financial models on Wall Street to traffic models on Main
Street".
The usual distinction made in nonlinear PDEs is between two main classes of
problems[9]: boundary value problems and time-dependent problems. The discussion
in this paper covers various methods of solution of time-dependent problems. Some
examples of time-dependent PDEs are:
The time-dependent one-dimensional Schrödinger equation

$$ i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi(x,t) = H\psi(x,t) $$

where $i$ is the imaginary unit, $\psi$ is the time-dependent wave function, $V(x)$ is the potential
and $H$ is the Hamiltonian operator.

Burgers' equation, which is the one-dimensional nonlinear partial differential equation

$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \epsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad a < x < b,\ t > 0, $$

is a mathematical model of both turbulence theory and shock wave theory, in which $\epsilon$
is a small parameter known as the kinematic viscosity of the fluid motion. The distinctive
feature of Burgers' equation is that it is the simplest mathematical formulation of the
competition between nonlinear advection and viscous diffusion.
In this paper, the focus is on four main classes of numerical methods: finite difference
methods, finite element methods, variational methods and spectral methods.
The best known method, the finite difference method[2], consists of replacing each derivative by
a difference quotient in the classic formulation. It is simple to code and economical to
compute. In a sense, a finite difference formulation offers a more direct approach to
the numerical solution of partial differential equations than does a method based on other
formulations. The drawback of finite difference methods is accuracy and flexibility[10].
General finite difference methods such as the FTCS, BTCS and MacCormack methods are
discussed first. Another method, which uses the finite difference algorithm with a simple
modification and is known as the Crank-Nicolson type method[5], is also discussed in this paper.
The finite element method (FEM) is a numerical method for solving a differential or
integral equation[11]. It has been applied to a number of physical problems where the
governing differential equations are available. The method essentially consists of assuming
a piecewise continuous function for the solution and obtaining the parameters of the
function in a manner that reduces the error in the solution. FEM with discretization in
time is used in this paper to solve Burgers' equation.
The variational iteration method, an iteration method proposed by He[6], is discussed
later in this paper. The variational iteration method, which is a modified general Lagrange
multiplier method, has been shown to solve effectively, easily and accurately a large class
of nonlinear problems, with approximations which converge rapidly to accurate solutions.
Later on, spectral methods such as the Fourier-Galerkin and Fourier-collocation methods
are discussed.
Much like the theory of nonlinear PDEs, the numerical analysis of their approximate
solutions is still a "work in progress".

2 Finite Difference Method
A finite difference method proceeds by replacing the derivatives in the differential equations
with finite difference approximations. This gives a large but finite algebraic system
of equations to be solved in place of the differential equation, something that can be done
on a computer. Three finite difference schemes, FTCS[1], BTCS[1] and the MacCormack
method[1], have been used to solve a special case of the one-dimensional Burgers' equation with
unit viscosity. Numerical results are analyzed and compared with exact solutions.

2.1 Forward Time Centered Space(FTCS) Method


The nonlinear second-order Burgers' equation with unit viscosity is

$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \frac{\partial^2 u}{\partial x^2} \tag{1} $$

with boundary conditions

$$ u(-9.0, t) = 2.0, \qquad u(9.0, t) = -2.0, \qquad t > 0, $$

and initial condition

$$ u(x, 0.1) = \frac{-2\sinh x}{\cosh x - e^{-0.1}}. $$
In the FTCS method[5], a forward difference approximation is used for the time derivative,

$$ \frac{\partial u_{i,j}}{\partial t} \approx \frac{u_{i,j+1} - u_{i,j}}{k}, $$

and central difference approximations for the spatial derivatives,

$$ \frac{\partial u_{i,j}}{\partial x} \approx \frac{u_{i+1,j} - u_{i-1,j}}{2h}, \qquad \frac{\partial^2 u_{i,j}}{\partial x^2} \approx \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}. $$

Using the FTCS method, we can write (1) as

$$ \frac{u_{i,j+1} - u_{i,j}}{k} + u_{i,j}\,\frac{u_{i+1,j} - u_{i-1,j}}{2h} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} $$

$$ u_{i,j+1} = u_{i,j} - \frac{k}{2h}\,u_{i,j}\,(u_{i+1,j} - u_{i-1,j}) + \frac{k}{h^2}\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) $$

$$ u_{i,j+1} = u_{i,j} - c\,u_{i,j}\,(u_{i+1,j} - u_{i-1,j}) + d\,(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) $$

where $c = \dfrac{k}{2h}$ and $d = \dfrac{k}{h^2}$. Collecting terms,

$$ u_{i,j+1} = (d + c\,u_{i,j})\,u_{i-1,j} + (1 - 2d)\,u_{i,j} + (d - c\,u_{i,j})\,u_{i+1,j}. $$

Using the above scheme, the solution at T = 0.1, 0.4, 0.7 and 1.0 is approximated with h = 0.2
and k = 0.01. The results are presented in Figure 1.

Figure 1: Burgers equation, FTCS, h = 0.2, k = 0.01, T = 0.1, 0.4, 0.7 and 1.0 [1]
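As an illustration, here is a minimal Python sketch of the FTCS march above (my own sketch, not code from the cited reference); the grid, time step, boundary values and initial profile follow the problem statement given earlier.

```python
import numpy as np

h, k = 0.2, 0.01                                    # space and time steps
x = np.arange(-9.0, 9.0 + h/2, h)                   # grid on [-9, 9]
c, d = k/(2*h), k/h**2                              # c = k/2h, d = k/h^2

u = -2*np.sinh(x) / (np.cosh(x) - np.exp(-0.1))     # initial profile u(x, 0.1)
t = 0.1
while t < 1.0 - 1e-12:                              # march from T = 0.1 to T = 1.0
    un = u.copy()
    u[1:-1] = ((d + c*un[1:-1])*un[:-2]             # (d + c u_ij) u_{i-1,j}
               + (1 - 2*d)*un[1:-1]                 # (1 - 2d) u_{i,j}
               + (d - c*un[1:-1])*un[2:])           # (d - c u_ij) u_{i+1,j}
    u[0], u[-1] = 2.0, -2.0                         # boundary values
    t += k
```

Here d = 0.25 satisfies the usual explicit-diffusion restriction d <= 1/2, so the march is stable for this choice of h and k.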

2.2 Backward Time Centered Space(BTCS) Method


In the BTCS method, a backward difference approximation is used for the time derivative,

$$ \frac{\partial u_{i,j}}{\partial t} = \frac{u_{i,j} - u_{i,j-1}}{k} + O(k). $$

Dropping the truncation error terms and shifting the time index by one, $(j-1) \to j$ and $j \to (j+1)$,
and using central difference approximations for the spatial derivatives at $t_{j+1}$, we can write (1) as

$$ \frac{u_{i,j+1} - u_{i,j}}{k} + u_{i,j}\,\frac{u_{i+1,j+1} - u_{i-1,j+1}}{2h} = \frac{u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1}}{h^2} $$

$$ u_{i,j+1} - u_{i,j} + \frac{k}{2h}\,u_{i,j}\,(u_{i+1,j+1} - u_{i-1,j+1}) = \frac{k}{h^2}\,(u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1}) $$

$$ u_{i,j} = -\Big(\frac{k}{h^2} + \frac{k}{2h}\,u_{i,j}\Big)u_{i-1,j+1} + \Big(1 + \frac{2k}{h^2}\Big)u_{i,j+1} + \Big(\frac{k}{2h}\,u_{i,j} - \frac{k}{h^2}\Big)u_{i+1,j+1} $$

$$ u_{i,j} = -(d + c\,u_{i,j})\,u_{i-1,j+1} + (1 + 2d)\,u_{i,j+1} + (c\,u_{i,j} - d)\,u_{i+1,j+1} $$

where $c = \dfrac{k}{2h}$ and $d = \dfrac{k}{h^2}$.
The solution at T = 0.1, 0.4, 0.7 and 1.0 is approximated using the BTCS method with h = 0.2
and k = 0.01. The results are presented in Figure 2.

Figure 2: Burgers equation, BTCS, h = 0.2, k = 0.01, T = 0.1, 0.4, 0.7 and 1.0 [1]
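A corresponding sketch of the BTCS march: because the advective coefficient uses the known values u_{i,j}, each time step reduces to a single tridiagonal linear solve. Again this is an illustrative implementation, not code from the reference.

```python
import numpy as np

h, k = 0.2, 0.01
x = np.arange(-9.0, 9.0 + h/2, h)
c, d = k/(2*h), k/h**2

u = -2*np.sinh(x) / (np.cosh(x) - np.exp(-0.1))     # u(x, 0.1), as in the FTCS sketch
t = 0.1
while t < 1.0 - 1e-12:
    ui = u[1:-1]                                    # known interior values u_{i,j}
    lo = -(d + c*ui)                                # coefficient of u_{i-1,j+1}
    di = np.full_like(ui, 1 + 2*d)                  # coefficient of u_{i,j+1}
    hi = c*ui - d                                   # coefficient of u_{i+1,j+1}
    rhs = ui.copy()
    rhs[0] -= lo[0]*2.0                             # move known boundary u(-9) = 2
    rhs[-1] -= hi[-1]*(-2.0)                        # and u(9) = -2 to the right side
    A = np.diag(di) + np.diag(hi[:-1], 1) + np.diag(lo[1:], -1)
    u[1:-1] = np.linalg.solve(A, rhs)
    u[0], u[-1] = 2.0, -2.0
    t += k
```

A banded solver (e.g. scipy.linalg.solve_banded) could replace the dense solve, but the dense version keeps the sketch short.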

2.3 Mac-Cormack Method


The MacCormack method has a predictor step and a corrector step, which together give an
average value of the time derivative. Forward differences are used for the space derivatives
in the predictor step and backward differences in the corrector step, and because of this
combination the method has second-order accuracy.
The nonlinear second-order Burgers' equation with unit viscosity is

$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \frac{\partial^2 u}{\partial x^2}, $$

or $u_t + u\,u_x = u_{xx}$, i.e.

$$ u_t = -u\,u_x + u_{xx}. \tag{2} $$

2.3.1 Predictor Step

Using forward differences for the spatial derivatives, we can write (1) as

$$ \frac{u_{i,j+1} - u_{i,j}}{\Delta t} + u_{i,j}\,\frac{u_{i+1,j} - u_{i,j}}{\Delta x} = \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{\Delta x^2} $$

$$ u_{i,j+1} = u_{i,j} - \frac{\Delta t}{\Delta x}\,u_{i,j}\,(u_{i+1,j} - u_{i,j}) + \frac{\Delta t}{\Delta x^2}\,(u_{i-1,j} - 2u_{i,j} + u_{i+1,j}) $$

$$ u_{i,j+1} = u_{i,j} - c\,u_{i,j}\,(u_{i+1,j} - u_{i,j}) + d\,(u_{i-1,j} - 2u_{i,j} + u_{i+1,j}) \tag{3} $$

where $c = \dfrac{\Delta t}{\Delta x}$ and $d = \dfrac{\Delta t}{\Delta x^2}$, so

$$ u_{i,j+1} = d\,u_{i-1,j} + (1 - 2d + c\,u_{i,j})\,u_{i,j} + (d - c\,u_{i,j})\,u_{i+1,j}. $$

Hence the predicted value is

$$ u_{i,p} = d\,u_{i-1,j} + (1 - 2d + c\,u_{i,j})\,u_{i,j} + (d - c\,u_{i,j})\,u_{i+1,j}. $$

2.3.2 Corrector Step

By using Taylor series expansions for $u(x,t)$ and $u_t(x,t)$,

$$ u_{i,j+1} = u_{i,j} + \frac{\partial u_{i,j}}{\partial t}\,\Delta t + \frac{1}{2}\frac{\partial^2 u_{i,j}}{\partial t^2}\,\Delta t^2 + O(\Delta t^3) $$

and

$$ \frac{\partial u_{i,j+1}}{\partial t} = \frac{\partial u_{i,j}}{\partial t} + \frac{\partial^2 u_{i,j}}{\partial t^2}\,\Delta t + O(\Delta t^2), \qquad \text{so} \qquad \frac{\partial^2 u_{i,j}}{\partial t^2} \approx \frac{1}{\Delta t}\Big(\frac{\partial u_{i,j+1}}{\partial t} - \frac{\partial u_{i,j}}{\partial t}\Big). $$

Hence

$$ u_{i,j+1} = u_{i,j} + \frac{\partial u_{i,j}}{\partial t}\,\Delta t + \frac{1}{2}\,\frac{1}{\Delta t}\Big(\frac{\partial u_{i,j+1}}{\partial t} - \frac{\partial u_{i,j}}{\partial t}\Big)\Delta t^2 = u_{i,j} + \frac{1}{2}\Big(\frac{\partial u_{i,j+1}}{\partial t} + \frac{\partial u_{i,j}}{\partial t}\Big)\Delta t. $$

Using Equation (2),

$$ u_{i,j+1} = u_{i,j} + \frac{1}{2}\Big(-u_{i,j+1}\frac{\partial u_{i,j+1}}{\partial x} + \frac{\partial^2 u_{i,j+1}}{\partial x^2} - u_{i,j}\frac{\partial u_{i,j}}{\partial x} + \frac{\partial^2 u_{i,j}}{\partial x^2}\Big)\Delta t. $$

Using backward difference approximations for the spatial derivatives at level $j+1$ (and forward differences at level $j$, as in the predictor),

$$ u_{i,j+1} = u_{i,j} + \frac{1}{2}\Big(-u_{i,j+1}\frac{u_{i,j+1} - u_{i-1,j+1}}{\Delta x} + \frac{u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1}}{\Delta x^2} - u_{i,j}\frac{u_{i+1,j} - u_{i,j}}{\Delta x} + \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{\Delta x^2}\Big)\Delta t $$

$$ u_{i,j+1} = u_{i,j} + \frac{1}{2}\Big(-c\,u_{i,j+1}(u_{i,j+1} - u_{i-1,j+1}) + d\,(u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1}) - c\,u_{i,j}(u_{i+1,j} - u_{i,j}) + d\,(u_{i-1,j} - 2u_{i,j} + u_{i+1,j})\Big) $$

where $c = \dfrac{\Delta t}{\Delta x}$ and $d = \dfrac{\Delta t}{\Delta x^2}$. Using equation (3), the last two terms in the bracket equal $u_{i,j+1} - u_{i,j}$, so

$$ u_{i,j+1} = u_{i,j} + \frac{1}{2}\Big(-c\,u_{i,j+1}(u_{i,j+1} - u_{i-1,j+1}) + d\,(u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1}) + u_{i,j+1} - u_{i,j}\Big). $$

The corrected value is therefore

$$ u_{i,c} = \frac{1}{2}\Big(u_{i,j} + u_{i,j+1} + d\,u_{i-1,j+1} - u_{i,j+1}\big(c\,(u_{i,j+1} - u_{i-1,j+1}) + 2d\big) + d\,u_{i+1,j+1}\Big) $$

and, replacing the $j+1$ values by the predicted values,

$$ u_{i,c} = \frac{1}{2}\Big(u_{i,j} + d\,u_{i-1,p} + u_{i,p}\big(1 - c\,(u_{i,p} - u_{i-1,p}) - 2d\big) + d\,u_{i+1,p}\Big). $$

The solution given by the MacCormack method is

$$ u_{i,c} = \frac{1}{2}\Big(u_{i,j} + d\,u_{i-1,p} + u_{i,p}\big(1 - c\,(u_{i,p} - u_{i-1,p}) - 2d\big) + d\,u_{i+1,p}\Big). $$

The solution at T = 0.1, 0.4, 0.7 and 1.0 is approximated using the MacCormack method
with h = 0.2 and k = 0.01. The results are presented in Figure 3.

Figure 3: Burgers Equation Mac-Cormack h = 0.2 k = 0.01 T = 0.1,0.4,0.7 and 1.0 [1]
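A sketch of one MacCormack step, built directly from the predictor and corrector formulas above (illustrative only; the time loop, grid and initial profile follow the FTCS sketch, and note that here c = k/h rather than k/2h).

```python
import numpy as np

h, k = 0.2, 0.01
x = np.arange(-9.0, 9.0 + h/2, h)
c, d = k/h, k/h**2                       # c = dt/dx, d = dt/dx^2 for this method

def maccormack_step(u):
    # Predictor: forward space differences, equation (3)
    up = u.copy()
    up[1:-1] = (d*u[:-2] + (1 - 2*d + c*u[1:-1])*u[1:-1]
                + (d - c*u[1:-1])*u[2:])
    # Corrector: backward space differences applied to the predicted values
    uc = u.copy()                        # boundaries keep their fixed values
    uc[1:-1] = 0.5*(u[1:-1] + d*up[:-2]
                    + up[1:-1]*(1 - c*(up[1:-1] - up[:-2]) - 2*d)
                    + d*up[2:])
    return uc

u = -2*np.sinh(x) / (np.cosh(x) - np.exp(-0.1))    # u(x, 0.1)
t = 0.1
while t < 1.0 - 1e-12:
    u = maccormack_step(u)
    t += k
```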

Solutions of Burgers' equation obtained by the various finite difference methods discussed
above are compared with the exact solution at T = 0.4 in order to check their accuracy.
All the methods behave almost similarly and deviate from the exact solution over the
range -2 to 2, as shown in Figure 4. The BTCS implicit method appears to give more
accurate results at times 0.7 and 1.0. At time 0.1 all methods behave in a similar manner
and deviate considerably for x between -0.5 and 0.5. At time 0.4 the MacCormack method
seems to be more accurate.

Figure 4: Burgers equation, exact, FTCS, BTCS and MacCormack, h = 0.2, k = 0.01, T = 0.4 [1]

3 Finite Element Method
The Galerkin finite element method constructed on the method of discretization in time
was applied to solve the one-dimensional nonlinear Burgers' equation. The system of
nonlinear equations obtained at each time step was solved by using the Newton method.
The method of discretization in time[4] allows us to convert an initial boundary-value
problem of the second order in two variables x, t into the solution of p ordinary differential
equations with corresponding boundary conditions. Burgers' equation is the one-dimensional
nonlinear partial differential equation

$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = \epsilon\,\frac{\partial^2 u}{\partial x^2}, \qquad a < x < b,\ t > 0, $$

with initial condition

$$ u(x, 0) = \sin(\pi x), \qquad 0 < x < 1, $$

and boundary conditions

$$ u(0, t) = 0, \qquad u(1, t) = 0, \qquad t > 0. $$

The exact solution of Burgers' equation with the prescribed conditions was given by Cole
as

$$ u(x,t) = 2\pi\epsilon\,\frac{\sum_{n=1}^{\infty} a_n\, e^{-n^2\pi^2\epsilon t}\, n \sin(n\pi x)}{a_0 + \sum_{n=1}^{\infty} a_n\, e^{-n^2\pi^2\epsilon t} \cos(n\pi x)} \tag{4} $$

where $a_0$ and $a_n\ (n = 1, 2, \ldots)$ are Fourier coefficients defined by

$$ a_0 = \int_0^1 e^{-(2\pi\epsilon)^{-1}(1-\cos(\pi x))}\,dx, $$

$$ a_n = 2\int_0^1 e^{-(2\pi\epsilon)^{-1}(1-\cos(\pi x))} \cos(n\pi x)\,dx, \qquad n \ge 1. $$
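For reference, the series (4) can be evaluated numerically by quadrature of the Fourier coefficients and truncation of the sums. The sketch below is my own (not code from [4]); the truncation at 50 terms is an assumed, illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

def cole_exact(x, t, eps, n_terms=50):
    """Exact solution (4) of Burgers' equation with u(x,0) = sin(pi x)."""
    w = lambda s: np.exp(-(1.0 - np.cos(np.pi*s)) / (2.0*np.pi*eps))
    a0 = quad(w, 0.0, 1.0)[0]
    num, den = 0.0, a0
    for n in range(1, n_terms + 1):
        an = 2.0*quad(lambda s: w(s)*np.cos(n*np.pi*s), 0.0, 1.0)[0]
        decay = np.exp(-n*n*np.pi*np.pi*eps*t)
        num += an*decay*n*np.sin(n*np.pi*x)
        den += an*decay*np.cos(n*np.pi*x)
    return 2.0*np.pi*eps*num/den

print(cole_exact(0.5, 0.1, eps=1.0))   # about 0.37158, the exact value in Table 1
```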

3.1 Method of Solution


1. The interval [0, T] is divided into p subintervals of length Δt = T/p, where T is the
total time and p is chosen as a positive integer.

2. The derivative ∂u/∂t is replaced by the difference quotient $(z_j(x) - z_{j-1}(x))/\Delta t$ at
each of the points of division $t_j = j\Delta t\ (j = 1, 2, \ldots, p)$. Hence, the partial differential
equation is converted into p ordinary differential equations.

3. Starting with the function $z_0(x) = u(x, 0)$, successively for $j = 1, 2, \ldots, p$, the solutions
of the ordinary differential equations with the boundary conditions are obtained.
Let us consider Burgers' equation with the given initial condition and boundary
conditions. The method of discretization in time[4] leads to the problem of finding,
successively for $j = 1, 2, \ldots, p$, the functions $z_j(x)$ which are the solutions of the problems

$$ -\epsilon\, z_j''(x) + z_j(x)\, z_j'(x) + \frac{1}{\Delta t}\big(z_j(x) - z_{j-1}(x)\big) = 0 \tag{5} $$

$$ z_j(0) = 0, \qquad z_j(1) = 0, \tag{6} $$

where $z_0(x) = u(x, 0)$. The boundary-value problems (5) and (6) can be solved either
exactly or numerically, but the exact solution of (5) and (6) becomes more difficult with
increasing j. So, we use the finite element method to solve them. The weak form of
Equation (5) is

$$ \int_0^1 v(x)\Big(-\epsilon\, z_j''(x) + z_j(x)\, z_j'(x) + \frac{1}{\Delta t}\big(z_j(x) - z_{j-1}(x)\big)\Big)dx = 0, \qquad j = 1, 2, \ldots, p, \tag{7} $$

where $v(x)$ is a test function, i.e. $v(x)$ and its derivative $v'(x)$ exist on the interval [0, 1]
and are square integrable. The linear space of all test functions is denoted by H[0, 1]. To
construct the test functions $v(x)$ we choose a finite linearly independent set of N + 1 test
basis functions $w_1(x), w_2(x), \ldots, w_{N+1}(x)$, where $w_i(x) \in H[0, 1]$. Hence, the test function
$v(x)$ can be written in the form

$$ v(x) = \sum_{i=1}^{N+1} a_i\, w_i(x) \tag{8} $$

where the coefficients $a_i$ are arbitrary real numbers.


We construct an approximate solution of (7) by applying the Galerkin method. We shall
assume that the Galerkin approximation $z_j^N$ of the function $z_j(x)$ in Eq. (7) is of the form

$$ z_j^N = \sum_{i=1}^{N+1} c_i^j\, \phi_i(x) \tag{9} $$

where $\phi_i(x) \in H[0, 1]$ are linearly independent trial basis functions and $c_i^j$ are yet undetermined
coefficients. The test basis functions $w_i(x)$ and the trial basis functions $\phi_i(x)$ are
chosen to be the same according to the Galerkin method:

$$ w_i(x) = \phi_i(x), \qquad i = 1, 2, \ldots, N+1. $$

In the finite element method, the trial basis functions $\phi_i(x)\ (i = 1, 2, \ldots, N+1)$ are
selected by a systematic technique: the interval [0, 1] is divided into N subintervals or
elements $\Omega_1, \Omega_2, \ldots, \Omega_N$ of equal length h. Let $x_i$ and $x_{i+1}$ be the endpoints of the i-th
element $\Omega_i$, i.e.

$$ \Omega_i = [x_i, x_{i+1}], \qquad h = x_{i+1} - x_i, $$
so that $x_1 = 0$ and $x_{N+1} = 1$; we call the endpoints nodes. We define the trial
basis functions $\phi_i(x)\ (i = 1, 2, \ldots, N+1)$ by

$$ \phi_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{h} & x \in \Omega_{i-1} \\[4pt] 1 - \dfrac{x - x_i}{h} & x \in \Omega_i \\[4pt] 0 & x \notin \Omega_{i-1} \cup \Omega_i, \end{cases} $$

such that the basis functions $\phi_i(x)$ satisfy the following properties:

1. Each trial function $\phi_i(x) \in H[0, 1]\ (i = 1, 2, \ldots, N+1)$.

2. Each trial function $\phi_i(x)$ is defined piecewise over the elements, and in this case it
is a linear function on each $\Omega_j\ (j = 1, 2, \ldots, N)$.

3. $\phi_i(x_j) = \delta_{ij}\ (i, j = 1, 2, \ldots, N+1)$, so that the value of the test function
$v(x) = \sum_{i=1}^{N+1} a_i \phi_i(x)$ at node $x_j$ is $v(x_j) = a_j$.

4. If $|i - j| \ge 2$ then $\phi_i(x)\,\phi_j(x) = 0$.

The test function $v(x)$ of the form (8) must vanish at x = 0 and x = 1; then, in view
of property 3, we have $a_1 = 0$ and $a_{N+1} = 0$. So, the test function is of the form

$$ v(x) = \sum_{i=2}^{N} a_i\, \phi_i(x), \tag{10} $$

where the coefficients $a_i$ are arbitrary real numbers. Similarly, since $z_j^N$ must satisfy the
given boundary conditions, the approximate solution of $z_j$ given by (9) is of the form

$$ z_j^N = \sum_{i=2}^{N} c_i^j\, \phi_i(x). \tag{11} $$

Hence, integrating the weak form (7) by parts and using $u(0) = 0$, $u(1) = 0$, we
obtain

$$ \int_0^1 \epsilon\, z_j'(x)\, v'(x)\, dx + \int_0^1 v(x)\Big(z_j(x)\, z_j'(x) + \frac{1}{\Delta t}\, z_j(x)\Big) dx = \int_0^1 \frac{1}{\Delta t}\, v(x)\, z_{j-1}(x)\, dx \tag{12} $$

for $j = 1, 2, \ldots, p$. Substituting (10) and (11) into (12), after a simple rearrangement
we have

$$ \epsilon \sum_{n=2}^{N} c_n^j A_{kn} + \sum_{n=2}^{N}\sum_{m=2}^{N} c_n^j c_m^j B_{knm} + \frac{1}{\Delta t}\sum_{n=2}^{N} c_n^j C_{kn} = \frac{1}{\Delta t} D_k^j \tag{13} $$

where $k = 2, 3, \ldots, N$, $j = 1, 2, 3, \ldots, p$, and

$$ A_{kn} = \int_0^1 \phi_k'\, \phi_n'\, dx = \begin{cases} 2/h & k = n \\ -1/h & k = n-1 \text{ or } k = n+1 \\ 0 & \text{otherwise,} \end{cases} $$

$$ B_{knm} = \int_0^1 \phi_k\, \phi_n\, \phi_m'\, dx = \begin{cases} 1/6 & k = n-1 \text{ and } k = m-1 \text{ (simultaneously),} \\ -1/6 & k = n-1 \text{ and } k = m \text{ (simultaneously),} \\ 1/3 & k = n \text{ and } k = m-1 \text{ (simultaneously),} \\ -1/3 & k = n \text{ and } k = m+1 \text{ (simultaneously),} \\ 1/6 & k = n+1 \text{ and } k = m \text{ (simultaneously),} \\ -1/6 & k = n+1 \text{ and } k = m+1 \text{ (simultaneously),} \\ 0 & \text{otherwise,} \end{cases} $$

$$ C_{kn} = \int_0^1 \phi_k\, \phi_n\, dx = \begin{cases} 2h/3 & k = n \\ h/6 & k = n-1 \text{ or } k = n+1 \\ 0 & \text{otherwise,} \end{cases} $$

$$ D_k^j = \int_0^1 \phi_k\, z_{j-1}\, dx. $$

For each j, Equation (13) is a nonlinear system which consists of N - 1 equations in
N - 1 unknowns. The nonlinear systems obtained were solved by the generalized Newton
method for nonlinear systems of equations. The results obtained for various values of ε
were compared with the exact solution in the tables.
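A compact sketch of one possible implementation of the scheme (13) is given below (my own, not the code of [4]): the matrices A and C are assembled from the formulas above, the quadratic term and its tridiagonal Jacobian follow from the nonzero entries of B, and each time level is advanced with a few Newton iterations. The parameter values are illustrative.

```python
import numpy as np

N, eps, dt, steps = 40, 0.1, 1e-4, 1000      # march to t = 0.1
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
m = N - 1                                    # interior unknowns c_2, ..., c_N

A = (2/h)*np.eye(m) - (1/h)*(np.eye(m, k=1) + np.eye(m, k=-1))    # int phi_k' phi_n'
C = (2*h/3)*np.eye(m) + (h/6)*(np.eye(m, k=1) + np.eye(m, k=-1))  # int phi_k phi_n

def nonlin(c):
    """Quadratic term N_k = sum c_n c_m B_knm and its tridiagonal Jacobian."""
    z = np.concatenate(([0.0], c, [0.0]))    # append homogeneous boundary values
    Nk = (z[2:]**2 - z[:-2]**2 + z[1:-1]*(z[2:] - z[:-2])) / 6.0
    J = (np.diag((z[2:] - z[:-2]) / 6.0)
         + np.diag((2*z[2:-1] + z[1:-2]) / 6.0, 1)
         - np.diag((2*z[1:-2] + z[2:-1]) / 6.0, -1))
    return Nk, J

c = np.sin(np.pi * x[1:-1])                  # z_0 = u(x, 0) at the interior nodes
for j in range(steps):                       # time levels j = 1, ..., p
    d = C @ c                                # D_k = int phi_k z_{j-1} dx
    c_new = c.copy()
    for _ in range(10):                      # Newton iteration on system (13)
        Nk, Jn = nonlin(c_new)
        F = eps*(A @ c_new) + Nk + (C @ c_new)/dt - d/dt
        delta = np.linalg.solve(eps*A + C/dt + Jn, -F)
        c_new += delta
        if np.max(np.abs(delta)) < 1e-12:
            break
    c = c_new

# c[m//2] approximates u(0.5, 0.1); it should lie close to the N = 40 value
# 0.87764 reported in Table 2 (the time step here is larger than in the table).
print(c[m//2])
```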

3.2 Results
The nonlinear systems obtained from Equation (13) for each value of j were solved by
using the Newton method[4]. The reason for choosing the Newton method is that it
converges very rapidly to the solution.
Furthermore, the Jacobian matrix in the Newton algorithm is tridiagonal because the
basis functions in the finite element method were chosen as linear functions. This is an
advantage in terms of computing time. The resulting system of equations corresponding
to each value of j has been solved by using a direct method.
For the purpose of verification, the numerical solutions of Burgers' equation obtained
for various ε values at different times have been compared with the exact solution. In
order to evaluate the numerical solution, the number of divisions of the interval [0, T] and
the number of basis elements were both increased.
As p is increased (or equivalently Δt is decreased), the
Table 1: Comparison of the numerical solutions obtained with various values of Δt for ε = 1, N = 80 at t = 0.1 with the exact solution[4]

x       Numerical solutions                                              Exact solution
        Δt = 0.0002    Δt = 0.0001    Δt = 0.0004    Δt = 0.00002
0.1 0.10965 0.10959 0.10956 0.10955 0.10954
0.2 0.21000 0.20990 0.20984 0.20982 0.20979
0.3 0.29219 0.29204 0.29196 0.29193 0.29190
0.4 0.34827 0.34810 0.34800 0.34796 0.34792
0.5 0.37194 0.37176 0.37165 0.37162 0.37158
0.6 0.35940 0.35922 0.35912 0.35908 0.35905
0.7 0.31020 0.31005 0.30996 0.30993 0.30991
0.8 0.22804 0.22793 0.22786 0.22784 0.22782
0.9 0.12080 0.12074 0.12071 0.12070 0.12069

Table 2: Comparison of the numerical solutions obtained with different numbers of basis elements for Δt = 0.00001, ε = 0.1, at t = 0.1 with the exact solution[4]

x       Numerical solutions                                              Exact solution
        N = 10      N = 20      N = 40      N = 80      N = 100
0.1 0.22439 0.22370 0.22353 0.22348 0.22348 0.22345
0.2 0.43773 0.43631 0.43595 0.43586 0.43585 0.43580
0.3 0.62816 0.62590 0.62534 0.62520 0.62518 0.62512
0.4 0.78205 0.77882 0.77801 0.77781 0.77779 0.77772
0.5 0.88307 0.87872 0.87764 0.87737 0.87734 0.87728
0.6 0.91155 0.90604 0.90468 0.90434 0.90430 0.90425
0.7 0.84525 0.83897 0.83741 0.83702 0.83697 0.83692
0.8 0.66513 0.65924 0.65777 0.65740 0.65735 0.65731
0.9 0.37063 0.36698 0.36605 0.36582 0.36579 0.36575

numerical solutions coincided with the exact solution (Table 1). The effect of increasing
the number of basis functions on the numerical solution is given in Table 2. The agreement
between the exact solutions and the numerical solutions of the problem for ε = 0.1 and
ε = 0.01 at different values of time is shown in Table 3. The plots of the numerical
solutions obtained for values of viscosity ranging from large to very small are shown in
Figures 5 to 9.
As expected, the method of solution presented provides high accuracy.

Table 3: Comparison of the numerical solutions obtained for various values of ε, with N = 40, Δt = 0.0002, at different times with the exact solution[4]

x       t       ε = 0.1                              ε = 0.01
                Numerical       Exact                Numerical       Exact
0.25 0.4 0.30898 0.30889 0.34197 0.34191
0.6 0.24082 0.24074 0.26900 0.26896
0.8 0.19576 0.19568 0.22151 0.22148
1.0 0.16265 0.16256 0.18822 0.18819
0.5 0.4 0.56987 0.56963 0.66080 0.66071
0.6 0.44743 0.44721 0.52948 0.52942
0.8 0.35944 0.35924 0.43919 0.43914
1.0 0.29208 0.29192 0.37446 0.37442
0.75 0.4 0.62581 0.62544 0.91043 0.91026
0.6 0.48745 0.48721 0.76732 0.76724
0.8 0.37405 0.37392 0.64746 0.64740
1.0 0.28754 0.28747 0.55610 0.55605

Figure 5: The numerical solution obtained at different times for ε = 1, N = 40, Δt = 0.0001 [4]

Figure 6: The numerical solution obtained at different times for ε = 0.1, N = 40, Δt = 0.0005 [4]

Figure 7: The numerical solution obtained at different times for ε = 0.01, N = 40, Δt = 0.0005 [4]

Figure 8: The numerical solution obtained at different times for ε = 0.001, N = 40, Δt = 0.0001 [4]

Figure 9: The numerical solution obtained at different times for ε = 0.00001, N = 40, Δt = 0.0001 [4]

4 Variational Iteration Method
The variational iteration method, which is a modified general Lagrange multiplier method[6],
has been shown to solve effectively, easily and accurately, a large class of nonlinear prob-
lems with approximations which converge rapidly to accurate solutions. The basic idea
of the variational iteration method[7] is as follows.
Consider the following nonlinear equation:

$$ Lu(t) + Nu(t) = g(t) \tag{14} $$

where L is a linear operator, N is a nonlinear operator, and g(t) is a known analytical
function. According to the variational iteration method, we can construct the following
correction functional:

$$ u_{n+1}(t) = u_n(t) + \int_0^t \lambda(\tau)\big(Lu_n(\tau) + N\tilde{u}_n(\tau) - g(\tau)\big)\,d\tau \tag{15} $$

where $\lambda$ is a general Lagrange multiplier which can be identified via variational theory, $u_0(t)$
is an initial approximation with possible unknowns, and $\tilde{u}_n$ is considered a restricted
variation, i.e. $\delta\tilde{u}_n = 0$. Therefore, we first determine the Lagrange multiplier $\lambda$, which is
identified optimally via integration by parts.
The successive approximations $u_{n+1}(t)$ for the solution $u(t)$ are readily obtained
once the Lagrange multiplier is known, using any selective function $u_0$.
Consequently, the exact solution may be obtained as $u = \lim_{n\to\infty} u_n$. We now apply
the variational iteration method[7] to solve the one-dimensional Burgers' equation.
Consider the one-dimensional Burgers' equation with the following initial and
boundary conditions:

$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = v\,\frac{\partial^2 u}{\partial x^2} $$

with initial condition

$$ u(x, 0) = f(x), \qquad 0 \le x \le 1, $$

and boundary conditions

$$ u(0, t) = f_1(t), \qquad \frac{\partial u}{\partial x}(0, t) = f_2(t), \qquad t > 0, $$

where $u = u(x,t)$ is the unknown function we are looking for in some domain, $v$ is a
parameter with $v > 0$, and $u\dfrac{\partial u}{\partial x}$ is the nonlinear term.
To solve Burgers' equation with the given initial condition via the variational iteration
method, its correction functional can be written as follows:

$$ u_{n+1}(x,t) = u_n(x,t) + \int_0^t \lambda(\tau)\Big(\frac{\partial u_n}{\partial \tau}(x,\tau) + \tilde{u}_n\frac{\partial \tilde{u}_n}{\partial x}(x,\tau) - v\frac{\partial^2 \tilde{u}_n}{\partial x^2}(x,\tau)\Big)d\tau $$

To make this correction functional stationary, noting that $\delta u_n(x, 0) = 0$, we obtain

$$ \delta u_{n+1}(x,t) = \delta u_n(x,t) + \delta\int_0^t \lambda(\tau)\,\frac{\partial u_n}{\partial \tau}(x,\tau)\,d\tau = 0. $$

Its stationary conditions can be determined as follows:

$$ \lambda'(\tau) = 0, \qquad 1 + \lambda(\tau)\big|_{\tau=t} = 0, $$

from which the Lagrange multiplier can be identified as $\lambda = -1$, and the following iteration
formula is obtained:

$$ u_{n+1}(x,t) = u_n(x,t) - \int_0^t \Big(\frac{\partial u_n}{\partial \tau}(x,\tau) + u_n\frac{\partial u_n}{\partial x}(x,\tau) - v\frac{\partial^2 u_n}{\partial x^2}(x,\tau)\Big)d\tau. \tag{16} $$

Beginning with $u_0 = u(x, 0) = f(x)$, the approximate solution of Burgers' equation can
be determined by the iterative formula (16); a small symbolic sketch of the first iterates is given below.
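A symbolic sketch of formula (16) with λ = -1 using SymPy is shown here; the initial profile f(x) = sin(πx) is an assumed example, not one taken from the text.

```python
import sympy as sp

x, t, tau, nu = sp.symbols('x t tau nu')

def vim_iterate(u_n):
    """One application of iteration formula (16) with lambda = -1."""
    integrand = (sp.diff(u_n, t) + u_n*sp.diff(u_n, x)
                 - nu*sp.diff(u_n, x, 2)).subs(t, tau)
    return sp.expand(u_n - sp.integrate(integrand, (tau, 0, t)))

u0 = sp.sin(sp.pi*x)          # assumed initial approximation u0 = u(x, 0) = f(x)
u1 = vim_iterate(u0)          # first VIM iterate
u2 = vim_iterate(u1)          # second VIM iterate
print(sp.simplify(u1))
```

For this choice of u0 the first iterate is u1 = sin(πx) - t(π sin(πx)cos(πx) + νπ² sin(πx)), i.e. u0 plus t times the right-hand side of Burgers' equation evaluated at u0, as expected.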
Similarly, to solve Burgers' equation with the given boundary conditions by the variational iteration method, we have
$$ u_{n+1}(x,t) = u_n(x,t) + \int_0^x \lambda(\xi)\Big(\frac{\partial u_n}{\partial t}(\xi,t) + u_n\frac{\partial u_n}{\partial \xi}(\xi,t) - v\frac{\partial^2 u_n}{\partial \xi^2}(\xi,t)\Big)d\xi. $$

To find the optimal value of $\lambda$, taking $\delta u_n(0,t) = 0$ leads to

$$ \delta u_{n+1}(x,t) = \delta u_n(x,t) + \delta\int_0^x \lambda(\xi)\Big(\frac{\partial \tilde{u}_n}{\partial t}(\xi,t) + \tilde{u}_n\frac{\partial \tilde{u}_n}{\partial \xi}(\xi,t) - v\frac{\partial^2 u_n}{\partial \xi^2}(\xi,t)\Big)d\xi. $$

This yields

$$ \delta u_{n+1}(x,t) = \delta u_n(x,t) - v\,\lambda(\xi)\,\delta u_n'(\xi,t)\Big|_0^x + v\,\lambda'(\xi)\,\delta u_n(\xi,t)\Big|_0^x - v\int_0^x \lambda''(\xi)\,\delta u_n(\xi,t)\,d\xi = 0. $$

Therefore, the stationary conditions are obtained as

$$ \lambda''(\xi) = 0, \qquad 1 + v\,\lambda'(\xi)\big|_{\xi=x} = 0, \qquad \lambda(\xi)\big|_{\xi=x} = 0, $$

which results in

$$ \lambda(\xi) = \frac{1}{v}\,(x - \xi). $$

The desired iterative relation can then be constructed as

$$ u_{n+1}(x,t) = u_n(x,t) + \frac{1}{v}\int_0^x (x-\xi)\Big(\frac{\partial u_n}{\partial t}(\xi,t) + u_n\frac{\partial u_n}{\partial \xi}(\xi,t) - v\frac{\partial^2 u_n}{\partial \xi^2}(\xi,t)\Big)d\xi. \tag{17} $$

The approximate solution of Burgers' equation can be determined via iterative formula
(17) beginning with $u_0 = f_1(t) + x f_2(t)$, and the approximate solution may then be
obtained as $u = \lim_{n\to\infty} u_n$.
The proposed technique can give much better analytical approximations for nonlinear
differential equations[7] than perturbation solutions. This is mainly because the technique
is based on a general weighted residual method; the weighting factor, or general Lagrange
multiplier, is determined by variational theory, leading to rapid convergence to accurate
solutions.

5 Crank Nicolson Type Method


The Crank-Nicolson method is an implicit finite difference scheme for solving PDEs numerically.
The Crank-Nicolson type scheme[5] is a new difference scheme. It is obtained by
discretizing the diffusion term as in the Crank-Nicolson scheme for $u_t = ku_{xx}$, whereas the
nonlinear term $uu_x$ is discretized by an average of central differences at $t = t_n$ and $t = t_{n+1}$
so that the scheme remains linear at $t = t_{n+1}$. The method is shown to be second order
in time and space and consistent.
For the discretization we use the following notation:

$$ t_n = t_0 + n\Delta t, \qquad x_j = x_0 + j\Delta x, \qquad 0 \le j \le N, \qquad N\Delta x = 1, $$

where $\Delta t$ is the step size in t. The numerical solution at $t = t_n$ and $x = x_j$ is denoted by $u_j^n$,
and the solution at $t = t_n$ for $0 \le j \le N$ is denoted by $u^n = (u_0^n, u_1^n, \ldots, u_N^n)^T$.
Consider the one-dimensional nonlinear Burgers' equation with homogeneous Dirichlet
boundary conditions:

$$ u_t + uu_x = ku_{xx}, \qquad 0 < x < 1, $$

$$ u(x, 0) = u_0, \qquad u(0, t) = u(1, t) = 0, \qquad t > 0. $$

Approximating $u_t$ by a forward difference, $uu_x$ by central differences at $t = t_n$ and $t = t_{n+1}$
(so that the scheme remains linear in u at $t = t_{n+1}$), and $ku_{xx}$ by the usual
Crank-Nicolson expression, the discretization gives

$$ \frac{u_j^{n+1} - u_j^n}{\Delta t} + \frac{1}{4\Delta x}\Big[u_j^n\big(u_{j+1}^{n+1} - u_{j-1}^{n+1}\big) + u_j^{n+1}\big(u_{j+1}^n - u_{j-1}^n\big)\Big] = \frac{k}{2(\Delta x)^2}\Big(u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1} + u_{j+1}^n - 2u_j^n + u_{j-1}^n\Big) \tag{18} $$

Figure 10: Exact and numerical solution with k = 0.1, Δt = 0.001, Δx = 0.0125 at different times t = 0.2, 0.4 and 0.6 [5]

Define $p = \dfrac{\Delta t}{4\Delta x}$ and $r = \dfrac{k\Delta t}{2(\Delta x)^2}$. Then (18) can be rearranged as

$$ -(r - p\,u_j^n)\,u_{j+1}^{n+1} + \big(2r + 1 + p\,u_{j+1}^n - p\,u_{j-1}^n\big)\,u_j^{n+1} - (r + p\,u_j^n)\,u_{j-1}^{n+1} = r\,u_{j+1}^n + (1 - 2r)\,u_j^n + r\,u_{j-1}^n \tag{19} $$
The scheme (19) is called the Crank-Nicolson type method.
Using the initial condition u(x, 0) = sin(πx), the exact solution is the same as (4).
Numerical results obtained by the Crank-Nicolson type method are compared with the
exact solution in Figure 10.
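A sketch of the linearized march (19) follows: at every step the three coefficient vectors and the right-hand side are built from the known level u^n, and a tridiagonal system is solved for u^{n+1}. The parameter values follow Figure 10; the implementation itself is illustrative and not taken from [5].

```python
import numpy as np

k_visc, dt, dx, T = 0.1, 0.001, 0.0125, 0.2        # k, time step, mesh size, final time
x = np.arange(0.0, 1.0 + dx/2, dx)
p = dt/(4*dx)                                      # p = dt/(4 dx)
r = k_visc*dt/(2*dx**2)                            # r = k dt/(2 dx^2)

u = np.sin(np.pi*x)                                # u(x, 0)
for _ in range(int(round(T/dt))):
    lo = -(r + p*u[1:-1])                          # coefficient of u_{j-1}^{n+1}
    di = 1 + 2*r + p*(u[2:] - u[:-2])              # coefficient of u_j^{n+1}
    hi = p*u[1:-1] - r                             # coefficient of u_{j+1}^{n+1}
    rhs = r*u[:-2] + (1 - 2*r)*u[1:-1] + r*u[2:]
    A = np.diag(di) + np.diag(hi[:-1], 1) + np.diag(lo[1:], -1)
    u[1:-1] = np.linalg.solve(A, rhs)
    u[0] = u[-1] = 0.0                             # homogeneous Dirichlet boundaries
```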

6 Spectral Methods
Spectral methods[8] employ spectral representations of approximate solutions for nonlinear
PDEs. Fourier spectral methods for solving time-dependent PDEs are discussed here.
We restrict ourselves to problems defined on $[0, 2\pi]$ and assume that the solutions,
$u(x)$, can be periodically extended. Furthermore, we assume that $u(x)$ and its derivatives
are smooth enough to allow for any Fourier expansions which may become required.
Fourier-Galerkin and Fourier-collocation methods are discussed in this section.

6.1 Fourier-Galerkin methods


In the Fourier-Galerkin method[8], we seek solutions $u_N(x,t)$ from the finite dimensional
space $B_N = \mathrm{span}\{e^{inx}\}_{|n| \le N/2}$, i.e.

$$ u_N(x,t) = \sum_{|n| \le N/2} a_n(t)\,e^{inx}. \tag{20} $$

Note that $a_n(t)$ are unknown coefficients which will be determined by the method, and
Equation (20) gives a truncated Fourier series. In general, the coefficients $a_n(t)$ of the
approximation are not equal to the Fourier coefficients $\hat{u}_n$; only if we obtain the exact
solution of the problem will they be equal. In the Fourier-Galerkin method, the coefficients
$a_n(t)$ are determined by the requirement that the residual $R_N(x,t)$ is orthogonal to $B_N$.
If we express the residual in terms of the Fourier series,

$$ R_N(x,t) = \sum_{n} \hat{R}_n(t)\,e^{inx}, $$

the orthogonality requirement yields

$$ \hat{R}_n(t) = \frac{1}{2\pi}\int_0^{2\pi} R_N(x,t)\,e^{-inx}\,dx = 0, \qquad |n| \le \frac{N}{2}. $$

These are (N + 1) ordinary differential equations that determine the (N + 1) unknowns
$a_n(t)$, and the corresponding initial conditions are

$$ u_N(x,0) = \sum_{|n| \le N/2} a_n(0)\,e^{inx}, \qquad a_n(0) = \frac{1}{2\pi}\int_0^{2\pi} g(x)\,e^{-inx}\,dx. $$

The method is defined by the requirement that the orthogonal projection of the residual
onto the space BN is zero.
Consider the nonlinear problem

$$ \frac{\partial u(x,t)}{\partial t} = u(x,t)\,\frac{\partial u(x,t)}{\partial x} $$

with smooth, periodic initial conditions.


As usual, we seek a solution of the form

$$ u_N(x,t) = \sum_{|n| \le N/2} a_n(t)\,e^{inx}, $$

and require that the residual

$$ R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - u_N(x,t)\,\frac{\partial u_N(x,t)}{\partial x} $$

be orthogonal to $B_N$.
The second term is

$$ u_N\frac{\partial u_N}{\partial x} = \sum_{|l| \le N/2}\ \sum_{|k| \le N/2} a_l(t)\,(ik)\,a_k(t)\,e^{i(l+k)x} = \sum_{|k| \le N/2}\ \sum_{n=-N/2+k}^{N/2+k} (ik)\,a_{n-k}(t)\,a_k(t)\,e^{inx}. \tag{21} $$

As a result of the nonlinearity, the residual $R_N(x,t) \in B_{2N}$ and not $B_N$. Projecting it
onto $B_N$ and setting the projection equal to zero, we obtain the set of (N + 1) ODEs

$$ \frac{da_n(t)}{dt} = \sum_{|k| \le N/2} (ik)\,a_{n-k}(t)\,a_k(t), \qquad |n| \le \frac{N}{2}. $$

In this example we obtain the Fourier-Galerkin equations with relative ease because the
nonlinearity is only quadratic. Whereas quadratic nonlinearity is quite common in the
equations of mathematical physics, there are many cases in which the nonlinearity is of a
more complicated form, and the derivation of the Fourier-Galerkin equations may become
untenable.
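A small sketch of this Galerkin ODE system for u_t = u u_x is given below, with the truncated convolution evaluated directly and an explicit RK4 step for the time integration; the RK4 integrator and the sin x initial data are my own illustrative choices.

```python
import numpy as np

N = 16                                     # even; retained modes n = -N/2, ..., N/2
modes = range(-N//2, N//2 + 1)

def rhs(a):
    """da_n/dt = sum_{|k|<=N/2} (ik) a_{n-k} a_k, keeping only |n-k| <= N/2."""
    da = np.zeros(N + 1, dtype=complex)
    for n in modes:
        s = 0j
        for k in modes:
            if abs(n - k) <= N//2:
                s += (1j*k) * a[(n - k) + N//2] * a[k + N//2]
        da[n + N//2] = s
    return da

def rk4_step(a, dt):
    k1 = rhs(a); k2 = rhs(a + dt/2*k1); k3 = rhs(a + dt/2*k2); k4 = rhs(a + dt*k3)
    return a + dt/6*(k1 + 2*k2 + 2*k3 + k4)

a = np.zeros(N + 1, dtype=complex)
a[1 + N//2], a[-1 + N//2] = -0.5j, 0.5j    # g(x) = sin x: a_1 = -i/2, a_{-1} = i/2
dt = 1e-3
for _ in range(100):                       # integrate to t = 0.1
    a = rk4_step(a, dt)

u_at = lambda xx: np.real(sum(a[n + N//2]*np.exp(1j*n*xx) for n in modes))
```

The double loop in rhs is exactly the truncated convolution in (21); its cost, and the fact that it must be rederived for other nonlinearities, is what motivates the collocation approach of the next subsection.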

6.2 Fourier-Collocation methods


To form the Fourier-collocation method[8] we require, instead, that the residual vanish
identically at a set of grid points $y_j$, referred to as the collocation grid. This method is
also known as the pseudospectral method. In the following, we deal with approximations
based on the interpolation grid

$$ x_j = \frac{2\pi}{N}\,j, \qquad j \in [0, \ldots, N-1], $$

where N is even. However, the discussion holds true for approximations based on an odd
number of points as well.
In the Fourier-collocation method we seek solutions

$$ u_N \in B_N = \mathrm{span}\big\{\cos(nx),\ 0 \le n \le N/2\big\} \cup \big\{\sin(nx),\ 1 \le n \le N/2 - 1\big\} $$

of the form

$$ u_N(x,t) = \sum_{|n| \le N/2} a_n(t)\,e^{inx}. $$

Now the difference between the Fourier-Galerkin and the Fourier-collocation method
appears: we require that the residual $R_N(x,t)$ vanish at the grid points $y_j$, i.e.

$$ R_N(y_j, t) = 0, \qquad j \in [0, \ldots, N-1]. $$

This yields N equations to determine the N point values, $u_N(x_j, t)$, of the numerical
solution.
Consider the nonlinear problem

$$ \frac{\partial u(x,t)}{\partial t} = u(x,t)\,\frac{\partial u(x,t)}{\partial x}, $$

where the initial conditions are given and the solution and all its derivatives are smooth
and periodic over the time interval of interest.
We construct the residual

$$ R_N(x,t) = \frac{\partial u_N(x,t)}{\partial t} - u_N(x,t)\,\frac{\partial u_N(x,t)}{\partial x}. $$

The residual is required to vanish at the grid points $x_j$, $j = 0, \ldots, N-1$, leading to

$$ \frac{d u_N(x_j,t)}{dt} - u_N(x_j,t)\,\frac{\partial u_N(x,t)}{\partial x}\bigg|_{x=x_j} = 0, $$

i.e.,

$$ \frac{d u_N(x_j,t)}{dt} - u_N(x_j,t)\sum_{k=0}^{N-1} D_{jk}\,u_N(x_k,t) = 0, $$

where $D_{jk}$ is the Fourier differentiation matrix.

Note that obtaining the equations is equally simple for a nonlinear problem as for a
constant-coefficient linear problem. The application of the Fourier-collocation method is
easy even for problems where the Fourier-Galerkin method fails (for cases other than
quadratic nonlinearity), because we can easily evaluate the nonlinear function in terms
of point values.
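A pseudospectral sketch for the same model problem is shown below: the derivative at the collocation points is computed with the FFT, which is equivalent to applying the differentiation matrix D_jk, and the point values are advanced with RK4 (again an illustrative choice). Since the model problem has no viscosity, the integration is kept short.

```python
import numpy as np

N = 64
x = 2*np.pi*np.arange(N)/N                 # collocation grid x_j = 2*pi*j/N
k = np.fft.fftfreq(N, d=1.0/N)             # integer wavenumbers 0, 1, ..., -1

def dudx(u):
    """Spectral derivative at the grid points via the FFT."""
    return np.real(np.fft.ifft(1j*k*np.fft.fft(u)))

def rhs(u):
    return u*dudx(u)                       # u u_x, with the product taken pointwise

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + dt/2*k1); k3 = rhs(u + dt/2*k2); k4 = rhs(u + dt*k3)
    return u + dt/6*(k1 + 2*k2 + 2*k3 + k4)

u = np.sin(x)                              # smooth, periodic initial data
dt = 1e-3
for _ in range(100):
    u = rk4_step(u, dt)
```

In contrast to the Galerkin sketch, the nonlinear term here is just a pointwise product, which is what makes the collocation form convenient for general nonlinearities.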

7 Summary and Conclusions


In this article a brief study of various numerical methods for solving nonlinear time-dependent
equations was presented, starting with the conventional finite difference methods.
The more efficient finite element method was discussed next, followed by the variational
iteration method of He and then spectral methods.
Within the finite difference framework, the FTCS scheme, the BTCS scheme and the
MacCormack method were discussed. The numerical results obtained were compared
with the exact solutions. As expected, due to the truncation of the Taylor series there
was some error in the results, as shown by the graphs. One more difference scheme,
known as the Crank-Nicolson type method, was used and found to give better accuracy.
Later in the discussion the finite element method was used, and its results at different
times and different values of the small parameter ε are shown in various graphs. The
results, when compared with the exact solutions, were in excellent agreement.
Then the variational iteration method of He was used. The study showed good results,
as the iterates converge to the exact solution with very few calculations required.
Finally, spectral methods were discussed, namely the Fourier-Galerkin and Fourier-collocation
methods. For the specified type of boundary conditions and quadratic nonlinearity
they were found to converge easily to the exact solutions.
A brief study of numerical methods for solving nonlinear time-dependent PDEs shows
that this is a pivotal area of research for tackling real-life problems. Future developments
of numerical methods for nonlinear PDEs will continue to be influenced by numerical
algorithms which are yet to be developed.

References
[1] Rafiq, M., et al. "Some Finite Difference Methods for One Dimensional Burgers'
Equation for Irrotational Incompressible Flow Problem." Pak. J. Engg. and Appl.
Sci. Vol (2011): 13-16.

[2] Kutluay, S., A. R. Bahadir, and A. Özdeş. "Numerical solution of one-dimensional
Burgers equation: explicit and exact-explicit finite difference methods." Journal of
Computational and Applied Mathematics 103.2 (1999): 251-261.

[3] Aksan, E. N. "A numerical solution of Burgers equation by finite element method
constructed on the method of discretization in time." Applied mathematics and com-
putation 170.2 (2005): 895-904.

[4] Recktenwald, Gerald W. "Finite-difference approximations to the heat equation."


Class Notes (2004).

[5] Wani, Sachin S., and Sarita H. Thakar. "Crank-Nicolson type method for Burgers
equation." International Journal of Applied Physics and Mathematics 3.5 (2013):
324.

[6] He, Ji-Huan. "Approximate solution of nonlinear differential equations with convo-
lution product nonlinearities." Computer Methods in Applied Mechanics and Engi-
neering 167.1 (1998): 69-73.

[7] Biazar, Jafar, and Hossein Aminikhah. "Exact and numerical solutions for non-linear
Burgers equation by VIM." Mathematical and Computer Modelling 49.7 (2009):
1394-1400.

[8] Hesthaven, Jan S., Sigal Gottlieb, and David Gottlieb. Spectral methods for time-
dependent problems. Vol. 21. Cambridge University Press, 2007.

[9] Tadmor, Eitan. "A review of numerical methods for nonlinear partial differential
equations." Bulletin of the American Mathematical Society 49.4 (2012): 507-554.

[10] Kurtz, L. A., et al. "A comparison of the method of lines to finite difference techniques
in solving time-dependent partial differential equations." Computers and Fluids 6.2
(1978): 49-70.

[11] Rezzolla, Luciano. "Numerical methods for the solution of partial differential equa-
tions." Lecture Notes for the COMPSTAR School on Computational Astrophysics
(2011): 8-13.

