
1 Applications of linear algebra to differential equations

1.1 Ordinary differential equations


1.1.1 First order linear differential equations with constant coefficients

Let us consider the homogeneous linear system of differential equations

    x_1'(t) = a_11 x_1(t) + a_12 x_2(t) + ... + a_1n x_n(t),
    x_2'(t) = a_21 x_1(t) + a_22 x_2(t) + ... + a_2n x_n(t),
    .......................................................            (1)
    x_n'(t) = a_n1 x_1(t) + a_n2 x_2(t) + ... + a_nn x_n(t),

where the a_ij are known constants and primes denote derivatives with respect
to t.
The system (1) can be written in matrix form as

    x' = Ax,                                                           (2)

where

    x = (x_1(t), x_2(t), ..., x_n(t))^T,   x' = (x_1'(t), x_2'(t), ..., x_n'(t))^T,

        ( a_11  a_12  ...  a_1n )
    A = ( a_21  a_22  ...  a_2n )                                      (3)
        ( ..................... )
        ( a_n1  a_n2  ...  a_nn )

The problem of finding a solution x(t) to (1) such that x(a) = x_0, where x_0
is a given vector, is called an initial value problem. In the case of a two point
boundary value problem, instead of the initial condition x(a) = x_0, we have

    x_i(a) = x_i0,   i = 1, ..., k,
    x_i(b) = x_i0,   i = k + 1, ..., n.                                (4)

1.1.2 Solving systems by finding eigenvalues and eigenvectors

Using the scalar equation as a guide, we assume that the vector equation (2)
has a solution of the form

    x(t) = e^{λt} u,                                                   (5)

where u is a constant vector.
Substituting the solution (5) in the left and right side of equation (2) yields

    d/dt (e^{λt} u) = λ e^{λt} u,                                      (6)

    A(e^{λt} u) = e^{λt} Au.                                           (7)

Evidently, the vector equation (2) is satisfied only if

    Au = λu,                                                           (8)

i.e.

    (A - λI)u = 0.                                                     (9)

Therefore, if λ is an eigenvalue of A and u is a corresponding eigenvector,
then x(t) = e^{λt} u is a solution to x'(t) = Ax(t) (u = 0 gives a trivial solution).
Obviously, the solutions

    x_1(t) = e^{λ_1 t} u_1,  ...,  x_n(t) = e^{λ_n t} u_n,             (10)

associated with the eigenvalues λ_1, ..., λ_n and n linearly independent eigenvectors
u_1, ..., u_n, are linearly independent. It is easy to verify that any linear
combination of x_1(t), ..., x_n(t) is also a solution. The general solution to the
vector equation (2) is given by

    x(t) = c_1 x_1(t) + c_2 x_2(t) + ... + c_n x_n(t).                 (11)

Substituting (10) in (11) yields

    x(t) = c_1 e^{λ_1 t} u_1 + c_2 e^{λ_2 t} u_2 + ... + c_n e^{λ_n t} u_n.    (12)

The initial value problem consists in finding a solution to (2) that satisfies
an initial condition

    x(0) = x_0   (or x(a) = x_0 in general).                           (13)

It follows from (12), (13) that (taking a = 0)

    c_1 u_1 + c_2 u_2 + ... + c_n u_n = x_0.                           (14)

Determining the integration constants from the linear algebraic system (14) and
inserting them in (12), one obtains the solution to the initial value problem (2), (13).
The general solution algorithm using linear algebra software (MATLAB,
MAPLE, ...) is the following:
1. find the eigenvalues and eigenvectors
2. compose the general solution
3. determine the integration constants
4. compose the solution to the initial value problem.
Note: Steps 3-4 are needed only for initial value problems.
Example 1: Solve the initial value problem

    x_1' = 6x_1 + 8x_2,
    x_2' = x_1 + 4x_2,                                                 (15)

    x_1(0) = 1,   x_2(0) = 0.                                          (16)

In matrix notation the system (15) reads

    ( x_1' )   ( 6  8 ) ( x_1 )
    ( x_2' ) = ( 1  4 ) ( x_2 ).                                       (17)

The characteristic polynomial of the matrix A is

    f(λ) = det(A - λI) = det ( 6-λ    8  ) = (λ - 2)(λ - 8).           (18)
                             (  1    4-λ )

Hence, the eigenvalues are λ_1 = 2 and λ_2 = 8. Substituting the eigenvalues
λ_1 = 2 and λ_2 = 8 in (9), we can determine the corresponding eigenvectors

    u_1 = ( -2 )   and   u_2 = ( 4 ).                                  (19)
          (  1 )               ( 1 )

Now, the general solution of equation (15) can be presented as

    x = ( x_1 ) = c_1 e^{2t} ( -2 ) + c_2 e^{8t} ( 4 )
        ( x_2 )              (  1 )              ( 1 )

      = ( -2c_1 e^{2t} + 4c_2 e^{8t} ).                                (20)
        (   c_1 e^{2t} +  c_2 e^{8t} )

The initial conditions (16) in matrix notation take the form

    x(0) = ( x_1(0) ) = ( 1 ).                                         (21)
           ( x_2(0) )   ( 0 )

It follows from (20), (21) that

    -2c_1 + 4c_2 = 1,
      c_1 +  c_2 = 0.                                                  (22)

Solving system (22) yields c_1 = -1/6 and c_2 = 1/6. Inserting c_1 = -1/6 and
c_2 = 1/6 into the general solution (20) gives the solution to the initial value
problem (15), (16) as

    x = ( x_1 ) = (  (1/3) e^{2t} + (2/3) e^{8t} ).                    (23)
        ( x_2 )   ( -(1/6) e^{2t} + (1/6) e^{8t} )
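
The four steps of the algorithm can be reproduced with any linear algebra
package. The following Python/NumPy sketch (an illustration added here, not
part of the original notes) solves the initial value problem (15)-(16) and can
be checked against (23):

    import numpy as np

    # System (15)-(16): x' = A x, x(0) = x0
    A = np.array([[6.0, 8.0],
                  [1.0, 4.0]])
    x0 = np.array([1.0, 0.0])

    # Step 1: eigenvalues and eigenvectors (columns of U)
    lam, U = np.linalg.eig(A)

    # Step 3: integration constants from c_1 u_1 + ... + c_n u_n = x0, i.e. U c = x0
    c = np.linalg.solve(U, x0)

    # Steps 2 and 4: x(t) = sum_i c_i e^{lambda_i t} u_i, cf. (12)
    def x(t):
        return U @ (c * np.exp(lam * t))

    print(lam)      # eigenvalues, approximately 8 and 2 (order may differ)
    print(x(0.0))   # reproduces the initial condition [1, 0]
    print(x(1.0))   # compare with (23) evaluated at t = 1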

Example 2: Find the general solution to the linear system of differential
equations

    x_1' = x_1,
    x_2' = 3x_2 - 2x_3,                                                (24)
    x_3' = -2x_2 + 3x_3.

The matrix of system (24)

        ( 1   0   0 )
    A = ( 0   3  -2 )                                                  (25)
        ( 0  -2   3 )

has the eigenvalues λ_1 = λ_2 = 1 and λ_3 = 5. So, λ = 1 is an eigenvalue of
multiplicity 2. Generally, if λ is an eigenvalue of A of multiplicity k, and the
rank of the matrix (A - λI) is equal to n - k, then we can find k linearly
independent eigenvectors of A associated with the eigenvalue λ. For the given
A in (25) with λ = 1

    r = rank(A - λI) = 1,   k = 2,   n - k = 1.                        (26)

Therefore, we can find two linearly independent eigenvectors associated with
λ = 1 using relation (9):

          ( 1 )         ( 0 )
    u_1 = ( 0 ),  u_2 = ( 1 ).                                         (27)
          ( 0 )         ( 1 )

Computing the eigenvector u_3 associated with the eigenvalue λ = 5,

          (  0 )
    u_3 = (  1 ),                                                      (28)
          ( -1 )

we can determine the general solution to system (24) as

           ( x_1 )           ( 1 )           ( 0 )              (  0 )
    x(t) = ( x_2 ) = c_1 e^t ( 0 ) + c_2 e^t ( 1 ) + c_3 e^{5t} (  1 )
           ( x_3 )           ( 0 )           ( 1 )              ( -1 )

         = ( c_1 e^t              )
           ( c_2 e^t + c_3 e^{5t} ).                                   (29)
           ( c_2 e^t - c_3 e^{5t} )

Exercises
1. Solve the initial value problem

    x_1' = x_1 + 12x_2,
    x_2' = 3x_1 + x_2,                                                 (30)

    x_1(0) = 0,   x_2(0) = 1.

2. Find the general solution of the following linear system of differential
equations:

    x_1' = 3x_1 - x_2 - x_3,
    x_2' = -12x_1 + 5x_3,                                              (31)
    x_3' = 4x_1 - 2x_2 - x_3.

3. Solve the two point boundary value problem with the differential equations
(30) and the boundary conditions x_1(1) = 1 and x_2(0) = 0.

1.1.3 Solution by converting a system to diagonal form

We assume that the n×n matrix A is diagonalisable over K, i.e. it has n linearly
independent eigenvectors v_1, ..., v_n belonging to the eigenvalues λ_1, ..., λ_n
over K (here K = R or K = C):

    A v_i = λ_i v_i,   i = 1, ..., n.                                  (32)

Assembling the eigenvectors into the n×n matrix S,

    S = (v_1, ..., v_n),                                               (33)

we can write

    AS = (λ_1 v_1, λ_2 v_2, ..., λ_n v_n) = (v_1, ..., v_n) D = SD,    (34)

where D is an n×n diagonal matrix with entries λ_1, ..., λ_n. Multiplying (34)
on the right by S^{-1} one obtains the matrix A in diagonally factorised form

    A = S D S^{-1}.                                                    (35)

Substituting (35) in (2) and multiplying the obtained result on the left by S^{-1}
yields

    S^{-1} x' = D S^{-1} x,                                            (36)

or

    y' = D y.                                                          (37)

Here the vector y(t) is defined as

    y = S^{-1} x.                                                      (38)

Evidently, the system of differential equations (37) is a diagonal system,
because the matrix D is diagonal. In explicit form the system (37) reads

    y_1' = λ_1 y_1,
    y_2' = λ_2 y_2,
    ..............                                                     (39)
    y_n' = λ_n y_n.

Integrating (39), we obtain

    y_i(t) = c_i e^{λ_i t},                                            (40)

or in matrix form

    y = E(t) c,                                                        (41)

where

                                    ( e^{λ_1 t}     0      ...      0      )
    c = (c_1, c_2, ..., c_n)^T,  E(t) = (    0      e^{λ_2 t}  ...      0      )   (42)
                                    ( .................................... )
                                    (    0          0      ...  e^{λ_n t} )

Substituting (38) in (41) and multiplying the obtained result on the left by S
one obtains the general solution of system (2) in terms of the original variables

    x(t) = S E(t) c.                                                   (43)

The vector of integration constants c can be determined by satisfying the initial
condition x(a) = x_0 (or the boundary conditions (4)).
Satisfying the condition x(a) = x_0 gives

    c = [E(a)]^{-1} S^{-1} x_0                                         (44)

and

    x(t) = S E(t) [E(a)]^{-1} S^{-1} x_0.                              (45)

In the most common practical case (a = 0), formula (45) reduces to

    x(t) = S E(t) S^{-1} x_0.                                          (46)

The general solution algorithm using linear algebra software is the following:
1. find the eigenvalues and eigenvectors
2. compose the matrices S and E(t)
3. compute the general solution to (2) by (43), or the solution to the initial
value problem by (46).
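
These steps translate almost literally into a few lines of NumPy; the sketch
below (an illustration added here, not part of the original notes) evaluates
formula (46) for a diagonalisable matrix A.

    import numpy as np

    def solve_ivp_by_diagonalisation(A, x0, t):
        """Evaluate x(t) = S E(t) S^{-1} x0, formula (46), assuming A is diagonalisable."""
        lam, S = np.linalg.eig(A)                # step 1: eigenvalues and eigenvectors
        E_t = np.diag(np.exp(lam * t))           # step 2: E(t) = diag(e^{lambda_i t})
        return S @ E_t @ np.linalg.solve(S, x0)  # step 3: formula (46)

    # Illustration with the matrix (48) of Example 1 below and an assumed initial vector
    A = np.array([[1.0, -1.0],
                  [2.0,  4.0]])
    print(solve_ivp_by_diagonalisation(A, np.array([1.0, 0.0]), 0.5))
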
Example 1: Consider the system

    x_1' = x_1 - x_2,
    x_2' = 2x_1 + 4x_2,                                                (47)

and find the general solution.
In matrix notation the system (47) becomes

    ( x_1' )   ( 1  -1 ) ( x_1 )
    ( x_2' ) = ( 2   4 ) ( x_2 ).                                      (48)

The characteristic polynomial of the matrix

    A = ( 1  -1 )                                                      (49)
        ( 2   4 )

is

    f(λ) = det(A - λI) = det ( 1-λ   -1  )                             (50)
                             (  2    4-λ )

         = (1 - λ)(4 - λ) + 2 = (λ - 2)(λ - 3).

Therefore the eigenvalues are λ_1 = 2 and λ_2 = 3. The eigenvector v_1 associated
with λ_1 = 2 is obtained by solving the equation (A - λ_1 I)v = 0:

    v_1 = (  1 ).                                                      (51)
          ( -1 )

Analogously,

    v_2 = (  1 ),                                                      (52)
          ( -2 )

and therefore

    S = (  1   1 ),   E(t) = ( e^{2t}    0    ).                       (53)
        ( -1  -2 )           (   0    e^{3t} )

Finally, we can compute the general solution of system (47) from formula (43)
using the matrices (53):

    x = ( x_1 ) = (  1   1 ) ( e^{2t}    0    ) ( c_1 )
        ( x_2 )   ( -1  -2 ) (   0    e^{3t} ) ( c_2 )

      = (  c_1 e^{2t} +  c_2 e^{3t} ).                                 (54)
        ( -c_1 e^{2t} - 2c_2 e^{3t} )

Example 2: Solve the following initial value problem

    x_1' = x_2,
    x_2' = x_3,                                                        (55)
    x_3' = 8x_1 - 14x_2 + 7x_3,

    x_1(0) = 4,   x_2(0) = 6,   x_3(0) = 8.                            (56)

For the system (55) the matrix

        ( 0    1   0 )
    A = ( 0    0   1 )                                                 (57)
        ( 8  -14   7 )

has the eigenvalues λ_1 = 1, λ_2 = 2 and λ_3 = 4. The associated eigenvectors are

          ( 1 )         ( 1 )         (  1 )
    v_1 = ( 1 ),  v_2 = ( 2 ),  v_3 = (  4 ),
          ( 1 )         ( 4 )         ( 16 )

respectively.
The general solution of system (55) is given by

        ( x_1 )   ( 1  1   1 ) ( e^t    0       0    ) ( c_1 )
    x = ( x_2 ) = ( 1  2   4 ) (  0   e^{2t}    0    ) ( c_2 )         (58)
        ( x_3 )   ( 1  4  16 ) (  0     0     e^{4t} ) ( c_3 )

        ( c_1 e^t +  c_2 e^{2t} +   c_3 e^{4t} )
      = ( c_1 e^t + 2c_2 e^{2t} +  4c_3 e^{4t} ).                      (59)
        ( c_1 e^t + 4c_2 e^{2t} + 16c_3 e^{4t} )

From (44) the integration constants are

        ( c_1 )   (  4/3 )
    c = ( c_2 ) = (   3  )                                             (60)
        ( c_3 )   ( -1/3 )

and the solution of the initial value problem (55)-(56) can be presented as

        ( x_1 )   ( (4/3) e^t +  3e^{2t} -  (1/3) e^{4t} )
    x = ( x_2 ) = ( (4/3) e^t +  6e^{2t} -  (4/3) e^{4t} ).            (61)
        ( x_3 )   ( (4/3) e^t + 12e^{2t} - (16/3) e^{4t} )
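
As a quick numerical cross-check of the constants (60) (a sketch using NumPy,
not part of the original notes), they follow from solving S c = x(0):

    import numpy as np

    S = np.array([[1.0, 1.0,  1.0],    # eigenvectors v1, v2, v3 of (57) as columns
                  [1.0, 2.0,  4.0],
                  [1.0, 4.0, 16.0]])
    x0 = np.array([4.0, 6.0, 8.0])     # initial condition (56)

    c = np.linalg.solve(S, x0)         # c = S^{-1} x0, i.e. (44) with a = 0
    print(c)                           # approximately [ 1.3333  3.0000 -0.3333 ], cf. (60)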

Exercises
1. Consider the linear system of differential equations

         ( 1  0  0 )
    x' = ( 0  2  1 ) x                                                 (62)
         ( 0  0  3 )

a) find the general solution
b) find the solution to the initial value problem determined by the initial
conditions x_1(0) = 2, x_2(0) = 7, x_3(0) = 20.
2. Find the general solution to the linear system of differential equations

    x_1' = x_1 + 2x_2 + 3x_3,
    x_2' = x_2,                                                        (63)
    x_3' = 2x_1 + x_2 + 2x_3.

1.1.4 Principal matrix solution


Theorem 1: For any constant vector c,

    x(t) = e^{At} c                                                    (64)

is a solution of system (2). Moreover, the solution of (2) given by

    x(t) = e^{At} x_0                                                  (65)

satisfies the initial condition x(0) = x_0.
Proof: Evidently

    x(t) = e^{At} c = [I + At + A^2 t^2/2! + A^3 t^3/3! + ...] c,      (66)

    x'(t) = d/dt (e^{At} c) = A[I + At + A^2 t^2/2! + A^3 t^3/3! + ...] c = A e^{At} c = Ax(t),   (67)

and

    x(0) = e^{A·0} x_0 = I x_0 = x_0.                                  (68)

Definition 1: The matrix e^{At} is called the principal matrix solution
of the system (2).
The general solution algorithm using linear algebra software:
1. compute e^{At}
2. compute the general solution of system (2) by (64), or the solution to
the initial value problem by (65).
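
These two steps can be sketched with SciPy's matrix exponential (an
illustration under the assumption that SciPy is available; not part of the
original notes):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 0.0, 0.0],   # the diagonal matrix of Example 1 below, eq. (69)
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
    x0 = np.array([1.0, 1.0, 1.0])   # an assumed initial vector, for illustration only

    def x(t):
        # principal matrix solution: x(t) = e^{At} x0, formula (65)
        return expm(A * t) @ x0

    print(expm(A * 1.0))   # compare with (70) at t = 1: diag(e, e^2, e)
    print(x(0.0))          # returns x0, consistent with (68)
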
Example 1: Consider the system

    ( x_1' )   ( 1  0  0 ) ( x_1 )
    ( x_2' ) = ( 0  2  0 ) ( x_2 ),                                    (69)
    ( x_3' )   ( 0  0  1 ) ( x_3 )

and find the general solution.
1. Computing e^{At}:

    e^{At} = I + At + A^2 t^2/2! + A^3 t^3/3! + ...

             ( 1  0  0 )   ( t   0  0 )   ( t^2/2!     0        0    )
           = ( 0  1  0 ) + ( 0  2t  0 ) + (   0    (2t)^2/2!    0    ) + ...
             ( 0  0  1 )   ( 0   0  t )   (   0        0      t^2/2! )

           = diag( 1 + t + t^2/2! + ...,  1 + 2t + (2t)^2/2! + ...,  1 + t + t^2/2! + ... )

             ( e^t    0      0   )
           = (  0   e^{2t}   0   ).                                    (70)
             (  0     0     e^t  )

2. Computing the general solution of system (69) by (64):

           ( x_1 )   ( e^t    0      0   ) ( c_1 )   ( c_1 e^t    )
    x(t) = ( x_2 ) = (  0   e^{2t}   0   ) ( c_2 ) = ( c_2 e^{2t} ).   (71)
           ( x_3 )   (  0     0     e^t  ) ( c_3 )   ( c_3 e^t    )

Exercises
1. Solve the initial value problem

    ( x_1' )   ( 7  0  0 ) ( x_1 )            ( 1 )
    ( x_2' ) = ( 0  6  0 ) ( x_2 ),   x(0) = ( 2 ).                    (72)
    ( x_3' )   ( 0  0  3 ) ( x_3 )            ( 1 )

1.2 Weighted residual methods for solving ODE and PDE

Below we discuss some approximation methods for solving arbitrary linear
differential equations. Both ordinary (ODE) and partial (PDE) differential
equations are considered. With limited scope we consider as an example the
following one-dimensional differential equation

    Lu + g = 0,   x ∈ [a, b],                                          (73)

where u(x) is the unknown function and g(x) is a known function. The capital
L denotes a linear differential operator, which specifies the actual form of the
differential equation (73) (for example L = d/dx or L = d^2/dx^2). The boundary
conditions are given by

    u(a) = u_a,   u(b) = u_b.                                          (74)

Multiplying (73) by an arbitrary weight function v(x) and integrating
over the interval [a, b] one obtains

    ∫_a^b v (Lu + g) dx = 0.                                           (75)

Evidently (73) and (75) are equivalent, because v(x) is an arbitrary function.
Now we seek a numerical solution to (75), (74) in the form

    ū = a_1 φ_1 + ... + a_n φ_n.                                       (76)

Here φ_1, ..., φ_n are functions of x and a_1, ..., a_n are unknown coefficients.
In vector form (76) becomes

    ū = Φ a,                                                           (77)

where

    a = (a_1, a_2, ..., a_n)^T,   Φ = (φ_1, ..., φ_n).                 (78)
In (75) we may substitute u by ū to obtain

    ∫_a^b v (Lū + g) dx = 0.                                           (79)

However, substituting u(x) by its approximation ū(x) in (73), generally it
appears that (73) is not satisfied exactly, i.e.

    Lū + g = e.                                                        (80)

Here e(x) is a measure of the error.
It follows from (79)-(80) that

    ∫_a^b v e dx = 0.                                                  (81)

Obviously, the residual e(x) depends on the unknown parameters given by the
vector a. Therefore the coefficients a_1, ..., a_n must be determined so that
expression (81) is satisfied.
Generally

    v = c_1 V_1 + ... + c_n V_n,                                       (82)

where V_1, ..., V_n are known functions of x and c_1, ..., c_n are certain parameters.
In terms of vector notation (82) reads

    v = V c,                                                           (83)

where

    c = (c_1, c_2, ..., c_n)^T,   V = (V_1, ..., V_n).                 (84)

Evidently

    v = v^T = c^T V^T,                                                 (85)

and therefore (see (81))

    c^T ∫_a^b V^T e dx = 0.                                            (86)

Relation (86) holds for arbitrary c^T, i.e.

    ∫_a^b V^T e dx = 0,                                                (87)

or

    ∫_a^b V_1 e dx = 0,  ...,  ∫_a^b V_n e dx = 0.                     (88)

Now we have n equations (88) to determine the coefficients a_1, ..., a_n. Inserting
(77) in (80) yields

    e = L(Φ a) + g = L(Φ) a + g,                                       (89)

and the condition (86) can be rewritten as

    [ ∫_a^b V^T L(Φ) dx ] a = - ∫_a^b V^T g dx.                        (90)

Introducing the matrix K and the vector f as

    K = ∫_a^b V^T L(Φ) dx,   f = - ∫_a^b V^T g dx,                     (91)

we can write (90) in compact form

    K a = f.                                                           (92)

Finally, we have n linear equations (92) for determining the n coefficients a_1, ..., a_n.
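
In code, the assembly of K and f in (91) and the solution of (92) is a double
loop over quadratures. The sketch below is illustrative only; the callables V,
Lphi and g are assumed to be supplied by the user (the weight functions V_i,
the operator L applied to the trial functions φ_j, and the known function g).

    import numpy as np
    from scipy.integrate import quad

    def weighted_residual_coefficients(V, Lphi, g, a, b):
        """Assemble K and f of (91) and solve K a = f of (92).

        V    : list of weight functions V_i(x)
        Lphi : list of functions (L phi_j)(x), the operator applied to the trial functions
        g    : the known function g(x)
        """
        n = len(V)
        K = np.array([[quad(lambda x, i=i, j=j: V[i](x) * Lphi[j](x), a, b)[0]
                       for j in range(n)] for i in range(n)])
        f = np.array([-quad(lambda x, i=i: V[i](x) * g(x), a, b)[0] for i in range(n)])
        return np.linalg.solve(K, f)

The four methods below differ only in the choice of the weight functions V_i;
in the point collocation case the quadratures degenerate into direct evaluation
of L(φ_j) and g at the collocation points.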

1.2.1 Point collocation method

In the point collocation method, the weight function v (82) is chosen such
that

    V = [ δ(x - x_1)  δ(x - x_2)  ...  δ(x - x_n) ],                   (93)

where the fixed points x_i ∈ [a, b], i = 1, ..., n, are called collocation points.
Here Dirac's delta function δ(x - x_i) is defined by

    δ(x - x_i) = 0 for x ≠ x_i,   ∫_{x_i^-}^{x_i^+} δ(x - x_i) dx = 1.    (94)

Inserting (93) in (88) gives

    ∫_a^b V_i e(x) dx = ∫_a^b δ(x - x_i) e(x) dx = e(x_i) ∫_{x_i^-}^{x_i^+} δ(x - x_i) dx = e(x_i) = 0,   i = 1, ..., n,   (95)

i.e. the residual e(x) is forced to be zero at the collocation points.
The linear system (92) takes the form

    ( L(φ_1(x_1))  L(φ_2(x_1))  ...  L(φ_n(x_1)) ) ( a_1 )     ( g(x_1) )
    ( .......................................... ) ( ... ) = - ( ...... ).   (96)
    ( L(φ_1(x_n))  L(φ_2(x_n))  ...  L(φ_n(x_n)) ) ( a_n )     ( g(x_n) )


1.2.2 Subdomain collocation method

The weight function v in (82) is defined such that

    V_i = { 1, if x_i ≤ x ≤ x_{i+1};  0, else },   i = 1, ..., n.      (97)

This means that the average of the residual over each subdomain is forced
to be zero:

    ∫_a^b V_i e(x) dx = ∫_{x_i}^{x_{i+1}} e(x) dx = 0,   i = 1, ..., n.    (98)

Instead of system (96) we have

    ( ∫_{x_1}^{x_2} L(φ_1) dx      ...  ∫_{x_1}^{x_2} L(φ_n) dx     ) ( a_1 )     ( ∫_{x_1}^{x_2} g dx     )
    ( ............................................................ ) ( ... ) = - ( ....................... ).   (99)
    ( ∫_{x_n}^{x_{n+1}} L(φ_1) dx  ...  ∫_{x_n}^{x_{n+1}} L(φ_n) dx ) ( a_n )     ( ∫_{x_n}^{x_{n+1}} g dx )

1.2.3 Least square method

In the least square method the functions V_i(x) in (82) are defined as

    V_i = ∂e/∂a_i,   i = 1, ..., n,                                    (100)

and due to (88) we have

    ∫_a^b (∂e/∂a_i) e dx = 0,   i = 1, ..., n.                         (101)

Our interest is the square of the error over the interval [a, b],

    J = ∫_a^b e^2 dx.                                                  (102)

Next we compute the derivatives

    ∂J/∂a_i = 2 ∫_a^b (∂e/∂a_i) e dx,   i = 1, ..., n.                 (103)

It follows from (101), (103) that

    ∂J/∂a_i = 0,   i = 1, ..., n.                                      (104)

Therefore J is stationary and the square of the error e(x) attains its minimum.
In explicit form the linear system (92) reads

    ( ∫_a^b L(φ_1)L(φ_1) dx  ...  ∫_a^b L(φ_1)L(φ_n) dx ) ( a_1 )     ( ∫_a^b L(φ_1) g dx )
    ( ................................................. ) ( ... ) = - ( .................. ).   (105)
    ( ∫_a^b L(φ_n)L(φ_1) dx  ...  ∫_a^b L(φ_n)L(φ_n) dx ) ( a_n )     ( ∫_a^b L(φ_n) g dx )

Evidently, the matrix K is symmetric.


1.2.4 Galerkin's method

In Galerkin's method the functions V_i(x) in (82) are defined as

    V_i = φ_i,   i = 1, ..., n.                                        (106)

Inserting (106) in (88) yields

    ∫_a^b φ_i e dx = 0,   i = 1, ..., n.                               (107)

The linear system (92) becomes

    ( ∫_a^b φ_1 L(φ_1) dx  ...  ∫_a^b φ_1 L(φ_n) dx ) ( a_1 )     ( ∫_a^b φ_1 g dx )
    ( ............................................. ) ( ... ) = - ( .............. ).   (108)
    ( ∫_a^b φ_n L(φ_1) dx  ...  ∫_a^b φ_n L(φ_n) dx ) ( a_n )     ( ∫_a^b φ_n g dx )

Example 1: We consider the two point boundary value problem

    d^2 u/dx^2 + u + x = 0,   x ∈ [0, 1],                              (109)

    u(0) = 0,   u(1) = 0.                                              (110)

The function g(x) and the differential operator L are determined as

    g(x) = x,   L = d^2/dx^2 + 1.                                      (111)

The two point boundary value problem (109)-(110) has the exact solution

    u(x) = (sin x)/(sin 1) - x.                                        (112)

Next we approximate the solution of (109)-(110) with the trigonometric series

    ū = b_0 + (a_1 sin(px) + b_1 cos(px)) + (a_2 sin(2px) + b_2 cos(2px)) + ... +
        + (a_n sin(npx) + b_n cos(npx)).                               (113)

The boundary conditions (110) are satisfied if

    p = π,   b_0 = 0,  b_1 = 0, ..., b_n = 0.                          (114)

Taking n = 2 in (113) we get the approximation ū with two terms as

    ū = a_1 sin πx + a_2 sin 2πx.                                      (115)

Comparing (76) and (115) yields

    Φ = (φ_1  φ_2) = (sin πx   sin 2πx).                               (116)

Substituting Φ in (96) gives the 2 × 2 linear system

    ( (1 - π^2) sin πx_1   (1 - 4π^2) sin 2πx_1 ) ( a_1 )     ( x_1 )
    ( (1 - π^2) sin πx_2   (1 - 4π^2) sin 2πx_2 ) ( a_2 ) = - ( x_2 ).    (117)

According to the point collocation method we choose the collocation
points x_1, x_2 and solve (117) with respect to a_1 and a_2. For x_1 = 1/3 and
x_2 = 2/3 one obtains a_1 = 0.0651, a_2 = -0.005 and

    ū = 0.0651 sin πx - 0.005 sin 2πx.                                 (118)

The subdomain collocation method leads to the following solution.
Taking x_2 = 1/2 (two subdomains of equal length) we can write the
system (99) as

    ( ∫_0^{1/2} (1 - π^2) sin(πx) dx    ∫_0^{1/2} (1 - 4π^2) sin(2πx) dx ) ( a_1 )     ( ∫_0^{1/2} x dx )
    ( ∫_{1/2}^1 (1 - π^2) sin(πx) dx    ∫_{1/2}^1 (1 - 4π^2) sin(2πx) dx ) ( a_2 ) = - ( ∫_{1/2}^1 x dx ).   (119)

The system (119) has the solution a_1 = 0.0885, a_2 = -0.0102 and therefore

    ū = 0.0885 sin πx - 0.0102 sin 2πx.                                (120)

Let us use the least square method now.
For the present example the system (105) reduces to

    ( ∫_0^1 (1 - π^2)^2 sin^2(πx) dx                 ∫_0^1 (1 - π^2)(1 - 4π^2) sin(πx) sin(2πx) dx ) ( a_1 )
    ( ∫_0^1 (1 - π^2)(1 - 4π^2) sin(πx) sin(2πx) dx  ∫_0^1 (1 - 4π^2)^2 sin^2(2πx) dx              ) ( a_2 ) =

        ( ∫_0^1 (1 - π^2) sin(πx) x dx   )
    = - ( ∫_0^1 (1 - 4π^2) sin(2πx) x dx ).                            (121)

Solving (121) we get a_1 = 0.0718, a_2 = -0.0083 and

    ū = 0.0718 sin πx - 0.0083 sin 2πx.                                (122)

Galerkin's method gives the following solution.
For the considered example the system (108) reduces to

    ( ∫_0^1 (1 - π^2) sin^2(πx) dx               ∫_0^1 sin(πx)(1 - 4π^2) sin(2πx) dx ) ( a_1 )     ( ∫_0^1 sin(πx) x dx  )
    ( ∫_0^1 sin(2πx)(1 - π^2) sin(πx) dx         ∫_0^1 (1 - 4π^2) sin^2(2πx) dx      ) ( a_2 ) = - ( ∫_0^1 sin(2πx) x dx ).   (123)

Solving system (123) yields a_1 = 0.0718, a_2 = -0.0083 and therefore

    ū = 0.0718 sin πx - 0.0083 sin 2πx.                                (124)

Comparison of solutions:
It is seen in Fig. 1 that the numerical results obtained by the four
weighted residual methods are quite close to the exact solution. Note, however,
that in the present example only two terms of the trigonometric series are used.
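
As a numerical cross-check of the Galerkin coefficients (124) (a sketch
assuming NumPy and SciPy, not part of the original notes):

    import numpy as np
    from scipy.integrate import quad

    # Trial functions phi_i = sin(i*pi*x); L(phi_i) = phi_i'' + phi_i = (1 - (i*pi)^2) sin(i*pi*x)
    phi  = [lambda x, i=i: np.sin(i * np.pi * x) for i in (1, 2)]
    Lphi = [lambda x, i=i: (1.0 - (i * np.pi) ** 2) * np.sin(i * np.pi * x) for i in (1, 2)]
    g = lambda x: x

    # Galerkin weights V_i = phi_i, system (108) / (123)
    K = np.array([[quad(lambda x: phi[i](x) * Lphi[j](x), 0.0, 1.0)[0] for j in range(2)]
                  for i in range(2)])
    f = np.array([-quad(lambda x: phi[i](x) * g(x), 0.0, 1.0)[0] for i in range(2)])
    a = np.linalg.solve(K, f)
    print(a)   # approximately [ 0.0718 -0.0083 ], cf. (124)

    # Compare the two-term approximation with the exact solution (112)
    u_exact = lambda x: np.sin(x) / np.sin(1.0) - x
    u_bar   = lambda x: a[0] * np.sin(np.pi * x) + a[1] * np.sin(2.0 * np.pi * x)
    for xp in (0.25, 0.5, 0.75):
        print(xp, u_exact(xp), u_bar(xp))
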
Exercises
1. Solve the two point boundary value problem (109)-(110) taking
n = 3 and n = 4. Compare the results.