Dedication

To my parents Rabah and Naima, my husband Walid

my children Rafif-Iline and Mohamed-Amine

my grand-mother Zohra and aunty Hassina

my brothers, my sisters and all my family

I dedicate this thesis.


Acknowledgment

I thank first and foremost Allah for giving me the courage, the patience, and the will to achieve this thesis.
I wish to express my dearest gratitude to my supervisor Prof. Azedine RAHMOUNE for his continuous
support, ideas, and patience in guiding me during the preparation of my thesis.

Secondly, I would like to thank members of my thesis committee: M.C.A. Rebiha ZEGHDANE
from the Mathematics Department, University Mohamed El Bachir El Ibrahimi of Bordj Bou Arréridj ,
Pr. Mostefa NADIR from the Mathematics Department, University of Mohamed Boudiaf of M’sila, Pr.
Abdelbaki MEROUANI from the Mathematics Department, University Ferhat Abbas of Setif 1, M.C.A.
Rebiha BENTERKI from the Mathematics Department, University Mohamed El Bachir El Ibrahimi
of Bordj Bou Arréridj, and M.C.A. Bachir GAGUI from the Mathematics Department, University of
Mohamed Boudiaf of M’sila.

Finally, I would also like to thank my comrades and friends who supported me morally, and everyone who participated directly or indirectly in the realization of this work.

Beyond all of them, a special thought goes to my family, who have supported me during all these years; to them I dedicate this work.
Abstract

The main objective of this thesis is to study the convergence and stability of spectral methods and their use in solving some types of integral equations, such as quadratic Urysohn integral equations on the half-line. A rational Legendre collocation method is proposed to solve them. Finally, several numerical examples are given to show the effectiveness and stability of the proposed method.

Keywords: Collocation method, rational approximation, Legendre polynomials, convergence analysis, stability.

Résumé

The main objective of this thesis is to study the stability and convergence of spectral methods and their use to solve integral equations, for example quadratic equations of Urysohn type on an unbounded domain, using the rational Legendre collocation method to solve this type of equation. Finally, several numerical examples are given to show the effectiveness and stability of our approaches.

Keywords: Collocation method, rational approximation, Legendre polynomials, convergence analysis, stability.

Contents

List of Tables
List of Figures
List of Symbols
Introduction

1 Preliminaries
  1.1 Preliminary concept of integral equations
    1.1.1 Definition
    1.1.2 Classification of integral equations
    1.1.3 Classification of Integro-differential equations
  1.2 Polynomial basis functions and Quadratures
    1.2.1 Jacobi Polynomials
    1.2.2 Legendre Polynomials
    1.2.3 Gegenbauer Polynomials
  1.3 Well- and Ill-conditioned problems
    1.3.1 Definition
    1.3.2 Condition number of a problem
    1.3.3 Examples of stable and unstable problems
    1.3.4 Stability of an Algorithm

2 Basic spectral methods for integral equations
  2.1 Spectral Methods Theory
    2.1.1 Why Spectral Methods?
    2.1.2 Basic principle
    2.1.3 Choice of trial and test functions
    2.1.4 Projection operator
    2.1.5 Collocation method
    2.1.6 Galerkin's method
  2.2 Convergence and stability for a linear integral equation
    2.2.1 Convergence Analysis
    2.2.2 Stability
    2.2.3 Integral equations and ill-posed problems

3 Rational Legendre collocation method for resolution of quadratic integral equations
  3.1 Introduction
  3.2 Orthogonal rational Legendre functions for the semi-infinite interval
  3.3 Rational Lagrange interpolation
  3.4 RLCM for quadratic Urysohn integral equation
    3.4.1 Principle of the method
    3.4.2 Error estimates
    3.4.3 Illustrative examples
    3.4.4 Stability
  3.5 RLCM for quadratic Hammerstein integral equation
    3.5.1 Description of the method
    3.5.2 Convergence analysis
    3.5.3 Numerical examples
    3.5.4 Stability

Conclusion and prospects
Bibliography

List of Tables

3.1 Some values of $u_s^N(x)$ at selected points
3.2 Some values of $u_s^N(x)$ at selected points
3.3 Some values of $u_s^N(x)$ at selected points
3.4 Stability results of Example 1 with s = 1.5
3.5 Stability results of Example 2 with s = 4

List of Figures

1.1 Jacobi polynomials $J_n^{1,1}(x)$ (left) and $J_n^{1,0}(x)$ (right) with n = 0, 1, ..., 5
1.2 Legendre polynomials
1.3 Gegenbauer polynomials
1.4 Well-conditioned versus ill-conditioned problem
1.5 Example of an ill-conditioned problem
1.6 Example of a well-conditioned problem
1.7 Stable and unstable algorithm with respect to a solution obtained using an exact analytical procedure with an infinite number of digits
1.8 The integrand
1.9 Absolute value of $I_n$
1.10 Value of $I_n$
1.11 Relative error of the approximation of $I_1$ depending on N
3.1 Graph of $u^{64}(x)$, max error = 1.7988e-012
3.2 Graph of $u^{64}(x)$, max error = 1.2046e-007
3.3 Graph of $u^{64}(x)$, max error = 1.5629e-012
3.4 Numerical results of the RLC scheme for Example 2 with N = 128, s = 2
3.5 Numerical results of the RLC scheme for Example 3 with N = 64, s = 1

List of Symbols

Introduction

Many problems arising in applied mathematics or mathematical physics can be formulated in two distinct but connected ways, namely as differential equations or as integral equations. Integral equations are among the most important branches of mathematics. The importance of these equations in all branches of science and engineering has prompted several researchers to study certain integral equations analytically and numerically. One thing to keep in mind about integral equations is that most of them cannot be solved explicitly, so mathematicians have resorted to solving them by numerical methods. With the advent of numerical computing machines, especially computers, these methods have become an essential tool for investigating fundamental scientific problems that were difficult, or even impossible, to solve in the past. One of these methods is the spectral method.
A spectral method approximates the solution of a problem by a finite expansion in terms of orthogonal functions, where the trial functions (also called the expansion or approximating functions) are global smooth basis functions, for example Fourier series or orthogonal polynomials. These methods were originally used for the solution of boundary value problems; for more information about the history of the method, see [1], as well as the works of a number of researchers, among them [2, 3, 4, 5, 6].

Chapter 1: Preliminaries

The subject of integral equations has held an eminent place in the attention of mathematicians. Such equations arise naturally in applications in diverse areas of applied mathematics, the physical sciences, engineering, biology, and many other fields, and they also provide an effective technique for solving a wide range of practical problems.

Abel initiated the study of integral equations: in 1823 he proposed a generalization of the tautochrone problem whose solution reduced to the solution of what has since been dubbed "an integral equation of the first kind", and in 1837 Liouville proved that a particular solution of a linear differential equation of the second order may be obtained by solving an integral equation. We recall in this chapter the history, definitions, and classifications of integral and integro-differential equations (for more information, see [7, 8, 5]).

1.1 Preliminary concept of integral equations

The name integral equation, for any equation involving the unknown function under the integral sign, was introduced by Du Bois-Reymond in 1888. Afterward, in 1896, Vito Volterra built up a theory of integral equations, viewing their solution as the problem of finding the inverses of certain integral operators; we should also mention the famous paper of Fredholm, published in 1903, which presented the fundamentals of the Fredholm integral equation theory. Poincaré, Fréchet, Hilbert, Schmidt, Hardy, and Riesz also participated in the development of this area of research.


1.1.1 Definition

An integral equation is an equation in which the unknown function $\phi(x)$ to be determined appears under the integral sign. A typical integral equation in $\phi(x)$ has the form
$$\phi(x) = f(x) + \lambda \int_{\alpha(x)}^{\beta(x)} k(x,t)\,\phi(t)\,dt, \qquad (1.1.1)$$
where $k(x,t)$ is called the kernel of the integral equation (1.1.1) and $\alpha(x)$, $\beta(x)$ are the limits of integration. The kernel $k(x,t)$ and the function $f(x)$ are given, and $\lambda$ is a constant parameter.

1.1.2 Classification of integral equations

The most frequently used integral equations fall under two major classes, namely Volterra and Fredholm integral equations. They are further classified as homogeneous or nonhomogeneous, and as linear or nonlinear. In some practical problems, singular equations are also encountered.

• Fredholm integral equations


The most standard form of a linear Fredholm integral equation is
$$\mu(x)\,\phi(x) = f(x) + \lambda \int_{a}^{b} k(x,t)\,\phi(t)\,dt, \qquad (1.1.2)$$
where the limits of integration $a$ and $b$ are constants. If $\mu(x) = 1$, equation (1.1.2) is called a Fredholm integral equation of the second kind, whereas if $\mu(x) = 0$, it is called a Fredholm integral equation of the first kind.

• Volterra integral equations

The most standard form of a linear Volterra integral equation is
$$\mu(x)\,\phi(x) = f(x) + \lambda \int_{a}^{x} k(x,t)\,\phi(t)\,dt, \qquad (1.1.3)$$
where the upper limit of integration is a function of $x$. If $\mu(x) = 1$, equation (1.1.3) is called a Volterra integral equation of the second kind, whereas if $\mu(x) = 0$, it is called a Volterra integral equation of the first kind.

• Singular integral equations


When one or both limits of integration become infinite, or when the kernel becomes infinite at one or more points of the interval of integration, the integral equation is called singular. For example, the integral equations
$$\phi(x) = f(x) + \lambda \int_{0}^{\infty} \phi(t)\,dt \qquad (1.1.4)$$
and
$$\phi(x) = f(x) + \lambda \int_{0}^{x} \frac{1}{x-t}\,\phi(t)\,dt \qquad (1.1.5)$$
are classified as singular integral equations.

♦ Remark

- If the unknown function $\phi(x)$ appearing under the integral sign occurs in a functional form $F(\phi(x))$, e.g. $\sin(\phi(x))$, $\phi^2(x)$, etc., then the Volterra and Fredholm integral equations are classified as nonlinear integral equations.

- If we set $f(x) = 0$ in a Volterra or Fredholm integral equation, the resulting equation is called a homogeneous integral equation; otherwise it is called a nonhomogeneous integral equation.

1.1.3 Classification of Integro-differential equations

In the early 1900s, Vito Volterra studied a new type of equation, termed integro-differential equations. In this type of equation the unknown function $\phi(x)$ appears both under an ordinary derivative and under the integral sign.
The most standard form of an integro-differential equation is
$$\phi^{(n)}(x) = f(x) + \lambda \int_{a(x)}^{b(x)} k(x,t)\,\phi(t)\,dt, \qquad \phi^{(j)}(0) = b_j, \quad 0 \le j \le n-1, \qquad (1.1.6)$$
where $\phi^{(n)}(x) = \dfrac{d^n \phi}{dx^n}$ denotes the $n$th derivative of $\phi(x)$. Because equation (1.1.6) combines the differential operator and the integral operator, it is necessary to prescribe the initial conditions $\phi(0), \phi'(0), \ldots, \phi^{(n-1)}(0)$ for the determination of the particular solution $\phi(x)$.

♦ Note : The classification of these equations remains the same as the one mentioned earlier for
integral equations.


1.2 Polynomial basis functions and Quadratures

A spectral method for the solution of partial differential and integral equations is based on the expansion of the solution in a basis set of linearly independent functions. The choice of basis set for a particular problem is dictated in part by the interval of interest and by the anticipated behaviour of the solutions. The most widely used bases in spectral methods are the orthogonal polynomials.

A sequence of orthogonal polynomials is an infinite sequence of polynomials $P_0(x), P_1(x), P_2(x), \ldots$ with real coefficients, in which each $P_n(x)$ has degree $n$ and the polynomials of the sequence are pairwise orthogonal with respect to a given scalar product.
The scalar product of two functions is the integral of their product over a bounded interval,
$$\langle f, g \rangle = \int_a^b f(x)\,g(x)\,dx.$$
More generally, we can introduce a weight function $w(x)$ in the integral (on the integration interval $]a,b[$ the weight must take finite, strictly positive values, the integral of the weight function against any polynomial must be finite, and the bounds $a, b$ may be infinite):
$$\langle f, g \rangle = \int_a^b f(x)\,g(x)\,w(x)\,dx.$$
With this definition of the scalar product, two functions are orthogonal to each other if their scalar product is equal to zero. Here are some examples of basic orthogonal families (for more details, see, e.g., [9, 10, 1, 11, 12]).

1.2.1 Jacobi Polynomials

Jacobi polynomials (occasionally called hypergeometric polynomials) $J_n^{\alpha,\beta}(x)$ are a class of classical orthogonal polynomials. They are orthogonal with respect to the weight $(1-x)^{\alpha}(1+x)^{\beta}$ on the interval $[-1, 1]$. The Gegenbauer polynomials, and thus also the Legendre, Zernike and Chebyshev polynomials, are special cases of the Jacobi polynomials.

• They are defined by the formulas
$$J_n^{\alpha,\beta}(x) = \frac{(-1)^n}{2^n n!}\,(1-x)^{-\alpha}(1+x)^{-\beta}\,\frac{d^n}{dx^n}\Big[(1-x)^{\alpha+n}(1+x)^{\beta+n}\Big] = 2^{-n}\sum_{m=0}^{n} C_{n+\alpha}^{m}\,C_{n+\beta}^{\,n-m}\,(x-1)^{n-m}(x+1)^{m},$$
where the $C_n^m$ are binomial coefficients.

• They are generated by the three-term recurrence relation
$$J_{n+1}^{\alpha,\beta}(x) = \big(a_n^{\alpha,\beta}x - b_n^{\alpha,\beta}\big)J_n^{\alpha,\beta}(x) - c_n^{\alpha,\beta}J_{n-1}^{\alpha,\beta}(x), \qquad n \ge 1,$$
$$J_0^{\alpha,\beta}(x) = 1, \qquad J_1^{\alpha,\beta}(x) = \tfrac12(\alpha+\beta+2)\,x + \tfrac12(\alpha-\beta),$$
where
$$a_n^{\alpha,\beta} = \frac{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}{2(n+1)(n+\alpha+\beta+1)}, \qquad
b_n^{\alpha,\beta} = \frac{(\beta^2-\alpha^2)(2n+\alpha+\beta+1)}{2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)},$$
$$c_n^{\alpha,\beta} = \frac{(n+\alpha)(n+\beta)(2n+\alpha+\beta+2)}{(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)}.$$

• The set of Jacobi polynomials forms an orthogonal system, namely,
$$\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}\,J_n^{\alpha,\beta}(x)\,J_m^{\alpha,\beta}(x)\,dx = \frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\,\frac{\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)\,n!}\,\delta_{nm}, \qquad \alpha, \beta > -1.$$

• Rodrigues' formula
$$J_n^{\alpha,\beta}(x) = \frac{(-1)^n}{2^n n!}\,(1-x)^{-\alpha}(1+x)^{-\beta}\,\frac{d^n}{dx^n}\Big[(1-x)^{\alpha}(1+x)^{\beta}(1-x^2)^{n}\Big].$$

• Jacobi-Gauss quadrature formulas: the Gauss-Jacobi rule is defined by
$$\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}\,f(x)\,dx \approx \sum_{i=0}^{n} w_i\,f(x_i),$$
where the nodes and weights are given as follows.

− For Jacobi-Gauss: $\{x_i\}_{i=0}^{n}$ are the zeros of $J_{n+1}^{\alpha,\beta}(x)$ and
$$w_i = \frac{G_n^{\alpha,\beta}}{J_n^{\alpha,\beta}(x_i)\,\partial_x J_{n+1}^{\alpha,\beta}(x_i)}.$$

− For Jacobi-Gauss-Radau: $x_0 = -1$, $\{x_i\}_{i=1}^{n}$ are the zeros of $J_n^{\alpha,\beta+1}(x)$ and
$$w_0 = \frac{2^{\alpha+\beta+1}(\beta+2)\,\Gamma^2(\beta+1)\,n!\,\Gamma(n+\alpha+\beta)}{\Gamma(n+\beta+2)\,\Gamma(n+\alpha+\beta+2)}, \qquad
w_i = \frac{1}{1+x_i}\,\frac{G_{n-1}^{\alpha,\beta+1}}{J_{n-1}^{\alpha,\beta+1}(x_i)\,\partial_x J_{n}^{\alpha,\beta+1}(x_i)}.$$

− For Jacobi-Gauss-Lobatto: $x_0 = -1$, $x_n = 1$, $\{x_i\}_{i=1}^{n-1}$ are the zeros of $\partial_x J_n^{\alpha,\beta}(x)$ and
$$w_0 = \frac{2^{\beta+1}(\beta+2)\,\Gamma^2(\beta+1)\,\Gamma(n)\,\Gamma(n+\alpha+1)}{\Gamma(n+\beta+1)\,\Gamma(n+\alpha+\beta+2)}, \qquad
w_n = \frac{2^{\alpha+1}(\beta+2)\,\Gamma^2(\alpha+1)\,\Gamma(n)\,\Gamma(n+\beta+1)}{\Gamma(n+\alpha+1)\,\Gamma(n+\alpha+\beta+2)},$$
$$w_i = \frac{1}{1-x_i^2}\,\frac{G_{n-2}^{\alpha+1,\beta+1}}{J_{n-2}^{\alpha+1,\beta+1}(x_i)\,\partial_x J_{n-1}^{\alpha+1,\beta+1}(x_i)}, \qquad 1 \le i \le n-1,$$
where
$$G_n^{\alpha,\beta} = \frac{2^{\alpha+\beta}(2n+\alpha+\beta+2)\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{(n+1)\,\Gamma(n+\alpha+\beta+2)}.$$

Figure 1.1: Jacobi polynomials $J_n^{1,1}(x)$ (left) and $J_n^{1,0}(x)$ (right) with n = 0, 1, ..., 5.
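To make the recurrence above concrete, the following short Python sketch (my own illustration, not part of the thesis, whose computations were done in MATLAB) evaluates $J_n^{\alpha,\beta}(x)$ with the three-term recurrence, using the coefficients $a_n^{\alpha,\beta}$, $b_n^{\alpha,\beta}$, $c_n^{\alpha,\beta}$ written above, and cross-checks the result against SciPy's eval_jacobi; SciPy's availability is an assumption.

import numpy as np
from scipy.special import eval_jacobi

def jacobi_recurrence(n, alpha, beta, x):
    """Evaluate J_n^{alpha,beta}(x) with the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    Jm1 = np.ones_like(x)                                     # J_0
    J = 0.5 * (alpha + beta + 2) * x + 0.5 * (alpha - beta)   # J_1
    for k in range(1, n):
        a = (2*k + alpha + beta + 1) * (2*k + alpha + beta + 2) / (2*(k + 1)*(k + alpha + beta + 1))
        b = (beta**2 - alpha**2) * (2*k + alpha + beta + 1) / (2*(k + 1)*(k + alpha + beta + 1)*(2*k + alpha + beta))
        c = (k + alpha)*(k + beta)*(2*k + alpha + beta + 2) / ((k + 1)*(k + alpha + beta + 1)*(2*k + alpha + beta))
        J, Jm1 = (a*x - b)*J - c*Jm1, J                       # J_{k+1} from J_k and J_{k-1}
    return J

x = np.linspace(-1, 1, 5)
print(np.max(np.abs(jacobi_recurrence(4, 1.0, 1.0, x) - eval_jacobi(4, 1.0, 1.0, x))))  # ~ machine precision

For $\alpha = \beta = 0$ the same loop reduces to the Legendre recurrence recalled in the next subsection.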

1.2.2 Legendre Polynomials

The well-known Legendre polynomials are orthogonal on the interval $I = [-1, 1]$ with respect to the uniform weight function $\omega(x) = 1$. They can be determined with the help of the following recurrence formulas:
$$L_{n+1}(x) = \frac{2n+1}{n+1}\,x\,L_n(x) - \frac{n}{n+1}\,L_{n-1}(x), \qquad n \ge 1,$$
$$(2n+1)\,L_n(x) = L'_{n+1}(x) - L'_{n-1}(x), \qquad n \ge 1.$$

• The first of these polynomials are
$$L_0(x) = 1, \qquad L_1(x) = x, \qquad L_2(x) = \tfrac12(3x^2 - 1), \qquad L_3(x) = \tfrac12(5x^3 - 3x).$$

• The set of Legendre polynomials forms an orthogonal system, namely,
$$\int_{-1}^{1} L_n(x)\,L_m(x)\,dx = \frac{2}{2n+1}\,\delta_{n,m},$$
where $\delta_{n,m}$ is the Kronecker delta.

• Sturm-Liouville problem
$$(1-x^2)\,L''_n(x) - 2x\,L'_n(x) + n(n+1)\,L_n(x) = 0.$$

• Rodrigues' formula
$$L_n(x) = \frac{1}{2^n n!}\,\frac{d^n}{dx^n}\big[(x^2-1)^n\big], \qquad n \ge 0.$$

• Legendre-Gauss quadrature formulas:
$$\int_{-1}^{1} f(x)\,dx \approx \sum_{j=0}^{N} w_j\,f(x_j).$$

− For Legendre-Gauss (LG): $\{x_j\}_{j=0}^{N}$ are the zeros of $L_{N+1}(x)$ and
$$w_j = \frac{2}{(1-x_j^2)\,[L'_{N+1}(x_j)]^2}, \qquad 0 \le j \le N.$$

− For Legendre-Gauss-Radau (LGR): $\{x_j\}_{j=0}^{N}$ are the zeros of $L_{N+1}(x) + L_N(x)$ and
$$w_0 = \frac{2}{(N+1)^2}, \qquad w_j = \frac{1}{(N+1)^2}\,\frac{1-x_j}{[L_N(x_j)]^2}, \qquad 1 \le j \le N.$$

− For Legendre-Gauss-Lobatto (LGL): $x_0 = -1$, $x_N = 1$, $\{x_j\}_{j=1}^{N-1}$ are the zeros of $L'_N(x)$ and
$$w_j = \frac{2}{N(N+1)}\,\frac{1}{[L_N(x_j)]^2}, \qquad 0 \le j \le N.$$

Figure 1.2: Legendre Polynomials.
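As an illustration of the recurrence and of the Legendre-Gauss rule recalled above, here is a minimal Python/NumPy sketch (my own, assuming only that NumPy is available); it evaluates $L_n$ by the three-term recurrence and checks the orthogonality relation $\int_{-1}^{1} L_n L_m\,dx = \tfrac{2}{2n+1}\delta_{n,m}$ numerically with an $(N+1)$-point Gauss rule.

import numpy as np

def legendre(n, x):
    """L_n(x) via (n+1) L_{n+1} = (2n+1) x L_n - n L_{n-1}."""
    x = np.asarray(x, dtype=float)
    Lm1, L = np.ones_like(x), x.copy()
    if n == 0:
        return Lm1
    for k in range(1, n):
        L, Lm1 = ((2*k + 1)*x*L - k*Lm1) / (k + 1), L
    return L

# (N+1)-point Legendre-Gauss rule: exact for polynomials of degree <= 2N+1.
N = 5
nodes, weights = np.polynomial.legendre.leggauss(N + 1)
print(weights @ (legendre(2, nodes) * legendre(3, nodes)))   # ~ 0
print(weights @ legendre(3, nodes)**2, 2/7)                  # both ~ 0.2857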

1.2.3 Gegenbauer Polynomials

Gegenbauer polynomials or ultraspherical polynomials Cnλ (x) (λ > −1/2) are orthogonal polynomials
on the interval [−1, 1] with respect to the weight function (1 − x2 )λ−1/2 . They are a subset of the Jacobi
polynomials with α = β = λ − 1/2 and have applications to potential theory and harmonic analysis. They
reduce to Legendre polynomials for λ = 1/2 and to Chebyshev polynomials for λ = 0.
• The polynomials can be defined in terms of their generating function (Stein and Weiss, 1971):
$$\frac{1}{(1 - 2xt + t^2)^{\lambda}} = \sum_{n=0}^{\infty} C_n^{\lambda}(x)\,t^n.$$

• They can be defined by the recurrence relation (Suetin, 2001):
$$C_n^{\lambda}(x) = \frac{1}{n}\Big[2x(n+\lambda-1)\,C_{n-1}^{\lambda}(x) - (n+2\lambda-2)\,C_{n-2}^{\lambda}(x)\Big].$$

• The first of these polynomials are
$$C_0^{\lambda}(x) = 1, \qquad C_1^{\lambda}(x) = 2\lambda x, \qquad C_2^{\lambda}(x) = 2\lambda(\lambda+1)x^2 - \lambda,$$
$$C_3^{\lambda}(x) = \tfrac{4}{3}\lambda(\lambda+1)(\lambda+2)x^3 - 2\lambda(\lambda+1)x.$$

• Rodrigues' formula
$$C_n^{\lambda}(x) = \frac{(-1)^n}{2^n n!}\,\frac{\Gamma(\lambda+1/2)\,\Gamma(n+2\lambda)}{\Gamma(2\lambda)\,\Gamma(n+\lambda+1/2)}\,(1-x^2)^{-\lambda+1/2}\,\frac{d^n}{dx^n}\big[(1-x^2)^{n+\lambda-1/2}\big].$$

• The set of Gegenbauer polynomials forms an orthogonal system, namely,
$$\int_{-1}^{1} C_n^{\lambda}(x)\,C_m^{\lambda}(x)\,(1-x^2)^{\lambda-1/2}\,dx = \gamma_n^{\lambda}\,\delta_{n,m},$$
where $\delta_{n,m}$ is the Kronecker delta and
$$\gamma_n^{\lambda} = \frac{\pi\,2^{1-2\lambda}\,\Gamma(n+2\lambda)}{n!\,(n+\lambda)\,[\Gamma(\lambda)]^2}.$$

Figure 1.3: Gegenbauer Polynomials.


1.3 Well- and Ill-conditioned problems

Every problem that we try to solve is based on an expression of some form or another. To have confidence in our solution we first need to know that the expression is continuous in its inputs, so that we do not get completely different results from slight changes in the input. However, this is not enough: we also need to know whether the problem is well-conditioned or ill-conditioned.

1.3.1 Definition

A problem is called well-conditioned if a small perturbation of the input data leads to small variations of the results, i.e. variations of the same order of magnitude. On the other hand, if small changes in the input lead to large changes in the output, we call the problem ill-conditioned.

Figure 1.4: Well-conditioned versus ill-conditioned problem


1.3.2 Condition number of a problem

Denoting by $P$ the problem under consideration, if $d$ represents the input data and $r$ the output results, it is possible to define the condition number of the problem through the following inequality:
$$\frac{\|\delta r\|}{\|r\|} \le K(P)\,\frac{\|\delta d\|}{\|d\|}, \qquad (1.3.1)$$
where $\|\cdot\|$ is a given norm able to measure the involved quantities. The condition number bounds the propagation of the relative input error into the output results; it is closely related to the maximum accuracy that can be attained in the solution. We define the relative condition number by
$$K(P) = \sup\left\{\frac{\|\delta r\|/\|r\|}{\|\delta d\|/\|d\|},\ \delta d \ne 0\right\}.$$

♦ Now suppose our problem is a system of equations:
$$Ax = b. \qquad (1.3.2)$$
The input data are $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$, and the result is $x \in \mathbb{R}^{n}$. The condition number $K(A)$ is involved in the answer to the question: how much can a change in the right-hand side of a system of linear equations affect the solution?
The following system is obtained by altering the right-hand side:
$$A(x + \delta x) = b + \delta b. \qquad (1.3.3)$$
Think of $\delta b$ as the error in $b$ and $\delta x$ as the resulting error in $x$, although we need not assume that the errors are small. Consequently,
$$\frac{\|\delta x\|}{\|x\|} \le K(A)\,\frac{\|\delta b\|}{\|b\|}. \qquad (1.3.4)$$
The quantity $\|\delta b\|/\|b\|$ is the relative change in the right-hand side, and the quantity $\|\delta x\|/\|x\|$ is the resulting relative change in the solution.
More generally, $K_p(A)$ will denote the condition number of $A$ in the $p$-norm, where $p = 1$, $p = 2$ or $p = \infty$:
$$K_p(A) = \|A\|_p\,\|A^{-1}\|_p. \qquad (1.3.5)$$


♦ Relation between condition number and stability

With the concept of condition we are now able to characterize problems; next we will look at the characterization of stability and its relation to the condition number. If $K(P)$ is large, the problem is ill-conditioned; otherwise, the problem is called well-conditioned.

1.3.3 Examples of stable and unstable problems

Example 1 Intersection of two straight lines


− In this example we consider the intersection of two straight lines ∆1 and ∆2 given by the equations:

∆1 : x + 3.5y = 8

and
∆2 : 2.1x + 7y = 16.1

The intersection point is (1, 2). Now, if we change the coefficient of the second straight line from 2.1 to 2.12, the new solution becomes (0.8333, 2.048).
− If we analyze the relative errors, the relative input error is approximately $\frac{\|\delta d\|}{\|d\|} = \frac{|2.1 - 2.12|}{|2.1|} \approx 9.5 \times 10^{-3}$, while the relative errors on the solution are $1.67 \times 10^{-1}$ for the variable $x$ and $2.38 \times 10^{-2}$ for $y$, respectively. These values can be considered very high when compared to the order of magnitude of the input error. This is due to the fact that the two straight lines ∆1 and ∆2 are almost parallel (ill-conditioned problem).
Now, let us consider the system where ∆1 is replaced by the line ∆3 orthogonal to the line ∆2,

∆3 : −3.3333x + y = −1.3333.

Using the previous perturbation, the solution now moves from (1, 2) to (0.9992, 1.9974). In this case we have the same relative input error, but the relative errors on the solution are $7.86 \times 10^{-4}$ for the variable $x$ and $1.31 \times 10^{-3}$ for $y$, which can be considered small (of the same order of magnitude as the input). This is due to the fact that the two straight lines ∆2 and ∆3 are almost orthogonal (well-conditioned problem).


Figure 1.5: Example of ill-conditioned problem

Figure 1.6: Example of well-conditioned problem


Example 2 The linear models of physics, astronomy, etc., often lead to the resolution of large linear systems of the form $AX = B$. It sometimes happens that a small variation of $B$ leads to a large variation of $X$; in this case, we say that the matrix, or the problem, is ill-conditioned.
− We consider $AX = B$ with
$$A = \begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix}, \qquad B = \begin{pmatrix} 32 \\ 23 \\ 33 \\ 31 \end{pmatrix},$$
whose exact solution is $X = [1, 1, 1, 1]^T$.


If we change the right-hand side of the problem to
$$B = \begin{pmatrix} 32.1 \\ 22.9 \\ 33.1 \\ 30.9 \end{pmatrix},$$
the exact solution becomes $X = [9.2, -12.6, 4.5, -1.1]^T$.

We remark that very small variations of $B$ lead to large variations of $X$. On the other hand, $K_\infty(A) = \|A\|_\infty\,\|A^{-1}\|_\infty = 4488$, where $\|\cdot\|_\infty$ is the matrix norm associated with the infinity norm on $\mathbb{R}^4$; since this value is large, the problem is ill-conditioned.
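The effect described in Example 2 is easy to reproduce numerically. The following Python/NumPy sketch (my own check, not part of the thesis) solves the two systems and computes $K_\infty(A) = \|A\|_\infty\,\|A^{-1}\|_\infty$ with numpy.linalg.cond.

import numpy as np

A = np.array([[10., 7., 8., 7.],
              [ 7., 5., 6., 5.],
              [ 8., 6., 10., 9.],
              [ 7., 5., 9., 10.]])
B  = np.array([32., 23., 33., 31.])
Bp = np.array([32.1, 22.9, 33.1, 30.9])   # slightly perturbed right-hand side

X  = np.linalg.solve(A, B)                # [1, 1, 1, 1]
Xp = np.linalg.solve(A, Bp)               # [9.2, -12.6, 4.5, -1.1]
print(X, Xp)
print(np.linalg.cond(A, np.inf))          # ||A||_inf * ||A^{-1}||_inf = 4488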

1.3.4 Stability of an Algorithm

With the concept of condition we are now able to characterize problems. Now we will have a look at the
characterization of numerical algorithms. When we study an algorithm our interest is the same as for an
expression: we want small changes in the input to only produce small changes in the output. An algorithm
or numerical process is called ”stable” if small errors in the inputs and at each step lead to small errors
in the solution. Hence, even when a problem is well-conditioned, if we try to solve it with an unstable
algorithm, the obtained results will be meaningless.


Figure 1.7: Stable and unstable algorithm with respect to a solution obtained using an exact analytical procedure with an infinite number of digits

The following examples refer to a comparison between stable and unstable algorithms for two given
problems.

Example 3 Example of integral computation


− In the next two steps, we compare two algorithms for computing the following integral:
$$I_n = \frac{1}{e}\int_{0}^{1} x^n e^x\,dx, \qquad n \ge 0. \qquad (1.3.6)$$
Both algorithms are based on the following theoretical considerations:

• $0 < I_n < \int_0^1 x^n\,dx = \dfrac{1}{1+n}$,

• $I_{n+1} < I_n$,

• $\lim_{n \to \infty} I_n = 0$.


Figure 1.8: The integrand

• The first strategy (unstable formulation) is to develop an algorithm based on the following recursive formula:

• For $n = 0$ we have
$$I_0 = \frac{1}{e}\int_{0}^{1} e^x\,dx = \frac{1}{e}(e-1) = 1 - e^{-1}.$$

• For $n > 0$ we can use integration by parts, obtaining
$$I_n = \frac{1}{e}\int_{0}^{1} x^n e^x\,dx = \frac{1}{e}\Big([x^n e^x]_0^1 - n\int_{0}^{1} x^{n-1}e^x\,dx\Big) = 1 - nI_{n-1}.$$

The developed program starts from $n = 1$, where $I_1 = e^{-1}$.

To perform the error analysis we denote by $I'_n = I_n + \varepsilon_n$ the approximate value of the integral at step $n$ with respect to the exact value $I_n$, $\varepsilon_n$ being the error. Hence, it is possible to write the following recursive formula for the error:
$$\varepsilon_n = I'_n - I_n = (1 - nI'_{n-1}) - (1 - nI_{n-1}) = -n(I'_{n-1} - I_{n-1}) = -n\,\varepsilon_{n-1},$$
and $\varepsilon_n$, with respect to the first error $\varepsilon_1$, is
$$\varepsilon_n = (-1)^{n-1}\,n!\,\varepsilon_1.$$


As a consequence, even if ε1 is small, the error εn grows up to infinity as a factorial.

Figure 1.9: Absolute value of In

• The second strategy (stable formulation) is to develop an algorithm based on the following recursive formula:

• For $n = N$ we set $I_N = 0$.

• For $n > 0$ we rewrite the previous recursive formula $I_n = 1 - nI_{n-1}$ in terms of $n-1$ as follows:
$$I_{n-1} = \frac{1}{n}(1 - I_n).$$

The developed program starts from $N = 100$ and computes $I_1$ as the last integral.
To perform the error analysis we denote by $I'_n = I_n + \varepsilon_n$ the approximate value of the integral at step $n$ with respect to the exact value $I_n$, $\varepsilon_n$ being the error. Hence, it is possible to write the following recursive formula for the error:
$$\varepsilon_{n-1} = I'_{n-1} - I_{n-1} = \frac{1}{n}(1 - I'_n) - \frac{1}{n}(1 - I_n) = -\frac{1}{n}(I'_n - I_n) = -\frac{1}{n}\,\varepsilon_n.$$
Hence $\varepsilon_1$ can be expressed in terms of $\varepsilon_N$ as
$$\varepsilon_1 = \frac{(-1)^{N-1}}{N!}\,\varepsilon_N.$$


As a consequence, even if εN is “big”, the error ε1 decreases to zero, since we have a factorial as
denominator.
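The two strategies are easy to compare in a few lines of code. The sketch below (my own illustration; the original computations of the thesis were done in MATLAB) implements the forward recursion $I_n = 1 - nI_{n-1}$ and the backward recursion $I_{n-1} = (1 - I_n)/n$ started from the crude guess $I_N = 0$ with $N = 100$.

import math

def forward(n):
    """Unstable strategy: I_0 = 1 - 1/e, then I_k = 1 - k*I_{k-1} (error grows like k!)."""
    I = 1.0 - math.exp(-1.0)
    for k in range(1, n + 1):
        I = 1.0 - k * I
    return I

def backward(n, N=100):
    """Stable strategy: start from the crude guess I_N = 0 and use I_{k-1} = (1 - I_k)/k."""
    I = 0.0
    for k in range(N, n, -1):
        I = (1.0 - I) / k
    return I

for n in (15, 20, 25):
    # forward blows up for moderate n, backward stays inside the bound (0, 1/(n+1))
    print(n, forward(n), backward(n))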

Figure 1.10: Value of In

♦ The relative error for $I_1$, starting from $I_N = 0$ for different values of $N$: the relative error on $\varepsilon_1$ is computed as $\mathrm{err}_{rel} = \dfrac{|I_1^{exact} - I_1|}{|I_1^{exact}|}$, where $I_1$ depends on $N$ and $I_1^{exact} = e^{-1}$.
− Note that the values are visible only up to $N = 17$; beyond that limit the values are smaller than the machine precision $\varepsilon$ and are not visible on a logarithmic scale.


Figure 1.11: Relative error of the approximation of I1 depending on N

Chapter 2: Basic spectral methods for integral equations

2.1 Spectral Methods Theory

Spectral methods are a very powerful tool in the numerical resolution of integral equations. The great importance of these methods leads us to ask: what are the characteristics of spectral methods that make them so well suited to solving integral equations?

2.1.1 Why Spectral Methods?

There are major benefits of spectral methods compared to other approaches. Mainly we can say that:

• Firstly, spectral discretizations of integral equations, based for example on Fourier bases or on orthogonal polynomials (Chebyshev, Legendre, Hermite, ...), provide very low approximation errors. In many cases these approaches converge exponentially: for a spectral expansion of order N, the difference between the exact analytical solution and the numerical solution tends to zero rapidly as the expansion order increases.

• Since the numerical accuracy of spectral methods is so high, the number of grid points needed to reach the desired precision is very small; therefore a spectral method requires less memory than other methods. This reduction is crucial, especially for the execution of large algorithms.

• High-performance implementations of the basis transformations required by most spectral methods are available, and the developer of a spectral method does not need to implement these codes from scratch.


2.1.2 Basic principle

Spectral methods are used extensively for the discretization of integral equations. The main idea is to approximate the solution $\phi(x)$ of the integral equation on some interval $\Omega$, not necessarily bounded, by a finite sum of trial (or basis) functions (for example, Fourier bases, orthogonal polynomials, ...), and then to choose the coefficients of this combination:
$$\phi(x) \approx \phi_N(x) = \sum_{k=0}^{N} c_k\,u_k(x), \qquad (2.1.1)$$
where $\{u_k\}$ are the basis functions and $\{c_k\}$ are the expansion coefficients to be determined. Substituting $\phi_N$ into the equation
$$\mathcal{L}\phi = f, \qquad (2.1.2)$$
where $\mathcal{L}$ represents an integral operator, leads to the residual
$$R_N(x) = \mathcal{L}\phi_N - f(x) \ne 0, \qquad x \in \Omega. \qquad (2.1.3)$$
This residual would vanish if $\phi_N$ were the exact solution, and the idea of spectral methods is to force the residual to zero by requiring
$$\langle R_N, \Psi_j \rangle := \int_{\Omega} R_N(x)\,\Psi_j(x)\,\omega(x)\,dx = 0, \qquad 0 \le j \le N, \qquad (2.1.4)$$
where the $\Psi_j$ are the test functions and $\omega$ is a positive weight function; or
$$\langle R_N, \Psi_j \rangle_N := \sum_{k=0}^{N} R_N(x_k)\,\Psi_j(x_k)\,\omega_k = 0, \qquad 0 \le j \le N, \qquad (2.1.5)$$
where $\{x_k\}_{k=0}^{N}$ is a set of collocation points and $\{\omega_k\}_{k=0}^{N}$ are the weights of a numerical quadrature formula.
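As a concrete illustration of the discrete condition (2.1.5) with a collocation choice of test functions, the following Python sketch (my own toy example, not taken from the thesis) solves a linear Fredholm equation of the second kind on [-1, 1] with a manufactured solution; the kernel $k(x,t) = xt$, the parameter $\lambda = 1/2$ and the data $f(x) = \tfrac{2}{3}x + 1$ are my choices, made so that the exact solution is $\phi(x) = x + 1$.

import numpy as np

lam = 0.5
k = lambda x, t: x * t
f = lambda x: (2.0 / 3.0) * x + 1.0
phi_exact = lambda x: x + 1.0

N = 8
nodes, weights = np.polynomial.legendre.leggauss(N + 1)   # collocation points = quadrature points

# Collocation equations: u_j - lam * sum_i w_i k(x_j, t_i) u_i = f(x_j)
K = k(nodes[:, None], nodes[None, :])                     # K[j, i] = k(x_j, t_i)
A = np.eye(N + 1) - lam * K * weights[None, :]
u = np.linalg.solve(A, f(nodes))

print(np.max(np.abs(u - phi_exact(nodes))))               # ~ machine precision for this degenerate kernel

Because the kernel here is a low-degree polynomial, the Gauss quadrature is exact and the discrete solution matches the exact one up to rounding; for general kernels the error behaves like the interpolation and quadrature errors discussed in Chapter 3.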

2.1.3 Choice of trial and test functions

− The choice of trial/test functions is one of the features which distinguish spectral methods from other methods. The most commonly used trial/test functions are trigonometric functions or orthogonal polynomials.
Obviously, we would like our basis to have a number of properties: easy computation, fast convergence, and completeness. This means that any solution can be represented with arbitrarily great precision by taking the truncation order N sufficiently large.
The following table summarizes the choice of basis functions.

  Geometry                          Interval              Basis
  Periodic                          θ ∈ [0, 2π]           Fourier
  Non-periodic                      x ∈ [−1, 1]           Chebyshev or Legendre
  Half-line                         x ∈ [0, ∞[            Laguerre
  Real line                         x ∈ ]−∞, ∞[           Hermite

− The choice of test functions distinguishes between the three most commonly used spectral schemes, namely the Galerkin, collocation, and tau versions.

• Collocation: the test functions are translated Dirac delta functions, $\Psi_k(x) = \delta(x - x_k)$, where the $\{x_k\}$ are called collocation points. Hence, the residual is forced to zero at the points $\{x_j\}$, i.e. $R_N(x_j) = 0$.

• Galerkin: the test functions are the same as the trial functions (i.e. $\Psi_k(x) = u_k(x)$ in (2.1.4)).

• Tau: the test functions are different from the trial functions.

2.1.4 Projection operator

2.1.5 Collocation method

2.1.6 Galerkin’s method

2.2 Convergence and stability for a linear integral equation

The concepts of stability and convergence are crucial in the analysis of numerical methods for differential and integral equations. Every problem that we try to solve is based on an expression of some form or another.
To have confidence in our solution we first need to know that the expression is continuous in its inputs, so that we do not get completely different results from slight changes in the input. However, this is not enough: we also need to know whether the problem is well-conditioned or ill-conditioned.
In numerical analysis there are always two fundamental questions to consider when solving problem (2.1.4): under which conditions does $\phi_N$ converge to $\phi$ as $N \to \infty$, and how can the error $\|\phi - \phi_N\|$ be bounded?


2.2.1 Convergence Analysis

2.2.2 Stability

Stability is the property that ensures that the difference between the computed numerical solution and the exact solution remains bounded. There are three types of stability: the stability of a physical problem, the stability of a mathematical problem, and the stability of a numerical method.

♢ Stability of a physical problem: chaotic system


A problem is said to be chaotic if a small variation of the initial data results in a totally unpredictable
variation of the results. This notion of chaos, linked to the physics of a problem, is independent of the
mathematical model used and even more of the numerical method used to solve this mathematical problem.
Many problems are chaotic, such as fluid turbulence.

♢ Stability of a mathematical problem: sensitivity


A problem is said to be sensitive or poorly conditioned if a small variation of the data or parameters results
in a large variation of the results. This notion of conditioning, linked to the mathematical problem, is
independent of the numerical method used to solve it. To model a physical problem that is not chaotic, we
will build a mathematical model that is as well-conditioned as possible.

♢ Stability of a numerical method


A method is said to be unstable if it is subject to significant propagation of numerical discretization and
rounding errors. A problem can be well conditioned while the numerical method chosen to solve it is
unstable. In this case, it is imperative to change the numerical method. On the other hand, if the initial
problem is poorly conditioned, no numerical method can remedy it. We will then have to try to find a
different mathematical formulation of the same problem.

2.2.3 Integral equations and ill-posed problems

Chapter 3: Rational Legendre collocation method for resolution of quadratic integral equations

3.1 Introduction

Quadratic integral equations are a class of nonlinear integral equations with many important uses in engineering and the sciences. For example, they appear naturally in radiative transfer theory, the kinetic theory of gases, the theory of neutron transport and traffic theory (see, e.g., [13, 14, 15, 16, 17, 18] and the references therein). This is why several authors have increasingly focused on the question of the existence and uniqueness of solutions to these kinds of equations. For instance, Banaś et al. [19] studied the solvability of a nonlinear quadratic integral equation of Hammerstein type on an unbounded interval in a Banach space consisting of all real functions defined, bounded and continuous on $\mathbb{R}_+$. In [20, 21, 22], the authors investigated the existence of solutions of the Urysohn integral equation on an unbounded interval. However, to the best of our knowledge, the numerical treatment of such equations has not been explored in the literature.
In this chapter we investigate the numerical solution of quadratic Urysohn integral equations on the half-line, namely
$$u(x) = a(x) + f(x, u(x)) \int_{0}^{\infty} k(x, t, u(t))\,dt, \qquad x \in [0, \infty), \qquad (3.1.1)$$
and of the quadratic Hammerstein integral equation on an unbounded interval defined by
$$u(x) = a(x) + f(x, u(x)) \int_{0}^{\infty} k(x, t)\,g(t, u(t))\,dt, \qquad x \in \mathbb{R}_+, \qquad (3.1.2)$$
where the kernel, $a(x)$ and $f(x, \cdot)$ are given continuous functions and $u(x)$ is the unknown function. To do this,


we will proceed in the same way as in [?]: we derive the so-called rational Legendre functions, which can be obtained by combining the classical Legendre polynomials with an algebraic mapping, and then we apply the rational Legendre spectral collocation method to solve the integral equation. We recall that conditions and theorems for the existence of a unique solution of Eq. (3.1.2) were proposed by the authors of [20]. Some properties of rational Legendre functions are first presented in Section 3.2. In Section 3.5.1, we describe the rational Legendre collocation method to solve Eq. (3.1.2) and discuss the convergence of the approximate solution to the exact solution in the $L^\infty(\mathbb{R}_+)$ norm. Some numerical results demonstrating the stability of the proposed method are presented in Section 3.5.3.

3.2 Orthogonal rational Legendre functions for the semi-infinite interval

In this section, we introduce the rational Legendre functions and recall some of their basic properties. Moreover, we present function approximation using the orthogonal rational Legendre basis in a weighted space $L^2_{\rho_s}[0, \infty)$.
The well-known Legendre polynomials are orthogonal on the interval $I = [-1, 1]$ with respect to the uniform weight function. They can be determined with the help of the following recurrence formula [23]:
$$(n+1)\,P_{n+1}(y) = (2n+1)\,y\,P_n(y) - n\,P_{n-1}(y), \qquad n \ge 1.$$
Besides,
$$P_0(y) = 1, \qquad P_1(y) = y, \qquad P_n(1) = 1, \qquad P_n(-1) = (-1)^n.$$
The set of Legendre polynomials forms an orthogonal system, namely,
$$\int_{-1}^{1} P_n(y)\,P_m(y)\,dy = \frac{2}{2n+1}\,\delta_{n,m},$$
where $\delta_{n,m}$ is the Kronecker delta. Furthermore, for any function $U \in L^2(I)$, we write
$$U(y) = \sum_{j=0}^{\infty} c_j\,P_j(y) \qquad \text{with} \qquad c_j = \frac{2j+1}{2}\int_{-1}^{1} U(y)\,P_j(y)\,dy.$$
For a given positive integer $N$, let $\mathcal{P}_N$ denote the space of all algebraic polynomials of degree not exceeding $N$. We denote by $\{\sigma_i^N\}_{i=0}^{N}$ the set of $(N+1)$ Gauss-Legendre points, and


by $\{\omega_i^N\}_{i=0}^{N}$ the corresponding weights. The associated Gauss-Legendre quadrature formula is defined by
$$\int_{-1}^{1} \phi(y)\,dy = \sum_{i=0}^{N} \phi(\sigma_i^N)\,\omega_i^N, \qquad \forall \phi \in \mathcal{P}_{2N+1}.$$
Let us consider the following one-to-one invertible mapping between $x \in \mathbb{R}_+$ and $y \in I$, with $s > 0$, of the form
$$y = \theta_s(x) = \frac{x-s}{x+s}, \qquad x = \varphi_s(y) = \frac{s(1+y)}{1-y}. \qquad (3.2.1)$$
It is clear that
$$\frac{dy}{dx} = \theta_s'(x) = \frac{2s}{(x+s)^2}, \qquad \frac{dx}{dy} = \varphi_s'(y) = \frac{2s}{(1-y)^2}, \qquad (3.2.2)$$
where $s$ is a positive scaling factor. The rational Legendre functions can then be defined by
$$R_{s,n}(x) := P_n(\theta_s(x)), \qquad n = 0, 1, 2, \ldots \qquad (3.2.3)$$
They are orthogonal on the semi-infinite interval $\mathbb{R}_+$ with respect to the weight function
$$\rho_s(x) = \theta_s'(x) = \frac{2s}{(x+s)^2}, \qquad (3.2.4)$$
that is,
$$\int_{0}^{\infty} R_{s,n}(x)\,R_{s,m}(x)\,\rho_s(x)\,dx = \frac{2}{2n+1}\,\delta_{n,m}.$$
It is not hard to show that $\{R_{s,j}\}_{j=0}^{\infty}$ forms a complete basis of $L^2_{\rho_s}(\mathbb{R}_+)$. For any function $u \in L^2_{\rho_s}(\mathbb{R}_+)$, the following expansion holds:
$$u(x) = \sum_{j=0}^{\infty} \hat{u}_{s,j}\,R_{s,j}(x) \qquad \text{with} \qquad \hat{u}_{s,j} = \frac{2j+1}{2}\int_{0}^{\infty} u(x)\,R_{s,j}(x)\,\rho_s(x)\,dx. \qquad (3.2.5)$$
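The change of variables behind (3.2.4)-(3.2.5) is easy to verify numerically: since $\rho_s(x)\,dx = dy$ under $x = \varphi_s(y)$, the orthogonality integral on $[0,\infty)$ reduces to the plain Legendre orthogonality on $[-1,1]$. The short Python sketch below (my own check; the value $s = 1.5$ and the quadrature order are arbitrary choices) evaluates $R_{s,n}(x) = P_n(\theta_s(x))$ and confirms $\int_0^\infty R_{s,n}R_{s,m}\,\rho_s\,dx = \tfrac{2}{2n+1}\delta_{n,m}$ with a mapped Gauss-Legendre rule.

import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

s = 1.5
theta = lambda x: (x - s) / (x + s)            # y = theta_s(x)
phi   = lambda y: s * (1 + y) / (1 - y)        # x = phi_s(y), inverse map

def R(n, x):
    """Rational Legendre function R_{s,n}(x) = P_n(theta_s(x))."""
    return Legendre.basis(n)(theta(x))

# \int_0^inf R_{s,n} R_{s,m} rho_s dx = \int_{-1}^1 P_n(y) P_m(y) dy = 2/(2n+1) delta_{nm}
y, w = leggauss(64)
x = phi(y)
print(w @ (R(3, x) * R(5, x)))        # ~ 0
print(w @ (R(4, x) * R(4, x)), 2/9)   # both ~ 0.2222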

3.3 Rational Lagrange interpolation

First, for a given positive integer $N$, we define the finite-dimensional approximation subspace spanned by the rational Legendre functions,
$$\mathcal{P}_N^s := \{\,v \mid v(x) = \phi(\theta_s(x)),\ \forall \phi \in \mathcal{P}_N\,\}, \qquad (3.3.1)$$


where $\mathcal{P}_N$ denotes the set of all polynomials on $I$ of degree at most $N$. The set of rational Legendre-Gauss points $\{\zeta_{N,i}^s\}_{i=0}^{N}$ is defined by
$$\zeta_{N,i}^s = \varphi_s(\sigma_i^N), \qquad 0 \le i \le N. \qquad (3.3.2)$$
The associated rational Gauss-Legendre quadrature formula is defined by
$$\int_{0}^{\infty} v(x)\,\rho_s(x)\,dx = \int_{0}^{\infty} \phi(\theta_s(x))\,\rho_s(x)\,dx = \int_{-1}^{1} \phi(y)\,dy = \sum_{i=0}^{N} \phi(\sigma_i^N)\,\omega_i^N = \sum_{i=0}^{N} \phi(\theta_s(\zeta_{N,i}^s))\,\omega_i^N = \sum_{i=0}^{N} v(\zeta_{N,i}^s)\,\omega_i^N, \qquad \forall v \in \mathcal{P}_{2N+1}^s. \qquad (3.3.3)$$
The rational Lagrange basis functions are defined by the formula
$$L_{i,s}^N(x) = \prod_{j=0,\,j \ne i}^{N} \frac{\theta_s(x) - \theta_s(\zeta_{N,j}^s)}{\theta_s(\zeta_{N,i}^s) - \theta_s(\zeta_{N,j}^s)}, \qquad 0 \le i \le N,$$
and it is clear that the functions $L_{i,s}^N(x)$ satisfy
$$L_{i,s}^N(\zeta_{N,j}^s) = \delta_{i,j}.$$
For any $u \in C(\mathbb{R}_+)$, we can define the Lagrange interpolant $I_N^s u$ satisfying
$$I_N^s u \in \mathcal{P}_N^s \quad \text{such that} \quad I_N^s u(\zeta_{N,j}^s) = u(\zeta_{N,j}^s), \qquad 0 \le j \le N, \qquad (3.3.4)$$
which can be expanded as
$$I_N^s u(x) = \sum_{i=0}^{N} u(\zeta_{N,i}^s)\,L_{i,s}^N(x). \qquad (3.3.5)$$
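A small Python sketch of the interpolation operator (3.3.5) follows (my own illustration; the test function and the values of $s$ and $N$ are arbitrary choices). It builds the rational Lagrange basis in the mapped variable $y = \theta_s(x)$ and measures the interpolation error for a smooth, decaying function on the half-line.

import numpy as np

s, N = 2.0, 32
theta = lambda x: (x - s) / (x + s)
phi   = lambda y: s * (1 + y) / (1 - y)

sigma, _ = np.polynomial.legendre.leggauss(N + 1)   # Legendre-Gauss points sigma_i
zeta = phi(sigma)                                    # mapped collocation points in (0, inf)

def interpolate(u_vals, x):
    """Evaluate the rational Lagrange interpolant from the nodal values u_vals = u(zeta)."""
    y = theta(np.atleast_1d(x))[:, None]             # work in the mapped variable y = theta_s(x)
    L = np.empty((y.shape[0], N + 1))
    for j in range(N + 1):
        others = np.delete(sigma, j)
        L[:, j] = np.prod((y - others) / (sigma[j] - others), axis=1)
    return L @ u_vals

u = lambda x: x * np.exp(-x)                         # smooth, decaying test function (my choice)
xs = np.linspace(0.0, 20.0, 200)
print(np.max(np.abs(interpolate(u(zeta), xs) - u(xs))))   # decays rapidly as N grows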

The following estimate is quoted from Lemma 5.5 of [24].

Lemma 3.1 Let $\{L_{i,s}^N(x)\}_{i=0}^{N}$ be the rational Lagrange interpolation functions associated with the rational Legendre collocation points. Then
$$\|I_N^s\|_\infty := \sup_{x \in \mathbb{R}_+} \sum_{i=0}^{N} |L_{i,s}^N(x)| = O(N^{1/2}).$$


For notational convenience, we introduce
$$u(x) = u(\varphi_s(y)) := U_s(y); \qquad (3.3.6)$$
it is clear that $U_s \in C(I)$. In order to describe the approximation errors, we introduce new differential operators as follows:
$$D_x u = V_s(x)\,\frac{du}{dx}, \qquad V_s(x) := \frac{dx}{dy}, \qquad (3.3.7)$$
and an induction argument leads to
$$D_x^m u = V_s(x)\,\frac{d}{dx}\Big(V_s(x)\,\frac{d}{dx}\Big(\cdots V_s(x)\,\frac{du}{dx}\Big)\Big) = \partial_y^m U_s, \qquad m = 0, 1, \ldots \qquad (3.3.8)$$
To prove error estimates for the above scheme, we begin by defining the following weighted Hilbert space, together with some useful lemmas about rational Lagrange interpolation based on the Gauss-rational-Legendre points. For a nonnegative integer $m$, define
$$H_{\rho_s}^m(\mathbb{R}_+) = \{\,u \mid D_x^r u \in L^2_{\rho_s}(\mathbb{R}_+),\ 0 \le r \le m\,\},$$
equipped with the following semi-norm and norm:
$$|u|_m^s = \|D_x^m u\|_{L^2_{\rho_s}(\mathbb{R}_+)}, \qquad \|u\|_m^s = \Big(\sum_{r=0}^{m} \|D_x^r u\|_{L^2_{\rho_s}(\mathbb{R}_+)}^2\Big)^{1/2}.$$
Also, it is convenient to introduce the semi-norms
$$|u|_{m;N}^{\rho_s} := |u|_{H_{\rho_s}^{m;N}(\mathbb{R}_+)} = \Big(\sum_{r=\min(m,N+1)}^{m} \|D_x^r u\|_{L^2_{\rho_s}(\mathbb{R}_+)}^2\Big)^{1/2}.$$
In the following, we prove the lemma below, which estimates the error between the approximate and exact solutions.

Lemma 3.2 Assume that $u \in H_{\rho_s}^m(\mathbb{R}_+)$. Then
$$\|u - I_N^s u\|_\infty \le c\,N^{1/2-m}\,|u|_{m;N}^{\rho_s},$$
where $c$ is a positive constant independent of $N$ and $u$.

Proof. Let IN be the Lagrange interpolation operator associated with the Legendre collocation points.


From (3.3.6), we have
$$\|u - I_N^s u\|_\infty = \sup_{x \in \mathbb{R}_+} |u(x) - I_N^s u(x)| = \sup_{y \in I} |U_s(y) - I_N U_s(y)| = \|U_s - I_N U_s\|_\infty. \qquad (3.3.9)$$
Finally, according to Lemma 1 of [25], for any $U_s \in C(I)$ and $m \ge 0$,
$$\|I_N U_s - U_s\|_\infty \le c\,N^{1/2-m}\,|U_s|_{m;N}, \qquad (3.3.10)$$
which implies
$$\|u - I_N^s u\|_\infty \le c\,N^{1/2-m}\,|u|_{m;N}^{\rho_s}. \qquad (3.3.11)$$

3.4 RLCM for quadratic Urysohn integral equation

3.4.1 Principle of the method

The solution of Eq. (3.1.1) may be obtained by simple collocation, that is, by forcing the equation to hold exactly at the rational Gauss-Legendre points $\{\zeta_{N,j}^s\}_{j=0}^{N}$, namely
$$u(\zeta_{N,j}^s) = a(\zeta_{N,j}^s) + f(\zeta_{N,j}^s, u(\zeta_{N,j}^s)) \int_{0}^{\infty} k(\zeta_{N,j}^s, t, u(t))\,dt, \qquad j = 0, \ldots, N. \qquad (3.4.1)$$
The main difficulty in obtaining high-order accuracy is to compute the integral terms in (3.4.1) accurately. To overcome this difficulty, we use the rational Gauss-Legendre quadrature formula. Then equation (3.4.1) can be written as follows:
$$u(\zeta_{N,j}^s) = a(\zeta_{N,j}^s) + f(\zeta_{N,j}^s, u(\zeta_{N,j}^s)) \sum_{i=0}^{N} k_s(\zeta_{N,j}^s, \zeta_{N,i}^s, u(\zeta_{N,i}^s))\,\rho_{N,i}^s, \qquad j = 0, \ldots, N, \qquad (3.4.2)$$
where
$$k_s(x, t, u(t)) = \frac{k(x, t, u(t))}{\rho_s(t)}. \qquad (3.4.3)$$


Using $u_{s,j}^N$, $0 \le j \le N$, to approximate the function values $u(\zeta_{N,j}^s)$, and using
$$u_s^N(x) = \sum_{j=0}^{N} u_{s,j}^N\,L_{j,s}^N(x) \qquad (3.4.4)$$
to approximate the function $u(x)$, namely
$$u(\zeta_{N,j}^s) \sim u_{s,j}^N, \qquad u(x) \sim u_s^N(x), \qquad (3.4.5)$$
the discrete spectral Legendre-collocation method for the resulting equation leads to the following collocation equations:
$$u_{s,j}^N = a(\zeta_{N,j}^s) + f(\zeta_{N,j}^s, u_{s,j}^N) \sum_{i=0}^{N} k_s(\zeta_{N,j}^s, \zeta_{N,i}^s, u_{s,i}^N)\,\rho_{N,i}^s, \qquad j = 0, \ldots, N, \qquad (3.4.6)$$
which form a nonlinear system of the form
$$u = H(u), \qquad H(u) = A + F(u)\,M(u)\,W, \qquad (3.4.7)$$
where $M$, $W$, $A$ and $F$ are given by
$$M(u) = \big(k_s(\zeta_{N,j}^s, \zeta_{N,i}^s, u_{s,i}^N)\big)_{0 \le i,j \le N}, \qquad W = \operatorname{diag}\big((\rho_{N,i}^s)_{0 \le i \le N}\big),$$
$$A = \big(a(\zeta_{N,j}^s)\big)_{0 \le j \le N}, \qquad F(u) = \operatorname{diag}\big((f(\zeta_{N,j}^s, u_{s,j}^N))_{0 \le j \le N}\big),$$
and the unknown is the vector $u \equiv [u_{s,0}^N, u_{s,1}^N, \ldots, u_{s,N}^N]^T$.

To achieve a highly accurate numerical solution of (3.4.7), we apply the following iterative process:
$$u^{(k)} = H(u^{(k-1)}). \qquad (3.4.8)$$
The recurrence relation (3.4.8) is started with the initial value $u^{(0)} = A$.
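A possible implementation of the scheme (3.4.6)-(3.4.8) is sketched below in Python (my own sketch; the computations reported in this chapter were done in MATLAB). It is instantiated with the quadratic Urysohn equation of Example 4, Eq. (3.4.24), given further below; the quadrature weights used for the integral over $[0,\infty)$ are the Legendre-Gauss weights divided by $\rho_s$ at the mapped nodes, consistently with (3.3.3). To compare with the reference points of Table 3.1, the converged nodal values would still have to be interpolated back through (3.4.4).

import numpy as np

s, N = 1.5, 64
sigma, omega = np.polynomial.legendre.leggauss(N + 1)  # Legendre-Gauss nodes/weights on [-1, 1]
zeta = s * (1 + sigma) / (1 - sigma)                    # rational collocation points on (0, inf)
rho = 2 * s / (zeta + s) ** 2                           # weight rho_s at the nodes
w = omega / rho                                         # weights for \int_0^inf (.) dt, cf. (3.3.3)

a = lambda x: x * np.exp(-4 * x**2)                     # data of Example 4
f = lambda x, u: np.arctan(x + u)
k = lambda x, t, u: np.exp(-t * (x + 1.0)) * u**2

def H(u):
    """One application of the collocation operator u -> A + F(u) M(u) W of (3.4.7)."""
    integrals = k(zeta[:, None], zeta[None, :], u[None, :]) @ w
    return a(zeta) + f(zeta, u) * integrals

u = a(zeta)                                             # initial value u^(0) = A
for it in range(100):
    u_new = H(u)
    if np.max(np.abs(u_new - u)) < 1e-14:
        break
    u = u_new

print("iterations:", it, " u at the node nearest x = 0.5:", u[np.argmin(np.abs(zeta - 0.5))])

Replacing a(zeta) by a(zeta) + eps in this iteration reproduces the kind of stability experiment reported in Section 3.4.4.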

3.4.2 Error estimates

In the sequel, for the convergence analysis, we assume the following hypothesis:
$$\sup_{x,t \in \mathbb{R}_+} \frac{g(x,t)}{\rho_s(t)} < \infty.$$


Theorem 3.1 Let $u$ be the exact solution of Eq. (3.1.1) and let $u_s^N$ be the approximate solution obtained by the spectral-collocation scheme (3.4.6). If $u \in H_{\rho_s}^m(\mathbb{R}_+)$, then for $m \ge 1$,
$$\|u - u_s^N\|_\infty = O(N^{1/2-m}). \qquad (3.4.9)$$

Proof. Write the quadratic Urysohn integral equation as
$$u(x) = a(x) + f(x, u(x)) \int_{0}^{\infty} k_s(x, t, u(t))\,\rho_s(t)\,dt, \qquad (3.4.10)$$
while for the approximate solution we have
$$u_s^N(x) = I_N^s a(x) + I_N^s f(x, I_N^s u(x)) \int_{0}^{\infty} I_{N,N}^s k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt. \qquad (3.4.11)$$
Subtracting (3.4.11) from (3.4.10), we get the error equation
$$\begin{aligned}
u(x) - u_s^N(x) &= a(x) - I_N^s a(x) + f(x, u(x)) \int_{0}^{\infty} \big(k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))\big)\rho_s(t)\,dt \\
&\quad + \big(f(x, u(x)) - f(x, I_N^s u(x))\big) \int_{0}^{\infty} k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt \\
&\quad + f(x, I_N^s u(x)) \int_{0}^{\infty} \big(k_s(x, t, I_N^s u(t)) - I_{N,N}^s k_s(x, t, I_N^s u(t))\big)\rho_s(t)\,dt \\
&\quad + \big(f(x, I_N^s u(x)) - I_N^s f(x, I_N^s u(x))\big) \int_{0}^{\infty} I_{N,N}^s k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt \\
&= J_0(x) + J_1(x) + J_2(x) + J_3(x) + J_4(x), \qquad (3.4.12)
\end{aligned}$$

where
$$J_0(x) = a(x) - I_N^s a(x), \qquad (3.4.13)$$
$$J_1(x) = f(x, u(x)) \int_{0}^{\infty} \big(k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))\big)\rho_s(t)\,dt, \qquad (3.4.14)$$
$$J_2(x) = \big(f(x, u(x)) - f(x, I_N^s u(x))\big) \int_{0}^{\infty} k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt, \qquad (3.4.15)$$
$$J_3(x) = f(x, I_N^s u(x)) \int_{0}^{\infty} \big(k_s(x, t, I_N^s u(t)) - I_{N,N}^s k_s(x, t, I_N^s u(t))\big)\rho_s(t)\,dt, \qquad (3.4.16)$$
$$J_4(x) = \big(f(x, I_N^s u(x)) - I_N^s f(x, I_N^s u(x))\big) \int_{0}^{\infty} I_{N,N}^s k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt. \qquad (3.4.17)$$
By the triangle inequality, we have
$$\|u - u_s^N\|_\infty \le \sum_{k=0}^{4} \|J_k\|_\infty. \qquad (3.4.18)$$


It follows immediately from Lemma 3.2 that
$$\|J_0\|_\infty \le c\,N^{1/2-m}\,|a|_{m;N}^{\rho_s}. \qquad (3.4.19)$$
On the other hand, from assumption (iii), we have
$$\begin{aligned}
|J_1(x)| &= \Big|f(x, u(x)) \int_{0}^{\infty} \big(k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))\big)\rho_s(t)\,dt\Big| \\
&\le |f(x, u(x))| \int_{0}^{\infty} \big|k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))\big|\,\rho_s(t)\,dt \\
&\le |f(x, u(x))| \sup_{x,t \in \mathbb{R}_+} \big|k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))\big| \int_{0}^{\infty} \rho_s(t)\,dt.
\end{aligned}$$
Since $k_s : \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R} \to \mathbb{R}$ is continuous with respect to the third variable,
$$\forall \varepsilon_N > 0,\ \exists \gamma_N :\quad |u(t) - I_N^s u(t)| \le \gamma_N \ \Rightarrow\ |k_s(x, t, u(t)) - k_s(x, t, I_N^s u(t))| < \varepsilon_N; \qquad (3.4.20)$$
from assumption (ii) and Lemma 3.2, we obtain
$$\|J_1\|_\infty \le C_1\,N^{1/2-m}\,|u|_{m;N}. \qquad (3.4.21)$$
From assumption (iii), it follows that
$$\begin{aligned}
J_2(x) &= \big(f(x, u(x)) - f(x, I_N^s u(x))\big) \int_{0}^{\infty} k_s(x, t, I_N^s u(t))\,\rho_s(t)\,dt
= \big(f(x, u(x)) - f(x, I_N^s u(x))\big) \int_{0}^{\infty} k(x, t, I_N^s u(t))\,dt \\
&\le C\,|u(x) - I_N^s u(x)| \int_{0}^{\infty} g(x,t)\,h(|I_N^s u(t)|)\,dt
\le C\,\|u - I_N^s u\|_\infty\,h(\|I_N^s u\|_\infty) \sup_{x \in \mathbb{R}_+} \int_{0}^{\infty} g(x,t)\,dt.
\end{aligned}$$
Then, by Lemma 3.2,
$$\|J_2\|_\infty \le C_2\,N^{1/2-m}\,|u|_{m;N}^{\rho_s}. \qquad (3.4.22)$$
Also, by Lemma 3.2,
$$|J_3(x)| \le |f(x, I_N^s u(x))| \int_{0}^{\infty} \big|k_s(x, t, I_N^s u(t)) - I_{N,N}^s k_s(x, t, I_N^s u(t))\big|\,\rho_s(t)\,dt
\le c\,\big\|k_s(x, \cdot, I_N^s u(\cdot)) - I_{N,N}^s k_s(x, \cdot, I_N^s u(\cdot))\big\|_\infty \int_{0}^{\infty} \rho_s(t)\,dt.$$


Then
$$\|J_3\|_\infty \le C_3\,N^{1/2-m}\,\big|k_s(x, \cdot, I_N^s u(\cdot))\big|_{m;N}^{\rho_s}. \qquad (3.4.23)$$
Now, from Lemma 3.1, Lemma 3.2 and the hypothesis stated above, we can write
$$|J_4(x)| \le \big|f(x, I_N^s u(x)) - I_N^s f(x, I_N^s u(x))\big| \int_{0}^{\infty} \big|I_{N,N}^s k_s(x, t, I_N^s u(t))\big|\,\rho_s(t)\,dt
\le \|f - I_N^s f\|_\infty \sup_{x,t \in \mathbb{R}_+} \big|I_{N,N}^s k_s(x, t, I_N^s u(t))\big| \int_{0}^{\infty} \rho_s(t)\,dt.$$
Therefore
$$\|J_4\|_\infty \le C_4\,N^{1/2-m}\,|f|_{m;N}^{\rho_s} \sup_{x,t \in \mathbb{R}_+} \big|I_{N,N}^s k_s(x, t, I_N^s u(t))\big|
\le C_4\,N^{1/2-m}\,|f|_{m;N}^{\rho_s} \sup_{0 \le i,j \le N} \big|k_s(\zeta_{N,j}^s, \zeta_{N,i}^s, I_N^s u(\zeta_{N,i}^s))\big|\,\|I_N^s\|_\infty^2.$$

3.4.3 Illustrative examples

In this section, we present some typical numerical examples to illustrate our theoretical results. All computations were performed using MATLAB. In the last part, we study the stability at some points for various values of the perturbation ε for the first and second examples.

Example 4 [20] Let us consider the following quadratic Urysohn integral equation:
$$u(x) = x e^{-4x^2} + \arctan(x + u(x)) \int_{0}^{+\infty} e^{-t(x+1)}\,u^2(t)\,dt, \qquad x \in [0, \infty). \qquad (3.4.24)$$

In Table 3.1, we evaluate $u_s^N$ at some points using the RLC method with s = 1.5. We also plot the approximate solution, evaluating the max error as the difference between $u^{128}$ and $u^{64}$ at 1000 equally spaced points (Figure 3.1).

Table 3.1: Some values of $u_s^N(x)$ at selected points

 N      x = 0.5          x = 5             x = 10           x = 15
 4      0.1              0.00              0.0              0.0
 8      0.1              0.00              0.00             0.00
 16     0.191            0.00              0.001            0.000
 32     0.19115          0.0040215         0.001356         0.00057
 64     0.19115470283    0.004021573357    0.00135659880    0.000574038487
 128    0.19115470283    0.004021573357    0.00135659880    0.000574038487


Figure 3.1: Graph of $u^{64}(x)$, max error = 1.7988e-012.


Example 5 [20] Let us consider the following quadratic Urysohn integral equation:
$$u(x) = \frac{x}{x^2+16} + u(x) \int_{0}^{+\infty} \ln\!\Big(1 + \sqrt{|u(t)|}\;e^{-t(x^2+2)/(x^2+1)}\Big)\,dt. \qquad (3.4.25)$$

In Table 3.2, we evaluate $u_s^N$ at some points using the RLC method with s = 4. We also plot the approximate solution, evaluating the max error as the difference between $u^{128}$ and $u^{64}$ at 1000 equally spaced points (Figure 3.2).

Table 3.2: Some values of $u_s^N(x)$ at selected points

 N      x = 0.5        x = 10          x = 20         x = 40          x = 80
 4      0.0            0.1             0.06           0.02            0.01
 8      0.034          0.12            0.06           0.029           0.01
 16     0.03412        0.12806         0.062935       0.0296          0.01409
 32     0.0341211      0.12806851      0.06293505     0.0296506       0.01409470
 64     0.034121139    0.1280685181    0.062935051    0.0296506246    0.014094706
 128    0.034121139    0.1280685181    0.062935051    0.0296506246    0.0140947059

Figure 3.2: Graph of $u^{64}(x)$, max error = 1.2046e-007.


Example 6 [22] Let us consider the following quadratic Urysohn integral equation:
$$u(x) = x e^{-x} + \frac{\sqrt{u^2(x)+1}}{x+1} \int_{0}^{+\infty} e^{-(x+t+1)}\sqrt{1+|u(t)|}\,dt, \qquad x \in [0, \infty). \qquad (3.4.26)$$

In Table 3.3, we evaluate $u_s^N$ at some points using the RLC method with s = 2. We also plot the approximate solution, evaluating the max error as the difference between $u^{128}$ and $u^{64}$ at 1000 equally spaced points (Figure 3.3).

Table 3.3: Some values of $u_s^N(x)$ at selected points

 N      x = 0.5              x = 5                x = 10               x = 15
 8      0.501                0.03                 0.0004               0.0006
 16     0.501879             0.0341               0.0004               0.00002
 32     0.50187982           0.03418301           0.0004558            0.0000045
 64     0.5018798271510      0.0341830194573      0.000455811180       0.000004596928
 128    0.501879827151055    0.034183019457366    0.000455811180218    0.000004596928060

Figure 3.3: Graph of $u^{64}(x)$, max error = 1.5629e-012.

3.4.4 Stability

To demonstrate the stability of the examples, we consider the nonlinear system of algebraic equations (3.4.7) and investigate the effect of a perturbation ε of the input of the system (A + ε); we then observe that the output of the system does not change much. In Table 3.4 we show the stability of Example 1 for various values of the perturbation ε = 10⁻², 10⁻³ and 10⁻⁴, and the same principle is applied to the second example, presented in Table 3.5: for the various values of the perturbation ε the approximate solutions change very little.
Table 3.4: Stability results of Example 1 with s = 1.5

x         u^N          u^N (ε = 10^-2)   u^N (ε = 10^-3)   u^N (ε = 10^-4)


0.5 0.1911547 0.2022581 0.1922591 0.1912651
5 0.0040216 0.0146030 0.0050771 0.0041271
10 0.0013566 0.0115974 0.0023793 0.0014588
20 0.0002872 0.0103658 0.0012944 0.0003880
40 0.0000449 0.0100682 0.0010469 0.0001451
80 0.0000061 0.0100131 0.0010066 0.0001061

Table 3.5: Stability results of Example 2 with s = 4

x         u^N          u^N (ε = 10^-2)   u^N (ε = 10^-3)   u^N (ε = 10^-4)


10 0.1280685 0.1478543 0.1299914 0.1282602
20 0.0629351 0.0787363 0.0644728 0.0630884
30 0.0405221 0.0549253 0.0419221 0.0406617
40 0.0296506 0.0433387 0.0309789 0.0297830
50 0.0233010 0.0365493 0.0245847 0.0234290
60 0.0191572 0.0321049 0.0204100 0.0192821
70 0.0162472 0.0289750 0.0174771 0.0163698
80 0.0140947 0.0266539 0.0153067 0.0142154
90 0.0124396 0.0248650 0.0136373 0.0125589
100 0.0111282 0.0234446 0.0123142 0.0112464

3.5 RLCM for quadratic Hammerstein integral equation

3.5.1 Description of the method

The solution of Eq. (3.1.2) may be obtained by simple collocation, that is, by forcing the equation to hold exactly at the rational Gauss-Legendre points $\{\zeta_{N,j}^s\}_{j=0}^{N}$, namely
$$u(\zeta_{N,j}^s) = a(\zeta_{N,j}^s) + f(\zeta_{N,j}^s, u(\zeta_{N,j}^s)) \int_{0}^{\infty} k(\zeta_{N,j}^s, t)\,g(t, u(t))\,dt, \qquad j = 0, \ldots, N. \qquad (3.5.1)$$


The main difficulty in obtaining high-order accuracy is to compute the integral terms in (3.5.1) accurately. To overcome this difficulty, we use the rational Gauss–Legendre quadrature formula. Then equation (3.5.1) can be written as follows:
\[
u(\zeta^s_{N,j}) = a(\zeta^s_{N,j}) + f\big(\zeta^s_{N,j}, u(\zeta^s_{N,j})\big) \sum_{i=0}^{N} k(\zeta^s_{N,j}, \zeta^s_{N,i})\, g\big(\zeta^s_{N,i}, u(\zeta^s_{N,i})\big)\, w^s_{N,i}, \qquad j = 0, \dots, N, \tag{3.5.2}
\]
where
\[
w^s_{N,i} = \frac{\omega_i^N}{\rho_s(\zeta^s_{N,i})}. \tag{3.5.3}
\]
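For illustration, the following Python sketch assembles the nodes and the modified weights (3.5.3) from a standard Gauss–Legendre rule. The map to the half line and the weight function $\rho_s$ are not repeated here; they are assumed to be supplied as in the definition of the rational Legendre basis, and the helper name is ours.

\begin{verbatim}
import numpy as np

def rational_legendre_rule(N, rho_s, map_to_halfline):
    """Nodes zeta^s_{N,i} and modified weights w^s_{N,i} = omega_i^N / rho_s(zeta^s_{N,i}).

    rho_s           : callable, the weight entering (3.5.3) (assumed supplied)
    map_to_halfline : callable sending Gauss-Legendre nodes on (-1, 1) to the
                      scaled nodes on [0, +inf) (assumed supplied)
    """
    t, omega = np.polynomial.legendre.leggauss(N + 1)  # standard Gauss-Legendre rule
    zeta = map_to_halfline(t)                           # collocation points zeta^s_{N,i}
    w = omega / rho_s(zeta)                             # formula (3.5.3)
    return zeta, w
\end{verbatim}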
We use $u^N_{s,j}$, $0 \le j \le N$, to approximate the function values $u(\zeta^s_{N,j})$, and
\[
u^N_s(x) = \sum_{j=0}^{N} u^N_{s,j}\, L_{j,s}(x), \tag{3.5.4}
\]
to approximate the function $u(x)$, namely
\[
u(\zeta^s_{N,j}) \sim u^N_{s,j}, \qquad u(x) \sim u^N_s(x). \tag{3.5.5}
\]
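As an illustration of (3.5.4), the following sketch evaluates an approximation of the form $\sum_j u^N_{s,j} L_{j,s}(x)$ at arbitrary points, assuming that $L_{j,s}$ is the Lagrange basis attached to the collocation nodes. For the rational basis one would interpolate in the mapped variable rather than in $x$ directly, so this is only a schematic helper.

\begin{verbatim}
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def evaluate_uN(nodes, u_values, x):
    """Evaluate the nodal interpolant through (nodes, u_values) at points x by
    barycentric Lagrange interpolation (hypothetical helper; the thesis's rational
    basis interpolates in the mapped variable instead)."""
    return BarycentricInterpolator(nodes, u_values)(x)
\end{verbatim}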

Then, the discrete spectral Legendre-collocation method for solving the resulting equation leads to the following collocation equation:
\[
u^N_{s,j} = a(\zeta^s_{N,j}) + f(\zeta^s_{N,j}, u^N_{s,j}) \sum_{i=0}^{N} k(\zeta^s_{N,j}, \zeta^s_{N,i})\, g(\zeta^s_{N,i}, u^N_{s,i})\, w^s_{N,i}, \qquad j = 0, \dots, N, \tag{3.5.6}
\]
which is a nonlinear system of the form
\[
u = H(u), \qquad H(u) = A + F(u)\,K\,G(u)\,W, \tag{3.5.7}
\]

where $K$, $W$, $A$, $G$ and $F$ are given by
\[
K = \big(k(\zeta^s_{N,j}, \zeta^s_{N,i})\big)_{0 \le i,j \le N}, \qquad W = \operatorname{diag}\big((w^s_{N,i})_{0 \le i \le N}\big), \qquad G(u) = \big(g(\zeta^s_{N,i}, u^N_{s,i})\big)_{0 \le i \le N},
\]
\[
A = \big(a(\zeta^s_{N,j})\big)_{0 \le j \le N}, \qquad F(u) = \operatorname{diag}\big(f(\zeta^s_{N,j}, u^N_{s,j})\big)_{0 \le j \le N},
\]
and the unknown is the vector $u \equiv [u^N_{s,0}, u^N_{s,1}, \dots, u^N_{s,N}]^T$.

To achieve a highly accurate numerical solution of (3.5.7), we apply the iterative process
\[
u^{(k)} = H(u^{(k-1)}). \tag{3.5.8}
\]
Finally, the recurrence relation (3.5.8) is started from the initial value $u^{(0)} = A$.
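The following Python sketch implements the discrete scheme (3.5.6) together with the fixed-point iteration (3.5.8). The problem data and the rational quadrature rule are assumed to be supplied (for instance by the rational_legendre_rule sketch above); it is a schematic implementation, not the exact code used for the experiments.

\begin{verbatim}
import numpy as np

def solve_collocation(a, f, g, k, nodes, weights, tol=1e-14, max_iter=200):
    """Fixed-point iteration u^(k) = H(u^(k-1)) for the discrete system (3.5.6)-(3.5.8).

    a, f, g, k     : vectorized callables a(x), f(x, u), g(t, u), k(x, t)
    nodes, weights : rational Gauss-Legendre points and modified weights (3.5.3)
    """
    A = a(nodes)                                  # vector A = (a(zeta_j))
    K = k(nodes[:, None], nodes[None, :])         # matrix K = (k(zeta_j, zeta_i))
    u = A.copy()                                  # initial value u^(0) = A
    for _ in range(max_iter):
        integral = K @ (weights * g(nodes, u))    # quadrature of the integral term
        u_new = A + f(nodes, u) * integral        # componentwise evaluation of H(u)
        if np.max(np.abs(u_new - u)) < tol:       # stop when successive iterates agree
            break
        u = u_new
    return u_new
\end{verbatim}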

3.5.2 Convergence analysis

Theorem 3.2 Let $u$ be the exact solution to Eq. (3.1.2) and let $u^N_s$ be the approximate solution obtained by the spectral-collocation scheme of the previous section. If $u \in H^m_s(\mathbb{R}^+)$ with $m \ge 1$, then
\[
\|u - u^N_s\|_\infty = O(N^{1/2-m}). \tag{3.5.9}
\]

Proof. Consider the quadratic Hammerstein integral equation
\[
u(x) = a(x) + f(x, u(x)) \int_{0}^{\infty} k(x,t)\, g(t, u(t))\, dt, \tag{3.5.10}
\]
while for the approximate solution we have
\[
u^N_s(x) = I^s_N a(x) + I^s_N f\big(x, I^s_N u(x)\big) \int_{0}^{\infty} I^s_{N,N} k(x,t)\, I^s_N g\big(t, I^s_N u(t)\big)\, dt. \tag{3.5.11}
\]

Subtracting (3.5.11) from (3.5.10), we get the error equation
\begin{align*}
u(x) - u^N_s(x) ={}& a(x) - I^s_N a(x) + f(x, u(x)) \int_{0}^{\infty} \big(k(x,t) - I^s_{N,N}k(x,t)\big)\, g(t, u(t))\, dt \\
&+ \big(f(x, u(x)) - f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, u(t))\, dt \\
&+ f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, u(t)) - g(t, I^s_N u(t))\big)\, dt \\
&+ \big(f(x, I^s_N u(x)) - I^s_N f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, I^s_N u(t))\, dt \\
&+ I^s_N f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, I^s_N u(t)) - I^s_N g(t, I^s_N u(t))\big)\, dt \\
={}& J_0(x) + J_1(x) + J_2(x) + J_3(x) + J_4(x) + J_5(x), \tag{3.5.12}
\end{align*}


where
\begin{align*}
J_0(x) &= a(x) - I^s_N a(x), \tag{3.5.13}\\
J_1(x) &= f(x, u(x)) \int_{0}^{\infty} \big(k(x,t) - I^s_{N,N}k(x,t)\big)\, g(t, u(t))\, dt, \tag{3.5.14}\\
J_2(x) &= \big(f(x, u(x)) - f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, u(t))\, dt, \tag{3.5.15}\\
J_3(x) &= f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, u(t)) - g(t, I^s_N u(t))\big)\, dt, \tag{3.5.16}\\
J_4(x) &= \big(f(x, I^s_N u(x)) - I^s_N f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, I^s_N u(t))\, dt, \tag{3.5.17}\\
J_5(x) &= I^s_N f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, I^s_N u(t)) - I^s_N g(t, I^s_N u(t))\big)\, dt. \tag{3.5.18}
\end{align*}

By the triangle inequality, we have
\[
\|u - u^N_s\|_\infty \le \sum_{k=0}^{5} \|J_k\|_\infty. \tag{3.5.19}
\]
It follows immediately from Lemma 3.2 that
\[
\|J_0\|_\infty \le c\, N^{1/2-m} |a|^s_{m;N}. \tag{3.5.20}
\]

On the other hand, from assumption (ii) we have
\begin{align*}
|J_1(x)| &= \Big| f(x, u(x)) \int_{0}^{\infty} \big(k(x,t) - I^s_{N,N}k(x,t)\big)\, g(t, u(t))\, dt \Big| \\
&\le |f(x, u(x))|\, \Big| \int_{0}^{\infty} \big(k(x,t) - I^s_{N,N}k(x,t)\big)\, g(t, u(t))\, dt \Big| \\
&\le \int_{0}^{\infty} \big|k(x,t) - I^s_{N,N}k(x,t)\big|\, b(t)\, c(|u(t)|)\, dt.
\end{align*}

Then, by Lemma 3.2, we have
\[
\lim_{N \to \infty} \big|k(x,t) - I^s_{N,N}k(x,t)\big| = 0.
\]
Hence, using Proposition 4.2.3 in [26], we get
\[
\lim_{N \to \infty} \int_{0}^{\infty} \big|k(x,t) - I^s_{N,N}k(x,t)\big|\, dt = 0.
\]
This implies that for every $\varepsilon_N > 0$ there exists a positive integer $N$ (depending on $\varepsilon_N$) such that
\[
\int_{0}^{\infty} \big|k(x,t) - I^s_{N,N}k(x,t)\big|\, dt \le \varepsilon_N. \tag{3.5.21}
\]
0


By taking $\varepsilon_N = N^{1/2-m} |k(x, \cdot)|^s_{m;N}$ and using assumption (vii), we get
\[
|J_1(x)| \le K C N^{1/2-m} |k(x, \cdot)|^s_{m;N}.
\]
Then
\[
\|J_1\|_\infty \le C_1 N^{1/2-m} |k(x, \cdot)|^s_{m;N}. \tag{3.5.22}
\]

For $J_2$, we have
\begin{align*}
|J_2(x)| &= \Big| \big(f(x, u(x)) - f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, u(t))\, dt \Big| \\
&\le \big|f(x, u(x)) - f(x, I^s_N u(x))\big| \int_{0}^{\infty} \big|I^s_{N,N}k(x,t)\big|\, |g(t, u(t))|\, dt \\
&\le C\, |u(x) - I^s_N u(x)| \int_{0}^{\infty} \big|I^s_{N,N}k(x,t) + k(x,t) - k(x,t)\big|\, b(t)\, c(|u(t)|)\, dt \\
&\le C\, \|u - I^s_N u\|_\infty \left( \int_{0}^{\infty} \big|I^s_{N,N}k(x,t) - k(x,t)\big|\, b(t)\, c(|u(t)|)\, dt + \int_{0}^{\infty} |k(x,t)|\, b(t)\, c(|u(t)|)\, dt \right).
\end{align*}
From (3.5.21), assumption (vii) and Lemma 3.2 we obtain
\[
|J_2(x)| \le C N^{1/2-m} |u|^s_{m;N} \left( C N^{1/2-m} |k(x, \cdot)|^s_{m;N} + K \right).
\]
Then
\[
\|J_2\|_\infty \le C_2 N^{1/2-m} \left( |u|^s_{m;N} + |k(x, \cdot)|^s_{m;N} \right). \tag{3.5.23}
\]

We can write
\begin{align*}
|J_3(x)| &= \Big| f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, u(t)) - g(t, I^s_N u(t))\big)\, dt \Big| \\
&\le \int_{0}^{\infty} \big|I^s_{N,N}k(x,t)\big|\, \big|g(t, u(t)) - g(t, I^s_N u(t))\big|\, dt.
\end{align*}
From assumption (iv) we have
\[
\forall \varepsilon_N > 0,\ \exists \delta_N:\quad |u(t) - I^s_N u(t)| \le \delta_N \ \Longrightarrow\ |g(t, u(t)) - g(t, I^s_N u(t))| \le \varepsilon_N.
\]
By taking $\varepsilon_N = N^{1/2-m} |u|^s_{m;N}$, and using (3.5.21) and assumption (iii$'$) in [27], we obtain
\[
\|J_3\|_\infty \le C_3 N^{1/2-m} \left( |u|^s_{m;N} + |k(x, \cdot)|^s_{m;N} \right). \tag{3.5.24}
\]


Also, we can write
\begin{align*}
|J_4(x)| &= \Big| \big(f(x, I^s_N u(x)) - I^s_N f(x, I^s_N u(x))\big) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, g(t, I^s_N u(t))\, dt \Big| \\
&\le \big|f(x, I^s_N u(x)) - I^s_N f(x, I^s_N u(x))\big| \int_{0}^{\infty} \big|I^s_{N,N}k(x,t)\, g(t, I^s_N u(t))\big|\, dt \\
&\le \|f - I^s_N f\|_\infty \left( \int_{0}^{\infty} \big|I^s_{N,N}k(x,t)\, g(t, I^s_N u(t)) - k(x,t)\, g(t, I^s_N u(t))\big|\, dt + \int_{0}^{\infty} \big|k(x,t)\, g(t, I^s_N u(t))\big|\, dt \right).
\end{align*}
From (3.5.21), assumption (vii) and Lemma 3.2 we obtain
\[
|J_4(x)| \le C N^{1/2-m} |f|^s_{m;N} \left( C N^{1/2-m} |k(x, \cdot)|^s_{m;N} + K \right).
\]
Then
\[
\|J_4\|_\infty \le C_4 N^{1/2-m} \left( |f|^s_{m;N} + |k(x, \cdot)|^s_{m;N} \right). \tag{3.5.25}
\]

By Lemma 3.2 we have
\begin{align*}
|J_5(x)| &= \Big| I^s_N f(x, I^s_N u(x)) \int_{0}^{\infty} I^s_{N,N}k(x,t)\, \big(g(t, I^s_N u(t)) - I^s_N g(t, I^s_N u(t))\big)\, dt \Big| \\
&\le \int_{0}^{\infty} \big|I^s_{N,N}k(x,t)\, \big(g(t, I^s_N u(t)) - I^s_N g(t, I^s_N u(t))\big)\big|\, dt \\
&\le \|g - I^s_N g\|_\infty \int_{0}^{\infty} \big|I^s_{N,N}k(x,t) - k(x,t) + k(x,t)\big|\, dt.
\end{align*}
From (3.5.21) and assumption (iii$'$) in [27] we obtain
\[
\|J_5\|_\infty \le C_5 N^{1/2-m} \left( |g|^s_{m;N} + |k(x, \cdot)|^s_{m;N} \right). \tag{3.5.26}
\]
Finally, the statement of the theorem follows from the triangle inequality.


3.5.3 Numerical examples

In this section, we numerically test the proposed approach by solving quadratic Hammerstein integral equations on the half line. We compare the numerical results of our method for different positive scaling factors $s$, and study the stability for several values of the perturbation $\varepsilon$, in order to illustrate the effectiveness and accuracy of the proposed method.

Example 7 Consider the quadratic Hammerstein integral equation of the form:
\[
u(x) = a(x) + \sqrt{5 + u(x)^2} \int_{0}^{+\infty} e^{-(x+t+1)} \sqrt{|u(t)| + 1}\, dt, \tag{3.5.27}
\]
where $a(x)$ is chosen so that the exact solution is $u(x) = e^{-x}$. Table 1 shows the $L^\infty$ errors obtained by using the RLC$_s$-scheme described in the previous section with different positive scaling factors $s$.

N s=1 s=2 s=3 s=4


4 1.58e-02 1.31e-02 5.98e-03 1.39e-02
8 1.39e-03 5.63e-04 4.33e-04 2.52e-04
16 3.66e-05 5.68e-06 2.25e-06 7.75e-07
32 1.32e-07 3.60e-09 4.45e-10 6.25e-11
64 4.65e-12 3.38e-14 4.44e-15 8.88e-15
Table 1: Maximum absolute error of Example 1 for different degrees N and scaling factors s.
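As noted in Example 7, $a(x)$ is chosen so that $u(x) = e^{-x}$ is the exact solution. A small sketch of how such a forcing term can be evaluated follows; the helper name is ours, and the closed form in the comment follows from a short calculation with the substitution $v = e^{-t}$.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def a_exact(x):
    """Forcing term a(x) of (3.5.27) so that u(x) = exp(-x) is the exact solution."""
    integral, _ = quad(lambda t: np.exp(-(x + t + 1)) * np.sqrt(np.exp(-t) + 1), 0, np.inf)
    # closed form of the integral: exp(-(x+1)) * (2/3) * (2*sqrt(2) - 1)
    return np.exp(-x) - np.sqrt(5 + np.exp(-2 * x)) * integral
\end{verbatim}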

Example 8 [27] Consider the quadratic Hammerstein integral equation of the form
\[
u(x) = x e^{-4x} + \frac{x}{x^2 + 16} \left( x\,u(x) + \int_{0}^{+\infty} \frac{x^2 e^{-t}}{x^2 + 1} \sqrt{|u(t)|}\, dt \right). \tag{3.5.28}
\]

We apply the suggested method with different degrees $N$. The numerical results obtained for this example are given in Table 2. Also, the approximate solution $I^s_N u$ is plotted in Figure 3.4.

N s=1 s=2 s=3 s=4


4 4.01e-03 7.17e-03 2.20e-02 1.87e-02
8 1.40e-04 2.27e-04 2.86e-04 2.14e-03
16 2.15e-06 6.77e-06 1.29e-05 2.00e-05
32 3.20e-07 9.09e-07 1.67e-06 2.58e-06
64 4.23e-08 1.20e-07 2.20e-07 3.39e-07
128 4.85e-09 1.37e-08 2.52e-08 3.88e-08


Table 2: Maximum absolute error of Example 2 for different degrees N and scaling factors s.

Figure 3.4: Numerical results of RLC-scheme for Example 2 for N = 128, s = 2.

Example 9 [22] For the third example, consider the following quadratic integral equation on the half line
\[
u(x) = \frac{x}{10x^2 + 1} + \frac{u(x)^2}{x + 1} \int_{0}^{+\infty} \frac{e^{-x}(e - 1)\, u(t)}{(x + 1)(t + e)(t + 1)}\, dt. \tag{3.5.29}
\]

Applying the technique described in the previous section, for different degrees $N$ and different scaling factors $s$, we obtain the following results:


N s=1 s=2 s = 2.5 s=3


4 2.59e-02 4.51e-02 8.30e-02 1.15e-01
8 2.72e-03 1.35e-02 2.00e-02 2.10e-02
16 1.64e-05 6.81e-05 4.83e-04 1.17e-03
32 1.03e-10 2.00e-08 3.90e-07 5.26e-07
64 3.33e-16 1.39e-15 1.06e-13 1.21e-12
Table 3: Maximum absolute error of Example 3 for different degrees N and scaling factors s.
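The errors in Table 3 decay faster than any fixed algebraic order, in line with the estimate (3.5.9). A small sketch computing the observed order between successive $N$ from the $s = 1$ column of Table 3:

\begin{verbatim}
import numpy as np

# Maximum absolute errors of Example 3 for s = 1 and N = 4, 8, 16, 32, 64 (Table 3).
errs = np.array([2.59e-02, 2.72e-03, 1.64e-05, 1.03e-10, 3.33e-16])

# Observed algebraic order between successive N; increasing values indicate
# faster-than-algebraic (spectral) decay of the error.
orders = np.log2(errs[:-1] / errs[1:])
print(orders)  # approximately [3.3, 7.4, 17.3, 18.2]
\end{verbatim}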

Figure 3.5: Numerical results of RLC-scheme for Example 3 for N = 64, s = 1.


3.5.4 Stability

In order to demonstrate the stability of Examples 2 and 3, we consider the nonlinear system of algebraic equations (3.5.7) and investigate the effect of a perturbation $\varepsilon$ in the input of the system, $A + \varepsilon$; we then observe that the output of the system does not change much. Table 4 shows the stability of Example 2 for the perturbation values $\varepsilon = 10^{-2}$, $10^{-3}$ and $10^{-4}$. The same principle is applied to the third example in Table 5: for the various values of the perturbation $\varepsilon$, the approximate solutions change very little (a minimal sketch of this perturbation test is given after Table 5).

Table 4: Stability results of Example 2 with s = 2


x uN uN (ε = 10−2 ) uN (ε = 10−3 ) uN (ε = 10−4 )
1 0.02489 0.03741 0.02623 0.02504
5 0.02000 0.04369 0.02254 0.02030
10 0.01456 0.04793 0.01782 0.01492
15 0.01057 0.05513 0.01470 0.01100
20 0.00818 0.06583 0.01338 0.00871
30 0.00558 0.08099 0.01224 0.00624
50 0.00339 0.10612 0.01233 0.00427

Table 5: Stability results of Example 3 with s = 1


x uN uN (ε = 10−2 ) uN (ε = 10−3 ) uN (ε = 10−4 )
10 0.00999 0.01999 0.01099 0.01009
20 0.00500 0.01500 0.00600 0.00510
30 0.00333 0.01333 0.00433 0.00343
40 0.00250 0.01250 0.00350 0.00260
50 0.00200 0.01200 0.00300 0.00210
60 0.00167 0.01167 0.00267 0.00177
70 0.00143 0.01143 0.00243 0.00153
80 0.00125 0.01125 0.00225 0.00135
90 0.00111 0.01111 0.00211 0.00121
100 0.00100 0.01100 0.00200 0.00110
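A minimal sketch of the perturbation test described above, assuming a solver callable that returns the fixed-point solution of (3.5.7) for a given input vector (for instance, a hypothetical wrapper around the solve_collocation sketch in which $A$ is replaced by $A + \varepsilon$):

\begin{verbatim}
import numpy as np

def stability_test(solve_from_input, A, eps_values=(1e-2, 1e-3, 1e-4)):
    """Perturb the input vector A of u = A + F(u) K G(u) W and compare the outputs.

    solve_from_input : callable returning the fixed-point solution for a given
                       input vector (hypothetical wrapper around the solver above)
    """
    u_ref = solve_from_input(A)                  # unperturbed solution
    for eps in eps_values:
        u_eps = solve_from_input(A + eps)        # perturb every component by eps
        print(f"eps = {eps:.0e}: max change = {np.max(np.abs(u_eps - u_ref)):.3e}")
    return u_ref
\end{verbatim}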

Conclusion and prospects

Bibliography

[1] C. Canuto, M. Y. Hussaini, A. Quarteroni, T. A. Zang, Spectral methods: fundamentals in single domains, Springer Science & Business Media, 2007.

[2] M. Golberg, Introduction to the numerical solution of Cauchy singular integral equations, in: Numerical solution of integral equations, Springer, 1990, pp. 183–308.

[3] W. Han, K. E. Atkinson, Theoretical numerical analysis: A functional analysis framework, Springer,
2009.

[4] K. E. Atkinson, A survey of numerical methods for solving nonlinear integral equations, The Journal
of Integral Equations and Applications (1992) 15–46.

[5] M. A. Golberg, Solution methods for integral equations, Springer, 1979.

[6] H. Brunner, Collocation methods for Volterra integral and related functional differential equations, Vol. 15, Cambridge University Press, 2004.

[7] B. Moiseiwitsch, Department of Applied Mathematics and Theoretical Physics, The Queen's University of Belfast, Recent Studies in Atomic and Molecular Processes (2012) 139.

[8] I. Busbridge, Cambridge Tracts in Mathematics and Mathematical Physics, no. 50, University Press,
1960.

[9] A. D. Polyanin, A. V. Manzhirov, Handbook of mathematics for engineers and scientists, Chapman
and Hall/CRC, 2006.

[10] J. Shen, T. Tang, L.-L. Wang, Spectral methods: algorithms, analysis and applications, Vol. 41,
Springer Science & Business Media, 2011.

[11] D. S. Kim, T. Kim, S.-H. Rim, Some identities involving Gegenbauer polynomials, Advances in Difference Equations 2012 (1) (2012) 1–11.


[12] M. Abramowitz, I. A. Stegun, Handbook of mathematical functions (applied mathematics series 55),
Washington: National Bureau of Standards (1964).

[13] R. P. Agarwal, D. O’Regan, P. J. Y. Wong, Positive Solutions of Differential, Difference and Integral
equations, 1st Edition, Kluwer Academic Publishers, Dordrecht, 1999.

[14] L. W. Busbridge, The Mathematics of Radiative Transfer, 1st Edition, Cambridge Univ. Press,
Cambridge, 1960.

[15] S. Chandrasekhar, Radiative Transfer, Dover Publications, Inc, New York, 1960.

[16] K. M. Case, P. F. Zweifel, Linear Transport Theory, 1st Edition, Addison-Wesley, Reading, MA, 1967.

[17] C. Corduneanu, Integral Equations and Applications, 1st Edition, Cambridge Univ. Press, Cambridge, 1991.

[18] K. Deimling, Nonlinear Functional Analysis, 1st Edition, Springer, Berlin, Germany, 1985.

[19] J. Banaś, J. Rocha Martín, K. Sadarangani, On solutions of a quadratic integral equation of Hammerstein type, Mathematical and Computer Modelling 43 (2006) 97–104.

[20] J. Banaś, L. Olszowy, On solutions of a quadratic Urysohn integral equation on an unbounded interval, Dynamic Systems and Applications 17 (2) (2008) 255–270.

[21] M. A. Darwish, J. Banaś, E. O. Alzahrani, The existence and attractivity of solutions of an Urysohn integral equation on an unbounded interval, Abstract and Applied Analysis 2013 (2013), Article ID 147409.

[22] B. İlhan, İ. Özdemir, Existence and asymptotic behavior of solutions for some nonlinear integral equations on an unbounded interval, Electronic Journal of Differential Equations 2016 (271) (2016) 1–15.

[23] C. Canuto, M. Y. Hussaini, A. Quarteroni, T. A. Zang, Spectral methods: Fundamentals in Single Domains, 1st Edition, Springer-Verlag, Berlin, 2006.

[24] M. Hammad, R. M. Hafez, Y. H. Youssri, E. H. Doha, Exponential Jacobi–Galerkin method and its applications to multidimensional problems in unbounded domains, Applied Numerical Mathematics 157 (2020) 88–109. doi:10.1016/j.apnum.2020.05.017.


[25] E. H. Doha, M. A. Abdelkawy, A. Z. Amin, D. Baleanu, Shifted Jacobi spectral collocation method with convergence analysis for solving integro-differential equations and system of integro-differential equations, Nonlinear Analysis: Modelling and Control 24 (3) (2019) 332–352. doi:10.15388/NA.2019.3.2.

[26] R. Timoney, Chapter 4: The dominated convergence theorem and application (2018) 1–9. Accessed 17 July 2022.

[27] J. Banaś, D. O'Regan, K. Sadarangani, On solutions of a quadratic Hammerstein integral equation on an unbounded interval, Dynamic Systems and Applications 18 (2) (2009) 251.

