2004 Probabilistic Neural Method Combined
Abstract

In this paper, we combine a probabilistic neural method with radial basis functions in order to construct the lithofacies of the wells DF01, DF02 and DF03, situated in the Triassic province of Algeria (Sahara). The determination of lithofacies is a crucial problem in reservoir characterization. Our objective is to facilitate the experts' work in the geological domain and to allow them to quickly obtain the structure and nature of the terrain around the drilling site. This study intends to design a tool that supports automatic deduction from numerical data. We use a probabilistic formalism to enhance the classification process initiated by a self-organized map procedure. From well-log data, our system produces the lithofacies of the reservoir wells concerned in a form that is easy to read by a geology expert, who identifies the potential for oil production at a given source and so forms the basis for estimating the financial returns and economic benefits.
disadvantages. First, convergence during training is slow and there is no guarantee of reaching the user-defined acceptable error range. Second, when test data are located outside the training data range, ANNs cannot classify them; thus, their discriminating ability is not assured. ANNs deal adequately with well-bounded and stable problems, because training

Figure 1. Global conception scheme of the Geonal system. [Diagram: data file (NPHI, Dt, RHOB, Rt, GR) → PRSOM → Facies.]

Underground cores are used in this paper to correlate the lithofacies with five well logs: neutron porosity (NPHI), sonic (Dt), bulk density (RHOB), gamma ray (GR) and deep resistivity (Rt). We briefly describe the usefulness of each parameter used in this study for the identification of rocks.

• Neutron porosity log (NPHI). This measures the rock's reaction to fast neutron bombardment. The unit is dimensionless. The recorded parameter is an index of hydrogen for a given formation of lithology, generally of limestone (as well as sandstone or dolomite). NPHI is useful mainly for lithology identification, porosity evaluation and differentiation between liquids and gases (in combination with density data).
• Travel time or sonic log (Dt) (measured in µs ft⁻¹). This
S Chikhi and M Batouche
Probabilistic neural method for reservoir characterization
neuron. The mean and variance of every neuron of the map are calculated at each observation of the training set. To this end, the quadratic distance between the observation z and its winning neuron, together with the neighbourhood relations, are used.

5.2. PRSOM algorithm

First of all, it is necessary to fix some parameters:

• the shape and dimension of the map,
• the number of neurons on the map,
• the maximum temperature that generates the neighbourhood of minimum size, and
• the number of cycles required.

Then each cycle of the algorithm takes place according to the principles shown in table 2. The first operation of the PRSOM training cycle consists in evaluating the temperature, which remains constant during the entire cycle:

    h(t) = hmax (hmin/hmax)^(t/(N·cyclemax − 1)),    (1)

where h(t) is the temperature at iteration t; hmin and hmax are the minimum and maximum temperatures, parameters fixed at the beginning of the algorithm; N is the number of observations in the training set; and cyclemax is the maximum number of cycles.

5.2.1. Competition phase driven by radial basis functions. All observations of the training set are examined in order to choose the winning neuron for each of them. When we present an observation to the network, we look at the behaviour of every neuron towards it. For this, we make the hypothesis that the observation is drawn from the mixture of densities covered by the neurons. Here, we introduce the radial basis function approach, which is originally an exact interpolation technique in a multi-dimensional space. The problem of exact interpolation consists in associating each input vector with its exact desired output value (Barbaroux et al 2000). A number of modifications of the exact interpolation procedure give rise to neuronal models based on radial basis functions, called radial basis networks. The goal is to obtain a smoothed interpolation function in which the number of basis functions is no longer given by the size of the data set but by a certain number of representatives obtained from a data quantification. If k is the number of these representatives, then for an input z the output y of such a network has the form

    y(z) = λ0 + Σ_{c=1..k} λc fc(z),

where {fc, c = 1, ..., k} is the set of the k basis functions and λ0 is a bias. In practice, the most commonly used form of basis function is

    fc(z) = exp(−‖z − Wc‖² / (2σc²)).

We have used a specific radial basis function, which we have called Rbf, in the affectation process of equations (2) and (3). If a neuron is provided with a mean and a variance, we can calculate the value of the activation provoked by the observation under the normal law of the neuron (equation (2)). The winning neuron is the one for which this activation is the strongest (equation (3)):

    Rbf(z, c) = (1 / (√(2π) σc)^n) exp(−0.5 Σ_{i=1..n} ((z[i] − wc[i]) / σc)²),    (2)

    g = χ(z) = argmax_c Rbf(z, c),    (3)

where
c is any neuron of the map
wc is the mean vector associated with neuron c
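The competition phase described above can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions, not the authors' implementation: the map is represented as a list of neuron mean vectors with one standard deviation per neuron, and all names (temperature, rbf_activation, winning_neuron) and toy values are ours.

```python
import math

def temperature(t, h_min, h_max, n_obs, cycle_max):
    """Temperature at iteration t, decaying from h_max to h_min (equation (1))."""
    return h_max * (h_min / h_max) ** (t / (n_obs * cycle_max - 1))

def rbf_activation(z, w, sigma):
    """Activation of one neuron under its normal law (equation (2)).
    z and w are length-n vectors; sigma is the neuron's standard deviation."""
    n = len(z)
    quad = sum(((zi - wi) / sigma) ** 2 for zi, wi in zip(z, w))
    return math.exp(-0.5 * quad) / (math.sqrt(2 * math.pi) * sigma) ** n

def winning_neuron(z, means, sigmas):
    """Competition phase (equation (3)): index of the neuron whose
    activation provoked by observation z is the strongest."""
    activations = [rbf_activation(z, w, s) for w, s in zip(means, sigmas)]
    return activations.index(max(activations))

# Toy map of three neurons in a 2D input space (values are illustrative only).
means = [(0.0, 0.0), (1.0, 1.0), (3.0, 3.0)]
sigmas = [0.5, 0.5, 0.5]
print(winning_neuron((0.9, 1.1), means, sigmas))  # -> 1, the neuron nearest z
```

With equal variances, the strongest activation coincides with the smallest quadratic distance, so the Rbf competition reduces to the classical SOM winner selection in that special case.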
appreciate the quality of the regroupings. Qe is obtained from the square root of the intra-class variance. The calculation of Qe measures the distance between the neurons of the map and the observations that they represent (equation (7)). The reduction of Qe gives an account of the adaptation of the neurons to the partitions that they define. Figure 8 shows that Qe begins to stabilize from 40 cycles onwards; we then have confidence that the learning process has been performed correctly. We stop the execution of PRSOM if Qe stabilizes completely or if the maximum number of cycles is reached.

    Qe = √( (1/N) Σ_z Σ_i (z[i] − wg[i])² ),    (7)

where
z is any observation of the training set
g is the winning neuron for the observation z
wg is the weight vector associated with the winning neuron g
wg[i] is the ith component of the vector wg
N is the number of observations forming the training set.

5.5. Convergence proof

During the execution of the algorithm, we obtain a sequence of weight sets W^0, W^1, ..., W^t, ... and a sequence of variance sets σ^1, ..., σ^t, ... such that for each iteration t the cost function E verifies (Anouar 1996, Chikhi and Shout 2003)

    E(χ^t, W^t, σ^t) ≤ E(χ^{t−1}, W^t, σ^t) ≤ E(χ^{t−1}, W^{t−1}, σ^{t−1}).

The competition phase (or affectation phase) is executed by using the affectation function χ (equation (3)). The argmax function returns the index c for which the probability is the highest; we thus obtain the inequality E(χ^t, W^t, σ^t) ≤ E(χ^{t−1}, W^t, σ^t). The adaptation phase (or minimization phase) is executed by using the steepest descent method, which gives the inequality E(χ^{t−1}, W^t, σ^t) ≤ E(χ^{t−1}, W^{t−1}, σ^{t−1}). Since the function E(χ, W, σ) decreases at each iteration, the algorithm converges in a limited number of iterations. The stationary point is a local minimum of E(χ, W, σ).

6. Results and interpretation

In order to have a good idea of the expected results, an expert provided us, based on the reading of the logfacies, with an
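The quantization error Qe of equation (7) is straightforward to compute once every observation has been assigned its winning neuron by the competition phase. The following is a minimal sketch, not the authors' code; the function and variable names are ours.

```python
import math

def quantization_error(observations, winners, weights):
    """Qe of equation (7): square root of the mean, over N observations,
    of the squared distance between each observation z and the weight
    vector w_g of its winning neuron g."""
    total = 0.0
    for z, g in zip(observations, winners):
        total += sum((zi - wi) ** 2 for zi, wi in zip(z, weights[g]))
    return math.sqrt(total / len(observations))

# Toy data: two observations, each already assigned a winning neuron.
observations = [(1.0, 2.0), (3.0, 4.0)]
winners = [0, 1]                      # index of the winning neuron per observation
weights = [(1.0, 2.0), (3.0, 5.0)]    # neuron weight (mean) vectors
print(quantization_error(observations, winners, weights))  # -> sqrt(0.5) ≈ 0.707
```

Monitoring this value at the end of each cycle, and stopping when it stabilizes, reproduces the stopping criterion described above.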
7. Conclusion

the stratigraphy of a specific part of a well (precise age and particular depths) to refine the different types of rock. This is especially useful in the characterization and management of reservoirs.

Acknowledgment

The authors would like to thank the technical direction of ENAFOR (Entreprise NAtionale de FORage) for the data provided.

References

Adegbenga O E 2003 High resolution sequence stratigraphic and
Derguini K 1995 Applications sédimentologiques des diagraphies Algerian Geol. J. 1 29
Dreyfus C 2002 Réseaux de Neurones: Méthodologie et Applications (Paris: Eyrolles) chapter 7
Frayssinet D 2000 Utilisation des réseaux de neurones en traitement des données de diagraphies: prédiction et reconstitution de faciès lithologiques Master Thesis CNAM, Paris
Frayssinet D, Thiria S and Badran F 2000 Use of neural networks in log data processing: prediction and rebuilding of lithologic facies EAGE Conf. (Paris, France, 6–8 November 2000) pp 1–8
Gedeon T D, Kuo H and Wong P M 2002 Rule extraction using fuzzy clustering for a sedimentary rock data set Int. J. Fuzzy Syst. 4 600–5
Gottlib-Zeh S, Briqueu L and Veillerette A 1999 Indexed self-organization map: a new calibration system for a geological interpretation of logs Proc. IAMG '99 vol 6 (Norway: Lippard, Naess and Sinding-Larsen) pp 183–8
Jerry F L 2003 Applying sequence stratigraphic methods to