
NANJING INSTITUTE OF GEOPHYSICAL PROSPECTING AND INSTITUTE OF PHYSICS PUBLISHING JOURNAL OF GEOPHYSICS AND ENGINEERING

J. Geophys. Eng. 1 (2004) 134–142 PII: S1742-2132(04)75463-1

Probabilistic neural method combined with radial-bias functions applied to reservoir characterization in the Algerian Triassic province

S Chikhi and M Batouche



LIRE Laboratory, Computer Science Department, Mentouri University, Route de Aïn El Bey,
Constantine 25 000, Algeria
E-mail: slchikhi@yahoo.com

Received 30 January 2004


Accepted for publication 19 April 2004
Published 17 May 2004
Online at stacks.iop.org/JGE/1/134
DOI: 10.1088/1742-2132/1/2/005

Abstract
In this paper, we combine a probabilistic neural method with radial-bias functions in order to
construct the lithofacies of the wells DF01, DF02 and DF03 situated in the Triassic province of
Algeria (Sahara). The determination of lithofacies is a crucial problem in reservoir
characterization. Our objective is to facilitate the experts' work in the geological domain and
allow them to obtain quickly the structure and nature of the formations around the borehole. This
study intends to design a tool that helps automatic deduction from numerical data. We use a
probabilistic formalism to enhance the classification process initiated by a self-organized map
procedure. Our system gives the lithofacies, from well-log data, of the reservoir wells
concerned in a way that is easy to read by a geology expert who identifies the potential for oil
production at a given source and so forms the basis for estimating the financial returns and
economic benefits.

Keywords: lithofacies, well-log data, classification, logfacies, reservoir characterization, probabilistic formalism

1. Introduction

The prediction of lithofacies is an important issue for many geological and engineering disciplines. The determination of lithofacies, which involves rock-type identification, can be used to correlate many important characteristics of a reservoir. For petroleum reservoir characterization, the primary task is to identify the lithofacies. Generally, one identifies lithofacies through direct observation of underground cores, which are small cylindrical rock samples retrieved from wells at selected well depths. The recovery of cores is an expensive process and not always total. That is why a lower-cost method providing similar or higher accuracy is desirable. In this paper, we use differed well logging, which consists of a set of records of digital measurements obtained at different depths in the wells. This method provides indirect information about the subsurface and is far less expensive. Our purpose is to describe an automated method, based on neural networks, for predicting reservoir rock characteristics from differed well-log data. In an attempt to solve such reservoir characterization problems using differed well-logging measurements, some researchers in geosciences have recently employed statistical methods (Baddari and Nikitin 1996, Rabaute 1999, Tapias et al 2001) and artificial neural networks (ANNs) (Frayssinet 2000, Yang et al 1996). In many applications of ANNs, such as permeability prediction modelling and reservoir parameter estimation, hybrid neural networks were used (Aminzadeh et al 2000, Platon et al 2003). They have demonstrated their effectiveness in prediction, estimation and characterization. However, ANNs have several significant disadvantages. First, convergence during training is slow and there is no guarantee of reaching the user-defined acceptable error range. Second, when test data are located outside the training data range, ANNs cannot classify them; thus, the discriminating ability is not assured. ANNs adequately deal with well-bounded and stable problems, because training sets may cover the entire expected input space. But, in reservoir characterization problems, variables are generally neither well bounded nor stable. New lithofacies and new values of important rock properties are often encountered.

Works concerning the use of the self-organizing map (SOM) of Kohonen to recognize rock types in a drill hole are rather rare. Gottlib-Zeh et al (1999) have used an indexed SOM for a geological interpretation of logs. Frayssinet et al (2000) have used neural networks (multi-layer perceptrons) to predict missing log measurements in a drill hole and then used a SOM technique to reconstruct the lithologic facies of the hole.

We use a PRSOM neural network, which is a probabilistic variant of the SOM technique (Kohonen 1988), to propose a logfacies-recognizer system (geonal). The global conception scheme of the geonal system is shown in figure 1. PRSOM is an unsupervised learning algorithm, which adapts the map to a set of learning samples. This algorithm allows one to realize a partition of the data space, with each subset associated with a neuron of the map. PRSOM is a probabilistic model which associates with a neuron c of the map a spherical Gaussian density function fc; it approximates the density distribution of the data using a mixture of normal distributions (see section 5).
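The mixture model just described can be written out explicitly. The form below is a sketch assembled from the definitions given in section 5; the mixing weights $\pi_c$ are not stated at this point in the paper and are included only to make the mixture structure visible:

$$p(z) \;\approx\; \sum_{c=1}^{k} \pi_c\, f_c(z), \qquad f_c(z) \;=\; \frac{1}{(\sqrt{2\pi}\,\sigma_c)^n} \exp\!\left(-\frac{\|z - w_c\|^2}{2\sigma_c^2}\right),$$

where k is the number of neurons on the map, $w_c$ and $\sigma_c^2$ are the mean vector and variance of neuron c, and n is the dimension of the observation space.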


Figure 1. Global conception scheme of the Geonal system.

2. Logfacies and lithofacies

Well-log measurements can be classified into logfacies. Logfacies are defined as the collective set of log responses, reflecting both the rock and fluid properties, allowing discrimination among beds or sedimentary units. Logfacies commonly correspond to lithofacies when they are calibrated with core descriptions. Thus, logfacies may be constructed as surrogates for lithofacies. Well-log data (Aider 1996, Derguini 1995, Rabaute 1999) are records of geological properties of subsurface rock formations at depth retrieved by electrical, physical or radioactive devices. They permit several measurements. Classifications that are required to identify logfacies can then be used to predict lithofacies in uncored wells or uncored intervals in cored wells. Due to the heterogeneous nature of rocks, associating well-log data with lithofacies can be difficult. Carbonate lithofacies identification using log data is more demanding because they can be defined using any set of rock properties. The following well-log data are used in this paper to correlate the lithofacies: neutron porosity (NPHI), sonic (Dt), bulk density (RHOB), gamma ray (GR) and deep resistivity (Rt). We describe briefly the usefulness of each parameter used in this study for the identification of rocks.

• Neutron porosity log (NPHI). This measures the rock's reaction to fast neutron bombardment. The unit is dimensionless. The recorded parameter is an index of hydrogen for a given formation of lithology, generally of limestone (as well as sandstone or dolomite). NPHI is useful mainly for lithology identification, porosity evaluation and differentiation between liquids and gases (in combination with density data).
• Travel time or sonic log (Dt) (measured in µs ft−1). This measures variations in the speed of acoustic wave propagation according to depth. It must be run in an 'open' hole, which means before the protective casing is set. It is useful mainly for the determination of porosity in non-clayey formations and the identification of lithology (in combination with neutron and density).
• Bulk density log (RHOB) (measured in g cm−3). This measures the bulk density of a rock from its response to bombardment by medium–high energy gamma rays. The most easily measured densities are between 2 and 3 g cm−3. RHOB is useful mainly for the determination of porosity in hydrocarbon zones and the differentiation between liquids and gases (in combination with neutron data).
• Gamma ray log (GR) (measured in API (American Petroleum Institute) units). This measures the radioactivity of rocks and is useful mainly for geological correlations, depth correlation, differentiation between clean zones and clayey zones and evaluation of clay content.
• Deep resistivity (Rt) (measured in ohm m). This measures the fluid resistivities deep in the wells by using long focusing electrodes and a distant return electrode. The rock resistivity measure is a basis of all reservoir evaluations. It is useful mainly for getting an idea about the quantities of water in the rock and hence the porosity and saturation of the lithology, the nature and percentage of clay, the nature and percentage of minerals, and the texture of the rock (pore distribution). The true resistivity of a rock is therefore a result of the matrix and its fluid content.

In this study, all logs are scaled uniformly between −1 and +1 and results are given in the scaled domain. Figures 2 and 3 show the interaction between the logs used in the DF02 and DF03 wells respectively. Figure 4 shows the well-logging measurements (logfacies) for DF02.
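As an illustration of this preprocessing step, the following minimal sketch rescales each log column linearly to [−1, +1]. The paper does not state the exact scaling formula, so a per-log min–max mapping is assumed here and all names are illustrative:

```python
import numpy as np

def scale_logs(logs: np.ndarray) -> np.ndarray:
    """Rescale each column (NPHI, Dt, RHOB, GR, Rt) linearly to [-1, +1].

    logs: array of shape (n_samples, 5), one column per log measurement.
    """
    lo = logs.min(axis=0)
    hi = logs.max(axis=0)
    # Map [lo, hi] -> [-1, +1] column by column (assumes hi > lo for every log).
    return 2.0 * (logs - lo) / (hi - lo) - 1.0

# Example with a small synthetic block of five-log samples.
samples = np.array([[0.12, 60.0, 2.45, 40.0, 12.0],
                    [0.25, 80.0, 2.30, 95.0,  3.5],
                    [0.05, 55.0, 2.65, 20.0, 60.0]])
print(scale_logs(samples))
```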
3. Supervised and unsupervised classification

Neural network learning may be broadly divided into supervised and unsupervised. In supervised learning, the network learns from a training set consisting of inputs and the desired outputs. Learning is accomplished by adjusting the network weights so that the difference between the desired outputs and the network-computed outputs is minimized.

BPNNs are examples of supervised learning. Unsupervised learning (Bougrain and Alexandre 1999, Dreyfus 2002) requires only input data. During the learning process, the network weights are adjusted so that similar inputs produce similar outputs. Kohonen's self-organizing maps (SOMs) (Dreyfus 2002, Kohonen 1988) and adaptive resonance theory (ART) neural networks (Carpenter and Grossberg 2002, Chang et al 2000) are examples of unsupervised learning. They extract statistical regularities from the input data automatically rather than using desired outputs to guide the learning processes. Several researchers have employed unsupervised learning neural networks as pattern-recognizers to solve the lithofacies identification problem, e.g., SOMs and ART neural networks (Chikhi and Shout 2003, Mofazzal 2001).

Figure 2. Crossplots of Rt versus the other logs used for well DF02.

Figure 3. Crossplots of RHOB and Dt versus NPHI and GR logs for well DF03.

Table 1. Legend of rock classes.

4. The self-organizing map

To represent the facies of the drill holes DF02 and DF03, we used a self-organizing map (SOM) (Dreyfus 2002, Kohonen 1988). The SOM algorithm realizes a partition of the data space that permits us to associate each vector of the data space with a particular neuron on the map. We have used the available underground cores (only 503 points) as labels. Therefore, we are able to know the exact nature of the rocks present at the corresponding depth, which is used to label each neuron of the map. Thus, the map becomes a classifier. We have used log data from well DF02 as a learning base for training the map and then, when the network is stable and represents the data space relatively well, we reuse the map with log data from well DF03. Details of this algorithm can be found in Chikhi and Shout (2003). The best map obtained has a size of 17 × 5 (figure 5) and the lithofacies obtained for DF02 and DF03 are shown in figures 6 and 7 respectively (see also table 1).
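The paper refers to Chikhi and Shout (2003) for the details of the SOM algorithm. The sketch below is only a minimal illustration of the workflow described above — train a map on the scaled DF02 logs, then label every neuron with the majority core class among the samples it wins — and is not the authors' implementation; the learning-rate and radius schedules, the data layout and the function names are assumptions:

```python
import numpy as np

def train_som(data, rows=17, cols=5, n_iter=20000, seed=0):
    """Minimal online SOM. data: (n_samples, n_features). Returns (rows*cols, n_features) weights."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(-1, 1, size=(rows * cols, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighbourhood kernel.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(n_iter):
        z = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - z) ** 2).sum(axis=1))   # best-matching (winning) unit
        frac = t / n_iter
        lr = 0.5 * (0.01 / 0.5) ** frac                     # learning rate decaying 0.5 -> 0.01
        r0 = max(rows, cols) / 2
        radius = r0 * (1.0 / r0) ** frac                    # neighbourhood radius decaying r0 -> 1
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        k = np.exp(-d2 / (2 * radius ** 2))                 # Gaussian neighbourhood on the grid
        weights += lr * k[:, None] * (z - weights)
    return weights

def label_neurons(weights, cored_data, core_classes):
    """Give each neuron the majority rock class (integer code) among the cored samples it wins."""
    winners = np.argmin(((cored_data[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
    return {int(c): int(np.bincount(core_classes[winners == c]).argmax())
            for c in np.unique(winners)}
```

Once labelled in this way, presenting the DF03 logs to the map and reading off the label of each winning neuron yields a predicted lithofacies column, which is how the map is used as a classifier in this section.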

5. Probabilistic method combined with radial-bias functions

In this section, we consider the second algorithm, which is used to obtain a topological map. We already have a good map and we use this algorithm to refine the map obtained using the SOM algorithm. We combine the unsupervised PRSOM algorithm and radial-bias functions to solve problems for which we possess information about the expected result, such as that from underground cores, and some expertise in our application. Two layers of weights characterize the radial-bias network architecture, whereas the PRSOM algorithm relies on a topological map and performs the first learning phase taking into account a topological structure.

5.1. Principle of the algorithm

This paper builds on the good classification scheme already obtained using the SOM technique, detailed in Chikhi and Shout (2003), and we introduce here the concept of the new PRSOM technique. PRSOM introduces the notion of density to the topological map. We consider the data as obtained from different observation subsets, each governed by a normal law. Each neuron is the representative of one of these subsets. By the action of the neighbourhood, a neuron gives an account of the subset that it represents, and also of the subsets represented by its neighbours. Therefore, each neuron represents a mixture of densities. The data set is viewed as the mixture of all density mixtures represented by each neuron. Consequently, a neuron is constituted by two elements: first, a vector of five measurements that corresponds to the mean vector and, second, the variance of the normal law associated with this neuron (Anouar 1996, Chikhi and Shout 2003, Frayssinet 2000). A neuron defines a Gaussian density function. To limit calculations, we consider that the variance is identical in all directions. When an observation is presented to the network, the PRSOM algorithm calculates the activation provoked by this observation in the density function of each neuron of the map. The winning neuron is the one with the strongest activation. All neurons of the map are modified after the presentation of all observations of the training set (one cycle). To modify the parameters of a neuron c, mean and variance, PRSOM takes into account the influence of all observations of the cycle over the neurons of the map. This influence is weighted by the neighbourhood relation that exists between the winning neuron to which the observation is assigned and the neuron c. The sum of the influences exerted by the observations is normalized by the sum of the neighbourhood relations binding the neuron c to the winning neuron of each observation. As in the SOM algorithm, a 'temperature' adjusts the influence of the neighbourhood. In PRSOM, this temperature is calculated at the beginning of every cycle and remains constant throughout the cycle. Two major stages characterize the PRSOM algorithm:

• The first stage is called the competition phase. It is based on radial-basis functions and consists of choosing the winning neuron.
• The second stage is called the adaptation phase. It consists of evaluating the parameters (mean and variance) of this neuron. The mean and variance of every neuron of the map are calculated from every observation of the training set. To this end, the quadratic distance heap (between the observation z and its winning neuron) and the heap of neighbourhood relations are used.
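To make this parameterization concrete, a PRSOM neuron can be pictured as a mean vector over the five logs plus one scalar variance, together with a position on the map grid. The structure below is only illustrative and is not the authors' code:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PrsomNeuron:
    """One map unit: a mean over the five logs (NPHI, Dt, RHOB, GR, Rt) and a shared variance."""
    mean: np.ndarray = field(default_factory=lambda: np.zeros(5))
    variance: float = 1.0
    grid_pos: tuple = (0, 0)   # position on the topological map, used for the distance delta(c, g)

# A 17 x 5 map is simply a grid of such neurons.
neurons = [PrsomNeuron(grid_pos=(r, c)) for r in range(17) for c in range(5)]
```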

Figure 4. The logfacies of well DF02.

Figure 5. 3D projection of the best map (17 × 5) obtained for DF03 (with DF02 as the learning base).

Figure 6. Lithofacies obtained for DF02 by SOM.

Figure 7. Lithofacies obtained for DF03 by SOM.

Table 2. The PRSOM algorithm.

(1) Evaluate the temperature h of the cycle according to equation (1).
(2) For every observation z of the training set:
• present the observation z to the classifier;
• calculate the value of the activation produced in the density function of every neuron c, according to equation (2);
• choose the winning neuron g according to equation (3);
• read the following observation.
(3) At the end of the cycle:
For every neuron c of the map:
For every observation z of the training set:
• establish the topological distance δ(c, g) between the neuron c and the winning neuron g of this observation z;
• evaluate the neighbourhood between the winning neuron g and the neuron c (equation (4));
• add the contribution of the observation z to the heaps that constitute the numerators and denominators of equations (5) and (6).
Calculate the mean vector of the neuron c according to equation (5).
Calculate the variance of the neuron c according to equation (6).
(4) Return to (1) until all required cycles are completed.

5.2. PRSOM algorithm

First of all, it is necessary to fix some parameters:

• the shape and dimension of the map,
• the number of neurons on the map,
• the maximum temperature, which generates the neighbourhood of maximum size,
• the minimum temperature, which generates the neighbourhood of minimum size, and
• the number of cycles required.

Then each cycle of the algorithm takes place according to the principles shown in table 2. The first operation of the PRSOM training cycle consists in evaluating the temperature that will be constant during the entire cycle:

$$h(t) = h_{\max}\left(\frac{h_{\min}}{h_{\max}}\right)^{t/(\mathrm{cycle}_{\max}-1)}, \qquad (1)$$

where h(t) is the temperature at the iteration t; hmin and hmax are the minimum and maximum temperatures, parameters fixed at the beginning of the algorithm; N is the number of observations in the training set and cyclemax is the maximum number of cycles.
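As a worked example of this schedule (a sketch assuming the exponent t/(cyclemax − 1), which makes the temperature decrease geometrically from hmax at the first cycle to hmin at the last one):

```python
def temperature(t: int, h_max: float, h_min: float, cycle_max: int) -> float:
    """Temperature of cycle t, decreasing geometrically from h_max (t = 0) to h_min (t = cycle_max - 1)."""
    return h_max * (h_min / h_max) ** (t / (cycle_max - 1))

# Example: the weak schedule mentioned in section 5.3, varying from 3 to 1 over 1000 cycles.
for t in (0, 250, 500, 999):
    print(t, round(temperature(t, h_max=3.0, h_min=1.0, cycle_max=1000), 3))
```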
5.2.1. Competition phase driven by radial-bias functions. All observations of the training set are examined in order to choose the winning neuron for each of them. When we present an observation to the network, we look at the behaviour of every neuron towards it. For this, we make the hypothesis that the observation comes from the mixture of densities covered by the neuron. Here, we introduce the radial-bias functions approach, which is originally an exact interpolation technique in a multi-dimensional space. The problem of exact interpolation consists in associating each input vector with its exact desired output value (Barbaroux et al 2000). A certain number of modifications of the exact interpolation procedure give rise to neuronal models based on radial-bias functions, called radial-bias networks. The goal is to get a smoothed interpolation function for which the number of basis functions is no longer given by the size of the data set but by a certain number of representatives obtained from a data quantification. If k is the number of these representatives, for an input z the output y of such a network has the form

$$y(z) = \sum_{c=1}^{k} \lambda_c f_c(z) + \lambda_0,$$

where {fc, c = 1, . . . , k} is the set of the k basis functions and λ0 is a bias. In practice, the form of the basis functions most used is the function

$$f_c(z) = \exp\left(-\frac{\|z - W_c\|^2}{2\sigma_c^2}\right).$$

We have used a specific radial-bias function in the affectation process, which we have called Rbf (for radial-bias function), in equations (2) and (3). If a neuron is provided with a mean and variance, we can calculate the value of the activation provoked by the observation in the normal law of the neuron (equation (2)). The winning neuron is the one for which this activation is the strongest (equation (3)):

$$Rbf(z, c) = \frac{1}{(\sqrt{2\pi}\,\sigma_c)^n} \exp\left(-0.5 \sum_{i=1}^{n} \left(\frac{z[i] - w_c[i]}{\sigma_c}\right)^2\right), \qquad (2)$$

$$g = \chi(z) = \mathop{\mathrm{argmax}}_{c}\,\bigl(Rbf(z, c)\bigr), \qquad (3)$$

where
c is any neuron of the map,
wc is the mean vector associated with neuron c,
wc[i] is the ith component of the vector wc,
σc is the variance of neuron c,
z is the observation vector presented to the map input,
z[i] is the ith component of vector z,
g is the winning neuron of vector z,
n is the dimension of the observation space, in our case n = 5.
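The competition phase of equations (2) and (3) can be transcribed directly as follows (a sketch; the array layout is an assumption, and σc enters the Gaussian as the standard deviation, exactly as written in equation (2)):

```python
import numpy as np

def rbf_activation(z, means, variances):
    """Equation (2): spherical Gaussian activation of every neuron for one observation z.

    z: (n,) observation; means: (k, n) neuron mean vectors; variances: (k,) neuron variances sigma_c^2.
    Returns the k activations Rbf(z, c).
    """
    n = z.shape[0]
    sigma = np.sqrt(variances)
    # sum_i ((z[i] - w_c[i]) / sigma_c)^2 for every neuron c
    quad = ((z[None, :] - means) ** 2).sum(axis=1) / variances
    return np.exp(-0.5 * quad) / (np.sqrt(2.0 * np.pi) * sigma) ** n

def winning_neuron(z, means, variances):
    """Equation (3): the winner g is the neuron with the strongest activation."""
    return int(np.argmax(rbf_activation(z, means, variances)))
```

For long observation vectors it can be safer to compare log-activations, since the product of n one-dimensional densities can underflow; the argmax, and hence the winner, is unchanged.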
5.2.2. Adaptation phase. To proceed to the calculation of the parameters, mean and variance, of every neuron c of the map, PRSOM takes into consideration every observation of the training set. Several heaps are incremented by the impact of every observation:

• The values of each vector component of the input observation are first treated separately. The neighbourhood tie, which binds the winning neuron to the neuron c that we want to modify, weights them, and then they are added into separate heap zones. These heaps are destined for the calculation of the different components of the mean vector of neuron c (equation (5)).
• The quadratic distance heap between the observation z and its winning neuron, weighted by the neighbourhood, is destined for the calculation of the variance of neuron c.
• The heap of neighbourhood relations is used to normalize the calculation of the different components of the mean vector, as well as the calculation of the variance (equation (6)).

These values are not modified in an adaptive way throughout the treatment, as is the case for the SOM technique. At the end of each cycle, PRSOM calculates the mean and the variance of every neuron globally, so that its associated normal law regains the mixture of densities defined by itself and its neighbours.

The influence zone around a neuron depends on the topological distance. The importance of the contribution brought by an observation depends on a neighbourhood function that is a kernel function K similar to the neighbourhood function used in SOM. The metric used to establish the distance between two neurons is the topological distance, which is the difference between the indices denoting the positions of the neurons on the map. In the implementation, we used a very simplified neighbourhood function, which is given by equation (4). We weight by this neighbourhood function the contribution of the observation to the calculation of the mean and variance of the neuron according to equations (5) and (6):

$$K_{h(t)}(\delta(c, g)) = \exp\left(-0.5\,\frac{\delta(c, g)}{h(t)}\right), \qquad (4)$$

$$w_c[i] = \frac{\sum_{z} K_{h(t)}(\delta(c, \chi(z)))\, z[i]}{\sum_{z} K_{h(t)}(\delta(c, \chi(z)))}, \qquad (5)$$

$$\sigma_c^2 = \frac{\sum_{z} K_{h(t)}(\delta(c, \chi(z))) \sum_{i=1}^{n} (z[i] - w_{\chi(z)}[i])^2}{n \sum_{z} K_{h(t)}(\delta(c, \chi(z)))}, \qquad (6)$$

where
c is the neuron to modify,
g is the winning neuron of the influential observation,
h(t) is the temperature of the t cycle,
Kh(t) is the neighbourhood function at the t cycle,
δ(g, c) is the topological distance between the neuron c to modify and the winning neuron g,
wc is the mean vector associated with neuron c,
σc is the variance of neuron c.
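The end-of-cycle update of equations (4)–(6) can be sketched in batch form as below. Two details are assumptions where the text leaves them open: the topological distance δ is taken as the city-block distance between map indices, and the means produced by equation (5) are reused inside the variance update of equation (6):

```python
import numpy as np

def neighbourhood(delta, h):
    """Equation (4): simplified neighbourhood kernel for a topological distance delta and temperature h."""
    return np.exp(-0.5 * delta / h)

def adapt(data, winners, grid, h):
    """Equations (5) and (6): recompute every mean vector and variance at the end of a cycle.

    data: (N, n) observations; winners: (N,) winning-neuron index chi(z) of each observation;
    grid: (k, 2) map coordinates of the k neurons; h: temperature of the cycle.
    """
    N, n = data.shape
    # delta(c, chi(z)) for every neuron c and every observation z (city-block distance on the map).
    delta = np.abs(grid[:, None, :] - grid[winners][None, :, :]).sum(axis=2)   # (k, N)
    K = neighbourhood(delta, h)                                                # (k, N), strictly positive
    denom = K.sum(axis=1)                                                      # heap of neighbourhood relations
    means = (K @ data) / denom[:, None]                                        # equation (5)
    sq_dist = ((data - means[winners]) ** 2).sum(axis=1)                       # ||z - w_chi(z)||^2 per observation
    variances = (K @ sq_dist) / (n * denom)                                    # equation (6)
    return means, variances
```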

5.3. Learning and processing

The data of wells DF02 and DF03 are normalized between −1 and +1 (figure 2). Then the vectors corresponding to the five log measures (RHOB, Rt, NPHI, GR and Dt) are all presented (figure 1) to the SOM algorithm. We again use log data from well DF02 as a learning base for training the map and then we reuse the network, obtained in 200 cycles, with log data from well DF03. Then the PRSOM algorithm considers this map.

Initially, the means are kept constant to allow every neuron to evaluate its variance. Then, every neuron is provided with a normal law. As shown in figure 8, 40 cycles are sufficient for the quantification error to stabilize (see the next section). Then, the means are freed in order to allow a simultaneous update of variances and means. One thousand cycles are launched with a very weak temperature, varying from 3 to 1, so that the map structure is minimally disturbed. Table 3 presents the classifier performances according to the map size and the algorithm used. We divided the training set into three groups:

• Observations that correspond to zones of underground cores. We have the exact class for these data, which is necessary. We will call these 503 data the 'core set'.
• Observations that correspond to a zone for which the expert provided an approximation of the rock classes composing the facies. We will call the totality of these 742 data the 'expert set'. The expert set only concerns well DF02.
• Well-logging measures for which we had neither cores nor the evaluation of the expert. We cannot provide any quantitative appreciation of the results for these data.

Table 3 contains three columns: SOM, PRSOM V (update of variances with constant means) and PRSOM V+M (simultaneous update of variances and means). We see that establishing the variances alone does not modify the results greatly. On the other hand, the modifications brought by the simultaneous update of means and variances are important. We note that the PRSOM algorithm very often improves the performances of the topological map developed by the SOM algorithm.

Table 3. Improvement of performances by the PRSOM algorithm.

Size of     Error over the core set (503 obs) (%)      Error over the (core + expert) set (1245 obs) (%)    Total number    Number of
the map     SOM      PRSOM V    PRSOM V+M              SOM      PRSOM V    PRSOM V+M                        of neurons      neurons used
13 × 7      14.54    13.34      10.21                  32.77    34.66      29.88                             91              91
20 × 7      11.46    13.34      11.08                  36.54    35.52      30.69                            140             138
17 × 5      12.29    11.38      08.78                  36.72    35.83      25.89                             85              85
10 × 10     13.58    12.76      09.71                  37.28    35.64      30.42                            100             100
20 × 20     07.93    08.68      05.29                  33.39    28.34      26.55                            400             350
25 × 25     05.28    05.96      02.68                  28.45    24.88      19.77                            625             515

Figure 8. PRSOM quantitative error Qe.

Figure 9. Neurons on the map and the corresponding input data vectors (∗).

5.4. The numeric criteria

Table 3 shows the method's performance by considering 'qualitative' errors, i.e. by considering the geological aspect of the obtained lithofacies. Here, we measure the PRSOM performances by considering 'quantitative' errors. We evaluate the quality of the training through a criterion that we call the 'quantification error' (Qe), which permits us to appreciate the quality of the groupings. Qe is obtained from the square root of the intra-class variance. The calculation of Qe measures the distance between the neurons of the map and the observations that they represent (equation (7)). The reduction of Qe gives an account of the adaptation of the neurons to the partitions that they define. Figure 8 shows that Qe begins to stabilize from 40 cycles; we then have the certainty that the learning process is correctly performed. We stop the execution of PRSOM if Qe stabilizes completely or if the maximum number of cycles is reached:

$$Qe = \frac{\sum_{z} \sum_{i} (z[i] - w_g[i])^2}{N}, \qquad (7)$$

where
z is any observation of the training set,
g is the winning neuron for the observation z,
wg is the weight vector associated with the winning neuron g,
wg[i] is the ith component of vector wg,
N is the number of observations forming the training set.
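A minimal sketch of the quantification error of equation (7), assuming, as in the equations above, that wg denotes the mean vector of the winning neuron of each observation:

```python
import numpy as np

def quantification_error(data, means, winners):
    """Equation (7): average squared distance between each observation and its winning neuron.

    data: (N, n) observations; means: (k, n) neuron mean vectors; winners: (N,) winner index per observation.
    """
    return float(((data - means[winners]) ** 2).sum() / len(data))
```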
5.5. Convergence proof

During the execution of the algorithm, we obtain a sequence of weight sets W0, W1, . . . , Wt, . . . and a sequence of variance sets σ1, . . . , σt, . . . such that for each iteration t the cost function E verifies (Anouar 1996, Chikhi and Shout 2003)

$$E(\chi^t, W^t, \sigma^t) \;\leq\; E(\chi^{t-1}, W^t, \sigma^t) \;\leq\; E(\chi^{t-1}, W^{t-1}, \sigma^{t-1}).$$

The competition phase (or affectation phase) is executed by using the affectation function χ (equation (3)). The argmax function returns the index c for which the probability is the highest. Then, we obtain the inequality E(χ^t, W^t, σ^t) ≤ E(χ^{t−1}, W^t, σ^t). The adaptation phase (or minimization phase) is executed by using the steepest descent method, which allows us to obtain the inequality E(χ^{t−1}, W^t, σ^t) ≤ E(χ^{t−1}, W^{t−1}, σ^{t−1}). Since the function E(χ, W, σ) decreases at each iteration, the algorithm converges in a limited number of iterations. The stationary point is a local minimum of E(χ, W, σ).

6. Results and interpretation

In order to have a good idea of the expected results, an expert provided us, based on the reading of the logfacies, with an

estimate of the types of rocks present in part of well DF03. The expert reads the curves of the measurements to evaluate the class of the corresponding rocks. This correspondence is not always very well defined, because in estimating only one point of a well we might get up to four possible cases. It is thus possible for the expert to provide the 'precise' limits between the various benches of rock. These limits are sometimes evaluated with a margin of a few metres. These estimates were initially proposed according to the reading of the logfacies; they do not bring the same information as the underground cores. For this reason, an expert's opinion can be erroneous; it makes it impossible to determine in a very sensitive way the changes of rock type when the layers are very thin. However, with these estimates, it is possible to quantify the performances of our classifier: figure 8 shows that the PRSOM quantitative error Qe stabilizes between 0.22 and 0.26 from 40 cycles; figures 5 and 9 show that the neurons represent the input data space well. We present the lithofacies of the wells DF02 and DF03 obtained by using the SOM method (figures 6 and 7 respectively) and the lithofacies obtained by using the PRSOM method (figures 10 and 11 respectively). We note two points:

• The upper Triassic (2825–2890 m) is composed essentially of evaporitic rocks (salt, anhydrite, dolomitic marl, marly limestone, etc) separated by banks of dolomitic clayey, fine to coarse clayey and coarse sandstones. The lower Triassic (2890–2955 m) is composed essentially of red and green clay and argillite, quartzite and quartzitic sandstone and clay–sandstone separated by micaceous fine sandstone, chalky sand and dolomitic clay. This resulting description, well illustrated by either figures 6 and 7, or figures 10 and 11, is close enough to the global model of the wells available in the studied zone in the south of Algeria that is likely to contain hydrocarbons (Aider 1996, Aliev et al 1971).
• We have obtained the most detailed prediction using the PRSOM method.

Figure 10. Lithofacies obtained for DF02 by the PRSOM method.

Figure 11. Lithofacies obtained for DF03 by the PRSOM method.

7. Conclusion

Lithology controls strategies for reservoir management and is, with porosity, the primary key to making a reliable reservoir model. Neural network analysis is one of the latest technologies available to the petroleum industry (Aminzadeh 2003). The proposed method enhanced the performance of the geology expert in quickly finding the lithology of a given well. Many studies and analyses (lateral reservoir characterization, rock porosity, permeability, reservoir volume, hydrocarbon distribution, etc) will then be simplified. Using the SOM and PRSOM methods combined with the radial-bias function neural network technique, we have successfully estimated the lithology of the wells DF02 and DF03. The PRSOM neural network using data from three wells (DF01, DF02, DF03) provided a better and finer identification. Indeed, the layers of rock found in the facies of figures 10 and 11 are thinner than those found in the facies of figures 6 and 7. Some points on log selection and expert interaction may be improved. We also need some precise information about the exact situation of the wells. The results described here open the field to neural research for log data study, generalization of neural networks to a drilling field and, possibly, construction of a universal classification tool. We also think that if we quantitatively combine seismic and well-log data with stratigraphical information (Adegbenga 2003, Jerry 2003), we would generate reliable lithology and rock-property models. It is also interesting to consider the uncertainty (Claude 2003) related to such earth science applications by using neural-fuzzy (Ouenes 2000, Wong Kok 1999), rule-based (Gedeon et al 2002, Tamhane et al 2002) or fuzzy-ARTMAP (Wong et al 1996) approaches.

By implementing our system, we have demonstrated that it is possible to predict a global lithofacies as in most works using different techniques (Ouenes 2000, Wong Kok 1999, Wong et al 1996, Yang et al 1996), but we can also detail
the stratigraphy of a specific part of a well (precise age and particular depths) to refine the different types of rock. This is especially useful in the characterization and management of reservoirs.

Acknowledgment

The authors would like to thank the technical direction of ENAFOR (Entreprise NAtionale de FORage) for the data provided.

References

Adegbenga O E 2003 High resolution sequence stratigraphic and reservoir characterization studies of D-07, D-08 and E-01 Sands, Block 2 Meren Field, Offshore Niger Delta AAPG Conf. and Exhibition (Barcelona, Spain, 21–24 September)
Aider H 1996 Contribution des diagraphies différées à l'évaluation de paramètres des réservoirs Second Scientific and Technical Days of the Sonatrach (Algiers, 21–24 April) pp 115–30
Aliev M et al 1971 Structures Géologiques et Perspectives en Pétrole et en Gaz du Sahara Algérien (Algiers: Sonatrach)
Aminzadeh F 2003 Challenging problems in the oil industry and applications of AI and soft computing Int. Joint Conf. Artificial Intelligence (Mexico, 9–15 Aug.)
Aminzadeh F, Barhen J, Glover C W and Toomarian N B 2000 Reservoir parameter estimation using hybrid neural network Comput. Geosci. 26 869–75
Anouar F 1996 Modélisation probabiliste des cartes auto-organisées: application en classification et en régression PhD Thesis CNAM, Paris, chapter 3
Baddari K and Nikitin A A 1996 Statistical processing of geophysical data Second Scientific and Technical Days of the Sonatrach (Algiers, 21–24 April) pp 205–20
Barbaroux et al 2000 Generalized fractal dimensions: equivalences and basic properties Journal de Mathématiques Pures et Appliquées
Bougrain L and Alexandre F 1999 Unsupervised connectionist algorithms for clustering an environmental data set: a comparison Neurocomputing 28 177–89
Carpenter G A and Grossberg S 2002 Adaptive resonance theory The Handbook of Brain Theory and Neural Networks 2nd edn, ed M A Arbib (Cambridge, MA: MIT Press)
Chang H et al 2000 Lithofacies identification using multiple adaptive resonance theory neural networks and group decision expert system Comput. Geosci. 26 591–601
Chikhi S and Shout H 2003 Using probabilistic neural networks to construct well facies WSEAS Trans. Syst. 2 839–43
Claude C S 2003 Practical analysis for reservoir uncertainty AAPG Conf. and Exhibition (Barcelona, Spain, 21–24 September)
Derguini K 1995 Applications sédimentologiques des diagraphies Algerian Geol. J. 1 29
Dreyfus C 2002 Réseaux de Neurones: Méthodologie et Applications (Paris: Eyrolles) chapter 7
Frayssinet D 2000 Utilisation des réseaux de neurones en traitement des données de diagraphies: prédiction et reconstitution de faciès lithologiques Master Thesis CNAM, Paris
Frayssinet D, Thiria S and Badran F 2000 Use of neural networks in log data processing: prediction and rebuilding of lithologic facies EAGE Conf. (Paris, France, 6–8 November 2000) pp 1–8
Gedeon T D, Kuo H and Wong P M 2002 Rule extraction using fuzzy clustering for a sedimentary rock data set Int. J. Fuzzy Syst. 4 600–5
Gottlib-Zeh S, Briqueu L and Veillerette A 1999 Indexed self-organization map: a new calibration system for a geological interpretation of logs Proc. IAMG '99 vol 6 (Norway: Lippard, Naess and Sinding-Larsen) pp 183–8
Jerry F L 2003 Applying sequence stratigraphic methods to constructing petrophysical models AAPG Conf. and Exhibition (Barcelona, Spain, 21–24 September)
Kohonen T 1988 Self-Organization and Associative Memory (Springer Series in Information Sciences vol 8) 2nd edn (Berlin: Springer)
Mofazzal H B 2001 An intelligent system's approach to reservoir characterization in Cotton Valley MSc Thesis West Virginia University
Ouenes A 2000 Practical application of fuzzy logic and neural networks to fractured reservoir characterization Comput. Geosci. 26 953–62
Platon E, Gary A and Glenn J 2003 Pattern matching in facies analysis from well log data—a hybrid neural network-based application AAPG Conf. and Exhibition (Barcelona, Spain, 21–24 September)
Rabaute A 1999 Obtenir une représentation en continu de la lithologie et de la minéralogie. Exemples d'applications du traitement statistique de données de diagraphie aux structures sédimentaires en régime de convergence de plaques PhD Thesis Montpellier II University
Tamhane D, Wong P M and Aminzadeh F 2002 Integrating linguistic descriptions and digital signals in petroleum reservoirs Int. J. Fuzzy Syst. 4 586–91
Tapias O, Soto C P, Perez H H and Bejarano A 2001 Reservoir engineer and artificial intelligence techniques for data analysis SPE Asia Pacific Oil and Gas Conf. and Exhibition (Jakarta, Indonesia) pp 17–19
Wong Kok W 1999 A neural fuzzy approach for well log and hydrocyclone data interpretation PhD Thesis Curtin University of Technology, chapter 3
Wong P M, Gedeon T D and Taggart I J 1996 Fuzzy ARTMAP: a new tool for lithofacies recognition AI Applications 10 29–39
Yang H, Chen H C and Fang J H 1996 Determination of carbonate lithofacies using hierarchical neural networks Proc. Artificial Neural Networks in Engineering Conf. (ANNIE '96) (St. Louis, MO, USA) pp 481–6
