Journal of Petroleum Science and Engineering: Hossein Kaydani, Ali Mohebbi
article info

Article history:
Received 22 May 2012
Accepted 1 November 2013
Available online 21 November 2013

Keywords:
permeability
neural network
cuckoo optimization algorithm
particle swarm optimization
imperialist competitive algorithm
well logs data

abstract

This paper presents a novel approach to permeability prediction by combining the cuckoo, particle swarm and imperialist competitive algorithms with the Levenberg–Marquardt (LM) neural network algorithm in one of the heterogeneous oil reservoirs in Iran. First, the topology and parameters of the Artificial Neural Network (ANN), as decision variables, were designed without an optimization method. Then, in order to improve the effectiveness of forecasting when the ANN was applied to the permeability prediction problem, the design was performed using the Cuckoo Optimization Algorithm (COA). The validation test on data from a new well demonstrated that the trained COA–LM neural model can efficiently accomplish permeability prediction. Also, the comparison of COA with the particle swarm optimization and imperialist competitive algorithms showed the superiority of COA in fast convergence and in achieving the best optimum solution.

© 2013 Elsevier B.V. All rights reserved.
H. Kaydani, A. Mohebbi / Journal of Petroleum Science and Engineering 112 (2013) 17–23
⁎ Corresponding author. E-mail addresses: amohebbi2002@yahoo.com, amohebbi@uk.ac.ir (A. Mohebbi).
http://dx.doi.org/10.1016/j.petrol.2013.11.009

1. Introduction

Permeability prediction is one of the main challenges in reservoir engineering: engineering methods for reservoir modeling and management cannot be proposed without knowledge of the actual permeability values (Biswas et al., 2003). Reservoir rock permeability cannot be measured directly, with the exception of direct measurement on core plugs. However, direct measuring methods are expensive and time-consuming (Mohaghegh et al., 1997). In recent years, intelligent techniques such as Artificial Neural Networks (ANNs) have been increasingly applied to predict reservoir properties from well log data (Mohaghegh et al., 1994). Moreover, previous investigations have indicated that neural networks can predict formation permeability with good accuracy from geophysical well log data, even in highly heterogeneous reservoirs (Mohaghegh et al., 1996; Aminian et al., 2000).

In spite of this wide range of applications, a significant amount of time and effort must be expended to find the optimum or near-optimum structure of a neural network for a desired task. To mitigate this deficiency, the design of neural networks using optimization algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been proposed (Boozarjomehry and Svrcek, 2001; Wang, 2005; Zhang et al., 2007; Kaydani et al., 2011). Recently, a new evolutionary algorithm inspired by the special lifestyle of cuckoo birds, called the Cuckoo Optimization Algorithm (COA), was proposed by Rajabioun (2011). COA mimics the breeding behavior of cuckoos, where each individual searches for the most suitable nest in which to lay an egg in order to maximize the egg's survival rate, which is an efficient search pattern. Application of the COA to different optimization problems has proven its capability to deal with difficult, especially multi-dimensional, problems (Rajabioun, 2011).

In this study, COA and the LM algorithm were combined to form a two-stage learning algorithm for optimizing the parameters of a feed-forward neural network. The proposed method was applied to permeability prediction in one of the heterogeneous oil reservoirs in Iran. The results of the hybrid model were then compared with experimental measurements from new well data to assess the efficiency of the present algorithm. Moreover, the optimal network design was also carried out with Particle Swarm Optimization and the Imperialist Competitive Algorithm (ICA), and their results were compared with the cuckoo optimization algorithm for designing the neural network structure with the LM training algorithm.

2. Artificial neural networks

An artificial neural network (ANN) is an especially efficient algorithm for approximating any function with a finite number of discontinuities by learning the relationships between input and output vectors (Ganguly, 2003; Richon and Laugier, 2003). Feed-forward neural networks have an input layer, an output layer and, in most applications, one hidden layer. The numbers of inputs and outputs of the network are determined by considering the
characteristics of the application. Each neuron of a layer is generally connected to the neurons in the preceding layer. A neuron has two components: (1) a weighted sum, which performs a weighted summation of its inputs with components (X_1, X_2, X_3, …, X_n), i.e., S = Σ W_i X_i + b, where b is the bias of the network; and (2) a linear, nonlinear or logic function, which gives an output corresponding to S. In general, the output from neuron j in layer k can be calculated by the following equation:

u_jk = F_k ( Σ_{i=1}^{N_(k−1)} w_ijk u_i(k−1) + b_jk )        (1)

Coefficients w_ijk and b_jk are the connection weights and biases of the network, respectively; they are the fitting parameters of the model, and F_k is the transfer function of layer k. The output of the network is created at the output layer. Then the network's output pattern is compared with the target vector according to the Mean Squared Error (MSE). The MSE of the neural network can be defined as follows:

MSE = (1/n) Σ_{i=1}^{n} (O_i − T_i)²        (2)

where O_i is the desired output, T_i is the network output, and n is the number of data in the training data set or the cross-validation data set (Demuth et al., 2006).

For neural networks based on supervised learning, the data are usually split into a training set, a validation set, and a test set. Various networks are trained by minimization of an appropriate error function defined with respect to the training set. The performance of the networks is then compared by evaluating the error function using the training and validation sets, and the network having the smallest error with respect to the validation set is selected (Niculescu, 2003).

3. Cuckoo optimization algorithm

In COA, each candidate solution is an array of problem variables called a habitat:

habitat = [X_1, X_2, …, X_Nvar]        (3)

The profit of a habitat is obtained by evaluating a profit function f_p at the habitat, so

profit = f_p(habitat) = f_p(X_1, X_2, …, X_Nvar)        (4)

From this point of view, the entire algorithm searches to maximize a profit function. To use COA in cost minimization problems, one can simply maximize the following profit function:

profit = −cost(habitat) = −f_c(X_1, X_2, …, X_Nvar)        (5)

In nature, each cuckoo lays from 5 to 20 eggs. These values are used as the upper and lower limits of egg dedication to each cuckoo at different iterations. Another habit of real cuckoos is that they lay eggs within a maximum distance from their habitat, which is called the Egg Laying Radius (ELR) and is defined as (Rajabioun, 2011)

ELR = α (number of current cuckoo's eggs / total number of eggs) (var_hi − var_low)        (6)

where var_hi and var_low are the upper and lower limits of the variables, respectively, and α is an integer supposed to handle the maximum value of the ELR.

Each cuckoo starts laying eggs randomly in some other host birds' nests within her ELR. Some of these eggs, which are more similar to the host bird's eggs, have the opportunity to grow up and become mature cuckoos; the other eggs are detected by the host birds and are killed. When young cuckoos grow and become mature, they immigrate to new and better habitats. After the cuckoo groups are formed in different areas with the K-means clustering method, the society with the best profit value is selected as the goal point for the other cuckoos to immigrate towards. Fig. 1 shows the movement of cuckoos towards the destination habitat. In this movement, λ and φ are random numbers, which are defined in Rajabioun (2011).

Fig. 1. Immigration of a sample cuckoo toward the goal habitat (Rajabioun, 2011).
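To make Eqs. (3)–(6) concrete, the sketch below runs one generation of egg laying and immigration for a small cuckoo population. It is a minimal illustration, not Rajabioun's full algorithm: the K-means grouping, population control and the deviation angle φ are omitted, and the quadratic cost function, variable bounds and value of α are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VAR = 3                        # dimension of the habitat, Eq. (3)
VAR_LOW, VAR_HI = -5.0, 5.0      # assumed variable bounds

def cost(habitat):
    # Stand-in cost f_c; COA maximizes profit = -cost, Eq. (5).
    return float(np.sum(habitat ** 2))

def profit(habitat):
    return -cost(habitat)

def elr(n_eggs, total_eggs, alpha=5.0):
    # Egg Laying Radius, Eq. (6).
    return alpha * (n_eggs / total_eggs) * (VAR_HI - VAR_LOW)

def lay_eggs(habitat, n_eggs, total_eggs):
    # Eggs are laid uniformly within the ELR around the habitat.
    r = elr(n_eggs, total_eggs)
    eggs = habitat + rng.uniform(-r, r, size=(n_eggs, N_VAR))
    return np.clip(eggs, VAR_LOW, VAR_HI)

def immigrate(habitat, goal, lam=None):
    # Move a random fraction lambda of the way toward the goal (Fig. 1);
    # the deviation angle phi of the full algorithm is omitted here.
    lam = rng.uniform(0.0, 1.0) if lam is None else lam
    return habitat + lam * (goal - habitat)

# One generation for a tiny population of cuckoos.
habitats = rng.uniform(VAR_LOW, VAR_HI, size=(10, N_VAR))
eggs_per_cuckoo = rng.integers(5, 21, size=10)   # each cuckoo lays 5-20 eggs
total = int(eggs_per_cuckoo.sum())

all_eggs = np.vstack([lay_eggs(h, int(k), total)
                      for h, k in zip(habitats, eggs_per_cuckoo)])
goal = all_eggs[np.argmax([profit(e) for e in all_eggs])]  # best habitat found
habitats = np.array([immigrate(h, goal) for h in habitats])
```

In a full run, these two steps (lay eggs within the ELR, then immigrate toward the best habitat) repeat until the profit stops improving.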
The prototype oil reservoir is located in the southwest of Iran. With the data acquired (NISOC, 2006), the reservoir can be classified as a structurally complex and heterogeneous reservoir.

4.1. Reservoir history

All 68 wells of the mentioned reservoir that were drilled until 2012 have been logged. Therefore, the petrophysical data of almost all the wells were available. In this field only six wells were cored in the reservoir layer (see Fig. 2). It should be noted that core data are often available from only a few wells in a reservoir, while well logs are available from the majority of the wells. Thus, evaluation of permeability from well log data represents a significant technical as well as economical advantage (NISOC, 2006).

In this study, six wells with suitable distances from each other were selected, for which all of the essential information, including well logs and core permeability, was available. Among the available petrophysical data, the inputs to the network are Spectral Gamma Ray (SGR), Water Saturation (SWT), Total Porosity (PHIT), Bulk Density (RHOB), Sonic Porosity (DT), Neutron Porosity (NPHI) and depth, while the output is the permeability values from the core analysis. SGR is a valid indicator of shale regions and SWT indicates a permeable region; therefore, we included them in the input vector. Also, the porosity logs, namely PHIT, RHOB, DT and NPHI, were selected as inputs according to a previous study (Biswas et al., 2003) that observed the correlation between porosity and permeability. Furthermore, to account for the effect of the vertical indicator and overburden pressure on permeability, the depth of each of the cores was included. In Fig. 3 the inputs and output of the network are shown graphically.
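The seven-input, one-output mapping described above can be sketched as a feature-assembly step. The curve ordering and the min–max normalization below are illustrative assumptions; the study normalizes its data (see Table 1), but the exact scheme is not given in this section.

```python
import numpy as np

# Assumed ordering for the seven well-log inputs named in the text;
# the exact curve names in the project database are not given.
FEATURES = ["SGR", "SWT", "PHIT", "RHOB", "DT", "NPHI", "DEPTH"]

def build_input_matrix(logs):
    """Stack the seven well-log curves into an (n_samples, 7) matrix.

    `logs` maps each feature name to a 1-D array sampled at the cored
    depths; the network target is the core permeability at each depth.
    """
    return np.column_stack([np.asarray(logs[name], dtype=float)
                            for name in FEATURES])

def minmax_normalize(x):
    # Scale each column to [0, 1]; the normalization formula itself is
    # an assumption, used here only to illustrate the preprocessing step.
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)
```

The normalized matrix, paired with the core permeability vector, forms the training database discussed below.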
Table 1. Correlation coefficients between parameters before and after normalizing (N: normalized).
Fig. 4. Performance of ANNs with different numbers of neurons in their middle layer (average of minimum MSEs on the training and cross-validation sets versus the number of neurons in the hidden layer).

Table 3. Performance of the ANN for the test data set, trained without optimization methods.

Parameter   Best network
AAD         0.155
NMSE        0.026
R           0.831
R²          0.691

Fig. 5. Hybrid model predictions versus core permeability measurements (mD) for the training data set (R = 0.99).
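The model-selection procedure summarized in Fig. 4 (training single-hidden-layer networks of several sizes and keeping the one with the lowest cross-validation MSE) can be sketched as follows. This is an illustrative sketch on synthetic data: plain gradient descent stands in for the paper's Levenberg–Marquardt training, and the candidate layer sizes, learning rate and data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def init_net(n_in, n_hidden, n_out=1):
    # Random weights for a one-hidden-layer feed-forward net, Eq. (1).
    return {"W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
            "b2": np.zeros(n_out)}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])   # hidden-layer transfer function
    return h @ net["W2"] + net["b2"]         # linear output layer

def mse(net, X, y):
    return float(np.mean((forward(net, X).ravel() - y) ** 2))  # Eq. (2)

def train(net, X, y, lr=0.05, epochs=500):
    # Plain gradient descent, a stand-in for Levenberg-Marquardt.
    for _ in range(epochs):
        h = np.tanh(X @ net["W1"] + net["b1"])
        out = h @ net["W2"] + net["b2"]
        err = (out.ravel() - y)[:, None] / len(y)   # d(MSE)/d(out), up to a factor of 2
        dW2 = h.T @ err
        db2 = err.sum(axis=0)
        dh = (err @ net["W2"].T) * (1.0 - h ** 2)   # backprop through tanh
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        net["W2"] -= lr * dW2
        net["b2"] -= lr * db2
        net["W1"] -= lr * dW1
        net["b1"] -= lr * db1
    return net

# Architecture search as in Fig. 4: keep the hidden-layer size with the
# lowest cross-validation MSE (synthetic data, roughly an 80/20 split).
X = rng.uniform(-1.0, 1.0, (120, 7))
y = np.sin(X.sum(axis=1))
Xtr, ytr, Xcv, ycv = X[:96], y[:96], X[96:], y[96:]

scores = {n: mse(train(init_net(7, n), Xtr, ytr), Xcv, ycv)
          for n in (5, 7, 9, 11)}
best_hidden = min(scores, key=scores.get)
```

The same loop, with LM in place of gradient descent, reproduces the kind of curve shown in Fig. 4.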
Well log responses were used as the inputs to the network, while the permeability values of the core data were the outputs. The database introduced for neural network training was broken down randomly into three groups: training, cross-validation, and testing. Typically, 80% of the data is used for the training process and the other 20% is categorized as validation and testing. The number of hidden layers and the numbers of input and output neurons in the ANN model were defined as 1, 7 and 1, respectively. Fig. 4 shows the optimal number of neurons of a single-hidden-layer network using this method. The training results for the ANN on the cross-validation data showed the lowest MSE when the number of hidden neurons was 11. It was found that the 7–11–1 architecture was the best model in terms of MSE, which means 7, 11, and 1 neurons in the input, hidden and output layers, respectively. ANNs with more than a single hidden layer were also used throughout this stage, but the results were unsatisfactory.

Table 3 gives the performance of the efficient ANNs in terms of the Normalized Mean-Square Error (NMSE) (Wei et al., 2010), the average absolute deviation (AAD), the linear correlation coefficient (R) and the squared linear correlation coefficient (R²) between the core permeability and the neural network output on the new well data set. In brief, the ANN predictions are optimum if R², R, AAD and NMSE are close to 1, 1, 0 and 0, respectively.

6.2. COA optimization

The number of hidden layers and the numbers of input and output neurons in the COA–LM model were defined as 1, 7, and 1, respectively. The Mean Square Error (MSE), calculated according to Eq. (2), was used as the criterion for choosing the best design in the COA–LM optimization model. It was found that the best ANN model of the COA–LM was obtained with the 7–11–1 architecture in terms of the least cost function (minimum MSE), which means 7, 11, and 1 neurons in the input, hidden and output layers, respectively. Fig. 5 shows the hybrid model predictions versus core permeability measurements for the training data set. ANNs with more than one hidden layer were also used, but the results were not improved.

In this case, to evaluate the generalization of the hybrid model on the chosen data set or new well data points, the performance of the model is reported in Table 4. This table reveals the agreement between the permeability values predicted by the COA–LM optimization model and the permeability of the actual core measurements.

Table 4. Performance of the COA–LM model for the new well data set.

Parameter   COA–LM model
AAD         0.099
NMSE        0.012
R           0.978
R²          0.957

Fig. 6 compares the actual permeability values, which were measured in the laboratory (and never seen by the network during the optimized training), with the network's prediction for each sample. This figure shows that there is an acceptable agreement (i.e., a linear correlation coefficient of 98%) between the predicted values and the experimental data.

Comparison of the results presented in Tables 3 and 4 reveals that the correlation coefficient between core permeability and the output of the ANN without COA is 0.831, while this parameter between core permeability and the COA–LM model output is 0.978. Therefore, the hybrid algorithm has higher accuracy than the design method without COA. To achieve the optimal solution in the first model (i.e., the ANN without COA), the LM training algorithm must be run several times for every fixed number of hidden neurons, whereas in the COA–LM training model this solution is achieved by the search ability of the COA algorithm over the whole solution space.

6.3. Comparison of COA with PSO and ICA

In order to compare the algorithms, COA, PSO and ICA were applied to the optimum design of the neural network structure. Fig. 7 shows the proposed flowchart of the optimized LM neural network design.
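The goodness-of-fit measures used in Tables 3 and 4 (AAD, NMSE, R and R²) can be computed as sketched below. Normalizing the mean squared error by the target variance is one common convention and is an assumption here, since this section only cites Wei et al. (2010) for the NMSE definition.

```python
import numpy as np

def aad(y_true, y_pred):
    # Average absolute deviation between core and predicted permeability.
    return float(np.mean(np.abs(y_true - y_pred)))

def nmse(y_true, y_pred):
    # Mean squared error normalized by the variance of the targets
    # (an assumed convention for the NMSE used in Tables 3-5).
    return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))

def r_and_r2(y_true, y_pred):
    # Linear correlation coefficient R and its square.
    r = float(np.corrcoef(y_true, y_pred)[0, 1])
    return r, r * r
```

As the text notes, predictions improve as R² and R approach 1 and as AAD and NMSE approach 0.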
Fig. 6. Hybrid model predictions versus core permeability measurements (mD) for the new well data set (R = 0.98; 1 mD = 9.869 × 10⁻¹⁶ m²).

Fig. 8. Cost function minimization plot (minimum MSE versus iteration) for the three optimization algorithms, COA, PSO and ICA, in the neural network design.
Fig. 7. Proposed flowchart of the optimized LM neural network design (first step: initialize a feed-forward neural network and the optimization algorithm).

Table 5. Performance of the neural network optimum model obtained by the different methods (COA–LM, ICA–LM and PSO–LM).
References

Aminian, K., Bilgesu, H.I., Ameri, S., Gil, E., 2000. Improving the Simulation of Water Flood Performance with the Use of Neural Networks. SPE Paper 65630.
Atashpaz-Gargari, E., Lucas, C., 2007. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 4661–4667.
Biswas, D., Suryanarayana, P.V., Frink, P.J., Rahman, S., 2003. An Improved Model to Predict Reservoir Characteristics During Underbalanced Drilling. SPE Paper 84176.
Boozarjomehry, R.B., Svrcek, W.Y., 2001. Automatic design of neural network structures. Comput. Chem. Eng. 25, 1075–1088.
Demuth, H., Beale, M., Hagan, M., 2006. Neural Network Toolbox User's Guide. The MathWorks Inc.
Ganguly, S., 2003. Prediction of VLE data using radial basis function network. J. Comput. Chem. Eng. 27, 1445–1454.
Huang, Y.F., Huang, G.H., Dong, M.Z., Feng, G.M., 2003. Development of an artificial neural network model for predicting minimum miscibility pressure in CO2 flooding. J. Pet. Sci. Eng. 37, 83–95.
Kaydani, H., Mohebbi, A., Baghaie, A., 2011. Permeability prediction based on reservoir zonation by a hybrid neural genetic algorithm in one of the Iranian heterogeneous oil reservoirs. J. Pet. Sci. Eng. 78, 497–504.
Kennedy, J., Eberhart, R.C., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948.
Mohaghegh, S., Arefi, R., Ameri, S., Rose, D., 1994. Design and Development of an Artificial Neural Network for Estimation of Formation Permeability. SPE Paper 28237.
Mohaghegh, S., Ameri, S., Aminian, K., 1996. A methodological approach for reservoir heterogeneity characterization using artificial neural networks. J. Pet. Sci. Eng. 16, 263–274.
Mohaghegh, S., Balan, B., Ameri, S., 1997. Permeability Determination from Well Log Data. SPE Paper 30978.
NeuroDimension, Inc., 2005. NeuroSolutions for Excel Release 5 software help.
Niculescu, S.P., 2003. Artificial neural networks and genetic algorithms in QSAR. J. Mol. Struct. Theochem. 622, 71–83.
NISOC Report, 2006. Geological Studies Report for Mansuri Oil Field Development in Bangestan Formation.
Rajabioun, R., 2011. Cuckoo optimization algorithm. Appl. Soft Comput. 11, 5508–5518.
Richon, D., Laugier, S., 2003. Use of artificial neural networks for calculating derived thermodynamic quantities from volumetric property data. Fluid Phase Equilib. 210, 247–255.
Wang, L., 2005. A hybrid genetic algorithm–neural network strategy for simulation optimization. J. Appl. Math. Comput. 170, 1329–1343.
Wei, H.L., Billings, S.A., Zhao, Y.F., Guo, L.Z., 2010. An adaptive wavelet neural network for spatio-temporal system identification. Neural Netw. 23, 1286–1299.
Zhang, J.R., Zhang, J., Lok, T.M., Lyu, M., 2007. A hybrid particle swarm optimization back-propagation algorithm for feed forward neural network training. J. Appl. Math. Comput. 185, 1026–1037.