Journal of Petroleum Science and Engineering 112 (2013) 17–23
http://dx.doi.org/10.1016/j.petrol.2013.11.009


A comparison study of using optimization algorithms and artificial neural networks for predicting permeability

Hossein Kaydani, Ali Mohebbi*
Department of Chemical Engineering, College of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
* Corresponding author. E-mail addresses: amohebbi2002@yahoo.com, amohebbi@uk.ac.ir (A. Mohebbi).

Article info

Article history:
Received 22 May 2012
Accepted 1 November 2013
Available online 21 November 2013

Keywords: permeability; neural network; cuckoo optimization algorithm; particle swarm optimization; imperialist competitive algorithm; well logs data

Abstract

This paper presents a novel approach to permeability prediction, combining the cuckoo, particle swarm and imperialist competitive algorithms with the Levenberg–Marquardt (LM) neural network training algorithm, in one of the heterogeneous oil reservoirs in Iran. First, the topology and parameters of the Artificial Neural Network (ANN) were designed as decision variables without an optimization method. Then, in order to improve forecasting performance when the ANN was applied to the permeability prediction problem, the design was performed using the Cuckoo Optimization Algorithm (COA). Validation against data from a new well demonstrated that the trained COA–LM neural model can efficiently accomplish permeability prediction. Moreover, comparison of COA with particle swarm optimization and the imperialist competitive algorithm showed the superiority of COA in convergence speed and in the quality of the optimum achieved.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

Permeability prediction is one of the main challenges in reservoir engineering, since sound engineering approaches to reservoir modeling and management are impossible without knowledge of the actual permeability values (Biswas et al., 2003). Reservoir rock permeability cannot be measured directly, except through core plugs. However, such direct measuring methods are expensive and time-consuming (Mohaghegh et al., 1997). In recent years, intelligent techniques such as Artificial Neural Networks (ANNs) have been increasingly applied to predict reservoir properties using well log data (Mohaghegh et al., 1994). Moreover, previous investigations have indicated that neural networks can predict formation permeability with good accuracy from geophysical well log data, even in highly heterogeneous reservoirs (Mohaghegh et al., 1996; Aminian et al., 2000).

In spite of this wide range of applications, a significant amount of time and effort must usually be expended to find the optimum or near-optimum structure of a neural network for the desired task. To mitigate this deficiency, the design of neural networks using optimization algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been proposed (Boozarjomehry and Svrcek, 2001; Wang, 2005; Zhang et al., 2007; Kaydani et al., 2011).

Recently, a new evolutionary algorithm, inspired by the special lifestyle of cuckoo birds and called the Cuckoo Optimization Algorithm (COA), was proposed by Rajabioun (2011). COA mimics the breeding behavior of cuckoos, where each individual searches for the most suitable nest in which to lay an egg in order to maximize the egg's survival rate, which is an efficient search pattern. Application of the COA to different optimization problems has proven its capability to deal with difficult problems, especially multi-dimensional ones (Rajabioun, 2011).

In this study, the COA and the LM algorithm were combined to form a two-stage learning algorithm for optimizing the parameters of a feed-forward neural network. The proposed method was applied to permeability prediction in one of the heterogeneous oil reservoirs in Iran. The results of the hybrid model were then compared with experimental measurements from a new well to assess the efficiency of the present algorithm. Moreover, the optimal network design was also carried out with particle swarm optimization and the Imperialist Competitive Algorithm (ICA), and their results were compared with those of the cuckoo optimization algorithm for designing the neural network structure with the LM training algorithm.

2. Artificial neural networks

An artificial neural network (ANN) is an especially efficient algorithm for approximating any function with a finite number of discontinuities by learning the relationships between input and output vectors (Ganguly, 2003; Richon and Laugier, 2003). Feed-forward neural networks have an input layer, an output layer and, in most applications, one hidden layer. The numbers of inputs and outputs of the neural network are determined by the characteristics of the application.


Each neuron of a layer is generally connected to the neurons in the preceding layer. A neuron has two components: (1) a weighted sum, which performs a weighted summation of its inputs with components $(X_1, X_2, X_3, \ldots, X_n)$, i.e., $S = \sum W_i X_i + b$, where $b$ is the bias of the network; and (2) a linear, nonlinear or logic function, which gives an output corresponding to $S$. In general, the output of neuron $j$ in layer $k$ can be calculated by the following equation:

$$u_{jk} = F_k\left(\sum_{i=1}^{N_{k-1}} w_{ijk}\, u_{i(k-1)} + b_{jk}\right) \qquad (1)$$

Coefficients $w_{ijk}$ and $b_{jk}$ are the connection weight and bias of the network, respectively; they are the fitting parameters of the model, and $F_k$ is the transfer function of layer $k$. The output of the network is created at the output layer. The network's output pattern is then compared with the target vector according to the Mean Squared Error (MSE). The MSE of the neural network can be defined as follows:

$$\mathrm{MSE} = \frac{\sum_{i=1}^{n} (O_i - T_i)^2}{n} \qquad (2)$$

where $O_i$ is the desired output, $T_i$ is the network output, and $n$ is the number of data in the training data set or the cross-validation data set (Demuth et al., 2006).

For neural networks based on supervised learning, the data are usually split into a training set, a validation set, and a test set. Various networks are trained by minimization of an appropriate error function defined with respect to the training set. The performance of the networks is then compared by evaluating the error function on the training and validation sets, and the network having the smallest error with respect to the validation set is selected (Niculescu, 2003).

3. Optimization algorithms

In recent years, Evolutionary Computation (EC) and metaheuristic optimization algorithms have become a prominent means of solving real mathematical and engineering problems. In the next sections, we describe the optimization algorithms that were used in this study.

3.1. Cuckoo optimization algorithm

The Cuckoo Optimization Algorithm (COA) is a new evolutionary optimization method inspired by the life of the cuckoo family of birds. The special lifestyle of these birds and their characteristic egg laying and breeding form the basic procedure of this evolutionary optimization algorithm. Like other evolutionary algorithms, it starts with an initial population comprising mature cuckoos and their eggs. The effort to survive among cuckoos constitutes the basis of COA. During the survival competition, some of the cuckoos or their eggs are destroyed. The surviving cuckoo societies immigrate to a better environment and start reproducing and laying eggs. The cuckoos' survival effort hopefully converges to a state in which there is only one cuckoo society, all with the same profit values.

Each candidate solution in this algorithm can be interpreted as a habitat, which is equivalent to a chromosome in GA or a particle in PSO. In an N-variable optimization problem, the habitat array, which represents the current position of the variables, is defined as follows (Rajabioun, 2011):

$$\text{habitat} = [X_1, X_2, \ldots, X_{N_{\mathrm{var}}}] \qquad (3)$$

The profit of a habitat is obtained by evaluating a profit function $f_p$ at that habitat, so

$$\text{profit} = f_p(\text{habitat}) = f_p(X_1, X_2, \ldots, X_{N_{\mathrm{var}}}) \qquad (4)$$

From this point of view, the entire algorithm searches to maximize a profit function. To use COA in cost minimization problems, one can simply maximize the following profit function:

$$\text{profit} = -\mathrm{cost}(\text{habitat}) = -f_c(X_1, X_2, \ldots, X_{N_{\mathrm{var}}}) \qquad (5)$$

In nature, each cuckoo lays from 5 to 20 eggs. These values are used as the upper and lower limits of the number of eggs dedicated to each cuckoo at different iterations. Another habit of real cuckoos is that they lay eggs within a maximum distance from their habitat, which is called the Egg Laying Radius (ELR) and defined as (Rajabioun, 2011)

$$\mathrm{ELR} = \alpha \times \frac{\text{number of current cuckoo's eggs}}{\text{total number of eggs}} \times (\mathrm{var}_{hi} - \mathrm{var}_{low}) \qquad (6)$$

where $\mathrm{var}_{hi}$ and $\mathrm{var}_{low}$ are the upper and lower limits of the variables, respectively, and $\alpha$ is an integer that sets the maximum value of the ELR.

Each cuckoo starts laying eggs randomly in the nests of some other host birds within her ELR. Some of these eggs, namely those more similar to the host bird's own eggs, have the opportunity to grow up and become mature cuckoos. The other eggs are detected by the host birds and are killed. When young cuckoos grow and become mature, they immigrate to new and better habitats. After the cuckoo groups have been formed in different areas with the K-means clustering method, the society with the best profit value is selected as the goal point toward which the other cuckoos immigrate. Fig. 1 shows the movement of cuckoos towards the destination habitat. In this movement, λ and φ are random numbers defined by (Rajabioun, 2011)

$$\lambda \sim U(0, 1) \qquad (7)$$

$$\varphi \sim U(-\omega, \omega) \qquad (8)$$

where ω is a parameter that constrains the deviation from the goal habitat; a value of about π/6 rad for ω results in good convergence of the cuckoos to the global minimum.

Fig. 1. Immigration of a sample cuckoo toward the goal habitat (Rajabioun, 2011).

There is always an equilibrium in bird populations, so a number N_max controls and limits the maximum number of live cuckoos in the environment; this can be the result of food limitations and other environmental conditions. After some iterations, the whole cuckoo population moves to the one best habitat, with maximum similarity of the eggs to the host birds' and with maximum food resources. This habitat produces the maximum profit, and the fewest eggs are lost there. Convergence of more than 95% of all cuckoos to the same habitat terminates the Cuckoo Optimization Algorithm. More details of COA are available in the literature (Rajabioun, 2011).
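To make Eqs. (3)–(8) concrete, here is a minimal Python sketch of one COA-style search loop. It is illustrative only, not the authors' MATLAB implementation: the sphere profit function, the population sizes, and the use of the single best habitat in place of K-means clustering are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def profit(habitat):
    # Eq. (5): profit = -cost; a toy sphere cost is assumed here
    return -np.sum(habitat ** 2)

n_var, var_low, var_hi = 5, -2.0, 2.0
n_max = 10                                # population cap N_max
alpha, omega = 1, np.pi / 6               # ELR scale (Eq. 6) and deviation limit (Eq. 8)
cuckoos = rng.uniform(var_low, var_hi, (n_max, n_var))   # initial habitats, Eq. (3)

for _ in range(100):
    eggs_per_cuckoo = rng.integers(5, 21, size=len(cuckoos))   # 5-20 eggs each
    # Eq. (6): egg-laying radius proportional to each cuckoo's share of the eggs
    elr = alpha * eggs_per_cuckoo / eggs_per_cuckoo.sum() * (var_hi - var_low)
    # each cuckoo lays its eggs uniformly inside its own ELR
    eggs = np.concatenate([
        c + rng.uniform(-r, r, (e, n_var))
        for c, r, e in zip(cuckoos, elr, eggs_per_cuckoo)
    ])
    population = np.clip(np.vstack([cuckoos, eggs]), var_low, var_hi)
    # survival: keep only the n_max most profitable habitats
    ranking = np.argsort([profit(p) for p in population])[::-1]
    population = population[ranking[:n_max]]
    goal = population[0]                  # best habitat stands in for the best cluster
    # immigration toward the goal habitat: lambda ~ U(0,1), phi ~ U(-omega, omega)
    lam = rng.uniform(0.0, 1.0, (len(population), 1))
    phi = rng.uniform(-omega, omega, (len(population), 1))
    cuckoos = population + lam * (goal - population) + phi

print("best habitat:", goal, "profit:", profit(goal))
```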

3.2. Imperialist competitive algorithm

The Imperialist Competitive Algorithm (ICA) is an evolutionary optimization method inspired by imperialistic competition. Like other evolutionary algorithms, it starts with an initial population, whose individuals, called countries, are divided into two types: colonies and imperialists, which together form empires. All the colonies of the initial countries are divided among the imperialists based on their power. The power of each country, the counterpart of the fitness value in the GA, is inversely proportional to its cost. Imperialistic competition among these empires forms the proposed evolutionary algorithm (Atashpaz-Gargari and Lucas, 2007).

Each variable in a country can be interpreted as a socio-political characteristic of that country. From this point of view, the entire algorithm searches for the best country, the one with the best combination of socio-political characteristics. From an optimization point of view, this leads to finding the optimal solution of the problem, i.e. the solution with the least cost value. More details of ICA are available in the literature (Atashpaz-Gargari and Lucas, 2007).
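As a rough sketch of the assimilation idea described above, the following Python fragment moves colonies toward their imperialists and swaps roles when a colony outperforms its imperialist. The toy cost, the uniform colony assignment and the omission of the imperialistic-competition step (in which the weakest empire loses colonies) are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(country):
    # toy cost; in the paper the cost would be the network MSE of Eq. (2)
    return np.sum(country ** 2)

n_var, n_countries, n_imperialists = 5, 30, 3
countries = rng.uniform(-2, 2, (n_countries, n_var))
order = np.argsort([cost(c) for c in countries])
imperialists = countries[order[:n_imperialists]]      # most powerful countries
colonies = countries[order[n_imperialists:]]
# assign each colony to an empire (uniform here; power-proportional in full ICA)
empire_of = rng.integers(0, n_imperialists, len(colonies))

beta = 2.0  # assimilation coefficient (assumed)
for _ in range(200):
    # assimilation: each colony moves a random fraction toward its imperialist
    step = rng.uniform(0, beta, (len(colonies), 1))
    colonies += step * (imperialists[empire_of] - colonies)
    # a colony that becomes better than its imperialist swaps roles with it
    for i, col in enumerate(colonies):
        k = empire_of[i]
        if cost(col) < cost(imperialists[k]):
            imperialists[k], colonies[i] = col.copy(), imperialists[k].copy()

best = min(imperialists, key=cost)
print("best country:", best, "cost:", cost(best))
```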
Fig. 2. Locations of wells in the mentioned field.

3.3. Particle swarm optimization

Particle Swarm Optimization (PSO) is an evolutionary computation technique with mechanisms of individual improvement and of population cooperation and competition, based on the simulation of simplified social models such as bird flocking, fish schooling and swarming theory (Kennedy and Eberhart, 1995). PSO starts with the random initialization of a population (swarm) of individuals (particles) in the search space and operates on the social behavior of the particles in the swarm. It finds the global best solution by simply adjusting the trajectory of each individual towards its own best location and towards the best particle of the swarm at each time step (i.e. generation). The trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle, according to its own flying experience and the flying experience of the other particles in the search space. The process is repeated until a user-defined stopping criterion is reached. More details of PSO are available in the literature (Kennedy and Eberhart, 1995).
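The trajectory adjustment just described is usually written as a velocity update toward each particle's personal best and the swarm's global best. A minimal Python sketch follows; the inertia and acceleration coefficients (w, c1, c2) and the toy objective are assumed for illustration, since the paper does not report its PSO settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # stand-in objective; in the paper this would be the network MSE
    return np.sum(x ** 2)

n_var, n_particles = 5, 20
w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients (assumed)
pos = rng.uniform(-2, 2, (n_particles, n_var))
vel = np.zeros_like(pos)
pbest = pos.copy()                         # personal best positions
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]       # global best position

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, 1))
    # velocity update: pull toward the personal best and the global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best position:", gbest, "cost:", cost(gbest))
```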
Fig. 3. Feed-forward multilayer ANN scheme with the network input data.

4. Case study

The prototype oil reservoir is located in the southwest of Iran. From the data acquired (NISOC, 2006), the reservoir can be classified as a structurally complex and heterogeneous reservoir.

4.1. Reservoir history

All 68 wells of the mentioned reservoir that had been drilled by 2012 have been logged; therefore, petrophysical data were available for almost all the wells. In this field, only six wells were cored in the reservoir layer (see Fig. 2). It should be noted that core data are often available from only a few wells in a reservoir, while well logs are available from the majority of the wells. Thus, evaluation of permeability from well log data represents a significant technical as well as economic advantage (NISOC, 2006).

4.2. Data analysis

In this study, six wells with suitable distances from each other were selected, for which all of the essential information, including well logs and core permeability, was available. Among the available petrophysical data, the inputs to the network are Spectral Gamma Ray (SGR), Water Saturation (SWT), Total Porosity (PHIT), Bulk Density (RHOB), Sonic Porosity (DT), Neutron Porosity (NPHI) and depth, while the output is the permeability value from the core analysis. SGR is a valid indicator of shale regions and SWT indicates permeable regions; therefore, both were included in the input vector. The porosity logs, namely PHIT, RHOB, DT and NPHI, were selected as inputs according to a previous study (Biswas et al., 2003) that observed the correlation between porosity and permeability. Furthermore, to account for the effects of the vertical position and the overburden pressure on permeability, the depth of each core was also included. The inputs and output of the network are shown graphically in Fig. 3.

Table 1
Correlation coefficients between parameters before and after normalizing.

                          Depth    DT-FL    NPHI-COR  RHOB-COR  SGR-COR  PHIT (%)  SWT
Permeability              0.043    0.43     0.388     −0.393    −0.154   0.402     −0.291

                          N.Depth  N.DT     N.NPHI    N.RHOB    N.SGR    N.PHIT    N.SWT
Normalized permeability   0.129    0.497    0.461     −0.449    −0.181   0.525     −0.274

N: Normalized.

In most cases, the data should be scaled, normally linearly, to a smaller interval in order to increase the training speed and to facilitate modeling and prediction. In this work, each group of training data, including logs and core measurements, was normalized to [0.2, 1] as follows:

$$X_n = 0.8 \times \frac{X - X_{\min}}{X_{\max} - X_{\min}} + 0.2 \qquad (9)$$

where $X$ is the original data, $X_{\min}$ is the minimum of $X$, $X_{\max}$ is the maximum of $X$, and $X_n$ is the result of the normalization. The correlation coefficient between the dependent and independent parameters can be improved by normalizing the data distribution. Table 1 compares the correlation coefficients between the normalized and non-normalized data.

The datum for well logging data is mean sea level, and depth is measured according to the length of wire run into the well. In coring operations, the depth datum for extracted cores is the rotary table elevation of the drilling rig, and depth is measured according to the length of the drill string run into the well. Therefore, to use well logging data and compare them with laboratory core data, a depth matching operation is necessary. The details of depth matching are given elsewhere (Kaydani et al., 2011).

5. ANN design using the COA algorithm

The LM feed-forward neural network is one of the main algorithms for training neural networks, but it has some drawbacks in petroleum engineering applications (Huang et al., 2003). To address some of these drawbacks, the Cuckoo Optimization Algorithm (COA) and the Levenberg–Marquardt (LM) algorithm are combined to form a hybrid learning algorithm with the advantages of both, optimizing the weights and thresholds of the feed-forward neural network. First, the COA is employed to search the solution space and find near-optimal solutions. Then LM training, which provides a second-order convergence rate, is applied starting from these near-optimal weights to find the best solution around the global optimum.

When the COA evolves the weights and thresholds of the feed-forward neural network, every habitat represents a set of weights and thresholds. In the encoding strategy, every habitat is encoded as an array representing all the weights and thresholds of the neural network structure.

The goal in this algorithm is to minimize a cost function. The Mean Square Error (MSE), calculated according to Eq. (2), was used as the cost function in the COA optimization. The searching mechanism of the implemented hybrid strategy can be succinctly described as follows:

Step 1: initialize a feed-forward neural network structure; initialize the parameters of the COA.
Step 2: set up the encoding relationship, according to Eq. (4), between the neural network structure and the COA parameters.
Step 3: initialize the cuckoo habitats with random points on the profit function (Rajabioun, 2011).
Step 4: dedicate some eggs and an ELR to each cuckoo.
Step 5: let the cuckoos lay eggs inside their corresponding ELRs.
Step 6: some eggs are killed by the host birds; let the other eggs hatch and the chicks grow.
Step 7: evaluate the habitat of each newly grown cuckoo.
Step 8: limit the maximum number of cuckoos in the environment and kill those that live in the worst habitats.
Step 9: cluster the cuckoos, find the best group, and select the goal habitat; immigrate the new cuckoo population toward this point.
Step 10: if the stop condition is satisfied, go to Step 11; if not, go to Step 3 (Rajabioun, 2011).
Step 11: decode the near-optimal solution into the neural network structure.
Step 12: train the network by LM up to the desired MSE or up to the maximum number of iterations.
Step 13: display the performance by presenting the testing parameters.

In order to optimize the network structure, the parameters of the network, such as the number of neurons in the hidden layer and the weights and biases of the layers, are treated as variables to be optimized with COA. Here, the number of neurons in the hidden layer is allowed to take only integer values, bounded between a lower bound of L/2, where L is the number of input variables, and an upper bound of 3L (NeuroDimension, Inc., 2005). The other parameters, the weights and biases, are coded and allowed to take real values with a lower bound of −2 and an upper bound of +2. The COA parameters are initially specified as given in Table 2. For the implementation of the hybrid algorithms for permeability prediction, a computer program was written in MATLAB R2009 (MathWorks, Inc.).

Table 2
Parameters of the COA optimization approach.

Parameter                                    Value
Maximum number of cuckoos at the same time   200
Number of k-means clusters                   4
Maximum number of iterations                 300
Minimum number of eggs for each cuckoo       5
Maximum number of eggs for each cuckoo       10

6. Results and discussion

Well logs are the inputs to the network and core permeability values are the outputs. The optimal number of neurons of a single-hidden-layer network was obtained first by the trial-and-error method and then by the optimization algorithms combined with the LM training algorithm, which together form a two-stage learning algorithm (a minimal sketch of this scheme follows below). All 1260 available data points from the selected wells were organized as inputs and outputs. However, the data from one well were withheld from the ANN training procedure as a new-well data set, in order to indicate the prediction capability of the best network and to compare the optimization methods.
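The two-stage scheme described above (a global COA-style search over the network weights followed by Levenberg–Marquardt refinement) might look roughly as follows in Python. The synthetic data, the tanh transfer function, the simplified cuckoo search, and the use of SciPy's LM solver in place of the authors' MATLAB training are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# synthetic stand-in for the normalized log/permeability pairs (7 inputs, 1 output)
X = rng.uniform(0.2, 1.0, (200, 7))
y = rng.uniform(0.2, 1.0, 200)

n_in, n_hid = 7, 11                      # the 7-11-1 architecture found in the paper

def unpack(w):
    i = n_in * n_hid
    W1 = w[:i].reshape(n_hid, n_in)      # input-to-hidden weights
    b1 = w[i:i + n_hid]                  # hidden biases
    W2 = w[i + n_hid:i + 2 * n_hid]      # hidden-to-output weights
    b2 = w[-1]                           # output bias
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)           # hidden layer, Eq. (1) with F = tanh (assumed)
    return h @ W2 + b2                   # linear output layer

def residuals(w):
    return forward(w, X) - y             # LM minimizes the sum of squared residuals (Eq. 2)

n_w = n_in * n_hid + 2 * n_hid + 1       # 100 weights and biases in total

# stage 1: crude COA-style global search over weight vectors in [-2, 2]
habitats = rng.uniform(-2.0, 2.0, (50, n_w))
for _ in range(30):
    best = habitats[np.argmin([np.mean(residuals(h) ** 2) for h in habitats])]
    lam = rng.uniform(0.0, 1.0, (50, 1))
    # move every habitat part way toward the best one, plus egg-laying scatter
    habitats = habitats + lam * (best - habitats) + rng.normal(0.0, 0.1, habitats.shape)

# stage 2: Levenberg-Marquardt refinement starting from the near-optimal weights
fit = least_squares(residuals, best, method="lm")
print("final MSE:", np.mean(fit.fun ** 2))
```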

Fig. 4. Performance of ANNs with different numbers of neurons in their middle layer (average of the minimum MSEs for the training and cross-validation sets versus the number of neurons, 5–17, in the hidden layer).

Fig. 5. Hybrid model predictions versus core measurement permeability for the training data set (R = 0.99; 1 mD = 9.869 × 10⁻¹⁶ m²).

Table 3
Performance of the ANN, trained without optimization methods, for the test data set.

Parameter   Best network
AAD         0.155
NMSE        0.026
R           0.831
R²          0.691

6.1. Training without optimization

Well log responses were the inputs to the network, while permeability values from the core data were the outputs. The database introduced to the neural network training was broken down randomly into three groups: training, cross-validation, and testing. Typically, 80% of the data is used for the training process and the other 20% is categorized as validation and testing data. The number of hidden layers and the numbers of input and output neurons in the ANN model were set to 1, 7 and 1, respectively. Fig. 4 shows the optimal number of neurons of a single-hidden-layer network obtained with this method. The training results for the ANN showed the lowest MSE on the cross-validation data when the number of hidden neurons was 11. It was found that the 7–11–1 architecture was the best model in terms of MSE, i.e. 7, 11, and 1 neurons in the input, hidden and output layers, respectively. ANNs with more than a single hidden layer were also tried throughout this stage, but the results were unsatisfactory.

Table 3 gives the best ANN's performance in terms of the Normalized Mean-Square Error (NMSE) (Wei et al., 2010), the average absolute deviation (AAD), the linear correlation coefficient (R) and the squared linear correlation coefficient (R²) between the core permeability and the neural network output on the new well data set. In brief, the ANN predictions are optimal if R², R, AAD and NMSE are close to 1, 1, 0 and 0, respectively.

6.2. COA optimization method

The number of hidden layers and the numbers of input and output neurons in the COA–LM model were set to 1, 7, and 1, respectively. The Mean Square Error (MSE), calculated according to Eq. (2), was used as the criterion for choosing the best design in the COA–LM optimization model. It was found that the best ANN model of the COA–LM was obtained with the 7–11–1 architecture in terms of the least cost function (minimum MSE), i.e. 7, 11, and 1 neurons in the input, hidden and output layers, respectively. Fig. 5 shows the hybrid model predictions versus the core measurement permeability for the training data set. ANNs with more than one hidden layer were also used, but the result was not improved.

To evaluate the hybrid model's generalization to unseen data, i.e. the new well data points, the performance of the model is reported in Table 4. This table reveals the agreement between the permeability values predicted by the COA–LM optimization model and the permeability of the actual core measurements.

Table 4
Performance of the COA–LM neural model for the new well data set.

Parameter   Best network
AAD         0.099
NMSE        0.012
R           0.978
R²          0.957

Fig. 6 compares the actual permeability values, which were measured in the laboratory (and never seen by the network during the optimized training), with the network's prediction for each sample. This figure shows that there is an acceptable agreement (i.e. a linear correlation coefficient of 98%) between the predicted values and the experimental data.

Comparison of the results presented in Tables 3 and 4 reveals that the correlation coefficient between the core permeability and the output of the ANN without COA is 0.831, while that between the core permeability and the COA–LM model output is 0.978. Therefore, the hybrid algorithm has a higher accuracy than the design without COA. To achieve the optimal solution in the first model (i.e. the ANN without COA), the LM training algorithm must be run several times for every fixed number of hidden neurons, whereas in the COA–LM training model this solution is reached through the searching ability of the COA over the whole solution space.
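For reference, the performance measures reported in Tables 3–5 can be computed along the following lines; the exact NMSE normalization used in the paper (which cites Wei et al., 2010) is assumed here to be the MSE divided by the variance of the measured values.

```python
import numpy as np

def metrics(y_true, y_pred):
    # AAD, NMSE, R and R^2 as used in Tables 3-5; the NMSE form is assumed
    aad = np.mean(np.abs(y_pred - y_true))                   # average absolute deviation
    nmse = np.mean((y_pred - y_true) ** 2) / np.var(y_true)  # normalized MSE (assumed)
    r = np.corrcoef(y_true, y_pred)[0, 1]                    # linear correlation coefficient
    return {"AAD": aad, "NMSE": nmse, "R": r, "R2": r ** 2}

# tiny usage example with made-up values
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(metrics(y_true, y_pred))
```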

Fig. 6. The hybrid model predictions versus core measurement permeability for the new well data set (R = 0.98; 1 mD = 9.869 × 10⁻¹⁶ m²).

Fig. 8. Cost function (MSE) minimization plot for the three optimization algorithms (PSO, ICA and COA) in neural network design, over 200 iterations.

6.3. Comparison of COA with PSO and ICA

In order to compare the algorithms, COA, PSO and ICA were each applied to the optimum design of the neural network structure. Fig. 7 shows the proposed flowchart of the optimized LM neural network modeling with the optimization algorithms for permeability prediction:

START
→ Initialize a feed-forward neural network and the optimization algorithm
→ Set up the encoding and decoding relationship between the neural network and the optimization algorithm
→ Implement the optimization algorithm (PSO, ICA or COA)
→ Export the near-optimal weights and biases to the neural network
→ Train the network by LM up to the desired MSE or up to the maximum number of iterations
→ Display the performance by presenting the testing parameters
END

Fig. 7. Flowchart of the hybrid optimization–LM neural network.

For all three methods, the maximum number of iterations was set to 200. Fig. 8 illustrates a sample minimization plot of the MSE cost function for all three algorithms. This figure shows that the COA converges to the desired tolerance (i.e. MSE = 0.001) in about 150 iterations, whereas the ICA and PSO cannot converge to this stopping criterion within 200 iterations. In a multi-dimensional problem such as the optimization of a neural network structure, the objective function has many local minima and is difficult to solve. In this type of problem, the final results are the more important, since they reflect the ability of the algorithm to escape from local optima and achieve a near-global optimum. As can be seen clearly from Fig. 8, the COA converges faster than the ICA and PSO. In other words, the amount of computation (i.e. the number of iterations) required to reach the global minimum is much greater for the ICA and PSO than for the COA. The progress of the COA toward the near-global minimum indicates its superiority over the other well-known heuristic search methods, especially in multi-dimensional problems. Table 5 shows R², R, AAD and NMSE for all three models. From this table, it can be seen that the COA–LM gives the best results compared with the other methods. Therefore, a good estimate of the global minimum solution can result from the optimized LM neural network modeling.

Table 5
Performance of the optimum neural network model obtained by the different methods.

Parameter   COA–LM   ICA–LM   PSO–LM
AAD         0.099    0.104    0.119
NMSE        0.012    0.017    0.020
R           0.978    0.961    0.943
R²          0.957    0.924    0.889

7. Conclusions

In this study, an attempt was made to develop a methodology for combining COA, ICA and PSO with an LM feed-forward neural network to predict the permeability of a heterogeneous oil reservoir. The COA was used in order to minimize the effort required to find the optimal architecture and parameters of the feed-forward ANN. The performance of the network with respect to the predictions made on the test sets showed that this new approach was able to estimate the permeability of the reservoir with a high correlation coefficient. Comparison of the prediction efficiency of the COA-optimized ANN model with that of the ANN model without optimization showed that the COA–LM neural model produces a higher accuracy than the method without optimization. Also, comparison of COA with PSO and ICA showed the superiority of COA in convergence speed and in reaching the global optimum. Therefore, given sufficient and reliable geophysical well log data, the COA–LM neural network model is able to predict rock permeability very close to the laboratory core permeability measurements.

Acknowledgments

The authors would like to gratefully thank the National Iranian Oil Company (NIOC) and the National Iranian South Oilfields Company (NISOC) for their help and financial support.

References

Aminian, K., Bilgesu, H.I., Ameri, S., Gil, E., 2000. Improving the Simulation of Water Flood Performance with the Use of Neural Networks. SPE Paper 65630.
Atashpaz-Gargari, E., Lucas, C., 2007. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 4661–4667.
Biswas, D., Suryanarayana, P.V., Frink, P.J., Rahman, S., 2003. An Improved Model to Predict Reservoir Characteristics During Underbalanced Drilling. SPE Paper 84176.
Boozarjomehry, R.B., Svrcek, W.Y., 2001. Automatic design of neural network structures. Comput. Chem. Eng. 25, 1075–1088.
Demuth, H., Beale, M., Hagan, M., 2006. Neural Network Toolbox User's Guide. The MathWorks, Inc.
Ganguly, S., 2003. Prediction of VLE data using radial basis function network. Comput. Chem. Eng. 27, 1445–1454.
Huang, Y.F., Huang, G.H., Dong, M.Z., Feng, G.M., 2003. Development of an artificial neural network model for predicting minimum miscibility pressure in CO2 flooding. J. Pet. Sci. Eng. 37, 83–95.
Kaydani, H., Mohebbi, A., Baghaie, A., 2011. Permeability prediction based on reservoir zonation by a hybrid neural genetic algorithm in one of the Iranian heterogeneous oil reservoirs. J. Pet. Sci. Eng. 78, 497–504.
Kennedy, J., Eberhart, R.C., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948.
Mohaghegh, S., Arefi, R., Ameri, S., Rose, D., 1994. Design and Development of an Artificial Neural Network for Estimation of Formation Permeability. SPE Paper 28237.
Mohaghegh, S., Ameri, S., Aminian, K., 1996. A methodological approach for reservoir heterogeneity characterization using artificial neural networks. J. Pet. Sci. Eng. 16, 263–274.
Mohaghegh, S., Balan, B., Ameri, S., 1997. Permeability Determination from Well Log Data. SPE Paper 30978.
NeuroDimension, Inc., 2005. NeuroSolutions for Excel Release 5 Software Help.
Niculescu, S.P., 2003. Artificial neural networks and genetic algorithms in QSAR. J. Mol. Struct. Theochem. 622, 71–83.
NISOC, 2006. Geological Studies Report for Mansuri Oil Field Development in Bangestan Formation.
Rajabioun, R., 2011. Cuckoo optimization algorithm. Appl. Soft Comput. 11, 5508–5518.
Richon, D., Laugier, S., 2003. Use of artificial neural networks for calculating derived thermodynamic quantities from volumetric property data. Fluid Phase Equilib. 210, 247–255.
Wang, L., 2005. A hybrid genetic algorithm–neural network strategy for simulation optimization. Appl. Math. Comput. 170, 1329–1343.
Wei, H.L., Billings, S.A., Zhao, Y.F., Guo, L.Z., 2010. An adaptive wavelet neural network for spatio-temporal system identification. Neural Netw. 23, 1286–1299.
Zhang, J.R., Zhang, J., Lok, T.M., Lyu, M., 2007. A hybrid particle swarm optimization–back-propagation algorithm for feed forward neural network training. Appl. Math. Comput. 185, 1026–1037.
