
Chem. Eng. Technol. 2008, 31, No. 4, 493–500

Research Article

Estimating the Bubble Point Pressure and Formation Volume Factor of Oil Using Artificial Neural Networks

Hanieh Rasouli (1), Fariborz Rashidi (1), Amir Ebrahimian (2)

(1) Reservoir Engineering Department, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
(2) ACS Laboratories Pty Ltd., Windsor, Queensland, Australia

The phase performance of hydrocarbons is a very complicated behavior that they exhibit at the time of phase change or while they remain in a particular phase, and process design is almost impossible without a good understanding of this behavior. Artificial Neural Networks (ANNs) have been widely utilized for engineering applications during the last two decades. Two models are presented for the prediction of the bubble point pressure and the oil formation volume factor of hydrocarbon mixtures using the ANN approach. For this purpose, five-layer neural networks were designed and trained using 106 experimental data points. After the training step, 9 further experimental data points were used for the model evaluation step and as a reliability check. The outputs of the models for both the training and the predicted data are compared with the empirical equations of Standing, Glaso and Marhoun. It is concluded that the ANN approach has an excellent capability for these purposes compared to the conventional methods.

Keywords: Artificial Neural Networks, Bubble point pressure, Hydrocarbons, Modeling, Phase performance

Received: November 10, 2007; revised: December 23, 2007; accepted: January 16, 2008
DOI: 10.1002/ceat.200700434

1 Introduction

During the past decades, many researchers have proposed methods and models for the prediction of the phase behavior of reservoir fluids [1]. The special importance of PVT properties, e.g., the bubble point pressure, formation volume factor and gas oil ratio, in calculations and evaluations of reservoir recoveries explains the necessity for accurate estimates of these properties.

1.1 Formation Volume Factor

The ratio of the volume of oil and the gas dissolved in it at a given temperature and pressure to the volume of oil at standard conditions is known as the volume factor [1]. It is always a number greater than or equal to unity. The volume factor can be mathematically expressed as:

Bo = (Vo)P,T / (Vo)SC   (1)

1.2 Methods of Calculating the Formation Volume Factor at the Saturation Pressure

The calculation of the formation volume factor is usually based on experimental data that are presented as figures or mathematical correlations. In 1981, Standing proposed the following empirical expression, Eq. (2) [2]:

Bo = 0.9759 + 0.00012 [Rs (γg/γo)^0.5 + 1.25 T]^1.2   (2)

In the same year, Glaso also proposed the following expression, Eq. (3) [3]:

Bo = 1 + 10^A   (3)

where

A = –6.58511 + 2.91329 log(Bob*) – 0.27683 [log(Bob*)]^2   (4)

and Bob*, the correlating number in Eq. (4), is given by Eq. (5):

Bob* = Rs (γg/γo)^0.526 + 0.968 (T – 460)   (5)

Correspondence: H. Rasouli (hrasouli@aut.ac.ir), Reservoir Engineering Department, Faculty of Chemical Engineering, Amirkabir University of Technology (Tehran Polytechnic), P. O. Box 15875-4413, Tehran, Iran
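As a concrete illustration of Eqs. (2)–(5), the Standing and Glaso volume factor correlations can be written as the following sketch (Python; the paper's own calculations were done in MATLAB and Fortran 90, and all variable names here are ours). Note that Eq. (5) is evaluated with the temperature in °F, which is what the (T – 460) conversion from °R yields:

```python
import math

def bo_standing(rs, gamma_g, gamma_o, t_f):
    """Standing (1981) oil formation volume factor, Eq. (2).
    rs: gas solubility [SCF/STB]; gamma_g, gamma_o: gas and oil gravity;
    t_f: temperature [deg F]."""
    return 0.9759 + 0.00012 * (rs * (gamma_g / gamma_o) ** 0.5 + 1.25 * t_f) ** 1.2

def bo_glaso(rs, gamma_g, gamma_o, t_f):
    """Glaso correlation, Eqs. (3)-(5); bob_star is the correlating number."""
    bob_star = rs * (gamma_g / gamma_o) ** 0.526 + 0.968 * t_f
    a = (-6.58511 + 2.91329 * math.log10(bob_star)
         - 0.27683 * math.log10(bob_star) ** 2)
    return 1.0 + 10.0 ** a
```

For inputs inside the training ranges of Tab. 1, both correlations return values somewhat above unity, as expected for Bo.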

© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim http://www.cet-journal.com



In 1988, Marhoun suggested an equation for the determination of the volume factor with respect to the gas-oil ratio, oil gravity, gas gravity and temperature, Eq. (6) [4]:

Bo = 0.497069 + 0.862963·10^–3 T + 0.182594·10^–2 F + 0.318099·10^–5 F^2   (6)

in which F is defined as:

F = Rs^a γg^b γo^c   (7)

where a = 0.74239, b = 0.323294 and c = –1.20204.

1.3 Bubble Point Pressure

For a given hydrocarbon system, the highest pressure at which the first gas bubble forms is called the bubble point of that system.

1.4 Methods of Bubble Point Pressure Prediction

If experimentally obtained bubble point data are not available, it is necessary to estimate this oil property from its feasibly measurable parameters. Standing proposed the following correlation in 1981 [2]:

Pb = 18.2 [(Rs/γg)^0.83 (10)^a – 1.4]   (8)

where a = 0.00091 (T(°R) – 460) – 0.0125 API.

In 1980, Glaso also proposed the following correlation, Eq. (9) [3]:

log(Pb) = 1.7669 + 1.7447 log(Pb*) – 0.30218 [log(Pb*)]^2   (9)

The correlating factor Pb* is given by Eq. (10):

Pb* = (Rs/γg)^a T^b API^c   (10)

where a = 0.816, b = 0.172 and c = –0.989.

Furthermore, Marhoun proposed the following expression in 1988, Eq. (11) [4]:

Pb = a Rs^b γg^c γo^d T^e   (11)

where a = 5.38088·10^–3, b = 0.715082, c = –1.87784, d = 3.1437 and e = 1.32657.

2 Artificial Neural Networks

Artificial Neural Networks (ANNs) are dynamic systems that can transfer the governing rules behind a set of experimental data into the structure of the network. The approach was first introduced in the 1940s by McCulloch and Pitts [5], who showed the capability of these networks for the calculation of all arithmetic and logic functions [6]. ANNs are considered a different paradigm for computing and are being successfully applied across an extraordinary range of problem domains, in areas as diverse as finance, medicine, engineering, geology and physics [7].

The neuron is the smallest information processing unit and is the building block of neural networks. It is an analytical unit that receives signals from other neurons through gateways called dendrites and then combines them. If p and a are the input and output of a neuron, respectively, the effectiveness of p on a is indicated by the scalar weight w. Another input, a constant, is multiplied by the bias term b and summed with wp. This is the net input, n, for the transfer function f. In this manner, the neuron output is defined as [8]:

a = f(wp + b)   (12)

The neuron model is given in Fig. 1.

Figure 1. Schematic of a single neuron.

It should be noted that the parameters w and b are adjustable and the function f is selected by the designer. The function f can be linear or nonlinear; step, linear, sigmoid and hyperbolic-tangent functions, among others, can be named. In this study, the sigmoid function is used. The neural net used here is the feed-forward neural net. The feed-forward (unidirectional) neural net, one of the most popular neural networks, provides a flexible tool to generalize linear regression since it does not require any predefined relationships between the variables [9]. Usually, a feed-forward neural net is arranged in multiple layers, i.e., one input layer, one output layer and one or more hidden layers.

2.1 The Structure of the Neural Networks

Neurons are connected to each other in a special arrangement to form a neural network. The connections can be arranged to form a mono-layer network or a multi-layer one. Multi-layer networks are composed of an input layer through which the input data are fed, an output layer that provides the network response, and some hidden layers situated between these two which connect them. The number of neurons and layers, the arrangement of the neurons and their dimensions constitute the structure of the neural network [10]. Schematics of the networks that were finally adopted for this study are given as examples in Figs. 2 and 3.
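The single-neuron relation of Eq. (12) and the layered feed-forward arrangement described above can be sketched as follows. This is an illustrative Python sketch (the authors worked in MATLAB), and all names are ours:

```python
import math

def sigmoid(x):
    # logistic sigmoid, the activation function used in this study
    return 1.0 / (1.0 + math.exp(-x))

def neuron(p, w, b):
    """Single-neuron output a = f(w*p + b), Eq. (12)."""
    return sigmoid(w * p + b)

def feedforward(inputs, layers):
    """Evaluate a fully connected feed-forward net.
    layers: one (weight_matrix, bias_vector) pair per layer;
    each weight row corresponds to one neuron of that layer."""
    a = inputs
    for weights, biases in layers:
        a = [sigmoid(sum(wij * aj for wij, aj in zip(row, a)) + bi)
             for row, bi in zip(weights, biases)]
    return a
```

With zero weight input and zero bias, the neuron output is exactly 0.5, the midpoint of the sigmoid; every layer output stays in (0, 1), which is why the targets are scaled to that interval (Eq. (13) below).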

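Returning to the empirical correlations of Sec. 1.4, the Standing and Marhoun bubble point expressions, Eqs. (8) and (11), can be sketched in the same illustrative way (Python, our own variable names; the paper's comparison code was written in Fortran 90):

```python
def pb_standing(rs, gamma_g, api, t_r):
    """Standing (1981) bubble point pressure, Eq. (8).
    rs: gas solubility [SCF/STB]; t_r: temperature [deg R]; api: API oil gravity."""
    a = 0.00091 * (t_r - 460.0) - 0.0125 * api
    return 18.2 * ((rs / gamma_g) ** 0.83 * 10.0 ** a - 1.4)

def pb_marhoun(rs, gamma_g, gamma_o, t_r):
    """Marhoun (1988) bubble point pressure, Eq. (11), with T in deg R."""
    return (5.38088e-3 * rs ** 0.715082 * gamma_g ** -1.87784
            * gamma_o ** 3.1437 * t_r ** 1.32657)
```

For a mid-range input the two correlations give Pb values of the same order, though they can differ by several hundred psia, which is the spread the comparison in Sec. 4 quantifies.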


Figure 2. Structure of the ANN final design for Bo.

Figure 3. Structure of the ANN final design for Pb.

2.2 The Concept of Learning

Learning is the process through which the neural network adjusts itself to a stimulus, i.e., the way in which, after a proper adjustment of the network parameters, it gives a proper response. In fact, the network adjusts its parameters in response to the input stimulus during the learning stage so that the network's output converges to the required response. Learning rules are expressed in mathematical form and are known as learning equations. Learning is finished when the real and the required outputs converge. Supervised, unsupervised and reinforcement learning are some common types of learning [10].

3 Methodology

For the purpose of an accurate PVT model, a sufficient number of experimental data points should be available. In this study, the networks were trained with a set of 106 experimental data points available for Iranian oil compositions. Each data point is composed of the bubble point pressure Pb, the oil formation volume factor Bo, the gas oil ratio Rs, the temperature, the gas gravity and the oil gravity. The range of the data is given in Tab. 1. The bubble point pressure and the oil formation volume factor are selected as the outputs, while the gas gravity, oil gravity, gas oil ratio and temperature are adopted as the inputs.

For stabilization purposes, attempts were made to limit the domain of the data variations. This was achieved through the use of dimensionless variables and the following expression, Eq. (13):

Onew = (Oold – min(Oold)) / (max(Oold) – min(Oold))   (13)

In the process, the outputs vary between zero and unity. The symbol O denotes the output, which can be Pb or Bob; Oold is the original output and Onew is the scaled output.

Table 1. The range of training data.

Property                               Min      Max
Bubble point pressure [psia]           393      4200
Oil formation volume factor Bob [–]    1.0955   2.027
Gas solubility [SCF/STB]               83       1708
Gas gravity [–]                        0.624    1.872
Oil gravity [–]                        0.5537   0.89
Reservoir temperature [°F]             100      306

3.1 The Neural Network Architecture

Different network architectures were designed to obtain accurate models for the estimation of Pb and Bob as functions of the other four variables. None of the neural networks with one or two hidden layers gave acceptable matches; however, networks with three hidden layers give good results for both Pb and Bob.

To find the optimum network design, a trial and error procedure was undertaken, starting with one hidden layer whose number of hidden units was set to roughly the number of inputs divided by two. Hidden units were then gradually added; the required number of hidden units rarely exceeds four times the number of inputs. Each architecture was retrained at least 3 times (up to 10 times is recommended) with different initial weight randomizations, and only the best one was saved for comparison with the other architectures. A network with too few hidden units discovers the hidden dependencies in the data only roughly, whereby the network produces a significant number of errors. A network with too many hidden units tends to memorize all the data instead of finding relations, and this also leads to bigger network errors.

Following this procedure, Pb was estimated with a network consisting of six neurons in the first hidden layer, ten in the second and six in the third. Furthermore, another structure with nine neurons in the first hidden layer, twelve in the second and seven in the third was used for the estimation of Bob. Other designs were disqualified since they were not capable of predicting data other than the training data.
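The min–max scaling of Eq. (13), which maps each output onto [0, 1], can be sketched as follows (illustrative Python, not the authors' MATLAB code):

```python
def minmax_scale(values):
    """Min-max scaling per Eq. (13): maps a list of values onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def minmax_unscale(scaled, lo, hi):
    # invert Eq. (13) to recover physical units (e.g., psia for Pb)
    return [s * (hi - lo) + lo for s in scaled]
```

The minimum of the data maps to 0, the maximum to 1, and the inverse transform recovers the original values exactly, so network outputs can be reported back in engineering units.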

3.2 Training and Evaluation Phase of the Designed Networks

A number of codes were programmed in MATLAB. The algorithm used is the back-propagation algorithm, and the logistic sigmoid function was used for activation purposes, Eq. (14):

f(x) = 1 / (1 + exp(–x))   (14)

The network training is offline, and the training data were fed to the network pattern by pattern in a stochastic manner. Presenting the training data in a stochastic order prevents early saturation of the neurons. There was no need to use a momentum coefficient since the behavior was not very complicated.

The learning processes were complete for the bubble point pressure after 145 iterations and for the oil formation volume factor after 345 iterations. The variations of the error versus the training step are given for the bubble point pressure and the oil formation volume factor in Figs. 4 and 5.

Figure 4. Variations of error during the training process for the oil formation volume factor.

Figure 5. Variations of error during the training process for the bubble point pressure.

After the completion of the training phase, the network was exposed to the evaluation data. Success in the training phase alone does not qualify the design of the network; one should also receive acceptable confirmation from evaluation data that the network has not used in its training stage. This is usually achieved by setting aside ca. 10 % of the number of training data for evaluation, whereby the network qualification is certified or rejected [11]. Nine randomly selected sets of experimental data were taken from the 106 available sets. These testing data are also scattered over the whole range of the available data. The data used for the evaluation phase are given in Tab. 2.

Table 2. Data used for the evaluation phase.

Bob     γo       γg      Rs [SCF/STB]   T [°F]   Pb [psia]
1.17    0.8315   0.89    100            211      400
1.21    0.7877   1.065   189            217      500
1.223   0.84     0.918   305            153      1242
1.36    0.7375   0.804   429            213      1300
1.22    0.8219   0.673   357            172      1600
1.259   0.834    0.802   580            100      1875
1.32    0.746    0.697   665            202      2000
1.61    0.86     0.91    1039           193      3305
2.011   0.771    0.758   1172           291      3427

4 Comparison between the Neural Network Model and the Empirical Models

The models of the three aforementioned researchers, i.e., Standing, Glaso and Marhoun, were used for comparison purposes. A Fortran 90 code was developed to calculate the bubble point pressures and oil volume factors of the 106 petroleum samples.

The neural network models are likewise based on the fact that the oil volume factor and the bubble point pressure are functions of temperature, gas solubility, gas gravity and oil gravity. The Pb results of the three empirical models and of the neural network for the training data are presented in Tab. 3. The results for Bo at the bubble point pressure for the three conventional models and for the training data are given in Tab. 4. The formation volume factors at the bubble point pressure (Bo@Pb) and the Pb results of the four models for the nontrained data are given in Tabs. 5–8.

Graphical presentations of the results of the different models, including the ANN model of the present study, are given in Figs. 6–9 for both the training and testing data.
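The statistics reported in Tabs. 3–8 (average relative error, average absolute relative error, and the extremes) can be reproduced with a short sketch. The sign convention below, error % = (experimental – predicted)/experimental × 100, matches the tabulated values in Tabs. 5 and 7; the Python code and names are ours:

```python
def relative_errors(experimental, predicted):
    """Relative error in percent, (exp - pred) / exp * 100."""
    return [(e - p) / e * 100.0 for e, p in zip(experimental, predicted)]

def error_summary(experimental, predicted):
    """The four statistics reported for each model in Tabs. 3, 4, 6 and 8."""
    errs = relative_errors(experimental, predicted)
    abs_errs = [abs(e) for e in errs]
    return {
        "avg_relative_error_pct": sum(errs) / len(errs),
        "avg_abs_relative_error_pct": sum(abs_errs) / len(abs_errs),
        "min_abs_relative_error_pct": min(abs_errs),
        "max_abs_relative_error_pct": max(abs_errs),
    }
```

For example, the first two Standing rows of Tab. 5 (experimental 1.17 and 1.21 against predictions 1.119 and 1.179) give relative errors of about 4.36 % and 2.56 %, matching the table.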




Table 3. The Pb results of the empirical models and the neural network (training data).

                                      ANN     Standing   Glaso   Marhoun
Average relative error (%)            -0.04   20.61      4.04    -8.48
Average absolute relative error (%)   0.63    24.09      11.93   15.98
Minimum absolute relative error (%)   0       2          0.07    0.5
Maximum absolute relative error (%)   11.6    74.85      39      49.6

Table 4. The Bo results of the empirical models and the neural network (training data).

                                      ANN      Standing   Glaso   Marhoun
Average relative error (%)            -0.001   -0.399     1.906   -3.326
Average absolute relative error (%)   0.037    3.894      4.842   4.498
Minimum absolute relative error (%)   0        0          0.069   0
Maximum absolute relative error (%)   0.451    26.748     20.74   38.795

Table 5. The results of the four models for the volume factor at the bubble point pressure for nontrained data.

No  Bob exp.  Bob Standing  Bob Glaso  Bob Marhoun  Bob ANN  Error% Standing  Error% Glaso  Error% Marhoun  Error% ANN
1   1.17      1.119         1.089      1.147        1.165    4.36             6.92          1.97            0.43
2   1.21      1.179         1.148      1.217        1.219    2.56             5.12          -0.58           -0.74
3   1.223     1.189         1.166      1.201        1.221    2.78             4.66          1.8             0.16
4   1.36      1.295         1.265      1.345        1.357    4.78             6.99          1.1             0.22
5   1.22      1.203         1.176      1.227        1.213    1.39             3.61          -0.57           0.57
6   1.259     1.284         1.271      1.273        1.254    -1.99            -0.95         -1.11           0.4
7   1.32      1.394         1.367      1.436        1.36     -5.61            -3.56         -8.79           -3.03
8   1.61      1.636         1.61       1.559        1.582    -1.61            0             3.17            1.74
9   2.01      1.769         1.717      1.757        1.86     11.99            14.58         12.59           7.46

Table 6. The concise output of Tab. 5: statistical parameters of the Bob results for data not used in the training.

                                      ANN     Standing   Glaso   Marhoun
Average relative error (%)            0.8     2.07       4.15    1.06
Average absolute relative error (%)   1.639   4.118      5.154   3.519
Minimum absolute relative error (%)   0.16    1.39       0       0.57
Maximum absolute relative error (%)   7.46    11.99      14.58   12.59

Table 7. The results of the four models for the bubble point pressure for nontrained data.

No  Pb exp.  Pb Standing  Pb Glaso  Pb Marhoun  Pb ANN  Error% Standing  Error% Glaso  Error% Marhoun  Error% ANN
1   400      428          355       553         405     -7               11.25         -38.25          -1.25
2   500      487          450       530         489     2.6              10            -6              2.2
3   1242     1032         1165      1073        1192    16.91            6.2           13.61           4.03
4   1300     878          1106      1325        1281    32.46            14.92         -1.92           1.46
5   1600     1438         1675      2107        1683    10.13            -4.69         -31.69          -5.19
6   1875     1726         2144      1911        1624    7.95             -14.35        -1.92           -2.61
7   2000     1507         1987      2416        2001    24.65            0.65          -20.8           -0.05
8   3305     3595         4007      3098        3497    -8.77            -21.24        6.26            -5.81
9   3427     3283         3548      4068        3468    4.2              -3.53         -18.7           -1.2

Table 8. The concise output of Tab. 7: statistical parameters of the Pb results for data not used in the training.

                                      ANN      Standing   Glaso   Marhoun
Average relative error (%)            -0.935   9.235      -0.087  -11.046
Average absolute relative error (%)   2.642    12.740     9.647   15.461
Minimum absolute relative error (%)   0.050    2.600      0.650   1.920
Maximum absolute relative error (%)   5.810    32.460     21.240  38.250





Figure 6. Estimated vs. experimental Pb results for the training data, and comparison between the different models: (a) ANN model – this
study, (b) Standing, (c) Glaso and (d) Marhoun.


Figure 7. Estimated vs. experimental Bob results for the training data, and comparison between different models: (a) ANN model – this
study, (b) Standing, (c) Glaso and (d) Marhoun.





Figure 8. Estimated vs. experimental Pb results for the testing data, and comparison between different models: (a) ANN model – this
study, (b) Standing, (c) Glaso and (d) Marhoun.


Figure 9. Estimated vs. experimental Bob results for the testing data, and comparison between different models: (a) ANN model – this
study, (b) Standing, (c) Glaso and (d) Marhoun.




5 Conclusions

In this study, two neural network models were used to determine the bubble point pressures and the oil volume factors for hydrocarbon mixtures. Mean absolute relative errors of 0.037 % for the volume factor and 0.63 % for the bubble point pressure were obtained for the training data. Attempts were made to reach the best network architecture in terms of the number of neurons and network layers. It can be firmly stated that the Artificial Neural Network method gives better results than the previously published methods when there is a sufficient amount of experimental data.

Symbols used

a              [–]          output of neuron
API            [–]          American Petroleum Institute index for oil gravity
b              [–]          bias term
Bo             [–]          formation volume factor
Bo@Pb or Bob   [–]          oil formation volume factor at the bubble point pressure
f              [–]          transfer function
p              [–]          input of neuron
P              [psia]       pressure
Pb             [psia]       bubble point pressure
Rs             [SCF/STB]    gas solubility or gas oil ratio [standard cubic feet per stock tank barrel]
SC             [–]          standard conditions of temperature and pressure
T              [°F]         temperature
Vo             [–]          oil volume
w              [–]          connection weight

Greek symbols

γo   oil gravity
γg   gas gravity

Subscripts

b   bubble point
g   gas
o   oil
s   solubility

References

[1] T. Ahmed, Hydrocarbon Phase Behavior, 1st ed., Gulf Publishing Co., Houston, TX 1990.
[2] M. B. Standing, Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, Society of Petroleum Engineers, Dallas, TX 1981.
[3] O. Glaso, J. Pet. Technol. 1981, 32 (5), 785.
[4] M. A. Marhoun, J. Pet. Technol. 1988, 5, 650.
[5] W. S. McCulloch, W. Pitts, Bull. Math. Biophys. 1943, 5, 115.
[6] M. T. Hagan, Neural Network Design, PWS Publishing Company, Boston, MA 1995.
[7] S. Mohaghegh, B. Balan, S. Ameri, SPE Form. Eval. 1997, 12 (3), 170.
[8] D. Anderson, G. McNeill, Artificial Neural Networks Technology, DACS report, Kaman Sciences Corporation, Utica, NY 1992.
[9] A. Malallah, S. Nashawi, J. Pet. Sci. Eng. 2005, 49, 193.
[10] M. A. Arbib, Handbook of Brain Theory and Neural Networks, 2nd ed., MIT Press, Cambridge, MA 2003.
[11] S. Mohaghegh, J. Pet. Technol. 2000, 52 (9), 64.

