Estimating the Bubble Point Pressure and Formation Volume Factor of Oil Using Artificial Neural Networks
In 1988, Marhoun suggested an equation for the determination of the volume factor with respect to the gas-oil ratio, oil gravity, gas gravity and temperature, Eq. (6) [4]:

Bo = 0.497069 + 0.862963 · 10^–3 T + 0.182594 · 10^–2 F + 0.318099 · 10^–5 F^2 (6)

in which F is defined as below:

F = Rs^a γg^b γo^c (7)

where a = 0.74239, b = 0.323294 and c = –1.20204.

1.3 Bubble Point Pressure

For a given hydrocarbon system, the highest pressure at which the first gas bubble forms is called the bubble point of that system.

Pb = a Rs^b γg^c γo^d T^e (11)

where a = 5.38088 · 10^–3, b = 0.715082, c = –1.87784, d = 3.1437 and e = 1.32657.

2 Artificial Neural Networks

Artificial Neural Networks (ANNs) are dynamic systems that can transfer the governing rules behind a set of experimental data to the structure of the network. This new approach of neural networks was first introduced in the 1940s by McCulloch and Pitts [5], who showed the capability of these networks to calculate all arithmetic and logic functions [6]. ANNs are considered a different paradigm for computing and are being applied successfully across an extraordinary range of problem domains, in areas as diverse as finance, medicine, engineering, geology and physics [7].

The neuron is the smallest information-processing unit and the building block of a neural network. It is an analytical unit that receives signals from other neurons through gateways called dendrites and then combines them. If p and a are the input and output of a neuron, respectively, the effectiveness of p on a is indicated by the scalar weight w. A further constant input is multiplied by the bias b and summed with wp; this net input, n, is passed to the transfer function f. In this manner, the neuron output is defined as below [8]:

a = f(wp + b) (12)

The input model is given in Fig. 1.

2.1 The Structure of the Neural Networks

Neurons are connected to each other in a special arrangement to form a neural network. The connections can be arranged to form a mono-layer network or a multi-layer one. Multi-layer networks are composed of an input layer through which the input data are fed, an output layer that provides the network response, and a number of hidden layers situated between these two which connect them. The number of neurons and layers, the arrangement of the neurons and their dimensions constitute the structure of the neural network [10]. Schematics of the networks finally adopted for this study are given as examples in Figs. 2 and 3.
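The correlations of Eqs. (6), (7) and (11) and the single-neuron model of Eq. (12) can be sketched directly in code. This is only an illustrative transcription of the published coefficients, not the authors' implementation; the unit system (e.g., temperature scale, pressure unit) is not stated in this excerpt, so consistent field units are assumed, and the tanh transfer function in the neuron sketch is an assumption for illustration.

```python
import math

# Marhoun's correlating parameter F, Eq. (7): F = Rs^a * gamma_g^b * gamma_o^c
def marhoun_F(Rs, gamma_g, gamma_o, a=0.74239, b=0.323294, c=-1.20204):
    return (Rs ** a) * (gamma_g ** b) * (gamma_o ** c)

# Oil formation volume factor, Eq. (6)
def marhoun_Bo(Rs, gamma_g, gamma_o, T):
    F = marhoun_F(Rs, gamma_g, gamma_o)
    return (0.497069
            + 0.862963e-3 * T
            + 0.182594e-2 * F
            + 0.318099e-5 * F ** 2)

# Bubble point pressure, Eq. (11): Pb = a * Rs^b * gamma_g^c * gamma_o^d * T^e
def marhoun_Pb(Rs, gamma_g, gamma_o, T):
    a, b, c, d, e = 5.38088e-3, 0.715082, -1.87784, 3.1437, 1.32657
    return a * (Rs ** b) * (gamma_g ** c) * (gamma_o ** d) * (T ** e)

# Single-neuron output, Eq. (12): a = f(w*p + b),
# with tanh chosen here as an example transfer function
def neuron_output(p, w, b, f=math.tanh):
    return f(w * p + b)
```

Both correlations increase monotonically with temperature for fixed Rs, γg and γo, which is a quick sanity check on a transcription of the coefficients.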
Table 5. The results of four models for the volume factor at the bubble point pressure for nontrained data.
No Bob exp. Bob Standing Bob Glaso Bob Marhoun Bob ANN Error% Standing Error% Glaso Error% Marhoun Error% ANN
1 1.17 1.119 1.089 1.147 1.165 4.36 6.92 1.97 0.43
2 1.21 1.179 1.148 1.217 1.219 2.56 5.12 -0.58 -0.74
3 1.223 1.189 1.166 1.201 1.221 2.78 4.66 1.8 0.16
4 1.36 1.295 1.265 1.345 1.357 4.78 6.99 1.1 0.22
5 1.22 1.203 1.176 1.227 1.213 1.39 3.61 -0.57 0.57
6 1.259 1.284 1.271 1.273 1.254 -1.99 -0.95 -1.11 0.4
7 1.32 1.394 1.367 1.436 1.36 -5.61 -3.56 -8.79 -3.03
8 1.61 1.636 1.61 1.559 1.582 -1.61 0 3.17 1.74
9 2.01 1.769 1.717 1.757 1.86 11.99 14.58 12.59 7.46
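The Error% columns in Table 5 are consistent with the relative deviation (experimental − predicted)/experimental × 100, so positive values indicate under-prediction. A minimal sketch, checked against row 1 of the table (Bob exp. = 1.17, Standing = 1.119, ANN = 1.165):

```python
def percent_error(experimental, predicted):
    """Relative deviation in percent: positive when the model under-predicts."""
    return (experimental - predicted) / experimental * 100.0

# Row 1 of Table 5
e_standing = percent_error(1.17, 1.119)  # tabulated as 4.36
e_ann = percent_error(1.17, 1.165)       # tabulated as 0.43
```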
Table 7. The results of four models for bubble point pressure for nontrained data.
No Pb exp. Pb Standing Pb Glaso Pb Marhoun Pb ANN Error% Standing Error% Glaso Error% Marhoun Error% ANN
1 400 428 355 553 405 -7 11.25 -38.25 -1.25
2 500 487 450 530 489 2.6 10 -6 2.2
3 1242 1032 1165 1073 1192 16.91 6.2 13.61 4.03
4 1300 878 1106 1325 1281 32.46 14.92 -1.92 1.46
5 1600 1438 1675 2107 1683 10.13 -4.69 -31.69 -5.19
6 1875 1726 2144 1911 1624 7.95 -14.35 -1.92 -2.61
7 2000 1507 1987 2416 2001 24.65 0.65 -20.8 -0.05
8 3305 3595 4007 3098 3497 -8.77 -21.24 6.26 -5.81
9 3427 3283 3548 4068 3468 4.2 -3.53 -18.7 -1.2
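Averaging the absolute percent errors of Table 7 over the nine nontrained samples summarizes how the four models compare. The sketch below uses only the tabulated error values; the average absolute percent error (AAPE) metric is a common summary choice and an assumption here, not a statistic quoted in this excerpt.

```python
# Percent errors taken from Table 7 (bubble point pressure, nontrained data)
errors = {
    "Standing": [-7, 2.6, 16.91, 32.46, 10.13, 7.95, 24.65, -8.77, 4.2],
    "Glaso":    [11.25, 10, 6.2, 14.92, -4.69, -14.35, 0.65, -21.24, -3.53],
    "Marhoun":  [-38.25, -6, 13.61, -1.92, -31.69, -1.92, -20.8, 6.26, -18.7],
    "ANN":      [-1.25, 2.2, 4.03, 1.46, -5.19, -2.61, -0.05, -5.81, -1.2],
}

def aape(errs):
    """Average absolute percent error over the test rows."""
    return sum(abs(e) for e in errs) / len(errs)

# Models sorted from lowest to highest AAPE
ranking = sorted(errors, key=lambda m: aape(errors[m]))
```

On these nine rows the ANN model has the lowest AAPE of the four models, consistent with the row-by-row errors in the table.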
Figure 6. Estimated vs. experimental Pb results for the training data, and comparison between the different models: (a) ANN model – this study, (b) Standing, (c) Glaso and (d) Marhoun.
Figure 7. Estimated vs. experimental Bob results for the training data, and comparison between different models: (a) ANN model – this study, (b) Standing, (c) Glaso and (d) Marhoun.
Figure 8. Estimated vs. experimental Pb results for the testing data, and comparison between different models: (a) ANN model – this study, (b) Standing, (c) Glaso and (d) Marhoun.
Figure 9. Estimated vs. experimental Bob results for the testing data, and comparison between different models: (a) ANN model – this study, (b) Standing, (c) Glaso and (d) Marhoun.