A NEW APPROACH FOR ESTIMATING COMPRESSIBILITY FACTOR OF NATURAL GAS BASED ON
ARTIFICIAL NEURAL NETWORK
M.R. Nikkholgh1, A.R. Moghadassi*1, F. Parvizian1

1. Department of Chemical Engineering, Faculty of Engineering, Arak University,
Shariati St., Farmandari Sq., Arak - Markazi, Iran.
Phone/Fax: +(98) 861-2225946
E-mail: a-moghadassi@araku.ac.ir
Abstract
In this work, the ability of an Artificial Neural Network (ANN) based on the back-propagation algorithm to model and predict the compressibility factor of natural gas has been investigated. An MSE analysis of the results is used to verify the suggested approach. The results show good agreement between the experimental data and the ANN predictions. An important feature of the model is that it needs no theoretical knowledge or human experience during the training process. This work clearly shows the ability of an ANN to calculate the z-factor of natural gas from experimental data alone, instead of using equations of state.
Keywords: Artificial Neural Network, Compressibility factor, natural gas.

1. Introduction
Natural gas is a subcategory of petroleum: a naturally occurring, complex mixture of hydrocarbons with a minor amount of inorganic compounds [1]. Knowledge of the pressure-volume-temperature (PVT) behavior of natural gases is necessary to solve many petroleum engineering problems and to design and analyze natural gas production and processing systems. Gas reserves, gas metering, gas pressure gradients, pipeline flow, and compression of gases are some of the problems requiring precise calculation of gas density.
Due to intermolecular forces, real gases do not behave ideally, particularly at elevated pressures, and the ideal gas law is extended to real systems by including a compressibility factor, z. The compressibility factor is a key parameter, which can be determined from various theoretical and empirical equations of state or from a generalized chart for gases.

An artificial neural network is a model based on experimental results that is proposed to predict the required data while avoiding further experiments. The model provides a connection between input and output variables and bypasses the underlying complexity inside the system. The ability to learn the behavior of the data generated by a system certifies the versatility of neural networks [1]. Speed, simplicity, and the capacity to learn are the advantages of ANNs compared to classical methods. This model has been widely applied to estimate the physical and thermodynamic properties of chemical compounds.
ANNs have recently been used to predict properties of pure substances and petroleum fractions [2], activity coefficients of isobaric binary systems [3], dew point pressure [4], thermodynamic properties of refrigerants [5,6], activity coefficient ratios of electrolytes in amino acid solutions [7], etc. However, few publications are available in the literature on ANN applications in predicting PVT properties.
The main focus of this work is defining the ANN and selecting the best ANN predictor to estimate the compressibility factor in the desired temperature and pressure ranges instead of using empirically derived correlations. Finally, the results of the ANN model are evaluated against unseen data and then compared with the empirical models.

2. Compressibility Factor Theory


All gases behave ideally as the pressure approaches zero. Due to intermolecular forces, real gases do not behave ideally, particularly at elevated pressures. The ideal gas law is extended to real systems by including a compressibility factor, z, as:

Pν = zRT    (1)

The gas compressibility factor is also called the deviation factor, or z-factor. Its value reflects how much the real gas deviates from the ideal gas at a given pressure and temperature. The compressibility factor is defined as:

z = V_actual / V_ideal gas    (2)

The gas compressibility factor can be determined on the basis of measurements in PVT laboratories. For a given amount of gas, if the temperature is kept constant and the volume is measured at 14.7 psia and at an elevated pressure P1, the z-factor can then be determined with the following formula:

z = (P1 · V1) / (14.7 · V0)    (3)

where V0 and V1 are the gas volumes measured at 14.7 psia and P1, respectively.
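As a minimal sketch, Eq. (3) can be applied directly to two isothermal volume readings; the function name and sample pressures/volumes below are illustrative, not values from the paper:

```python
# Sketch of Eq. (3): z-factor from two isothermal volume measurements,
# V0 at the reference pressure 14.7 psia and V1 at an elevated pressure P1.

def z_factor_from_volumes(p1_psia, v1, v0, p_ref_psia=14.7):
    """Eq. (3): z = (P1 * V1) / (14.7 * V0)."""
    return (p1_psia * v1) / (p_ref_psia * v0)

# Illustrative reading: V0 = 100 units at 14.7 psia, V1 = 1.5 units at 1000 psia.
z = z_factor_from_volumes(1000.0, 1.5, 100.0)
print(round(z, 4))  # 1000*1.5 / (14.7*100) ≈ 1.0204
```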
Very often the z-factor is estimated with the chart developed by Standing and Katz [8]. This chart has been set up for computer solution by a number of individuals. The Brill and Beggs correlation yields z-factor values accurate enough for many engineering calculations. Hall and Yarborough presented a more accurate correlation to estimate the compressibility factor of natural gas.
Nonetheless, there is no single equation that accurately estimates the compressibility factor of natural gas under all conditions. Therefore, new attempts have been made to develop an alternative: a simple method which can be used for all conditions.

3. ARTIFICIAL NEURAL NETWORKS

In order to find the relationship between the input and output data derived from experimental work, a method more powerful than the traditional ones is necessary. The ANN is an especially efficient algorithm to approximate any function with a finite number of discontinuities by learning the relationships between input and output vectors [9]. These algorithms can learn from experiments and are also fault tolerant, in the sense that they are able to handle noisy and incomplete data. ANNs are able to deal with non-linear problems and, once trained, can perform estimation and generalization rapidly [10]. They have been used to solve complex problems that are difficult, if not impossible, to solve by conventional approaches, such as control, optimization, pattern recognition, and classification, especially when it is desired to have the minimum difference between the predicted and observed (actual) outputs [11].
Artificial neural networks are biologically inspired, based on various characteristics of brain functionality. They are composed of many simple elements called neurons that are interconnected by links, which act like axons, to determine an empirical relationship between the inputs and outputs of a given system. The multiple-layer arrangement of a typical interconnected neural network is shown in Figure (1). It consists of an input layer, an output layer, and one hidden layer, each with a different role. Each connecting line has an associated weight.

Fig. 1: The Neural Network Architecture
Artificial neural networks are trained by adjusting these connection weights so that the calculated outputs approximate the desired values. The output of a given neuron is calculated by applying a transfer function to a weighted summation of its inputs; the result can in turn serve as input to other neurons, as follows [12]:

α_jk = F_k( Σ_{i=1..N_{k-1}} w_ijk · α_i(k-1) + β_jk )    (4)

where α_jk is the output of neuron j in layer k and β_jk is the bias weight for neuron j in layer k. The model fitting parameters w_ijk are the connection weights. The nonlinear activation transfer functions F_k may take many different forms; the classical ones are the threshold, sigmoid, Gaussian, and linear functions [13]. For more details on various activation functions, see [14].
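Eq. (4) can be sketched in a few lines of vectorized code; the tanh activation, layer sizes, and random weights below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of Eq. (4): one layer's output is a transfer function applied to
# the weighted sum of the previous layer's outputs plus a bias vector.

def layer_forward(alpha_prev, W, beta, F=np.tanh):
    """alpha_jk = F_k( sum_i w_ijk * alpha_i(k-1) + beta_jk )."""
    return F(W @ alpha_prev + beta)

rng = np.random.default_rng(0)
alpha0 = np.array([1.2, 0.9])           # e.g. the two network inputs
W1 = rng.normal(size=(4, 2))            # hidden-layer connection weights
b1 = np.zeros(4)                        # hidden-layer bias weights
hidden = layer_forward(alpha0, W1, b1)  # four hidden-neuron outputs in (-1, 1)
print(hidden.shape)  # (4,)
```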
The training process requires a proper set of data, i.e. inputs (I_i) and target outputs (t_i). During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function [15]. The typical performance function used for training feed-forward neural networks is the network Mean Squared Error (MSE), Eq. (5):
MSE = (1/N) Σ_{i=1..N} (e_i)² = (1/N) Σ_{i=1..N} (t_i − α_i)²    (5)
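Eq. (5) amounts to a one-line computation; the sample targets and outputs below are made up for illustration:

```python
import numpy as np

# Sketch of Eq. (5): mean squared error between target outputs t_i
# and network outputs alpha_i over N samples.

def mse(targets, outputs):
    e = np.asarray(targets) - np.asarray(outputs)
    return float(np.mean(e ** 2))

print(mse([1.0, 0.9, 0.8], [1.1, 0.9, 0.6]))  # (0.01 + 0 + 0.04)/3 ≈ 0.01667
```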

There are many different types of neural networks, differing in their network topology and/or learning algorithm. In this paper the back-propagation learning algorithm, one of the most commonly used algorithms, is employed to predict the compressibility factor of natural gas. Back propagation trains a multilayer feed-forward network with hidden layers between the input and output [16]. The simplest implementation of back-propagation learning updates the network weights and biases in the direction of the negative gradient, along which the performance function decreases most rapidly. An iteration of this algorithm can be written as follows [17]:

x_(k+1) = x_k − l_k · g_k    (6)

where x_k is a vector of the current weights and biases, g_k is the current gradient, and l_k is the learning rate.
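The update in Eq. (6) can be sketched on a toy quadratic objective standing in for the network's MSE performance function; the objective, learning rate, and iteration count are illustrative:

```python
import numpy as np

# Sketch of Eq. (6): steepest-descent update of the weight/bias vector x
# along the negative gradient g with learning rate l. Minimizing
# f(x) = x1^2 + x2^2 here is a stand-in for minimizing the network MSE.

def descent_step(x, grad, lr):
    """x_{k+1} = x_k - l_k * g_k."""
    return x - lr * grad

x = np.array([2.0, -1.0])
for _ in range(50):
    g = 2.0 * x                      # gradient of f(x) = x1^2 + x2^2
    x = descent_step(x, g, lr=0.1)   # each step scales x by (1 - 0.2)
print(np.allclose(x, 0.0, atol=1e-4))  # True: iterates shrink toward the minimum
```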

There are various back-propagation algorithms, such as Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM), and Resilient back Propagation (RP). LM is the fastest training algorithm for networks of moderate size, and it has a memory-reduction feature that can be used when the training set is large. SCG is one of the most important general-purpose back-propagation training algorithms.

Neural nets learn to recognize the patterns of the data sets during the training process. Because neural nets teach themselves the patterns of the data set, the analyst is free to perform more interesting, flexible work in a changing environment. Although a neural network may take some time to learn a sudden drastic change, it is excellent at adapting to constantly changing information [17]. Programmed systems, by contrast, are constrained by the situation they were designed for and are not valid otherwise. Neural networks build informative models where more conventional models fail to do so. Because they handle very complex interactions, neural networks can easily model data that are too difficult to model with traditional approaches (inferential statistics or programming logic). The performance of neural networks is at least as good as that of classical statistical modeling, and even better in most cases [18]; the models neural networks build are more reflective of the data structure and are significantly faster.
Neural networks now operate well on modest computer hardware. Although they are computationally intensive, the routines have been optimized to the point that they can run in reasonable time on personal computers; they no longer require the supercomputers of the early days of neural network research.

4. RESULTS AND DISCUSSION


Developing a neural network model that accurately predicts the z-factor of natural gas requires exposing it to a large data set during the training phase. A set of data containing pseudo-reduced pressure, pseudo-reduced temperature, and compressibility factor was collected from the Handbook of Natural Gas Engineering [8]. Table (1) lists the ranges of the data used for training and testing the neural network.
Table 1: Minimum and maximum of the data used in the ANN

Property                      min     max
Pseudo-Reduced Pressure       0.2     15
Pseudo-Reduced Temperature    1.05    3
Compressibility Factor        0.351   1.707

The back-propagation method with the LM, SCG, and RP learning algorithms has been used in a feed-forward, single-hidden-layer network. Input-layer neurons have no transfer functions. The compressibility factor depends only on Tpr and Ppr; consequently, the inputs are the pseudo-reduced temperature and pseudo-reduced pressure, while the output is the compressibility factor.
Neural network training can be made more efficient if certain preprocessing steps are performed on the network inputs and targets [19]. All of the input and output data in the training phase were pre-processed by normalizing the inputs and targets to the range [-1, 1].
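This normalization is a linear min-max map of each column to [-1, 1] (MATLAB's `mapminmax` performs the equivalent mapping); the helper name and sample values below are illustrative:

```python
import numpy as np

# Sketch of the preprocessing step: linearly map a data column to [-1, 1]
# using its minimum and maximum.

def to_minus1_plus1(x):
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0

ppr = [0.2, 5.0, 15.0]  # sample pseudo-reduced pressures from the Table (1) range
scaled = to_minus1_plus1(ppr)
print(scaled[0], scaled[-1])  # -1.0 1.0
```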
The neurons in the hidden layer perform two tasks: summing the weighted inputs connected to them and passing the result through a nonlinear activation function to the output or to adjacent neurons of the corresponding hidden layer. Training and testing of the ANN were performed with the MATLAB toolbox.
More than 80% of the data set was used to train each ANN, and the rest was used to evaluate its accuracy and trend stability. The number of hidden neurons was systematically varied to obtain a good estimate of the trained data; the selection criterion is the net output MSE. The MSE for various numbers of hidden-layer neurons is shown in Figure (2). As can be seen, the optimum number of hidden-layer neurons, for minimum MSE, is determined to be 100. Similarly, the MSE of the various training algorithms was calculated and listed in Table (2) for the selected 100 hidden-layer neurons. As Table (2) shows, the Levenberg-Marquardt (LM) algorithm has the minimum MSE.
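The selection loop above can be sketched as follows. To stay self-contained, this toy single-hidden-layer net is fitted with random tanh features plus a least-squares output layer (not the paper's LM training), on a synthetic smooth target standing in for z(Tpr, Ppr); the candidate sizes and data are illustrative:

```python
import numpy as np

# Sketch of model selection: vary the hidden-neuron count and keep the
# size with the lowest test-set MSE, mirroring the procedure in the text.

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 2))        # normalized (Tpr, Ppr) inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2         # synthetic stand-in target
X_tr, X_te = X[:240], X[240:]                    # >80% for training, rest for testing
y_tr, y_te = y[:240], y[240:]

def fit_eval(n_hidden):
    """Fit a net with n_hidden tanh neurons and return its test MSE."""
    W = rng.normal(size=(2, n_hidden))            # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H_tr, H_te = np.tanh(X_tr @ W + b), np.tanh(X_te @ W + b)
    beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)  # output-layer weights
    return float(np.mean((H_te @ beta - y_te) ** 2))    # test-set MSE

sizes = [2, 5, 10, 20, 50]
mses = {n: fit_eval(n) for n in sizes}
best = min(mses, key=mses.get)                    # size with minimum MSE
print(best, mses[best])
```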

Table 2: MSE of different algorithms for 100 neurons

Training Algorithm    MSE
Trainlm               0.000635
Trainscg              0.047300
Trainrp               0.163400

According to these tables, the Levenberg-Marquardt (LM) algorithm is the most suitable algorithm, with the minimum MSE. Consequently, LM provides the lowest average error for both training and testing of the network. Figure (3) shows the relative error fluctuations of the LM algorithm.

Fig. 2: Determining the best number of neurons for different algorithms    Fig. 3: Relative error for training the network

The results show that the ANN can predict compressibility factors very close to the experimentally measured ones. Figures (4) and (5) show the scatter diagrams that compare the experimental data with the computed neural network data for both training and testing of the ANN.

Fig. 4: Comparison between the ANN results and Exp. Data for Training    Fig. 5: Comparison between the ANN results and Exp. Data for Testing
As may be seen, a tight cloud of points about the 45° line is obtained for the new data points. This indicates an excellent agreement between the experimental and calculated data. Figure (6) compares the ANN simulation with the experimental data.

Fig. 6: Comparison between ANN and Exp. Data for Compressibility Factor

5. CONCLUSION

Nowadays, there is no single equation that accurately estimates the compressibility factor of natural gas under all conditions [20]. In this work, the ability of an Artificial Neural Network based on the back-propagation algorithm to model and predict the compressibility factor of natural gas has been investigated. The learning algorithm and the number of hidden-layer neurons were systematically varied to obtain a good estimate of the trained data. More than 80% of the data set was used to train each ANN, and the rest was used to evaluate its accuracy and trend stability. An MSE analysis of the results is used to verify the suggested approach.
The optimum number of hidden-layer neurons, for minimum MSE, is determined to be 100. The Levenberg-Marquardt algorithm is the most suitable algorithm, with the minimum MSE; consequently, LM provides the lowest average error for both training and testing of the network, as shown in the figures above. The results show good agreement between the experimental data and the ANN predictions.

REFERENCES
1. Valles, H. R., A neural network method to predict activity coefficients for binary systems based on molecular
functional group contribution, M. S. Thesis, University of Puerto Rico (2006).
2. Bozorgmehry, R. B., Abdolahi, F., Moosavian, M. A., Characterization of basic properties for pure substances and petroleum fractions by neural network, Fluid Phase Equilibria, 231, p. 188 (2005).
3. Biglin, M., Isobaric vapor-liquid equilibrium calculations of binary systems using a neural network, J. Serb.
Chem. Soc., 69, p. 669 (2004).
4. Zambrano, G., Development of Neural Network Models for the Prediction of Dew point Pressure of Retrograde
Gases and Saturated Oil Viscosity of Black Oil Systems, M. S. Thesis, Texas A&M University, Texas (2002).
5. Ganguly, S., Prediction of VLE data using radial basis function network, Computers and chemical engineering,
27, p. 1445 (2003).
6. Sozen, A., Arcakilioglu, E., Ozalp, M., Formulation based on artificial neural network of thermodynamic
properties of ozone friendly refrigerant/absorbent couples, Applied thermal engineering, 25, p. 1808 (2005).
7. Dehghani, M.R., Modarress, H., Bakhshi, A., Modeling and prediction of activity coefficient ratio of electrolytes
in aqueous electrolyte solution containing amino acids using artificial neural network, Fluid phase equilibria,
244, p.153 (2006).
www.SID.ir
8. Katz, D. L., Cornell, D., Vary, J. A., Kobayashi, R., Elenbaas, J. R., Poettmann, F. H., Weinaug, Handbook of Natural Gas Engineering, McGraw-Hill Book Company, Inc., New York (1959).
9. Hagan, M. T., Demuth, H. B., Beal, M., Neural Network Design, PWS Publishing Company, Boston (1996).
10. Sozen, A., Arcakilioglu, E., Ozalp, M., Investigation of thermodynamic properties of refrigerant /absorbent
couples using artificial neural networks, Chemical engineering and processing, 43, p.1253 (2004).
11. Richon, D., Laugier, S., Use of artificial neural networks for calculating derived thermodynamic quantities
from volumetric property data, Fluid phase equilibria, 210, p. 247 (2003).
12. Gharbi, R., Estimating the isothermal compressibility coefficient of undersaturated Middle East crudes using neural networks, Energy & Fuels, 11, p. 372 (1997).
13. Lang, R.I.W., A Future for Dynamic Neural Networks, Dept. Cybernetics, University of Reading, UK (2001).
Bulsari, A. B., Neural Networks for Chemical Engineers, Elsevier, Amsterdam (1995).
14. Demuth, H., Beale M., Neural Network Toolbox User’s Guide (2002).
15. Osman, E. A., Al-Marhoun, M. A., Using artificial neural networks to develop new PVT correlations for Saudi crude oils, In: 10th Abu Dhabi International Petroleum Exhibition and Conference (2002).
16. Parvizian, F., Prediction of thermodynamic properties (P-V-T) of pure component by using of Artificial Neural Network, M.S. Thesis, Arak University, Iran (2008).
17. Zambrano, G., Development of Neural Network Models for the Prediction of Dew point Pressure of Retrograde Gases and Saturated Oil Viscosity of Black Oil Systems, M.S. Thesis, Texas A&M University, Texas (2002).
18. Demuth, H., Beale, M., Neural network toolbox for use with MATLAB, Handbook, (2002).
19. Padmavathi, G., Mandan, M. G., Mitra, S. P., Chaudhuri, K. K., Neural modelling of Mooney viscosity of polybutadiene rubber, Computers and Chemical Engineering, 29, p. 1677 (2005).
20. Guo, B., Ghalambor, A., Natural Gas Engineering Handbook, TISP Technical Publishing (2005).
