
Fuel 137 (2014) 145–154

Contents lists available at ScienceDirect

Fuel
journal homepage: www.elsevier.com/locate/fuel

A computational intelligence scheme for predicting the equilibrium water dew point of natural gas in TEG dehydration systems

Mohammad Ali Ahmadi a,*, Reza Soleimani b, Alireza Bahadori c,*

a Department of Petroleum Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Iran
b Department of Gas Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Ahwaz, Iran
c Southern Cross University, School of Environment, Science and Engineering, Lismore, NSW, Australia

Highlights

- Particle swarm optimization (PSO) is used to estimate the water dew point of natural gas in equilibrium with TEG.
- The model has been developed and tested using 70 series of data.
- The back-propagation (BP) algorithm is used to estimate the water dew point of natural gas in equilibrium with TEG.
- PSO-ANN accomplishes more reliable outputs compared with BP-ANN in terms of statistical criteria.

Article history:
Received 4 November 2013
Received in revised form 24 July 2014
Accepted 24 July 2014
Available online 5 August 2014

Keywords:
Gas dehydration
Triethylene glycol
Equilibrium water dew point
Particle swarm optimization
Artificial neural network

Abstract

Raw natural gases are frequently saturated with water during production operations. It is crucial to remove water from natural gas using a dehydration process in order to eliminate safety concerns as well as for economic reasons. Triethylene glycol (TEG) dehydration units are the most common type of natural gas dehydration. Making an assessment of a TEG system takes in first ascertaining the minimum TEG concentration needed to fulfill the water content and dew point specifications of the pipeline system. A flexible and reliable method for modeling such a process is of the essence from a gas engineering viewpoint, and the current contribution is an attempt in this respect. Artificial neural networks (ANNs) trained with particle swarm optimization (PSO) and the back-propagation (BP) algorithm were employed to estimate the equilibrium water dew point of a natural gas stream with a TEG solution at different TEG concentrations and temperatures. PSO and BP were used to optimize the weights and biases of the networks. The models were built upon a literature database covering VLE data for the TEG–water system for contactor temperatures between 10 °C and 80 °C and TEG concentrations ranging from 90.00 to 99.999 wt%. Results showed PSO-ANN accomplishes more reliable outputs compared with BP-ANN in terms of statistical criteria.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

All natural gas streams contain significant amounts of water vapor as they exit from oil and gas reservoirs. Water vapor in natural gas can cause several operational problems during the processing and transmission of natural gas, such as line plugging due to formation of gas hydrates, reduction of line capacity due to formation of free water (liquid), corrosion, and the decrease of natural gas heating value.

Various techniques can be executed to dehydrate natural gas. Among these gas dehydration methods, the glycol absorption process, in which glycol serves as the liquid desiccant (absorption liquid), is the most common dehydration process used in the gas industry, since it best approximates the features that fulfill the commercial application criteria.

In a typical TEG system, shown in Fig. 1, water-free TEG (lean or dry TEG) enters at the top of the TEG contactor, where it flows countercurrent with the wet natural gas stream flowing up the tower. Elimination of water from natural gas via TEG is based on physical absorption.

* Corresponding authors at: Department of Petroleum Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Iran. Tel.: +98 9126364936 (M.A. Ahmadi); Southern Cross University, School of Environment, Science and Engineering, Lismore, NSW, Australia. Tel.: +61 2 6626 9412 (A. Bahadori).
E-mail addresses: ahmadi6776@yahoo.com (M.A. Ahmadi), Alireza.bahadori@scu.edu.au (A. Bahadori).

http://dx.doi.org/10.1016/j.fuel.2014.07.072
0016-2361/© 2014 Elsevier Ltd. All rights reserved.

Nomenclature

Acronyms
ANN artificial neural network
TEG triethylene glycol
VLE vapor–liquid equilibrium
BP back-propagation
MEG monoethylene glycol
FFNN feed-forward neural network
GA genetic algorithm
ICA imperialist competitive algorithm
MSE mean square error
PA pruning algorithm
DEG diethylene glycol
TREG tetraethylene glycol
PSO particle swarm optimization
HGAPSO hybrid genetic algorithm and particle swarm optimization
MLP multilayer perceptron
TST Twu–Sim–Tassone
SPSO stochastic particle swarm optimization
UPSO unified particle swarm optimization

Symbols used
bH bias associated with hidden neurons
bO bias associated with output neuron
c1, c2 trust parameters
wt% weight percent
°C degrees centigrade
kPa kilopascals
psia pounds per square inch absolute
K number of input training data
A input signal (vector)
W vector of weights and biases
grad gradient of the performance function
r1, r2 random numbers
SH hidden neuron's net input signal
Td equilibrium water dew point temperature
T contactor temperature
vi velocity of the ith particle
wH weight between input and hidden layer
xi position of the ith particle
xg gbest value
xi,p pbest value of particle i
Ypre predicted output
Yexp actual output
OH output of the hidden neuron
R2 correlation coefficient

Greek symbols
φ activation function
ω inertia weight
α learning rate

Subscripts
i particle i
j input j
k (in Eq. (7)) kth iteration
m number of neurons in the input layer
z zth experimental data

Superscripts
n iteration number
max maximum
min minimum
pre predicted
exp experimental

In a TEG system, specification of the minimum concentration of TEG needed to fulfill the water dew point of the exit gas has always been operationally important. Indeed, the one single change that can be made in a TEG system which will produce the largest effect on dew point depression is the degree of TEG concentration (purity). To that end, a liquid–vapor equilibrium relation/model for the water–TEG system is needed.

Several equilibrium correlations [1–7] for estimating the equilibrium water dew point of natural gas with a TEG dehydration system can be found in the literature. Generally, the correlations presented by Worley [4], Rosman [5] and Parrish et al. [1] work satisfactorily and are suitable for most TEG system designs. However, according to the literature [8], previously published correlations are unable to estimate precisely the equilibrium water concentration above TEG solutions throughout the vapor phase.

Parrish et al. [1] and Won [7] generated correlations in which equilibrium concentrations of water throughout the vapor phase have been ascertained at 100% TEG (unlimited dilution). Moreover, the other approaches employ data extrapolations at lower concentrations to predict equilibrium throughout the unlimited dilution area [8]. The effect of pressure on TEG–water equilibrium is small up to about 13,800 kPa (2000 psia) [1].

Recently, Bahadori and Vuthaluru [9] proposed a simple correlation for the prompt prediction of the equilibrium water dew point of a natural gas stream with a TEG solution in terms of TEG concentrations and contactor temperatures. In addition, Twu et al. [10] employed the Twu–Sim–Tassone (TST) equation of state (EOS) [11] to specify the water–TEG system phase behavior. Furthermore, they presented an approach for employing the TST EOS to determine water content and water dew point throughout natural gas systems. Although these methods (i.e. the TST equation of state and the simple correlation) have good predictive capability, their applications are typically limited to the system for which they have been adapted. As a matter of fact, the aforementioned schemes need tunable parameters which should be adjusted based upon experimental data points. Without experimental data points and adjusted parameters, the aforementioned models are not reliable. In such circumstances, it is preferable to develop and employ general models competent to predict the phase behavior of such systems. Among the various predictive methods, the artificial neural network (ANN) is one of the competent methods enjoying great flexibility and capable of capturing multiple mechanisms of action [12]. ANNs are computational schemes, either hardware or software, which imitate the computational abilities of the human brain by using numbers of interconnected artificial neurons. The inimitability of the ANN lies in its ability to acquire and create interrelationships between dependent and independent variables without any prior knowledge or any assumptions about the form of the relationship made in advance [13].

In the last two decades, ANNs have become one of the most successful and widely applied techniques in many fields, including chemistry, biology, materials science, engineering, etc. Especially in the field of modeling of vapor–liquid equilibrium (VLE), ANNs have successful track records [14–24].

Implications of artificial-intelligence-based approaches in various complicated engineering aspects have received noticeable attention in recent years, such as the application of back-propagation (BP)–

Fig. 1. Basic TEG dehydration unit.

feed-forward neural networks [25], the coupling of genetic algorithms (GA) and fuzzy logic [26], particle swarm optimization (PSO) [27–29], hybridized PSO and GA (HGAPSO) [30,31], unified particle swarm optimization (UPSO) [32], fuzzy decision trees (FDT) [33,34], the imperialist competitive algorithm (ICA) [35–37], least squares support vector machines (LS-SVM) [38–40], and the pruning algorithm (PA) [41], which have been applied to determine network structure and involved parameters.

In this study, PSO is employed to specify the optimum values of the interconnection weights throughout a feed-forward neural network in order to predict the equilibrium water dew point temperature of a natural gas stream with a TEG solution at different TEG concentrations and contactor temperatures. Modeling results confirm the integrity and show the ability of the suggested hybrid model for the estimation of water dew point with adequate precision in comparison with the real recorded data published in the previous literature (see Appendix A) [1,6].

2. Artificial neural network

Artificial neural networks (ANN), usually referred to as neural networks (NN), are an attempt at mimicking the information processing competences of biological nervous systems. The leading-edge picture of neural networks first came into being in the 1940s with McCulloch and Pitts [42], who illustrated that networks of artificial neurons could, in principle, handle any arithmetic or logical function. The fundamental element of processing throughout a NN is a neuron (node) in which simple computations are carried out from a vector of input values. A neuron executes a nonlinear transformation of the weighted sum of the incoming neuron inputs to yield the output of the neuron (see Fig. 2).

One of the most conventional types of ANN approaches is the multilayer perceptron (MLP), which belongs to a common category of configurations named "feed-forward NN", a simple class of NN capable of approximating general types of functions, including integrable and continuous functions [43]. In the feed-forward NN, the track of signal movement is from the input layer, via hidden layers, to the output layer. Throughout the MLP configuration, the neurons are assembled into layers. The first and last layers are named the input and output layers correspondingly, because they illustrate the inputs and outputs of the overall network. The remaining layers are named hidden layers. In a NN, each neuron, except neurons located in the input layer, obtains and treats inputs from other neurons. The treated information is obtainable at the output termination of the neuron. Fig. 2 demonstrates the way in which a hidden layer's neuron throughout an MLP handles the information.

Herein, each input to the third hidden neuron in a 3-layer feed-forward neural network is denoted by a1, a2, a3, ..., am; collectively they are referred to as the input vector. Every input is multiplied by a relevant weight wH3,1, wH3,2, ..., wH3,m, which represents the synaptic neural links throughout natural nets and proceeds in such a method as to decrease or increase the input signals to the neuron. As a matter of fact, the weight factors are adjustable constants inside the network which specify the strength of the input signals. Weighted inputs are applied to the summation block, labeled Σ. The neuron also has a bias, bH3, that is collected with the weighted inputs to create the net input. A bias represents a weight which does not join an input and an output of two neurons, but is multiplied by a unit signal and led to the neuron. A bias sets a specific degree of the output signal of a neuron which is autonomous from the input signals. The algebraic formulation for the net input can be expressed as follows:

$S_3^H = \mathrm{NET} = \sum_{j=1}^{m} w_{3,j}^{H} a_j + b_3^{H}$  (1)

The neuron applies a mapping or activation function to NET to generate an outcome $O_3^H$ that can be shown as:

$O_3^H = \varphi(\mathrm{NET}) = \varphi\!\left( \sum_{j=1}^{m} w_{3,j}^{H} a_j + b_3^{H} \right)$  (2)

where φ stands for the neuron transfer function or the neuron activation function. Three of the most commonly used activation functions are shown below.

- Log-sigmoid function (logsig):

$\varphi(s) = \dfrac{1}{1 + e^{-s}}$  (3)

- Hyperbolic tangent function (tansig):

$\varphi(s) = \dfrac{e^{s} - e^{-s}}{e^{s} + e^{-s}}$  (4)

- Linear function (purelin):

$\varphi(s) = s$  (5)
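As a minimal illustration of Eqs. (1)–(5), the net-input summation and the three activation functions above can be sketched in Python. The input, weight, and bias values below are made up for demonstration and are not taken from the paper's networks:

```python
import math

def logsig(s):
    # Log-sigmoid activation, Eq. (3): 1 / (1 + e^(-s))
    return 1.0 / (1.0 + math.exp(-s))

def tansig(s):
    # Hyperbolic tangent activation, Eq. (4)
    return (math.exp(s) - math.exp(-s)) / (math.exp(s) + math.exp(-s))

def purelin(s):
    # Linear activation, Eq. (5)
    return s

def neuron_output(inputs, weights, bias, phi=logsig):
    # Eq. (1): net input S = sum_j w_j * a_j + b
    net = sum(w * a for w, a in zip(weights, inputs)) + bias
    # Eq. (2): output O = phi(net)
    return phi(net)

# Hypothetical 3-input hidden neuron
print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], 0.3))
```

In the networks of this study, logsig is used in the hidden layer and purelin in the output layer (see Table 1).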

Fig. 2. Schematic of an artificial neuron within the hidden layer in a 3 layer feed-forward neural network.

It is worth mentioning that w and b are both adaptable variables of the neuron. The principal concept of the NN is that such variables can be modified so that the network shows some interesting or desired performance. The thresholds and weight factors are updated throughout the process of training. Therefore, to do a specific task we can train the network by regulating the bias or weight factors. There are numerous categories of approaches for training a NN. The back-propagation (BP) approach is one of the most conventional types of training methods for MLP-FFNNs. ANN training via BP, which is one of the gradient descent algorithms, is an iterative optimization approach where the introduced objective function is minimized by updating the interconnection weights properly. The mean squared error (MSE) is a frequently employed objective function that is formulated as below:

$\mathrm{MSE} = \dfrac{1}{K} \sum_{l=1}^{K} \left( Y_l^{\mathrm{exp}} - Y_l^{\mathrm{pre}} \right)^2$  (6)

where K denotes the number of training samples, and $Y_l^{\mathrm{exp}}$ and $Y_l^{\mathrm{pre}}$ are the recorded values and estimated data, respectively.

The straightforward application of BP learning iteratively adjusts the network biases and interconnection weights along the track wherein the objective function declines most quickly (as shown in the following equation, the gradient has a negative sign). Iteration throughout this strategy can be demonstrated as:

$W_{k+1} = W_k - a_k\, \mathrm{grad}_k$  (7)

in which $W_k$ stands for the vector of present biases and weights, $\mathrm{grad}_k$ represents the present gradient of the performance function, and the parameter $a_k$ denotes the learning rate. It is worth mentioning that this training algorithm needs the differentiability of the activation functions φ, since the weight adjustment is made on the basis of the gradient of the performance function, which is described in terms of the activation functions and weights. Interested readers are referred to the literature [44–48] for more descriptions of the technical points of view of the BP training approach. Fig. 3 presents the flowchart of training an MLP feed-forward neural network by application of the BP algorithm. In this study, the ANN paradigm trained with BP applied the Levenberg–Marquardt algorithm.

Fig. 3. The flowchart of ANN trained with back-propagation algorithm [57].
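To make the update rule concrete, the sketch below applies Eqs. (6) and (7) to a toy one-parameter model. This is plain gradient descent for illustration only; the paper's BP networks actually use the Levenberg–Marquardt variant, and the data here are invented:

```python
def mse(y_exp, y_pre):
    # Eq. (6): mean squared error over K samples
    return sum((e - p) ** 2 for e, p in zip(y_exp, y_pre)) / len(y_exp)

def gradient_step(w, grad, lr):
    # Eq. (7): W_{k+1} = W_k - a_k * grad_k
    return w - lr * grad

# Toy example: fit y = w * x to data generated with w = 2
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = 0.0
lr = 0.05
for _ in range(200):
    preds = [w * x for x in xs]
    # Analytic gradient of the MSE with respect to w
    grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    w = gradient_step(w, grad, lr)

print(round(w, 4), mse(ys, [w * x for x in xs]))
```

Each iteration moves the weight against the gradient of the MSE, exactly the descent direction described by Eq. (7).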

3. Particle swarm optimization (PSO)

PSO is a stochastic population-based search approach invented by Kennedy and Eberhart in 1995 [49], modeled on the social behavior of some kinds of animals (such as bird flocks, fish schools, and insect swarms), with the intention of capturing more complicated activities that can be utilized to unravel difficult issues, mostly optimization problems [50]. This optimization algorithm can be readily executed and is inexpensive from a computational point of view, because its CPU speed and memory requirements are low [51].

PSO conducts the search for the optima using a population (swarm) of particles. Every particle throughout the swarm characterizes a candidate answer to the optimization issue. In a PSO scheme, every particle is "flown" over a hyper-dimensional search space, iteratively modifying its position consistent with its own flight knowledge as well as the flying knowledge of the other particles in the entire search space, since particles of a swarm communicate good positions to each other. A particle thus employs the finest position experienced by itself and the finest position of the other particles to guide itself toward an optimal answer. The effectiveness of every particle (i.e. the "nearness" of a particle to the global optimum) is evaluated through an objective function which is associated with the issue being unraveled [50].

After finding the two best aforementioned positions, during an iteration-based process every particle throughout the swarm is adjusted executing the formulas below:

$v_i^{n+1} = \omega v_i^{n} + c_1 r_1^{n} \left( x_{i,p}^{n} - x_i^{n} \right) + c_2 r_2^{n} \left( x_g^{n} - x_i^{n} \right)$  (8)

$x_i^{n+1} = x_i^{n} + v_i^{n+1}$  (9)

where n stands for the iteration number and the index of the particle is denoted by i. $v_i^{n}$ represents the velocity of particle i at the nth iteration and $v_i^{n+1}$ is the velocity of particle i at the (n+1)th iteration. The individual finest position, $x_{i,p}^{n}$, connected with particle i is the finest position the particle has visited since the first time stage (pbest); $x_g^{n}$ is the best value obtained up to now (i.e. at the nth iteration) by any particle in the swarm (gbest). c1 and c2 are the acceleration factors related to pbest and gbest respectively, and typically the c1 and c2 values are set to 2. $r_1^{n}$ and $r_2^{n}$ are random values with uniform distribution in the range [0, 1] [52]. $x_i^{n+1}$ and $x_i^{n}$ are the positions of particle i at the (n+1)th and nth iterations respectively. ω is the inertia weight, presented by Shi and Eberhart [53], which controls the exploration and exploitation of the search space [54]. Generally, the inertia weight is calculated by means of a linearly declining methodology where an initially large inertia weight is linearly reduced to a minor value [50]:

$\omega^{n} = \omega_{\max} - \left( \dfrac{\omega_{\max} - \omega_{\min}}{n_{\max}} \right) n$  (10)

where $\omega_{\max}$, $\omega_{\min}$, n and $n_{\max}$ are the initial inertia weight, the final inertia weight, the current iteration number and the total iteration number (maximum number of iterations used in PSO) respectively. Usually the $\omega_{\max}$ and $\omega_{\min}$ values are set to 0.9 and 0.4 respectively [27,50,55].

PSO shares various common points with evolutionary based approaches such as genetic algorithms (GAs). However, PSO enjoys noticeable advantages. The two main advantages of PSO over GAs are [31]:

- The memory of PSO; that is, the information of worthy answers is remembered by all particles, while in GA, preceding information about the issue is demolished as soon as the new population is generated.
- GAs use filtering operations such as the selection operation; PSO, however, does not utilize one, and all the particles of the swarm are retained throughout the process of searching to impart their knowledge successfully.

Optimization of functions with continuous-valued variables is done mainly via PSO. Optimizing the weights and biases of a NN is one of the first implementations of PSO. The first studies in training MLP feed-forward neural networks using PSO [55,56] illustrated that PSO is a competent substitute for training neural networks. Frequent investigations have additionally surveyed the ability of PSO as a training approach for a number of various neural network configurations. Investigations have also demonstrated for particular implementations that neural networks trained executing PSO afford more precise outputs.

4. Implementation of ANN training using PSO algorithm

With the intention of employing PSO for training a neural network, an appropriate representation and fitness function are necessary. Since the main objective is to minimize the error, the objective function is the provided error measure (e.g. the MSE). Every particle demonstrates a nominee answer to the optimization issue; since the interconnection weights of a neural network at the training step are an answer, a sole particle illustrates a single comprehensive network. Every component of a particle's position vector illustrates a single neural network bias or weight. Employing this illustration, the PSO approach can be employed to specify the finest weights for a neural network so as to minimize the fitness function [50].

As a matter of fact, the fitness function for each particle is gained by adjusting the interconnection weights of the ANN as determined by the parameters of the particle and evaluating the fitness function gained in training of the ANN. In the same way, the fitness functions of all the particles in the swarm are established. The gbest particle is defined as the particle having the lowest fitness function, and the fitness function of the gbest particle is contrasted with the pre-defined precision. If the pre-defined precision is fulfilled, the process of training is discontinued. Else, the new position and velocity of the particles are adjusted again according to Eqs. (8) and (9). The same procedure is replicated until the pre-defined precision is achieved [57].

Fig. 4. The flowchart of ANN optimized with PSO algorithm [57].
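The PSO loop of Eqs. (8)–(10) can be sketched compactly as follows, using the settings quoted in the text (c1 = c2 = 2, inertia weight declining from 0.9 to 0.4). The fitness here is a made-up two-variable quadratic standing in for the network MSE; in actual PSO-ANN training each particle's position vector would hold all of the network's weights and biases:

```python
import random

random.seed(0)

def fitness(position):
    # Toy objective standing in for the network MSE; minimum at (1, -2).
    x, y = position
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

dim, n_particles, n_iter = 2, 22, 200   # 22 particles, 200 iterations (Table 1)
c1 = c2 = 2.0                           # acceleration (trust) factors
w_max, w_min = 0.9, 0.4                 # inertia weight bounds

# Particles initialized in [-1, 1], as described for the network weights
pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)[:]

for n in range(n_iter):
    w = w_max - (w_max - w_min) * n / n_iter   # Eq. (10): linearly declining inertia
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # Eq. (8): velocity update from inertia, pbest pull, and gbest pull
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            # Eq. (9): position update
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]

print([round(v, 3) for v in gbest])  # typically lands near (1, -2)
```

Replacing `fitness` with a routine that loads the particle's components into a network and returns its training MSE turns this sketch into the PSO-ANN scheme described in Section 4.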

The flowchart of PSO-ANN is shown in Fig. 4. It should be mentioned that each weight throughout the constructed NN is originally established in the span of [−1, 1] and each initial particle is an assortment of weights produced arbitrarily in the span of [−1, 1].

Fig. 5. Variation of (a) R² and (b) MSE with the number of hidden neurons.

Fig. 6. Actual versus predicted equilibrium water dew point using the BP-ANN model: (a) training and (b) testing.

Fig. 7. Actual versus predicted equilibrium water dew point using the PSO-ANN model: (a) training and (b) testing.

Table 1
Details of trained ANN with PSO for the estimation of the water dew point of a natural gas stream in equilibrium with a TEG solution.

Type: Value/comment
Input layer: 2
Hidden layer: 7
Output layer: 1
Hidden layer activation function: Logsig
Output layer activation function: Purelin
Number of data used for training: 130
Number of data used for testing: 44
Number of max iterations: 200
c1 and c2 in Eq. (8): 2
Number of particles: 22

5. Results and discussion

As mentioned, ANNs were applied to construct reliable paradigms to predict the equilibrium water dew point temperature (Td). They were supplied with the contactor temperature (T) and TEG concentration (wt%) data as input variables.

The whole database was split into two divisions by a random number generator: the first, which is used in the training process, includes 75% of the entire database and is equivalent to 130 data lines. The remaining points were saved for validating and testing the trained networks; this data set consists of 44 samples. It should be mentioned that the first assortment is the training data bank, which is employed for optimizing the network biases and weights, whereas the testing assortment affords a wholly autonomous assessment of network integrity.

The number of hidden neurons has a critical impact on the estimation integrity and precision. Many sources (for example Ref. [58]) have claimed that a feed-forward network with one hidden layer and enough neurons in the hidden layer can fit any finite input–output mapping problem. In this respect, herein, networks with one hidden layer and various numbers of hidden neurons were examined. The number of neurons throughout the hidden layer illustrates the

Fig. 8. Regression plots of the BP-ANN model for: (a) training data set (best linear fit y = 0.5066x + 7.7335, R² = 0.9679) and (b) testing data set (y = 0.4954x + 8.8406, R² = 0.9751).

Fig. 9. Regression plots of the PSO-ANN model for: (a) training data set (best linear fit y = 0.9944x + 0.1149, R² = 0.998) and (b) testing data set (y = 0.9987x + 0.1402, R² = 0.9996).
complication of the network; although the more complex networks are effective in estimation within the restrictions of the data bank employed for their training, they suffer from an absence of adequate generalization. Specification of the number of neurons in the hidden layer was performed on the basis of a trial and error approach. Fig. 5a shows the change of R² versus the number of hidden neurons throughout the hidden layer. As demonstrated in Fig. 5a, it is observable that raising the number of hidden neurons from 1 to 7 improved the coefficient of determination; conversely, no improvement followed from an additional rise from 7 to 10. Fig. 5b shows the influence of the number of neurons on the MSE. According to Fig. 5a and b, the highest R² is observed and the MSE reaches its minimum when 7 neurons are employed in the hidden layer. Therefore, a three-layer network with a 2 (input units):7 (neurons in hidden layer):1 (output neuron) architecture is the most appropriate. The details of the PSO-optimized network used in this study to predict the equilibrium water dew point temperature are given in Table 1.

With the purpose of gauging the effectiveness of the PSO-ANN approach, a BP-ANN scheme was performed with the same data banks utilized in the PSO-ANN approach. The PSO-optimized network was trained over 50 generations and compared with a BP training algorithm. For the BP training algorithm, the values of the momentum correction factor and learning coefficient were assigned to 0.001 and 0.7, correspondingly.

As can be seen in Figs. 6 and 7, a comparison between predicted and actual equilibrium water dew point during the testing and training steps for both the hybrid PSO-ANN and common BP-ANN approaches is executed. As shown in Fig. 7, there are no major differences between the outputs of the PSO-optimized network and the reference values of equilibrium water dew point. It is clear that the PSO-ANN approach depicts a higher integrity in the estimation of the equilibrium water dew point temperature compared with BP-ANN, with lower MSE for the training and test sets: 43.935 and 13.472, in contrast to 551.13 and 527.098 for BP-ANN, respectively.

The performance of the networks trained with PSO and conventional BP can also be evaluated by conducting a regression analysis between the model outcomes and the relevant target. The cross plots of actual equilibrium water dew point versus predicted values for the training and testing data sets using the PSO-ANN and BP-ANN approaches are depicted in Figs. 8 and 9. It can be seen that the fitting obtained by PSO-ANN is excellent, since the regression line (the best linear fit) overlaps with the diagonal (perfect fit) as a result of a slope value close to 1 and a minor value of the y-intercept (see Fig. 9) [59]. The training and testing correlation coefficients (R²) of PSO-ANN were found to be greater than 0.99, while those of the BP-ANN model are not as favorable. This means that the proposed hybrid PSO-ANN model has been well

100 where K denotes the number of training or testing samples,


Y exp ; Y pre ; and Y exp are the experimental response, predicted
Mean Square Error ( MSE)

90 l l l
80 response, and the mean of experimental response respectively.
70 Fig. 10 shows the performance plots for training, validation,
60 test data subset, and best models introduced for predicting equi-
50 librium water dew point. The performance plot shows the value
40 of the performance function (MSE) against number of epochs.
30 As can be seen, the validation and test data sets had similar
20 trends; thus, PSO-ANN can estimate an unobserved data
10
assortment accompanied by the data assortment employed for
its validation [60].
0
1 11 21 31 41 51 61 Fig. 11 shows the scheme of actual data and % error between the
Epoch actual and estimated equilibrium water dew point temperatures
during the testing and training steps for both PSO-ANN and
950 BP-ANN approaches. As shown in Fig. 11a, poor results are
observed through BP-ANN model. However, the agreement
Mean Square Error (MSE)

900
850 between the actual equilibrium water dew point values and the
PSO-ANN predicted ones is acceptable. Considering the
800
performance of PSO-ANN globally, the effectiveness of the model
750
is obvious since the vast majority of the training and testing data
700 subsets falls in the region bordered by a relative deviation less than
650 20%. As a matter of fact, only for six data points the deviation
600 between experimental and estimated equilibrium water dew point
550 temperature was obtained to be P10% through the testing and
training development. According to Fig. 11b, relative deviations
500
1 11 21 31 41 51 61 located in the span 18.96% to 16.33%, the magnitude of minimum
Epoch relative deviation is 0.0334%, and the average magnitude of
deviation is 2.909%, while for the testing data banks the relative
Fig. 10. Performance plot of: (a) PSO-ANN model and (b) ANN model.
deviations located in the span 14.92% to 7.545%, the magnitude
of minimum relative deviation is 0.0099%, and the average
absolute deviations is 1.676%.
Fig. 11. Percent deviation between the actual and predicted equilibrium water dew points against actual data during the training and testing process: (a) BP-ANN model and (b) PSO-ANN model. (Axes: deviation (%) versus water dew point temperature.)

6. Conclusions

1. Based on the literature database, the feasibility of using an ANN scheme trained with a relatively new evolutionary algorithm, viz. PSO, to predict the equilibrium water dew point versus contactor temperature at different TEG concentrations was investigated. The proposed PSO-ANN approach proved highly reliable, with an MSE of 13.472 and an R2 of 0.998.
2. The use of PSO provided a more comprehensive search capability for choosing appropriate initial weights of the ANN.
3. To specify the optimal structure of the PSO-ANN approach, various three-layer feed-forward networks with different numbers of neurons in the hidden layer were tested. The tuning parameters of the proposed hybrid model (including the acceleration constants c1 and c2, the maximum number of iterations, the number of particles, and the time interval) were selected carefully.
4. According to the graphical representations together with the statistical error analysis, the optimized PSO-ANN scheme is considerably more accurate than the common back-propagation NN approach for equilibrium water dew point prediction, because back-propagation algorithms, unlike PSO, are prone to becoming trapped in, or oscillating around, a local minimum.
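Conclusions 2 and 4 rest on PSO's global search over the network's weight space. The following self-contained Python sketch illustrates the idea on a toy regression task; the network size, swarm parameters, and target function are illustrative choices, not the configuration tuned in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-input / 4-hidden / 1-output feed-forward net with tanh hidden layer.
n_hidden = 4
n_w = 3 * n_hidden + 1  # input weights + hidden biases + output weights + output bias

def forward(w, x):
    w1 = w[:n_hidden]
    b1 = w[n_hidden:2 * n_hidden]
    w2 = w[2 * n_hidden:3 * n_hidden]
    b2 = w[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

x = np.linspace(-1.0, 1.0, 40)
y = np.sin(2.0 * x)  # illustrative target, standing in for dew-point data

def mse(w):
    return float(np.mean((forward(w, x) - y) ** 2))

# Global-best PSO over the flattened weight vector.
n_part, iters, c1, c2, inertia = 30, 200, 2.0, 2.0, 0.7
pos = rng.uniform(-1.0, 1.0, (n_part, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
init_best = pbest_f.min()
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, n_w))
    # Velocity update: inertia + cognitive (pbest) + social (gbest) terms.
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

In a hybrid scheme of the kind the paper describes, the best weight vector found by PSO would then seed a gradient-based (back-propagation) refinement, so training starts from a promising region rather than a random initialization.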

trained and tested, and is superior to the BP-ANN model. The formula for the correlation coefficient is as follows:

R^2 = 1 - \frac{\sum_{l=1}^{K} \left(Y_l^{exp} - Y_l^{pre}\right)^2}{\sum_{l=1}^{K} \left(Y_l^{exp} - \bar{Y}^{exp}\right)^2}    (11)

Appendix A

This section provides some of the data used in this study. Table A1 reports the contactor temperature, the concentration of TEG, and the corresponding equilibrium water dew point temperature.
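Eq. (11) can be implemented directly; a minimal Python sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def r_squared(y_exp, y_pre):
    """Coefficient of determination per Eq. (11):
    R^2 = 1 - sum_l (Y_l^exp - Y_l^pre)^2 / sum_l (Y_l^exp - mean(Y^exp))^2."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_pre = np.asarray(y_pre, dtype=float)
    ss_res = np.sum((y_exp - y_pre) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_exp - np.mean(y_exp)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Perfect predictions give R^2 = 1; values close to 1 (such as the paper's
# reported 0.998) indicate close agreement with the experimental data.
print(r_squared([-30.0, -10.0, 20.0], [-30.0, -10.0, 20.0]))  # → 1.0
```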

Table A1
Data used in this study [1,6].

Contactor TEG Equilibrium water dew Contactor TEG Equilibrium water dew Contactor TEG Equilibrium water dew
T (°C) purity (%) point T (°C) T (°C) purity (%) point T (°C) T (°C) purity (%) point T (°C)
10 90 6 70 99.8 4.5 30 99.98 55
15 90 1 10 99.8 46.5 35 99.98 52.5
20 90 3 15 99.8 43.5 40 99.98 50
25 90 8.5 20 99.8 40 45 99.98 47.5
30 90 13 25 99.8 36.5 50 99.98 45
35 90 18 30 99.8 33.5 55 99.98 42.5
37 90 20 35 99.8 30 60 99.98 40
10 95 12 40 99.8 26.5 65 99.98 37.5
15 95 8 45 99.8 24 70 99.98 35
20 95 4 50 99.8 20.5 75 99.98 32.5
25 95 1 55 99.8 17 10 99.99 72
30 95 5 60 99.8 14 15 99.99 69
35 95 9.5 65 99.8 11 20 99.99 66.5
40 95 14 70 99.8 8.5 25 99.99 63.5
45 95 19 75 99.8 5.5 30 99.99 61.5
10 97 18 10 99.9 52.5 35 99.99 59
15 97 13.5 15 99.9 49.8 40 99.99 56.5
20 97 10 20 99.9 47 45 99.99 54
25 97 6 25 99.9 43.5 50 99.99 52
30 97 2 30 99.9 40.5 55 99.99 49
35 97 2 35 99.9 37.5 60 99.99 47
40 97 6 40 99.9 34 65 99.99 44
45 97 11.5 45 99.9 31.5 70 99.99 42
50 97 15 50 99.9 28 75 99.99 39.5
55 97 19.5 55 99.9 25 10 99.995 77
10 98 22 60 99.9 22.5 15 99.995 74
15 98 18 65 99.9 19.5 20 99.995 72
20 98 14.5 70 99.9 17 25 99.995 69
25 98 11 75 99.9 14 30 99.995 67
30 98 7 10 99.95 59 35 99.995 64.9
35 98 2.5 15 99.95 56 40 99.995 62.5
40 98 1.5 20 99.95 54 45 99.995 60
45 98 6 25 99.95 50 50 99.995 57.5
50 98 9.5 30 99.95 47.5 55 99.995 55
55 98 13.5 35 99.95 44 60 99.995 53
60 98 17.5 40 99.95 42 65 99.995 51
10 99 30 45 99.95 38.5 70 99.995 48
15 99 26.5 50 99.95 36 75 99.995 47
20 99 22.5 55 99.95 33.5 15 99.997 78
25 99 19 60 99.95 30 20 99.997 76
30 99 15 65 99.95 27.5 25 99.997 73
35 99 11 70 99.95 25 30 99.997 71.5
40 99 8 75 99.95 22.5 35 99.997 68
45 99 4 10 99.97 63 40 99.997 67
50 99 0.25 15 99.97 60 45 99.997 64
55 99 3.5 20 99.97 57.5 50 99.997 62
60 99 7.5 25 99.97 54.5 55 99.997 60
65 99 11.5 30 99.97 52 60 99.997 57.5
70 99 14.5 35 99.97 49 65 99.997 55
10 99.5 37.5 40 99.97 47 70 99.997 53
15 99.5 34 45 99.97 44.5 75 99.997 51.5
20 99.5 30 50 99.97 41
25 99.5 27 55 99.97 38.5
30 99.5 23 60 99.97 36
35 99.5 19.5 65 99.97 33
40 99.5 16.5 70 99.97 31
45 99.5 12.5 75 99.97 28
50 99.5 9 10 99.98 66.5
55 99.5 6 15 99.98 63.5
60 99.5 2.5 20 99.98 61
65 99.5 1 25 99.98 58

References

[1] Parrish WR, Won KW, Baltatu ME. Phase behavior of the triethylene glycol–water system and dehydration/regeneration design for extremely low dew point requirements. 65th GPA annual convention. San Antonio, TX; 1986.
[2] Townsend FM. Vapor–liquid equilibrium data for DEG and TEG–water–natural gas system. In: Gas conditioning conference. University of Oklahoma, Norman, OK; 1953.
[3] Scauzillo FR. Equilibrium ratios of water in the water–triethylene glycol–natural gas system. J Petrol Technol 1961;13:697–702.
[4] Worley S. Super dehydration with glycols. In: Gas conditioning conference. University of Oklahoma, Norman, OK; 1967.
[5] Rosman A. Water equilibrium in the dehydration of natural gas with triethylene glycol. SPE J 1973;13:297–306.
[6] Herskowitz M, Gottlieb M. Vapor–liquid equilibrium in aqueous solutions of various glycols and polyethylene glycols. 1. Triethylene glycol. J Chem Eng Data 1984;29:173–5.
[7] Won KW. Thermodynamic basis of the glycol dew-point chart and its application to dehydration. 73rd GPA annual convention. New Orleans, LA; 1994. p. 108–33.
[8] Gas Processors Suppliers Association. Engineering data book: FPS version, sections 16–26; 1998.
[9] Bahadori A, Vuthaluru HB. Rapid estimation of equilibrium water dew point of natural gas in TEG dehydration systems. J Nat Gas Sci Eng 2009;1:68–71.

[10] Twu CH, Tassone V, Sim WD, Watanasiri S. Advanced equation of state method for modeling TEG–water for glycol gas dehydration. Fluid Phase Equilib 2005;228–229:213–21.
[11] Twu CH, Sim WD, Tassone V. A versatile liquid activity model for SRK, PR and a new cubic equation-of-state TST. Fluid Phase Equilib 2002;194–197:385–99.
[12] Carrera G, Aires-de-Sousa J. Estimation of melting points of pyridinium bromide ionic liquids with decision trees and neural networks. Green Chem 2005;7:20–7.
[13] Chen H, Kim AS. Prediction of permeate flux decline in crossflow membrane filtration of colloidal suspension: a radial basis function neural network approach. Desalination 2006;192:415–28.
[14] Urata S, Takada A, Murata J, Hiaki T, Sekiya A. Prediction of vapor–liquid equilibrium for binary systems containing HFEs by using artificial neural network. Fluid Phase Equilib 2002;199:63–78.
[15] Mohanty S. Estimation of vapour liquid equilibria of binary systems, carbon dioxide–ethyl caproate, ethyl caprylate and ethyl caprate using artificial neural networks. Fluid Phase Equilib 2005;235:92–8.
[16] Mohanty S. Estimation of vapour liquid equilibria for the system carbon dioxide–difluoromethane using artificial neural networks. Int J Refrig 2006;29:243–9.
[17] Ghanadzadeh H, Ahmadifar H. Estimation of (vapour + liquid) equilibrium of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using an artificial neural network. J Chem Thermodyn 2008;40:1152–6.
[18] Ketabchi S, Ghanadzadeh H, Ghanadzadeh A, Fallahi S, Ganji M. Estimation of VLE of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using GMDH-type neural network. J Chem Thermodyn 2010;42:1352–5.
[19] Guimarães PRB, McGreavy C. Flow of information through an artificial neural network. Comput Chem Eng 19(Suppl. 1):741–6.
[20] Petersen R, Fredenslund A, Rasmussen P. Artificial neural networks as a predictive tool for vapor–liquid equilibrium. Comput Chem Eng 1994;18(Suppl. 1):63–7.
[21] Sharma R, Singhal D, Ghosh R, Dwivedi A. Potential applications of artificial neural networks to thermodynamics: vapor–liquid equilibrium predictions. Comput Chem Eng 1999;23:385–90.
[22] Lashkarbolooki M, Vaferi B, Shariati A, Zeinolabedini Hezave A. Investigating vapor–liquid equilibria of binary mixtures containing supercritical or near-critical carbon dioxide and a cyclic compound using cascade neural network. Fluid Phase Equilib 2013;343:24–9.
[23] Ganguly S. Prediction of VLE data using radial basis function network. Comput Chem Eng 2003;27:1445–54.
[24] Potukuchi S, Wexler AS. Predicting vapor pressures using neural networks. Atmos Environ 1997;31:741–53.
[25] Soleimani R, Shoushtari NA, Mirza B, Salahi A. Experimental investigation, modeling and optimization of membrane separation using artificial neural network and multi-objective optimization using genetic algorithm. Chem Eng Res Des 2013;91:883–903.
[26] Ebadi M, Ahmadi MA, Hikoei KF, Salari Z. Evolving genetic algorithm, fuzzy logic and Kalman filter for prediction of asphaltene precipitation due to natural depletion. Int J Comput Appl 2011;35(1):12–6.
[27] Zendehboudi S, Ahmadi MA, James L, Chatzis I. Prediction of condensate-to-gas ratio for retrograde gas condensate reservoirs using artificial neural network with particle swarm optimization. Energy Fuels 2012;26:3432–47.
[28] Ahmadi MA, Shadizadeh SR. New approach for prediction of asphaltene precipitation due to natural depletion by using evolutionary algorithm concept. Fuel 2012;102:716–23.
[29] Zendehboudi S, Ahmadi MA, Bahadori A, Shafiei A, Babadagli T. A developed smart technique to predict minimum miscible pressure—EOR implications. Can J Chem Eng 2013;91:1325–37.
[30] Ali Ahmadi M, Zendehboudi S, Lohi A, Elkamel A, Chatzis I. Reservoir permeability prediction by neural networks combined with hybrid genetic algorithm and particle swarm optimization. Geophys Prospect 2013;61:582–98.
[31] Ali Ahmadi M, Golshadi M. Neural network based swarm concept for prediction asphaltene precipitation due to natural depletion. J Pet Sci Eng 2012;98–99:40–9.
[32] Ahmadi MA. Neural network based unified particle swarm optimization for prediction of asphaltene precipitation. Fluid Phase Equilib 2012;314:46–51.
[33] Ebadi M, Ahmadi MA, Gerami S, Askarinezhad R. Application fuzzy decision tree analysis for prediction condensate gas ratio: case study. Int J Comput Appl 2012;39(8):23–8.
[34] Ebadi M, Ahmadi MA, Hikoei KF. Application of fuzzy decision tree analysis for prediction asphaltene precipitation due natural depletion; case study. Aust J Basic Appl Sci 2012;6(1):190–7.
[35] Ahmadi MA, Ebadi M, Shokrollahi A, Majidi SMJ. Evolving artificial neural network and imperialist competitive algorithm for prediction oil flow rate of the reservoir. Appl Soft Comput 2013;13:1085–98.
[36] Zendehboudi S, Ahmadi MA, Mohammadzadeh O, Bahadori A, Chatzis I. Thermodynamic investigation of asphaltene precipitation during primary oil production: laboratory and smart technique. Ind Eng Chem Res 2013;52:6009–31.
[37] Ahmadi M. Prediction of asphaltene precipitation using artificial neural network optimized by imperialist competitive algorithm. J Petrol Explor Prod Technol 2011;1:99–106.
[38] Fazeli H, Soleimani R, Ahmadi MA, Badrnezhad R, Mohammadi AH. Experimental study and modeling of ultrafiltration of refinery effluents. Energy Fuels 2013;27:3523–37.
[39] Ahmadi MA, Ebadi M, Hosseini SM. Prediction breakthrough time of water coning in the fractured reservoirs by implementing low parameter support vector machine approach. Fuel 2014;117:579–89.
[40] Ahmadi MA, Ebadi M. Evolving smart approach for determination dew point pressure of condensate gas reservoirs. Fuel 2014;117(Part B):1074–84.
[41] Reed R. Pruning algorithms – a survey. IEEE Trans Neural Netw 1993;4:740–7.
[42] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943;5:115–33.
[43] Scarselli F, Chung Tsoi A. Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results. Neural Networks 1998;11:15–37.
[44] Hagan MT, Demuth HB, Beale M. Neural network design. PWS Publishing Co.; 1996.
[45] Baughman DR, Liu YA. Neural networks in bioprocessing and chemical engineering. Academic Press; 1995.
[46] Freeman JA, Skapura DM. Neural networks: algorithms, applications, and programming techniques. Addison-Wesley; 1991.
[47] Haykin SS. Neural networks: a comprehensive foundation. Prentice Hall; 1999.
[48] Mehra P, Wah BW. Artificial neural networks: concepts and theory. IEEE Computer Soc. Press; 1992.
[49] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol. 4; 1995. p. 1942–8.
[50] Engelbrecht AP. Computational intelligence: an introduction. Wiley; 2007.
[51] Eberhart RC, Simpson PK, Dobbins R, Dobbins RW. Computational intelligence PC tools. AP Professional; 1996.
[52] Prata DM, Schwaab M, Lima EL, Pinto JC. Nonlinear dynamic data reconciliation and parameter estimation through particle swarm optimization: application for an industrial polypropylene reactor. Chem Eng Sci 2009;64:3953–67.
[53] Shi Y, Eberhart R. A modified particle swarm optimizer. In: The 1998 IEEE international conference on evolutionary computation proceedings, IEEE world congress on computational intelligence; 1998. p. 69–73.
[54] Sivanandam SN, Deepa SN. Introduction to genetic algorithms. Springer; 2007.
[55] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, MHS '95; 1995. p. 39–43.
[56] Kennedy J. The particle swarm: social adaptation of knowledge. In: IEEE international conference on evolutionary computation; 1997. p. 303–8.
[57] Geethanjali M, Raja Slochanal SM, Bhavani R. PSO trained ANN-based differential protection scheme for power transformers. Neurocomputing 2008;71:904–18.
[58] De Jesús O, Hagan MT. Backpropagation algorithms for a broad class of dynamic networks. IEEE Trans Neural Netw 2007;18:14–27.
[59] Sargolzaei J, Haghighi Asl M, Hedayati Moghaddam A. Membrane permeate flux and rejection factor prediction using intelligent systems. Desalination 2012;284:92–9.
[60] Ghandehari S, Montazer-Rahmati MM, Asghari M. A comparison between semi-theoretical and empirical modeling of cross-flow microfiltration using ANN. Desalination 2011;277:348–55.
