
Fuel 89 (2010) 1101–1109

Contents lists available at ScienceDirect

Fuel
journal homepage: www.elsevier.com/locate/fuel

Prediction of macerals contents of Indian coals from proximate and ultimate analyses using artificial neural networks

Manoj Khandelwal a,*, T.N. Singh b

a Department of Mining Engineering, College of Technology and Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur 313 001, India
b Department of Earth Sciences, Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India

Article history: Received 21 October 2008; Received in revised form 11 November 2009; Accepted 18 November 2009; Available online 6 December 2009

Abstract

Coal, a prime source of energy, needs in-depth study of its various parameters, such as proximate analysis, ultimate analysis, and its biological constituents (macerals). These properties govern the rank and calorific value of various coal varieties. Unlike the other two properties mentioned above, determination of the macerals in coal requires sophisticated microscopic instrumentation and expertise. In the present paper, an attempt has been made to predict the concentration of macerals of Indian coals using an artificial neural network (ANN) by incorporating the proximate and ultimate analysis of coal. To investigate the appropriateness of this approach, the predictions by ANN are also compared with conventional multi-variate regression analysis (MVRA). For the prediction of macerals concentration, data sets have been taken from different coalfields of India for training and testing of the network. The network was trained by 149 datasets with 700 epochs, and tested and validated by 18 datasets. It was found that the coefficient of determination between measured and predicted macerals by ANN was considerably higher, and the mean absolute percentage error considerably smaller, than for the MVRA prediction.

Keywords: Macerals; Ultimate analysis; Proximate analysis; Multi-variate regression analysis (MVRA); Artificial neural network (ANN)

© 2009 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +91 294 2471 379; fax: +91 294 2471 056. E-mail address: mkhandelwal@mpuat.ac.in (M. Khandelwal).

doi:10.1016/j.fuel.2009.11.028

1. Introduction

The development of any country is directly related to its per capita consumption of energy. Coal is one of the prime sources of energy in India, and accounts for nearly 70% of the total commercial energy produced by the country [18]. Coal is mainly of two types, coking and non-coking. Indian coal belongs to two principal geological periods: the lower Gondwana coals of Permo-carboniferous age, and the tertiary coals of Eocene to Miocene age [19]. The majority of Indian coal is of the non-coking type and is available in many states, such as Jharkhand, Chhattisgarh, Orissa, Madhya Pradesh, Maharashtra, Andhra Pradesh, etc. Tertiary lignite deposits are available in Tamil Nadu, Rajasthan, Gujarat, Assam and Jammu-Kashmir.

Coal is an extremely complex heterogeneous material that is difficult to characterize. Coal may be defined as an organic rock composed of an assembly of macerals, minerals and inorganic elements held molecularly by the organic matter. The elementary composition of coal is very simple, carbon (C), hydrogen (H) and oxygen (O) being the principal constituents, along with small amounts of nitrogen (N) and sulfur (S). Chemically, coal consists of a mixture of complex organic compounds along with small amounts of inorganic mineral matter and moisture. Physical characteristics of coal vary with the rank of the coal. The chemical composition of the coal is defined in terms of its proximate and ultimate (elemental) analysis [9]. Coal can be broadly classified into coking and non-coking varieties based on the degree of the coalification process. Coal is composed of a number of different organic entities called macerals. These macerals are of particular importance for estimating the degree of maturity or rank, coke quality, carbon matter, etc. [6].

Takahashi and Sasaki [23] proposed an automatic analysis for the identification of macerals by measuring the reflectance at fixed intervals from the distribution pattern. Pearson [15] developed a method of probability analysis from vitrinite reflectance data to evaluate mixing technology and to monitor blend consistency, improving plant efficiencies for blended coking coal. Vasconcelos [24] investigated the spatial distribution of maceral group analyses of world coals, and proposed the boundaries between the different categories based on VLI data for coals from all around the world. By combining vitrinite reflectance (VR) and fluorescence alteration of multiple macerals (FAMM) analyses, Kalkreuth et al. [2] proposed a technique which provides insights into the organic chemical nature of vitrinites (i.e., perhydrous vs. orthohydrous vs. subhydrous compositions) in the Permian coal of the Parana Basin, Brazil. Parikh et al. [14] calculated the higher heating value of coal from proximate analysis. Balan and Gumrah [1] assessed the shrinkage-swelling properties in coal seams using rank-dependent physical coal properties.

Ravi and Reddy [18] proposed a ranking of coking and non-coking coals of India for industrial use, using a fuzzy multi-attribute decision-making (FMADM) model. They used proximate analysis to predict the fixed carbon, volatile matter, moisture content and ash content of Indian coal. They considered these parameters as fuzzy sets over the range of data, on the basis of the available literature. Singh et al. [22] predicted the Poisson's ratio of coal from strength properties of rock using a neural network.

Normally, determination of the proximate and ultimate analysis of coal is a simple and easy task compared to the identification and determination of the concentration of macerals by petrographic study, which is laborious and time-consuming and requires expertise to identify a particular maceral type, partly owing to the overlap of the reflectance of one maceral type with another. Hence, an attempt has been made here to determine the concentration of macerals from the ultimate and proximate analyses using an artificial neural network.

The artificial neural network (ANN) is a comparatively new branch of intelligence science, and has developed rapidly since the 1980s. Nowadays, ANN is considered to be one of the intelligent tools for understanding complex problems. A neural network has the ability to learn from patterns it has previously been acquainted with. Once the network has been trained with a sufficient number of sample data sets, it can make predictions, on the basis of its previous learning, about the output for a new input data set of similar pattern [4]. Due to its multidisciplinary nature, ANN is becoming popular among researchers, planners, designers, etc., as an effective tool for the accomplishment of their work. These applications demonstrate that ANNs are superior in solving problems in which many complex parameters influence the process and results, where the process and results are not fully understood, and where historical or experimental data are available. The prediction of macerals is also of this type.

2. Analyses of coal

The chemical composition of coal is determined either by the proximate analysis or by the ultimate analysis. The proximate analysis consists of parameters like moisture content (M), volatile matter content (VM), ash content (A) and fixed carbon (C). The first three parameters are determined by experimentation in the laboratory, and the fixed carbon is then calculated by difference using the following formula:

Fixed Carbon (C) = 100 - (Moisture + Volatile Matter + Ash)
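The two by-difference relations used in this section (fixed carbon from the proximate analysis and, below, oxygen from the ultimate analysis) can be sketched directly. The sample values are taken from Table 3 for illustration only; the tabulated analyses may be reported on different bases, and the nitrogen value in the oxygen example is a hypothetical placeholder, since nitrogen is not listed in Table 3.

```python
def fixed_carbon(moisture, volatile_matter, ash):
    # Fixed Carbon (C) = 100 - (Moisture + Volatile Matter + Ash)
    return 100.0 - (moisture + volatile_matter + ash)

def oxygen(carbon, hydrogen, nitrogen, sulfur):
    # Oxygen (O) = 100 - (Carbon + Hydrogen + Nitrogen + Sulfur)
    return 100.0 - (carbon + hydrogen + nitrogen + sulfur)

# Proximate analysis of sample 4 in Table 3: M = 5.2, VM = 26.2, A = 29.1
print(fixed_carbon(5.2, 26.2, 29.1))  # 100 - 60.5 = 39.5% fixed carbon
```

Both functions are simple closures of the mass balance to 100%; in practice the inputs must all be on the same analysis basis.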
The calorific value is calculated with the help of the above values. A general evaluation of the quality of coal can be made on the basis of the data furnished by the proximate analysis.

The ultimate analysis of coal involves the estimation of the proportions of carbon, hydrogen, oxygen, nitrogen and sulfur. The amounts of carbon, hydrogen, nitrogen and sulfur are determined directly, and that of oxygen is obtained by the following formula:

Oxygen (O) = 100 - (Carbon + Hydrogen + Nitrogen + Sulfur)

The presence of a high percentage of oxygen is most undesirable, as it not only reduces the heating value of the coal but also affects its coking property.

Coal quality is a function of three fundamental, independent factors:

1. Coal rank.
2. Organic petrology.
3. Inorganic petrology/geochemistry.

Coal quality can subsequently be expressed by a number of direct parameters, each largely independent of the other fundamental parameters:

1. Vitrinite reflectance.
2. Macerals: vitrinite, fusinite, etc.
3. Mineral matter, sulfur, trace elements.

Coal is composed of a number of distinct organic entities called macerals and lesser amounts of inorganic substances, the minerals. The inorganic constituents of coal can be expressed on the basis of parameters such as ash yield and sulfur content.

The organic constituents, the macerals, are, singly and particularly in combination, fundamental to many coal properties. Macerals are defined by both their color/reflectance and their morphology; the two cannot really be separated. It is not always easy to decide which category a maceral belongs to; even coal petrographers do not always agree. Macerals are phytogenetic organic substances, or optically homogeneous aggregates of phytogenetic substances, possessing distinctive chemical and physical properties. Macerals are the remains of plants and degraded plant materials, and have characteristic chemical and physical attributes.

On the microscopic level, coal is made up of organic grains called macerals. Coal petrographers separate the macerals into three maceral groups, each of which includes several maceral types. The groups are liptinite, vitrinite, and inertinite. The vitrinite group derives from coalified woody tissue, the liptinite group from the resinous and waxy parts of plants, and the inertinite group from charred and biochemically altered plant cell wall material. Identification of these macerals is mainly based on their form and optical properties (e.g. reflectance). Thus, the morphology reveals the genesis of the macerals, while their optical properties can be related, in principle, to their molecular and chemical constitution. Macerals are defined according to their grayness in reflected light: liptinites are dark gray, vitrinites are medium to light gray, and inertinites are white and can be very bright. Liptinites are composed of hydrogen-rich hydrocarbons derived from spores, pollens, cuticles, and resins in the original plant material. Vitrinites are composed of "gelified" wood, bark, and roots, and contain less hydrogen than liptinites. Inertinites are mainly oxidation products of other macerals and are consequently richer in carbon than liptinites or vitrinites. The inertinite group includes fusinite, most of which is fossil charcoal derived from ancient peat fires.

Thin sections are not often used in analyzing the types and amounts of macerals in coal, as it is very difficult to make thin sections of coal, and coal scientists need some way of rapidly preparing samples for routine examination. An easier technique is to work with polished sections of coal. The surface of many coals can be polished until it resembles a black mirror, and reflectance microscopy, the type of microscopy commonly used with opaque samples, is then used to examine the coal. The coal appears mostly in shades of grey; some macerals do show shades of red and orange, but not in the same sense as with thin sections.

From the viewpoint of coal petrology, the genesis of coal is primarily the genesis of macerals, microlithotypes, and lithotypes. Macerals are the most uniform microscopic constituents of coal and are comparable with the minerals present in other rock types. Microlithotypes are typical maceral associations that can be identified under the microscope, and lithotypes are layers of coal seams which can be distinguished with the naked eye.

Besides the parent plant material and the initial decomposition before and during the peat stage, the degree of coalification (rank) is decisive for the microscopic appearance of macerals. The essence of the petrographic approach to the study of coal composition is the idea that coal is composed of macerals, each having a distinct set of physical and chemical properties that control the behavior of the coal. Morphology and reflectance under incident light are the main features which distinguish the macerals and the maceral groups under the microscope.

In addition to their appearance, macerals have many other physical and chemical properties. Again, though, the properties of macerals change as a function of rank or maturation. They cannot be considered a single molecular species with a well-defined chemical structure.

Since coal is opaque and friable, the preparation of thin and polished sections is comparatively difficult and time-consuming, and requires great attention and skill. Also, it is generally not easy to make sections of high-rank coals.

3. Artificial neural network (ANN)

The artificial neural network (ANN) is a branch of artificial intelligence, alongside case-based reasoning, expert systems and genetic algorithms; classical statistics, fuzzy logic and chaos theory are also considered related fields. The ANN is an information processing system simulating the structure and functions of the human brain. It attempts to imitate the way in which a human brain works in processes such as studying, memorizing, reasoning and inducing, with a complex network formed by extensively connecting various processing units. It is a highly interconnected structure that consists of many simple processing elements (called neurons) capable of performing massively parallel computation for data processing and knowledge representation. The paradigms in this field are based on direct modeling of the human neuronal system [5]. A neural network can be considered an intelligent hub that is able to predict an output pattern when it recognizes a given input pattern. The neural network is first trained by processing a large number of input patterns and being shown what output results from each input pattern. After proper training, the network is able to recognize similarities when presented with a new input pattern, and to predict the output pattern.

Neural networks are able to detect similarities in inputs, even though a particular input may never have been seen before. This property gives them excellent interpolation capabilities, especially when the input data are noisy (not exact). Neural networks may be used as a direct substitute for autocorrelation, multivariable regression, linear regression, trigonometric and other statistical analysis techniques. When data are analyzed using a neural network, it is possible to detect important predictive patterns that were not previously apparent to a non-expert; thus, the neural network can act like an expert. A particular network can be defined using three fundamental components: transfer function, network architecture and learning law [20]. One has to define these components depending upon the problem to be solved.

4. Network training

A network first needs to be trained before interpreting new information. Several different algorithms are available for training neural networks, but the back-propagation algorithm is the most versatile and robust technique, providing the most efficient learning procedure for multilayer neural networks. The fact that back-propagation algorithms are especially capable of solving prediction problems also makes them popular.

Table 1
Input parameters for the network and their range.

  S. No.  Input parameter     Range
  1.      % Moisture          0.6-37.3
  2.      % Volatile matter   17.36-56.18
  3.      % Ash               1.86-46.96
  4.      % Carbon            43.37-88.7
  5.      % Hydrogen          2.62-8.77
  6.      % Oxygen            3.48-22.46
  7.      % Sulfur            0.22-1.98

Table 2
Output parameters for the network and their range.

  S. No.  Output parameter    Range
  1.      Vitrinite           0.93-92.15
  2.      Liptinite           0.39-24.77
  3.      Inertinite          1.4-53.26

Fig. 1. Suggested ANN network for the study: input layer (i) with inputs %M, %VM, %A, %C, %H, %O and %S, connected through weights w_ij to the hidden layer (j), and through weights w_jk to the output layer (k) with outputs %V, %L and %I.



The feed-forward back-propagation neural network (BPNN) always consists of at least three layers: an input layer, a hidden layer and an output layer. Each layer consists of a number of elementary processing units, called neurons, and each neuron is connected to the next layer through weights; i.e., neurons in the input layer send their outputs as inputs to the neurons in the hidden layer, and similarly between the hidden and output layers. The number of hidden layers and the number of neurons in the hidden layers change according to the problem to be solved. The numbers of input and output neurons are the same as the numbers of input and output variables.

To differentiate between the different processing units, values called biases are introduced in the transfer functions. These biases are referred to as the temperature of a neuron. Except for the input layer, all neurons in the back-propagation network are associated with a bias neuron and a transfer function. The bias is much like a weight, except that it has a constant input of 1, while the transfer function filters the summed signals received by the neuron. These transfer functions are designed to map a neuron's or layer's net output to its actual output, and they are simple step functions, either linear or non-linear.

Fig. 2. Performance of ANN while training.
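The neuron model just described (weighted sum of inputs plus a bias, filtered by a transfer function) can be sketched in a few lines; the log-sigmoid transfer function used here matches the one introduced later in Section 4, while the input and weight values are hypothetical illustration numbers, not the trained network's.

```python
import math

def logsig(x):
    # Log-sigmoid transfer function: maps any net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then the transfer function
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return logsig(net)

# Hypothetical scaled inputs and weights, for illustration only
x = [0.2, 0.7, 0.1]
w = [0.5, -0.3, 0.8]
print(neuron_output(x, w, bias=0.1))  # net = 0.07, output about 0.517
```

A full layer is just this computation repeated for each neuron with its own weight vector and bias.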

Table 3
Measured and predicted macerals by ANN and MVRA (for each maceral: measured / predicted by MVRA / predicted by ANN).

  S.No.  %M     %VM    %A     %C     %H    %O     %S    Vitrinite               Liptinite              Inertinite
  1.     37.3   32.32  3.4    71.3   5     22.46  0.26  83.96 / 62.05 / 88.26   5.24 / 5.13 / 6.01     6.1 / 17.50 / 5.81
  2.     10.2   24.8   27.6   43.37  2.62  20.12  0.22  45.8 / 43.82 / 43.15    19.83 / 17.85 / 20.15  21.3 / 24.79 / 22.85
  3.     2.1    26.5   29.5   54.34  3.53  9.18   0.35  40 / 40.40 / 38.06      20.71 / 17.98 / 18.95  20.9 / 23.53 / 15.75
  4.     5.2    26.2   29.1   49.88  3.01  13.3   0.26  48.6 / 41.82 / 50.95    18.46 / 18.61 / 16.31  18.8 / 24.17 / 22.94
  5.     1.7    26     32.9   52.89  3.62  8.78   0.37  36.5 / 37.39 / 38.26    12.86 / 19.26 / 11.57  27.1 / 23.55 / 25.84
  6.     11.4   28.2   18.9   48.82  2.64  21.5   0.25  40.2 / 52.21 / 39.54    17.94 / 14.95 / 16.56  35.4 / 23.71 / 39.24
  7.     5.5    50.26  2.74   73.77  4.68  18.34  0.81  39.8 / 71.41 / 37.81    13.4 / 12.04 / 11.94   30.84 / 12.49 / 34.15
  8.     5.64   40.5   3.2    78.11  5.85  12.33  0.75  53.3 / 63.15 / 50.31    9.7 / 5.39 / 7.94      15.84 / 15.04 / 14.92
  9.     5.14   48.12  2.87   68.53  4.04  18.15  0.89  68.46 / 72.89 / 65.18   11.46 / 13.52 / 10.23  10.45 / 12.58 / 13.24
  10.    4.35   47.95  1.87   69.05  4.86  20.98  0.73  70.56 / 70.18 / 64.05   7.06 / 9.34 / 7.86     17.5 / 12.55 / 16.86
  11.    4.37   43.71  2.68   69.79  5.09  21.45  0.93  90.5 / 68.00 / 85.96    8.1 / 7.71 / 7.15      1.4 / 15.41 / 3.51
  12.    4.15   50.94  2.85   76.79  5.73  13.05  0.99  92.15 / 70.15 / 101.26  7.1 / 10.85 / 7.18     1.8 / 8.14 / 2.94
  13.    4.73   51.86  3.15   80.77  5.99  9.18   1.52  74.45 / 72.98 / 70.65   20.49 / 13.23 / 18.85  3.48 / 6.23 / 5.12
  14.    4.97   56.18  3.79   68.35  5.11  19.49  1.98  84.86 / 79.96 / 94.26   14.62 / 18.12 / 12.47  5.48 / 4.21 / 6.14
  15.    5.08   38.51  16.63  78.85  5.29  13.21  0.75  56.75 / 52.24 / 55.64   12.43 / 11.99 / 10.64  16.25 / 22.39 / 18.97
  16.    4.25   26.34  46.96  74.22  4.93  18.39  0.82  25.56 / 22.53 / 25.23   24.77 / 20.86 / 27.15  33.38 / 39.81 / 35.48
  17.    4.59   32.48  31.05  75.29  5.02  15.67  0.81  38.61 / 38.55 / 34.53   18.94 / 16.54 / 19.86  25.86 / 30.33 / 25.64
  18.    4.85   31.56  35.85  76.89  4.97  16.95  0.76  30.86 / 33.69 / 30.75   20.84 / 17.98 / 22.25  30.59 / 33.94 / 29.57
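The agreement between the measured and predicted columns of Table 3 is summarized in the text by the coefficient of determination (R²) and the mean absolute percentage error. A small stdlib-only helper (hypothetical name) for R² is sketched below; note that the R² values quoted in the figure captions come from least-squares trend lines fitted to the scatter plots, which can differ slightly from this direct definition.

```python
def r_squared(measured, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_m = sum(measured) / len(measured)
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return 1.0 - ss_res / ss_tot

# Perfect agreement gives R^2 = 1
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 1.0
```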

Fig. 3. Measured vs. predicted vitrinite by ANN (trend line: y = 1.0631x - 3.969, R² = 0.9684).



The application of these transfer functions depends on the purpose of the neural network. The output layer produces the computed output vectors corresponding to the solution.

During training of the network, data are processed through the input layer to the hidden layer, until they reach the output layer (forward pass). In this layer, the output is compared to the measured values (the "true" output). The difference, or error, between the two is processed back through the network (backward pass), updating the individual weights of the connections and the biases of the individual neurons. The input and output data are mostly represented as vector pairs called "training pairs". The process mentioned above is repeated for all the training pairs in the data set, until the network error converges to a threshold minimum defined by a corresponding cost function, usually the root mean squared error (RMS) or the summed squared error (SSE).

In Fig. 1, the jth neuron is connected with a number of inputs

X_i = (x1, x2, x3, ..., xn)

The net input value in the hidden layer will be

Net_j = Σ_{i=1..n} x_i w_ij + θ_j

where

x_i = input units,
w_ij = weight on the connection of the ith input and the jth neuron,
θ_j = bias of the neuron (optional), and
n = number of input units.

Fig. 4. Measured vs. predicted liptinite by ANN (trend line: y = 1.0327x - 1.0834, R² = 0.95).

Fig. 5. Measured vs. predicted inertinite by ANN (trend line: y = 1.0023x + 0.8762, R² = 0.9602).



The net output from the hidden layer is then calculated using a logarithmic sigmoid function:

O_j = f(Net_j) = 1 / (1 + e^-(Net_j + θ_j))

The total input to the kth output unit is

Net_k = Σ_{j=1..n} w_jk O_j + θ_k

where

θ_k = bias of the neuron, and
w_jk = weight between the jth neuron and the kth output.

So, the total output from the kth unit will be

O_k = f(Net_k)

In the learning process, the network is presented with a pair of patterns: an input pattern and a corresponding desired output pattern. The network computes its own output pattern using its (mostly incorrect) weights and thresholds. The actual output is then compared with the desired output, so the error at any output in layer k is

e_k = t_k - O_k

where

t_k = desired output, and
O_k = actual output.

The total error function is given by

E = 0.5 Σ_{k=1..n} (t_k - O_k)²

Training of the network is basically a process of arriving at an optimum weight space for the network.

Fig. 6. Measured vs. predicted vitrinite by MVRA (trend line: y = 0.6311x + 19.397, R² = 0.6495).

Fig. 7. Measured vs. predicted liptinite by MVRA (trend line: y = 0.6822x + 3.9609, R² = 0.6668).



The descent down the error surface is made using the following rule:

ΔW_jk = -η (∂E/∂W_jk)

where

η is the learning rate parameter, and
E is the error function.

The update of the weights for the (n + 1)th pattern is given as:

W_jk(n + 1) = W_jk(n) + ΔW_jk(n)

Similar logic applies to the connections between the hidden and output layers [3]. This procedure is repeated with each pattern pair of the training exemplars assigned for training the network. Each pass through all the training patterns is called a cycle or epoch. The process is then repeated for as many epochs as needed, until the error falls within the user-specified goal. This quantity is the measure of how well the network has learned.

5. Dataset

Many researchers have done extensive work to determine the proximate and ultimate analysis vis-à-vis the macerals of different coal seams in India. The ranges of values of the different input parameters have been taken from various published works [16,17,13,10,11,8,21,12]. The ranges of the input and output parameters are given in Tables 1 and 2, respectively.

All the input and output parameters were scaled between 0 and 1. This was done to utilize the most sensitive part of the neuron; moreover, since the output neurons are sigmoid and can only give outputs between 0 and 1, scaling of the output parameters was necessary:

Scaled value = (max. value - unscaled value) / (max. value - min. value)

Fig. 8. Measured vs. predicted inertinite by MVRA (trend line: y = 0.6321x + 8.1413, R² = 0.5438).
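The error and gradient-descent relations of Section 4 (e_k = t_k - O_k, E = 0.5 Σ(t_k - O_k)², and ΔW = -η ∂E/∂W) can be sketched for a single weight; all numbers here are illustrative, not values from the trained network.

```python
def total_error(targets, outputs):
    # E = 0.5 * sum_k (t_k - O_k)^2
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))

def update_weight(w, grad, eta=0.1):
    # Gradient descent step: W(n+1) = W(n) - eta * dE/dW
    return w - eta * grad

print(total_error([1.0, 0.0], [0.8, 0.2]))  # 0.5 * (0.2^2 + 0.2^2) = 0.04
print(update_weight(0.5, grad=0.3))         # 0.5 - 0.1 * 0.3 = 0.47
```

One epoch consists of applying this error computation and weight update across every training pair in the data set.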

Fig. 9. Comparison of measured vitrinite with predicted vitrinite by ANN and MVRA (samples 1-18).

6. Network architecture

A feed-forward network is adopted here, as this architecture is reported to be suitable for problems based on pattern identification. Pattern matching is basically an input/output mapping problem; the closer the mapping, the better the performance of the network. Thus, based on the above discussion and the objective of the investigation under consideration, one network was designed to predict the three outputs.

The architecture of the network is tabulated below:

  1. No. of input neurons      7
  2. No. of output neurons     3
  3. No. of hidden layers      1
  4. No. of hidden neurons     5
  5. No. of training epochs    700
  6. No. of training datasets  149
  7. No. of testing datasets   18
  8. Error goal                0.005

7. Testing and validation of ANN model

To test and validate the ANN model, new data sets were chosen. These data were not used while training the network, which validates the use of the ANN in a more versatile way. The results are presented in this section to demonstrate the performance of the network. The mean absolute percentage error (MAPE) and the coefficient of determination between the predicted and observed values are taken as the performance measures. The prediction was based on the input data sets discussed above. The performance of the ANN during training is shown in Fig. 2. Observed and predicted values of vitrinite, liptinite and inertinite are given in Table 3. The coefficients of determination between the predicted and measured values were as high as 0.9684, 0.95 and 0.9602 for vitrinite, liptinite and inertinite, respectively (Figs. 3-5).

8. Multi-variate regression analysis (MVRA)

The purpose of multiple regression is to learn more about the relationship between several independent, or predictor, variables and a dependent, or criterion, variable. The goal of regression analysis is to determine the values of the parameters of a function that cause the function to best fit a given set of data observations. In linear regression, the function is a linear (straight-line) equation. When there is more than one independent variable, multi-variate regression analysis is used to obtain the best-fit equation. Multiple regression analysis solves the data sets by performing a least-squares fit: it constructs and solves the simultaneous equations by forming the regression matrix and solving for the coefficients using the backslash operator. The MVRA was performed on the same data sets and with the same input parameters as used in the ANN.

The equation for the prediction of vitrinite by MVRA is

vitrinite = 65.1470 + 0.131 M + 0.4832 VM - 0.8263 A - 0.1099 C - 2.4817 H - 0.0959 O + 6.1685 S

The equation for the prediction of liptinite by MVRA is

liptinite = 5.8871 + 0.1413 M + 0.4528 VM + 0.4644 A - 0.0585 C - 2.8246 H - 0.2168 O + 3.5291 S

The equation for the prediction of inertinite by MVRA is

inertinite = 7.9270 - 0.2431 M - 0.6604 VM + 0.2625 A + 0.5574 C - 2.6185 H + 0.5668 O - 1.0952 S

The coefficients of determination between the predicted and measured values of vitrinite, liptinite and inertinite by MVRA were 0.6495, 0.6668 and 0.5438, respectively (Figs. 6-8). Measured and predicted values of vitrinite, liptinite and inertinite by MVRA are given in Table 3.

9. Discussion

Training of the neural network was done using seven input parameters, one hidden layer with five hidden neurons and three output parameters. As Bayesian regularization [7] was used, there was no danger of over-fitting; hence, the network was trained with 700 training epochs. Figs. 3-5 show that the prediction of vitrinite, liptinite and inertinite by the neural network is very accurate and close to the measured values. The higher coefficient of determination values shown by ANN as compared to MVRA indicate the better prediction capability of ANN over MVRA.
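As a quick numerical cross-check of the regression route, the MVRA vitrinite equation reported in Section 8 can be evaluated for sample 2 of Table 3 (M = 10.2, VM = 24.8, A = 27.6, C = 43.37, H = 2.62, O = 20.12, S = 0.22); it reproduces the tabulated MVRA prediction of about 43.82. The function names are illustrative, and the MAPE helper simply follows the standard definition of the performance measure used in this section.

```python
def mvra_vitrinite(M, VM, A, C, H, O, S):
    # Vitrinite regression equation reported in Section 8
    return (65.1470 + 0.131 * M + 0.4832 * VM - 0.8263 * A
            - 0.1099 * C - 2.4817 * H - 0.0959 * O + 6.1685 * S)

def mape(measured, predicted):
    # Mean absolute percentage error
    return 100.0 * sum(abs((m - p) / m)
                       for m, p in zip(measured, predicted)) / len(measured)

# Sample 2 of Table 3
v = mvra_vitrinite(10.2, 24.8, 27.6, 43.37, 2.62, 20.12, 0.22)
print(round(v, 2))  # -> 43.82, matching the MVRA column of Table 3
```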

Fig. 10. Comparison of measured liptinite with predicted liptinite by ANN and MVRA (samples 1-18).

Fig. 11. Comparison of measured inertinite with predicted inertinite by ANN and MVRA (samples 1-18).

The mean absolute percentage errors (MAPE) for vitrinite, liptinite and inertinite were 5.39%, 9.80% and 6.99%, respectively, by the neural network method, whereas the MAPE calculated for vitrinite, liptinite and inertinite were 14.90%, 20.01% and 108.03%, respectively, by the MVRA method. Figs. 9-11 illustrate the comparison between the measured and predicted values of vitrinite, liptinite and inertinite by ANN and MVRA. Here, Indian coal data have been analyzed with ANN, so it can be concluded that ANN can be successfully applied to Indian coals for the prediction of macerals.

10. Conclusions

In this study, it was shown that it is possible to predict the macerals contents of Indian coals from proximate and ultimate analyses using an artificial neural network. Using Bayesian regularization and an optimum number of neurons in the hidden layer, the mean absolute percentage errors (MAPE) for vitrinite, liptinite and inertinite were 5.39%, 9.80% and 6.99%, respectively, by ANN. The corresponding coefficients of determination were 0.9684, 0.95 and 0.9602, respectively. The prediction by MVRA shows very high errors: the coefficients of determination for vitrinite, liptinite and inertinite by MVRA were 0.6495, 0.6668 and 0.5438, respectively, and the MAPE were 14.90%, 20.01% and 108.03%. Considering the complexity of the relationship among the inputs and outputs, the results obtained by ANN are highly encouraging and satisfactory. ANN could provide a back-up approach to laboratory methods of macerals analysis, and can be a useful supplement for verification and cross-checking of laboratory results.

References

[1] Balan HO, Gumrah F. Assessment of shrinkage-swelling influences in coal seams using rank-dependent physical coal properties. Int J Coal Geol 2009;77:203-13.
[2] Kalkreuth W, Sherwood N, Cioccari G, Correa da Silva, Silva M, Zhong N, et al. The application of FAMM (Fluorescence Alteration of Multiple Macerals) analyses for evaluating rank of Parana Basin coals, Brazil. Int J Coal Geol 2004;57:165-85.
[3] Khandelwal M. Application of neural network for the prediction of triaxial constants from uniaxial constants. M. Tech thesis, Banaras Hindu University, Varanasi, India (unpublished), 2002.
[4] Khandelwal M, Singh TN. Study of macerals characteristics of Gondwana coal by physico-chemical properties - an intelligent approach. In: International conference on Globalcoal, New Delhi, 2005.
[5] Kosko B. Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence. New Delhi, India: Prentice-Hall of India; 1994. p. 12-7.
[6] Lynch LJ. Introductory perspective: characterization of sedimentary organic matter through true definition of macerals. In: Thomas CG, Strachan MG, editors. The effect of macerals on the utilization of coal and their significance in petroleum exploration: a symposium, 1989. p. 1-3.
[7] MacKay DJC. Bayesian interpolation. Neural Comput 1992;4:415-47.
[8] Misra BK, Singh BD. Susceptibility to spontaneous combustion of Indian coals and lignites: an organic petrographic autopsy. Int J Coal Geol 1994;25:265-86.
[9] Mohanty JK, Misra SK, Nayak BB. Sequential leaching of trace elements in coal: a case study from Talcher coal field, Orissa. J Geol Soc Ind 2001;58:441-7.
[10] Mukherjee AK, Alam MM, Ghose S. Gondwana coals of Bhutan Himalaya - occurrence, properties and petrographic characteristics. Int J Coal Geol 1988;9:287-304.
[11] Mukherjee AK, Alum MM, Mazumdar SK, Haque R, Gowrisankaran S. Physico-chemical properties and petrographic characteristics of the Kapurdi lignite deposit, Barmer Basin, Rajasthan, India. Int J Coal Geol 1992;21:31-44.
[12] Panigrahi DC, Sahu HB. Application of hierarchical clustering for classification of coal seams with respect to their proneness to spontaneous heating. Mining Technol (Trans Inst Min Metall A) 2004;113:97-106.
[13] Pareek HS. Chemico-petrographic studies of some lignite samples from Kalol oilfield, Cambay Basin, Gujarat, Western India. Int J Coal Geol 1983;3:183-204.
[14] Parikh J, Channiwala SA, Ghosal GK. A correlation for calculating HHV from proximate analysis of solid fuels. Fuel 2005;84:487-94.
[15] Pearson DE. Probability analysis of blended coking coals. Int J Coal Geol 1991;19:109-19.
[16] Rai KL. Lower Gondwana sedimentation in Pench and Kanhan valley coalfield, M.P., with special reference to environment and origin of coal deposits. Technical Monograph Series, Monograph, Pt. II, vol. 2. Indian School of Mines, Dhanbad, 1977. p. 27.
[17] Rai KL, Shukla RT. Depositional environment and origin of coal in Pench-Kanhan valley coalfield, M.P., India. In: Proceedings of the 4th international Gondwana symposium, vol. 1. Geol Surv India, Calcutta, 1979. p. 256-74.
[18] Ravi V, Reddy PJ. Ranking of Indian coals via fuzzy multi attribute decision making. Fuzzy Sets Syst 1999;103:369-77.
[19] Sharma NL, Ram KSV. A handbook of introduction to the geology of coal and Indian coal fields. 2nd ed. Dhanbad, India: Indian School of Mines; 1966.
[20] Simpson PK. Artificial neural systems - foundations, paradigms, applications and implementations. New York: Pergamon Press; 1990.
[21] Singh MP, Misra AK. Source rock characteristics and maturation of Palaeogene coals, North east India. J Geol Soc Ind 2001;57:353-68.
[22] Singh TN, Kanchan R, Saigal K, Verma AK. Prediction of P-wave velocity and anisotropic properties of rock using Artificial Neural Networks technique. J Sci Ind Res 2004;63:32-8.
[23] Takahashi R, Sasaki M. Automatic macerals analysis of low-rank coal (brown coal). Int J Coal Geol 1989;14:103-18.
[24] Vasconcelos L de S. The petrographic composition of world coals: statistical results obtained from a literature survey with reference to coal type (maceral composition). Int J Coal Geol 1999;40:27-59.
