Fuel
journal homepage: www.elsevier.com/locate/fuel

article info

Article history:
Received 21 October 2008
Received in revised form 11 November 2009
Accepted 18 November 2009
Available online 6 December 2009

Keywords:
Macerals
Ultimate analysis
Proximate analysis
Multi-variate regression analysis (MVRA)
Artificial neural network (ANN)

abstract

Coal, a prime source of energy, needs in-depth study of its various parameters, such as proximate analysis, ultimate analysis and its biological constituents (macerals). These properties govern the rank and calorific value of the various coal varieties. Unlike the other two properties, determination of the macerals in coal requires sophisticated microscopic instrumentation and expertise. In the present paper, an attempt has been made to predict the concentration of macerals of Indian coals using an artificial neural network (ANN), incorporating the proximate and ultimate analysis of coal. To investigate the appropriateness of this approach, the ANN predictions are also compared with conventional multi-variate regression analysis (MVRA). For the prediction of maceral concentrations, data sets have been taken from different coalfields of India for training and testing of the network. The network was trained with 149 datasets over 700 epochs, and tested and validated with 18 datasets. It was found that the coefficient of determination between measured and predicted macerals was considerably higher, and the mean absolute percentage error considerably smaller, for ANN than for the MVRA prediction.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction

The development of any country is directly related to its per capita consumption of energy. Coal is one of the prime sources of energy in India, accounting for nearly 70% of the total commercial energy produced by the country [18]. Coal is mainly of two types, coking and non-coking. Indian coal belongs to two principal geological periods: the lower Gondwana coals of Permo-Carboniferous age, and the Tertiary coals of Eocene to Miocene age [19]. The majority of Indian coal is of the non-coking type and is available in many states, such as Jharkhand, Chhattisgarh, Orissa, Madhya Pradesh, Maharashtra and Andhra Pradesh. Tertiary lignite deposits are available in Tamil Nadu, Rajasthan, Gujarat, Assam and Jammu and Kashmir.

Coal is an extremely complex heterogeneous material that is difficult to characterize. Coal may be defined as an organic rock composed of an assembly of macerals, minerals and inorganic elements held molecularly by the organic matter. The elementary composition of coal is very simple: carbon (C), hydrogen (H) and oxygen (O) are the principal constituents, along with small amounts of nitrogen (N) and sulfur (S). Chemically, coal consists of a mixture of complex organic compounds along with small amounts of inorganic mineral matter and moisture. Physical characteristics of coal vary with the rank of the coal. Chemical composition of the coal is defined in terms of its proximate and ultimate (elemental) analysis [9]. Coal can be broadly classified into coking and non-coking varieties based on the degree of the coalification process. Coal is composed of a number of different organic entities called macerals. These macerals are of particular importance for estimating the degree of maturity or rank, coke quality and carbon matter, etc. [6].

Takahashi and Sasaki [23] proposed automatic analysis for the identification of macerals by measuring the reflectance at fixed intervals from the distribution pattern. Pearson [15] developed a method of probability analysis from vitrinite reflectance data to evaluate mixing technology and to monitor blend consistency to improve plant efficiencies for blended coking coal. Vasconcelos [24] investigated the spatial distribution of maceral group analyses of world coal, and proposed boundaries between the different categories based on VLI data for coal all around the world. By combining vitrinite reflectance (VR) and fluorescence alteration of multiple macerals (FAMM) analyses, Kalkreuth et al. [2] proposed a technique which provides insights into the organic chemical nature of vitrinites (i.e., perhydrous vs. orthohydrous vs. subhydrous compositions) in the Permian coal of the Parana Basin, Brazil. Parikh et al. [14] calculated the higher heating value of coal from proximate analysis. Balan and Gumrah [1] assessed the shrinkage-swelling properties in coal seams using rank-dependent physical coal properties. Ravi and Reddy [18] proposed a ranking of the coking and non-coking coals of India for industrial use, using fuzzy multi-attribute decision making.

* Corresponding author. Tel.: +91 294 2471 379; fax: +91 294 2471 056. E-mail address: mkhandelwal@mpuat.ac.in (M. Khandelwal).
0016-2361/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.fuel.2009.11.028
1102 M. Khandelwal, T.N. Singh / Fuel 89 (2010) 1101–1109
have many other different physical and chemical properties. Again, though, the properties of macerals change as a function of rank or maturation. They cannot be considered as a single molecular species with a well-defined chemical structure.

Since coal is opaque and friable, the preparation of thin and polished sections is comparatively difficult and time-consuming, and requires greater attention and skill. Also, it is generally not easy to make sections of high-rank coals.

3. Artificial neural network (ANN)

Artificial neural network (ANN) is a branch of 'Artificial Intelligence', alongside Case-Based Reasoning, Expert Systems, and Genetic Algorithms. Classical statistics, fuzzy logic and chaos theory are also considered related fields. The ANN is an information processing system simulating the structure and functions of the human brain. It attempts to imitate the way in which a human brain works in processes such as studying, memorizing, reasoning and inducing with a complex network, which is performed by extensively connecting various processing units. It is a highly interconnected structure that consists of many simple processing elements (called neurons) capable of performing massively parallel computation for data processing and knowledge representation. The paradigms in this field are based on direct modeling of the human neuronal system [5]. A neural network can be considered as an intelligent hub that is able to predict an output pattern when it recognizes a given input pattern. The neural network is first trained by processing a large number of input patterns and showing what output results from each input pattern. After proper training, the network is able to recognize similarities when presented with a new input pattern and to predict the output pattern.

Neural networks are able to detect similarities in inputs, even though a particular input may never have been seen previously. This property gives them excellent interpolation capabilities, especially when the input data are noisy (not exact). Neural networks may be used as a direct substitute for autocorrelation, multivariable regression, linear regression, trigonometric and other statistical analysis techniques. When data are analyzed using a neural network, it is possible to detect important predictive patterns that were not previously apparent to a non-expert. Thus, the neural network can act like an expert. A particular network can be defined using three fundamental components: transfer function, network architecture and learning law [20]. One has to define these components depending upon the problem to be solved.

4. Network training

A network first needs to be trained before interpreting new information. Several different algorithms are available for training of neural networks, but the back-propagation algorithm is the most versatile and robust technique, providing the most efficient learning procedure for multilayer neural networks. The fact that back-propagation algorithms are especially capable of solving prediction problems also makes them popular. The feed forward

Table 1
Input parameters for network and their range.

S. No.  Input parameter     Range
1.      % Moisture          0.6–37.3
2.      % Volatile matter   6.18–17.36
3.      % Ash               1.86–46.96
4.      % Carbon            43.37–88.7
5.      % Hydrogen          2.62–8.77
6.      % Oxygen            3.48–22.46
7.      % Sulfur            0.22–1.98

Table 2
Output parameters for network and their range.

S. No.  Output parameter  Range
1.      Vitrinite         0.93–92.15
2.      Liptinite         0.39–24.77
3.      Inertinite        1.4–53.26
[Fig. 1: feed forward network architecture, with input-layer nodes (% VM, % A, % C, % H, % O, % S, ...), an output layer (k) producing % V, % L and % I, and hidden-to-output weights w_jk.]
Table 3
Measured and predicted macerals by ANN and MVRA.

[Figure: measured vs. predicted vitrinite by ANN; fit y = 1.0631x − 3.969, R² = 0.9684.]
(forward pass). In this layer, the output is compared to the measured values (the "true" output). The difference or error between the two is processed back through the network (backward pass), updating the individual weights of the connections and the biases of the individual neurons. The input and output data are mostly represented as vectors called "training pairs". The process mentioned above is repeated for all the training pairs in the data set, until the network error converges to a threshold minimum defined by a corresponding cost function, usually the root mean squared error (RMS) or the summed squared error (SSE).

In Fig. 1 the jth neuron is connected with a number of inputs

Net_j = Σ_{i=1}^{n} x_i w_{ij} + θ_j

where

x_i = input units,
w_{ij} = weight on the connection of the ith input and the jth neuron,
θ_j = bias neuron (optional), and
n = number of input units.

So, the net output from the hidden layer is calculated using a logarithmic sigmoid function
[Figure: measured vs. predicted liptinite by ANN; fit y = 1.0327x − 1.0834, R² = 0.95.]

[Figure: measured vs. predicted inertinite by ANN; fit y = 1.0023x + 0.8762, R² = 0.9602.]
O_j = f(Net_j) = 1 / (1 + e^{−(Net_j + θ_j)})

The total input to the kth unit is

Net_k = Σ_{j=1}^{n} w_{jk} O_j + θ_k

where

θ_k = bias neuron, and
w_{jk} = weight between the jth neuron and the kth output.

So, the total output from the kth unit will be

O_k = f(Net_k)

In the learning process, the network is presented with a pair of patterns: an input pattern and a corresponding desired output pattern. The network computes its own output pattern using its (mostly incorrect) weights and thresholds. Now the actual output is compared with the desired output. Hence, the error at any output in layer k is

e_k = t_k − O_k

where

t_k = desired output, and
O_k = actual output.

The total error function is given by

E = 0.5 Σ_{k=1}^{n} (t_k − O_k)²

Training of the network is basically a process of arriving at an optimum weight space of the network. The descent down the error surface is made using the following rule:
[Figure: measured vs. predicted vitrinite by MVRA; fit y = 0.6311x + 19.397, R² = 0.6495.]

[Figure: measured vs. predicted liptinite by MVRA; fit y = 0.6822x + 3.9609, R² = 0.6668.]
∇W_{jk} = −η (∂E/∂W_{jk})

where

η is the learning rate parameter, and
E is the error function.

The update of weights for the (n + 1)th pattern is given as

W_{jk}(n + 1) = W_{jk}(n) + ∇W_{jk}(n)

Similar logic applies to the connections between the hidden and output layers [3]. This procedure is repeated with each pattern pair of the training exemplars assigned for training the network. Each pass through all the training patterns is called a cycle or epoch. The process is then repeated for as many epochs as needed, until the error falls within the user-specified goal. This quantity is the measure of how well the network has learned.

5. Dataset

Many researchers have done extensive work to determine the proximate and ultimate analysis vis-à-vis macerals of different coal seams in India. The ranges of values of the different input parameters have been taken from various published works [16,17,13,10,11,8,21,12]. The ranges of input and output parameters are given in Tables 1 and 2, respectively.

All the input and output parameters were scaled between 0 and 1. This was done to utilize the most sensitive part of the neuron; and since the output neurons, being sigmoid, can only give outputs between 0 and 1, scaling of the output parameters was necessary:

Scaled value = (max. value − unscaled value)/(max. value − min. value)
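The training scheme described above can be sketched in code. This is a minimal illustration, not the authors' original implementation: a 7-5-3 feed forward network with logistic-sigmoid units, trained by plain gradient-descent back-propagation on E = 0.5 Σ (t_k − O_k)², with the paper's min-max scaling shown as a helper. All data are random stand-ins, not the coal datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(x):
    # Logarithmic sigmoid transfer function.
    return 1.0 / (1.0 + np.exp(-x))

def scale(v, vmax, vmin):
    # Min-max scaling as given in the text: (max - v) / (max - min).
    return (vmax - v) / (vmax - vmin)

X = rng.random((149, 7))   # surrogate inputs (M, VM, A, C, H, O, S), already in [0, 1]
T = rng.random((149, 3))   # surrogate targets (V, L, I), already in [0, 1]

W1 = rng.normal(0.0, 0.5, (7, 5)); b1 = np.zeros(5)   # input -> hidden (w_ij, theta_j)
W2 = rng.normal(0.0, 0.5, (5, 3)); b2 = np.zeros(3)   # hidden -> output (w_jk, theta_k)

eta = 0.1                       # learning rate
for epoch in range(100):        # the paper trains for 700 epochs; fewer here for brevity
    for x, t in zip(X, T):
        h = logsig(x @ W1 + b1)   # forward pass: O_j = f(Net_j)
        o = logsig(h @ W2 + b2)   # forward pass: O_k = f(Net_k)
        e = t - o                 # e_k = t_k - O_k
        d_out = e * o * (1.0 - o)               # output-layer delta
        d_hid = (d_out @ W2.T) * h * (1.0 - h)  # hidden-layer delta (backward pass)
        # W(n+1) = W(n) - eta * dE/dW, with dE/dW = -delta * activation
        W2 += eta * np.outer(h, d_out); b2 += eta * d_out
        W1 += eta * np.outer(x, d_hid); b1 += eta * d_hid
```

Note that the bias enters once here, inside the net input, which is a common simplification of the printed sigmoid expression.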
[Figure: measured vs. predicted inertinite by MVRA; fit y = 0.6321x + 8.1413, R² = 0.5438.]
[Fig. 9: comparison of measured vitrinite with predicted vitrinite by ANN and MVRA across the 18 test samples.]
6. Network architecture

A feed forward network is adopted here, as this architecture is reported to be suitable for problems of this kind. Pattern matching is basically an input/output mapping problem: the closer the mapping, the better the performance of the network. Thus, based on the above discussion and the objective of the investigation under consideration, one network was designed to predict the three outputs.

The architecture of the network is tabulated below:

1. No. of input neurons      7
2. No. of output neurons     3
3. No. of hidden layers      1
4. No. of hidden neurons     5
5. No. of training epochs    700
6. No. of training datasets  48
7. No. of testing datasets   18
8. Error goal                0.005

7. Multi-variate regression analysis (MVRA)

Regression analysis models the relationship between independent variables and a dependent or criterion variable. The goal of regression analysis is to determine the values of the parameters of a function that cause the function to best fit a set of data observations provided. In linear regression, the function is a linear (straight-line) equation. When there is more than one independent variable, multi-variate regression analysis is used to get the best-fit equation. Multiple regression analysis solves the data sets by performing a least squares fit. It constructs and solves the simultaneous equations by forming the regression matrix, and solving for the coefficients using the backslash operator. The MVRA has been done with the same data sets and the same input parameters used in ANN.

The equation for prediction of vitrinite by MVRA is

vitrinite = 65.1470 + 0.131 M + 0.4832 VM − 0.8263 A − 0.1099 C − 2.4817 H − 0.0959 O + 6.1685 S

The equation for prediction of liptinite by MVRA is

liptinite = 5.8871 + 0.1413 M + 0.4528 VM + 0.4644 A − 0.0585 C − 2.8246 H − 0.2168 O + 3.5291 S
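The MVRA step above amounts to an ordinary least-squares fit with an intercept over the seven predictors (M, VM, A, C, H, O, S). A minimal sketch follows, with `np.linalg.lstsq` standing in for the "backslash operator" mentioned in the text; the data and coefficients are synthetic, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.1, 0.5, -0.8, -0.1, -2.5, -0.1, 6.2])  # made-up coefficients

X = rng.random((149, 7))      # columns: M, VM, A, C, H, O, S (synthetic)
y = 65.0 + X @ true_w         # exactly linear synthetic target

# Regression matrix with an intercept column, solved by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, weights = coef[0], coef[1:]
```

Because the synthetic target is exactly linear, the fit recovers the intercept and coefficients to numerical precision.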
[Fig. 10: comparison of measured liptinite with predicted liptinite by ANN and MVRA across the 18 test samples.]
[Fig. 11: comparison of measured inertinite with predicted inertinite by ANN and MVRA across the 18 test samples.]
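The two comparison statistics reported in this paper, mean absolute percentage error (MAPE) and the coefficient of determination (R²), can be computed as below. The R² shown uses the standard 1 − SS_res/SS_tot form, which is an assumption about the exact computation (the figures report R² from fitted regression lines); the sample values are made up.

```python
import numpy as np

def mape(measured, predicted):
    # Mean absolute percentage error, in percent.
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

def r_squared(measured, predicted):
    # Coefficient of determination, 1 - SS_res / SS_tot.
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

m = [50.0, 60.0, 70.0]   # illustrative measured maceral percentages
p = [55.0, 57.0, 70.0]   # illustrative predicted values
```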
The mean absolute percentage error (MAPE) for vitrinite, liptinite and inertinite were 5.39%, 9.80% and 6.99%, respectively, by the neural network method, whereas the MAPE calculated for vitrinite, liptinite and inertinite were 14.90%, 20.01% and 108.03%, respectively, by the MVRA method.

Figs. 9–11 illustrate the comparison between the measured and predicted values of vitrinite, liptinite and inertinite by ANN and MVRA. Here, Indian coal data have been analyzed with ANN, so it can be concluded that ANN can be successfully applied to Indian coals for the prediction of macerals.

10. Conclusions

In this study, it was shown that it is possible to predict the maceral contents of Indian coals from proximate and ultimate analysis using an artificial neural network. Using Bayesian regularization and an optimum number of neurons in the hidden layer, the mean absolute percentage errors (MAPE) for vitrinite, liptinite and inertinite were 5.39%, 9.80% and 6.99%, respectively, by ANN. The corresponding coefficients of determination were 0.9684, 0.95 and 0.9602, respectively. The prediction by MVRA shows very high errors: the coefficients of determination for vitrinite, liptinite and inertinite by MVRA were 0.6495, 0.6668 and 0.5438, respectively, and the MAPE values were 14.90%, 20.01% and 108.03%. Considering the complexity of the relationship among the inputs and outputs, the results obtained by ANN are highly encouraging and satisfactory. ANN could provide a back-up approach to laboratory methods of maceral analysis, and can serve as a useful supplement for verification and cross-checking of laboratory results.

References

[1] Balan HO, Gumrah F. Assessment of shrinkage-swelling influences in coal seams using rank-dependent physical coal properties. Int J Coal Geol 2009;77:203–13.
[2] Kalkreuth W, Sherwood N, Cioccari G, Correa da Silva, Silva M, Zhong N, et al. The application of FAMM (Fluorescence Alteration of Multiple Macerals) analyses for evaluating rank of Parana Basin coals, Brazil. Int J Coal Geol 2004;57:165–85.
[3] Khandelwal M. Application of neural network for the prediction of triaxial constants from uniaxial constants. M. Tech Thesis, Banaras Hindu University, Varanasi, India (unpublished); 2002.
[4] Khandelwal M, Singh TN. Study of macerals characteristics of Gondwana coal by physico-chemical properties – an intelligent approach. In: International conference on Globalcoal, New Delhi; 2005.
[5] Kosko B. Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence. New Delhi, India: Prentice-Hall of India; 1994. p. 12–7.
[6] Lynch LJ. Introductory perspective: characterization of sedimentary organic matter through true definition of macerals. In: Thomas CG, Strachan MG, editors. The effect of macerals on the utilization of coal and their significance in petroleum exploration: a symposium; 1989. p. 1–3.
[7] MacKay DJC. Bayesian interpolation. Neural Comput 1992;4:415–47.
[8] Misra BK, Singh BD. Susceptibility to spontaneous combustion of Indian coals and lignites: an organic petrographic autopsy. Int J Coal Geol 1994;25:265–86.
[9] Mohanty JK, Misra SK, Nayak BB. Sequential leaching of trace elements in coal: a case study from Talcher coal field, Orissa. J Geol Soc Ind 2001;58:441–7.
[10] Mukherjee AK, Alam MM, Ghose S. Gondwana coals of Bhutan Himalaya – occurrence, properties and petrographic characteristics. Int J Coal Geol 1988;9:287–304.
[11] Mukherjee AK, Alum MM, Mazumdar SK, Haque R, Gowrisankaran S. Physico-chemical properties and petrographic characteristics of the Kapurdi lignite deposit, Barmer Basin, Rajasthan, India. Int J Coal Geol 1992;21:31–44.
[12] Panigrahi DC, Sahu HB. Application of hierarchical clustering for classification of coal seams with respect to their proneness to spontaneous heating. Mining Technol (Trans Inst Min Metall A) 2004;113:97–106.
[13] Pareek HS. Chemico-petrographic studies of some lignite samples from Kalol oilfield, Cambay Basin, Gujarat, Western India. Int J Coal Geol 1983;3:183–204.
[14] Parikh J, Channiwala SA, Ghosal GK. A correlation for calculating HHV from proximate analysis of solid fuels. Fuel 2005;84:487–94.
[15] Pearson DE. Probability analysis of blended coking coals. Int J Coal Geol 1991;19:109–19.
[16] Rai KL. Lower Gondwana sedimentation in Pench and Kanhan valley coalfield, M.P. with special reference to environment and origin of coal deposits. Technical Monograph Series, Monograph, Pt. II, vol. 2. Dhanbad: Indian School of Mines; 1977. p. 27.
[17] Rai KL, Shukla RT. Depositional environment and origin of coal in Pench–Kanhan valley coalfield, M.P., India. In: Proceedings of the 4th international Gondwana symposium, vol. 1. Calcutta: Geological Survey of India; 1979. p. 256–74.
[18] Ravi V, Reddy PJ. Ranking of Indian coals via fuzzy multi attribute decision making. Fuzzy Sets Syst 1999;103:369–77.
[19] Sharma NL, Ram KSV. A handbook of introduction to the geology of coal and Indian coal fields. 2nd ed. Dhanbad, India: Indian School of Mines; 1966.
[20] Simpson PK. Artificial neural systems – foundations, paradigms, applications and implementations. New York: Pergamon Press; 1990.
[21] Singh MP, Misra AK. Source rock characteristics and maturation of Palaeogene coals, North East India. J Geol Soc Ind 2001;57:353–68.
[22] Singh TN, Kanchan R, Saigal K, Verma AK. Prediction of P-wave velocity and anisotropic properties of rock using Artificial Neural Networks technique. J Sci Ind Res 2004;63:32–8.
[23] Takahashi R, Sasaki M. Automatic macerals analysis of low-rank coal (brown coal). Int J Coal Geol 1989;14:103–18.
[24] Vasconcelos Lopo de Sousa. The petrographic composition of world coals: statistical results obtained from a literature survey with reference to coal type (maceral composition). Int J Coal Geol 1999;40:27–59.