Prediction of Uniaxial Compressive Strength of Different Quarried Rocks Using Metaheuristic Algorithm
https://doi.org/10.1007/s13369-019-04046-8
Received: 16 March 2019 / Accepted: 22 July 2019 / Published online: 31 July 2019
© King Fahd University of Petroleum & Minerals 2019
Abstract
The direct measurement of uniaxial compressive strength (UCS), one of the most important rock engineering parameters, is destructive, cumbersome, difficult and costly. Therefore, predicting this parameter using simpler, cheaper indirect methods is of interest. In this paper, the UCS was predicted using a developed hybrid intelligent model: a generalized feedforward neural network (GFFN) incorporated with the imperialist competitive algorithm (ICA). To find the optimum model, 197 datasets including rock class, density, porosity, P-wave velocity, point load index and water absorption, compiled from quarries across almost all of Iran, were used. The efficiency and performance of the GFFN and hybrid ICA-GFFN models, assessed with different error criteria and established confusion matrices, were compared to multilayer perceptron (MLP) and radial basis function (RBF) neural network models as well as to multivariate regression. The hybrid ICA-GFFN, with 11.37%, 14.27% and 22.74% improvement in correct classification rate over GFFN, RBF and MLP, respectively, demonstrated a superior predictability level. The results indicated that the developed ICA-GFFN model, as a feasible and sufficiently accurate tool, can effectively be applied for UCS prediction. Using the sensitivity analyses, the P-wave velocity and rock class were identified as the most and least influential factors on predicted UCS.
Keywords Quarries · Hybrid intelligence model · Confusion matrix · Sensitivity analysis · Uniaxial compressive strength
Arabian Journal for Science and Engineering (2019) 44:8645–8659
these correlations, due to their dependency on and variation with rock type, are not precise enough [11]. Moreover, the identified drawbacks of statistical methods regarding the effectiveness of auxiliary factors, the uncertainty of experimental tests and inaccurate prediction over a widely expanded range of data should also be considered [12]. Given these demerits, adopting other alternatives to overcome the associated problems in developing UCS predictive models is necessary. In such cases, artificial intelligence (AI) techniques have proven to be very beneficial tools for handling uncertainty and insufficient data in modeling material behavior from experimental data.

In recent years, a significant degree of success of soft computing- and artificial intelligence (AI)-aided techniques over conventional statistical methods in producing more efficient UCS predictive models for rock engineering applications has been demonstrated. Alvarez Grima and Babuska [11] showed that a fuzzy model can estimate the UCS much better than multiple regression analysis techniques. Several researchers, such as Meulenkamp and Alveraz Grima [13], Singh et al. [14], Dehghan et al. [15], Cevik et al. [16] and Ceryan et al. [17], used ANNs and different regression models to predict the UCS of sandstone, schistose, travertine, carbonate and clay-bearing rocks. They highlighted that ANN-based models provide more accurate predictions than conventional statistical techniques. The results of Gultekin et al. [18] in estimating UCS using MR, ANN and ANFIS methods, subjected to three different models and five datasets, showed higher predictability for ANFIS. Further, comparisons of predicted UCS from different soft computing methods revealed better performance of ANFIS than the others [19, 20]. Singh et al. [21] compared generalized regression neural network and ANFIS models and indicated that the network predicts the UCS more accurately than ANFIS. It has also been confirmed that the predictability of an AI model integrated with a metaheuristic optimization algorithm can be improved [22, 23]. Momeni et al. [24] found that the predictability level of UCS using an ANN integrated with particle swarm optimization can be improved. The hybrid ICA-ANN models developed by Taghavifar et al. [25] and Jahed Armaghani et al. [23] showed successful ANN optimization by ICA, producing more precise predictions than the conventional ANN technique.

The imperialist competitive algorithm (ICA) [26], one of the developed metaheuristics inspired by sociopolitical behaviors, is a global-search population-based algorithm and a component of swarm intelligence techniques which can provide evolutionary computation without requiring the gradient of the function in its optimization process.

In the current study, a hybrid model for the prediction of UCS using an optimized generalized feedforward neural network (GFFN) incorporated with ICA was developed. To predict the UCS, four models—MLP, RBF, GFFN and ICA-GFFN—were run using 197 datasets of different building stones including rock class, porosity (n), density (γ), water absorption (w), point load index (Is) and P-wave velocity (Vp), acquired from almost all quarry locations of Iran. The evaluated performance of the models using different criteria, as well as the reduction in classification error from 30.7% to 10.3%, showed a superior predictability level of ICA-GFFN over the other neural network models used. Moreover, two different types of sensitivity analyses were carried out and the most and least effective factors on predicted UCS were recognized.

2 Basic Concept of ICA

ICA is a metaheuristic optimization algorithm based on the sociopolitical process of imperialistic competition [26]. This population-based algorithm was initially dedicated to continuous optimization problems and was then applied to find the global optima and improve convergence rates in many complex discrete combinatorial problems (e.g., [22, 27, 28]). Due to the well-recognized mathematical formulation of ICA (e.g., [22, 26, 27, 29]), this section merely aims to provide a brief explanation of its theoretical concept.

As presented in Table 1 and Fig. 1, the algorithm starts with an initial population (countries, Ncou) in which some of the best countries with lower cost are selected to be the imperialists (Nimp) and the remainder are divided among the imperialists as colonies (Ncol). These imperialistic empires then begin to compete with each other and attract the colonies of the weakest empires based on their power, which is a function of the minimum system error. Therefore, the colonies are moved toward an imperialist peak or new minimum area (assimilation process) to improve their situations and find better solutions (Fig. 1a). The power of countries and the corresponding counterpart of fitness values are inversely proportional to their cost; the more powerful empires possess more colonies, and vice versa (Fig. 1b).

Table 1 Essential steps for successful ICA process

Step | Processing paradigm
1 | Generating the initial empires
2 | Moving the colonies of an empire toward the imperialist
3 | Revolution
4 | Exchanging positions of the imperialist and colony
5 | Total power of an empire
6 | Imperialistic competition
7 | Elimination of empires
8 | Convergence
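The assimilation, position-exchange and competition steps in Table 1 can be sketched in a few lines of Python. This is a minimal illustration under simplified assumptions (the revolution and empire-elimination steps are omitted, and the function and parameter names are ours), not the authors' MATLAB implementation:

```python
import numpy as np

def ica_minimize(cost, dim, n_cou=50, n_imp=5, beta=2.0, zeta=0.02,
                 n_iter=200, bounds=(-10.0, 10.0), seed=0):
    """Minimal ICA sketch: cost maps a 1-D array to a scalar to minimize."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_cou, dim))
    costs = np.array([cost(p) for p in pop])

    # Step 1: the best countries become imperialists, the rest are colonies
    order = np.argsort(costs)
    imps = [pop[i].copy() for i in order[:n_imp]]
    colonies = [[] for _ in range(n_imp)]
    for j, i in enumerate(order[n_imp:]):
        colonies[j % n_imp].append(pop[i].copy())

    for _ in range(n_iter):
        for e in range(n_imp):
            for k, col in enumerate(colonies[e]):
                # Step 2 (assimilation): move colony toward its imperialist
                step = beta * rng.random(dim) * (imps[e] - col)
                col = np.clip(col + step, lo, hi)
                colonies[e][k] = col
                # Step 4: exchange positions if the colony beats the imperialist
                if cost(col) < cost(imps[e]):
                    imps[e], colonies[e][k] = col.copy(), imps[e].copy()
        # Steps 5-6: total empire power, then imperialistic competition:
        # the weakest empire gives up one colony to the strongest
        totals = [cost(imps[e]) + zeta * np.mean([cost(c) for c in colonies[e]])
                  if colonies[e] else cost(imps[e]) for e in range(n_imp)]
        weakest, strongest = int(np.argmax(totals)), int(np.argmin(totals))
        if colonies[weakest] and weakest != strongest:
            colonies[strongest].append(colonies[weakest].pop())

    best = min(imps, key=cost)
    return best, cost(best)
```

For example, minimizing the sphere function `lambda p: float((p ** 2).sum())` in two dimensions drives the best imperialist close to the origin.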
Fig. 1 Schematic of the execution process in the ICA optimization procedure: a assimilation process of a colony toward an imperialist, b forming initial empires to gain colonies, c movement of a colony toward the relevant imperialist with/without randomized deviated direction, d performance of the revolution operation to jump from a local optimum, e competition process among empires and f elimination of the weakest empire(s)
$$O_k = f\left(\sum_{j=1}^{m} w_{jk}\, O_j\right), \quad (7)$$

where $x_i$ is the ith input and $b_i$ and $f$ denote the bias and nonlinear activation function, respectively. $W$ and $X$ express the vectors of weights and inputs. The $b_i$ as a type of connection
Fig. 2 Distribution of active quarries in Iran (www.ime.org.ir)
Table 3 Descriptive statistics of acquired datasets

Variable | Type | Mean | Mean standard error | SD | Min | Median | Max | Skewness
Rock class | Input | 1.51 | 0.061 | 0.85 | 1.00 | 1.00 | 4.00 | 1.72
γ (g/cm³) | Input | 2.59 | 0.012 | 0.17 | 2.18 | 2.58 | 3.06 | 0.47
n (%) | Input | 6.63 | 0.45 | 6.34 | 0.15 | 4.50 | 31.4 | 1.03
Vp (km/s) | Input | 4.88 | 0.06 | 0.92 | 2.40 | 5.09 | 6.82 | −0.74
w (%) | Input | 3.06 | 0.29 | 4.14 | 0.070 | 1.38 | 16.16 | 1.78
Is (MPa) | Input | 4.29 | 0.22 | 3.08 | 0.30 | 3.32 | 15.12 | 1.49
UCS (MPa) | Output | 57.35 | 2.76 | 38.69 | 4.75 | 44.20 | 193.00 | 0.87

(Interval plots of the mean and median for Is, Vp, n, γ and w are not reproduced here.)
weight with a constant input of 1 is set up in all the neurons of the back-propagation and transfer functions except for the input layer. The network error ($E$) of the kth output neuron and the corresponding root mean square error (RMSE) are defined using the actual and predicted values ($x_k$ and $y_k$) as:

$$E = \frac{1}{2}\sum_{k=1}^{n}\left(x_k - y_k\right)^2 \quad (8)$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{k=1}^{n}\left(x_k - y_k\right)^2}{n}}. \quad (9)$$

To reduce the error between the desired and actual outputs, the weights are optimized using an updating procedure for the (n + 1)th pattern subjected to:

$$\Delta w_{i,k} = -\eta\, \frac{\partial E(W)}{\partial w_{i,k}} \quad (10)$$

$$w_{i,k}(n+1) = w_{i,k}(n) + \Delta w_{i,k}(n), \quad (11)$$

where η is the learning rate.

The GFFN (Fig. 4) is a subclass of MLPs in which the perceptrons of the hidden layer are replaced with generalized shunting inhibitory neurons (GSN) to provide correct decisions [40]. The GSN enable connections to jump over one or more layers, which allows neurons to operate as adaptive nonlinear filters and also supports higher computational power as well as more freedom in selecting the optimum topology (e.g., [41–46]).

The RBFs as a hybrid network use nonlinear Gaussian activation transfer functions and tend to learn much faster than MLPs. Each input $x_i$ at each hidden neuron $j$ is weighted by $w^h$ as:

$$S_j = \left[x_1 w^h_{1,j},\; x_2 w^h_{2,j},\; \ldots,\; x_n w^h_{n,j},\; \ldots,\; x_N w^h_{N,j}\right], \quad (12)$$

where $x_n$ is the nth input and $w^h_{n,j}$ is the weight between input $n$ and hidden neuron $j$. Accordingly, the network output ($O_m$) is calculated by:

$$O_m = \sum_{j=1}^{J} \exp\!\left(-\frac{\left(S_j - c_j\right)^2}{\sigma_j^2}\right) w^o_{j,m} + w^o_{0,m}, \quad (13)$$
where the activation function $\varphi_j(\cdot)$ for hidden neuron $j$ is normally chosen as a Gaussian function; $c_j$ and $\sigma_j$ denote the center and width of the jth hidden neuron, respectively; $w^o_{j,m}$ expresses the output weight between hidden neuron $j$ and output neuron $m$; and $w^o_{0,m}$ is the bias weight of output neuron $m$.

5 Assessment of Optimum Hybrid ICA-GFFN Predictive Model

Selecting the appropriate training algorithm and network size, given that no exact or unified method exists, is the most important and critical task in ANN design [45, 47]. In this study, the optimum GFFN structure and the corresponding adjusted internal characteristics (e.g., training algorithm, activation function, number of neurons, learning rate) were found through an integrated iterative trial–error with constructive technique (Fig. 5). To find the appropriate training algorithm, quick propagation (QP), momentum (MO), quasi-Newton (QN) and Levenberg–Marquardt (L–M) were examined. The QP is one of the most popular back-propagation training algorithms, with appropriate results in many problems [48]. The L–M [49, 50] is an advanced and fast nonlinear optimization algorithm with the ability to solve generic curve-fitting problems. Its memory requirements are proportional to the square of the number of weights in the network, and thus the L–M can only be used for small, single-output networks containing a few hundred weights. Moreover, the L–M is specifically designed to minimize the sum of squared errors and hence cannot be used for other types of network error. The QN [51] as a training algorithm avoids the need to store the computed Hessian matrix during each iteration and thus requires less memory and can be used for bigger networks. The MO [52] is a well-known standard algorithm designed to overcome some of the problems associated with the standard back-propagation training algorithm and is used to speed up convergence and maintain generalization performance. The MO is a locally adaptive approach in which each weight remembers its most recent update, and thus each weight is able to update independently of the other weights [52, 53].

The activation transfer function for the hidden and output layers was selected between logistic (Log) and hyperbolic tangent (HyT). Furthermore, the sum of squares and cross-entropy were also employed as output error functions. Using the two embedded internal loops in the defined procedure (Fig. 5), numerous topologies, even with similar architecture but different internal characteristics, were generated. The process of examining such diverse structures was carried out to prevent overfitting and to escape from trapping in local minima. The RMSE and iteration number were the organized termination criteria: the first priority is to satisfy the RMSE and, if this is not achieved, the number of iterations (set to 1000 in this study) is considered. Accordingly, the best RMSE and network correlation (R²) of each tested structure after three runs were considered. The learning rate of all implemented algorithms was 0.7, whereas the step sizes of the hidden layers were changed from 1.0 to 0.001. The results of the executed process with QP, MO, QN and L–M training algorithms subjected to Log and HyT activation functions (Fig. 6; Table 4) showed that the minimum RMSE occurred at 9–12 neurons. Testing different architectures using 9 and 11 neurons for the MO and L–M training algorithms demonstrated that the 6-4-5-1 and 6-5-6-1 topologies can be selected as candidates for the optimum model, of which the 6-4-5-1 structure subjected to MO with HyT activation was chosen due to its smaller number of neurons (Table 5).

Fig. 5 Schematic of the proposed flowchart to find the optimum GFFN and hybrid ICA-GFFN models

The selected optimum structure (6-4-5-1) was then incorporated with the ICA code in MATLAB using the same training and testing datasets. In the optimizing process, the ICA parameters (Table 6) play a crucial role and need to be determined from
previous studies or the trial–error method (e.g., [22, 25, 26, 54, 55]). This is because the ICA, by adjusting weights and biases, can minimize the error of the optimum GFFN structure. In the current paper, 2, π/4 and 0.02 were assigned to β, θ and ζ (Table 6), but Ncou, Ndec and Nimp were determined through parametric investigation. The optimum Ncou, according to the highest R² and minimum RMSE values, was identified through 11 conducted hybrid models (Table 7), which were trained using the selected GFFN structure. A similar process with Nimp was managed to characterize the optimum Ndec using the variation of RMSE against Ncou. As presented in Fig. 7, the approximate boundary between moderate and very low variation (350) can be considered as the optimum Ndec. Further, the appropriate Nimp was obtained through the calculated R² and RMSE of ICA-GFFN models subjected to diverse Nimp values for both training and testing datasets (Table 8). The outcome of the trained hybrid ICA-GFFN model subjected to the optimum GFFN structure (6-4-5-1) and the determined ICA parameters (Table 6) is reflected in Fig. 8. The testing datasets were also employed to assess the capacity of the network performance.
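The parametric investigation described above amounts to evaluating the hybrid model over a grid of candidate values and keeping the value with the highest R² (ties broken by lower RMSE). A minimal sketch, in which `evaluate` is a hypothetical callback standing in for one ICA-GFFN training run of the kind tabulated in Tables 7 and 8:

```python
def sweep_parameter(evaluate, candidates):
    """Pick the candidate value (e.g., for N_cou or N_imp) whose trained
    model scores the highest R^2, using lower RMSE as the tie-breaker.

    evaluate(value) -> (r2, rmse) is assumed to train one hybrid model.
    """
    scores = {v: evaluate(v) for v in candidates}
    best = max(scores, key=lambda v: (scores[v][0], -scores[v][1]))
    return best, scores
```

For instance, sweeping a candidate list with a toy `evaluate` that peaks at 150 returns 150 as the optimum.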
6 Validation and Discussion

Table 4 Results of implemented training algorithms to assess the optimum GFFN model (columns: training algorithm, min RMSE, number of neurons, activation function; the optimum network results are shown in bold; the data rows were not recovered from the extraction)

Table 5 A series of tested structures corresponding to 9 and 11 neurons to find the optimum structure

Training algorithm | Min RMSE | Number of neurons | Structure | Layer activation transfer function | R² (Train) | R² (Test) | R² (Validate)
L–M | 0.345 | 11 | 6-4-7-1 | Log | 0.88 | 0.87 | 0.88
L–M | 0.516 | 11 | 6-4-4-3-1 | Log | 0.85 | 0.83 | 0.80
L–M | 0.486 | 11 | 6-3-5-3-1 | Log | 0.87 | 0.84 | 0.87
L–M | 0.429 | 11 | 6-8-3-1 | Log | 0.89 | 0.87 | 0.89
L–M | 0.586 | 11 | 6-4-3-4-1 | Log | 0.83 | 0.81 | 0.80
L–M | 0.304 | 11 | 6-7-4-1 | Log | 0.89 | 0.88 | 0.90
L–M | 0.650 | 11 | 6-3-4-4-1 | Log | 0.82 | 0.80 | 0.80
L–M | 0.181 | 11 | 6-5-6-1 | Log | 0.93 | 0.91 | 0.92
L–M | 0.308 | 11 | 6-3-8-1 | Log | 0.89 | 0.90 | 0.88
L–M | 0.206 | 11 | 6-11-1 | Log | 0.92 | 0.91 | 0.91
L–M | 0.273 | 11 | 6-6-5-1 | Log | 0.91 | 0.91 | 0.90
MO | 0.553 | 9 | 6-2-7-1 | HyT | 0.83 | 0.81 | 0.80
MO | 0.456 | 9 | 6-3-6-1 | HyT | 0.88 | 0.89 | 0.86
MO | 0.483 | 9 | 6-3-3-3-1 | HyT | 0.86 | 0.84 | 0.87
MO | 0.211 | 9 | 6-9-1 | HyT | 0.93 | 0.92 | 0.91
MO | 0.470 | 9 | 6-6-3-1 | HyT | 0.86 | 0.83 | 0.85
MO | 0.181 | 9 | 6-4-5-1 | HyT | 0.93 | 0.92 | 0.92
MO | 0.296 | 9 | 6-5-4-1 | HyT | 0.90 | 0.91 | 0.90
MO | 0.504 | 9 | 6-7-2-1 | HyT | 0.85 | 0.84 | 0.82

The optimum network results subjected to the implemented internal characteristics are shown in bold.
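The structure-selection process summarized in Tables 4 and 5 is essentially a best-of-three-runs grid search over training algorithm, activation function and network size. A sketch, with `train_once` as a hypothetical stand-in for training a single GFFN until the RMSE target or the 1000-iteration cap is reached:

```python
import itertools

def search_topologies(train_once, algorithms, activations, neuron_counts,
                      runs=3):
    """Grid search mirroring the two embedded loops of Fig. 5.

    train_once(algorithm, activation, n_neurons) -> rmse is assumed to
    train one network; the best of `runs` repetitions is kept, as in the
    paper's three-run evaluation of each structure.
    """
    best = None
    for algo, act in itertools.product(algorithms, activations):
        for n in neuron_counts:
            rmse = min(train_once(algo, act, n) for _ in range(runs))
            if best is None or rmse < best[0]:
                best = (rmse, algo, act, n)
    return best  # (rmse, algorithm, activation, n_neurons)
```

With a deterministic stub in place of real training, the harness simply returns the configuration with the lowest RMSE.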
Table 7 Determination of Ncou using Ndec and Nimp subjected to the optimum GFFN structure (columns: tested ICA-GFFN model, Ncou, train R²/RMSE, test R²/RMSE; the data rows were not recovered from the extraction)

Table 8 Determination of Nimp using both training and testing datasets (columns: tested ICA-GFFN model, Nimp, train R²/RMSE, test R²/RMSE; the data rows were not recovered from the extraction)
Fig. 8 Predictability of the optimum GFFN and hybrid ICA-GFFN using similar randomized training datasets
higher predictability of ICA-GFFN rather than MLP, GFFN and RBF.

Two different multivariate regressions (MVRs) (Eqs. 15, 16) using the same randomized training datasets were developed and compared to the other models (Fig. 9a–d). The performance of the models was then tested using the absolute error (AE) and calculated residuals (CR). The AE shows the difference between actual and predicted values and indicates the physical error in a measurement as well as the uncertainty in a measurement. The residual is the fitting deviation of a predicted value from the measured value, and thus a lower residual represents better fitting conditions. The lowest deviated variation of AE and CR was observed for ICA-GFFN, which, with respect to the GFFN, RBF, MLP and MVR models (Fig. 9a, b), can be interpreted as a more precise predictability level. Figure 9a, b also shows better performance of GFFN than RBF and MLP and a significant improvement over MVR. Therefore, GFFN and RBF were ranked as the second and third most appropriate models for prediction.

$$\mathrm{UCS} = -89.58 + 8.49\,(\text{rock class}) + 15.24\,\gamma - 0.25\,n + 1.31\,w + 11.85\,V_p + 8.08\,I_{s(50)}, \quad R^2 = 0.68 \quad (15)$$

$$\mathrm{UCS} = 2.004\,(\text{rock class})^{0.31}\, \gamma^{0.27}\, n^{-0.09}\, w^{0.06}\, V_p^{1.41}\, I_{s(50)}^{0.52}, \quad R^2 = 0.56. \quad (16)$$

The performance of the applied models in pursuing the accuracy and monitoring of forecasted values was also cross-examined using known statistical error indices (mean absolute percentage error, MAPE; variance accounted for, VAF; RMSE; mean absolute deviation, MAD; coefficient of determination, R²; and index of agreement, IA). The formulation of these indices can widely be found in statistical textbooks. The MAPE is one of the most popular indices for describing the accuracy and size of the forecasting error. The MAD reflects the size of the error in the same units as the data and reveals that high predicted values cause higher error rates. The generic IA, which varies from 0 to 1 [58], indicates the compatibility of the model with observations. The VAF, as an intrinsically connected index between predicted and actual values, is representative of model performance. Therefore, higher values of VAF, IA and R² as well as smaller values of MAPE, MAD and RMSE are interpreted as better model performance (Table 12).

The robustness of the optimized ICA-GFFN model was also analyzed using sensitivity and weight analyses. Sensitivity analyses are appropriate methods to investigate the predictability robustness of a calibrated model or system and to determine the importance of inputs in the presence of uncertainty [59]. Moreover, removing the least effective inputs may lead to the development of better results [60]. Here, two sensitivity analysis methods, known as the cosine amplitude method (CAM) and the partial derivative method (PaD) [60], were used (Eqs. 17, 18) and the results are presented in Fig. 10a:

$$R_{ij} = \frac{\sum_{k=1}^{m} \left(x_{ik} \times x_{jk}\right)}{\sqrt{\sum_{k=1}^{m} x_{ik}^2\, \sum_{k=1}^{m} x_{jk}^2}}, \quad x_i \text{ and } x_j: \text{elements of data pairs} \quad (17)$$

$$\text{contribution of } i\text{th variable} = \frac{\mathrm{SSD}_i}{\sum_i \mathrm{SSD}_i}; \quad \mathrm{SSD}_i = \sum_p \left(\frac{\partial O_k^p}{\partial x_i^p}\right)^2, \quad (18)$$

where $O_k^p$ and $x_i^p$ are the output and input values for pattern $p$, and $\mathrm{SSD}_i$ is the sum of the squares of the partial derivatives.

The results of the sensitivity analyses can also be examined using the assigned weights (W) in intelligence models. Individual weights are numerical parameters which represent the strength and effect of the connections between units; the greater the magnitude, the greater the influence. Bias (b) in predictive models is a measure of model rigidity and inflexibility, and means that the model is not capturing the entire signal from the
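Both sensitivity measures of Eqs. (17) and (18) are straightforward to compute. The sketch below implements Eq. (17) directly and approximates the derivatives in Eq. (18) by central finite differences (our assumption; with a trained network the analytic derivatives would normally be used):

```python
import numpy as np

def cam_strengths(X, y):
    """Cosine amplitude method, Eq. (17): strength of relation R_ij
    between each input column of X and the output vector y."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    num = X.T @ y
    den = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())
    return num / den

def pad_contributions(model, X, eps=1e-4):
    """Partial derivative method, Eq. (18): relative contribution of each
    input, with dO/dx_i approximated per pattern by central differences."""
    X = np.asarray(X, float)
    ssd = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += eps
        Xm[:, i] -= eps
        d = (model(Xp) - model(Xm)) / (2.0 * eps)  # dO/dx_i for each pattern
        ssd[i] = float((d ** 2).sum())             # SSD_i of Eq. (18)
    return ssd / ssd.sum()                         # normalized contributions
```

A column identical to the output yields a CAM strength of 1, and for a linear model the PaD contributions are proportional to the squared coefficients.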
Table 9 Confusion matrix of optimum models for validation datasets

Output target | 4.75–21.94 | 21.94–39.14 | 39.14–56.33 | 56.33–73.52 | 73.52–90.72 | 90.72–107.91 | 107.91–125.11 | 125.11–142.30 | 142.30–159.41 | 159.41–176.69 | Total data | True | False
4.75–21.94 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 0
21.94–39.14 | 2 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | 8 | 3
39.14–56.33 | 0 | 1 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 8 | 1
56.33–73.52 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 1 | 1
73.52–90.72 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 3 | 2 | 1
90.72–107.91 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 3 | 3 | 0
107.91–125.11 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 | 3 | 2 | 1
125.11–142.30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0
142.30–159.41 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 2 | 1 | 1
159.41–176.69 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total | 7 | 8 | 11 | 1 | 2 | 5 | 2 | 2 | 1 | 0 | 39 | 31 | 8

Network output (UCS) → RBF
4.75–21.94 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 5 | 0
21.94–39.14 | 1 | 9 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | 9 | 2
39.14–56.33 | 0 | 2 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 7 | 2
56.33–73.52 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 1 | 1
73.52–90.72 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 3 | 2 | 1
90.72–107.91 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 3 | 3 | 0
107.91–125.11 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 0 | 3 | 2 | 1
125.11–142.30 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1
142.30–159.41 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 2 | 1 | 1
159.41–176.69 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total | 6 | 12 | 8 | 1 | 2 | 5 | 2 | 1 | 2 | 0 | 39 | 30 | 9
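The correct classification rate (CCR) and classification error (CE) reported for these matrices follow directly from the diagonal of each block; a minimal sketch:

```python
import numpy as np

def ccr_ce(confusion):
    """CCR (%) and CE (%) from a confusion matrix whose rows are target
    classes and columns are network outputs, as in Table 9."""
    cm = np.asarray(confusion, float)
    ccr = 100.0 * np.trace(cm) / cm.sum()
    return ccr, 100.0 - ccr
```

Applied to the first block of Table 9 (31 true predictions out of 39), this gives a CCR of about 79.5%.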
[The rotated continuation of Table 9 for the remaining models and a model-summary table (columns: Model, CCR (%), CE (%), Progress (%)) could not be recovered from the extraction; the legible fragment reads "ICA-GFFN 80.3 89.7 19.7 10.3 10.47", consistent with an ICA-GFFN CCR of 89.7% and CE of 10.3%.]
7 Conclusion
Fig. 10 Influence of input parameters on predicted UCS and E using different sensitivity analyses
ICA-GFFN than the other neural network and MVR models. The calculated IA index was another supplementary indicator showing that the ICA-GFFN model produces predicted values closer to the observations. Furthermore, the analytical graphs of AE and CR indicated that the ICA-GFFN model provides more accurate and reliable predictions than the other employed models. The implemented sensitivity analyses showed that Is, Vp and n are the most effective factors on predicted UCS values. The results of the sensitivity analyses can also be interpreted as showing an appropriate trend with previous
empirical correlations, which have mostly been established using these three factors; thus, the MVR correlations obtained in this paper can also be calibrated against these effective factors.

Acknowledgements The Deputy of Research and Technology of Islamic Azad University, Roudehen Branch, Tehran, is kindly acknowledged. Critical reviews by the anonymous reviewers and the editor, which significantly improved the quality of our work, are gratefully appreciated.

References

1. Kahraman, S.: Evaluation of simple methods for assessing the uniaxial compressive strength of rock. Int. J. Rock Mech. Min. Sci. 38, 981–994 (2001)
2. Abbaszadeh Shahri, A.; Larsson, S.; Johansson, F.: Updated relations for the uniaxial compressive strength of marlstones based on P-wave velocity and point load index test. Innov. Infrastruct. Solut. 1, 17 (2016). https://doi.org/10.1007/s41062-016-0016-9
3. Ferentinou, M.; Fakir, M.: An ANN approach for the prediction of uniaxial compressive strength of some sedimentary and igneous rocks in eastern KwaZulu-Natal. Proc. Eng. 191, 1117–1125 (2017). https://doi.org/10.1016/j.proeng.2017.05.286
4. Abbaszadeh Shahri, A.; Gheirati, A.; Espersson, M.: Prediction of rock mechanical parameters as a function of P-wave velocity. Int. Res. J. Earth Sci. 2(9), 7–14 (2014)
5. Basu, A.; Aydin, A.: Predicting uniaxial compressive strength by point load test: significance of cone penetration. Rock Mech. Rock Eng. 39, 483–490 (2006)
6. Kahraman, S.: The determination of uniaxial compressive strength from point load strength for pyroclastic rocks. Eng. Geol. 170, 33–42 (2014)
7. Palchik, V.: On the ratios between elastic modulus and uniaxial compressive strength of heterogeneous carbonate rocks. Rock Mech. Rock Eng. 44(1), 121–128 (2011)
8. Tugrul, A.; Zarif, I.H.: Correlation of mineralogical and textural characteristics with engineering properties of selected granitic rocks from Turkey. Eng. Geol. 51, 303–317 (1999)
9. Yagiz, S.: Predicting uniaxial compressive strength, modulus of elasticity and index properties of rocks using the Schmidt hammer. Bull. Eng. Geol. Environ. 68(1), 55–63 (2009)
10. Yasar, E.; Erdogan, Y.: Correlating sound velocity with the density, compressive strength and Young's modulus of carbonate rocks. Int. J. Rock Mech. Min. Sci. 41(5), 871–875 (2004)
11. Alvarez Grima, M.; Babuska, R.: Fuzzy model for the prediction of unconfined compressive strength of rock samples. Int. J. Rock Mech. Min. Sci. 36, 339–349 (1999)
12. Majdi, A.; Rezaei, M.: Prediction of unconfined compressive strength of rock surrounding a roadway using artificial neural network. Neural Comput. Appl. 23, 381–389 (2013). https://doi.org/10.1007/s00521-012-0925-2
13. Meulenkamp, F.; Alveraz Grima, M.: Application of neural networks for the prediction of the unconfined compressive strength (UCS) from Equotip hardness. Int. J. Rock Mech. Min. Sci. 36, 29–39 (1999)
14. Singh, V.K.; Singh, D.; Singh, T.N.: Prediction of strength properties of some schistose rocks from petrographic properties using artificial neural networks. Int. J. Rock Mech. Min. Sci. 38, 269–284 (2001)
15. Dehghan, S.; Sattari, G.H.; Chehreh Chelgani, S.; Aliabadi, M.A.: Prediction of uniaxial compressive strength and modulus of elasticity for Travertine samples using regression and artificial neural networks. Min. Sci. Technol. 20, 41–46 (2010)
16. Cevik, A.; Sezer, E.A.; Cabalar, A.F.; Gokceoglu, C.: Modeling of the uniaxial compressive strength of some clay-bearing rocks using neural network. Appl. Soft Comput. 11, 2587–2594 (2011)
17. Ceryan, N.; Okkan, U.; Kesimal, A.: Prediction of unconfined compressive strength of carbonate rocks using artificial neural networks. Environ. Earth Sci. 68(3), 807–819 (2012)
18. Gultekin, N.Y.; Gokceoglu, C.; Sezer, E.A.: Prediction of uniaxial compressive strength of granitic rocks by various nonlinear tools and comparison of their performances. Int. J. Rock Mech. Min. Sci. 62, 113–122 (2013)
19. Mishra, D.A.; Srigyan, M.; Basu, A.; Rokade, P.J.: Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests. Int. J. Rock Mech. Min. Sci. 80, 418–424 (2015)
20. Singh, R.; Umrao, R.K.; Ahmad, M.; Ansari, M.K.; Sharma, L.K.; Singh, T.N.: Prediction of geomechanical parameters using soft computing and multiple regression approach. Measurement 99, 108–119 (2017)
21. Singh, R.; Vishal, V.; Singh, T.; Ranjith, P.G.: A comparative study of generalized regression neural network approach and adaptive neuro-fuzzy inference systems for prediction of unconfined compressive strength of rocks. Neural Comput. Appl. 23(2), 499–506 (2013). https://doi.org/10.1007/s00521-012-0944-z
22. Hosseini, S.; Al Khaled, A.: A survey on the imperialist competitive algorithm metaheuristic: implementation in engineering domain and directions for future research. Appl. Soft Comput. J. (2014). https://doi.org/10.1016/j.asoc.2014.08.024
23. Jahed Armaghani, D.; Mohd Amin, M.F.; Yagiz, S.; Shirani Fardonbe, R.; Asinda Abdullah, R.: Prediction of the uniaxial compressive strength of sandstone using various modeling techniques. Int. J. Rock Mech. Min. Sci. 85, 174–186 (2016)
24. Momeni, E.; Jahed Armaghani, D.; Hajihassani, M.; Amin, M.F.M.: Prediction of uniaxial compressive strength of rock samples using hybrid particle swarm optimization-based artificial neural networks. Measurement 60, 50–63 (2015)
25. Taghavifar, H.; Mardani, A.; Taghavifar, L.: A hybridized artificial neural network and imperialist competitive algorithm optimization approach for prediction of soil compaction in soil bin facility. Measurement 46, 2288–2299 (2013)
26. Atashpaz Gargari, E.; Lucas, C.: Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 4661–4667 (2007)
27. Atashpaz-Gargari, E.; Hashemzadeh, F.; Rajabioun, R.; Lucas, C.: Colonial competitive algorithm, a novel approach for PID controller design in MIMO distillation column process. Int. J. Intell. Comput. Cybern. 1, 337–355 (2008)
28. Jahed Armaghani, D.; Hasanipanah, M.; Mohamad, E.T.: A combination of the ICA–ANN model to predict air overpressure resulting from blasting. Eng. Comput. 32, 155–171 (2016)
29. Nazari-Shirkouhi, S.; Eivazy, H.; Ghodsi, R.; Rezaie, K.; Atashpaz-Gargari, E.: Solving the integrated product mix-outsourcing problem using the imperialist competitive algorithm. Expert Syst. Appl. 37, 7615–7626 (2010)
30. Palmer, R.R.; Colton, J.; Karmer, L.: A History of the Modern World, 9th edn. Knopf, New York (2002)
31. Lorpari Zanganeh, A.; Roosta, A.: Analytical study of Iran export and manufacturing of decorative stones in the year 2012 (2012–2013) and Iran's position in global decorative stone industry. Int. J. Sci. Manag. Dev. 3(1), 793–798 (2015)
32. Tahernejad, M.M.; Ataei, M.; Khalokakaei, R.: A strategic analysis of Iran's dimensional stone mines using SWOT method. Arab. J. Sci. Eng. 38, 149–154 (2013). https://doi.org/10.1007/s13369-012-0422-z
33. Abbaszadeh Shahri, A.: Identification and Estimation of Nonlinear Site Effect Characteristics in Sedimentary Basin Subjected to Earthquake Excitations. Ph.D. dissertation, Department of Geophysics, Science and Research Branch, Islamic Azad University, Tehran, Iran (2010)
34. Jamshidi, A.; Nikudel, M.R.; Khamehchiyan, M.; Zarei Sahamieh, R.; Abdi, Y.: A correlation between P-wave velocity and Schmidt hardness with mechanical properties of travertine building stones. Arab. J. Geosci. 9, 568 (2016). https://doi.org/10.1007/s12517-016-2542-3
35. Rajabzadeh, M.A.; Moosavinasab, Z.; Rakhshandehroo, G.: Effects of rock classes and porosity on the relation between uniaxial compressive strength and some rock properties for carbonate rocks. Rock Mech. Rock Eng. 45, 113–122 (2012). https://doi.org/10.1007/s00603-011-0169-y
36. Water and Energy Resources of Iran: Engineering Geology and Rock Mechanics of Dam and Power Plant of Chamshir. Report 5589601, Ministry of Energy of Iran (2012)
37. Iran's Ministry of Industries and Mines Statistics and Information Service (2009). www.mim.gov.ir
38. Iranian Mining Organization (2004). www.imo.gov.ir
39. Hamdia, K.; Ghasemi, H.; Zhuang, X.; Alajlan, N.; Rabczuk, T.: Computational machine learning representation for the flexoelectricity effect in truncated pyramid structures. Comput. Mater. Contin. 59(1), 79–87 (2019). https://doi.org/10.32604/cmc.2019.05882
40. Bouzerdoum, A.; Mueller, R.: A generalized feedforward neural network architecture and its training using two stochastic search methods. In: Cantú-Paz, E., et al. (eds.) Genetic and Evolutionary Computation—GECCO 2003. Lecture Notes in Computer Science, vol. 2723. Springer, Berlin (2003)
48. … (eds.) Proceedings of the 1988 Connectionist Models Summer School, pp. 38–51. Morgan Kaufmann Publishers, San Mateo (1988)
49. Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 2, 164–168 (1944)
50. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963)
51. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Belmont (1995)
52. Swanston, D.J.; Bishop, J.M.; Mitchell, R.J.: Simple adaptive momentum: new algorithm for training multiplayer perceptrons. Electron. Lett. 30, 1498–1500 (1994)
53. Wiegerinck, W.; Komoda, A.; Heskes, T.: Stochastic dynamics of learning with momentum in neural networks. J. Phys. A 27, 4425–4437 (1994)
54. Abdechiri, M.; Faez, K.; Bahrami, H.: Neural network learning based on chaotic imperialist competitive algorithm. In: Proceedings of the 2nd International Workshop on Intelligent Systems and Applications (ISA), pp. 1–5. IEEE (2010)
55. Abdollahi, M.; Isazadeh, A.; Abdollahi, D.: Imperialist competitive algorithm for solving systems of nonlinear equations. Comput. Math. Appl. 65, 1894–1908 (2013)
56. Stehman, S.: Selecting and interpreting measures of thematic classification accuracy. Remote Sens. Environ. 62(1), 77–89 (1997)
57. Dreižienė, L.; Dučinskas, K.; Paulionienė, L.: Correct classification rates in multi-category discriminant analysis of spatial gaussian data. Open J. Stat. 5, 21–26 (2015)
58. Willmott, C.J.: On the evaluation of model performance in …
41. Abbaszadeh Shahri, A.; Asheghi, R.: Optimized developed arti- physical geography. In: Spatial Statistics and Models, pp. 443–460
ficial neural network-based models to predict the blast-induced (1984)
ground vibration, innovative infrastructure solutions. (2018). https 59. Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.;
://doi.org/10.1007/s41062-018-0137-4 Gatelli, D.; Saisana, M.; Tarantola, S.: Global Sensitivity Analy-
42. Abbaszadeh Shahri, A.; Larsson, S.; Johansson, F.: CPT-SPT cor- sis: The Primer. Wiley, New York (2008)
relations using artificial neural network approach—a case study 60. Gevrey, M.; Dimopoulos, I.; Lek, S.: Review and comparison of
in Sweden. Electron. J. Geotech. Eng. (EJGE) 20(Bund. 28), methods to study the contribution of variables in artificial neural
13439–13460 (2015) network models. Ecol. Model. 160(3), 249–264 (2003)
43. Arulampalam, G.; Bouzerdoum, A. Expanding the structure of 61. Ferrari, S.; Stengel, R.F.: Smooth function approximation using
shunting inhibitory artificial neural network classifiers. In: IJCNN. neural networks. IEEE Trans. Neural Netw. 16(1), 24–38 (2005)
IEEE (2002). https://doi.org/10.1109/ijcnn.2002.1007601 62. Pao, Y.H.; Park, G.H.; Sobajic, D.J.: Learning and generalization
44. Bouzerdoum, A.: A new class of high-order neural networks with characteristics of the random vector functional-link net. Neuro-
nonlinear decision boundaries. In: Proceedings of the Sixth Inter- computing 6(2), 163–180 (1994)
national Conference on Neural Information Processing (ICONIP 63. Asteris, P.G.; Plevris, V.: Anisotropic masonry failure criterion
’99), pp. 1004–1009, Perth, Australia (1999) using artificial neural networks. Neural Comput. Appl. 28(8),
45. Ghaderi, A.; Abbaszadeh Shahri, A.; Larsson, S.: An artificial 2207–2229 (2016). https://doi.org/10.1007/s00521-016-2181-3
neural network based model to predict spatial soil type distribution 64. Nguyen, D.; Widrow, B.: Improving the learning speed of 2-layer
using piezocone penetration test data (CPTu). Bull. Eng. Geol. neural networks by choosing initial values of the adaptive weights.
Environ. (2018). https://doi.org/10.1007/s10064-018-1400-9 In: Proceedings of the International Joint Conference on Neural
46. Jadhav, S.; Nalbalwar, S.; Ghatol, A.: Performance evaluation of Networks, vol. 3, pp. 21–26 (1990)
generalized feedforward neural network based ECG arrhythmia 65. Wang, D.; Li, M.: Stochastic configuration networks: fundamen-
classifier. Int. J. Comput. Sci. Issues 9(4), 379–384 (2012) tals and algorithms. IEEE Trans. Cybern. 47(10), 3466–3479
47. Sonmez, H.; Gokceoglu, C.; Nefeslioglu, H.A.; Kayabasi, A.: Esti- (2017)
mation of rock modulus: for intact rocks with an artificial neural 66. Zhou, W.: Verification of the nonparametric characteristics of
network and for rock masses with a new empirical equation. Int. backpropagation neural networks for image classification. IEEE
J. Rock Mech. Min. Sci. 43, 224–235 (2006) Trans. Geosci. Remote Sens. 37, 771–779 (1999)
48. Fahlman, S.E.: Faster-learning variations of backpropagation: an
empirical study. In: Touretzky, D., Hinton, G.E., Sejnowski, T.J.
13