CHAPTER 18

SLOPE STABILITY EVALUATION USING RADIAL BASIS FUNCTION NEURAL NETWORK, LEAST SQUARES SUPPORT VECTOR MACHINES, AND EXTREME LEARNING MACHINE

Nhat-Duc Hoang∗, Dieu Tien Bui†

∗ Faculty of Civil Engineering, Institute of Research and Development, Duy Tan University, Danang, Vietnam
† Geographic Information System Group, University College of Southeast Norway (USN), Bø i Telemark, Norway

18.1 RESEARCH BACKGROUND


Slope collapses are complex geotechnical phenomena that represent a serious natural hazard in many regions around the globe. Every year, these hazards cause hundreds of millions of US dollars in damage to public and private property, as well as human casualties [1]. Population expansion and economic development in many countries have led to the construction of road networks and residential areas in hilly or mountainous regions [2,3]. As a consequence, slope stability assessment has become an urgent task, and tools for analyzing slopes are necessary to prevent and mitigate the damage caused by slope failure [4–7].
Slope analysis is very helpful because various parties (e.g., government agencies and land-use planners) can use it to identify collapse-prone areas. Based on such analyses, financial resources can be allocated appropriately to construct retaining structures or to establish effective evacuation plans [2,8]. Currently, slope failure prediction models based on machine learning have proven to be effective tools for assisting decision-making in hazard prevention planning, especially in the tasks of designing and constructing highways, open pits, and earth dams [9]. The reason for this effectiveness is that machine learning techniques possess an intrinsic capability to mine valuable information hidden in records of past slope cases.
Due to the complex and multi-factorial interactions between the factors that affect slope stability, slope assessment remains a significant challenge for civil engineers. This research carries out a comparative study of machine learning solutions for slope stability assessment based on three advanced artificial intelligence methods: Radial Basis Function Neural Network (RBFNN), Least Squares Support Vector Machines (LSSVM), and Extreme Learning Machine (ELM). Furthermore, two data sets of actual slope collapse events have been collected for this study.
The reasons for selecting these three approaches are as follows: RBFNN [10,11], LSSVM [12–15], and ELM [16–19] have been shown to be capable pattern classifiers; however, the performance of these three approaches in slope assessment has rarely been discussed in the literature. In addition, a recent comparative study [20] showed that LSSVM achieves superior prediction accuracy in slope classification; therefore, a comparison among the three models can provide helpful information for readers, including both academic researchers and practicing engineers.
The remainder of this chapter is organized as follows. The second section reviews pertinent works in the literature. The research framework is described in the third section, followed by the experimental results. Conclusions are stated in the final section.

18.2 LITERATURE REVIEW


Previous studies indicate that methods based on advanced machine learning frameworks help to boost slope prediction credibility [9,20,21]. Machine learning based models for slope evaluation are generally established by combining supervised learning techniques with historical cases of slope performance. Using such models, slope stability prediction can be formulated as a classification task in which the target outputs are either “collapse” or “non-collapse.”
Lu and Rosenbaum [22], Zhou and Chen [23], Jiang [24], Das et al. [25], and Wang et al. [1] utilized Artificial Neural Networks (ANN) to forecast slope stability. Zhao et al. [9] and Hoang and Tien-Bui [5] employed the Relevance Vector Machine (RVM) to explore the nonlinear relationship between slope stability and its influencing factors. Prediction models of slope assessment employing the Support Vector Machine (SVM) were developed by Samui [26], Li and Wang [27], Li and Dong [28], Tien Bui et al. [29], and Cheng and Hoang [30].
The Evolutionary Polynomial Regression [31] and the Least Squares Support Vector Machine (LSSVM) [13] have been employed to model the mapping between the input pattern and the factor of safety of slopes against failure. A probabilistic slope assessment model utilizing the Gaussian Process was established by Ching et al. [32] for analyzing slopes along mountain roads. Hoang and Pham [20] constructed a slope classification model based on a hybridization of LSSVM and the Firefly Algorithm.
Recently, Cheng and Hoang [2] put forward a probabilistic slope assessment model based on a Bayesian framework. Yan and Li [33] established a method for predicting the stability of open pit slopes based on Bayes Discriminant Analysis. A swarm-optimized fuzzy instance-based classifier has been proposed for predicting slope collapses occurring along road sections in mountainous regions [34]. These previous studies indicate that machine learning can provide a competent tool for establishing a structured representation of the slope system, which allows accurate predictions of slope stability.

18.3 RESEARCH METHOD


18.3.1 MACHINE LEARNING APPROACHES
18.3.1.1 Radial Basis Function Neural Network
Basically, a Radial Basis Function Neural Network (RBFNN) [10,35] model is a feedforward neural network that consists of one input layer, one hidden layer, and one output layer. Within this structure, a certain number of neurons is assigned to each layer. The neurons in the input layer correspond to the input features (N) (i.e., the dimensionality of the input data). The number of neurons in the hidden layer represents the number of centroids (M) and their locations (i.e., coordinates) in the learning space. The RBFNN learning phase divides the data set into a certain number of groups; accordingly, a centroid is a representative of a group of data.
The main purpose of the hidden layer of RBFNN is to map the input data from their original space onto the network space through a radial basis function φ(·):

$$z_j(x) = \phi\left( \| x - c_j \| \right) \tag{18.1}$$

where $c_j$ denotes the coordinates of the jth centroid, x represents the input data, and $\| x - c_j \|$ denotes the norm between the data point and the centroid.
Since the task at hand is binary pattern recognition, the output of the neuron in the output layer is converted into binary values through the sigmoid function as follows:

$$f(u) = \begin{cases} 0 & \text{if } S(u) < t \\ 1 & \text{if } S(u) \geq t \end{cases} \tag{18.2}$$

where S(·) represents the sigmoid function and t denotes a threshold value of 0.2 used to convert the real-valued input into binary outputs.

The formula of the sigmoid function is written as follows:

$$S(u) = \frac{1}{1 + e^{-u}} \tag{18.3}$$
It is noted that in RBFNN, the weights between the hidden and output layers $w_j$, the centroid locations $c_j$, and the number of centroids M are determined so that the prediction error of the model is minimized. Thus, a least squares objective function for this learning problem can be defined as follows:

$$E = \sum_{i=1}^{D} \left( T(i) - y(i) \right)^2 \tag{18.4}$$

where T(i) denotes the desired output, D is the number of training data, and y(i) denotes the network output.
The network output is computed as the weighted sum of the hidden layer outputs and can be expressed in the following form:

$$y = \sum_{j=1}^{M} w_j z_j(x) \tag{18.5}$$
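To make the above computation concrete, the following minimal Python sketch implements the RBFNN forward pass of Eqs. (18.1)–(18.5). It is an illustration assumed for this chapter, not the authors' Matlab implementation; the Gaussian choice of φ, the centroids, the width, and the weights are placeholders that a training procedure would normally determine.

```python
import numpy as np

def rbfnn_forward(X, centroids, width, weights, t=0.2):
    """Forward pass of a simple RBF network (Eqs. 18.1-18.5).

    X         : (D, N) array of input samples
    centroids : (M, N) array of centroid coordinates c_j
    width     : spread of the Gaussian basis function (an assumed choice of phi)
    weights   : (M,) hidden-to-output weights w_j
    t         : decision threshold of Eq. (18.2); the chapter uses 0.2
    """
    # Eq. (18.1): norm between each sample and each centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    Z = np.exp(-dists**2 / (2.0 * width**2))   # Gaussian radial basis
    y = Z @ weights                             # Eq. (18.5): network output
    s = 1.0 / (1.0 + np.exp(-y))                # Eq. (18.3): sigmoid
    return (s >= t).astype(int)                 # Eq. (18.2): binary decision

# Illustrative usage with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))        # e.g., 10 slopes, 6 influencing factors
labels = rbfnn_forward(X, centroids=X[:3], width=1.0,
                       weights=rng.normal(size=3))
```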

18.3.1.2 Least Squares Support Vector Machines

Least Squares Support Vector Machines (LS-SVM) [36] is a supervised machine learning technique for solving classification problems which relies on the principles of statistical learning theory. Given a training data set $\{x_k, y_k\}_{k=1}^{N}$ with input data $x_k \in R^n$, where N is the number of training data points and n is the number of dimensions of the input data, and corresponding class labels $y_k \in \{-1, +1\}$, the LS-SVM for classification is formulated as follows:

$$\min_{w,b,e} \; J_p(w, e) = \frac{1}{2} w^T w + \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 \tag{18.6}$$

$$\text{subject to} \quad y_k \left( w^T \phi(x_k) + b \right) = 1 - e_k, \quad k = 1, \ldots, N \tag{18.7}$$

where $w \in R^n$ is the normal vector to the classification hyperplane, $b \in R$ is the bias, $e_k \in R$ are error variables, and $\gamma > 0$ denotes a regularization constant.
The Lagrangian is given by:

$$L(w, b, e, \alpha) = J_p(w, e) - \sum_{k=1}^{N} \alpha_k \left[ y_k \left( w^T \phi(x_k) + b \right) - 1 + e_k \right] \tag{18.8}$$

where $\alpha_k$ are Lagrange multipliers and $\phi(\cdot)$ represents the feature mapping induced by the kernel. Applying the Karush–Kuhn–Tucker (KKT) conditions of optimality, the above optimization problem is equivalent to the following linear system after the elimination of e and w:



$$\begin{bmatrix} 0 & y^T \\ y & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ 1_v \end{bmatrix} \tag{18.9}$$

in which $y = [y_1; \ldots; y_N]$, $1_v = [1; \ldots; 1]$, $\alpha = [\alpha_1; \ldots; \alpha_N]$, and $\Omega_{kl} = y_k y_l K(x_k, x_l)$, where K represents a kernel function.
The classification model based on LS-SVM is written in the following form:

$$y(x) = \operatorname{sign}\left( \sum_{k=1}^{N} \alpha_k y_k K(x, x_k) + b \right) \tag{18.10}$$

where $\alpha_k$ and b denote the solution of the linear system (Eq. (18.9)). The kernel function most commonly used is the Radial Basis Function (RBF) kernel, described as follows:

$$K(x_k, x_l) = \exp\left( -\frac{\| x_k - x_l \|^2}{2\sigma^2} \right) \tag{18.11}$$

where σ is the kernel function parameter.
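Training thus reduces to solving the single linear system of Eq. (18.9), which the following minimal numpy sketch illustrates. The function names and the default values of γ and σ are assumptions for illustration only, not the LS-SVMlab implementation used later in this chapter.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system of Eq. (18.9) for (b, alpha)."""
    N = len(y)
    # RBF kernel matrix, Eq. (18.11)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    Omega = np.outer(y, y) * K
    # Assemble the (N+1) x (N+1) system of Eq. (18.9)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], np.ones(N)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(X_new, X, y, b, alpha, sigma=1.0):
    """Eq. (18.10): sign of the kernel expansion over the training points."""
    sq = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)
```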

18.3.1.3 Extreme Learning Machine

Extreme Learning Machine (ELM) [37] is a novel method for pattern classification as well as function approximation. It is essentially a single-hidden-layer feedforward neural network in which the weights between the inputs and the hidden nodes are randomly assigned and remain constant during the training and prediction phases. In contrast, the weights that connect the hidden nodes to the outputs can be trained very quickly. Experimental studies in the literature [16,37,38] have shown that ELMs can produce acceptable predictive performance at a computational cost much lower than that of networks trained by the back-propagation algorithm.
The task at hand is to construct a classification model from a data set $X = \{x_t \in R^p\}$, $t = 1, \ldots, n$, with n samples and p input features. Given a network with p input units, q hidden neurons, and c outputs, the ELM model's output is written as follows [39]:

$$o_i(t) = m_i^T h(t) \tag{18.12}$$

where $m_i \in R^q$, $i \in \{1, \ldots, c\}$, denotes the weight vector that connects the hidden neurons to the ith output neuron, and $h(t) \in R^q$ represents the vector of hidden-neuron outputs for a certain input pattern $x(t) \in R^p$. h(t) can be written in the following form:

$$h(t) = \left[ f\left(w_1^T x(t) + b_1\right), f\left(w_2^T x(t) + b_2\right), \ldots, f\left(w_q^T x(t) + b_q\right) \right] \tag{18.13}$$

where $b_k$ ($k = 1, 2, \ldots, q$) denotes the bias of the kth hidden neuron, $w_k \in R^p$ represents the weight vector of the kth hidden neuron, and f(·) denotes a sigmoidal activation function. It is worth noting that the weight vectors $w_k$ and the biases $b_k$ are randomly generated from a Gaussian distribution.
Given $w_k$ and $b_k$, the next step is to establish the hidden layer output matrix H. It is noted that H is a q × n matrix whose tth column is the hidden layer output vector h(t). Accordingly, the weight matrix $M = [m_1, m_2, \ldots, m_c]$ can be calculated via the Moore–Penrose pseudoinverse as follows:

$$M = H \left( H^T H \right)^{-1} D^T \tag{18.14}$$

where $D = [d(1), d(2), \ldots, d(n)]$ denotes a c × n matrix whose tth column is the actual target vector $d(t) \in R^c$.

With the network's parameters fully specified, the class label for a new input pattern is determined as follows:

$$Y = \arg\max_{i=1,\ldots,c} \{ o_i \} \tag{18.15}$$

where Y denotes the predicted class label.
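The whole ELM training procedure therefore amounts to one random draw and one pseudoinverse, as the following minimal sketch illustrates. It is an assumed Python rendering of Eqs. (18.12)–(18.15); numpy's pinv is used for numerical stability rather than the explicit inverse written in Eq. (18.14).

```python
import numpy as np

def elm_train(X, D, q=50, rng=None):
    """Train an ELM (Eqs. 18.13-18.14).

    X : (n, p) training inputs;  D : (c, n) one-hot target matrix;
    q : number of hidden neurons (the only hyper-parameter).
    """
    rng = rng or np.random.default_rng(0)
    p = X.shape[1]
    W = rng.normal(size=(q, p))     # random input weights w_k (kept fixed)
    b = rng.normal(size=(q, 1))     # random biases b_k (kept fixed)
    H = 1.0 / (1.0 + np.exp(-(W @ X.T + b)))     # (q, n) hidden outputs, Eq. (18.13)
    # Output weights via the Moore-Penrose pseudoinverse, cf. Eq. (18.14)
    M = np.linalg.pinv(H.T) @ D.T                # (q, c) output weight matrix
    return W, b, M

def elm_predict(X_new, W, b, M):
    """Eqs. (18.12) and (18.15): output scores, then arg-max class label."""
    H = 1.0 / (1.0 + np.exp(-(W @ X_new.T + b)))
    O = M.T @ H                                  # (c, n) output activations
    return np.argmax(O, axis=0)
```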

18.3.2 HISTORICAL DATA SETS OF SLOPE ASSESSMENT

Two data sets recording slope performance, collected from the literature, are employed in this study to construct and verify the machine learning approaches. The first data set (Data Set 1) consists of 168 historical cases [20]; six influencing factors, including unit weight (kN/m³), soil cohesion (kPa), internal friction angle (°), slope angle (°), slope height (m), and pore pressure ratio (Ru), are employed to characterize an earth slope. The second data set (Data Set 2) includes 109 historical cases [32] collected along Taiwan Provincial Highway No. 18; ten input factors (slope direction, slope angle, slope height, road curvature, strata type, thickness of canopy cover, catchment area, height of toe cutting, change of slope grade, and peak ground acceleration) are employed to predict slope performance.

Table 18.1 Slope Influencing Factors and Their Statistical Descriptions of Data Set 1

Factor  Notation  Definition                   Max     Average  Std.    Min
X1      γ         Unit weight (kN/m³)          31.30   21.76    4.13    12.00
X2      C         Soil cohesion (kPa)          300.00  34.12    45.82   0.00
X3      ϕ         Internal friction angle (°)  45.00   28.72    10.58   0.00
X4      β         Slope angle (°)              59.00   36.10    10.22   16.00
X5      H         Slope height (m)             511.00  104.19   132.68  3.60
X6      Ru        Pore pressure ratio          45.00   0.48     3.45    0.00

Table 18.2 Slope Influencing Factors and Their Statistical Descriptions of Data Set 2

Factor  Definition                      Max        Average   Std.      Min
X1      Slope direction (°)             345.00     171.33    93.95     0.00
X2      Slope angle (°)                 90.00      62.20     11.74     30.00
X3      Slope height (m)                100.00     22.39     14.86     5.00
X4      Road curvature (1/m)            0.03       0.00      0.02      −0.05
X5      Strata type                     5.00       4.40      1.06      1.00
X6      Thickness of canopy cover (m)   4.50       2.07      1.05      0.50
X7      Catchment area (m²)             255743.00  19706.81  39956.69  406.00
X8      Height of toe cutting (m)       50.00      7.00      6.52      2.00
X9      Change of slope grade (°)       35.00      9.08      10.36     0.00
X10     Peak ground acceleration (gal)  391.90     251.00    115.71    0.00

Table 18.1 provides the influencing factors and their statistical descriptions for Data Set 1; Table 18.2 provides the same information for Data Set 2. The data sets are provided in Appendices 1 and 2, in which an output of −1 indicates a non-collapsed slope and an output of +1 denotes a collapsed slope. For a more detailed explanation of the slope influencing factors, readers are referred to the previous works of Hoang and Pham [20] for Data Set 1 and Ching et al. [32] for Data Set 2. Furthermore, scatter plots of all input variables of the two data sets, with class labels distinguished, are shown in Fig. 18.1 and Fig. 18.2. A preliminary observation from these figures is that the two classes overlap to a high degree within each input feature of both data sets.

18.4 EXPERIMENTAL SETTING


As mentioned earlier, this study employs two data sets of slope performance to construct and verify the machine learning models. Data Set 1 and Data Set 2 contain 168 and 109 records, respectively. The numbers of collapsed and stable slopes are 84 and 84 for Data Set 1, and 55 and 54 for Data Set 2. Herein, 90% of each data set is used to train the machine learning models, and the remaining 10% is reserved for the testing phase. Additionally, to alleviate bias in data selection, a ten-fold cross-validation process is employed. Three artificial intelligence methods are utilized to construct slope assessment models in this study: RBFNN, LSSVM, and ELM.
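As an illustration of this protocol, the following sketch runs a stratified ten-fold cross-validation loop (90% train / 10% test per fold) with scikit-learn. The stand-in classifier and its settings are assumptions for illustration only; the chapter's actual models are the Matlab implementations described below.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def cross_validate(X, y, n_splits=10, seed=0):
    """Ten-fold CV: 90% train / 10% test per fold, as in the chapter."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in skf.split(X, y):
        model = SVC(kernel="rbf")          # illustrative stand-in classifier
        model.fit(X[train_idx], y[train_idx])
        accuracies.append(model.score(X[test_idx], y[test_idx]))
    return np.mean(accuracies), np.std(accuracies)
```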

FIGURE 18.1
Data distribution of Data Set 1.

FIGURE 18.2
Data distribution of Data Set 2.

In the case of RBFNN, the parameters M and c_j are determined with the method of Orthogonal Least Squares (OLS) [40,41]. In OLS, each data point is initially treated as a candidate centroid location (c_j); the assessment consists of computing the network error with each data point in the training set taken as the new centroid of the corresponding cluster. The data point that reduces the RBFNN error the most becomes the new centroid [10]. This assessment procedure is repeated until the network error reaches an acceptable value. In our study, the threshold value of the RBFNN training phase is set to 90%; that is, the training process of the RBFNN terminates when the classification accuracy rate reaches 0.9. Our observation is that setting a lower threshold results in under-trained models, whereas setting a higher value causes overfitting. In this study, the RBFNN model is coded in Matlab by the authors. A simplified sketch of the centroid-selection loop is given below.
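The following sketch illustrates the greedy forward selection of centroids described above. It is a simplified variant rather than the full orthogonalized OLS algorithm of [40,41]; the Gaussian width and the 0.5 decision threshold on the least-squares output are illustrative assumptions.

```python
import numpy as np

def greedy_centroid_selection(X, y, width=1.0, target_acc=0.9, max_M=None):
    """Greedy forward selection of RBF centroids (simplified OLS-style loop).

    At each step, try every remaining training point as an additional
    centroid, refit the output weights by least squares, and keep the
    point that improves training accuracy the most. Stop once the
    accuracy threshold (0.9 in the chapter) is reached.
    """
    max_M = max_M or len(X)
    centroids, remaining, best_acc = [], list(range(len(X))), 0.0
    while len(centroids) < max_M and best_acc < target_acc:
        best_idx, best_step_acc = None, best_acc
        for i in remaining:
            C = X[centroids + [i]]
            Z = np.exp(-np.linalg.norm(X[:, None] - C[None], axis=2) ** 2
                       / (2 * width ** 2))
            w, *_ = np.linalg.lstsq(Z, y, rcond=None)   # least-squares weights
            acc = np.mean((Z @ w >= 0.5).astype(int) == y)
            if acc > best_step_acc:
                best_idx, best_step_acc = i, acc
        if best_idx is None:          # no candidate improves the accuracy
            break
        centroids.append(best_idx)
        remaining.remove(best_idx)
        best_acc = best_step_acc
    return X[centroids], best_acc
```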
To construct an LSSVM model, it is necessary to specify the regularization constant (γ) and the kernel function parameter (σ). Previous works [20,42] point out that these two parameters affect the learning performance of LSSVM considerably. Therefore, in this study, a grid search procedure is used to set the values of γ and σ appropriately. The two parameters are allowed to vary within the following set of values: [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000]. It is noted that the training set is further divided into two subsets: Set 1 (90%), used for model construction, and Set 2 (10%), used for computing the fitness of each pair of hyper-parameters. The LSSVM model is implemented via the LS-SVMlab Toolbox [43], and the grid search procedure is coded by the authors; a minimal sketch of such a search follows.
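A minimal version of this grid search might look as follows. It reuses the hypothetical lssvm_train/lssvm_predict functions from the LS-SVM sketch earlier in this chapter, not the LS-SVMlab Toolbox routines.

```python
import itertools
import numpy as np

GRID = [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000]

def grid_search(X1, y1, X2, y2):
    """Exhaustive search over (gamma, sigma) pairs from the chapter's grid.

    X1/y1: Set 1 (model construction); X2/y2: Set 2 (fitness evaluation).
    """
    best = (None, None, -np.inf)
    for gamma, sigma in itertools.product(GRID, GRID):
        b, alpha = lssvm_train(X1, y1, gamma=gamma, sigma=sigma)
        acc = np.mean(lssvm_predict(X2, X1, y1, b, alpha, sigma=sigma) == y2)
        if acc > best[2]:
            best = (gamma, sigma, acc)
    return best   # (best gamma, best sigma, validation accuracy)
```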
In the case of ELM, the sigmoid function is employed as the activation function. The only hyper-parameter to be set for this model is the number of neurons in the hidden layer. Herein, the number of neurons is allowed to vary from the number of input features up to 100. For the purpose of model selection, the training data is also split into two sets (Set 1 and Set 2) in a manner similar to the parameter-setting process used for LSSVM. The number of neurons that maximizes ELM performance on Set 2 is selected for the testing phase. The ELM model is implemented in the Matlab environment with the program codes provided by Huang [44]; a sketch of the neuron-count selection is given below.
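A sketch of this selection loop, reusing the hypothetical elm_train/elm_predict functions from the ELM sketch earlier in this chapter, might look as follows.

```python
import numpy as np

def select_hidden_neurons(X1, D1, X2, y2, q_min, q_max=100):
    """Pick the hidden-layer size q that maximizes Set-2 accuracy.

    D1 is the one-hot target matrix for Set 1; y2 holds the class labels
    of Set 2; q_min is the number of input features, as in the chapter.
    """
    best_q, best_acc = q_min, -np.inf
    for q in range(q_min, q_max + 1):
        W, b, M = elm_train(X1, D1, q=q)
        acc = np.mean(elm_predict(X2, W, b, M) == y2)
        if acc > best_acc:
            best_q, best_acc = q, acc
    return best_q, best_acc
```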
Furthermore, besides the classification accuracy rate (CAR), the following four metrics are used to measure classification performance [15]: the true positive rate TPR (the percentage of positive instances correctly classified), the true negative rate TNR (the percentage of negative instances correctly classified), the false positive rate FPR (the percentage of negative instances misclassified), and the false negative rate FNR (the percentage of positive instances misclassified). These four metrics are computed as follows:

$$TPR = \frac{TP}{TP + FN} \tag{18.16}$$

$$TNR = \frac{TN}{TN + FP} \tag{18.17}$$

$$FPR = \frac{FP}{FP + TN} \tag{18.18}$$

$$FNR = \frac{FN}{TP + FN} \tag{18.19}$$

where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively.
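For instance, these four rates can be computed from predicted and actual labels as in the following sketch, assuming the chapter's label convention of +1 for a collapsed slope (positive) and −1 for a non-collapsed one.

```python
import numpy as np

def classification_rates(y_true, y_pred):
    """Compute TPR, TNR, FPR, FNR per Eqs. (18.16)-(18.19)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))    # collapses caught
    tn = np.sum((y_true == -1) & (y_pred == -1))  # stable slopes caught
    fp = np.sum((y_true == -1) & (y_pred == 1))   # false alarms
    fn = np.sum((y_true == 1) & (y_pred == -1))   # missed collapses
    return {"TPR": tp / (tp + fn), "TNR": tn / (tn + fp),
            "FPR": fp / (fp + tn), "FNR": fn / (tp + fn)}
```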

Table 18.3 Experimental Results

                      Data Set 1                          Data Set 2
Model  Phase  Stat.   CAR    TP     FP    FN     TN       CAR    TP     FP    FN    TN
RBFNN  Train  Mean    85.19  62.00  9.40  12.70  65.30    99.69  48.30  0.00  0.30  47.70
RBFNN  Train  Std.    5.91   8.82   3.34  8.47   3.06     0.50   0.67   0.00  0.48  0.48
RBFNN  Test   Mean    81.00  7.00   1.30  2.30   8.00     85.08  5.40   0.90  1.00  5.40
RBFNN  Test   Std.    10.96  1.76   1.06  2.11   1.15     7.94   0.70   0.88  0.67  0.84
LSSVM  Train  Mean    94.52  70.50  4.00  4.20   70.70    95.23  45.90  1.90  2.70  45.80
LSSVM  Train  Std.    3.50   3.87   2.11  4.02   1.95     3.98   2.77   2.47  2.75  2.57
LSSVM  Test   Mean    87.06  8.30   1.40  1.00   7.90     89.95  6.00   0.90  0.40  5.40
LSSVM  Test   Std.    5.27   1.42   0.84  1.15   0.74     8.13   0.47   1.10  0.52  0.97
ELM    Train  Mean    88.22  66.00  8.90  8.70   65.80    93.37  43.10  0.90  5.50  46.80
ELM    Train  Std.    3.01   2.91   2.56  3.02   2.74     4.00   3.07   0.99  3.34  0.79
ELM    Test   Mean    84.00  7.60   1.30  1.70   8.00     90.49  5.50   0.30  0.90  6.00
ELM    Test   Std.    7.50   1.35   0.95  1.57   1.15     5.08   0.85   0.48  0.57  0.47

18.5 EXPERIMENTAL RESULTS

Experimental results obtained from the ten-fold cross-validation procedure of the three machine learning models on the two data sets are reported in Table 18.3. Notably, to evaluate the predictive capability of each model, it is reasonable to focus on the testing outcomes (the "Test" rows) in Table 18.3. It is noted that the figures presented in this table are the average results obtained from the cross-validation process. Considering Data Set 1, LSSVM is the best method (CAR = 87.06%), followed by ELM (CAR = 84%) and RBFNN (CAR = 81%). In the case of Data Set 2, ELM (CAR = 90.49%) and LSSVM (CAR = 89.95%) are both superior to RBFNN (CAR = 85.08%); ELM is slightly better than LSSVM.

In addition, the average true positive, true negative, false positive, and false negative rates of all prediction models (RBFNN, LSSVM, and ELM) for the two data sets are shown in Figs. 18.3 and 18.4, respectively. On Data Set 1, LSSVM achieves the highest TPR and TNR (both 0.89); ELM attains the second-best TPR and TNR (both 0.82); RBFNN has the worst TPR (0.75) and TNR (0.78). On Data Set 2, the TPR (0.94) and TNR (0.93) obtained by LSSVM are also the most desirable; with TPR = 0.86 and TNR = 0.87, ELM is ranked after LSSVM, followed by RBFNN (TPR and TNR both 0.84). Moreover, it is noted that in slope assessment, false negative cases (collapsed slopes wrongly classified as safe slopes) are considered more dangerous. From this point of view, LSSVM appears more desirable since its FNR is the lowest on both data sets (0.11 for Data Set 1 and 0.06 for Data Set 2). Meanwhile, the FNRs of ELM are slightly higher than those of LSSVM: 0.18 for Data Set 1 and 0.14 for Data Set 2.

FIGURE 18.3
True positive, true negative, false positive, false negative rates for Data Set 1.

FIGURE 18.4
True positive, true negative, false positive, false negative rates for Data Set 2.

18.6 CONCLUSION

This chapter has investigated the capabilities of RBFNN, LSSVM, and ELM in slope stability assessment with two historical data sets. To evaluate each model's performance accurately, a ten-fold cross-validation process has been carried out. The experimental results demonstrate that LSSVM and ELM are superior methods for tackling the problem at hand; the performance of RBFNN is significantly worse than that of the other two models. Considering the CAR, LSSVM is the best method for Data Set 1, and ELM is the most desirable method for Data Set 2. Nevertheless, when taking the FNR into account, LSSVM appears to be the most suitable method for slope evaluation, since it yields the lowest FNR on both data sets. Overall, LSSVM and ELM are highly recommended as intelligent tools to assist the decision-making process in slope assessment. Further directions of the current study may include: (1) investigating other advanced machine learning approaches for slope evaluation, (2) combining feature selection techniques with LSSVM and ELM, and (3) enhancing prediction accuracy with ensemble and boosting methods.

REFERENCES
[1] H.B. Wang, W.Y. Xu, R.C. Xu, Slope stability evaluation using Back Propagation Neural Networks, Eng. Geol. 80 (2005)
302–315.
[2] M.-Y. Cheng, N.-D. Hoang, Slope collapse prediction using Bayesian framework with K-Nearest Neighbor density estimation: case study in Taiwan, J. Comput. Civ. Eng. 30 (2016) 04014116.
[3] H.-M. Lin, S.-K. Chang, J.-H. Wu, C.H. Juang, Neural network-based model for assessing failure potential of highway
slopes in the Alishan, Taiwan Area: pre- and post-earthquake investigation, Eng. Geol. 104 (2009) 280–289.
[4] A. Manouchehrian, J. Gholamnejad, M. Sharifzadeh, Development of a model for analysis of slope stability for circular
mode failure using genetic algorithm, Environ. Earth Sci. 71 (2014) 1267–1277.
[5] N.-D. Hoang, D. Tien-Bui, A novel relevance vector machine classifier with cuckoo search optimization for spatial prediction of landslides, J. Comput. Civ. Eng. 30 (2016) 04016001.
[6] C.-I. Wu, H.-Y. Kung, C.-H. Chen, L.-C. Kuo, An intelligent slope disaster prediction and monitoring system based on
WSN and ANP, Expert Syst. Appl. 41 (2014) 4554–4562.
[7] E. Salmi, S. Hosseinzadeh, Slope stability assessment using both empirical and numerical methods: a case study, Bull. Eng.
Geol. Environ. 74 (2015) 13–25.
[8] P. Luciano, L. Serge, Assessment of slope stability, in: Geotechnical Engineering State of the Art and Practice, 2012,
pp. 122–156.
[9] H. Zhao, S. Yin, Z. Ru, Relevance vector machine applied to slope stability analysis, Int. J. Numer. Anal. Meth. Geomech.
36 (2012) 643–652.
[10] K.-W. Liao, J.-C. Fan, C.-L. Huang, An artificial neural network for groutability prediction of permeation grouting with
microfine cement grouts, Comput. Geotech. 38 (2011) 978–986.
[11] D. Tien Bui, D.T. Quach, V.H. Pham, I. Revhaug, V.L. Ngo, T.H. Tran, et al., Spatial prediction of landslide hazard along
the National Road 32 of Vietnam: a comparison between Support Vector Machines, Radial Basis Function neural networks,
and their ensemble, in: Proceedings of the Thematic Session, 49th CCOP Annual Session, 22–23 October 2013, Sendai,
Japan, 2013.
[12] P. Samui, J. Karthikeyan, Determination of liquefaction susceptibility of soil: a least square support vector machine approach, Int. J. Numer. Anal. Meth. Geomech. 37 (2013) 1154–1161.
[13] P. Samui, D.P. Kothari, Utilization of a least square support vector machine (LSSVM) for slope stability analysis, Sci. Iran.
18 (2011) 53–58.
[14] M.-Y. Cheng, N.-D. Hoang, Groutability prediction of microfine cement based soil improvement using evolutionary LS-SVM inference model, J. Civ. Eng. Manag. 20 (2014) 1–10.
[15] N.-D. Hoang, D. Tien Bui, Predicting earthquake-induced soil liquefaction based on a hybridization of kernel Fisher discriminant analysis and a least squares support vector machine: a multi-dataset study, Bull. Eng. Geol. Environ. (2016) 1–14.
[16] G. Huang, G.-B. Huang, S. Song, K. You, Trends in extreme learning machines: a review, Neural Netw. 61 (2015) 32–48.
[17] D. Avci, A. Doğantekin, An expert diagnosis system for Parkinson disease based on genetic wavelet kernel extreme learning
machine, Parkinson’s Dis. 2016 (2016) 5264743.
[18] P. Samui, J. Jagan, R. Hariharan, An alternative method for determination of liquefaction susceptibility of soil, Geotech.
Geol. Eng. 34 (2016) 735–738.
[19] O. Anicic, S. Jović, H. Skrijelj, B. Nedić, Prediction of laser cutting heat affected zone by extreme learning machine, Opt.
Laser Eng. 88 (2017) 1–4.

[20] N.-D. Hoang, A.-D. Pham, Hybrid artificial intelligence approach based on metaheuristic and machine learning for slope
stability assessment: a multinational data analysis, Expert Syst. Appl. 46 (2016) 60–68.
[21] F. Kang, J. Li, Artificial bee colony algorithm optimized support vector regression for system reliability analysis of slopes,
J. Comput. Civ. Eng. 30 (2015) 04015040.
[22] P. Lu, M.S. Rosenbaum, Artificial neural networks and grey systems for the prediction of slope stability, Nat. Hazards 30
(2003) 383–398.
[23] K.-p. Zhou, Z.-Q. Chen, Stability prediction of tailing dam slope based on neural network pattern recognition, in: Proc. of
the Second International Conference on Environmental and Computer Science, ICECS ’09, 28–30 Dec. 2009, Dubai, the
United Arab Emirates, 2009, pp. 380–383.
[24] J.-P. Jiang, BP neural networks for prediction of the factor of safety of slope stability, in: Proc. of the International Conference on Computing, Control and Industrial Engineering, CCIE, 20–21 Aug. 2011, Wuhan, China, 2011.
[25] S.K. Das, R. Biswal, N. Sivakugan, B. Das, Classification of slopes and prediction of factor of safety using differential evolution neural networks, Environ. Earth Sci. 64 (2011) 201–210.
[26] P. Samui, Slope stability analysis: a support vector machine approach, Environ. Geol. 56 (2008) 255–267.
[27] J. Li, F. Wang, Study on the forecasting models of slope stability under data mining, in: Proc. of the Earth and Space
2012: Engineering, Science, Construction, and Operations in Challenging Environments, Honolulu, Hawaii, United States,
ASCE, 2010, pp. 765–776.
[28] J. Li, M. Dong, Method to predict slope safety factor using SVM, in: Proc. of the Earth and Space 2012: Engineering,
Science, Construction, and Operations in Challenging Environments, Pasadena, California, United States, ASCE, 2012,
pp. 888–899.
[29] D. Tien Bui, B. Pradhan, O. Lofman, I. Revhaug, Landslide susceptibility assessment in Vietnam using support vector
machines, decision tree, and naive Bayes models, Math. Probl. Eng. 2012 (2012) 26.
[30] M.-Y. Cheng, N.-D. Hoang, Typhoon-induced slope collapse assessment using a novel bee colony optimized support vector
classifier, Nat. Hazards 78 (2015) 1961–1978.
[31] A. Ahangar-Asr, A. Faramarzi, A.A. Javadi, A new approach for prediction of the stability of soil and rock slopes, Eng.
Comput. 27 (2010) 878–893.
[32] J. Ching, H.-J. Liao, J.-Y. Lee, Predicting rainfall-induced landslide potential along a mountain road in Taiwan, Geotechnique 61 (2011) 153–166.
[33] X. Yan, X. Li, Bayes discriminant analysis method for predicting the stability of open pit slope, in: Proc. of the International
Conference on Electric Technology and Civil Engineering, ICETCE, 22–24 April 2011, Lushan, China, 2011, pp. 147–150.
[34] M.-Y. Cheng, N.-D. Hoang, A Swarm-Optimized Fuzzy Instance-based Learning approach for predicting slope collapses
in mountain roads, Knowl.-Based Syst. 76 (2015) 256–263.
[35] S. Chen, C.F.N. Cowan, P.M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE
Trans. Neural Netw. 2 (1991) 302–309.
[36] J.A.K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, J. Vandewalle, Least Squares Support Vector Machines, World Scientific Publishing Co. Pte. Ltd., Singapore, 2002.
[37] G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (2006)
489–501.
[38] G. Li, P. Niu, Y. Ma, H. Wang, W. Zhang, Tuning extreme learning machine by an improved artificial bee colony to model
and optimize the boiler efficiency, Knowl.-Based Syst. 67 (2014) 278–289.
[39] A.S.C. Alencar, A.R. Rocha Neto, J.P.P. Gomes, A new pruning method for extreme learning machines via genetic algorithms, Appl. Soft Comput. 44 (2016) 101–107.
[40] F.M. Ham, I. Kostanic, Principles of Neurocomputing for Science and Engineering, McGraw-Hill, New York, United States,
2001.
[41] V. Kecman, Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models, The MIT
Press, Cambridge, 2001.
[42] G.S. Dos Santos, L.G.J. Luvizotto, V.C. Mariani, L. dos Santos Coelho, Least squares support vector machines with tuning
based on chaotic differential evolution approach applied to the identification of a thermal process, Expert Syst. Appl. 39
(2012) 4805–4812.
[43] K. De Brabanter, P. Karsmakers, F. Ojeda, C. Alzate, J. De Brabanter, K. Pelckmans, et al., LS-SVMlab Toolbox User’s
Guide Version 1.8, Internal Report 10-146, ESAT-SISTA, K.U. Leuven, Leuven, Belgium, 2010.
[44] G.-B. Huang, Basic ELM algorithms, http://www.ntu.edu.sg/home/egbhuang/elm_codes.html, 2016.
