Abstract
In the evaluation and prediction of slope stability, traditional numerical analysis methods rely heavily on experience, require a large amount of computing time, and cannot adequately reflect the fuzzy and nonlinear characteristics of slope parameters. Considering these characteristics, this study proposes an improved particle swarm optimization of support vector machine (IPSO-SVM) algorithm model, which combines improved particle swarm optimization (IPSO) and a support vector machine (SVM), and applies it to slope stability prediction. Based on 28 groups of slope engineering data, the stability predictions of the IPSO-SVM, PSO-SVM, and SVM models were compared with the real values. The results show that the maximum relative error of the IPSO-SVM model is only 1.3% and the average relative error is 1.1%, far lower than the prediction errors of the PSO-SVM and SVM models; the IPSO-SVM predictions are therefore the closest to the real values. This method can accurately predict the slope safety factor under the influence of different indexes, and the research results can provide guidance for practical engineering.
1. Introduction
As a common natural disaster, slope instability causes huge economic losses every year. To prevent or mitigate slope disasters, timely and accurate prediction of slope stability is of great significance. In recent years, Chinese and foreign scholars have continuously established new research models for slope stability evaluation by combining interdisciplinary knowledge, effectively promoting the development of slope stability evaluation research [1–7]. At present, the evaluation and prediction methods for slope stability include the rigid limit equilibrium calculation method and the elastic–plastic theoretical calculation method. However, because many factors affect slope stability, actual slope engineering is often governed by nonlinear factors, which makes it difficult for these conventional analysis methods to meet the application requirements for evaluating and predicting slope stability. With the continuous development of computer science and technology in recent years, machine learning prediction methods built around nonlinear characteristics have attracted increasing attention owing to their small error, wide applicability, and fast operation speed, providing a new approach to slope stability prediction research [8, 9].
Moayedi et al. investigated the applicability of combining machine learning-based models in slope stability assessment [10]. Zhou et al. presented a novel prediction method that utilizes the gradient boosting machine (GBM) method to analyze slope stability [11]. Cho et al. developed predictive models for seismic slope displacement by using the artificial neural network (ANN) approach and compared the resulting ANN model with a classical regression model derived from the same data set [12]. Zhang et al. presented a hybrid model of support vector regression (SVR) and a teaching-learning-based optimization technique (TLBO) to predict reservoir bank slope stability [13]. Meng et al. used an artificial neural network to predict three-dimensional slope stability [14]. Based on the finite element fraction and field measured data, Kardani et al. proposed a hybrid stacking integration method to enhance the predictive accuracy of slope stability [15]. Foong and Moayedi suggested the use of equilibrium optimization (EO) and a vortex search algorithm (VSA) for optimizing a multilayer perceptron neural network (MLPNN) employed to anticipate the factor influencing the safety of a single-layer soil slope [16]. Ling et al. researched slope reliability evaluation based on a multiobjective gray wolf optimization-based extreme learning machine agent model [17]. Pham et al. applied parallel learning and sequential learning to implement ensemble classifier models for slope stability analysis [18]. Liao et al. explored the use of multivariate adaptive regression splines (MARS) to capture the intrinsic nonlinear and multidimensional relationship among the parameters that are associated with the evaluation of slope stability [19]. Yuan and Moayedi assessed the superiority of the metaheuristic evolutionary when compared to the conventional machine learning classification techniques for landslide occurrence estimation [20]. Huang et al. 
proposed an improved slope stability prediction model based on a KNN neural network [21]. Deng et al. investigated a new regularized online sequential extreme learning machine incorporated with the variable forgetting factor (FOS-ELM) based on intelligence computation to predict the factor influencing the rock slope’s safety [22]. Hu et al. proposed a new response surface method (RSM) for slope reliability analysis based on Gaussian process (GP) machine learning technology [23]. Qi and Tang proposed and compared six integrated artificial intelligence (AI) approaches for slope stability prediction based on metaheuristic and ML algorithms [24]. Lin et al. predicted slope stability based on four supervised learning algorithms [25]. Xu et al. proposed a sensitivity analysis method for slope stability based on the least squares support vector machine (LSSVM) to examine the factors that influence slope stability [26]. Kang et al. presented an intelligent response surface method to evaluate the system failure probability of soil slopes based on least squares support vector machines (LSSVM) and particle swarm optimization [27]. Zhang et al. presented the adaptive relevance vector machine (ARVM) for stability inference of soil slopes [28]. Liu et al. presented an approach of a fast robust neural network, named the extreme learning machine (ELM) in slope stability evaluation and prediction [29]. Cheng et al. proposed a Swarm-Optimized Fuzzy Instance-based Learning (SOFIL) model for predicting slope collapses [30].
The support vector machine (SVM) is an intelligent discriminant prediction method that has developed rapidly in recent years. It has excellent learning ability and is suitable for problems involving small samples, nonlinearity, high dimensions, and local minima [31–41]. However, the SVM also has certain shortcomings; for example, its penalty parameter and kernel function parameter strongly influence prediction accuracy, and different parameter values yield different prediction accuracies. Based on the support vector machine (SVM) model and the particle swarm optimization (PSO) model, the improved particle swarm optimization of support vector machine (IPSO-SVM) model is established by introducing a nonlinear weight method, which effectively overcomes the tendency of PSO to fall into local optima and improves the prediction accuracy and learning ability of the support vector machine algorithm. Then, 28 groups of slope data from practical engineering are used to establish a slope stability data set under the influence of multiple indexes. Finally, to demonstrate the superiority of the IPSO-SVM model, the prediction accuracies of the IPSO-SVM, PSO-SVM, and SVM models are compared on the same slope stability data set. The machine learning method established in this study can provide guidance for stability prediction methods used in slope engineering.
2. Slope Stability Analysis Model Based on IPSO-SVM
2.1. Support Vector Machine
The SVM model is a general learning method developed on the basis of statistical learning theory and the principle of structural risk minimization. For pattern recognition problems with small samples, nonlinearity, and high dimensions, the SVM shows strong generalization ability.
The sample set is {(x_i, y_i), i = 1, 2, …, n}, where x_i ∈ R^d is the input vector and y_i ∈ {−1, +1} is the class label. The optimal classification hyperplane (w ⋅ x) + b = 0 is obtained by nonlinear mapping. It is required not only to correctly separate the samples but also to maximize the classification margin. The principle is shown in Figure 1.

Solving for the optimal classification hyperplane can be transformed into solving the following optimization problem:

\[
\min_{w,\,b,\,\xi}\; \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\xi_{i}, \quad
\text{s.t.}\;\; y_{i}\left[(w \cdot x_{i}) + b\right] \ge 1 - \xi_{i},\;\; \xi_{i} \ge 0,\;\; i = 1, 2, \ldots, n. \tag{1}
\]

In Formula (1), w is the normal vector of the hyperplane, b is the bias, and C is the penalty parameter, which achieves a compromise between maximizing the classification margin and minimizing the number of misclassified samples and is one of the important parameters affecting the classification performance of the SVM. ξ_i is the relaxation (slack) variable, approximately representing the degree of misclassification. To solve this convex quadratic programming problem, the Lagrange function is introduced, and the original optimization problem is transformed into its dual form:

\[
\max_{\alpha}\; \sum_{i=1}^{n}\alpha_{i} - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j} y_{i} y_{j} K(x_{i}, x_{j}), \quad
\text{s.t.}\;\; \sum_{i=1}^{n}\alpha_{i} y_{i} = 0,\;\; 0 \le \alpha_{i} \le C. \tag{2}
\]
In Formula (2), α_i is the Lagrange multiplier and K(x_i, x_j) is the kernel function. Commonly used kernel functions include the polynomial kernel function, the Gaussian radial basis function, and the multilayer perceptron kernel function.
The kernel function can be expressed as follows:

\[
K(x_{i}, x_{j}) = \varphi(x_{i}) \cdot \varphi(x_{j}). \tag{3}
\]

In Formula (3), φ(x) is the nonlinear mapping from the sample space to a higher-dimensional feature space. According to Formulas (2) and (3), the nonlinear support vector machine decision function is expressed as

\[
f(x) = \operatorname{sgn}\left(\sum_{i=1}^{n}\alpha_{i} y_{i} K(x_{i}, x) + b\right). \tag{4}
\]
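As an illustration only (not the authors' code), the sketch below implements the Gaussian radial basis kernel and a decision function of the form in Formula (4); the support vectors, multipliers, bias, and the gamma value are placeholder assumptions, since in practice they come from solving the dual problem (2):

```python
import math

def rbf_kernel(x1, x2, gamma=0.5):
    # Gaussian radial basis kernel: K(x1, x2) = exp(-gamma * ||x1 - x2||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * sq_dist)

def svm_decision(x, support_vectors, alphas, labels, b=0.0, gamma=0.5):
    # Decision function as in Formula (4): sgn(sum_i alpha_i * y_i * K(x_i, x) + b)
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return 1 if s + b >= 0 else -1
```

A test point close to a positive-class support vector is assigned the positive label, because the RBF kernel weights nearby support vectors most heavily.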
2.2. Particle Swarm Optimization (PSO)
Particle swarm optimization (PSO) is a swarm intelligence optimization algorithm that uses cooperation between particles to share information across the whole swarm and find the optimal solution in the search space. The PSO algorithm first initializes a group of particles in the solution space and evaluates each particle's fitness with the fitness function. By comparing each particle's current fitness with its historical best, the individual extremum is selected; by comparing the individual extrema across the swarm, the group extremum is selected. Then, by continually adjusting the positions and velocities of the particles, the individual extrema and the group extremum are continually updated. Finally, the calculation terminates when the optimal solution is obtained or the maximum number of iterations is reached.
Suppose that in a D-dimensional search space there is a swarm of m particles. The position and velocity of particle i are X_i = (x_{i1}, x_{i2}, …, x_{iD}) and V_i = (v_{i1}, v_{i2}, …, v_{iD}), respectively, and the best positions found so far by particle i and by the whole population are P_i = (p_{i1}, …, p_{iD}) and P_g = (p_{g1}, …, p_{gD}), respectively. In the iterative process of particle swarm optimization, the velocity and position of particle i are updated as follows:

\[
v_{id}^{t+1} = \omega v_{id}^{t} + c_{1} r_{1}\left(p_{id} - x_{id}^{t}\right) + c_{2} r_{2}\left(p_{gd} - x_{id}^{t}\right), \tag{5}
\]
\[
x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}. \tag{6}
\]

In Formulas (5) and (6), ω is the inertia factor, t is the current iteration number, c₁ and c₂ are learning factors, p_{id} is the individual extremum, p_{gd} is the population extremum, and r₁ and r₂ are random numbers between (0,1).
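Formulas (5) and (6) translate directly into code. The minimal sketch below (coefficient values are illustrative assumptions, not from the paper) performs one update step for a single particle:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    # One PSO update per Formulas (5) and (6):
    #   v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x' = x + v'
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = rng(), rng()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Passing a deterministic `rng` makes the update reproducible, which is convenient when testing swarm code.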
2.3. Improved Particle Swarm Optimization (IPSO)
In the elementary particle swarm optimization algorithm (PSO), when the inertia factor is small, the local search ability of the algorithm is improved, but its ability to search new regions is weak. During optimization, particles approach the lowest point step by step, resulting in slow convergence. When the inertia factor is large, the global search ability of the algorithm is improved, but the local search ability is weak, which may cause particles to miss the lowest point and fail to converge.
To address this shortcoming of the inertia factor in the elementary particle swarm, it is necessary to change the inertia weight factor dynamically to balance the global and local search capabilities. A nonlinear inertia weight is introduced here, defined as the product of an exponential function and a random number, in the form of Formulas (7) and (8).

In Formulas (7) and (8), rand is a uniformly distributed random number within [0,1], t is the current iteration number, and t_max is the maximum iteration number. ω_max and ω_min are the maximum and minimum inertia factors, respectively; ω_max = 0.9 and ω_min = 0.4 are generally used, and the maximum number of iterations is 50–100. Numerical analysis of the formula shows that at the initial stage of the search, the probability of ω taking a larger value is higher, which enhances the global search ability, while at the later stage, the probability of taking a smaller value is higher, which enhances the local search ability. Compared with PSO, the nonlinear inertia factor of IPSO effectively avoids premature convergence, and the exponential function gives a faster convergence rate.
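As one plausible instantiation of the description above (the exact Formulas (7) and (8) are not reproduced here, and the decay constant `k` is an assumption), a nonlinear inertia weight combining exponential decay with a uniform random factor could look like:

```python
import math
import random

def nonlinear_weight(t, t_max, w_max=0.9, w_min=0.4, k=4.0, rng=random.random):
    # Hypothetical nonlinear inertia weight: omega decays exponentially from
    # w_max toward w_min as t grows, scaled by a random number in [0, 1],
    # so large values are more probable early and small values late.
    return w_min + (w_max - w_min) * math.exp(-k * t / t_max) * rng()
```

With the random factor fixed at 1, the weight starts at ω_max and decays toward ω_min, matching the global-then-local search behaviour described in the text.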
The steps of improved particle swarm optimization (IPSO) are as follows:
(1) Initialize the particle swarm. Initialize the position and velocity of each particle with random values between 0 and 1, and set the maximum and minimum inertia weights ω_max and ω_min and the learning factors c₁ and c₂.
(2) Calculate the fitness value of each particle through the objective function and evaluate each particle. Store the current position and fitness value of each particle in its corresponding individual extremum P_i, and store the optimal position and fitness value among all particles in the population extremum P_g.
(3) Update each particle's velocity, position, and inertia factor according to Formulas (5)–(8).
(4) Compare the fitness value of each particle with the fitness value corresponding to the population extremum P_g. If a particle performs better, update P_g.
(5) When the maximum number of iterations is reached or the fitness value meets the accuracy requirement, terminate the algorithm; otherwise, return to step (3) and continue updating the velocities and positions of the particles.
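The steps above can be sketched as a compact, self-contained loop. This is an illustrative minimization of a toy objective, not the paper's implementation: the search bounds, swarm size, decay constant in the weight, and learning factors are all assumed values.

```python
import math
import random

def ipso_minimize(f, dim, n_particles=20, t_max=60, w_max=0.9, w_min=0.4,
                  c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    # Sketch of the IPSO loop of Section 2.3, minimizing f over [lo, hi]^dim.
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # individual extrema P_i
    pval = [f(x) for x in xs]
    best = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[best][:], pval[best]   # population extremum P_g
    for t in range(t_max):
        # nonlinear inertia weight: exponential decay times a random number
        w = w_min + (w_max - w_min) * math.exp(-4.0 * t / t_max) * rnd.random()
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

On a simple convex objective such as the sphere function, the swarm contracts onto the minimum well before the iteration budget is spent.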
2.4. Improved Particle Swarm Optimization of Support Vector Machine (IPSO-SVM)
There are two important parameters in the support vector machine (SVM): the penalty parameter and the kernel function parameter. Different combinations of parameter values have a great influence on the performance of the SVM model. However, SVM parameters are difficult to determine from experience or calculation formulas, so the IPSO algorithm is used to optimize the selection of the penalty parameter and the kernel function parameter in the support vector machine algorithm. This avoids the disadvantages of the traditional grid search method, such as its large computational cost and the difficulty of precisely locating the search area, improves the search speed, and intelligently refines the search area and accuracy. The steps of the IPSO-SVM model are shown in Figure 2.
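To illustrate the encoding idea only (not the authors' implementation), the sketch below treats each particle as a candidate (C, g) pair and minimizes a stand-in fitness surface; in the actual IPSO-SVM model, the fitness would be the cross-validated error of an SVM trained with those parameters, and the update would use the nonlinear inertia weight of Section 2.3. The bounds, coefficients, and the toy fitness minimum at (10, 0.5) are all assumptions for demonstration.

```python
import random

def cv_mse(C, g):
    # Hypothetical stand-in for the real fitness: training an SVM with the
    # candidate (C, g) and returning its cross-validated MSE.  This toy bowl
    # has its minimum at C = 10, g = 0.5 purely for demonstration.
    return (C - 10.0) ** 2 / 100.0 + (g - 0.5) ** 2

def tune_svm_params(fitness=cv_mse, n_particles=15, t_max=40, seed=1,
                    bounds=((0.1, 100.0), (0.01, 10.0)),
                    w=0.7, c1=1.5, c2=1.5):
    # Each particle encodes a candidate (C, g); plain PSO update for brevity.
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [fitness(*x) for x in xs]
    best = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[best][:], pval[best]
    for _ in range(t_max):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rnd.random(), rnd.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = fitness(*xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

Swapping `cv_mse` for a real cross-validation routine turns this sketch into the parameter-selection stage of the IPSO-SVM pipeline.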

After the prediction, the mean square error (MSE) is used as the model evaluation criterion to test the accuracy of the model. The MSE is the mean of the squared differences between the predicted values and the actual values. It is often used to measure a prediction model's accuracy and to evaluate the degree of variation in the data: the smaller the MSE, the better the model fits the experimental data. The calculation formula is as follows:

\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i} - y_{i}\right)^{2}. \tag{9}
\]

In Formula (9), ŷ_i is the predicted value of sample i, y_i is the true value of sample i, and n is the sample size.
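Formula (9) amounts to a one-line computation:

```python
def mse(pred, true):
    # Formula (9): mean of the squared differences between
    # predicted and true values.
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
```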
3. Case Analysis
3.1. Data Sources and Data Processing
To ensure the reliability of the database, the sample data should meet the following standard: all slope failure deformation conforms to the basic deformation law. In this study, 28 groups of slope data from the literature [42, 43] are taken as samples. The sample data are divided into two parts: the first 22 groups are used as training samples for training the model, and the last 6 groups are used as test samples to test the feasibility of the IPSO-SVM model constructed in this study. The bulk density γ, cohesive force c, angle of internal friction φ, slope angle β, slope height H, and pore water pressure ratio r_u are taken as the input, and the slope safety factor F is taken as the output. The data set is shown in Table 1.
The problem of slope stability is essentially a multivariate nonlinear regression problem. To eliminate the differences in order of magnitude among the factors and to prevent features with small values from being swamped, the samples need to be normalized before training so that all data fall into the interval (0,1). This makes features of different dimensions numerically comparable and reduces the training difficulty of the model:

\[
x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}. \tag{10}
\]

In Formula (10), x and x' are the values before and after normalization, respectively, and x_max and x_min are the maximum and minimum values of the original data set, respectively.
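Formula (10) is applied per feature column, for example:

```python
def min_max_normalize(column):
    # Formula (10): x' = (x - x_min) / (x_max - x_min),
    # mapping the column onto [0, 1].
    x_min, x_max = min(column), max(column)
    return [(x - x_min) / (x_max - x_min) for x in column]
```

In practice, x_min and x_max would be taken from the training set only and reused for the test samples, so that the test data are scaled consistently.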
3.2. Performance with respect to Data Availability
To verify the performance of the IPSO-SVM slope stability prediction model proposed in this paper, the SVM and PSO-SVM algorithms were introduced as reference models for comparison. Using the same data set, after parameter tuning, the prediction results of each model were output for analysis and comparison. Figure 3 compares the MSE of the different models. As the figure shows, the fit between predicted and measured values differs among the models, and the prediction accuracy decreases in the order IPSO-SVM, PSO-SVM, SVM. The accuracy of the IPSO-SVM model is significantly higher than that of the other models, with an MSE of only 0.0412. The PSO-SVM model follows, with an MSE of 0.2014, approximately 4.9 times that of the IPSO-SVM model. The MSE of the SVM model is 0.3985, approximately 9.7 times that of the IPSO-SVM model.

To further understand the prediction ability of each model on slope stability, the prediction results of the IPSO-SVM, PSO-SVM, and SVM models were compared with the actual slope safety factor F, and then the error analysis was conducted. The detailed data are shown in Table 2, and the error fluctuation range is shown in Figure 4.

As seen from Table 2 and Figure 4, the SVM prediction results have the largest error, with an average relative error of 18.3% and a maximum relative error of 23.4%; moreover, the error fluctuation range is wide, so its predictions are inaccurate and unstable. Compared with the SVM model, the PSO-SVM model shows a certain improvement, and its error is significantly lower, with an average relative error of 5.2% and a maximum relative error of 9.3%. The IPSO-SVM model performs best among the three: the average relative error of its predictions is 1.1%, and the maximum relative error is only 1.3%. Its prediction accuracy is 4.2 times higher than that of the PSO-SVM model and 17.2 times higher than that of the SVM model, and the error fluctuation range is small, floating at only approximately 1%.
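For reference, the maximum and average relative errors reported in Table 2 can be computed as follows (the values in the usage note are illustrative, not the paper's data):

```python
def relative_errors(pred, true):
    # Per-sample relative error |pred - true| / true, returned as fractions;
    # multiply by 100 for the percentages quoted in Table 2.
    errs = [abs(p - t) / t for p, t in zip(pred, true)]
    return max(errs), sum(errs) / len(errs)
```

For example, predictions [1.04, 0.95, 2.0] against true safety factors [1.0, 1.0, 2.0] give a maximum relative error of 5% and an average of 3%.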
To more intuitively compare the prediction results of the three models, the predicted values of the IPSO-SVM model, PSO-SVM model, and SVM model were compared with the real values at the same time, as shown in Figure 5.

Figure 5 shows that the predicted values of the SVM and PSO-SVM models deviate considerably from the actual values overall: the SVM model deviates significantly for samples 23 and 25, and the PSO-SVM model deviates significantly for samples 24 and 25. The IPSO-SVM model shows a better fit, and the predicted and actual values at each sample point basically coincide.
4. Conclusion
This paper takes 28 groups of slope data as research samples, with the bulk density γ, cohesive force c, angle of internal friction φ, slope angle β, slope height H, and pore water pressure ratio r_u as the input and the slope safety factor F as the output. By introducing the IPSO model with the nonlinear weight method, the penalty parameter and kernel function parameter of the SVM model are optimized to improve the prediction accuracy, and an IPSO-SVM model for slope stability prediction is proposed. The feasibility of the proposed method is then verified by comparison with the traditional SVM and PSO-SVM models on the same data set. The IPSO-SVM model has excellent learning ability and prediction accuracy, providing a new research idea for slope deformation prediction under the influence of multiple factors. The main conclusions are as follows:
(1) The IPSO-SVM model proposed in this paper can accurately predict slope stability under the influence of multiple input variables. Its MSE is significantly lower than that of the PSO-SVM and SVM models: on the test data set, its MSE performance is 3.9 times better than that of the PSO-SVM model and 8.7 times better than that of the SVM model.
(2) The analysis of the prediction results of the three models shows that the IPSO-SVM model has the minimum error in slope stability prediction, with an average relative error of 1.1% and a maximum relative error of only 1.3%, significantly better than the PSO-SVM and SVM models; the error dispersion is small, floating at only approximately 1%, which verifies the effectiveness and superiority of the method.
Data Availability
The data reported in this article are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors’ Contributions
Yu Wang is responsible for the algorithm and figures. Erxia Du is responsible for writing the article. Sanqiang Yang and Li Yu are in charge of scientific research funding and of polishing and translating the manuscript. All authors approved the manuscript for publication.
Acknowledgments
This study was sponsored by the Science and Technology Research Project of Higher Education Institutions of Hebei Province (BJ2018046) and Project of Hydrogeology and Environmental Geology Survey Center of China Geological Survey (DD20190228).