Abstract

This paper presents a settlement prediction method based on a PSO-optimized SVM to improve the accuracy of foundation pit settlement prediction. Firstly, the method uses the SA algorithm to improve the traditional PSO algorithm, thereby strengthening the overall optimization-seeking ability of the PSO algorithm. Secondly, the improved PSO algorithm is used to optimize the parameters of the SVM model. Finally, the optimal SVM model is obtained, and the trained model is applied to foundation pit settlement prediction. The results suggest that the settlement predictions obtained from the optimized model are closer to the actual values and perform better on indicators such as RMSE. The goodness of fit R² = 0.9641 indicates a strong fitting effect. Thus, the improvement method is shown to be feasible.

1. Introduction

In recent years, with the continuous improvement of China’s economic level, technology, and urbanization, a variety of high-rise buildings have been erected. During construction, foundation pit work is an essential part that directly affects the quality of the whole building. If foundation pit settlement is not predicted and monitored in time, the building is at very high risk of tilting or collapsing, causing serious safety accidents and endangering people’s lives and property. Therefore, using current information technology to predict and monitor changes in foundation settlement in real time is an inevitable choice in the field of building construction, as it greatly improves the service life and stability of buildings. At present, the methods for predicting foundation settlement have shifted from traditional manual prediction to prediction by various machine learning and neural network algorithms, among which the BP neural network, CNN, and Gray theory are the most widely applied. These algorithms have achieved good results in fields such as image classification and recognition and fault diagnosis and prediction. With the wide application and popularization of machine learning, more and more intelligent algorithms, such as the particle swarm optimization (PSO) algorithm and the support vector machine (SVM), have been proposed and applied to subsidence and deformation monitoring. In terms of specific research, Yan Lv et al. proposed and constructed a settlement prediction model by combining Gray theory and the BP neural network; the experimental results indicate that the prediction accuracy of the model stands at 75%, so it can be applied in foundation pit settlement engineering and has a certain reference value [1]. Zhang et al. collected a large amount of foundation settlement data from several projects and summarized 17 main factors affecting ground settlement, which provided a strong database for settlement prediction [2]. For genetic optimization of the extreme learning machine, Yang proposed three different activation functions for the extreme learning machine (ELM) model based on a genetic algorithm; the test results show that the constructed Ga-ELMsin model has high prediction accuracy, and the computational accuracy of the ELM model can be effectively improved with the addition of a genetic algorithm [3]. Wei Jiameng et al. proposed applying the Newton–Cotes quadrature formula to nonisometric GM(1, 1) modeling and used it for the prediction and real-time monitoring of building settlement and deformation; the analysis reveals that the proposed method can monitor and analyze building settlement changes, the fit is improved by about 30%, and the prediction accuracy of the new model is significantly better than that of the traditional model [4]. Shi-fan et al. proposed a GWO-ELM model for training and predicting ground subsidence; the optimized GWO-ELM model has significantly improved prediction ability and a better prediction effect [5]. Zhan et al. proposed an Elman-network-based surface settlement prediction method to predict the surface settlement of deep foundation pits in marine lots and then corrected the predicted values with a Markov chain model, further improving the prediction accuracy for deep foundation pits in marine areas; they also found through practice that this method has a good denoising effect and is practical [6]. Liu et al. constructed a tunnel settlement prediction model, represented by Zhengzhou, based on the currently available monitoring data. With this model, the Zhengzhou tunnel can be monitored and predicted in real time, and the specific location and orientation of settlement can be discovered in time, so that timely maintenance can be carried out to ensure the normal operation of the subway [7].

The above studies show that combining machine learning and neural networks for settlement prediction has become the current mainstream approach. In this study, two typical algorithms, PSO and SVM, are combined to predict the settlement of foundation pits.

2. Basic Methods

2.1. Support Vector Machine Model

Support vector machine (SVM), a relatively new machine learning method, is widely used in industry as a classifier and has also contributed to the development and application of deep learning algorithms. It was invented by Vapnik’s team on the basis of statistical learning theory [8–11]. Currently, SVM is gaining momentum in a number of research fields, including image recognition and classification, face recognition and classification, and time series prediction [12–15]. As a typical binary classification model, SVM separates positive and negative samples with a hyperplane under a maximum-margin linear classification criterion and endows the linear classifier with nonlinear capability with the help of the kernel trick (nonlinear mapping). Compared with traditional machine learning algorithms, SVM features adaptability, good generalization, short training time, and a low chance of being trapped in local search. Therefore, SVM is applied in many fields as a way of solving complex real-life problems [16, 17].

2.1.1. The Basic Idea of the SVM Algorithm

As an effective supervised learning method, SVM is built on the concepts of margin, duality, and the kernel trick. From a mathematical point of view, training an SVM amounts to solving a convex quadratic programming problem [18–20]. The classification principle of SVM is shown in Figure 1.

In Figure 1, the black and white dots denote two types of samples, and the sample points distributed on the separating planes are the support vectors. Line “2” denotes the optimal separating hyperplane found by SVM, and “1” and “3” denote the separating hyperplanes nearest to the optimal one. The distance between “1” and “3” is the margin, and the hyperplane is optimal when the margin reaches its maximum. SVM finds the optimal separating plane with the help of the discriminant function $f(x) = \omega \cdot x + b$, which is also called the separating hyperplane.

(1) Linearly Separable SVM. It is assumed that the training sample set $\{(x_i, y_i)\}_{i=1}^{n}$ includes 2 classes, where the first class is labeled as $y_i = +1$ and the second class is labeled as $y_i = -1$. When the samples are sorted out by using the separating hyperplane $\omega \cdot x + b = 0$, the constraint condition is as follows [21]:

$$y_i(\omega \cdot x_i + b) \ge 1, \quad i = 1, 2, \ldots, n. \tag{1}$$

The distance between a sample point $x_i$ and the separating hyperplane is given by

$$d_i = \frac{|\omega \cdot x_i + b|}{\|\omega\|}. \tag{2}$$

The distance between the two separating hyperplanes is given by

$$d = \frac{2}{\|\omega\|}. \tag{3}$$

According to the above analysis, SVM seeks the optimal hyperplane by maximizing this distance, i.e., the optimal hyperplane is obtained by solving the following equivalent optimization problem:

$$\min_{\omega, b} \ \frac{1}{2}\|\omega\|^{2} \quad \text{s.t.} \quad y_i(\omega \cdot x_i + b) \ge 1, \ i = 1, 2, \ldots, n. \tag{4}$$

Since a convex quadratic program admits only a global optimal solution, the solution process is simple, and the global optimum can be derived by calculating the extrema. In solving the convex quadratic program, a combination of structural and empirical risk needs to be considered. After the Lagrangian function is introduced into equations (3)–(4) based on Lagrangian duality, we can get the following equation [22]:

$$L(\omega, b, \alpha) = \frac{1}{2}\|\omega\|^{2} - \sum_{i=1}^{n} \alpha_i \left[ y_i(\omega \cdot x_i + b) - 1 \right], \tag{5}$$

where $\alpha_i \ge 0$ denotes the Lagrangian coefficients.

Find the partial derivatives of $L$ with respect to $\omega$ and $b$, respectively, and set them equal to zero as follows:

$$\frac{\partial L}{\partial \omega} = \omega - \sum_{i=1}^{n} \alpha_i y_i x_i = 0, \qquad \frac{\partial L}{\partial b} = -\sum_{i=1}^{n} \alpha_i y_i = 0. \tag{6}$$

After collation, we can get

$$\omega = \sum_{i=1}^{n} \alpha_i y_i x_i, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0. \tag{7}$$

Substituting the above results into equations (3)–(5), we can get

$$L(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j). \tag{8}$$

Therefore, the original optimization problem can be transformed into the Lagrangian dual problem given by the following equation:

$$\max_{\alpha} \ \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) \quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \ \alpha_i \ge 0. \tag{9}$$

Letting $\alpha^{*}$ denote the optimal solution of the Lagrangian dual problem, the solution of the original optimization problem can be expressed as follows:

$$\omega^{*} = \sum_{i=1}^{n} \alpha_i^{*} y_i x_i, \qquad b^{*} = -\frac{1}{2}\left(\omega^{*} \cdot x_r + \omega^{*} \cdot x_s\right), \tag{10}$$

where $x_r$ and $x_s$ denote any pair of support vectors in the two categories. With the above derivation, the classification function identifying the final optimal hyperplane is expressed as follows:

$$f(x) = \operatorname{sgn}\left(\omega^{*} \cdot x + b^{*}\right) = \operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_i^{*} y_i (x_i \cdot x) + b^{*}\right). \tag{11}$$

(2) Nonlinear SVM. By means of a nonlinear mapping function that maps the sample data from the low-dimensional input space to a high-dimensional feature space, nonlinear problems are converted into linear ones. Provided that the Mercer condition is met, the training efficiency of SVM can be significantly improved by choosing an appropriate kernel function, which performs the inner product operations of the high-dimensional feature space directly in the original input space.
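To make the kernel idea concrete, the following minimal Python sketch fits an RBF-kernel support vector regressor to a synthetic series. The library (scikit-learn), the data, and the parameter values are illustrative assumptions rather than part of the original study, but the C and gamma arguments play the roles of the penalty parameter C and kernel parameter g that the SA-PSO search tunes in Section 3.

```python
# Minimal sketch: RBF-kernel support vector regression (illustrative only).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic nonlinear series standing in for settlement observations.
t = np.linspace(0, 4 * np.pi, 80)
y = np.sin(t) + 0.05 * rng.standard_normal(t.size)
X = t.reshape(-1, 1)

# The RBF kernel maps the inputs into a high-dimensional feature space,
# so a linear regressor there corresponds to a nonlinear fit here.
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01)
model.fit(X, y)

y_hat = model.predict(X)
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```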

2.2. The Principle of Particle Swarm Optimization

Particle swarm optimization (PSO) is an iteration-based evolutionary algorithm developed by Eberhart et al. through observing the foraging behavior of bird flocks, and it has been widely applied in artificial intelligence.

The foraging behavior of birds is modeled by describing each bird in the flock as a particle, and each particle represents a potential solution to the optimization problem: in a two-dimensional optimization problem the particle is a two-dimensional vector, and in a multidimensional problem it is a multidimensional vector, so the bird flock as a whole is a swarm of particles. Assume that there are m particles in the D-dimensional target search space, where the position of any particle is $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and its velocity is $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. If the value of the optimization objective function is used to characterize particle merit, then the smaller the objective function value, the nearer the particle is to the extreme position and the better its quality. After a limited search, the optimal position found by a single particle is $P_{best}$ and the optimal position found by the swarm is $G_{best}$.

After a round of particle position iteration, the fitness values are updated simultaneously. By comparing the fitness value of each new particle with the personal best value and the group best value, the position $P_{best}$ of the personal best value and the position $G_{best}$ of the group best value are updated with the following equations:

$$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 \left(P_{best} - x_{id}^{t}\right) + c_2 r_2 \left(G_{best} - x_{id}^{t}\right), \tag{12}$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \tag{13}$$

where $c_1$ and $c_2$ are the learning factors and $\omega$ represents the inertia weight. Then, $r_1$ and $r_2$ denote random numbers in the interval [0, 1]. $v_{id}^{t+1}$ together with $x_{id}^{t+1}$ and $v_{id}^{t}$ together with $x_{id}^{t}$ are the particle velocity and particle displacement at the next moment and the current moment, respectively.

2.2.1. The Flow of PSO

PSO is used to update particle positions and velocities in the solution space and continually seeks the best particles in the process, as illustrated in Figure 2.
(1) Initialize the particle swarm: randomly select the initial particles in the solution space and set the velocity and motion direction of the initial particles as well as the learning factors, inertia weights, and other parameters.
(2) Calculate the fitness value: the fitness value of the current particle is solved to determine the personal best value. Then the group best value is determined by comparison.
(3) Update the velocity and position of particles: regulate particle velocity by comprehensively considering the personal best value and the group best value, and guide the particles to move at this velocity.
(4) Output the optimal solution: after a round of iterations, if the termination condition is met, the optimal solution is obtained; if not, skip to step 2 until the termination condition is met or the maximum number of iterations is reached.
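As a concrete illustration of this flow, the sketch below is a generic PSO loop in Python; the sphere test function, parameter values, and bounds are illustrative stand-ins rather than the settings used in the paper.

```python
# Minimal particle swarm optimization sketch (illustrative values throughout).
import numpy as np

def pso(fitness, dim=2, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # personal best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)       # personal best fitness
    g = pbest[np.argmin(pbest_f)].copy()               # group best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity and position updates (cf. the PSO update equations above).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)

        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Example: minimize the sphere function as a stand-in fitness.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)))
print(best_x, best_f)
```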

3. The Construction of a Settlement Prediction Model of Foundation Pit Based on the Improved PSO-SVM Model

3.1. SA Algorithm

The simulated annealing (SA) algorithm, proposed by Metropolis as a heuristic algorithm that simulates the physical annealing process, is often used to find solutions that are difficult to obtain through theoretical and mathematical derivation. While the molecular motion within a solid at high temperature is fast and the molecular energy is high, as the temperature decreases, the molecular motion slows down and transitions from a disordered state to an ordered state. During the annealing process, the solid can reach thermal equilibrium at any temperature, and this thermal equilibrium is equivalent to a local optimal solution. When the solid cools to the lowest temperature, its thermal energy is lowest and a new thermal equilibrium appears; this thermal equilibrium is equivalent to the global optimal solution. Compared with the PSO algorithm, the SA algorithm features a remarkable advantage in global search and is therefore suitable for solving large-scale combinatorial optimization problems [23].

3.2. The Principle and Flow of the SA-PSO Algorithm
3.2.1. The Principle of the SA-PSO Algorithm

The SA-PSO algorithm improves the overall performance by using the SA algorithm to compensate for the shortcomings of the PSO algorithm. The acceptance of a new state in the SA algorithm is determined by a probability formula: if $E(x_{new}) < E(x_{old})$, the new state $x_{new}$ is accepted; otherwise, $x_{new}$ is accepted with the probability $P = \exp\left(-\frac{E(x_{new}) - E(x_{old})}{T}\right)$, where $T$ is the current temperature. Since the setting of the initial value has little effect on the probability value of the SA algorithm, the optimal solution can be calculated according to the probability formula. The SA algorithm changes the annealing temperature through an adjustment function: when the temperature is high at the initial stage, the difference in particle fitness values is apparent; as the particle search range expands during the cooling process, the fitness of the particles approaches the optimal solution when the annealing temperature tends to zero. By accepting, with a certain probability, new solutions that do not fully satisfy the conditions in addition to the better solutions in the current state, the SA algorithm strengthens its global search capability. Apparently, the combination of the two algorithms enables better application performance, as the SA algorithm compensates for the shortcomings of the PSO algorithm [24, 25].
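For clarity, the Metropolis acceptance rule described above can be written in a few lines; the energies and temperatures below are illustrative values only.

```python
# Metropolis acceptance rule used by SA: always accept an improvement,
# otherwise accept with probability exp(-(E_new - E_old) / T).
import math, random

def accept(e_old, e_new, temperature):
    if e_new < e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / temperature)

# Illustrative values: a worse solution is still accepted fairly often
# when the temperature is high, and rarely when it is low.
print(accept(1.0, 1.2, temperature=5.0))   # often True
print(accept(1.0, 1.2, temperature=0.01))  # almost always False
```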

3.2.2. Flow of the SA-PSO Algorithm

Combined with the previous discussion, the SA-PSO algorithm features the advantages of fast convergence and strong global search capability. The implementation procedure is as follows [26]:
(1) Initialize the particle positions and velocities based on the PSO algorithm.
(2) Select an appropriate fitness function to obtain the personal best fitness value $P_{best}$ as well as the group best fitness value $G_{best}$.
(3) Set the initial temperature $T_0$ according to the fitness value $f(G_{best})$ of the optimal particle.
(4) Introduce the SA algorithm to obtain the fitness value of each particle at the initial temperature.
(5) Update the particle positions and velocities based on the PSO algorithm.
(6) Solve for the updated particle fitness values.
(7) Perform the annealing treatment.
(8) If the termination condition is met, stop the iteration and output the result; otherwise, skip to step 4 and repeat the steps above until the termination condition is met.
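A compact sketch of how the two algorithms interlock is given below; the initial-temperature rule, cooling rate, bounds, and test function are illustrative assumptions rather than the paper's exact settings.

```python
# Skeleton of an SA-PSO hybrid (illustrative settings): a PSO update proposes
# new particle positions, the Metropolis criterion decides whether each move
# is accepted, and the temperature is annealed each iteration.
import numpy as np

def sa_pso(fitness, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
           t0=None, cooling=0.95, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    f = np.array([fitness(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()
    g = x[np.argmin(f)].copy()
    T = t0 if t0 is not None else abs(f.min()) + 1.0   # assumed initial temperature

    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        cand = np.clip(x + v, lo, hi)
        cand_f = np.array([fitness(p) for p in cand])

        # Metropolis acceptance: keep worse moves with probability exp(-dF/T),
        # which is what lets the hybrid escape local optima.
        dF = cand_f - f
        keep = (dF < 0) | (rng.random(n) < np.exp(-np.maximum(dF, 0) / T))
        x[keep], f[keep] = cand[keep], cand_f[keep]

        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
        T *= cooling                                    # annealing schedule

    return g, pbest_f.min()

print(sa_pso(lambda z: float(np.sum(z ** 2))))
```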

3.3. The Construction of the Settlement Prediction Model of Foundation Pit of Improved SVM on the Basis of SA-PSO

To improve the accuracy of foundation pit settlement prediction, this paper first uses the PSO algorithm to optimize the parameters (g, C) of the SVM model. The optimization search, however, may become trapped in local optima. In this regard, by invoking the SA algorithm to improve the PSO algorithm, this paper develops a better-performing SA-PSO algorithm, which can determine the optimal parameter values in a more efficient and accurate manner. Therefore, the SVM model based on the SA-PSO algorithm can strongly support the settlement prediction of the foundation pit.

The flowchart of the SA-PSO algorithm for SVM model parameter optimization is shown in Figure 3.
(1) Data acquisition and collation: the raw settlement data are preprocessed, and the phase space is reconstructed to establish a time series in phase space. The collated data are then divided into a training set and a prediction set, and the optimal embedding dimension m and time delay r are determined by jointly applying the Cao method and mutual information.
(2) Normalization: the data in the prediction set and training set are normalized by the following equation to avoid data redundancy:

$$y = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \tag{14}$$

where $x_{\min}$ and $x_{\max}$ denote the minimum and maximum values in the original data, respectively, x denotes the observed data, and y denotes the normalized data.
(3) Set the velocity of the initial particles and apply the fitness function to solve for the personal best fitness value $P_{best}$ and the group best fitness value $G_{best}$.
(4) Perform simulated annealing initialization on the basis of the SA algorithm: set the initial temperature $T_0$, solve for the initial solution S and the current fitness values, and then update $P_{best}$ and $G_{best}$.
(5) Calculate the updated solution S1 and update the particle positions and velocities through the PSO algorithm. In the meantime, solve for the new fitness value.
(6) Follow the acceptance rule of simulated annealing: if $f(S1) \le f(S)$, accept state S1; otherwise, accept S1 with probability $P = \exp\left(-\frac{f(S1) - f(S)}{T}\right)$, and if it is not accepted, S remains unchanged.
(7) Update $P_{best}$ and $G_{best}$ based on the new fitness values.
(8) If the termination condition is met, stop the iteration and output the result; otherwise, skip to step 5 and repeat the steps above until the termination condition is met.
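The first steps of this flow, and the fitness that the SA-PSO search would minimize over (C, g), can be sketched as follows. The synthetic series, the fixed embedding dimension and delay, and the cross-validated RMSE fitness are illustrative assumptions; the paper selects m and r with the Cao method and mutual information, and the real fitness settings are not specified here.

```python
# Sketch of the data pipeline: min-max normalization (equation above),
# time-delay embedding, and a candidate fitness for the (C, g) search.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Hypothetical cumulative settlement series (mm); real monitoring data
# from the foundation pit would be used here.
series = -np.cumsum(np.abs(np.random.default_rng(0).normal(0.6, 0.1, 60)))

# Min-max normalization: y = (x - x_min) / (x_max - x_min).
x_min, x_max = series.min(), series.max()
norm = (series - x_min) / (x_max - x_min)

# Phase-space reconstruction with embedding dimension m and delay r
# (fixed to illustrative values here).
m, r = 3, 1
X = np.array([norm[i : i + m * r : r] for i in range(len(norm) - m * r)])
y = norm[m * r :]

def fitness(params):
    """Cross-validated RMSE of an RBF SVR for a candidate (C, g)."""
    C, g = params
    model = SVR(kernel="rbf", C=C, gamma=g)
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

# In the paper this fitness would be minimized by the SA-PSO search;
# here it is just evaluated at one illustrative candidate.
print(fitness((10.0, 0.1)))
```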

4. The Engineering Application of the Improved PSO-SVM Model

4.1. SA-PSO-SVM Model Training Results

Data from the monitoring point with the maximum settlement were selected as the base data; the data were normalized, the slack variable was set to 0.02, and the other parameters were set accordingly. The predictions for the test set were then simulated on the MATLAB software platform using the optimized model, and a quantitative analysis was conducted by combining four indicators: the goodness of fit $R^2$, the root mean square relative error (RMSE), the mean absolute error (MAE), and the residual sum of squares (SSE). Here, the goodness of fit characterizes the degree to which the independent variables in the model jointly explain the dependent variable, with values tending to 1 implying a good fit. The root mean square relative error measures the dispersion of the prediction results, and the mean absolute error represents the prediction accuracy of the model.
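Assuming the conventional definitions of the four indicators, they can be computed as follows; the observed and predicted arrays are illustrative placeholders, not the study's data.

```python
# The four evaluation indicators under their conventional definitions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

observed  = np.array([-17.2, -17.6, -18.0, -18.4, -18.7])
predicted = np.array([-17.1, -17.7, -18.1, -18.3, -18.8])

r2   = r2_score(observed, predicted)                     # goodness of fit
rmse = np.sqrt(mean_squared_error(observed, predicted))  # dispersion of errors
mae  = mean_absolute_error(observed, predicted)          # prediction accuracy
sse  = np.sum((observed - predicted) ** 2)               # residual sum of squares

print(f"R2={r2:.4f}  RMSE={rmse:.4f}  MAE={mae:.4f}  SSE={sse:.4f}")
```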

The prediction results are displayed in Figure 4.

According to the analysis of Figure 4, the SA-PSO-SVM model used for training and prediction has a good fitting degree. The settlement change curve predicted by the model basically fits the change curve of the settlement obtained from actual observation. The relevant data and ratings are shown in Table 1.

When analyzed in conjunction with Table 1, the prediction accuracy of the SA-PSO-SVM model is better than that of the PSO-SVM model, with a root mean square relative error of 0.2134. In addition, the prediction curve of the SA-PSO-SVM model fits the actual curve well, with a goodness of fit of 0.9962, which is higher than that of the PSO-SVM model. Moreover, the SA-PSO-SVM model has lower requirements on the data and better robustness; it produces prediction results that are closer to the actual values because it is less likely to fall into local extremes; furthermore, it can effectively handle nonlinear data and maintain a high convergence rate during the search process.

The SA-PSO-SVM model was used to predict the settlement at the JC15 monitoring point for Period 32 with the input values (−18.04, −18.36, and −18.72). The output value for Period 32 was then fed back into the model to predict the settlement for Period 33, and the results obtained are listed in Table 2.
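The rolling use of the model described above, in which the Period 32 output is fed back in to predict Period 33, can be sketched as follows; the stand-in predictor and the three-value window are illustrative assumptions based on the quoted input values, not the trained SA-PSO-SVM model itself.

```python
# Rolling one-step-ahead forecasting: the latest prediction is appended to the
# input window and used to forecast the next period. The predictor below is a
# trivial stand-in; in the paper the trained SA-PSO-SVM model plays this role.
import numpy as np

def roll_forecast(predict_one, window, steps):
    window = list(window)
    out = []
    for _ in range(steps):
        nxt = predict_one(np.asarray(window[-3:]))   # last 3 lags as features
        out.append(nxt)
        window.append(nxt)                           # feed the output back in
    return out

# Stand-in predictor: continue the recent average increment (illustrative only).
def dummy_model(lags):
    return float(lags[-1] + np.mean(np.diff(lags)))

print(roll_forecast(dummy_model, [-18.04, -18.36, -18.72], steps=2))
```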

According to the predicted results, the settlement of the monitoring point for Periods 32 and 33 is −18.83 mm and −18.91 mm, respectively, both of which are below the warning value; therefore, the current foundation pit works are judged to be safe in terms of construction.

4.2. Comparison and Analysis of the Three Models

To test prediction accuracy, the SVM regression prediction model, the PSO-SVM prediction model, and the SA-PSO-SVM prediction model are compared in this section. Under the three models, the data of the training set and test set at the JC15 monitoring point are predicted, and the results are presented in Table 3 and Figure 5.

The best fitness of the SA-PSO-SVM model outperforms that of the PSO-SVM model and consistently outperforms the fitness value of the SVM regression prediction model throughout the iterations. When the best fitness value of the SA-PSO-SVM model converges, the corresponding optimal parameters are the kernel parameter g = 0.0572 and the penalty parameter C = 65.0981, as shown in Table 4.

Since the fitted prediction curves of the three models agree well with the curves from actual observation, all three prediction models can assist in settlement prediction.

In addition, the root mean square relative errors of the SVM, PSO-SVM, and SA-PSO-SVM models are 0.6038 mm, 0.3135 mm, and 0.2134 mm, respectively. The MAE and SSE values of the SA-PSO-SVM model are the smallest among the three models, at 0.1706 and 1.41221, respectively. The goodness of fit of the SA-PSO-SVM model, 0.996172, is also higher than those of the other two models.

4.3. Validation and Analysis of Prediction Models

According to the engineering application results of the three models, the SA-PSO-SVM model is superior to the other two models in terms of overall performance. To further verify the reliability of this conclusion, in this section we combine the data from the JC30 monitoring point, which has the second-largest settlement, for analysis and verification.

For the cumulative settlement data at the JC30 monitoring point, settlement predictions were made by applying the SVM regression prediction model, PSO-SVM prediction model, and SA-PSO-SVM prediction model. The results are presented in Figure 6.

It can be seen that the maximum residual values corresponding to the SVM regression model, the PSO-SVM model, and the SA-PSO-SVM model are −1.97 mm, −0.63 mm, and 0.35 mm, respectively, indicating that the settlement prediction curves of all three models basically match the original data curves. Furthermore, the SA-PSO-SVM model presents better fitting results than the other two models, with a goodness of fit of 0.9641.

The accuracy of data processing of the three models is listed in Table 5.

It can be seen that the root mean square relative error of the SA-PSO-SVM model, which is only 0.1889, is significantly smaller than those of the SVM regression model and the PSO-SVM model. The SSE and MAE values of the SA-PSO-SVM model, 1.1067 and 0.1693, respectively, are also the smallest among the three models. This confirms that the proposed SA-PSO-SVM model is more robust in data processing and can search for the global optimal solution efficiently. Furthermore, the PSO-SVM model also shows good adaptability to nonlinear time-series settlement data and can achieve the expected prediction accuracy.

The three models were used to predict the settlement at the JC30 monitoring point for Periods 32 and 33, respectively, and the results obtained are listed in Table 6.

According to Table 6, based on the SVM regression model, the PSO-SVM model, and the SA-PSO-SVM model, the cumulative settlement at the JC30 monitoring point for Periods 32 and 33 is below the warning value in all cases. Therefore, the current foundation pit project can be judged as safe in terms of construction.

5. Conclusion

Based on the analysis and validation above, the SA algorithm is used to improve the PSO algorithm, which in turn optimizes the parameters of the SVM model. The resulting SA-PSO-SVM model meets the needs of foundation pit settlement prediction.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.