Abstract
To improve convergence speed and search accuracy, this paper proposes an improved quantum-behaved particle swarm optimization algorithm based on Levy flight. The improved algorithm reduces the probability of converging to a local optimum through Levy flight and enhances the accuracy of the later search through a postsearch strategy. During the search process, the probability of quantum behavior is retained and the directivity of the particles is strengthened. According to the simulation comparison results, the improved quantum-behaved particle swarm algorithm exhibits faster convergence and higher solution accuracy.
1. Introduction
Particle swarm optimization (PSO) is a swarm intelligence optimization algorithm proposed in 1995. It simulates the foraging behavior of animal populations and is used to solve complex optimization problems. The basic particle swarm algorithm converges quickly and produces accurate solutions; however, it easily becomes trapped in local optima. Moreover, it has been proven that PSO is not a globally convergent swarm intelligence algorithm [1].
In 2004, Sun proposed the quantum-behaved particle swarm algorithm by combining quantum behavior with PSO [2, 3]. Quantum-behaved particle swarm optimization (QPSO) has a simpler model, fewer control parameters, faster convergence, and better global search capability. However, when QPSO addresses complex multidimensional optimization problems, it can also converge too quickly and fall into a local optimum [4]. Dong Yumin proposed a hybrid quantum-behaved particle swarm optimization algorithm [5] combining the artificial fish swarm algorithm (AFSA) [6] and QPSO to improve QPSO. Gai-Ge Wang proposed an algorithm based on the krill herd (KH) algorithm [7] and QPSO [8]. Regarding the expansion-contraction factor, Sun proposed three improvement methods for QPSO in 2011 [9]. ShuJiang Li proposed an improvement to the expansion-contraction factor [10]. Jing Zhao proposed a new search strategy that gave particles a certain probability of remaining unchanged or of being reinitialized [11]. Dianbo Su updated the position of the worst particle in the swarm through a crossover method [12]. Yukun Wang also proposed a new hybrid quantum-behaved particle swarm algorithm by combining Levy flight and QPSO [13]. Luo [14] solved constrained problems via a quantum-behaved particle swarm algorithm based on Lagrange multipliers [15]. In addition, a growing number of optimization algorithms have been proposed, such as the whale optimization algorithm, the flower pollination algorithm, the binary spider monkey optimization algorithm, the firefly algorithm, and the spider monkey optimization algorithm [16–21].
It can be found that QPSO, similar to many swarm intelligence optimization algorithms, is prone to falling into a local optimum. To improve the convergence speed and search accuracy, a new search strategy is proposed that combines the advantages of quantum mechanics and traditional mechanics. In the search process, the probability of quantum behavior is retained, and the directivity of the particles is strengthened so that the search is performed in a random and objective coexistence manner, which makes the convergence faster and the accuracy of the solution higher.
2. QPSO
The PSO algorithm obtains the optimal value through multiple iterations by introducing concepts such as velocity and position. The update formulas are as follows:

v_id^(t+1) = w·v_id^t + c1·r1·(P_id − x_id^t) + c2·r2·(G_d − x_id^t)
x_id^(t+1) = x_id^t + v_id^(t+1)

where v_id is the velocity of particle i in dimension d and x_id is its position. P_i is the optimal position found by particle i, and G is the optimal position of the whole swarm. w is an inertia parameter that can control the dynamics of flight, and c1 and c2 are often called the learning parameters. r1 and r2 represent random numbers from 0 to 1 that satisfy a uniform distribution.
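As an illustration, the PSO update above can be sketched in Python with NumPy. The array shapes and default parameter values here are illustrative assumptions, not settings prescribed by this paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One PSO iteration: update velocities, then positions.

    x, v  : (n_particles, dim) current positions and velocities
    pbest : (n_particles, dim) personal best positions P_i
    gbest : (dim,) global best position G
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)  # uniform random numbers in [0, 1)
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    return x, v
```

Each particle is pulled toward its own best position and the swarm's best position, with the inertia term w·v carrying over part of the previous velocity.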
The QPSO algorithm is based on the PSO algorithm and is derived from a quantum model of the particles. In quantum space, the velocity and position of a particle cannot be determined simultaneously, and the position must be described by a wave function. Using the Monte Carlo stochastic simulation method, the position of the particle can be measured as follows:

p_id = φ·P_id + (1 − φ)·G_d
x_id^(t+1) = p_id ± α·|C_d − x_id^t|·ln(1/u)

where u and φ represent random numbers from 0 to 1 that satisfy a uniform distribution. G_d is dimension d of the optimal position of the whole swarm, and P_id is dimension d of the optimal position of particle i. α is the expansion-contraction factor, which is the only parameter of QPSO; currently, the most commonly used method is to linearly reduce the value of α from 0.9 to 0.4. C is the average value of the optimal positions of all particles in the current iteration.
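A minimal sketch of this QPSO position update, assuming the common formulation with a per-dimension local attractor; mbest plays the role of C (the mean of all personal bests) and alpha is the expansion-contraction factor:

```python
import numpy as np

def qpso_step(x, pbest, gbest, alpha):
    """One QPSO iteration for the whole swarm.

    x     : (n, d) current positions
    pbest : (n, d) personal best positions P_i
    gbest : (d,) global best position G
    alpha : expansion-contraction factor
    """
    n, d = x.shape
    mbest = pbest.mean(axis=0)              # C: mean of all personal bests
    phi = np.random.rand(n, d)              # uniform in [0, 1)
    u = 1.0 - np.random.rand(n, d)          # in (0, 1], keeps ln(1/u) finite
    p = phi * pbest + (1 - phi) * gbest     # local attractor per dimension
    sign = np.where(np.random.rand(n, d) < 0.5, -1.0, 1.0)  # the +/- choice
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
```

Because ln(1/u) ranges over [0, ∞), a particle can land anywhere along the attractor direction, which is what gives QPSO its global search behavior.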
Comparing PSO and QPSO, QPSO needs to adjust only one parameter, which makes it easy to control. In addition, QPSO can converge to the global optimum with a probability of 1, so its optimization effect is better. PSO uses a velocity-position model, so a single particle can only search within a limited space. QPSO, in contrast, uses a quantum model in which particles have no definite position or velocity; every particle can appear anywhere in the search space, and the optimization effect is better. However, QPSO is still prone to falling into a local optimum.
3. Improved QPSO Based on Levy Flight
In QPSO, the position of the particle can also be written in the classical form

x_id^(t+1) = x_id^t + v_id^(t+1)

where d is a dimension of particle i. Analyzing this formula of traditional physics, the position of particle i is obtained by adding the velocity v_id^(t+1) to x_id^t; in other words, the new position is the old position plus a direction vector. Although the randomness of the direction vector lets the particles explore the entire feasible region, it may also slow convergence. Therefore, to strengthen the directivity and purpose of the direction vector, two new strategies are proposed:

x_id^(t+1) = x_id^t + ln(1/u)·(G_d − x_id^t)
x_id^(t+1) = x_id^t + ln(1/u)·(C_d − x_id^t)

where u represents a random number from 0 to 1 that satisfies a uniform distribution; moreover, u is the same in all dimensions of particle i. These two search strategies have clear directivity and fast convergence. However, fast convergence may lead to a local optimum, so further improvement is needed. Throughout the development of human society, it can be found that some people move in the best direction, some move with the general trend, others rely on their own experience, and still others stand still. Therefore, a search strategy based on this development trend of human society is proposed. It is called strategy a:

If 0 < q < q1, then x_id^(t+1) = x_id^t + ln(1/u)·(G_d − x_id^t).
If q1 < q < q2, then x_id^(t+1) = x_id^t + ln(1/u)·(C_d − x_id^t).
If q2 < q < q3, then x_id^(t+1) = x_id^t + L·(P_id − x_id^t), where L is a random number based on the Levy distribution.
If q3 < q < 1, then x_id^(t+1) = x_id^t, that is, the particle remains unchanged.

Here, q1, q2, and q3 are parameters, and q represents a random number from 0 to 1 that satisfies a uniform distribution. The above strategy increases the diversity of the early solutions and reduces the probability of falling into a local optimum to a certain extent. To further improve the accuracy of the solution, a postsearch strategy is applied in the later search period; the probability of executing it is a function of the current iteration number t and grows as the search proceeds.
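The four-way selection of strategy a might be sketched as follows. The exact case equations are an assumption based on the description above (move toward the best, follow the general trend, take a Levy flight around one's own experience, stand still), the defaults q1 = 0.5, q2 = 0.8, q3 = 0.9 come from the experimental settings later in the paper, and the Levy step uses Mantegna's algorithm, a common choice in Levy-flight variants:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1):
    """Mantegna's algorithm: draw Levy-stable step lengths with exponent beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, size)
    v = np.random.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def strategy_a(x, pbest_i, gbest, mbest, q1=0.5, q2=0.8, q3=0.9):
    """Pick one of the four moves for a single particle x (a sketch)."""
    q = np.random.rand()
    lam = np.log(1.0 / (1.0 - np.random.rand()))  # ln(1/u), same in every dimension
    if q < q1:                                    # move in the best direction (gbest)
        return x + lam * (gbest - x)
    elif q < q2:                                  # move with the general trend (mbest)
        return x + lam * (mbest - x)
    elif q < q3:                                  # Levy flight around own experience
        return x + levy_step(size=x.shape[0]) * (pbest_i - x)
    else:                                         # stand still: no update
        return x.copy()
```

The heavy-tailed Levy steps occasionally produce large jumps, which is what helps particles escape local optima.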
In the postsearch period, it is more desirable for the particles to search in a small range near gbest. Strategy b updates one randomly chosen dimension k of the d dimensions of the particle near the global best and is controlled by a parameter δ: when the value of δ is small, the search accuracy can be increased; when the value of δ is large, the particle can participate in the global search again. The concept of crossover can further improve the search accuracy; strategy c performs a crossover on two randomly chosen dimensions k1 and k2 of the d dimensions of the particle. Adopting strategies b and c can not only ensure that particles are optimized near the global best but also ensure that the particles do not overlap with the global best. In the implementation of the algorithm, the following strategy d is used for any particle whose dimension exceeds the limit during an iteration. Strategy d is as follows:
If a dimension x_id of a particle leaves the feasible range [x_min, x_max], it is reset to a random position within the range:

x_id = x_min + r·(x_max − x_min)

Here, r represents a random number from 0 to 1 that satisfies a uniform distribution.
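Assuming strategy d re-samples each out-of-range dimension uniformly within the feasible range (a standard repair rule consistent with the uniform random number mentioned above), a sketch is:

```python
import numpy as np

def repair_bounds(x, x_min, x_max):
    """Strategy d (sketch): re-sample any out-of-range dimension uniformly
    within [x_min, x_max]; in-range dimensions are left untouched."""
    out = (x < x_min) | (x > x_max)              # mask of violating entries
    r = np.random.rand(*x.shape)                 # uniform in [0, 1)
    return np.where(out, x_min + r * (x_max - x_min), x)
```

Compared with simply clamping to the boundary, random re-sampling avoids piling particles up on the box edges.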
The detailed description of the improved QPSO for solving optimization problems is as follows:
(i) Step 1: initialize the population according to the set number of particles and dimensions, set the maximum number of iterations, and set the various parameters.
(ii) Step 2: evaluate all particles to find the individual best of each particle and the global best of all particles. Calculate the average value of the optimal positions of all particles in the current iteration.
(iii) Step 3: determine whether the current number of iterations is less than half the maximum number of iterations. If true, execute strategy a to update the particles and skip to Step 5. If false, execute strategy a to update the particles with a certain probability and then skip to Step 5, or perform Step 4 with the complementary probability.
(iv) Step 4: execute strategy b or strategy c to update the particles, each with a certain probability.
(v) Step 5: determine whether each particle is within the limited range; if it is out of range, use strategy d to update the particle.
(vi) Step 6: evaluate all particles to determine and update the individual best of each particle and the global best of all particles. Calculate and update the average value of the optimal positions of all particles in the current iteration.
(vii) Step 7: if the maximum number of iterations is reached, output the optimal solution. Otherwise, return to Step 3.
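The seven steps above can be assembled into a skeleton loop. In this sketch a plain QPSO move stands in for the probabilistic selection among strategies a, b, and c, and simple clipping stands in for strategy d, so it is a simplified outline rather than the full G-QPSO:

```python
import numpy as np

def g_qpso_skeleton(f, dim, n=40, max_iter=100, lb=-10.0, ub=10.0):
    """Skeleton of the iterative loop (Steps 1-7) for minimizing f."""
    # Step 1: initialize positions uniformly in [lb, ub]
    x = lb + np.random.rand(n, dim) * (ub - lb)
    # Step 2: evaluate, record personal bests and the global best
    fit = np.array([f(xi) for xi in x])
    pbest, pfit = x.copy(), fit.copy()
    g = int(np.argmin(pfit))
    gbest, gfit = pbest[g].copy(), float(pfit[g])
    for t in range(max_iter):
        mbest = pbest.mean(axis=0)               # mean of personal bests (C)
        alpha = 1.0 - 0.5 * t / max_iter         # linearly decreasing 1 -> 0.5
        # Steps 3-4 (simplified): plain QPSO move in place of strategies a/b/c
        phi = np.random.rand(n, dim)
        u = 1.0 - np.random.rand(n, dim)         # in (0, 1], ln(1/u) finite
        sign = np.where(np.random.rand(n, dim) < 0.5, -1.0, 1.0)
        p = phi * pbest + (1 - phi) * gbest
        x = p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        # Step 5 (simplified): keep particles inside the feasible range
        x = np.clip(x, lb, ub)
        # Step 6: re-evaluate and update personal and global bests
        fit = np.array([f(xi) for xi in x])
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        g = int(np.argmin(pfit))
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), float(pfit[g])
    # Step 7: return the best solution found
    return gbest, gfit
```

Swapping the simplified move for the strategy-selection logic described in Steps 3 and 4 yields the full algorithm without changing the surrounding loop.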
The flowchart of the improved QPSO is shown in Figure 1.

Computational complexity is an important property of an optimization algorithm. Complexity is generally expressed as O(1), O(log n), O(n), O(n log n), O(n^2), or O(2^n), which describes how the running time of an algorithm grows with the size of its input, where n represents the amount of input data. In the optimization algorithm, the input parameters are the number of particles (n) and the number of iterations (m). The computational complexity of the improved quantum-behaved particle swarm optimization algorithm is O(mn), the same as that of PSO and QPSO.
4. Simulation Results
To better compare the improved quantum-behaved particle swarm optimization algorithm with the quantum-behaved particle swarm optimization algorithm and the particle swarm optimization algorithm, fifteen test functions are used for simulation. The simulation compares the three algorithms in terms of solution accuracy and convergence speed. The fifteen test functions are shown in Table 1.
The optimal values of the functions are 0, −19.2085, −1, −186.7309, −2.0626, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.
The improved quantum-behaved particle swarm algorithm is hereafter called G-QPSO. In G-QPSO, the two strategy-selection probabilities are taken as 0.5; q1, q2, and q3 are taken as 0.5, 0.8, and 0.9; and α is linearly reduced from 1 to 0.5. In QPSO, α is linearly reduced from 1 to 0.5. In PSO, w is linearly reduced from 1.5 to 0.5, and c1 and c2 are taken as 2. The following data are the averages of 100 tests.
The first eight functions have only two variables, so the accuracy of the solution is analyzed by changing the number of particles with 100 iterations. Analysis of Table 2 shows that G-QPSO outperforms QPSO and PSO in the mean, worst, and best values of the solution, proving that G-QPSO is more accurate than QPSO and PSO on the two-dimensional test functions. For example, for one of the test functions, when the number of particles is 20, the mean, worst, and best values of QPSO are 0.0042, 0.0715, and 2.45E−06, respectively; the mean, worst, and best values of G-QPSO are 4.31E−05, 1.30E−03, and 2.15E−08, respectively; and the mean, worst, and best values of PSO are 0.0014, 0.0182, and 1.10E−05, respectively.
The last seven test functions analyze the solution accuracy of QPSO, G-QPSO, and PSO in different dimensions by changing the dimensions of the test function with 1000 iterations and 40 particles. Analysis of Table 3 shows that G-QPSO outperforms QPSO and PSO in the mean, worst, and best values of the solution, which proves that G-QPSO is also more accurate than QPSO and PSO on multidimensional test functions. Together, the analysis of Tables 2 and 3 proves that the search accuracy and solution accuracy of G-QPSO are better than those of PSO and QPSO.
The following figures compare the fitness-value curves of the first eight test functions to assess convergence speed when the number of particles is 40 and the number of iterations is 50, where a-QPSO is G-QPSO without strategies b and c; the following data are the averages of 100 tests.
By comparing the eight test functions in Figures 2–9, it is found that G-QPSO converges faster than QPSO and PSO. In summary, G-QPSO is better than QPSO and PSO in both the speed of convergence and the accuracy of the solution.








However, to further understand the role of each strategy in G-QPSO, functions 9 to 12 are used for comparison when the number of particles is 40, the number of dimensions is 20, and the number of iterations is 50, where bc-QPSO is G-QPSO without strategy a; the following data are the averages of 100 tests.
By analyzing Figures 10–13, it is clear that G-QPSO and a-QPSO converge faster than bc-QPSO. By analyzing Table 4, it is clear that G-QPSO and bc-QPSO are more accurate than a-QPSO. In general, strategy a mainly speeds up convergence, while strategies b and c mainly improve the search accuracy and thus the accuracy of the solution.




Analyzing the search strategy of G-QPSO, the clear search direction in the early stage and the precise search in the later stage together ensure that the algorithm achieves higher accuracy and faster convergence.
Early in the search, the following effects are clear: (1) The clear directivity and the randomness of the step length ln(1/u) work together to make particles search on the line between the global best (or the average individual best) and themselves. Moreover, ln(1/u) ranges from 0 to infinity, which ensures that the search area covers the entire line. (2) Both the no-update strategy and the Levy-flight strategy keep the average individual best away from the global best, ensure the diversity of the early search solutions, and reduce the probability of local optimization.
In the later stage of the search, when a particle updates its position, some particles follow the postsearch strategy so that they search near the global optimal position.
5. Conclusion
Based on the analysis of the quantum-behaved particle swarm algorithm, a position-update method based on the combination of traditional physics and quantum behavior is proposed, which not only guarantees the directivity of the particles during the search but also reduces the probability of falling into a local optimum. Therefore, G-QPSO improves the directivity as well as the accuracy of the search. In general, strategy a improves the convergence speed, while strategies b and c improve the accuracy of the search. Strategies a, b, and c work together, so G-QPSO is better than QPSO and PSO in terms of both convergence speed and solution accuracy.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant nos. U1609212 and 61304211), the Belt and Road International S&T Cooperation Projects of Zhejiang Province (No. 2019C04021), the Public Technology Research Project of Zhejiang Province (No. LGG20F030002), and the Graduate Education and Teaching Reform Project of Hangzhou Dianzi University (No. JXGG2019ZD001).