Abstract

Intelligent manufacturing is an important part of Industry 4.0, and artificial intelligence technology is a necessary means of realizing it. This requires the exploration of pattern recognition, computer vision, intelligent optimization, and other related technologies. The particle swarm optimization (PSO) algorithm is an optimization algorithm inspired by the foraging behavior of birds. PSO is an intelligent technology whose efficiency as an optimization algorithm has been verified by extensive research and experiments. In this paper, the traditional PSO algorithm is first compared with genetic algorithms (GA) to illustrate its performance. By analyzing the advantages and disadvantages of the traditional PSO algorithm, we then improve it by introducing an information sharing mechanism and a competition strategy; the resulting algorithm is called information sharing based PSO (IPSO). IPSO retains the rapid convergence of the traditional PSO while enhancing its global search capability. Our experimental results show that IPSO performs better than the traditional PSO and the GA algorithm on benchmark functions, especially on difficult functions.

1. Introduction

The particle swarm optimization (PSO) algorithm is a bioinspired intelligent technology proposed by Kennedy in 1995 [1]. It is a metaheuristic optimization method inspired by the foraging behavior of birds [2]. Compared with genetic algorithms (GA), both algorithms are population based, but the PSO algorithm has no genetic operations, which act on individual chromosomes. Instead, the PSO algorithm works by having individuals exchange information with each other, so that the population as a whole is optimized. Starting from a group of random solutions, optimal solutions are obtained by searching repeatedly. When the PSO algorithm was first published, it attracted extensive attention from scholars in the optimization field and soon became a research focus in that field; many scientific achievements have been made in this area [3–28]. Kulkarni and Venayagamoorthy used PSO to address wireless sensor network (WSN) issues such as optimal deployment, node localization, clustering, and data aggregation. Park et al. proposed an improved PSO framework employing chaotic sequences combined with the conventional linearly decreasing inertia weights and adopting a crossover operation scheme to increase both the exploration and exploitation capability of PSO. Wu et al. proposed a swarm-intelligence-based method, the particle swarm optimization (PSO) algorithm, to handle the elastic parameter inversion problem. Gong et al. proposed a discrete framework of the particle swarm optimization algorithm and, based on it, a multiobjective discrete particle swarm optimization algorithm to solve the network clustering problem. Chen et al. used particle swarm optimization to determine optimal power management. Delgarm et al. proposed a mono- and multiobjective particle swarm optimization (MOPSO) algorithm coupled with the EnergyPlus building energy simulation software to find a set of nondominated solutions that enhance building energy performance. Wu et al. introduced the particle swarm optimization algorithm into KNN multilabel classification, adjusted the Euclidean distance formula of the traditional KNN classification algorithm, and added a weight value to each feature. Mousavi et al. proposed a modified particle swarm optimization for solving integrated location and inventory control problems in a two-echelon supply chain network. Chatterjee et al. proposed a particle swarm optimization based approach to train a neural network (NN-PSO), in which PSO is employed to find a weight vector with minimum root-mean-square error (RMSE) for the NN; the proposed NN-PSO classifier can tackle the problem of predicting structural failure of multistoried reinforced concrete (RC) buildings by detecting the failure possibility of the building structure in the future.

Research and experiments have shown that the PSO algorithm is an efficient metaheuristic optimization algorithm [29]. The PSO algorithm does not need to model the optimization problem and has a strong global search ability. But, like all metaheuristic algorithms, the PSO algorithm cannot guarantee finding the optimal solution. The PSO algorithm is special in that it does not need the gradient of the optimization problem. In other words, the PSO algorithm is unlike classic deterministic optimization methods such as the gradient descent method and the quasi-Newton method. Therefore, the PSO algorithm is widely used in partially irregular optimization, high-noise optimization, time-varying optimization, and so on. However, the disadvantage of the traditional PSO algorithm is that it falls into local optima easily. Thus, PSO needs to be improved to avoid this shortcoming. This paper proposes an information sharing based PSO algorithm (called IPSO, described in detail in Section 2.4). Its implementation is as simple as the traditional PSO, but the experimental results on benchmark functions show that IPSO has a more efficient searching ability for global optima. The main contributions of our work are as follows:
(1) Study the performance of the traditional PSO by comparing it with genetic algorithms.
(2) Analyze the shortcoming of the traditional PSO and develop an improved version of PSO, called IPSO. It is verified that IPSO has better performance than the traditional PSO on benchmark functions.

The rest of this paper is organized as follows. Section 2 formally introduces the traditional PSO algorithm, investigates its performance on 10 benchmark functions, analyzes the reasons why the traditional PSO cannot always achieve optimal solutions, and develops a novel PSO (IPSO) based on this analysis. Section 3 reports the experiments and discusses the results. Finally, Section 4 concludes the paper.

2. Materials and Methods

2.1. PSO Algorithm

The PSO algorithm was first proposed by Eberhart and Kennedy in 1995. It was inspired by the flight behavior of flocks of birds in search of food. The PSO algorithm is an efficient metaheuristic optimization algorithm and has wide applications in the optimization field.

The traditional PSO algorithm works by generating a group (called a swarm) of candidate solutions (called particles). The movement of these particles follows certain rules and can be expressed by equations. The best location found by each particle and the best location found by the whole swarm are the key driving factors: when better locations are found, the entire swarm moves toward them. This process repeats until a satisfactory solution (the best location) is found or the termination condition is reached.

Suppose that f: ℝ^n → ℝ is the function to be minimized. A candidate solution is a vector of real numbers, and each candidate solution has a real-number value calculated by this objective function. The function's gradient is unknown. The purpose is to find a vector a that satisfies f(a) ≤ f(b) for all vectors b in the search space; that is to say, the global minimum that the algorithm is looking for is the solution a. The global maximum can be obtained by minimizing the function h(x) = −f(x) instead.
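For clarity, the optimization goal described above can be restated compactly in LaTeX; writing S for the search space is an assumption of notation, since the paper does not name it:

```latex
% Global minimization target of PSO: find a point a* in the search space S
% such that no other point b in S has a smaller objective value.
\[
  a^{*} = \operatorname*{arg\,min}_{a \in S \subseteq \mathbb{R}^{n}} f(a),
  \qquad \text{i.e.} \qquad
  f(a^{*}) \le f(b) \quad \forall\, b \in S .
\]
% Maximization problems are handled by minimizing h(x) = -f(x) instead.
```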

In the traditional PSO algorithm, each solution in the search space can be thought of as a bird, called a "particle." The fitness value of each particle can be calculated by the objective function. The state of each particle is determined by its velocity and location, and the velocity of a particle determines its direction of flight. All the particles in the swarm search for the optimal solution through mutual cooperation and information exchange. The search process of the PSO algorithm is driven by updating the state of each particle. In the initialization step, the algorithm randomly generates a group of particles (random solutions), and each particle carries velocity and location information. The velocity and location of each particle are then updated using (1) and (2). Equation (1) is determined by the best location experienced by the current particle (called pbest) and the best location experienced by the entire swarm so far (called gbest in (1)). The update of particle velocities and locations repeats until the maximum number of iterations, set at the beginning of the algorithm, is reached. Through this iterative updating mechanism, the algorithm can find (or approach) the best solution in the solution space.

Because each particle's state is updated not only by its own best experienced location (pbest) but also by the whole swarm's best location (gbest), the particles in the swarm must exchange information with each other. For the traditional PSO algorithm, a particle's velocity and location update equations are as follows:

v_i(t+1) = ω · v_i(t) + c1 · rand() · (pbest_i − x_i(t)) + c2 · rand() · (gbest − x_i(t)),   (1)

x_i(t+1) = x_i(t) + v_i(t+1).   (2)

Here, ω is the weight value of the inertia factor. It is a scaling factor applied to the particle's previous velocity, and its value satisfies 0 < ω < 1. c1 and c2 are the accelerating factors; they are constants, typically set as c1 = c2 = 2. rand() is the random function that generates random numbers in [0, 1]. x_i denotes the location of particle i, and v_i denotes the velocity of particle i. pbest_i denotes the best location found by particle i so far, and gbest denotes the best location found by the whole swarm so far.

Equation (1) combines three parts. The first part is the influence of the particle's previous velocity; it gives the particles the ability to expand the search, which enhances the global search ability of the algorithm. The second part represents the influence of the particle's own experience and is called the cognition part. The third part represents the learning strategy among particles and is called the social part; it models the process of particles learning from each other's experience and cooperating socially.

The flow of the traditional PSO algorithm is as follows. First, the algorithm randomly generates a group of particles, each with a random initial velocity v_i and a random location x_i. Then, each particle is evaluated by calculating its fitness value f. The algorithm finds each particle's best location pbest_i and the global best location gbest, which has the best fitness value in the current entire swarm. According to (1) and (2), each particle updates its velocity and location. Then, each particle recalculates its fitness value using the objective function at its current location. According to the recalculated fitness values, the algorithm updates each particle's best location pbest_i and the swarm's best location gbest. Then, (1) and (2) are used again to update each particle's velocity and location. The update of particle velocities and locations repeats until the maximum number of iterations, set at the beginning of the algorithm, is reached.
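As an illustration of this flow, the following is a minimal Python sketch of the traditional PSO loop described above. The parameter choices (ω = 0.7, c1 = c2 = 2) and the helper names are illustrative assumptions, not the authors' original implementation.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=2.0, c2=2.0):
    """Minimal traditional PSO sketch: minimize `objective` over the box `bounds`."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    rng = np.random.default_rng()

    # Random initial locations and velocities.
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))

    # Each particle's best location (pbest) and the swarm's best location (gbest).
    fx = np.apply_along_axis(objective, 1, x)
    pbest, pbest_f = x.copy(), fx.copy()
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        # Velocity update, Eq. (1), and location update, Eq. (2).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        # Re-evaluate fitness and update pbest / gbest.
        fx = np.apply_along_axis(objective, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    return gbest, gbest_f
```

For example, a hypothetical call such as pso(lambda p: (p**2).sum(), bounds=([-5, -5], [5, 5])) would minimize a simple sphere function over [−5, 5]².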

2.2. Experiment Comparison

The traditional PSO algorithm has a high convergence rate; Clerc presented a proof in his paper [2]. We select 10 benchmark functions for the experiment to verify the high convergence rate of the PSO algorithm and compare it with GA. The 10 benchmark functions include multimodal functions, which are difficult to solve; we chose them because we want to study not only the algorithm's convergence efficiency but also its ability to find global optimal solutions. The 10 benchmark functions are briefly listed below:
F1: Schaffer function (graph shown in Figure 1).
F2: Shubert function (graph shown in Figure 2).
F3: Hansen function (graph shown in Figure 3).
F4: Camel function (graph shown in Figure 4).
F5: Function 5 (graph shown in Figure 5).
F6: Function 6 (graph shown in Figure 6).
F7: Function 7 (graph shown in Figure 7).
F8: Function 8 (graph shown in Figure 8).
F9: Function 9 (graph shown in Figure 9).
F10: Function 10 (graph shown in Figure 10).
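As an example of what these benchmarks look like, the following Python snippet gives the widely used two-dimensional form of the Schaffer function (F1). This standard form is an assumption made here for illustration; the paper's own definitions are given with its figures.

```python
import numpy as np

def schaffer(p):
    """Commonly used 2-D Schaffer function; its global minimum f(0, 0) = 0 (assumed form)."""
    x, y = p
    r2 = x**2 + y**2
    return 0.5 + (np.sin(np.sqrt(r2))**2 - 0.5) / (1.0 + 0.001 * r2)**2
```

Under this assumed form, calling the pso sketch above as pso(schaffer, bounds=([-10, -10], [10, 10])) would search a typical [−10, 10]² domain for the minimum.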

In order to verify the analysis of the high convergence rate of the PSO algorithm, we run each algorithm 100 times on the above 10 benchmark test functions. The comparative experimental results are shown in Table 1. The comparison includes the optimal solution found and the number of times the optimal solution was found for each function. For example, for the difficult function F6, GA cannot find the optimal solution at all; the best solution GA can find is −14.786954. The PSO algorithm, however, can find the optimal solution (−39.944506) five times in 100 runs.

From the experimental results in Table 1, we can observe that the convergence rate of the PSO algorithm is significantly higher than that of GA, except on the simplest function F9, where both algorithms find the optimal solution in all 100 runs. The following conclusion can be drawn from the experimental results in Table 1: the PSO algorithm has better global search ability than GA, and its convergence rate is better than that of GA. Thus, compared with GA, we prefer to apply PSO to optimization problems such as the traveling salesman problem.

From Table 1, we also notice that, although it performs better than the GA algorithm, the PSO algorithm cannot always find the optimal solution, except on the simplest function F9. For the functions F4, F5, and F6, PSO performs significantly better than GA in terms of the success rate of achieving the optimal solution, but its success rate is still not very high. We analyze the reason and improve the traditional PSO in the following subsections.

2.3. Why PSO Could Not Always Find Optima

The above experimental results show that the reason why the PSO algorithm can converge quickly is that the cognition part and the social part in (1) guide particles to search only around pbest and gbest. By analyzing the velocity and location update equations of the particles, it can be seen that when the best particle in the swarm falls into a local optimum, the algorithm's information sharing mechanism will guide all the particles to this local optimum location, causing the entire swarm to converge to this solution only. As the entire swarm falls into the local optimum, the value of the cognition part and the value of the social part in (1) eventually become zero. At the same time, as the iterative process proceeds, the particles' velocities eventually drop to zero because the inertia weight is between 0 and 1 (0 < ω < 1).
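The velocity collapse described above can be illustrated with a few lines of Python: once a particle sits at pbest = gbest = x, the cognition and social terms in (1) vanish and the velocity shrinks geometrically by the factor ω each iteration. This is a small illustrative sketch, not part of the original experiments.

```python
import numpy as np

w = 0.7                            # inertia weight, 0 < w < 1
v = np.array([1.0, -2.0])          # some nonzero velocity

# When a particle's location coincides with pbest and gbest, the cognition
# and social terms of Eq. (1) are zero and the update reduces to v <- w * v,
# so the speed decays geometrically toward zero.
for t in range(50):
    v = w * v

print(np.linalg.norm(v))           # about 4e-8: the swarm effectively stops moving
```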

When all the particles in the swarm have zero velocity, the swarm has converged to one location. At this point, if the current gbest of the swarm is not the global optimum, the entire swarm is stuck at that local solution and cannot jump out of the local optimum. This occurs especially frequently for multimodal functions. That is the reason why the traditional PSO can only achieve the optimal solution five times out of 100 runs on function F6, although it is better than the GA algorithm, which achieves it zero times out of 100 runs. To further verify this, we select three more multimodal functions (shown in Table 2 and Figures 11–13). Table 2 presents the range of the variables and the minimum value of each function. As in Section 2.2, we run the traditional PSO 100 times for each function. In our experiments, the traditional PSO could not achieve the optimal solution even once in 100 runs for any of these functions. Thus, we report the best, mean, and worst values achieved by the traditional PSO in Table 3. These experimental results confirm our conjecture: the traditional PSO algorithm falls into local optima easily. (The GA algorithm has the same problem.)

The main reason the traditional PSO algorithm falls into local optima easily is that it has no mechanism that gives the particles the ability to escape from a local optimum. This is the fatal disadvantage of the traditional PSO algorithm and the reason why it cannot always find the optimal solution. With this guidance, we develop a new solution in the next subsection that helps the traditional PSO algorithm avoid falling into local optima.

2.4. An Improved Version of PSO Algorithm

In this section, we propose an improved version of the PSO algorithm, called IPSO, which aims to overcome the disadvantage of easily falling into local optima. The main requirement for the IPSO algorithm is a very strong global search ability. To achieve this, a new information sharing mechanism is introduced into the IPSO algorithm. This reverse mechanism lets the particles move in the direction opposite to the worst location of each particle and the worst location of the swarm. In this way, the global search space can be expanded and the probability of particles falling into local optima can be reduced.

In the traditional PSO algorithm, each particle's flying direction follows a fixed rule: it always approaches the location of the best particle in the swarm. From the above analysis, we can conclude that the risk of doing this is that if the best particle in the swarm cannot find the optimal solution, then the entire swarm cannot find the optimal solution either. To avoid falling into a local optimal solution, the new algorithm gives each particle two flight directions (an active direction and a passive direction). This differs from the traditional PSO algorithm, in which particles have only one direction of flight. The passive direction stores the information of particles that have fallen into local optima. Since particles in the passive direction are not always worse than those in the active direction, the new algorithm selects the better of the active and passive candidates for the update operation. A competitive strategy is therefore introduced into the new algorithm. This competitive strategy makes the worst information compete with the best, which not only reduces the probability of falling into a local optimum but also increases the probability of converging to the optimal solution, while the time complexity is only double that of the traditional PSO algorithm.

How does the new algorithm give each particle the two flight directions (the active and the passive)? In the traditional PSO algorithm, each particle is assigned an active flight direction; this direction is retained in the new algorithm as the active flight direction. The process that produces the passive flight direction mirrors the process that produces the active flight direction. It is well known that when a particle searches for the optimal solution in the solution space, it does not know where the optimal solution is. In the traditional PSO algorithm, the best location of each particle and the best location of the entire swarm are recorded, and the particle's active direction is generated from these two best locations through (1) and (2). We can modify this process to generate the passive flight direction of the particles: we record the worst location of each particle and the worst location of the entire swarm, and the new algorithm generates the passive flight direction from these two worst locations through (13) and (14), the new velocity and location update equations of the particles in the new algorithm:

v_i(t+1) = ω · v_i(t) + c1 · rand() · (x_i(t) − pworst_i) + c2 · rand() · (x_i(t) − gworst),   (13)

x_i(t+1) = x_i(t) + v_i(t+1),   (14)

where pworst_i represents the worst location that particle i has recorded, and gworst represents the worst location that the entire swarm has recorded.

The particle updating strategy of IPSO is to select the better of the two flight directions (the active and the passive) when updating each particle. In this way, the search space of the particles can be expanded and premature convergence to a local optimum can be avoided; therefore, the probability of finding the global optimal solution can be improved. Since IPSO needs to calculate the velocity and location of each particle twice, the time complexity of IPSO is twice that of the traditional PSO. The framework of IPSO is given in Algorithm 1.

Step 1: randomly generate particles and give the velocity and location of each particle.
Step 2: evaluate each particle using the fitness value.
Step 3: for each particle,
If the fitness value of the particle is less than the fitness value of its best recorded location pbest, update pbest with the particle's current location.
Otherwise, if the fitness value of the particle is greater than the fitness value of its worst recorded location pworst, update pworst with the particle's current location.
Step 4: for each particle,
If the fitness value of the particle is less than the best fitness value of the entire swarm, update gbest using this particle's location and fitness value.
Otherwise, if the fitness value of the particle is greater than the worst fitness value of the entire swarm, update gworst using this particle's location and fitness value.
Step 5: for each particle,
(1) Generate this particle's active candidate (velocity and location) using (1) and (2).
(2) Generate this particle's passive candidate (velocity and location) using (13) and (14).
(3) Compare the fitness values of the active and passive candidates, and choose the better one to update this particle.
Step 6: repeat from Step 2 until the optimal solution is found or the termination condition is satisfied.
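The following Python sketch illustrates Algorithm 1. The parameter defaults (ω = 0.7, c1 = c2 = 2), the memory names pworst and gworst, and the exact form of the candidate selection are illustrative assumptions rather than the authors' original implementation.

```python
import numpy as np

def ipso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=2.0, c2=2.0):
    """Sketch of information sharing based PSO (IPSO): each particle tries an
    active direction (toward pbest/gbest, Eqs. (1)-(2)) and a passive direction
    (away from pworst/gworst, Eqs. (13)-(14)) and keeps the better candidate."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    rng = np.random.default_rng()

    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
    fx = np.apply_along_axis(objective, 1, x)

    # Best and worst memories for each particle and for the swarm (Steps 3 and 4).
    pbest, pbest_f = x.copy(), fx.copy()
    pworst, pworst_f = x.copy(), fx.copy()
    gbest, gbest_f = x[np.argmin(fx)].copy(), fx.min()
    gworst, gworst_f = x[np.argmax(fx)].copy(), fx.max()

    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2, r3, r4 = rng.random((4, dim))
            # Active direction, Eqs. (1)-(2): attracted toward pbest and gbest.
            v_a = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x_a = np.clip(x[i] + v_a, lo, hi)
            # Passive direction, Eqs. (13)-(14): repelled from pworst and gworst.
            v_p = w * v[i] + c1 * r3 * (x[i] - pworst[i]) + c2 * r4 * (x[i] - gworst)
            x_p = np.clip(x[i] + v_p, lo, hi)

            # Competitive strategy (Step 5): keep whichever candidate is fitter.
            f_a, f_p = objective(x_a), objective(x_p)
            if f_a <= f_p:
                x[i], v[i], fx[i] = x_a, v_a, f_a
            else:
                x[i], v[i], fx[i] = x_p, v_p, f_p

            # Update personal best/worst memories (Step 3).
            if fx[i] < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fx[i]
            elif fx[i] > pworst_f[i]:
                pworst[i], pworst_f[i] = x[i], fx[i]
            # Update swarm best/worst memories (Step 4).
            if fx[i] < gbest_f:
                gbest, gbest_f = x[i].copy(), fx[i]
            elif fx[i] > gworst_f:
                gworst, gworst_f = x[i].copy(), fx[i]

    return gbest, gbest_f
```

Under the same assumptions, a call such as ipso(schaffer, bounds=([-10, -10], [10, 10])) can be compared directly with the pso sketch given earlier on a benchmark function.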

3. Results and Discussion

In order to verify the effect of the IPSO algorithm, we use the same 10 benchmark functions described before and compare IPSO with the traditional PSO algorithm. Again, in the experiments, the two algorithms run 100 times for every function, and the inertia weight ω and the accelerating factors c1 and c2 are set to the same values for both algorithms. Table 4 gives a comparison of the experimental results.

From Table 4, we can see that our IPSO achieves remarkable results. It achieves the optimal solution in every run on six of the 10 functions, and on function F3 it almost always achieves the optimal solution (97 times out of 100 runs). On the remaining three difficult functions, F4, F5, and F6, our IPSO also significantly improves the success rate of finding the optimal solution. For example, for the most difficult function F6 (again, a multimodal function), it achieves the optimal solution 18 times out of 100 runs, whereas the traditional PSO algorithm achieves it only five times out of 100 runs.

For the last three multimodal functions (shown in Table 2), Table 5 shows the experimental results (including the best, mean, and worst values). From Table 5, we can conclude that our IPSO also significantly improves on the performance of the traditional PSO algorithm. For these three difficult functions, our IPSO gets very close to the optimal solutions shown in the last column (with only a tiny difference), and its best, mean, and worst values are also very close to each other.

3.1. Varied Parameters Setting

In order to further verify the performance of our IPSO, we also carried out experiments with different parameter values and compared the experimental results with the traditional PSO. There are three parameters, ω, c1, and c2, in the velocity update equation. Among them, c1 and c2 are the accelerating factors, and their values are constant; generally, they are set as c1 = c2 = 2 [1]. We do not vary them here. We only change the value of the inertia weight ω, which is used to balance the global and local search capabilities: the larger the inertia weight ω, the more both our IPSO and the standard PSO favor global search. Note that when we vary the inertia weight ω, the relative influence of the inertia part and the acceleration parts changes accordingly.

Table 6 shows the experimental results for different values of the inertia weight ω, from 0.1 to 0.9, on function F3. As in the previous experiments, each algorithm runs 100 times, with the accelerating factors fixed at c1 = c2 = 2.
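A sweep of this kind can be scripted as follows, reusing the hypothetical ipso and schaffer sketches from earlier; the Hansen function F3 used in the paper is replaced here by the Schaffer stand-in purely for illustration.

```python
import numpy as np

# Count how often each inertia weight reaches (numerically) the assumed optimum of 0.
for w in np.arange(0.1, 1.0, 0.1):
    hits = 0
    for _ in range(100):
        _, best_f = ipso(schaffer, bounds=([-10, -10], [10, 10]), w=w)
        if best_f < 1e-6:          # tolerance for "optimal solution found"
            hits += 1
    print(f"w = {w:.1f}: optimum found {hits} / 100 times")
```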

Table 6 shows that our IPSO always performs better than the traditional PSO algorithm under the different settings of ω. It also shows that both our IPSO and the traditional PSO perform best when the inertia weight ω is 0.6 or 0.7. As the inertia weight ω increases toward these values, the performance of both algorithms improves. However, when the inertia weight is set to 0.8 or higher, our IPSO maintains its performance, whereas the traditional PSO gets worse as the inertia weight increases.

4. Conclusions

An information sharing based PSO algorithm called IPSO is proposed in this paper. Compared with the traditional PSO algorithm, improvements have been made in two areas:

First, it introduces a new information sharing mechanism: each particle's worst location and the entire swarm's worst location are also recorded. This new mechanism causes the particles to move in the direction opposite to the particles' worst locations and the swarm's worst location. In this way, the global search space can be expanded and the probability of particles falling into local optima can be reduced.

Second, it introduces a particle competition strategy. This competitive strategy makes the worst information compete with the best, which not only reduces the probability of falling into a local optimum but also increases the probability of converging to the optimal solution, while the time complexity is only double that of the traditional PSO algorithm.

In summary, in this paper, we first investigated the performance of PSO by comparing it with GA and verified that PSO performs much better than GA. Then, we analyzed the shortcoming of the standard PSO. With the guidance of the analysis, we proposed a new PSO algorithm (IPSO). Our experimental results showed that IPSO performs better than the traditional PSO and the GA algorithm on benchmark functions. Especially for difficult functions, our IPSO significantly improves the success rate of finding the optimal solution.

Data Availability

The benchmark function data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the National Natural Science Foundation of China (U1911205).