Abstract
Optimization problems arise constantly in real life. They can be divided into single-objective and multiobjective problems. A single-objective optimization problem has one objective function, whereas a multiobjective optimization problem has several conflicting objective functions whose trade-offs form a Pareto set; solving multiobjective problems is therefore a challenging task. This article proposes a multiobjective particle swarm optimization algorithm that combines a cosine distance measurement mechanism with a novel game strategy. The cosine distance measurement mechanism is adopted to update the Pareto optimal set in the external archive. At the same time, a candidate set is established so that solutions deleted from the external archive can be effectively replaced, which helps maintain the size of the archive and improves the convergence and diversity of the swarm. To strengthen the selection pressure on the leader, a global leader selection strategy is proposed that integrates the game strategy with the cosine distance mechanism. In addition, mutation is used to maintain swarm diversity and prevent premature convergence. The performance of the proposed competitive multiobjective particle swarm optimizer was verified by benchmark comparisons with several state-of-the-art multiobjective optimizers, including seven multiobjective particle swarm optimization algorithms and seven multiobjective evolutionary algorithms. Experimental results demonstrate the promising performance of the proposed algorithm in terms of optimization quality.
1. Introduction
In fields such as engineering, aviation scheduling, and optimal control, most optimization problems are multiobjective optimization problems (MOPs) [1]. MOPs differ from single-objective optimization problems in that several objective functions, which typically conflict with or influence one another, must be optimized simultaneously [2]. This means that not all objective values can be optimal at once: the optimal solution for one objective may be the worst for another. Therefore, a set of trade-off solutions, known as the Pareto optimal set, is adopted to represent the best possible compromises among objectives in MOPs. Practical problems are typically high-dimensional, nonlinear, and strongly constrained, so classic optimization algorithms (the conjugate gradient method [3], Newton's method [4], the simplex algorithm [5], etc.) can no longer solve MOPs effectively. With the development of science and technology, the emergence of intelligent control has brought multiobjective optimization to a more advanced stage. Optimal control can be approached along different paths; for example, the deformable MEMS device introduced in [6] has a positive effect on improving optimal control. In addition, intelligent optimization algorithms, which belong to the family of bionic algorithms, have attracted the attention of researchers. Among them, the particle swarm optimization (PSO) algorithm [7], with its simple operation, fast convergence, wide range of application, and few parameters to set, has become a focus of research.
PSO, proposed by Kennedy and Eberhart in 1995, is an evolutionary computation method based on swarm intelligence that originated in the simulation of complex adaptive systems. It was inspired by the social behavior of animal groups such as bird flocks. In PSO, individuals are called particles, and each particle represents a potential solution. The swarm consists of a group of particles flying through the search space in search of the optimal solution, much as birds search for food. A particle's position change in the search space is driven by its social and psychological intention to outperform other individuals: it communicates with other individuals and changes its structure and behavior through a process of "learning" or "accumulating experience." Changes in a particle's velocity and position are therefore influenced by the experience of other particles.
With the development of intelligent algorithms, the relative simplicity and practical success of single-objective PSO have motivated researchers to extend it from single-objective optimization problems to MOPs. In 2002, Coello et al. extended PSO from a single objective to multiple objectives and used it to solve MOPs for the first time [8]. In research on multiobjective particle swarm optimization (MOPSO), at least two fundamental issues must be addressed. The first is how to select an excellent global leader as the learning sample that guides the flight of all particles in the population. Because the leader strongly influences the search direction, randomly selecting global learning samples from the external archive may trap the algorithm in a local optimum. At present, most dominance-based MOPSOs use an external archive to store nondominated solutions, so the maintenance and update of the external archive are also very important. The second issue is how to balance the convergence and diversity of the swarm. This balance is crucial to the performance of MOPSOs, because PSO-based multiobjective optimizers are prone to being trapped in a local optimum (or one of many local fronts) of MOPs due to their fast convergence.
In this article, a novel multiobjective particle swarm optimization algorithm based on a cosine distance mechanism and a game strategy, called GCDMOPSO, is proposed. To maintain the update mechanism of the external archive, the cosine distance is used to delete the worst particles in the archive. At the same time, the same number of particles is selected from a candidate set to replace the deleted particles, keeping the external archive updated dynamically. The main contributions of this article are as follows:
(1) Dynamic maintenance of external archive updates. After each iteration of the algorithm, the nondominated solutions selected from the candidate set are added to the external archive. When the number of nondominated solutions in the archive exceeds its maximum size, the cosine distance is used to compare how crowded the nondominated solutions are, and the most crowded solution is deleted. The crowding degree of the remaining solutions is then updated (i.e., after deleting the most crowded solution, the cosine distances of all other solutions are recalculated). This method achieves better diversity preservation.
(2) The method by which individuals are selected. During the update process, the fitness value of each individual is calculated through nondominated sorting, which may produce individuals with the same ranking value. Individuals with the same ranking value are placed in the candidate set, the Euclidean distance between each individual and the coordinate origin is calculated, and these distances are sorted in ascending order. To maintain the external archive dynamically, whenever particles are deleted from the archive, the same number of individuals is moved into it from the candidate set.
(3) The selection of the global leader. Based on the recently developed competitive swarm optimizer combined with a game mechanism, a novel global leader selection strategy based on the game mechanism is proposed. Two nondominated solutions are randomly selected from the external archive, and their cosine distances are compared; the winner is selected as the global leader that guides the flight of the other particles. This keeps all obtained solutions converging along the true Pareto front.
The remainder of this article is structured as follows. Section 2 briefly describes the related definitions of MOPs and MOPSOs, as well as the related works from which the main ideas of the new algorithm are drawn. The details of the proposed GCDMOPSO are described in Section 3. Section 4 presents the experimental study, in which GCDMOPSO is compared with selected MOPSOs and MOEAs. Finally, conclusions are drawn in Section 5.
2. Background
2.1. MOPs
In this section, the fundamentals of MOPs are presented. The mathematical form of a MOP is described as follows:
$$\begin{aligned}
\min\ & F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right) \\
\text{s.t.}\ & g_i(x) \le 0, \quad i = 1, 2, \ldots, p, \\
& h_j(x) = 0, \quad j = 1, 2, \ldots, q,
\end{aligned}$$
where $x = (x_1, x_2, \ldots, x_D)$ is the decision vector of dimension D, $x \in X$, and X is the decision space; $x_d \in [l_d, u_d]$, d = 1, 2, …, D, where $u_d$ and $l_d$ are the upper and lower bounds of each dimension; $F(x)$ is the objective vector, $F(x) \in Y$, and Y is the objective space; m is the total number of optimization objectives; $g_i(x) \le 0$ is the inequality constraint; $h_j(x) = 0$ is the equality constraint. These two constraints determine the feasible region of the solution.
MOPs differ from single-objective problems, so the same problem-solving approach cannot be adopted. No single solution of a MOP can achieve the optimal result for all objectives at the same time, and solutions cannot be compared directly because the objective functions differ. Therefore, when solving a MOP, a set of solutions is usually obtained, and these solutions trade off the different objectives in different ways. The solutions in this set are called nondominated solutions or Pareto optimal solutions. The related concepts are introduced in detail below.
Definition 1 (Pareto dominance). Let $x_a, x_b \in X$ be two feasible solutions of the MOP. $x_a$ dominates $x_b$, expressed as $x_a \prec x_b$, if and only if
$$\forall i \in \{1, \ldots, m\}: f_i(x_a) \le f_i(x_b) \quad \text{and} \quad \exists j \in \{1, \ldots, m\}: f_j(x_a) < f_j(x_b).$$
Definition 2 (Pareto optimality). $x^* \in X$ is a Pareto optimal solution if and only if the following condition is satisfied:
$$\nexists\, x \in X: x \prec x^*.$$
That is, there is no solution in X better than $x^*$, so $x^*$ is the optimal solution in X, also called a nondominated solution or noninferior solution.
Definition 3 (Pareto optimal set). For a MOP, the optimal solution set can be defined as follows:
$$PS = \{x^* \in X \mid \nexists\, x \in X: x \prec x^*\}.$$
Definition 4 (Pareto optimal front). The surface consisting of the objective function values corresponding to all solutions in the Pareto optimal set is called the Pareto front:
$$PF = \{F(x^*) \mid x^* \in PS\}.$$
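The dominance relation and nondominated filtering above can be sketched in a few lines of Python (a minimal illustration assuming minimization of all objectives; the function names are ours, not the article's):

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (Definition 1, minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def nondominated(F):
    """Indices of the nondominated objective vectors in F (Definitions 2 and 3)."""
    F = np.asarray(F)
    keep = []
    for i in range(len(F)):
        if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i):
            keep.append(i)
    return keep
```

For example, among the points (1, 4), (2, 3), (3, 3), and (0, 5), only (3, 3) is dominated (by (2, 3)), so the nondominated indices are 0, 1, and 3.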
2.2. Multiobjective Particle Swarm Optimization
MOPSO is an improvement of PSO. In PSO, the individual birds in the population are abstracted as massless particles. Each particle i has its own velocity and position, expressed as $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ and $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, respectively. The search space is where the particles search for food, and the food is considered the optimal solution. Particles are updated according to the following formulas:
$$v_i(t+1) = \omega v_i(t) + c_1 r_1 \left(pbest_i - x_i(t)\right) + c_2 r_2 \left(gbest - x_i(t)\right), \tag{6}$$
$$x_i(t+1) = x_i(t) + v_i(t+1). \tag{7}$$
The right side of equation (6) consists of three parts. The first part is the inertia term, where $\omega$ is the inertia weight. Its size determines how much of the current velocity the particle inherits: a large $\omega$ enhances the global search capability of the algorithm, while a small $\omega$ improves its local search capability. $\omega$ is generally limited to a random number less than 1. The second part is the individual cognition term, which represents the movement of the individual toward the best position found in its own flight history. Here $pbest_i$ represents the individual's best historical position, $r_1$ is a random number uniformly distributed in the interval (0, 1), and $c_1$ is a learning factor representing the degree of individual learning. The third part is the social cognition term, which moves the particle toward the global optimal position. $gbest$ represents the global optimal position, $r_2$ is a random number uniformly distributed in the interval (0, 1), and $c_2$ is a learning factor; $c_1 = c_2 = 2$ is usually taken. The coordination of these three parts determines the overall performance of the algorithm.
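Equations (6) and (7) can be sketched as follows (a minimal illustration; the default parameter values are common choices, not values prescribed by the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One PSO update: equation (6) for velocity, equation (7) for position."""
    r1 = rng.random(x.shape)  # uniform random in (0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```

Note that when the particle already sits at both its personal and the global best position, only the inertia term remains and the velocity simply decays by the factor w.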
As research has deepened, many scholars have extended PSO to MOPSO so that the algorithm is better suited to solving MOPs. In MOPs, the optimal solution is not unique because multiple objectives must be satisfied. Compared with PSO, MOPSO differs not only in how the historical best position and the global leader are selected under multiple objectives but also in how they are stored. Therefore, MOPSOs use an external archive mechanism to solve the storage problem, saving the nondominated solutions generated during the search over the entire swarm. The nondominated solutions in the external archive are not dominated by any other particle in the archive. All nondominated solutions in the external archive should therefore meet the two following requirements: (a) the nondominated solutions in the archive have no mutual dominance relationship, so none of them can be said to be better than another; (b) a newly introduced particle must be stronger than the solutions it replaces, and the weaker solutions in the original archive should be eliminated.
2.3. Existing MOPSOs
The first multiobjective PSO variant was proposed by Coello et al. [9]. The authors incorporated the concept of Pareto dominance into PSO: the local optima and the global optima in the swarm were determined by the Pareto dominance principle. For the first time, a secondary repository (i.e., the external archive) was used to store the nondominated solutions obtained after each iteration, and this was the first time PSO had been used to solve MOPs. Compared with classic MOEAs such as NSGA-II [10] and PAES [11], the first MOPSO was competitive in solving MOPs, but it was unable to solve MOPs with complex landscapes. To address this issue, Sierra and Coello [12] proposed an improved PSO-based multiobjective optimizer, in which Pareto dominance and a crowding factor were used to select a list of available leading solutions; the swarm was divided into three subswarms, and different mutation operators were applied to the subswarms, which were divided by the user in advance. In addition, the algorithm used a dominance-based rule to fix the size of the external archive. Experimental results show that the improved optimizer is more competitive on MOPs with multiple local fronts.
A speed-constrained MOPSO called SMPSO was proposed by Nebro et al. [13], in which the velocities of all particles were restricted in order to tackle MOPs with multimodal landscapes. SMPSO generated new effective particle positions when the velocities became too large. Its other features included polynomial mutation as a turbulence factor and an external archive holding the nondominated solutions found during the search. Without such constriction, many MOPSOs cannot solve MOPs effectively because particle velocities grow too large.
The above MOPSOs used only a single search strategy to update particle velocities. Lin et al. therefore proposed a novel MOPSO based on multiple search strategies [14], which used a decomposition method to transform a MOP into a set of aggregation problems and then assigned each particle to optimize one aggregation problem. The algorithm designed two search strategies to update the velocity of each particle. All nondominated solutions visited by the particles were preserved in an external archive, and an evolutionary search strategy was further executed on the archive to exchange useful information. These multiple search strategies enable this MOPSO to handle various MOPs more effectively.
In contrast to MOPSOs in which the global optimal solution is determined by dominance relations, Zhang and Li used the framework of MOEA/D [15] to embed a decomposition mechanism into PSO-based multiobjective optimization for the first time and proposed a MOPSO that decomposes a MOP into a number of single-objective optimization problems [16]. The algorithm used the PSO search method instead of genetic operators. Later, an improved version called SDMOPSO [17] was proposed by Al Moubayed et al. In SDMOPSO, the global optima were selected only from the neighborhood of each particle, and crowding archives were used to preserve the diversity of swarm leaders. Dai et al. divided the solution space into multiple subspaces and retained only one optimal solution in each subspace so that the nondominated solutions could be evenly distributed; this MOPSO was based on objective space decomposition [18]. Also based on the decomposition method, Martínez and Coello proposed a multiobjective optimizer called dMOPSO [19], in which the global leader was determined according to a scalar aggregation value, and a memory reinitialization strategy was applied when a particle's age reached a certain value. The main aim of this approach was to preserve diversity and to avoid being trapped in local fronts. Although dMOPSO has a lower computational cost than most other MOPSOs, which often need to maintain an archive, it has difficulty converging to the true Pareto front on complex models.
In 2020, Alkebsi and Du proposed a novel MOPSO with an archive update mechanism based on the nearest-neighbor method, called MOPSONN [20]. In the early stage of this algorithm, the external archive is updated based on a nearest-distance measurement. In later generations, two new rules, the maximum-cost rule and the cost-sum rule, are used to update the archive. These two archive update strategies jointly maintain the nondominated solutions in the archive.
In addition, a few scholars have improved MOPSOs from the aspect of parameter settings to make them more effective [21]. In view of the analysis of the existing algorithms above, this article combines a cosine distance update mechanism with a meshing strategy and proposes a novel multiobjective game particle swarm optimization algorithm based on the cosine distance update mechanism, which effectively improves convergence and diversity when solving MOPs. The following section describes the proposed algorithm in detail.
2.4. Acronyms in the GCDMOPSO
In order to make the article easier to read, a table of acronyms is provided. The specific contents are shown in Table 1.
3. The Proposed GCDMOPSO
In this section, the details of the proposed GCDMOPSO are introduced. The algorithm generates a new population from randomly initialized individuals. The particles of this population are assigned to levels according to their dominance relationships. The first-level individuals produced by nondominated sorting flow into the candidate set, and a new external archive is then created. Based on grid technology and the cosine distance strategy, the individuals in the candidate set are screened to dynamically maintain the external archive. At the same time, the nondominated solutions in the external archive are screened through the game strategy to act as the global leader that guides the flight of the other individuals. The algorithm then updates the velocities and positions of the swarm according to equations (6) and (7).
3.1. Selection of Introduced Particles
An individual should choose only suitable role models as learning objects, and only outstanding individuals should be selected into the external archive as leaders to guide the updates of the others. In previous MOPSOs, the program calculated the fitness value of each individual and randomly selected individuals with the same ranking value as candidate solutions to enter the external archive and guide the other individuals. Because the first-level ranking produced after an iterative update may contain many individuals with the same ranking value, this random selection cannot choose candidate solutions well. This article improves this step. As shown in Figure 1, a candidate set is added. The fitness value of each individual is calculated, and the first-level individuals flow into the candidate set. The candidate set is treated as a grid, and the Euclidean distance from the fitness value of each individual to the coordinate origin is computed. These distances are then sorted in ascending order, and the individuals closest to the origin are selected into the external archive. If the nondominated solutions in the external archive have not reached its maximum size, all individuals in the candidate set enter the archive according to their fitness ranking values and are stored; if the archive has reached its maximum size, the individuals in the archive with the smaller cosine distances are eliminated.

In other words, to keep the number of particles in the external archive stable, whenever a certain number of particles are deleted, the same number of particles is added from the candidate set.
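The candidate-set screening of Section 3.1 (ascending Euclidean distance from the objective vector to the coordinate origin) can be sketched as follows; the function name and array layout are our own illustration:

```python
import numpy as np

def select_from_candidates(cand_F, n_needed):
    """Pick the n_needed candidate solutions whose objective vectors lie closest
    to the coordinate origin (ascending Euclidean distance)."""
    cand_F = np.asarray(cand_F, float)
    dist = np.linalg.norm(cand_F, axis=1)  # Euclidean distance to the origin
    order = np.argsort(dist)               # ascending order
    return order[:n_needed]
```

For example, among the objective vectors (3, 4), (1, 1), and (0, 2), with distances 5, 1.41, and 2, the two closest candidates are indices 1 and 2.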
3.2. Maintenance and Update of External Archives
The archiving strategy is an important part of MOPSOs. Good maintenance can improve both the search efficiency and the convergence of the algorithm. This article adopts an external archive to store the nondominated solutions generated during the entire iterative update. The maintenance of the external archive mainly relies on the cosine distance measurement mechanism. This mechanism is usually used in text classification; since text space and multiobjective space are both multidimensional, they share certain similarities, so the cosine distance measurement mechanism can be applied to multiobjective optimization. If a solution is represented by a vector, each dimension of the vector can be regarded as a single objective, and the cosine distance between objective vectors can be used to determine the density relationship between individuals.
Definition 5 (weight ratio). Suppose that the population size is N and the objective vector of particle i is expressed as $F_i = (f_{i1}, f_{i2}, \ldots, f_{im})$. For particle i, the weight ratio of the objective function value in the j-th dimension is as follows:
$$w_{ij} = \frac{f_{ij}}{\sum_{k=1}^{m} f_{ik}}.$$
Definition 6 (cosine distance). Suppose that the objective vector of any particle is expressed as $F_i = (f_{i1}, f_{i2}, \ldots, f_{im})$; according to the cosine formula, the cosine distance between two objective vectors $F_a$ and $F_b$ is
$$d_{\cos}(F_a, F_b) = 1 - \frac{\sum_{j=1}^{m} f_{aj} f_{bj}}{\sqrt{\sum_{j=1}^{m} f_{aj}^2}\, \sqrt{\sum_{j=1}^{m} f_{bj}^2}}.$$
In this article, in order to better control the size of the external archive, its size is set to 200. As shown in Figure 2, the objective space is divided into k subregions. The subregion with the highest density is then selected, and the cosine distance between each nondominated solution in each subspace and its neighboring particles is compared. The smaller the cosine distance between a nondominated solution and its neighbors, the greater the density of the nondominated solution and the poorer its distribution.
The GCDMOPSO calculates the cosine distance between each nondominated solution and its neighboring particles according to Definitions 5 and 6 and sorts the cosine distances in ascending order. The nondominated solution with the minimum cosine distance, the minimum angle, and the maximum density is then deleted dynamically. Only one nondominated solution is deleted at a time; the cosine distances of the remaining nondominated solutions are then recalculated, and the solution with the smallest cosine distance is deleted next. The solid black dots are the remaining nondominated solutions, and the hollow circles are the deleted individuals, with a deletion rate of 40%. At the same time, the same number of individuals is selected from the candidate set to replace the nondominated solutions deleted from the external archive, maintaining the archive's update.
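The one-at-a-time archive pruning described above can be sketched as follows (a minimal illustration; we take "cosine distance" to mean one minus the cosine similarity, consistent with "smaller distance, smaller angle, higher density"):

```python
import numpy as np

def cosine_distance(fa, fb):
    """Cosine distance between two objective vectors (1 - cosine similarity)."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return 1.0 - fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb))

def prune_archive(F, max_size):
    """Iteratively delete the most crowded member: the one whose cosine distance
    to its nearest neighbor is smallest. Distances are recomputed after each
    deletion, matching the one-at-a-time scheme of Section 3.2."""
    F = [np.asarray(f, float) for f in F]
    idx = list(range(len(F)))
    while len(idx) > max_size:
        # nearest-neighbor cosine distance for each remaining member
        nn = [min(cosine_distance(F[i], F[j]) for j in idx if j != i)
              for i in idx]
        idx.pop(int(np.argmin(nn)))  # remove the most crowded solution
    return idx
```

For example, with archive vectors (1, 0), (0.9, 0.1), and (0, 1) and a maximum size of 2, the first two vectors form the crowded pair, so one of them is removed and the well-separated (0, 1) survives.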
Using leaders to guide the optimization process is an effective way to design MOPSOs. Among the many strategies currently available, the search is guided in directions that prompt particles to explore potentially promising areas. The cosine distance strategy proposed in this article is quite different from the random strategy used previously. Figure 3 shows a schematic comparison between the cosine distance strategy and the random strategy. First, both strategies are run 30 times on all test problems (ZDT1–ZDT4 and ZDT6, DTLZ1–DTLZ5, and UF1–UF10). The evaluation indicator values of the 30 runs are sorted in descending order into 30 levels, and the indicator values at each level are averaged over all test problems. The ordinate indicates the average of all evaluation indicators at each level, and the abscissa indicates the 30 runs of each strategy. As Figure 3 shows, at the same level the average of the cosine distance strategy is significantly better than that of the random strategy, which fully illustrates the feasibility of the cosine distance strategy.
Figure 4 shows how the GCDMOPSO uses the cosine distance strategy to track the evolution state. Taking ZDT1 as an example, GCDMOPSO was compared with seven state-of-the-art MOPSOs and seven classic MOEAs. Figure 4(a) shows the convergence trajectories of GCDMOPSO and the seven MOPSOs on ZDT1; Figure 4(b) shows the convergence trajectories of GCDMOPSO and the seven MOEAs on ZDT1. The experimental results indicate the promising convergence speed of the proposed GCDMOPSO in comparison with these algorithms on ZDT1.
As a further observation, Figure 5 presents the nondominated set associated with the best IGD value among 30 runs obtained by GCDMOPSO, compared with those of the MOPSOs and MOEAs on the multiobjective DTLZ1 problem. The nondominated sets were obtained by dMOPSO, MOPSO, NMPSO, SMPSO, MOPSOCD, MPSO/D, MMOPSO, NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, IBEA, and GCDMOPSO, respectively. The experimental results show that the proposed GCDMOPSO outperforms the compared MOPSOs and MOEAs in terms of both convergence and diversity on DTLZ1.




3.3. Selection Strategy of Global Leader
In MOPSOs, each individual carries position and velocity information and can exchange information with other individuals. Each individual learns from its historical best position (pbest) and the global best position (gbest), and its position and velocity are then updated through equations (6) and (7) in Section 2 to produce the next generation. The choice of the global best position (gbest) is closely related to the distribution of the nondominated solutions: if few nondominated solutions are densely distributed in a certain area, the sparsely distributed particles are more likely to become the global optimal particles. In order to strengthen the selection pressure of gbest, the game update mechanism is incorporated, and a novel game-based global leader selection strategy is proposed. The original game swarm optimizer theory divides the population into two parts, game winners and game losers; the losers learn from the winners, and the population is updated iteratively on this basis. The specific game process is described in [30]. The game strategy proposed in this article differs from the original game mechanism: here, the game is played within the external archive. For each particle to be updated, two individuals are randomly selected from the external archive; the winner of the game becomes the leader and guides the particle's search for the optimal set, while the winner itself maintains its original velocity and direction. The specific update process is shown in Figure 6.

The game individuals in this strategy are selected through nondominated sorting and grid optimal distance. The outcome of the game is determined by the cosine distance between the game individuals: the winner acts as the global optimal individual and guides the other individuals in the population. In each game, the individual k to be updated randomly selects two nondominated solutions a and b from the external archive. The two solutions compete through the cosine distance, and the one with the smaller cosine distance wins. As shown in the pseudocode, when the cosine distance between nondominated solution a and the individual k to be updated is smaller, solution a guides the update of individual k's velocity and position. The update formulas are as follows:
$$v_k(t+1) = r_1 v_k(t) + r_2 \left(x_w(t) - x_k(t)\right),$$
$$x_k(t+1) = x_k(t) + v_k(t+1).$$
In the above formulas, $r_1$ and $r_2$ are vectors randomly generated in [0, 1], $x_w(t)$ is the position of the winner of the game, $x_k(t)$ is the current position of the particle, and $v_k(t)$ is the current velocity of the particle.
The whole process, from selecting solutions in the external archive to comparing their cosine distances, is called a game. Because the selected nondominated solutions are random, the individual to be updated does not know in advance which leader will be chosen. The attributes of the leader determine the effect of the individual's update, and a leader with better attributes leads the update better. Since the effect of an individual's update depends entirely on the leader, the process is called a game.
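The pairwise game can be sketched as follows. This is an illustrative sketch: the winner criterion follows the text (smaller cosine distance to the particle's objective vector wins), and the velocity update mirrors the reconstructed formulas above; the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def cos_dist(fa, fb):
    """Cosine distance (1 - cosine similarity) between two objective vectors."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return 1.0 - fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb))

def game_update(xk, vk, fk, archive_X, archive_F):
    """One game: draw two archive members a and b at random; the one with the
    smaller cosine distance to particle k's objective vector fk wins and
    becomes the leader guiding the update v' = r1*v + r2*(x_w - x)."""
    a, b = rng.choice(len(archive_X), size=2, replace=False)
    winner = a if cos_dist(archive_F[a], fk) < cos_dist(archive_F[b], fk) else b
    r1 = rng.random(xk.shape)
    r2 = rng.random(xk.shape)
    v_new = r1 * vk + r2 * (archive_X[winner] - xk)
    return xk + v_new, v_new
```

The particle to be updated thus moves toward whichever randomly drawn archive member wins the cosine-distance comparison.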
3.4. Steps of the GCDMOPSO
For MOPs, the objectives are mutually constrained. In MOPSOs, blindness is inevitable when maintaining the external archive and selecting the global optimum. This article proposes a novel strategy for external archive updates and global leader selection. The main flow chart is shown in Figure 7, and the main steps of GCDMOPSO are as follows:
Step 1. Initialize the population and set the acceleration constants $c_1$ and $c_2$ and the other parameters.
Step 2. Calculate the fitness value of each individual and perform nondominated sorting by comparing the fitness in the current iteration with the best historical fitness.
Step 3. Determine whether the termination conditions are met. If so, output the results and terminate the algorithm; otherwise, continue to the next step.
Step 4. Create the candidate set. Calculate the Euclidean distance from the coordinate origin to each individual, and select the individuals with shorter Euclidean distances into the external archive.
Step 5. Create the external archive and delete the worst solutions using the cosine distance measurement mechanism. The candidate set serves as a storage mechanism for the screened advantageous individuals.
Step 6. Select the global optimal sample. Using roulette selection combined with the game update mechanism, apply the game strategy that incorporates the cosine distance measurement mechanism to select the global optimal sample.
Step 7. Update the positions and velocities according to formulas (6) and (7).
Step 8. Evaluate and rank the fitness values of the current individuals.
Step 9. Set gencount = gencount + 1 and return to Step 3.
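The overall loop can be sketched as a compact, self-contained skeleton on a toy biobjective problem (Schaffer's function). This is an illustrative simplification, not the article's implementation: archive truncation here is a crude recency-based stand-in for the cosine-distance pruning, and the leader is drawn at random rather than by the game strategy:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):  # toy biobjective problem (Schaffer): minimize both objectives
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

def dominates(fa, fb):
    return np.all(fa <= fb) and np.any(fa < fb)

def run(pop=20, dim=1, iters=50, max_arch=10):
    X = rng.uniform(-4, 4, (pop, dim))
    V = np.zeros((pop, dim))
    F = np.array([f(x) for x in X])
    pbest_X, pbest_F = X.copy(), F.copy()
    archive = []  # list of (x, f) nondominated pairs
    for _ in range(iters):
        # Steps 4-5: refresh archive with current nondominated solutions
        for x, fx in zip(X, F):
            if not any(dominates(af, fx) for _, af in archive):
                archive = [(ax, af) for ax, af in archive
                           if not dominates(fx, af)]
                archive.append((x.copy(), fx.copy()))
        if len(archive) > max_arch:       # crude truncation stand-in for the
            archive = archive[-max_arch:]  # cosine-distance pruning
        # Steps 6-7: pick a leader per particle and update via (6) and (7)
        for i in range(pop):
            gx = archive[rng.integers(len(archive))][0]
            r1, r2 = rng.random(dim), rng.random(dim)
            V[i] = 0.4 * V[i] + 2.0 * r1 * (pbest_X[i] - X[i]) \
                 + 2.0 * r2 * (gx - X[i])
            X[i] = np.clip(X[i] + V[i], -4, 4)
            F[i] = f(X[i])
            if dominates(F[i], pbest_F[i]):  # Step 8: update personal best
                pbest_X[i], pbest_F[i] = X[i].copy(), F[i].copy()
    return archive
```

Running this sketch yields an archive of mutually nondominated solutions that, for this problem, cluster around the known Pareto optimal interval x ∈ [0, 2].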

4. Experimental Study
4.1. Test Problems
Comprehensive and diverse test problems were employed to assess the performance of GCDMOPSO. First, the ZDT test problems were adopted. However, the ZDT series alone cannot demonstrate the full performance of GCDMOPSO, so the more difficult UF test problems, with their complex characteristics, were also used. To further test the performance of GCDMOPSO on MOPs with three objectives, the DTLZ1–DTLZ5 and UF8–UF10 test problems are used in this article. These test problems cover most of the challenges in this area, such as many local Pareto fronts, convergence bias, concavity, and discontinuity. The relevant settings of these test problems are given in Table 2.
Here, N denotes the population size, M the number of objectives, D the dimension of the decision variables, and FEs the maximum number of fitness evaluations. For a fair comparison, all relevant parameters of the comparison algorithms are set according to the suggestions in their original references. The population size N of each algorithm is set to 200 for both the two-objective and three-objective problems, and the maximum number of fitness evaluations is fixed at 10000. For ZDT1–ZDT3 and all UF test problems, 30 decision variables are used; ZDT4 and ZDT6 use 10 decision variables, DTLZ1 uses 7 decision variables, and DTLZ2–DTLZ5 use 12 decision variables. To draw statistical conclusions, the number of independent runs of each test experiment is set to 30. For detailed information about the ZDT, UF, and DTLZ test problems, the reader is referred to [31–33], respectively.
4.2. Performance Measures
The goal of MOPs is to find a uniformly distributed solution set that is as close to the true Pareto front as possible. To compare with other algorithms, this article uses the inverted generational distance (IGD) [22] to evaluate the performance of GCDMOPSO. This performance index reflects not only the convergence of the algorithm but also the distribution of the final solution set. The true Pareto fronts for computing IGD were downloaded from http://jmetal.sourceforge.net/problems.html.
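IGD averages, over all sampled points of the true Pareto front, the distance to the nearest obtained solution, so lower values indicate both better convergence and better distribution. A minimal sketch of the computation (illustrative, not the jMetal implementation):

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted generational distance: mean Euclidean distance from each
    point of the true Pareto front sample to its nearest obtained solution."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_set, dtype=float)
    # Pairwise distance matrix of shape (len(ref), len(obt)).
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return d.min(axis=1).mean()

# A perfect match gives IGD = 0.
front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(igd(front, front))  # → 0.0
```

Shifting every obtained point away from the front by a fixed distance increases IGD by exactly that distance, which makes the index easy to sanity-check.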
4.3. Experimental Settings
In the experiment, to verify the performance of GCDMOPSO convincingly, it was compared with seven state-of-the-art MOPSOs (i.e., dMOPSO [19], MOPSO [9], NMPSO [23], SMPSO [13], MOPSOCD [24], MPSO/D [18], and MMOPSO [14]) and seven classic MOEAs (i.e., NSGA-II [10], NSGA-III [25], MOEA/D [15], MOEAIGDNS [26], SPEAR [27], SPEA2 [28], and IBEA [29]), respectively. For a fair comparison, all relevant parameters of the comparison algorithms are set according to their original references, as shown in Table 3. In Table 3, pc and pm are the crossover probability and the mutation probability, respectively; ηc and ηm are the distribution indexes of SBX and polynomial mutation, respectively; F and CR are the control parameters of differential evolution; T is the neighborhood size in MOEA/D; div is the number of grid divisions; and ω, c1, and c2 are the parameters of the velocity update equation used in the MOPSOs. The population size N of each algorithm is set to 200 for both two-objective and three-objective problems, the maximum number of fitness evaluations is fixed at 10000, and the size of the external archive is set equal to N. To draw statistical conclusions, the number of independent runs of each test experiment is set to 30. The average and standard deviation (std) of the IGD values are collected in Tables 4 and 5 for performance comparison. In addition, to determine statistical significance, a Wilcoxon rank-sum test was carried out at α = 0.05 on the differences between the results obtained by GCDMOPSO and those obtained by the other algorithms. All experimental results were obtained on a PC with a 2.3 GHz CPU and 8 GB of memory. The source codes of all competing algorithms are provided by the platform PlatEMO [34].
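The significance labels used in the result tables can be produced as follows. This is an illustrative sketch with fabricated, clearly hypothetical IGD samples (the real comparison uses the 30-run IGD values collected in Tables 4 and 5):

```python
import numpy as np
from scipy.stats import ranksums  # Wilcoxon rank-sum test

rng = np.random.default_rng(0)
# Hypothetical IGD samples from 30 independent runs of two algorithms.
igd_algo_a = rng.normal(0.010, 0.002, 30)
igd_algo_b = rng.normal(0.030, 0.005, 30)

stat, p = ranksums(igd_algo_a, igd_algo_b)
if p < 0.05:  # significant difference at alpha = 0.05
    mark = "+" if igd_algo_a.mean() < igd_algo_b.mean() else "-"
else:         # statistically similar
    mark = "="
print(mark)
```

For IGD, smaller is better, so "+" here marks the first algorithm as significantly better; in the article's tables the symbols are interpreted relative to GCDMOPSO.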
4.4. Comparisons of GCDMOPSO with Seven State-of-the-Art MOPSOs
To compare GCDMOPSO with the seven MOPSOs, Table 4 reports the average and standard deviation of the IGD values on ZDT1–ZDT4 and ZDT6, DTLZ1–DTLZ5, and UF1–UF10. Moreover, the Wilcoxon rank-sum test is adopted at a significance level of 0.05, where the symbols “+,” “−,” and “=” in the last row of the tables indicate that the result is significantly better than, significantly worse than, and statistically similar to that obtained by GCDMOPSO, respectively. The best average for each test instance is shown in bold.
It can be directly observed that the performance of the proposed GCDMOPSO is significantly better than that of the seven compared MOPSOs on the benchmarks, that is, dMOPSO, MOPSO, NMPSO, SMPSO, MOPSOCD, MPSO/D, and MMOPSO. Of all 20 test instances, GCDMOPSO achieved statistically significantly better IGD values on 12, far more than any of the competing MOPSOs: dMOPSO, MOPSO, and MOPSOCD obtain no best IGD values, MPSO/D obtains one, NMPSO and SMPSO obtain two each, and MMOPSO obtains five.
For the two-objective ZDT2, ZDT4, and ZDT6, the proposed GCDMOPSO can obtain a set of nondominated solutions that approximates the entire Pareto front well and maintains a good distribution. For the three-objective DTLZ1, the proposed GCDMOPSO can still achieve competitive performance, but on the three-objective DTLZ2–DTLZ5 the performance of GCDMOPSO is less ideal. It is worth noting that MMOPSO performed best on the two-objective ZDT1 and ZDT3, because it adopts the crossover and mutation operators of MOEAs in addition to the updating strategies of PSO. On UF1–UF10, the performance of GCDMOPSO is far better than those of the other comparison algorithms. Generally speaking, compared with the existing MOPSOs, the proposed GCDMOPSO shows the overall best performance. In addition, box plots of the IGD values over the 30 independent runs of GCDMOPSO and the comparison algorithms are shown in Figure 8 (1, 2, 3, 4, 5, 6, 7, and 8 represent dMOPSO, MOPSO, NMPSO, SMPSO, MOPSOCD, MPSO/D, MMOPSO, and GCDMOPSO, respectively). As shown in Figure 8, GCDMOPSO recorded the minimum values on ZDT2, ZDT4, ZDT6, DTLZ1, UF1–UF3, UF5–UF7, and UF9–UF10. It can be clearly seen from Figure 8 that GCDMOPSO obtains better nondominated solutions than the other MOPSOs. The results are consistent with the quantitative analysis in Table 4.

From the above empirical results, we can conclude that, compared with the existing MOPSOs, GCDMOPSO has promising application prospects for solving MOPs.
4.5. Comparisons of GCDMOPSO with Seven Competitive MOEAs
Table 5 presents the mean and standard deviation of the IGD values of NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, and IBEA on ZDT1–ZDT4 and ZDT6, DTLZ1–DTLZ5, and UF1–UF10, where the Wilcoxon rank-sum test is also adopted and the best mean for each test instance is shown in bold. It can be observed that the performance of the proposed GCDMOPSO is significantly better than those of the seven compared MOEAs on the benchmarks. According to the results, GCDMOPSO achieves the statistically significantly best performance on 12 of the 20 test instances.
For the two-objective ZDT1, ZDT2, ZDT4, and ZDT6 and for UF1–UF4 and UF7, GCDMOPSO performs best among the compared algorithms. For example, NSGA-III and MOEAIGDNS obtain no best IGD values, MOEA/D, SPEAR, and SPEA2 obtain one each, NSGA-II obtains three, and IBEA obtains four. For the three-objective DTLZ series, the compared MOEAs are clearly better than GCDMOPSO, because genetic operators are more suitable for solving MOPs with many local Pareto fronts; this may be the main reason why genetic operators are so widely adopted in existing MOEAs.
In addition, box plots of the IGD values over the 30 independent runs of GCDMOPSO and the comparison algorithms are shown in Figure 9 (1, 2, 3, 4, 5, 6, 7, and 8 represent NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, IBEA, and GCDMOPSO, respectively). As shown in Figure 9, GCDMOPSO recorded the minimum values on ZDT1, ZDT2, ZDT4, ZDT6, DTLZ1, UF1–UF4, UF7, UF9, and UF10. It can be clearly seen from Figure 9 that GCDMOPSO obtains better nondominated solutions than the other MOEAs. The results are consistent with the quantitative analysis in Table 5.

From the above empirical results, we can conclude that, compared with the existing MOEAs, GCDMOPSO has promising application prospects for solving MOPs.
4.6. Complexity of the GCDMOPSO
The complexity of the proposed GCDMOPSO depends on the complexity of its components, that is, the complexity of game strategy and cosine distance. The following is the complexity analysis of GCDMOPSO.
Suppose that the population size is N, with m nondominated individuals among them. In general, it is assumed that k games have been played. According to the game strategy, one individual is eliminated after each game, so a total of k individuals are eliminated, including both dominated and nondominated individuals. After the k games, N − k individuals remain as winners among the N individuals. For convenience of analysis, it is assumed that in each game the dominated individuals have the same probability of being selected.
The time complexity of the game strategy is O(kN). Meanwhile, in the process of updating the external archive, the computational complexity of the cosine distance is O(MN²), where M is the number of objectives. The total computational complexity is therefore O(MN²).
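The quadratic cost of the cosine distance computation can be seen directly from a naive pairwise implementation (an illustrative sketch, not the article's code): with N archived solutions and M objectives, the double loop performs N² dot products of length M, i.e., work proportional to M·N².

```python
import numpy as np

def pairwise_cosine_distance(F):
    """Cosine distance between every pair of objective vectors.
    F has shape (N, M); the result has shape (N, N).
    The explicit double loop makes the O(M * N^2) cost visible."""
    n = len(F)
    D = np.zeros((n, n))
    norms = np.linalg.norm(F, axis=1)  # precomputed in O(M * N)
    for i in range(n):
        for j in range(n):
            cos = F[i] @ F[j] / (norms[i] * norms[j])
            D[i, j] = 1.0 - cos
    return D

# Orthogonal objective vectors have cosine distance 1; identical ones have 0.
F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = pairwise_cosine_distance(F)
```

In practice the same matrix can be obtained with a single vectorized matrix product, but the asymptotic cost is unchanged.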
In addition, this article uses the MATLAB functions tic and toc to measure the runtime (in seconds) of each algorithm for 10000 evaluations. As can be seen from Tables 6 and 7, even though GCDMOPSO uses the cosine distance measurement mechanism and the game strategy, its runtime is of the same order of magnitude as those of the other comparison algorithms on ZDT1–ZDT4 and ZDT6, DTLZ1–DTLZ5, and UF1–UF10.
5. Conclusions
This paper has proposed a novel multiobjective particle swarm optimization algorithm based on a cosine distance mechanism and a game strategy to solve MOPs. The cosine distance measurement mechanism was used to update the Pareto optimal set in the external archive, and a candidate set was added as a storage mechanism for the screened advantageous individuals. This design drives the Pareto optimal set toward the true Pareto front and maintains the diversity of the swarm. To further improve performance, the algorithm combines the game update mechanism with the cosine distance measurement mechanism in a global leader selection strategy. The experimental studies have shown that the proposed GCDMOPSO performs better than several state-of-the-art MOPSOs and competitive MOEAs.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant no. 71461027) and Innovative Talent Team in Guizhou Province (Qian Ke HE Pingtai Rencai[2016]5619).