Abstract

Chaos optimization algorithm (COA) usually utilizes chaotic maps to generate pseudorandom numbers that are mapped onto the decision variables of global optimization problems. Recently, COA has been applied to many single objective optimization problems, and simulation results have demonstrated its effectiveness. In this paper, a novel parallel chaos optimization algorithm (PCOA) is proposed for multiobjective optimization problems (MOOPs). As an improvement over COA, PCOA is a population-based optimization algorithm that not only reduces the sensitivity to initial values but also makes the algorithm suitable for MOOPs. In the proposed PCOA, crossover and merging operations are applied to exchange information between parallel solutions and produce new potential solutions, which enhances the global and fast search ability of the algorithm. To test the performance of PCOA, it is simulated on several MOOP benchmark functions and on mixed H2/H∞ controller design. The simulation results show that PCOA is a viable alternative approach for MOOPs.

1. Introduction

Optimization problems in which at least two objectives need to be optimized simultaneously are called multiobjective, and multiobjective optimization problems (MOOPs) are very common in the real world and in many engineering areas. Solving MOOPs has become a crucial part of the optimization field. Solving a MOOP is usually not easy because the multiple objectives tend to conflict with each other. Generally, the optimal solution of a MOOP is a set of optimal solutions, widely known as Pareto-optimal solutions [1]. Each solution represents a particular performance trade-off between the objectives and can be considered optimal.

Generating the Pareto-optimal set can be computationally expensive and is often infeasible, because the complexity of the underlying application prevents exact methods from being applicable. For this reason, a number of multiobjective search strategies have been developed over the past decades. One of the most important developments is that of evolutionary algorithms, and a number of multiobjective evolutionary algorithms have been suggested [2–9]. Evolutionary algorithms are used for MOOPs because they provide a set of solutions in a single run and do not require the objectives to be aggregated. Additionally, the performance of evolutionary algorithms is not affected by the shape of the Pareto front [6]. Other recent effective multiobjective optimization algorithms include multiobjective particle swarm optimization [10, 11], multiobjective artificial immune algorithms [12, 13], ant colony optimization [14], and artificial bee colony (ABC) [15] for MOOPs.

For single objective optimization problems, many existing algorithms, such as the genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), differential evolution (DE), the harmony search algorithm (HSA), and compact pigeon-inspired optimization [16–18], have demonstrated excellent performance, but becoming trapped in local optima remains a challenge. Chaos optimization algorithm (COA) is a recently developed global optimization technique based on chaos theory; in particular, it exploits numerically generated chaotic sequences. Recent literature has demonstrated that COA can carry out global searches at higher speed than stochastic ergodic searches that depend on probabilities [16–22]. In addition to the development of COA, chaos has also been integrated with optimization algorithms for MOOPs, such as the chaotic nondominated sorting genetic algorithm (CNSGA) [23, 24], chaotic-sequence-based multiobjective differential evolution (CS-MODE) [25], the chaos multiobjective immune algorithm [26], multiobjective chaotic particle swarm optimization [27, 28], multiobjective chaotic ant swarm optimization [29], and the multiobjective chaotic artificial bee colony algorithm [30]. Because of the ergodicity and pseudorandomness of chaos, applying chaos in multiobjective optimization algorithms is an interesting alternative for enriching population diversity and escaping from local optima [24, 29].

The parallel strategy is widely used for its great capability of overcoming sensitivity to initial values, by producing a number of initial solutions that are uniformly distributed in the solution space. New optimization algorithms developed by applying the parallel strategy to conventional intelligent algorithm frameworks, such as the parallel particle swarm optimization algorithm and the parallel multiverse optimizer, show excellent robustness and convergence speed.

Although COA has been successfully applied to single objective optimization problems, to the best of our knowledge there is no literature on COA for MOOPs to date. The reasons may be as follows: (1) COA is an individual-based algorithm, which makes it unsuitable for MOOPs with sets of Pareto-optimal solutions; (2) chaotic sequences are pseudorandom and sensitive to initial conditions; therefore, the success of COA crucially depends on appropriate starting values.

In this paper, a novel population-based parallel chaos optimization algorithm (PCOA) with crossover and merging operations is proposed for MOOPs. In PCOA, multiple chaos variables (forming a population) are simultaneously mapped onto each decision variable, so PCOA searches from diverse initial points and reduces the sensitivity to initial values. In addition, crossover and merging operations are used to exchange information within the population and produce new potential solutions. In effect, the proposed algorithm combines the global search ability of PCOA with the local search accuracy of the crossover and merging operations. To preserve the diversity of the Pareto-optimal set, an external elitist archive and an accurate crowding measure are applied in PCOA for MOOPs.

The rest of this paper is organized as follows. Section 2 briefly describes MOOPs. The PCOA approach is introduced in Section 3. Section 4 presents PCOA for MOOPs. Simulation results on test problems show the effectiveness of the PCOA approach in Section 5. In Section 6, the PCOA approach is applied to mixed H2/H∞ controller design. Conclusions are presented in Section 7.

2. Multiobjective Optimization Problems (MOOPs)

MOOPs with conflicting objectives do not have a single solution. Therefore, multiobjective algorithms (MOAs) aim to obtain a diverse set of nondominated solutions, i.e., solutions that balance the trade-offs between the various objectives, referred to as the Pareto-optimal front (POF). Another goal of MOAs is to find a POF that is as close as possible to the true POF of the MOOP.

The objectives of MOOPs are normally in conflict with one another; i.e., improvement in one objective leads to a worse value for at least one other objective. Therefore, when solving MOOPs, the definition of optimality used for single objective optimization problems has to be adjusted. For MOOPs, when one decision vector dominates another, the dominating decision vector is considered the better one.

A MOOP can be mathematically described as

minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), (1)

subject to g_i(x) ≤ 0, i = 1, 2, …, p, (2)

h_j(x) = 0, j = 1, 2, …, q, (3)

where F(x) is the objective vector to be optimized and m is the number of objective functions. The g_i(x) ≤ 0 are the inequality constraints, the h_j(x) = 0 are the equality constraints, and p and q are the numbers of inequality and equality constraints, respectively. We call x = (x_1, x_2, …, x_n) the vector of decision variables, and Ω is the feasible region. Solving the MOOP means determining the particular set of values x* = (x_1*, x_2*, …, x_n*) that yields the optimum values of all the objective functions, from the set of all vectors x that satisfy (2) and (3).

Several definitions for MOOPs are given as follows [1] (assuming minimization of all objectives).

Definition 1 (dominance): a vector u = (u_1, …, u_m) is said to dominate v = (v_1, …, v_m) (denoted by u ≺ v) if and only if u_i ≤ v_i for all i ∈ {1, …, m} and u_j < v_j for at least one index j.

Definition 2 (Pareto-optimal solution): a point x* ∈ Ω is Pareto-optimal if there is no other x ∈ Ω that satisfies F(x) ≺ F(x*).

Definition 3 (Pareto-optimal set): for a given MOOP F(x), the Pareto-optimal set is defined as P* = {x ∈ Ω | there is no x′ ∈ Ω with F(x′) ≺ F(x)}.

Definition 4 (Pareto front): for a given MOOP F(x) and Pareto-optimal set P*, the Pareto front is defined as PF* = {F(x) | x ∈ P*}.
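As a concrete illustration of Definition 1, the dominance test can be sketched in a few lines of Python (assuming minimization of all objectives; the function name is ours, not the paper's):

```python
def dominates(u, v):
    """Return True if objective vector u dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    no_worse = all(ui <= vi for ui, vi in zip(u, v))
    strictly_better = any(ui < vi for ui, vi in zip(u, v))
    return no_worse and strictly_better
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) are mutually nondominated.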

In the general case, it is impossible to find an analytical expression of the line or surface that contains these points. The normal procedure for generating the Pareto front is to compute a set of feasible points x and their corresponding objective vectors F(x). When a sufficient number of these have been computed, it is possible to determine the nondominated points and thereby produce the Pareto front.

3. PCOA Approach

A novel population-based parallel chaos optimization algorithm (PCOA) is proposed for MOOPs. The salient feature of PCOA is its pseudoparallel mechanism. In addition, crossover and merging operations are applied to exploit the fitness and diversity information of the population. In effect, the proposed algorithm combines the global search ability of PCOA with the local search accuracy of the crossover and merging operations.

3.1. Twice Carrier Wave Mechanism-Based PCOA

Consider a single objective optimization problem for a nonlinear multimodal function with boundary constraints:

min f(x), subject to a_j ≤ x_j ≤ b_j, j = 1, 2, …, n. (4)

In PCOA, multiple stochastic parallel chaos variables (forming a population) are simultaneously mapped onto each decision variable, and the search result is the best solution among the parallel candidate individuals.

The process of PCOA is based on the twice carrier wave mechanism, which is described in Table 1. The first part of PCOA is a coarse search along different chaotic trajectories, and the second part is a refined search that enhances the search precision.
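The two carrier waves can be sketched as follows. This is a hedged reading of the standard COA carrier wave formulas (the Logistic map, a linear map onto the decision range for the coarse search, and a small chaotic perturbation around the best-so-far value for the refined search), not a verbatim copy of the paper's equations:

```python
def logistic_map(z):
    """One iteration of the Logistic map z <- 4z(1 - z); for almost all
    z in (0, 1) the generated sequence is chaotic and ergodic on (0, 1)."""
    return 4.0 * z * (1.0 - z)

def first_carrier_wave(z, lo, hi):
    """Coarse search: map a chaotic variable z in (0, 1) linearly onto the
    decision range [lo, hi]."""
    return lo + z * (hi - lo)

def second_carrier_wave(z, x_best, lo, hi, delta=0.01):
    """Refined search: chaotic perturbation of radius delta*(hi - lo) around
    the best-so-far value x_best, clipped back into the bounds. delta plays
    the role of the local search parameter."""
    x = x_best + delta * (2.0 * z - 1.0) * (hi - lo)
    return max(lo, min(hi, x))
```

Each parallel individual keeps its own chaotic variable per decision dimension, so the population explores several chaotic trajectories at once.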

3.2. Crossover and Merging Operation within Population

In this paper, crossover and merging operations within the population are employed in PCOA. Both operations exchange information within the population and produce new potential parallel variables, which generally differ from the raw chaotic sequences.

3.2.1. Crossover Operation

The motion step of a chaotic map between two successive iterations is usually large, which results in large jumps of the decision variable in the search space. This randomness of chaotic maps is beneficial for jumping out of local optima; however, it is not efficient for local exploitation. In this paper, crossover is used for information interaction between parallel solutions. The crossover operation within the population is illustrated in Figure 1. In the crossover, a decision variable from one parallel solution is randomly chosen to be crossed with the corresponding variable from another parallel solution, producing the new candidate individual.
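A minimal sketch of this crossover between two parallel solutions; the single-variable arithmetic blend below is an assumption on our part, since the paper's exact crossover equation is not reproduced here:

```python
import random

def crossover(x1, x2, rng=random):
    """Cross two parallel solutions: pick one decision variable index j at
    random and blend the j-th components with a random weight r; all other
    components are copied unchanged."""
    j = rng.randrange(len(x1))
    r = rng.random()
    c1, c2 = list(x1), list(x2)
    c1[j] = r * x1[j] + (1.0 - r) * x2[j]
    c2[j] = r * x2[j] + (1.0 - r) * x1[j]
    return c1, c2
```

Because only one component changes, the offspring stay close to their parents, which complements the large chaotic jumps of the carrier wave search.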

3.2.2. Merging Operation

Even if PCOA has reached the neighborhood of the global optimum, it needs to spend much computational effort searching numerous points to eventually reach the optimum [22]. In order to improve PCOA's precise exploitation capability, a merging operation within the population is employed here. The merging operation is illustrated in Figure 2.

The merging operation between two randomly chosen parallel solutions X_p and X_q can be denoted by

X_new = z X_p + (1 − z) X_q, (7)

where z is generated by a chaotic map; the frequently used Logistic map is defined by the following equation [21, 29]:

z(k+1) = 4 z(k)(1 − z(k)). (8)

This merging operation randomly chooses two parallel solutions to merge, and it may produce new potential solutions for the optimization problem. In essence, the merging operation within the population is a kind of local exploiting search, as shown in (7).
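Read as a convex combination weighted by the chaotic variable (our interpretation of the merging formula), the operation is a one-liner:

```python
def merge(x1, x2, z):
    """Merge two parallel solutions into a point on the segment between
    them, weighted by the chaotic variable z in (0, 1): a local exploiting
    move, since the offspring lies between the two parents."""
    return [z * a + (1.0 - z) * b for a, b in zip(x1, x2)]
```

With z = 0.5 the offspring is the midpoint of the two parents; as z drifts chaotically over (0, 1), the offspring sweeps the whole segment.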

The crossover and merging operations within the population are also used as a supplement to the twice carrier wave search during each iteration. If a new parallel variable produced by crossover or merging achieves a better objective function value than the original two parallel variables, it replaces one of the original parallel variables; if it yields a worse objective function value, it is discarded.

Both crossover and merging operations are conducted during each iteration of the PCOA search procedure. Another issue is how many parallel variables to choose for crossover or merging. The more crossover and merging operations are performed, the greater the diversity of the parallel variables, but also the greater the computing cost. In this paper, the crossover rate and the merging rate are set to fixed values, so that a corresponding fraction of the parallel variables undergo crossover or merging. These parameter values are usually chosen by trial, taking into account both search quality and computing cost.

4. PCOA for MOOPs

To the best of our knowledge, there is no literature on COA for MOOPs to date, and this motivates us to extend PCOA to MOOPs. In the following discussion, the Pareto dominance concept and an elitist archive mechanism are used to extend PCOA to tackle MOOPs.

4.1. Dominance Selection Operator

To extend PCOA to MOOPs, the most important task is the selection mechanism, where the selection operation is based on the concept of Pareto dominance. According to the dominance relation between two potential solutions u and v, there are at most three situations: (i) u dominates v, (ii) v dominates u, and (iii) u and v are nondominated with respect to each other.

Thus, the dominance selection operator is defined as follows: the dominating solution is selected; when the two solutions are nondominated with respect to each other, the less crowded one is selected. The crowding degree estimation is introduced in [7].
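The selection rule above can be sketched as follows. This is a hedged sketch: crowding is abstracted to a single scalar per solution, with larger meaning less crowded; the paper's actual crowding estimation is the one from [7]:

```python
def dominates(u, v):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def select(f_a, f_b, crowd_a, crowd_b):
    """Dominance selection between two solutions with objective vectors
    f_a, f_b and crowding measures crowd_a, crowd_b (larger = less crowded):
    the dominating one wins; among nondominated ones, the less crowded wins."""
    if dominates(f_a, f_b):
        return "a"
    if dominates(f_b, f_a):
        return "b"
    return "a" if crowd_a >= crowd_b else "b"
```
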

4.2. Handling Constraints

For optimization problems with constraints, the penalty function approach is frequently used; here, constrained-domination, a penalty-parameterless constraint handling approach, is used instead [2]. A solution u is said to constrained-dominate a solution v if any of the following conditions is true [9]: (i) solution u is feasible and solution v is not; (ii) solutions u and v are both infeasible, but solution u has a smaller overall constraint violation; (iii) solutions u and v are both feasible and solution u dominates solution v. The effect of using this constrained-domination principle is that any feasible solution has a better nondomination rank than any infeasible solution. All feasible solutions are ranked according to their nondomination level based on the objective function values, whereas, between two infeasible solutions, the one with a smaller constraint violation has the better rank [6].
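The three conditions translate directly into code; a sketch assuming each solution carries an overall constraint violation cv (0 for feasible solutions):

```python
def dominates(u, v):
    """Unconstrained Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def constrained_dominates(f_u, f_v, cv_u, cv_v):
    """Deb's constrained-domination: (i) feasible beats infeasible;
    (ii) between infeasible solutions, smaller violation wins;
    (iii) between feasible solutions, ordinary dominance decides."""
    if cv_u == 0.0 and cv_v > 0.0:
        return True                      # (i)
    if cv_u > 0.0 and cv_v > 0.0:
        return cv_u < cv_v               # (ii)
    if cv_u == 0.0 and cv_v == 0.0:
        return dominates(f_u, f_v)       # (iii)
    return False                         # u infeasible, v feasible
```
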

4.3. External Elitist Archive

Since Zitzler and Thiele formally introduced the strength Pareto evolutionary algorithm (SPEA) [31] with its elitist reservation mechanism in 1999, many researchers have adopted similar elitist reservation concepts in practice [1, 7–9].

The main motivation for this mechanism is that a solution that is nondominated with respect to its current population is not necessarily nondominated with respect to all the populations produced by an optimization algorithm. Thus, what is needed is a way of guaranteeing that the solutions reported to the user are nondominated with respect to every other solution the algorithm has produced [6]. The most intuitive way of achieving this is to store all the nondominated solutions found in an external memory (or archive). If a solution that wishes to enter the archive is dominated by any of its members, it is not allowed to enter. Conversely, if a solution dominates any members of the archive, the dominated members are deleted.

In this paper, the elitist reservation strategy is also adopted. Initially, the archive is empty. As the evolution progresses, the trial vectors that are not dominated by the corresponding target vectors at each generation are compared one by one with the current archive, and good solutions enter the archive. There are three cases when a nondominated trial vector is compared with the current archive [7, 9]: (a) if the trial vector is dominated by member(s) of the external archive, the trial vector is rejected; (b) if the trial vector dominates some member(s) of the archive, the dominated members are deleted from the archive and the trial vector enters the archive; and (c) if the trial vector does not dominate any archive member and no archive member dominates it, the trial vector belongs to the current Pareto front estimate and enters the archive.
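Cases (a)–(c) above can be sketched as a single update function on the archive of objective vectors (capacity pruning by the crowding measure is left out of this sketch):

```python
def dominates(u, v):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, f_new):
    """Try to insert objective vector f_new into the elitist archive:
    (a) reject it if any member dominates it;
    (b) delete any members it dominates, then insert it;
    (c) if mutually nondominated with all members, simply insert it."""
    if any(dominates(f, f_new) for f in archive):
        return archive                              # case (a)
    kept = [f for f in archive if not dominates(f_new, f)]
    kept.append(f_new)                              # cases (b), (c)
    return kept
```
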

When the external archived population reaches its maximum capacity, the crowding entropy measure in (10) is used to keep the external archive at its maximum size [1, 7], where the parameters f_i^max and f_i^min are the maximum and minimum values of the i-th objective function and m is the number of objective functions. The other variables are described in [1, 7].

4.4. Algorithm Procedure of PCOA for MOOPs

Consider a multiobjective optimization problem as in (1); X = (x_1, x_2, …, x_n) is the decision solution vector consisting of n variables bounded by lower (L_j) and upper (U_j) limits. The index j = 1, 2, …, n runs over the decision variables, and the index i = 1, 2, …, M runs over the chaos variables simultaneously mapped onto each decision variable; M can also be considered the population size, as in other evolutionary algorithms. The process of the PCOA approach for MOOPs is described as follows, and it is illustrated in Figure 3.

Step 1: set the iteration number k = 0, specify the maximum number of iterations K and the switch point K_1 from the first carrier wave to the second carrier wave, initialize the chaotic maps randomly, set the population size M, set the crossover probability and the merging probability, and initialize the external elitist archive A and its maximum size.

Step 2: map the chaotic variables z_{i,j}(k) onto the variance range of the decision variables in one of the following two ways. If k ≤ K_1, PCOA searches using the first carrier wave

x_{i,j}(k) = L_j + z_{i,j}(k)(U_j − L_j).

If k > K_1, PCOA searches using the second carrier wave

x_{i,j}(k) = x*_{i,j} + δ(2 z_{i,j}(k) − 1)(U_j − L_j),

where δ is a very important local search parameter that adjusts the small ergodic range around the so-far global solutions X*_i.

Step 3: evaluate the fitness value of each target vector X_i.

Step 4: in the crossover operation within the population, produce and evaluate the trial vector U_i.

Step 5: in the merging operation within the population, produce and evaluate the trial vector V_i.

Step 6: perform the selection operation between X_i, U_i, and V_i. If one of them dominates the others, update the archive A with the elitist archive update rules using the dominating vector; if X_i, U_i, and V_i are nondominated with respect to each other, update A with the elitist archive update rules. Each vector stores its respective so-far global solution X*_i.

Step 7: size the external elitist archive A. When A exceeds its maximum size, the less crowded vectors based on the distance in (10) are retained so that the archive stays at its maximum size.

Step 8: generate the next values of the chaotic maps by the chaotic mapping function as in (8): z_{i,j}(k+1) = 4 z_{i,j}(k)(1 − z_{i,j}(k)).

Step 9: if k ≥ K, stop the search process; otherwise set k = k + 1 and go to Step 2.

5. MOOPs Test Simulation

This section is devoted to presenting the experiments performed in this work. First, the set of MOOPs used as a benchmark and the quality indicators applied for measuring the performance of the resulting Pareto fronts are introduced. Next, our preliminary experiments of PCOA are described and analyzed. Then, PCOA is evaluated and compared to other multiobjective optimization algorithms.

5.1. MOOPs Test Problems

Several sets of classical test problems suggested in the MOOPs literature are used to evaluate the performance of PCOA.

Among these classical test problems, the first set comprises the following biobjective unconstrained problems: Schaffer, Fonseca, and Kursawe, as well as ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 [1, 2]. The second set includes the constrained biobjective problems ConstrEx, Srinivas, and Tanaka [1, 2]. Among these, ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 can be considered high-dimensional tests (with n = 30 decision variables for ZDT1, ZDT2, and ZDT3, and n = 10 for ZDT4 and ZDT6).

5.2. Performance Measures

In order to determine whether an algorithm can solve MOOPs efficiently, its performance should be quantified with functions referred to as performance measures. A comprehensive overview of performance measures currently used in the multiobjective optimization literature is provided in [32]. There are three goals in multiobjective optimization: (i) convergence to the Pareto-optimal set, (ii) maintenance of diversity in the solutions of the Pareto-optimal set, and (iii) maximal distribution bound of the Pareto-optimal set. In this article, three quality indicators, one for each of the above goals, are introduced as follows.

(a) Generational distance (GD) [1]: the generational distance measures how far the elements in the set of nondominated vectors found so far are from those in the Pareto-optimal set. This indicator is defined as

GD = (√(Σ_{i=1}^{n} d_i²)) / n,

where n is the number of vectors in the set of nondominated solutions found so far and d_i is the Euclidean distance (measured in objective space) between each of these solutions and the nearest member of the Pareto-optimal set. A smaller value of GD indicates better convergence to the Pareto front.

(b) Spread (Δ) [2]: this indicator measures the extent of spread achieved among the obtained nondominated solutions. The metric is defined as

Δ = (d_f + d_l + Σ_{i=1}^{n−1} |d_i − d_mean|) / (d_f + d_l + (n − 1) d_mean),

where n is the number of nondominated solutions found so far, d_i is the Euclidean distance between neighboring solutions in the obtained nondominated set, d_mean is the mean of all d_i, and d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set. A smaller value of Δ indicates a better distribution and diversity of the nondominated solutions.

(c) Hypervolume (HV) [1]: the reference point can be found simply by constructing a vector of the worst objective function values. Thereafter, the union of the hypercubes v_i spanned between each nondominated solution and the reference point is formed and its hypervolume is calculated:

HV = volume(∪_{i=1}^{n} v_i),

where the v_i correspond to the members of the nondominated set of solutions. Algorithms with larger HV values are preferable. Since the calculation of the HV depends on the reference point, in our experiment the HV value of a set of solutions is normalized by that of a reference set of Pareto-optimal solutions with the same reference point. After normalization, the HV values are confined to [0, 1].
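The GD and Δ indicators can be sketched as follows. These are the common forms from the literature; normalization details may differ from the paper's exact equations, and the spread sketch below assumes a biobjective front sortable by the first objective:

```python
import math

def _dist(a, b):
    """Euclidean distance in objective space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generational_distance(front, true_front):
    """GD: root of the summed squared nearest-neighbor distances from each
    obtained point to the true Pareto set, divided by the number of points."""
    n = len(front)
    return math.sqrt(sum(min(_dist(p, q) for q in true_front) ** 2
                         for p in front)) / n

def spread(front, extremes):
    """Spread indicator Delta (biobjective): d_f, d_l are distances from the
    boundary solutions to the true extreme solutions; the d_i are consecutive
    gaps along the front; d_mean is their mean.  0 = perfectly uniform."""
    front = sorted(front)
    gaps = [_dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_mean = sum(gaps) / len(gaps)
    d_f = _dist(extremes[0], front[0])
    d_l = _dist(extremes[1], front[-1])
    return ((d_f + d_l + sum(abs(g - d_mean) for g in gaps))
            / (d_f + d_l + (len(front) - 1) * d_mean))
```

A front that coincides with the true Pareto set gives GD = 0, and a perfectly uniform front that reaches both extremes gives Δ = 0.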

In order to assess how competitive the proposed PCOA approach is, it is compared with several popular multiobjective algorithms (MOAs): the nondominated sorting genetic algorithm II (NSGA-II), the strength Pareto evolutionary algorithm 2 (SPEA2), and multiobjective particle swarm optimization (MOPSO), which are representative of the state of the art [1]. In the simulations, the parameter values of NSGA-II, SPEA2, and MOPSO are the same as in [1]. The parameters of PCOA are chosen by trial as follows: the parallel number M, the Logistic map in (8) as the chaotic map, the crossover probability, the merging probability, the archive size, the maximum number of iterations (one value for the low-dimensional tests Schaffer, Fonseca, Kursawe, ConstrEx, Srinivas, and Tanaka, and another for the high-dimensional tests ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6), the switch point, and the local search parameter δ in (15). To account for randomness, each of these MOAs is independently run 30 times on each test problem.

5.3. Simulation Results

The Pareto fronts obtained with the proposed PCOA on the different MOOPs are illustrated in Figures 4 and 5. It can be seen that the proposed PCOA obtains Pareto fronts on all these MOOPs, and the fronts obtained for the low-dimensional tests (Figure 4) are smoother and more uniformly distributed.

Tables 1–3 and Figures 6–8 report the simulation results for the previously described quality indicators (generational distance (GD), spread (Δ), and hypervolume (HV)) using the four MOAs: PCOA, NSGA-II, SPEA2, and MOPSO. In these results, "mean" is the average value over 30 runs and "SD" is the standard deviation over the 30 runs of each MOA.

Table 1 shows the generational distance (GD) indicator for these tests, and Figure 6 shows the mean GD for the MOAs. In the group of low-dimensional tests (Schaffer, Fonseca, Kursawe, ConstrEx, Srinivas, and Tanaka), the Pareto fronts resulting from PCOA are as close to the true fronts as those computed by NSGA-II, SPEA2, and MOPSO. In the group of ZDT problems (high-dimensional tests), PCOA obtains better results than MOPSO. This can also be seen from Figures 4 and 5: the Pareto fronts in Figure 5 (high-dimensional tests) are not as good as those in Figure 4 (low-dimensional tests). From Table 1 and Figure 6, it can be seen that PCOA reaches the Pareto fronts for all these tests, while its performance on the low-dimensional tests is more competitive.

Table 2 shows the spread indicator (Δ) for these tests, while Figure 7 shows the mean Δ for the MOAs. The results in Table 2 and Figure 7 indicate that PCOA outperforms the other three MOAs with respect to the diversity of the obtained Pareto fronts. PCOA usually yields the lowest Δ values on almost all MOOPs, as PCOA performs a global search. From Table 2 and Figure 7, we can see that PCOA obtains better Δ results than NSGA-II on all the test problems, and better results than SPEA2 and MOPSO on most of them. This means that the PCOA approach exhibits good diversity, a success that may be attributed to its parallel search pattern and its ability to escape local optima.

Table 3 shows the hypervolume (HV) indicator for these tests, while Figure 8 shows the mean HV for the MOAs. From Table 3 and Figure 8, we can see that the PCOA approach obtains very large HV values on the low-dimensional tests, while its HV on the high-dimensional tests is also good. These results show that the PCOA approach outperforms MOPSO in terms of the HV indicator (Table 4).

From the above results, it can be seen that the proposed PCOA approach performs well in terms of the generational distance, spread, and HV indicators. In all these simulation results, PCOA outperforms MOPSO and is as good as NSGA-II and SPEA2. This means that PCOA can be used as an alternative approach for MOOPs.

5.4. Comparing Algorithm Parameters

In order to test the effect of the crossover probability and merging probability on the proposed PCOA approach, PCOA is compared here with different crossover and merging probabilities (the other PCOA parameter values are the same as in the previous simulation). In Case 1, the crossover probability and merging probability are both 0.5 (denoted by PCOA1); in Case 2, both are 0.3 (denoted by PCOA2); in Case 3, both are 0.2 (denoted by PCOA3). The simulation results for PCOA with the different crossover and merging probabilities are shown in Figure 9.

It can be seen from Figure 9 that, with higher crossover and merging probabilities, more potential trial solutions are produced for the MOOPs; accordingly, the performance on the quality indicators (generational distance (GD), spread (Δ), and hypervolume (HV)) improves.

6. Mixed Controller Design

The mixed H2/H∞ control synthesis problem is an important multiobjective controller design problem in the field of control theory, and it has received a great deal of attention in recent years. The most popular approach for solving this problem is the linear matrix inequalities (LMIs) approach [33, 34]. Recently, the mixed H2/H∞ control synthesis problem has been stated as a multiobjective optimization problem, with the objectives of minimizing the H2 and H∞ norms simultaneously. In this section, we apply the proposed PCOA approach to the mixed H2/H∞ control multiobjective optimal design, considering a linear state-space plant for the control synthesis.

For the multiobjective control synthesis, the solutions obtained by PCOA are compared with the solutions calculated with the function "msfsyn" provided by the MATLAB LMI Control Toolbox. As the LMI-based approach can only find a single solution in each run, the set of solutions of the multiobjective problem is obtained by varying the upper bound imposed on one of the norms.

Figure 10 illustrates the Pareto estimates (H2 and H∞ closed-loop norms) obtained by the two approaches. It can be seen that PCOA is able to find a set of estimates of the Pareto front that is more evenly distributed and has a better extension over the conflicting objectives. Figure 11 shows the obtained solutions in the parameter space. For the analyzed example, the proposed synthesis procedure presents better results than the LMI-based approach.

7. Conclusion

To the best of our knowledge, there is no literature on COA for MOOPs to date, and this motivated us to propose PCOA for MOOPs. In this paper, a PCOA with crossover and merging operations is proposed for MOOPs. Both crossover and merging operations exchange information between parallel variables and produce new potential solutions, which enhances the global and fast search ability of the proposed algorithm. To test the effectiveness of PCOA, it is simulated on several MOOP benchmark functions and on mixed H2/H∞ controller design. The simulation results show that PCOA can be an alternative approach for MOOPs.

Data Availability

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Key R&D Program of China (no. 2018YFF0212900).