Abstract
Dragonfly algorithm (DA) is a recently proposed swarm intelligence optimization algorithm that has been successfully applied to function optimization, feature selection, parameter tuning, and other tasks. However, it does not take the individual optimal position into consideration and relies only on the population optimal position and five behaviours to update individual positions, leading to low accuracy, slow convergence, and entrapment in local optima. To overcome these drawbacks, a Tent Chaotic Map and Population Classification Evolution Strategy-Based Dragonfly Algorithm (TPDA) is proposed. The tent chaotic map is used to initialize the population, distributing individuals more uniformly in the search space to improve population diversity and search efficiency. The population is classified according to individual fitness values, and different position update methods are adopted for different types of individuals to guide the search process and improve the ability of TPDA to jump out of local optima, thus realizing a balance between exploration and exploitation. The efficiency of TPDA has been validated by tests on 18 basic unconstrained benchmark functions. A comparative performance analysis between TPDA, Particle Swarm Optimization (PSO), DA, and the Adaptive Learning Factor and Differential Evolution-Based Dragonfly Algorithm (ADDA) has been carried out. Experimental and statistical results demonstrate that TPDA performs significantly better than PSO, DA, and ADDA in terms of the average solution on all 18 functions and in terms of the standard deviation on most of them. The global optimization capability of TPDA on high-dimensional functions is also verified, and its time complexity is compared with that of other swarm intelligence algorithms. The results indicate that TPDA optimizes functions better without consuming more computational time.
1. Introduction
There is an inevitable demand for the optimal use of things due to the limitation of time, money, and resources in the real world. The purpose of optimization for engineering and business problems is to maximize efficiency, performance, and productivity. Conventional optimization algorithms such as gradient descent are prone to stagnation when searching nonlinear spaces and perform poorly on high-dimensional multimodal functions. As a typical class of stochastic optimization algorithms, swarm intelligence (SI) optimization algorithms have attracted the attention of researchers all over the world. An SI optimization algorithm is a computing technique based on collective biological behaviour and has been applied to combinatorial optimization, image processing, data mining, and distributed problems. A swarm is characterized as a self-organized and decentralized system in which individuals are simple and equal, and it operates through communication and cooperation between individuals [1]. Compared with traditional approaches, SI algorithms do not require any gradient information and are simple to implement; representatives include Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), the Artificial Bee Colony (ABC) algorithm, and Grey Wolf Optimization (GWO). The Dragonfly Algorithm (DA), proposed by Mirjalili in 2016, is a novel metaheuristic inspired by the static (social) and dynamic (flight) swarming behaviours of dragonflies [1]. Because of its clear structure, simple implementation, and high efficiency, DA has been successfully used to optimize functions [2], select features [3], and tune parameters [4]. However, the algorithm does not take the individual optimal position into consideration and relies only on the population optimal position and five behaviours between individuals to update positions, bringing several disadvantages such as low accuracy and slow and premature convergence [5]. Many methods have been proposed to improve the performance of DA. Wu et al. [6] put forward a new dragonfly algorithm based on enhancing the exchange of individuals’ information (ETPDA), which introduced three strategies (a greedy strategy, an equilibrium strategy, and a combined strategy) to address the insufficient information exchange among dragonfly individuals. Ranjini and Murugan [7] proposed a Memory-Based Hybrid Dragonfly Algorithm (MHDA), which combines the exploration capability of DA with the exploitation capability of PSO to reach global optimal solutions. An Elite Opposition Learning-Based Dimension by Dimension Improved Dragonfly Algorithm (EDDA) was proposed in [8]; it improves the initial population and replaces the whole-solution updating and evaluating strategy with a dimension-by-dimension one in order to obtain better search vitality and higher convergence accuracy in the last stage of iteration. An adaptive learning factor and a differential evolution strategy were introduced to update individual positions, guide the evolution direction, and improve the performance of the traditional dragonfly algorithm (ADDA) [9]. Although the aforementioned work improves the performance of DA from various aspects, few improvements target the characteristics of DA itself, so the convergence ability and optimization accuracy of the improved algorithms remain unsatisfactory, especially on high-dimensional multimodal functions.
In this paper, we focus on the characteristics and connotations of the five basic behaviours in DA, and a Tent Chaotic Map and Population Classification Evolution Strategy-Based Dragonfly Algorithm (TPDA) is proposed. The tent chaotic map is utilized to generate a more evenly distributed initial population in the search space. The dragonfly population is classified based on fitness values, and different position update methods are employed for the different categories of dragonfly individuals so that the evolution of the algorithm is guided more pertinently and TPDA can converge to the global optimum. The performance of TPDA is compared with PSO, DA, and ADDA on 18 benchmark functions at their original dimensions and on 14 of them at a higher dimension. Statistical analysis shows that TPDA has stronger global exploration and local exploitation ability than the comparison algorithms and obtains better optimization accuracy.
The remainder of this paper is organized as follows: Section 2 describes the conventional DA. Tent chaotic map, population classification evolution strategy, and TPDA are introduced in Section 3. In Section 4, simulation experiments on benchmark functions are conducted to analyse the effectiveness of TPDA. Finally, several conclusions and future scope are discussed in Section 5.
2. Dragonfly Algorithm
Dragonfly algorithm is inspired by the unique swarming behaviours of dragonflies in nature: foraging and migrating in groups [1]. Foraging swarm behaviour, also known as static swarm behaviour, is characterized by the fact that dragonflies spontaneously divide into several subgroups to prey on insects in different regions, accompanied by local movement and abrupt changes of flight path; it corresponds to the local exploitation of the algorithm. Migration swarm behaviour, otherwise known as dynamic swarm behaviour, is where dragonflies gather into one large group and fly in a unified direction over long distances; it corresponds to the global exploration of the algorithm. Each dragonfly individual of the population represents a solution in the search space, and the movement of dragonflies is determined by five different behaviours: Separation (S), Alignment (A), Cohesion (C), Attraction towards a food source (F), and Distraction away from an enemy (E). Separation refers to the static collision avoidance of an individual from other individuals in its neighbourhood. Alignment indicates velocity matching of an individual to that of other individuals in the neighbourhood. Cohesion refers to the tendency of individuals towards the centre of mass of the neighbourhood. Attraction towards a food source and distraction away from an enemy serve the purpose of survival [10]. Weights for these five behaviours are initialized randomly for each dragonfly at the beginning of the algorithm and are adjusted adaptively during iteration to ensure that DA converges to the global optimum. The distance between dragonfly individuals is measured as the Euclidean distance, and the neighbourhood radius of the dragonflies increases as the optimization process progresses. The mathematical implementation of the five behaviours and the radius is explained in equations (1)–(6):
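In the standard DA formulation of Mirjalili [1, 10], these behaviours and the neighbourhood radius are commonly written as follows (a reconstruction; the radius update follows the reference DA implementation):

\begin{aligned}
S_i &= -\sum_{j=1}^{N} \left(X - X_j\right), &\text{(1)}\\
A_i &= \frac{\sum_{j=1}^{N} V_j}{N}, &\text{(2)}\\
C_i &= \frac{\sum_{j=1}^{N} X_j}{N} - X, &\text{(3)}\\
F_i &= X^{+} - X, &\text{(4)}\\
E_i &= X^{-} + X, &\text{(5)}\\
r &= \frac{ub - lb}{4} + 2\,(ub - lb)\,\frac{t}{T}. &\text{(6)}
\end{aligned}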
In equations (1)–(5), $S_i$, $A_i$, $C_i$, $F_i$, and $E_i$ represent the separation, alignment, cohesion, attraction to the food source, and distraction from the enemy of the $i$-th individual; $X$ and $V$ indicate the position and velocity of the current individual; $N$ is the number of neighbouring individuals; $X_j$ and $V_j$ denote the position and velocity of the $j$-th neighbouring individual; and $X^{+}$ and $X^{-}$ indicate the locations of the food source and the natural enemy. In (6), $ub$ and $lb$ indicate the upper and lower limits of the search space, and $t$ and $T$ indicate the current iteration and the maximum number of iterations. In the implementation of DA, the best and worst positions of each iteration are regarded as the positions of the food source and the natural enemy, respectively, according to the connotation of the behaviours.
The overall behaviour of dragonflies is assumed to be the combination of these five behaviours. If the current individual has at least one dragonfly in its neighbourhood, its velocity and position are updated as given in equations (7) and (8), where $s$, $a$, $c$, $f$, and $e$ denote the weights of the five behaviours, $w$ represents the inertia weight, and $t$ is the iteration counter.
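In the same standard formulation [1], the step and position updates read:

\begin{aligned}
\Delta X^{t+1} &= \left(s\,S_i + a\,A_i + c\,C_i + f\,F_i + e\,E_i\right) + w\,\Delta X^{t}, &\text{(7)}\\
X^{t+1} &= X^{t} + \Delta X^{t+1}. &\text{(8)}
\end{aligned}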
When there is no dragonfly in the neighbourhood of the current individual, the position is updated by a Levy Flight as given in equation (9), where $d$ represents the dimension of the dragonfly individual. The randomness, chaotic behaviour, and global search capability of the dragonflies are enhanced by introducing this random walk strategy.
The Levy Flight strategy is given in equations (10) and (11), where $r_1$ and $r_2$ are two stochastic numbers in $[0, 1]$ and $\beta$ is a constant, usually taken as 1.5.
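As a reconstruction from the standard DA literature [1], the Levy-based position update and Levy step of equations (9)–(11) can be written as:

\begin{aligned}
X^{t+1} &= X^{t} + \mathrm{Levy}(d)\times X^{t}, &\text{(9)}\\
\mathrm{Levy}(d) &= 0.01\times\frac{r_1\,\sigma}{|r_2|^{1/\beta}}, &\text{(10)}\\
\sigma &= \left(\frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\left((1+\beta)/2\right)\,\beta\,2^{(\beta-1)/2}}\right)^{1/\beta}. &\text{(11)}
\end{aligned}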
3. Tent Chaotic Map and Population Classification Evolution Strategy-Based Dragonfly Algorithm (TPDA)
3.1. Tent Chaotic Map
An SI optimization algorithm starts from a population of solutions and searches for the optimal solution according to certain rules. Using an initial population with a high level of uniformity can decrease the chance of missing a considerable part of the search space [11], accelerate the search, and improve the accuracy of the optimal solution. Therefore, a good population initialization method is required to generate a better initial population for the subsequent search process. Pseudo-Random Number Generators (PRNGs), whose principle is to generate a set of uniformly distributed random numbers covering the feasible region of the search space, are applied in most SI optimization algorithms to generate the initial solution population. The Mersenne Twister [12], one such PRNG, is used by default by the random functions in MATLAB. Nevertheless, populations generated by PRNGs are not distributed uniformly enough, which is especially obvious in high-dimensional spaces with small population sizes [13]. Consequently, taking uniformity as a key property of the initial population, more advanced techniques are demanded to provide better uniformly distributed initial populations [11].
Chaotic maps are used to generate chaotic sequences characterized by randomness, ergodicity, and regularity. In optimization algorithms, chaotic maps can effectively maintain the diversity of the population and enhance the ability to jump out of local optima and search globally. They can replace PRNGs to generate a better initial population in order to improve population diversity, success rate, and convergence speed in function optimization [14]. Among the commonly used chaotic maps, the tent chaotic map [15–20] is characterized by excellent ergodic uniformity and convergence rate. The tent map is a piecewise linear map widely used in the generation of chaotic spreading codes, the construction of chaotic encryption systems, and the implementation of chaotic optimization algorithms. Considering that DA relies on five interactive behaviours between individuals to update positions and is sensitive to the method of population initialization [21], a uniform distribution of the population may promote the communication and cooperation between individuals. The tent chaotic map is therefore utilized for population initialization in this work and is expressed as given in equation (12), where $x_k$ is a random number between 0 and 1. There are several small periodic points and unstable periodic points in the tent chaotic sequence. In order to avoid falling into these points and maintain the characteristics of the chaotic sequence, a random term is introduced into the tent chaotic map. The improved tent chaotic map is expressed as given in equation (13), where $N_T$ indicates the number of individuals in the tent chaotic sequence. Figure 1 exhibits the histogram of the sequence generated by the tent chaotic map. In this figure, the vertical axis indicates the number of sequence values falling in the corresponding interval on the horizontal axis. It can be observed that the chaotic sequence is evenly distributed in the interval $[0, 1]$. As a result, the tent chaotic map is adopted for population initialization in this paper, which ensures that the proposed algorithm starts from an initial population with better diversity.
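A commonly used form of the tent map and of its randomly perturbed (improved) variant, which we assume here for equations (12) and (13), is:

\begin{aligned}
x_{k+1} &= \begin{cases} 2x_k, & 0 \le x_k < 0.5,\\ 2\,(1 - x_k), & 0.5 \le x_k \le 1, \end{cases} &\text{(12)}\\
x_{k+1} &= \begin{cases} 2x_k + \mathrm{rand}(0,1)\cdot\frac{1}{N_T}, & 0 \le x_k < 0.5,\\ 2\,(1 - x_k) + \mathrm{rand}(0,1)\cdot\frac{1}{N_T}, & 0.5 \le x_k \le 1, \end{cases} &\text{(13)}
\end{aligned}

where $\mathrm{rand}(0,1)$ is a uniform random number in $(0,1)$.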

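As an illustration of how such an initialization can be implemented, the following MATLAB sketch generates a population with the improved tent map in the form assumed above and maps it into the search bounds; the function name and interface are ours, not the paper's:

% Tent-chaotic-map population initialization (illustrative sketch).
% N: population size, D: dimension, lb/ub: 1-by-D lower/upper bounds.
function X = tent_init(N, D, lb, ub)
    Z = zeros(N, D);                 % chaotic sequence values in [0, 1)
    Z(1, :) = rand(1, D);            % random starting point for each dimension
    for i = 2:N
        for j = 1:D
            if Z(i-1, j) < 0.5
                Z(i, j) = 2 * Z(i-1, j) + rand / N;       % improved tent map
            else
                Z(i, j) = 2 * (1 - Z(i-1, j)) + rand / N; % with random term
            end
            Z(i, j) = mod(Z(i, j), 1);                    % keep values in [0, 1)
        end
    end
    X = lb + Z .* (ub - lb);         % map chaotic values into [lb, ub]
end

For example, X = tent_init(40, 10, -100*ones(1, 10), 100*ones(1, 10)); produces a 40-by-10 initial population in [-100, 100]^10 (the numbers here are arbitrary).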
3.2. Population Classification Evolution Strategy
After each iteration, the fitness values of most individuals in the dragonfly population are moderate, except for a small number of individuals with excellent or poor fitness values. Therefore, the population can be divided into good, medium, and poor groups according to fitness value, and different step and position update methods are utilized for each group so as to improve the global and local search capability of the algorithm.
Let $X_i^t$ and $f(X_i^t)$ denote the position and the fitness value of the $i$-th dragonfly in iteration $t$, and let $\bar{f}^t$ and $M^t$ denote the mean and the second-order origin moment of the population fitness values in iteration $t$, respectively. The population is classified by these statistical indicators. If the fitness value of an individual is small relative to them, the individual is close to the optimal solution, so a careful search in its own neighbourhood is required. If the fitness value is large relative to them, the individual is far from the optimal solution, so global exploration should be strengthened to promote convergence. Otherwise, the fitness value of the individual is moderate, and a balance between local exploitation and global exploration should be sought.
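The thresholds of this classification are defined by the inequalities referenced above; as a minimal MATLAB sketch, and assuming (on our part) that the mean and the root of the second-order origin moment serve as the two cut points, the grouping could be implemented as:

% Classify the population by fitness for a minimization problem
% (illustrative sketch; the cut points are an assumption, not the
% paper's exact inequalities). fit: N-by-1 vector of non-negative
% fitness values.
function category = classify_population(fit)
    m1 = mean(fit);                  % mean of the population fitness
    m2 = mean(fit.^2);               % second-order origin moment
    upper = sqrt(m2);                % >= m1 for non-negative fitness values
    category = zeros(size(fit));
    category(fit < m1) = 1;          % good: close to the optimum, exploit locally
    category(fit > upper) = 3;       % poor: far from the optimum, explore globally
    category(category == 0) = 2;     % medium: balance exploration and exploitation
end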
In addition, the classical DA takes only the global optimal position of each iteration, i.e., the position of the food source, into account and ignores the historical optimal positions of the individuals. Inspired by Particle Swarm Optimization [22], the historical optimal position of each dragonfly individual is considered when updating its position, so that the individual evolves under the guidance of both the population optimal position and its own optimal position, which benefits the balance between global exploration and local search. On the basis of this classification of dragonfly individuals, the updating of their positions is treated differently according to the categories to which they belong.
Specifically, if there are other individuals in the neighbourhood of the $i$-th dragonfly in iteration $t$, its position is updated as given in equations (14) and (15).
No experience of other individuals can be drawn on when there is no adjacent individual in the neighbourhood of the current dragonfly, and Levy Flight is utilized to make the individual walk randomly in the search space. If the current individual is close to the optimal solution, the current position is taken as the centre and the historical optimal position of the individual itself is used to guide the position update. If the fitness value of the current individual is moderate, the current position is still taken as the centre, but the position is updated using both the global historical optimal position and the individual's own historical optimal position. If the current individual's position is poor, the global historical optimal position is taken as the centre directly and a reverse-search Levy Flight [23] is executed in order to speed up convergence. The position update formula with Levy Flight is expressed as given in equation (16), where $X_i^t$, $\Delta X_i^t$, and $P_i^t$ indicate the position, step vector, and historical optimal position of the $i$-th dragonfly in iteration $t$, $G^t$ indicates the global optimal position, $\lambda$ denotes the position update factor, and $r$ is a random number between 0 and 1. $\alpha$ is the adaptive learning factor, which is expressed as given in equation (17), where $\varepsilon$ is the smallest positive constant in the computer, represented by eps in MATLAB.
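The precise form and coefficients of equations (16) and (17) are given by the original formulas; purely to illustrate the three-case behaviour described above, a simplified MATLAB stand-in (not the paper's exact update) might look like:

% Classification-dependent Levy-flight update (illustrative stand-in).
% x: current position, p: individual historical best, g: global best
% (all 1-by-D row vectors); cat: 1 = good, 2 = medium, 3 = poor.
function x_new = levy_update(x, p, g, cat)
    d = numel(x);
    switch cat
        case 1        % good: search around itself, guided by its own best
            x_new = x + levy_step(d) .* (p - x);
        case 2        % medium: guided by both the global and the personal best
            x_new = x + levy_step(d) .* ((p - x) + (g - x)) / 2;
        otherwise     % poor: reverse-search Levy flight centred on the global best
            x_new = g - levy_step(d) .* (g - x);
    end
end

function s = levy_step(d)
    % Standard Levy step with beta = 1.5 (cf. equations (10)-(11)).
    beta  = 1.5;
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) / ...
             (gamma((1 + beta) / 2) * beta * 2^((beta - 1) / 2)))^(1 / beta);
    r1 = rand(1, d);
    r2 = rand(1, d);
    s  = 0.01 * (r1 * sigma) ./ abs(r2) .^ (1 / beta);
end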
3.3. TPDA
The specific steps of TPDA, based on the improvements described above, are as follows:
(1) Parameter setting. Define the objective function and set the population size, the maximum number of iterations, the upper and lower bounds of the search space, and the individual dimension.
(2) Initialize the dragonfly population by the tent chaotic map and initialize the individual step vectors $\Delta X$.
(3) Start the iteration. Calculate the population fitness values and update the positions of the food source and the enemy as well as the values of the five behaviour vectors and their weights.
(4) Calculate the mean $\bar{f}^t$ and the second-order origin moment $M^t$ of the population fitness values and classify the population. Update the neighbourhood radius and judge whether there is an adjacent individual. If yes, update the position using equations (14) and (15); otherwise, update the position using equation (16).
(5) Judge whether the termination condition is satisfied. If so, stop the iteration and output the final result; otherwise, return to step (3).
The flow chart of TPDA is shown in Figure 2.

4. Experiments and Analyses
4.1. Benchmark Functions
In order to test the performance of SI optimization algorithms, a series of benchmark functions has been designed, characterized as unimodal or multimodal and separable or nonseparable [24–27]. Unimodal functions have only one global optimum and are used to test the optimization accuracy of an algorithm. Multimodal functions have multiple local optima and are used to test the capability of an algorithm to avoid getting stuck in local optima and to search globally. A separable function with D variables can be expressed as the sum of D functions of a single variable each; thus, optimizing a separable function is equivalent to optimizing these one-variable functions separately. On the contrary, a nonseparable function cannot be written in this form because of the interrelations among its variables, so it is more difficult to solve than a separable function; this difficulty becomes more prominent as the number of variables increases. Therefore, nonseparable functions can be used to examine the effectiveness of an algorithm in high-dimensional space. Using U for unimodal, M for multimodal, S for separable, and N for nonseparable, the benchmark functions can be divided into four types: unimodal separable (US), unimodal nonseparable (UN), multimodal separable (MS), and multimodal nonseparable (MN).
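As a concrete example of this distinction, the Sphere function is separable while the Rosenbrock function is not:

f_{\mathrm{Sphere}}(\mathbf{x}) = \sum_{i=1}^{D} x_i^2, \qquad
f_{\mathrm{Rosenbrock}}(\mathbf{x}) = \sum_{i=1}^{D-1}\left[100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right];

the former is a sum of single-variable terms and can be minimized dimension by dimension, whereas the latter couples adjacent variables and cannot.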
In order to investigate the performance of TPDA in function optimization, 18 benchmark functions are selected for the simulation experiments. The name, expression, dimension, property, variable range, and theoretical optimal value of each function are shown in Table 1, together with the convergence error threshold for each function. A near-optimal solution is considered to be achieved when the algorithm converges to within this error threshold of the theoretical optimum. In addition to being unimodal or multimodal and separable or nonseparable, the functions in Table 1 have some other characteristics. The Zakharov and Booth functions have flat surfaces, which is difficult for algorithms because the flatness provides no information to direct the search towards the minimum. The global optimum of the Schaffer function is very close to the local optima, which makes it hard to distinguish them and to converge to the global minimum. Rosenbrock and Dixon-Price are unimodal nonseparable functions whose global optima are located in narrow, parabola-shaped valleys; it is easy to find the valley but difficult to find the global optimum because the function value changes little inside the valley. The Quartic function is padded with noise, and its value depends on the random noise generator, so an algorithm never gets the same value at the same point; algorithms that do not perform well on the Quartic function will do poorly on noisy data. The effectiveness of the algorithms can thus be fully examined through these various types of functions. Figure 3 shows two-dimensional images of some of these functions.
Figure 3. Panels (a)–(f): two-dimensional images of the benchmark functions.
4.2. Experimental Design
The simulation platform of this work is a Windows 10 Professional Edition system with an Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz, 12 GB RAM, and MATLAB R2019a. To validate the performance of TPDA, Particle Swarm Optimization (PSO) [22], the conventional DA [10], and the Adaptive Learning Factor and Differential Evolution-Based Dragonfly Algorithm (ADDA) [9] are introduced for comparison.
The parameters of the aforementioned algorithms during the simulation are as follows. In order to reduce the influence of randomness, each algorithm runs 30 times on F1–F18 separately, and the maximum number of iterations in each run is 500. The average value and standard deviation of the solutions over the 30 runs are recorded. The population size and the main parameters of each algorithm are set according to the original papers and are shown in Table 2.
Among the eighteen selected functions, F11, F12, F15, and F16 are fixed-dimension functions, and the rest have variable dimensions. In order to further investigate the performance of TPDA on high-dimensional functions and the influence of the function dimension on the effectiveness of the algorithms, the population size is set to 30 and the dimensions of F1–F10, F13, F14, F17, and F18 are set to 30. Each algorithm runs 30 times on these functions, with a maximum of 500 iterations per run, and the mean and standard deviation of the solutions over the 30 runs are recorded.
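As an illustration of this protocol, the following MATLAB sketch records the mean and standard deviation of the best values found over 30 independent runs; tpda and the Sphere handle are hypothetical placeholders for the actual solver and benchmark implementations, not names from the paper:

runs = 30; N = 30; D = 30; maxIter = 500;   % protocol of the high-dimensional experiments
bestVals = zeros(runs, 1);
sphere = @(x) sum(x.^2);                    % stand-in benchmark function (Sphere)
for r = 1:runs
    % hypothetical solver interface: objective, dimension, bounds, population, iterations
    bestVals(r) = tpda(sphere, D, -100, 100, N, maxIter);
end
fprintf('mean = %.4e, std = %.4e\n', mean(bestVals), std(bestVals));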
4.3. Analyses of Simulation Results
The mean best values and standard deviations of each algorithm are shown in Tables 3 and 4, where the best result for each function under the two statistical indicators is highlighted in bold. In Table 3, compared with PSO, DA, and ADDA, TPDA achieves the best results in terms of mean best value on all 18 functions at their original dimensions. Specifically, TPDA achieves the theoretical optimal values on F2, F11, F12, F14–F16, and F18, and approximate optimal solutions are achieved on F1, F3–F9, F13, and F17 (the algorithm converges to within the specified error threshold). In contrast, PSO only achieves approximate optimal values on F11, F12, and F16, and DA only achieves near-optimal solutions on F12 and F16. ADDA is better than PSO and DA but still performs poorly on F6, F8, F9, F13, and F14. The functions on which PSO, DA, and ADDA perform well have in common that they are comparatively simple, with few variables and a clear structure. On more complicated functions such as F6, F8, F9, F13, and F14, TPDA improves the convergence results markedly. For instance, the best result among PSO, DA, and ADDA on F6 is 6.4055, while the result obtained by TPDA is 7.1576e − 114. In terms of standard deviation, TPDA achieves the best results on 15 (83.3%) of the 18 selected functions, and on the 3 functions where it does not, its results are still almost 0. The standard deviations obtained by the comparison algorithms on F6, F8, F9, F13, and F14 are much greater than those of TPDA, which means TPDA possesses better stability on multimodal nonseparable functions.
The dimension of the benchmark functions in Table 4 is 30. In terms of mean best values, TPDA achieves the best results on all 14 functions compared with PSO, DA, and ADDA. In detail, TPDA achieves the theoretical optimal values on F2, F14, and F18 and performs even better on F1, F3–F6, and F10 than in the low-dimensional case, which indicates that TPDA is well suited to high-dimensional problems. The optimization results of the comparison algorithms on the high-dimensional functions are worse than in the low-dimensional case, widening the gap between them and TPDA. For PSO, the optimization result of F17 at 30 dimensions is 3.7 times that at 10 dimensions, the smallest degradation, while the result of F1 at 30 dimensions is 6248.6 times that at 10 dimensions, the largest degradation. For DA, the result of F8 at 30 dimensions is 2.6 times that at 10 dimensions (the smallest degradation), and the result of F10 at 30 dimensions is 275.6 times that at 10 dimensions (the largest degradation). Although ADDA optimizes the functions better than PSO and DA, its effectiveness still regresses significantly when the function dimension becomes larger. In terms of standard deviation, TPDA obtains the best results on 13 (92.9%) of the 14 functions; only on F10 does ADDA obtain the best standard deviation (0.1709), with TPDA's result being 9.9% higher than that of ADDA. For PSO and DA, all the standard deviation results on the 30-dimensional functions are worse than those at 10 dimensions. For ADDA, although the standard deviation results on some functions are slightly improved, they are still poor on the whole. The experimental results indicate that TPDA is more stable on high-dimensional functions than PSO, DA, and ADDA.
In summary, TPDA has stronger function optimization capability and stability than the comparison algorithms. For the unimodal functions (F1–F10), the convergence accuracy is improved by orders of magnitude. For the multimodal functions (F11–F18), TPDA is able to avoid local optimal solutions and carry out a global search. TPDA shows strong robustness and optimization ability on high-dimensional nonseparable functions. This is because the population initialization based on the tent chaotic map yields a more uniformly distributed initial population, so the search process of TPDA starts from a better population, and because the population classification evolution strategy based on fitness values updates individual positions more pertinently, guiding the direction of the algorithm and strengthening its capability to jump out of local optima and search globally.
In order to visualize the performance of TPDA, the convergence curves of the mean fitness values of the algorithms are given. Figure 4 shows the convergence curves on F1–F18 at their original dimensions. Since the theoretical optimal values of F12 and F16 are 0.3979 and −1.0316, respectively, while that of the other functions is 0, the vertical axis of the convergence curve of F16 is the function value itself; for the other functions, the vertical axis is the logarithm of the function value for convenience of curve drawing and display. The horizontal axis of each convergence curve is the number of iterations. Figure 5 shows the convergence curves of the variable-dimension functions at 30 dimensions; the vertical axis is the logarithm of the function value, and the horizontal axis is the number of iterations. PSO, DA, ADDA, and TPDA are represented by different colours and markers. Segments of some curves are not displayed because the function value has already reached 0, so its logarithm tends to negative infinity.
Figure 4. Panels (a)–(r): convergence curves of PSO, DA, ADDA, and TPDA on F1–F18 at their original dimensions.
Figure 5. Panels (a)–(n): convergence curves of PSO, DA, ADDA, and TPDA on the variable-dimension functions at 30 dimensions.
Regardless of whether the test function is unimodal or multimodal, separable or nonseparable, low-dimensional or high-dimensional, TPDA presents a better convergence curve in Figures 4 and 5. The influence of the two improvements proposed in this paper can be seen from the figures. First, the starting point of the convergence curve of TPDA is lower on the vertical axis than those of the comparison algorithms, especially for functions F2, F4, F9, F10, and F13 in both the 10-dimensional and 30-dimensional cases, which means the initial population of TPDA is significantly better than those of the comparison algorithms. This is mainly because the population initialization based on the tent chaotic map produces a more uniformly distributed population with wider coverage of the search space, laying a solid foundation for the subsequent iterative search. Second, in the later stage of the iterations, the convergence curves of the comparison algorithms are relatively flat, meaning their convergence is slow; as a result, their optimization accuracy on the unimodal functions is poor and it is difficult for them to jump out of local optima on the multimodal functions. In TPDA, however, the population is classified according to fitness values and each individual position is updated according to its category: good individuals search around themselves and poor individuals search using the experience of other individuals, so the direction of TPDA is guided purposefully towards the global optimum. Therefore, the convergence curves of TPDA are steeper in the later stage of the iterations, and TPDA possesses better search vitality and performance in function optimization, which also confirms the data in Tables 3 and 4.
4.4. Computational Time Complexity
As the statistical results have shown the efficiency of TPDA, it is worth examining whether TPDA increases the time complexity of the algorithm [28, 29]. In this section, we compare the computational time complexity of TPDA with that of PSO and DA. Let the population size be $N$, the individual dimension be $D$, and the maximum number of iterations be $G_{max}$. The time complexity of each procedure of PSO is as follows:
(1) Initialization: $O(N \times D)$
(2) Evaluating the fitness of the population: $O(N \times D)$
(3) Updating individual optimal locations: $O(N)$
(4) Updating the global optimal location: $O(N)$
(5) Updating individual positions and speed vectors: $O(N \times D)$
The overall time complexity of PSO is therefore $O(G_{max} \times N \times D)$. By the same reasoning, the time complexity of DA is $O(G_{max} \times N^2 \times D)$, since in each iteration every dragonfly computes its distance to all other individuals to determine its neighbourhood. According to the description of TPDA in Section 3.3, the time complexity of TPDA is as follows:
(1) Parameter setting: $O(1)$
(2) Initialization: $O(N \times D)$
(3) Fitness value calculation: $O(N \times D)$
(4) Updating the positions of the food source and the enemy: $O(N)$
(5) Population classification: $O(N)$
(6) Updating individual positions: $O(N^2 \times D)$
The overall time complexity of TPDA is thus $O(G_{max} \times N^2 \times D)$, dominated by the position update step. From the above analysis, TPDA equals DA in terms of time complexity, which means TPDA can perform better than DA without consuming more computational time. The time complexity of TPDA is higher than that of PSO, but the global optimization capability of TPDA is also much better than that of PSO, especially on high-dimensional functions, so in some cases time can be traded for higher accuracy.
5. Conclusions
A Tent Chaotic Map and Population Classification Evolution Strategy-Based Dragonfly Algorithm (TPDA) is proposed in this paper for numerical optimization problems in order to overcome the shortcomings of the conventional DA. The tent chaotic map is used to initialize the population, so that the population is distributed more evenly in the search space and the algorithm starts its search from a good population. During the iterations, the dragonfly population is classified based on fitness values, and each individual position is updated according to its category in order to guide the evolution of the algorithm. To validate the effectiveness of TPDA, experiments are carried out on 18 benchmark functions, in which TPDA is compared with other SI optimization algorithms, namely PSO, DA, and ADDA, in terms of the mean and standard deviation of the results. The simulation results show that in all cases the mean of the solutions given by TPDA is better than those provided by the other algorithms, and in 28 of 32 cases the standard deviation of the results provided by TPDA is better than the others', which indicates that TPDA has better global exploration and local exploitation ability, higher optimization accuracy, and faster convergence than the comparison algorithms. TPDA can stably converge to the global optimum on high-dimensional nonseparable functions. Moreover, TPDA does not require any gradient information, so it can easily be applied to real-world problems. A disadvantage of the algorithm is that the population classification evolution strategy increases its complexity, and follow-up work will focus on analysing and addressing this issue. In addition, to verify its effectiveness on practical problems, TPDA will next be applied to practical optimization tasks such as target threat assessment and UAV path planning.
Data Availability
Data sharing is not applicable to this article as no data sets were generated or analysed during the current study.
Conflicts of Interest
The authors declare that there are no conflicts of interest in this paper.
Acknowledgments
This work was supported by the Equipment Comprehensive Research Project under Grant no. 424[2021] and National Natural Science Foundation of China under Grant 61871405.