Abstract
To address the inherent problems of swarm intelligence algorithms, such as falling into local extrema in the early stage and low precision in the later stage, this paper proposes an improved sparrow search algorithm (ISSA). First, we introduce the idea of flight behavior from the bird swarm algorithm into SSA to maintain population diversity and reduce the probability of falling into a local optimum. Second, we introduce the idea of crossover and mutation from the genetic algorithm into SSA to obtain a better next-generation population. These two improvements not only maintain population diversity throughout the search but also compensate for the tendency of the sparrow search algorithm to fall into local optima at the end of the iterations, so the optimization ability of the improved SSA is greatly enhanced.
1. Introduction
Swarm intelligence optimization algorithms are bionic algorithms that simulate the behavior of creatures in nature or are inspired by physical phenomena. Their central idea is to balance global search and local search in a solution space to find the optimal solution. Swarm intelligence algorithms have attracted the attention of many scholars because of their simple operation and high efficiency. At present, these algorithms are widely used in image processing [1], training neural networks [2], signal processing [3], and feature selection [4]. Common swarm intelligence optimization algorithms include particle swarm optimization (PSO) [5], the grey wolf optimizer (GWO) [6], the whale optimization algorithm (WOA) [7], the salp swarm algorithm (SSO) [8], the sine cosine algorithm (SCA) [9], and the sparrow search algorithm (SSA) [10].
Swarm intelligence optimization algorithms tend to fall into local optima and lose population diversity in later iterations. In response to this problem, many researchers have proposed introducing various learning techniques into swarm intelligence optimization algorithms. For example, Li et al. [11] conducted a comprehensive study and analysis of how traditional intelligent optimization algorithms can be combined with learning operators and gave an outlook on future development directions. To solve numerical optimization problems, Li and Wang [12] proposed an improved elephant herding optimization using dynamic topology and learning-based biogeography optimization (BLEHO), ensuring a better evolutionary process for the population through an elite strategy. Meanwhile, a global velocity strategy and a new learning strategy were introduced into EHO to update the velocity and position of individuals [13], and good results were obtained. The cuckoo search (CS) algorithm is an effective evolutionary method for global optimization, but it suffers from premature convergence and a poor balance between exploitation and exploration. To address these problems, a balanced learning function was introduced into the cuckoo search algorithm, by which a balance between exploitation and exploration was achieved [14]. Li et al. [15] proposed a new extension of CS with Q-learning steps and genetic operators, namely, the dynamic step cuckoo search algorithm (DMQL-CS). The step control strategy was treated as the action, used to check the effect of a single multistep evolution and to learn a single optimal step by calculating the Q-function value, while the crossover and mutation operations expanded the search range and increased the diversity of the population. Li et al. [16] introduced a learning model combining individual historical knowledge and group knowledge into the CS algorithm, while using a threshold statistical learning strategy to select the optimal learning model, exploiting the potential of individual and group knowledge learning and providing a good trade-off between exploration and exploitation. To address the premature convergence of CS and its poor balance between exploration and exploitation, a dynamic CS with Taguchi opposition-based search (TOB-DCS) was proposed [17]. It employed two new strategies: Taguchi opposition-based search and dynamic evaluation. The Taguchi search strategy provided random generalized learning based on opposing relationships to enhance the exploration ability of the algorithm, while the dynamic evaluation strategy reduced the number of function evaluations and accelerated convergence. Wang et al. [18] and Feng et al. [19] introduced opposition-based learning into the krill herd algorithm and monarch butterfly optimization, respectively, providing a strategy for improving the population of individuals. Wang et al. [20] introduced chaos theory into CS to further improve its optimization performance. Lu et al. [21] studied three chaotic strategies with 11 different chaotic map functions on GWO and successfully applied the most suitable chaotic strategy, as the chaotic GWO, to real-world engineering problems. Inspired by the SCA algorithm, Nabil et al. [22] updated the positions of the followers in the SSO algorithm through a sine function, which helped to improve the exploration stage and avoid stagnation in local areas. Liu et al.
[23] introduced a chaos strategy into SSA and used an adaptive inertia weight to balance the convergence speed and exploration ability of the algorithm, although some convergence speed must be traded off against exploration ability. Yuan et al. [24] used a center-of-gravity opposition-based learning mechanism to initialize the population so that it has a better spatial distribution of solutions; a learning coefficient was then introduced into the position update of the discoverer to improve the global search ability of the algorithm, but the local search ability of SSA was ignored. Lei et al. [25] introduced the Levy flight strategy into SSA, which improved its global optimization ability but increased its complexity. Zhang et al. [26] used the sine cosine algorithm in a hybrid algorithm to help SSA jump out of local optima. Although all the above improvements contribute to optimization performance, certain shortcomings remain, and thus the optimization performance can be further improved.
In this paper, building on these ideas, the flight behavior of the bird swarm algorithm and the crossover and mutation ideas of the genetic algorithm are introduced into SSA, assisted by an improved tent chaotic map. The experimental results show that the proposed ISSA is effective and feasible in terms of convergence accuracy, stability, convergence speed, and comprehensive performance.
The organizational structure of this paper is as follows. Section 2 introduces SSA and the proposed improvements. Section 3 verifies the effectiveness of the improved algorithm through experiments on benchmark test functions. Section 4 gives the conclusion and describes future work on the proposed algorithm.
2. Sparrow Search Algorithm and Its Improvement
2.1. Sparrow Search Algorithm
Xue and Shen [10] proposed the sparrow search algorithm. The idea of this algorithm comes from the foraging and antipredation behaviors of sparrows, which can be abstracted as an explorer-follower-forewarner model. The explorer has a high energy reserve and a high fitness value and provides a foraging area and direction for the followers. The followers follow the explorer with the best fitness value to find food and increase their own energy reserves and fitness values; some followers may also constantly monitor the explorer to compete for food. A forewarner sounds an alarm when it becomes aware of danger and, at the same time, moves quickly to a safe area to get a better position. Sparrows in the middle of the flock randomly walk close to other sparrows, which is an antipredation behavior. If the alarm value exceeds the safety threshold, the explorer takes all the followers away from the dangerous area.
In SSA, assuming that there are N sparrows in a D-dimensional search space, the position of each sparrow is as follows:

$$X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,D}]$$   (1)

where $i = 1, 2, \ldots, N$, $d = 1, 2, \ldots, D$, and $x_{i,d}$ represents the position of the ith sparrow in the dth dimension.
As the explorer guides the movement of the entire sparrow group, the search for food can occur anywhere. Therefore, the location of the explorer is updated as follows:

$$x_{i,d}^{t+1} = \begin{cases} x_{i,d}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot T_{\max}}\right), & R_2 < ST, \\ x_{i,d}^{t} + Q \cdot L, & R_2 \ge ST, \end{cases}$$   (2)

where $t$ represents the current iteration, $T_{\max}$ is the maximum number of iterations, $\alpha \in (0, 1]$ is a random number, $Q$ is a random number following a normal distribution, and $L$ represents a $1 \times D$ matrix in which each element is 1. $R_2 \in [0, 1]$ represents the alarm value, and $ST \in [0.5, 1]$ represents the safety threshold. If $R_2 < ST$, there are no predators around and the explorer enters the wide search mode. If $R_2 \ge ST$, some sparrows have found natural predators, and all sparrows need to fly quickly to another safe area.
The followers follow the explorer to find food and may compete with the explorer for food to increase their food intake. The position update equation is as follows:

$$x_{i,d}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{x_{worst,d}^{t} - x_{i,d}^{t}}{i^{2}}\right), & i > N/2, \\ x_{P,d}^{t+1} + \left| x_{i,d}^{t} - x_{P,d}^{t+1} \right| \cdot A^{+} \cdot L, & i \le N/2, \end{cases}$$   (3)

where $x_{worst,d}^{t}$ represents the global worst position of the sparrows in the dth dimension at the tth iteration and $x_{P,d}^{t+1}$ represents the best position occupied by the explorer in the dth dimension at the (t + 1)th iteration. $A$ is a $1 \times D$ matrix whose elements are randomly assigned 1 or -1, and $A^{+} = A^{T}(AA^{T})^{-1}$. When $i > N/2$, the ith follower with poor fitness is most likely to starve; otherwise, the ith follower randomly finds a location for food near the best position of the explorer.
Assuming that the forewarner sparrows account for about 10% to 20% of the sparrow population and that their initial positions are randomly determined, the mathematical model can be expressed as follows:

$$x_{i,d}^{t+1} = \begin{cases} x_{best,d}^{t} + \beta \cdot \left| x_{i,d}^{t} - x_{best,d}^{t} \right|, & f_i > f_g, \\ x_{i,d}^{t} + K \cdot \left( \dfrac{\left| x_{i,d}^{t} - x_{worst,d}^{t} \right|}{(f_i - f_w) + \varepsilon} \right), & f_i = f_g, \end{cases}$$   (4)

where $\beta$ is a step size control parameter drawn from a normal distribution with a mean of 0 and a variance of 1; $K$ is a random number in [-1, 1]; $f_i$, $f_g$, and $f_w$, respectively, represent the current fitness value of the sparrow, the current global best fitness value, and the current global worst fitness value; and $\varepsilon$ is the smallest constant, used to avoid division by zero. $f_i > f_g$ indicates that the sparrow is at the edge of the group, while $f_i = f_g$ indicates that a sparrow in the middle of the group is aware of the danger and needs to move closer to other sparrows.
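For readers implementing the basic algorithm, the following NumPy sketch illustrates equations (2)-(4) under common default choices (ST = 0.8, roughly 10-20% forewarners). The simplification of the $A^{+} \cdot L$ term to a random plus/minus-one direction vector, the parameter defaults, and all function and variable names are assumptions of this sketch, not the reference implementation.

import numpy as np

def ssa_iteration(pop, fitness, t, T_max, n_explorers, ST=0.8, SD=0.2, eps=1e-50):
    # One illustrative SSA iteration over an (N, D) population array (minimization).
    N, D = pop.shape
    order = np.argsort(fitness)                  # best individuals first
    pop, fitness = pop[order].copy(), fitness[order].copy()
    best, worst = pop[0].copy(), pop[-1].copy()
    f_best, f_worst = fitness[0], fitness[-1]

    R2 = np.random.rand()                        # alarm value in [0, 1]
    for i in range(n_explorers):                 # explorers, equation (2)
        if R2 < ST:
            alpha = np.random.uniform(1e-3, 1.0)
            pop[i] = pop[i] * np.exp(-(i + 1) / (alpha * T_max))
        else:
            pop[i] = pop[i] + np.random.randn() * np.ones(D)

    for i in range(n_explorers, N):              # followers, equation (3)
        if i + 1 > N / 2:
            pop[i] = np.random.randn() * np.exp((worst - pop[i]) / (i + 1) ** 2)
        else:
            A = np.random.choice([-1.0, 1.0], size=D)
            pop[i] = best + np.abs(pop[i] - best) * (A / D)   # simplified A+ . L term

    n_aware = max(1, int(SD * N))                # forewarners, equation (4)
    for i in np.random.choice(N, n_aware, replace=False):
        if fitness[i] > f_best:
            pop[i] = best + np.random.randn() * np.abs(pop[i] - best)
        else:
            K = np.random.uniform(-1.0, 1.0)
            pop[i] = pop[i] + K * np.abs(pop[i] - worst) / (fitness[i] - f_worst + eps)
    return pop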
2.2. Initial Population
The quality and diversity of the initial population in the swarm intelligence optimization algorithm have a great impact on the optimization performance of the algorithm. A high-quality initial population can increase the convergence speed of the algorithm and help find the global optimal solution. The standard SSA algorithm does not have any prior knowledge. The initial population generated by a random initialization method can easily lead to poor population diversity and uneven distribution. Therefore, the initialization of the sparrow population has a great effect on the search accuracy of the SSA algorithm. In order to maintain the diversity of the population, this paper adopts the fusion of an improved tent chaotic map and elite opposition-based learning strategy to initialize the population and replace the method of randomly generating the population in the SSA algorithm. We first use the improved tent chaotic map to make the initial population distribution more uniform, and we then use the feature that elite individuals contain more effective information in the elite opposition-based learning strategy to construct superior individuals and further improve the quality of the population.
2.2.1. Improved Tent Chaotic Sequence
The tent map is a chaotic map exhibiting more complex nonlinear dynamic behavior than the logistic map. The study by Shan et al. [27] showed that the tent chaotic map has better traversal uniformity and a faster search speed than logistic chaos. Using the randomness, ergodicity, and regularity of the tent chaotic sequence to optimize the SSA algorithm can effectively maintain the diversity of the population, thereby improving the global search capability.
The expression of the tent chaotic map is as follows:

$$x_{n+1} = \begin{cases} 2x_n, & 0 \le x_n \le 0.5, \\ 2(1 - x_n), & 0.5 < x_n \le 1. \end{cases}$$   (5)
Analysis of equation (5) shows that the tent chaotic sequence is not perfect: it contains small-period points and unstable periodic points. To avoid falling into these points during the iterative process, a random variable $\mathrm{rand}(0,1) \cdot (1/N)$ is introduced into the tent chaos expression. The improved tent chaotic map expression is as follows:

$$x_{n+1} = \begin{cases} 2x_n + \mathrm{rand}(0,1) \cdot \dfrac{1}{N}, & 0 \le x_n \le 0.5, \\ 2(1 - x_n) + \mathrm{rand}(0,1) \cdot \dfrac{1}{N}, & 0.5 < x_n \le 1, \end{cases}$$   (6)

where $N$ is the number of individuals in the chaotic sequence and $\mathrm{rand}(0,1)$ is a random number in [0, 1].
The improved tent chaotic map after the Bernoulli shift transformation is expressed as follows:

$$x_{n+1} = (2x_n) \bmod 1 + \mathrm{rand}(0,1) \cdot \dfrac{1}{N}.$$   (7)
According to the characteristics of the tent chaotic map, the method for initializing the sparrow population in the feasible region is as follows:

$$X_{i,d} = lb_d + x_{i,d} \cdot (ub_d - lb_d),$$   (8)

where $x_{i,d}$ is the chaotic value generated by equation (7), distributed in [0, 1], and $ub_d$ and $lb_d$ represent the upper and lower bounds of the feasible solution, respectively.
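A minimal sketch of this initialization is given below, assuming the chaotic sequence of equation (7) is generated per individual and dimension and then mapped linearly onto the bounds as in equation (8); the function and parameter names are illustrative.

import numpy as np

def improved_tent_sequence(length, n_pop, x0=None):
    # Improved tent map of equation (7): Bernoulli-shift form plus a small random term.
    x = np.empty(length)
    x[0] = np.random.rand() if x0 is None else x0
    for k in range(length - 1):
        x[k + 1] = ((2.0 * x[k]) % 1.0 + np.random.rand() / n_pop) % 1.0
    return x

def chaotic_init(n_pop, dim, lb, ub):
    # Map the chaotic values in [0, 1] onto the feasible region, as in equation (8).
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    z = improved_tent_sequence(n_pop * dim, n_pop).reshape(n_pop, dim)
    return lb + z * (ub - lb)

pop = chaotic_init(n_pop=30, dim=10, lb=-100.0 * np.ones(10), ub=100.0 * np.ones(10))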
2.2.2. Elite Opposition-Based Learning Strategy
Opposition-based learning (OBL) is a relatively new strategy in computational intelligence. Many studies have shown that an opposition-based solution is often closer to the global optimal solution than the existing solution. Therefore, using this strategy can effectively enhance the diversity of the population and, to a certain extent, prevent the algorithm from falling into a local optimum. The elite opposition-based learning strategy builds on OBL to overcome the problem that the current solution and its opposition-based solution do not necessarily make the global optimum easier to reach than the current search space does: it selects the elite individuals of the current population to construct the opposition-based solutions, thus avoiding the invalid opposition-based solutions generated by nonelite individuals. Assuming that $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ is an elite individual in the D-dimensional search space, its opposition-based solution $\bar{X}_i = (\bar{x}_{i,1}, \bar{x}_{i,2}, \ldots, \bar{x}_{i,D})$ is defined as follows:

$$\bar{x}_{i,j} = k \cdot (a_j + b_j) - x_{i,j},$$   (9)

where $x_{i,j}$ is the jth dimension of the elite solution $X_i$, $j = 1, 2, \ldots, D$, $k$ is a random value in [0, 1], and $a_j$ and $b_j$ are the lower and upper bounds of the jth-dimension search space, respectively. After the opposition-based solution is obtained, it needs to be treated for bound violations:

$$\bar{x}_{i,j} = \mathrm{rand}(a_j, b_j), \quad \text{if } \bar{x}_{i,j} < a_j \text{ or } \bar{x}_{i,j} > b_j,$$   (10)

where $\mathrm{rand}(a_j, b_j)$ is a random value in $[a_j, b_j]$.
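A compact sketch of the elite opposition-based construction of equations (9) and (10) follows. Computing the bounds $a_j$ and $b_j$ dynamically from the elite group and re-sampling out-of-range components uniformly within those bounds are common choices assumed here for illustration.

import numpy as np

def elite_opposition(elite_pop):
    # elite_pop: (n, D) array of elite individuals; returns their opposition-based solutions.
    n, d = elite_pop.shape
    a = elite_pop.min(axis=0)                   # dynamic lower bound a_j of each dimension
    b = elite_pop.max(axis=0)                   # dynamic upper bound b_j of each dimension
    k = np.random.rand(n, d)                    # random coefficient k in [0, 1]
    opp = k * (a + b) - elite_pop               # equation (9)
    out = (opp < a) | (opp > b)                 # boundary treatment, equation (10)
    resample = a + np.random.rand(n, d) * (b - a)
    opp[out] = resample[out]
    return opp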
2.2.3. Population Initialization
According to the two methods described in Sections 2.2.1 and 2.2.2, the pseudocode of the population initialization of the SSA algorithm in this paper is shown in Algorithm 1.
Algorithm 1: Population initialization of the improved SSA.
2.3. Bird Swarm Algorithm Strategy
The bird swarm algorithm (BSA) was proposed by Meng et al. [28] in 2015 based on the flight, foraging, and vigilance behaviors of birds. Similar to SSA, BSA also has explorers and followers. The equations for updating their positions are as follows:

$$x_{i,j}^{t+1} = x_{i,j}^{t} + \mathrm{randn}(0,1) \cdot x_{i,j}^{t},$$   (11)

$$x_{i,j}^{t+1} = x_{i,j}^{t} + \left( x_{k,j}^{t} - x_{i,j}^{t} \right) \cdot FL \cdot \mathrm{rand}(0,1),$$   (12)

where $\mathrm{randn}(0,1)$ represents a Gaussian random number with a mean of 0 and a standard deviation of 1; $k \in \{1, 2, \ldots, N\}$, $k \ne i$, is a randomly chosen individual; and $FL \in [0, 2]$ represents the probability that a follower follows the explorer to find food.
In SSA, when $R_2 < ST$, the position value of each explorer individual decreases in every dimension, whereas the explorer individuals in BSA do not have this defect, as shown in Figure 1. Therefore, with the help of the idea of the explorer in BSA, the equation for the position of the explorer in SSA is modified as follows:

$$x_{i,d}^{t+1} = \begin{cases} x_{i,d}^{t} + \mathrm{randn}(0,1) \cdot x_{i,d}^{t}, & R_2 < ST, \\ x_{i,d}^{t} + Q \cdot L, & R_2 \ge ST. \end{cases}$$   (13)

Figure 1: Comparison of the explorer position updates of SSA and BSA (panels (a) and (b)).
At the same time, in SSA, the follower approaching the best position in all dimensions leads to rapid convergence but also reduces the diversity of the population, making it easy for the algorithm to fall into a local optimum. In BSA, the follower approaches the explorer with a certain probability, which ensures global convergence and population diversity while effectively avoiding local optima. Therefore, the idea of the follower in BSA is introduced into SSA, and the position of the improved SSA follower is updated as follows:

$$x_{i,d}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{x_{worst,d}^{t} - x_{i,d}^{t}}{i^{2}}\right), & i > N/2, \\ x_{i,d}^{t} + \left( x_{P,d}^{t+1} - x_{i,d}^{t} \right) \cdot FL \cdot \mathrm{rand}(0,1), & i \le N/2. \end{cases}$$   (14)
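The sketch below illustrates the kind of combination described above: the BSA foraging step is used for the explorer when $R_2 < ST$, and the follower approaches the best explorer only with a certain probability. The follow probability, the FL range, and the per-dimension decision are illustrative assumptions of this sketch.

import numpy as np

def improved_explorer(x, R2, ST):
    # Explorer update: BSA-style foraging when no danger is detected (R2 < ST).
    if R2 < ST:
        return x + np.random.randn(*x.shape) * x          # no longer shrinks toward zero
    return x + np.random.randn() * np.ones_like(x)        # danger: jump as in basic SSA

def improved_follower(x, x_best, follow_prob=0.5, FL=(0.0, 2.0)):
    # Follower approaches the best explorer only with a certain probability (BSA idea).
    step = (x_best - x) * np.random.uniform(*FL) * np.random.rand(*x.shape)
    mask = np.random.rand(*x.shape) < follow_prob         # per-dimension follow decision
    return np.where(mask, x + step, x)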
2.4. Cauchy Mutation
The Cauchy mutation comes from the Cauchy distribution, a continuous probability distribution. The probability density function of the one-dimensional Cauchy distribution is expressed as follows:

$$f(x) = \dfrac{1}{\pi} \cdot \dfrac{a}{a^{2} + x^{2}}, \quad -\infty < x < +\infty.$$   (15)
When a is 1, the distribution is the standard Cauchy distribution. Figure 2 shows the probability density curves of the Cauchy distribution and the Gaussian distribution. It can be seen from the figure that, compared with the Gaussian distribution, the Cauchy distribution has a lower peak at the origin and descends from the peak toward zero more gradually, with longer tails. Therefore, the Cauchy mutation has a stronger perturbation ability than the Gaussian mutation, and the range of mutation is more uniform. Introducing the Cauchy mutation into the SSA algorithm can take full advantage of the perturbation ability of the Cauchy operator and improve the algorithm's global optimization ability.

The Cauchy mutation equation is as follows:

$$x_{\mathrm{new}} = x + x \cdot \tan\left[\pi\left(\eta - 0.5\right)\right],$$   (16)

where $x$ is the original individual position, $x_{\mathrm{new}}$ is the individual position after the Cauchy mutation, $\eta$ is a random number in (0, 1), and $\tan[\pi(\eta - 0.5)]$ is a standard Cauchy random variable.
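A short sketch of this mutation, with the standard Cauchy variable generated by inverse-transform sampling from a uniform random number in (0, 1); the function name is illustrative.

import numpy as np

def cauchy_mutation(x):
    # Equation (16): perturb each component with a standard Cauchy(0, 1) random variable.
    eta = np.random.rand(*x.shape)              # random number in (0, 1)
    return x + x * np.tan(np.pi * (eta - 0.5))  # tan(pi * (eta - 0.5)) ~ Cauchy(0, 1)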
2.5. Chaotic Perturbation
The purpose of introducing chaotic perturbation into the algorithm is to prevent it from falling into a local optimum and to improve the global search ability and optimization accuracy. The steps are as follows (a code sketch is given after this list):
(1) Generate chaotic variables through equation (7).
(2) Map the chaotic variables to the solution space of the problem to be solved:

$$X_{\mathrm{chaos},d} = X_{d,\min} + z_d \cdot \left( X_{d,\max} - X_{d,\min} \right),$$   (17)

where $X_{d,\min}$ and $X_{d,\max}$ are the minimum and maximum values of the dth-dimension variable $X_d$ and $z_d$ is the chaotic variable.
(3) Perform the chaotic perturbation on the individual according to the following equation:

$$X'_{\mathrm{new}} = \dfrac{X' + X_{\mathrm{chaos}}}{2},$$   (18)

where $X'$ is the individual that needs the chaotic perturbation, $X_{\mathrm{chaos}}$ is the amount of chaotic perturbation generated, and $X'_{\mathrm{new}}$ is the individual after the chaotic perturbation.
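The following sketch walks through the three steps above. Re-seeding the chaotic variable from a fresh uniform value, rather than continuing an existing sequence of equation (7), is an implementation shortcut assumed here for brevity.

import numpy as np

def chaotic_perturbation(x, x_min, x_max, n_pop):
    # Step 1: chaotic variables via the improved tent map (seeded afresh for brevity).
    z = ((2.0 * np.random.rand(*x.shape)) % 1.0 + np.random.rand(*x.shape) / n_pop) % 1.0
    # Step 2: map the chaotic variables into the solution space, equation (17).
    x_chaos = x_min + z * (x_max - x_min)
    # Step 3: perturb the individual, equation (18).
    return (x + x_chaos) / 2.0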
2.6. Introduction of Genetic Algorithm
The genetic algorithm is introduced into the SSA algorithm [29]. As its position update equations show, the core idea of the SSA algorithm is position transfer, that is, improving population quality by sharing useful information among sparrow individuals and by self-learning within individuals. After the genetic algorithm is introduced, the improved individuals obtain a better next-generation population through crossover and mutation. The resulting algorithm not only has the powerful global search ability of the genetic algorithm but also integrates the position transfer idea of the SSA algorithm; it makes full use of the population and individual information that the genetic algorithm ignores, so its optimization performance is better. At the same time, the choice of the crossover probability $P_c$ and the mutation probability $P_m$ in the genetic algorithm is one of the important factors affecting the optimization ability of the algorithm. If $P_c$ is too small, new individuals are generated too slowly during the iterations, leading to early termination of the search. If $P_c$ is too large, too many new individuals are generated in the group, which can destroy the excellent individuals that have already emerged. If $P_m$ is too small, the ability to generate new individuals by mutation is weak, and many excellent genes are lost prematurely and do not enter the next generation, which is not conducive to maintaining the diversity of the population. Lastly, if $P_m$ is too large, the algorithm degenerates into something close to a random search.
Therefore, this paper improves the crossover rate and the mutation rate and proposes an adaptive crossover and an adaptive mutation strategy based on a golden ratio exponential function, so that good individuals at a given moment of evolution can still change. The improved update equations are shown below:

where $P_{c,\max}$ is the maximum crossover probability set by the algorithm, with a value of 0.7, and $P_{m,\max}$ is the maximum mutation probability, with a value of 0.01. Meanwhile, the golden ratio is introduced into the equations, and the optimal solution is approached step by step according to the principles of equal ratio, symmetric contraction, and optimal selection, which improves the speed of the global optimal solution search.
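As one plausible realization of such a golden-ratio exponential decay from $P_{c,\max} = 0.7$ and $P_{m,\max} = 0.01$, consider the sketch below; the exact index functions used by the algorithm may differ, so treat this only as an illustration.

import math

GOLDEN = (math.sqrt(5.0) - 1.0) / 2.0          # golden ratio section, about 0.618

def adaptive_rates(t, t_max, pc_max=0.7, pm_max=0.01):
    # Assumed form: probabilities decay exponentially with base equal to the golden ratio.
    decay = GOLDEN ** (t / t_max)
    return pc_max * decay, pm_max * decay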
A flow chart of the genetic algorithm introduced into SSA is shown in Figure 3.

In Figure 3, each individual sparrow in SSA is regarded as a chromosome in GA. In the Nth generation, individual sparrows enter the (N + 1)th generation after being improved, crossed over, and mutated. The steps for introducing GA into SSA are as follows (a structural sketch is given after this list):
(1) Improved selection operator: in each generation, first calculate the fitness value of each individual sparrow, sort the individuals by fitness, and choose the top half as excellent samples to be improved through the SSA operators; these samples then enter the next generation.
(2) Crossover and mutation operators: to obtain the remaining next-generation individuals, select outstanding individuals from the SSA population as parents and generate new offspring for the next generation through the dynamic crossover and mutation operators.
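A structural sketch of one hybrid generation following these two steps is shown below; ssa_improve, crossover, and mutate stand for the operators described earlier and are passed in as placeholders.

import numpy as np

def hybrid_generation(pop, fitness_fn, ssa_improve, crossover, mutate, pc, pm):
    # pop: list of individuals (1-D arrays). Returns the next generation.
    N = len(pop)
    fitness = np.array([fitness_fn(x) for x in pop])
    order = np.argsort(fitness)                          # best first
    parents = [pop[i] for i in order[: N // 2]]          # top half of the population
    elite = [ssa_improve(p) for p in parents]            # improved via the SSA operators

    offspring = []
    while len(elite) + len(offspring) < N:               # fill the rest by crossover/mutation
        i, j = np.random.choice(len(parents), size=2, replace=False)
        child = parents[i].copy()
        if np.random.rand() < pc:
            child = crossover(parents[i], parents[j])
        if np.random.rand() < pm:
            child = mutate(child)
        offspring.append(child)
    return elite + offspring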
2.7. Improved Sparrow Search Algorithm
The pseudocode of the improved sparrow search algorithm is expressed in detail in Algorithm 2.
Algorithm 2: Pseudocode of the improved sparrow search algorithm (ISSA).
3. Experimental Results and Analysis
In order to verify the optimization performance of the ISSA algorithm, the particle swarm optimization algorithm, grey wolf optimizer, whale optimization algorithm, sine cosine algorithm, salp swarm algorithm, basic sparrow search algorithm, and ISSA were used in simulation experiments on 15 benchmark functions. The convergence accuracy, stability, convergence speed, improvement gains, and comprehensive performance of the ISSA algorithm were analyzed. The simulation experiments in this paper were performed on a Windows 10 64-bit operating system with an Intel Core i7 CPU at 2.60 GHz, 16 GB of RAM, and MATLAB R2016b. The parameters of the algorithms are shown in Table 1.
3.1. Comparative Experiment with Benchmark Functions
The benchmark functions are shown in Table 2. f1–f6 are unimodal functions, f7–f10 are multimodal functions, and f11–f15 are fixed-dimension functions. In order to make the experimental results fair and objective, the number of individuals in each algorithm is set to 30, and the maximum number of iterations is 500. Fifty independent simulation experiments were performed on each benchmark function for each algorithm, and the average value, optimal value, and standard deviation obtained from 50 experiments were calculated; the results are shown in Table 3.
For the unimodal functions f1–f4, ISSA can accurately find the optimal value of zero, and the average value and standard deviation are also zero. Although SSA can find the optimal value of zero on functions f1 and f2, its average values and standard deviations are not zero. The other five algorithms can only approach zero without reaching it. Among them, WOA performs better on f1 and f2, with an optimization performance several orders of magnitude better than those of the other four algorithms. On the unimodal functions f5 and f6, the optimization performance of ISSA is only one or two orders of magnitude better than that of SSA, and the remaining five algorithms do not differ much from SSA. For the multimodal functions f7–f10, ISSA, SSA, and WOA can all find the optimal value of zero on f7, with averages and standard deviations of zero. On f8, ISSA and SSA stop searching after finding a point extremely close to the optimal value and have the same stability; WOA is slightly inferior and not as stable as ISSA and SSA. On f9, ISSA, SSA, GWO, and WOA can also find the optimal value of zero, but GWO and WOA are less stable. On f10, the performance of ISSA is not improved much, with the best value being only three orders of magnitude better than that of SSA, and the stabilities of ISSA and SSA are of the same order of magnitude. On the fixed-dimension functions, all seven algorithms are able to find the optimal value on both f11 and f13. The standard deviation of SSO on f11 is the lowest, and the standard deviation of PSO on f13 is the lowest, indicating that they are the most stable there. On f12, only ISSA and SSA are able to find the optimal value; the stability of ISSA is three orders of magnitude better than that of SSA, and the remaining algorithms come extremely close to the optimal value. On f14, only SCA and WOA fail to find the optimal value, and the optimization capabilities of the other algorithms do not differ much. On f15, only SCA fails to find the optimal value, whereas ISSA has the best stability, 14 orders of magnitude better than that of the other algorithms. Therefore, the performance of ISSA is greatly improved on the unimodal and multimodal functions. All the algorithms perform well on the fixed-dimension functions, where the optimal values are relatively easy to find, so ISSA's performance improvement there is modest.
Figure 4 shows the convergence curves of the different algorithms on the benchmark functions. It can be seen that, for the unimodal functions f1, f2, and f3 and the multimodal functions f7 and f9, ISSA finds the optimal value before reaching the maximum number of iterations. In terms of overall performance across all functions, the convergence speed of ISSA is faster than that of the other algorithms. For the fixed-dimension functions f12, f13, f14, and f15, each algorithm is able to find the optimal value or come extremely close to it; although the convergence speed of ISSA does not improve much there, it still shows an advantage.

Figure 4: Convergence curves of the seven algorithms on the benchmark functions f1–f15 ((a)–(o)).
3.2. Rank-Sum Test
Derrac et al. [30] suggest that a statistical test should be carried out to evaluate the performance of improved algorithms: it is inadequate to evaluate an algorithm only on the basis of its average and standard deviation, and a statistical test is needed to demonstrate that the proposed algorithm offers a significant improvement over existing algorithms. Each run is compared independently to reflect the stability and fairness of the comparison. In this paper, the Wilcoxon rank-sum test is used to determine whether each result of ISSA is statistically significantly different from the best results of the other algorithms at the 0.05 significance level. Table 4 shows the p values obtained by the rank-sum tests for ISSA and the other algorithms on the 15 benchmark functions. When the two algorithms under comparison both reach the optimal value, they cannot be distinguished; thus, NaN in the table means "not applicable," that is, a significance assessment cannot be made. R is the result of the significance assessment: "+," "−," and "=" indicate that ISSA's performance is superior, inferior, and equivalent to that of the compared algorithm, respectively.
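As an illustration, the pairwise test over 50 runs per algorithm can be computed with SciPy's ranksums function; the data below are synthetic placeholders, not the paper's results.

import numpy as np
from scipy.stats import ranksums

# Placeholder data: 50 final objective values per algorithm from independent runs.
issa_results = np.random.lognormal(mean=-20.0, sigma=1.0, size=50)
other_results = np.random.lognormal(mean=-12.0, sigma=1.0, size=50)

stat, p_value = ranksums(issa_results, other_results)
print(p_value, "significant" if p_value < 0.05 else "not significant at 0.05")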
Table 4 shows that only the p values of ISSA versus WOA on f9, ISSA versus PSO on f14, and ISSA versus SSA on f12 and f13 are slightly greater than 0.05; the other p values are much less than 0.05. This indicates that the superior performance of ISSA is statistically significant. In the comparison between ISSA and SSA, R for the f12 and f13 functions is "−" because the optimization performance of SSA itself is already good there: both ISSA and SSA can find the optimal values, but their average values differ. Although the optimization performance of ISSA on the fixed-dimension functions is improved somewhat, there was not much room for improvement to begin with.
In order to evaluate the performance of the algorithms in many aspects, the mean absolute error (MAE) is used as an evaluation index. The MAE ranking shown in Table 5 corresponds to the algorithms listed in Table 2.
As can be seen, ISSA ranks first. Compared with the other six algorithms, ISSA has the smallest MAE, which further proves the effectiveness of ISSA.
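For completeness, the MAE of one algorithm over the benchmark set is simply the mean of the absolute deviations of its obtained averages from the true optima; a sketch with placeholder numbers follows.

import numpy as np

def mean_absolute_error(obtained_means, true_optima):
    # MAE of an algorithm across the benchmark functions.
    return float(np.mean(np.abs(np.asarray(obtained_means) - np.asarray(true_optima))))

# Placeholder values for three functions only, for illustration.
print(mean_absolute_error([0.0, 3.2e-15, 1.1e-3], [0.0, 0.0, 0.0]))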
3.3. Box Plot
In Section 3.1, each algorithm was used to perform 50 experiments on each of the 15 benchmark functions, and the results are compared in Table 3. However, it is difficult to observe the distribution of the data and the relationships between the algorithms from Table 3 alone. To verify the stability and convergence of ISSA, we also conduct a box plot analysis [31]. A box plot summarizes six types of data: the maximum, upper quartile, median, lower quartile, minimum, and outliers. Figure 5 shows the plots for the 15 benchmark functions in the experiment. In the figure, "+" in each box subgraph represents an outlier, the line inside the box represents the median, the two ends of the rectangular box are the upper and lower quartiles, and the whisker ends represent the maximum and minimum. The differences across the seven algorithms are large, so in order to make the box plot comparison clearer, three subgraphs are generated for each benchmark function: the comparison box plots of all seven algorithms, the comparison box plots of the remaining six algorithms after the SCA algorithm is removed, and the comparison box plots of the ISSA and SSA algorithms.

Figure 5: Box plots of the seven algorithms on the benchmark functions f1–f15 ((a)–(o)).
It can be seen from Figure 5 that the median of ISSA is always close to the optimal value of each function and that the spread of its optimal values over the 50 runs is the smallest. Furthermore, the distribution of the convergence values is more concentrated than for the other six algorithms, indicating that the ISSA algorithm is more robust. Comparing ISSA and SSA separately, both can find the optimal value in every run of functions f7, f8, f9, and f13, so the performance of ISSA cannot be improved further there. Additionally, except on f11 and f14, SSA has more outliers than ISSA on every benchmark function, indicating that the ISSA algorithm performs better in terms of stability. Although SSA shows no outliers on f11 and f14, the range of its optimal values there is relatively large, which shows that the SSA algorithm is less stable than ISSA.
3.4. Radar Chart
In order to analyze the overall optimization capabilities of the seven algorithms, the results of the algorithms in each benchmark function are compared and ranked according to their mean values (see Table 3). If the means are equal, the standard deviations are compared; and if the standard deviations are also equal, the two are ranked identically. The ranking results are shown in Table 6. The last row in Table 6 is the average rank of each algorithm. ISSA ranks first and thus has the best optimization performance among the seven algorithms; it is also significantly better than SSA. The ranking results of the remaining algorithms are SSA, GWO/WOA, PSO, SSO, and SCA. In order to display the ranking of the seven algorithms more intuitively on different benchmark functions, a radar chart is used to plot the results of Table 6. This chart is shown in Figure 6. The smaller the area enclosed by the algorithm performance curve, the smaller the ranking values of the algorithm and the better the optimization performance. It can be seen from the figure that the area enclosed by the black curve representing ISSA is the smallest, indicating that ISSA has the best optimization performance overall among the seven algorithms.

4. Conclusion
In this paper, the flight behavior of BSA and the crossover and mutation ideas of GA are introduced into SSA, and the improved sparrow search algorithm (ISSA) is proposed. Through these improvements, ISSA maintains population diversity even at the end of the iterations and keeps the population in a better state. The improved method has good global convergence and robustness and shows better optimization ability and comprehensive performance than the original algorithm. In future work, ISSA will be compared with other novel algorithms, such as monarch butterfly optimization (MBO) [32], the earthworm optimization algorithm (EWA) [33], elephant herding optimization (EHO) [34], the moth search (MS) algorithm [35, 36], the slime mould algorithm (SMA) [37], and Harris hawks optimization (HHO) [38], on specific problems, and it will be applied to practical engineering problems to verify its advantages.
Data Availability
All data are included within the article.
Additional Points
Highlights. (1) The bird swarm algorithm and the genetic algorithm are integrated into SSA; (2) Cauchy mutation and an improved tent chaotic sequence are introduced into SSA; (3) the superior performance of ISSA over several classic algorithms is verified.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors’ Contributions
Chenghai Li made a major contribution to this work.