Abstract
Swarm intelligence algorithms are optimization techniques inspired by the collective behavior of biological populations in nature and have been applied successfully in many fields. Among them, the recently proposed sparrow search algorithm (SSA) has attracted extensive attention for its strong optimization ability, but it tends to fall into local optima. To address this problem, this paper proposes an improved sparrow search algorithm (IHSSA) that combines an iterative chaotic map with infinite collapses (ICMIC) and a hybrid opposition-based learning strategy. In the population initialization stage, an improved ICMIC strategy broadens the distribution of the population and improves the quality of the initial solutions. In the discoverer update stage, an opposition-based learning strategy built on the lens imaging principle refines the discoverers with high fitness, while a generalized opposition-based learning strategy updates the current global worst solution in the joiner update stage. To balance exploration and exploitation, a crisscross strategy is introduced to update the scout positions. Experiments on 14 common test functions, evaluated with the Wilcoxon rank-sum test, show that IHSSA obtains solutions with higher accuracy and better convergence than nine comparison algorithms, including WOA, GWO, PSO, TLBO, and SSA variants. Finally, IHSSA is applied to three constrained engineering optimization problems and achieves satisfactory results, demonstrating the effectiveness and feasibility of the improved algorithm.
1. Introduction
In recent years, new intelligent optimization algorithms have emerged continuously and have been applied in medical treatment [1, 2], finance [3], production scheduling [4], and other fields, where they have proved remarkably effective. Since the end of the last century, scholars around the world, inspired by social behavior in nature [5], have tried to simulate the behavioral characteristics of biological populations and proposed a series of swarm intelligence optimization algorithms, such as Ant Colony Optimization (ACO) [6, 7], Particle Swarm Optimization (PSO) [8, 9], the Whale Optimization Algorithm (WOA) [10], and the Grey Wolf Optimizer (GWO) [11]. Most of these algorithms are modeled on characteristic behaviors of biological populations, such as foraging [12], reproduction [13], and hunting [14], vividly simulating the main activities of social animals. In 2020, Xue J and Shen B proposed the Sparrow Search Algorithm (SSA) [1], based on the foraging and anti-predation behavior of sparrow populations. The formulas and control parameters of the algorithm are not complex, making it relatively easy to understand and implement. Experiments show that SSA's optimization capability is stronger than that of the particle swarm optimization algorithm proposed in 1995 and the grey wolf optimizer proposed in 2014, with better convergence accuracy, faster convergence speed, and better stability. However, compared with existing swarm intelligence optimization algorithms, SSA also has certain shortcomings, such as a longer running time and a greater risk of falling into a local optimum because of its excessively fast convergence, so its global optimization ability is insufficient.
To strengthen the optimization effect of the algorithm and balance its exploration and exploitation capabilities, researchers have proposed a series of improvements to the original sparrow search algorithm that mitigate its tendency to become trapped in local optima. These improved swarm intelligence algorithms have been applied extensively across research fields. Lv et al. [2] introduced a chaotic sequence to perturb individuals that had fallen into a local optimum, allowing the SSA to escape and continue searching for the global optimum; they also integrated a Cauchy-Gaussian mutation operator to avoid stagnation by changing the position of the elite sparrow in the search space. Zhu [15] introduced an adaptive learning factor to counter the slowing convergence trend and reduced convergence accuracy of SSA under a limited number of iterations, and applied the resulting ASSA to the identification of PEMFC stack parameters. Mao and Zhang [16] fused the sine cosine algorithm and the Levy flight strategy into the basic SSA and applied perturbation mutation at the optimal solution position, which enhanced the algorithm's ability to escape local optima and greatly increased the accuracy of the solution. Liu et al. [17] introduced an improved sparrow search strategy for the route planning of UAVs, which overcame the inefficiency of path planning in complex three-dimensional flight. Yuan et al. [18] utilized a center-of-gravity opposition-based learning mechanism to initialize the population, which widened the population distribution.
They also introduced a learning factor in the discoverer update and a mutation operator to reduce the probability of the algorithm falling into a local optimum; applying the result to Distributed Maximum Power Point Tracking (DMPPT) provided conditions for the stable operation of a microgrid. Liu et al. [19] proposed a balanced sparrow search algorithm (BSSA) in which the random walk of the Levy flight method adjusts the local search, improving the focusing efficiency of a CNN; they applied it in the medical field to improve the robustness and accuracy of MRI diagnosis of brain tumors. To solve the problem of labeled data classification, Zhang et al. [20] combined an improved SSA with an adaptive classifier, introducing the sine cosine algorithm and a newly proposed labor cooperation structure, and demonstrated a strong application effect in the classification of lung CT images. Zhang and Ding [21] designed a stochastic configuration network (SCN) based on a chaotic sparrow search algorithm (CSSA); combined with the adaptive control factor of CSSA, it automatically updates the regularization parameters and scale factors of the SCN, improving its regression performance on large-scale stochastic configuration problems. Zhu and Yousefi [15] proposed the adaptive sparrow search algorithm ASSA to optimize the seven unknown parameters of the proton exchange membrane fuel cell model in a PEMFC stack, aiming for the best agreement with the empirical voltage polarization curve of the battery pack. Zhou et al. [22] successfully applied SSA to wavefront shaping and focusing by introducing a crossover strategy, which remedied SSA's weak performance on high-dimensional optimization problems and, without a doubt, provided a good reference for future wavefront shaping research.
Because the sparrow search algorithm was proposed only recently, research on it is still at an exploratory stage, and no definitively superior variant has yet emerged. To further improve the solution accuracy and convergence efficiency of the sparrow algorithm, this paper builds on previous work and proposes a novel sparrow search algorithm called IHSSA, which combines an improved infinite-collapse iterative chaotic map with a hybrid opposition-based learning strategy. The innovations can be summarized as follows:
(1) The improved iterative chaotic map with infinite collapses (IICMIC) is used to initialize the sparrow population. This strengthens the diversity of the initial population and increases the breadth of its distribution.
(2) A hybrid opposition-based learning strategy is put forward to update the positions of specific individuals. Given the effectiveness of opposition-based learning in discovering new solutions, lens-imaging opposition-based learning is applied to the global optimal solution after the discoverer update, and generalized opposition-based learning is applied to the current worst individual after the joiner update. With boundary limitations taken into account, the population can reach as many feasible areas as possible, maximizing exploitation.
(3) The horizontal and vertical crossover strategy is introduced to update the positions of the guards. Its advantage is that it updates individual sparrows along both the horizontal and vertical directions, expanding the range of the population to a certain extent while maintaining the solution speed.
This paper follows a logical order. The first chapter introduces the research background of intelligent algorithms in recent years and some contributions made by researchers to this field. The second chapter introduces the basic sparrow search algorithm SSA. The third chapter presents the improvements of this paper in the order in which they are applied, describes the proposed IHSSA, and provides its flow chart; the advantage of the algorithm is supported by time complexity analysis and the Wilcoxon rank-sum test, and population distribution diagrams illustrate its contribution to dispersing the population. The fourth chapter tests the new algorithm on 14 standard test functions, tabulates the statistical results, and analyzes the data comparatively to assess the strengths and weaknesses of the algorithm. In Chapter 5, we apply IHSSA to classical constrained engineering optimization problems, and the obtained data further demonstrate the feasibility and effectiveness of the proposed algorithm. Finally, a brief summary of the work is given, along with the author's and his team's plans and prospects for future research.
2. Sparrow Search Algorithm SSA
2.1. Group Predation Behavior of Sparrows
In nature, the sparrow is a common bird that lives in environments shared with humans. Generally, the upper body of the sparrow is brown and black, and its conical beak is short and strong. Sparrows usually live together in groups with a clear division of labor. Some sparrows are responsible for finding food and providing foraging areas and directions for the entire population, while the remaining sparrows obtain food based on the information those sparrows provide. In addition, when a sparrow in the population realizes that danger is coming, it will issue an alarm in time, and the entire population will quickly begin anti-predation behavior.
2.2. SSA Algorithm Description
SSA is based on the cleverness and strong memory of sparrows and closely simulates the cooperative mechanism of sparrow populations in daily foraging. We give new names to the three types of sparrows mentioned earlier. ① Those responsible for finding food are called discoverers. ② Those who follow the discoverers to obtain food are called joiners. ③ Some joiners constantly monitor the discoverers and choose moments to compete for food resources in order to increase their rate of food acquisition; this type of joiner is called a monitor. Discoverers generally account for 10%–20% of the entire population. The roles of discoverer and joiner can be exchanged, provided that their proportions relative to the entire population remain constant.
The position of each sparrow is treated as a candidate solution of the algorithm. The initial positions of the sparrows are represented by the following matrix:

\[ X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \]
Here, d represents the dimension of the problem to be optimized, and n represents the size of the sparrow population. The fitness values of all sparrows can then be expressed as:

\[ F_X = \begin{bmatrix} f([x_{1,1}, x_{1,2}, \ldots, x_{1,d}]) \\ f([x_{2,1}, x_{2,2}, \ldots, x_{2,d}]) \\ \vdots \\ f([x_{n,1}, x_{n,2}, \ldots, x_{n,d}]) \end{bmatrix} \]
Among them, the function f represents the fitness function. The discoverer with better fitness will obtain food earlier in the food search process.
Since the discoverer needs to guide the foraging direction for the entire population, it is allowed a larger food search range. During the iterative process, the location of the discoverer is updated as follows:

\[ X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot Maxiter}\right), & R_2 < ST \\[2mm] X_{i,j}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \]
Here, \(X_{i,j}^{t}\) represents the position of the ith sparrow in the jth dimension at iteration t; Maxiter represents the maximum number of iterations of the algorithm; t is the current iteration number; \(\alpha\) is a uniform random number in (0, 1]; Q is a random number that obeys the standard normal distribution; L is a \(1 \times d\) matrix with every element equal to 1; \(R_2 \in [0, 1]\) is the alarm value and \(ST \in [0.5, 1]\) the safety value. Once a sparrow in the population detects a predator or other danger, it issues an alarm signal. When the alarm value exceeds the safety threshold, the discoverer leads the population to a safer area to forage.
The position of the joiner is updated by the following formula:

\[ X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\[2mm] X_{P}^{t+1} + \left|X_{i,j}^{t} - X_{P}^{t+1}\right| \cdot A^{+} \cdot L, & i \le n/2 \end{cases} \]
Here, \(X_{worst}^{t}\) represents the current global worst position; \(X_{P}^{t+1}\) represents the best position occupied by the discoverer in the jth dimension at the (t+1)th iteration; A is a \(1 \times d\) matrix whose elements are randomly assigned 1 or -1 and which satisfies \(A^{+} = A^{T}(AA^{T})^{-1}\).
When \(i > n/2\), the ith joiner has a low fitness level and cannot obtain food; to obtain food and increase its energy reserves, it must fly elsewhere to forage. When \(i \le n/2\), the ith joiner forages at a randomly chosen location near the best position \(X_{P}^{t+1}\).
The sparrows responsible for reconnaissance generally account for 10%–20% of the total; they constantly keep watch and prompt the entire population to take anti-predation action when facing danger. The position update formula of the monitors is as follows:

\[ X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left|X_{i,j}^{t} - X_{best}^{t}\right|, & f_i > f_g \\[2mm] X_{i,j}^{t} + K \cdot \left(\dfrac{\left|X_{i,j}^{t} - X_{worst}^{t}\right|}{(f_i - f_w) + \varepsilon}\right), & f_i = f_g \end{cases} \]
Here, \(\beta\) is a step-size control parameter drawn from the standard normal distribution; K is a random number in the range \([-1, 1]\); \(\varepsilon\) is an infinitesimal constant whose presence avoids a zero denominator; and \(f_i\), \(f_g\), and \(f_w\) represent the current fitness of the sparrow, the global best fitness, and the global worst fitness, respectively.
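As an illustration of the three update rules above, the following Python sketch implements each formula directly. The function names, the NumPy generator setup, and the toy parameter values are our own assumptions, not code from the paper.

```python
import numpy as np

# Illustrative sketch of the three SSA update rules; names and the toy
# random generator are assumptions, formulas follow the equations above.
rng = np.random.default_rng(0)

def update_discoverer(x, i, t_max, ST=0.8):
    """Exploit in place while safe (R2 < ST); otherwise take a random step."""
    R2 = rng.random()                       # alarm value in [0, 1]
    if R2 < ST:
        alpha = rng.random() or 1e-12       # uniform in (0, 1]
        return x * np.exp(-(i + 1) / (alpha * t_max))
    Q = rng.standard_normal()               # standard normal scalar
    return x + Q * np.ones_like(x)          # L: 1-by-d matrix of ones

def update_joiner(x, i, n, x_best, x_worst):
    """Starving joiners (i > n/2) fly elsewhere; others follow the best."""
    if i > n / 2:
        Q = rng.standard_normal()
        return Q * np.exp((x_worst - x) / (i + 1) ** 2)
    A = rng.choice([-1.0, 1.0], size=x.size)
    A_plus = A / (A @ A)                    # A^T (A A^T)^{-1} for a 1-by-d A
    return x_best + np.abs(x - x_best) * A_plus

def update_scout(x, f_i, f_best, f_worst, x_best, x_worst, eps=1e-50):
    """Edge sparrows (f_i > f_best) move toward the best; central ones walk."""
    if f_i > f_best:
        beta = rng.standard_normal()        # step-size control parameter
        return x_best + beta * np.abs(x - x_best)
    K = rng.uniform(-1, 1)
    return x + K * np.abs(x - x_worst) / ((f_i - f_worst) + eps)
```

Each helper maps one position vector to its updated position; a full SSA loop would apply them to the sorted population each iteration.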
3. IHSSA
3.1. Infinitely Folded Iterative Chaotic Map Initialization Population
3.1.1. ICMIC
A swarm intelligence algorithm needs an initialization strategy to generate an initial population and provide a starting guess for the subsequent evolution process. Differences in the initial distribution of the sparrow population can lead the subsequent foraging process to very different final results; both convergence speed and optimization accuracy are deeply affected. The quality of the initial population is therefore important. In the original SSA, the population is not guided by prior knowledge; it is generated randomly. In 1975, Li and Yorke introduced the concept of "chaos" in the article "Period Three Implies Chaos", using the word chaos for the first time [23].
Chaotic systems are special because of their unpredictability, ergodicity, and parameter sensitivity. In the field of parameter optimization, chaotic maps can be used in place of pseudorandom number generators to produce chaotic numbers between 0 and 1. Since chaos can traverse the whole space only given a sufficient length of time, it is sensible to embed chaos in a global optimizer to improve its search performance within a short time frame [24]. Experiments have shown that using chaotic sequences for population initialization affects the whole course of the algorithm and often yields better results than pseudorandom numbers. The ergodicity of chaos gives the initial sparrow population better diversity and avoids premature convergence, improving global optimization accuracy and convergence and overcoming shortcomings of traditional optimization algorithms.
This paper applies the ICMIC map (Iterative Chaotic Map with Infinite Collapses), one of the most classic chaotic maps, to initialize the sparrow population. This chaotic map was proposed in 2001 by Di He. Its basic idea is to generate a chaotic sequence through the mapping relationship and then transform the chaotic sequence into the search space of the population [25]. Its higher Lyapunov exponent indicates stronger chaotic characteristics than other commonly used continuous chaotic models [26]. Selecting appropriate parameters yields a good chaotic model and satisfactory results in practical applications. The uniform distribution test of chaotic systems by Di et al. [26] showed that the one-dimensional ICMIC exhibits noise closer to a uniform distribution. Two mathematical expressions for the ICMIC map are as follows:
Expression one:

\[ x_{n+1} = \sin\left(\frac{a}{x_n}\right), \quad a \in (0, +\infty),\; x_n \in [-1, 1] \setminus \{0\} \]
Expression two:
In expression one, a is a very important adjustable parameter; experiments show that its value directly affects the mapping effect and hence the quality of the population. In the second expression, the control parameter plays a similarly important role.
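As a minimal sketch of expression one, the following Python snippet iterates the map \(x_{n+1} = \sin(a/x_n)\); the starting point and the value of a are illustrative assumptions.

```python
import numpy as np

# Iterate the classic infinite-collapse map x_{n+1} = sin(a / x_n).
# The starting point x0 and parameter a below are illustrative assumptions.
def icmic_sequence(x0, a, n):
    seq = [x0]
    for _ in range(n - 1):
        seq.append(np.sin(a / seq[-1]))     # each iterate stays in [-1, 1]
    return np.array(seq)

seq = icmic_sequence(x0=0.7, a=0.9, n=500)
assert np.all(np.abs(seq) <= 1.0)           # bounded, aperiodic orbit
```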
3.1.2. IICMIC
Based on expression two in Section 3.1.1, this paper proposes an improved iterative chaotic map with infinite collapses, IICMIC. Its mathematical expression is as follows:
Extensive experiments show that SSA obtains a good chaotic sequence when the value of a lies in the range (0.6, 1). Combining IICMIC with the original SSA, the generated initial population states are shown in Figure 1: Figure 1(a) shows the population distribution after the original SSA initialization, and Figure 1(b) shows the distribution of the sparrow population after initialization with IICMIC. The improved initialization method clearly increases the diversity of the population and greatly reduces the risk of falling into a local optimum. The value of a is set to 0.9 in the subsequent experiments.
Figure 1: (a) Population distribution after the original SSA initialization. (b) Population distribution after IICMIC initialization.
In combination with SSA, we first select N closely spaced initial values as the initial state of the population. Owing to the parameter sensitivity of the ICMIC map, even small gaps between individuals are captured. These N initial values are mapped to the same number of chaotic sequence values and then inversely mapped into the corresponding individual search space. The initial position of the ith individual after this transformation is denoted \(Z_i\) (i = 1, 2, ..., N).
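The initialization procedure just described might be sketched as follows; the burn-in length, the closeness of the starting values, and the use of sin(a/x) in place of the paper's IICMIC (whose exact expression is not reproduced here) are all assumptions.

```python
import numpy as np

# N chaotic values per dimension are generated from closely spaced starting
# points, then linearly mapped from [-1, 1] into the search interval
# [lb, ub]. sin(a/x) stands in for the paper's IICMIC (an assumption).
def chaotic_init(n, dim, lb, ub, a=0.9, seed=0):
    rng = np.random.default_rng(seed)
    z = 0.5 + 1e-3 * rng.standard_normal((n, dim))  # nearly identical seeds
    for _ in range(50):                             # burn-in iterations
        z = np.where(z == 0.0, 1e-12, z)            # guard against a/0
        z = np.sin(a / z)
    return lb + (z + 1.0) / 2.0 * (ub - lb)         # [-1, 1] -> [lb, ub]

pop = chaotic_init(n=30, dim=10, lb=-100.0, ub=100.0)
assert pop.shape == (30, 10)
assert pop.min() >= -100.0 and pop.max() <= 100.0
```

Despite the nearly identical starting points, the parameter sensitivity of the map spreads the rows of `pop` across the whole interval.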
3.2. Hybrid Reverse Learning Strategy to Update the Position of the Discoverer
Opposition-based learning (OBL) is an intelligent computation method first proposed by Tizhoosh in 2005. With deepening research into various algorithms, OBL has been successfully applied to many intelligent algorithms [27–31]. Its main idea can be summarized as follows: compute a feasible solution and its opposite solution, evaluate both, and select the better one according to given conditions. Research shows that a solution generated by opposition-based learning is better than a randomly generated one, with a higher probability of reaching the optimal solution. OBL is therefore well suited to discovering new solutions in unexplored regions, which increases the diversity of the population.
In the discoverer stage, a broad and flexible search mechanism is the key to guiding the entire sparrow population toward food while avoiding danger. To better realize the leading role of the discoverers, researchers have spent much effort exploring this area and have gradually proposed a series of improvements. However, traditional learning strategies have limited problem-solving ability and achieve their goals only in certain dimensions. In response, this paper proposes a hybrid opposition-based learning method built on traditional OBL: an improved lens-imaging opposition-based learning mechanism is applied to update the optimal solution in the discoverer stage, and generalized opposition-based learning is performed on the global worst solution. This hybrid approach attains higher optimization accuracy and avoids premature convergence.
3.2.1. Reverse Learning Strategy Based on Improved Lens Principle to Update Optimal Position
The opposition-based learning strategy based on the lens principle is flexible and versatile, with a strong ability to explore unknown areas and uncover new solutions. Its principle is as follows:
Suppose there is an object P with height h whose projection on the x-axis is x. Let \(a_j\) and \(b_j\) be the bounds of the solution in the jth dimension under the current algorithm. The midpoint of these bounds is defined as the base point O, and a lens with focal length f is placed at this point. Through lens imaging, an image of height \(h^{*}\), different from P, is obtained; its projection on the x-axis is denoted \(x^{*}\), which is the newly generated opposite solution under this learning strategy. The schematic diagram is shown in Figure 2.

From Figure 2, we can clearly see that P generates a new image under the action of the lens. From the properties of similar triangles, we obtain the following formula:

\[ \frac{(a_j + b_j)/2 - x}{x^{*} - (a_j + b_j)/2} = \frac{h}{h^{*}} \]

Let \(k = h/h^{*}\) (the scale factor); the mathematical expression of the opposite point can then be written as

\[ x^{*} = \frac{a_j + b_j}{2} + \frac{a_j + b_j}{2k} - \frac{x}{k} \]
When \(k = 1\), this reduces to

\[ x^{*} = a_j + b_j - x \]
The formula above is the general form of the opposition-based learning strategy, in which the generated new individual is fixed. Studies have shown that, for high-dimensional complex functions, new individuals drawn from a fixed range have a certain probability of falling into a local optimum, and in the later iterations of the algorithm the generated candidates are usually very close to the current optimum. To counter this hidden danger, we introduce a new operator that makes the scale factor k vary dynamically, so that the generated individuals change over time. The randomness of the solution keeps individuals vital and increases the diversity of the population. The operator's mathematical expression is as follows (applied to the position of the ith sparrow individual in the jth dimension):
Among them, t represents the current number of iterations, and T represents the maximum number of iterations.
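A hedged sketch of the lens-imaging opposite point is given below. The dynamic schedule for k is left out (the paper defines its own operator); with k = 1 the rule reduces to the classic opposite point \(a_j + b_j - x\).

```python
import numpy as np

# Lens-imaging opposite point x* = (a+b)/2 + (a+b)/(2k) - x/k, applied
# elementwise with scalar bounds. The schedule for k is not reproduced
# here; k = 1 recovers the ordinary opposite point a + b - x.
def lens_opposite(x, lb, ub, k):
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k

x = np.array([3.0, -2.0, 7.5])
lb, ub = -10.0, 10.0
assert np.allclose(lens_opposite(x, lb, ub, k=1.0), (lb + ub) - x)
```

In practice the opposite point is kept only if its fitness beats the current best, so the operation can never worsen the population.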
3.2.2. Generalized Reverse Learning Strategy Updates the Current Global Worst Position
As research deepens, more and more attention has been paid to refining the optimal solution, but the current global worst position should not be ignored. Updating the worst position can broaden the search range and spread the population distribution. To maximize the diversity of individuals, opposition-based learning is applied not only to the best individual in the discoverer stage but also to the worst individual in the sparrow group. Complementing the improved lens-imaging strategy of the previous subsection, this subsection adopts the generalized opposition-based learning (GOBL) strategy to update the current global worst position after each iteration.
The concept of generalized opposition-based learning is as follows: let \(x_{i,j}\) be an individual whose dynamic search range in the jth dimension is \([a_j(t), b_j(t)]\), and let \(x_{i,j}^{*}\) be its opposite solution. The mathematical expression of \(x_{i,j}^{*}\) is

\[ x_{i,j}^{*} = k \cdot \left(a_j(t) + b_j(t)\right) - x_{i,j} \]
Here, k is a random number uniformly distributed in (0, 1). If the opposite solution falls outside the predetermined range, it is randomly regenerated within the dynamic search range \([a_j(t), b_j(t)]\) according to the following formula:

\[ x_{i,j}^{*} = a_j(t) + rand(0, 1) \cdot \left(b_j(t) - a_j(t)\right) \]
The purpose of opposition-based learning is to find a new and better solution. Generalized opposition-based learning compares the worst solution against its opposite and updates the current global worst solution once per iteration. At the same time, GOBL adds a dynamic boundary update that basic opposition-based learning lacks, which keeps the search space relatively small. Combining GOBL with the worst-joiner update of the sparrow search algorithm exploits the characteristics of opposition-based learning to explore more feasible regions while improving the convergence speed of the algorithm.
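The GOBL update of the worst individual can be sketched as follows; the function name and the scalar bounds are illustrative assumptions, and out-of-range coordinates are re-sampled as described above.

```python
import numpy as np

# Generalized opposite point x* = k(a + b) - x with k ~ U(0, 1);
# coordinates leaving the dynamic range are re-sampled inside it.
# The function name and scalar bounds are illustrative assumptions.
def gobl(x, lb, ub, rng):
    k = rng.random()
    x_op = k * (lb + ub) - x
    out = (x_op < lb) | (x_op > ub)                 # out-of-range mask
    x_op[out] = lb + rng.random(out.sum()) * (ub - lb)
    return x_op

rng = np.random.default_rng(1)
x_op = gobl(np.array([4.0, -9.0, 0.5]), lb=-10.0, ub=5.0, rng=rng)
assert np.all((x_op >= -10.0) & (x_op <= 5.0))
```

As with the lens rule, the opposite point would replace the worst individual only if its fitness is better.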
3.3. Vertical and Horizontal Cross Strategy
The optimization speed of SSA is very fast and its solution accuracy is strong, but as the number of iterations increases, the sparrow population gathers to a large extent around a local optimum. To balance the global exploration and exploitation capabilities of the algorithm and keep it from falling into local optima, we adopt the crisscross optimization algorithm, proposed in 2014 and inspired by the Confucian doctrine of the golden mean and by the crossover operation in genetic algorithms. Experimental results show that, compared with other heuristic algorithms, crisscross optimization performs excellently on most test functions [32].
This paper proposes to integrate the vertical and horizontal crossover strategy into the guard search stage of the sparrow search algorithm, which expands the range of the population as much as possible while preserving the speed of the solution.
3.3.1. Horizontal Crossover Strategy
Horizontal crossover pairs up the individuals of the population, so that the search of a multidimensional problem takes place inside hypercubes whose vertices are the paired parents. To reduce blind spots that cannot otherwise be reached, horizontal crossover also searches the periphery of each hypercube with a small probability. This is what gives the horizontal search its strong global search ability.
In this paper, two parent guard individuals \(X_i\) and \(X_j\) are crossed horizontally in the dth dimension to generate new individuals \(MS_{i,d}\) and \(MS_{j,d}\):

\[ MS_{i,d} = r_1 \cdot X_{i,d} + (1 - r_1) \cdot X_{j,d} + c_1 \cdot (X_{i,d} - X_{j,d}) \]
\[ MS_{j,d} = r_2 \cdot X_{j,d} + (1 - r_2) \cdot X_{i,d} + c_2 \cdot (X_{j,d} - X_{i,d}) \]
Here, \(r_1\) and \(r_2\) are random numbers uniformly distributed in [0, 1], and \(c_1\) and \(c_2\) are random numbers uniformly distributed in [−1, 1].
The offspring produced by horizontal crossover must undergo elite selection against their parents, retaining the individuals with higher fitness. In this way, the algorithm converges steadily toward the optimal solution, ensuring convergence efficiency without sacrificing optimization accuracy.
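A sketch of horizontal crossover followed by elite selection, assuming a minimization objective; the helper names and the toy sphere fitness are our own.

```python
import numpy as np

# Horizontal crossover between two parent guards, then elite selection
# against a toy sphere objective (our assumption, minimization).
rng = np.random.default_rng(2)

def horizontal_cross(xi, xj):
    r1, r2 = rng.random(xi.size), rng.random(xi.size)
    c1, c2 = rng.uniform(-1, 1, xi.size), rng.uniform(-1, 1, xi.size)
    mi = r1 * xi + (1 - r1) * xj + c1 * (xi - xj)
    mj = r2 * xj + (1 - r2) * xi + c2 * (xj - xi)
    return mi, mj

def elite_select(parent, child, f):
    """Keep whichever of parent/child has the lower (better) fitness."""
    return child if f(child) < f(parent) else parent

sphere = lambda x: float(np.sum(x * x))
xi, xj = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
mi, mj = horizontal_cross(xi, xj)
survivor = elite_select(xi, mi, sphere)
assert sphere(survivor) <= sphere(xi)       # elite selection never regresses
```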
3.3.2. Vertical Cross Strategy
The premature convergence of most swarm intelligence algorithms is caused by a small number of stagnant population dimensions. The original purpose of introducing vertical crossover is to help certain dimensions of the population escape this dimensional stagnation. Unlike horizontal crossover, vertical crossover operates between dimensions of the same new individual. Its function, similar to the mutation mechanism in genetic algorithms, is to prevent premature convergence in the later stages of SSA.
Assume a newborn individual \(MS_i\) is crossed vertically in dimensions \(d_1\) and \(d_2\); the calculation formula is as follows:

\[ MS_{i,d_1} = r \cdot X_{i,d_1} + (1 - r) \cdot X_{i,d_2} \]
Here, \(MS_{i,d_1}\) is the new value generated after vertical crossover, and r is a random number uniformly distributed in [0, 1].
Like the individuals generated by horizontal crossover, the new individuals generated by vertical crossover must undergo elite selection against their parents, and the one with higher fitness is retained as the final individual. The advantage is that this not only increases the possibility of finding the optimum in breadth, but also acts across the various dimensions, continuously improving the quality of the solution. Even individuals that have fallen into a local optimum have a chance to jump out.
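Vertical crossover on a single individual can be sketched as follows; the chosen dimensions and values are illustrative.

```python
import numpy as np

# Vertical crossover blends two dimensions d1, d2 of one individual,
# which can pull a stagnant dimension out of a local optimum.
rng = np.random.default_rng(3)

def vertical_cross(x, d1, d2):
    r = rng.random()                        # r uniform in [0, 1)
    child = x.copy()
    child[d1] = r * x[d1] + (1 - r) * x[d2] # only dimension d1 changes
    return child

x = np.array([10.0, -3.0, 0.25, 7.0])
child = vertical_cross(x, d1=0, d2=2)
assert child[1] == x[1] and child[3] == x[3]
assert min(x[0], x[2]) <= child[0] <= max(x[0], x[2])
```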
It is not difficult to see that combining horizontal and vertical crossover can indeed balance the exploration and exploitation capabilities of the algorithm to a certain extent. A bottleneck in the horizontal direction can be relieved by the vertical crossover, whose gains are immediately fed back to the horizontal crossover, and the information then spreads to the entire population. The combination of the two acts like a mesh structure that provides maximum help for optimization.
3.4. Frame Work of IHSSA
In summary, to address the weaknesses of the original sparrow search algorithm, which converges quickly and accurately but is prone to premature convergence, several improvements have been proposed. The improved ICMIC is applied in the initialization phase, and the hybrid opposition-based learning strategy is utilized to update the discoverers and joiners, respectively. At the same time, the horizontal and vertical crossover strategy is added in the monitor stage, so that every stage of the algorithm is updated and optimization is pushed as far as possible. The specific implementation steps are as follows:
Step 1. Initialize the population and its parameters, including the population size N, the proportion of discoverers PD, the proportion of guards SD, the dimension of the objective function set to D, the upper and lower bounds of the initial value set to lb and ub, the maximum number of iterations T, and the alarm threshold ST, solving accuracy ε.
Step 2. Employ IICMIC to initialize the population (8), generate N D-dimensional vectors Zi, and then inversely map to the corresponding individual search space. The renewal of the population ensures the diversity of the sparrow population.
Step 3. Calculate the fitness of each sparrow, select the current optimal fitness fb and its corresponding position xb, and the current worst fitness fw and its corresponding position xw.
Step 4. According to the set ratio PD, randomly select pNum sparrows with excellent adaptability as discoverers, and the rest become joiners. Update the position of the discoverers according to formula (3).
Step 5. According to the population fitness updated by the discoverer, an improved lens-based reverse learning strategy (12) is utilized to update the optimal value.
Step 6. Update the position of the joiner according to formula (4).
Step 7. Employ the generalized opposition-based learning strategy (13) to update the current global worst value.
Step 8. Randomly generate sNum guards from the population according to the ratio SD, and perform the horizontal crossover (15) and (16) operation.
Step 9. Perform vertical crossover operation according to formula (17), compare the degree of fitness, and save the better ones.
Step 10. According to the current state of the sparrow population, update the optimal position xb, the best fitness value fb, the worst position xw, and the worst fitness value fw of the entire population during the entire foraging process.
Step 11. Determine whether the iteration is over. If the algorithm has reached the maximum number of iterations, or the solution accuracy has reached the set value, the loop ends and the optimization result is output. Otherwise, the iteration counter t is incremented and the algorithm returns to Step 3 to continue the next iteration.
Step 12. Output the results of IHSSA.
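Steps 1–12 can be condensed into a compact, runnable skeleton like the one below. It is a structural sketch under many simplifications, not the authors' implementation: sin(a/x) stands in for IICMIC, the lens rule uses k = 1, GOBL uses a random k with scalar bounds, and the crisscross step is reduced to one greedy horizontal pass.

```python
import numpy as np

# Structural sketch of the IHSSA loop (Steps 1-12); all details that the
# paper leaves to its own formulas are simplified here.
def ihssa(f, dim, n=30, lb=-10.0, ub=10.0, T=100, PD=0.2, SD=0.1, ST=0.8, seed=0):
    rng = np.random.default_rng(seed)
    z = 0.5 + 1e-3 * rng.standard_normal((n, dim))      # Step 2: chaotic init
    for _ in range(50):
        z = np.sin(0.9 / np.where(z == 0.0, 1e-12, z))
    X = np.clip(lb + (z + 1) / 2 * (ub - lb), lb, ub)
    fit = np.array([f(x) for x in X])
    p, s = max(1, int(n * PD)), max(1, int(n * SD))
    for t in range(T):
        order = np.argsort(fit)                          # Step 3: rank by fitness
        X, fit = X[order], fit[order]
        for i in range(p):                               # Step 4: discoverers
            if rng.random() < ST:
                X[i] = X[i] * np.exp(-(i + 1) / ((rng.random() + 1e-12) * T))
            else:
                X[i] = X[i] + rng.standard_normal()
        cand = np.clip((lb + ub) - X[0], lb, ub)         # Step 5: lens OBL, k = 1
        if f(cand) < f(X[0]):
            X[0] = cand
        for i in range(p, n):                            # Step 6: joiners
            if i > n / 2:
                X[i] = rng.standard_normal(dim) * np.exp((X[-1] - X[i]) / (i + 1) ** 2)
            else:
                A = rng.choice([-1.0, 1.0], dim)
                X[i] = X[0] + np.abs(X[i] - X[0]) * A / dim
        cand = np.clip(rng.random() * (lb + ub) - X[-1], lb, ub)  # Step 7: GOBL
        if f(cand) < f(X[-1]):
            X[-1] = cand
        for i in rng.choice(n, s, replace=False):        # Steps 8-9: crisscross
            j = int(rng.integers(n))
            r, c = rng.random(dim), rng.uniform(-1, 1, dim)
            cand = np.clip(r * X[i] + (1 - r) * X[j] + c * (X[i] - X[j]), lb, ub)
            if f(cand) < f(X[i]):
                X[i] = cand
        X = np.clip(X, lb, ub)
        fit = np.array([f(x) for x in X])                # Step 10: refresh records
    return X[np.argmin(fit)], float(fit.min())           # Step 12: output

best_x, best_f = ihssa(lambda x: float(np.sum(x * x)), dim=5)
```

On a simple sphere objective this skeleton converges toward the origin, which is enough to check that the stages interact as intended.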
The flow chart is shown in Figure 3.

4. Experimental Results and Analysis
4.1. Benchmark Function Test
To verify the effectiveness of the newly improved algorithm, this paper selects 14 internationally representative benchmark functions for testing. The selected benchmark functions, with their names, expressions, and search intervals, are shown in Table 1. F1–F4 are unimodal functions with a single global optimum, which test the local exploitation capability of an algorithm. F5–F7 are multimodal functions, which test the balance between exploration and exploitation. The final selections, F8–F14, are all fixed-dimension functions. The theoretical optimal values of the 14 selected test functions are all 0.
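For concreteness, here are two benchmarks of the kinds described, chosen as representative examples rather than taken verbatim from Table 1: the unimodal sphere function and the multimodal Rastrigin function, both with global minimum 0 at the origin.

```python
import numpy as np

# Representative examples only (not necessarily the exact Table 1 suite):
# sphere is unimodal (tests exploitation), Rastrigin is multimodal
# (tests exploration); both have global minimum 0 at the origin.
def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

assert sphere(np.zeros(30)) == 0.0
assert abs(rastrigin(np.zeros(30))) < 1e-9
```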
All the algorithms mentioned are run on a Windows 10 64-bit system with an Intel(R) Core(TM) i5-9300H CPU @ 2.40 GHz and 16 GB RAM, using the MATLAB R2016b platform for simulation.
4.2. Ablation Experiment
In order to verify the influence of the three improvement points on the overall performance, an ablation experiment is carried out, so that the comparison results can be analyzed persuasively. The 14 functions selected in the previous section are again used for verification, and the statistical results are grouped into five configurations according to the type and number of improvement points: the original SSA; ISSA-I, which combines the improved ICMIC initialization with the original SSA; ISSA-II, which adds the hybrid reverse learning strategy to ISSA-I; ISSA-III, which adds the crisscross strategy to ISSA-I; and IHSSA, which combines all the innovations proposed in this paper. The data are integrated into Table 2 according to these groupings.
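For reference, the standard ICMIC map is x_{k+1} = sin(a / x_k), defined on [-1, 1] excluding 0; a minimal chaotic initialization along these lines is sketched below. The paper's improved IICMIC differs in details not reproduced here, so this is an illustrative sketch only, with the parameter a and the starting interval chosen for demonstration.

```python
import math
import random

def icmic_sequence(x0, a, n):
    # Standard ICMIC map: x_{k+1} = sin(a / x_k); values stay in [-1, 1].
    seq = []
    x = x0
    for _ in range(n):
        x = math.sin(a / x)
        seq.append(x)
    return seq

def icmic_init(pop_size, dim, lb, ub, a=2.0, seed=1):
    # Chaotic initialization: one ICMIC trajectory per individual, with the
    # chaotic values mapped from [-1, 1] onto the search interval [lb, ub].
    rnd = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        x0 = rnd.uniform(0.1, 1.0)  # avoid the singularity at x = 0
        chaos = icmic_sequence(x0, a, dim)
        pop.append([lb + (c + 1.0) / 2.0 * (ub - lb) for c in chaos])
    return pop

pop = icmic_init(pop_size=30, dim=5, lb=-100.0, ub=100.0)
assert len(pop) == 30
assert all(-100.0 <= v <= 100.0 for x in pop for v in x)
```

The chaotic trajectory spreads initial individuals over the search interval more evenly than clustering around a single seed, which is the motivation for using it in the initialization stage.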
It can be seen from Table 2 that, in the process of improvement, the indicators of 8 functions show no obvious changes. Among them, every index of the eight functions F1, F5, F6, F7, F9, F11, F12, and F13 already reaches the optimal value 0 in the original SSA, and the values obtained by the improved algorithms keep this optimal state. As the improvement points accumulate, the optimization effect on the five functions F2, F3, F4, F10, and F14 becomes more significant. Except for F14, where ISSA-III achieves the best optimization, the best optimization values of the other four functions are all obtained by IHSSA, in some cases improving by many orders of magnitude. In F8, although the average and standard deviation show no improvement, the optimal value improves by 7 orders of magnitude. Overall, the IHSSA that combines all the innovation points proposed in this paper has the best effect. Each innovation point plays a role in its step of the algorithm; in particular, combining IICMIC with population initialization brings obvious benefits.
4.3. Population Diversity Analysis
Population diversity is one of the important performance indexes for measuring the pros and cons of an algorithm and can reflect, to a certain extent, whether the algorithm falls into a local optimum. In this paper, the population distribution map in the early stage of the iteration (the 10th iteration) is selected as a reference. The unimodal function F1 and the multimodal function F8 from the table above are selected as the research objects to compare IHSSA with the original SSA, as shown in Figure 4: Figures 4(a) and 4(b) represent the individual distributions of SSA and IHSSA on F1, respectively, and Figures 4(c) and 4(d) represent the individual distributions of SSA and IHSSA on F8, respectively. The theoretical optimal value of F1 is 0, and the theoretical optimal value for F8 is 420.

As can be seen from Figure 4, in the early stage of the iteration, the distribution of SSA in Figure 4(a) is nearly linear, while the IHSSA population in Figure 4(b) is more widely distributed. Compared with the poor aggregation of SSA in Figure 4(c), the distribution shown in Figure 4(d) is closer to the theoretical optimal value and covers a wider range. It can be seen that the improved IHSSA increases the diversity of the population to a certain extent and reduces invalid searches by individuals.
4.4. Comparison with Other Optimization Algorithms
The 14 standard test functions introduced in the previous section are utilized to test the performance of the improved IHSSA. Nine intelligent optimization algorithms, including particle swarm optimization (PSO), the whale optimization algorithm (WOA), the grey wolf optimization algorithm (GWO), the teaching-learning-based optimization algorithm (TLBO), the Sparrow Search Algorithm (SSA), the Chaos Sparrow Search Optimization Algorithm (CSSA) proposed by Lv et al. [2], the LSSA improved by Zhu DL [33], the GSSA improved by Chen G, and the YSSA proposed by Yan et al. [34], are chosen for comparison. In order to ensure the objectivity of the experiment and the fairness of the comparison, the population size and maximum iteration number of each algorithm are set to 100 and 500, respectively. The other parameter settings of the compared algorithms are shown in Table 3. Considering the influence of parameter values on experimental results, the values in the table are taken from the settings recommended when each algorithm was first proposed, since those settings were validated by extensive experiments. In order to avoid contingency in the algorithm results, each test function is run 30 times independently, and the average value, standard deviation, and optimal value are calculated, respectively. Meanwhile, the average running time of each algorithm on each function is recorded as a reference for performance. The experimental data are shown in Table 4.
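The 30-run bookkeeping behind Table 4 can be sketched as follows; `random_search` is a hypothetical stand-in optimizer used only to demonstrate the statistics, not IHSSA itself.

```python
import math
import random
import statistics

def run_statistics(optimizer, runs=30):
    # Repeat the optimizer with different seeds and report the mean,
    # standard deviation, and best of the final fitness values, as in Table 4.
    finals = [optimizer(seed) for seed in range(runs)]
    return statistics.mean(finals), statistics.stdev(finals), min(finals)

def random_search(seed, evals=200, dim=5, lb=-100.0, ub=100.0):
    # Stand-in "optimizer": pure random search on the sphere function,
    # used only to show the bookkeeping.
    rnd = random.Random(seed)
    best = math.inf
    for _ in range(evals):
        x = [rnd.uniform(lb, ub) for _ in range(dim)]
        best = min(best, sum(v * v for v in x))
    return best

mean, std, best = run_statistics(random_search)
assert best <= mean
assert std >= 0.0
```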
It can be seen from Table 4 that, compared with the other SSA variants, IHSSA achieves the same results on 7 functions, and on the six functions F1, F5, F7, F9, F11, and F13 the optimal solution 0 is found. There are obvious improvements on the remaining 7 functions, and the average optimization values on F2, F3, and F4 are improved by multiple orders of magnitude. Compared with the WOA, GWO, and TLBO algorithms, the optimal solution 0 is found on F5, F9, and F7, respectively, and the results on the other functions are better. Compared with the PSO algorithm, the five functions F4, F11, F12, F13, and F14 show significant improvement, which is particularly prominent on F4. In addition, compared with the basic SSA, the optimal values found on three functions are improved significantly. Compared with the other two improved SSA algorithms, the results are better on the three functions F4, F10, and F14.
On F8, apart from GSSA, the performance of the SSA-based algorithms is not as good as WOA, especially in the average value. Overall, the IHSSA proposed in this paper has the best performance on the 14 functions, while PSO has the worst. Figure 5 shows the convergence curves of the algorithms on 10 functions. It can be seen that, on the five functions F1, F2, F3, F6, and F12, IHSSA has the fastest convergence speed and higher convergence accuracy. On F4, F10, and F14, although IHSSA has the same convergence speed as the other SSA variants, it clearly obtains a better solution. On F8, WOA shows a clear advantage, and GSSA shows better optimization ability than the other SSA variants. IHSSA performs better than SSA and LSSA in convergence accuracy on F8, but compared with GSSA and CSSA, its accuracy is still far from the theoretical optimal value. In terms of running time, the variants of SSA consume more time than the original SSA. However, among the variants, LSSA and IHSSA have relatively short running times, and IHSSA shows the highest efficiency in the optimization process of 7 functions.

In general, IHSSA has the fastest convergence speed and better convergence accuracy; that is, the quality of the algorithm’s optimal solution is better.
4.5. Wilcoxon Rank Sum Test
Derrac et al. proposed that, for the performance evaluation of improved intelligent optimization algorithms, comparison based only on the average, standard deviation, and optimal values is not convincing enough; the quality of the statistical test results is also a necessary condition for proving whether the algorithm is significantly improved. In order to judge whether the results of the improved IHSSA differ significantly from those of the other algorithms, the Wilcoxon rank sum test was performed at a significance level of 5% [23]. The test principle is briefly described as follows: when p < 0.05, there is considered to be a significant difference between the two algorithms; when p >= 0.05, the performance of the two algorithms is equivalent and the difference is not obvious. Where the two algorithms obtain identical results and no p value can be computed, the entry is expressed as N/A. Table 5 shows the p values calculated in the Wilcoxon rank sum test between IHSSA and the other algorithms on the 14 selected benchmark functions. The results show that p < 0.05 accounts for the majority of cases. The IHSSA has a great improvement over the SSA algorithm, and its superiority is statistically significant, which proves that the improved algorithm has higher convergence accuracy.
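With 30 runs per algorithm, the rank sum test is commonly computed with the normal approximation; the following self-contained sketch illustrates the p value calculation behind Table 5 (in practice a library implementation such as MATLAB's ranksum or SciPy's ranksums would be used).

```python
import math

def rank_sum_p(a, b):
    # Two-sided Wilcoxon rank-sum test via the normal approximation
    # (adequate for samples of 30 runs); ties receive average ranks.
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    w = sum(ranks[i] for i in range(len(a)))   # rank sum of sample a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value

# Clearly separated samples should give p < 0.05 ...
assert rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]) < 0.05
# ... while identical samples should not.
assert rank_sum_p([1, 2, 3], [1, 2, 3]) >= 0.05
```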
4.6. Time Complexity Analysis
Time complexity is one of the important indicators for judging the performance of an algorithm and estimating its running cost. Whether the improved IHSSA increases the time complexity is analyzed from both the macro- and microperspectives. On the one hand, from the macroperspective, suppose the maximum number of iterations of the algorithm is M, the dimension is D, and the population size is P; then, according to the time complexity calculation of intelligent optimization algorithms, the time complexity of SSA is O(M × P × D). For the improved IHSSA, although the amount of computation per cycle has increased, the structure of the algorithm has not changed, so the time complexity of IHSSA is still O(M × P × D). Obviously, the two are of the same order, and the time complexity has not increased in the macroscopic view. On the other hand, from the microperspective, the time complexity of IHSSA increases to a certain extent. Assuming that the proportions of discoverers and joiners are A and B, respectively, the lens-based reverse learning and the generalized opposition-based learning add O(M × A × P × D) and O(M × B × P × D), respectively, and the crisscross strategy update in the alert phase adds O(M × P × D). The IICMIC initialization phase does not increase the time complexity. In summary, from the microscopic point of view, the time complexity of the improved algorithm increases by O((1 + A + B) × M × P × D), but none of these additions changes the order of magnitude. The total time complexity is still O(M × P × D).
From the above, regardless of the macroscopic or microscopic point of view, the order of the time complexity does not change, which supports the feasibility of the algorithm improvement.
5. Application in Constrained Engineering Optimization Problem
5.1. I-Shaped Beam
The design optimization problem of the I-beam is one of the classic engineering optimization problems. The goal is to minimize the vertical deflection by optimizing the flange width b, the section height h, and the two thicknesses t_w and t_f. The objective function and constraint conditions of this optimization problem are as follows:
Minimize:
f(b, h, t_w, t_f) = 5000 / [ t_w(h − 2t_f)³/12 + b·t_f³/6 + 2b·t_f((h − t_f)/2)² ]
Subject to:
g = 2b·t_f + t_w(h − 2t_f) ≤ 300 (cross-sectional area limit, cm²)
Variable range:
10 ≤ h ≤ 80, 10 ≤ b ≤ 50, 0.9 ≤ t_w ≤ 5, 0.9 ≤ t_f ≤ 5
5.2. Three-Bar Truss Design Problem
The design problem of the three-bar truss is another classic problem in engineering case studies. In order to minimize the weight subject to stress, deflection, and buckling constraints, the optimal cross-sectional areas A1 and A2 of the bars must be determined (by symmetry, A3 = A1). The specific mathematical formulation is as follows:
Minimize:
f(A1, A2) = (2√2·A1 + A2) × l, where l = 100 cm
Subject to:
g1 = ((√2·A1 + A2)/(√2·A1² + 2A1A2))·P − σ ≤ 0
g2 = (A2/(√2·A1² + 2A1A2))·P − σ ≤ 0
g3 = (1/(A1 + √2·A2))·P − σ ≤ 0
where P = 2 kN/cm² and σ = 2 kN/cm²
Variable range:
0 ≤ A1 ≤ 1, 0 ≤ A2 ≤ 1
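The standard three-bar truss formulation from the literature can be checked numerically as follows; both the formulas and the near-optimal design point below are as commonly reported for this benchmark, not taken from the paper's tables.

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # bar length (cm), load and stress limit (kN/cm^2)

def truss_weight(a1, a2):
    # Weight of the symmetric three-bar truss (A3 = A1 by symmetry).
    return (2.0 * math.sqrt(2.0) * a1 + a2) * L

def truss_feasible(a1, a2):
    # Stress constraints g1-g3 of the standard formulation, each g_i <= 0,
    # plus the variable bounds 0 <= A1, A2 <= 1.
    s2 = math.sqrt(2.0)
    denom = s2 * a1 * a1 + 2.0 * a1 * a2
    g1 = (s2 * a1 + a2) / denom * P - SIGMA
    g2 = a2 / denom * P - SIGMA
    g3 = 1.0 / (a1 + s2 * a2) * P - SIGMA
    bounds = 0.0 <= a1 <= 1.0 and 0.0 <= a2 <= 1.0
    return g1 <= 0.0 and g2 <= 0.0 and g3 <= 0.0 and bounds

# A feasible near-optimal design commonly reported in the literature is
# around A1 = 0.79, A2 = 0.41, with a weight of about 264.4.
assert truss_feasible(0.79, 0.41)
assert abs(truss_weight(0.79, 0.41) - 264.4) < 0.5
```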
5.3. Cantilever Beam
This application is a structural engineering design problem. The cantilever arm consists of five hollow blocks, and the purpose of the design is to increase the rigidity. Increasing the cross-sectional height of a block improves rigidity; however, if the section height increases, the section width must be reduced in order to reduce the mass or keep it unchanged. Therefore, the cross-sectional sizes (height or width) are the design parameters of this experiment. The model of this case is as follows:
Minimize:
f(x) = 0.0624(x1 + x2 + x3 + x4 + x5)
Subject to:
g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0
Variable range:
0.01 ≤ xi ≤ 100, i = 1, …, 5
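A commonly used formulation of the cantilever beam benchmark can be evaluated as follows; both the formulas and the sample design point are from the general literature on this benchmark, not from the paper's tables.

```python
def cantilever_weight(x):
    # Weight of the stepped cantilever in the standard formulation.
    return 0.0624 * sum(x)

def cantilever_feasible(x):
    # Single deflection constraint g(x) <= 0 plus the variable bounds.
    x1, x2, x3, x4, x5 = x
    g = (61.0 / x1**3 + 37.0 / x2**3 + 19.0 / x3**3
         + 7.0 / x4**3 + 1.0 / x5**3 - 1.0)
    return g <= 0.0 and all(0.01 <= v <= 100.0 for v in x)

# A feasible near-optimal design reported in the literature is roughly
# x = (6.02, 5.31, 4.50, 3.51, 2.16), with a weight of about 1.34.
x = (6.02, 5.31, 4.50, 3.51, 2.16)
assert cantilever_feasible(x)
assert abs(cantilever_weight(x) - 1.342) < 0.01
```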
Three classic constrained engineering optimization problems, namely, the I-beam optimization problem, the three-bar truss design problem, and the cantilever beam problem, are representative for verifying the feasibility of the algorithm. The parameters and constraints of the three engineering problems are integrated in Tables 6–8, respectively. In decades of research [14, 35–41], generations of researchers have designed many kinds of optimizers to solve these three nonlinear problems. The statistical results of these optimization methods (including the IHSSA proposed in this paper) are shown in Tables 6–8, respectively, and the optimal solutions obtained are denoted as f(X). It can be seen from Tables 7 and 8 that the IHSSA algorithm can be applied to engineering optimization problems and has better performance than the original SSA algorithm. Compared with the other optimizers shown in [27], the overall result is also slightly superior.
6. Conclusion
Based on the basic sparrow search algorithm, this paper proposes an improved sparrow search algorithm (IHSSA) that integrates infinitely folded iterative chaotic mapping and a hybrid reverse learning strategy so as to address the shortcomings of SSA. Firstly, an improved infinitely folded iterative chaotic map (IICMIC) is introduced in the population initialization stage to increase the search range of the population. Then, in order to update the positions of the global optimal value and the current worst value, a hybrid reverse learning strategy is applied after the discoverer update and the joiner update, respectively. The introduction of the hybrid reverse learning strategy increases the quality of the solutions and helps avoid falling into a local optimum. Moreover, combining the crisscross strategy into the scout stage contributes to balancing the algorithm's exploration and exploitation capabilities. In general, IHSSA achieves better optimization accuracy, stronger exploitation ability, and an enhanced global search ability.
Overall, the comparison of the solutions obtained on the 14 standard test functions also proves that the new algorithm is generally better than several well-known heuristic algorithms such as WOA, GWO, TLBO, and PSO, as well as the newly proposed SSA and its excellent variants. IHSSA has strong stability and robustness. In terms of running time, it takes the least time in the optimization process of seven functions, showing high computational efficiency. In addition, its high convergence accuracy is confirmed by the Wilcoxon rank sum test. It is also proven that the updates to the algorithm do not bring an order-of-magnitude increase in time complexity, which indicates that the added cost is acceptable. Moreover, the application of the improved algorithm to three constrained engineering optimization problems demonstrates its feasibility and effectiveness, with results better than other optimizers, which makes the research more meaningful. However, IHSSA research is still in its infancy.
In follow-up research, in order to obtain better accuracy and convergence speed, we will continue to improve the sparrow search algorithm and other swarm intelligence algorithms. In addition, we will apply the improved algorithm and its innovations to practical engineering optimization problems, so as to broaden the application field of the algorithm and further verify its feasibility and effectiveness.
Data Availability
Some of the data need to be kept confidential; if necessary, please contact the corresponding author.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was financially supported by the National Key Research and Development Program (Grant no. 2020YFB1713700).