Abstract

The original Chimp Optimization Algorithm suffers from slow convergence, a tendency to fall into local optima, and low accuracy in locating the global optimum. To alleviate these problems, a chaotic Chimp Optimization Algorithm based on adaptive tuning is proposed. First, sine chaotic mapping is used to initialize the chimpanzee population, enhancing the quality and diversity of the initial population. Then, the global search capability and local exploitation capability of the algorithm during iteration are enhanced by improving the convergence factor f and dynamically changing the number of chimpanzees in the leading echelon. Finally, 10 benchmark functions are used to test the optimization performance of the Improved Chimp Optimization Algorithm, and an engineering design optimization problem is introduced to compare the experimental results with those of other swarm intelligence optimization algorithms. The Improved Chimp Optimization Algorithm is shown to have better convergence and solution accuracy, improving the global optimization capability of the original Chimp Optimization Algorithm.

1. Introduction

As an emerging intelligent optimization technique, swarm intelligence optimization algorithms [1–3] have been widely used in artificial intelligence, communication networks, and industrial production since their introduction. The technique has attracted extraordinary interest from experts: swarm intelligence draws on the unique ways of life of natural communities of organisms, giving scholars access to an evolutionary computational technology that embodies the intelligence found in nature. Optimization, in turn, is a mathematically grounded approach to obtaining, from a large number of candidate solutions, the best parameters or optimal solution that satisfies a given objective under a set of constraints. Examples include the knapsack problem [4–6], the vehicle routing problem [7–9], the Steiner minimum tree problem [10–12], and a growing number of other optimization problems that require solutions. The study of swarm intelligence optimization algorithms has become a hot topic in the field of intelligence and has permeated many aspects of daily life.

The Chimp Optimization Algorithm [13, 14] is a new optimization algorithm proposed by Khishe et al. [15, 16] in 2020. It is characterized by high exploration capacity and high exploitation accuracy. However, it also has some disadvantages: it easily falls into local optima, converges slowly, and has insufficient convergence accuracy.

Specialist scholars have proposed different ways to alleviate the above problems. The literature [17] proposes using Sobol sequences to improve population diversity, which facilitates the early global exploration of the Chimp Optimization Algorithm. The literature [18] proposes using the simplex method and a group individual memory mechanism to balance positions between groups during chimpanzee position updating, thereby enhancing the algorithm's local exploration capability. The literature [19] proposes incorporating ideas from the golden sine algorithm into the chimpanzee position update to alleviate the algorithm's tendency to fall into local extrema. The literature [20] proposes applying a Cauchy mutation operator and an opposition-based learning strategy to the optimal solution in the early stages of the algorithm, so as to jump out of local optima while speeding up convergence.

Although the improvement strategies in the above literature have improved the convergence accuracy and speed of the algorithm to a certain extent, the algorithm still suffers from problems such as easily falling into local extrema and premature convergence. To further improve the solution accuracy and convergence efficiency of the Chimp Optimization Algorithm, this article builds on previous algorithmic improvements and proposes the Chimp Optimization Algorithm with hybrid improvement strategies (IChOA). The innovations can be summarized as follows:
(1) Initialization of chimpanzee populations using sine chaotic mapping. This strategy distributes individuals more evenly across the whole solution space, improves the diversity of the initial population, and helps individuals quickly discover the locations of high-quality solutions, thereby speeding up convergence and improving convergence accuracy.
(2) Introduction of a nonlinear convergence factor. The improved convergence factor decreases nonlinearly throughout the iteration process, which expands the chimpanzee search range in the early iterations to improve the global search capability, and improves the local search capability in the late iterations.
(3) Dynamic selection of the number of leading chimpanzees during individual position updates, which enhances the algorithm's exploitation and local exploration capabilities. This strategy avoids premature convergence and better balances global search with local exploitation.

This paper follows a logical sequence. Section 1 presents the background of recent research on intelligent algorithms and some contributions made by researchers in the field. Section 2 describes the original Chimp Optimization Algorithm. Section 3 describes the three improvements of this article in order, presents the proposed new algorithm IChOA, and gives its execution steps. Section 4 tests the new algorithm on 10 standard test functions; the results are tabulated and comparatively analyzed to verify the strengths and weaknesses of the algorithm, and its superiority is further supported by time complexity analysis and the Wilcoxon rank sum test. Section 5 applies IChOA to a classical constrained engineering optimization problem, and the data obtained further demonstrate the feasibility and effectiveness of the algorithm. Finally, the work of this article is briefly summarized and the next steps of the research are outlined.

2. Chimp Optimization Algorithm

The Chimp Optimization Algorithm is a new optimization algorithm obtained by observing the hunting behavior of chimpanzee communities. In a chimpanzee community, all individuals are divided into four types: attacker chimpanzees (predicting prey movements to compress the prey's living space), driver chimpanzees (following the prey), barrier chimpanzees (blocking the prey's escape routes), and chaser chimpanzees (overtaking the prey). A phenomenon of "social incentive" exists in the chimpanzee hunting process: chimpanzees hunt for meat in exchange for social benefits such as group support, mate selection, or improved standing. Attackers are rewarded with more meat during hunting, as it is widely believed that this role requires more intelligence to predict the actions of the prey; this important role is positively correlated with age, intelligence, and physical ability. In addition, this social incentive causes chimpanzees to behave chaotically in the final stage of the hunt, when all chimpanzees abandon their respective duties in an attempt to frantically obtain meat.

In general, the chimpanzee hunting process is divided into two main stages. One is to drive, intercept and chase prey, and the other is to attack prey.

2.1. Surround Prey

During hunting, the act of chimpanzees rounding up their prey is defined as:

d = |c · xprey(t) − m · xchimp(t)|  (1)
xchimp(t + 1) = xprey(t) − a · d  (2)

Equation (1) represents the distance between the chimpanzee and its prey, and Equation (2) is the chimpanzee's position update formula, where t is the current iteration number, xprey is the prey position vector, and xchimp is the chimpanzee position vector. The vectors a, m, and c are calculated by Equations (3), (4), and (5):

a = 2 · f · r1 − f  (3)
c = 2 · r2  (4)
m = Chaotic_value  (5)

where f is the convergence factor, which decreases linearly from 2.5 to 0 over the iterations; r1 and r2 are random vectors with components in the interval [0, 1]; c takes random values in the interval [0, 2]; and m is a chaotic vector calculated from various chaotic maps.
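As an illustrative sketch (not the authors' code), the encircling update of Equations (1)–(5) can be written as follows; a plain uniform random draw stands in for the chaotic vector m, whose exact chaotic map varies across ChOA implementations:

```python
import numpy as np

def chimp_step(x_chimp, x_prey, f, rng):
    """One encircling-prey update for a single chimp (Equations (1)-(5)).

    x_chimp, x_prey : position vectors (numpy arrays)
    f               : convergence factor, decreasing from 2.5 to 0
    rng             : numpy random generator
    """
    r1 = rng.random(x_chimp.shape)          # random components in [0, 1]
    r2 = rng.random(x_chimp.shape)
    a = 2.0 * f * r1 - f                    # Eq. (3): a fluctuates in [-f, f]
    c = 2.0 * r2                            # Eq. (4): random weight in [0, 2]
    m = rng.random(x_chimp.shape)           # Eq. (5): stand-in for a chaotic value
    d = np.abs(c * x_prey - m * x_chimp)    # Eq. (1): distance to prey
    return x_prey - a * d                   # Eq. (2): new chimp position
```

Note that as f approaches 0, the coefficient a vanishes and the chimp converges onto the prey position, which is the attack behavior described below.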

2.2. Attack Prey

To simulate the chimpanzee's approach to its prey, the value of f gradually decreases, so the fluctuation range of a also decreases. That is, as f drops linearly from 2.5 to 0 during the iterations, the corresponding value of a varies within the interval [−f, f]. When a takes values in [−1, 1], the chimpanzee's next position can be anywhere between its current position and the prey's position; thus, when |a| < 1, chimpanzees attack their prey.

2.3. Search for Prey

Chimpanzees search for prey based on the locations of xAttacker, xBarrier, xChaser, and xDriver. Chimpanzees disperse when searching for prey but gather together to hunt once prey is found. Modeling this divergence mathematically, chimpanzees can be driven away from the prey by random values with |a| > 1, which emphasizes exploration and allows the Chimp Optimization Algorithm to search globally for the optimal solution. When |a| > 1, the chimpanzee is separated from the prey (a local optimum) in the hope of finding better prey (the global optimum). The Chimp Optimization Algorithm also uses the parameter c to help discover new solutions. As shown in Equation (4), c takes random values in the interval [0, 2]; it represents the random weight of the chimpanzee position's influence on the prey, with |c| > 1 indicating a large influence and |c| < 1 a small one. This helps the algorithm explore effectively while avoiding local optima during the optimization process. In addition, unlike a, c does not decrease linearly: its randomness persists from the beginning of the iterations to the end, so the Chimp Optimization Algorithm maintains a global search in the decision space throughout. When the algorithm falls into a local optimum that is hard to escape, the randomness of c plays an important role in avoiding it, especially in the final iterations where the global optimum must be obtained.

3. Improved Chimp Optimization Algorithm

This section presents the improvement strategies for the original Chimp Optimization Algorithm, which cover three main areas.

3.1. Sine Chaotic Mapping Initializes Population

The original Chimp Optimization Algorithm initializes by randomly generating chimpanzee individuals in the search space. This stochastic approach does not ensure that individuals are evenly distributed across the initial search space, and some initialized individuals may even occupy overlapping positions, leading to insufficient coverage of the search space. Most importantly, the quality of the initial population affects the efficiency of the optimization algorithm's exploitation. For this reason, the sine chaotic mapping method [21] is used in this article to initialize the chimpanzee population.

The one-dimensional sine chaotic self-map is given in Equation (6):

xi+1 = (k/4) · sin(π · xi)  (6)

where xi is the value of the iterative sequence; i is a nonnegative integer; x0 ∈ (0, 1); and k is a system parameter taking values in the range (0, 4]. The value k = 4 is used in the experiments. The sine chaotic sequence is shown in Figure 1.
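A minimal sketch of this initialization, assuming the sine map in its commonly cited form x_{i+1} = (k/4)·sin(π·x_i), iterates the map and scales each value onto the search box; the starting value x0 = 0.7 and the helper names are illustrative choices, not from the paper:

```python
import math

def sine_map_sequence(x0, n, k=4.0):
    """Generate n values of the sine chaotic map (Equation (6)):
    x_{i+1} = (k / 4) * sin(pi * x_i), with x0 in (0, 1) and k in (0, 4]."""
    seq = []
    x = x0
    for _ in range(n):
        x = (k / 4.0) * math.sin(math.pi * x)
        seq.append(x)
    return seq

def init_population(pop_size, dim, lb, ub, x0=0.7):
    """Map a chaotic sequence of length pop_size*dim onto the box [lb, ub]."""
    seq = sine_map_sequence(x0, pop_size * dim)
    return [[lb + seq[i * dim + j] * (ub - lb) for j in range(dim)]
            for i in range(pop_size)]
```

Because consecutive map values are ergodic over (0, 1), the scaled positions spread across the search box rather than clustering, which is the diversity benefit discussed below.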

As shown above, the sine chaotic mapping initialization allows a more uniform distribution of the initialized chimpanzee individuals in the search space, with ergodicity and nonrepeatability. This ensures that the initial space is adequately searched, enhancing the diversity and quality of the initial population and avoiding the problem of the algorithm falling into a local optimum during exploration, thus improving the efficiency of the algorithm and compensating for the shortcomings of the original Chimp Optimization Algorithm.

3.2. Improved Convergence Factor

Measuring how good an algorithm is requires synergy between its global exploration and local exploitation capabilities. The value of the coefficient vector a fluctuates in the range [−f, f], and its magnitude determines which of two states the algorithm is in during an iteration. When |a| > 1, the algorithm increases its exploration of the search space by requiring the individuals of the population to hunt separately; when |a| < 1, the individuals attack the prey quickly, enhancing the local exploitation ability to find the optimal solution. The magnitude of a is determined by the convergence factor f, whose value in the original Chimp Optimization Algorithm decreases linearly. Although this satisfies the algorithm's requirements on the parameter a, it is only suitable for simple function optimization. When applied to problems with multiple local optima, it leads to less efficient convergence and may even cause the algorithm to fall into a local optimum due to insufficient exploration of the search space.

To address the above problems, an improved nonlinear convergence factor f′ [22–24] is proposed, whose mathematical expression is given in Equation (7), where f0 denotes the initial (maximum) value of the nonlinear convergence factor f′, I denotes the current iteration number, and Max_iter denotes the maximum number of iterations. Figure 2 compares the decay of the nonlinear convergence factor with that of the original f.

The dotted line in the diagram above shows the proposed nonlinear convergence trend. At the beginning of the iterations, the value of f′ decreases slowly and the hunting individuals expand their search range, ensuring the search accuracy of the Chimp Optimization Algorithm. In the middle of the iterations, f′ decreases rapidly and the hunting individuals attack the prey quickly, locking in promising regions. Late in the iterations, the rate of decline of f′ slows again, chimpanzee individuals are forced to separate from each other, and the algorithm jumps out of local optima, improving its optimization accuracy. This search process allows better synergy between the global search capabilities and the local exploitation capabilities of the Chimp Optimization Algorithm.
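One nonlinear schedule that reproduces the shape described above (slow decay early and late, fast decay in the middle) is a cosine ramp. This is an illustrative stand-in chosen to match Figure 2's description; the paper's exact Equation (7) may use a different expression:

```python
import math

def nonlinear_f(i, max_iter, f0=2.5):
    """Cosine-shaped nonlinear convergence factor: f0 at iteration 0,
    0 at the final iteration, with the steepest decline mid-run.
    An assumed form, not necessarily the paper's Equation (7)."""
    return f0 * (1.0 + math.cos(math.pi * i / max_iter)) / 2.0
```

The derivative of this curve is zero at both endpoints and largest at the midpoint, which is exactly the slow-fast-slow decay the text attributes to f′.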

3.3. Improved Chimpanzee Advance Echelon

In the original Chimp Optimization Algorithm, the four retained optimal solutions are assigned in turn to the leading echelon involved in the hunt: xAttacker represents the best of the four retained solutions, xBarrier the second best, xChaser the third best, and xDriver the fourth best. The other individual chimpanzees constantly move toward the positions of these four chimpanzees. Equations (8)–(10) give the mechanism for updating individual chimpanzee positions in the Chimp Optimization Algorithm; this article improves this part of the algorithm by dynamically changing the number of chimpanzees in the leading echelon.

The magnitude of the coefficient vector a is used as the criterion. When |a| > 0.5, the maximum number of pioneer chimpanzees is retained and Equations (8)–(10) are used to update the individual positions; this ensures that the algorithm has enough optimal pioneers to guide individuals in exploring the search space during iterations, while avoiding local optima. When |a| < 0.5, the number of optimal pioneers is reduced and Equation (11) is used to update the individual positions; this enhances the algorithm's efficiency in finding the optimum at each iteration and enables it to locate local or global optimal solutions quickly.
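The switch between the full and reduced echelons can be sketched as follows. Since Equation (11) is not reproduced above, the reduced update here averages only the two best pioneers; treat that choice, and the averaging of leader-guided candidates, as assumptions for illustration:

```python
import numpy as np

def follow_leaders(candidates, a):
    """Combine the leader-guided candidate positions for one chimp.

    candidates : list of 4 position vectors proposed by following the
                 attacker, barrier, chaser and driver (best first)
    a          : scalar drawn from the coefficient vector a

    |a| > 0.5 keeps all four pioneers (the full-echelon update of
    Equations (8)-(10)); |a| < 0.5 follows only the two best, a
    plausible sketch of the reduced-echelon rule of Equation (11).
    """
    k = 4 if abs(a) > 0.5 else 2
    return np.mean(candidates[:k], axis=0)
```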

3.4. Steps in the Execution of the Improved Chimp Optimization Algorithm

To summarize the above improvements, the execution steps of the Improved Chimp Optimization Algorithm are as follows.
Step 1: Set the parameters of the algorithm, such as the initial number of chimpanzees N, the dimension of the search space dim, and the boundary range, then initialize the chimpanzee population using the sine chaotic mapping.
Step 2: Calculate the fitness values of the individual chimpanzees, select the four best individual positions, rank them from smallest to largest, and assign them to attacker, barrier, chaser, and driver in that order.
Step 3: Begin the iteration by updating the parameters c and m to calculate the distance between the chimpanzee and the prey, then use the vector a and the nonlinear convergence factor f′ to update the positions of the other individuals.
Step 4: For the pioneer hunting echelon, use Equations (8)–(10) to update chimpanzee positions when |a| > 0.5, and Equation (11) when |a| < 0.5.
Step 5: Update xAttacker, xBarrier, xChaser, and xDriver.
Step 6: Determine whether the algorithm has reached the maximum number of iterations. If not, return to Step 2; otherwise, end the algorithm and output the optimal chimpanzee position.
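Steps 1–6 can be assembled into a compact sketch of the whole IChOA loop. The sine-map initialization, the cosine-shaped convergence factor, the uniform stand-in for the chaotic vector m, and the two-leader reduced echelon are all illustrative assumptions standing in for Equations (6), (7), (5), and (11) respectively:

```python
import numpy as np

def ichoa(obj, dim, lb, ub, pop_size=50, max_iter=500, seed=0):
    """Minimal sketch of the IChOA loop described in Steps 1-6."""
    rng = np.random.default_rng(seed)

    # Step 1: sine chaotic initialization mapped onto [lb, ub] (k = 4)
    x = 0.7
    chaos = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = np.sin(np.pi * x)
            chaos[i, j] = x
    pop = lb + chaos * (ub - lb)

    best_x, best_f = None, np.inf
    for it in range(max_iter):
        # Step 2: rank the population and keep the four best pioneers
        fitness = np.array([obj(p) for p in pop])
        order = np.argsort(fitness)
        leaders = pop[order[:4]].copy()    # attacker, barrier, chaser, driver
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], pop[order[0]].copy()

        # Step 3: nonlinear convergence factor (assumed cosine shape)
        f = 2.5 * (1.0 + np.cos(np.pi * it / max_iter)) / 2.0

        # Steps 4-5: leader-guided position updates with dynamic echelon size
        for i in range(pop_size):
            a_scale = 2.0 * f * rng.random() - f
            k = 4 if abs(a_scale) > 0.5 else 2
            cand = []
            for leader in leaders[:k]:
                a = 2.0 * f * rng.random(dim) - f
                c = 2.0 * rng.random(dim)
                m = rng.random(dim)              # stand-in chaotic vector
                d = np.abs(c * leader - m * pop[i])
                cand.append(leader - a * d)
            pop[i] = np.clip(np.mean(cand, axis=0), lb, ub)
    # Step 6: iteration budget exhausted; return the best position found
    return best_x, best_f
```

Running this sketch on a simple sphere function shows the expected behavior: early iterations scatter the population, and as f′ shrinks the individuals collapse onto the pioneer positions.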

4. Simulation Experiment

4.1. Experimental Environment and Parameter Settings

The algorithms were programmed in MATLAB R2014a on a laptop running Windows 10 with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (up to 1.80 GHz).

To verify the feasibility of the IChOA algorithm, its performance was tested against the particle swarm optimization algorithm (PSO) [25], the particle swarm optimization algorithm with adaptive inertial weight (AIWPSO) [25], the improved particle swarm optimization algorithm (IPSO) [26], and the Chimp Optimization Algorithm. Ten benchmark functions [27] with different characteristics were used for the simulation experiments: F1–F7 are single-peak test functions and F8–F10 are multipeak test functions. The specific characteristics of these 10 test functions are shown in Table 1. To ensure fairness, all algorithms used a population size of N = 50, a maximum number of iterations Max_iter = 500, and a test function dimensionality of dim = 30.

4.2. Test Results

To clearly compare the five optimization algorithms, each algorithm is run 30 times independently on each of the 10 benchmark functions, and the mean, standard deviation, and minimum of the 30 test results are recorded.
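The per-function statistics reported in the tables can be computed with a small helper like the following (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def summarize_runs(optimizer, n_runs=30):
    """Run an optimizer n_runs times independently and report the statistics
    used in the results tables: mean, standard deviation and minimum of the
    best fitness found in each run. `optimizer(seed)` returns one best fitness."""
    results = np.array([optimizer(seed) for seed in range(n_runs)])
    return results.mean(), results.std(), results.min()
```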

Table 2 shows the parameter settings for each optimization algorithm. The test results are shown in Table 3 (where F represents the benchmark functions and A represents the optimization algorithms).

4.2.1. Analysis of Single-Peak Type Benchmark Function Test Results

The results for the F1 and F5 test functions show that the PSO algorithm performs best in terms of both minimum and average values, while the Improved Chimp Optimization Algorithm ranks last with poorer results. For the F2 and F6 functions, the AIWPSO algorithm performs best; however, judging from the average of the F2 results, the Improved Chimp Optimization Algorithm outperforms the last three optimization algorithms in convergence accuracy, ranking second. The results for the F3 and F7 functions show that, in general, the Improved Chimp Optimization Algorithm outperforms the other algorithms in both minimum and average values, with good optimization ability and convergence accuracy. The F4 results show that both Chimp Optimization Algorithms are stronger than the particle swarm algorithms; moreover, the Improved Chimp Optimization Algorithm attains the smallest optimal value, outperforming the other algorithms in optimization capability.

4.2.2. Analysis of MultiPeak Type Benchmark Function Test Results

The results from the F8–F10 test functions show that, overall, the optimal and average values of the Improved Chimp Optimization Algorithm are excellent, with good optimization ability and convergence accuracy. In particular, the Improved Chimp Optimization Algorithm outperformed the other four optimization algorithms on the F8 and F9 test functions. As the test functions change from single-peak to multipeak, many local optima emerge, and other algorithms tend to fall into these local traps; thus, for multipeak function optimization problems, the Improved Chimp Optimization Algorithm has a great advantage.

4.3. Convergence Curve of the Improved Chimp Optimization Algorithm

The convergence curve of an algorithm is a way to visualize its convergence trend. To clearly compare the optimization accuracy and convergence rate of the Improved Chimp Optimization Algorithm against the original Chimp Optimization Algorithm, convergence graphs for six test functions were selected for this experiment, with the x-axis showing the number of iterations and the y-axis the fitness value. The specific curves are shown in Figures 3–8.

As shown by the six convergence curves above, for the six different test functions the Improved Chimp Optimization Algorithm converges fastest: its convergence curves all lie below those of the Chimp Optimization Algorithm, and it obtains the smallest fitness values.

Throughout the iterations, the convergence curve of the Chimp Optimization Algorithm stalls to varying degrees at the beginning, while the convergence curve of the Improved Chimp Optimization Algorithm alleviates this problem. In the middle of the iterations, the Improved Chimp Optimization Algorithm converges significantly faster than the original algorithm. This behavior is precisely due to the improved strategies: initializing the population with sine chaotic mapping, improving the nonlinear convergence factor, and dynamically changing the number of chimpanzees in the leading echelon. In the late iterations, the fitness values of the Improved Chimp Optimization Algorithm were all smaller than those of the Chimp Optimization Algorithm, showing that the improved algorithm has enhanced search capability and convergence accuracy.

4.4. Wilcoxon Rank Sum Test

The Wilcoxon rank sum test is a nonparametric statistical test that can detect more complex differences in data distributions. Conventional analysis considers only the mean and standard deviation of the current data and does not compare results across multiple runs of the algorithm, so such a comparison alone is not rigorous. To fully reflect the performance of IChOA, this article adopts this statistical method to analyze the performance difference between IChOA and the other algorithms for each simulation result. The results of IChOA on the 10 test functions are compared with those of the remaining four algorithms using the Wilcoxon rank sum test at the 5% significance level: when p < 0.05, the rejection of the null hypothesis can be regarded as strongly validated. "NaN" indicates that there are no data to compare for that algorithm. The symbols +, =, and − indicate that the optimization performance of IChOA is better than, equal to, or worse than that of the compared algorithm, respectively. The results of the Wilcoxon rank sum test are shown in Table 4.
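The per-function comparison can be reproduced with SciPy's rank sum test; the +/=/− verdict logic below (comparing means when the difference is significant) is an assumed reading of how the table symbols are assigned:

```python
from scipy.stats import ranksums

def compare_algorithms(results_a, results_b, alpha=0.05):
    """Wilcoxon rank sum test on two sets of independent run results
    (e.g. 30 best-fitness values per algorithm). Returns the p-value and
    a '+'/'='/'-' verdict: '+' if results_a is significantly better
    (lower fitness), '-' if significantly worse, '=' otherwise."""
    stat, p = ranksums(results_a, results_b)
    if p >= alpha:
        return p, "="
    mean_a = sum(results_a) / len(results_a)
    mean_b = sum(results_b) / len(results_b)
    return p, "+" if mean_a < mean_b else "-"
```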

As shown by the results in Table 4, the p-values of the Wilcoxon rank sum tests for IChOA are below 5% in almost all cases, indicating that, statistically, IChOA's performance advantage on the basic functions is clear, which further reflects the robustness of IChOA.

4.5. Time Complexity Analysis

Time complexity is a key index of an algorithm's operating efficiency. In the ChOA algorithm, assume the population size is N, the dimension of the search space is n, the parameter initialization time is t1, and the time to generate one random number is t2. The time complexity of the ChOA population initialization phase is then

T1 = t1 + N · n · t2 = O(N · n).

In the iterative phase of the algorithm, assume the time to compute an individual's fitness value is f(n); the time to compare the individual fitness values and select the four optimal individual positions is t3; the time to update the convergence factor is t4; and the time for the other chimpanzees to follow the updated positions of the four optimal individuals is t5. The time complexity of this phase is then

T2 = Max_iter · (N · f(n) + t3 + t4 + N · n · t5) = O(Max_iter · N · (f(n) + n)).

So the total time complexity of the basic ChOA algorithm is

T_ChOA = T1 + T2 = O(Max_iter · N · (f(n) + n)).

In the IChOA algorithm, let the population parameter initialization time be the same as in the ChOA algorithm, and let the time for one step of the one-dimensional sine chaotic mapping be t6. The time complexity of the IChOA population initialization phase is then

T1′ = t1 + N · n · t6 = O(N · n).

In the iterative phase of the algorithm, assume the time to compute an individual's fitness value is f(n), the time to update the improved nonlinear convergence factor is t7, and the time to execute the strategy of dynamically changing the number of chimpanzees in the leading echelon is t8. The time complexity of this phase is then

T2′ = Max_iter · (N · f(n) + t7 + N · n · t8) = O(Max_iter · N · (f(n) + n)).

The total time complexity of the improved IChOA algorithm is therefore

T_IChOA = T1′ + T2′ = O(Max_iter · N · (f(n) + n)).

In summary, the time complexity of the IChOA algorithm is of the same order of magnitude as the standard ChOA algorithm and does not increase the time complexity of the algorithm.

5. Engineering Application

In a bid to verify the feasibility and effectiveness of Improved Chimp Optimization Algorithm in practical engineering design applications, the pressure vessel design optimization problem was selected for this experiment.

The objective of the pressure vessel design problem is to minimize the cost of making the pressure vessel, which covers material, forming, welding, and so forth. A schematic diagram of the pressure vessel is shown in Figure 9. Both ends of the pressure vessel are sealed by caps, with a hemispherical cap at the head end.

L is the length of the section of the cylindrical part without considering the head, R is the radius of the inner wall of the cylinder, Ts and Th indicate the wall thickness of the cylinder and the wall thickness of the head respectively. L, R, Ts, and Th are the four optimization variables of the pressure vessel design problem.

With the design vector written as x = (x1, x2, x3, x4) = (Ts, Th, R, L), the objective function of the pressure vessel design problem is expressed as follows:

min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3.

The objective function is subject to the following constraints:

g1(x) = −x1 + 0.0193 x3 ≤ 0,
g2(x) = −x2 + 0.00954 x3 ≤ 0,
g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1,296,000 ≤ 0,
g4(x) = x4 − 240 ≤ 0,

where 0 ≤ x1 ≤ 100, 0 ≤ x2 ≤ 100, 10 ≤ x3 ≤ 100, and 10 ≤ x4 ≤ 100.
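As a sketch, the cost and constraint evaluations can be coded as follows, using the formulation of this problem most widely reported in the engineering-design literature; treat the coefficients as assumptions if the paper's exact equations differ:

```python
import math

def pressure_vessel_cost(x):
    """Pressure-vessel cost, x = (Ts, Th, R, L), commonly used formulation."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def pressure_vessel_constraints(x):
    """Constraint values g_i(x); a design is feasible when all g_i(x) <= 0."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                        # minimum shell thickness
        -Th + 0.00954 * R,                       # minimum head thickness
        -math.pi * R ** 2 * L
        - (4.0 / 3.0) * math.pi * R ** 3
        + 1_296_000,                             # minimum enclosed volume
        L - 240.0,                               # maximum shell length
    ]
```

Evaluating a design near the frequently quoted best-known solution, roughly (0.8125, 0.4375, 42.10, 176.64), gives a cost of about 6 × 10^3, which is the scale of the values reported in Table 5.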

In this article, the proposed IChOA is experimentally compared with ChOA, PSO, ALO (Ant Lion Optimization), FA (Firefly Algorithm), and MVO (Multi-Verse Optimization), where the data for the comparison algorithms are obtained from the literature [28].

As evidenced by Table 5, Improved Chimp Optimization Algorithm has better optimization results compared to the design results of the other five optimization algorithms for the engineering optimization problem of this pressure vessel design. This further demonstrates the feasibility and effectiveness of Improved Chimp Optimization Algorithm in practical engineering applications.

6. Conclusion

To address the problems of the traditional Chimp Optimization Algorithm, such as excessive randomness and insufficient coverage in the initial population, slow convergence, premature convergence, and falling into local optima, this article proposes a chaotic Chimp Optimization Algorithm based on adaptive tuning. Comparing the optimal solutions obtained on the 10 benchmark functions demonstrates that the Improved Chimp Optimization Algorithm has high solution accuracy. IChOA remedies the original algorithm's slow convergence and tendency toward premature convergence, improves the global and local search capabilities throughout the iterative process, and enhances local exploitation in the later stages. In addition, the Wilcoxon rank sum test confirms the algorithm's high convergence accuracy, and the improvements do not increase the time complexity by an order of magnitude, showing that the algorithm remains efficient to run. Finally, the pressure vessel design optimization problem provides further validation: the results show that the Improved Chimp Optimization Algorithm performs better in global search and local exploitation, proving the effectiveness and reliability of the improved strategies.

However, research on IChOA is still in its infancy, and the algorithm has shortcomings to be addressed in future work. First, IChOA cannot match the PSO algorithm in convergence rate. This flaw may stem from the chaotic vectors added to the algorithm when mimicking the chimpanzee group's foraging behavior: the convergence rate had to be sacrificed to ensure the search space was sufficiently explored. Second, the algorithm still tends to fall into local optima, which is related to the phenomenon of "social incentives" in chimpanzee populations. While this property improves the algorithm's accuracy during iteration, the chimpanzees' abandonment of their dedicated duties in order to compete for prey affects the algorithm's ability to converge precisely on the global optimum in the later stages.

The above summary and analysis are the results gained from this work. Based on an understanding of the principles of the Chimp Optimization Algorithm, the original algorithm has been optimized, improving its optimization efficiency. At the same time, the remaining problems of the algorithm have been analyzed, pointing out directions for subsequent improvement. Future work will therefore continue to optimize and improve the IChOA algorithm so that it can be applied to more complex application areas.

Abbreviations

ChOA:Chimp optimization algorithm
PSO:Particle swarm optimization
AIWPSO:Particle swarm optimization algorithm with adaptive inertial weight
IPSO:Improved particle swarm optimization
IChOA:Improved chimp optimization algorithm
ALO:Ant lion optimization
FA:Fire-fly optimization algorithm
MVO:Multi-verse optimization.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Shaanxi Provincial Key Laboratory for Intelligent Processing of Big Data in Energy (IPBED11 and IPBED1), partially funded by Yan’an University Doctoral Research Initiation Project (YDBK2018-39) and Yan’an University Graduate Education Innovation Program Project (YCX2021070, YCX2023032 and YCX2023033), also supported by the Emergency Research Project on Epidemic Prevention and Control (ydfk007, ydfk062, ydfk060, and ydfk064), the Special Research Project on Epidemic Prevention and Control and Economic and Social Development (YCX2022075 and YCX2022079) of Yan’an University, and the “14th Five Year Plan Medium and Long Term Major Scientific Research Project” (2021ZCQ015) of Yan’an University.