Abstract

Differential evolution (DE) is a robust global optimization algorithm that has been applied to many real-world problems since it was proposed. However, the binomial crossover does not allow a sufficiently thorough search of the local space, so DE's local search performance is relatively poor. When DE is applied to complex optimization problems in particular, this inefficiency in local search severely limits its overall performance. To overcome this disadvantage, this paper introduces a new local search scheme based on the Hadamard matrix (HLS). HLS improves the probability of finding the optimal solution by producing multiple offspring in the local space built by the target individual and its descendant. HLS has been implemented in four classical DE algorithms and in jDE, a variant of DE. The experiments are carried out on a set of widely used benchmark functions. On the 20 benchmark problems, the four DE schemes with HLS outperform the corresponding DE schemes on 80%, 75%, 65%, and 65% of the problems, respectively. Likewise, jDE with HLS performs better than jDE on 50% of the test problems. The experimental results and statistical analysis reveal that HLS effectively improves the overall performance of DE and jDE.

1. Introduction

Differential evolution (DE), proposed by Storn and Price in 1995 and first demonstrated on the Chebyshev polynomial fitting problem [1], is a well-known numerical optimization algorithm. Due to its simple structure, small number of parameters, easy implementation, and outstanding optimization performance, DE has drawn great attention from researchers and engineers since it was proposed. Over the past two decades, DE has been successfully applied in a variety of fields, such as computer vision [2], dynamic economic dispatch [3], engineering design [4], project scheduling [5], artificial neural networks [6], and complex problems inherent to magnetorheological fluids of interest to the automotive industry, in the framework of extended irreversible thermodynamics [7, 8]. Unlike other population-based evolutionary algorithms, the mutation operator in DE exploits differential information between individuals in the current population, a mechanism that gives DE an obvious edge over other evolutionary algorithms. The binomial crossover, however, produces only one offspring in the space constructed by the target individual and its descendant. The trial individual is therefore just one of many potential solutions, and the others are ignored. Hence, DE's search of this subspace is insufficient, which clearly affects its overall performance.

To fill this gap, we introduce a new local search scheme based on the Hadamard matrix (HLS) to improve the overall performance of DE.

The remainder of this paper is organized as follows. Section 2 introduces the basic elements of the DE algorithm. Section 3 reviews the related work. Section 4 presents the details of the Hadamard local search. The experimental results are reported in Section 5, and Section 6 concludes the paper.

2. Background

2.1. Differential Evolution

The DE algorithm consists of the following four steps.

2.1.1. Initialization

Initialization is the first step of the DE algorithm. It randomly generates a population containing NP individuals in the D-dimensional search space. For the $i$th individual, the $j$th parameter is initialized by the following formula:

$$x_{i,j} = x_j^{\min} + \text{rand}(0,1) \cdot \left(x_j^{\max} - x_j^{\min}\right),$$

where $\text{rand}(0,1)$ is a uniformly distributed random number within the range $[0,1]$, and $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds of the $j$th dimension, $j = 1, 2, \ldots, D$.

2.1.2. Mutation Operator

Following initialization, the mutation operator is applied to each target individual $x_i$, thus generating a mutant vector $v_i$. In view of the important implications that the mutation operators have on DE's global search ability, many researchers have focused on improving them. Six efficient and widely used operators [9] are listed below:

(i) DE/rand/1: $v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3})$

(ii) DE/best/1: $v_i = x_{\text{best}} + F \cdot (x_{r_1} - x_{r_2})$

(iii) DE/rand/2: $v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}) + F \cdot (x_{r_4} - x_{r_5})$

(iv) DE/best/2: $v_i = x_{\text{best}} + F \cdot (x_{r_1} - x_{r_2}) + F \cdot (x_{r_3} - x_{r_4})$

(v) DE/rand-to-best/1: $v_i = x_{r_1} + F \cdot (x_{\text{best}} - x_{r_1}) + F \cdot (x_{r_2} - x_{r_3})$

(vi) DE/current-to-best/1: $v_i = x_i + F \cdot (x_{\text{best}} - x_i) + F \cdot (x_{r_1} - x_{r_2})$

Here $r_1, \ldots, r_5$ are mutually distinct random indices in $\{1, \ldots, NP\}$, all different from $i$; $x_{\text{best}}$ is the best individual in the current population; and $F$ is the scaling factor.

2.1.3. Crossover Operator

The crossover operator randomly combines the genes of the target individual and its mutant to produce a new offspring. The binomial crossover is the most commonly used method. It is expressed as follows:

$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } \text{rand}_j(0,1) \le CR \text{ or } j = j_{\text{rand}}, \\ x_{i,j}, & \text{otherwise}, \end{cases}$$

where $\text{rand}_j(0,1)$ is a uniform random number in $[0,1]$, $j_{\text{rand}}$ is a randomly chosen index guaranteeing that at least one gene is inherited from the mutant, and $CR$ is the crossover probability.

2.1.4. Selection Operator

The DE selection operator is a greedy strategy. Of the target individual and its offspring, the one with the better fitness value enters the next generation. The selection operator is shown in the following formula:

$$x_i^{G+1} = \begin{cases} u_i^G, & \text{if } f(u_i^G) \le f(x_i^G), \\ x_i^G, & \text{otherwise}. \end{cases}$$
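To make these four steps concrete, the following sketch implements one generation of classical DE (DE/rand/1 with binomial crossover and greedy selection) in Python with NumPy. It is an illustrative reading of the formulas above rather than the authors' code; the function name de_generation and its signature are our own.

import numpy as np

def de_generation(pop, fit, obj, F=0.9, CR=0.9, rng=np.random.default_rng()):
    """One generation of classical DE/rand/1/bin (illustrative sketch)."""
    NP, D = pop.shape
    for i in range(NP):
        # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i
        r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: take each gene from v with probability CR;
        # index j_rand guarantees at least one gene comes from the mutant
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection: keep the better of target and trial
        fu = obj(u)
        if fu <= fit[i]:
            pop[i], fit[i] = u, fu
    return pop, fit

The population itself can be initialized as pop = lo + rng.random((NP, D)) * (hi - lo), which matches the initialization formula in Section 2.1.1.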

3. Related Work

Although DE has been successfully applied in many fields, its performance still needs to be improved in many others. Researchers have therefore proposed several improved versions of DE. These works can be divided into the following four categories.

3.1. Improvement of the Mutation Operator

Ramadas et al. proposed the ReDE algorithm, which introduces a revised mutation strategy for DE [10]. Mohamed and Almazyad proposed the ANDE algorithm, which introduces a new triangular mutation for DE [11]. Gong and Cai proposed a ranking-based mutation strategy for DE [12], in which some of the parents are selected proportionally according to their rankings in the current population. Peng et al. proposed an improved differential evolution named RNDE, which uses a new mutation operator, DE/neighbor/1, to balance the exploration and exploitation abilities of DE [13].

3.2. The New Scheme of Self-Adapting Parameters

Brest et al. presented a new approach to self-adapting the control parameters of DE [14]. Qin et al. proposed a self-adaptive DE algorithm, in which both the trial vector generation strategies and their associated parameter values are gradually self-adapted by learning from previous experience in generating promising solutions [15]. Zhu et al. proposed an adaptive population tuning scheme (APTS) for DE to dynamically adjust the population size [16].

3.3. Hybrid DE

Wang et al. proposed an orthogonal crossover (OX) operator, which is based on orthogonal design and can perform a systematic and rational search in the region defined by the parent solutions [17]. Rahnamayan et al. proposed an opposition-based learning DE (ODE), which employs opposition-based learning (OBL) for population initialization and generation jumping [18]. Sun et al. proposed a new algorithm named DE/EDA [19], which combines DE with an estimation of distribution algorithm. Peng et al. proposed a novel DE variant with commensal learning and uniform local search, named CUDE; its main contribution is to enhance the local search performance of DE by using uniform experimental design [20].

3.4. The New Methods of Local Search Strategy

Local search can effectively improve the performance of evolutionary algorithms. For example, fittest individual refinement (FIR) was proposed by Noman et al. [21]. In FIR, the search space around the best individual is explored greedily in each generation. Later, two implementations of FIR (DEfirDE and DEfirSPX) were proposed; the experimental results show that both schemes speed up DE on a set of well-known test functions, especially in high dimensions, and that they outperform two other well-known DE variants. A crossover-based adaptive local search (LS) operator was proposed to enhance the performance of the standard DE algorithm [22]; it mainly improves the local search by adaptively adjusting the search length using a hill-climbing heuristic. Trigonometric local search (TLS) and interpolated local search (ILS) were proposed in [23]. Combining these two local search strategies, two new DE variants (DETLS and DEILS) were implemented; the new scheme improved the solution quality of DE without compromising the convergence rate. A restart differential evolution algorithm with local search mutation (RDEL) was proposed in [24]. In RDEL, a novel local mutation rule based on the positions of the best and the worst individuals in the population of a given generation is introduced and combined with the basic mutation rule through a linearly decreasing function. The new local mutation effectively enhances the local search tendency of basic DE and accelerates its convergence. An adaptive local search for dynamically balancing global search (GS) and local search (LS) was proposed in [25]: if LS performs better than GS, its utilization preference is increased; if LS performs poorly, that preference is reduced. The performance of the resulting hybrid algorithm was evaluated on 10 benchmark problems, and the results prove its effectiveness. An enhanced differential evolution with random local search (DERLS) was proposed in [26]. The advantage of the random local search in DERLS is that it makes a small random "jump" to a more promising area of the solution space, thus escaping local optima; it is simple, fast, and more efficient on multimodal functions than classical DE. Finally, Peng et al. proposed a heterozygous differential evolution with Taguchi local search, which effectively enhances the local search performance of DE [27].

Inspired by these local search methods, this paper uses the Hadamard matrix to construct a local search operator for DE.

4. Hadamard Local Search for Differential Evolution

4.1. Motivation

A crossover operator is a recombination operator that generates offspring around the parents; a local search strategy can therefore be regarded as a move operator [22]. In traditional DE, the binomial crossover operator (the most commonly used crossover operator) generates and evaluates only one single trial vector, which is a vertex of the hyper-rectangle defined by the mutant vector and the target vector [17]. That is to say, only one of many combinations is obtained, so the search of the space around the parents is inadequate. On the other hand, if all vertices of the hyper-rectangle defined by the mutant vector and the target vector were examined, a great deal of computation would be needed. In this paper, a compromise is adopted: a Hadamard matrix is used to construct a local search operator that examines several of these vertices.

4.2. Local Search Based on Hadamard Matrix

A Hadamard matrix is a square matrix whose entries are either +1 or −1 and whose rows are mutually orthogonal. For example, the fourth-order Hadamard matrix ($H_4$) is represented as follows:

$$H_4 = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}.$$

In geometric terms, this means that any two distinct rows of a Hadamard matrix are perpendicular vectors [28]. Moreover, except for the first row, each row contains +1 in half of its entries and −1 in the other half. These features allow us to construct a new local search strategy based on the Hadamard matrix (HLS).
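For readers who wish to verify these properties, SciPy provides a Sylvester-type Hadamard constructor. The short Python snippet below (our illustration, not part of the paper) builds $H_4$ and checks the row orthogonality:

import numpy as np
from scipy.linalg import hadamard

H4 = hadamard(4)  # fourth-order Hadamard matrix with entries +1/-1
# Rows are mutually orthogonal, so H4 @ H4.T equals 4 * I
assert np.array_equal(H4 @ H4.T, 4 * np.eye(4, dtype=int))
# Every row except the first contains exactly two +1s and two -1s
assert all(row.sum() == 0 for row in H4[1:])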

Based on the above characteristics of the Hadamard matrix, we propose a new local search operator, taking the fourth-order Hadamard matrix H4 as an example. Since the dimension D of an optimization problem is generally greater than 4, the crossover operator cannot be built directly on H4. To use H4, the D-dimensional space of the optimization problem needs to be divided into several subspaces. For example, if the dimension of the optimization problem is 10, the interval [1, 10] is randomly divided into four subintervals, each corresponding to one column of H4. Figure 1 shows an example of HLS.

Algorithm 1 presents the steps of HLS. Given a target individual x1 and its mutant v1, HLS produces four offspring. Due to the structure of the Hadamard matrix, these four offspring are four different combinations of the subvectors of v1 and x1. Compared with the traditional crossover operator, HLS searches the local space more thoroughly and finds a better solution more easily. Therefore, HLS improves the search performance when the classical crossover cannot find a better solution.

Input: v1, x1
(1) Divide v1 and x1 into four subvectors randomly
(2) Read the information of H4
(3) for i = 1 : 4
(4)  for j = 1 : 4
(5)   if H4[i][j] == 1
(6)    offspring[i][j] = v1[j]
(7)   else
(8)    offspring[i][j] = x1[j]
(9)   end
(10) end
(11) end
Output: offspring
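In Algorithm 1, offspring[i][j] denotes the jth subvector of the ith offspring, and v1[j] and x1[j] denote the jth subvectors of v1 and x1. A possible NumPy rendering of the algorithm is sketched below; the contiguous random split into four subvectors follows the description of Figure 1, while the function name hls and the cut-point scheme are our own assumptions.

import numpy as np
from scipy.linalg import hadamard

def hls(v1, x1, rng=np.random.default_rng()):
    """Hadamard local search (Algorithm 1): four offspring from mutant v1
    and target x1. Assumes the problem dimension D is at least 4."""
    D = len(v1)
    H4 = hadamard(4)
    # Randomly split the D dimensions into four contiguous subvectors
    cuts = np.sort(rng.choice(np.arange(1, D), size=3, replace=False))
    bounds = [0, *cuts, D]
    offspring = np.empty((4, D))
    for i in range(4):
        for j in range(4):
            lo, hi = bounds[j], bounds[j + 1]
            # The jth subvector comes from v1 if H4[i][j] == 1, else from x1
            src = v1 if H4[i, j] == 1 else x1
            offspring[i, lo:hi] = src[lo:hi]
    return offspring

Note that the first row of H4 is all ones, so the first offspring is simply v1 itself; the other three offspring mix the subvectors of v1 and x1.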
4.3. New Framework of DE with HLS

There are two common ways to use a local search operator in a DE algorithm. One is to replace the original crossover operator with the local search operator, as OXDE [17] does. The other is to select an individual to perform a local search independently during evolution. In essence, the goal of local search is to find better offspring than the target individual; when the crossover operator can already produce better offspring, there is no need for local search. During evolution, the success rate of individual updates is relatively high in the early stage, while in the late stage it becomes very low and even tends to zero.

In the following, we use three representative functions, Quartic with Noise, Penalized1, and Shifted Ackley, to illustrate this behaviour. In the experiment, the population size was set to 30. Figure 2 shows the successful individual updates when solving these three functions. As can be seen from Figure 2, almost no individuals can be successfully updated in the later stage.

To improve the success rate of DE during the later stage, a new framework of DE with HLS is proposed. The HLS operator does not affect the performance of the algorithm in the early stage of evolution, but it can effectively avoid premature convergence in the late stage. The new framework is presented in Algorithm 2. To avoid consuming too many evaluations, the framework invokes HLS with a specified probability P (see Algorithm 2, Step 14). In our experiments, P is set to 0.1. In addition, to make full use of the information of the evolution process, HLS uses the mutant vector to construct the local search (see Algorithm 2, Step 15).

Input: D, NP, F, CR, P, MaxFEs
(1) Randomly initialize population pop
(2) Evaluate pop by the objective function obj_func to get fit
(3) FEs = NP
(4) while FEs < MaxFEs do
(5)  for i = 1 : NP do
(6)   Execute the mutation operator to generate a mutant vector v_i
(7)   Execute the crossover operator to generate a trial vector u_i
(8)   Evaluate the trial vector u_i to get fit_ui
(9)   FEs = FEs + 1
(10)  if fit_ui < fit(i)
(11)   pop(i, :) = u_i
(12)   fit(i) = fit_ui
(13)  else
(14)   if rand < P
(15)    offspring = HLS(v_i, pop(i, :))
(16)    ovalue = obj_func(offspring)
(17)    FEs = FEs + 4
(18)    [min_value, min_index] = min(ovalue)
(19)    if min_value < fit(i)
(20)     pop(i, :) = offspring(min_index, :)
(21)     fit(i) = min_value
(22)    end
(23)   end
(24)  end
(25) end
(26)end
Output: optimal solution
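Putting the pieces together, a minimal Python sketch of Algorithm 2 might look as follows, reusing the hypothetical hls function from Section 4.2. The structure and names are our illustration; the authors' released implementation is linked at the end of the paper.

import numpy as np

def dehls(obj, lo, hi, D=30, NP=30, F=0.9, CR=0.9, P=0.1, max_fes=300_000,
          rng=np.random.default_rng()):
    """DE/rand/1/bin with Hadamard local search (Algorithm 2) -- sketch.
    max_fes defaults to 10,000 * D with D = 30, as in the experiments."""
    pop = lo + rng.random((NP, D)) * (hi - lo)       # Step 1: initialization
    fit = np.array([obj(x) for x in pop])            # Step 2: evaluation
    fes = NP                                         # Step 3
    while fes < max_fes:                             # Step 4
        for i in range(NP):                          # Step 5
            r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], 3,
                                    replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])    # Step 6: mutation
            j_rand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])            # Step 7: crossover
            fu = obj(u)                              # Step 8
            fes += 1                                 # Step 9
            if fu < fit[i]:                          # Steps 10-12: selection
                pop[i], fit[i] = u, fu
            elif rng.random() < P:                   # Step 14: HLS gate
                off = hls(v, pop[i], rng)            # Step 15
                vals = np.array([obj(o) for o in off])  # Step 16
                fes += 4                             # Step 17
                k = vals.argmin()                    # Step 18
                if vals[k] < fit[i]:                 # Steps 19-21
                    pop[i], fit[i] = off[k], vals[k]
    k = fit.argmin()
    return pop[k], fit[k]

For example, dehls(lambda x: np.sum(x**2), -100.0, 100.0) would minimize the 30-dimensional sphere function under these assumptions.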
4.4. Computational Complexity

For DE with HLS, the computational complexity is determined by the number of times the three DE operators and the HLS operator are executed, and the execution time is proportional to the dimension of the search space. Consequently, DE with HLS has a worst-case time complexity on the order of $O(NP \cdot G_{\max} \cdot D \cdot (1 + 4P))$, where $NP$ is the population size, $G_{\max}$ is the maximum number of generations, $D$ is the dimension, and $P$ is the user-defined probability of executing HLS. Since $P$ is a constant, it is easy to deduce that the time complexity of DE with HLS is $O(NP \cdot G_{\max} \cdot D)$. In [9], the time complexity of DE is $O(NP \cdot G_{\max} \cdot D)$. Therefore, the time complexity of DE with HLS is the same as that of DE.
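As a back-of-the-envelope check (our accounting, assuming HLS fires with probability at most $P$ for each individual whose trial vector fails), the evaluation budget can be bounded as

$$\text{FEs per generation} \le NP \cdot (1 + 4P),$$

so over $G_{\max}$ generations the total is at most $NP \cdot G_{\max} \cdot (1 + 4P)$; with $P = 0.1$ this is only a factor of 1.4 more evaluations than classical DE.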

5. Experimental Study

5.1. Test Suite

In our experiments, twenty widely used test functions [29, 30] are used to evaluate DEHLS. These functions include 7 unimodal functions (f1–f7), 6 multimodal functions (f8–f13), and 7 shifted functions (f14–f20). The information on each function is listed in Table 1.

5.2. Experimental Setting

In this paper, three sets of experiments were conducted. The first set combined HLS with four classical DE schemes to verify the validity of HLS. The second set analysed the influence of the order of the Hadamard matrix. The third set tested the performance of jDE [14] with HLS and compared it with four other state-of-the-art DE variants (jDE, SaDE, ODE, and OXDE).

In all experiments, the dimension (D) of the test problems is set to 30, the termination criterion is MaxFEs = 10,000 × D function evaluations, and 30 independent runs were conducted. In the first set of experiments, the parameters F and CR are both set to 0.9, and the population size is set to NP = D. In the third set of experiments, the parameters of the four compared algorithms are set according to the original literature.

5.3. Quality of the HLS

In this section, four classical schemes of DE, namely, DE/rand/1, DE/best/1, DE/rand-to-best/1, and DE/current-to-best/1, are used to evaluate the quality of HLS. To distinguish them, these four schemes are named DE1, DE2, DE3, and DE4, respectively. The HLS operator is integrated into each of the four schemes, yielding DE1HLS, DE2HLS, DE3HLS, and DE4HLS. To guarantee the fairness of the experiment, the same parameters are used for all algorithms.

Table 2 presents the results of the experiment. "Average error" and "standard error" denote, respectively, the average and the standard deviation of the function error obtained by each algorithm. The results of the Wilcoxon rank-sum test are marked "−," "+," and "≈" in the table, indicating that the performance of DE without HLS is worse than, better than, or similar to that of DE with HLS, respectively. In addition, Figures 3–10 show the evolutionary processes of the competitors.

From the results in Table 2, we can see that HLS greatly improves the performance of the four classical DE schemes. On the 20 benchmark problems, the numbers of functions on which the four DE schemes with HLS obtained better results than the corresponding DE schemes are 16, 15, 13, and 13, respectively. This suggests that HLS improves performance on the majority of test functions. Conversely, the numbers of functions on which the four DE schemes with HLS obtained worse results are 3, 4, 7, and 7, respectively; these are mainly unimodal functions. One reason is that the solutions of unimodal functions are easy to obtain, while HLS increases the number of evaluations consumed.

Therefore, we conclude that HLS can effectively improve the performance of DE, especially on multimodal and shifted functions.

5.4. Effect of the Order of the Hadamard Matrix

In this section, the effect of the order of the Hadamard matrix is analysed. A Hadamard matrix can only exist with order 1, 2, or a multiple of 4; to keep the computing burden low, Hadamard matrices of orders 4, 8, and 16 are used in the experiments (written as HLS-4, HLS-8, and HLS-16). On the other hand, as the analysis in Section 5.3 shows, DE1HLS (DE/rand/1 + HLS) is the best of the four schemes; thus, DE1HLS is used in this experiment.

Table 3 summarizes the results of the experiment, in which "†" marks the best result among the three variants. The statistical results are given in the last row of the table.

As can be seen from the results in Table 3, when the Hadamard matrix order is 4, the solutions of 16 problems are better than those obtained with the other two orders (8 and 16). Table 4 further gives the average rankings of the three Hadamard matrix orders (based on the Friedman test). HLS-4 has the best average ranking; thus, the following experiments use HLS-4.

5.5. Implementation in jDE

In this section, the proposed HLS is implemented in jDE [14], a very powerful state-of-the-art DE variant. jDEHLS is compared with four state-of-the-art DE variants (jDE, SaDE, ODE, and OXDE). The experimental results and Wilcoxon's rank-sum test are summarized in Table 5.

Compared with jDE, jDEHLS is superior to jDE on 10 functions and similar to jDE on 10 functions. Compared with SaDE, jDEHLS is superior to SaDE on 10 functions, but inferior to SaDE on 3 functions and similar to SaDE on 7 functions. Compared with ODE, jDEHLS is superior to ODE on 13 functions, but inferior to ODE on 1 function and similar to ODE on 6 functions. Compared with OXDE, jDEHLS is superior to OXDE on 17 functions, but inferior to OXDE on 1 function and similar to OXDE on 2 functions.

In short, HLS can improve the performance of jDE. On the 20 test functions, jDEHLS performs the best among the five methods.

To judge whether the results of the five methods differ in a statistically significant way, the nonparametric Friedman test is conducted. The test results are presented in Table 6. As shown in Table 6, the average rankings of the five algorithms, from best to worst, are jDEHLS, jDE, ODE, SaDE, and OXDE.

In addition, the multiproblem Wilcoxon's test was conducted to check the behaviour of the five algorithms. The results are also summarized in Table 6. The R+ values are higher than the R− values in all cases, which suggests that jDEHLS is markedly superior to OXDE, SaDE, ODE, and jDE.

Figures 11–18 show the convergence processes of the five algorithms on eight representative test functions. As illustrated in these eight graphs, jDEHLS converges faster than the other algorithms.

Figure 19 shows the ranking of the number of optimal solutions achieved by each algorithm. Obviously, jDEHLS ranks first here as well.

6. Conclusion

This paper presents a new local search operator based on the Hadamard matrix, called HLS. HLS searches the subspace defined by the target individual and its mutant vector, and it improves the probability of finding a better solution in this space by constructing multiple offspring. This is very beneficial for balancing exploration and exploitation. Implementations in four classical DE schemes and in one DE variant, jDE, demonstrate the effectiveness of HLS. In future work, a parameter adaptation mechanism for HLS is expected to be developed. In addition, using HLS for large-scale optimization problems will be considered. Finally, the proposed HLS operator may be used to tackle complex real-world optimization problems.

The source code of DEHLS is available at https://github.com/gitdxg110/DEHLS_v1.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (nos. 61763019, 61364025, and 62041603), the Natural Science Foundation of Jiangxi Province (nos. 20202BABL202036 and 20202BABL202019), the Science and Technology Foundation of Jiangxi Province, China (nos. GJJ180891, GJJ170953, and GJJ201808), and the “Thirteenth Five-Year Plan” of Education Science in Jiangxi Province for 2017 (no. 17YB211).