Abstract

The discounted {0-1} knapsack problem is a variant of the knapsack problem with a group structure and discount relationships among items. A moth-flame optimization algorithm is shown to have good search ability when combined with an effective solution representation designed for the discounted {0-1} knapsack problem. A new encoding scheme uses a shorter binary vector to reduce the search domain and speed up the computing time. A greedy repair procedure helps the algorithm converge quickly and reduces the gap between the best-found solution and the optimal solution. The experimental results on 30 discounted {0-1} knapsack problem instances are used to evaluate the proposed algorithm. The results demonstrate that the proposed algorithm outperforms two binary PSO algorithms and a genetic algorithm in solving the 30 DKP01 instances. The Wilcoxon rank-sum test is used to support these claims.

1. Introduction

The knapsack problem is a well-known problem in combinatorial optimization. There are many variants of the knapsack problem, such as the 0-1 knapsack problem, the multidimensional knapsack problem, the change-making problem, the generalized assignment problem, the bin-packing problem, and the discounted {0-1} knapsack problem (DKP01). Among these variants, the discounted {0-1} knapsack problem is relatively new. The DKP01 was first introduced by Guldan in [1]. This problem plays an important role in modern real-world commerce and appears as a component of many key problems such as investment decision-making, project selection, and budget control. An exact algorithm based on dynamic programming for the DKP01 was first proposed in [1]. An approach that combines dynamic programming with the core of the DKP01 to solve it is considered in [2]. Two approximation algorithms for the DKP01, named FirEGA and SecEGA, are proposed in [3].

DKP01 can be presented as follows:

maximize  f(X) = Σ_{i=1}^{n} (p_{3i-2} x_{3i-2} + p_{3i-1} x_{3i-1} + p_{3i} x_{3i})  (1)

subject to  x_{3i-2} + x_{3i-1} + x_{3i} ≤ 1,  i = 1, 2, …, n  (2)

Σ_{i=1}^{n} (w_{3i-2} x_{3i-2} + w_{3i-1} x_{3i-1} + w_{3i} x_{3i}) ≤ C,  x_j ∈ {0, 1},  j = 1, 2, …, 3n  (3)

where x_{3i-2}, x_{3i-1}, and x_{3i} represent whether the three items of group i are put into the knapsack; x_j = 0 indicates that item j is not in the knapsack, while x_j = 1 indicates that item j is in the knapsack. It is worth noting that a binary vector X = (x_1, x_2, …, x_{3n}) is a potential solution of DKP01. Only if X meets both (2) and (3) is it a feasible solution of DKP01. n is the number of groups, each group has three items, and each item j has its own profit p_j and weight w_j. Items are selected for the knapsack with the aim of maximizing the total profit while the total weight does not exceed the capacity C, and each group contributes at most one item.
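To make the formulation concrete, the following MATLAB sketch evaluates a candidate 0-1 vector X of length 3∗n against the objective and the two constraints. The variable names p, w, C, and n are placeholders for the profit vector, weight vector, knapsack capacity, and number of groups; the function is only an illustration and is not part of the proposed algorithm.

function [profit, feasible] = evaluateDKP01(X, p, w, C, n)
% Evaluate a candidate solution X (binary row vector of length 3n).
% p, w: profits and weights of the 3n items; C: knapsack capacity.
profit   = sum(p(X == 1));           % objective value, equation (1)
weightOK = sum(w(X == 1)) <= C;      % capacity constraint, equation (3)
groupOK  = true;                     % group constraint, equation (2)
for i = 1:n
    idx = 3*(i-1) + (1:3);           % the three items of group i
    if sum(X(idx)) > 1               % at most one item per group
        groupOK = false;
        break;
    end
end
feasible = weightOK && groupOK;
end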

Later studies also provided a detailed analysis of algorithms for the DKP01 and proposed new deterministic and approximation algorithms. A new exact algorithm and two approximation algorithms with a greedy repair operator were proposed to solve the DKP01 [4]. An algorithm based on PSO, named GBPSO, uses discrete particle swarm optimization [5]. Other approaches include an evolutionary algorithm combined with ring theory [6], a binary moth search algorithm [7], and a teaching-learning-based optimization algorithm [8].

Moth-flame optimization was first proposed in [9] and has been successfully applied to many optimization problems, such as a quantum-behaved simulated annealing algorithm-based moth-flame optimization method [10], an efficient task scheduling approach using the moth-flame optimization algorithm for cyber-physical system applications in fog computing [11], a hybrid Harris hawks-moth-flame optimization algorithm including fractional-order chaos maps and evolutionary population dynamics [12], a differential moth-flame optimization algorithm for mobile sink trajectory [13], an LVCI approach for optimal allocation of distributed generations and capacitor banks in distribution grids based on the moth-flame optimization algorithm [14], real challenging constrained engineering optimization problems [15], parameter extraction of the three-diode model for multicrystalline solar cells [16], Alzheimer's disease diagnosis [17], profit maximization with integration of a wind farm [18], and MFO with a rolling mechanism to forecast the electricity consumption of Inner Mongolia [19].

This research proposes a new moth-flame optimization (MFO) algorithm and a new encoding scheme for the DKP01. A 0-1 vector of length 2∗n is used to represent a solution in combination with MFO. This advantageous solution representation was first used by Truong [20]. The experimental results on 30 discounted {0-1} knapsack problem (DKP01) instances are used to evaluate the proposed algorithm. The results demonstrate that the proposed algorithm outperforms two binary PSO algorithms and a genetic algorithm in solving the 30 DKP01 instances. The main contributions are as follows:
(i) The moth-flame optimization algorithm shows good search ability when combined with an effective solution representation designed for the discounted {0-1} knapsack problem.
(ii) A new encoding scheme uses a shorter binary vector to reduce the search domain and speed up the computing time.
(iii) A greedy repair procedure helps the algorithm converge quickly and reduces the gap between the best-found solution and the optimal solution.

The rest of this paper is organized as follows: Section 2 presents related works. Section 3 proposes the moth-flame optimization algorithm for the DKP01. The simulation results of the proposed algorithms are presented in Section 4. We conclude this paper and suggest potential future work in Section 5.

2. Related Works

2.1. Particle Swarm Optimization

PSO conducts its search using a swarm of particles that is randomly initialized [21, 22]. The standard particle swarm optimizer maintains a swarm of particles that represent potential solutions to the problem at hand. Suppose that the search space is D-dimensional; the position of the ith particle of the swarm can be described by a D-dimensional vector X_i = (x_{i1}, x_{i2}, …, x_{iD}). The velocity of the particle is described by a D-dimensional vector V_i = (v_{i1}, v_{i2}, …, v_{iD}). The best position found so far by the ith particle is denoted pbest_i = (pbest_{i1}, pbest_{i2}, …, pbest_{iD}). In essence, the trajectory of each particle is updated according to its own flying experience as well as that of the best particle in the swarm. The basic PSO algorithm can be described as

v_{id}^{k+1} = ω v_{id}^{k} + c_1 r_1 (pbest_{id}^{k} - x_{id}^{k}) + c_2 r_2 (gbest_{d}^{k} - x_{id}^{k})  (4)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}  (5)

where v_{id}^{k} is the dth dimension velocity of particle i in cycle k; x_{id}^{k} is the dth dimension position of particle i in cycle k; pbest_{id}^{k} is the dth dimension of the personal best (pbest) of particle i in cycle k; gbest_{d}^{k} is the dth dimension of the global best particle (gbest) in cycle k; ω is the inertia weight; c_1 is the cognitive weight and c_2 is a social weight; and r_1 and r_2 are two random values uniformly distributed in the range [0, 1] [23].
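As a small illustration of equations (4) and (5), the MATLAB fragment below performs one update step for a single particle. The numeric values are placeholders chosen only so that the fragment runs on its own; in the actual algorithm the vectors come from the swarm state.

% Illustrative values only; in the real algorithm these come from the swarm state
D = 5; w = 0.7; c1 = 2; c2 = 2;
X = rand(1, D); V = zeros(1, D);
Pbest = rand(1, D); Gbest = rand(1, D);
r1 = rand(1, D); r2 = rand(1, D);                     % random values in [0, 1]
V = w*V + c1*r1.*(Pbest - X) + c2*r2.*(Gbest - X);    % velocity update, equation (4)
X = X + V;                                            % position update, equation (5)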

2.2. Moth-Flame Optimization Algorithm

Mirjalili [9] proposed MFO in 2015 as a nature-inspired optimization algorithm that simulates the behavior of a swarm of moths (search agents), which have a unique night navigation method. In the MFO algorithm, the candidate solutions are treated as search agents. In order to model how the moths move in a spiral, an m-by-d matrix, namely M, is used, where m is the number of search agents and d is the number of dimensions. The objective function values of the moths are stored in an m-by-one matrix, namely OM.

The flame, which is stored in an m-by-d matrix called F, is also an important part of this algorithm. It is assumed that there is an array OF for storing the fitness values of F. In the MFO algorithm, F can be thought of as the best locations found so far in the search space. To mathematically model this behavior, each search agent's location is modified as follows:

M_i = S(M_i, F_j)  (6)

where M_i is the ith search agent and F_j is the jth flame (a best position found so far), and S indicates the logarithmic spiral function, which is defined as follows:

S(M_i, F_j) = D_i · e^{bt} · cos(2πt) + F_j  (7)

where t is a random number in [-1, 1], b is a constant that defines the shape of the logarithmic spiral, and the factor D_i is the distance of the ith search agent from the jth flame, which is calculated as follows:

D_i = |F_j - M_i|  (8)

Each moth is required to use only one of the flames to change its location in this algorithm, and an adaptive mechanism for the number of flames is suggested as follows:

flame number = round(N - t·(N - 1)/T)  (9)

where t is the current iteration number, N is the maximum number of flames, and T is the maximum iteration number.
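The MATLAB fragment below sketches the spiral movement of equations (6)-(8) for one moth dimension and the flame-number reduction of equation (9). The numeric values and the pairing of moth i with flame j are placeholders added only so the fragment runs on its own; it is illustrative rather than a complete implementation.

% Illustrative values only; in the algorithm these come from the moth/flame state
m = 4; d = 3; N = m; T = 100; t = 10; b = 1;
M = rand(m, d); F = rand(m, d);
flameNo = round(N - t*(N - 1)/T);                 % number of flames, equation (9)
i = 1; j = 1; k = 1;                              % one moth, one flame, one dimension
Dik = abs(F(j, k) - M(i, k));                     % distance to the flame, equation (8)
r   = 2*rand - 1;                                 % random number in [-1, 1]
M(i, k) = Dik*exp(b*r)*cos(2*pi*r) + F(j, k);     % spiral update, equations (6) and (7)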

3. The Proposed MFO for DKP01

3.1. Solution Presentation

Currently, there are two methods for presenting a solution: one uses a binary vector with a length equal to the dimension of the problem, which is 3∗n [3, 7, 24, 25], and the other uses an integer vector with a length equal to the number of groups n [8].

The solution [20] is presented in this paper using a new binary encoding scheme with a length of 2∗n. This encoding scheme has the benefit of being shorter and of automatically satisfying constraint 2. The new binary encoding scheme is introduced in Table 1. Compared with it, the previous solution presentation shown in Table 2 has two disadvantages: it uses a longer vector to present a solution, and four of the eight possible bit patterns of each group violate the group constraint.
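Because Table 1 is not reproduced in this text, the exact bit-to-item correspondence is not restated here. The MATLAB sketch below therefore uses one hypothetical mapping (00 selects nothing, 01 the first item, 10 the second item, and 11 the third, discounted item of a group) purely to illustrate how a 2∗n-bit vector Y can be decoded into the 3∗n-dimensional vector X. Whatever the exact table, each 2-bit pair activates at most one item per group, which is why constraint 2 is satisfied automatically.

function X = decodeSolution(Y, n)
% Decode a 2n-bit vector Y into a 3n-bit vector X.
% The 2-bit-to-item mapping below is hypothetical; the actual mapping is given in Table 1.
X = zeros(1, 3*n);
for i = 1:n
    bits = Y(2*i-1 : 2*i);                        % the two bits of group i
    base = 3*(i-1);
    if     isequal(bits, [0 1]), X(base + 1) = 1; % first item of the group
    elseif isequal(bits, [1 0]), X(base + 2) = 1; % second item of the group
    elseif isequal(bits, [1 1]), X(base + 3) = 1; % third (discounted) item
    end                                           % [0 0] selects no item
end
end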

3.2. Repair Function

Constraint 2 is automatically satisfied by the proposed encoding scheme. A new repair operator based on the concept in [3] is proposed to handle constraint 3 and improve the quality of the solution.

The benefit of this repair technique is that it strikes a balance between CPU time consumption and the avoidance of local optima. The items are arranged in an order H[1], H[2], …, H[3n] by their profit-to-weight ratios p_j/w_j (j = 1, 2, …, 3n) so that the ratios are nonincreasing. It means that

p_{H[1]}/w_{H[1]} ≥ p_{H[2]}/w_{H[2]} ≥ … ≥ p_{H[3n]}/w_{H[3n]}  (10)

This repair operator consists of two phases. The first phase (called the repair phase) examines the items from the smallest to the largest profit-to-weight ratio and drops an item from the knapsack if feasibility is violated. The second phase (called the optimization phase) examines the items from the largest to the smallest profit-to-weight ratio and adds an item to the knapsack as long as feasibility is not violated. The repair phase aims to obtain a feasible solution from an infeasible solution, whilst the optimization phase seeks to improve the fitness of a feasible solution. The details of this repair operator can be found in [20].
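The sketch below is a simplified MATLAB reading of these two phases under the notation of equation (10); order is assumed to hold the item indices H[1], …, H[3n] sorted from the largest to the smallest profit-to-weight ratio, and the exact operator of [20] may differ in details.

function X = greedyRepair(X, p, w, C, order)
% order: item indices sorted by nonincreasing profit-to-weight ratio, as in equation (10).
% Repair phase: drop items, worst ratio first, until the capacity constraint holds.
for j = order(end:-1:1)
    if sum(w(X == 1)) <= C
        break;                          % the solution is already feasible
    end
    X(j) = 0;
end
% Optimization phase: add items, best ratio first, while feasibility is preserved.
for j = order
    g = ceil(j/3);                      % group index of item j
    groupEmpty = sum(X(3*g-2 : 3*g)) == 0;
    if X(j) == 0 && groupEmpty && sum(w(X == 1)) + w(j) <= C
        X(j) = 1;                       % adding item j keeps the solution feasible
    end
end
end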

The overall pseudocode of the MFO algorithms for DKP01 is given in Algorithm 1.

Input: initial parameters
Output: optimal solution
(1)Initialize search agents M
(2)t = 1;
(3)while t ≤ T do
(4) Update flame no. by equation (9)
(5) Generate binary X matrix by equation (11);
(6) Apply repair operator on X and assign its fitness to OM;
(7)if t == 1 then
(8)  F = sort(M);
(9)  OF = sort(OM);
(10)else
(11)  F = sort(M ∪ F);
(12)  OF = sort(OM ∪ OF);
(13)for i=1:m do
(14)  for j=1:d do
(15)   Calculate D by equation (8);
(16)   Update M (i, j) by equations (6) and (7);
(17)t = t + 1.
3.3. Binary Moth-Flame Optimization Algorithm

The MFO algorithm was designed for the real-valued domain. To solve the DKP01, MFO is redesigned to search in a binary domain. Equations (11)–(13) are used to convert real vectors to binary vectors, where TF(·) denotes a transfer function that maps a real value to a probability, given by the following expressions:

In this section, two binary algorithms based on MFO, named MFO1 and MFO2, are proposed. MFO1 and MFO2 use the transfer functions in equation (12) and equation (13), respectively. Both use equation (11) to compute the binary vector X.
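Since the closed forms of equations (11)-(13) are not reproduced in this text, the MATLAB fragment below only illustrates the general pattern with two placeholder transfer functions, a sigmoid S-shaped one and a |tanh| V-shaped one, together with a simple probabilistic rounding rule; the functions and rounding rule actually used by MFO1 and MFO2 are those defined in equations (11)-(13).

% Placeholder transfer functions (stand-ins for equations (12) and (13))
TF_S = @(x) 1 ./ (1 + exp(-x));      % an S-shaped (sigmoid) example
TF_V = @(x) abs(tanh(x));            % a V-shaped example

% Convert the real-valued moth matrix M into a binary matrix X, in the spirit of
% equation (11): entry (i, j) becomes 1 with probability TF(M(i, j)).
M = 10*rand(4, 6) - 5;               % illustrative real-valued positions
X = double(rand(size(M)) < TF_S(M)); % use TF_V(M) for the V-shaped variant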

4. Simulation Results

In this paper, the experimental results of the MFO1 and MFO2 algorithms are compared to find out which is the better one for solving the DKP01. The proposed algorithms are also compared with the genetic-based algorithm named SecEGA taken from [6] and two PSO algorithms taken from Truong [20]. The PSO1 and PSO2 algorithms are BPSO7 and BPSO8 from Truong [20], respectively. The 30 DKP01 test instances include 10 weakly correlated instances (denoted as WDKP1–WDKP10), 10 inverse strongly correlated instances (denoted as IDKP1–IDKP10), and 10 strongly correlated instances (denoted as SDKP1–SDKP10) [3].

All experiments of the proposed algorithms are run on an ASUS laptop with an Intel(R) Core(TM) i5-8250U CPU at 1.6 GHz and 8 GB of DDR4 memory. The operating system is Microsoft Windows 10. The programming language is MATLAB, version R2016a.

In MFO, the number of moths is set to 50, and the search domain is the interval [1, 10]. The parameters of SecEGA are shown in [6]. The population size of SecEGA is set to 50, and the number of iterations is set equal to the dimension of the DKP01. For a fair comparison, the parameters of the two PSO algorithms are set as follows: the number of particles is 50, c_1 and c_2 are set to 2, the inertia weight ω is linearly decreased from 0.9 to 0.4, the maximum number of iterations is set equal to the dimension of the DKP01, and the stopping criterion is satisfied when the maximum number of iterations is reached. For all algorithms, the maximum number of iterations is set equal to the dimension of the DKP01. The Gap between the best-found solution and the optimal solution is calculated by

Gap = (Opt - Best)/Opt × 100%

where Opt is the optimal value of the instance and Best is the best objective value found by the algorithm.

In this section, the terms Best, Average, Worst, and StdDev denote the best, average, worst, and standard deviation of the results over 30 independent runs, respectively.

Tables 3–5 summarize the comparison among PSO1, PSO2, MFO2, SecEGA, and MFO1 based on six performance criteria over 30 independent runs: Best, Average, Worst, StdDev, Gap, and Average rank. MFO1 is better than PSO1, PSO2, MFO2, and SecEGA in Best, Average, and Worst for the SDKP and WDKP instances, but not for the IDKP instances. The MFO1 algorithm achieved the best rank on the Average results.

The results show that MFO1 is the best among the five algorithms. Table 6 summarizes the average rank of the five algorithms on the 30 instances. The result shows that MFO1 achieved the best average rank on all three groups of test instances. Figure 1 shows the boxplots of the four algorithms on the IDKP instances IDKP2, IDKP4, IDKP6, and IDKP8. Figure 2 shows the boxplots of the four algorithms on the WDKP instances WDKP2, WDKP4, WDKP6, and WDKP8. Figure 3 shows the boxplots of the four algorithms on the SDKP instances SDKP2, SDKP4, SDKP6, and SDKP8. These boxplots show that MFO1 obtained the best results.

Figure 4 shows the convergence curves of the four algorithms on the IDKP instances IDKP2, IDKP4, IDKP6, and IDKP8. Figure 5 shows the convergence curves of the four algorithms on the WDKP instances WDKP2, WDKP4, WDKP6, and WDKP8. Figure 6 shows the convergence curves of the four algorithms on the SDKP instances SDKP2, SDKP4, SDKP6, and SDKP8. These convergence curves demonstrate that MFO1 converges faster than PSO1, PSO2, and MFO2.

Therefore, the performance of MFO1 is better than that of the other algorithms for the DKP01 problem. From the above comparison, MFO1 shows far better results than PSO1, PSO2, MFO2, and SecEGA. The evidence supports that MFO1 is a promising method for solving the DKP01.

4.1. Wilcoxon Rank Sum Test

Statistical tests are used to verify that the observed differences are not simply due to chance. The nonparametric Wilcoxon rank-sum test is applied, and the calculated p values are reported as measures of significance. Small p values evidence the statistically significant superiority of the results when comparing MFO1 with MFO2, PSO1, and PSO2. The statistical results for the 30 instances are provided in Table 7.
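For readers who want to reproduce the test, MATLAB's Statistics and Machine Learning Toolbox provides the ranksum function; the fragment below applies it to two hypothetical vectors of 30 run results, which merely stand in for the recorded objective values of two algorithms.

% Wilcoxon rank-sum test between the results of two algorithms on one instance
mfo1Runs = 100 + randn(1, 30);       % placeholder values, not the paper's data
pso1Runs =  99 + randn(1, 30);       % placeholder values, not the paper's data
p = ranksum(mfo1Runs, pso1Runs);     % a small p value indicates a significant difference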

5. Conclusion

A moth-flame optimization algorithm with good search ability is combined with an effective solution representation designed for the discounted {0-1} knapsack problem. A new encoding scheme uses a shorter binary vector to reduce the search domain and speed up the computing time. A greedy repair procedure helps the algorithm converge quickly and reduces the gap between the best-found solution and the optimal solution. The simulation results on 30 DKP01 instances show that the proposed algorithms are better than two particle swarm optimization algorithms and one genetic algorithm. In the future, other variants of the moth-flame optimization algorithm will be considered for the discounted {0-1} knapsack problem.

Data Availability

The data used to support the findings of this study are included within the article or are made publicly available to the research community at https://www.researchgate.net/publication/336126537_Four_kinds_of_D0-1KP_instances.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author thanks Van Lang University for supporting this work.