Abstract

Minimizing the sum of linear ratios over a polyhedron is a class of linear fractional programming (LFP). In this paper, we propose a new linear relaxation technique and embed it in a branch-and-bound framework to solve the LFP globally. It is worth emphasizing that the branching operation of the algorithm takes place in the relatively low-dimensional output space rather than in the space of the decision variables. When the number of linear fractions in the objective function is much smaller than the dimension of the decision variable, the algorithm performs particularly well. Finally, numerical experiments illustrate the effectiveness, feasibility, and other aspects of the algorithm's performance.

1. Introduction

Consider the linear fractional programming (LFP):where , , , and . is a nonempty polyhedral feasible set, with and , in a decision (or variable) space . Note that throughout the paper, denotes the transpose of a vector; for instance, the previous denotes the transpose of the vector . Furthermore, for each , we let and , which will simplify the subsequent exposition. For each linear fraction in the objective function, we assume that and ; by [1], this assumption entails no loss of generality: as long as the denominator of is nonzero, we can convert it into and by means of the method in [1].

The problem (LFP) has many important applications in laminated manufacturing [2, 3], material layout [4], MIMO networks [5], and economics [6], among others. From [7–9], it can be seen that the computational efficiency of an algorithm is very sensitive to the number of linear fractions in problem (LFP). When , Charnes and Cooper [10] solved the linear fractional program indirectly by transforming it into an equivalent linear programming problem. Charnes and Novaes [11] proposed an updated objective function method that solves linear fractional programs by solving a series of linear programs, while Dinkelbach [12] used a parametric approach. Through a transformation of the objective function and constraints, Das and Mandal [13] reduced the linear fractional program to an equivalent linear program and then applied the simplex method. Indeed, when , Matsui [14] proved that problem (LFP) is NP-hard. Its solution has therefore attracted the attention of numerous scholars. So far, many methods have been proposed for solving the LFP and its special forms, such as the image space method [15], the outer approximation method [16], the unifying monotone method [17], the cutting plane method [18], branch-and-bound algorithms [7, 8, 19, 20], the interval-split algorithm [21], the interior point method [22], heuristic methods [23], and the concave minimization method [24]. Besides, based on [25], by introducing strategies such as bound-lift and cone-compression, Shen et al. [26] proposed a new branch-reduction-bound algorithm.
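For the single-ratio case treated by Charnes and Cooper [10], the substitution t = 1/(denominator), y = t·x turns the ratio objective into a linear one over a lifted polyhedron. The sketch below illustrates this transformation on a tiny instance; the brute-force vertex-enumerating `lp_min` helper and the example data are our own illustrative assumptions, not taken from the paper, and are suitable only for very small problems.

```python
import itertools
import numpy as np

def lp_min(c, A, b, tol=1e-9):
    """Minimize c @ v subject to A v <= b by enumerating basic feasible
    points -- for illustration only; any real LP solver can substitute."""
    n = len(c)
    best_val, best_v = np.inf, None
    for rows in itertools.combinations(range(len(A)), n):
        try:
            v = np.linalg.solve(A[list(rows)], b[list(rows)])
        except np.linalg.LinAlgError:
            continue  # the chosen rows are linearly dependent
        if np.all(A @ v <= b + tol) and c @ v < best_val:
            best_val, best_v = float(c @ v), v
    return best_val, best_v

def charnes_cooper_min(c, c0, d, d0, A, b):
    """Minimize (c@x + c0)/(d@x + d0) over {x : A x <= b} (denominator
    positive there) via t = 1/(d@x + d0), y = t*x (Charnes & Cooper)."""
    m, n = A.shape
    # variables z = (y, t); constraints: A y - b t <= 0, t >= 0,
    # and d@y + d0*t = 1 written as a pair of inequalities
    A_z = np.vstack([
        np.hstack([A, -b[:, None]]),
        np.hstack([np.zeros(n), [-1.0]]),
        np.hstack([d, [d0]]),
        np.hstack([-d, [-d0]]),
    ])
    b_z = np.concatenate([np.zeros(m), [0.0], [1.0], [-1.0]])
    val, z = lp_min(np.concatenate([c, [c0]]), A_z, b_z)
    return val, z[:n] / z[n]  # optimal ratio value and recovered x

# tiny demo: min (2*x1 + x2 + 2)/(x1 + 3*x2 + 4) s.t. x1 + x2 <= 2, x >= 0
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])
val, x = charnes_cooper_min(np.array([2.0, 1.0]), 2.0,
                            np.array([1.0, 3.0]), 4.0, A, b)
# val -> 0.4, attained at x = [0, 2]
```

The lifted problem is linear in (y, t), so a single LP solve recovers the global minimum of the ratio.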
In Jiao and Liu [9], the original problem is transformed into a bilinear programming problem; a lower bound of the original problem is then constructed by linearizing the convex and concave envelopes of the bivariate product function, and the problem is solved by a branch-and-bound algorithm. By utilising a linearization technique, Jiao et al. [27] constructed a linear relaxation lower bound of the original problem, proposed a new algorithm with a pruning strategy, and established its convergence. When is a fixed number, an approximate solution algorithm is given in [1, 28, 29], and its complexity analysis shows that it is a fully polynomial-time approximation scheme. Xia et al. [30] extended the conclusion of [28] to linear-ratio-sum problems with linear matrix inequalities. Moreover, scholars have also studied linear fractional programming problems in uncertain environments, such as fuzzy linear fractional programming [31, 32], linear fractional programming with absolute value variables [33], and interval linear fractional programming [34]. Das et al. [31] proposed a simple ranking method between two triangular fuzzy numbers and gave an equivalent three-objective linear fractional programming problem to compute the upper, middle, and lower bounds of the fuzzy linear fractional programming problem, thus numerically constructing and solving for the optimal value. In [32], the authors used the Charnes–Cooper scheme and a multiobjective linear programming problem to obtain an effective algorithm for solving the fully fuzzy linear fractional programming problem, and in [33], they also proposed a new model of linear fractional programming with absolute value functions and then transformed it into independent linear programming problems via some theorems.
Then, popular algorithms (such as the simplex algorithm) are utilized to solve these problems. Recently, in [34], Abad et al. proposed two new approaches to interval linear fractional programming, and in each method, two submodels were used to obtain the range of the objective function.

In this paper, a branch-and-bound algorithm based on branching in the -dimensional output space is presented for solving the LFP. The literature [7–9] reports that a branching operation carried out in the -dimensional space may save more computation than one in the -dimensional or -dimensional space; in particular, [8] shows, through its Theorem 5 and corollary, that branching in the -dimensional space is most advantageous in the case of . Therefore, the algorithm set forth in the present paper can greatly reduce the computational cost compared with branching in the - or -dimensional space. Another advantage of our algorithm is that only linear programs need to be solved in the iterative process, which is easier than solving the subproblems of general nonlinear programming algorithms.

This paper is organized as follows. In Section 2, we give the equivalent problem (EP) of problem (LFP) together with some preparatory work. Section 3 develops the theory of the proposed algorithm based on the equivalent problem (EP), including the bounding, branching, and pruning operations, the detailed steps, and the convergence analysis. In Section 4, numerical experiments illustrate the effectiveness, feasibility, and other aspects of the algorithm's performance. Finally, Section 5 briefly summarizes the work and discusses future directions.

2. Equivalent Problem of Problem (LFP)

To obtain the equivalent problem of the LFP, we first solve the following -linear programming problems using existing linear programming methods:

Obviously, the optimal values of the above -linear programming problems determine the initial upper and lower bounds of the numerator and denominator in each linear fraction function, that is, , , , and , respectively.
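As an illustration of these bounding subproblems: when the feasible set happens to be a box, the minimum and maximum of each linear numerator and denominator have closed forms (for a general polyhedron, any LP solver yields the same bounds). The helper below is our own sketch under that box assumption; its names are not from the paper.

```python
def linear_range_on_box(c, c0, lo, hi):
    """Exact min/max of c @ x + c0 over the box lo <= x <= hi:
    each coordinate contributes its own extreme independently."""
    lower = c0 + sum(ci * (l if ci >= 0 else h) for ci, l, h in zip(c, lo, hi))
    upper = c0 + sum(ci * (h if ci >= 0 else l) for ci, l, h in zip(c, lo, hi))
    return lower, upper

# demo: bounds of 2*x1 - x2 + 3 over [0,1] x [0,2]
n_lo, n_up = linear_range_on_box([2.0, -1.0], 3.0, [0.0, 0.0], [1.0, 2.0])
# n_lo -> 1.0 (at x = (0, 2)),  n_up -> 5.0 (at x = (1, 0))
```

Collecting these pairs for all numerators and denominators yields the initial output-space rectangle.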

Next, we define the initial rectangle and the subrectangle at the current iteration .

Consider the following equivalent problem (EP):

The following Theorem 1 establishes the equivalence between problem (EP) and the original problem (LFP).

Theorem 1. If is a global optimal solution for problem (EP), then is a global optimal solution for problem (LFP). Conversely, if is an optimal solution for problem (LFP), then is a global optimal solution for problem (EP), where and .

Proof. The conclusion follows directly from the constructions above; the details are omitted.
Theorem 1 shows that a solution of problem (LFP) can be obtained indirectly by solving problem (EP).
In addition, if the rectangle in problem (EP) is replaced by its subrectangle , this generates the subproblem of problem (EP) over the rectangle .

3. Branch-and-Bound Algorithm

In this section, we will consider the three components of the branch-and-bound algorithm, namely, branching, bounding, and pruning, respectively, by studying problem (EP).

3.1. Bounding

Note that the objective function of problem (or (EP)) is separable but still nonconvex. We therefore propose a new underestimation method for , given in the following Theorem 2.

Theorem 2. Consider the rectangle , where , , , and are all constants satisfying and . For any , define the functions and as follows:
Then, the following conclusions hold:
(i) For any , the functions and satisfy
(ii) Let and ; then, .

Proof. (i) For any , we have
The first inequality in formula (8) uses the convexity of the univariate function in the case of , i.e., ; it also uses the fact that . As a result, by formula (8), it is easy to see that . Conclusion (i) holds.
(ii) By the definitions of and , we have
According to formulae (9) and (10) and conclusion (i), we have
Since is a bounded value, as . Thus, combined with (11), we have , and the proof is completed.
In the following, by using Theorem 2, we construct the lower bounding function of the objective function of problem (or (EP)). Assume that denotes or a subrectangle of generated by the branching process, where , with . Obviously, satisfy . For each and each , define
Then, by Theorem 2, we have
Thus, for any , let , and we have
Furthermore, by using formulae (13) and (14), we can construct the following linear relaxation programming problem of :
By the construction of the linear relaxation programming problem , the optimal value of problem is clearly less than or equal to that of , i.e., . Then, we have . Therefore, the linear relaxation programming problem provides a valid lower bound for the optimal value of .
It is noted that the solution obtained from the above linear relaxation programming problem , which is used to progressively update the upper bound of the optimal value of problem (EP), is a feasible solution to problem (EP).
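The exact underestimator of Theorem 2 is not reproduced in this version of the text, but the idea behind such denominator-rectangle bounds can be illustrated with the simplest member of the family: for a nonnegative numerator value y and a denominator z with 0 < z ≤ z_u, replacing z by its upper bound gives the linear underestimate y/z_u, which tightens as the rectangle shrinks. The following quick numerical check is our own sketch, not the paper's relaxation.

```python
import random

def ratio_lower_bound(num_val, z_upper):
    """y/z >= y/z_u whenever y >= 0 and 0 < z <= z_u: replacing the
    denominator by its upper bound yields a valid linear underestimate."""
    return num_val / z_upper

random.seed(0)
y_lo, y_up, z_lo, z_up = 0.5, 3.0, 1.0, 4.0
for _ in range(1000):
    y = random.uniform(y_lo, y_up)
    z = random.uniform(z_lo, z_up)
    assert ratio_lower_bound(y, z_up) <= y / z + 1e-12  # bound never exceeds y/z

# bisecting the denominator edge tightens the bound on the lower half
z_mid = 0.5 * (z_lo + z_up)
assert ratio_lower_bound(1.0, z_mid) > ratio_lower_bound(1.0, z_up)
```

This shrinking-gap behaviour is exactly what drives the convergence of the branch-and-bound scheme: as the output-space rectangle collapses onto a point, the relaxation value approaches the true ratio.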

3.2. Branching

For the selected rectangle , we use dichotomization as the branching process, which subdivides a -dimensional rectangle
into two -dimensional subrectangles and of equal volume along the midpoint of its longest edge. The specific rectangle-branching method is as follows:
(i) Let .
(ii) is divided into and , i.e.,
(iii) Let .

Note that the interval is never subdivided during the branching process. Thus, the branching process occurs only in the -dimensional space, which greatly reduces the computational cost.
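The dichotomization rule above, which bisects the selected rectangle at the midpoint of its longest edge, can be sketched as follows (a minimal helper; representing boxes as a pair of coordinate lists is our own choice):

```python
def bisect_longest_edge(lo, hi):
    """Split the box [lo, hi] into two equal-volume halves at the
    midpoint of its longest edge (the dichotomization branching rule)."""
    j = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # longest edge
    mid = 0.5 * (lo[j] + hi[j])
    left = (lo[:], hi[:j] + [mid] + hi[j + 1:])
    right = (lo[:j] + [mid] + lo[j + 1:], hi[:])
    return left, right

# demo: the first edge (length 4) is longest, so it is the one split
left, right = bisect_longest_edge([0.0, 0.0], [4.0, 2.0])
# left  -> ([0.0, 0.0], [2.0, 2.0]);  right -> ([2.0, 0.0], [4.0, 2.0])
```

Because only the output-space coordinates are stored in the box, the cost of one branching step is independent of the dimension of the decision variable.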

3.3. Pruning

Suppose is the lower bound of problem over the rectangle and denotes the best upper bound known to the algorithm so far. Let denote the tolerance of the algorithm. Then, the pruning process is the operation of deleting from the branch-and-bound tree the node information corresponding to any rectangle that satisfies .

3.4. Output-Space Branch-and-Bound Algorithm

Now, we present an output-space branch-and-bound algorithm for solving (LFP). For the -th iteration, we give the following notation in advance: is the rectangle to be subdivided, represents a set that produces a new solution after each iteration (note that the number of elements in the set does not exceed 2), and is a collection of rectangles left after pruning. and represent the optimal solution and optimal value of problem over the rectangle , respectively. represents the current lower bound of the global optimal value of problem (EP), and represents the current upper bound of the global optimal value of problem (EP). represents the best function value of all new feasible solutions to the resulting problem (EP) after each iteration is completed.

Combined with the above content, the proposed algorithm is as follows:
Step 0 (initialization): , and are obtained by (2), and then the initial rectangle is constructed. Set the tolerance . Then, the initial linear relaxation programming problem is solved, and its optimal solution and optimal value are obtained. Then, let
Step 1 (termination): If , then terminate the algorithm; is the global -optimal solution of problem (EP), and the global optimal solution of problem (LFP) can be obtained according to Theorem 1, which is . Otherwise, go to Step 2.
Step 2 (branching): Let and . Then, using the rectangle-branching method in Section 3.2, is divided into two subrectangles and .
Step 3 (pruning): For each , solve to obtain and for . If , then delete the rectangle . Otherwise, let and .
Step 4 (bounding):
Step 4.1 (upper bounding): If is not empty, let . If , set and . If is empty, both and remain the same.
Step 4.2 (lower bounding): Set and , and go to Step 1.
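The overall loop of Steps 0–4 can be sketched end to end. The code below is a simplified stand-in, not the paper's exact algorithm: it uses the crude bound n_i(x)/u_i in place of the tighter Theorem 2 underestimator, and a brute-force vertex-enumeration LP solver that is viable only for tiny instances. The test instance and all names are our own assumptions.

```python
import itertools
import numpy as np

def lp_min(c, A, b, tol=1e-9):
    """Minimize c @ v s.t. A v <= b by enumerating basic feasible points
    (illustration only; substitute a real LP solver in practice)."""
    n = len(c)
    best_val, best_v = np.inf, None
    for rows in itertools.combinations(range(len(A)), n):
        try:
            v = np.linalg.solve(A[list(rows)], b[list(rows)])
        except np.linalg.LinAlgError:
            continue
        if np.all(A @ v <= b + tol) and c @ v < best_val:
            best_val, best_v = float(c @ v), v
    return best_val, best_v

def output_space_bb(N, D, A, b, eps=0.05, max_iter=5000):
    """Output-space branch-and-bound sketch for min sum_i n_i(x)/d_i(x)
    over {x : A x <= b}.  N, D hold (coef, const) pairs for the p
    numerators and denominators (denominators positive on X)."""
    p = len(N)
    f = lambda x: sum((c @ x + c0) / (e @ x + e0)
                      for (c, c0), (e, e0) in zip(N, D))
    # Step 0: 2p small LPs give the initial output-space rectangle H0.
    H0 = []
    for e, e0 in D:
        lo = lp_min(e, A, b)[0] + e0
        hi = -lp_min(-e, A, b)[0] + e0
        H0.append((lo, hi))

    def bound(H):
        # relaxation over H: min sum_i n_i(x)/u_i  s.t.  x in X and
        # l_i <= d_i(x) <= u_i for every ratio i
        c = sum(ci / u for (ci, _), (_, u) in zip(N, H))
        const = sum(c0 / u for (_, c0), (_, u) in zip(N, H))
        rows, rhs = [A], [b]
        for (e, e0), (l, u) in zip(D, H):
            rows += [e[None, :], -e[None, :]]
            rhs += [np.array([u - e0]), np.array([e0 - l])]
        val, x = lp_min(c, np.vstack(rows), np.concatenate(rhs))
        return (val + const, x) if x is not None else (np.inf, None)

    lb0, x0 = bound(H0)
    ub, x_best = f(x0), x0                       # x0 is feasible for (EP)
    nodes = [(lb0, H0)]
    for _ in range(max_iter):
        if not nodes:
            break
        nodes.sort(key=lambda t: t[0])           # best-first selection
        lb, H = nodes.pop(0)
        if ub - lb <= eps:                       # Step 1: termination
            break
        j = max(range(p), key=lambda i: H[i][1] - H[i][0])
        mid = 0.5 * (H[j][0] + H[j][1])          # Step 2: bisect longest edge
        for piece in ((H[j][0], mid), (mid, H[j][1])):
            child = list(H[:j]) + [piece] + list(H[j + 1:])
            clb, cx = bound(child)
            if cx is None:
                continue                         # infeasible node: discard
            fx = f(cx)                           # Step 4: update upper bound
            if fx < ub:
                ub, x_best = fx, cx
            if clb < ub - eps:                   # Step 3: prune by bound
                nodes.append((clb, child))
    return ub, x_best

# demo: f(x) = (x1+1)/(x2+1) + (x2+1)/(x1+1) on the box [0,1]^2;
# the global minimum value is 2, attained whenever x1 = x2
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
N = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
D = [(np.array([0.0, 1.0]), 1.0), (np.array([1.0, 0.0]), 1.0)]
val, x = output_space_bb(N, D, A, b)
# val -> 2.0 (within the tolerance eps)
```

Note that branching only ever touches the p-dimensional rectangle H; the dimension of x enters each iteration solely through the LP subproblems, which is the point of the output-space scheme.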

3.5. Convergence of the Algorithm

The global convergence of the algorithm is established as follows.

Theorem 3. The above algorithm either terminates after finitely many iterations with a global optimal value of problem (LFP) or produces an infinite sequence of feasible solutions, any accumulation point of which is a global optimal solution of problem (LFP).

Proof. If the algorithm terminates at iteration , then according to Step 1 of the algorithm, we have
So, if is the global optimal solution of problem (EP), there is
By combining (20) and (21), we obtain
Thus, when the algorithm terminates at iteration , the corresponding solution is a global optimal solution of problem (LFP).
Otherwise, the algorithm produces an infinite sequence by solving a series of linear programs . Each point of this sequence is a feasible point of problem (EP). From the algorithm, there is an infinite sequence of rectangles corresponding to this series of linear programs. By the branching process of Step 2, we have
Therefore, we have
Also, since , we have
By (24) and (25), we have
Thus, is also a feasible solution to problem (EP). Moreover, by the properties of the branch-and-bound algorithm, the sequence is an increasing sequence bounded above by , and then we have
Through the update process of the lower bound in Step 4 of the algorithm, we have
Through Theorem 2 and the continuity of the function , we have
By combining the previous formulae (27), (28), and (29), we have
Ultimately, is a global optimal solution to problem (EP), and then, by the equivalence result of Theorem 1, is immediately a global optimal solution to problem (LFP). The proof is completed.

4. Numerical Experiments

In this section, several test problems are given to illustrate the performance of the algorithm. All tests were performed in MATLAB (2012a) on a computer with an Intel(R) Core(TM) i5-2320 3.00 GHz processor, 4.00 GB of memory, and the Microsoft Windows 7 operating system.

Problem 1. (see [1]). For test Problem 1, we apply the proposed algorithm to the problem in detail, which illustrates the feasibility of the algorithm. The calculation results of the three test problems from other literature studies are also computed and reported in Table 1 together with the results of Problem 1. First, for convenience, we denote the feasible domain of Problem 1 by
We also let
It is easy to see that the numerators of Problem 1 are negative (i.e., ), and we need to convert these two numerators into nonnegative functions. Next, we solve the following four linear programming problems:
Among them, , , , and are obtained by solving the above four linear programming problems. Apparently, , and we let
Then, for any , we have
According to the above data, we can get
As a result, Problem 1 can be reconstructed as follows:
Therefore, the optimal solution of Problem 1 is the same as that of the following problem (P):
Now, we convert problem (P) into the following initial equivalence problem (EP1):
where
By solving the linear relaxation programming problem constructed over the rectangle , we obtain the initial lower bound 2.5843 and the initial upper bound 2.6918 of problem (EP1), together with the corresponding optimal solution ; we then select the rectangle corresponding to the initial lower bound and generate the subrectangles
by the branching operation. Node 2 of Figure 1 represents rectangle , and node 3 represents rectangle . By solving linear programming problem , the corresponding optimal solution and optimal value are and 2.5916, respectively, and the objective function value of problem (EP1) is , so the current optimal solution and optimal value of problem (EP1) are not updated.
Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and 2.5844, respectively; the objective function value of problem (EP1) is 2.6919, so the current optimal solution and optimal value of the problem are not updated. As a result, the lower bound of the current problem (EP1) is updated to 2.5844. Next, we select the rectangle corresponding to the current lower bound for division and obtain two subrectangles:
Node 4 of Figure 1 represents rectangle , and node 5 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are and , respectively, so this node is deleted. Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and 2.6655, respectively; the objective function value of problem (EP1) is 2.6918, so the current optimal solution and optimal value of the problem are not updated. As a consequence, the lower bound of the current problem (EP1) is updated to 2.5916. We continue to select the rectangle corresponding to the current lower bound and divide it to obtain two subrectangles:
Node 6 of Figure 1 represents rectangle , and node 7 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are and , respectively, so this node is deleted. Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and 2.6697, respectively; the objective function value of problem (EP1) is , so the current optimal solution and optimal value of the problem are not updated. As a result, the lower bound of the current problem (EP1) is updated to 2.6655.
We again select and bisect the rectangle corresponding to the current lower bound, resulting in two subrectangles:
Node 8 of Figure 1 represents rectangle , and node 9 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are 2.6719 and , respectively, and the objective function value of problem (EP1) is 2.6918. Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and value, and , respectively, so this node is deleted; the current optimal solution and optimal value of the problem are not updated. As a result, the lower bound of the current problem (EP1) is updated to 2.6697. We again select and bisect the rectangle corresponding to the current lower bound, resulting in two subrectangles:
Node 10 of Figure 1 represents rectangle , and node 11 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are and , respectively, so this node is deleted. Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and 2.6816, respectively; the objective function value of problem (EP1) is , so the current optimal solution and optimal value of the problem are not updated. As a result, the lower bound of the current problem (EP1) is updated to 2.6719. We again select and bisect the rectangle corresponding to the current lower bound, resulting in two subrectangles:
Node 12 of Figure 1 represents rectangle , and node 13 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are and , respectively, so this node is removed.
Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and 2.6850, respectively; the objective function value of problem (EP1) is 2.6918, so the current optimal solution and optimal value of problem (EP1) are not updated. As a result, the lower bound of the current problem (EP1) is updated to 2.6816. We again select and bisect the rectangle corresponding to the current lower bound, resulting in two subrectangles:
Node 14 of Figure 1 represents rectangle , and node 15 represents rectangle . By solving linear programming problem , the corresponding optimal value and optimal solution are and , respectively, so this node is removed. Then, we continue to solve linear programming problem and obtain the corresponding optimal solution and optimal value, and , respectively, so this node is deleted; the current optimal solution and optimal value of problem (EP1) are not updated. So far, one node, node 13, is left on the branch-and-bound tree, and the current best optimal value and best solution for problem (EP1) are 2.6918 and [0.1000; 2.3750; 2.7080; 6.6972; 3.5416; 3.4750], respectively. As a result, the lower bound of the current problem (EP1) is updated to 2.6850. At this point, in the current iteration, the difference between the upper and lower bounds of problem (EP1) satisfies , so the algorithm stops. Then, the optimal solution of Problem 1 (or (P)) is [0.1000; 2.3750], and the optimal value is . The specific steps are shown in Figure 1.

Problem 2. (see [8, 21])

Problem 3. (see [1, 8, 21])

Problem 4. (see [1, 8, 21])

Problem 5. (see [9, 17]), where

Problem 6. (see [1]), where the real numbers , and are randomly generated in the interval [0, 10], , and , following the data generation method in [1].
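A generator in the spirit of this setup might look as follows. The exact constants used in [1] are not reproduced in this version of the text, so the +1.0 denominator constant below is our own assumption, added only to keep denominators positive over the nonnegative orthant.

```python
import random

def random_lfp_instance(p, n, seed=1):
    """Random (LFP) data in the spirit of [1]: every numerator and
    denominator coefficient is drawn uniformly from [0, 10].  The
    denominator constant 1.0 is our own positivity safeguard and is
    NOT taken from [1]."""
    rng = random.Random(seed)
    numerators = [[rng.uniform(0.0, 10.0) for _ in range(n)] for _ in range(p)]
    denominators = [[rng.uniform(0.0, 10.0) for _ in range(n)] for _ in range(p)]
    den_consts = [1.0] * p  # keeps each denominator positive when x >= 0
    return numerators, denominators, den_consts

# demo: an instance with p = 3 ratios in n = 5 variables
nums, dens, consts = random_lfp_instance(p=3, n=5)
```

Fixing the seed makes each of the 15 repeated runs reproducible while still varying the instance across seeds.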
Besides, as can be seen from Table 1, the algorithm in this paper can effectively solve the five test problems known from the literature. The description of the solving process of Problem 1 shows that the proposed algorithm finds the optimal solution in the initial iteration, but the branch-and-bound algorithm needs 7 further iterations before the upper and lower bounds meet the precision requirement. In this respect, the results are not as good as those of the method in [1]. However, Problem 1 is a small-scale example; in the relatively large-scale random experiments on Problem 6 below, the numerical results show that the proposed algorithm outperforms that of [1]. Moreover, the algorithm outperforms the algorithm in [8] when solving Problems 2–4. For Problem 5, we find that the solution obtained by the algorithm in [9] is not feasible, whereas the solution obtained by the algorithm in [17] is feasible; the results obtained by the proposed algorithm are the same as those obtained in [17], and the number of iterations used is smaller than for the previous two.
Next, we also perform random tests on Problem 6 to further explore the performance of the algorithm. We set the convergence tolerance of the algorithm to 0.01. For each set of the fixed parameters , we run the algorithm 15 times, compare with the algorithm in [1], and give the numerical results in Table 2. In addition, we have carried out a series of large-scale numerical experiments, whose results are reported in Table 3. In Tables 2 and 3, Avg.Time and Avg.Iter denote the average CPU running time and average number of iterations over the 15 runs, and Std.Time and Std.Iter denote the corresponding standard deviations.
Table 2 shows that the proposed algorithm outperforms the one in [1] in terms of average CPU running time and average number of iterations. In addition, the standard deviations of the number of iterations and the CPU running time show that our algorithm is more stable than the algorithm in [1]. Moreover, the data in Tables 2 and 3 show that, with the parameter fixed, the CPU running time required by the algorithm gradually increases with the scale of Problem 6. With the parameter fixed, however, the CPU running time and number of iterations of the algorithm increase with the number of linear fractions in the objective function of problem (LFP). Interestingly, the numerical results in Table 3 show that our algorithm is more advantageous for solving large-scale (LFP) problems when , which agrees with the conclusions of [8].

5. Concluding Remarks

In this paper, we propose an algorithm that addresses problem (LFP) effectively. Based on a branching operation in the -dimensional output space, a distinctive construction of the linear lower-bound relaxation subproblem is given. A branch-and-bound algorithm, combined with a pruning operation, is therefore proposed to find the global optimal solution of problem (LFP). Numerical experiments show that the algorithm is effective and feasible, that its computational performance is better than that of [1], and that it is particularly suitable for solving large-scale (LFP) problems in the case of . Moreover, linear fractional programming problems in uncertain environments (such as [31–34]) have also gradually been studied by relevant scholars. In future work, we will try to extend our study to linear fractional programming problems in uncertain environments.

Data Availability

All data and models generated or used during the study are described in the numerical experiments section (Section 4) of the submitted manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (Grant no. NXYLXK2017B09), and the Major Proprietary Funded Project of North Minzu University (Grant no. ZDZX201901).