Abstract
This paper presents an efficient branch-and-bound algorithm for globally solving a class of fractional programming problems, which arise widely in communication engineering, financial engineering, portfolio optimization, and other fields. Since this class of fractional programming problems is nonconvex and generally possesses multiple locally optimal solutions that are not globally optimal, it poses serious theoretical and computational difficulties. In this paper, we first propose a novel linearizing method by which the initial fractional programming problem can be converted into a linear relaxation programming problem. Secondly, based on the linear relaxation programming problem, a novel branch-and-bound algorithm is designed for this class of fractional programming problems, the global convergence of the algorithm is proved, and the computational complexity of the algorithm is analysed. Finally, numerical results are reported to demonstrate the feasibility and effectiveness of the algorithm.
1. Introduction
In this paper, we consider the following class of fractional programming problems:
where and are arbitrary natural numbers, is an -dimensional variable, and for any , and are affine functions such that and . It should be pointed out that, in portfolio optimization, the variable refers to the amount of investment; in computer vision, the variable refers to the mapping space; and in communication engineering, the variable refers to the input signal.
During the past decades, as special cases of the problem (FP), the linear sum-of-ratios problem and the linear multiplicative programming problem have attracted considerable attention from practitioners and researchers. This is because both problems arise in important applications such as chance optimization, portfolio optimization, engineering optimization, and data envelopment analysis [1]. In addition, both problems generally possess multiple local optima that are not globally optimal. The problem (FP) investigated in this paper can be viewed as an extension of the linear sum-of-ratios problem and the linear multiplicative programming problem, so it has a broader range of applications than either problem, and it poses more complex theoretical and computational difficulties.
Over the past two decades, a variety of algorithms have been developed for solving special forms of the problem (FP). For example, for the linear sum-of-ratios problem, several algorithms have been proposed, such as simplex and parametric simplex methods [2, 3], an image space approach [4], branch-and-bound methods [5–12], trapezoidal algorithms [13, 14], and a monotonic optimization algorithm [15]; for the linear multiplicative programming problem, algorithms can also be found in the literature, such as branch-and-bound algorithms [16–18], a polynomial time approximation algorithm [19], outcome space algorithms [20, 21], a level set algorithm [22], a heuristic method [23], and a monotonic optimization algorithm [15]. Furthermore, Jiao et al. [24, 25] and Chen and Jiao [26] presented three different algorithms for solving the linear multiplicative programming problem. Recently, for several special forms of the problem (FP), Huang et al. [27], Jiao et al. [12], and Wang and Zhang [28] presented three different global optimization algorithms for the sum of linear ratios’ problem; Jiao et al. [29] and Yin et al. [30] proposed two different outer space branch-and-bound algorithms for the generalized linear multiplicative programming problem; Jiao and Chen [31] and Jiao and Liu [32, 33] gave three branch-reduction-bound algorithms for solving the quadratically constrained quadratic programming problem and the quadratically constrained sum of quadratic ratios’ problem; Jiao et al. [34, 35], Ghazi and Roubi [36], and Bennani et al. [37] presented four different algorithms for solving the generalized polynomial optimization problem and the generalized linear fractional programming problem. In addition, several differential evolution algorithms [38–40], a novel gate resource allocation method using an improved PSO-based QEA [41], and an enhanced MSIQDE algorithm with novel multiple strategies [42] have also been proposed for solving global optimization problems, including the problem (FP).
To date, although researchers have proposed algorithms for solving the linear sum-of-ratios problem, the linear multiplicative programming problem, or special forms of the problem (FP), to our knowledge, little work has been done on globally solving the general form of the problem (FP) considered in this paper.
The purpose of this paper is to develop an effective algorithm for globally solving all variants of the problem (FP). First of all, based on an equivalent transformation and the properties of the exponential and logarithmic functions, a novel linearizing method is proposed. By utilizing this linearizing method, we can convert the initial problem (FP) or any of its subproblems into a linear relaxation problem (LRP), whose optimal solution can approach the optimal solution of the problem (FP) arbitrarily closely through successive refinement of the partition. Secondly, based on the branch-and-bound framework, a novel branch-and-bound algorithm is constructed for globally solving all variants of the problem (FP), and the global convergence of the proposed algorithm is proved. In addition, the computational complexity of the algorithm is analysed and the maximum number of iterations of the algorithm is estimated, both for the first time. Finally, numerical experimental results demonstrate the feasibility and effectiveness of the proposed algorithm.
The remaining sections of this paper are organized as follows. A new linearizing method is constructed for deriving the problem (LRP) of the problem (FP) in Section 2. Based on the branch-and-bound scheme and the constructed problem (LRP), a global optimization algorithm is established in Section 3, and its global convergence is proved. In Section 4, the computational complexity of the algorithm is analysed. In Section 5, numerical experiments are given to verify the feasibility and effectiveness of the proposed algorithm. Finally, some conclusions are given in Section 6.
2. Novel Linearizing Technique
In this section, we present a new linearizing method for constructing the problem (LRP). For each , we first solve the following two linear programming problems:
Their optimal values yield an initial rectangle , which contains the feasible region of the problem (FP).
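The paper obtains the initial rectangle by solving two bounding linear programs per variable (the implementation uses the simplex method in C++). The following sketch illustrates the idea for a hypothetical 2-D feasible region, using brute-force vertex enumeration in place of a simplex solver, since a linear program over a bounded polytope attains its optimum at a vertex; the constraint data are made up for illustration.

```python
from itertools import combinations

def feasible_vertices(A, b):
    """Enumerate vertices of {x in R^2 : A x <= b} by intersecting
    constraint pairs and keeping the feasible intersection points."""
    pts = []
    m = len(A)
    for i, j in combinations(range(m), 2):
        (a1, a2), (c1, c2) = A[i], A[j]
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:          # parallel constraints, no vertex
            continue
        x = (b[i] * c2 - a2 * b[j]) / det
        y = (a1 * b[j] - b[i] * c1) / det
        if all(A[k][0] * x + A[k][1] * y <= b[k] + 1e-9 for k in range(m)):
            pts.append((x, y))
    return pts

def initial_rectangle(A, b):
    """Bounds [min x_i, max x_i] over the polytope, i.e. the optimal
    values of the 2n bounding linear programs (here n = 2)."""
    V = feasible_vertices(A, b)
    return [(min(p[i] for p in V), max(p[i] for p in V)) for i in range(2)]

# Hypothetical feasible region: x1 + x2 <= 2, x1 <= 1.5, x1 >= 0, x2 >= 0
A = [[1, 1], [1, 0], [-1, 0], [0, -1]]
b = [2, 1.5, 0, 0]
print(initial_rectangle(A, b))  # [(0.0, 1.5), (0.0, 2.0)]
```

For the general n-dimensional problem (FP), the same 2n bounds would be computed with a proper LP solver rather than vertex enumeration.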
Next, since and for all , it easily follows that
For any , for each , we let
By the properties of the logarithmic functions and over , it follows that
Then, from the above inequalities, it follows that
For each , we let
Thus, by equations (3)–(7), it follows that
For any , we let
By the properties of the exponential function over , we have
Therefore, we have
Let
Combining equations (8)–(12), it follows that
Consequently, based on the above discussion, we can establish the corresponding linear relaxation problem (LRP) of the problem (FP) over as follows:
where , and , are given by (9).
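The paper's specific under- and overestimators are given by equations (3)–(12), whose notation is omitted in this extraction. The sketch below illustrates only the standard convexity facts such estimators are built from: on an interval, the concave function ln y lies above its chord and below any tangent, while the convex function exp t lies above any tangent and below its chord. Replacing each nonlinear term by affine bounds of this kind is what turns the problem (FP) into the linear program (LRP). The interval endpoints below are arbitrary illustrative values.

```python
import math

def log_bounds(l, u):
    """For concave ln(y) on [l, u] (0 < l < u): the chord through the
    endpoints underestimates ln, and the tangent at a point (here the
    midpoint) overestimates it.  Returns (under, over) as functions."""
    k = (math.log(u) - math.log(l)) / (u - l)
    under = lambda y: math.log(l) + k * (y - l)      # chord (below)
    m = 0.5 * (l + u)
    over = lambda y: math.log(m) + (y - m) / m       # tangent at m (above)
    return under, over

def exp_bounds(a, b):
    """For convex exp(t) on [a, b]: the tangent at the midpoint
    underestimates exp, and the chord through the endpoints
    overestimates it."""
    m = 0.5 * (a + b)
    under = lambda t: math.exp(m) * (1.0 + (t - m))  # tangent at m (below)
    k = (math.exp(b) - math.exp(a)) / (b - a)
    over = lambda t: math.exp(a) + k * (t - a)       # chord (above)
    return under, over

# Sanity check on a grid: the affine bounds sandwich the nonlinear terms.
lo, hi = log_bounds(0.5, 4.0)
eu, eo = exp_bounds(-1.0, 2.0)
for i in range(101):
    y = 0.5 + i * (4.0 - 0.5) / 100
    assert lo(y) - 1e-12 <= math.log(y) <= hi(y) + 1e-12
    t = -1.0 + i * 3.0 / 100
    assert eu(t) - 1e-12 <= math.exp(t) <= eo(t) + 1e-12
print("bounds verified")
```

As the interval shrinks, chord and tangent collapse onto the function itself, which is the geometric content of Theorem 1 below.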
Based on the construction of the linear relaxation problem (LRP), for any , the global optimal value of the problem (LRP) provides a reliable upper bound for the global optimal value of the problem (FP) over .
Theorem 1 guarantees that approaches as .
Theorem 1. For any , we have
Proof. For any , without loss of generality, let and ; we can obtain that

Therefore, to prove that , we need only prove that and .
First of all, let ; we have that

Since is a concave function of , it attains its maximum at the point . Let ; then, by direct computation, we can obtain that

Since

we have that

Therefore, by (17) and (20), we have

Secondly, let and ; we have

Since

where

For each , let

and then it follows that

Since the function is a convex function of , it attains its maximum at the point or . Then, by direct computation, we get

Since as , we have . Therefore, it follows that

Similarly, we can prove that

By (28) and (29), it follows that, as ,

Since is a continuous and bounded function of , there exists some such that . Therefore, by (23) and (30), it follows that

By (30) and (31), it follows that

By (21) and (32), it follows that

From the above discussion, the conclusion follows immediately.
From Theorem 1, it follows that approaches as .
3. Algorithm and Its Global Convergence
In this section, based on the above linear relaxation problem, a branch-and-bound algorithm is proposed for globally solving the problem (FP). We first adopt a maximum-edge rectangle bisection rule, which ensures the global convergence of the algorithm. The branching rule is as follows. Suppose that is the rectangle selected for partitioning; let ; subdivide the interval into the two subintervals and , which subdivides into two subrectangles. Next, by solving a sequence of linear relaxation problems (LRP) of the problem (FP), the upper bound of the optimal value of the problem (FP) is updated. Moreover, by detecting feasible points and computing their objective function values, the lower bound of the optimal value of the problem (FP) is updated.
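The maximum-edge bisection rule can be transcribed directly; the sketch below is an illustration of the rule, not the paper's code. A rectangle is stored as a list of (lower, upper) pairs, one per coordinate.

```python
def bisect_max_edge(rect):
    """Split a rectangle [l1,u1] x ... x [ln,un] (a list of (l, u)
    pairs) into two subrectangles by bisecting its longest edge."""
    # Index q of the longest edge.
    q = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    l, u = rect[q]
    mid = 0.5 * (l + u)
    left = rect[:q] + [(l, mid)] + rect[q + 1:]
    right = rect[:q] + [(mid, u)] + rect[q + 1:]
    return left, right

left, right = bisect_max_edge([(0.0, 4.0), (0.0, 1.0)])
print(left, right)  # [(0.0, 2.0), (0.0, 1.0)] [(2.0, 4.0), (0.0, 1.0)]
```

Because the longest edge is always halved, the size of every nested sequence of subrectangles tends to zero, which is what the convergence argument in Section 3.2 relies on.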
3.1. The Proposed Branch-and-Bound Algorithm
Let and be the optimal value and the optimal solution of the problem (LRP) over , respectively. The corresponding branch-and-bound algorithm is given as follows.
Steps of the proposed branch-and-bound algorithm:
Step 1. Given the convergence error and the initial lower bound , solve the problem (LRP) over to get and , respectively. Let and . If , then the algorithm terminates, and is a global -optimal solution of the problem (FP). Otherwise, let , , and .
Step 2. Let . Use the selected branching rule to subdivide the rectangle into two new subrectangles and , and let .
Step 3. For each subrectangle , where , solve the problem (LRP) to get and , and let . If the midpoint of is feasible to the problem (FP), let , and let be the best known feasible solution, satisfying .
Step 4. If , then let , where , and let .
Step 5. Let and satisfy . If , then the algorithm terminates, and is a global -optimal solution of the problem (FP). Otherwise, let , and return to Step 2.
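Steps 1–5 can be illustrated end to end on a small hypothetical instance. The sketch below maximizes the bilinear function (x1 + x2)(x1 − x2 + 3) over [0, 1]² subject to x1 + x2 ≤ 1, substituting a simple interval-arithmetic bound for the LP relaxation (LRP); the instance and the bounding rule are illustrative assumptions, not taken from the paper, but the upper bounding, midpoint feasibility test, pruning, and maximum-edge bisection mirror the steps above.

```python
import heapq

def solve_toy(eps=1e-3):
    """Branch-and-bound maximization of f(x) = (x1 + x2)*(x1 - x2 + 3)
    over [0,1]^2 with x1 + x2 <= 1, mirroring Steps 1-5."""
    f = lambda x1, x2: (x1 + x2) * (x1 - x2 + 3.0)

    def upper_bound(rect):
        # Interval bound in place of (LRP); None means the box is infeasible.
        (l1, u1), (l2, u2) = rect
        if l1 + l2 > 1.0:
            return None
        s_hi = min(u1 + u2, 1.0)      # constraint tightens x1 + x2
        t_hi = u1 - l2                # max of x1 - x2 over the box
        return s_hi * (t_hi + 3.0)    # both factors nonnegative here

    def try_midpoint(rect, best):
        # Step 3: midpoint feasibility check updates the lower bound.
        m1 = 0.5 * (rect[0][0] + rect[0][1])
        m2 = 0.5 * (rect[1][0] + rect[1][1])
        if m1 + m2 <= 1.0 and f(m1, m2) > best[0]:
            best[0], best[1] = f(m1, m2), (m1, m2)

    root = [(0.0, 1.0), (0.0, 1.0)]
    best = [float("-inf"), None]      # lower bound and incumbent
    try_midpoint(root, best)
    heap = [(-upper_bound(root), root)]   # max-heap via negated bounds
    while heap:
        neg_ub, rect = heapq.heappop(heap)
        if -neg_ub <= best[0] + eps:  # Step 5: terminate
            break
        # Step 2: bisect the longest edge.
        q = max((0, 1), key=lambda i: rect[i][1] - rect[i][0])
        l, u = rect[q]
        mid = 0.5 * (l + u)
        for half in ((l, mid), (mid, u)):
            child = list(rect)
            child[q] = half
            try_midpoint(child, best)
            ub = upper_bound(child)
            if ub is not None and ub > best[0] + eps:  # Step 4: prune
                heapq.heappush(heap, (-ub, child))
    return best[0], best[1]

val, pt = solve_toy()
print(round(val, 3))  # close to the true maximum 4, attained at (1, 0)
```

The sketch converges because the interval bound, like (LRP), becomes exact as the rectangles shrink, which is exactly the role Theorem 1 plays for the paper's algorithm.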
3.2. Convergence Analysis
In this section, the global convergence of the proposed algorithm is given as follows.
Theorem 2. If the proposed algorithm terminates finitely, then it terminates after iterations, and is the global optimal solution of the problem (FP). Otherwise, the proposed algorithm produces an infinite sequence of iterations such that any accumulation point of the sequence is the global optimal solution of the problem (FP).
Proof. If the proposed algorithm terminates after iterations, then by the termination condition, we have . By the updating method of the upper bound, we have . By the structure of the branch-and-bound algorithm, we have . Combining these inequalities gives . Thus, is a global -optimal solution of the problem (FP).
If the algorithm does not terminate after finitely many iterations, then by its branch-and-bound structure, it generates a nondecreasing lower-bound sequence , where is the set of known feasible points. Thus, we get that .
Because and is a bounded closed set, there must exist a convergent subsequence satisfying . Obviously, we have . By the branch-and-bound framework of the algorithm, there must exist a subsequence , where with , , and . By Theorem 1 and the continuity of the function , we have . From the steps of the algorithm, is always a feasible solution to the problem (FP); thus, is the global optimal solution of the problem (FP), and the proof is complete.
4. Computational Complexity of the Algorithm
In this section, we first bound the difference between and in terms of in order to analyse the computational complexity of the algorithm.
For any , for each , without loss of generality, assume thatand define
Obviously, we have thatand for any and , we have that
Theorem 3. For any , the difference between and satisfies the following inequality in terms of :
Proof. By Theorem 1, we have that
By the proof of Theorem 1, we have

From the proof of Theorem 1, for any , we have that

By the proof of Theorem 1, for any , we also have
Therefore, by the above conclusions, we can get that
Without loss of generality, we define the size of a rectangle by
Besides, for convenience, we let
Theorem 4. For any given convergence error , if there exists a rectangle at the th iteration satisfying , then we have that

where is the optimal value of the problem over and is the best known lower bound of the global optimal value of the problem (FP).
Proof. Assume that is the optimal solution of the problem (LRP) over ; obviously, is also feasible to the problem (FP) over . Therefore, we have that

By the conclusion of Theorem 3, we can get that

Furthermore, by , we can get that

and the proof is complete.
From the steps of the algorithm and the conclusions of Theorem 4, when , the investigated rectangle can be deleted from the set of active nodes. Therefore, when the sizes of all investigated rectangles satisfy , the algorithm terminates. From the conclusions of Theorem 4, we can estimate the maximum number of iterations of the algorithm as follows.
Theorem 5. For any given convergence error , the algorithm can obtain a global -optimal solution of the problem (FP) in at most

iterations.
Proof. Without loss of generality, assume that the subrectangle is selected for subdivision in Step 2 at every iteration. After iterations, we can get that
From the conclusions of Theorem 4, when

i.e.,

we can obtain that . Therefore, after at most

iterations, we can obtain a global -optimal solution of the problem (FP), and the proof is complete.
5. Numerical Experiments
To verify the feasibility of the proposed algorithm, it was coded in C++, with the simplex method used to solve the problem (LRP), and the numerical examples were solved on a microcomputer with an Intel(R) Core(TM) i5-4590S CPU @ 3.0 GHz. The computational comparisons are listed in Tables 1 and 2; although these examples have relatively few variables, they are still challenging. In Table 1, “Iter” denotes the number of iterations of the algorithm.
First, some small-scale deterministic examples given in the Appendix were tested with the proposed algorithm; a detailed numerical comparison with the state-of-the-art solver BARON is given in Table 1. Next, to verify the robustness and reliability of the proposed algorithm, with the convergence error , we also solved the following randomly generated Problem 1; numerical comparisons with BARON are listed in Table 2.
Problem 1. where is a natural number, , , and , , , , , , , and , , are all randomly generated between 0 and 1; all elements of the matrix are randomly generated between 0 and 1; all elements of the vector are randomly generated between 0 and 16; the number of constraints is .
When , the software BARON failed to solve any of the ten random test instances in . Thus, we report only the numerical results of our algorithm in Table 2.
In Table 2, the following notation is used: Avg.NT denotes the average number of iterations of the algorithm; Std.NT denotes the standard deviation of the number of iterations; Avg.Time denotes the average running time of the algorithm in seconds; Std.Time denotes the standard deviation of the running time.
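The Table 2 statistics are computed from the per-instance results in the obvious way; a minimal sketch with made-up iteration counts (the paper's actual data are in Table 2), using the sample standard deviation as one plausible convention:

```python
import statistics

# Hypothetical iteration counts for ten random instances (illustrative
# only; not the paper's data).
iters = [41, 37, 52, 44, 39, 48, 50, 36, 43, 45]

avg_nt = statistics.mean(iters)    # Avg.NT
std_nt = statistics.stdev(iters)   # Std.NT (sample standard deviation)
print(f"Avg.NT = {avg_nt:.1f}, Std.NT = {std_nt:.1f}")
# Avg.NT = 43.5, Std.NT = 5.4
```

Avg.Time and Std.Time are obtained the same way from the per-instance running times.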
From Table 1, it can be observed that our algorithm requires less running time than the software BARON. Thus, our algorithm is more efficient for solving small-scale instances of the problem (FP).
From Table 2, we can observe that, when , the software BARON failed to solve any of the ten random instances in , whereas our algorithm obtained the global optimal solution of the problem (FP) for all ten instances in a short time. This demonstrates that our algorithm is more robust and stable than the software BARON.
6. Conclusions
In this paper, based on the branch-and-bound framework, we present an efficient global optimization algorithm for solving the problem (FP). In this algorithm, a novel linearizing technique is proposed for deriving the linear relaxation problem of the problem (FP), which provides a reliable upper bound within the branch-and-bound algorithm. By successively partitioning the initial rectangle and solving a series of linear relaxation problems, the proposed algorithm converges to the global optimal solution of the problem (FP). Furthermore, based on the steps of the branch-and-bound algorithm, the computational complexity of the algorithm is analysed. Finally, numerical experiments verify the feasibility and effectiveness of the proposed algorithm. Future work will extend the algorithm to the min-max affine fractional programming problem and to generalized linear fractional programming problems.
Appendix
The small-scale deterministic examples used in Section 5 are given below.
Example 1. (see Ref. [9]).
Example 2. (see Ref. [43]).
Example 3. (see Ref. [43]).
Example 4. (see Ref. [24]).
Example 5. (see Ref. [24]).
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11871196 and 61304061) and Key Scientific and Technological Research Projects of Henan Province (202102210147 and 192102210114).