Abstract

In this article, we propose a root-finding algorithm for solving a quadratic convex separable knapsack problem that is more straightforward than existing methods and competitive in practice. We also present an extension of the proposal that improves its computational time, and we incorporate the accelerated Anderson and Aitken fixed-point algorithms to obtain better results. The algorithm performs only function evaluations. We present partial convergence results for the algorithm. Moreover, we report superior computational results on medium- and large-scale problems, as well as real-life applications, to illustrate the algorithm's efficiency.

1. Introduction

We are interested in solving the quadratic convex separable knapsack problem:

$$\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} \left( \frac{d_i}{2} x_i^2 - a_i x_i \right) \quad \text{s.t.} \quad \sum_{i=1}^{n} b_i x_i = r, \qquad l_i \le x_i \le u_i, \quad i = 1, \dots, n, \tag{1}$$

where $d_i > 0$ and $b_i > 0$ for all $i = 1, \dots, n$, with $l_i \le u_i$ such that the feasible set of (1) is nonempty.

Problem (1) has a variety of applications, among them financial models, production and inventory management, stratified sampling, optimal design of queuing network models in manufacturing, computer systems, subgradient optimization, and health care (see [16] and the references therein).

Motivated by these extensive applications, significant attention has been devoted to developing optimization algorithms, and many iterative methods have been proposed to solve this seemingly simple problem. See, for instance, the excellent survey [4], which is complemented by [5]. In addition, the authors of [5] contribute an improvement to the process of fixing variables in the relaxation algorithm and a better way to evaluate subsolutions. Finally, they provide a rigorous numerical evaluation of several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies as well as a Newton-type method.

This article aims to propose a fixed-point algorithm (FPA) capable of solving the box-constrained quadratic convex separable knapsack (QSKP) problem as efficiently as other state-of-the-art methods. We extend the FPA to the quadratic convex separable knapsack problem under upper bound constraints (QSKPz). After reformulating the QSKP problem, the new QSKPz problem is solved by an extension of the FPA, called FPA2, which requires fewer calculations than applying the FPA directly to the QSKP problem. To obtain better performance from the proposed methods, we also incorporate the Anderson algorithm [7] and a generalization of the Aitken algorithm [8] for fixed-point methods. Such acceleration approaches are also explored in [9–11]. Below, we briefly describe each section of this article.

First, in Section 2, we define the proposed method, which is based on [3]. In [3], the fixed-point method solves stratified sampling problems under box constraints; in this article, we modify the method to solve quadratic convex separable knapsack problems. In Section 3, we state the algorithm and establish its convergence properties. In Section 4, we extend the proposed fixed-point algorithm to the quadratic convex separable knapsack problem under upper bound constraints. We describe the strategy to accelerate fixed-point convergence in Section 5, and in Section 6 we present the numerical experiments. Final remarks are given in Section 7.

2. Fixed-Point Method

This section provides the material necessary to understand the article. We start by establishing the existence and uniqueness of the solution to problem (1). Here, we rephrase the solution as a function of the Lagrange multiplier of the equality constraint. This reformulation leads to a root-finding problem for that multiplier, which is needed to formulate the fixed-point iteration as in [3].

Firstly, based on [1], we present an optimality condition theorem.

Theorem 1. A vector $x \in \mathbb{R}^n$ is a minimum of problem (1) if and only if there exist Lagrange multipliers $\lambda \in \mathbb{R}$, $\mu \in \mathbb{R}^n_+$, and $\nu \in \mathbb{R}^n_+$ such that

$$d_i x_i - a_i + \lambda b_i - \mu_i + \nu_i = 0, \quad i = 1, \dots, n, \tag{2}$$

and furthermore,

$$\mu_i (x_i - l_i) = 0, \quad i = 1, \dots, n, \tag{3}$$

$$\nu_i (u_i - x_i) = 0, \quad i = 1, \dots, n, \tag{4}$$

where $\mu = (\mu_1, \dots, \mu_n)$, $\nu = (\nu_1, \dots, \nu_n)$, and $\mathbb{R}^n_+ = \{ y \in \mathbb{R}^n : y_i \ge 0, \, i = 1, \dots, n \}$.

Lemma 2. Under the given assumptions of Theorem 1, equation (2) is equivalent to

$$x_i = \max\left\{ l_i, \min\left\{ \frac{a_i - \lambda b_i}{d_i}, u_i \right\} \right\} \tag{5}$$

for all $i = 1, \dots, n$.

Proof. Let $x$ be a solution of problem (1). If $x_i = u_i$, by (3), $\mu_i = 0$ and $\nu_i \ge 0$. By (2) and $\nu_i \ge 0$ for all $i$, we have
$$\nu_i = a_i - \lambda b_i - d_i u_i \ge 0.$$
Then,
$$\frac{a_i - \lambda b_i}{d_i} \ge u_i.$$
If $l_i < x_i < u_i$, by (3) and (4), $\mu_i = 0$ and $\nu_i = 0$. By (2) and for all $i$, we have
$$d_i x_i - a_i + \lambda b_i = 0.$$
Then,
$$x_i = \frac{a_i - \lambda b_i}{d_i}.$$
If $x_i = l_i$, by (4), $\nu_i = 0$ and $\mu_i \ge 0$. By (2) and $\mu_i \ge 0$ for all $i$, we have
$$\mu_i = d_i l_i - a_i + \lambda b_i \ge 0.$$
Then,
$$\frac{a_i - \lambda b_i}{d_i} \le l_i.$$
In each case, $x_i$ coincides with the right-hand side of (5). Thus, we conclude the proof.
Motivated by (5), we will treat $x$ as a function depending on the variable $\lambda$. To achieve this, we set
$$x(\lambda) = (x_1(\lambda), \dots, x_n(\lambda)) \tag{12}$$
with
$$x_i(\lambda) = \max\left\{ l_i, \min\left\{ \frac{a_i - \lambda b_i}{d_i}, u_i \right\} \right\}. \tag{13}$$

Theorem 3. A vector $x^* \in \mathbb{R}^n$ is the unique solution of optimization problem (1) if and only if there exists a multiplier $\lambda \in \mathbb{R}$ such that $x(\lambda)$ defined in (13) satisfies
$$\sum_{i=1}^{n} b_i x_i(\lambda) = r. \tag{14}$$

Proof. Let $x^*$ be a solution of problem (1); then, $\sum_{i=1}^{n} b_i x_i^* = r$. Considering $\lambda$ so that it satisfies (5), we define $x(\lambda)$ as in (12). Thus, if $x_i^* = u_i$, we have
$$\frac{a_i - \lambda b_i}{d_i} \ge u_i,$$
and then $\min\{(a_i - \lambda b_i)/d_i, u_i\} = u_i$, i.e., $x_i(\lambda) = u_i = x_i^*$. If $l_i < x_i^* < u_i$, we have
$$l_i < \frac{a_i - \lambda b_i}{d_i} < u_i,$$
and then $x_i(\lambda) = (a_i - \lambda b_i)/d_i$, i.e., $x_i(\lambda) = x_i^*$. If $x_i^* = l_i$, we have
$$\frac{a_i - \lambda b_i}{d_i} \le l_i,$$
and then $\max\{l_i, (a_i - \lambda b_i)/d_i\} = l_i$, i.e., $x_i(\lambda) = l_i = x_i^*$. Thus, we conclude that $x(\lambda) = x^*$. On the other hand, if for some $\lambda \in \mathbb{R}$ the vector $x(\lambda)$ satisfies
$$\sum_{i=1}^{n} b_i x_i(\lambda) = r,$$
it is easy to check that $x(\lambda)$ also satisfies (5). The proof is complete.
Now, to get a fixed-point-based algorithm formulation, we use (13) to define
$$L(\lambda) = \left\{ i : \frac{a_i - \lambda b_i}{d_i} \le l_i \right\}, \quad U(\lambda) = \left\{ i : \frac{a_i - \lambda b_i}{d_i} \ge u_i \right\}, \quad I(\lambda) = \left\{ i : l_i < \frac{a_i - \lambda b_i}{d_i} < u_i \right\}. \tag{19}$$
We have that
$$\sum_{i=1}^{n} b_i x_i(\lambda) = \sum_{i \in L(\lambda)} b_i l_i + \sum_{i \in U(\lambda)} b_i u_i + \sum_{i \in I(\lambda)} b_i \, \frac{a_i - \lambda b_i}{d_i}. \tag{20}$$
Then, $\sum_{i=1}^{n} b_i x_i(\lambda) = r$ if and only if
$$\sum_{i \in L(\lambda)} b_i l_i + \sum_{i \in U(\lambda)} b_i u_i + \sum_{i \in I(\lambda)} \frac{a_i b_i}{d_i} - \lambda \sum_{i \in I(\lambda)} \frac{b_i^2}{d_i} = r, \tag{21}$$
that is,
$$\lambda = \frac{\sum_{i \in L(\lambda)} b_i l_i + \sum_{i \in U(\lambda)} b_i u_i + \sum_{i \in I(\lambda)} a_i b_i / d_i - r}{\sum_{i \in I(\lambda)} b_i^2 / d_i}. \tag{22}$$
Then, we define the following function:
$$F(\lambda) = \frac{\sum_{i \in L(\lambda)} b_i l_i + \sum_{i \in U(\lambda)} b_i u_i + \sum_{i \in I(\lambda)} a_i b_i / d_i - r}{\sum_{i \in I(\lambda)} b_i^2 / d_i}. \tag{23}$$
It is easy to see that $\sum_{i=1}^{n} b_i x_i(\lambda) = r$ if, and only if, $F(\lambda) = \lambda$.

Remark 4. Formula (22) appears in [12] as an intermediate step in the variable-fixing algorithms. Moreover, following [12], we can assume that $I(\lambda) \neq \emptyset$ for every $\lambda$ of interest, so that the denominator in (22) does not vanish.
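For concreteness, here is a small worked instance of the map (22); the data are ours, chosen only for illustration under the notation above. Take $n = 2$, $d = (1, 1)$, $a = (2, 4)$, $b = (1, 1)$, $r = 3$, $l = (0, 0)$, and $u = (2, 2)$, so that $x_i(\lambda) = \max\{0, \min\{a_i - \lambda, 2\}\}$. Starting from $\lambda_0 = 3$, we get $L(3) = \{1\}$, $U(3) = \emptyset$, and $I(3) = \{2\}$, so $F(3) = (0 + 0 + 4 - 3)/1 = 1$. At $\lambda_1 = 1$, we get $I(1) = \{1\}$ and $U(1) = \{2\}$, so $F(1) = (0 + 2 + 2 - 3)/1 = 1 = \lambda_1$; the iteration stops with $x(1) = (1, 2)$, which satisfies $\sum_i b_i x_i = 3 = r$.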

3. Statement of Fixed-Point Algorithm and Its Convergence Results

Now, we formally describe the fixed-point-based Algorithm 1 (abbreviated as FPA). For the sake of simplicity, for $k \ge 0$, we denote $L_k = L(\lambda_k)$, $U_k = U(\lambda_k)$, and $I_k = I(\lambda_k)$.

Step 0 (Initialization)
Set $k = 0$. Let $\lambda_0$ be chosen according to [2, 6, 12].
Step 1 (Calculating dual bounds)
For $i = 1, \dots, n$,
Compute $\lambda_i^l = (a_i - d_i l_i)/b_i$ and $\lambda_i^u = (a_i - d_i u_i)/b_i$.
Step 2 (Calculating fixed-point sums)
$s_k^L = \sum_{i \in L_k} b_i l_i$, where $L_k = \{ i : \lambda_k \ge \lambda_i^l \}$.
$s_k^U = \sum_{i \in U_k} b_i u_i$, where $U_k = \{ i : \lambda_k \le \lambda_i^u \}$.
$s_k^a = \sum_{i \in I_k} a_i b_i / d_i$, where $I_k = \{ i : \lambda_i^u < \lambda_k < \lambda_i^l \}$.
$s_k^b = \sum_{i \in I_k} b_i^2 / d_i$, where $I_k$ is as above.
Step 3 (Update dual variable)
Compute $\lambda_{k+1} = F(\lambda_k) = (s_k^L + s_k^U + s_k^a - r) / s_k^b$.
Step 4 (Check stopping criterion)
If $\lambda_{k+1} = \lambda_k$, then set $x^* = x(\lambda_{k+1})$ according to equation (13) and STOP. Otherwise, set $k = k + 1$ and return to Step 2.
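The loop below is a minimal C sketch of the FPA under the notation above. The function name, the relative stopping test, and the guard for an empty $I(\lambda_k)$ are our choices rather than part of the original pseudocode, and the index sets are recomputed on the fly instead of being built from the dual bounds of Step 1:

```c
/* Minimal sketch of the FPA loop for
 *   min sum_i (d_i/2) x_i^2 - a_i x_i
 *   s.t. sum_i b_i x_i = r,  l_i <= x_i <= u_i,
 * with d_i > 0 and b_i > 0. The update is lambda <- F(lambda), eq. (22). */
#include <math.h>

double fpa(const double *d, const double *a, const double *b,
           const double *l, const double *u, double r, int n,
           double lambda, double eps, int maxit)
{
    for (int k = 0; k < maxit; k++) {
        double sL = 0.0, sU = 0.0, sA = 0.0, sB = 0.0;
        for (int i = 0; i < n; i++) {
            double t = (a[i] - lambda * b[i]) / d[i]; /* free minimizer */
            if (t <= l[i])      sL += b[i] * l[i];    /* i in L(lambda) */
            else if (t >= u[i]) sU += b[i] * u[i];    /* i in U(lambda) */
            else {                                    /* i in I(lambda) */
                sA += a[i] * b[i] / d[i];
                sB += b[i] * b[i] / d[i];
            }
        }
        if (sB == 0.0) break;                  /* I(lambda) empty; see Remark 4 */
        double next = (sL + sU + sA - r) / sB; /* F(lambda), equation (23) */
        if (fabs(next - lambda) <= eps * (1.0 + fabs(lambda)))
            return next;  /* fixed point found: recover x via equation (13) */
        lambda = next;
    }
    return lambda;        /* best multiplier found within maxit */
}
```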

Kim and Wu [6] proposed an improvement characterized by eliminating the calculation of all primal variables at every iteration, as is done in [2, 12]. The natural formulation of the FPA algorithm (i.e., equation (13)) leads to the improvement proposed in [6]. Besides, the FPA algorithm does not necessarily need a variable-fixing step as in the other state-of-the-art methods, although one can be implemented. We aim to keep the FPA algorithm as simple as possible and apply an extension capable of improving its performance, such as the acceleration step presented in Section 5.

Below, the first result concerns the algorithm's stopping criterion.

Proposition 5. If the FPA generates a finite sequence, then the last point is a solution of problem (1).

Proof. Let us assume that $\lambda_K$ is the last point obtained by the proposed algorithm. So, we have
$$\lambda_K = F(\lambda_K).$$
It implies that
$$\sum_{i=1}^{n} b_i x_i(\lambda_K) = r.$$
Hence, our conclusion follows from Theorem 3.
From now on, we assume that the FPA algorithm generates an infinite sequence denoted by $\{\lambda_k\}$, and we present the following important properties.

Proposition 6. The sequence $\{\lambda_k\}$ is bounded; that is, there exists $M > 0$ such that $|\lambda_k| \le M$ for all $k$.

Proof. In fact, $F$ depends on $\lambda$ only through the index sets $L(\lambda)$, $U(\lambda)$, and $I(\lambda)$, which admit finitely many configurations. Hence, for all $k$, we have
$$|\lambda_{k+1}| = |F(\lambda_k)| \le \max\{ |F(\lambda)| : \lambda \in \mathbb{R} \} = M < \infty.$$
As in [3], in the following result, we assume that the inequalities used in the definitions of $L(\bar{\lambda})$ and $U(\bar{\lambda})$ are strict. Then, we show that small perturbations of $\bar{\lambda}$ do not change these index sets. This means that if the fixed-point iteration converges, then it terminates after finitely many steps with the exact solution, because the index sets eventually stop changing.

Lemma 7. Consider $\bar{\lambda} \in \mathbb{R}$ such that the inequalities used in the definitions of $L(\bar{\lambda})$ and $U(\bar{\lambda})$ are strict. Then, there exists $\delta > 0$ such that if $|\lambda - \bar{\lambda}| < \delta$, we have
$$L(\lambda) = L(\bar{\lambda}), \quad U(\lambda) = U(\bar{\lambda}), \quad I(\lambda) = I(\bar{\lambda}).$$

Proof. From our assumptions, we may rewrite
$$L(\bar{\lambda}) = \{ i : \bar{\lambda} > \lambda_i^l \}, \quad U(\bar{\lambda}) = \{ i : \bar{\lambda} < \lambda_i^u \}.$$
Hence, we get our aim by considering
$$\delta = \min_{i} \left\{ \left| \bar{\lambda} - \lambda_i^l \right|, \left| \bar{\lambda} - \lambda_i^u \right| \right\}.$$

4. FPA Extended to Quadratic Convex Separable Knapsack under Upper Bound Constraints

This section presents a variable substitution in problem (1) that yields box constraints of the type $0 \le z_i \le \bar{u}_i$, $i = 1, \dots, n$. Besides speeding up the solution of problem (1), such a formulation has direct applications in the continuous relaxation of the sensor placement problem [13] and in problems arising in multicommodity network flows and logistics, as presented in [14, 15].

Defining
$$z = x - l \tag{30}$$
and substituting in (1), we have
$$\sum_{i=1}^{n} \left( \frac{d_i}{2} (z_i + l_i)^2 - a_i (z_i + l_i) \right) = \sum_{i=1}^{n} \left( \frac{d_i}{2} z_i^2 - \bar{a}_i z_i \right) + \text{const}, \tag{31}$$
and then problem (1) is equivalent to the following problem:
$$\min_{z \in \mathbb{R}^n} \; \sum_{i=1}^{n} \left( \frac{d_i}{2} z_i^2 - \bar{a}_i z_i \right) \quad \text{s.t.} \quad \sum_{i=1}^{n} b_i z_i = \bar{r}, \qquad 0 \le z \le \bar{u}, \tag{32}$$
where $\bar{a}_i = a_i - d_i l_i$, $\bar{r} = r - \sum_{i=1}^{n} b_i l_i$, $\bar{u} = u - l$, $z = (z_1, \dots, z_n)$, and $0$ is a vector of zeros.

Now, through (13), we can write a function depending on the variable $\lambda$ as follows:
$$z_i(\lambda) = \max\left\{ 0, \min\left\{ \frac{\bar{a}_i - \lambda b_i}{d_i}, \bar{u}_i \right\} \right\}. \tag{33}$$

To get the new fixed-point algorithm, we define the sets $L(\lambda)$, $U(\lambda)$, and $I(\lambda)$ according to (19), with $l$, $u$, and $a$ replaced by $0$, $\bar{u}$, and $\bar{a}$, respectively.

Then, by equations (20) and (22), since the lower bound of the reformulated problem (32) is a vector of zeros, the sum over $L(\lambda)$ vanishes. Therefore, we can define the new map as
$$F_2(\lambda) = \frac{\sum_{i \in U(\lambda)} b_i \bar{u}_i + \sum_{i \in I(\lambda)} \bar{a}_i b_i / d_i - \bar{r}}{\sum_{i \in I(\lambda)} b_i^2 / d_i}. \tag{34}$$

Defining the new function
$$G(\lambda) = F_2(\lambda) - \lambda, \tag{35}$$
we obtain
$$\sum_{i=1}^{n} b_i z_i(\lambda) = \bar{r} \quad \text{if and only if} \quad G(\lambda) = 0, \ \text{i.e.,} \ F_2(\lambda) = \lambda. \tag{36}$$

According to the formulations above, we describe the new fixed-point-based Algorithm 2 (FPA2) below.

Step 0 (Initialization)
Set $k = 0$. Let $\lambda_0$ be chosen according to [2, 6, 12].
Step 1 (Calculating dual bounds)
For $i = 1, \dots, n$,
Compute $\lambda_i^u = (\bar{a}_i - d_i \bar{u}_i)/b_i$ and $\lambda_i^0 = \bar{a}_i / b_i$.
Step 2 (Calculating fixed-point sums)
$s_k^U = \sum_{i \in U_k} b_i \bar{u}_i$, where $U_k = \{ i : \lambda_k \le \lambda_i^u \}$.
$s_k^a = \sum_{i \in I_k} \bar{a}_i b_i / d_i$, where $I_k = \{ i : \lambda_i^u < \lambda_k < \lambda_i^0 \}$.
$s_k^b = \sum_{i \in I_k} b_i^2 / d_i$, where $I_k$ is as above.
Step 3 (Update dual variable)
Compute $\lambda_{k+1} = F_2(\lambda_k) = (s_k^U + s_k^a - \bar{r}) / s_k^b$.
Step 4 (Check stopping criterion)
If $\lambda_{k+1} = \lambda_k$, then set $z^* = z(\lambda_{k+1})$ according to equation (33), recover $x^* = z^* + l$ according to equation (30), and STOP. Otherwise, set $k = k + 1$ and return to Step 2.

The main advantage of the FPA2 algorithm proposed above is in Step 2, where one of the sums required by the FPA algorithm is no longer needed. This modification makes the algorithm even simpler and improves its performance, as we will see in the experiments section. The convergence analysis of the FPA2 algorithm follows as in Section 3.
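For comparison with the sketch in Section 3, one FPA2 update might look as follows in C; note the single pass with only three accumulators (the names are ours):

```c
/* One FPA2 update for the reformulated problem (32):
 * the lower bounds are zero, so the L(lambda)-sum drops out. */
double fpa2_update(const double *d, const double *abar, const double *b,
                   const double *ubar, double rbar, int n, double lambda)
{
    double sU = 0.0, sA = 0.0, sB = 0.0;
    for (int i = 0; i < n; i++) {
        double t = (abar[i] - lambda * b[i]) / d[i];
        if (t >= ubar[i]) sU += b[i] * ubar[i];   /* i in U(lambda) */
        else if (t > 0.0) {                        /* i in I(lambda) */
            sA += abar[i] * b[i] / d[i];
            sB += b[i] * b[i] / d[i];
        }                                          /* t <= 0: contributes nothing */
    }
    return (sU + sA - rbar) / sB;                  /* F_2(lambda), equation (34) */
}
```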

5. Fixed-Point Acceleration

As mentioned in [11], acceleration methods can alleviate slow convergence. Our interest here is in two particular acceleration methods. The first originated from the work of [7], which we refer to as Anderson acceleration, and the second from the work of [8], which we refer to as Aitken acceleration. In the following subsections, we describe both algorithms and define the accelerated fixed-point method incorporating the acceleration techniques.

5.1. Anderson Acceleration

Anderson's acceleration defines a vector of weights $\alpha^k = (\alpha_0^k, \dots, \alpha_{m_k}^k)$. These weights are determined using the following optimization problem:
$$\min_{\alpha} \; \left| \sum_{j=0}^{m_k} \alpha_j \, g_{k - m_k + j} \right| \quad \text{s.t.} \quad \sum_{j=0}^{m_k} \alpha_j = 1, \tag{38}$$
where $g_k$ is found as follows. Let us consider $F$ according to equation (23) and $m_k = \min\{m, k\}$. Then, $g_k = F(\lambda_k) - \lambda_k$, where $m$ is the chosen history size.

With these weights, we are able to create the expression of the next iteration as
$$\lambda_{k+1} = \sum_{j=0}^{m_k} \alpha_j^k \, F(\lambda_{k - m_k + j}). \tag{39}$$

To improve the fixed-point algorithm's convergence rate, we consider the Anderson acceleration algorithm formulated as in [11]. Below, we describe the Anderson approach incorporated into FPA2 as Algorithm 3.

Step 0 (Initialization)
Set $k = 0$. Let $\lambda_0$ be chosen according to [2, 6, 12].
Define the history size $m$.
Step 1 (Calculating dual bounds)
For $i = 1, \dots, n$,
Compute $\lambda_i^u = (\bar{a}_i - d_i \bar{u}_i)/b_i$ and $\lambda_i^0 = \bar{a}_i / b_i$.
Step 2 (Compute $F_2(\lambda_k)$)
Compute $s_k^U$, $s_k^a$, and $s_k^b$ using $\lambda_k$ according to the FPA2 algorithm.
Compute $F_2(\lambda_k) = (s_k^U + s_k^a - \bar{r}) / s_k^b$.
Step 3 (Updating Anderson acceleration variables)
Set $m_k = \min\{m, k\}$.
Let $g_k = F_2(\lambda_k) - \lambda_k$.
Set $G_k = (g_{k - m_k}, \dots, g_k)$.
Determine $\alpha^k$ according to equation (38).
Set $\lambda_{k+1}$ according to equation (39).
Step 4 (Check stopping criterion)
If $\lambda_{k+1} = \lambda_k$, then set $z^* = z(\lambda_{k+1})$ according to equation (33), recover $x^*$ according to equation (30), and STOP. Otherwise, set $k = k + 1$ and return to Step 2.

We see that we must solve a constrained minimization problem at each iteration. In most references, the minimization in equation (38) is recast as an unconstrained minimization problem. We generally keep the number of elements $m$ in the Anderson history small to limit storage and to make the optimization problem less ill-conditioned; our experiments use a small value of $m$. More about Anderson's acceleration theory and formulations can be found in [9–11].
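For intuition, when the fixed-point variable is a scalar, as here, and the history depth is $m = 1$, problem (38) has a closed-form solution. The sketch below illustrates that special case only; it is our simplification, not the general implementation used in the experiments:

```c
/* Depth-1 Anderson step for a scalar fixed-point map F:
 * given x0, x1 and their images g0 = F(x0), g1 = F(x1), the residuals are
 * f0 = g0 - x0 and f1 = g1 - x1. The weights (alpha, 1 - alpha), which sum
 * to one, that zero the linear residual model are alpha = f1 / (f1 - f0),
 * and the accelerated iterate mixes the images accordingly (eq. (39)). */
double anderson_step(double g0, double g1, double x0, double x1)
{
    double f0 = g0 - x0, f1 = g1 - x1;
    double denom = f1 - f0;
    if (denom == 0.0) return g1;   /* degenerate: fall back to plain iteration */
    double alpha = f1 / denom;     /* weight on the older pair */
    return alpha * g0 + (1.0 - alpha) * g1;
}
```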

5.2. Aitken Acceleration

As in [9], the idea of Aitken acceleration is to adapt the relaxation factor (and, thus, the size of the iteration step) based on information from the previous iterations.

Following [10], let us consider a sequence of scalars $\{\lambda_k\}$ that converges linearly to its fixed point $\lambda^*$, which implies that, for a large $k$:
$$\frac{\lambda^* - \lambda_{k+1}}{\lambda^* - \lambda_k} \approx \frac{\lambda^* - \lambda_k}{\lambda^* - \lambda_{k-1}}. \tag{40}$$

Below, we rearrange equation (40) to give a formula predicting the fixed point, which is used as the subsequent iterate:
$$\lambda^* \approx \lambda_{k-1} - \frac{(\lambda_k - \lambda_{k-1})^2}{\lambda_{k+1} - 2\lambda_k + \lambda_{k-1}}. \tag{41}$$

According to equation (41), we incorporate the Aitken approach into FPA2 as Algorithm 4.

Step 0 (Initialization)
Set $k = 0$. Let $\lambda_0$ be chosen according to [2, 6, 12].
Step 1 (Calculating dual bounds)
For $i = 1, \dots, n$,
Compute $\lambda_i^u = (\bar{a}_i - d_i \bar{u}_i)/b_i$ and $\lambda_i^0 = \bar{a}_i / b_i$.
Step 2 (Calculating fixed-point sums and defining $\lambda_{k+1}$)
Compute $s_k^U$, $s_k^a$, and $s_k^b$ using $\lambda_k$ according to the FPA2 algorithm.
Compute $\lambda_{k+1} = F_2(\lambda_k)$.
Step 3 (Check stopping criterion)
If $\lambda_{k+1} = \lambda_k$, then set $z^* = z(\lambda_{k+1})$ according to equation (33), recover $x^*$ according to equation (30), and STOP.
Step 4 (Calculating fixed-point sums and defining $\lambda_{k+2}$)
Compute $s_{k+1}^U$, $s_{k+1}^a$, and $s_{k+1}^b$ using $\lambda_{k+1}$ according to the FPA2 algorithm.
Compute $\lambda_{k+2} = F_2(\lambda_{k+1})$.
Step 5 (Calculating Aitken acceleration procedure)
Compute $\lambda_{k+2} \leftarrow \lambda_k - \dfrac{(\lambda_{k+1} - \lambda_k)^2}{\lambda_{k+2} - 2\lambda_{k+1} + \lambda_k}$ according to equation (41).
Step 6 (Check stopping criterion)
If $\lambda_{k+2} = \lambda_{k+1}$, then set $z^* = z(\lambda_{k+2})$ according to equation (33), recover $x^*$ according to equation (30), and STOP. Otherwise, set $k = k + 2$ and return to Step 2.
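Step 5 is the classical $\Delta^2$ extrapolation. A minimal C sketch of the update (41), with a guard against a vanishing denominator added by us:

```c
/* Aitken delta-squared extrapolation: given three consecutive fixed-point
 * iterates x0, x1 = F(x0), x2 = F(x1), predict the limit of the sequence. */
double aitken_step(double x0, double x1, double x2)
{
    double denom = x2 - 2.0 * x1 + x0;
    if (denom == 0.0) return x2;      /* avoid division by zero */
    double dx = x1 - x0;
    return x0 - dx * dx / denom;      /* equation (41) */
}
```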

6. Numerical Experiments

This section presents several numerical experiments using the FPA and FPA2 algorithms. The proposed algorithm is very simple and can be used to solve different forms of the quadratic convex separable knapsack problem.

We split our experiments into three subsections, described as follows. In Subsection 6.1, we use randomly generated problems to compare the FPA2 and FPA algorithms with state-of-the-art solvers, with the accelerated FPA2, and with some root-finding methods. Then, we show the performance profile of the computational time for all algorithms presented. In Subsection 6.2, we solve the problem of finding the lowest-risk portfolio. In Subsection 6.3, we apply the proposed algorithms to a continuous relaxation of the sensor placement problem presented in [13].

We implemented the methods in C, and the compiler used was gcc 12.2.0 with the optimization flags -march=native -O3 -ffast-math. All experiments were performed on a desktop with an Intel Core i5-9400 CPU (2.9 GHz), 16 GB of memory, running Ubuntu 20.04.6 (64 bit).

The tables in the following subsections have an error column corresponding to the number of failed experiments. An experiment that meets any of the following conditions is considered a failure:
(i) It reaches the maximum number of 100 iterations.
(ii) The relative residual is not small enough.
(iii) The optimal value, when the problem is viewed as a D-projection, is not approximately equal to that of the other solvers.
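As an illustration, a failure test of this kind could be coded as below; the tolerance names and the exact residual definition are our assumptions, not taken from the article:

```c
/* Sketch of the failure test used to fill the Error column.
 * The iteration cap follows the text; the relative residual on
 * b'x = r and the objective comparison are our assumptions. */
#include <math.h>

#define MAX_ITER 100

int is_failure(int iters, double btx, double r, double tol_rel,
               double fval, double fval_ref, double tol_obj)
{
    if (iters >= MAX_ITER) return 1;                          /* item (i)   */
    if (fabs(btx - r) > tol_rel * (1.0 + fabs(r))) return 1;  /* item (ii)  */
    if (fabs(fval - fval_ref) > tol_obj * (1.0 + fabs(fval_ref)))
        return 1;                                             /* item (iii) */
    return 0;
}
```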

6.1. Random Generated Problems

In this section, we generate medium- and large-scale random problems, with dimensions ranging from to . As in [2, 16], the problems were divided into four classes:
(1) Uncorrelated: .
(2) Weakly correlated: , .
(3) Strongly correlated: .
(4) Flow: , , for , and , , , all for , while was selected uniformly in .

Furthermore, for problem classes 1, 2, and 3, the remaining data were chosen uniformly as in [2].

6.1.1. Comparison with State-of-the-Art Algorithms

In the first experiment, we consider the Newton-based method [2], variable fixing [12], secant-based method [14], and median search [17].

Tables 1–4 report the results of the FPA2, FPA, Newton, secant, variable fixing, and median search algorithms in milliseconds over 50 randomly generated tests for each dimension and class. Each random test was repeated 20 times in a loop to obtain a reliable estimate of the running time, and we report the mean time of each random test. The stopping criterion used for the FPA2 and FPA algorithms is
$$|\lambda_{k+1} - \lambda_k| \le \varepsilon,$$
where $\varepsilon$ is a small positive tolerance whose value was chosen as in [2].

The results show that the computational time of the FPA2 algorithm was smaller than that of the other state-of-the-art methods in all experiments. The FPA algorithm was superior to the other methods only for weakly correlated and uncorrelated problems with a large number of variables, e.g., n = 10,000,000 and n = 50,000,000. For the other problems, the FPA does not perform better than the Newton algorithm.

In the largest test (n = 50,000,000), Tables 1–4 show the following results:
(i) For the uncorrelated and weakly correlated problems, the FPA2 algorithm was about 14%, 18%, 20%, 34%, and 154% faster than the FPA, Newton, secant, variable fixing, and median search algorithms, respectively.
(ii) For the correlated problems, the FPA2 algorithm was about 14%, 11%, 10%, 31%, and 152% faster than the FPA, Newton, secant, variable fixing, and median search algorithms, respectively.
(iii) For the flow problems, the FPA2 algorithm was about 15%, 8%, 10%, 2%, and 100% faster than the FPA, Newton, secant, variable fixing, and median search algorithms, respectively.

The results also show that all algorithms solved all problems correctly.

6.1.2. Comparison with Accelerated Algorithms

In this subsection, we compare the performance of the FPA2 and FPA algorithms with two new versions of the accelerated FPA2 algorithm. We incorporated the Aitken and Anderson acceleration approaches into the FPA2 algorithm, since FPA2 presented the better performance in Subsection 6.1.1.

As in [9, 10], the fixed-point acceleration approach can perform better than the original fixed-point algorithm. The numerical experiments in [9] show that, for a specific engineering problem, Anderson and Aitken acceleration have similar performance depending on the parameters defined in the algorithm. In [10], fixed-point accelerations are applied in different scenarios; the default acceleration approach of the solver proposed in [10] is Anderson acceleration, since it shows better results in the numerical experiments.

The Anderson-acceleration-based algorithm, FPA2-Anderson, presents a slight improvement over the FPA2 algorithm for the uncorrelated and correlated problems when the problem size is larger. For the weakly correlated and flow classes, FPA2 performed better than the accelerated algorithms. Furthermore, FPA2-Anderson could only solve some of the problems of each class correctly, while the FPA2-Aitken algorithm correctly solved almost all the problems, as can be seen in the Error column. In our experiments, presented in Tables 5–8, regarding computational time, Anderson acceleration performed better than Aitken acceleration.

6.1.3. Comparison with Root-Finding Algorithms

We also compare our fixed-point approach with some root-finding algorithms, as in [3]. Tables 9–11 show the results for the secant, bisection, and regula falsi methods, respectively. All pseudocodes for these methods can be found in [3].

In this subsection, we do not run the algorithms on the flow problems because the root-finding algorithms need many iterations to reach good results with the established stopping tolerance. Furthermore, we can see from the Error column that the secant, regula falsi, and bisection algorithms did not solve some problems well. The FPA2 algorithm reported here is the same as in Subsections 6.1.1 and 6.1.2, so the results are very similar.

We highlight the clear superiority of the FPA2 algorithm compared to these popular root-finding algorithms.

6.1.4. Performance Profile Analysis of the Algorithms

The approach adopted to analyze and compare the performance of the algorithms developed in this work was proposed in [18]. The authors created it to facilitate the visualization and interpretation of experimental results, comparing a set of algorithms to identify the one with the best performance on a set of problems. The method considers a set of test problems $P$, with $|P| = n_p$, a set of algorithms $S$, with $|S| = n_s$, and a performance metric (computation time, average objective function value, among others).

The performance ratio (always greater than or equal to 1) is defined as
$$r_{p,s} = \frac{t_{p,s}}{\min\{ t_{p,s'} : s' \in S \}},$$
where $t_{p,s}$ is the performance of algorithm $s$ on problem $p$.

The algorithm performance profile is given by
$$\rho_s(\tau) = \frac{1}{n_p} \left| \{ p \in P : r_{p,s} \le \tau \} \right|,$$
where $\rho_s(\tau)$ is the fraction of problems solved by algorithm $s$ with performance within a factor $\tau$ of the best performance obtained, considering all algorithms.
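A sketch in C of the profile computation described above; the flattened storage layout is our choice:

```c
/* times[p*ns + s]: time of solver s on problem p. Returns rho_s(tau),
 * the fraction of problems on which solver s is within a factor tau
 * of the best solver, following the profile of [18]. */
double perf_profile(const double *times, int np, int ns, int s, double tau)
{
    int count = 0;
    for (int p = 0; p < np; p++) {
        double best = times[p * ns];
        for (int j = 1; j < ns; j++)
            if (times[p * ns + j] < best) best = times[p * ns + j];
        double ratio = times[p * ns + s] / best;  /* r_{p,s} >= 1 */
        if (ratio <= tau) count++;
    }
    return (double)count / (double)np;
}
```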

In Figure 1, we display the performance profiles of [18] with respect to computational time for the correlated, uncorrelated, weakly correlated, and flow problems. In this experiment, we use the same data used to generate the tables presented in Subsections 6.1.1 and 6.1.2.

Figure 1 shows that, although the average time of the FPA2-Aitken algorithm is lower than that of all other algorithms, for some of the 50 samples generated in each experiment, other algorithms performed better. Below, we describe Figure 1 based on the individual results of the 50 generated samples for each problem type:
(i) Uncorrelated: FPA2-Aitken and Newton solved about 46% and 30% of the samples fastest, respectively, while the variable fixing method solved about 20% of the samples fastest.
(ii) Correlated: the variable fixing method and FPA2-Aitken solved about 54% and 34% of the samples fastest, respectively; Newton and FPA2-Anderson together solved about 12% of the samples fastest.
(iii) Weakly correlated: FPA2-Aitken and the variable fixing method solved about 52% and 36% of the samples fastest, respectively, while Newton, FPA2-Anderson, secant, and median search combined solved about 12% of the problems fastest.
(iv) Flow: secant and FPA2-Aitken solved about 60% and 40% of the samples fastest, respectively.

The results presented in Subsections 6.1.1–6.1.4 reinforce the competitiveness of the method proposed in this article. Simplicity and ease of implementation help place the FPA2 and FPA algorithms, as well as their accelerated versions, among the best options for solving problem (1).

6.2. The Portfolio Optimization

In this subsection, following the experiments presented in [19], we apply the FPA algorithm to the portfolio optimization problem based on the mean-variance model [20]. Since there is a trade-off between reward and risk in investment portfolios, the investor must be willing to tolerate risk to obtain ever-increasing returns.

An investment portfolio is defined by a vector $x = (x_1, \dots, x_n)$, where $x_i$ denotes the proportion of the investment to be invested in asset $i$. Assuming that all available capital is invested, the problem must satisfy the constraints $\sum_{i=1}^{n} x_i = 1$ and $x_i \ge 0$, for $i = 1, \dots, n$. So, the portfolio optimization problem can be written as
$$\min_{x \in \mathbb{R}^n} \; x^\top \Sigma x \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = 1, \qquad x_i \ge 0, \quad i = 1, \dots, n, \tag{45}$$
where $\Sigma$ is the covariance matrix of the asset returns.

As can be seen, the constraints of model (45) are a special case of those of problem (1) with $b_i = 1$, $r = 1$, and $l_i = 0$ for all $i$. As $x_i \ge 0$ and $\sum_{i=1}^{n} x_i = 1$, these constraints are equivalent to $0 \le x_i \le 1$ for all $i$, and thus we can take $u_i = 1$.

Since the FPA algorithm solves a separable problem, we must reformulate model (45). In this case, we use the framework proposed in [21] for nonseparable optimization problems of the form (45). This framework was first proposed for training SVMs, but it can also be applied to portfolio optimization.

The framework comprises two main stages: the first approximates the objective function of the main problem by a separable objective function with a diagonal Hessian matrix, and the second solves the resulting subproblem with the FPA algorithm.

In [21], the authors use a separable quadratic approximation, first defined in [22], to obtain a diagonal approximation of $f$ at the point $x^k$, named $q_k$. The function $q_k$ has the following formula:
$$q_k(x) = f(x^k) + \nabla f(x^k)^\top (x - x^k) + \frac{c_k}{2} \left\| x - x^k \right\|^2, \tag{46}$$
where $c_k > 0$ and $k$ is the current iteration. Snyman and Hay [22] state that, by forcing $q_k(x^{k-1}) = f(x^{k-1})$, it is possible to obtain $c_k$ as follows:
$$c_k = \frac{2 \left[ f(x^{k-1}) - f(x^k) - \nabla f(x^k)^\top (x^{k-1} - x^k) \right]}{\left\| x^{k-1} - x^k \right\|^2}. \tag{47}$$

As a result of the separable approximation in the first stage of the framework, and after rewriting objective function (46) in the format we are looking for in (45), the following subproblem is obtained:
$$\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} \left( \frac{c_k}{2} x_i^2 - \left( c_k x_i^k - \nabla f(x^k)_i \right) x_i \right) \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = 1, \qquad 0 \le x_i \le 1. \tag{48}$$
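The first stage of the framework can be sketched as follows for $f(x) = x^\top \Sigma x$: the code builds the diagonal model data $(d_i, a_i)$ of subproblem (48) from the gradient and the spherical curvature (47). All names, the convexity safeguard, and the dense storage are our assumptions:

```c
/* Build the separable model of f(x) = x' Sigma x at xk, interpolating f
 * at the previous iterate xprev as in (47). On exit, dq[i] and aq[i] are
 * the quadratic and linear coefficients of subproblem (48) fed to the FPA. */
void build_subproblem(const double *Sigma, const double *xk,
                      const double *xprev, int n,
                      double *dq, double *aq)
{
    double fk = 0.0, fprev = 0.0, gdot = 0.0, dist2 = 0.0;
    for (int i = 0; i < n; i++) {
        double gi = 0.0;
        for (int j = 0; j < n; j++) {
            gi    += 2.0 * Sigma[i * n + j] * xk[j];        /* (grad f)_i */
            fk    += xk[i]    * Sigma[i * n + j] * xk[j];   /* f(xk)      */
            fprev += xprev[i] * Sigma[i * n + j] * xprev[j];/* f(xprev)   */
        }
        gdot  += gi * (xprev[i] - xk[i]);
        dist2 += (xprev[i] - xk[i]) * (xprev[i] - xk[i]);
        aq[i] = gi;                          /* stash gradient component */
    }
    /* spherical curvature (47), with a safeguard when xprev == xk
     * or when the interpolated curvature is not positive */
    double c = (dist2 > 0.0) ? 2.0 * (fprev - fk - gdot) / dist2 : 1e-8;
    if (c <= 0.0) c = 1e-8;
    for (int i = 0; i < n; i++) {
        dq[i] = c;                  /* quadratic coefficient of (48) */
        aq[i] = c * xk[i] - aq[i];  /* linear coefficient of (48)    */
    }
}
```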

Thus, the FPA algorithm solves problem (48) in each iteration of the framework until the stopping criterion in the FPA algorithm is achieved.

We use asset data from the Brazilian stock exchange to apply our algorithm to model (48). More specifically, we retrieved the returns of the shares that were part of the Ibovespa Index during the trading sessions of three years, namely, from 01-01-2020 to 30-12-2022. For that, we used Yahoo Finance and 58 assets, all with complete return histories in the period. The period considered produced 745 trading sessions on the stock exchange, and the covariance matrix has dimension 58 × 58. Therefore, a total of 58 assets compose the portfolio, listed in Table 12.

In Figure 2, MR is the optimal portfolio determined by our algorithm applied to model (48), which has a daily return of 0.0502 and a daily risk of 1.3818. The other points are portfolios investing 100% in a single asset. The portfolio WEGE3 is determined by investing 100% in the company WEG; this is the portfolio with the highest return, which is 0.1497. The portfolio PETR4 is determined by investing 100% in the company PETROBRAS, and it has the lowest return among all portfolios; besides, it is negative, with value −0.1782.

As in [19], we also consider the diversification of the assets. All portfolios formed by investing 100% in any individual asset have greater risk than the optimal portfolio, which is formed by combining several assets, as in the MR portfolio. We highlight this in Table 13, which shows the optimal MR portfolio.

In Table 13, order is the index of the assets, prop is the proportion of each asset in the MR portfolio, $\mu_i$ is the expected return of asset $i$, and $\sigma_i$ is the risk of asset $i$. For example, order 1 is asset 1, which, according to Table 12, is AMBEV S/A. The proportion of this asset in the optimal MR portfolio is 1.2243, and its expected return and risk are 0.0029 and 4.8227, respectively; similarly for the other assets in orders 2 to 58. In addition, we highlight the assets WEG in order 57 and PETROBRAS in order 45, both with proportion equal to 0 in the optimal MR portfolio.

In the following portfolio formulation, we incorporate a risk tolerance parameter denoted by $t$, and model (45) becomes
$$\min_{x \in \mathbb{R}^n} \; t \, x^\top \Sigma x - \mu^\top x \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = 1, \qquad x_i \ge 0, \quad i = 1, \dots, n, \tag{49}$$
where $\mu$ is the vector of the expected returns of the assets and $t > 0$ reflects the preference of the individual investor, also known as a risk aversion parameter.

To use the FPA algorithm on problem (49), we need to apply the framework as shown previously. In this case, we reformulate objective function (46) according to the objective function in (49), resulting in the following subproblem:
$$\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} \left( \frac{c_k}{2} x_i^2 - \left( c_k x_i^k - \nabla f_t(x^k)_i \right) x_i \right) \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = 1, \qquad 0 \le x_i \le 1, \tag{50}$$
where $f_t(x) = t \, x^\top \Sigma x - \mu^\top x$.

Similar to problem (48), the FPA algorithm solves problem (50) in each iteration of the framework until the stopping criterion in the FPA algorithm is achieved.

We run our algorithm on problem (50) with the same data used in the first example; the assets are given in Table 12. For each fixed value of $t$ in problem (50), the algorithm determines an optimal portfolio. If an investor is risk averse, he will choose a large value of $t$, meaning he wants to minimize risk. On the other hand, if the investor is more tolerant of risk, he will choose a small $t$, giving more weight to the return.

Figure 3 shows 200 portfolios determined by our algorithm with $t$ varying from 0.5 to 100 in steps of size 0.5. The first portfolio was determined by taking $t = 0.5$; it is the portfolio in the right corner of Figure 3, with an expected return of 0.1473 and a risk of 5.1151. The subsequent portfolios are determined by increasing $t$ up to $t = 100$, which gives the portfolio in the left corner of Figure 3, with an expected return of 0.0538 and a risk of 1.3854; it is almost identical to the MR portfolio determined by problem (48).

6.3. Sensor Placement Problem

In this section, we apply the FPA algorithm to a type of traffic problem named the sensor placement problem. The model used in our experiments addresses optimization problems according to [23], where there is a single commodity demand that has to be satisfied, a set of potential resources, a fixed activation cost for each resource, and congestion that heavily influences the cost of a resource. Such a model is formulated as a mixed integer nonlinear programming (MINLP) problem. Following [13], we consider the problem of optimally placing a set of sensors to cover a given area, where deploying one sensor has a fixed cost plus a cost that is quadratic in the radius of the surface covered. The problem can be written as
$$\min_{x, y} \; \sum_{j=1}^{n} \left( c_j x_j^2 + f_j y_j \right) \quad \text{s.t.} \quad \sum_{j=1}^{n} x_j = d, \qquad 0 \le x_j \le y_j, \qquad y_j \in \{0, 1\}, \quad j = 1, \dots, n, \tag{51}$$
where $x_j$ indicates the fraction of demand allocated to resource $j$ and $y_j$ is a binary variable indicating whether resource $j$ is active ($y_j = 1$) or not ($y_j = 0$).

Now, in the continuous relaxation of problem (51), Reference [23] relaxes the integrality constraint on the variables $y_j$. Since we can assume $f_j > 0$ (for otherwise $y_j$ can surely be fixed to 1), the "design" variables $y_j$ can be "projected" onto $y_j = x_j$. The problem is now the following:
$$\min_{x} \; \sum_{j=1}^{n} \left( c_j x_j^2 + f_j x_j \right) \quad \text{s.t.} \quad \sum_{j=1}^{n} x_j = d, \qquad 0 \le x_j \le 1. \tag{52}$$

Problem (52) is a representative case of the continuous nonlinear resource allocation problem, which, in addition to the sensor placement problem, has many applications; see reference [5].

In [13, 23], some complexity issues were proved for problems (51) and (52). The instances of the sensor placement problem were generated with the generator freely available at https://groups.di.unipi.it/optimize/Data/RDR.html.

We generated five sizes of random problems based on their length.

Table 14 presents the computational time necessary to solve each problem instance and the total number of iterations for the convergence of each method. As in Subsection 6.1.1, we compare the FPA and FPA2 with the following methods: the Newton-based method [2], variable fixing [12], the secant-based method [14], and median search [17].

We report the mean time of each random test: 10 randomly generated tests for each dimension were repeated 10 times in a loop to obtain a reliable estimate of the computational time, which is presented in milliseconds. The stopping criterion used for the algorithms is the same as presented in Subsection 6.1.1.

The results in Table 14 show that all the algorithms solve the problem well except the secant method. For the generated problems with , the secant method could not solve four random tests within the maximum number of iterations. For the problems with and , the secant method could not solve any random test within the maximum number of iterations.

The FPA2 and FPA solved all the random problems in less computational time than the other algorithms, with a number of iterations similar to that of Newton's method. The variable fixing method required fewer iterations than FPA2, and the median search method required fewer iterations than all other algorithms.

7. Final Remarks

This article presented a straightforward root-finding numerical scheme for solving a quadratic convex separable knapsack problem based on a fixed-point algorithm studied in [3]. We named our algorithm FPA. We then incorporated acceleration techniques as an alternative to improve the performance of the proposed algorithm.

To obtain better results in our experiments, we also reformulated the quadratic convex separable knapsack problem. This reformulation allows for a new constraint format, and we named the resulting algorithm FPA2. We tested our algorithms on three different problems: randomly generated problems, a portfolio optimization problem, and a sensor placement problem. In the first experiment, we compared the FPA and FPA2 with popular methods from the literature: the Newton-based method [2], variable fixing [12], the secant-based method [14], and median search [17]. For all random problems, the FPA2 obtained the best computational time at every problem size. The acceleration techniques were applied to the FPA2 since it presented the best results in the previous experiments. Although the accelerated algorithms needed fewer iterations to converge, the computational time showed little gain.

Since the FPA algorithm solves a separable problem, we used the framework proposed in [21] to solve nonseparable optimization problems such as the portfolio optimization problem presented in our experiments. The results are similar to those presented in [19]. Finally, we ran the FPA and FPA2 on the sensor placement problem, and the proposed algorithms solved all the generated problems faster than all compared algorithms.

Other acceleration techniques, as well as improvements in convergence analysis, are ongoing research.

Data Availability

The code and data used to support the findings of this study have been deposited in the GitHub repository and are available at https://github.com/jona04/scripts-fpa.

Conflicts of Interest

The authors declare that they have no conflicts of interest.