Abstract
It is well known that the stability of a first-order autonomous system can be determined by testing the symmetric positive definite solutions of the associated Lyapunov matrix equations. However, research on constrained solutions of Lyapunov matrix equations remains scarce. In this paper, we present three iterative algorithms for computing symmetric positive semidefinite solutions of Lyapunov matrix equations. The first and second algorithms are based on the relaxed proximal point algorithm (RPPA) and the Peaceman–Rachford splitting method (PRSM), respectively, and their global convergence is ensured by corresponding results in the literature. The third algorithm is based on the well-known alternating direction method of multipliers (ADMM), and its convergence is subsequently discussed in detail. Finally, numerical simulation results illustrate the effectiveness of the proposed iterative algorithms.
1. Introduction
Linear matrix equations are often encountered in various engineering disciplines, such as control theory [1, 2], system theory [3, 4], and stability analysis [5, 6]. For example, the stability of a first-order autonomous system is determined by whether the associated continuous-time Lyapunov equation $A^{\top}X + XA = -M$ admits a positive definite solution or not, and the stability of a discrete-time system can be determined by solving the associated discrete-time Lyapunov matrix equation $A^{\top}XA - X = -M$, where $M$ is a given positive definite matrix of appropriate size [7].
In this paper, we consider the following constrained (continuous-time) Lyapunov matrix equations (termed CLMEs): find a matrix $X \in \Omega$ such that
$$A^{\top}X + XA = C, \tag{1}$$
where $\Omega := \{X \in \mathbb{R}^{m \times m} \mid X^{\top} = X,\ X \succeq 0\}$ denotes the symmetric positive semidefinite (SPSD) matrix cone, $A, C \in \mathbb{R}^{m \times m}$ are two given constant matrices, and $X$ is the unknown matrix to be determined. Many iterative methods have been developed in the literature to solve problem (1). For example, based on the hierarchical identification principle in parameter estimation, Ding et al. [8] designed a gradient-based iterative algorithm and a least-squares iterative algorithm for problem (1) without the constraint $X \in \Omega$. Niu et al. [9] and Xie and Ma [10] extended the method in [8] by designing accelerated gradient-based iterative algorithms for problem (1) without $X \in \Omega$, which often achieve better numerical performance when suitable parameters are chosen. Based on the hierarchical identification principle and a new fixed point equation, Sun et al. [7] designed two least-squares iterative algorithms for problem (1) without $X \in \Omega$. The conjugate gradient method is an influential numerical method for nonlinear optimization, and it has also been applied to problem (1) without $X \in \Omega$. For example, Huang and Ma [11] proposed a conjugate gradient method that solves problem (1) whether it is consistent or not. Two appealing characteristics of the method proposed by Huang and Ma [11] are that (i) it terminates finitely in the absence of round-off errors and (ii) it obtains the least Frobenius norm solution of problem (1) when a special kind of initial matrix is adopted. Furthermore, Ke and Ma [12] extended the alternating direction method of multipliers (ADMM), a well-known numerical method for separable optimization, to solve problem (1) with a nonnegativity constraint, motivated by the design principle proposed by Xu et al. [13].
Although the above iterative methods often exhibit efficient numerical performance, they solve problem (1) only without the constraint $X \in \Omega$. In practice, however, we often need to solve problem (1) with this constraint. In this paper, we deal with this issue and propose three iterative algorithms, based on the relaxed proximal point algorithm (RPPA), the Peaceman–Rachford splitting method (PRSM), and the alternating direction method of multipliers (ADMM), respectively. By transforming the CLMEs into an optimization problem, we can determine the existence of solutions of the CLMEs by checking whether the optimal value of the transformed problem equals zero.
Before ending this section, let us introduce the notation used in this paper. For any matrix $N$, we use $N^{\top}$ and $\operatorname{tr}(N)$ to denote its transpose and trace, respectively. The symbol $I_m$ stands for the identity matrix of order $m$. Let $\langle A, B \rangle := \operatorname{tr}(A^{\top}B)$, where the matrices $A$ and $B$ have the same size. The Frobenius norm is defined as $\|A\| := \sqrt{\langle A, A \rangle}$. The symbol $A \otimes B$ is defined as $A \otimes B := (a_{ij}B)$, which represents the Kronecker product of the matrices $A$ and $B$. For a matrix $A = (a_1, a_2, \ldots, a_n)$, the vectorization operator $\operatorname{vec}(A)$ is defined by $\operatorname{vec}(A) = (a_1^{\top}, a_2^{\top}, \ldots, a_n^{\top})^{\top}$, where $a_k$ is the $k$-th column of the matrix $A$. According to the properties of the Kronecker product, for any matrices $A$, $B$, and $X$ of appropriate sizes, it holds that
$$\operatorname{vec}(AXB) = (B^{\top} \otimes A)\operatorname{vec}(X).$$
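This identity is easy to verify numerically; the following Matlab snippet (a minimal check with arbitrarily chosen sizes) illustrates it:

% Numerical check of vec(A*X*B) = kron(B', A)*vec(X); sizes are arbitrary.
A = randn(3, 4); X = randn(4, 5); B = randn(5, 2);
L = A*X*B;                    % left-hand side, then vectorize below
r = kron(B', A)*X(:);         % right-hand side via the Kronecker product
disp(norm(L(:) - r))          % of the order of machine precision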
The Kronecker product satisfies the following properties [14]:
Property 1. Let $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$. Then $(A \otimes B)^{\top} = A^{\top} \otimes B^{\top}$.
Property 2. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{p \times q}$, $C \in \mathbb{R}^{n \times r}$, and $D \in \mathbb{R}^{q \times s}$. Then $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$.

Let $P_{\Omega}(\cdot)$ denote the projection operator onto the convex set $\Omega$, i.e.,
$$P_{\Omega}(N) := \arg\min_{X \in \Omega} \|X - N\|,$$
which satisfies the following property [15]:
$$\langle N - P_{\Omega}(N),\ X - P_{\Omega}(N) \rangle \le 0, \quad \forall X \in \Omega. \tag{6}$$
Furthermore, $P_{\Omega}(N)$ has a closed-form expression, i.e.,
$$P_{\Omega}(N) = U \max(\Lambda, 0)\, U^{\top}, \quad \text{where } \frac{N + N^{\top}}{2} = U \Lambda U^{\top} \tag{7}$$
is a spectral decomposition with $U$ orthogonal and $\Lambda$ diagonal, and the computational load of the above procedure is $O(m^3)$ flops.
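A short Matlab realization of formula (7) is given below; the function name proj_spsd is our own choice and is reused in the later sketches:

function P = proj_spsd(N)
% Projection onto the SPSD cone via formula (7):
% symmetrize first, then clip negative eigenvalues to zero.
S = (N + N')/2;          % symmetric part of N
[U, D] = eig(S);         % spectral decomposition S = U*D*U'
P = U*max(D, 0)*U';      % drop the negative eigenvalues
P = (P + P')/2;          % re-symmetrize against round-off
end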
The remainder of this paper is organized as follows. In Sections 2 and 3, we present the first and second iterative algorithms for problem (1) and analyze their global convergence. Section 4 discusses the third iterative algorithm for problem (1) and presents its convergence result. Section 5 presents some numerical results to illustrate the efficiency of the proposed algorithms. Finally, Section 6 concludes the paper and discusses future research opportunities.
2. The First Iterative Algorithm and Its Convergence
In this section, motivated by the relaxed proximal point algorithm (RPPA) in [16], we shall present an iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence.
Firstly, let us transform the CLMEs into the following constrained matrix optimization problem (termed CMO):
$$\min_{X \in \Omega}\ \frac{1}{2}\|A^{\top}X + XA - C\|^2. \tag{8}$$
Remark 1. Obviously, the objective function of CMO (8) is bounded below by zero; therefore, CMO (8) must have an optimal solution, and from its optimal value we can determine the existence of solutions of the CLMEs. In fact, if the optimal value of CMO (8) equals zero, the CLMEs have at least one solution; otherwise, the CLMEs have no solution.
Since CMO (8) can be viewed as the matrix form of a traditional convex optimization problem with vector variables, we apply the relaxed proximal point algorithm (termed RPPA) proposed by Gol'shtein and Tret'yakov in [16] to solve CMO (8), which yields the following method.
Algorithm 1. The RPPA for problem (8)
Step 1. Input a parameter $\gamma \in (0, 2)$, a symmetric positive semidefinite matrix $G \in \mathbb{R}^{m^2 \times m^2}$, and a tolerance $\epsilon > 0$. Initialize $X^0$. Set $k = 0$.
Step 2. Generate a temporal iterate $\tilde{X}^k$ by
$$\tilde{X}^k = \arg\min_{X \in \Omega}\left\{\frac{1}{2}\|A^{\top}X + XA - C\|^2 + \frac{1}{2}\big\|\operatorname{vec}(X) - \operatorname{vec}(X^k)\big\|_G^2\right\}, \tag{9}$$
in which $\|x\|_G$ denotes $\sqrt{x^{\top}Gx}$ for some vector $x$.
Step 3. If $\|\tilde{X}^k - X^k\| \le \epsilon$, then stop and return $\tilde{X}^k$ as an approximate solution of (8).
Step 4. Compute the new iterate by
$$X^{k+1} = X^k - \gamma\big(X^k - \tilde{X}^k\big).$$
Set $k := k + 1$ and go to Step 2.
In the following, we set the matrix $G$ in the RPPA as
$$G = \tau I_{m^2} - M^{\top}M, \qquad M := I_m \otimes A^{\top} + A^{\top} \otimes I_m, \qquad \tau \ge \|M^{\top}M\|_2,$$
so that $G$ is symmetric positive semidefinite. Now, let us elaborate on the optimization subproblem in (9). For the given $X^k$, note that $\operatorname{vec}(A^{\top}X + XA) = M\operatorname{vec}(X)$; by some simple manipulations, the $X$-subproblem in (9) amounts to solving the following problem:
$$\tilde{X}^k = \arg\min_{X \in \Omega}\ \frac{\tau}{2}\left\|\operatorname{vec}(X) - \Big(\operatorname{vec}(X^k) - \frac{1}{\tau}M^{\top}\big(M\operatorname{vec}(X^k) - \operatorname{vec}(C)\big)\Big)\right\|^2,$$
where the reformulation follows from the two properties in Section 1 and the identity $\|X\| = \|\operatorname{vec}(X)\|$. Then, $\tilde{X}^k$ can be computed by the formula (7).
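Under the above choice of $G$, each RPPA iteration is a gradient-like step followed by a projection. The following Matlab sketch reflects our reconstruction of Algorithm 1; the values of gamma, tau, epsilon, and maxit are illustrative, A and C are assumed given, and proj_spsd is the projection routine sketched in Section 1:

% RPPA sketch for min 0.5*||A'*X + X*A - C||^2 s.t. X in the SPSD cone.
m     = size(A, 1);
M     = kron(eye(m), A') + kron(A', eye(m));  % vec(A'*X + X*A) = M*vec(X)
tau   = normest(M)^2 + 1;                     % makes G = tau*I - M'*M PSD
gamma = 1.5;  epsilon = 1e-8;  maxit = 1000;  % illustrative parameters
X = zeros(m);
for k = 1:maxit
    g  = M'*(M*X(:) - C(:));                      % gradient of the quadratic
    Xt = proj_spsd(reshape(X(:) - g/tau, m, m));  % subproblem (9) = projection
    if norm(Xt - X, 'fro') <= epsilon, break; end
    X = X - gamma*(X - Xt);                       % relaxation step
end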
From [16], we have the following theorem, which shows that the iterative sequence generated by the RPPA is Fejér monotone with respect to the solution set. The proof of its global convergence follows immediately from this property and the results in [17].
Theorem 1. Let the sequences $\{X^k\}$ and $\{\tilde{X}^k\}$ be generated by the RPPA. Then, for any solution $X^*$ of CMO (8), one has
$$\big\|\operatorname{vec}(X^{k+1} - X^*)\big\|_G^2 \le \big\|\operatorname{vec}(X^k - X^*)\big\|_G^2 - \gamma(2 - \gamma)\big\|\operatorname{vec}(X^k - \tilde{X}^k)\big\|_G^2.$$
3. The Second Iterative Algorithm and Its Convergence
In this section, based on the Peaceman–Rachford splitting methods in [18, 19], we shall present another iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence.
By introducing an auxiliary matrix variable $Y$, CMO (8) can be further written as the following separable convex optimization problem (termed SCO):
$$\min_{X, Y}\ \frac{1}{2}\|A^{\top}X + XA - C\|^2 \quad \text{s.t.}\quad X = Y,\ Y \in \Omega. \tag{16}$$
Since SCO (16) can be viewed as the matrix form of a traditional convex optimization problem with vector variables, we apply the Peaceman–Rachford splitting method in [18, 19] to solve SCO (16), which yields the following method (termed PRSM).
Algorithm 2. The PRSM for problem (16)
Step 1. Input two parameters $\beta > 0$ and $\mu \in (0, 1)$, and a tolerance $\epsilon > 0$. Initialize $X^0$, $Y^0$, $\Lambda^0$. Set $k = 0$.
Step 2. Generate the new iterate by
$$\begin{cases} X^{k+1} = \arg\min_X \left\{\dfrac{1}{2}\|A^{\top}X + XA - C\|^2 - \langle \Lambda^k, X - Y^k \rangle + \dfrac{\beta}{2}\|X - Y^k\|^2\right\},\\[4pt] \Lambda^{k+1/2} = \Lambda^k - \mu\beta\big(X^{k+1} - Y^k\big),\\[4pt] Y^{k+1} = \arg\min_{Y \in \Omega} \left\{-\langle \Lambda^{k+1/2}, X^{k+1} - Y \rangle + \dfrac{\beta}{2}\|X^{k+1} - Y\|^2\right\},\\[4pt] \Lambda^{k+1} = \Lambda^{k+1/2} - \mu\beta\big(X^{k+1} - Y^{k+1}\big). \end{cases} \tag{17}$$
Step 3. If $\|X^{k+1} - Y^{k+1}\| \le \epsilon$, then stop and return $Y^{k+1}$ as an approximate solution of (16); else set $k := k + 1$ and go to Step 2.
Now, let us elaborate on the two optimization subproblems in (17). For the given $Y^k$ and $\Lambda^k$, the $X$-subproblem in (17) amounts to solving the following normal equation:
$$\big(M^{\top}M + \beta I_{m^2}\big)\operatorname{vec}(X) = M^{\top}\operatorname{vec}(C) + \beta\operatorname{vec}(Y^k) + \operatorname{vec}(\Lambda^k),$$
where $M = I_m \otimes A^{\top} + A^{\top} \otimes I_m$. Thus, we have
$$\operatorname{vec}(X^{k+1}) = \big(M^{\top}M + \beta I_{m^2}\big)^{-1}\big(M^{\top}\operatorname{vec}(C) + \beta\operatorname{vec}(Y^k) + \operatorname{vec}(\Lambda^k)\big).$$
For the given $X^{k+1}$ and $\Lambda^{k+1/2}$, the $Y$-subproblem in (17) amounts to solving the following optimization problem:
$$Y^{k+1} = \arg\min_{Y \in \Omega}\ \frac{\beta}{2}\left\|Y - \Big(X^{k+1} - \frac{1}{\beta}\Lambda^{k+1/2}\Big)\right\|^2,$$
which can be computed by the formula given in (7).
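A compact Matlab sketch of Algorithm 2 under the reconstruction above follows; beta, mu, epsilon, and maxit are illustrative, A and C are assumed given, and proj_spsd is the projection routine from Section 1:

% PRSM sketch for SCO (16).
m    = size(A, 1);
M    = kron(eye(m), A') + kron(A', eye(m));
R    = chol(M'*M + beta*eye(m^2));   % invariant factorization, formed once
beta = 1;  mu = 0.9;  epsilon = 1e-8;  maxit = 1000;  % illustrative
X = zeros(m); Y = zeros(m); Lam = zeros(m);
for k = 1:maxit
    rhs = M'*C(:) + beta*Y(:) + Lam(:);
    X   = reshape(R\(R'\rhs), m, m);   % X-subproblem: normal equation
    Lam = Lam - mu*beta*(X - Y);       % first multiplier update
    Y   = proj_spsd(X - Lam/beta);     % Y-subproblem: projection (7)
    Lam = Lam - mu*beta*(X - Y);       % second multiplier update
    if norm(X - Y, 'fro') <= epsilon, break; end
end

Forming the Cholesky factor of $M^{\top}M + \beta I_{m^2}$ once and reusing it each iteration is exactly the point of Remark 2 below.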
Remark 2. Although the computation of $X^{k+1}$ involves the inverse of a matrix, this matrix is invariant across iterations, so we only need to compute it (or its factorization) once before the iteration begins.
From [18, 19], we have the following theorem, which shows that the iterative sequence generated by the PRSM is Fejér monotone with respect to the solution set. Obviously, SCO is a convex optimization problem. Thus, the global convergence of the PRSM follows immediately from this property and the results in [17].
Theorem 2. Let $\{w^k\} := \{(X^k, Y^k, \Lambda^k)\}$ be the sequence generated by the PRSM. Then, for any $w^* = (X^*, Y^*, \Lambda^*)$, one has
$$\|w^{k+1} - w^*\|_H^2 \le \|w^k - w^*\|_H^2 - \|w^k - \tilde{w}^k\|_N^2,$$
where $H$ and $N$ are two positive definite matrices, with $(X^*, Y^*)$ being a solution of SCO, $\Lambda^*$ being the corresponding Lagrangian multiplier, and $\{\tilde{w}^k\}$ is an auxiliary sequence defined as
$$\tilde{w}^k := \big(X^{k+1},\ Y^{k+1},\ \Lambda^k - \beta(X^{k+1} - Y^k)\big).$$
Remark 3. Following the above procedure, some well-known iterative algorithms for convex programming, such as the proximal point algorithm (PPA) and the relaxed PPA [16], can also be used to solve SCO.
4. The Third Iterative Algorithm and Its Convergence
In this section, motivated by the alternating direction method of multipliers (ADMM) in [12, 13], we shall present the third iterative algorithm for the constrained Lyapunov matrix equations and discuss its convergence.
Firstly, CMO (8) can be written as the following equivalent form:
$$\min_{X, Y, Z}\ \frac{1}{2}\|A^{\top}X + YA - C\|^2 \quad \text{s.t.}\quad X = Z,\ Y = Z,\ Z \in \Omega. \tag{25}$$
The augmented Lagrangian function of (25) is
$$L_{\mathcal{A}}(X, Y, Z, \Lambda_1, \Lambda_2) = \frac{1}{2}\|A^{\top}X + YA - C\|^2 - \langle \Lambda_1, X - Z \rangle - \langle \Lambda_2, Y - Z \rangle + \frac{\alpha}{2}\|X - Z\|^2 + \frac{\beta}{2}\|Y - Z\|^2, \tag{26}$$
where the matrices $\Lambda_1, \Lambda_2$ are Lagrangian multipliers and the constants $\alpha, \beta > 0$ are penalty parameters. Then, we can derive the third iterative algorithm (termed ADMM) for CMO by alternately minimizing $L_{\mathcal{A}}$ with respect to one variable while the other variables are fixed.
Algorithm 3. The ADMM for problem (25)
Step 1. Input three parameters $\alpha > 0$, $\beta > 0$, $\gamma > 0$, and a tolerance $\epsilon > 0$. Initialize $X^0$, $Y^0$, $Z^0$, $\Lambda_1^0$, and $\Lambda_2^0$. Set $k = 0$.
Step 2. Compute the new iterate by
$$\begin{cases} X^{k+1} = \arg\min_X L_{\mathcal{A}}\big(X, Y^k, Z^k, \Lambda_1^k, \Lambda_2^k\big),\\[2pt] Y^{k+1} = \arg\min_Y L_{\mathcal{A}}\big(X^{k+1}, Y, Z^k, \Lambda_1^k, \Lambda_2^k\big),\\[2pt] Z^{k+1} = \arg\min_{Z \in \Omega} L_{\mathcal{A}}\big(X^{k+1}, Y^{k+1}, Z, \Lambda_1^k, \Lambda_2^k\big),\\[2pt] \Lambda_1^{k+1} = \Lambda_1^k - \gamma\alpha\big(X^{k+1} - Z^{k+1}\big),\\[2pt] \Lambda_2^{k+1} = \Lambda_2^k - \gamma\beta\big(Y^{k+1} - Z^{k+1}\big). \end{cases} \tag{27}$$
Step 3. If $\max\{\|X^{k+1} - Z^{k+1}\|, \|Y^{k+1} - Z^{k+1}\|\} \le \epsilon$, then stop and return $Z^{k+1}$ as an approximate solution of (25); else, set $k := k + 1$ and go to Step 2.
Now, let us elaborate on the three optimization subproblems in (27). For the given $Y^k$, $Z^k$, $\Lambda_1^k$, and $\Lambda_2^k$, the $X$-subproblem in (27) amounts to solving the following normal equation:
$$\big(AA^{\top} + \alpha I_m\big)X = A\big(C - Y^kA\big) + \Lambda_1^k + \alpha Z^k,$$
from which we have
$$X^{k+1} = \big(AA^{\top} + \alpha I_m\big)^{-1}\big(A(C - Y^kA) + \Lambda_1^k + \alpha Z^k\big).$$
For the given $X^{k+1}$, $Z^k$, $\Lambda_1^k$, and $\Lambda_2^k$, the $Y$-subproblem in (27) amounts to solving the following normal equation:
$$Y\big(AA^{\top} + \beta I_m\big) = \big(C - A^{\top}X^{k+1}\big)A^{\top} + \Lambda_2^k + \beta Z^k,$$
from which it holds that
$$Y^{k+1} = \big((C - A^{\top}X^{k+1})A^{\top} + \Lambda_2^k + \beta Z^k\big)\big(AA^{\top} + \beta I_m\big)^{-1}.$$
For the given $X^{k+1}$, $Y^{k+1}$, $\Lambda_1^k$, and $\Lambda_2^k$, the $Z$-subproblem in (27) amounts to solving the following optimization problem:
$$Z^{k+1} = \arg\min_{Z \in \Omega}\ \frac{\alpha + \beta}{2}\left\|Z - \frac{\alpha X^{k+1} + \beta Y^{k+1} - \Lambda_1^k - \Lambda_2^k}{\alpha + \beta}\right\|^2,$$
which can be computed by the formula given in (7).
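Under the above reconstruction, one sweep of Algorithm 3 can be coded in Matlab as follows; alpha, beta, gamma, epsilon, and maxit are illustrative inputs, A and C are assumed given, and proj_spsd is the routine from Section 1:

% ADMM sketch for (25): min 0.5*||A'*X + Y*A - C||^2
% s.t. X = Z, Y = Z, Z in the SPSD cone.
m  = size(A, 1);
P1 = inv(A*A' + alpha*eye(m));   % invariant inverses, formed once
P2 = inv(A*A' + beta*eye(m));    % (as noted in Remark 4 below)
X = zeros(m); Y = zeros(m); Z = zeros(m);
L1 = zeros(m); L2 = zeros(m);
for k = 1:maxit
    X  = P1*(A*(C - Y*A) + L1 + alpha*Z);                         % X-subproblem
    Y  = ((C - A'*X)*A' + L2 + beta*Z)*P2;                        % Y-subproblem
    Z  = proj_spsd((alpha*X + beta*Y - L1 - L2)/(alpha + beta));  % Z-subproblem
    L1 = L1 - gamma*alpha*(X - Z);                                % multiplier updates
    L2 = L2 - gamma*beta*(Y - Z);
    if max(norm(X - Z, 'fro'), norm(Y - Z, 'fro')) <= epsilon, break; end
end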
Remark 4. It is obvious that the two matrices $AA^{\top} + \alpha I_m$ and $AA^{\top} + \beta I_m$ are symmetric positive definite for any positive constants $\alpha, \beta$. Since they are invariant during the iteration, we only need to compute their inverses once before the iteration begins.
The KKT conditions for problem (25) are
$$\begin{cases} A\big(A^{\top}X + YA - C\big) = \Lambda_1,\\ \big(A^{\top}X + YA - C\big)A^{\top} = \Lambda_2,\\ \Lambda_1 + \Lambda_2 = \Lambda_3,\ \ \Lambda_3 \succeq 0,\ \ \langle \Lambda_3, Z \rangle = 0,\\ X = Z,\ Y = Z,\ Z \in \Omega, \end{cases}$$
where $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$ are three Lagrangian multipliers corresponding to the constraints $X = Z$, $Y = Z$, and $Z \in \Omega$ in (25), respectively. Eliminating the variable $\Lambda_3$, the above KKT system can be written as
$$\begin{cases} A\big(A^{\top}X + XA - C\big) = \Lambda_1,\\ \big(A^{\top}X + XA - C\big)A^{\top} = \Lambda_2,\\ X \in \Omega,\\ \langle \Lambda_1 + \Lambda_2,\ X \rangle = 0,\\ \Lambda_1 + \Lambda_2 \succeq 0. \end{cases}$$
If a point $X^*$ satisfies the above conditions, then $X^*$ is called a KKT point, and $(\Lambda_1^*, \Lambda_2^*)$ is the corresponding multiplier.
To simplify the notation, we set $\Lambda := \Lambda_1 + \Lambda_2$ and $W^k := (X^k, Y^k, Z^k, \Lambda_1^k, \Lambda_2^k)$. Now, let us analyze the convergence of the ADMM. Since the objective function of (25) is not separable, we only obtain the following weak convergence result.
Theorem 3. Let $\{W^k\}$ be the sequence generated by the ADMM which satisfies the condition
$$\lim_{k \to \infty}\big(\Lambda_1^{k+1} - \Lambda_1^k\big) = 0 \quad \text{and} \quad \lim_{k \to \infty}\big(\Lambda_2^{k+1} - \Lambda_2^k\big) = 0.$$
Then, any accumulation point of $\{W^k\}$ satisfies the KKT conditions for problem (25).
Proof. From the optimality conditions of the three subproblems in (27) and the two multiplier updates, it holds that
$$\begin{cases} \big(AA^{\top} + \alpha I_m\big)X^{k+1} - \big(A(C - Y^kA) + \Lambda_1^k + \alpha Z^k\big) = 0,\\[2pt] Y^{k+1}\big(AA^{\top} + \beta I_m\big) - \big((C - A^{\top}X^{k+1})A^{\top} + \Lambda_2^k + \beta Z^k\big) = 0,\\[2pt] Z^{k+1} - P_{\Omega}\Big(\dfrac{\alpha X^{k+1} + \beta Y^{k+1} - \Lambda_1^k - \Lambda_2^k}{\alpha + \beta}\Big) = 0,\\[2pt] \Lambda_1^{k+1} - \Lambda_1^k = -\gamma\alpha\big(X^{k+1} - Z^{k+1}\big),\\[2pt] \Lambda_2^{k+1} - \Lambda_2^k = -\gamma\beta\big(Y^{k+1} - Z^{k+1}\big). \end{cases}$$
From the condition in Theorem 3, we find that $X^{k+1} - Z^{k+1} \to 0$ and $Y^{k+1} - Z^{k+1} \to 0$ as $k \to \infty$. For any accumulation point $W^* = (X^*, Y^*, Z^*, \Lambda_1^*, \Lambda_2^*)$ of the sequence $\{W^k\}$, taking limits in the above five relations shows that $X^* = Y^* = Z^* \in \Omega$ and that the first two equations of the reduced KKT system hold, i.e.,
$$\Lambda_1^* = A\big(A^{\top}X^* + X^*A - C\big), \qquad \Lambda_2^* = \big(A^{\top}X^* + X^*A - C\big)A^{\top}.$$
In the following, we only need to prove that the last two conditions also hold, namely $\langle \Lambda^*, X^* \rangle = 0$ and $\Lambda^* \succeq 0$ with $\Lambda^* := \Lambda_1^* + \Lambda_2^*$. The symmetry of $X^*$ and $C$ implies that the matrix $R := A^{\top}X^* + X^*A - C$ is also symmetric, and hence $\Lambda^* = AR + RA^{\top}$ is symmetric. From the third relation above and $X^* = Y^* = Z^*$, it holds that
$$X^* = P_{\Omega}\left(X^* - \frac{\Lambda^*}{\alpha + \beta}\right).$$
From this and (6), we have
$$\left\langle X^* - \frac{\Lambda^*}{\alpha + \beta} - X^*,\ X - X^* \right\rangle \le 0, \quad \forall X \in \Omega, \qquad \text{i.e.,} \qquad \langle \Lambda^*,\ X - X^* \rangle \ge 0, \quad \forall X \in \Omega.$$
Setting $X = 2X^*$ and $X = 0$ in the above inequality, we get $\langle \Lambda^*, X^* \rangle \ge 0$ and $\langle \Lambda^*, X^* \rangle \le 0$. Thus, $\langle \Lambda^*, X^* \rangle = 0$. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$ be the $m$ eigenvalues of the matrix $\Lambda^*$. Suppose the matrix $\Lambda^*$ is not positive semidefinite; then $\lambda_m < 0$. Let $U$ be an orthogonal matrix such that $\Lambda^* = U\operatorname{diag}(\lambda_1, \ldots, \lambda_m)U^{\top}$. Setting $X = Ue_me_m^{\top}U^{\top} \in \Omega$, with $e_m$ being the last column of $I_m$, in the above inequality, we have
$$\langle \Lambda^*, X - X^* \rangle = \langle \Lambda^*, X \rangle = e_m^{\top}U^{\top}\Lambda^*Ue_m = \lambda_m < 0,$$
which contradicts $\langle \Lambda^*, X - X^* \rangle \ge 0$ and $X \in \Omega$. Thus, the matrix $\Lambda^* \succeq 0$. This completes the proof.
Remark 5. Based on the proximal ADMM [20], we can design an inverse-free ADMM for the studied problem by attaching suitable proximal terms to the subproblems with respect to X and Y in the ADMM. Furthermore, based on the parallel or partially parallel splitting methods or their proximal versions [21, 22], we can design some inverse-free (partially) parallel splitting methods for the studied problem [23, 24].
5. Numerical Results
In this section, we evaluate the performance of the three iterative algorithms designed in this paper for solving the constrained Lyapunov matrix equations and report some preliminary numerical results. The codes of our algorithms were written entirely in Matlab R2014a and were executed on a ThinkPad notebook with a Pentium(R) Dual-Core CPU T4400@2.2 GHz and 2 GB of memory.
The parameters in the three tested algorithms are set as follows: for the RPPA, $\gamma$ and $\tau$; for the PRSM, $\beta$ and $\mu$; for the ADMM, $\alpha$, $\beta$, and $\gamma$.
Problem 1. Consider the following constrained Lyapunov matrix equations:
$$A^{\top}X + XA = C, \quad X \in \Omega,$$
where $A$ and $C$ are set as in the following three cases. All initial matrices are set to zero matrices. Since the iterate $Y^k$ generated by the PRSM satisfies the constraint $Y^k \in \Omega$, and the RPPA and the ADMM have a similar property, we adopt the following stopping criterion:
$$\mathrm{Error}(k) := \big\|A^{\top}Y^k + Y^kA - C\big\| \le \epsilon,$$
or the number of iterations reaches its prescribed maximum.
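In Matlab, this stopping test is a one-liner; the sketch below uses our reconstructed definition of Error(k), with Yk denoting the current feasible iterate and epsilon the input tolerance:

% Residual-based stopping test used for all three algorithms.
Err  = norm(A'*Yk + Yk*A - C, 'fro');  % Error(k): equation residual
stop = (Err <= epsilon);               % feasibility Yk in Omega holds by construction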
The numerical results of the three tested algorithms for the first case of Problem 1 are plotted in Figure 1, from which we find that they successfully solve this problem within 66, 35, and 37 iterations, respectively. Note that there also exists a matrix that satisfies the equation $A^{\top}X + XA = C$ but is not symmetric; such a matrix is not a solution of CMO. The numerical results of the three tested algorithms for the second case of Problem 1 are plotted in Figure 2, from which we find that the Error(k)s generated by the three algorithms do not converge to zero. Similarly, there exists a matrix that satisfies the equation $A^{\top}X + XA = C$, but it is not positive semidefinite; hence, it is not a solution of CMO. The numerical results of the three tested algorithms for the third case of Problem 1 are plotted in Figure 3, from which we find that the Error(k)s generated by the three algorithms again do not converge to zero.



Problem 2. Consider the following constrained Lyapunov matrix equations:
$$A^{\top}X + XA = C, \quad X \in \Omega,$$
where $A$ and $C$ are set as follows. The three curves in Figure 4 indicate that, for CLMEs with a nonsymmetric matrix $A$, the RPPA and the PRSM fall into premature convergence, while the ADMM successfully solves this problem.

Problem 3. Consider the following constrained Lyapunov matrix equations:
$$A^{\top}X + XA = C, \quad X \in \Omega,$$
where $A$ and $X^*$ are set by the given data; then, $C$ is set as $C = A^{\top}X^* + X^*A$, so that the problem is consistent.
The numerical results of the three tested algorithms are plotted in Figure 5, from which we find that the performance of the ADMM is a little better than that of the PRSM, which in turn is obviously better than that of the RPPA.

6. Conclusions
In this paper, three iterative algorithms have been proposed to solve constrained Lyapunov matrix equations, and their convergence has been discussed in detail. Furthermore, preliminary numerical results have been reported, which verify the efficiency of the proposed algorithms.
Among the various challenges posed by large-scale constrained Lyapunov matrix equations, high performance is a must, not an option. Therefore, one object of our future investigations is to design more efficient iterative algorithms and to propose some valid criteria by which we can directly determine whether the Lyapunov matrix equations have a symmetric positive semidefinite solution or not.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Acknowledgments
This research was partially supported by the Natural Science Foundation of Shandong Province (no. ZR2016AL05) and the Doctoral Foundation of Zaozhuang University.