Abstract
This study concerns an iterative predictor-corrector method of high order of convergence for computing the inverse of the coefficient matrix of the linear system obtained from the Sylvester equation. The numerical solutions of three examples by the predictor-corrector algorithm are given. The final numerical results support the applicability, fast convergence, and high accuracy of the method for finding matrix inverses.
1. Introduction
Consider the Sylvester matrix equations of the form AX + XB = C, where A, B, and C are constant coefficient matrices of sizes m x m, n x n, and m x n, respectively, and X is the m x n unknown solution matrix. These equations play important roles in matrix eigenvalue decompositions, in control theory, in model reduction, in the numerical solution of matrix differential Riccati equations, and in image processing (see [1–6]).
The solution of Sylvester matrix equations can be obtained by transforming them to the linear form Hx = b, where H is an mn x mn matrix [7]. For higher dimensions, the size of H can therefore create problems. These problems can be avoided with iterative algorithms for finding matrix inverses. An iterative optimization-based method built on particle swarm theory to solve the Sylvester equation was given in [8]. An iterative gradient-based algorithm for solving the equations by minimizing specific criterion functions was presented in [7]. In [9], a preconditioned gradient-based iterative method was obtained by selecting two auxiliary matrices using the Newton method; it can be regarded as a generalization of iterative splitting methods for solving systems of linear equations. Later, a preconditioned positive-definite and skew-Hermitian splitting iteration method for continuous Sylvester equations with positive definite/semi-definite matrices was presented in [10]. Alternating direction implicit iteration methods for solving the continuous Sylvester equation, where the coefficient matrices are taken to be positive semi-definite and at least one of them positive definite, are given in [11, 12]. A method based on the sign function iteration for solving large-scale Sylvester equations was presented in [13]. A transformation method, the Hessenberg–Schur algorithm, was also used to solve the Sylvester equation by reducing the coefficient matrices A and B to triangular form in [14]. A numerical Arnoldi-based method for solving the Sylvester equation when A is sparse and large, arising in the partial pole-assignment problem for large matrices, was given in [15]. A method consisting of an orthogonal reduction of the coefficient matrix of the Sylvester matrix equation to block upper Hessenberg form was given in [16].
Apart from these methods, direct methods generally take time and require a lot of storage. Of special interest is the use of an iterative method for computing the Moore–Penrose inverse of the matrix H. Such methods are derived from the second-order Schulz method. Higher-order iterative methods for computing the Moore–Penrose inverse were studied in [17, 18] for various orders, and they are known as hyperpower iterative methods. Let R_k = I - H X_k, and consider iterative methods in which X_k presents the approximate Moore–Penrose inverse of H at the k-th iteration. The hyperpower method of order p is usually presented as

X_{k+1} = X_k (I + R_k + R_k^2 + ... + R_k^{p-1}),  k = 0, 1, 2, ...,        (1)

and performs p matrix-by-matrix multiplications per step. There are different representations of (1) which differ in the calculation of the power sum I + R_k + R_k^2 + ... + R_k^{p-1}. If nested loops are used through factorizations, then the computational effort of the order-p method decreases and the number of matrix-by-matrix multiplications and additions in the polynomial is reduced (see [19, 20]). Most recently, in [21], for a given integer, two different classes of iterative methods for finding matrix inverses of square nonsingular matrices were proposed, and they were used as approximate inverse preconditioners for solving linear systems. Among these classes, class 1 and class 2 converge with different orders and require correspondingly different numbers of matrix-by-matrix multiplications per step. Methods of several orders for approximating matrix inverses were applied to approximate the Schur complement matrices, and the resulting algorithm was applied to solve Fredholm integral equations of the first kind in [22]. Another recursive approach, which constructs an incomplete block-matrix factorization of M-matrices by a two-step iterative method for approximating the inverses of the pivoting diagonal block matrices at each stage of the recursion, was given in [23].
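To make the hyperpower family (1) concrete, the following Python/NumPy sketch implements its simplest member, the second-order Schulz iteration; the function name, tolerance, and scaled initial guess are illustrative assumptions rather than choices taken from [17–20].

import numpy as np

def schulz_inverse(H, tol=1e-12, max_iter=100):
    # Second-order member of (1): X_{k+1} = X_k (I + R_k) = X_k (2I - H X_k).
    # With the scaled initial guess below (a standard, assumed choice) the
    # iterates converge to the Moore-Penrose inverse of H.
    m, n = H.shape
    X = H.conj().T / (np.linalg.norm(H, 1) * np.linalg.norm(H, np.inf))
    I = np.eye(m)
    for _ in range(max_iter):
        R = I - H @ X          # residual matrix R_k = I - H X_k
        X = X @ (I + R)        # hyperpower update of order p = 2
        if np.linalg.norm(R) <= tol:
            break
    return X

Higher orders replace I + R_k by the truncated power sum in (1); the factorized evaluations of [19, 20] reorganize this sum to lower the multiplication count.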
In this study, an iterative predictor-corrector method of high order is used for computing the inverse of the nonzero coefficient matrix of the linear form of the Sylvester equations; the method was established in [24]. It requires 5 matrix-by-matrix multiplications for the predictor stage and 5 matrix-by-matrix multiplications for the corrector stage at each step. To verify the predictor-corrector method's fast convergence, experimental analysis is conducted for three examples. For large-dimensional matrices, the computation of the Moore–Penrose inverse is costly when a direct method such as the singular value decomposition is used. Therefore, fast-converging iterative methods for approximating Moore–Penrose inverses that are also computationally efficient are essential. The need to compute the Moore–Penrose inverse for the solution of the Sylvester equation is one of the application areas. As stated before, these equations appear in control theory and other important real-life phenomena.
2. Sylvester Matrix Equation
Consider the Sylvester matrix equation of the form

AX + XB = C,        (2)

where A, B, and C are constant matrices of sizes m x m, n x n, and m x n, respectively, and X is the m x n matrix of unknowns. Equation (2) has a unique solution, given as

vec(X) = (I_n ⊗ A + B^T ⊗ I_m)^{-1} vec(C),        (3)

if and only if λ_i(A) + μ_j(B) ≠ 0 for any eigenvalue λ_i of A and any eigenvalue μ_j of B [7], where ⊗ presents the Kronecker product and vec(·) stacks the columns of a matrix into a single vector. Setting

H = I_n ⊗ A + B^T ⊗ I_m,        (4)

x = vec(X), and b = vec(C), the solution of (2) reduces to the solution of a linear system with the mn x mn coefficient matrix H.
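For illustration, a short Python/NumPy sketch of the vectorization in (2)–(4) is given below; the matrix sizes and random test data are assumptions made only for this example and do not correspond to the problems of Section 4.

import numpy as np

def sylvester_via_kron(A, B, C):
    # Form H = I_n kron A + B^T kron I_m as in (4) and solve H vec(X) = vec(C),
    # using the column-stacking vec (Fortran order in NumPy).
    m, n = C.shape
    H = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(H, C.flatten(order="F"))
    return x.reshape((m, n), order="F")

# Small random test; generically lambda_i(A) + mu_j(B) != 0, so X is unique.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((4, 3))
C = A @ X_true + X_true @ B
print(np.linalg.norm(sylvester_via_kron(A, B, C) - X_true))  # ~ machine precision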
In practice, if B = A^T, then equation (2) becomes the continuous-time Lyapunov equation. For the numerical solution of the linear system

Hx = b,        (7)

the following iterative predictor-corrector method is used to find the inverse of H in (4) for solving (7).
3. The Iterative Predictor-Corrector Method for Approximating the Moore–Penrose Inverse of the Matrix H
Let H ∈ C^{m x n}, let I denote the unit matrix, H* denote the conjugate transpose of H, R(H) denote the range of H, and P_{R(H)} denote the orthogonal projection onto R(H). The matrix-valued functions used in the iteration are defined as in [24]. We employ the predictor-corrector method of [24] as follows.
Predictor-corrector method: the scheme (11) consists of an initial step, a generator, a predictor at the fractional step, and a corrector that approximates the Moore–Penrose inverse of H at each step. The predictor-corrector method given in (11) is effective in computational aspects and is useful when m ≥ n. If m < n, then the dual version of the predictor-corrector iterative algorithm (11) can be used.
Theorem 1 (see [24]). Let H ∈ C^{m x n} and let the nonzero eigenvalues of the associated iteration matrix satisfy the bound stated in [24], where the involved real scalar satisfies the corresponding restriction. Then, the sequence obtained by the predictor-corrector method (11) converges to the Moore–Penrose inverse in norm as the iteration number tends to infinity, with the stated order of convergence and with the asymptotic convergence factor given in [21]. Moreover, an error estimate of the stated form is valid, whose degree of practicality increases in the given order while its degree of precision decreases in the same order.
3.1. Error Analysis
The minimum norm solution of the linear system

Hx = b,        (13)

is considered, where b ∈ R(H), R(H) denotes the range of H, and x denotes the unknown solution. The general least squares solution of (13), or minimum norm least squares solution, is the solution of the problem

min ||x||   subject to   ||Hx - b|| = min_y ||Hy - b||.        (14)
System (14) has a unique solution obtained by the equation x = H^+ b, and it is called the pseudoinverse solution [25].
If H is a nonsingular matrix, then H^+ = H^{-1}. Moreover, the condition number of H is defined by κ(H) = ||H|| ||H^+||.
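A minimal NumPy check of these definitions, using the built-in pseudoinverse as a reference rather than the iteration (11), might look as follows; the small matrix and right-hand side are arbitrary illustrative data.

import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [0.0, 0.0]])          # simple full-column-rank example
b = np.array([4.0, 9.0, 1.0])

x = np.linalg.pinv(H) @ b           # pseudoinverse solution x = H^+ b of (14)
print(x)                            # minimum norm least squares solution [2, 3]
print(np.linalg.cond(H))            # condition number ||H|| ||H^+|| in the 2-norm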
3.2. Algorithm for Approximating the Inverse of the Matrix H to Solve the Sylvester Equation
Let x_i = Y_i b be the approximate solution of (14), where Y_i is the approximate inverse of H obtained by the predictor-corrector method (11) at the i-th iteration using an initial approximation Y_0 that satisfies the convergence condition of Theorem 1. Therefore, the residual error at the corresponding step is r_i = b - H x_i. Given a prescribed accuracy ε, we propose the following algorithm, based on Algorithm A in [24], which uses the predictor-corrector method (11) and approximates the pseudoinverse of H (see Algorithm 1).
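Since the update formulas of (11) from [24] are not reproduced above, the following Python sketch only outlines the structure of such an algorithm under stated assumptions: a second-order Schulz step stands in for the predictor-corrector update, and the tolerance and iteration cap are illustrative values.

import numpy as np

def solve_via_approximate_inverse(H, b, eps=1e-10, max_iter=200):
    # Outline of the algorithmic structure: iterate an approximate inverse Y_i
    # of H, form x_i = Y_i b, and stop once the residual r_i = b - H x_i
    # satisfies ||r_i|| <= eps.  The single Schulz line below is only a
    # stand-in for the generator/predictor/corrector stages of (11).
    m, n = H.shape
    Y = H.conj().T / (np.linalg.norm(H, 1) * np.linalg.norm(H, np.inf))  # Y_0
    I = np.eye(m)
    for i in range(1, max_iter + 1):
        Y = Y @ (2 * I - H @ Y)     # stand-in for the predictor-corrector step
        x = Y @ b                   # x_i = Y_i b, approximate solution of (14)
        r = b - H @ x               # residual error at step i
        if np.linalg.norm(r) <= eps:
            break
    return x, Y, i

In the actual algorithm, the single update line would be replaced by the generator, the predictor at the fractional step, and the corrector of (11).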
4. Numerical Examples
In this section, all the calculations for Examples 1–3 are carried out with the Mathematica program in double precision. The tables adopt the following notation:
Example 1. Consider the coefficient matrices and the exact solution for Sylvester equation (2) given in [7], in which the matrix of ones is used, and apply the predictor-corrector method to find the inverse of H in (4).
The inverse is obtained by the predictor-corrector iteration, and then the approximate solution of the Sylvester equation (2) is computed. In this example, the right-hand side vector b is calculated from the exact solution. We apply the proposed predictor-corrector algorithm to obtain the inverse of H for the corresponding linear system (7), performing iterations up to the prescribed accuracy. Table 1 presents the norms of the errors obtained by using the predictor-corrector algorithm and the condition numbers.
In [7], the best error of the approximate solution for Example 1 was obtained after a number of iterations reported there. When the results in Table 1 are compared with those in [7], the predictor-corrector method is more accurate, faster, and more effective.
Example 2. Sylvester equation (2) is considered with coefficient matrices constructed from a tridiagonal matrix, random numbers, and a diagonal matrix whose entries are scaled by a weight parameter, as in [9]. For Example 2, the exact solution is built from a sparse identity matrix. The stopping criterion is a prescribed bound on the residual norm. Table 2 presents the condition numbers, the total iteration numbers, and the norm errors for the given values of the dimensions.
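Because the exact weights, dimensions, and random data used in [9] are not reproduced above, the following NumPy sketch shows only a hypothetical construction in the same spirit; every numerical choice in it (weight, size, identity solution, second coefficient matrix) is an assumption for illustration, not the data behind Table 2.

import numpy as np

def build_test_matrices(n, weight=10.0, seed=0):
    # Hypothetical Example 2 style data: tridiagonal base matrix plus a
    # weighted random diagonal; the exact solution is taken as the identity.
    rng = np.random.default_rng(seed)
    A = (2.0 * np.eye(n)
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))           # tridiagonal part
    A = A + weight * np.diag(rng.random(n))       # weighted random diagonal
    B = A.copy()                                  # assumed second coefficient matrix
    X_exact = np.eye(n)                           # assumed (sparse identity) solution
    C = A @ X_exact + X_exact @ B
    return A, B, C, X_exact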
It was already expected that the error would increase with the increase in the condition numbers, as shown in Table 2. Figure 1 illustrates the error for the test problems in Example 2 with respect to the iteration number for the considered dimensions. The obtained numerical results are compared with the results given in [7] and those given in [9]. Hence, the results produced by the predictor-corrector algorithm are more accurate and converge faster than those of the methods given in [7, 9].
According to Figure 1, fast and accurate results can be obtained as the number of iterations increases.

Example 3. Consider randomly generated coefficient matrices A, B, and C as in [16] to solve Sylvester equation (2) via (7) by the predictor-corrector algorithm with a prescribed stopping criterion. Figure 2 illustrates the error for the test problems in Example 3, and Table 3 presents the norms of the errors obtained by using the predictor-corrector algorithm and the condition numbers. The accuracy of the predictor-corrector algorithm is remarkable.

5. Concluding Remarks
An iterative predictor-corrector method of high convergence order is used for computing the inverse of the nonzero matrix H in (4) to solve Sylvester equation (2). It converges fast, and it is a highly accurate method. It can also be useful when H is a rectangular matrix or an ill-conditioned matrix. The predictor-corrector iterative method may also be applied to precondition the coefficient matrices of the linear algebraic systems of equations obtained by using the finite difference method to solve boundary value problems, for example, for the solution of Laplace's equation with singularities (see [26, 27]), for the solution of the heat equation on a hexagonal grid (see [28]), and for the approximation of the derivatives of the solution of the heat equation (see [29, 30]). Moreover, the predictor-corrector iterative method for finding matrix inverses may be applied to precondition the coefficient matrices of the systems of equations obtained by using the finite element method to solve parabolic partial differential equations (see [31]).
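As a generic illustration of this preconditioning idea (not the discretized settings of [26–31]), an approximate inverse Y produced by any of the iterations above can drive a simple stationary scheme; the sketch below, with assumed names and tolerance, converges whenever the spectral radius of I - Y H is less than one.

import numpy as np

def preconditioned_richardson(H, b, Y, tol=1e-10, max_iter=500):
    # Stationary iteration x <- x + Y (b - H x) with the approximate inverse Y
    # acting as preconditioner; a good Y makes I - Y H close to the zero matrix.
    x = np.zeros(H.shape[1])
    for k in range(max_iter):
        r = b - H @ x
        if np.linalg.norm(r) <= tol:
            break
        x = x + Y @ r
    return x, k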
Data Availability
No data were used in this study.
Conflicts of Interest
The author declares that there are no conflicts of interest.