Abstract

We use a second-order learning algorithm for numerically solving a class of algebraic Riccati equations. Specifically, an extended Hamiltonian algorithm based on the manifold of positive definite symmetric matrices is provided. Furthermore, this algorithm is compared with the Euclidean gradient algorithm, the Riemannian gradient algorithm, and the new subspace iteration method. Simulation examples show that the extended Hamiltonian algorithm has the fastest convergence speed among these algorithms.

1. Introduction

The algebraic Riccati equations (AREs) have been widely used in control system synthesis [1, 2], especially in optimal control [3], robust control [4], signal processing [5], and LMI-based design [6]. In the past several decades, there has been increasing interest in solving the AREs [7–9]. Lower matrix bounds for the AREs were discussed in [10, 11]. The structure-preserving doubling algorithm (SDA) was developed under assumptions weaker than stabilizability and detectability, and practical issues involved in applying the SDA to continuous-time AREs were addressed in [12]. The purpose of this paper is to investigate the unique (under suitable hypotheses) symmetric positive definite solution of an algebraic Riccati equation. Some effective methods, such as the Schur method (SM) [13] and the new subspace iteration method (NSIM) [14], were given for the numerical solution of the algebraic Riccati equation. Recently, the Euclidean gradient algorithm (EGA) and the Riemannian gradient algorithm (RGA) for the numerical solution of the algebraic Riccati equation, with a cost function based on the Riemannian distance on the curved Riemannian manifold, were proposed in [15].

It is well known that the Euclidean gradient algorithm (EGA) and the Riemannian gradient algorithm (RGA) are first-order learning algorithms; hence their convergence speeds are very slow. The inclusion of a momentum term has been found to increase the rate of convergence dramatically [16, 17]. Based on this idea, the extended Hamiltonian algorithm (EHA) on manifolds was developed in [18], and its numerical implementation was discussed in [19]. The EHA is a second-order learning algorithm and converges faster than first-order learning algorithms for optimization problems if certain conditions are satisfied.

In this paper, we apply the EHA to compute the numerical solution of the AREs. Furthermore, we give two simulations to show the efficiency of our algorithm.

Briefly, the rest of the paper is organized as follows. Section 2 reviews some fundamental facts about the manifold of positive definite symmetric matrices that will be used throughout the paper. Section 3 introduces the extended Hamiltonian algorithm on this manifold, and Section 4 compares the convergence speed of the EHA with that of the other algorithms using two numerical examples. Section 5 concludes the paper.

2. Preliminaries

In this section we recall some differential geometric facts of the space of positive definite symmetric matrices that will be used in the present analysis. More details can be found in [20–23].

2.1. The Frobenius Inner Product and the Euclidean Distance

Let $\mathbb{R}^{n\times n}$ be the set of $n\times n$ real matrices and let $GL(n)$ be its subset containing all the nonsingular matrices. $GL(n)$ is a Lie group, that is, a group which is also a differentiable manifold and for which the operations of group multiplication and inversion are smooth. The tangent space at the identity is called the corresponding Lie algebra, denoted by $\mathfrak{gl}(n)$. It is the space of all linear transformations of $\mathbb{R}^{n}$, that is, $\mathbb{R}^{n\times n}$.

On $\mathbb{R}^{n\times n}$, we use the Euclidean inner product, known as the Frobenius inner product and defined by
$$\langle A, B\rangle = \operatorname{tr}\bigl(A^{T}B\bigr), \tag{1}$$
where $\operatorname{tr}(\cdot)$ stands for the trace of the matrix and the superscript $T$ denotes the transpose. The associated norm $\|A\|_{F} = \langle A, A\rangle^{1/2}$ is used to define the Euclidean distance on $\mathbb{R}^{n\times n}$ by
$$d_{F}(A, B) = \|A - B\|_{F}. \tag{2}$$
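As a small illustration, the sketch below (plain NumPy; the helper names are our own) evaluates the Frobenius inner product (1) and the Euclidean distance (2).

```python
import numpy as np

def frobenius_inner(A, B):
    # Frobenius inner product (1): <A, B> = tr(A^T B)
    return np.trace(A.T @ B)

def euclidean_distance(A, B):
    # Euclidean distance (2): d_F(A, B) = ||A - B||_F
    return np.linalg.norm(A - B, "fro")

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[3.0, 0.0], [1.0, 2.0]])
print(frobenius_inner(A, B))     # tr(A^T B) = 5.0
print(euclidean_distance(A, B))  # ||A - B||_F
```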

2.2. Geometry of Positive Definite Symmetric Matrices

We denote by $Sym(n)$ the vector space of $n\times n$ symmetric matrices and by $PD(n)$ the set of all $n\times n$ positive definite symmetric matrices, which is a differentiable manifold of dimension $n(n+1)/2$. Here positive definiteness means that the quadratic form $x^{T}Px > 0$ for all nonzero $x \in \mathbb{R}^{n}$. Furthermore, $PD(n)$ is an open convex cone; namely, if $P_{1}$ and $P_{2}$ are in $PD(n)$, so is $P_{1} + \lambda P_{2}$ for any $\lambda > 0$.

The exponential of any symmetric matrix is a positive definite symmetric matrix, and the (principal) logarithm of any positive definite symmetric matrix is a symmetric matrix. Thus the exponential map from $Sym(n)$ to $PD(n)$ is one-to-one and onto. It follows that $Sym(n)$ provides a parameterization of $PD(n)$ via the exponential map.
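This bijection can be checked numerically with `scipy.linalg.expm` and `logm`, as in the short sketch below (illustrative matrix of our own choosing).

```python
import numpy as np
from scipy.linalg import expm, logm

S = np.array([[0.0, 1.0], [1.0, -2.0]])   # an arbitrary symmetric matrix
P = expm(S)                               # its exponential is positive definite
print(np.linalg.eigvalsh(P))              # all eigenvalues are positive
print(np.allclose(logm(P), S))            # the principal logarithm recovers S
```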

2.3. Metric and Distances on PD

Let $g$ denote the Riemannian metric on the manifold $PD(n)$. For any $P \in PD(n)$ the tangent space $T_{P}PD(n)$ is identified with $Sym(n)$. On $T_{P}PD(n)$ we define the inner product and the corresponding norm as follows:
$$\langle X, Y\rangle_{P} = \operatorname{tr}\bigl(P^{-1}XP^{-1}Y\bigr), \qquad \|X\|_{P} = \langle X, X\rangle_{P}^{1/2}, \tag{5}$$
which depend on the point $P$. The positive definiteness of this metric is a consequence of the positive definiteness of the Frobenius inner product, since $\langle X, X\rangle_{P} = \|P^{-1/2}XP^{-1/2}\|_{F}^{2} > 0$ for $X \neq 0$.

Compared to the manifold $\mathbb{R}^{n\times n}$ with the Frobenius inner product (1), which is a flat Riemannian manifold, the manifold $PD(n)$ with the Riemannian metric (5) is a curved Riemannian manifold.

To measure the closeness of two positive definite symmetric matrices $A$ and $B$ one can use the Euclidean distance (2) of the ambient space $\mathbb{R}^{n\times n}$; that is,
$$d_{F}(A, B) = \|A - B\|_{F}. \tag{7}$$
It is more appropriate, however, to use the Riemannian distance induced from (5), which is intrinsic to $PD(n)$ and defined by
$$d_{R}(A, B) = \Bigl(\sum_{i=1}^{n}\ln^{2}\lambda_{i}\Bigr)^{1/2}, \tag{8}$$
where $\lambda_{i}$, $i = 1, \ldots, n$, are the positive eigenvalues of $A^{-1}B$. The positivity of the $\lambda_{i}$ is a consequence of the similarity between the (in general nonsymmetric) matrix $A^{-1}B$ and the positive definite symmetric matrix $A^{-1/2}BA^{-1/2}$. It is straightforward to see that the Riemannian distance (8) is invariant under inversion, that is, $d_{R}(A^{-1}, B^{-1}) = d_{R}(A, B)$, and is also invariant under congruent transformations, that is, $d_{R}(C^{T}AC, C^{T}BC) = d_{R}(A, B)$, where $C \in GL(n)$.
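The metric (5) and the distance (8) can be evaluated as in the sketch below (helper names are our own). The eigenvalues of $A^{-1}B$ are obtained from the symmetric matrix $A^{-1/2}BA^{-1/2}$, which has the same spectrum, and the two invariance properties are checked numerically.

```python
import numpy as np

def inv_sqrt_spd(A):
    # A^{-1/2} for a symmetric positive definite A, via its eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def pd_inner(P, X, Y):
    # Riemannian metric (5): <X, Y>_P = tr(P^{-1} X P^{-1} Y)
    Pinv = np.linalg.inv(P)
    return np.trace(Pinv @ X @ Pinv @ Y)

def riemannian_distance(A, B):
    # Distance (8): sqrt(sum_i ln^2(lambda_i)), lambda_i the eigenvalues of A^{-1} B
    Ah = inv_sqrt_spd(A)
    lam = np.linalg.eigvalsh(Ah @ B @ Ah)   # same spectrum as A^{-1} B, but symmetric
    return np.sqrt(np.sum(np.log(lam) ** 2))

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
C = np.array([[1.0, 1.0], [0.0, 2.0]])      # an arbitrary nonsingular matrix
iA, iB = np.linalg.inv(A), np.linalg.inv(B)

print(riemannian_distance(A, B))
print(riemannian_distance(iA, iB))                      # invariance under inversion
print(riemannian_distance(C.T @ A @ C, C.T @ B @ C))    # invariance under congruence
```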

Now, we discuss the geodesics on the manifold $PD(n)$ under the different metrics. The geodesic passing through $P \in PD(n)$ in the direction of $V \in Sym(n)$ with respect to the Euclidean metric (2) is given by $\gamma_{F}(t) = P + tV$, $0 \le t < \varepsilon$, for some positive number $\varepsilon$. The restriction that $t$ is not too large is necessary in order to ensure that the line stays within $PD(n)$. Similarly, the geodesic passing through $P$ in the direction of $V$ with respect to the Riemannian metric (5) is expressed by
$$\gamma_{R}(t) = P^{1/2}\exp\bigl(tP^{-1/2}VP^{-1/2}\bigr)P^{1/2}, \tag{12}$$
which does not suffer from the previous restriction. Let the exponential map $\operatorname{Exp}_{P}\colon T_{P}PD(n) \to PD(n)$ be defined accordingly; then $\operatorname{Exp}_{P}(V) = P^{1/2}\exp\bigl(P^{-1/2}VP^{-1/2}\bigr)P^{1/2}$, which is the geodesic (12) evaluated at $t = 1$. Notice that the geodesic (12) is defined for all values of $t$, a property called geodesic completeness. The Hopf-Rinow theorem states that a geodesically complete manifold is also a complete metric space. Moreover, the manifold $PD(n)$ with the Riemannian metric (5) has nonpositive sectional curvature.
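The two geodesics can be compared numerically as below (our helper names; `scipy.linalg.expm` computes the matrix exponential). At $t = 1$ the Euclidean line has already left $PD(n)$ in this example, while the Riemannian geodesic (12) never does.

```python
import numpy as np
from scipy.linalg import expm

def sqrt_spd(P):
    # P^{1/2} for a symmetric positive definite P
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(w)) @ V.T

def euclidean_geodesic(P, V, t):
    # gamma(t) = P + t V; stays in PD(n) only for t small enough
    return P + t * V

def riemannian_geodesic(P, V, t):
    # Geodesic (12): gamma(t) = P^{1/2} exp(t P^{-1/2} V P^{-1/2}) P^{1/2}, defined for all t
    Ph = sqrt_spd(P)
    Phi = np.linalg.inv(Ph)
    return Ph @ expm(t * Phi @ V @ Phi) @ Ph

P = np.array([[2.0, 0.3], [0.3, 1.0]])
V = np.array([[0.0, 1.0], [1.0, -4.0]])     # a symmetric tangent direction
print(np.linalg.eigvalsh(euclidean_geodesic(P, V, 1.0)))   # one eigenvalue is negative
print(np.linalg.eigvalsh(riemannian_geodesic(P, V, 1.0)))  # both eigenvalues are positive
```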

3. The Extended Hamiltonian Algorithm for the Solution of AREs

3.1. Problem Formulation

The infinite-time state regulator problem is posed for the linear time-invariant system
$$\dot{x}(t) = Ax(t) + Bu(t), \qquad x(0) = x_{0}, \tag{14}$$
where $x(t) \in \mathbb{R}^{n}$ and $u(t) \in \mathbb{R}^{m}$ are the state and the control vector, respectively, $A$ and $B$ are constant matrices of appropriate dimensions, and the terminal time is $t_{f} = \infty$. The problem is to find an optimal control $u(t)$ satisfying (14) while minimizing the quadratic performance index
$$J = \frac{1}{2}\int_{0}^{\infty}\bigl(x^{T}Qx + u^{T}Ru\bigr)\,dt, \tag{15}$$
where the design parameters $Q$ and $R$ are both positive definite symmetric matrices. It has been proved that the optimal control of the cost function (15) is uniquely given by [24]
$$u^{*}(t) = -R^{-1}B^{T}Px(t),$$
and the optimal value of the quadratic performance index is
$$J^{*} = \frac{1}{2}x_{0}^{T}Px_{0},$$
where $P$ is a positive definite symmetric matrix satisfying the ARE, that is,
$$A^{T}P + PA - PBR^{-1}B^{T}P + Q = 0. \tag{18}$$

It is assumed that the pair $(A, B)$ is completely controllable and that the pair $(A, C)$ is completely observable, where $Q = C^{T}C$. Under these assumptions, (18) is known to have a unique positive definite solution $P$ [25]. There are, of course, other solutions to (18), but for the algorithm presented in this paper the emphasis will be on computing the positive definite one. Moreover, the optimal trajectory satisfies the following optimal asymptotically stable closed-loop system:
$$\dot{x}(t) = (A - BK)x(t),$$
where the state-feedback gain is given by
$$K = R^{-1}B^{T}P.$$
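For later comparison it is useful to have a reference solution. SciPy's `solve_continuous_are` returns the stabilizing positive definite solution of (18); the sketch below (with small illustrative matrices of our own choosing, not the data of the paper's examples) checks the ARE residual, forms the gain $K = R^{-1}B^{T}P$, and verifies that the closed-loop matrix $A - BK$ is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data only (not the matrices used in the paper's examples)
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)

P = solve_continuous_are(A, B, Q, R)            # positive definite solution of (18)
K = np.linalg.solve(R, B.T @ P)                 # state-feedback gain K = R^{-1} B^T P

residual = A.T @ P + P @ A - P @ B @ K + Q      # left-hand side of (18)
print(np.linalg.norm(residual))                 # close to zero
print(np.linalg.eigvals(A - B @ K))             # all eigenvalues in the open left half-plane
```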

In order to solve (18), our purpose is to seek a matrix $P$ on the manifold $PD(n)$ such that the matrix $PBR^{-1}B^{T}P - A^{T}P - PA$ is as close as possible to the given matrix $Q$.

3.2. Distances and Gradients

In fact, the idea of formulating an equation on a manifold as a minimal-distance problem was suggested previously in [26]. Hence we use the two distance functions (7) and (8), and the difference between $PBR^{-1}B^{T}P - A^{T}P - PA$ and $Q$ is measured by the Euclidean cost function (21) and the Riemannian cost function (22), respectively.

Let $f(P)$ be a scalar function of the matrix $P \in PD(n)$. The Riemannian gradient of $f$ at $P$, denoted by $\widetilde{\nabla}_{P}f$, is the map from $PD(n)$ to $Sym(n)$ that satisfies $\langle \widetilde{\nabla}_{P}f, X\rangle_{P} = \langle \nabla_{P}f, X\rangle$ for every $X \in T_{P}PD(n)$, where $\nabla_{P}f$ is the Euclidean gradient of $f$ at $P$. It follows from the inner product (5) that [27]
$$\widetilde{\nabla}_{P}f = P\,(\nabla_{P}f)\,P. \tag{23}$$
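Relation (23) converts a Euclidean gradient into the Riemannian gradient associated with the metric (5). The sketch below (our own helper and variable names, with a random symmetric "gradient" standing in for $\nabla_{P}f$) verifies the defining identity $\langle \widetilde{\nabla}_{P}f, X\rangle_{P} = \langle \nabla_{P}f, X\rangle$ numerically.

```python
import numpy as np

def riemannian_gradient(P, egrad):
    # Relation (23): tilde-grad f = P (grad f) P for a symmetric Euclidean gradient
    return P @ egrad @ P

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
P = S @ S.T + 3.0 * np.eye(3)                       # a random point on PD(3)
G = rng.standard_normal((3, 3)); G = (G + G.T) / 2  # a symmetric "Euclidean gradient"
X = rng.standard_normal((3, 3)); X = (X + X.T) / 2  # a symmetric tangent direction

Pinv = np.linalg.inv(P)
rgrad = riemannian_gradient(P, G)
lhs = np.trace(Pinv @ rgrad @ Pinv @ X)             # <rgrad, X>_P under the metric (5)
rhs = np.trace(G.T @ X)                             # <G, X> under the Frobenius product (1)
print(np.isclose(lhs, rhs))                         # True
```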

Next, we state some known facts, which are very helpful in computing $\nabla_{P}f$ and $\widetilde{\nabla}_{P}f$.

Lemma 1 (see [28]). Let $A$ and $B$ denote two constant matrices and let $X$ and $Y$ denote matrix functions. One has the following properties:
(1) $\mathrm{d}(X + Y) = \mathrm{d}X + \mathrm{d}Y$,
(2) $\mathrm{d}(XY) = (\mathrm{d}X)Y + X\,\mathrm{d}Y$,
(3) $\mathrm{d}(AXB) = A(\mathrm{d}X)B$,
(4) $\mathrm{d}(X^{T}) = (\mathrm{d}X)^{T}$,
(5) $\mathrm{d}\operatorname{tr}(X) = \operatorname{tr}(\mathrm{d}X)$,
where $\mathrm{d}$ denotes the differential of a matrix (scalar) function.

Lemma 2 (see [28]). Let $f$ be a scalar function of the matrix $X$. If $\mathrm{d}f = \operatorname{tr}(W\,\mathrm{d}X)$ holds, then
$$\nabla_{X}f = W^{T},$$
where $\nabla_{X}f$ denotes the gradient of $f$ and $\mathrm{d}f$ denotes the differential of $f$.

Theorem 3. Let $f$ be a scalar function of the matrix $P$; if $f$ is the Euclidean cost function (21), then its Euclidean gradient $\nabla_{P}f$ is given by (27).

Proof. Since $PBR^{-1}B^{T}P - A^{T}P - PA - Q$ is symmetric, expanding the differential of $f$ by Lemma 1 and identifying the gradient by Lemma 2 shows that (27) is valid.
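Since the explicit expressions (21) and (27) are not reproduced here, the following sketch works with one natural Euclidean cost of this kind, $f(P) = \|PBR^{-1}B^{T}P - A^{T}P - PA - Q\|_{F}^{2}$ (our own choice; it vanishes exactly at solutions of (18)). Its Euclidean gradient is obtained by the route of the proof above, Lemma 1 for the differential and Lemma 2 for the identification, and a central-difference check guards against sign and transpose mistakes.

```python
import numpy as np

# Assumed Euclidean cost (our choice, not necessarily the paper's (21)):
#   f(P) = || P M P - A^T P - P A - Q ||_F^2,   with M = B R^{-1} B^T
def residual(P, A, M, Q):
    return P @ M @ P - A.T @ P - P @ A - Q

def cost(P, A, M, Q):
    F = residual(P, A, M, Q)
    return np.trace(F @ F)

def euclidean_gradient(P, A, M, Q):
    # From Lemmas 1 and 2:  grad f = 2 (M P F + F P M - A F - F A^T),  F = residual(P)
    F = residual(P, A, M, Q)
    return 2.0 * (M @ P @ F + F @ P @ M - A @ F - F @ A.T)

# Central-difference check of the gradient in a random symmetric direction
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
Q, R = np.eye(n), np.eye(1)
M = B @ np.linalg.solve(R, B.T)
S = rng.standard_normal((n, n)); P = S @ S.T + n * np.eye(n)
E = rng.standard_normal((n, n)); E = (E + E.T) / 2

eps = 1e-6
fd = (cost(P + eps * E, A, M, Q) - cost(P - eps * E, A, M, Q)) / (2 * eps)
print(np.isclose(fd, np.trace(euclidean_gradient(P, A, M, Q) @ E)))  # True
```

Relation (23) then converts this Euclidean gradient into the corresponding Riemannian gradient, `P @ euclidean_gradient(P, A, M, Q) @ P`; for the paper's Riemannian cost (22), that role is played by Theorem 4 below.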

Theorem 4. Let $f$ be a scalar function of the matrix $P$; if $f$ is the Riemannian cost function (22), then its Riemannian gradient $\widetilde{\nabla}_{P}f$ is given by (31).

Proof. Since $PBR^{-1}B^{T}P - A^{T}P - PA - Q$ is symmetric, it follows from Lemma 1 and Lemma 2 that the Euclidean gradient of (22) is given by (33). Combining (23) and (33), (31) is established. This completes the proof of Theorem 4.

3.3. The Extended Hamiltonian Algorithm on

The Euclidean gradient algorithm (EGA) and the Riemannian gradient algorithm (RGA) are first-order learning algorithms and inherit some drawbacks that are well known for gradient-based optimization, for instance, the slow-learning phenomenon in the presence of plateaus in the error surface. In order to overcome this difficulty, Fiori generalized the EGA and RGA and proposed the extended Hamiltonian algorithm on manifolds. In particular, on the manifold $PD(n)$, the extended Hamiltonian algorithm can be expressed as the second-order system (34) of [18], where $W \in PD(n)$ is the learning variable, $V$ denotes the instantaneous learning velocity, $f$ denotes a cost function, $\widetilde{\nabla}_{W}f$ denotes the Riemannian gradient of $f$, and the constant $\mu > 0$ denotes a viscosity term. The function $K_{W}(V)$ denotes the kinetic energy of the particle at a point $W$ and has the corresponding expression $K_{W}(V) = \frac{1}{2}\langle V, V\rangle_{W}$ under the metric (5).

System (34) may be implemented by the discrete-time update (35) of [19], where $\eta$ is a small positive number known as the learning rate. The effectiveness of the algorithm is ensured if and only if the viscosity constant $\mu$ satisfies condition (36), where $\lambda_{\max}$ denotes the largest eigenvalue of the Hessian matrix of the cost function $f$. See [19] for more details.
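Equations (35) and (36) are not reproduced above, so the sketch below shows only the general shape of such a discrete update under our own simplifications: the velocity is damped by the viscosity term and pushed against the Riemannian gradient, and the point is then retracted along the geodesic (12) so that every iterate remains on $PD(n)$. A complete treatment, as in [19], would also transport the velocity to the new tangent space and tie the admissible $\mu$ and $\eta$ to condition (36); this sketch omits both.

```python
import numpy as np
from scipy.linalg import expm

def sqrt_spd(P):
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(w)) @ V.T

def eha_step(P, V, rgrad, eta, mu):
    # One damped second-order (momentum) step on PD(n): a simplified sketch,
    # not the exact update (39) of the paper.
    V_new = (1.0 - mu * eta) * V - eta * rgrad       # explicit Euler on dV/dt = -mu V - grad
    Ph = sqrt_spd(P)
    Phi = np.linalg.inv(Ph)
    P_new = Ph @ expm(eta * Phi @ V_new @ Phi) @ Ph  # move along the geodesic (12)
    return P_new, V_new
```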

Since the Riemannian distance is the intrinsic (shortest-path) distance on the manifold $PD(n)$, we take the Riemannian cost function (22) as the cost function for solving (18); that is, we minimize (22) over $P \in PD(n)$.

In order to apply (35) to solve (18), we plug the Riemannian gradient (31) into (34) and obtain the final extended Hamiltonian algorithm on the manifold $PD(n)$, which is implemented by the discrete update (39), where $\eta$ denotes the learning rate and the constant $\mu > 0$ denotes a viscosity term.

Now, based on the above discussion, we give the iterative formula of the EHA for the numerical solution of (18).

Algorithm 5 (EHA). For any $P$ belonging to the considered manifold $PD(n)$, the extended Hamiltonian algorithm is given by the following.
(1) Set $P_{0} \in PD(n)$ as an initial input matrix and $V_{0}$ as an initial direction at $P_{0}$, and then choose a desired tolerance $\varepsilon > 0$.
(2) Compute the cost $f(P_{k})$.
(3) If $f(P_{k}) < \varepsilon$, then stop.
(4) Update $P_{k}$ and $V_{k}$ by (39) and return to step (2).
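To make Algorithm 5 concrete, here is a self-contained sketch that strings the previous pieces together. It is not the paper's exact method: the cost is the squared-Frobenius residual introduced after Theorem 3 (rather than the Riemannian cost (22)), the gradient is converted with (23), the update is the simplified momentum-with-retraction step sketched above, and the step size $\eta$ and viscosity $\mu$ are illustrative values rather than ones derived from (36). The result is compared against SciPy's `solve_continuous_are`.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

def sqrt_spd(P):
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(w)) @ V.T

def residual(P, A, M, Q):
    # Left-hand side of (18) rearranged: P M P - A^T P - P A - Q, with M = B R^{-1} B^T
    return P @ M @ P - A.T @ P - P @ A - Q

def euclidean_gradient(P, A, M, Q):
    F = residual(P, A, M, Q)
    return 2.0 * (M @ P @ F + F @ P @ M - A @ F - F @ A.T)

def eha_solve_are(A, B, Q, R, eta=0.02, mu=10.0, tol=1e-8, max_iter=30000):
    n = A.shape[0]
    M = B @ np.linalg.solve(R, B.T)
    P = np.eye(n)                                    # step (1): initial point on PD(n)
    V = np.zeros((n, n))                             # step (1): initial velocity
    for _ in range(max_iter):
        if np.linalg.norm(residual(P, A, M, Q)) < tol:   # steps (2)-(3)
            break
        rgrad = P @ euclidean_gradient(P, A, M, Q) @ P   # relation (23)
        V = (1.0 - mu * eta) * V - eta * rgrad           # step (4): damped momentum update
        Ph = sqrt_spd(P)
        Phi = np.linalg.inv(Ph)
        P = Ph @ expm(eta * Phi @ V @ Phi) @ Ph          # retract along the geodesic (12)
    return P

# Illustrative data only (same small example as in Section 3.1's sketch, not the paper's)
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B, Q, R = np.eye(2), np.eye(2), np.eye(2)

P_eha = eha_solve_are(A, B, Q, R)
P_ref = solve_continuous_are(A, B, Q, R)
M = B @ np.linalg.solve(R, B.T)
print(np.linalg.norm(residual(P_eha, A, M, Q)))   # final ARE residual
print(np.max(np.abs(P_eha - P_ref)))              # discrepancy from SciPy's reference solution
```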

4. Simulations

In this section, we give two computer simulations to demonstrate the effectiveness and performance of the proposed algorithm.

Example 1. First consider a second-order algebraic Riccati equation. The coefficient matrices $A$, $B$, $Q$, and $R$ used in this simulation satisfy the condition for a unique positive definite solution.
To study the performance differences among the EHA, the EGA, the RGA, and the NSIM, these algorithms are applied to solve (18). In this paper, the optimal step size, which corresponds to the best convergence speed of each algorithm, is obtained by numerical experiments. For instance, it can be found that the number of iterations decreases gradually as the step size grows toward its optimal value, while the algorithm diverges once the step size becomes too large. Thus the optimal step size is convenient to obtain experimentally. Based on (36), the constant $\mu$ depends on the selection of the initial value; that is to say, different initial values correspond to different $\mu$, and hence the best $\mu$ is also obtained by experiment.
Figure 1 shows the behavior of the cost function. Clearly, in the early stages of learning, the EHA decreases much faster than the EGA, the RGA, and the NSIM with the same learning step size. The result shows that the EHA has the fastest convergence speed among the four algorithms and needs 28 iterations to obtain the numerical solution of the ARE. By contrast, the RGA reaches the given error tolerance after 36 iterations, and the slowest among them is the EGA with 88 steps. Figure 2 shows that the kinetic energy function finally converges to 0.

Example 2. A third-order ARE is considered following the same procedure as in Example 1.
As Figure 3 shows, the convergence speed of the EHA is still the fastest among the four algorithms, and we again obtain the numerical solution of the ARE.

Moreover, the kinetic energy function also converges to 0, as shown in Figure 4.

5. Summary

In this paper, a second-order learning method called the extended Hamiltonian algorithm is recalled from the literature and applied to compute the numerical solution of the algebraic Riccati equation. The convergence speed of the proposed algorithm is compared with those of the EGA, the RGA, and the NSIM using two simulation examples. It is shown that the convergence speed of the extended Hamiltonian algorithm is the fastest among these algorithms.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61179031 and 10932002).