Abstract

This note focuses on finite horizon H2/H∞ control for stochastic nonlinear jump systems with partially unknown transition probabilities. We first derive a nonlinear stochastic bounded real lemma and a nonlinear optimal regulator result for the considered system. A sufficient condition and a necessary condition for the solvability of the H2/H∞ control are then given, respectively, in terms of four cross-coupled Hamilton–Jacobi equations (HJEs). Numerical examples show the effectiveness of the obtained results.

1. Introduction

H∞ control synthesis is one of the important parts of control theory [15]. The synthesis aims to seek a suitable controller to suppress the effect of the exogenous disturbance on the dynamic system below a given level [6–9]. However, it is known that H∞ control can guarantee good robustness of the designed system but cannot optimize the closed loop to achieve perfect performance. Because of this, the linear quadratic control (H2 control) is selected to compensate for this lack of optimality, and combining the two control methods becomes a natural idea to reach a balance. H2/H∞ control not only represses the influence of the disturbance but also minimizes the energy cost under the disturbance input [10–12]. So far, H∞ control and H2/H∞ control have received continuous attention [13–17]. In particular, for stochastic systems, bounded real lemmas in finite and infinite horizon have been derived for linear models by the coupled Riccati equations method [14, 15], and the coupled Hamilton–Jacobi equations method has been applied to nonlinear models [16, 17]. It can be seen that a local solution to the primal nonlinear H2/H∞ control exists if its linearized H2/H∞ control problem is solvable. However, most of the existing works on stochastic H2/H∞ control are concerned with jump linear systems, while little attention has been paid to nonlinear systems with Markov jumps.

As is well known, Markov jump systems have been widely studied both in theory and in engineering over the past decades [18–24]. The main motivation is that such models have numerous applications in mechanics, traffic, power, and many other fields in industry and finance. When encountering system failures, sudden environmental changes, and external noise, the structure and parameters of the dynamics mutate; the process of the state hopping from one mode to another can be modeled as Markov jumps. The transition probabilities of a jump process are the crucial factors that determine the behavior of the system [25, 26]. Normally, the elements of the transition probability matrix are assumed to be fully known [14, 15]. However, in some practical cases, the transition probabilities may not be fully known, which has inspired scholars to study Markov jump systems with partially known transition probabilities [27–37]. For instance, Zhang and Boukas considered stability and stabilization of Markovian jump systems with partially unknown transition probabilities [27]. In addition, sliding-mode control, H∞ control, H2/H∞ control, and filtering subject to partially unknown transition probabilities have gained considerable research interest [29–35]. Nevertheless, to the best of our knowledge, there is as yet no literature dealing with H2/H∞ control for nonlinear jump systems with partially unknown transition probabilities, which constitutes the theoretical significance of this note. More importantly, to obtain the robust controller, a nonlinear stochastic bounded real lemma is derived.

The remainder of this paper is arranged as follows. The second part provides some useful definitions and lemmas. In the third part, for nonlinear jump systems with partially unknown transition probabilities, a sufficient condition and a necessary condition for the finite horizon H2/H∞ control are obtained, respectively. The fourth part gives numerical illustrative examples. Conclusions are drawn in the fifth part.

Notations used in this study are as follows. R^n is the n-dimensional Euclidean space; A > 0 (A ≥ 0): A is a positive definite (positive semidefinite) matrix; E[·] is the mathematical expectation; A^T is the transpose of a matrix A; A^{-1} is the inverse of a nonsingular matrix A; R^{m×n} is the set of all m × n real matrices; |·| is the Euclidean vector norm; (Ω, F, {F_t}_{t≥0}, P) is the complete filtered probability space with the filtration {F_t}_{t≥0} satisfying the usual conditions, i.e., it is right continuous and F_0 contains all P-null sets; L²_F([0, T]; R^k) is the space of all nonanticipative stochastic processes y(t) with respect to the increasing σ-algebras {F_t}_{t≥0} satisfying E ∫₀^T |y(t)|² dt < ∞; C^{2,1}: the class of all functions V(x, t) that are twice continuously differentiable with respect to x and once continuously differentiable with respect to t, except possibly at the point x = 0; I is the identity matrix.

2. Preliminaries

Let us consider the following stochastic nonlinear jump system described by the Itô-type equation: where , , , and stand for the system state, penalty output, control input, and exogenous disturbance signal, respectively. is a one-dimensional standard Wiener process defined on the filtered probability space with . The stochastic mode jump process is a continuous-time discrete-state Markov process with values in a finite space and is assumed to be independent of . The transition probabilities of the jump process are denoted by where , , and represents the transition rate from mode at time to mode at time , and for all . So, the transition probability matrix is defined by

In this paper, we suppose that the transition probabilities are partly unknown. For instance, for , the transition rate matrix is given by

In the above, "?" is used to denote an unknown element. Furthermore, , we set , where

In addition, if , it can be described as , , where is the th known element and the superscript denotes the th row of matrix . Then, set .
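The bookkeeping above, splitting each row of the transition rate matrix into its known and unknown index sets and summing the known rates, can be sketched numerically. The rates below are hypothetical placeholders (the paper's values are not reproduced here), chosen only so that the known entries are consistent with a generator matrix (nonnegative off-diagonal entries, rows summing to zero); `np.nan` plays the role of "?".

```python
import numpy as np

# Hypothetical 3-mode transition rate matrix; np.nan marks an unknown "?" entry.
# A valid generator has lambda_ij >= 0 for i != j and each row summing to zero.
Lam = np.array([
    [-1.2,    0.5,    0.7],
    [np.nan, -0.9, np.nan],
    [0.3, np.nan, np.nan],
])

def split_row(Lam, i):
    """Return (known indices, unknown indices) for row i of the rate matrix."""
    n = Lam.shape[0]
    known = [j for j in range(n) if not np.isnan(Lam[i, j])]
    unknown = [j for j in range(n) if np.isnan(Lam[i, j])]
    return known, unknown

def known_rate_sum(Lam, i):
    """Sum of the known entries in row i (the known-rate sum used in the text)."""
    known, _ = split_row(Lam, i)
    return sum(Lam[i, j] for j in known)

known0, unknown0 = split_row(Lam, 0)
print(known0, unknown0)        # row 0 is fully known
print(known_rate_sum(Lam, 1))  # only the diagonal of row 1 is known here
```

This mirrors Remark 1: a row whose unknown index set is empty is fully known, and a row whose known index set is empty is fully unknown.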

Remark 1. When and , the transition rates in the stochastic process are fully unknown; and mean that the transition rates are fully known.
For convenience, we write throughout the paper, and all coefficients in (1) are assumed to be Borel measurable. Meanwhile, assume that for .
Now, we introduce the following definitions.

Definition 1. Given γ > 0, a feedback control law is called the finite horizon H2/H∞ control of system (1) if the following conditions are satisfied:
(i) For any nonzero , , and the trajectory of the resulting closed-loop system (1) starting from and , we always have
(ii) When the worst-case disturbance is applied to (1), minimizes the quadratic performance , .

Definition 2. (see [18]). For each , we have an operator associated with (1) given by

Definition 3. (see [16]). We define two extreme value functions and associated with (1) as follows: in which the performances are as follows:

Remark 2. The following results are obvious:

Remark 3. Let the perturbation operator be denoted by with the norm
It can be checked that (6) is equivalent to .
Next, we state the following lemmas, which will be used later.

Lemma 1. For a given level γ > 0, with an initial state and , consider the following stochastic perturbed system with Markov jumps: If there exists a function associated with system (12) satisfying the following HJE: then holds.

Proof. Notice that for any symmetric matrix (), we have . Applying the generalized Itô formula, one gets Then, it can be deduced that where From (14)–(16), it follows that is the corresponding worst-case disturbance, which yields that That is, This completes the proof.
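The completing-the-square step behind the worst-case disturbance in this proof can be checked symbolically in a scalar setting. Here `Vx` stands for the derivative of the value function and `b` for the disturbance coefficient; both are placeholders for the elided system data, and the 1/(2γ²) scaling follows the standard H∞ completion for this quadratic form.

```python
import sympy as sp

# Scalar sketch: the disturbance-dependent part of the Hamiltonian in the
# H-infinity analysis is Vx*b*w - gamma^2 * w^2 (symbols are placeholders).
Vx, b, w = sp.symbols('V_x b w', real=True)
gamma = sp.symbols('gamma', positive=True)

expr = Vx * b * w - gamma**2 * w**2

# Completing the square exposes the maximizing (worst-case) disturbance w*.
w_star = Vx * b / (2 * gamma**2)
completed = -gamma**2 * (w - w_star)**2 + (Vx * b)**2 / (4 * gamma**2)

assert sp.simplify(expr - completed) == 0          # the identity holds
sols = sp.solve(sp.diff(expr, w), w)               # stationary point in w
assert sp.simplify(sols[0] - w_star) == 0          # ... is exactly w*
print("worst-case disturbance w* =", w_star)
```

Since the squared term enters with a negative sign, w* is indeed a maximizer, which is why it plays the role of the worst-case disturbance.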

Lemma 2. For a given level γ > 0, with an initial state , , consider the following nonlinear stochastic controlled system with Markov jumps: If the following HJE admits a nonnegative solution , then we have with the optimal control where

Proof. Integrating and taking expectations in , for any , we get By Itô's formula and the completing-the-square technique combined with (24), we obtain in which the inequality holds since , , and . So, from (25), the lemma follows with the optimal control (22).
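The completing-the-square step for this quadratic-cost part can likewise be verified symbolically in a scalar setting with a unit control weight; `Vx` and `b` are again placeholders for the elided system data.

```python
import sympy as sp

# Scalar sketch: with unit control weight, the control-dependent terms of
# the Hamiltonian are Vx*b*u + u^2 (symbols are placeholders).
Vx, b, u = sp.symbols('V_x b u', real=True)

expr = Vx * b * u + u**2

# Completing the square exposes the minimizing (optimal) control u*.
u_star = -Vx * b / 2
completed = (u - u_star)**2 - (Vx * b)**2 / 4

assert sp.simplify(expr - completed) == 0          # the identity holds
sols = sp.solve(sp.diff(expr, u), u)               # stationary point in u
assert sp.simplify(sols[0] - u_star) == 0          # ... is exactly u*
print("optimal control u* =", u_star)
```

Here the squared term enters with a positive sign, so u* is a minimizer, matching the role of the optimal control in the lemma.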

3. Main Results

In this part, a sufficient condition for the solvability of the H2/H∞ control of nonlinear jump system (1) is given in Theorem 1.

Theorem 1. For a given γ > 0, consider the following four cross-coupled HJEs:

If there exist solutions with and for (26)–(29), then the finite horizon H2/H∞ control of the nonlinear Markov jump system has a pair of solutions with , , and .

Proof. Notice the following transformations in (26)–(29): Substituting with defined by (29) into (1), we have Applying Lemma 1 to system (31), we conclude and is the worst-case disturbance. Meanwhile, substituting into (1), one obtains Minimizing under the constraint (33) is a standard nonlinear quadratic optimal control problem. By Lemma 2, achieves its minimum at , and . By Definition 1, the theorem is proved.
Theorem 2 below offers a necessary condition for the H2/H∞ control of system (1).

Theorem 2. For a given γ > 0, , consider system (1). If the finite horizon H2/H∞ control of the nonlinear stochastic jump system has solutions satisfying the following terms: then and are the solutions of the four cross-coupled HJEs (26)–(29).

Proof. Substituting into (1), one obtains (31). Hence, (6) holds because solves the H2/H∞ control. By Definition 3, Lemma 1, and Lemma 4.1 of [17], we assert that solves the following HJE: By inequality (15), apparently, This, together with (35) and Definition 3, gives for each . From (37), we see that , and is the worst-case disturbance. So, . Then, substituting into (1), (33) can be obtained. Since minimizes under the constraint (33), we can infer that is the optimal solution. Next, by the stochastic dynamic programming principle, we can verify that solves the following HJE: that is, In this step, the generalized Hamiltonian function is defined as follows: with From (41), we get . Substituting the above into (40) and considering Definition 3, it follows that solves the following HJE: Combining (35) and (43), the desired result is obtained.

Remark 4. It should be noted that HJEs (26)–(29) are hard to solve in general. To obtain an analytic expression of the controller, the Takagi–Sugeno fuzzy model is often used, which can approximate nonlinear systems effectively. The method of solving HJEs (26)–(29) merits further study.

4. Examples

In this part, we give two examples to illustrate the usefulness of the above results.

Example 1. Consider the following one-dimensional Markov jump system with three modes: We assume that the elements of the transition probability matrix are fully known: Set . For a given γ, the corresponding HJEs have solutions with , , , , , and . According to Theorem 1, the H2/H∞ controller of (44)–(46) can be chosen as , , , , and .
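A sample path of the three-mode jump process driving a system like (44)–(46) can be simulated with a standard Gillespie-style scheme. Since the example's numerical rate values are not reproduced here, the generator below is a hypothetical placeholder with rows summing to zero; the resulting mode path is what would switch the Itô dynamics between the three modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully known 3-mode generator (placeholder values, not the
# paper's); off-diagonal entries nonnegative, rows summing to zero.
Lam = np.array([[-2.0,  1.0,  1.0],
                [ 0.5, -1.0,  0.5],
                [ 1.0,  1.0, -2.0]])

def sample_ctmc(Lam, r0, T, rng):
    """Gillespie-style sample path of the mode process r(t) on [0, T]:
    hold in mode r for an Exp(-Lam[r, r]) time, then jump to j != r with
    probability Lam[r, j] / (-Lam[r, r])."""
    t, r, path = 0.0, r0, [(0.0, r0)]
    while True:
        rate = -Lam[r, r]                     # total exit rate of mode r
        t += rng.exponential(1.0 / rate)      # holding time in mode r
        if t >= T:
            break
        probs = Lam[r].copy()
        probs[r] = 0.0                        # no self-transitions
        r = int(rng.choice(len(probs), p=probs / probs.sum()))
        path.append((t, r))
    return path

path = sample_ctmc(Lam, 0, 5.0, rng)
print(path[:3])
```

Each tuple is a (jump time, new mode) pair; an Euler–Maruyama loop for the state equation would read the current mode off this path at every time step.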

Example 2. Consider the following one-dimensional Markov jump system with three modes: The elements of the transition probability matrix are supposed to be partially unknown: where "?" represents an inaccessible element. Set and . It can be found that , , , , , and solve the corresponding HJEs. According to Theorem 1, the H2/H∞ controller of (48)–(50) can be given by , , , , , and .
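When some rates are marked "?", any analysis must hold for every admissible completion of the generator row. A simple way to probe this numerically is to sample completions that respect the generator constraints. The row below is a hypothetical placeholder (the example's values are not reproduced here), and the sketch assumes the unknown "?" entries are off-diagonal while the diagonal is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical row of a 3-mode generator: the diagonal rate is known,
# the two off-diagonal rates are unknown (np.nan plays the role of "?").
row = np.array([np.nan, -1.5, np.nan])

def sample_completion(row, rng):
    """Draw one admissible completion of a generator row: the unknown
    off-diagonal rates are nonnegative and the full row sums to zero.
    Assumes the diagonal entry is among the known entries."""
    row = row.copy()
    unknown = [j for j in range(len(row)) if np.isnan(row[j])]
    budget = -row[~np.isnan(row)].sum()          # mass the unknowns must share
    weights = rng.dirichlet(np.ones(len(unknown)))  # random split of the budget
    for k, j in enumerate(unknown):
        row[j] = budget * weights[k]
    return row

completed = sample_completion(row, rng)
print(completed, completed.sum())
```

Repeating this draw and checking a candidate controller against each sampled completion gives a cheap Monte Carlo sanity check that a design is robust to the unknown rates, complementing the worst-case guarantee of Theorem 1.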

5. Conclusions

This note dealt with the finite horizon H2/H∞ control for stochastic nonlinear Markov jump systems with partially unknown transition probabilities. Based on four cross-coupled Hamilton–Jacobi equations, a sufficient condition and a necessary condition for the existence of the H2/H∞ control were derived, respectively, which can be regarded as a generalization of [16] to nonlinear jump models. The validity of the results has been demonstrated by two numerical examples.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61673013), the Natural Science Foundation of Shandong Province (ZR2016JL022), and the Key Research and Development Plan of Shandong Province (2019GGX101052).