Abstract
This paper investigates the stochastic finite-time stabilization and control problem for a family of linear discrete-time systems over networks subject to packet loss, parametric uncertainties, and time-varying norm-bounded disturbances. First, the dynamic model under study is described: when the packet dropout is modeled as a discrete-time homogeneous Markov process, the class of discrete-time linear systems with packet loss can be regarded as a Markovian jump system. Based on a Lyapunov function approach, sufficient conditions are established for the resulting closed-loop discrete-time system with Markovian jumps to be stochastically finite-time bounded, and state feedback controllers are then designed to guarantee stochastic finite-time stabilization of this class of stochastic systems. The stochastic finite-time boundedness criteria are formulated as linear matrix inequalities with a fixed parameter. As an auxiliary result, sufficient conditions for the robust stochastic stabilization of the class of linear systems with packet loss are also given. Finally, simulation examples are presented to illustrate the validity of the developed scheme.
1. Introduction
Networked control systems (NCSs) are feedback control systems whose control loops are closed via digital communication channels. Compared with traditional point-to-point controller architectures, the advantages of NCSs include low cost, high reliability, reduced wiring, and easy maintenance [1]. In recent years, NCSs have found successful applications in a broad range of modern scientific areas such as Internet-based control, distributed communication, and industrial automation [2]. However, the insertion of communication channels creates discrepancies between the data records to be transmitted and their remotely received images, which confronts traditional control theory with new challenges. Among these challenges, random communication delay, data packet dropout, and signal quantization are known to be three main causes of instability and performance degradation in controlled networked systems. In view of this, many researchers have studied how to design control systems subject to packet loss, delay, and quantization; see [3–6] and the references cited therein. Among the issues arising from such a framework, packet loss in NCSs is an important one and has received great attention; see [7–15]. Meanwhile, Markovian jump systems are regarded as a special family of hybrid systems and stochastic systems, which are very appropriate for modeling plants whose structure is subject to random abrupt changes; see [16–22] and the references therein.
It is well known that classical Lyapunov theory focuses mainly on the state convergence property of systems over an infinite time interval and, as mentioned above, does not usually specify bounds on the trajectories over a finite interval. However, in many practical applications the main concern is the behavior of the dynamic system over a specified time interval; for instance, large values of the state are not acceptable in the presence of saturations [23]. To characterize this transient performance of controlled dynamics, finite-time stability, or short-time stability, was presented in [24]. Since then, some appealing results have been reported in [25–32]. However, to date and to the best of our knowledge, the problems of stochastic finite-time stability and stabilization of networked control systems with packet loss have not been fully investigated and still remain challenging, although results on systems over networks with packet loss have been reported in the existing literature; see [6–15, 33–36].
Motivated by the above discussion, in this paper we address the stochastic finite-time boundedness (SFTB) problems for linear discrete-time systems over networks with packet dropout, parametric uncertainties, and time-varying norm-bounded disturbances. First, we present the dynamic model under study: when the data packet loss is assumed to be a time-homogeneous Markov process, the class of linear discrete-time systems with packet loss can be regarded as Markovian jump systems. Thus, the class of linear systems investigated can be studied within the theoretical framework of Markovian jump systems. Then, the concepts of stochastic finite-time stability and stochastic finite-time boundedness are introduced, and the problem formulation is given. The main contribution of this paper is to design a state feedback controller which guarantees that the resulting closed-loop discrete-time system with Markovian jumps is SFTB. As an auxiliary result, we also give sufficient conditions on the robust stochastic stabilization of the class of linear systems with packet loss. The SFTB criteria of the class of Markovian jump systems can be addressed in the form of linear matrix inequalities (LMIs) with a fixed parameter.
The rest of this paper is organized as follows. Section 2 is devoted to the dynamic model description and problem formulation. The results on the SFTB are presented in Section 3. Section 4 presents numerical examples to demonstrate the validity of the proposed methodology. Finally, in Section 5, the conclusions are given.
Notation 1. The notation used throughout the paper is fairly standard. $\mathbb{R}^n$, $\mathbb{R}^{n \times m}$, and $\mathbb{N}$ denote the set of $n$-component real vectors, the set of $n \times m$ real matrices, and the set of nonnegative integers, respectively. The superscript $T$ stands for matrix or vector transposition, and $\mathbb{E}\{\cdot\}$ denotes the expectation operator with respect to some probability measure $\mathcal{P}$. In addition, the symbol $*$ denotes the transposed elements in the symmetric positions of a matrix, and $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix. $\lambda_{\min}(P)$ and $\lambda_{\max}(P)$ denote the smallest and the largest eigenvalue of matrix $P$, respectively. The notations $\sup$ and $\inf$ denote the supremum and infimum, respectively. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.
2. Problem Formulation and Preliminaries
Let us consider a linear discrete-time system (LDS) as follows:
$$x(k+1) = \bigl(A + \Delta A(k)\bigr)x(k) + \bigl(B + \Delta B(k)\bigr)u(k) + G\,w(k), \qquad y(k) = C\,x(k), \qquad (2.1)$$
where $x(k) \in \mathbb{R}^n$ is the state, $y(k) \in \mathbb{R}^m$ is the measured output, and $u(k) \in \mathbb{R}^p$ is the control input. The noise signal $w(k)$ satisfies
$$\sum_{k=0}^{N} w^T(k)\,w(k) \le d, \qquad d \ge 0. \qquad (2.2)$$
The matrices $\Delta A(k)$ and $\Delta B(k)$ are uncertain matrices and satisfy
$$\begin{bmatrix} \Delta A(k) & \Delta B(k) \end{bmatrix} = D\,F(k)\begin{bmatrix} E_1 & E_2 \end{bmatrix}, \qquad (2.3)$$
where $F(k)$ is an unknown, time-varying matrix function that satisfies
$$F^T(k)\,F(k) \le I. \qquad (2.4)$$
Due to the packet dropouts of the communication channel during transmission, the packet dropout process of the network can be regarded as a time-homogeneous Markov process $\theta(k)$. Let $\theta(k) = 1$ mean that the packet has been successfully delivered to the decoder, while $\theta(k) = 0$ corresponds to the dropout of the packet. The Markov chain $\theta(k)$ has a transition probability matrix defined by
$$\Pr\{\theta(k+1) = j \mid \theta(k) = i\} = \lambda_{ij}, \qquad i, j \in \{0, 1\}, \qquad (2.5)$$
where $i$ and $j$ are the states of the Markov chain. Without loss of generality, let the failure rate $p = \Pr\{\theta(k+1) = 0 \mid \theta(k) = 1\}$ and the recovery rate $q = \Pr\{\theta(k+1) = 1 \mid \theta(k) = 0\}$ of the channel satisfy $0 < p, q < 1$. It is worth noting that a smaller value of $p$ and a larger value of $q$ indicate a more reliable channel.
Remark 2.1. When the above transition probability matrix satisfies $p + q = 1$, the above two-state Markov process reduces to a Bernoulli process [37].
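To make the dropout model concrete, the following Python sketch (not from the paper) simulates the two-state Markov packet-loss process using the failure rate $p$ and recovery rate $q$ introduced above; the empirical dropout frequency is compared with the stationary value $p/(p+q)$, and setting $p + q = 1$ recovers the Bernoulli case of Remark 2.1.

```python
# Sketch (not from the paper): simulate the two-state Markov packet-dropout
# process theta(k) in {0, 1}, with failure rate p = Pr{theta(k+1)=0 | theta(k)=1}
# and recovery rate q = Pr{theta(k+1)=1 | theta(k)=0}, as introduced above.
import numpy as np

def simulate_dropout(p, q, N, theta0=1, rng=None):
    """Return a length-(N+1) sample path of the dropout process theta(k)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.empty(N + 1, dtype=int)
    theta[0] = theta0
    for k in range(N):
        if theta[k] == 1:                                   # delivered at time k
            theta[k + 1] = 0 if rng.random() < p else 1     # fails with prob. p
        else:                                               # dropped at time k
            theta[k + 1] = 1 if rng.random() < q else 0     # recovers with prob. q
    return theta

p, q, N = 0.1, 0.7, 100_000
theta = simulate_dropout(p, q, N)
print("empirical dropout rate :", 1.0 - theta.mean())
print("stationary dropout rate:", p / (p + q))  # smaller p, larger q -> more reliable
# Remark 2.1: when p + q = 1, theta(k+1) is independent of theta(k) (Bernoulli case).
```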
Consider the control law for the LDS (2.1) in the form
$$u(k) = \theta(k)\,K\,x(k), \qquad (2.6)$$
where $K$ is the control gain matrix to be designed and $\theta(k)$ is the Markov packet dropout process satisfying (2.5). Then, the resulting closed-loop LDS follows as
$$x(k+1) = \bar{A}(k)\,x(k) + G\,w(k), \qquad y(k) = C\,x(k), \qquad (2.7)$$
where $\bar{A}(k) = A + \Delta A(k) + \theta(k)\bigl(B + \Delta B(k)\bigr)K$.
Now, we define two models according to the value of $\theta(k)$. If $\theta(k) = 1$, we define Model 1 at time $k$ as follows:
$$x(k+1) = \bigl(A + \Delta A(k) + (B + \Delta B(k))K\bigr)x(k) + G\,w(k). \qquad (2.8)$$
If $\theta(k) = 0$, we define Model 2 at time $k$ as follows:
$$x(k+1) = \bigl(A + \Delta A(k)\bigr)x(k) + G\,w(k), \qquad (2.9)$$
where the selection of $\theta(k)$ in (2.7) is made according to the model of the system for all $k$; that is to say, if the system is at Model 1, then $\theta(k) = 1$; otherwise, if the system is at Model 2, then $\theta(k) = 0$.
Then, (2.7) can be regarded as a closed-loop LDS with Markovian jumps described by
$$x(k+1) = \tilde{A}_{r_k}(k)\,x(k) + G\,w(k), \qquad y(k) = C\,x(k), \qquad (2.10)$$
where $r_k \in \{1, 2\}$ denotes the mode indicator function, with $\tilde{A}_1(k) = A + \Delta A(k) + (B + \Delta B(k))K$ and $\tilde{A}_2(k) = A + \Delta A(k)$; $r_k = 1$ corresponds to a mode with feedback, and $r_k = 2$ corresponds to a mode without feedback. It is noted that $r_k = 1$ when the system is at Model 1 at time $k$ and $r_k = 2$ when it is at Model 2. The mode transition probabilities of the Markovian jump LDS (2.10) are given by
$$\pi_{ij} = \Pr\{r_{k+1} = j \mid r_k = i\}, \qquad (2.11)$$
where $\pi_{ij} \ge 0$ for all $i, j \in \{1, 2\}$ and $\pi_{i1} + \pi_{i2} = 1$. Here, $r_k = 1$ implies $\theta(k) = 1$, that is, the communication transmission succeeds, and $r_k = 2$ implies $\theta(k) = 0$, that is, the communication dropout occurs. Thus, compared with (2.5), it follows that $\pi_{11} = 1 - p$, $\pi_{12} = p$, $\pi_{21} = q$, and $\pi_{22} = 1 - q$.
Definition 2.2 (stochastic finite-time stability (SFTS)). The closed-loop Markovian jump LDS (2.10) with $w(k) \equiv 0$ is said to be SFTS with respect to $(c_1, c_2, R, N)$, where $0 < c_1 < c_2$, $R$ is a symmetric positive-definite matrix, and $N \in \mathbb{N}$, if
$$\mathbb{E}\{x^T(0)\,R\,x(0)\} \le c_1 \;\Longrightarrow\; \mathbb{E}\{x^T(k)\,R\,x(k)\} < c_2, \qquad \forall k \in \{1, 2, \ldots, N\}. \qquad (2.12)$$
Definition 2.3 (stochastic finite-time boundedness (SFTB)). The closed-loop LDS with Markovian jumps (2.10) is said to be SFTB with respect to $(c_1, c_2, d, R, N)$, where $0 < c_1 < c_2$, $R$ is a symmetric positive-definite matrix, and $N \in \mathbb{N}$, if the relation condition (2.12) holds.
Definition 2.4 (stochastic $H_\infty$ finite-time boundedness). The closed-loop LDS with Markovian jumps (2.10) is said to be SFTB with respect to $(c_1, c_2, d, R, N)$ with an $H_\infty$ performance level $\gamma$, where $0 < c_1 < c_2$, $R$ is a symmetric positive-definite matrix, and $N \in \mathbb{N}$, if the closed-loop LDS with Markovian jumps (2.10) is SFTB with respect to $(c_1, c_2, d, R, N)$ and, under the zero-initial condition, the output satisfies
$$\mathbb{E}\left\{\sum_{k=0}^{N} y^T(k)\,y(k)\right\} < \gamma^2 \sum_{k=0}^{N} w^T(k)\,w(k) \qquad (2.13)$$
for any nonzero $w(k)$ which satisfies (2.2), where $\gamma$ is a prescribed positive scalar.
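As a numerical illustration of Definition 2.3, the sketch below performs a Monte Carlo estimate of $\mathbb{E}\{x^T(k)\,R\,x(k)\}$ for a two-mode jump system of the form (2.10). All matrices and scalars ($A$, $B$, $K$, $G$, $R$, $c_1$, $c_2$, $d$, $p$, $q$) are illustrative placeholders chosen here, not the data of the paper's examples; the script merely reports the largest estimated value over $k = 1, \ldots, N$ so that it can be compared with $c_2$.

```python
# Sketch with made-up data: Monte Carlo check of the bound in Definition 2.3 for
# a two-mode jump system of the form (2.10), where mode 1 = packet delivered
# (feedback active) and mode 2 = packet dropped (no feedback).
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders only -- NOT the matrices of the paper's examples.
A = np.array([[1.02, 0.10], [0.00, 0.97]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.30, -0.80]])          # gain applied when the packet arrives
G = np.array([[0.05], [0.05]])          # disturbance input matrix
R = np.eye(2)
c1, c2, d, N = 1.0, 5.0, 1.0, 50
p, q = 0.1, 0.7                         # failure and recovery rates

A1 = A + B @ K                          # mode 1: with feedback
A2 = A.copy()                           # mode 2: without feedback

def estimate(n_runs=5000):
    quad = np.zeros((n_runs, N + 1))
    for r in range(n_runs):
        x = rng.standard_normal(2)
        x *= np.sqrt(c1 / max(x @ R @ x, 1e-12))   # so that x(0)' R x(0) <= c1
        theta = 1
        for k in range(N + 1):
            quad[r, k] = x @ R @ x
            w = rng.uniform(-1.0, 1.0) * np.sqrt(d / (N + 1))   # total energy <= d
            x = (A1 if theta == 1 else A2) @ x + G[:, 0] * w
            if theta == 1:
                theta = 0 if rng.random() < p else 1
            else:
                theta = 1 if rng.random() < q else 0
    return quad.mean(axis=0)

m = estimate()
print("max_k E{x(k)' R x(k)} for k = 1..N:", m[1:].max(), "   (c2 =", c2, ")")
```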
Lemma 2.5 (see [38]). The linear matrix inequality
$$\begin{bmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{bmatrix} < 0$$
is equivalent to $S_{22} < 0$ and $S_{11} - S_{12}S_{22}^{-1}S_{12}^T < 0$, where $S_{11} = S_{11}^T$ and $S_{22} = S_{22}^T$.
Lemma 2.6 (see [38]). For matrices $Y$, $D$, and $E$ of appropriate dimensions, where $Y$ is a symmetric matrix, the inequality
$$Y + DFE + E^T F^T D^T < 0$$
holds for all matrices $F$ satisfying $F^T F \le I$ if and only if there exists a positive constant $\varepsilon > 0$ such that the inequality
$$Y + \varepsilon DD^T + \varepsilon^{-1}E^T E < 0$$
holds.
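Lemma 2.5 is the standard Schur complement lemma; as a quick numerical sanity check, the following Python sketch (not from the paper) verifies the equivalence on random symmetric block matrices.

```python
# Sketch: numerical check of the Schur-complement equivalence stated in Lemma 2.5.
import numpy as np

rng = np.random.default_rng(1)

def neg_def(M):
    """True if the symmetric part of M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

n, m = 3, 2
for _ in range(200):
    S11 = rng.standard_normal((n, n)); S11 = (S11 + S11.T) / 2
    S22 = rng.standard_normal((m, m)); S22 = (S22 + S22.T) / 2
    S12 = rng.standard_normal((n, m))
    S = np.block([[S11, S12], [S12.T, S22]])
    lhs = neg_def(S)
    # Short-circuit: the Schur complement is formed only when S22 < 0 (invertible).
    rhs = neg_def(S22) and neg_def(S11 - S12 @ np.linalg.solve(S22, S12.T))
    assert lhs == rhs
print("Schur complement equivalence confirmed on 200 random block matrices.")
```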
In this paper, the feedback gain matrices will be designed, for the Markov packet dropout process with failure rate $p$ and recovery rate $q$, to guarantee that the states of the closed-loop Markovian jump LDS (2.10) are SFTB.
3. Main Results
In this section, for the given failure rate $p$ and recovery rate $q$ with $0 < p, q < 1$, we design a state feedback controller that ensures SFTB of the Markovian jump LDS (2.10).
Theorem 3.1. For the given failure rate $p$ and recovery rate $q$ with $0 < p, q < 1$, the closed-loop Markovian jump LDS (2.10) is SFTB with respect to $(c_1, c_2, d, R, N)$ if there exist two scalars, two symmetric positive-definite matrices $P_1$ and $P_2$, and a set of feedback control matrices such that inequalities (3.1)–(3.3) hold for all $i \in \{1, 2\}$.
Proof. Assume that the mode at time $k$ is $i$, taking into account that if $i = 1$ the mode is with feedback, and otherwise, if $i = 2$, the mode is without feedback. Consider a mode-dependent Lyapunov–Krasovskii functional candidate for the Markov jump LDS (2.10) and evaluate its difference along the trajectories of (2.10). Thus, when $i = 1$, it follows that (3.7) holds, where the related matrices are defined accordingly. By Lemma 2.5, it then follows from (3.1) and (3.7) that (3.9) holds. When $i = 2$, taking into account condition (3.2) and proceeding similarly to (3.9), the corresponding inequality (3.10) can be derived. Thus, for all $i \in \{1, 2\}$, (3.11) holds; that is to say, for all $i \in \{1, 2\}$, (3.12) follows. By (3.12), (3.13) is obvious. From (2.2) and (3.13), we have (3.14) and (3.15). On the other hand, for all $k \in \{1, 2, \ldots, N\}$, (3.16) holds. Combining (3.14)–(3.16) and noting condition (3.3), it is obvious that $\mathbb{E}\{x^T(k)\,R\,x(k)\} < c_2$ for all $k \in \{1, 2, \ldots, N\}$. This completes the proof of the theorem.
Theorem 3.2. For the given failure rate $p$ and recovery rate $q$ with $0 < p, q < 1$, the closed-loop Markovian jump LDS (2.10) is SFTB with respect to $(c_1, c_2, d, R, N)$ with an $H_\infty$ performance level $\gamma$ if there exist two scalars, two symmetric positive-definite matrices $P_1$ and $P_2$, and a set of feedback control matrices such that condition (3.3) and inequalities (3.18) and (3.19) hold for all $i \in \{1, 2\}$.
Proof. Noting the structure of conditions (3.18) and (3.19) and applying Lemma 2.5, it follows that conditions (3.1) and (3.2) hold. Therefore, the Markovian jump LDS (2.10) is stochastically finite-time bounded according to Theorem 3.1.
Then, we only need to prove that (2.13) is satisfied under the zero-value initial condition. Assume that the mode at time $k$ is $i$, taking into account that if $i = 1$ the mode is with feedback, and otherwise, if $i = 2$, the mode is without feedback. Let us choose an appropriate Lyapunov function candidate for the Markovian jump LDS (2.10) and denote the associated quantities accordingly. Thus, when $i = 1$, we have (3.23), where the related matrices are the same as above; according to Lemma 2.5, we can then obtain (3.24) from (3.18) and (3.23). When $i = 2$, taking into account conditions (3.19) and (3.21) and proceeding similarly to the above deduction, we can derive that inequality (3.25) holds. Thus, for all $i \in \{1, 2\}$, we can obtain (3.26). According to (3.26), (3.27) is obvious, and from (3.27) we have (3.28). Under the zero-value initial condition and noting the nonnegativity of the Lyapunov function for all $k$, we have (3.29). From (3.29) and noting (2.2), we have (3.30). This completes the proof of the theorem.
Introducing a suitable change of variables and applying Lemmas 2.5 and 2.6, one can obtain from Theorem 3.2 the following result on stochastic finite-time stabilization.
Theorem 3.3. For the given failure rate $p$ and recovery rate $q$ with $0 < p, q < 1$, there exists a state feedback controller such that the closed-loop Markovian jump LDS (2.10) is SFTB with respect to $(c_1, c_2, d, R, N)$ if there exist scalars, two symmetric positive-definite matrices, and a set of feedback control matrices such that inequalities (3.31)–(3.33) hold for all $i \in \{1, 2\}$.
Remark 3.4. It is easy to check that condition (3.33) is guaranteed by imposing conditions (3.34) for all $i \in \{1, 2\}$. It follows that conditions (3.31), (3.32), and (3.34) are not strict LMIs; however, once we fix the scalar parameter, the conditions can be turned into an LMI-based feasibility problem.
Remark 3.5. From the above discussion, the feasibility of the conditions stated in Theorem 3.3 can be turned into an LMI-based feasibility problem with a fixed parameter. Furthermore, this parameter can also be found by an unconstrained nonlinear optimization approach, in which a locally convergent solution can be obtained by using the program fminsearch in the optimization toolbox of MATLAB.
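The search procedure of Remarks 3.4 and 3.5 can be sketched in Python as follows. The function solve_lmis below is a hypothetical placeholder standing in for the LMI feasibility problem of Theorem 3.3 with the scalar parameter fixed (in practice it would be implemented with an LMI solver such as cvxpy, or with the MATLAB LMI toolbox); here it is replaced by a toy surrogate so that the script runs. A coarse grid search locates a feasible range of the parameter, and Nelder–Mead (the Python analogue of fminsearch) refines it locally.

```python
# Sketch (not the paper's code): line search over the fixed scalar parameter of
# Theorem 3.3, in the spirit of Remarks 3.4 and 3.5.
import numpy as np
from scipy.optimize import minimize

def solve_lmis(sigma):
    """Hypothetical placeholder: solve the LMIs of Theorem 3.3 with the scalar
    parameter fixed at `sigma` and return (feasible, gamma).  Replace the toy
    surrogate below with a real LMI solve (e.g., via cvxpy)."""
    if 1.0 <= sigma <= 1.4:                        # pretend feasibility range
        return True, 0.5 + 10.0 * (sigma - 1.15) ** 2
    return False, np.inf

# Step 1: coarse grid search to locate a feasible range of the parameter.
grid = np.linspace(1.0, 2.0, 101)
feasible = [(s, solve_lmis(s)[1]) for s in grid if solve_lmis(s)[0]]
s_best, gamma_best = min(feasible, key=lambda t: t[1])

# Step 2: local refinement with Nelder-Mead (the analogue of MATLAB's fminsearch).
res = minimize(lambda s: solve_lmis(s[0])[1], x0=[s_best], method="Nelder-Mead")
print("best parameter value:", res.x[0])
print("optimal gamma       :", res.fun)
```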
Remark 3.6. If a feasible solution can be found with the fixed parameter equal to one, then, by the above discussion, the designed controller ensures both stochastic finite-time boundedness and robust stochastic stabilization of the family of networked control systems.
4. Numerical Examples
In this section, we present two examples to illustrate the proposed methods.
Example 4.1. Consider a Markovian jump LDS (2.10) with parameters given as follows:
where $F(k)$ satisfies $F^T(k)F(k) \le I$ for all $k \ge 0$. Moreover, we assume given values of the failure rate $p$ and the recovery rate $q$.
Then, for the chosen design parameters, Theorem 3.3 yields a feasible solution, and thus we can obtain the corresponding state feedback controller gains.
Furthermore, by Theorem 3.3, the optimal bound with the minimum value of $\gamma$ depends on the fixed scalar parameter. A feasible solution can be found for a range of values of this parameter, and Figures 1 and 2 show the optimal value obtained for different values of the parameter. Then, by using the program fminsearch in the optimization toolbox of MATLAB, starting from a chosen initial point, a locally convergent solution can be derived, together with the corresponding optimal value of $\gamma$ and controller gains.


Example 4.2. Consider a Markovian jump LDS (2.10) with
and given values of the failure rate $p$ and the recovery rate $q$. In addition, the other matrix parameters are the same as in Example 4.1.
Then, by Theorem 3.3, a feasible solution can be found for a certain range of the fixed parameter. Furthermore, at the optimizing value of the parameter, it yields the optimal value of $\gamma$ and the corresponding optimized state feedback controller gains.
Thus, the above LDS with Markovian jumps is stochastically stable and achieves the calculated minimum performance level $\gamma$.
5. Conclusions
This paper addresses the SFTB control problem for a family of linear discrete-time systems over networks with packet dropout. Under the assumption that the packet loss is a time-homogeneous Markov process, the class of linear discrete-time systems can be regarded as Markovian jump systems. Sufficient conditions are given for the resulting closed-loop linear discrete-time Markovian jump system to be SFTB, and state feedback controllers are designed to guarantee SFTB of the class of linear systems with Markovian jumps. The SFTB criteria can be tackled in the form of linear matrix inequalities with a fixed parameter. As an auxiliary result, we also give sufficient conditions on the robust stochastic stabilization of the class of linear discrete-time systems with data packet dropout. Finally, simulation results are given to show the validity of the proposed approaches.
Acknowledgments
The authors would like to thank the reviewers and editors for their very helpful comments and suggestions, which have improved the presentation of the paper. This work was supported by the National Natural Science Foundation of China under Grant 60874006, by the Doctoral Foundation of Henan University of Technology under Grant 2009BS048, by the Foundation of Henan Educational Committee under Grants 2011A120003 and 2011B110009, and by the Foundation of Henan University of Technology under Grant 09XJC011.