Stochastic Dynamics of Nonautonomous Cohen-Grossberg Neural
Networks
Chuangxia Huang and Jinde Cao
Academic Editor: Yong Zhou
Received: 15 Feb 2011
Accepted: 21 Mar 2011
Published: 29 May 2011
Abstract
This paper is devoted to the study of the stochastic stability of a class of
Cohen-Grossberg neural networks, in which the interconnections and delays are time-varying.
With the help of a Lyapunov function, the Burkholder-Davis-Gundy inequality,
and the Borel-Cantelli lemma, a set of novel sufficient conditions for the pth moment exponential stability and almost sure exponential stability of the trivial solution
of the system is derived. Compared with previously published results, our method
resorts to neither the Razumikhin-type theorem nor the semimartingale convergence
theorem. The results presented in this paper are more general than
those reported in some previously published papers. An illustrative example is also
given to show the effectiveness of the obtained results.
1. Introduction
For decades, the study of neural networks has attracted considerable multidisciplinary research interest. Neural networks have seen a large number of successful applications in many fields, ranging from signal processing, pattern recognition, and programming problems to static image processing [1–7]. These applications rely crucially on the analysis of the dynamical behavior of the models [8–16]. Most existing theoretical studies of neural networks are predominantly concerned with deterministic differential equations.
Recently, studies have been intensively focused on stochastic models [17–24]; it has been realized that synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes, and it is of great significance to consider stochastic effects on the stability of neural networks described by stochastic functional differential equations; see [25–34].
In [17], Liao and Mao studied the mean square exponential stability and instability of cellular neural networks (CNNs). In [18, 26], the authors continued their research and discussed the almost sure exponential stability of a class of stochastic neural networks with discrete delays by using the nonnegative semimartingale convergence theorem. In [25], the exponential stability of stochastic Cohen-Grossberg neural networks (CGNNs) with time-varying delays was investigated via a Razumikhin-type technique. In [19], Wan and Sun investigated the mean square exponential stability of stochastic delayed Hopfield neural networks (HNNs) by using the method of variation of constants. Also with the help of the method of variation of constants, Sun and Cao in [29] investigated the pth moment exponential stability of stochastic recurrent neural networks with time-varying delays.
However, to the best of our knowledge, few authors have considered the problem of pth moment exponential stability and almost sure exponential stability of stochastic nonautonomous Cohen-Grossberg neural networks. In fact, in electronic circuit applications, assuming a constant connection matrix and constant delays is unrealistic. In this sense, therefore, a time-varying connection matrix and time-varying delays are better candidates for modeling neural information processing.
Motivated by the above discussions, in this paper we consider the stochastic Cohen-Grossberg neural network (SCGNN) with time-varying connection matrix and delays described by the following nonautonomous stochastic functional differential equations:

dx_i(t) = -a_i(x_i(t))[ b_i(t, x_i(t)) - Σ_{j=1}^n c_{ij}(t) f_j(x_j(t)) - Σ_{j=1}^n d_{ij}(t) g_j(x_j(t - τ_{ij}(t))) ] dt + Σ_{j=1}^n σ_{ij}(t, x_j(t), x_j(t - τ_{ij}(t))) dω_j(t),   (1.1)

or, in compact form,

dx(t) = -A(x(t))[ B(t, x(t)) - C(t) f(x(t)) - D(t) g(x(t - τ(t))) ] dt + σ(t, x(t), x(t - τ(t))) dω(t),   (1.2)

where i = 1, 2, ..., n, t ≥ 0, x(t) = (x_1(t), ..., x_n(t))^T, A(x) = diag(a_1(x_1), ..., a_n(x_n)), C(t) = (c_{ij}(t))_{n×n}, D(t) = (d_{ij}(t))_{n×n}, and 0 ≤ τ_{ij}(t) ≤ τ. Here x_i(t) denotes the state variable associated with the ith neuron at time t; a_i(·) represents an amplification function; b_i(·, ·) is an appropriately behaved function; f_j(·) and g_j(·) are activation functions; c_{ij}(t) and d_{ij}(t) represent the strengths of the neuron interconnections within the network; τ_{ij}(t) corresponds to the time delay required in processing; σ(t, ·, ·) = (σ_{ij}(t, ·, ·))_{n×n} is the diffusion coefficient matrix; and ω(t) = (ω_1(t), ..., ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e., F_t = σ{ω(s) : 0 ≤ s ≤ t}).
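To make the model concrete, dynamics of the form (1.1) can be simulated with a simple Euler-Maruyama scheme. The two-neuron instance below is purely illustrative: the choices of a_i, b_i, c_ij, d_ij, the constant delay, and the noise intensity are hypothetical and do not come from the example in Section 4.

```python
import numpy as np

# Hypothetical two-neuron instance of model (1.1), integrated with a basic
# Euler-Maruyama scheme; every parameter here is illustrative only.
rng = np.random.default_rng(0)

n, dt, T = 2, 0.001, 10.0
steps = int(T / dt)
tau = 0.1                                  # constant delay tau_ij(t) = tau
lag = int(tau / dt)

a = lambda x: 1.0 + 0.1 * np.tanh(x)       # amplification functions a_i
b = lambda x: 2.0 * x                      # "appropriately behaved" b_i
C = np.array([[0.2, -0.1], [0.1, 0.2]])    # connection strengths c_ij
D = np.array([[0.1, 0.05], [-0.05, 0.1]])  # delayed connection strengths d_ij
f = np.tanh                                # activation f_j
g = np.tanh                                # activation g_j
sig = 0.2                                  # diffusion intensity in sigma_ij

# rows 0..lag hold the initial segment on [-tau, 0]
hist = np.zeros((steps + lag + 1, n))
hist[: lag + 1] = np.array([1.0, -1.5])

for k in range(steps):
    x = hist[lag + k]                      # state x(t)
    xd = hist[k]                           # delayed state x(t - tau)
    drift = -a(x) * (b(x) - C @ f(x) - D @ g(xd))
    dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    hist[lag + k + 1] = x + drift * dt + sig * xd * dw

print(np.abs(hist[-1]).max())              # the state has contracted toward 0
```

Because the noise term here is proportional to the delayed state, σ(t, 0, 0) = 0 and the origin remains an equilibrium, matching the standing convention that (1.1) admits the trivial solution.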
Obviously, model (1.1) or (1.2) is quite general, and it includes several well-known neural network models as special cases, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks [10, 16, 27, 28]. There are at least three different types of stochastic stability for describing the limiting behavior of stochastic differential equations: stability in probability, moment stability, and almost sure stability (see [35]). When designing an associative memory neural network, we should make the convergence speed as high as possible to ensure the quick convergence of the network operation. Therefore, pth moment exponential stability and almost sure exponential stability are the most useful concepts, as they imply that the solutions tend to the trivial solution exponentially fast. This motivates us to study the pth moment exponential stability and almost sure exponential stability of system (1.1) in this paper.
The remainder of this paper is organized as follows. In Section 2, the basic assumptions and preliminaries are introduced. After the criteria for the pth moment exponential stability and almost sure exponential stability of system (1.1) are established in Section 3 by using the Lyapunov function method, the Burkholder-Davis-Gundy inequality, and the Borel-Cantelli lemma, an illustrative example and its simulations are given in Section 4.
2. Preliminaries
Throughout this article, we let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is right continuous and F_0 contains all P-null sets). Let C([-τ, 0]; R^n) be the Banach space of continuous functions which map [-τ, 0] into R^n with the topology of uniform convergence. For any φ ∈ C([-τ, 0]; R^n), we define ‖φ‖ = sup_{-τ≤s≤0} |φ(s)|, where |·| denotes the Euclidean norm on R^n.
The initial conditions for system (1.1) are x(s) = ξ(s), -τ ≤ s ≤ 0; here ξ is an R^n-valued stochastic process on [-τ, 0] that is F_0-measurable with E‖ξ‖^p < ∞. For the sake of convenience, throughout this paper we assume b_i(t, 0) ≡ 0, f_j(0) = g_j(0) = 0, and σ(t, 0, 0) ≡ 0, which implies that system (1.1) admits an equilibrium solution x(t) ≡ 0.
If V ∈ C^{2,1}(R^n × R_+; R_+), according to the Itô formula, define an operator LV associated with (1.2) as

LV(x, t) = V_t(x, t) + V_x(x, t)[ -A(x)( B(t, x) - C(t) f(x) - D(t) g(x(t - τ(t))) ) ] + (1/2) trace[ σ^T V_{xx}(x, t) σ ],

where V_t(x, t) = ∂V(x, t)/∂t, V_x(x, t) = (∂V(x, t)/∂x_1, ..., ∂V(x, t)/∂x_n), and V_{xx}(x, t) = (∂²V(x, t)/∂x_i ∂x_j)_{n×n}.
To establish the main results for the model given in (1.1), some of the standing assumptions are formulated as follows:

(A1) there exist positive constants α_i, ᾱ_i such that 0 < α_i ≤ a_i(u) ≤ ᾱ_i for each u ∈ R, i = 1, 2, ..., n;

(A2) for each i, there exist positive functions γ_i(t) such that (b_i(t, u) - b_i(t, v))/(u - v) ≥ γ_i(t) for all t ≥ 0 and all u ≠ v;

(A3) there exist positive constants F_j, G_j such that each f_j, g_j satisfies the Lipschitz condition |f_j(u) - f_j(v)| ≤ F_j |u - v|, |g_j(u) - g_j(v)| ≤ G_j |u - v| for all u, v ∈ R;

(A4) there exist positive constants μ_{ij}, ν_{ij} such that |σ_{ij}(t, u, v)|² ≤ μ_{ij} u² + ν_{ij} v² for all t ≥ 0 and u, v ∈ R.
Remark 2.1. The activation functions are typically assumed to be continuous, differentiable, and monotonically increasing, such as functions of sigmoid type. These restrictive conditions are no longer needed in this paper; instead, only the Lipschitz condition is imposed in Assumption (A3). Note that activation functions of this type have already been used in numerous papers; see [5, 10] and the references therein.
Remark 2.2. We remark here that the nonautonomous conditions (A2)–(A4) replace the usual autonomous conditions, which is more useful for practical purposes; please refer to [4, 13] and the references therein.
Remark 2.3. The delay functions considered in this paper need only be bounded; they can be time-varying, nondifferentiable functions. This generalizes some recently published results in [4, 13, 26–29]. Different from the models considered in [4, 13, 29], in this paper we have removed the following condition:

(H) each τ_{ij}(t) is a differentiable function; namely, there exists a constant η such that dτ_{ij}(t)/dt ≤ η < 1.
Definition 2.4 (see [35]). The trivial solution of (1.1) is said to be pth moment exponentially stable if there is a pair of positive constants λ and C such that

E|x(t; ξ)|^p ≤ C ‖ξ‖^p e^{-λt},  t ≥ 0,

where p ≥ 2 is a constant; when p = 2, it is usually said to be exponentially stable in mean square.
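The decay described in Definition 2.4 can be observed directly on a scalar toy SDE (a stand-in chosen for illustration, not the paper's system). For dX = -X dt + σX dω with σ = 0.2, the exact solution gives E|X_t|^p = |X_0|^p exp[(-p(1 + σ²/2) + p²σ²/2) t], so for p = 3 the third moment decays like e^{-2.88 t}:

```python
import numpy as np

# Monte Carlo check of pth moment exponential decay for the scalar linear SDE
# dX = -X dt + sigma * X dW, whose exact solution is
#   X_t = X_0 * exp(-(1 + sigma^2/2) t + sigma W_t).
rng = np.random.default_rng(1)

p, sigma, x0 = 3, 0.2, 1.0
ts = np.array([0.5, 1.0, 2.0])
W = rng.normal(size=(200_000, 1)) * np.sqrt(ts)          # W_t ~ N(0, t) marginals

X = x0 * np.exp(-(1 + sigma**2 / 2) * ts + sigma * W)
moments = np.mean(np.abs(X) ** p, axis=0)                # Monte Carlo E|X_t|^p
rate = p * (-(1 + sigma**2 / 2)) + p**2 * sigma**2 / 2   # = -2.88 for p = 3
theory = x0**p * np.exp(rate * ts)

print(moments, theory)
```

The Monte Carlo estimates track the theoretical exponential decay closely, which is exactly the behavior the definition requires of the trivial solution.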
Definition 2.5 (see [35]). The trivial solution of (1.1) is said to be almost surely exponentially stable if for almost all sample paths of the solution x(t; ξ), we have

lim sup_{t→∞} (1/t) log |x(t; ξ)| < 0.
Lemma 2.6 ([35], Burkholder-Davis-Gundy inequality). For any p > 0 there exists a universal constant K_p such that for every continuous local martingale M vanishing at zero and any stopping time τ,

E[ sup_{0≤s≤τ} |M_s|^p ] ≤ K_p E[ ⟨M, M⟩_τ^{p/2} ],

where ⟨M, M⟩ is the cross-variation of M. In particular, one may take K_p = (32/p)^{p/2} if 0 < p < 2 and K_p = (p^{p+1}/(2(p - 1)^{p-1}))^{p/2} if p ≥ 2; although these constants may not be optimal, for example, one could take K_2 = 4.
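For p = 2, the inequality can be checked by simulation with Brownian motion itself as the continuous martingale (so that ⟨M, M⟩_t = t); with the constant K_2 = 4 the bound reads E[sup_{s≤t} |W_s|²] ≤ 4t:

```python
import numpy as np

# Monte Carlo check of the Burkholder-Davis-Gundy inequality for p = 2, K_2 = 4,
# applied to Brownian motion: E[ sup_{s<=t} |W_s|^2 ] <= 4 * t.
rng = np.random.default_rng(2)

t, n_steps, n_paths = 1.0, 400, 10_000
dW = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)                       # discretized Brownian paths

lhs = np.mean(np.max(np.abs(W), axis=1) ** 2)   # estimate of E[sup |W_s|^2]
print(lhs, "<=", 4 * t)
```

The simulated left-hand side comes out well below 4t, consistent with the remark above that the universal constants are not optimal.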
Lemma 2.8 ([36], Borel-Cantelli lemma). Let {A_n} be a sequence of events in some probability space. Then

(i) if Σ_{n=1}^∞ P(A_n) < ∞, then P(A_n i.o.) = 0;

(ii) moreover, if the events A_n are independent of each other, then Σ_{n=1}^∞ P(A_n) = ∞ implies P(A_n i.o.) = 1,

where {A_n i.o.} denotes the event that A_n occurs infinitely often, that is, {A_n i.o.} = ∩_{k=1}^∞ ∪_{n=k}^∞ A_n; "i.o." is the abbreviation of "infinitely often".
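Both halves of the lemma are easy to visualize by simulation with independent events A_n = {U_n < p_n} for i.i.d. uniform U_n: with the summable choice p_n = 1/n² only a handful of events ever occur, while the divergent choice p_n = 1/n keeps producing occurrences:

```python
import numpy as np

# Illustration of the Borel-Cantelli lemma with independent events
# A_n = {U_n < p_n}: p_n = 1/n^2 is summable (finitely many occurrences a.s.),
# p_n = 1/n diverges (infinitely many occurrences a.s.).
rng = np.random.default_rng(3)

N = 100_000
U = rng.uniform(size=N)
n = np.arange(1, N + 1)

hits_summable = np.flatnonzero(U < 1.0 / n**2)   # sum of p_n finite
hits_divergent = np.flatnonzero(U < 1.0 / n)     # sum of p_n infinite

print(len(hits_summable), len(hits_divergent))
```

On a horizon of 10^5 trials, the summable case typically produces only one or two hits (all at small indices), while the divergent case accumulates roughly ln N ≈ 12 hits spread over the whole range.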
3. Main Results
Theorem 3.1. Under assumptions (A1)–(A4), if there exist a positive diagonal matrix Q = diag(q_1, ..., q_n) and two constants λ > 0 and p ≥ 2 such that
where
then the trivial solution of system (1.1) is pth moment exponentially stable, where p ≥ 2 denotes a positive constant. When p = 2, the trivial solution of system (1.1) is exponentially stable in mean square.
Proof. Consider the following Lyapunov function:
Since p ≥ 2 is a constant, we can use the following elementary inequality: if a and b denote nonnegative real numbers, then a^{p-1} b ≤ ((p - 1)/p) a^p + (1/p) b^p. Using this inequality, the operator LV associated with system (1.1) takes the following form:
where
The remaining part of the proof is similar to that of Theorem 3.3 in [33]; we omit it.
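The elementary estimate invoked at the start of the proof is, presumably, the Young-type inequality a^{p-1} b ≤ ((p-1)/p) a^p + (1/p) b^p for nonnegative a, b and p ≥ 1; it follows from Young's inequality with exponents p/(p-1) and p, and a quick numerical sweep confirms it:

```python
import numpy as np

# Numerical sweep of the Young-type inequality
#   a^(p-1) * b <= ((p-1)/p) * a^p + (1/p) * b^p,  a, b >= 0, p >= 1.
rng = np.random.default_rng(4)

a = rng.uniform(0.0, 10.0, size=100_000)
b = rng.uniform(0.0, 10.0, size=100_000)

for p in (2.0, 3.0, 5.5):
    lhs = a ** (p - 1) * b
    rhs = (p - 1) / p * a**p + b**p / p
    assert np.all(lhs <= rhs + 1e-9)
print("inequality verified on all samples")
```

Equality holds exactly when a = b, which is why such estimates are tight enough to produce sharp stability conditions.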
In Theorem 3.1, if we let Q be the identity matrix, we can easily obtain the following corollary.
Corollary 3.2. Under assumptions (A1)–(A4), if there are two constants λ > 0 and p ≥ 2 such that
where
then the trivial solution of system (1.1) is pth moment exponentially stable.
Remark 3.3. Compared with [10, 12], our method resorts to neither the Razumikhin-type theorem nor the Halanay inequality.
Theorem 3.4. Suppose that system (1.1) satisfies assumptions (A1)–(A4) and that inequality (3.1) holds. If the corresponding time-varying coefficient functions are bounded for all t ≥ 0, then the trivial solution of (1.1) is almost surely exponentially stable.
Proof. Let be an integer such that and ; consider the following Lyapunov function:
Using the Itô formula, we have
Calculating the integral of (3.9) from to , we have
where
From Theorem 3.1, there exists a pair of positive constants λ and C such that
Furthermore, from (A1)–(A4) and inequality (3.13), we have
where
For any two different norms ‖·‖_α and ‖·‖_β on R^n, as R^n is a finite-dimensional space, there exist two positive constants c_1 and c_2 such that c_1 ‖x‖_α ≤ ‖x‖_β ≤ c_2 ‖x‖_α for all x ∈ R^n.
As the stochastic integral term is a continuous local martingale, from Lemma 2.6 and (3.16) it follows that
According to (3.11), (3.13), (3.14), and (3.17), we have the following inequality:
Therefore,
where
For each integer , we set . Then, from Lemma 2.7, we have
Therefore, in view of Lemma 2.8, for almost all ω ∈ Ω, we have that
holds for all but finitely many n. Hence, excluding a P-null set, for each ω there exists an integer n_0(ω) such that (3.21) holds whenever n ≥ n_0(ω). Consequently, for almost all ω ∈ Ω,
if n - 1 ≤ t ≤ n and n ≥ n_0(ω). Hence
Therefore, the trivial solution of (1.1) is almost surely exponentially stable.
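The norm-equivalence step used in the proof (any two norms on the finite-dimensional space R^n are equivalent) can be checked numerically for the standard 1-, 2-, and max-norms, whose equivalence constants are classical: ‖x‖_∞ ≤ ‖x‖_2 ≤ √n ‖x‖_∞ and ‖x‖_2 ≤ ‖x‖_1 ≤ √n ‖x‖_2.

```python
import numpy as np

# Verify the classical norm-equivalence bounds on R^n for random vectors:
#   ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf,
#   ||x||_2   <= ||x||_1 <= sqrt(n) ||x||_2   (Cauchy-Schwarz).
rng = np.random.default_rng(5)

n = 8
X = rng.normal(size=(10_000, n))
l1 = np.abs(X).sum(axis=1)
l2 = np.sqrt((X**2).sum(axis=1))
linf = np.abs(X).max(axis=1)

assert np.all(linf <= l2 + 1e-12) and np.all(l2 <= np.sqrt(n) * linf + 1e-12)
assert np.all(l2 <= l1 + 1e-12) and np.all(l1 <= np.sqrt(n) * l2 + 1e-12)
print("norm-equivalence bounds verified")
```

This is the fact that lets the proof pass freely between the Euclidean norm and whatever weighted norm the Lyapunov function induces, at the cost of constants that do not affect the exponential rate.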
Remark 3.5. Compared with [26, 32], our method does not resort to the semimartingale convergence theorem. Since system (1.1) does not require the delays to be constants and, furthermore, the model is nonautonomous, it is clear that the results obtained in [19, 25–32, 34] are not applicable to system (1.1). This implies that the results of this paper are essentially new and complement some corresponding ones already known.
Remark 3.6. By Theorems 3.1 and 3.4, the stability of system (1.1) depends on the magnitude of the noise; therefore, stochastic noise fluctuation is one of the most important aspects in designing a stable network and should be considered adequately. It should be noted that the boundedness assumptions in Theorem 3.4 are not necessary; we use them just to simplify the proof. In fact, in view of (3.15), (3.20), (3.21), and (3.22), and similar to the proof of Theorem 3.4, we have the following theorem.
Theorem 3.7. Suppose that system (1.1) satisfies assumptions (A1)–(A4) and that inequality (3.1) holds. If there exist positive constants such that, for any t ≥ 0, we have
then the trivial solution of (1.1) is almost surely exponentially stable.
Remark 3.8. Furthermore, the stability conditions derived for the stochastic delayed recurrent neural networks below can be viewed as byproducts of our results. The significance of this paper is that it offers a wider selection of network parameters with which to achieve the required convergence in practice.
Remark 3.9. For system (1.1), when the amplification functions satisfy a_i(·) ≡ 1 and the functions b_i(t, ·) are linear in the state variable, it reduces to the following stochastic delayed recurrent neural network with time-varying delays:
Using Theorems 3.1 and 3.4, one can easily obtain a set of similar corollaries for checking the pth moment exponential stability and almost sure exponential stability of the trivial solution of this system.
4. An Illustrative Example
In this section, an example is presented to demonstrate the correctness and effectiveness of the main results.
Example 4.1. Consider the following stochastic Cohen-Grossberg neural networks with time-varying delays:
where and is any bounded positive function for . Each satisfies the Lipschitz condition, and there exist positive constants , such that
In the example, let ; by simple computation, we obtain
Choosing , one can easily get that
Thus, it follows from Theorem 3.7 that system (4.1) is third moment (p = 3) exponentially stable and also almost surely exponentially stable. These conclusions can be verified by the following numerical simulations (Figures 1, 2, 3, and 4).
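The almost sure exponential stability reported above can also be read off a single simulated path through the sample Lyapunov exponent log|x(t)|/t of Definition 2.5, which should settle at a negative value. Since the coefficients of (4.1) are not reproduced here, the scalar SDE below is a hypothetical stand-in with a stable drift and multiplicative noise:

```python
import numpy as np

# Estimate the sample Lyapunov exponent log|x(t)|/t along one Euler-Maruyama
# path of dx = -x dt + 0.3 x dW (a hypothetical stand-in for system (4.1)).
rng = np.random.default_rng(6)

dt, T = 0.001, 50.0
steps = int(T / dt)
x = 1.0
for k in range(steps):
    x += -x * dt + 0.3 * x * rng.normal(0.0, np.sqrt(dt))

lyap = np.log(abs(x)) / T
print(lyap)  # negative: the path decays exponentially fast
```

For this linear SDE the theoretical almost sure exponent is -(1 + σ²/2) ≈ -1.045, and the single-path estimate fluctuates around that value; a negative estimate is precisely the pathwise signature of almost sure exponential stability.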
Remark 4.2. We can find that the conditions of [29, Theorem 1] are not satisfied; therefore, that result cannot conclude whether system (4.1) is pth moment exponentially stable, even when the delay functions are differentiable and their derivatives are simultaneously required to be not greater than 1. It is obvious that the results in [19, 25–32, 34] and the references therein are not applicable to system (4.1).
Acknowledgments
The authors are extremely grateful to Professor Yong Zhou and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improved presentation of this paper. This work was jointly supported by the Foundation of the Chinese Society for Electrical Engineering (2008), the Excellent Youth Foundation of the Educational Committee of Hunan Province (10B002), the Key Project of the Chinese Ministry of Education (211118), and the National Natural Science Foundation of China (11072059).
References
M. A. Cohen and S. Grossberg, "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 5, pp. 815–826, 1983.
X. Yang, C. Huang, D. Zhang, and Y. Long, "Dynamics of Cohen-Grossberg neural networks with mixed delays and impulses," Abstract and Applied Analysis, vol. 2008, Article ID 432341, 14 pages, 2008.
C. Huang, L. Huang, and Z. Yuan, "Global stability analysis of a class of delayed cellular neural networks," Mathematics and Computers in Simulation, vol. 70, no. 3, pp. 133–148, 2005.
J. Cao, G. Chen, and P. Li, "Global synchronization in an array of delayed neural networks with hybrid coupling," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 38, no. 2, pp. 488–498, 2008.
J. Cao and J. Liang, "Boundedness and stability for Cohen-Grossberg neural network with time-varying delays," Journal of Mathematical Analysis and Applications, vol. 296, no. 2, pp. 665–685, 2004.
J. Cao, G. Feng, and Y. Wang, "Multistability and multiperiodicity of delayed Cohen-Grossberg neural networks with a general class of activation functions," Physica D, vol. 237, no. 13, pp. 1734–1749, 2008.
J. Cao, D. W. C. Ho, and Y. Yang, "Projective synchronization of a class of delayed chaotic systems via impulsive control," Physics Letters A, vol. 373, no. 35, pp. 3128–3133, 2009.
J. Qiu and J. Cao, "Delay-dependent exponential stability for a class of neural networks with time delays and reaction-diffusion terms," Journal of the Franklin Institute, vol. 346, no. 4, pp. 301–314, 2009.
J. Cao and L. Li, "Cluster synchronization in an array of hybrid coupled neural networks with delay," Neural Networks, vol. 22, no. 4, pp. 335–342, 2009.
J. Cao and F. Ren, "Exponential stability of discrete-time genetic regulatory networks with delays," IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 520–523, 2008.
W. Lu and T. Chen, "R_+^n-global stability of a Cohen-Grossberg neural network system with nonnegative equilibria," Neural Networks, vol. 20, no. 6, pp. 714–722, 2007.
C. Huang and L. Huang, "Dynamics of a class of Cohen-Grossberg neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 8, no. 1, pp. 40–52, 2007.
Z. Yuan, L. Huang, D. Hu, and B. Liu, "Convergence of nonautonomous Cohen-Grossberg-type neural networks with variable delays," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 140–147, 2008.
Q. Liu and J. Wang, "A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming," IEEE Transactions on Neural Networks, vol. 19, no. 4, pp. 558–570, 2008.
J. H. Park and O. M. Kwon, "Further results on state estimation for neural networks of neutral-type with time-varying delay," Applied Mathematics and Computation, vol. 208, no. 1, pp. 69–75, 2009.
J. H. Park and O. M. Kwon, "Delay-dependent stability criterion for bidirectional associative memory neural networks with interval time-varying delays," Modern Physics Letters B, vol. 23, no. 1, pp. 35–46, 2009.
X. X. Liao and X. Mao, "Exponential stability and instability of stochastic neural networks," Stochastic Analysis and Applications, vol. 14, no. 2, pp. 165–185, 1996.
S. Blythe, X. Mao, and X. Liao, "Stability of stochastic delay neural networks," Journal of the Franklin Institute, vol. 338, no. 4, pp. 481–495, 2001.
L. Wan and J. Sun, "Mean square exponential stability of delayed Hopfield neural networks," Physics Letters A, vol. 343, no. 4, pp. 306–318, 2005.
F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
J. H. Park and O. M. Kwon, "Analysis on global stability of stochastic neural networks of neutral type," Modern Physics Letters B, vol. 22, no. 32, pp. 3159–3170, 2008.
J. H. Park and O. M. Kwon, "Synchronization of neural networks of neutral type with stochastic perturbation," Modern Physics Letters B, vol. 23, no. 14, pp. 1743–1751, 2009.
J. H. Park, S. M. Lee, and H. Y. Jung, "LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks," Journal of Optimization Theory and Applications, vol. 143, no. 2, pp. 357–367, 2009.
S. M. Lee, O. M. Kwon, and J. H. Park, "A novel delay-dependent criterion for delayed neural networks of neutral type," Physics Letters A, vol. 374, no. 17-18, pp. 1843–1848, 2010.
X. Li and J. Cao, "Exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays," in Advances in Neural Networks, vol. 3496 of Lecture Notes in Computer Science, pp. 162–167, 2005.
H. Zhao and N. Ding, "Dynamic analysis of stochastic Cohen-Grossberg neural networks with time delays," Applied Mathematics and Computation, vol. 183, no. 1, pp. 464–470, 2006.
J. Hu, S. Zhong, and L. Liang, "Exponential stability analysis of stochastic delayed cellular neural network," Chaos, Solitons & Fractals, vol. 27, no. 4, pp. 1006–1010, 2006.
W. Zhu and J. Hu, "Stability analysis of stochastic delayed cellular neural networks by LMI approach," Chaos, Solitons & Fractals, vol. 29, no. 1, pp. 171–174, 2006.
Y. Sun and J. Cao, "pth moment exponential stability of stochastic recurrent neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 8, no. 4, pp. 1171–1185, 2007.
C. Li, L. Chen, and K. Aihara, "Stochastic stability of genetic networks with disturbance attenuation," IEEE Transactions on Circuits and Systems II, vol. 54, no. 10, pp. 892–896, 2007.
H. Zhang and Y. Wang, "Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 19, no. 2, pp. 366–370, 2008.
C. Huang, P. Chen, Y. He, L. Huang, and W. Tan, "Almost sure exponential stability of delayed Hopfield neural networks," Applied Mathematics Letters, vol. 21, no. 7, pp. 701–705, 2008.
C. Huang, Y. He, and P. Chen, "Dynamic analysis of stochastic recurrent neural networks," Neural Processing Letters, vol. 27, no. 3, pp. 267–276, 2008.
Q. Song, J. Liang, and Z. Wang, "Passivity analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing, vol. 72, no. 7–9, pp. 1782–1788, 2009.