Abstract

In this paper, we study the problem of exponential stability for the Hopfield neural network with time-varying delays. In contrast to existing results, we establish new stability criteria by employing the method of variation of constants and Gronwall’s integral inequality. Finally, we give several examples to show the effectiveness and applicability of the obtained criteria.

1. Introduction

Since Hopfield [1] proposed the neural network named after him in 1984, this type of artificial neural network has been widely applied in many areas, such as combinatorial optimization [2–4], image processing [5, 6], pattern recognition [7], signal processing [8], communication technology [9], and so on. Hopfield neural networks have been extensively investigated over the past decades [10–26]. In practical applications of neural networks, time delays are inevitable because of the delay of information transmission between neurons and the influence of hardware, such as the limited switching speed. Therefore, the introduction of time delays into the study of neural networks has attracted wide attention [15–23]. Because the number of hidden layers and the initial values of the connection weights of a neural network are random, the stability of the system is not guaranteed. If the control system is unstable, the convergence of the network loses its foundation. Therefore, stability is a very important property of neural networks. In studying the stability of Hopfield neural networks, researchers usually construct a Lyapunov functional and combine it with linear matrix inequalities or integral inequalities to analyze the stability of the system. There is no doubt that Lyapunov’s method is a powerful tool for studying the stability of differential equations, but constructing an appropriate Lyapunov functional is the key to solving such problems. In addition, constructing different Lyapunov functionals for the same system leads to different stability regions, which is itself a source of uncertainty. Moreover, solving the associated linear matrix inequalities is computationally involved. Zhang et al. proposed a weighting-delay-based method to study the stability of a class of recurrent neural networks with time-varying delays [25]. They obtained a new delay-dependent stability criterion for neural networks with time-varying delays by constructing a Lyapunov–Krasovskii functional and using Jensen’s integral inequality. However, the resulting conditions are complicated. To illustrate this complexity, we give another specific example. Wang et al. [27] studied the delay-dependent stability of a class of generalized continuous neural networks with time-varying delays described by system (1) (for the meaning of the parameters in the formula, please refer to [27]):

They shifted the equilibrium point of system (1) to the origin by a transformation and obtained

They constructed a new Lyapunov–Krasovskii functional and then used Jensen’s integral inequality to obtain the following criterion for system (2): the origin of system (2) is globally asymptotically stable if, for given diagonal matrices and positive scalars, there exist symmetric positive definite matrices, positive definite diagonal matrices, and further matrices such that the following inequalities hold:

These symbols are defined in [27].

There are some problems with this result:
(i) Do the required unknown matrices exist?
(ii) For such complex matrix inequalities, how does one ensure the existence of the unknown matrices?
(iii) If they exist, how are they represented?

If these problems are not resolved, the stability of the original equation remains undetermined. In fact, the stability depends only on the coefficient matrices of the system, not on the existence of those unknown matrices.

We have also paid attention to some recent research results [28–30]. Their conclusions are also based on the construction of Lyapunov–Krasovskii functionals. They all assume that there exist certain unknown matrices satisfying certain matrix inequalities under which the system is stable, yet whether these unknown matrices exist is not established.

To address this problem, in this paper we use an integral-inequality technique to establish a new stability criterion that depends only on the coefficient matrices of the system and is independent of any unknown matrices.

Gronwall’s integral inequality plays an important role in the qualitative theory of differential equations. Many researchers have extended it and used it to solve numerous problems [31–36]. However, it has rarely been applied to the stability analysis of neural network systems. In this paper, we use Gronwall’s inequality to avoid the problems described above and obtain new criteria for the exponential stability of a class of Hopfield neural networks with a time-varying delay. Similar to the model studied by Wang et al. [27], we consider the following system, where denotes the neuron state vector, is the activation function, and is the time-delay term; , and are the interconnection matrices with appropriate dimensions; the initial state is a continuously differentiable vector function; is the bias value; and denotes the transmission delay and satisfies , where and are constants.
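Because the displayed formulas did not survive extraction here, the following is a minimal sketch of the general form such a delayed Hopfield model usually takes; the symbols $A$, $W_0$, $W_1$, $b$, $g$, and $\tau(t)$ are assumed notation for illustration and need not match the paper’s exact symbols:

```latex
% Assumed general form of a Hopfield network with a time-varying delay
% (notation illustrative, not necessarily the paper's):
\dot{x}(t) = -A\,x(t) + W_0\, g\bigl(x(t)\bigr) + W_1\, g\bigl(x(t-\tau(t))\bigr) + b,
\qquad x(s) = \phi(s), \quad s \in [-\tau, 0],
% with 0 \le \tau(t) \le \tau and \dot{\tau}(t) \le \mu for constants \tau, \mu.
```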

In this paper, we define the norms of a matrix and an n-dimensional vector as follows:
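The norm definitions themselves were also lost; one common convention, stated here purely as an assumption, is the maximum norm for vectors together with its induced matrix norm:

```latex
% Assumed norm convention (one common choice; not confirmed by the source):
\|x\| = \max_{1 \le i \le n} |x_i|, \qquad
\|W\| = \max_{1 \le i \le n} \sum_{j=1}^{n} |w_{ij}| .
```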

We assume that all activation functions satisfy the following conditions (a concrete illustration is given after the list):
(i) is continuous and differentiable, and
(ii) is bounded on R, that is, for , where is a constant
(iii) for all , where is a constant, and let
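As a concrete illustration of conditions (i)–(iii) (an example of ours, not taken from the paper), the hyperbolic tangent is continuous and differentiable, bounded by 1, and Lipschitz with constant 1:

```python
import numpy as np

# tanh as a typical activation satisfying conditions (i)-(iii):
# continuous and differentiable, |tanh(u)| <= 1, and |tanh(u)-tanh(v)| <= |u-v|.
def g(u):
    return np.tanh(u)

u = np.linspace(-5.0, 5.0, 1001)
assert np.all(np.abs(g(u)) <= 1.0)                                   # boundedness, condition (ii)
assert np.all(np.abs(np.diff(g(u))) <= np.abs(np.diff(u)) + 1e-12)   # Lipschitz constant 1, condition (iii)
```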

Lemma 1. If and the activation function satisfies conditions (i)–(iii), then the equilibrium point of system (4) must exist and be unique.

Proof. If is the equilibrium point of system (4), then

According to the definition of A, the inverse matrix of A exists; therefore, (6) is equivalent to

Let and , then (7) can be expressed as

To prove that (8) holds, we define the following mapping:

From conditions (i)–(iii), is a continuous mapping of ; hence, H(u) is also a continuous mapping of . According to the definition of the norm of the n-dimensional vector and assumption (ii), we have

where .
Let , then is a bounded convex set and H(u) is a continuous mapping of . According to Brouwer’s fixed-point theorem, there must exist such that . Thus, formula (8) holds, and system (4) has an equilibrium point. To prove the uniqueness of the equilibrium point, suppose that is another equilibrium point of system (4). Then,

We have

i.e.,

According to the condition , we have and . This shows that the equilibrium point is unique.
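As a purely numerical illustration of this fixed-point argument (the data below are hypothetical and not the paper’s), one can approximate the equilibrium by iterating a mapping of the form $H(u) = A^{-1}\bigl(W_0 g(u) + W_1 g(u) + b\bigr)$:

```python
import numpy as np

# Hypothetical coefficients chosen so that a contraction-type condition plausibly holds;
# they are illustrative only and do not come from the paper.
A  = np.diag([2.0, 2.0])
W0 = np.array([[0.3, -0.2], [0.1, 0.4]])
W1 = np.array([[0.2,  0.1], [-0.1, 0.3]])
b  = np.array([0.5, -0.3])
g  = np.tanh                                   # Lipschitz constant L = 1

def H(u):
    # Fixed-point map whose fixed point is an equilibrium of the network.
    return np.linalg.solve(A, W0 @ g(u) + W1 @ g(u) + b)

u = np.zeros(2)
for _ in range(200):                           # simple Picard iteration
    u = H(u)

print("approximate equilibrium:", u)
print("residual norm:", np.linalg.norm(-A @ u + W0 @ g(u) + W1 @ g(u) + b))
```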
Let the equilibrium point of system (4) be and . In this situation, system (4) can be rewritten as

where , , and the initial state is . The meanings of the other symbols are the same as before. Let the activation function be a continuous function that satisfies a Lipschitz condition for all . That is, assume that

for some constant and for all .

Definition 1. System (14) is said to be globally exponentially stable if there exist constants and such that

Lemma 2 (Gronwall’s inequality [31]). Let K be a nonnegative constant, and let and p(t) be nonnegative continuous functions on the interval satisfying the inequality

then
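The displayed formulas for the inequality and its conclusion are missing above; the classical statement, written with an assumed symbol $u(t)$ for the unknown function and $t_0$ for the initial time, reads:

```latex
% Classical Gronwall inequality (u and t_0 are assumed notation):
u(t) \le K + \int_{t_0}^{t} p(s)\, u(s)\, \mathrm{d}s, \quad t \ge t_0
\quad \Longrightarrow \quad
u(t) \le K \exp\!\left( \int_{t_0}^{t} p(s)\, \mathrm{d}s \right), \quad t \ge t_0 .
```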

2. Stability Analysis

In this section, we discuss the global exponential stability condition for the trivial solution of system (14).

The linear term in system (14) can be expressed as

The fundamental solution matrix of (19) is

Let the initial time and the corresponding initial value be ; then, the solution of system (19) can be expressed as

For convenience, we denote .
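Since these displayed expressions were lost, the standard objects they refer to, written in assumed notation, are:

```latex
% Assumed notation: the linear part of (14), its fundamental matrix, and the
% solution of the linear system (19) expressed through the fundamental matrix:
\dot{y}(t) = -A\, y(t), \qquad
\Phi(t) = e^{-A t}, \qquad
y(t) = e^{-A (t - t_0)}\, y(t_0), \quad t \ge t_0 .
```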

Theorem 1. Suppose that the activation function satisfies conditions (i)–(iii) with the Lipschitz constant L; if

then the trivial solution of system (14) is globally exponentially stable.

Proof. For , the initial value is ; by using the method of variation of constants, we obtain that the solution of system (14) satisfies the following equation:

Taking the norm on both sides of the above formula, without loss of generality, for , we obtain

According to Lemma 2 (Gronwall’s inequality), we obtain

Therefore,

Since , the delay system (14) is globally exponentially stable. The proof is completed.
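Because the explicit inequality of Theorem 1 and the estimates of the proof are not reproduced above, the following sketch only checks a Gronwall-type sufficient condition of the kind such an argument typically yields, namely $L\,M\,(\|W_0\| + \|W_1\|) < \alpha$ with $\|e^{-At}\| \le M e^{-\alpha t}$; the form of the condition, the matrices, and the constants are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical data (not the paper's): a positive diagonal A gives
# ||exp(-A t)|| <= M * exp(-alpha t) with M = 1 and alpha = min(diag(A)).
A  = np.diag([3.0, 2.5])
W0 = np.array([[0.4, -0.3], [0.2, 0.5]])
W1 = np.array([[0.3,  0.2], [-0.2, 0.4]])
L  = 1.0                                     # Lipschitz constant of the activation

M, alpha = 1.0, float(np.min(np.diag(A)))
norm = lambda W: np.linalg.norm(W, np.inf)   # assumed norm convention

# Assumed Gronwall-type sufficient condition (illustrative form only):
lhs = L * M * (norm(W0) + norm(W1))
print(f"L*M*(||W0|| + ||W1||) = {lhs:.3f}, alpha = {alpha:.3f}")
print("condition satisfied -> exponential stability expected:", lhs < alpha)
```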

Remark 1. How to obtain better stability results for time-delay systems has long been a concern of many scholars. Some scholars use improved integral-inequality techniques, construct better Lyapunov–Krasovskii functionals, and estimate their derivatives to obtain new results. In [29], the authors discuss the exponential stability and generalized dissipativity analysis of generalized neural networks with time delays. Based on a Lyapunov–Krasovskii functional (LKF) together with the Wirtinger single integral inequality (WSII) and Wirtinger double integral inequality (WDII) techniques, they establish new criteria for the exponential stability of generalized neural networks with delays. However, as we have seen, their results still rest on the assumption that certain unknown symmetric matrices exist. They only used examples to verify the validity of their results and did not prove the existence of these unknown symmetric matrices theoretically. In this paper, the stability criterion depends only on the coefficient matrices of the system and does not involve any other unknown matrices.
Next, we consider several special cases.
For the following system without time delay,

we have the following corollary.

Corollary 1. Suppose that the activation function satisfies the Lipschitz condition; if , then the trivial solution of system (27) is globally exponentially stable.

For the following system with constant time delay,

we obtain the following corollary.

Corollary 2. Suppose that the activation function satisfies the Lipschitz condition; if , then the trivial solution of system (28) is globally exponentially stable.

3. Numerical Examples

In this section, we provide three illustrative examples to demonstrate the effectiveness of Theorem 1 and its corollaries.

Example 1. We consider the following two-dimensional neural network model without delay:

where the activation function satisfies the Lipschitz condition with the Lipschitz constant .

We take

and then, for . When , the initial value is . According to Corollary 1, the zero solution of system (31) is exponentially stable, and the state trajectory of the system is shown in Figure 1.

If we take and , then , and . When , the initial value is . The state trajectory of the system is shown in Figure 2. According to the results in [36], system (31) is globally asymptotically stable, but Figure 2 shows that it is not exponentially stable.
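The coefficient matrices of Example 1 did not survive extraction, so the sketch below simulates a two-dimensional model of the same delay-free form with hypothetical coefficients; it only shows how a state trajectory such as those in Figures 1 and 2 can be reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical delay-free two-dimensional model of the same form as Example 1;
# the coefficients and initial value are illustrative, not the paper's data.
A  = np.diag([2.0, 2.0])
W0 = np.array([[0.5, -0.3], [0.2, 0.4]])
g  = np.tanh                                  # Lipschitz constant L = 1

def rhs(x):
    return -A @ x + W0 @ g(x)

h, T = 1e-3, 10.0                             # forward-Euler step and horizon
t = np.arange(0.0, T, h)
x = np.zeros((t.size, 2))
x[0] = [1.0, -0.8]                            # illustrative initial value
for k in range(t.size - 1):
    x[k + 1] = x[k] + h * rhs(x[k])

plt.plot(t, x)
plt.xlabel("t")
plt.ylabel("state")
plt.show()
```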

Example 2. We consider the following two-dimensional neural network model with constant delay:

where the activation function , , satisfies the Lipschitz condition and .

If we take

time delay , , , and , then . According to Corollary 2, the zero solution of system (33) is exponentially stable. When , the initial value is . The state trajectory of the system is shown in Figure 3.

Example 3. We consider the following two-dimensional neural network model with variable delay:

where the activation function , satisfies the Lipschitz condition and .

If we take

and time delay , then , , , , and . According to Theorem 1, the zero solution of this system is exponentially stable. When , the initial value is , and the state trajectory of the system is shown in Figure 4.

If we take

and time delay , then . When , the initial value is . The state trajectory of the system is shown in Figure 5. According to the results in [36], this system is globally asymptotically stable, but Figure 5 shows that it is not exponentially stable.
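The data of Example 3 (and similarly Example 2) were likewise lost; the sketch below shows one way to simulate a two-dimensional model with a bounded time-varying delay using a fixed-step Euler scheme with a history buffer (a constant delay, as in Example 2, is the special case $\tau(t) \equiv \tau$). All coefficients, the delay function, and the initial function are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical model with time-varying delay tau(t) = 0.5 + 0.3*sin(t), so
# 0 <= tau(t) <= 0.8; coefficients are illustrative, not the paper's data.
A   = np.diag([3.0, 2.5])
W0  = np.array([[0.4, -0.3], [0.2, 0.5]])
W1  = np.array([[0.3,  0.2], [-0.2, 0.4]])
g   = np.tanh
tau = lambda t: 0.5 + 0.3 * np.sin(t)
tau_max = 0.8

h, T = 1e-3, 15.0
n_hist = int(tau_max / h) + 1                 # steps covering the delay interval
t = np.arange(-n_hist * h, T, h)
x = np.zeros((t.size, 2))
x[: n_hist + 1] = [1.0, -0.8]                 # constant initial function on [-tau_max, 0]

for k in range(n_hist, t.size - 1):
    kd = k - int(round(tau(t[k]) / h))        # index approximating x(t - tau(t))
    x[k + 1] = x[k] + h * (-A @ x[k] + W0 @ g(x[k]) + W1 @ g(x[kd]))

plt.plot(t, x)
plt.xlabel("t")
plt.ylabel("state")
plt.show()
```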

Remark 2. In [30], the authors also give a two-dimensional example. According to the exponential stability criterion obtained in [30], it is necessary to find symmetric matrices satisfying the specified matrix inequalities. Although the authors were able to find such matrices, this success is case-dependent and uncertain, and the existence of symmetric matrices satisfying the conditions cannot be guaranteed in general. The stability test used in the examples of this paper relies only on the data of the coefficient matrices, without any third-party unknown parameters or matrices. Although the resulting condition is relatively conservative, it is a sufficient condition and has obvious advantages for judging the stability of the system.

4. Conclusion

In this work, we have studied the exponential stability of Hopfield neural networks with a time-varying delay. We used the method of variation of constants for ordinary differential equations to obtain an equation satisfied by the state variable of the neural network. We then applied Gronwall’s inequality to this equation and obtained new criteria for the exponential stability of neural networks with a time-varying delay. Our result depends only on the coefficient matrices of the system and not on the existence of any other unknown matrices. The exponential stability of specific systems can easily be tested by using these criteria.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All authors contributed equally to the manuscript. All authors read and approved the final manuscript.

Acknowledgments

The authors would like to express their gratitude to the authors of [37] for the information and ideas. This work was supported in part by the National Natural Science Foundation of China under Grants 11961021 and 11561019, in part by the Guangxi Natural Science Foundation under Grants 2020GXNSFAA159084 and 2020GXNSFAA159172, and in part by the Hechi University Research Foundation for Advanced Talents under Grant 2019GCC005.