Abstract
It is challenging to reconstruct a nonlinear dynamical system when sufficient observations are not available. Recent studies show that this problem can be solved within the paradigm of compressive sensing. In this paper, we study the reconstruction of chaotic systems based on the stochastic gradient matching pursuit (StoGradMP) method. Compared with the previous method based on convex optimization, the StoGradMP method performs much better when the numerical sampling period is small. The present study therefore enables potential applications of reconstruction from limited observations in special situations where data can be acquired only within a limited time.
1. Introduction
Dynamical systems play a key role in many scientific disciplines, including physical, biological, and social systems. Dynamical models help us analyze how the present state of a system depends on its past states and predict possible future states. To accomplish these goals, accurate dynamical models describing these systems are required, so the reconstruction of dynamical systems from observed time series has become a central issue in these scientific disciplines [1]. It is challenging to reconstruct a nonlinear dynamical system without prior knowledge. The delay-coordinate embedding method [2] is the traditional model-free reconstruction method for nonlinear dynamical systems. The mathematical foundation of the method is Takens's theorem [3], which was laid out three decades ago. The system reconstructed by the delay-coordinate embedding method can be used to estimate some important quantities of the original system, such as the dimension of the attractor [4], the Lyapunov exponents [5], and the unstable periodic orbits that constitute the skeleton of the attractor [6]. With the development of deep learning, some model-free methods based on neural networks have also been developed [7, 8]. One disadvantage of model-free methods is that they cannot give an accurate dynamical model of the underlying system, so the mathematical tools of dynamical systems theory cannot be used to analyze the behaviors of the system.
In general, it is difficult to reconstruct an accurate dynamical model of the underlying system without any prior knowledge. But the nonlinear functions describing the dynamics of the system can be approximated by power series in many famous chaotic systems, such as the Lorenz system [9], the Rössler system [10], the hyperchaotic Chen system [11], and the Hénon map [12]. In this case, the reconstruction of an accurate dynamical model can be converted into parameter identification and can be achieved by parameter identification methods such as the synchronization-based method [13], the particle swarm optimization algorithm [14], the genetic algorithm [15], and the differential evolution algorithm [16]. In general, the number of unknown parameters to be estimated can be large, so a large amount of observations may be required. However, it is difficult or expensive to acquire observations in some special situations. Can the system be reconstructed when sufficient observations are not available? In fact, though the number of unknown parameters is large, many of them are zero (the sparsity condition). So the compressive sensing method [17, 18] for sparse signal recovery can provide a solution to the problem. In [19], the authors first studied the reconstruction of chaotic systems by a compressive sensing method. The results show that reconstruction of a chaotic system is possible with limited observations. Reconstruction of dynamical networks [20–22] has also been studied based on the compressive sensing method. However, we find that the reconstruction success rate for chaotic systems is low when the numerical sampling period is small. The possible reason is that the algorithm gets trapped in local optima when the restricted isometry property (RIP) breaks down. In fact, the above reconstruction problem is solved by convex optimization methods [23]. In addition, there exist greedy methods for sparse signal recovery which allow faster computation compared to the convex optimization methods.
In practical applications, the widely used greedy methods are orthogonal matching pursuit (OMP) [24], compressive sampling matching pursuit (CoSaMP) [25], and iterative hard thresholding (IHT) [26]. Based on these methods, some stochastic greedy methods have also been developed, such as stochastic iterative hard thresholding (StoIHT), stochastic gradient matching pursuit (StoGradMP) [27], and stochastic compressive sampling matching pursuit (StoCoSaMP) [28]. In general, randomness typically has a dramatic and positive impact on escaping from local optima. So we will try to use a stochastic greedy method to reconstruct chaotic dynamical systems when the numerical sampling period is small.
2. Problem and Method
Let us consider chaotic dynamical systems described by

$$\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}), \qquad (1)$$

where $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$ is the state vector and $F_i(\mathbf{x})$ ($i = 1, 2, \ldots, n$) are nonlinear functions which describe the dynamics of the system. Here, we assume that the accurate model of the system is not available, but $F_i(\mathbf{x})$ ($i = 1, 2, \ldots, n$) can be approximated by power series:

$$F_i(\mathbf{x}) \approx \sum_{l_1=0}^{p} \sum_{l_2=0}^{p} \cdots \sum_{l_n=0}^{p} a_{i, l_1 l_2 \cdots l_n}\, x_1^{l_1} x_2^{l_2} \cdots x_n^{l_n}. \qquad (2)$$
For many famous chaotic systems, such as the Lorenz system [9], the Rössler system [10], and the Hénon map [12], the nonlinear functions $F_i(\mathbf{x})$ describing the dynamics of the system are just a part of a power series. By the above assumption (2), the reconstruction of the chaotic system (1) can be converted into the identification of the parameters $a_{i, l_1 l_2 \cdots l_n}$. Moreover, assume that observations of $\mathbf{x}(t)$ and $\dot{\mathbf{x}}(t)$ at times $t_1, t_2, \ldots, t_M$ are available, written as $\mathbf{x}(t_m)$ and $\dot{\mathbf{x}}(t_m)$ ($m = 1, 2, \ldots, M$). Then we obtain the following equation:

$$\mathbf{y}_i = G \mathbf{a}_i, \quad i = 1, 2, \ldots, n, \qquad (3)$$

where $\mathbf{y}_i = (\dot{x}_i(t_1), \dot{x}_i(t_2), \ldots, \dot{x}_i(t_M))^T$ are $M$-dimensional vectors, $\mathbf{a}_i$ are $N$-dimensional vectors collecting the coefficients $a_{i, l_1 l_2 \cdots l_n}$, and $G$ is the $M \times N$ matrix whose rows consist of the monomials $x_1^{l_1} x_2^{l_2} \cdots x_n^{l_n}$ evaluated at the observation times, which is usually called the design matrix or measurement matrix. Obviously, if we have enough observations of $\mathbf{x}$ and $\dot{\mathbf{x}}$ to make $G$ a nonsingular matrix, then $\mathbf{a}_i$ ($i = 1, \ldots, n$) can be solved from (3) easily, and the chaotic system (1) is reconstructed exactly at the same time. In general, the number $N$ of coefficients to be estimated is large when $n$ (the dimension of the system) is large, so a large number of observations is required. In some cases, however, we cannot get enough observations of $\mathbf{x}$ and $\dot{\mathbf{x}}$, i.e., $M < N$, and the reconstruction of chaotic systems when $M < N$ is an interesting and significant problem. According to conventional wisdom, the system (1) cannot be reconstructed when $M < N$ because equation (3) is underdetermined and has a nonunique solution. But most elements of $\mathbf{a}_i$ are zero in the reconstruction of chaotic systems (i.e., $\mathbf{a}_i$ is sparse), so equation (3) can be solved by the compressive sensing method. In [19], the authors first studied the reconstruction of chaotic systems by the compressive sensing method. The results show that the reconstruction of a chaotic system is possible with limited observed data, even when the nonlinear functions describing the dynamics are unknown. The reconstructed systems are accurate and can be used to predict crises of chaotic systems. However, we find that the success rate of reconstruction is low when the numerical sampling period is small.
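As a concrete illustration of how the measurement matrix is assembled from observed states, the following Python sketch builds $G$ using all monomials of total degree at most 3. The function names and the monomial ordering are our own choices for illustration, not part of the original method.

```python
# Sketch: build the power-series measurement matrix G from sampled states.
# Function names and monomial ordering are our own (hypothetical) choices.
import itertools
import numpy as np

def monomial_exponents(n_vars, max_order):
    """All exponent tuples (l1, ..., ln) with l1 + ... + ln <= max_order."""
    exps = [e for e in itertools.product(range(max_order + 1), repeat=n_vars)
            if sum(e) <= max_order]
    # sort so the constant term (0, ..., 0) comes first
    return sorted(exps, key=lambda e: (sum(e), e))

def build_design_matrix(X, max_order=3):
    """X: (M, n) array of state observations -> (M, N) matrix G."""
    exps = monomial_exponents(X.shape[1], max_order)
    cols = [np.prod(X ** np.array(e), axis=1) for e in exps]
    return np.column_stack(cols), exps

X = np.random.randn(5, 3)          # 5 observations of a 3-dimensional state
G, exps = build_design_matrix(X)   # N = C(6, 3) = 20 monomials of degree <= 3
```

For $n = 3$ and maximum degree 3 this gives $N = 20$ columns, the first of which is the all-ones constant column.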
The reason for this may be that the algorithm gets trapped in local optima that may not be global when the RIP condition breaks down. Actually, the reconstruction method in [19] solves the following basis pursuit problem:

$$\min \|\mathbf{a}_i\|_1 \quad \text{subject to} \quad G \mathbf{a}_i = \mathbf{y}_i. \qquad (4)$$
The above basis pursuit problem can be recast into a linearly constrained minimization problem, and the corresponding MATLAB code (l1-magic) can be found at [29]. In fact, the convex optimization problem (4) is a relaxation of the following nonconvex problem:

$$\min \|\mathbf{y}_i - G \mathbf{a}_i\|_2^2 \quad \text{subject to} \quad \|\mathbf{a}_i\|_0 \le k, \qquad (5)$$

where $\|\cdot\|_0$ is the $\ell_0$ norm, which counts the number of nonzero elements, and $k$ is the sparsity level. Recently, some greedy matching pursuit algorithms have been proposed to solve this problem, such as OMP [24] and CoSaMP [25]. The OMP method usually offers a much faster runtime than the convex approach but lacks strong recovery guarantees. CoSaMP offers both the advantage of fast runtime and recovery guarantees as strong as those of a convex method [27]. The CoSaMP method has been used remarkably in the signal processing community due to its simplicity and computational efficiency. However, we find that the success rate of reconstruction is very low when we try to reconstruct chaotic systems by the CoSaMP method. The reason may be that the algorithm gets trapped in local optima when the initial condition is not properly chosen. Based on the CoSaMP method, some stochastic greedy methods have been developed, such as StoGradMP [27] and StoCoSaMP [28]. Though a comprehensive analysis of the phenomenon of escaping local minima in these stochastic greedy methods is difficult, balancing randomness and greediness carefully has a dramatic and positive impact on escaping from local optima. So we will try to use the StoGradMP method to reconstruct chaotic dynamical systems when the numerical sampling period is small. The algorithm based on StoGradMP is described in Algorithm 1 and consists of the following steps at each iteration:

(1) Randomize: Randomly select a subset $\Omega_t$ of $\{1, 2, \ldots, M\}$ with a specified probability, where $|\Omega_t|$ is the sampling number.

(2) Proxy: Form a signal proxy $\mathbf{p} = \nabla f_{\Omega_t}(\mathbf{a}^t)$, the gradient of $f_{\Omega_t}(\mathbf{a}) = \sum_{m \in \Omega_t} (y_{i,m} - \mathbf{g}_m \mathbf{a})^2$, where $y_{i,m}$ is the $m$th element of $\mathbf{y}_i$ and $\mathbf{g}_m$ is the $m$th row of the measurement matrix $G$.

(3) Identify: Locate the $2k$ largest components of the proxy.
The operation $\Gamma = \operatorname{supp}_{2k}(\mathbf{p})$ gets the indices of the $2k$ largest entries (in magnitude) of $\mathbf{p}$ to compose $\Gamma$.

(4) Merge: Form the subspace $\hat{\Gamma} = \Gamma \cup \Lambda^t$, where $\Gamma$ is the set of newly identified components in the identify step and $\Lambda^t$ is the set of components that appear in the current approximation.

(5) Estimate: Solve a suboptimization problem with the search restricted to $\hat{\Gamma}$, yielding $\mathbf{b}$. For the present problem, this is a least squares problem.

(6) Prune: Find the subspace which is closest to the solution just found in the estimate step. The operation $\Lambda^{t+1} = \operatorname{supp}_k(\mathbf{b})$ chooses the indices of the $k$ largest entries (in magnitude) of $\mathbf{b}$ to compose $\Lambda^{t+1}$.

(7) Update: Produce a new approximation by setting $\mathbf{a}^{t+1} = \mathbf{b}_{\Lambda^{t+1}}$.
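The seven steps above can be sketched for the least-squares loss as follows. This is a simplified illustration of the StoGradMP iteration, not the authors' implementation: the block-sampling scheme, the full least squares in the estimate step, and all names are our own choices.

```python
# Minimal StoGradMP sketch for f(a) = ||y - G a||_2^2 (our own simplification).
import numpy as np

def sto_grad_mp(G, y, k, n_iter=100, block_size=None, seed=None):
    rng = np.random.default_rng(seed)
    M, N = G.shape
    block_size = block_size or max(2 * k, M // 4)
    a = np.zeros(N)
    for _ in range(n_iter):
        # (1) randomize: sample a subset of the rows (observations)
        idx = rng.choice(M, size=min(block_size, M), replace=False)
        # (2) proxy: gradient of the loss restricted to the sampled rows
        p = G[idx].T @ (G[idx] @ a - y[idx])
        # (3) identify: indices of the 2k largest gradient entries
        gamma = np.argsort(np.abs(p))[-2 * k:]
        # (4) merge with the support of the current approximation
        merged = np.union1d(gamma, np.flatnonzero(a)).astype(int)
        # (5) estimate: least squares restricted to the merged support
        b = np.zeros(N)
        b[merged] = np.linalg.lstsq(G[:, merged], y, rcond=None)[0]
        # (6)-(7) prune to the k largest entries and update
        lam = np.argsort(np.abs(b))[-k:]
        a = np.zeros(N)
        a[lam] = b[lam]
    return a

# toy recovery problem: 15 observations, 30 unknowns, 3 nonzero coefficients
rng = np.random.default_rng(0)
G = rng.standard_normal((15, 30))
a_true = np.zeros(30)
a_true[[2, 7, 19]] = [1.5, -2.0, 0.8]
a_hat = sto_grad_mp(G, G @ a_true, k=3, seed=1)
```

Because the prune step keeps only $k$ entries, the returned approximation is always $k$-sparse.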
According to the theoretical results in [27], the above method converges linearly in expectation when the measurement matrix satisfies the RIP condition. Though it is an open problem to prove whether a general matrix satisfies the RIP condition, the numerical results show that the present method is effective for the reconstruction of chaotic systems. Indeed, the present algorithm is a stochastic version of CoSaMP. Compared with the original CoSaMP, the main difference is the gradient computation in the proxy step. The present method does not need to compute the gradient of the full loss $f$; instead, it only needs the gradient of $f_{\Omega_t}$, where $\Omega_t$ is a randomly chosen subset of $\{1, 2, \ldots, M\}$. This property not only makes the algorithm efficient for large-scale problems but also introduces random perturbations, which usually have a positive impact on escaping from local optima. In the previous paper [28], the authors also proposed a stochastic version of the CoSaMP method. Compared with the original CoSaMP method, either a greedy or a stochastic step is randomly executed at every iteration. During a stochastic step, atoms are chosen randomly to merge into the support; in a greedy step, the signal proxy is formed as in the CoSaMP method. Hence, that method applies only to problems where the least squares loss is used, whereas the StoGradMP method can be applied to loss functions which do not exhibit quadratic structure. This type of loss function will appear in our further study, so we choose the present StoGradMP-based method.
3. Numerical Experiments
In this section, some numerical simulations are performed to show that the StoGradMP method is more effective than the convex method used in [19] when the numerical sampling period is small. To show the importance of random perturbation, the reconstruction results by the CoSaMP method are also presented. There are two input parameters for StoGradMP: the sparsity level $k$ and the halting criterion. Though there are several strategies for choosing $k$ in particular applications, these methods are not suitable for the present problem. For the reconstruction of chaotic systems, the number of nonzero coefficients is very small, so we run the algorithm while iteratively increasing $k$ over some range and select the best approximation. For the stopping criterion, we use a fixed number of iterations. Some other simple stopping criteria can be found in [25], which may also be useful in practice. In the randomize step, we simply select all subsets of $\{1, 2, \ldots, M\}$ with the same probability. In the estimate step, we use the conjugate gradient method to solve the least squares problem. To solve the convex problem (4), we use the method in [23]; the corresponding MATLAB code (l1-magic) can be found at [29].
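For readers without access to the l1-magic package, the basis pursuit problem (4) can equivalently be solved as a linear program via the standard split $\mathbf{a} = \mathbf{u} - \mathbf{v}$ with $\mathbf{u}, \mathbf{v} \ge 0$. The following sketch uses SciPy's `linprog` as a stand-in for the solver used in the text; it is an illustration of the formulation, not the code used in our experiments.

```python
# Hedged sketch: basis pursuit min ||a||_1 s.t. G a = y, recast as an LP.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(G, y):
    """Split a = u - v with u, v >= 0, then minimize 1^T (u + v)."""
    M, N = G.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([G, -G]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

# underdetermined sparse example: 8 observations, 16 unknowns
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 16))
a_true = np.zeros(16)
a_true[[3, 11]] = [2.0, -1.0]
a_bp = basis_pursuit(G, G @ a_true)   # typically recovers a_true when G has RIP
```

When $G$ is square and nonsingular, the equality constraint has a unique feasible point, so the LP returns it exactly.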
In the simulations, we do some preprocessing on the original observed data. The observed data $\mathbf{y}_i$ ($i = 1, \ldots, n$) and the measurement matrix are centralized and normalized, i.e., let

$$\hat{\mathbf{y}}_i = \mathbf{y}_i - \bar{y}_i \mathbf{1}, \qquad \hat{\mathbf{g}}_l = \frac{\mathbf{g}_l - \bar{g}_l \mathbf{1}}{\|\mathbf{g}_l - \bar{g}_l \mathbf{1}\|_2}, \qquad (6)$$

where $\bar{y}_i$ is the mean of the elements of $\mathbf{y}_i$, $\mathbf{g}_l$ is the $l$th column of the measurement matrix $G$, and $\bar{g}_l$ is the mean of its elements. Because all elements of the first column of $G$ are one, the centralization step avoids the estimation of the constant term in the functions $F_i(\mathbf{x})$ ($i = 1, \ldots, n$). Each element of $G$ is constructed from monomials $x_1^{l_1} x_2^{l_2} \cdots x_n^{l_n}$; thus, the differences in amplitude between the columns of $G$ may be large. The normalization step removes the impact of the column amplitudes on the reconstruction. Moreover, the normalization may make the matrix satisfy the RIP condition if $G$ is constructed properly [19].
Then, instead of solving equation (3), we solve

$$\hat{\mathbf{y}}_i = \hat{G} \hat{\mathbf{a}}_i, \qquad (7)$$

where $\hat{G} = (\hat{\mathbf{g}}_1, \ldots, \hat{\mathbf{g}}_N)$ and $\hat{\mathbf{a}}_i$ is the rescaled coefficient vector. After the above equation is solved, $\mathbf{a}_i$ can be acquired by

$$a_{i,l} = \frac{\hat{a}_{i,l}}{\|\mathbf{g}_l - \bar{g}_l \mathbf{1}\|_2}, \qquad (8)$$

where $\hat{a}_{i,l}$ is the $l$th element of $\hat{\mathbf{a}}_i$. The constant term of $F_i(\mathbf{x})$ can be computed from (3) once the other coefficients $a_{i,l}$ are known.
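The centralization and normalization steps described above can be sketched as follows. This is our reading of the preprocessing (column means removed, centered columns scaled to unit norm, scaling undone after solving, and the constant term recovered last); all function names are our own.

```python
# Sketch of the preprocessing: center, normalize, solve, undo the scaling.
import numpy as np

def center_normalize(G, y):
    """Subtract means and scale the centered columns of G to unit 2-norm."""
    y_hat = y - y.mean()
    Gc = G - G.mean(axis=0)
    scale = np.linalg.norm(Gc, axis=0)
    scale[scale == 0] = 1.0            # the all-ones column centers to zero
    return Gc / scale, y_hat, scale

def recover_coefficients(G, y):
    """Solve the preprocessed system, undo the scaling, then recover the
    constant term from the original equation (first column of G is ones)."""
    G_hat, y_hat, scale = center_normalize(G, y)
    a_hat = np.linalg.lstsq(G_hat, y_hat, rcond=None)[0]
    a = a_hat / scale
    a[0] = (y - G[:, 1:] @ a[1:]).mean()   # constant term, computed last
    return a

# exact toy problem: ones column plus 4 random columns, 20 observations
rng = np.random.default_rng(0)
G = np.column_stack([np.ones(20), rng.standard_normal((20, 4))])
a_true = np.array([2.0, 1.0, 0.0, -3.0, 0.5])
a_rec = recover_coefficients(G, G @ a_true)
```

With exact data and more observations than coefficients, the preprocessing round-trips: the recovered vector matches the true coefficients, constant term included.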
To measure the success of the reconstruction, the following relative errors are defined:

$$E_i = \frac{\|\mathbf{a}_i^{\mathrm{est}} - \mathbf{a}_i\|_2}{\|\mathbf{a}_i\|_2}, \quad i = 1, 2, \ldots, n, \qquad (9)$$

where $n$ is the dimension of the system. The order $p$ of the power series is an input parameter which should be large enough that the nonlinear functions $F_i(\mathbf{x})$ can be represented by the power series (2). Obviously, if

$$E_i \le \epsilon \quad \text{for all } i, \qquad (10)$$

where $\epsilon$ is a small constant, the unknown system is reconstructed successfully. To show the performance of each reconstruction method, the success rate is computed. The success rate is the ratio of the number of successful reconstructions to the total number of independent realizations. In this section, the famous Lorenz system [9], the Rössler system [10], the hyperchaotic Chen system [11], and a chaotic memristive circuit [30] are used as illustrative examples. For these four systems, the highest power of the state variables is 3, so we set $p = 3$. In the numerical simulations, the fourth-order Runge–Kutta method with a time step of 0.001 is adopted to create the simulation data.
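The data generation and success evaluation can be sketched as follows. The fourth-order Runge–Kutta step with $h = 0.001$ matches the description in the text, while the success test is our reading of the relative-error criterion; the names are our own.

```python
# Sketch: RK4 data generation (h = 0.001, as in the text) and success test.
import numpy as np

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, h=0.001, n_steps=1000):
    """Integrate dx/dt = f(x) and return the sampled trajectory."""
    X = np.empty((n_steps + 1, len(x0)))
    X[0] = x0
    for t in range(n_steps):
        X[t + 1] = rk4_step(f, X[t], h)
    return X

def reconstruction_succeeds(A_est, A_true, delta=1e-2):
    """Success if the relative coefficient error is below delta for every i."""
    err = (np.linalg.norm(A_est - A_true, axis=1)
           / np.linalg.norm(A_true, axis=1))
    return bool(np.all(err <= delta))

X = simulate(lambda x: x, np.array([1.0]))   # dx/dt = x, so X[-1, 0] ≈ e
```

The success rate is then simply the fraction of independent realizations for which `reconstruction_succeeds` returns `True`.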
3.1. Example 1: Lorenz System
The famous Lorenz system can be described as follows:

$$\dot{x} = \sigma(y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z. \qquad (11)$$
The Lorenz system is derived from a two-dimensional fluid layer uniformly warmed from below and cooled from above. The variables $x$, $y$, and $z$ describe the rate of change of convection, the horizontal temperature variation, and the vertical temperature variation with respect to time, respectively. $\sigma$, $\rho$, and $\beta$ are positive system parameters, and the system exhibits chaotic behavior for some values of these parameters. The Lorenz equations also arise in simplified models for lasers, dynamos, thermosyphons, and so on. In this example, let $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$, the classical parameter values. Under this group of parameters, nonlinear system (11) has a chaotic attractor. For different numerical sampling periods, the success rates of reconstruction by the convex method (solved by l1-magic) and by StoGradMP are shown in Figures 1 and 2, respectively. In these two figures, the number of observations is fixed, and 1000 independent realizations are computed to obtain the success rate. The numerical results show that the StoGradMP method performs much better when the numerical sampling period is small. In Figure 3, the reconstruction results by the CoSaMP method are also presented. Comparing with Figure 2, one can see that random perturbation has a dramatic and positive impact on improving the success rate of reconstruction.
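As a minimal end-to-end check of the nonsingular case discussed after equation (3), the following sketch simulates the Lorenz system with the classical parameters $\sigma = 10$, $\rho = 28$, $\beta = 8/3$ (assumed here), observes the exact derivatives, and recovers the coefficients by plain least squares when the number of observations exceeds the number of monomials. The setup and names are ours; monomials up to degree 2 are used to keep the example small.

```python
# Sketch: recover the Lorenz coefficients in the overdetermined (M >> N) case.
import itertools
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, x, h):
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# sample the trajectory with time step 0.001, as in the text
h, M = 0.001, 2000
X = np.empty((M, 3))
X[0] = [1.0, 1.0, 1.0]
for t in range(M - 1):
    X[t + 1] = rk4_step(lorenz, X[t], h)
Y = np.array([lorenz(s) for s in X])      # observations of x-dot

# power-series matrix with monomials of total degree <= 2 (enough for Lorenz)
exps = sorted((e for e in itertools.product(range(3), repeat=3) if sum(e) <= 2),
              key=lambda e: (sum(e), e))
G = np.column_stack([np.prod(X ** np.array(e), axis=1) for e in exps])

# M >> N here, so plain least squares recovers the coefficients
A = np.linalg.lstsq(G, Y, rcond=None)[0]  # one coefficient column per equation
```

In the ordering above, column 3 of $G$ is the monomial $x$ and column 7 is $xz$, so the recovered coefficient of $x$ in $\dot{x}$ is close to $-\sigma$ and that of $xz$ in $\dot{y}$ is close to $-1$.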



3.2. Example 2: Rössler System
The famous Rössler system can be described as follows:

$$\dot{x} = -y - z, \qquad \dot{y} = x + ay, \qquad \dot{z} = b + z(x - c). \qquad (12)$$
Here $x$, $y$, and $z$ are the states of the system, and $a$, $b$, and $c$ are positive system parameters. Rössler designed the Rössler attractor in 1976, but the original theoretical equations were later found to be useful in modeling chemical reactions. The original Rössler paper says the Rössler attractor was intended to behave similarly to the Lorenz attractor, but it is simpler and has only one manifold. In this example, let $a = 0.2$, $b = 0.2$, and $c = 5.7$, the classical parameter values. Under this group of parameters, the nonlinear system (12) has a chaotic attractor. All the other parameters are the same as in example 1. The corresponding numerical results are shown in Figures 4–6. These results also show that the StoGradMP method is much better when the numerical sampling period is small and that random perturbation has a dramatic and positive impact on improving the success rate of reconstruction.



3.3. Example 3: Hyperchaotic Chen System
The hyperchaotic Chen system can be described as follows:

$$\dot{x} = a(y - x) + w, \qquad \dot{y} = dx - xz + cy, \qquad \dot{z} = xy - bz, \qquad \dot{w} = yz + rw. \qquad (13)$$
Here $x$, $y$, $z$, and $w$ are the states of the system, and $a$, $b$, $c$, $d$, and $r$ are positive system parameters. The positive Lyapunov exponents have an important impact on the behaviors of chaotic systems. In some applied problems, one may face systems that have more than one positive Lyapunov exponent, which causes the system dynamics to expand in several different directions at the same time. In this example, let $a = 35$, $b = 3$, $c = 12$, $d = 7$, and $r = 0.5$. Under this group of parameters, the nonlinear system (13) has a hyperchaotic attractor (two positive Lyapunov exponents). All the other parameters are the same as in example 1. The corresponding numerical results are shown in Figures 7–9. These results also show that the StoGradMP method is much better when the numerical sampling period is small and that random perturbation has a dramatic and positive impact on improving the success rate of reconstruction.



3.4. Example 4: Chaotic Memristive Circuits
In this example, we consider the simplest memristive circuit [30], which consists of an inductor, a capacitor, and a nonlinear memristor. The system can be described as follows:

$$\dot{x} = \frac{y}{C}, \qquad \dot{y} = -\frac{1}{L}\left[x + \beta(z^2 - 1)\,y\right], \qquad \dot{z} = -y - \alpha z + yz, \qquad (14)$$

where $x$ is the voltage across the capacitor, $y$ is the current through the inductor, and $z$ is the internal state of the memristor. The circuit component values are chosen as in [30]; using these component values, we get the following values for the parameters in system (14): $C = 1$, $L = 3$, $\beta = 3/2$, and $\alpha = 0.6$. Under this group of parameters, nonlinear system (14) has a chaotic attractor. All the other parameters are the same as in example 1. The corresponding numerical results are shown in Figures 10–12. These results also show that the StoGradMP method is much better when the numerical sampling period is small and that random perturbation has a dramatic and positive impact on improving the success rate of reconstruction.



4. Conclusions
Reconstruction of nonlinear dynamical systems from observed time series is interesting and significant in many scientific disciplines. In general, it is difficult to reconstruct the exact dynamical model when sufficient observed data are not available. Recently, some studies have shown that this problem can be solved by the compressive sensing method, which is widely used in sparse signal recovery. But we find that the reconstruction success rate of previous reconstruction methods is low when the numerical sampling period is small. The reason may be that the algorithm gets trapped in local optima that may not be global when the RIP condition breaks down. In this paper, we try to solve the problem by the recently developed stochastic gradient matching pursuit method. In general, randomness has a positive impact on escaping from local optima. Numerical simulation results show that the stochastic gradient matching pursuit method is much more effective than the previous reconstruction method when the numerical sampling period is small. The present study therefore enables potential applications of reconstruction from limited observations in special situations where data can be acquired only within a limited time.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
The authors are grateful for the support of the Special Foundation for Basic Scientific Research of Central Colleges (No. CHD2013G2121017), the Fundamental Research Funds for the Central Universities (Nos. CHD310812152002 and CHD300102129202). Y. Z. X.’s visiting to the University of Oklahoma was sponsored by the State Scholarship Fund of China.