Abstract
In this article, building on the linear multistep method, we combine the simplified reproducing kernel method (SRKM) with an optimization method to solve advanced impulsive differential equations (IDEs) with piecewise constant arguments. The convergence order and the time complexity of the method are also discussed. We prove that the approximate solutions obtained by this algorithm, together with their derivatives, converge uniformly. Two numerical examples demonstrate that the proposed algorithm clearly outperforms other methods.
1. Introduction
For almost half a century, impulsive differential equations (IDEs) have served as mathematical models for many natural phenomena and practical problems. They play a pivotal role in various areas of biomathematics and applied physics [1–3] and have been applied to practical problems in biomechanics, population dynamics, and optimal control. Generally speaking, analytical solutions of IDEs are difficult to obtain, especially in complicated cases involving nonlinearity, fractional order, or piecewise constant arguments. In most practical problems, only an approximate or numerical solution of an IDE can be obtained. Therefore, the existence of solutions of IDEs and their numerical treatment have attracted increasing attention from scholars [4–7].
IDE systems with piecewise constant arguments form a particularly interesting class of problems and constitute an important mathematical model. However, their numerical solution has received little attention. Wiener [8] analyzed many important properties of solutions of IDEs with piecewise constant arguments. Bereketoglu et al. [9] proved the existence of solutions of first-order nonhomogeneous advanced IDEs. Zhang [10–13] studied the oscillation and asymptotic stability of the Runge–Kutta method for IDEs. At present, the Runge–Kutta and Euler methods are the main tools used to solve IDEs; only a few authors have adopted the reproducing kernel method.
In this article, we study the linear multistep method in a reproducing kernel space to solve the following advanced IDEs: where denotes the greatest integer function, , and are real constants, and . In addition, we assume that (1) has a unique solution.
Since the reproducing kernel method (RKM) was proposed at the beginning of the last century, more and more scholars have used it to solve initial and boundary value problems [14–16]. Geng [17, 18] solved singularly perturbed problems and nonlocal boundary value problems by RKM. Li [19, 20] applied RKM to a variety of fractional models. The SRKM makes it easy to obtain highly smooth analytical approximations, and many scholars have studied it in recent years [21–23]. Zhao [24] developed the convergence order theory of SRKM. Mei [25–28] solved many integral equations and impulsive problems using SRKM. In this paper, the SRKM is combined with an optimization method to solve (1); the resulting method has clear advantages in time complexity and lends itself to a step-by-step (interval-by-interval) solution of the model.
This article is organized as follows. The next section introduces the reproducing kernel space and the transformation of the original model. In the third section, the SRKM and the optimization method for solving (1) are given, and the convergence and time complexity of the proposed method are analyzed. At the end of the paper, we present two numerical experiments and some conclusions.
2. Preliminaries
To prepare for the description of the algorithm, this section introduces the relevant reproducing kernel spaces and the simplification of the model. Throughout the paper, . By [9], the solution of (1) is unique.
Definition 1. (see [16]). The simplified reproducing kernel space is defined as follows: Its reproducing kernel is , and the space can be defined similarly.
Because the solution of (1) has many impulsive points, this paper adopts a piecewise algorithm; that is, we first solve (1) on .
If , we would have .
On the other hand, ; therefore, .
So, on the interval [0, 1), we can simplify equation (1) into Let ; obviously, is a continuous function.
In other words, solving (3) is equivalent to finding a function that satisfies where and are real constants and is an unknown constant.
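Since equation (1) and its simplification are only shown schematically in this extract, the following minimal Python sketch illustrates the reduction step for an assumed model of the form x'(t) = a x(t) + b x([t + 1]): on [0, 1) the greatest-integer argument [t + 1] is identically 1, so the advanced term collapses to the single unknown constant x(1). The coefficients a, b and the initial value x0 below are placeholders, not data from the paper.

```python
import numpy as np

# A minimal sketch, assuming an advanced equation of the illustrative form
#     x'(t) = a*x(t) + b*x([t + 1]),   t >= 0,
# where [.] is the greatest integer function. The actual model (1) may differ;
# this only illustrates why the simplified problem contains an unknown constant.
a, b, x0 = -1.0, 0.5, 1.0           # placeholder coefficients and initial value x(0)

t = np.linspace(0.0, 1.0, 11, endpoint=False)
print(np.floor(t + 1.0))            # identically 1 on [0, 1)

# Hence on [0, 1) the advanced term x([t + 1]) equals the single value
# sigma = x(1), and the equation reduces to the ODE
#     u'(t) = a*u(t) + b*sigma,   u(0) = x0,
# in which sigma is an unknown constant to be determined later.
```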
3. The Linear Multistep Method
In this section, in order to solve (4), we establish the linear multistep method and the SRKM and give the convergence and time complexity analyses of the algorithm.
By (4), , a linear operator is defined in this paper:
Wu and Lin [16] proved that is a bounded operator. So, (4) is equivalent to the following form:
Put . Take , which is dense on .
Theorem 1. where and is the adjoint operator of .
Proof. .
From [14], for each fixed , it follows that the function system is linearly independent on . Moreover, is complete in .
Let
Then, we can obtain that .
Let , which is an orthogonal projection operator.
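Because the operator, the kernel, and the node set above are only indicated schematically in this extract, the short Python sketch below shows how functions of this kind are typically evaluated in RKM-type methods, namely ψ_i(x) = (L_y K(x, y))|_{y = x_i}. The first-order operator Lu = u' + a u and the polynomial kernel K are placeholders for illustration only, not the operator and kernel actually used in the paper.

```python
import numpy as np

# A minimal sketch of psi_i(x) = (L_y K(x, y))|_{y = x_i}.
a = -1.0                                   # assumed coefficient in the placeholder operator L u = u' + a*u

def K(x, y):
    # hypothetical smooth kernel standing in for the reproducing kernel
    return 1.0 + x * y + (x * y) ** 2 / 4.0

def dK_dy(x, y):
    # analytic derivative of the placeholder kernel with respect to y
    return x + x ** 2 * y / 2.0

def psi(i, x, nodes):
    # apply L to the function y -> K(x, y) and evaluate at y = nodes[i]
    y = nodes[i]
    return dK_dy(x, y) + a * K(x, y)

nodes = np.linspace(0.0, 1.0, 9, endpoint=False)   # dense node set {x_i} on [0, 1)
print(psi(0, 0.3, nodes))
```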
Theorem 2. If satisfies equation (5), then satisfies
Proof. and .
Theorem 3. converges uniformly to , where satisfies equation (5).
Proof. .
Therefore, when satisfies (5), ,
Since is a continuous function and uniformly,
Therefore, is the solution of (11), where if :
In other words, is the approximate solution of (2) on [0, 1).
As ,
In order to find the approximate solution , we must determine . Since and are known functions, we take the inner product of both sides of (12) with and , which yields the following equations:
Let
Since is linearly independent in , the inverse exists,
Thus, can be expressed in terms of . Substituting (15) into (12) yields
According to the previous analysis, we have
The smaller the value of , the better approximates . Therefore, the following optimization model is used to determine :
To handle (18), we solve for with optimization software and substitute the result into equation (16) to obtain .
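Since the concrete forms of (12)–(18) are not reproduced above, the following Python sketch only illustrates the role of the optimization step: under the assumption that the simplified problem on [0, 1) has the illustrative form u'(t) = a u(t) + b σ with u(0) = x0 and unknown constant σ = x(1), a standard ODE integrator stands in for the SRKM solve, and σ is fitted by minimizing a consistency mismatch. The data a, b, x0 and the search bounds are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# A minimal sketch of the optimization step for the unknown constant sigma.
# The SRKM linear solve is replaced by a standard integrator purely for brevity;
# only the role of the optimization over sigma is illustrated.
a, b, x0 = -1.0, 0.5, 1.0          # assumed problem data

def solve_given_sigma(sigma):
    # approximate solution on [0, 1] for a fixed trial value of sigma
    return solve_ivp(lambda t, u: a * u + b * sigma, (0.0, 1.0), [x0],
                     dense_output=True, rtol=1e-10, atol=1e-12)

def mismatch(sigma):
    # sigma represents x(1), so it should coincide with the value the
    # trial solution attains at t = 1; minimize the squared mismatch
    u1 = solve_given_sigma(sigma).sol(1.0)[0]
    return (u1 - sigma) ** 2

res = minimize_scalar(mismatch, bounds=(-10.0, 10.0), method="bounded")
print("fitted unknown constant:", res.x)
```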
Lemma 1 (see [24]). In , if is the approximate solution of by SRKM, then , where is a constant.
Theorem 4. converges to , and the convergence order is at least two.
Proof. From the preceding analysis, we know that is also the solution of in . According to Lemma 1, converges to , and the convergence order is at least two. Therefore, where , .
Furthermore, the formula for calculating the convergence order of the algorithm is as follows:
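The formula itself is not reproduced in this extract; an empirical estimate of the kind commonly used for this purpose (an assumption here, not necessarily the exact formula of the paper) compares the maximum errors obtained with two node counts:

\[
\text{C.O.} \approx \frac{\ln\bigl(E_{n_1}/E_{n_2}\bigr)}{\ln\bigl(n_2/n_1\bigr)},
\]

where \(E_n\) denotes the maximum absolute error computed with \(n\) nodes; for a second-order method, doubling the number of nodes should then roughly quarter the error.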
Theorem 5. The time complexity of the algorithm is .
Proof. According to the preceding discussion, the computation of in equation (12) can roughly be divided into the following parts.
(1) Assembling the coefficient matrix of equation (14): assume that the amount of computation required for each inner product is ; then the time complexity of computing all the inner products is .
(2) Solving ( is the coefficient matrix of equation (12)): we solve equation (14) by a decomposition method; this decomposition is well known, its complexity is , and the complexity of solving one triangular system (such as , where is a vector of ) is . Here we need to solve triangular systems, so the total complexity of this part is .
(3) The time complexity of evaluating in equation (12) is .
Therefore, the total time complexity is
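The individual cost terms in the proof above are left blank in this extract; one plausible accounting, under the assumptions that n nodes are used, each inner product costs O(q) operations, the n × n coefficient matrix is factorized by an LU-type decomposition, and one right-hand side is processed, is

\[
T(n) \;=\; \underbrace{O(q\,n^{2})}_{\text{(1) inner products}}
\;+\; \underbrace{O(n^{3})}_{\text{(2) factorization}}
\;+\; \underbrace{O(n^{2})}_{\text{(3) triangular solves and evaluation}}
\;=\; O(n^{3} + q\,n^{2}),
\]

with each additional right-hand side contributing a further \(O(n^{2})\).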
Let ; therefore, is the approximate solution of (1) in .
If , then , and we can get the following equation by the same simplification method:
We can further solve (23) on the interval . where is a known function and is an unknown constant.
Since  and (13) have the same form, the method proposed in this section can be used to solve (23), that is, to obtain the approximate solution of (1) on . Similarly, the approximate solutions of (1) on can be obtained by the linear multistep method, ; a structural sketch of this procedure is given below.
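As a structural illustration of this interval-by-interval strategy, the Python sketch below strings together solves on successive unit intervals. The functions solve_on_unit_interval (standing in for the SRKM/optimization solver of this section) and jump (a hypothetical impulse map relating the left limit at t = k + 1 to the new initial value) are placeholders, since the impulse conditions of (1) are not reproduced in this extract.

```python
# A minimal sketch of the step-by-step (interval-by-interval) strategy.
# `solve_on_unit_interval(k, left_value)` is assumed to return a callable
# approximate solution on [k, k+1); `jump(t, left_limit)` is a hypothetical
# impulse map giving the new initial value just after the impulsive point t.

def solve_piecewise(x0, n_intervals, solve_on_unit_interval, jump):
    pieces = []                        # one solution piece per unit interval
    left_value = x0
    for k in range(n_intervals):
        piece = solve_on_unit_interval(k, left_value)   # solve (1) on [k, k+1)
        pieces.append(piece)
        left_limit = piece(k + 1.0)                     # value approached at t = k+1
        left_value = jump(k + 1, left_limit)            # apply the impulse condition
    return pieces
```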
4. Numerical Experiments
Example 1. Let us consider the following advanced IDE [12]. The exact solution is
Example 2. Consider the following advanced IDE [13]. The exact solution is
In Figures 1 and 2, the variation of the error is shown. As seen in Tables 1 and 2, the more nodes are used, the smaller the error of the numerical results. In other words, with enough nodes we can obtain a more accurate approximation, which is consistent with the theory presented earlier in this paper. Each curve in the two figures is the result of one step of our algorithm, which shows that the step-by-step solution algorithm of this paper is very well suited to solving (1).


5. Conclusion
Based on the SRKM and an optimization model, this paper proposes, for the first time, a numerical algorithm for solving IDEs with piecewise constant arguments. Since the SRKM proposed here does not need to take complex boundary and initial conditions into account, it is quite simple. Numerical results show the superiority of the method: from the tables and figures of the examples, it can be seen that the error becomes smaller and smaller as increases. Solving impulsive differential equations has long been difficult because their solutions are only piecewise smooth. The step-by-step solution idea proposed in this paper handles this difficulty well, and the algorithm can be applied to other impulsive differential equations.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This study was supported by Big Data Research Center of Zhuhai College, Beijing Institute of Technology (XJ-2018-05), and two projects (2019KTSCX217 and ZH22017003200026PWC).