Abstract

This paper introduces a novel approach for solving first-order stiff initial value problems through the development of a one-step family of three optimized second-derivative hybrid block methods. The optimization process was integrated into the derivation of the methods to achieve maximal accuracy. Through a rigorous analysis, it was determined that the methods exhibit properties of consistency, zero-stability, convergence, and A-stability. The proposed methods were implemented using the waveform relaxation technique, and the computed results demonstrated the superiority of these schemes over certain existing methods investigated in the study.

Keywords: one-step methods; optimized method; second-derivative hybrid block

1. Introduction

First-order differential equations can be solved using various approaches. However, many of these methods are inefficient for complex problems. Consequently, scientists continue to seek techniques that can accurately address these challenges without introducing additional errors.

In the context of second-derivative methods, Enright [1] formulated second-derivative multistep methods for solving first-order stiff ordinary differential equations (ODEs). Building on Enright’s work, modified second-derivative linear multistep methods were developed by Ogunfeyitimi and Ikhile [2] for the solution of first-order initial value problems (IVPs).

Stability is an important consideration, as highlighted by Cash [3], particularly when solving stiff problems. To overcome the limitations imposed by Dahlquist’s theorem (Dahlquist [4]), several authors have explored hybrid block methods for the solution of first-order differential equations (see, for example, [5–9]). Ngwane and Jator [10] proposed a hybrid second-derivative method with two equally spaced off-step points, which was used to solve first-order stiff IVPs.

Yakubu and Kwami [11] reformulated and developed some discrete block hybrid schemes into implicit two-derivative Runge-Kutta collocation methods of higher order to solve equations such as Equation (1) in Section 2. Sunday [12] introduced a pair of two-step optimized second-derivative hybrid methods for solving first-order stiff systems of ODEs. The methods were developed with equidistant and nonequidistant hybrid points, and the results obtained from the numerical experiments outperformed those of the compared methods.

In the quest for more accurate solutions, Ramos [13] developed an optimized two-step hybrid block method with two optimal points to solve first-order IVPs, and the results obtained were better than those of several existing methods. Also, a fifth-order one-step multiderivative hybrid implicit Runge-Kutta method for the solution of IVPs was derived by [14], and the results obtained were better than those of the compared methods. More recently, authors such as [15–17] introduced numerical methods for the solution of IVPs, and the obtained results suggested that the methods are suitable. In addition, authors such as [18, 19] developed similar methods for the solution of second-order ODEs, and the results obtained are better than those of the reviewed works. Further, a time-efficient reformulation of the Lobatto III family of order-eight methods for the solution of IVPs was proposed by Qureshi et al. [20]. The one-step method of [20] utilized three optimal points, and its numerical results are better than those of the compared methods. Abdulganiy et al. [21] developed a functionally fitted block hybrid Falkner method for the solution of ODEs; the method was applied to the Kepler equation and related problems, and the results obtained are very good. In the work of Ramos, Qureshi, and Soomro [22], a Simpson-type block method with time efficiency and order stars was derived for solving ODEs; the method was implemented with a variable step size, and the results obtained show that the method is suitable. It is important to note that many methods in the literature lack the high-order accuracy and stability properties required for solving first-order stiff IVPs.

The motivation for this study is to derive a family of one-step optimized second-derivative hybrid block methods with optimal off-step points that gives maximum accuracy for solving equations such as Equation (1) in Section 2, while possessing high order and very good stability characteristics, in particular A-stability.

2. Derivation of Optimized Hybrid Methods

This section presents the derivation of a one-step optimized second-derivative hybrid block method for solving first-order IVPs of the form given in Equation (1), where the right-hand side function f is continuous within the interval of integration [a, b]. We assume that f satisfies a Lipschitz condition, which guarantees the existence and uniqueness of the solution of Equation (1). The IVPs are well-posed, and the problem is solved over the interval [a, b]. The step size is defined as h = x_{n+1} − x_n = (b − a)/N for n = 0, 1, …, N − 1.
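For reference, a generic member of this problem class can be written using conventional notation (the symbols f, y, a, b, h, and N are assumed here rather than quoted from Equation (1)):

\[
y'(x) = f\bigl(x, y(x)\bigr), \qquad y(a) = y_0, \qquad x \in [a, b],
\]

where f satisfies a Lipschitz condition \(\lVert f(x,u) - f(x,v)\rVert \le L\lVert u - v\rVert\) and the grid points are \(x_n = a + nh\) with \(h = (b-a)/N\).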

The continuous method is based on approximating the exact solution at the grid points by a polynomial of the form given in Equation (2), whose first and second derivatives approximate the first and second derivatives of the solution. The unknown coefficients of the polynomial are obtained by imposing interpolation conditions and collocation conditions at specified nodes. The hybrid block methods are formulated by considering the combined set of interpolation and collocation points.

The derivation assumes that the solution of the IVP is to be approximated over the subinterval [x_n, x_{n+1}]. Interpolation is imposed at x_n, and collocation is imposed at x_n and x_{n+1}, plus prescribed off-step points lying strictly between x_n and x_{n+1}, whose number determines the particular member of the family. Accordingly, a system of equations in the unknown coefficients is obtained from

Here, the collocated second-derivative values represent the second derivative of the solution at the corresponding nodes.

In addition, every off-step point is associated with a companion point. This requirement is consistent with the nature of collocation points, which are usually symmetric about some point on the integration grid; for the one-step methods considered in this work, the symmetry is about the midpoint of the step. The first three off-step point configurations have been considered, in the form displayed in Table 1.

Equations (5)–(8) are solved for the unknown coefficients, giving a solution expressed in terms of the function, first-derivative, and second-derivative values at the nodes. This solution is substituted into Equation (2) to obtain the continuous approximation, which takes the form of Equation (10). We remark that for each block of integration, the unknowns are the approximations at the off-step points and at the end point of the block.

The specific methods are developed by evaluating Equation (10) at these unknown points.
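To make the interpolation-collocation construction concrete, the following is a minimal SymPy sketch under simplifying assumptions: a single off-step point at x_n + ch, a degree-four trial polynomial, and collocation of the second derivative only at x_{n+1}. The node choices, polynomial degree, and symbol names are illustrative and do not reproduce Equations (5)–(8).

```python
# Minimal SymPy sketch of the interpolation-collocation construction.
# Illustrative only: one off-step point c and a degree-4 polynomial are assumed;
# the paper's Equations (5)-(8) use more nodes and collocate y'' more widely.
import sympy as sp

x, xn, h, c = sp.symbols('x x_n h c', positive=True)
yn, fn, fc, f1, g1 = sp.symbols('y_n f_n f_c f_1 g_1')
a = sp.symbols('a0:5')                          # unknown polynomial coefficients

p = sum(a[j] * (x - xn)**j for j in range(5))   # trial polynomial, Eq. (2)-style
dp, d2p = sp.diff(p, x), sp.diff(p, x, 2)

conditions = [
    sp.Eq(p.subs(x, xn), yn),                   # interpolation at x_n
    sp.Eq(dp.subs(x, xn), fn),                  # collocation of y' at x_n
    sp.Eq(dp.subs(x, xn + c*h), fc),            # collocation of y' at the off-step point
    sp.Eq(dp.subs(x, xn + h), f1),              # collocation of y' at x_{n+1}
    sp.Eq(d2p.subs(x, xn + h), g1),             # collocation of y'' at x_{n+1}
]
coeffs = sp.solve(conditions, a)                # coefficients in terms of y, f, g values

continuous = sp.simplify(p.subs(coeffs))        # continuous approximation, Eq. (10)-style
main_formula = sp.simplify(continuous.subs(x, xn + h))
print(sp.collect(sp.expand(main_formula), h))   # discrete formula for y_{n+1}
```

Evaluating the resulting continuous approximation at the off-step point and at x_{n+1} yields the discrete block formulas, in the same way that Equation (10) is evaluated below.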

For the first case, with one off-step point, we obtain

The value of the off-step point is determined by optimizing the local truncation error of the main method. The corresponding local truncation error is

The optimal local truncation error is obtained by setting the leading error coefficient in Equation (12) to zero. The unique solution lying strictly inside the step is

Selecting this value in Equation (13) reduces the truncation error accordingly, and the implicit second-derivative hybrid block method is given by the following system.
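The optimization step can be summarized generically (a sketch with a generic order p and off-step parameter c, not the specific coefficients of Equations (12) and (13)): the local truncation error of the main method has the form

\[
\mathcal{L}[y(x_n); h] = C_{p+1}(c)\, h^{p+1} y^{(p+1)}(x_n) + C_{p+2}(c)\, h^{p+2} y^{(p+2)}(x_n) + O\!\left(h^{p+3}\right),
\]

and the off-step parameter is chosen as the root \(c^{*}\) of \(C_{p+1}(c) = 0\) lying strictly inside the step, so that the leading term vanishes, the error reduces to \(C_{p+2}(c^{*})\, h^{p+2} y^{(p+2)}(x_n) + O(h^{p+3})\), and the order of the main method increases by one.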

The second case, with two off-step points, gives

The free off-step parameter is determined by optimizing the local truncation error of the main method, to obtain

The optimal local truncation error is obtained by setting the coefficient of the leading error term in Equation (17) to zero, which has a unique solution inside the step. The optimized value is

Using the value given in Equation (18), the truncation error reduces to

Substituting this value into Equation (16) gives the following system of equations.

The third case, with three off-step points, is obtained in a similar way. However, because the derivation is lengthy, the full intermediate equations are not included here. The local truncation error corresponding to the main method is

Here, the local truncation error is optimized by selecting the off-step parameter to be the appropriate root of the leading error coefficient. The unique solution lying strictly inside the step is

With this value, the truncation error becomes

The block method equations are given in Equation (24), together with the corresponding optimal off-step values.

3. Analysis of the Methods

In this section, we present and analyze the properties of the one-step block hybrid methods derived using the optimal off-step points, in particular the local truncation error and order, zero-stability, and linear stability. Each method is reformulated into the matrix-equation form of Equation (25), where the coefficient matrices are defined as

The corresponding vectors are defined as

3.1. Local Truncation Error

Theorem 1. The local truncation error of the hybrid block method over the integration block is defined below and is given by

for each of the grid points under consideration.

Proof 1. Given a sufficiently differentiable function y(x), the truncation error of the hybrid block method can be written in terms of a linear difference operator, Equation (28). Expanding Equation (28) in a Taylor series gives Equation (29), where the expansion is truncated at a positive integer power of h. We also note the identity in Equation (30), which extends the work of Butcher [23], who presented a related study on implicit Runge-Kutta methods. Substituting this identity into Equation (29) gives the result in Equation (27). In Equation (28), y(x) is a sufficiently differentiable function, and by expanding the terms about x_n and collecting terms in like powers of h, Equation (28) becomes a series whose coefficients C_0, C_1, C_2, … are constant vectors. The method is said to be of order p if C_0 = C_1 = ⋯ = C_p = 0 and C_{p+1} ≠ 0.
The vector C_{p+1} is the error constant (see Kashkari and Syam [24]).
The local truncation errors of the block methods, evaluated at the optimal off-step points, are presented below.
The method with one off-step point yields the error constants shown below, indicating its order.
For the method with two off-step points, a similar expansion yields the error constants shown below, indicating its order.
For three off-step points, the truncation errors are given below; the corresponding error constants indicate the order of this method.
From the above results, it can be deduced that the proposed methods are consistent, because the order is at least one for any number of off-step points (see Sunday et al. [25]). Further, for each choice of off-step points, the associated error constant confirms the order of the method.
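As an illustration of the Taylor-expansion procedure used above, the following SymPy sketch computes the error coefficients of a classical one-step second-derivative (Obrechkoff-type) scheme, y_{n+1} = y_n + (h/2)(f_{n+1} + f_n) − (h²/12)(g_{n+1} − g_n); this scheme serves only as a compact stand-in, not as one of the methods derived in this work.

```python
# Taylor-expansion check of order and error constant for a classical
# one-step second-derivative (Obrechkoff-type) scheme; the derived block
# methods are analyzed in the same way, one block equation at a time.
import sympy as sp

x, h = sp.symbols('x h')
y = sp.Function('y')

def shifted(m, k, terms=7):
    """Taylor expansion of y^(m)(x + k*h) about x, truncated after `terms` terms."""
    return sum((k * h)**j / sp.factorial(j) * y(x).diff(x, m + j) for j in range(terms))

# Linear operator of y_{n+1} = y_n + h/2 (f_{n+1} + f_n) - h^2/12 (g_{n+1} - g_n)
L = (shifted(0, 1) - shifted(0, 0)
     - sp.Rational(1, 2) * h * (shifted(1, 1) + shifted(1, 0))
     + sp.Rational(1, 12) * h**2 * (shifted(2, 1) - shifted(2, 0)))

L = sp.expand(L)
for p in range(6):
    print(p, sp.simplify(L.coeff(h, p)))   # coefficients of h^0, ..., h^5
```

The printed coefficients vanish up to h^4 and the h^5 coefficient equals y^(5)(x)/720, confirming order four with error constant 1/720 for this stand-in scheme; each block equation of the derived methods is verified in exactly the same manner.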

3.2. Zero-Stability

Zero-stability describes the behavior of the method in the limit as the step size h tends to zero, and it can be determined from Equation (25). In this limit, Equation (25) becomes

The characteristic polynomial is defined as
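Written in the usual block-method notation, with generic coefficient matrices A and B standing in for the matrices of Equation (25), the limiting recurrence and its first characteristic polynomial read

\[
A\,Y_{m} = B\,Y_{m-1}, \qquad \rho(z) = \det\!\left(zA - B\right),
\]

and zero-stability is assessed through the roots of \(\rho(z)\).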

The characteristic polynomials of the methods with one, two, and three off-step points are obtained from this definition, respectively.

A numerical method is said to be zero-stable if the roots of its characteristic polynomial have modulus at most one and every root of modulus one has multiplicity not exceeding the order of the differential equation under consideration (see Lawal, Yahaya, and Yakubu [26]).

This result shows that all the derived methods are zero-stable. Since all the methods satisfy the conditions of consistency and zero-stability, it can be inferred from Dahlquist’s equivalence theorem that they are all convergent.

3.3. Linear Stability

The stability region characterizes the behavior of the method and is determined as follows. The method in Equation (25) is applied to the Dahlquist [4] test equations y' = λy and y'' = λ²y, with Re(λ) < 0. Setting z = λh yields a recurrence governed by an amplification matrix M(z). The methods derived with one, two, and three off-step points have the following spectral radii, respectively.

The stability regions are given in Figure 1.

A numerical method is said to be A-stable if its region of absolute stability contains the entire left (negative real part) complex half-plane (see Lambert [27]). The methods with one, two, and three off-step points are all A-stable, as indicated in Figure 1.
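In practice, A-stability can be checked numerically by verifying that the spectral radius of the amplification matrix does not exceed one anywhere in the left half-plane. The sketch below illustrates this check using the scalar trapezoidal-rule amplification factor R(z) = (1 + z/2)/(1 − z/2) as a stand-in; for the derived methods, this scalar would be replaced by the spectral radius of M(z) at each grid point.

```python
# Numerical check of A-stability over a sampled region of the left half-plane.
# Illustrative: the trapezoidal rule's scalar amplification factor stands in
# for the spectral radius of the block methods' amplification matrix M(z).
import numpy as np

def amplification(z):
    """Amplification factor of the trapezoidal rule applied to y' = lam * y."""
    return (1 + z / 2) / (1 - z / 2)

re_part = np.linspace(-50.0, -1e-6, 400)          # Re(z) < 0
im_part = np.linspace(-50.0, 50.0, 400)
Z = re_part[None, :] + 1j * im_part[:, None]      # grid in the left half-plane

growth = np.abs(amplification(Z))
print("max |R(z)| over the sampled region:", growth.max())
assert growth.max() <= 1.0 + 1e-12                # consistent with A-stability
```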

4. Implementation of Hybrid Methods

In this section, the implementation details of the methods are discussed in preparation for the numerical experiments. The term in Equation (10) that approximates the second derivative is calculated using Equation (9). We first linearize nonlinear systems of IVPs as shown below.

Consider a nonlinear system of first-order IVPs of the form of Equation (37), in which the first part of the i-th equation is a nonlinear function with its own coefficient and the remaining part may or may not be nonlinear in the other unknowns; the index i runs from 1 to the number of equations in the system.

The method is applied to Equation (37) iteratively and equation by equation, with the solution of one decoupled equation used immediately when solving the next. The iterative scheme takes the form

The linearization uses a Gauss-Seidel-type sweep to decouple and solve large systems of nonlinear equations that are otherwise cumbersome to handle. This approach is adopted from the waveform relaxation technique discussed in [28–30].

Since Equation (38) is linear, it can be incorporated into the method by decomposing the right-hand side function as shown below, where

The matrix-equation form of Equation (25) then becomes Equation (39), in which one coefficient matrix is diagonal, with entries obtained from the implicit part of the decomposition, while the other matrix collects the entries obtained from the remaining part. Simplifying Equation (39) gives the following equation, where

The numerical examples are solved using the scheme in Equation (40), applied iteratively with 10 iterations for each integration block. The numerical experiments suggest that sufficient convergence occurs before the 10th iteration in all cases. Because nonlinear IVPs are linearized via partitioning, the optimized hybrid block method defined by Equation (40) is referred to as the waveform relaxation optimized second-derivative hybrid block method. The method is implemented efficiently as a one-step block numerical integrator for the solution of Equation (1), simultaneously obtaining the approximations at all the intra-block points over each subinterval [x_n, x_{n+1}].
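The following sketch illustrates the Gauss-Seidel waveform relaxation strategy on a single step, using backward Euler as a simple stand-in for the second-derivative hybrid block formulas; the 2 x 2 stiff test system, the step size, and the fixed 10 sweeps are illustrative assumptions.

```python
# Gauss-Seidel waveform relaxation over one integration block [x_n, x_n + h].
# Illustrative sketch: a 2x2 linear stiff system and backward Euler replace the
# derived second-derivative hybrid block formulas; 10 sweeps per block are used,
# matching the iteration count described above.
import numpy as np

def relax_block(y_n, h, sweeps=10):
    """Advance y' = A y one step of size h by Gauss-Seidel waveform relaxation."""
    A = np.array([[-1000.0, 1.0],
                  [1.0, -1.0]])             # assumed stiff test system
    y = y_n.copy()                          # current iterate at x_{n+1}
    for _ in range(sweeps):
        # Equation 1: its own term implicit, the other component from the latest iterate.
        y[0] = (y_n[0] + h * A[0, 1] * y[1]) / (1.0 - h * A[0, 0])
        # Equation 2: immediately reuses the freshly updated y[0] (Gauss-Seidel sweep).
        y[1] = (y_n[1] + h * A[1, 0] * y[0]) / (1.0 - h * A[1, 1])
    return y

h, y = 0.01, np.array([1.0, 1.0])
for n in range(100):                        # integrate over [0, 1]
    y = relax_block(y, h)
print("approximate solution at x = 1:", y)
```

The same sweep structure applies when the block formulas of Equation (40) replace the backward Euler update inside each block.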

The rate of convergence is computed using Equation (41) with two different step sizes, as given in Tables 2–4; these rates are reported for Examples 2 and 3. The formula for the rate of convergence is given as
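A conventional form of such an estimate, assumed here because it is consistent with the description above, uses the maximum errors obtained with two step sizes \(h_1\) and \(h_2\):

\[
\mathrm{ROC} \approx \frac{\log\bigl(\mathrm{MaxErr}(h_1)/\mathrm{MaxErr}(h_2)\bigr)}{\log\bigl(h_1/h_2\bigr)},
\]

which reduces to \(\log_2\bigl(\mathrm{MaxErr}(h)/\mathrm{MaxErr}(h/2)\bigr)\) when the step size is halved.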

5. Results and Discussion

In this section, we present numerical results obtained by applying the methods to selected problems that have previously been used for numerical experimentation in published studies. To illustrate the improvement in accuracy achieved by our methods, we compare the results with those generated by selected published methods.

The new methods derived with one, two, and three off-step points are denoted NMm1, NMm2, and NMm3, respectively.

Example 1. Consider the following system of nonlinear stiff IVPs, whose exact solution components are given below.

It is worth noting that this problem was previously solved using generalized second derivative linear multistep methods by [2] among others. This serves as a point of comparison for the accuracy.

Example 2. Consider the linear stiff system solved by [2, 13], with the exact solution given below.

For comparison, the numerical results of the present methods are tabulated in Tables 2 and 3 together with those of references [2, 31–33].

Example 3. Consider the linear stiff system solved by [5, 12], whose exact solution is given below. For comparison, the numerical results for NMm1, NMm2, and NMm3 are presented in Table 4 alongside the results from [5].

Example 4. Consider an electric circuit problem with two capacitors and two resistors in series. The voltages across the capacitors, V1 and V2, are modeled by the following system of differential equations, in which R1 and R2 are resistances, C1 and C2 are capacitances, and V1 and V2 are the voltages. The following values are given:
R1 = 1 kOhm is the resistance of the first resistor, R2 = 2 kOhm is the resistance of the second resistor, C1 = 100 μF is the capacitance of the first capacitor, C2 = 200 μF is the capacitance of the second capacitor, and the initial voltage is as specified. The problem is solved with the indicated step size.

Example 5. Consider the Van der Pol oscillator [32], where the stiffness parameter takes the specified value and the problem is solved with the indicated step size.

Example 6. A chemical reaction involves two species that react with one another, where one reaction occurs much faster than the other. The problem is modeled by the following system, in which the dependent variables are the concentrations of the two species and the parameters are the reaction rates. The reaction rates are chosen to illustrate the phenomenon of stiffness.

Example 7. Consider a radioactive decay chain involving two isotopes, where each isotope decays into the next at a different rate. The problem is modeled as a system of first-order ODEs in which the dependent variables are the isotope concentrations and the parameters are the decay rates. The decay rates are chosen to illustrate stiffness.

Table 5 presents a comparison of the maximum errors (MaxErr) of the solution components. The results in Table 5 show a significant difference between the new methods and the compared methods, indicating that the new methods outperform them. Figures 2(a) and 2(b) show the rate of convergence of the methods, while Figures 3(a) and 3(b) show how the error trends progress towards the endpoint for Example 1.

For NMm2, Figures 4(a) and 4(b) analyze the convergence at each iteration level, and Figures 4(c) and 4(d) display the error trends for Example 1.

For NMm3, Figures 5(a) and 5(b) show the convergence at each iteration, and Figures 5(c) and 5(d) analyze the errors at different nodes for Example 1.

Tables 2 and 3 present the comparison of maximum errors and convergence rates using different step sizes. The results in Tables 2 and 3 clearly show that NMm1, NMm2, and NMm3 have better accuracy than the Generalized Adams Methods (GAMs) of [32] and the variable-step boundary value methods based on the reverse Adams method of [33]. The calculated rates are consistent with the orders of the methods.

Furthermore, Figures 6(a)–6(c) illustrate the error trends of the solution components of Example 2 with NMm3. These figures were obtained with a fixed step size.

The results in Table 4 show the performance of our new methods against the compared method. The new methods exhibit higher accuracy than the three-step hybrid block second-derivative backward differentiation formula (HBSDBDF) derived with three off-step points by [5]. Notably, the NMm3 gives a better convergence rate than HBSDBDF. This rate aligns with the order of the derived schemes.

Figures 7(a) and 7(b) show the solution profile of Example 4, solved with NMm1 on the interval [0, 10]. Similarly, Figures 8(a) and 8(b) show the solution profile of Example 4 solved on the interval [0, 20] with NMm2.

The results displayed in Figures 9(a)–9(d) show the solution profile of the Van der Pol oscillatory problem. The results in Figures 9(a) and 9(b) were obtained with NMm2 on the interval [0, 40], while those in Figures 9(c) and 9(d) were obtained on the interval [0, 60] with NMm3.

Figures 10(a) and 10(b) present the solution profile of the chemical reaction problem of Example 6, solved with NMm1 on the interval [0, 10].

The radioactive decay problem in Example 7 was solved with NMm2 on the interval [0, 5]. The solution profile is shown in Figures 11(a) and 11(b).

6. Conclusion

This study provided a systematic approach for developing one-step optimized second-derivative hybrid block methods for solving first-order IVPs. These methods are found to be zero-stable, consistent, convergent, and A-stable, and their order of accuracy increases with the number of off-step points. Importantly, all three methods were implemented successfully without requiring separate starting values.

The numerical experiments validated the superior accuracy of the methods compared to existing approaches in the literature. Furthermore, the computed rates of convergence are consistent with the orders of the methods. The results underscore the effectiveness and enhanced performance of the new methods, suggesting their suitability for solving stiff first-order ODEs and for giving accurate solutions even with larger step sizes. Further research may build on this study.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Funding

This research was funded by the University of KwaZulu-Natal, Pietermaritzburg, South Africa. Open access funding was enabled and organized by SANLiC Gold.

Acknowledgments

Sincere thanks are due to the Tertiary Education Trust Fund of Nigeria, Ibrahim Badamasi Babangida University Lapai, Nigeria, and colleagues for their valuable suggestions.