Abstract

In industrial applications, control of the Stewart platform is especially important. Because of the Stewart platform's inherent delays and highly nonlinear behavior, a novel nonlinear model predictive controller (NMPC) and a new chaotic neural network model (CNNM) are proposed. Here, a novel NMPC based on hyper-chaotic diagonal recurrent neural networks (HCDRNN-NMPC) is proposed, in which the HCDRNN estimates the system's future outputs. To improve the convergence of the HCDRNN parameters and thus the quality of the system model, the extent of chaos is adjusted using a logistic map in the hidden layer. The proposed scheme uses an improved gradient method to solve the optimization problem in the NMPC. The proposed controller is applied to a six-degree-of-freedom Stewart parallel robot with hard nonlinearity and input constraints, in the presence of uncertainties including external disturbance. High prediction performance, parameter convergence, and local-minima avoidance of the neural network are guaranteed. Stability and high tracking performance are the most significant advantages of the proposed scheme.

1. Introduction

The Stewart platform is a six-degree-of-freedom parallel robot that was first introduced by Stewart in 1965 and has potential uses in industrial contexts due to its good dynamic performance, high precision, and high rigidity. Control of the Stewart platform is quite challenging due to the nonlinear characteristics of its dynamic parameters and time-varying delays. The Stewart platform has more physical constraints than serial manipulators; therefore, solving its kinematics and dynamics problems is more difficult, and developing an accurate model of the Stewart platform has always been a concern for researchers in this field [1].

A great deal of research has been done on using neural networks to model nonlinear systems [2–4]. In the study of Chen et al. [5], an RBF-neural-network-based adaptive robust control is developed to control nonlinear teleoperation manipulators. There, the RBF neural network is used to estimate the nonlinearities and model uncertainty in the system dynamics with external disturbances. To handle parameter variations, the adaptive law adapts the parameters of the RBF neural network online, while a nonlinear robust term is developed to deal with estimation errors. In the study by Lu and Liu [6], an NN approximator is employed to estimate uncertain parametric and unknown functions in a robotic system, which is subsequently used to construct an adaptive NN controller for uncertain n-joint robotic systems with time-varying state constraints. As outlined in [7], an adaptive global sliding mode controller with two hidden layers is developed, in which a new RNN with two hidden layers estimates the system nonlinearities. An adaptive sliding mode control scheme based on RBFNN estimation of environmental parameters on the slave side is proposed in the study by Chen et al. [8] for a multilateral telerobotic system with master-slave manipulators, where the environment force is modeled in a general form.

Changes in the structure of the neural network during training, as well as the use of chaos theory in the neural network, have been considered to cover the behavioral diversity of nonlinear systems. In the studies in [9, 10], the number of neurons in the hidden layer is changed online. In the study by Han et al. [11], a penalty-reward method is used to optimize the NN structure. Aihara et al. [12] presented a chaotic NN model. In the studies of Li et al. and Farjami et al. [13, 14], the Hopfield NN is introduced as a chaotic RNN in which a chaotic dynamic is established temporarily for searching. Reference [15] introduces a context layer that uses chaotic mappings to produce chaotic behavior in the NN throughout the training phase in order to prevent local minima. Reference [16] discusses the design of a chaotic NN using chaotic neurons that show chaotic behavior in some of their activity regions. In this approach, the behavior of the neurons and of the network changes according to changes in the bifurcation parameters of the neurons, an idea mostly inspired by biological studies. A logistic map is utilized as an activation function in the study by Taherkhani et al. [17], which iteratively generates chaotic behavior in the nodes.

In the study of Dongsu and Hongbin [18], an adaptive sliding controller is used to identify fixed unknown parameters, followed by compensation of external disturbances. In the study of Ghorbani and Sani [19], a fuzzy NMPC is introduced to handle uncertainties and external disturbances. In the study of Jin et al. [20], NN-based control approaches for different parallel robots are reviewed; the applicability of RNNs, feedforward NNs, or both for controlling parallel robots is discussed in detail, comparing them in terms of control efficiency and computational complexity.

In this paper, due to the inherent delays of the Stewart platform and the need to design the controller based on future changes, special attention is paid to model predictive control. To predict the system behavior over a predefined prediction horizon, conventional MPC approaches require a precise linear model of the controlled system. The Stewart platform is inherently nonlinear, and linear models are largely inaccurate for modeling nonlinear dynamical systems. These considerations motivate the use of nonlinear models in MPC, leading to NMPC.

The most significant features of NMPCs include the following: (I) use of a nonlinear model, (II) consideration of state and input constraints, (III) online minimization of appointed performance criteria, (IV) the need to solve an online optimal control problem, and (V) the need to measure or estimate the system states to provide the prediction. Among the universal nonlinear models used for predicting the future behavior of a system, neural networks are significantly attractive [21, 22].

The effectiveness of NNs in nonlinear system identification has increased the popularity of NN-based predictive controllers. Nikdel [23] presented an NN-based MPC to control a shape-memory-alloy-based manipulator. For nonlinear system modeling and predictive control, a multi-input multi-output radial basis function neural network (RBFNN) was employed in the study of Peng et al. [24]. Recurrent neural networks (RNNs) perform well in modeling dynamical systems, even in noisy situations, because they naturally incorporate dynamic aspects by storing the system's dynamic response through tapped delays. An RNN is utilized in the NMPC of Pan and Wang [25], and the results show that the approach converges quickly. In the study of Seyab and Cao [26], a continuous-time RNN is utilized for NMPC, demonstrating the method's appropriate performance under various operational settings.

In this paper, we continue this line of research using a hierarchical structure of chaotic RNNs, applied to the NMPC of a complex parallel robot. This paper's contributions and significant innovations are as follows: (I) a new NMPC based on hierarchical HCDRNNs is suggested to model and regulate typical nonlinear systems with complex dynamics. (II) To overcome the modeling issues of complex nonlinear systems with hard nonlinearities, in the proposed controller the future output of the controlled system is approximated using a proposed novel hierarchical HCDRNN. Note that the equations of motion of such systems are very difficult to solve by mathematical methods and bring forth flaws such as inaccuracy and computational expense. (III) The weight-updating laws are modified based on the proposed HCDRNN scheme, considering the rules introduced in the study of Wang et al. [15]. (IV) The novel hierarchical structure, together with the use of chaos in the weight-updating rules, significantly reduces the cumulative error. (V) The extent of chaos is regulated based on the modeling error in the proposed HCDRNN, in order to increase the accuracy of modeling and prediction. (VI) The control and prediction horizons are specified based on closed-loop control features. (VII) Weight convergence of the proposed HCDRNN is demonstrated and system stability is assured in terms of Lyapunov's second law, taking into account input/output limitations. Furthermore, the proposed controller's performance in the presence of external disturbance is evaluated.

The remainder of this work is structured as follows: Section 2 describes the suggested control strategy in detail, Section 3 discusses the simulation results to validate the efficiency of the proposed method, and Section 4 discusses the final conclusions.

2. The Proposed Control Strategy

The MPC is made up of three major components: the predictive model, the cost function, and the optimization method. The predictive model forecasts the system’s future behavior. Based on the optimization of the cost function and the projected behavior of the system, MPC applies an appropriate control input to the process. This paper uses a novel HCDRNN as the predictive model. Moreover, to optimize the cost function, it uses a type of improved gradient method which utilizes the data predicted by the proposed HCDRNN.
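For readers who prefer a structural view, the following minimal sketch (illustrative Python, not the authors' implementation) shows how these three components can be composed; all class and method names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class MPCController:
    """Minimal MPC skeleton: a predictive model, a cost function, and an
    optimizer returning the next control sequence (names are illustrative)."""
    predict: Callable[[np.ndarray], np.ndarray]                    # u_seq -> predicted outputs over the horizon
    cost: Callable[[np.ndarray, np.ndarray, np.ndarray], float]    # (r_seq, y_pred, du) -> scalar cost
    optimize: Callable[["MPCController", np.ndarray, np.ndarray], np.ndarray]  # -> optimal u_seq

    def step(self, r_seq: np.ndarray, u_prev: np.ndarray) -> float:
        u_seq = self.optimize(self, r_seq, u_prev)   # minimize the cost over the horizon
        return float(u_seq[0])                       # apply only the first control move
```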

Figure 1 shows a block diagram of the designed control system, in which represents the desired trajectory for the origin of the moving plane's coordinate frame, and are the outputs and inputs of the Stewart platform, and shows the output predicted by the NN model. Finally, the optimization block extracts the control signal, , by minimizing the cost function using the improved gradient descent method.

2.1. Stewart Platform

Figure 2 shows the Stewart platform. All parameters and variables are the same as what Tsai used in [27].

The dynamic model of the Stewart platform is introduced in equation (1), which is obtained based on the virtual-work principle [27], where are the manipulator Jacobian matrices, is the resultant of the applied and inertia wrenches exerted at the center of mass of the moving platform, and are the vectors of input torques and forces applied to the center of mass of the moving plate by the prismatic joints of the robot. For more details about the robot and its mathematical model, the interested reader is referred to [27].
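Because the symbols of equation (1) did not survive typesetting, the following generic virtual-work relation is included only as orientation; the notation is illustrative, and the reader should consult Tsai [27] for the exact form.

```latex
% Generic virtual-work dynamics of a 6-DOF parallel manipulator
% (illustrative notation; see Tsai [27] for the exact equation (1)).
J_p^{\top}\,\boldsymbol{\tau}
\;+\; \hat{\mathbf{F}}_p
\;+\; \sum_{i=1}^{6} J_i^{\top}\,\hat{\mathbf{F}}_i \;=\; \mathbf{0}
```

Here $\boldsymbol{\tau}$ collects the actuator forces, $\hat{\mathbf{F}}_p$ is the resultant applied-plus-inertia wrench at the mass center of the moving platform, and $J_p$, $J_i$ denote platform and limb Jacobians.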

2.2. The Proposed Hyper Chaotic Diagonal Recurrent Neural Network

In general, the structure of NNs may be categorized into feedforward or recurrent types. Possessing attractor dynamics and data-storage capability, RNNs are more appropriate for modeling dynamical systems than feedforward networks [28]. Reference [15] introduces the essential concepts of the chaotic diagonal recurrent neural network (CDRNN). This study introduces an HCDRNN, the structure of which is depicted in Figure 3.

The proposed HCDRNN is made up of four layers: input, context, hidden, and output. The hidden layer outputs with -step delays are routed into the context layer through a chaotic logistic map. The following equations describe the dynamics of the HCDRNN, where and show the inputs and output of the HCDRNN, and represents the hidden layer's output. A set of is defined as the vector of previous steps' values of . shows a symmetric sigmoid activation function, and represents the chaotic logistic map, with as a positive random number with normal distribution. The input, context, and output weight matrices are represented as , , and , respectively, and is the chaos gain coefficient. The degree of chaos within the HCDRNN can be adjusted through the parameter , which ranges from 0 for a simple DRNN to close to 4 for an HCDRNN. This allows the level of chaos within the NN to be regulated by altering the parameter in such a manner that the reduction in training error leads to a progressive decrease in the extent of chaos until it reaches stability. The value of the parameter can be altered as follows. As the change is exponential, the NN converges rapidly, where is the samples' absolute training prediction error. The prediction error, , represents the difference between the system's actual output, , and the output of the HCDRNN, .

and represent the maximum and minimum threshold of the parameter , respectively. is the annealing parameter, and is the prediction error threshold. To minimize the error function, , the weight update laws for the output, hidden, and context layers are based on the robust adaptive dead zone learning algorithm reported in [30].
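As a concrete illustration of the chaotic context layer and the error-driven adjustment of the chaos gain, consider the sketch below. The exponential schedule, the mapping of the hidden output into the unit interval, and all symbol names are assumptions, not the paper's exact update law.

```python
import numpy as np

def logistic_map(x, mu):
    """Chaotic logistic map feeding the context layer; chaotic for mu near 4."""
    return mu * x * (1.0 - x)

def anneal_chaos_gain(abs_err, mu_min=0.0, mu_max=3.9, eta=0.01):
    """Assumed exponential schedule: a large training error keeps the chaos
    strong, while a small error lets the gain decay toward a plain DRNN."""
    return mu_min + (mu_max - mu_min) * (1.0 - np.exp(-abs_err / eta))

def hcdrnn_step(u, h_prev, W_in, w_ctx, W_out, mu):
    """One forward step of a diagonal recurrent cell with a chaotic context:
    the delayed hidden output is passed through the logistic map and fed back
    element-wise (diagonal recurrence). Names are illustrative."""
    context = logistic_map((h_prev + 1.0) / 2.0, mu)   # map tanh output into [0, 1] first (assumed)
    h = np.tanh(W_in @ u + w_ctx * context)            # symmetric sigmoid activation
    y = W_out @ h
    return y, h
```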

Accordingly, the weight-updating laws are modified here for the proposed structure of the HCDRNN as follows [29]:

2.2.1. Output Layer

If then and do not change, otherwise:

2.2.2. Hidden Layer

If then and do not change, otherwise: where is the first derivative of the activation function in the hidden layer and , and .

2.2.3. Context Layer

If then and do not change, otherwise:

In these equations, are the robust adaptive dead zones for the output, hidden, and context layers, respectively.
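The common structure of these three update rules can be sketched as follows; this is an illustrative simplification, and the exact adaptation of the dead zone follows the algorithm of [30], which is not reproduced here.

```python
import numpy as np

def dead_zone_update(W, grad, err, dead_zone, lr=0.05, dz_rate=0.01):
    """Robust adaptive dead-zone learning step (sketch of the rule structure
    in Sections 2.2.1-2.2.3): weights change only when the prediction error
    exceeds the adaptive dead zone; otherwise both are frozen."""
    if abs(err) <= dead_zone:
        return W, dead_zone                                    # inside the dead zone: no update
    W_new = W - lr * err * np.asarray(grad)                    # gradient-style correction outside the zone
    dz_new = dead_zone + dz_rate * (abs(err) - dead_zone)      # assumed slow dead-zone adaptation
    return W_new, dz_new
```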

Remark 1. Theorems A.1, A.2, and A.3 in the appendix prove the convergence of neural network weights.

Remark 2. As illustrated below [11], it is expected that a multi-input single-output nonlinear autoregressive exogenous (NARX) model may represent the controlled nonlinear system using the delayed system inputs and delayed system outputs. In this equation, is an unknown function.
Based on Remarks 1 and 2, an array of HCDRNNs is used to forecast the system's behavior over a -step-ahead prediction horizon after the training and weight-updating operations. The structure of this HCDRNN array is depicted in Figure 4.

Remark 3. Each HCDRNN in the array, as shown in Figure 4, is trained independently, and its weight matrices differ from those of the other HCDRNNs. As a result, the formulation in Sections 2.2.1 to 2.2.3 should be changed based on the input-output permutation for each element of the hierarchy. Remark 1 is, however, applied to all of the HCDRNNs in the array.
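A sketch of how such an array produces a multi-step prediction is given below; the feature layout and the predict interface are assumptions.

```python
import numpy as np

def predict_horizon(models, y_hist, u_hist, u_future):
    """k-step-ahead prediction with the HCDRNN array (sketch): the j-th
    independently trained network maps the measured history plus the first j
    future inputs directly to the output j steps ahead, so earlier predictions
    are never fed back and errors do not accumulate across the horizon.
    `models` is any list of objects exposing predict(features) (assumed)."""
    preds = []
    for j, model in enumerate(models, start=1):
        features = np.concatenate([np.ravel(y_hist), np.ravel(u_hist),
                                   np.ravel(u_future[:j])])
        preds.append(model.predict(features))
    return np.array(preds)
```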

2.3. The Proposed HCDRNN-NMPC

A finite-horizon NMPC cost function is used as indicated in reference [11], where is the reference signal, is the system output, and is the predicted output over the prediction horizon. denotes the control signal variations over the upcoming control horizon. and are weighting parameters determining the significance of the tracking error versus the control-signal variation in the cost function, . is the prediction horizon and is the control horizon. The optimization is subject to the following constraints [11]:

The control signal, , based on the improved gradient method is given below [11, 31], where represents the learning rate of the control input sequence and represents the Jacobian matrix, , which is computed as a matrix with dimension .

The Jacobian matrix, , can be computed based on the chain rule, in which and can be computed recurrently, knowing that if then . This means that the computations should be carried out from to . Moreover, considering the structure of , can be calculated recurrently based on equation (26).
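The resulting control-sequence update can be sketched as follows. The paper computes the Jacobian analytically through the HCDRNN via the chain rule; the sketch below approximates it by finite differences for brevity, assumes a generic quadratic tracking cost, and omits the control-move penalty term.

```python
import numpy as np

def improved_gradient_step(u_seq, r_seq, predict, lr=0.05, eps=1e-4,
                           u_min=-1.0, u_max=1.0):
    """One gradient refinement of the control sequence (illustrative sketch).
    `predict` maps a control sequence to the predicted outputs over the
    horizon; learning rate, perturbation, and bounds are assumed values."""
    y_pred = predict(u_seq)
    err = r_seq - y_pred
    jac = np.zeros((y_pred.size, u_seq.size))
    for i in range(u_seq.size):                       # finite-difference Jacobian dy/du
        u_pert = u_seq.copy()
        u_pert[i] += eps
        jac[:, i] = (predict(u_pert) - y_pred) / eps
    u_new = u_seq + lr * (jac.T @ err)                # descend the tracking-error part of the cost
    return np.clip(u_new, u_min, u_max)               # respect the input constraints
```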

Algorithm 1 summarizes the proposed HCDRNN-NMPC scheme [29].

Step 1. Determine and , such that .
Step 2. Get , , and in each control step, such that:
(i) is the vector of desired values for the next steps.
(ii) is the last optimal sequence of the predicted control signal.
(iii) is the delayed input-output vector of the nonlinear system.
Step 3. Predict the outputs of the system for the next steps by the proposed HCDRNN.
Step 4. Calculate by equation (17).
Step 5. Compute and by equations (15) and (16), respectively.
Step 6. Apply as the first element of vector to the nonlinear system, and go back to Step 2 for the next sample time.

In Steps 4 and 5 of the proposed algorithm, as the number of system inputs increases, the dimensions of the Jacobian matrix grow and the estimation error increases due to the discretizations performed in calculating the derivatives. In this paper, choosing an appropriate sampling time is used to reduce this estimation error.
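A compact sketch of the receding-horizon loop of Algorithm 1 is given below; all callables are assumed interfaces rather than the authors' implementation.

```python
import numpy as np

def hcdrnn_nmpc_loop(plant_step, predict_fn, grad_step, r_traj, u_init, n_steps):
    """Receding-horizon loop following Algorithm 1 (sketch): at each sample,
    predict the horizon outputs with the HCDRNN array, refine the control
    sequence with the improved gradient step, apply its first element to the
    plant, then shift the sequence as a warm start for the next sample."""
    u_seq = np.array(u_init, dtype=float)
    y_log = []
    for k in range(n_steps):
        r_seq = r_traj[k]                               # desired values over the horizon (Step 2)
        u_seq = grad_step(u_seq, r_seq, predict_fn)     # Steps 3-5: predict and optimize
        y_log.append(plant_step(u_seq[0]))              # Step 6: apply the first control move
        u_seq = np.roll(u_seq, -1)                      # warm start for the next sample time
        u_seq[-1] = u_seq[-2]
    return np.array(y_log)
```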

2.3.1. Stability Analysis for HCDRNN-NMPC

The stability of the HCDRNN-NMPC is demonstrated by considering the convergence of the model, which is proved in Remark 1 and the Appendix, and the fact that the neural network training is done offline.

Theorem 1. Consider the constrained finite-horizon optimal control presented by (17) and (18). If the convergence of the neural network weights is proven and the predictive control law is as given in equations (19) and (20), then, due to the bounded input and output amplitudes and the negative semidefiniteness of the cost-function difference, Lyapunov's second law ensures the asymptotic stability of the proposed controller.

Proof. The constrained finite-horizon optimal control given in equation (13) can be rewritten as in (19) by rewriting the cost function along the control horizon, where is the optimal control sequence obtained at time using the optimization algorithm. If is the suboptimal control sequence extracted from and considered as , the suboptimal cost function is defined as follows. Using the difference of and , and assuming that , equation (21) is written as follows. Therefore, if is the optimal solution of the optimization problem at time using the control law described in equation (16), it outperforms the suboptimal , and its cost function is smaller according to equation (22). Hence, the proof is complete.
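In generic notation (the symbols below are illustrative and need not match equations (19)-(22) exactly), the argument is the standard shifted-sequence inequality chain:

```latex
% Shifted-sequence argument (generic notation): U*(k) is the optimal control
% sequence at time k and U~(k+1) the suboptimal sequence obtained by shifting it.
\begin{aligned}
\tilde{J}\big(\tilde{U}(k+1)\big) &\le J^{*}(k)
  - \big\| r(k) - y(k) \big\|_{Q}^{2} - \big\| \Delta u(k) \big\|_{R}^{2},\\
J^{*}(k+1) &\le \tilde{J}\big(\tilde{U}(k+1)\big)
  \;\Longrightarrow\; J^{*}(k+1) - J^{*}(k) \le 0 .
\end{aligned}
```

Thus the optimal cost is non-increasing along the closed loop and, together with the bounded inputs and outputs, serves as the Lyapunov function invoked in Theorem 1.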

3. Simulation

To control the Stewart platform, the HCDRNN-NMPC is used such that the upper moving plane of the platform tracks the desired trajectory. The simulations have been carried out in MATLAB (2015 version). To evaluate the efficiency of the control method against external disturbances, the effect of a disturbance applied to the force on one of the links of the Stewart platform has been investigated.

3.1. Neural Network-Based Model

To predict the behavior of the Stewart platform, input-output data of the system under different operating configurations are required. To generate the training data, the inverse dynamics of the Stewart platform are solved for several random desired trajectories, based on the algorithm presented in [27] and the parameters introduced in [32]. The applied sampling time is .

3.1.1. Training

The general structure of the HCDRNN is designed in such a way that its input vector, , includes the previous position of the moving plane, , and the forces exerted on each link, , as in equation (23), and its output vector, , includes the position of the moving plane, as in equation (24).

The number of elements of determines the number of input-layer neurons. Accordingly, thirteen input nodes have been considered for the six links. As the network's output comprises three positions and three orientations, a separate network should be considered for each output; therefore, there are six MISO networks in our case. A supervised learning scheme is used to train these networks with respect to the inputs and outputs. The data were divided into two sets: 70% for training and 30% for testing. At the beginning of the training of the HCDRNNs, the weight matrices are initialized randomly. The tangent sigmoid is selected as the activation function, and the input-output data are normalized. The values of the NN parameters are defined as , , , , , , and ; as well as are randomly initialized.
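The data preparation described above can be sketched as follows; the scaling to [-1, 1] and the random split are assumptions consistent with the tangent sigmoid activations.

```python
import numpy as np

def prepare_datasets(X, Y, train_frac=0.7, seed=0):
    """Normalize input-output data to [-1, 1] and split it 70/30 into training
    and test sets, as in Section 3.1.1 (sketch; scaling choice assumed)."""
    scale = lambda A: 2.0 * (A - A.min(0)) / (A.max(0) - A.min(0) + 1e-12) - 1.0
    Xn, Yn = scale(np.asarray(X, float)), scale(np.asarray(Y, float))
    idx = np.random.default_rng(seed).permutation(len(Xn))
    n_tr = int(train_frac * len(Xn))
    tr, te = idx[:n_tr], idx[n_tr:]
    return Xn[tr], Yn[tr], Xn[te], Yn[te]
```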

The neural network training is done offline, but during the training the coefficient is adjusted online in such a way that the behavior of the Stewart platform is covered by creating chaos in the neural network structure; as the training error decreases, its value changes such that the chaos in the neural network is reduced. Figure 5 shows the chaotic property of the HCDRNN.

The impact of the number of hidden layer neurons on the approximation performance is studied. Table 1 reports the results of this study for 7 to 43 neurons, where their performances are compared in terms of training time and MSE.

As shown in Table 1, the training time increases with the number of neurons in the hidden layer. Considering that the MSE is already acceptable with 7 neurons in the hidden layer, using 7 hidden neurons is the more appropriate choice for predicting the Stewart platform.

For a sinusoidal trajectory, the results of one-step-ahead, two-step-ahead, and three-step-ahead system behavior predictions are investigated. Table 2 reports the MSE of the prediction error.

Table 2 indicates a reliable prediction by the HCDRNN without any accumulated error. As a significant conclusion, it is shown that the use of the chaotic context layer, together with the different weight matrices trained for each step in the proposed hierarchical structure, overcomes error accumulation in n-step-ahead predictions.

3.2. The Results of HCDRNN-NMPC

In this paper, the values of the parameters are considered as follows: , , , , and . The performance of the NMPC is compared with that of the MPC, both being evaluated by the integral absolute error (IAE), where denotes the total number of samples. Some studies use other metrics such as the mean square error (MSE) and/or the integral square error (ISE). Each of these metrics has drawbacks that led us to use the IAE instead: metrics based on squared errors magnify errors greater than one and shrink errors smaller than one, which is not appropriate for robotic motion errors. The input and output signals are bounded in the intervals mentioned in equation (26).
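For reference, a discrete-time sketch of the IAE used here is given below; unlike the MSE or ISE, the absolute value weights all error magnitudes linearly.

```python
import numpy as np

def iae(y_ref, y_out):
    """Integral (sum) of the absolute tracking error over all samples,
    the performance metric used in Section 3.2 (discrete-time sketch)."""
    return float(np.sum(np.abs(np.asarray(y_ref) - np.asarray(y_out))))
```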

3.2.1. Sample Trajectories

The performance of the controller in tracking three sample paths is demonstrated in items (1), (2), and (3) below.
(1) A sample trajectory has been designed, which is intended to be tracked by the controlled Stewart platform. The control signal applied to link 1 is calculated, assuming that the forces exerted on the other links remain fixed, where and . Considering the desired trajectory for the robot's movement and the actual trajectory on the axis, the tracking error varies within the range of for all three axes, which is negligible. Figure 6 shows the three-dimensional path of the top plane of the Stewart platform. As seen in Figure 6, the NMPC has extracted the control signal in a way that the Stewart platform's output tracks the reference signal along the three axes well. Moreover, Figure 7 shows the force exerted on link 1. As shown in Figure 7, the control signal applied to link 1 satisfies the limit required for the forces exerted on each link.
(2) The second sample trajectory sets out to track the following two-frequency trajectory. Considering the desired trajectory for the robot's movement and the actual trajectory on the axis, the tracking error range on the axis is reported in Table 3. From Table 3, it can be concluded that the tracking error varies within the range of for all three axes, which is negligible. Figure 8 shows the three-dimensional path of the top plane of the Stewart platform, and Figure 9 shows the force exerted on link 1. As seen in Figure 9, the NMPC has extracted the control signal in a way that the Stewart platform's output tracks the reference signal along the three axes well, and the control signal applied to link 1 satisfies the limit required for the forces exerted on each link, assuming that the positions of the other links remain unchanged.
(3) In the third sample path, which is similar to the path presented in reference [33], to evaluate the performance of the proposed controller in tracking paths with rapid changes, the following two-level step path has been examined.

Considering the desired trajectory for the robot's movement and the actual trajectory on the axis, the tracking error range on the axis is reported in Table 4. The three-dimensional path traveled by the Stewart platform is shown in Figure 10. The force exerted on link 1 is shown in Figure 11.

As shown in Figures 10 and 11, due to the intensive changes in the desired path, the controller made a control effort to extract a control signal that reaches the desired path, and after the desired path was reached, the control signal did not change. The transient phase of the response is well observed for this path and, as reported in Table 4, the tracking error in tracking the reference signal along the x, y, and z axes varies within [−5.1, 5.5], which is acceptable considering the severe changes of the reference, while the control signal applied to link 1 satisfies the applied-force constraint.

3.3. External Disturbance Rejection

The effect of external disturbance is assessed here in the form of a pulse signal with amplitude , applied to another link's force during 1 to 1.4 seconds. The performance of the proposed control is shown in Figure 12.

Figure 13 demonstrates the tracking error and the control signal in the presence of the external disturbance.

When the disturbance is applied, the output changes and a control effort is made to restore tracking. Advantages of the proposed method include the low number of oscillations during disturbance rejection, overshoot and undershoot smaller than the initial disturbance magnitude (resulting in a more uniform output), and, as a significant improvement, a smaller and smoother control signal.

3.4. Simulation Results Analysis

The nonlinear model predictive controller requires a high computational effort to extract the control signal, but thanks to the nonlinear model of the system, it can achieve the desired control performance with minimum error. Because the Stewart platform has unknown dynamics, the NN was used to model it. Chaos theory was used in the NN to reduce and speed up the control calculations, which accelerates the learning dynamics and thus mitigates the slowness of predictive control. Furthermore, entrapment in local minima is avoided by employing chaos in the neural network and increasing the order of chaos through additional chaotic functions in the hidden layer, resulting in hyper-chaos in the proposed neural network. Table 5 compares the prediction performance of the DRNN and the proposed HCDRNN.

Table 6 compares the performance of the proposed control with the proportional-integral-derivative (PID) control [34], the sliding mode control [18], the fuzzy NMPC [19], and the DRNN-NMPC. The comparison results are reported in terms of the IAE.

The proposed method yields a smaller IAE than the other methods.

PID controllers need large proportional, integral, and derivative gains, which makes the control signal highly sensitive to external disturbance, so that the control signal rises to a large value even at the lowest level of disturbance. However, the control inputs are bounded for practical reasons; thus, the control signal computed by the PID controller would not be applicable in practice.

As demonstrated in Figure 13(a), when the external disturbance is applied to the robot, tracking error has a small value in the range of , which is proof of the proposed method’s high performance.

4. Conclusion

This paper proposed a novel hierarchical HCDRNN-NMPC for modeling and control of complex nonlinear dynamical systems. Numerical simulations on the control of a Stewart platform were carried out to demonstrate the performance of the proposed strategy in tracking and external disturbance rejection. One of the most essential aspects of the suggested method is its hierarchical HCDRNN's ability to accurately estimate the system's output via a forward-moving window. The hierarchical structure enables the proposed mechanism to precisely adjust each HCDRNN for predicting the outputs of the system for only one specified sample ahead. This enhances the ability of the predictive model to adapt to the variations of complex dynamical systems. This paper provides the adaptive weight-update rules for the proposed v-step-delayed HCDRNN. Moreover, an enhanced gradient optimization method is used to determine the sequence of the control signal. The results of the provided simulations and comparisons indicate the superior performance of the proposed control system in tracking and in removing the effect of external disturbances. In future research, the effects of merging HCDRNNs instead of using the hierarchical structure will be investigated, which will lead to a deep HCDRNN structure. The suggested controller's robustness against various forms of disturbances will also be tested.

Appendix

The adaptive dead zone vector method described in [15] was used to demonstrate the convergence of neural network weights for each layer. As a result, the relationship between neural network prediction error, output error, and bounded disturbance of each layer has been used.

A Proof of Weight Convergence for the Output Layer

In order to evaluate the convergence of the weights of the output layer, the predictive error of the neural network, , is defined as in equation (A.1), in which is the desired output, is the output predicted by the neural network, and is the noise. and are the optimal or suboptimal values of the output-layer weights and of the hidden-layer output of the neural network. Equation (A.2) describes the relationship between the predictive error of the neural network, the error of the output layer , and the bounded disturbance of the output layer .

Theorem A.1. Assume that the dead zone vector is defined as , in which . If the weights of the neural network and the dead zone vector are updated such that equation (A.3) is satisfied:

Then, the neural network's error is bounded as follows and the weights of the output layer converge.

, , , and are the optimal or suboptimal values of the dead zone vector and the weights of the output layer.

Proof. By using and substituting equation (10) and equation (11) in equation (A.3), we have: By substituting equation (A.2) in equation (A.5), we have: Since , the following inequality can be used. Hence: Since the above expression is bounded and smaller than zero, .
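The structure of this argument, which is shared by Theorems A.1-A.3, can be summarized in generic notation; the symbols below are illustrative and do not reproduce the exact inequalities of the appendix.

```latex
% Generic dead-zone convergence argument: \tilde{W} is the deviation of the
% layer weights from their (sub)optimal values, e the prediction error,
% d the adaptive dead zone, and \eta a positive learning-rate constant.
\begin{aligned}
V(k) &= \operatorname{tr}\!\big(\tilde{W}^{\top}(k)\,\tilde{W}(k)\big),\\
\Delta V(k) &= V(k+1) - V(k) \;\le\; -\eta\,\big(|e(k)| - d(k)\big)^{2} \;\le\; 0
\qquad \text{whenever } |e(k)| > d(k),
\end{aligned}
```

so $V$ is non-increasing outside the dead zone, the weight deviation remains bounded, and the prediction error is ultimately bounded by the dead zone.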

B Proof of Weight Convergence for the Hidden Layer

Similar to the proof in Section A, the predictive error of the HCDRNN, , is defined as in equation (B.1).

is the activation function. Equation (B.2) describes the relationship between the predictive error of the neural network, error of the hidden layer , and bounded disturbance of the hidden layer .

Theorem A.2. Assume that the dead zone vector is defined as , in which and . If the weights of the neural network and the dead zone vector are updated such that equation (B.3) is satisfied.

Then, the neural network's error is bounded as follows and the weights of the hidden layer converge.

, , , and are the optimal or suboptimal values of the dead zone vector and the weights of the hidden layer.

Proof. By substituting equation (12) and equation (13) in equation (B.3), we have: Since is the inner part of the activation function , an approximation of with respect to is considered as the product of and . Thus, where and .
Hence: By substituting and in equation (B.7), and considering and , we have: Hence: Since the above expression is bounded and smaller than zero, .

C Proof of Weight Convergence for the Context Layer

Similar to the proofs in Sections A and B, the predictive error of the HCDRNN, , is defined as in the following equation:

Equation (C.2) describes the relationship between the predictive error of the neural network, the error of the context layer , and the bounded disturbance of the context layer .

Theorem A.3. Assume that the dead zone vector is defined as , in which and . If the weights of the neural network and the dead zone vector are updated such that equation (C.3) is satisfied, then the neural network's error is bounded as follows and the weights of the context layer converge. , , , and are the optimal or suboptimal values of the dead zone vector of the context layer.

Proof. By substituting equation (14) and equation (15) in equation (C.3), we have: Since is the inner part of the activation function , an approximation of with respect to is considered as the product of and . Thus, where , , and .
By considering equation (C.6), we have: By substituting , in equation (C.7) and considering and , the following inequality can be used. Hence: Since the above expression is bounded and smaller than zero, .

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest in preparing this article.