Abstract
This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique was tested on different flight/space dynamics models and showed promising results.
1. Introduction
An artificial neural network (ANN), usually abbreviated as neural network (NN), is a mathematical model inspired by the structure of biological neural networks in the brain. A neural network consists of an interconnected group of elements called neurons, arranged in different layers, and it processes information by propagating data through these elements. The strength of each connection to a neuron is set by a "weight", and these weights are determined through a learning process on the available data. Simply put, neural networks are nonlinear statistical data modeling tools used to model complex relationships between inputs and outputs. Two challenges face the neural network designer: the selection of the neural network topology (i.e., the number of layers and the number of neurons) and the initialization of the weights. It is well recognized that a reliable way to decide an appropriate architecture and assign initial values for neural network training has yet to be established. A common criticism of neural networks stems from these two challenges, since a wrong choice of topology or of initial weight values leads to slow training and operation [1–5].
A lot of work has already been devoted to the application of neural networks (NN) in dynamics and control, both for controller design and for system modeling and identification [6–10]. Modeling dynamic systems or controllers with neural networks consumes much of the network designer's time in selecting an appropriate structure and in training the network, and it usually has to be done offline. The network size also affects the execution time and has a high impact on network design and selection, especially for real-time problems.
The network design technique given in this paper is systematic and relieves the network designer of the intensive task of finding an appropriate structure for the neural model. For linear systems, building on the work of Kaiser [11] and Kassem and Sameh [12], the network weights and biases are found analytically, so that the neural network can be used directly without the need for training.
Nonlinear systems can be modeled by training their linearized models about specific operating points, without the need to change the size or the structure of the network. The training is fast and can be done online, as it uses knowledge of the linear system to speed up the training process [12].
A modified version of the time-delay neural network is presented in this work; combined with the analytical modeling, it can model linear and nonlinear dynamic systems quickly and accurately. The technique was tested on different flight and space dynamics models and showed good agreement with the linear and nonlinear equations.
2. NN Analytical Model with One Hidden Neuron
This section presents the mathematical equations for linearizing a neural network model consisting of an input neuron, a hidden neuron, and an output neuron (Figure 1). The input and output neurons have linear activation functions, and the hidden neuron has a sigmoid activation function. The ability of this neural network model to approximate the straight-line equation is essential, as it allows the direct representation of difference equations. In the following calculations, σ denotes a continuous nonlinear transfer function applied in the neurons, such as the sigmoid function σ(x) = 1/(1 + e^(-x)).
Assume that a straight line is to be approximated over a given input interval, to within a prescribed error limit. An interval is then selected on the horizontal axis of the sigmoid over which the sigmoid deviates from a straight line by no more than a chosen constant. In other words, this step selects the interval on the horizontal axis of the sigmoid where the sigmoid function can be regarded as linear.
A suitable auxiliary function is then defined over this interval; further modifications (see Kaiser 1994 [11] for details) yield closed-form expressions for four network parameters.
Taking these four quantities as, respectively, the weight of the connection from the input unit, the bias of the hidden neuron, the weight of the link between the hidden neuron and the output unit, and the bias of the output unit results in the desired approximation. This analytic approximation forms the basis of the algorithm that is used to model the dynamics of linear systems. Figure 1 shows the model used in this paper for approximating the linear function.
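The closed-form expressions of [11] are not reproduced here, so the following minimal sketch illustrates the idea with one common choice: the sigmoid σ(t) = 1/(1 + e^(-t)), whose slope at the origin is 1/4, operated in its near-linear region by keeping the input weight small. The particular scaling scheme below (fix a small input weight, then solve for the output weight and the two biases from the line's slope and intercept) is an illustrative assumption, not necessarily the exact formulas of [11].

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def analytic_line_neuron(m, c, w1=0.01):
    """Weights/biases so that w2*sigmoid(w1*x + b1) + b2 ~ m*x + c.

    With |w1*x| small, the sigmoid is operated in its near-linear region
    around t = 0, where sigmoid(t) ~ 0.5 + 0.25*t.
    """
    b1 = 0.0                      # centre the sigmoid on its linear region
    w2 = m / (0.25 * w1)          # match the slope m
    b2 = c - 0.5 * w2             # match the intercept c
    return w1, b1, w2, b2

# Approximate the line y = 2x - 1 for x in [-5, 5]
w1, b1, w2, b2 = analytic_line_neuron(2.0, -1.0)
x = np.linspace(-5.0, 5.0, 101)
y_nn = w2 * sigmoid(w1 * x + b1) + b2
print(np.max(np.abs(y_nn - (2.0 * x - 1.0))))   # worst-case error, on the order of 1e-3
```

With the input weight fixed small, the remaining parameters follow from the slope and intercept alone, which is what makes a fully analytic construction possible.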
3. Modified Time-Delay Neural Networks
Time-delay neural networks (TDNNs), introduced by Waibel et al. [13], form a class of neural networks with a special topology. They are used for position-independent recognition of features within a larger pattern. In order to recognize patterns independently of their position in space or time, older activation and connection values of the feature units have to be stored. This is done by making a copy of the feature units, with all their outgoing connections, at each time step before updating the original units. The total number of time steps saved by this procedure is called the delay.
In this work, the TDNN is modified to fit exactly the general difference equation of a linear nth-order dynamic system with constant coefficients [14], which relates the discrete output variable at the kth instant, its delayed values, and the input value; the input could itself be generated by another difference equation.
The output is isolated on one side of the equation, while its delayed values and the input are moved to the other side. The output value is then just a summation of linear relations, each of which can be modeled analytically as an NN with one hidden neuron. In this way, the one-hidden-neuron NN is used as a building block, and the complete network is built systematically, as shown in Figure 2.
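As a concrete illustration, and assuming the common constant-coefficient form y(k) + a1*y(k-1) + ... + an*y(k-n) = b0*u(k) + b1*u(k-1) + ... + bm*u(k-m) (which matches the rearrangement just described), the sketch below stacks one-hidden-neuron blocks, using the same illustrative scaling scheme as in the previous sketch, into the structure of Figure 2. The second-order coefficients at the end are hypothetical, chosen only to exercise the code.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

class LinearBlock:
    """One-hidden-neuron block approximating y ~ coeff * z (illustrative scaling)."""
    def __init__(self, coeff, w1=0.01):
        self.w1, self.b1 = w1, 0.0          # small input weight keeps the sigmoid near-linear
        self.w2 = coeff / (0.25 * w1)       # match the slope coeff
        self.b2 = -0.5 * self.w2            # zero intercept
    def __call__(self, z):
        return self.w2 * sigmoid(self.w1 * z + self.b1) + self.b2

class DifferenceEquationNN:
    """y(k) = sum_i (-a_i) y(k-i) + sum_j b_j u(k-j): one block per term, outputs summed."""
    def __init__(self, a, b):               # a = [a1, ..., an], b = [b0, ..., bm]
        self.y_blocks = [LinearBlock(-ai) for ai in a]
        self.u_blocks = [LinearBlock(bj) for bj in b]
    def step(self, y_delays, u_delays):     # y_delays = [y(k-1), ...], u_delays = [u(k), u(k-1), ...]
        return (sum(blk(z) for blk, z in zip(self.y_blocks, y_delays)) +
                sum(blk(z) for blk, z in zip(self.u_blocks, u_delays)))

# Hypothetical 2nd-order example: y(k) = 1.5 y(k-1) - 0.7 y(k-2) + 0.2 u(k-1)
net = DifferenceEquationNN(a=[-1.5, 0.7], b=[0.0, 0.2])
y = [0.0, 0.0]
for k in range(2, 60):
    y.append(net.step([y[k - 1], y[k - 2]], [1.0, 1.0]))   # unit step input
print(y[-1])   # approaches the steady-state value 1.0 of the exact difference equation
```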
4. The Algorithm
First, the nonlinear system is linearized about an operating point (trim condition) using any linearization technique, such as Taylor series expansion or small-disturbance theory. Second, the linear model is discretized, and an analytic NN model is built from it. Finally, the model is trained on data points from the nonlinear model, with the initial values of the weights and biases taken from the linear analytic NN model. The detailed modeling algorithm is shown in Algorithm 1. For linear systems, only steps 2, 3, and 4 apply, as there is no linearization and no need for training.
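A minimal sketch of the overall chain described above (linearize, discretize, build the analytic NN, and, for nonlinear systems, train) is given below for a toy first-order system used as an illustrative stand-in; the exact step numbering and helper routines of Algorithm 1 are not reproduced here.

```python
import numpy as np
from scipy.signal import cont2discrete

# Toy first-order nonlinear system xdot = -x**3 + u (an illustrative stand-in,
# not one of the paper's models).
def f(x, u):
    return -x**3 + u

# Linearize about a trim point (x0, u0) with central finite differences.
x0, u0, eps = 1.0, 1.0, 1e-6                 # trim point: f(1, 1) = 0
A = np.array([[(f(x0 + eps, u0) - f(x0 - eps, u0)) / (2 * eps)]])
B = np.array([[(f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)]])

# Discretize (zero-order hold) to obtain the difference-equation coefficients.
dt = 0.05
Ad, Bd, *_ = cont2discrete((A, B, np.eye(1), np.zeros((1, 1))), dt)

# x(k+1) = Ad*x(k) + Bd*u(k): each coefficient becomes one analytic
# one-hidden-neuron block as in Sections 2 and 3, with no training needed.
# For a nonlinear system, these values then serve as the initial weights
# for training on data generated from the nonlinear model.
print(Ad[0, 0], Bd[0, 0])
```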
5. Test Cases
Four test cases are given in this section to illustrate the application of the proposed modeling approach using NNs. The first two, in Sections 5.1 and 5.2, show the results of modeling unstable linear dynamics without detailing the proposed algorithm. The third, nonlinear, test case, presented in Section 5.3, is accompanied by more details, and the fourth, in Section 5.4, applies the approach to satellite attitude dynamics.
5.1. Helicopter Longitudinal Dynamics
The linear state-space model of the longitudinal motion of a helicopter [15] has three states: the pitch rate, the pitch angle of the fuselage, and the horizontal forward velocity; the control input is the rotor tilt angle. The system is unstable, with both zeros and poles in the right-half plane. Using the linear algorithm, the NN fits the system's three states very accurately for a unit step input, as shown in Figure 3.
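Since the helicopter matrices and poles are not reproduced above, the short sketch below uses placeholder values of the same dimensions (three states, one input) purely to illustrate the workflow: the continuous state-space model is discretized, and the discrete coefficients are exactly what the analytic blocks of Sections 2 and 3 must reproduce for the step response of Figure 3. The numbers are illustrative stand-ins, not the model of [15].

```python
import numpy as np
from scipy.signal import cont2discrete, dlsim

# Placeholder 3-state, single-input matrices (illustrative stand-ins, not the
# helicopter model of [15]); states: pitch rate, pitch angle, forward velocity.
A = np.array([[-0.4, 0.0, -0.01],
              [ 1.0, 0.0,  0.0 ],
              [-1.4, 9.8, -0.02]])
B = np.array([[6.3], [0.0], [9.8]])
C, D, dt = np.eye(3), np.zeros((3, 1)), 0.01

# Discretize; every nonzero entry of (Ad, Bd) maps onto one analytic
# one-hidden-neuron block, with weights found by direct substitution.
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt)

# Unit step response of the discrete model: the reference the NN must match.
t = np.arange(0.0, 5.0, dt)
u = np.ones((t.size, 1))
_, y_step, x_hist = dlsim((Ad, Bd, Cd, Dd, dt), u, t)
print(y_step.shape)          # (500, 3): three states over five seconds
```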
5.2. F-16 Longitudinal Dynamics
The equations of longitudinal motion of the F-16 fighter are considered for a trimmed flight condition. The state vector comprises the longitudinal velocity, the angle of attack, the pitch angle, and the pitch rate, and the single control input is the elevator deflection in degrees; the state equation in matrix form is given in [16]. Using the linear algorithm, the NN fits the system's four states very accurately for a unit step elevator input, as shown in Figure 4.
5.3. Generic Fighter Nonlinear Flight Dynamics
Consider the flight dynamics model of a high-performance aircraft in the short-period mode [17–19], in which the motion involves rapid changes of angle of attack and pitch attitude at roughly constant airspeed. The model covers both linear and nonlinear flight behavior over an extensive range of angle of attack. The model is described mathematically in [18] in terms of the angle of attack in degrees, the pitch rate in degrees per second, the elevator control surface deflection in degrees, and the normal force coefficient, which is the sole nonlinearity in the model. The system dynamics can be summarized as follows: (a) for angles of attack below 14.36 degrees, the aircraft has stable dynamics; (b) for angles of attack between 14.36 and 19.6 degrees, the aircraft exhibits oscillatory instability; (c) at high angles of attack, limit cycle oscillations (LCO) set in.
It is obvious that a single linear model cannot represent this nonlinear behavior, so this example shows the strength of the proposed algorithm: it starts with a single linear NN model, trains it to capture the nonlinear behavior, and then adds a second model at a different operating point to improve the accuracy.
The model is linearized about a low angle-of-attack operating point, and the resulting system is discretized with a suitable sampling time. Using the linear algorithm, the NN fits the system's two states very accurately for a unit step input, as shown in Figure 5. This assumes that the angle of attack does not exceed 14.6 deg.
The discrepancy that arises when the linear NN model is used to predict the behavior of the nonlinear system at angles of attack above 14.6 deg is shown in Figure 6; the NN therefore needs some training. A step input applied for five seconds is used to train the network, and a fifteen-second step input of the same value is used for testing; the results are shown in Figure 7. The initial weights and biases were taken from the linear NN to speed up convergence.
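The warm-start procedure can be illustrated with a self-contained toy example (not the aircraft model above): a scalar nonlinear plant, its linearization about the origin, an analytic two-block network built from the linear coefficients as in Section 2, and a short optimization that refines those parameters against step-response data from the nonlinear plant. The plant, the one-step-ahead loss, and the choice of BFGS are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy stand-in: nonlinear plant y(k+1) = 0.9 y(k) - 0.1 y(k)**3 + 0.1 u(k);
# its linearization about y = 0 is y(k+1) = 0.9 y(k) + 0.1 u(k).
def plant(y, u):
    return 0.9 * y - 0.1 * y**3 + 0.1 * u

def nn_step(p, y, u):
    # two one-hidden-neuron blocks, one per term of the difference equation
    w1a, b1a, w2a, b2a, w1b, b1b, w2b, b2b = p
    return (w2a * sigmoid(w1a * y + b1a) + b2a +
            w2b * sigmoid(w1b * u + b1b) + b2b)

def analytic_params(coeffs, w1=0.01):
    p = []
    for c in coeffs:                          # slope-matching scheme from the Section 2 sketch
        w2 = c / (0.25 * w1)
        p += [w1, 0.0, w2, -0.5 * w2]
    return np.array(p)

# Training data: step response of the nonlinear plant
u_seq = np.full(200, 1.0)
y = [0.0]
for k in range(199):
    y.append(plant(y[k], u_seq[k]))
y = np.array(y)

def loss(p):                                  # one-step-ahead prediction error
    pred = np.array([nn_step(p, y[k], u_seq[k]) for k in range(199)])
    return np.mean((pred - y[1:])**2)

p0 = analytic_params([0.9, 0.1])              # warm start from the linear model
res = minimize(loss, p0, method="BFGS")       # refines the analytic weights on nonlinear data
print(loss(p0), res.fun)                      # the trained network fits the nonlinear data better
```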
When the NN learning is initiated with zero or random weights and biases, it has been observed that the network either does not converge to the correct values or, in the best case, converges after five times as many iterations as are required when the linear NN weights and biases are used.
It is clear that the NN performs relatively well given its size (four sigmoid elements). The model can be improved by adding another model at a different operating point.
Linearizing the system about a second operating point gives another state-space model, which is discretized in the same way, and the same procedure is used to produce the NN model for this linear system. Adding the two NN models gives a network of double the size (eight sigmoid elements). After training the combined model, an improved model is obtained that produces much better results, as shown in Figure 8. It is clear that this improved model captures the limit cycle very well, so there is no need to add another NN model in the region between 14.36 and 19.6 degrees.
5.4. Satellite Attitude Dynamics
This final case study addresses a space-related problem and also illustrates the effectiveness of the proposed NN for systems with a sparse dynamics matrix. The linearized attitude dynamics of a symmetric satellite under the small-angle and gravity-gradient-torque approximations are given in [20]. The satellite altitude is 700 km, and the initial attitude is 5° in each of the three attitude angles. Figure 9 shows the initial-condition response of the satellite using the state-space and NN models. The NN model is built by direct substitution, without the need to train the network, and the results show a good match. The NN model is also very efficient, as it uses only eight one-hidden-neuron blocks for the matrix A, representing only its nonzero elements.
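The sparsity point can be sketched as follows: analytic one-hidden-neuron blocks are created only for the nonzero entries of the discretized dynamics matrix, so a sparse matrix needs far fewer blocks than a full one. The matrix below is an illustrative 4 x 4 sparsity pattern, not the satellite's actual values.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def analytic_block(coeff, w1=0.01):
    """(w1, b1, w2, b2) realising y ~ coeff*z with one sigmoid neuron (illustrative scaling)."""
    w2 = coeff / (0.25 * w1)
    return (w1, 0.0, w2, -0.5 * w2)

# Illustrative sparse discrete dynamics matrix: blocks are built only where
# the entry is nonzero, so this example needs 8 blocks instead of 16.
Ad = np.array([[1.0,   0.01, 0.0,    0.0 ],
               [0.002, 1.0,  0.0,    0.0 ],
               [0.0,   0.0,  1.0,    0.01],
               [0.0,   0.0, -0.003,  1.0 ]])

blocks = {(i, j): analytic_block(Ad[i, j])
          for i in range(Ad.shape[0]) for j in range(Ad.shape[1])
          if Ad[i, j] != 0.0}
print(len(blocks), "one-hidden-neuron blocks instead of", Ad.size)
```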
6. Conclusions
The procedure and the test cases described in this paper show that the linear neural network based on the analytic one-hidden-neuron NN is well suited as a starting point for fast-learning nonlinear networks. They also show that the modified time-delay neural network combined with the analytical network forms a very efficient algorithm for approximating flight dynamics equations. The presented algorithm relieves the network designer of the work-intensive task of finding a suitable structure for the neural network, provided that the designer has at least some qualitative knowledge about the modeling task. It also gives a systematic way of increasing the size of the neural network to improve accuracy without the need for guessing. The algorithm guides the learning process and helps the network escape local minima by providing an initial guess from the linear model. This illustrates the importance and usefulness of the algorithm in the field of neural network modeling of nonlinear flight dynamics.