Abstract

Load training is an important part of the daily training of aerobics athletes, so research on the movement characteristics of athletes under load training has received extensive attention from teaching units. Using deep learning algorithms from artificial intelligence to study the characteristics of calisthenics teaching and training is a beneficial exploration toward making calisthenics training programs more scientific. This paper describes the basic principles and implementation process of the neural network algorithm. After analyzing the basic structure and characteristics of the network, a comprehensive scheme for optimizing and improving the regularized deep belief network algorithm is proposed. The results show that the feature classification error rate of the optimized regularized deep belief network algorithm is 0.6% lower than that of comparable learning algorithms. Although the training speed of the model decreases, as the depth of learning increases the deep neural network better captures abstract features of the training data. Being less affected by the input parameters, the algorithm is more stable and more conducive to extracting features that favor classification. These advantages of deep neural networks make it possible to effectively classify the training characteristics of aerobics athletes and provide a scientific basis for decision-making in subsequent teaching and training.

1. Introduction

Artificial intelligence algorithms supported by machine learning, deep learning, and related theories, with the help of computer-aided platforms, can accurately and efficiently classify and mine features from large amounts of data. Aerobics training requires a great deal of functional training to effectively improve athletes' skill level. In the past, research on functional training feature data relied mainly on manual analysis; the analysis period was often long and the results were not ideal. In this paper, a deep neural network is introduced to classify the load characteristics of aerobics teaching and training, aiming to identify the movement characteristics of athletes' functional aerobics training accurately and to provide scientific, precise data support for optimizing subsequent training programs. The main direction of this research is to examine the feasibility of neural network algorithms within deep learning theory by analyzing the basic theory of artificial neural networks. Based on the deep belief model, a regularized deep belief network algorithm with batch normalization is constructed. Simulation results show that the optimized regularized deep belief network algorithm has advantages in feature classification accuracy and algorithm iteration.

A deep neural network provides strong support for feature extraction, pattern recognition, and related tasks. The principle of a feature extraction algorithm based on a deep neural network is to select important feature information from research data by simulating the information-processing characteristics of the biological nervous system. This study focuses on the neural network algorithm and the latest achievements in the field of deep learning, analyzes the basic modules, model structure, and implementation process of the deep neural network, and explores how to better exploit the advantages of the deep neural network algorithm to effectively classify the characteristics of athletes' load training in aerobics teaching and training.

Deep learning theory is introduced on top of the basic artificial neural network algorithm, which is then optimized, based on the needs of load training feature extraction, to improve the accuracy and robustness of the deep neural network in mining training features. Starting from the basic principles and processes of the artificial neural network, and addressing the insufficient training of the traditional deep belief network, a classification method combining batch normalization is proposed, and the optimized algorithm is applied to the classification of aerobics load training characteristics. Experiments show that the optimized deep belief network effectively enhances the neural network algorithm's classification and mining of large-scale, complex training data and provides reliable data support for subsequent optimization of the aerobics teaching program.

The main contents of this paper are as follows: Section 1 discusses the advantages of the deep neural network algorithm in data feature classification and how artificial intelligence algorithms based on computer technology can analyze complex data effectively and accurately. Section 2 reviews related research on neural network algorithms and their applications. Section 3 describes the principles and implementation process of the neural network and deep neural network algorithms and presents the batch normalization of the basic deep belief network for reliable data feature classification. Section 4 introduces the optimization strategy and performance experiments of the hybrid algorithm combining the batch normalization algorithm and the deep neural network algorithm. Section 5 summarizes the research on the optimized deep neural network algorithm and analyzes future research directions for classifying and mining aerobics teaching and training data.

2. Related Work

Research on neural networks by scholars at home and abroad has been applied successfully in many industries and fields. In research on GPS positioning, Sun et al. proposed a hybrid algorithm to improve the selection of optimal satellite combinations; positioning accuracy is improved by refining the geometric dilution of precision [1]. In Wozniak and Giabbanelli's research on forest data recognition, ways to improve the recognition and classification accuracy of the BP neural network are discussed; a clustering algorithm is introduced to optimize the BP neural network, compensating for its slow convergence [2]. Tian et al. introduced an optimized BP neural network to solve the reliability problem of excessively noisy data [3]. In gesture recognition, Li et al. discussed how to improve the BP neural network's search for local minima; by introducing the chaos principle, a hybrid model combining a genetic algorithm and a BP neural network is constructed, improving the quality of the optimal solution [4]. Wang and Jeong discussed the feasibility of optimizing the BP neural network using wavelet transform theory and examined its backpropagation characteristics [5]. Li et al. explored optimization for comprehensive error compensation of multiaxis machine tools; by updating both the read factor and the amplification factor, the convergence of the algorithm is improved, and an error prediction model of machine tool processing accuracy is constructed for practical application [6]. In Song et al.'s study on forest data recognition, ways to improve the recognition and classification accuracy of the BP neural network are likewise discussed, with a clustering algorithm introduced to offset the slow convergence of the BP neural network [7].

In academic research in recent years, Liu explored how to use visualization tools to help users better interpret deep models [8]. By developing and deploying an interactive visualization system based on an improved ActiVis and constructing multiple coordinated views, the results of deep neural network models can be studied at various levels, such as instances and subsets of examples [9]. Ahn et al. studied how to classify driver head data using a deep neural network and proposed a multitask learning deep neural network based on small grayscale images; the multitask model monitors the driver accurately using multiview pictures transmitted by sensors under varying environmental conditions such as illumination, vehicle vibration, changes in the driver's posture, and external occlusion [10]. Sharma et al. focused on how to reduce the operating bandwidth while maintaining classification accuracy; by constructing a bit-level processing-element array, the operating bandwidth of the deep neural network is dynamically fused so that accuracy is unaffected even at the finest granularity of computation [11]. Lubbers et al. introduced a hierarchically interacting particle deep neural network to study molecular characteristics from computational data sets; the network achieves state-of-the-art prediction performance on organic molecular data sets and can identify the uncertain regions of the model while producing accurate energy predictions [12]. Zhu et al. explored how to apply deep neural networks effectively in image classification; by constructing a training model using learned features, the performance of the algorithm is effectively improved [13]. Groen et al. studied the variance of human behavioral and brain-measurement interpretation by building three feature models, among which a deep neural network was used to simulate the high-level visual features of the human eye; experiments show that the deep neural network classifies scenes effectively, in a way very similar to human vision [14]. In research on computed tomography scan data, You et al. explored using a denoising model with a deep neural network to improve the effective classification of images; experiments show that this scheme improves the reliability of information retrieval and has good application prospects in clinical medicine [15].

The analysis and applications of artificial neural networks and deep neural networks above show that intelligent algorithms have produced many good results in image classification, prediction models, image retrieval and positioning, real-time picture monitoring, and other applications. However, even with the support of today's big data, improving the intelligence level of aerobics teaching and training by studying the characteristic data of aerobics load training remains a worthwhile research area. Therefore, this paper optimizes the deep neural network algorithm and explores how the optimized model can provide strong data support for feature classification in aerobics load training [16].

3. Methodology

3.1. Neural Network Algorithms

To solve problems that are not linearly separable, multilayer functional neurons are needed. Therefore, one or more hidden layers are usually added between the input layer and the output layer; both hidden-layer and output-layer units are functional neurons with an activation function and a threshold. More generally, the common neural network has the hierarchical structure shown in Figure 1, with no same-layer or cross-layer connections between neurons; such networks are usually called multilayer feedforward neural networks. The BP algorithm is a typical supervised learning algorithm. Its basic idea is to learn from a set of sample pairs (input and expected output): the sample input is fed to the neurons of the network's input layer, and after computation by the hidden and output layers, the output layer produces an actual output value [17]. If the error between this actual output and the expected output does not meet the accuracy requirement, the error is propagated back from the output layer (note that it is the error that is propagated), and the weights and thresholds are modified through their update formulas so that the error between the network's output and the expected output decreases continuously until the required accuracy is met. In other words, the BP network attributes the error between the network output and the expected output to the weights and thresholds; backpropagation apportions this error to each functional neuron. Weights and thresholds are adjusted along the direction of the negative gradient, in which the error function decreases fastest.

The learning and training of the BP neural network comprise two processes: forward propagation and error backpropagation. In forward propagation, external data variables enter from the input layer, are processed and nonlinearly transformed by the neural nodes of each hidden layer, and the result is finally emitted from the output layer [18]. As an example of this scheme, prior work constructed an output error prediction model of a parallel mechanism based on virtual experiments and a BP neural network, giving a fast prediction method for the output error of a parallel robot mechanism: considering the hinge installation error and hinge axis error of the parallel mechanism, a virtual prototype model containing these input errors is established, and the output error of the mechanism is solved by virtual experiment simulation; assuming that the errors of mass-produced mechanical components obey a normal distribution, multiple groups of normally distributed input error data are constructed [19], and a BP neural network prediction model of the mechanism is then established. If the actual output of the output layer deviates greatly from the expected target value, error backpropagation is carried out. In this process, the error is propagated back through each layer of the BP network one by one, and the error value is shared with all neural nodes. Each node uses this information to repeatedly adjust the connection weights between the input layer and the hidden layer and the connection weights and thresholds between the hidden layer and the output layer, until the error is reduced to an acceptable level or a preset number of learning iterations is reached. The structure is shown in Figure 1.

From the above, the perceptron has only one layer of functional neurons (functional neurons are neurons with a threshold; input-layer units have none). Its learning ability is therefore limited: it can only solve linearly separable problems and cannot handle even simple nonlinearly separable problems such as XOR.
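To make both points concrete — the two-phase BP procedure of the previous paragraphs and the XOR limitation just mentioned — the following is a minimal NumPy sketch (our illustration, not the implementation used in this paper) that trains a one-hidden-layer feedforward network with backpropagation until it fits XOR, which no single-layer perceptron can represent. Layer sizes and the learning rate are our own choices for the example.

```python
# Minimal BP sketch: forward propagation + error backpropagation on XOR.
# Assumptions (ours): 4 hidden sigmoid units, squared-error loss, lr = 0.5.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)                # hidden thresholds (biases)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                        # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    # Forward propagation: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error backpropagation: apportion the output error to each functional
    # neuron and step along the negative gradient of the squared error.
    delta_out = (out - y) * out * (1 - out)
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(np.round(out, 2))  # should approach [0, 1, 1, 0]: XOR is learned
```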

As described above, adding one or more hidden layers of functional neurons between the input and output layers yields a multilayer feedforward neural network that overcomes this limitation. The learning flow of the BP neural network algorithm is shown in Figure 2.

The convergence speed is affected by the shape of the error surface in the parameter space. The factors that affect the stability and speed of nonlinear convergence are mainly related to the model, chiefly its structural stiffness. Some structures can, conceptually, be regarded as geometrically invariant stable systems; however, if the stiffnesses of several main members of similar structures differ greatly, large numerical errors may result. The gridding method is essentially a method of interpolating in dimensional space using known point values. The main differences between methods lie in the range of sampling points and the weighting of known points: differences in sampling range distinguish global from local interpolation, and differences in weighting lie in the choice of weight (basis) function. Because multiple regression is a trend-surface mapping method, and local polynomial interpolation can be regarded as a local trend-surface method (fitting the known data with surfaces of different powers, mainly to distinguish the regional field from the local field), the method itself removes fine structure. Although adding hidden layers can enhance the network's ability to handle nonlinearity, it also makes the model more complex; it is not true that more hidden layers are always better. Beyond a certain point, the appropriate number of hidden layers must be determined by experiments (Figure 3).

3.2. Deep Neural Network Algorithms and Their Optimizing Strategies

The deep learning here derives from deep learning model theory: what the neural network learns is understood as compositions of nonlinear operations across multiple layers of nonlinear function relations. Compared with the common shallow neural network structure, a deep structure can often use fewer neurons to obtain better generalization performance. Figure 4 shows a deep neural network structure with three hidden layers. Deep learning makes good use of the learning characteristics of deep neural networks. At present, there are three main structures of deep neural networks. The first is the generative deep structure, which describes the joint probability distribution of the research object and the high-order correlation characteristics of the data. The second is the discriminative deep structure, which discriminates among pattern classes and describes the posterior distribution of the data. The third is the hybrid structure, a combination of generative and discriminative structures, which can distinguish data effectively.
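As a small illustration of the layered composition just described, the sketch below (ours; the layer sizes are assumed, not taken from the paper) runs one forward pass through a network with three hidden layers matching the shape of Figure 4.

```python
# Forward pass of a deep network with three hidden layers: each layer applies
# an affine transform followed by a nonlinear activation, composed in depth.
import numpy as np

rng = np.random.default_rng(0)
sizes = [100, 64, 64, 64, 10]  # input, three hidden layers, output (assumed)
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)  # nonlinear composition, layer by layer
    return x

out = forward(rng.normal(size=(1, 100)))  # one sample through the deep net
```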

The deep belief network is a deep neural network built by stacking restricted Boltzmann machines (RBMs); its multilayer structure gives it advantages in obtaining compressed encodings of data sets. Figure 5 shows the construction process of the deep belief network. In the pretraining stage, a greedy unsupervised learning algorithm trains each RBM from bottom to top, so that an unsupervised deep belief network is obtained through repeated iterative training. The gap between the output of the forward pass and the label data can then be measured; in other words, supervised top-down learning is used to fine-tune the deep belief network obtained by unsupervised learning.
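The following sketch illustrates this greedy, layer-wise pretraining scheme; it is our own minimal example using one-step contrastive divergence (CD-1), a standard RBM training rule, and is not the paper's implementation. The data and layer sizes are placeholders.

```python
# Greedy layer-wise DBN pretraining sketch: each RBM is trained unsupervised
# on the output of the previous one, from bottom to top (cf. Figure 5).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, lr=0.1, epochs=50):
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    a = np.zeros(n_visible)  # visible biases
    b = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        # Positive phase: sample hidden units given the data.
        ph = sigmoid(data @ W + b)
        h = (rng.random(ph.shape) < ph).astype(float)
        # Negative phase: one Gibbs step (reconstruct visible, resample hidden).
        pv = sigmoid(h @ W.T + a)
        ph2 = sigmoid(pv @ W + b)
        # CD-1 gradient approximation.
        W += lr * (data.T @ ph - pv.T @ ph2) / len(data)
        a += lr * (data - pv).mean(axis=0)
        b += lr * (ph - ph2).mean(axis=0)
    return W, b

def pretrain_dbn(data, layer_sizes):
    """Stack RBMs: the hidden activations of each trained RBM feed the next."""
    params, x = [], data
    for n_hidden in layer_sizes:
        W, b = train_rbm(x, n_hidden)
        params.append((W, b))
        x = sigmoid(x @ W + b)  # output of this RBM = input of the next
    return params

# Example: pretrain a 2-hidden-layer DBN on random binary placeholder data.
data = (rng.random((120, 100)) < 0.3).astype(float)
dbn = pretrain_dbn(data, layer_sizes=[100, 100])
```

After this unsupervised bottom-up pass, the top-down supervised fine-tuning described above adjusts all layers against the label data.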

The output of the previous RBM in the figure is the input of the next RBM. For a deep belief network with visible input $x$ and $l$ hidden layers $h^1, h^2, \ldots, h^l$, the joint probability distribution of the model is given by formula (1):

$$P(x, h^1, \ldots, h^l) = P(x \mid h^1)\left(\prod_{k=1}^{l-2} P(h^k \mid h^{k+1})\right) P(h^{l-1}, h^l), \tag{1}$$

where each conditional $P(h^k \mid h^{k+1})$ is the distribution of a layer given the layer above it, and $P(h^{l-1}, h^l)$ is the joint distribution of the top-level RBM.

In practical applications, deep belief networks are often undertrained. Therefore, batch normalization (BN) is introduced into the deep belief network. The batch regularization algorithm applies an independent regularization to each scalar feature, that is, batch processing of the input data samples. In the fine-tuning of a traditional deep belief network, the distribution of the input data at each layer shifts as the parameters vary, which makes the network very difficult to train. If the output values are regularized before the activation function maps them as input to the next layer, the distribution of the input data will not change significantly during training; in other words, the optimization goal of making the deep belief network run more stably can be achieved. Based on this idea, batch regularization is applied in the fine-tuning stage. The structure of the batch regularized deep belief network (BN-DBN) is shown in Figure 6.

Figure 6(b) uses the batch regularization algorithm to process the input features in a batch normalization (BN) layer and then passes the result to the activation function layer as its input. In BN processing of a deep neural network with hidden layers $1, \ldots, l$, formulas (2)-(4) are used. Each batch of training samples forms a sample set $D = \{x_1, x_2, \ldots, x_m\}$, where each input value $x$ is $k$-dimensional, written $x = (x^{(1)}, \ldots, x^{(k)})$; the mean over $D$ is $\mu_D$ and the variance is $\sigma_D^2$:

$$\mu_D = \frac{1}{m}\sum_{i=1}^{m} x_i, \tag{2}$$

$$\sigma_D^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_D\right)^2. \tag{3}$$

Each dimension of the input value $x$ is then regularized with

$$\hat{x}^{(k)} = \frac{x^{(k)} - \mu_D^{(k)}}{\sqrt{\sigma_D^{2(k)} + \epsilon}}. \tag{4}$$

The batch regularized deep belief network (BN-DBN) mainly uses scale transformation and translation transformation parameters (a learnable scale $\gamma$ and shift $\beta$) to keep the model expressive. In this way, gradient propagation through the deep belief network is not affected when the parameters of some layer are transformed. When the weights of new parameters are too large, the scale transformation reduces the gradient of the whole model. This keeps the whole batch regularized deep belief network (BN-DBN) in a stable state throughout parameter training.
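The sketch below puts formulas (2)-(4) and the scale/shift step together; it is a minimal illustration under our own assumptions (batch and feature sizes are placeholders), not the code used in the paper's experiments.

```python
# Batch regularization (BN) forward step: per-dimension normalization
# (formulas (2)-(4)) followed by the learnable scale (gamma) and shift (beta).
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each dimension of a mini-batch x of shape (m, k), then
    restore expressive power via y = gamma * x_hat + beta."""
    mu = x.mean(axis=0)                     # formula (2): per-dimension mean
    var = x.var(axis=0)                     # formula (3): per-dimension variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # formula (4): normalization
    return gamma * x_hat + beta

# Usage on a batch of 120 samples with 100 features (placeholder sizes).
rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, (120, 100))        # inputs with a shifted distribution
y = batch_norm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
print(y.mean(), y.std())                    # ~0 and ~1 after normalization
```

With gamma initialized to 1 and beta to 0, the layer starts as a pure normalizer; training then adjusts both so that the layer can recover any needed distribution while keeping gradients stable.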

4. Batch Regularized Deep Belief Network (BN-DBN) Performance Experiments

4.1. Experimental Conditions and Parameters

The test environment is the Windows 10 operating system with 4.00 GB of memory, using MATLAB R2014a. The database used in the experiment is the characteristic database of the aerobics training team of M University in 2017, which contains 50,000 training samples and 12,000 test samples. The experimental data were drawn from this database at random: 5,000 training samples and 1,500 test samples were used. The parameters selected for the experiment were obtained after a large number of trials. In the unsupervised learning stage, without the batch regularization algorithm (BN), the network has 2 hidden layers with 100 neuron nodes per layer. The batch size is 120. When the number of iterations exceeds 10, the momentum parameter is set to 0.85; when it is 10 or fewer, the momentum parameter is set to 0.65. The maximum number of iterations is 200. The initial learning rate of the supervised stage is set to 0.05, and the maximum number of fine-tuning iterations is 100. In the experiment, the batch regularized deep belief network is compared with the original model to discuss its performance advantages. The comparison also covers other feature extraction algorithms: the autoencoder (AE), the basic deep belief network (DBN), and dropout-DBN.
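For clarity, the training configuration just described can be collected into a single settings structure; the sketch below is ours (the key names are hypothetical, not from the paper), summarizing the stated parameters.

```python
# Experimental configuration of Section 4.1, gathered into one place.
# Key names are our own; values are the ones reported in the text.
bn_dbn_config = {
    "hidden_layers": 2,
    "neurons_per_layer": 100,
    "batch_size": 120,
    # Momentum schedule: 0.65 for the first 10 iterations, 0.85 afterwards.
    "momentum": lambda iteration: 0.85 if iteration > 10 else 0.65,
    "max_pretrain_iterations": 200,
    "finetune_learning_rate": 0.05,
    "max_finetune_iterations": 100,
    "train_samples": 5000,
    "test_samples": 1500,
}
```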

4.2. Stability Experiment of the BN-DBN Algorithm

The setting of the initial learning rate in a traditional DBN directly affects the stability and convergence speed of the algorithm, and selecting an appropriate learning rate often requires many trial experiments. This experiment verifies the stability of the BN-DBN algorithm: with the other parameters fixed, the learning rate is varied between 0.0005 and 0.05. Based on the characteristic data set of aerobics athletes' weekly load training in 2017, the BN-DBN algorithm is compared with the ANN, the K-nearest neighbor algorithm (KNN), the basic deep belief network (DBN), and dropout-DBN. The results are shown in Figure 7. Across this range, the classification error rate of BN-DBN is 0.6% lower than that of the dropout-DBN algorithm. The results show that the algorithm has high stability.
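The experimental protocol amounts to a simple sweep; a sketch of that loop is shown below. The `train_model` and `error_rate` functions are stand-in stubs of our own for the real training and evaluation routines, included only so the sketch runs as written.

```python
# Stability sweep sketch: vary the initial learning rate between 0.0005 and
# 0.05 with all other parameters fixed, recording the classification error.
import numpy as np

def train_model(lr):
    return {"lr": lr}   # stub: would pretrain and fine-tune a BN-DBN here

def error_rate(model):
    return 0.0          # stub: would evaluate on the held-out test samples

learning_rates = np.geomspace(0.0005, 0.05, num=10)
errors = {lr: error_rate(train_model(lr)) for lr in learning_rates}
# A stable algorithm yields a nearly flat error curve across the whole range.
```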

4.3. Performance Experiment of BN-DBN Algorithm

Compared with traditional neural network algorithms, the greatest advantage of the deep model is its depth, but deeper models are also more prone to overfitting. To verify the performance of the proposed BN-DBN algorithm at greater depths, the other parameters are fixed while the number of hidden layers is increased gradually, up to 10, using the aerobics training data as experimental data. Figure 8 shows the classification accuracy of the BN-DBN algorithm and the other algorithms as the number of hidden layers changes.

The experimental data above show that, for feature extraction training, the claim that network performance always improves as the number of hidden layers increases is not correct. At the same data scale, as the number of hidden layers keeps increasing, the classification curves of all four algorithms trend downward. This is mainly because each hidden layer extracts sample features at a different level: the more hidden layers there are, the richer the feature representation becomes, which can eventually reduce classification accuracy. However, the abstraction effect of the BN-DBN algorithm is better, which means that although the parameters change constantly, BN-DBN runs well and is more suitable for feature extraction.

5. Conclusion

The ANN is an artificial intelligence algorithm that simulates the neural structure of the human brain to reproduce its thinking ability, and it has good application prospects in prediction, classification, and feature extraction. The general deep learning process feeds feature vectors into a neural network, whose internal computations then produce the required results. Unlike an ordinary program, a neural network is larger in scale and can process large amounts of data and obtain results more quickly, achieving what a single conventional program cannot; the precondition is that the network needs a large amount of data for training, so building a neural network takes longer. In this paper, a deep belief network algorithm is proposed and optimized for the needs of aerobics load training feature classification. BN is introduced to optimize the deep belief network: the scalar features of each layer of the network are processed by the independent regularization method of the batch regularization algorithm, that is, batch processing of the input data samples. The optimized algorithm achieves a classification error rate 0.6% lower than that of dropout-DBN. The BN-DBN extracts the characteristics of the data more accurately, and the algorithm has high stability; its abstraction effect is better, making it more suitable for feature extraction. However, the study still has some limitations: the BN-DBN algorithm still has room for improvement, so building a hybrid model structure is a direction for future research.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.