Abstract
Due to the lack of maintenance support samples, maintenance support effectiveness evaluation based on deep neural networks often suffers from small-sample overfitting and low generalization ability. In this paper, a neural network evaluation model based on an improved generative adversarial network (GAN) and a radial basis function (RBF) network is proposed to amplify maintenance support samples. It adds a category constraint based on a category probability vector reordering function to the GAN loss function, which avoids the collapse of generated sample categories and enhances the quality of generated samples. It also designs a parameter initialization method for the RBF network in which parameter components vary at equal intervals, which strengthens the response to correct feature information and reduces the risk of training overfitting. The comparison results show that the mean square error (MSE) of the improved GAN-RBF model is approximately 1/2 that of the RBF model, 1/3 that of the Elman model, and 1/5 that of the BP model, while its complexity remains at a reasonable level. Compared with traditional neural network evaluation methods, the improved GAN-RBF model has higher evaluation accuracy, better alleviates the poor generalization ability caused by insufficient training samples, and can be applied more effectively to maintenance support effectiveness evaluation. It also provides a useful reference for evaluation research in other fields.
1. Introduction
As an important part of equipment maintenance support work, equipment maintenance support effectiveness evaluation has always been a research focus in the field of equipment support. It is mainly used to assess maintenance support strength, find weaknesses in time, and formulate improvement plans, so as to improve maintenance support effectiveness. In recent years, with the rapid development of machine learning, maintenance support effectiveness evaluation based on neural networks has become a research hotspot. Neural networks are adaptive, self-learning, and highly fault-tolerant; they can recognize new samples by learning from database samples and are widely used in prediction and evaluation [1–3]. Some scholars have carried out relevant research on this method. For example, to address the subjectivity and uncertainty in joint operation effectiveness evaluation, Wang et al. [2] proposed a joint operation effectiveness evaluation algorithm based on an adaptive wavelet neural network that combines expert experience with an intelligent algorithm. To solve the effectiveness evaluation problem of complex, multifunctional systems with poor samples, Liu et al. [4] constructed a system effectiveness evaluation model based on a grey RBF neural network according to the three-layer structure of the system effectiveness evaluation index system. To effectively evaluate and predict the combat effectiveness of a surface-to-air missile weapon system, Qiao and Zhao [5] reduced and decorrelated the original data through principal component analysis and then trained a BP neural network with the principal components as input.
These studies based on deep neural networks greatly enrich the methods for equipment maintenance support effectiveness evaluation and have foreseeable research prospects, but one pain point remains: because of long-standing imperfections in the army's data collection systems and nonstandard data collection procedures, current maintenance support data are generally of poor quality and of a relatively single type, so complete and sufficient training samples for neural network input cannot be formed [6, 7]. For this reason, neural network evaluation of maintenance support effectiveness always faces problems such as small-sample overfitting and low generalization ability, which make it difficult to achieve the ideal effect in fields such as data classification and pattern recognition [8, 9]. The insufficiency of maintenance support training samples greatly limits the application of neural networks in maintenance support effectiveness evaluation.
The generative adversarial network (GAN) is a popular unsupervised learning algorithm of recent years. Compared with other generative models, it avoids complex computation and produces samples of better quality. Some scholars have used its data generation capability in their own fields. For example, Yin et al. [10] proposed a smoke feature extraction and detection model combining a deep convolutional GAN and a convolutional neural network to effectively reduce the false alarm rate of smoke detection. Yao et al. [11] put forward a fault diagnosis method based on a GAN and a residual network, which uses the GAN to track the distribution of rail fastener failure data, balancing and expanding the existing data set. Luo and Wang [12] proposed a demosaicing image generation method based on a GAN to reduce noise and artifacts when reconstructing digital camera color images, which effectively eliminated image artifacts. These studies show that GAN performs well on problems such as insufficient data, unbalanced data, and heavy noise in various fields. Therefore, this paper envisages applying the data generation capability of GAN to the field of equipment maintenance support to solve the problem of insufficient samples in neural network evaluation of maintenance support effectiveness. After optimizing GAN and the RBF network separately, this paper combines GAN-based data generation with RBF training, constructs an improved GAN-RBF evaluation model, and compares it with other common models to verify its validity. To facilitate theoretical analysis and practical operation, this paper does not use the more complex deep convolutional GAN and convolutional neural network but chooses the original GAN and RBF network, whose structures are simple and whose training is concise.
The arrangement of this paper is as follows. In Section 2, based on the brief introduction of GAN and RBF, the improved GAN-RBF evaluation model is constructed; the index system, data processing, evaluation criteria, and evaluation steps of maintenance support effectiveness evaluation are introduced in turn. In Section 3, the training results of the improved GAN-RBF model and the other three common models are shown in the form of graphs. In Section 4, by analyzing the accuracy and complexity of four models, and comparing them with similar research, the validity of the improved model is verified. Finally, Section 5 concludes this paper with a discussion of future research extensions.
2. Method
2.1. Main Algorithms
2.1.1. Generative Adversarial Network
GAN is a deep learning model that can automatically define a potential loss function and learn the data distribution of the original sample set. It has become one of the most promising unsupervised learning methods for complex distributions in recent years and has been successfully applied in image generation, image inpainting, 3D object generation, and other fields.
(1) Basic Idea. Based on the idea of game theory, GAN regards the generative model and the discriminative model as the two sides of a game and trains them alternately so that both models are optimized and finally reach a Nash equilibrium: the samples generated by the generator are infinitely close to the real sample distribution, while the discriminator cannot distinguish true from false, so the probability it assigns to any given sample being real is 0.5.
(2) Network Structure. GAN is mainly composed of a generator and a discriminator. The structure is shown in Figure 1.

The generator essentially performs a kind of maximum likelihood estimation: after capturing the target sample distribution, it transforms the original input into target samples of the specified distribution through its parameters. The discriminator is essentially a binary classifier, which determines whether the data produced by the generative model is a real sample.
In practice, fully connected neural networks are generally used as the generator and the discriminator.
(3) Objective Function. The objective function of GAN is as follows:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))], \quad (1)$$

where $G$ is the generator and $D$ is the discriminator; $D(x)$ is the probability that the discriminator judges sample $x$ to be real, which is a real number in the range of 0–1; $G(z)$ is the sample generated by the generator after receiving the random noise $z$; $p_{\mathrm{data}}$ is the real sample distribution and $p_g$ is the generated sample distribution.
Formula (1) is a minimax optimization problem, which is essentially two optimization problems, respectively, corresponding to the minimization of the generative model and the maximization of the discriminative model in the alternating iterative training process [13]. The minimax objective function of GAN integrates the objective function of the generative model and discriminative model and describes the alternating iterative training process, which achieves the perfect unity of mathematical form.
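To make the two subproblems concrete, the sketch below expresses formula (1) as a pair of PyTorch loss functions, one minimized by the discriminator and one by the generator; the helper names and the small stabilizing epsilon are illustrative assumptions, not part of the original formulation.

```python
import torch

def discriminator_loss(d_real, d_fake):
    # Maximize E[log D(x)] + E[log(1 - D(G(z)))] by minimizing its negative.
    # d_real and d_fake are discriminator outputs in (0, 1) for real/generated samples.
    eps = 1e-8
    return -(torch.log(d_real + eps).mean() + torch.log(1.0 - d_fake + eps).mean())

def generator_loss(d_fake):
    # Minimize E[log(1 - D(G(z)))]; the generator wants D(G(z)) to approach 1.
    eps = 1e-8
    return torch.log(1.0 - d_fake + eps).mean()
```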
2.1.2. Radial Basis Function Network
The RBF network is a feedforward neural network with radial basis functions as activation functions. It is data-driven, searches for data centers independently, and learns quickly. It uniformly approximates nonlinear continuous functions and avoids the local minimum problem, and it is widely used in time series analysis, data classification, pattern recognition, and other fields [1, 14].
(1) Basic Idea. By taking a radial basis function (such as the Gaussian function) as the basis of the hidden units to form the hidden layer, the RBF network can map input vectors directly to the hidden space without weighted connections. The hidden layer transforms the low-dimensional input into a high-dimensional space, which makes a problem that is linearly nonseparable in the low-dimensional space linearly separable in the high-dimensional space. The network output is obtained by a linear weighted summation of the hidden unit outputs.
(2) Network Structure. RBF network usually has three layers, including the input layer, hidden layer, and output layer. The structure is shown in Figure 2.

The input layer has $n$ nodes, which are signal source nodes that only transmit the data without transforming the input information; the hidden layer has $m$ nodes, which map the input vector from the low-dimensional space to a high-dimensional space using radial basis functions, so that a problem that is linearly nonseparable in the low-dimensional space becomes linearly separable in the high-dimensional space; the output layer has $l$ nodes, and the network output is obtained by a linear weighted summation of the hidden unit outputs.
(3) Activation Function. The hidden layer neurons of the RBF network generally use the distance between the input vector and central vector (such as Euclidean distance) as the function independent variable and use radial basis function (such as Gaussian function) as activation function.
The activation function of the RBF network based on the Gaussian kernel is as follows:

$$\phi_j(x) = \exp\!\left(-\frac{\lVert x - c_j \rVert^2}{2\sigma_j^2}\right), \quad j = 1, 2, \ldots, m, \quad (2)$$

where $x = (x_1, x_2, \ldots, x_n)$ is the input vector; $c_j$ is the central parameter of the $j$-th basis function, $j = 1, 2, \ldots, m$; $\sigma_j$ is the width parameter of the $j$-th basis function, which plays the role of the variance of the Gaussian function; $\lVert x - c_j \rVert$ is the norm of the vector $x - c_j$, which represents the distance between $x$ and $c_j$.
The output of the RBF network is as follows:

$$y_k = \sum_{j=1}^{m} w_{jk}\, \phi_j(x), \quad k = 1, 2, \ldots, l, \quad (3)$$

where $w_{jk}$ is the weight of the $j$-th basis function for the $k$-th output.
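As a concrete reading of formulas (2) and (3), the NumPy sketch below computes the hidden activations and the network output for a batch of inputs; the array shapes and parameter names are assumptions made for illustration.

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """X: (batch, n) inputs; centers: (m, n); widths: (m,); weights: (m, l)."""
    # Squared Euclidean distance between each input and each hidden-unit center.
    dist_sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (batch, m)
    # Gaussian radial basis activation, formula (2).
    phi = np.exp(-dist_sq / (2.0 * widths[None, :] ** 2))                # (batch, m)
    # Linear weighted summation of hidden outputs, formula (3).
    return phi @ weights                                                  # (batch, l)
```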
2.2. Construction of Improved GAN-RBF Evaluation Model
In this compound model, the maintenance support training samples are amplified by GAN, and the amplified samples are then used as the input of the RBF network. This is not a simple integration of the two networks; relevant optimizations must be added according to the characteristics of maintenance support samples to ensure that GAN is suitable for expanding maintenance support samples and that the RBF network can train on them effectively.
First, the optimization of GAN is discussed. The composition of maintenance support data is complex: divided by functional system, it includes the maintenance support command system, the maintenance equipment support system, and the maintenance equipment transportation and delivery system; divided by operational level, it includes the group army, the integrated brigade, and the integrated battalion; and divided by equipment type, it includes vehicles, ships, aircraft, ordnance, and armor. Such complex categories make GAN-based amplification of maintenance support samples difficult, because it is hard to ensure consistency between the category of a generated sample and the original category, which leads to distortion of the generated samples. Zhu et al. [15] pointed out that the L2 loss function can effectively reduce the distortion of samples generated by the generator caused by the inaccurate gradient of GAN's discriminative model. Therefore, this paper introduces the L2 loss function, defines a new category constraint on the basis of the original adversarial constraint to determine the loss function of GAN synthetically, and ensures that GAN can generate samples of the desired categories without obvious distortion.
Second, the optimization of the RBF network is discussed. The parameter initialization of the RBF network has a great influence on its training. Traditional methods include the self-organizing center selection method and the random selection method, both of which have limitations: the former is computationally complex, while the latter is generally applicable only to representative sample distributions [16]. Neither is suitable for maintenance support samples with large volume and complex categories. Therefore, for the initialization of the center parameters, width parameters, and weights, an optimization method is designed in which the parameter components change at equal intervals: a center parameter $c$, a width parameter $\sigma$, and a weight $w$ are defined; the components of each center parameter increase from small to large with equal spacing, and the spacing is adjusted by the number of neurons in the hidden layer. In this way, weak input information can produce a strong response near a smaller center, different input characteristics are reflected more clearly at different centers, and the center initialization is as reasonable as possible, reflecting the characteristics of the Gaussian kernel.
2.2.1. Amplification of Maintenance Support Samples
The determination of the loss function of GAN is the most important work. Considering the optimization of GAN, two kinds of constraints are determined, which make the loss function of GAN more reasonable.
(1) Adversarial Constraint. The adversarial constraint refers to the following: in the alternating iterative training, the generative model and the discriminative model play games with each other, confront each other, and constrain each other, finally reaching a state of mutual optimization; it embodies the idea of a zero-sum game between the generative model and the discriminative model [17]. The constraint is expressed as the objective function of GAN. According to the objective function, the adversarial constraint is defined as follows [18]:

$$\min_G \max_D\ \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]. \quad (4)$$
According to formula (4), the adversarial loss function of the discriminative model can be easily determined as

$$L_{adv}^{D} = -\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] - \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]. \quad (5)$$
Correspondingly, the adversarial loss function of the generative model is

$$L_{adv}^{G} = \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]. \quad (6)$$
(2) Category Constraint. When the category of a generated sample deviates greatly from the original category, serious distortion occurs in the generated sample, causing convergence failure of the whole training process. The ideal behavior of sample generation is that a sample has the highest probability on the target category and the second highest probability on a similar category [19]. Therefore, the category constraint is defined as follows:

$$L_{cls} = H\big(R(f(x), t),\ f(G(z))\big), \quad (7)$$

where $H(\cdot,\cdot)$ is the cross-entropy function, $f$ is the deep neural network to be attacked, whose output is the class probability vector of a sample, and $R(f(x), t)$ is the class probability vector reordering function of the original sample $x$ and the target class $t$.
According to formula (7), it is easy to determine the class loss function of the generative model as

$$L_{cls}^{G} = \mathbb{E}_{z \sim p_z(z)}\big[H\big(R(f(x), t),\ f(G(z))\big)\big]. \quad (8)$$
There is no class constraint in the discriminative model, so it has no class loss function.
Considering the above two constraints, the final loss functions of the discriminator and the generator are as follows:

$$L^{D} = L_{adv}^{D}, \qquad L^{G} = \lambda_{1} L_{adv}^{G} + \lambda_{2} L_{cls}^{G}, \quad (9)$$

where $\lambda_{1}$ and $\lambda_{2}$ are balance factors that adjust the two constraints.
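A minimal sketch of how the combined generator loss of formula (9) might be assembled in PyTorch is shown below; since the paper does not publish its implementation, the soft-label cross-entropy, the balance factor values, and the way the reordered class probability vector is supplied are all assumptions.

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(pred_logits, soft_target):
    # Cross-entropy H(soft_target, softmax(pred_logits)) for soft label vectors.
    return -(soft_target * F.log_softmax(pred_logits, dim=1)).sum(dim=1).mean()

def generator_total_loss(d_fake, f_logits_fake, reordered_target,
                         lam_adv=1.0, lam_cls=1.0):
    # Formula (9): weighted sum of the adversarial term (6) and the category term (8).
    # lam_adv / lam_cls stand in for the balance factors; their values are assumptions.
    # reordered_target is assumed to be the output of the reordering function R(f(x), t).
    adv = torch.log(1.0 - d_fake + 1e-8).mean()
    cls = soft_cross_entropy(f_logits_fake, reordered_target)
    return lam_adv * adv + lam_cls * cls
```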
When the loss function is determined, GAN training starts. The iteration number is set as $T$, the original sample set and the generated sample set are $P_r$ and $P_g$, respectively, the sample size is $N$, and the sample size taken out in a single step (the minibatch size) is $k$. Usually, the generator parameters are fixed first [20], $k$ samples are taken from $P_r$ and $P_g$, respectively, and the discriminator parameter $\theta_d$ is updated by stochastic gradient ascent:

$$\theta_d \leftarrow \theta_d + \eta\, \nabla_{\theta_d} \frac{1}{k} \sum_{i=1}^{k} \left[\log D\!\left(x^{(i)}\right) + \log\!\left(1 - D\!\left(\tilde{x}^{(i)}\right)\right)\right], \quad (10)$$

where $x^{(i)}$ and $\tilde{x}^{(i)}$ are real and generated samples, respectively, and $\eta$ is the learning rate.
Then, the discriminator parameters are fixed, samples are taken from the sample set, and stochastic gradient descent is used to update the generator parameter $\theta_g$:

$$\theta_g \leftarrow \theta_g - \eta\, \nabla_{\theta_g} \frac{1}{k} \sum_{i=1}^{k} L^{G}\!\left(G\!\left(z^{(i)}\right)\right), \quad (11)$$

where $L^{G}$ is the generator loss defined in formula (9).
This process is repeated until the specified iteration number $T$ is reached, and the training ends [21]. The amplified sample set is obtained, whose sample size is $N'$.
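The alternating update scheme of formulas (10) and (11) can be sketched as follows, reusing the loss helpers given after formula (1); the module and optimizer choices, the learning rate, and the batch handling are placeholders rather than the exact experimental configuration, and in the improved model the generator step would use the combined loss of formula (9).

```python
import torch

def train_gan(G, D, real_loader, noise_dim, epochs, k, lr=2e-4, device="cpu"):
    opt_d = torch.optim.SGD(D.parameters(), lr=lr)   # ascent realized by minimizing -V
    opt_g = torch.optim.SGD(G.parameters(), lr=lr)
    for _ in range(epochs):
        for real in real_loader:
            real = real.to(device)
            z = torch.randn(k, noise_dim, device=device)
            # 1) Fix G, update the discriminator parameters (formula (10)).
            d_loss = discriminator_loss(D(real), D(G(z).detach()))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # 2) Fix D, update the generator parameters (formula (11)).
            z = torch.randn(k, noise_dim, device=device)
            g_loss = generator_loss(D(G(z)))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G
```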
2.2.2. Training of Evaluation Model
This paper uses the RBF network model based on the Gaussian kernel (Figure 2) to train samples. The amplified sample set generated by GAN is used as the input; the input vector is $x = (x_1, x_2, \ldots, x_n)$, the output is $y = (y_1, y_2, \ldots, y_l)$, and the expected output is $\hat{y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_l)$.
Firstly, the three parameters of the RBF network are initialized by the optimization method, calculated as follows:

$$c_{ji} = \min_i + \frac{\max_i - \min_i}{2m}(2j - 1), \quad j = 1, 2, \ldots, m, \quad (12)$$

where $\min_i$ and $\max_i$ are the minimum and maximum values of the samples received by the $i$-th neuron in the input layer, respectively, and $m$ is the total number of neurons in the hidden layer;

$$\sigma_{ji} = d_f \sqrt{\frac{1}{P}\sum_{p=1}^{P}\left(x_i^{p} - c_{ji}\right)^2}, \quad (13)$$

where $d_f$ is the width adjustment coefficient, whose value is less than 1, and $x_i^{p} - c_{ji}$ is the distance between the $p$-th sample received by the $i$-th neuron in the input layer and the corresponding center component;

$$w_{jk} = \min_k + \frac{\max_k - \min_k}{2m}(2j - 1), \quad k = 1, 2, \ldots, l, \quad (14)$$

where $\min_k$ and $\max_k$ are the minimum and maximum values of all expected outputs at the $k$-th output neuron and $l$ is the total number of neurons in the output layer.
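The equidistant initialization idea can be sketched in NumPy as below; it follows the reconstruction of formulas (12)–(14) above, so the exact scaling of the width term and the default value of the adjustment coefficient d_f should be treated as assumptions.

```python
import numpy as np

def init_rbf_params(X, Y, m, d_f=0.5):
    """X: (P, n) training inputs; Y: (P, l) expected outputs; m: hidden neurons."""
    j = np.arange(1, m + 1)[:, None]                         # hidden-unit index 1..m
    x_min, x_max = X.min(axis=0), X.max(axis=0)              # per input dimension
    # Formula (12): center components spaced equidistantly over each input range.
    centers = x_min + (x_max - x_min) / (2 * m) * (2 * j - 1)            # (m, n)
    # Formula (13): widths scaled from sample-to-center distances by d_f < 1.
    dist = np.sqrt(((X[None, :, :] - centers[:, None, :]) ** 2).mean(axis=1))  # (m, n)
    widths = d_f * dist.mean(axis=1)                                      # (m,)
    # Formula (14): weights spread equidistantly over each expected-output range.
    y_min, y_max = Y.min(axis=0), Y.max(axis=0)
    weights = y_min + (y_max - y_min) / (2 * m) * (2 * j - 1)             # (m, l)
    return centers, widths, weights
```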
When the three parameters are initialized, the neural network begins to train, and the stochastic gradient descent method is used to update it iteratively [22]. The iterative calculation is as follows:

$$
\begin{aligned}
c_{ji}(t+1) &= c_{ji}(t) - \eta \frac{\partial E}{\partial c_{ji}(t)} + \alpha\left[c_{ji}(t) - c_{ji}(t-1)\right],\\
\sigma_{ji}(t+1) &= \sigma_{ji}(t) - \eta \frac{\partial E}{\partial \sigma_{ji}(t)} + \alpha\left[\sigma_{ji}(t) - \sigma_{ji}(t-1)\right],\\
w_{jk}(t+1) &= w_{jk}(t) - \eta \frac{\partial E}{\partial w_{jk}(t)} + \alpha\left[w_{jk}(t) - w_{jk}(t-1)\right],
\end{aligned} \quad (15)
$$

where $c_{ji}(t)$ is the central component of the $j$-th hidden layer neuron for the $i$-th input neuron in the $t$-th iteration, $\sigma_{ji}(t)$ is the width component corresponding to that central component, $w_{jk}(t)$ is the adjustment weight of the $j$-th hidden layer neuron for the $k$-th output neuron in the $t$-th iteration, $\eta$ is the learning factor, $\alpha$ is the momentum factor, and $E$ is the loss defined in formula (16).
The loss function is defined as

$$E = \frac{1}{2} \sum_{p=1}^{N'} \left\lVert y_p - \hat{y}_p \right\rVert^2, \quad (16)$$

where $y_p$ is the actual output of the neural network for the $p$-th sample, $\hat{y}_p$ is the expected output of the corresponding sample, and $N'$ is the total number of amplified samples.
Iteratively update center parameter, width parameter, and weight, and then calculate the loss. When the loss is within the acceptable range, stop training [23].
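The following sketch shows one way the momentum-based updates of formula (15) and the stopping rule could be realized; it relies on PyTorch autograd and its SGD optimizer with momentum instead of hand-derived gradients, and the hyperparameter defaults are placeholders.

```python
import torch

def train_rbf(X, Y, centers, widths, weights, eta=0.1, alpha=0.9,
              max_iter=100, tol=1e-3):
    # Wrap the initialized parameters (e.g., from init_rbf_params) as trainable tensors.
    params = [torch.tensor(p, dtype=torch.float32, requires_grad=True)
              for p in (centers, widths, weights)]
    c, s, w = params
    X = torch.as_tensor(X, dtype=torch.float32)
    Y = torch.as_tensor(Y, dtype=torch.float32)
    # SGD with momentum plays the role of the learning factor / momentum factor update rule.
    opt = torch.optim.SGD(params, lr=eta, momentum=alpha)
    for _ in range(max_iter):
        phi = torch.exp(-((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
                        / (2 * s[None, :] ** 2))            # formula (2)
        loss = 0.5 * ((phi @ w - Y) ** 2).sum()              # formula (16)
        if loss.item() < tol:                                # stop when loss is acceptable
            break
        opt.zero_grad(); loss.backward(); opt.step()
    return c.detach(), s.detach(), w.detach()
```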
Summarize the operation process of the GAN-RBF evaluation model, as shown in Figure 3.

2.3. Index System of Maintenance Support Effectiveness
On the basis of field investigation and expert opinions [1, 2], combined with the current operational development trend and considering the main operational factors of the equipment maintenance support system, the evaluation index system of equipment maintenance support effectiveness is established, as shown in Table 1.
2.4. Data Processing
The types and dimensions of maintenance support effectiveness evaluation indexes differ, so it is difficult to balance the importance of input components when they are fed into neural networks, which not only reduces the training speed of the GAN and RBF networks but also affects the accuracy of sample generation and weight iteration [24]. Therefore, before sample training, the data must be normalized into the standardized interval [0, 1]. Maintenance support effectiveness indexes are divided into benefit-type indexes and cost-type indexes, for which bigger is better and smaller is better, respectively [25]. This paper uses the range method to normalize these two types of indexes.

Benefit-index:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}. \quad (17)$$

Cost-index:

$$x' = \frac{x_{\max} - x}{x_{\max} - x_{\min}}, \quad (18)$$

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the corresponding index.
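The range normalization of formulas (17) and (18) can be written compactly as below; which columns are treated as cost-type indexes is an assumption to be supplied by the user.

```python
import numpy as np

def range_normalize(X, cost_cols=()):
    """Min-max normalize each column of X to [0, 1]; invert the cost-type columns."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    norm = (X - x_min) / (x_max - x_min)          # benefit-index, formula (17)
    for c in cost_cols:
        norm[:, c] = 1.0 - norm[:, c]             # cost-index, formula (18)
    return norm
```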
2.5. Evaluation Step
(1) Construct the evaluation index system of equipment maintenance support effectiveness.
(2) Select original maintenance support data and random noise for GAN training, and normalize them.
(3) Send the original data and random noise into GAN for training to generate amplified samples.
(4) Train the RBF network with the amplified samples and obtain the network output, that is, the predicted value of maintenance support effectiveness.
(5) Calculate the accuracy and complexity of the evaluation model, and compare it with other evaluation methods to verify its effectiveness.
2.6. Model Evaluation Criteria
The maintenance support effectiveness evaluation is generally carried out by the test base, aiming at accurately estimating the maintenance support effectiveness of a certain support unit, so as to help improve the maintenance support strength [1]. Obviously, accuracy is the most important index of maintenance support effectiveness evaluation, so this paper takes accuracy as the primary evaluation criterion for models. Secondly, the complexity of the general composite model is higher than that of the previous single model. If the complexity of the improved GAN-RBF model is too high, even if its accuracy is high, its performance is still low, so the model complexity should be taken as an auxiliary reference index. Because the test equipment in the test base has sufficient memory and the test time is relatively abundant, the evaluator is tolerant of model complexity. If the complexity of the improved GAN-RBF model is not much different from that of the common model, it is completely acceptable.
2.6.1. Accuracy
Firstly, when the model output is obtained, a set of real effectiveness values is used as the baseline for comparison, and the fitting effect of the model-predicted values to the real values is observed from a macro perspective.
Then, the mean square error (MSE) of the models is calculated, and the accuracy of each model is compared from a micro perspective. MSE is the expected value of the squared difference between the predicted value and the real value, which measures the degree of data deviation and reflects the actual error. The smaller the value is, the higher the model accuracy is [26]. MSE shows the advantages and disadvantages of the four models more clearly. The calculation formula is as follows:

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left(y_i - \hat{y}_i\right)^2, \quad (19)$$

where $y_i$ is the model-predicted value, $\hat{y}_i$ is the corresponding real value, and $N$ is the number of test samples.
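For reference, formula (19) amounts to the following one-line computation.

```python
import numpy as np

def mse(y_pred, y_true):
    # Formula (19): mean squared deviation between predicted and real effectiveness values.
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)
```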
2.6.2. Complexity
Algorithm complexity is an index that measures the execution of a program and consists of time complexity and space complexity. Time complexity determines the training/prediction time of a model, while space complexity determines the number of model parameters. Because algorithm design pursues efficiency, and growing computer storage capacity can generally meet the space needs of algorithms, algorithm complexity is usually dominated by time complexity [27]. There are two methods to calculate time complexity: the pre-analysis method and the post-statistics method. The former uses big-O notation and is suitable for predicting complexity before the algorithm is implemented. The four models in this paper all have corresponding programs, so it is more convenient and quicker to estimate algorithm complexity with the post-statistics method, that is, by directly measuring running time.
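A minimal example of the post-statistics method is to time the training and prediction calls directly with a wall-clock timer; the routine being timed is a placeholder.

```python
import time

def timed(fn, *args, **kwargs):
    # Post-statistics method: measure wall-clock running time of a model routine.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Example (hypothetical call): _, train_seconds = timed(train_rbf, X, Y, c, s, w)
```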
3. Result
The hardware and software configuration of this experiment is as follows: the operating system is Windows 7, the CPU is an Intel(R) Core(TM) i7-4790K @ 4.00 GHz, the memory is 16 GB, the graphics card is an NVIDIA GeForce RTX 2080 Ti, and the programming environment is Python 3.7.3 with PyTorch.
Maintenance support data are usually collected at the lowest level of indicators [28], and the data in this paper are collected at the second-level indexes. According to the 14 second-level indexes of the maintenance support effectiveness index system (Table 1), 28 groups comprising 392 original sample values were selected from the database, and 100 groups comprising 1400 amplified sample values were generated by GAN. The original sample set and the amplified sample set are shown in Table 2. After these samples are input, the RBF network outputs the values of the five first-level indexes, which are the predicted values of the maintenance support effectiveness components. Compared with directly outputting the final effectiveness value, this approach enriches the analysis at the level of effectiveness components. If the final effectiveness value is wanted, it can be obtained by changing the activation function and the number of units in the output layer of the RBF network.
In order to verify the validity of the improved GAN-RBF model, this paper compares it with three common neural network evaluation models: the Elman neural network, the BP neural network, and the ordinary RBF network. To ensure a fair comparison, the same general network parameters are set for the four models. According to the 14 second-level indexes and 5 effectiveness components, 14 nodes are set for the input layer and 5 nodes for the output layer. According to experience, the hidden layer is set to roughly 75% of the number of input layer nodes, that is, 10 nodes. The maximum number of iterations is 100, the learning rate is 0.1, and the allowable error is 0.001. According to the requirement of the test base for maintenance support effectiveness accuracy, the target value of the iterative loss is set to $10^{-15}$.
The original sample set is used to train four models. For the improved GAN-RBF model, the input of the RBF network is actually the amplified sample set generated by GAN. Test results are as follows.
3.1. Iteration Curve of Models
The iterative process of four models is shown in Figure 4.

3.2. Predicted Value and MSE of Models
Twenty-eight groups were randomly selected from the amplified sample set as the test set to test the four trained models. The output of evaluation models is the predicted value of five effectiveness components E1, E2, E3, E4, and E5. Taking a group of real effectiveness sets as a baseline, we observe the fitting effect of predicted values to real values, as shown in Figure 5.

According to formula (19), the single group MSE and the mean value of 28 MSEs are calculated. MSE mean values of four models are shown in Figure 6.

In order to observe the fluctuations of four model MSEs, a single group of MSE was made into a line chart, as shown in Figure 7.

3.3. Running Time of Models
Through real-time calculation, the training/prediction time of the four models is obtained, as shown in Table 3.
4. Discussion
4.1. Model Iterative Analysis
Figure 4 shows that, in the initial stage of iteration, the convergence speed of the improved GAN-RBF model is faster than that of the BP model but slower than that of the Elman model and the RBF model. However, after about 40 iterations, the BP model and the Elman model have fallen into local optima, with loss function values of about $10^{3}$ and $10^{-5}$, respectively, which do not reach the target loss value. At this point, the convergence speed of the RBF model has slowed markedly to about that of the improved GAN-RBF model; it is surpassed by the improved GAN-RBF model after about 70 iterations and never reaches the target loss value in subsequent iterations. The improved GAN-RBF model always keeps a stable and fast convergence speed, does not fall into a local optimum, and achieves the ideal loss value.
From the perspective of model iteration, the improved GAN-RBF model not only maintains a stable and fast convergence speed but also achieves the minimum loss value, with high prediction accuracy and no overfitting problem. Although adding GAN increases the complexity of the improved GAN-RBF model, and its initial convergence speed is slightly slower than that of the Elman model and the RBF model, it always maintains a stable and fast convergence speed and finally achieves better convergence accuracy than the other models.
4.2. Model Accuracy Analysis
Firstly, the output fitting effect of each model is discussed. Observing the positions of the predicted effectiveness component values relative to the real values in Figure 5, the improved GAN-RBF model is the closest among the four models, and its prediction curve clearly agrees best with the real effectiveness values. From the perspective of output fitting, the improved GAN-RBF model is superior to the three traditional neural network evaluation models. In addition, the E1 and E2 predicted values of the four models are more tightly clustered and their differences are smaller, so it can be concluded that the prediction effects for E1 and E2 are generally better than those for E3, E4, and E5. As the two most basic and important maintenance support effectiveness components, E1 (mobility support effectiveness) and E2 (equipment repair effectiveness) have more complete data collection and processing equipment and procedures than the other effectiveness components, and their sample quality is relatively high [1]; therefore, their prediction effects are relatively good. This shows, from another aspect, the importance of sample quality for the neural network evaluation model.
Secondly, the MSE of each model is discussed. Figure 6 shows the MSE values of the Elman, BP, RBF, and improved GAN-RBF models. The MSE of the improved GAN-RBF model is the smallest, so its prediction accuracy is undoubtedly the best. The height difference of the four bars in Figure 6 is obvious: the bar height of the improved GAN-RBF model is approximately 1/2 that of the RBF model, 1/3 that of the Elman model, and 1/5 that of the BP model, and its prediction accuracy is far superior to the other three evaluation models. It can be roughly estimated from Figure 7 that the volatility order of the single-group MSE is BP > Elman > improved GAN-RBF ≥ RBF, and after excluding some abnormal points, the volatility of the improved GAN-RBF model can be regarded as optimal together with that of the RBF model.
It can be seen that the prediction accuracy of the improved GAN-RBF model is the best whether it is evaluated from the fitting effect of model output or the size and fluctuation of model MSE.
4.3. Model Complexity Analysis
When the accuracy of the improved GAN-RBF model is verified to be optimal, it is still necessary to analyze its complexity. If its complexity is not too high and remains within what maintenance support effectiveness evaluators will accept, the performance of the improved model can be considered genuinely good; if its complexity is too high, its performance cannot be considered good.
Table 3 shows that the training time of the improved GAN-RBF model is 142 s, which is better than the BP model and the Elman model and worse than the RBF model. Its prediction time is 2.01 s, which is the best among the four models. After adding GAN to RBF, the complexity of this composite model has increased to some extent and the training time has grown. However, this increase is relatively small and entirely within the acceptable range. On the other hand, through the optimization of GAN and RBF, the prediction efficiency of this composite model is improved: it is 0.36 s faster than the RBF model, the fastest of the other three models. Under the effect of the optimized design, the complexity of the improved GAN-RBF model has no negative effect, and the model is even more efficient in the prediction scenario.
On the whole, among the four evaluation models, the improved GAN-RBF model ranks first in the most important index, accuracy, and its model complexity is reasonable, so it achieves the ideal evaluation effect. Li et al. [24] also used GAN to amplify samples for a deep neural network and applied it to weapon system effectiveness evaluation. Apart from the different research objects, the MSE of the improved GAN-RBF model in this paper is smaller from a numerical point of view, that is, its accuracy is higher. The above discussion illustrates the following: (1) using amplified samples generated by GAN to train the RBF network can alleviate the poor generalization ability caused by small training samples and effectively improve the training effect; combining GAN with a general neural network is a feasible way to address small sample sizes in effectiveness evaluation or other research. (2) The optimization of the two key points, the determination of the GAN loss function and the initialization of the RBF parameters, is also very effective. The category constraint based on the category probability vector reordering function avoids the collapse of generated sample categories and enhances sample quality. The parameter initialization method based on equidistant variation of parameter components is simple to calculate, strengthens the response to correct feature information, and reduces the risk of training overfitting.
5. Conclusion
Small-sample evaluation of equipment maintenance support is a difficult problem in the equipment support field. The neural network evaluation model based on amplified samples proposed in this paper exploratively combines the separately optimized GAN and RBF network to evaluate maintenance support effectiveness. The experimental results show that, compared with traditional neural network evaluation methods, the improved GAN-RBF evaluation model has higher evaluation accuracy and can evaluate the effectiveness value more accurately. This method not only alleviates the problem of insufficient maintenance support training samples but also provides a useful reference for evaluation research in other fields. However, there are still some deficiencies in this paper. For example, text data occupies a certain proportion of maintenance support data, and because GAN cannot directly backpropagate through discrete sequence samples, the quality of the generated text data is not high, which affects the training effect of the RBF network. Therefore, how to improve the generation quality of maintenance support text data will be the focus of the next step.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Acknowledgments
The authors thank Professor Xingxin Li for his help in collecting experimental data. This work was supported in part by the National Defense Pre-Research Fund (9140A27010215JB34422).