Abstract

In this paper, a recognition model based on a backpropagation (BP) neural network optimised by an improved hybrid particle swarm optimisation (HPSO) algorithm is proposed to improve the efficiency of radar working state recognition. First, the HPSO algorithm is improved by adding deceleration factors to the nonlinearly decreasing inertia weight and by adopting asynchronous learning factors. Then, the initial weights and thresholds of the BP neural network are optimised to overcome its slow convergence rate and tendency to fall into local optima. In the simulation experiment, improved HPSO-BP recognition models were established on datasets for three radar types and compared with other recognition models. The results reveal that the improved HPSO-BP recognition model has better prediction accuracy and convergence rate. The recognition accuracy for the different radar types exceeded 97%, which demonstrates the feasibility and generalisation of the model for radar working state recognition.

1. Introduction

Radar working state recognition builds models of the reconnaissance pulse signal characteristics and estimates the internal working state of a radar using prior knowledge. The rapid and accurate recognition of the radar working state is important for determining the radar threat level, evaluating the radar interference effect, and realising interference decisions.

When a radar is in different working states, its signal parameter characteristics exhibit obvious changes; thus, radar working state recognition can be formulated as a pattern-classification problem. The existing research methods can be summarised as follows: methods based on statistical decisions, methods based on fuzzy decisions, methods based on syntactic structures, and methods based on artificial intelligence [1]. Various studies [2, 3] have solved the radar working pattern matching problem based on the Dempster–Shafer (D–S) evidence theory fusion method. However, for multifunctional radars with complex signals, the computational cost of information fusion grows exponentially. Other studies [4, 5] have introduced a recognition algorithm based on the fuzzy function, which objectively reflects the samples’ real attributes, but the selection of the ambiguity function lacked a theoretical basis. Some studies [6–8] have proposed a syntactic model for accurately extracting radar words from an intercepted radar pulse train for multifunctional radar state recognition. However, these studies did not consider the influence of unstable factors, such as inaccurate prior information and poor signal data, which resulted in poor fault tolerance and generalisation ability. Thus, improving the efficiency of radar working state recognition requires further investigation.

The backpropagation (BP) neural network has been widely used [9, 10] owing to its good self-learning and adaptive abilities and has been applied to radar working state recognition. Some studies [11, 12] have used neural networks to establish the correspondence between the signal characteristic parameters and the radar working state, forming a recognition database with which unknown working states are then identified. However, the BP neural network places high requirements on dataset quality and is sensitive to the initial network weights and thresholds of the gradient descent algorithm. Moreover, the BP network can easily fall into local optima and has a slow convergence speed. Considering the abovementioned problems, related studies [13, 14] have used swarm intelligence optimisation algorithms to optimise the BP neural network parameters. Since Kennedy and Eberhart [15] proposed particle swarm optimisation (PSO), inspired by the group movement of bird flocks during predation, numerous studies have analysed the behaviour of different biological populations and proposed various metaheuristics inspired by them. For example, Mirjalili [16] proposed the moth search algorithm (MSA) based on the way moths spiral around a light source at night. Wang et al. [17] proposed monarch butterfly optimisation (MBO), inspired by the seasonal migration behaviour of monarch butterflies. Heidari et al. [18] proposed Harris hawks optimisation (HHO) based on the surprise-pounce predation mechanism of Harris hawks. Li et al. [19] proposed the slime mould algorithm (SMA) based on the behaviour and morphological changes of slime mould during spreading and foraging. The PSO method finds the optimal solution through collaboration and information sharing between the individuals in a group and has the advantages of simple principles, few parameters, and strong global search capability. Therefore, it has been widely used in function optimisation [20], neural network training [21], fuzzy control systems [22], and other fields, and the algorithm is considered relatively mature. However, PSO may converge prematurely. Like other swarm intelligence optimisation algorithms, PSO suffers from slow convergence, low optimisation accuracy, and a tendency to fall into local optima [23] when solving complex optimisation problems.

Considering the problems of the traditional PSO algorithm, this paper proposes an improved HPSO algorithm based on the following three aspects:
(1) Crossover and mutation operations from the genetic algorithm (GA) are introduced to update the particles; the hybrid population information enhances the population diversity, which improves the algorithm’s convergence speed and accuracy.
(2) An improved nonlinearly decreasing inertia weight is used to balance the exploitation and exploration of the algorithm.
(3) An asynchronous learning factor update strategy is used to achieve stronger global search capability and faster convergence.

This study considered the multifunctional phased-array radar to investigate the quick and accurate identification of the radar’s working state. A radar working state recognition model based on the BP neural network optimised by the improved hybrid PSO (HPSO-BP) is proposed, in which the global optimisation ability of the improved HPSO is used to tune the structural parameters of the BP network and thereby improve its recognition ability. Three types of radars were selected to construct the working state recognition models, and the simulation results were compared with those of the BP, GA-BP, and PSO-BP models to verify the accuracy, timeliness, and generalisation of the improved algorithm.

The rest of this paper is organised as follows. A detailed introduction to the HPSO algorithm and a description of the relevant optimisation technique are presented in Section 2. The radar working state recognition model is presented in Section 3. The simulation experiment and analysis of the results are presented in Section 4. Finally, the conclusions drawn from this study are discussed in Section 5.

2. Improved HPSO Algorithm

2.1. HPSO Algorithm

When the population converges, the similarity of the particles increases; therefore, the traditional PSO algorithm cannot easily jump out of a locally optimal solution. The HPSO algorithm abandons the method wherein the traditional PSO algorithm updates the particles by tracking the extrema alone and instead borrows the chromosomal crossover and mutation operations of the GA. By combining the GA’s global optimisation characteristics, the HPSO algorithm searches for the optimal solution through particle swarm crossover and mutation. The steps of the algorithm are summarised as follows:

Step 1. Initialise the particle velocities, positions, and parameters, and set the population size $N$ and the maximum number of iterations $T$.

Step 2. Calculate the fitness value, individual extremum, and group extremum.

Step 3. Equations (1) and (2) are used to update the particle velocity and position, respectively:

$$v_i^{t+1} = \omega v_i^t + c_1 r_1 \left(p_i^t - x_i^t\right) + c_2 r_2 \left(p_g^t - x_i^t\right), \quad (1)$$

$$x_i^{t+1} = x_i^t + v_i^{t+1}, \quad (2)$$

where $r_1$ and $r_2$ are random numbers between [0, 1], $t$ is the current iteration number, $v_i$ represents the particle velocity, $p_i$ represents the individual extreme value, $p_g$ represents the group extreme value, $x_i$ represents the particle position, $\omega$ is the inertia factor, $c_1$ is the individual learning factor, and $c_2$ is the group learning factor.

Step 4. Equations (3)–(5) are used to perform crossover and mutation operations on particles $x_i$ and $x_j$:

$$x_i' = a x_i + (1 - a)x_j, \quad (3)$$

$$x_j' = a x_j + (1 - a)x_i, \quad (4)$$

$$x'' = a_{\min} + r\left(a_{\max} - a_{\min}\right), \quad (5)$$

where $a$ and $r$ are random numbers in [0, 1], the crossover in equations (3) and (4) is performed with crossover probability $p_c$, $x''$ is the new particle after mutation, and $a_{\max}$ and $a_{\min}$ are the upper and lower bounds of the particle values, respectively.

Step 5. Update the fitness values, individual extrema, and the group extremum.

Step 6 (iteration termination). When the maximum number of iterations is reached, the optimal value is output. Otherwise, Steps 3–5 are repeated until the maximum number is reached.

In the HPSO algorithm, the selection of parameters, such as the inertia weight and learning factor, affects the optimisation ability of the algorithm. Therefore, these parameters must be improved.

2.2. Improvement of Inertia Weight

The inertia weight $\omega$ reflects a particle’s ability to inherit its previous velocity. A larger $\omega$ improves the global search, whereas a smaller $\omega$ improves the local search. In the traditional PSO algorithm, a fixed $\omega$ limits the algorithm’s convergence, and a linearly decreasing $\omega$ causes the algorithm to fall into local optima and requires many iterations. A previous study [24] proposed a nonlinearly decreasing $\omega$ to improve the PSO algorithm’s search efficiency. However, in the early iterations of the algorithm, this $\omega$ decreases too quickly, which results in local convergence instead of the globally optimal solution. Accordingly, this study added two deceleration factors to $\omega$ so that it stably maintains a large value in the early iterations for global optimisation and rapidly decreases to a small value in the late iterations for convergence. Therefore, the convergence ability is improved and the globally optimal solution is ensured. The improved nonlinearly decreasing inertia weight is expressed as follows:

$$\omega(t) = \omega_{\mathrm{end}} + \left(\omega_{\mathrm{start}} - \omega_{\mathrm{end}}\right)\left[1 - \left(\frac{t}{T}\right)^{k_1}\right]^{k_2}, \quad (6)$$

where $\omega_{\mathrm{start}}$ is the initial inertia weight, $\omega_{\mathrm{end}}$ is the inertia weight when the iteration reaches the maximum number $T$, $t$ is the current iteration number, and $k_1$ and $k_2$ are the deceleration factors.

2.3. Improvement of Learning Factors

The learning factors $c_1$ and $c_2$ determine the maximum step length of a particle’s flight toward the optimum of the individual or the group. A larger $c_1$ causes the particles to search within their own neighbourhoods, whereas a larger $c_2$ causes the particles to search within the group’s scope. To ensure that the algorithm maintains an effective balance between the global and local search, this study used asynchronous learning factors, as expressed in equation (7). In the early stage of the search, $c_1$ is larger and $c_2$ is smaller, which improves the self-learning ability of the HPSO. In the later stage of the search, $c_1$ is smaller and $c_2$ is larger, which makes the HPSO algorithm quickly approach the globally optimal solution:

$$c_1(t) = c_{1,\mathrm{start}} + \left(c_{1,\mathrm{end}} - c_{1,\mathrm{start}}\right)\frac{t}{T}, \qquad c_2(t) = c_{2,\mathrm{start}} + \left(c_{2,\mathrm{end}} - c_{2,\mathrm{start}}\right)\frac{t}{T}, \quad (7)$$

where $c_{1,\mathrm{start}}$ and $c_{2,\mathrm{start}}$ are the initial values of the learning factors, $c_{1,\mathrm{end}}$ and $c_{2,\mathrm{end}}$ are their final values, $t$ is the current number of iterations, and $T$ is the maximum number of iterations.

3. Recognition of Radar Working State

3.1. Structure of Radar Working State Recognition Model

This study combined the three-layer BP neural network with the improved HPSO algorithm. The structure of the radar working state recognition model based on the improved HPSO-BP is shown in Figure 1.

Generally, the modelling steps of the improved HPSO-BP are as follows:
(1) Extract the characteristic signal parameters: the characteristics of the radar signal intercepted by the reconnaissance plane include the carrier frequency, pulse repetition period, and pulse width. The ranges of the characteristic parameters corresponding to each working state of the radar are different.
(2) Generate and test the dataset: a random number generation method is used to generate the dataset. To reflect the characteristics of the radar signal parameters, the uniformity and independence of the dataset must be tested. After normalisation, the dataset is divided into the training and test sets.
(3) Train the improved HPSO-BP neural network model: after building the neural network model, the characteristic parameters of the training set are used as the model’s input, and the working state of the radar is used as the output to realise supervised learning. Then, the model performance is tested using the test set.
(4) Radar working state recognition: the trained neural network model is used as the knowledge base for radar working state recognition. With this model, the radar working state can be assessed, and the radar’s threat level can be estimated in real time.

3.2. Recognition Procedure Based on Improved HPSO-BP

To prevent the BP neural network from falling into local minima and improve the prediction accuracy and convergence rate of the network, this study used the global search capability of the improved HPSO algorithm to find the network’s optimal initial weight and threshold such that the BP neural network can achieve better recognition performance. The algorithm’s step flow is illustrated in Figure 2.

The specific steps of the algorithm are as follows:

Step 1. During data preprocessing, the dataset is generated using the random number generation method. After verifying that the dataset has uniform distribution characteristics, the input parameters are normalised to the interval [0, 1]. The dataset is divided into a training set and a test set.

Step 2. The structure of the BP network and the PSO parameters are determined. The structure of the BP network is determined by the dimension of the input feature vector and the expected output dimension, as expressed by equation (8). The BP network structure in turn determines the particle dimension, as expressed by equation (9):

$$m = \sqrt{n + l} + a, \quad (8)$$

$$D = nm + ml + m + l, \quad (9)$$

where $n$ is the number of nodes in the input layer, $l$ is the number of nodes in the output layer, $m$ is the number of nodes in the hidden layer, $a$ is an adjustment constant between 1 and 10, and $D$ is the particle dimension.

Step 3. The network training error is set as the fitness value of the particle:

$$F = \frac{1}{k}\sum_{i=1}^{k}\left(y_i - \hat{y}_i\right)^2, \quad (10)$$

where $y_i$ is the expected output of the network, $\hat{y}_i$ is the actual output, and $k$ is the set size. As the fitness value decreases, the particle performance improves.

Step 4. The particle positions are updated according to equations (1)–(7), and the particle with the smallest error in each generation is taken as the current optimal particle.

Step 5. When the maximum number of particle swarm iterations is reached, the optimal particle position is output and used as the network’s initial weights and thresholds. Otherwise, Steps 4 and 5 are repeated.

Step 6. The neural network is trained until the training target error is achieved, and the result on the test set is predicted.

Combined with the implementation steps of the improved HPSO-BP algorithm, the specific pseudocode is summarised in Algorithm 1.

(1) Begin
(2) For (i = 1 to N) // N is the particle population size
(3)   Initialise the velocity v_i and position x_i of particle i;
(4)   Calculate the fitness value f(x_i) of particle i and set pBest_i = x_i; // pBest_i is the individual extreme value
(5) End for
(6) gBest = argmin_i f(pBest_i); // gBest is the group extreme value
(7) For (t = 1 to T) // T is the maximum number of iterations
(8)   Update ω, c1, and c2 according to equations (6) and (7);
(9)   For (i = 1 to N)
(10)    Update v_i and x_i according to equations (1) and (2);
(11)    Apply the crossover and mutation operators to particle i according to equations (3)–(5);
(12)    Calculate the fitness value f(x_i) of particle i;
(13)    If (f(x_i) < f(pBest_i))
(14)      pBest_i = x_i;
(15)    End if
(16)    If (f(x_i) < f(gBest))
(17)      gBest = x_i;
(18)    End if
(19)  End for
(20) End for
(21) Determine the optimal weights and thresholds of the BP neural network according to gBest;
(22) While (the training target error is not reached)
(23)   Train the BP neural network;
(24) End while
(25) Predict the results on the test set;
(26) End

According to the pseudocode of the algorithm, its time complexity is analysed step by step as follows:
(1) In Steps 2–5, the population is randomly initialised and the fitness is calculated. The population size is $N$ and the solution dimension is $D$; hence, the time complexity is $O(N \cdot D)$.
(2) Steps 7–20 contain two loop levels, which execute $T$ and $N$ times, respectively. Inside the loop, the time complexity is mainly determined by Steps 10–12. In Step 10, updating a particle’s $D$-dimensional velocity and position vectors takes $O(D)$. In Step 11, the worst case is that all individuals participate in crossover and mutation, which takes $O(D)$ per particle. In Step 12, the fitness calculation of an individual requires a forward pass over the $k$ training samples and takes $O(k \cdot D)$. The worst-case complexity of Steps 10–12 is therefore $O(2D + kD)$, which is of magnitude $O(k \cdot D)$. Consequently, the total time complexity of Steps 7–20 is $O(T \cdot N \cdot k \cdot D)$.
(3) In Steps 22–24, according to the pseudocode analysis of the BP algorithm in the literature [25], the training time complexity of the BP neural network is $O(k \cdot D)$, where $k$ is the number of samples.

Therefore, the time complexity of the entire optimisation algorithm is $O(N \cdot D + T \cdot N \cdot k \cdot D + k \cdot D)$, which is of magnitude $O(T \cdot N \cdot k \cdot D)$. Hence, the time complexity of the improved HPSO-BP algorithm is related to the total number of particles, the number of training samples, the dimension of the solution (i.e., the BP network structure), and the number of iterations of the algorithm, and it is mainly affected by the total number of particles.

4. Simulation Experiment and Analysis of Results

This experiment considered an airborne phased-array radar in air-to-air combat mode [26] as an example and used the improved HPSO-BP neural network model to identify its working state. The results were compared with the simulation results obtained using the BP, GA-BP, and PSO-BP models in terms of recognition accuracy and convergence. Additionally, the working state parameters of radars A and B obtained from the literature [27] were used to verify the model’s generalisation performance.

4.1. Generation and Verification of Datasets

In a complex electromagnetic environment, the radar signal parameters typically exhibit obvious randomness. Owing to the radar’s military-level sensitivity, it is currently difficult to obtain real data for the radar operating parameters. Some studies [26, 27] have randomly generated data based on a table of radar operating parameter ranges, but have not tested the randomness of the data. Uniformity and independence are two important indicators for testing the randomness of data. In this study, classical random number generation methods, namely, the middle-square method, the linear congruential method, and the Mersenne Twister algorithm, were used to generate the dataset. The dataset’s uniformity and independence were tested to select the dataset generation method used in this experiment. Table 1 lists the radar operating parameter ranges obtained from the literature [26].

4.1.1. Uniformity Test

In this study, the chi-square test was used to assess uniformity. At the same significance level, a smaller chi-square value corresponds to a larger significance probability and better uniformity. Taking the pulse repetition period variable of the S1 state in Table 1 as an example, 1000 randomly generated sample data were normalised and divided into ten groups. The uniformity test results are presented in Table 2.

As presented in Table 2, with the significance level set to 0.05, the asymptotic significance probability was greater than 0.05 for every method; hence, there was no significant difference between the actual and theoretical distributions, which indicates that the data generated by each method were uniformly distributed across the intervals.

The chi-square value obtained with the Mersenne Twister algorithm was the smallest, and its asymptotic significance was the largest, which indicates that its uniformity was the best.

4.1.2. Independence Test

In this study, the correlation coefficient test was used to assess independence. A smaller correlation coefficient corresponds to a larger significance probability and stronger independence of the data. The autocorrelation test results obtained using the SPSS software are shown in Figure 3.

As shown in Figure 3, when the degree of freedom was greater than 13, the significance probability of the random numbers generated by the linear congruential algorithm was less than 0.05, which indicates that these random numbers are correlated. In contrast, the significance probability of the random numbers generated by the other two algorithms was greater than 0.05, which indicates that the random numbers generated by the Mersenne Twister and middle-square algorithms have a stable distribution and satisfy the independence condition.

Considering the two abovementioned tests together, the random numbers generated by the Mersenne Twister algorithm are of higher quality than those of the other two algorithms and satisfy both the uniformity and independence conditions; therefore, this method was adopted to generate the dataset.

4.2. Model Evaluation Index

To quantitatively evaluate the prediction accuracy of the model, this study adopted the recognition accuracy rate (ACC) and the coefficient of determination ($R^2$) as the evaluation indices. The convergence rate was evaluated based on the number of convergence steps. A larger ACC indicates a more accurate result. Moreover, $R^2$ lies in the range [0, 1], and the model performance improves as $R^2$ approaches 1. Fewer convergence steps likewise indicate better model performance. The indices are calculated as follows:

$$\mathrm{ACC} = \frac{1}{k}\sum_{i=1}^{N} k_i, \quad (11)$$

$$R^2 = 1 - \frac{\sum_{i=1}^{k}\left(\hat{y}_i - y_i\right)^2}{\sum_{i=1}^{k}\left(y_i - \bar{y}\right)^2}, \quad (12)$$

where $k$ is the number of samples, $N$ is the number of sample categories, $k_i$ is the number of correctly identified samples of class $i$, $\hat{y}_i$ is the actual output of sample $i$, $y_i$ is the ideal output of sample $i$, and $\bar{y}$ is the mean of the ideal outputs.

4.3. Model Parameter Settings
4.3.1. Parameter Settings for the BP Network Structure

There are five characteristic variables in the airborne phased-array radar; therefore, the number of input layer nodes was set to 5. The output results corresponding to the six states are represented by the values 1–6, and the number of output layer nodes was set to 1. The trial-and-error method showed that the error was minimised with eight hidden layer nodes; therefore, the network structure was 5-8-1, as shown in Figure 4. The main parameters of the BP network were set as follows (a minimal sketch of this structure is given after the list):
(i) Activation function of the hidden layer: Tansig
(ii) Activation function of the output layer: Purelin
(iii) Training function of the neural network: Trainlm
(iv) Maximum number of training iterations: 1000
(v) Training target error: 0.001
(vi) Learning rate: 0.01

To avoid oscillation during network training, the selection of sample categories should be balanced, the number of samples in each category should be approximately equal, and samples from different categories should be interleaved in the input. This study adopted the Mersenne Twister algorithm to randomly generate 100 samples for each radar working state, where each sample contains the five abovementioned characteristic variables.

4.3.2. Parameter Settings for the Improved HPSO

(1) Comparison of the Inertia Weight of the Improved PSO. According to the improved inertia weight formula presented in Section 2.2, with the deceleration factor $k_2$ set to 3, the proposed improved nonlinearly decreasing $\omega$, the nonlinearly decreasing $\omega$ of [20], and the linearly decreasing $\omega$ are shown in Figure 5.

From Figure 5, the following conclusions can be drawn:
(i) The linearly decreasing $\omega$ can easily fall into local optima.
(ii) In the early phase of the iteration, the nonlinearly decreasing $\omega$ maintains a large search value, but its decline rate is too fast, which may easily lead to local convergence without a global optimal search.
(iii) The deceleration factors slow the decline of the improved nonlinear $\omega$ in the early stage of the iteration, so that a large value is maintained for the global search. Its rapid decrease in the late stage of the iteration maintains a stable small value for the local search and is more conducive to convergence.

Figure 6 compares the fitness values against the number of iterations for the different inertia weight methods. Compared with the other two methods, the improved nonlinear PSO requires significantly fewer iterations to reach the optimal fitness, avoids falling into local optima, and attains a smaller optimal fitness. Therefore, the setting of the deceleration factors is reasonable, and the improved PSO algorithm overcomes the shortcomings of slow particle iteration and falling into local optima. When the number of PSO iterations reaches 20, the fitness curve of the improved PSO-BP model converges and the algorithm has found the optimal value; continuing the search would only increase the time cost.

(2) Comparison of the Learning Factor of the Improved PSO. The value of the PSO learning factor affects the efficiency of information transfer between the individual particles and the group and ultimately determines the particle change speed and convergence effect. In most cases, the learning factors are set as constants. To prevent the basic PSO algorithm from falling into local optima and converging prematurely during optimisation, a shrinkage factor was introduced into equation (1), according to the literature [28], to remove the velocity boundary limit and ensure the boundedness and convergence of the PSO algorithm. The shrinkage factor is calculated as follows:

$$\varphi = \frac{2}{\left|2 - c - \sqrt{c^2 - 4c}\right|}, \qquad c = c_1 + c_2, \; c > 4, \quad (13)$$

where $\varphi$ is the contraction factor and $c$ is the sum of the learning factors.

To compare the advantage of using asynchronous learning factors in the improved PSO algorithm, this study evaluated three schemes: the asynchronous learning factors, the synchronous learning factors, and the shrinkage factor. Under the same inertia factor schedule, the relationship between the fitness values and the number of iterations was established for each scheme. As shown in Figure 7, the shrinkage factor still suffers from easily falling into local optima and requires more iterations. The optimal fitness achieved with the synchronous learning factors was significantly lower than that achieved with the shrinkage factor, but the required number of iterations was still high. The dynamic adaptive change of the asynchronous learning factors satisfies the requirement of fast optimisation with fewer evolutions, thereby avoiding parameter optimisation problems such as falling into local optima and low recognition accuracy. Once the optimal BP parameters are obtained, the algorithm’s recognition accuracy and search efficiency are improved.

In this study, the mean square error between the actual output and the ideal output of the BP neural network was selected as the fitness function, and the total number of network weights and thresholds was taken as the PSO dimension.

According to the common value range of the other parameters, the improved operating parameters of the HPSO algorithm were set as presented in Table 3.

4.4. Analysis of Results
4.4.1. Comparison of Prediction Accuracy

In this study, the neural network toolbox in MATLAB was used for the simulation experiment. After normalisation, the dataset was used as the model input. In each state, 80% of the 100 samples were randomly selected as training samples, and the remaining 20% were used as test samples. To facilitate the comparative analysis, the training sets were used to train the BP, GA-BP, PSO-BP, and improved HPSO-BP neural networks, and the test sets were used to test the trained networks. Prediction accuracy was compared in terms of the recognition accuracy and the coefficient of determination. The network structures of the four models were identical. The simulation program for each model was run in MATLAB, and the results are shown in Figure 8.

To further verify each model’s prediction performance, the calculation results of each prediction model’s evaluation indices are presented in Table 4.

As shown in Figure 8, all four models can correctly recognise the radar working state. The overall recognition result of the improved HPSO-BP model is the closest to the actual values, followed by the PSO-BP model and the GA-BP model, with the BP model exhibiting the worst recognition performance. This indicates that PSO can overcome the BP model’s low recognition accuracy. However, owing to the limitations of PSO, the recognition results for some mutated samples have larger errors relative to the actual values; the improved HPSO-BP algorithm mitigates this shortcoming. Its recognition accuracy and $R^2$ were 0.975 and 0.986, respectively, both higher than those of the BP, PSO-BP, and GA-BP models, which indicates that the improved HPSO-BP model has the highest degree of fit and the best recognition accuracy among the four models. Compared with PSO-BP, the recognition accuracy and $R^2$ increased by 8.3% and 4.6%, respectively, which further demonstrates that the proposed algorithm improves the model’s performance over the traditional PSO algorithm.

4.4.2. Convergence Comparison

To evaluate the convergence, the number of convergent steps was considered. The test results are shown in Figure 9.

Comparing the error curves of the four network models during training shows that the mean square error decreased as the number of training steps increased. The following conclusions are drawn:
(i) The traditional BP neural network model tends to fall into local optima, and its convergence speed is slow.
(ii) The GA-BP model improves the convergence rate, but the parameter optimisation time is still long owing to the GA’s coding, selection, crossover, mutation, and other operations.
(iii) The PSO-BP model is simple and converges quickly, but the PSO algorithm can easily fall into local optima.
(iv) Owing to the global search ability of the improved HPSO, the improved HPSO-BP network model converges at step 26 with a best performance of 0.006375. Its convergence speed is significantly higher than that of the other three algorithms.

The improved PSO algorithm is feasible and superior when applied to the optimisation of the BP neural network weights and threshold parameters. Therefore, the improved HPSO-BP algorithm is more efficient in identifying the radar working state.

4.4.3. Model Generalization Performance Verification

To test the generalisation performance of the improved HPSO-BP radar working state recognition model, we selected the operating state parameter ranges of the model A and model B radars [27] as presented in Table 5.

First, according to the Mersenne Twister algorithm, 100 samples were randomly generated for each working state of each radar type; 80% of the samples were randomly selected as the training set, and 20% were selected as the test set. Therefore, 80 test samples were used for each radar. Second, the number of input layer nodes was set to 3, the number of output layer nodes to 1, and the number of hidden layer nodes to 7; hence, the network structure was 3-7-1, as shown in Figure 10. The other parameters were set using the method described in Section 4.3.

Table 6 compares the proposed model’s evaluation indices to those of the BP, GA-BP, and PSO-BP models.

As presented in Table 6, the accuracy of the improved HPSO-BP model increased by up to 24.3%, and the model achieved the highest recognition accuracy and the minimum number of convergence steps, followed by the PSO-BP model and the GA-BP model, with the BP model performing the worst. Because the data quality differs between radar types and may include interference from abnormal data such as random noise, the magnitude of the accuracy improvement also differs. Nevertheless, all indices of the radar working state recognition model based on the improved HPSO-BP are superior to those of the other recognition models.

Based on the three abovementioned types of radar recognition results, it is demonstrated that the proposed method has good generalisation ability in the classification and recognition of the radar working state.

5. Conclusions

A model based on the improved HPSO-BP is proposed to identify the radar working state. The conclusions drawn from this study are as follows:
(1) By introducing crossover and mutation into PSO, an improved nonlinearly decreasing inertia weight strategy, and asynchronous learning factors, the improved HPSO algorithm achieves better global optimisation and particle search speed and avoids falling into local optima.
(2) By combining the global optimisation capability of the improved HPSO algorithm with the recognition capability of the BP neural network, the corresponding relationship between the radar signal parameters and the working mode was established, and the radar working state recognition model was built.
(3) The proposed model overcomes the shortcomings of the traditional BP neural network, such as its slow convergence rate and low recognition accuracy. Compared with the standard BP, GA-BP, and PSO-BP networks, the proposed model has higher accuracy and a faster convergence rate. Additionally, it achieves a higher recognition rate for different radar types and better generalisation.

This study has several limitations. For example, the BP algorithm is a supervised learning algorithm, so the recognition of samples from newly arrived categories lacks adaptability. In an actual combat environment, the radar belongs to the adversary and its working state is unknown. Therefore, future work will investigate an unsupervised incremental target state recognition method and fully exploit the adaptability and timeliness of the cognitive electronic countermeasure system.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China Youth Program (61703411).