Abstract
Fault diagnosis underpins the reliable operation of heterogeneous wireless sensor networks (HWSNs), and accurate fault prediction can effectively improve their reliability. This paper first summarizes the node fault classification and the common fault diagnosis methods of HWSNs. Then, exploiting the short training time, few parameter settings, and good generalization ability of the kernel extreme learning machine (KELM), the collected sample data of sensor node hardware failures are fed into a trained KELM to identify faults in the individual hardware modules of a sensor node. The regularization coefficient C and the kernel parameter s of KELM are model parameters that strongly affect the accuracy of the KELM fault diagnosis model. Therefore, a fault diagnosis method for the sensor nodes of HWSNs based on a KELM optimized by an improved artificial bee colony algorithm (IABC-KELM) is proposed. The proposed algorithm has a stronger ability to solve regression-based fault diagnosis problems, better generalization performance, and faster calculation speed. The experimental results show that the proposed algorithm improves the accuracy of hardware fault diagnosis for sensor nodes and can be effectively applied to the node hardware fault diagnosis of HWSNs.
1. Introduction
Heterogeneous wireless sensor networks (HWSNs) are a kind of distributed sensor network composed of a large number of fixed or mobile wireless sensor nodes organized through self-organization and multihop transmission [1, 2]. The monitoring data collected by the sensor nodes are transmitted between multiple nodes in a hop-by-hop manner. With the expansion of the scale of sensor networks, practical applications face harsh environments such as high humidity, extreme temperatures, corrosive gases, high-energy particle radiation, drastic pressure changes, and strong electromagnetic noise interference [3–5]. At the same time, sensor nodes during their life cycle are affected by factors such as the manufacturing process, the actual environment, and node energy, making them prone to failures that affect the network structure. This poses great challenges to the connectivity, reliability, security, and lifetime of HWSNs and is a key problem that must be solved urgently as the application fields of heterogeneous wireless sensor networks widen. The sensor nodes of HWSNs are often deployed in special areas where it is often impossible for humans to maintain the equipment directly [6, 7]. Therefore, it is urgent to apply practical fault diagnosis technology to the node fault diagnosis of heterogeneous wireless sensor networks.
Through fault diagnosis of the sensor nodes, various abnormal conditions and fault states can be diagnosed timely and accurately [8]. At the same time, predictive measures can be derived to guide the reliable and stable operation of the sensor network and ensure its stability, reliability, and availability [9, 10]. In this way, losses from sensor node hardware failures are minimized, and each sensor node can play its role effectively. Hardware failure of sensor nodes underlies the various failures of heterogeneous wireless sensor networks and is a problem that urgently needs to be solved in the diagnosis of heterogeneous WSNs; its research therefore has very important significance [11, 12].
1.1. Problem Statement and Motivation
Heterogeneous wireless sensor networks have made considerable progress in hardware design, data processing performance, and communication stability. At the same time, the market's demand for network reliability and sustainability is also increasing. From the perspective of the entire wireless sensor network, the network is required to have better environmental applicability; on the other hand, the sensor nodes in wireless sensor networks require better robustness. Due to many unavoidable factors and the use of WSNs in extremely complex and harsh environments, the probability of sensor node failure is much higher than in other systems. A faulty sensor node causes data errors or missing data during the collection cycle, directly reducing the reliability of the collected data and ultimately affecting the overall decision-making of the system.
In practical applications, because the monitoring area is generally large and the external environment of the nodes is harsh and complex, various faults are prone to occur, while the reliability and continuity requirements of monitoring network operations are gradually increasing. After a node fails, it tends to produce unreliable or distorted data, or to stop reporting altogether, causing network crashes and other problems. The probability of hardware failure of sensor nodes in HWSNs is much higher, and its consequences more serious, than other problems. It is therefore particularly important to study fault diagnosis methods for sensor nodes in heterogeneous wireless sensor networks.
1.2. Contribution
In this work, a node fault diagnosis method for heterogeneous sensor networks based on a kernel extreme learning machine optimized by an improved artificial bee colony algorithm (IABC-KELM) is proposed. In comparison with the current general selection approaches, the main contributions of our work can be summarized as follows:
(1) Characterize the issues of node fault diagnosis for HWSNs and classify the current fault diagnosis methods.
(2) Propose a node fault diagnosis method for heterogeneous sensor networks based on a kernel extreme learning machine optimized by the improved artificial bee colony algorithm (IABC-KELM).
(3) Evaluate the performance of the proposed algorithm by comparing it with the KELM, PSO-KELM, and ABC-KELM fault diagnosis methods.
The remainder of this paper is organized as follows: Section 2 discusses the related work and classifies the current fault diagnosis methods. Section 3 describes the basic principles of the kernel extreme learning machine. Section 4 describes the design of the kernel extreme learning machine optimized by the artificial bee colony algorithm, and Section 5 designs the fault diagnosis steps of IABC-KELM for HWSNs. Section 6 provides the parameters and simulation results that validate the performance of the proposed algorithm. Section 7 concludes the paper.
2. Classification of Node Hardware Failure and Related Work
Generally speaking, a sensor node is composed of a power module, a control processor module, a sensor module, a storage module, and a communication module. From the perspective of functional division, a wireless sensor network is mainly composed of terminal nodes, routing nodes, and sink nodes. The terminal node completes the collection of perception data, the routing node is responsible for data transmission, and the sink node is responsible for data collection and uploading to the processing center. The terminal node is particularly important in the entire data collection process; it is mainly composed of five modules: a power supply module, a CPU control module, a wireless communication module, a sensing module, and a storage module. The fault classification of the node hardware of the heterogeneous wireless sensor network is shown in Figure 1. The node hardware failures of wireless sensor networks mainly include impact faults, short-circuit faults, bias faults, offset faults, periodic interference faults, nonlinear dead-zone faults, open-circuit faults, and drift faults. The common faults and causes of sensor components are shown in Table 1.

There are many reasons for the failure of WSN sensor nodes, and the effects of failures also differ. Generally, most sensor nodes are active, and sensing nodes are distributed in harsh application environments. The nodes are powered by dry batteries, and it is inconvenient to replace the power supply or recharge. While a node is working, its energy is gradually consumed; when the supply falls below the normal working value, the node becomes unstable, measurements are inaccurate, and the collected data contains large errors, which affects the normal operation of the monitoring system. Common faults of the sensor nodes mainly include impact faults, short-circuit faults, bias faults, battery exhaustion, periodic interference, nonlinear dead-zone faults, open-circuit faults, and temperature drift faults. The commonly used methods for fault diagnosis of the sensor nodes are shown in Figure 2.

At present, many experts and scholars have conducted extensive research on the fault diagnosis of sensor nodes and have achieved some results. Shahriar et al. [13] studied the problem of recovering a batch of virtual networks (VNs) affected by a substrate node failure; the combinatorial possibilities of alternate embeddings of the failed virtual nodes and links make finding the most efficient recovery both nontrivial and intractable. The authors of [14] proposed a data transmission scheme that accounts for node failure when validating block data on a blockchain: it first sets a response threshold level to detect failed nodes and then uses a greedy strategy to construct a communication tree that organizes all nodes forwarding block data. Ghimire et al. [15] proposed a fault diagnosis method based on rough set theory for the hardware fault diagnosis of wireless sensor network nodes, which effectively diagnoses and locates faults in each module of the sensor node with high diagnostic accuracy. Islam et al. [16] researched sensor fault diagnosis and proposed a wavelet packet-based method, which can be effectively applied to sensor perception diagnosis applications. Gangsar et al. [17] proposed a heterogeneous wireless sensor network fault diagnosis method based on support vector machines to improve fault diagnosis accuracy and network performance, addressing the low accuracy of sensor node fault diagnosis. Cheng et al. [18] proposed an improved DFD node fault diagnosis method that greatly improves the accuracy of node fault diagnosis. Preeth et al. [19] proposed a sensor node fault diagnosis method based on artificial immune systems and fuzzy c-means; the problem was transformed into obtaining the optimal solution of an objective function, and an intelligent optimization algorithm was introduced to accurately realize node fault diagnosis.
The above methods partially improve the fault diagnosis accuracy of the node hardware or introduce intelligent optimization to improve diagnosis accuracy. However, these algorithms involve more complex calculations, converge slowly, or sacrifice network energy to improve the accuracy of fault diagnosis. In this paper, a machine learning method is introduced: the kernel extreme learning machine has a short training time, few parameter settings, and good generalization ability, and the collected fault sample data of the node hardware are fed into the trained learning machine. The regularization coefficient and the kernel parameter of the kernel extreme learning machine are treated as the optimization targets, since these parameters affect the accuracy of the node fault diagnosis model. Therefore, a node fault diagnosis method for heterogeneous sensor networks based on a kernel extreme learning machine optimized by an improved artificial bee colony algorithm (IABC-KELM) is proposed in this paper, and the method realizes the identification of the hardware faults of the sensor nodes. The proposed algorithm is simple to implement, trains quickly, and achieves high fault diagnosis accuracy, which can effectively improve the reliability and stability of the network.
3. Kernel Extreme Learning Machine
The kernel extreme learning machine (KELM) is a machine learning algorithm that combines kernel learning with the extreme learning machine (ELM). In 2012, Huang et al. conducted in-depth research on least squares support vector machines and found that kernel functions have inherent advantages in processing large-scale and complex data [20]. Later, Huang et al. introduced the kernel function into the ELM to form a kernel extreme learning machine (KELM) with a least-squares optimal solution; its data processing is simpler, it reduces the number of parameters to be tuned, and it improves the convergence speed of the algorithm [21]. By introducing the kernel function into the extreme learning machine, KELM yields a decision model with stronger robustness and better generalization performance than the conventional ELM model. The kernel extreme learning machine is composed of a data generator, a trainer, and a kernel learning machine [22]. The structure of KELM is shown in Figure 3. Among them, the data generator is used to generate a series of sample vectors with an unknown probability distribution, and the trainer can be used to distinguish sample categories or fit actual data models. The kernel extreme learning machine uses a data set $(x_1, t_1), (x_2, t_2), \ldots, (x_N, t_N)$ composed of several observed samples to construct an appropriate function that approximates the actual trainer [23].

Suppose there are $N$ arbitrary samples $(x_i, t_i)$, where $x_i \in \mathbb{R}^n$ and $t_i \in \mathbb{R}^m$. For a single-hidden-layer neural network with $L$ hidden neurons, the hidden layer output can be expressed as the matrix

$$\mathbf{H} = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_N) \end{bmatrix}, \tag{1}$$

where $h(x_i)$ is the hidden-layer feature mapping of sample $x_i$, and the network training target can be written as

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{2}$$

where $\boldsymbol{\beta}$ is the output weight matrix and $\mathbf{T} = [t_1, \ldots, t_N]^{T}$ is the target matrix.
In order to minimize the system training error, we need to consider minimizing the empirical risk and the structural risk together. Therefore, we jointly minimize the training error and the output weights, which can be expressed by the following formula:

$$\min_{\boldsymbol{\beta},\,\boldsymbol{\xi}} \; \frac{1}{2}\|\boldsymbol{\beta}\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\boldsymbol{\xi}_i\|^2 \quad \text{s.t.} \quad h(x_i)\boldsymbol{\beta} = t_i^{T} - \boldsymbol{\xi}_i^{T}, \; i = 1, \ldots, N. \tag{3}$$

Wherein the optimization variables are the output weights $\boldsymbol{\beta}$ and the slack variables $\boldsymbol{\xi}_i$, where $\boldsymbol{\xi}_i$ is the error between the network output for the $i$-th training sample and its true value. According to the KKT conditions, the above problem is equivalent to solving the Lagrangian

$$L = \frac{1}{2}\|\boldsymbol{\beta}\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\boldsymbol{\xi}_i\|^2 - \sum_{i=1}^{N}\alpha_i\left(h(x_i)\boldsymbol{\beta} - t_i^{T} + \boldsymbol{\xi}_i^{T}\right). \tag{4}$$
The parameter $\alpha_i$ represents the Lagrange multiplier of the $i$-th sample, and the parameter $C$ is a nonnegative constant. The corresponding optimality conditions are as follows:

$$\frac{\partial L}{\partial \boldsymbol{\beta}} = 0 \;\Rightarrow\; \boldsymbol{\beta} = \sum_{i=1}^{N}\alpha_i h(x_i)^{T} = \mathbf{H}^{T}\boldsymbol{\alpha}, \tag{5}$$

$$\frac{\partial L}{\partial \boldsymbol{\xi}_i} = 0 \;\Rightarrow\; \alpha_i = C\boldsymbol{\xi}_i, \tag{6}$$

$$\frac{\partial L}{\partial \alpha_i} = 0 \;\Rightarrow\; h(x_i)\boldsymbol{\beta} - t_i^{T} + \boldsymbol{\xi}_i^{T} = 0. \tag{7}$$
Putting formulas (5), (6), and (7) into formula (4), we can get

$$\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)\boldsymbol{\alpha} = \mathbf{T}. \tag{8}$$

Substituting the solution of formula (8) into the network output equation (2), we get the classification decision function

$$f(x) = h(x)\boldsymbol{\beta} = h(x)\mathbf{H}^{T}\boldsymbol{\alpha}. \tag{9}$$
Usually, the amount of training data is much larger than the number of hidden nodes of the extreme learning machine; so, from formulas (5) and (8), we can get

$$\boldsymbol{\beta} = \mathbf{H}^{T}\boldsymbol{\alpha}, \tag{10}$$

$$\boldsymbol{\alpha} = \left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{11}$$

Putting formulas (10) and (11) together, we can get

$$\boldsymbol{\beta} = \mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{12}$$

The approximation function of the trainer can then be written as

$$f(x) = h(x)\boldsymbol{\beta} = h(x)\mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{13}$$
According to kernel theory, the activation function of the extreme learning machine can be written in the form of a kernel function:

$$\boldsymbol{\Omega}_{\mathrm{ELM}} = \mathbf{H}\mathbf{H}^{T}, \qquad \Omega_{ij} = h(x_i) \cdot h(x_j) = K(x_i, x_j). \tag{14}$$

The expression of the kernel extreme learning machine is then

$$f(x) = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^{T} \left(\frac{\mathbf{I}}{C} + \boldsymbol{\Omega}_{\mathrm{ELM}}\right)^{-1}\mathbf{T}, \tag{15}$$

where $\mathbf{H}$ is the hidden layer matrix of the neural network, $\mathbf{H}^{\dagger}$ is the generalized inverse matrix of $\mathbf{H}$, $\mathbf{T}$ is the prediction target vector, and $\mathbf{I}/C$ adds the offset $1/C$ to the diagonal elements of the symmetric matrix $\boldsymbol{\Omega}_{\mathrm{ELM}}$. It can be seen from formula (15) that the kernel extreme learning machine is an optimized solution obtained by combining kernel learning theory with standard optimization methods. Due to its relatively weak optimization constraints, the KELM algorithm has better generalization performance.
The radial basis function (RBF) has only one parameter, $s$, which is beneficial to optimization; so, the RBF is used as the kernel function, namely,

$$K(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{s^2}\right). \tag{16}$$
Since the regularization factor $C$ and the kernel parameter $s$ need to be set in advance, the performance of the KELM method is greatly affected by these two parameters. The steps of the kernel extreme learning machine algorithm are as follows:
Step 1. Initialize the data samples $(x_i, t_i)$, $i = 1, \ldots, N$, and select the kernel function $K(\cdot, \cdot)$ of the kernel extreme learning machine and the regularization factor $C$.
Step 2. Calculate the hidden layer output matrix of KELM.
Step 3. Calculate the output function of the KELM trainer.
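To make the three steps above concrete, the following is a minimal NumPy sketch of a KELM trainer with an RBF kernel, corresponding to formulas (14)-(16). The class and function names are illustrative, not from the paper; it is a sketch under the assumption of a single-output target or a one-hot target matrix.

```python
import numpy as np

def rbf_kernel(X1, X2, s):
    """RBF kernel matrix: K[i, j] = exp(-||X1_i - X2_j||^2 / s^2), cf. formula (16)."""
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / s ** 2)

class KELM:
    """Minimal kernel extreme learning machine, cf. formula (15)."""

    def __init__(self, C=1.0, s=1.0):
        self.C = C  # regularization factor
        self.s = s  # RBF kernel parameter

    def fit(self, X, T):
        """Steps 1-2: store the samples and solve (I/C + Omega)^(-1) T."""
        self.X_train = X
        n = X.shape[0]
        omega = rbf_kernel(X, X, self.s)  # Omega_ELM from formula (14)
        self.alpha = np.linalg.solve(omega + np.eye(n) / self.C, T)
        return self

    def predict(self, X):
        """Step 3: f(x) = [K(x, x_1), ..., K(x, x_N)] (I/C + Omega)^(-1) T."""
        return rbf_kernel(X, self.X_train, self.s) @ self.alpha
```

For classification, `T` can be a one-hot matrix, and the predicted class is the argmax of the output row.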
Compared with the extreme learning machine, the kernel extreme learning machine has a stronger ability to solve regression prediction problems, better generalization performance, and faster calculation speed while obtaining better or similar prediction accuracy. By its very construction, the specific form of the feature mapping function of the hidden layer nodes does not need to be given; the value of the output function can be obtained knowing only the form of the kernel function. At the same time, there is no need to set the number of hidden layer nodes when solving for the output function value, so there is no need to set the initial weights and bias values of the hidden layer.
However, in the actual application of the KELM method, its performance is affected by important parameters in the model. Practice has proved that the classification accuracy of KELM is affected by the settings of the regularization coefficient $C$ and the kernel function parameter $s$, and the classification accuracy is easily trapped in a local minimum. Therefore, the parameters of KELM are tuned by the artificial bee colony optimization algorithm to obtain the optimal hardware fault diagnosis model for the sensor nodes.
4. Kernel Extreme Learning Machine Optimized by the Artificial Bee Colony Algorithm (ABC-KELM)
The main idea of the algorithm is to use the strong search ability of the artificial bee colony algorithm to optimize the regularization factor $C$ and the kernel parameter $s$ of KELM, finding optimal or near-optimal parameters and a more compact network structure. While maximizing the classification accuracy of the KELM model, the output weight norm is minimized as much as possible to further improve the generalization ability and classification results of the KELM model.
4.1. Artificial Bee Colony Algorithm
The artificial bee colony algorithm (ABC) imitates the advanced swarm intelligence behavior exhibited by bee colony foraging behavior, searching for the best nectar source nearby through two mechanisms of division of labor and role conversion between different bees [24]. The bee colony carries out different activities according to their division of labor and realizes the information sharing and interaction of the entire bee colony through the form of swing dance. This method enables the bee colony algorithm to converge quickly, find the global optimal solution quickly, and is robust when solving combinatorial optimization problems. At the same time, the ABC algorithm has the advantages of fewer parameters and simple implementation and has been widely used in many fields such as polynomial function optimization, neural network parameter optimization, and industrial system design optimization [25]. The bees in the algorithm can be divided into three types: the employed bees, the observation bees, and the scout bees. The location of the food source represents a set of feasible solutions to the problem to be optimized, and the amount of nectar represents the fitness value of the feasible solution. The number of the employed bees and food sources is equal, and the location of each food source corresponds to one employed bee [26].
Assuming that the number of food sources is SN, the fitness value of the $i$-th food source is denoted $fit_i$.
The employed bee searches for a new food source according to formula (17). If a new food source $v_i$ found in the neighborhood of the existing food source $x_i$ has a greater fitness value than $x_i$, the employed bee replaces $x_i$ with $v_i$; otherwise, it keeps $x_i$:

$$v_{ij} = x_{ij} + \varphi_{ij}\left(x_{ij} - x_{kj}\right). \tag{17}$$

In the formula, the parameter $\varphi_{ij}$ is a random value in $[-1, 1]$, the index $k$ is a random integer in $[1, SN]$ with $k \neq i$, and $j \in \{1, 2, \ldots, D\}$, where $D$ is the dimension of the feasible solution.
When all the employed bees have completed their searches, they share the food source information with the observation bees in the hive. An observation bee selects the $i$-th ($i = 1, 2, \ldots, SN$) food source according to the roulette method; that is, the greater the probability $p_i$ corresponding to a food source, the greater its chance of being selected. The calculation formula of the parameter $p_i$ is

$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n}. \tag{18}$$
Then, the observation bee searches for the new food sources in the selected food source neighborhood, calculates and compares the fitness values of the new and old food sources, and retains the food sources with a large fitness value.
If the position of a food source has still not been updated after limit iterations, the corresponding bee abandons the food source, becomes a scout bee, and searches for a new food source according to formula (19):

$$x_{ij} = x_j^{\min} + \mathrm{rand}(0, 1)\left(x_j^{\max} - x_j^{\min}\right). \tag{19}$$

In formula (19), the parameters $x_j^{\max}$ and $x_j^{\min}$ are the upper and lower limits of the $j$-th element of the feasible solution.
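As an illustration of formulas (17)-(19), the following sketch implements one employed/onlooker/scout cycle of the basic ABC algorithm. The fitness function is passed in as a callable and is assumed to return positive values (e.g., formula (22) below, so that the roulette probabilities are valid); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_search(foods, fit, trials, i, fitness_fn):
    """Formula (17): v_ij = x_ij + phi_ij * (x_ij - x_kj), with greedy selection."""
    SN, D = foods.shape
    j = rng.integers(D)                               # random dimension j
    k = rng.choice([m for m in range(SN) if m != i])  # random partner, k != i
    v = foods[i].copy()
    v[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
    fv = fitness_fn(v)
    if fv > fit[i]:                                   # keep the better food source
        foods[i], fit[i], trials[i] = v, fv, 0
    else:
        trials[i] += 1

def abc_cycle(foods, fit, trials, fitness_fn, limit, lower, upper):
    """One employed / onlooker / scout cycle over the SN food sources."""
    SN, D = foods.shape
    for i in range(SN):                               # employed bee phase
        neighbor_search(foods, fit, trials, i, fitness_fn)
    p = fit / fit.sum()                               # formula (18): roulette wheel
    for _ in range(SN):                               # onlooker bee phase
        neighbor_search(foods, fit, trials, int(rng.choice(SN, p=p)), fitness_fn)
    for i in range(SN):                               # scout bee phase, formula (19)
        if trials[i] > limit:
            foods[i] = lower + rng.uniform(0.0, 1.0, D) * (upper - lower)
            fit[i], trials[i] = fitness_fn(foods[i]), 0
```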
The ABC algorithm has the advantages of the simple calculation process, fewer parameters to be set, strong global search capability, and good robustness. The ABC algorithm is mainly used to find the optimal ELM input layer weight and hidden layer threshold [27].
The ABC algorithm has many advantages, such as swarm intelligence, few parameters, easy operation, good convergence speed and accuracy, robustness, and extensibility, which has attracted wide attention and led to its application to engineering optimization problems. However, when the basic ABC algorithm is applied to high-dimensional complex optimization problems, it resembles other swarm intelligence optimization algorithms and has obvious defects, such as easily falling into local optimal solutions and poor convergence accuracy [28]. Therefore, some researchers have improved the ABC algorithm. Such improvement strategies accelerate the convergence speed and accuracy of the algorithm to a certain extent; however, for complex high-dimensional optimization problems, there remains a possibility of premature convergence and of falling into a local optimal solution. If the ABC algorithm converges prematurely, the Cauchy mutation operation is applied; otherwise, the algorithm iteration continues.
When the artificial bee colony algorithm falls into a local optimal solution, the standard ABC algorithm lacks the ability to jump out of it; so, this paper introduces a disturbance mechanism into the algorithm at this point to improve the ability of the ABC algorithm to escape local optima. The specific perturbation method is as follows: first, Cauchy mutation perturbation is performed with a certain probability on copies of the optimal individual to increase the diversity of the population. Then, the optimal individual after mutation is subjected to a second optimization, giving the algorithm the ability to jump out of the local optimal solution. The Cauchy mutation operation is as follows:

$$x'_{best} = x_{best} + \eta \cdot \mathrm{Cauchy}(0, 1). \tag{20}$$

In formula (20), the parameter $x_{best}$ is the current optimal individual position in the colony, and the parameter $\eta$ is the disturbance amplitude adjustment parameter, set to 0.5. $\mathrm{Cauchy}(0, 1)$ is a random number generator that follows the standard Cauchy distribution.
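A one-line sketch of the Cauchy mutation in formula (20); `eta` is the disturbance amplitude (0.5 above), and NumPy's standard Cauchy generator plays the role of Cauchy(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_perturb(x_best, eta=0.5):
    """Formula (20): x' = x_best + eta * Cauchy(0, 1), perturbing each dimension."""
    return x_best + eta * rng.standard_cauchy(size=x_best.shape)
```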
4.2. Kernel Extreme Learning Machine Optimized by the Artificial Bee Colony Algorithm
The improved kernel extreme learning machine optimized by the artificial bee colony algorithm proposed in this paper has the following advantages for fault diagnosis of HWSNs. First, the ABC algorithm with Cauchy mutation is a global search algorithm that can avoid local optima. Second, compared with traditional neural networks that need to optimize the input layer weights, hidden layer thresholds, and output weights, this method only needs to optimize the regularization factor $C$ and the kernel parameter $s$ of KELM. The output weights are then calculated in closed form, which improves the training speed and yields better generalization ability.
The specific steps for optimization are as follows:
(1) Randomly generate SN food sources and initialize the maximum number of iterations MCN, the maximum stagnation parameter limit, and the colony size. The location of each food source $\theta_i$ ($i = 1, 2, \ldots, SN$) is a set of input layer weights and hidden layer thresholds, and the calculation formula of the food source $\theta_i$ is

$$\theta_{ij} = \theta_j^{\min} + \mathrm{rand}(0, 1)\left(\theta_j^{\max} - \theta_j^{\min}\right). \tag{21}$$
In formula (21), each input-weight component is initialized to a random value in $[-1, 1]$, and each hidden-threshold component is initialized to a random value in $[0, 1]$. The parameters $I$ and $L$ are the numbers of input layer and hidden layer nodes, respectively.
(2) The calculation formula of the fitness function is

$$fit_i = \frac{1}{1 + \mathrm{MSE}_i}. \tag{22}$$

The parameter $\mathrm{MSE}_i$ in formula (22) is the mean squared error (MSE) obtained when KELM is trained with the parameters corresponding to the $i$-th food source (a code sketch of this fitness evaluation follows the list below). The calculation formula of the mean squared error is

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2. \tag{23}$$

In formula (23), the parameters $y_i$ and $\hat{y}_i$ are the actual output value and the expected output value, respectively, and the parameter $N$ is the number of samples.
The employed bees search for a new food source in the neighborhood of the existing food source according to formula (17), calculate and compare the fitness values of the new and old food sources, and retain the food source with the larger fitness value. The dimension of the feasible solution is $D$.
(3) After all the employed bees have selected their food sources, the observation bees select their search objects according to the probability $p_i$ of the $i$-th ($i = 1, \ldots, SN$) food source, search for new food sources in the neighborhood of the selected food source according to formula (17), and keep the food sources with larger fitness values.
(4) If a food source has not been updated after limit iterations, the bee corresponding to that food source searches for a new food source according to formula (19).
(5) Repeat steps (2) to (4) until the number of iterations reaches MCN, and the loop ends.
(6) The parameters corresponding to the food source with the largest fitness value in the cycle are used as the optimal input layer weights and hidden layer thresholds of KELM.
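A minimal sketch of the fitness evaluation of step (2) (formulas (22) and (23)), reusing the `KELM` class from the Section 3 sketch. Encoding a candidate as a $(C, s)$ pair follows the KELM parameter search of this section; the separate validation split is an assumption for illustration.

```python
import numpy as np

def mse(y_true, y_pred):
    """Formula (23): mean squared error over the N samples."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def fitness(candidate, X_tr, T_tr, X_val, T_val):
    """Formula (22): fit = 1 / (1 + MSE) of a KELM trained with candidate (C, s)."""
    C, s = candidate
    model = KELM(C=C, s=s).fit(X_tr, T_tr)  # KELM sketch from Section 3
    return 1.0 / (1.0 + mse(T_val, model.predict(X_val)))
```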
Decoding from the returned optimal solution can obtain the optimal input layer weight and hidden layer threshold. The flow chart of the artificial bee colony optimization extreme learning machine is shown in Figure 4.

The time complexity indirectly reflects the execution time of the algorithm. In the ABC-KELM algorithm, assume that the execution time required to initialize the parameters (with population size $n$ and spatial dimension $D$) is $t_0$, the time to generate one uniformly distributed random value is $t_1$, and the time required to evaluate one fitness value is $t_2$. The time complexity of the initial stage of the ABC-KELM algorithm is then

$$T_1 = t_0 + n\left(D t_1 + t_2\right) = O(nD). \tag{24}$$

Assuming that the execution time required for the iterative update of each dimension of an individual is the same, namely $t_3$, and that the time for comparing candidate solutions and selecting the best after an iteration is $t_4$, the time complexity of this stage over $G$ iterations is

$$T_2 = G n\left(D t_3 + t_4\right) = O(GnD). \tag{25}$$

Therefore, the total time complexity of the ABC-KELM algorithm is

$$T_{\mathrm{ABC\text{-}KELM}} = T_1 + T_2 = O(nD) + O(GnD) = O(GnD). \tag{26}$$

In the IABC-KELM algorithm, the time required for the initialization phase is basically the same as in the ABC-KELM algorithm; so, the time complexity of the initialization phase of the improved algorithm is the same as in equation (24). In the algorithm loop, suppose the calculation time of the weighted center is $t_5$, the calculation time of an individual learning position is $t_6$, and the time of the comparison and selection between the learning individual and the initial individual is $t_7$. Then, the time complexity of the loop part is

$$T_3 = G n\left(t_5 + D t_6 + t_7\right) = O(GnD). \tag{27}$$

Therefore, the total time complexity of the IABC-KELM algorithm for solving the optimum of each generation is

$$T_{\mathrm{IABC\text{-}KELM}} = T_1 + T_3 = O(GnD). \tag{28}$$
In summary, the improved strategy of the IABC-KELM algorithm does not increase the time complexity of the algorithm solution compared to the initial ABC-KELM algorithm.
5. Fault Diagnosis Steps of IABC-KELM for HWSNs
In order to obtain the output of the sensor components of a faulty node, acquiring a large amount of fault data through physical experiments takes considerable time and cost; so, most of the existing literature uses Simulink simulation to obtain fault output data. We likewise use Simulink in MATLAB to simulate the hardware faults of the sensor node and use the results for fault diagnosis and prediction of the sensor node. A mathematical model is used to describe the changes of the output voltage signal after the occurrence of the four common fault types that sensor nodes of WSNs are prone to: impact faults, bias faults, short-circuit faults, and offset faults.
Based on this, a fault diagnosis algorithm for sensor nodes based on kernel extreme learning machine optimized by artificial bee colony is proposed to diagnose hardware faults of the sensor nodes.
The steps of the node fault diagnosis method of the wireless sensor network based on IABC-KELM mainly include:
(1) Establish a fault diagnosis decision table for the wireless sensor network.
(2) Use the discernibility matrix reduction algorithm based on core attributes to remove redundant attributes.
(3) Extract fault diagnosis rules based on the reduced attribute set.
(4) Establish the KELM fault diagnosis model according to the diagnosis rules and use the improved artificial bee colony algorithm to optimize it so that the improved algorithm achieves the best classification effect. Then, use the optimized kernel extreme learning machine to diagnose the hardware faults of the sensor nodes. The flowchart of the fault diagnosis strategy for the sensor nodes of WSNs based on IABC-KELM is shown in Figure 5.

The important parameters of the IABC-KELM algorithm, the regularization factor $C$ and the kernel parameter $s$ of the kernel function, are obtained by the iterative optimization of the artificial bee colony algorithm. In order to prevent the ABC algorithm from overfitting or stalling in local optimal solutions, and to obtain an unbiased performance evaluation, we introduce a disturbance mechanism into the algorithm to improve its ability to jump out of local optima. To increase the diversity of the population, Cauchy mutation is performed on the optimal individuals with a certain probability $p$, and the mutated optimal individuals are then subjected to a second optimization, which endows the algorithm with the ability to escape local optimal solutions.
The improved artificial bee colony algorithm is used to optimize the regularization factor $C$ and the kernel parameter $s$ of the kernel extreme learning machine model, and the obtained optimal parameter set is used to train the kernel extreme learning machine model. Finally, the test set is used to evaluate the trained fault diagnosis model and obtain the node fault diagnosis results for HWSNs.
The main steps of node fault diagnosis of HWSNs based on the IABC-KELM algorithm are as follows:
Step 1. The node fault diagnosis data set of HWSNs is divided into training data set and test data set. Seventy percent of the node fault diagnosis data set is used as the training set, and the remaining 30% is used as the test set.
Step 2. Randomly generate the initial population and encode it. The parameters to initialize include the number of artificial bee colonies SN, the maximum number of iterations MCN, the learning period, the learning coefficient, and the selection probability $p_i$; the position information of the whole population is initialized.
Step 3. Decode the solution vectors obtained in step 2 to obtain the regularization factor $C$ and the kernel parameter $s$, which are used to train the kernel extreme learning machine model.
Step 4. Increase the number of iterations: $iter = iter + 1$.
Step 5. According to the execution probability $p$, select the solution update strategy. In the first iteration, the probability of each solution update strategy being selected is equal, namely 0.25.
Step 6. Use the selected solution update strategy to update the solution.
Step 7. Use the parameter set obtained in step 6 to train the kernel extreme learning machine model on the training set and calculate the fitness value corresponding to each solution.
Step 8. Update the individual optimal fitness value and individual optimal position by comparing the current fitness value and solution position with the historical record. If the current fitness value is less than the fitness value in the historical record, the historical record is kept unchanged; otherwise, a new solution is generated according to a probability that changes with the number of iterations, and the algorithm returns to step 7. Afterwards, it proceeds to step 9.
Step 9. By comparing the current fitness value with the optimal fitness value in the historical record, the global optimal fitness value and the global optimal solution are updated. If the currently obtained optimal fitness value is less than the fitness value in the historical record, the historical value is kept unchanged; otherwise, it replaces the recorded fitness value.
Step 10. According to the results of each solution update strategy phase, formulas (17) and (18) are used to update the probability value of each strategy.
Step 11. If the algorithm meets the termination condition (the iteration count reaches the set maximum), go to step 12; otherwise, go to step 4 and continue.
Step 12. Obtain the optimal regularization factor $C$ and kernel parameter $s$ from the global optimal solution.
Step 13. The parameter values $(C, s)$ obtained in step 12 are used to train the kernel extreme learning machine, which is then tested on the fault diagnosis test data set; the average of the calculated results is taken to finally obtain the node fault diagnosis result for HWSNs.
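Putting the pieces together, the following sketch mirrors Steps 1-13 by reusing the `abc_cycle`, `cauchy_perturb`, `fitness`, and `KELM` sketches from the earlier sections. The search bounds for $(C, s)$ are assumptions for illustration, and the per-strategy probability updates of Steps 5 and 10 are collapsed into the single neighborhood-search strategy of formula (17).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed search bounds for (C, s); the paper does not state them explicitly.
LOWER = np.array([2.0 ** -5, 2.0 ** -5])
UPPER = np.array([2.0 ** 10, 2.0 ** 10])

def iabc_kelm_train(X, T, SN=15, MCN=50, limit=10):
    """Sketch of Steps 1-13: ABC search over (C, s) plus Cauchy perturbation."""
    n = X.shape[0]
    idx = rng.permutation(n)
    split = int(0.7 * n)                              # Step 1: 70/30 split
    tr, te = idx[:split], idx[split:]
    fn = lambda cand: fitness(cand, X[tr], T[tr], X[te], T[te])
    foods = LOWER + rng.uniform(0, 1, (SN, 2)) * (UPPER - LOWER)  # Step 2
    fit = np.array([fn(f) for f in foods])
    trials = np.zeros(SN)
    for _ in range(MCN):                              # Steps 4-11
        abc_cycle(foods, fit, trials, fn, limit, LOWER, UPPER)
        best = int(np.argmax(fit))
        cand = np.clip(cauchy_perturb(foods[best]), LOWER, UPPER)
        fc = fn(cand)                                 # Cauchy disturbance step
        if fc > fit[best]:
            foods[best], fit[best] = cand, fc
    best = int(np.argmax(fit))                        # Steps 12-13
    C, s = foods[best]
    return KELM(C=C, s=s).fit(X[tr], T[tr]), (C, s)
```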
The pseudocode based on the IABC-KELM algorithm is shown in Table 2.
6. Simulation Results and Analysis
6.1. Simulation Environment Settings
In order to verify the performance of the improved IABC-KELM algorithm in terms of convergence and optimization speed, the proposed IABC-KELM algorithm is compared with the PSO-KELM, GWO-KELM, and ABC-KELM algorithms on function tests and on regression and classification data sets. The maximum number of population evolutions in all experiments is set to 50, and the population size of each algorithm is 30. All experiments were run 50 times, and the root mean square error or the average and standard deviation of the classification accuracy were taken as the experimental results. The PSO algorithm is configured with its learning factors c1 and c2, an initial population of 30 particles, and its maximum number of iterations. The GWO algorithm sets the gray wolf population size to 30 together with its control parameters, and the maximum number of iterations of the algorithm is 1000. The parameters of the ABC-KELM and IABC-KELM algorithms are set as follows: the bee population size NP is 30, the number of employed bees is NP/2, and the maximum number of exploitation attempts is NP/2.
6.2. Test Objective Function Optimization
The simulation experiments in this section are carried out on test objective function optimization (a Sinc function fitting experiment, a regression data set experiment, and a classification data set experiment) and on the sensor node hardware fault diagnosis of WSNs.
6.2.1. Sinc Function Simulation Experiment Comparison
The four algorithms are compared by fitting the Sinc function. The expression of the Sinc function is as follows:

$$y(x) = \begin{cases} \dfrac{\sin(x)}{x}, & x \neq 0, \\[4pt] 1, & x = 0. \end{cases}$$
We generate 1000 values of $x$ uniformly distributed in $[-10, 10]$ and calculate the corresponding outputs. Then, we generate 1000 uniformly distributed noise values $\varepsilon$ in $[-0.2, 0.2]$. The training set is $\{(x_i, y(x_i) + \varepsilon_i)\}$, and another group of 1000 data points is generated in the same way as the test set. The number of iterations of the four algorithms is gradually increased to fit the function. The ABC-KELM and IABC-KELM algorithm parameter settings are the same. The function fitting iterative calculation results of the four algorithms are shown in Figure 6.
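A sketch of the data generation just described; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinc(x):
    """y(x) = sin(x) / x for x != 0, and y(0) = 1."""
    safe = np.where(x == 0.0, 1.0, x)          # avoid division by zero
    return np.where(x == 0.0, 1.0, np.sin(x) / safe)

x_train = rng.uniform(-10.0, 10.0, 1000)                # 1000 uniform inputs
y_train = sinc(x_train) + rng.uniform(-0.2, 0.2, 1000)  # noisy training targets
x_test = rng.uniform(-10.0, 10.0, 1000)                 # independent test set
y_test = sinc(x_test)
```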

It can be seen from Figure 6 that as the number of simulation iterations increases, the root mean square error of the function fit of the four algorithms gradually decreases. It can also be seen that the fitting error of the PSO-KELM algorithm is the largest, followed by that of the GWO-KELM algorithm, while the fitting error of the ABC-KELM algorithm is smaller. The IABC-KELM algorithm proposed in this paper has the smallest fitting error and the best fitting effect.
In addition, the root mean square error (RMSE), mean absolute error (MAE), and relative standard deviation (RSD) are used as evaluation indicators for error analysis. The calculation formulas of the three indicators are as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2},$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|,$$

$$\mathrm{RSD} = \frac{\sqrt{\sum_{i=1}^{N}\left(\hat{y}_i - \bar{y}\right)^2}}{\sqrt{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}}.$$

Among them, $y_i$ represents the measured value, $\hat{y}_i$ represents the predicted value, $N$ is the number of samples, $e_i = y_i - \hat{y}_i$ is the absolute error, and the numerator and denominator of RSD are both standard deviations. The smaller the RMSE and MAE values, the lower the prediction error; the closer the RSD value is to 1, the closer the predicted values are to the measured values. The Sinc function fitting results are shown in Table 3.
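The three indicators can be sketched as follows. The RSD form here (ratio of the standard deviation of the predictions to that of the measurements) is our reading of the description above, so treat it as an assumption.

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mae(y, y_hat):
    """Mean absolute error."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.mean(np.abs(y - y_hat)))

def rsd(y, y_hat):
    """Relative standard deviation: spread of predictions over spread of
    measurements; a value close to 1 means the spreads match."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.std(y_hat) / np.std(y))
```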
It can be seen from Table 3 that the PSO-KELM algorithm yields the largest RMSE and MAE values and the smallest RSD value, giving the poorest test performance. The RMSE and MAE values of the GWO-KELM algorithm are the second largest, its RSD value is the second smallest, and its test performance is average. The RMSE and MAE values of the ABC-KELM algorithm are smaller, its RSD value is closer to 1, and its test performance is better. The RMSE and MAE values of the IABC-KELM algorithm are the smallest, its RSD value is the largest, and its test performance is the best. This shows that the error of the IABC-KELM model is smaller, and its prediction accuracy is better than that of the PSO-KELM, GWO-KELM, and ABC-KELM algorithms. At the same time, the data trend in Table 3 shows that the IABC-KELM algorithm performs best: optimizing the KELM regularization parameter and the kernel function improves the prediction accuracy of the KELM model.
6.2.2. Comparison of Regression Data Set Simulation Experiment
In this section, we use 4 real regression data sets from the machine learning library of the University of California, Irvine, to compare the performance of the four algorithms. The names of the data sets are as follows: Auto MPG (MPG), Computer Hardware (CPU), Housing, and Servo. In the experiment, the data in each data set is randomly divided into a training set and a test set: 70% is used for training and the remaining 30% for testing. In order to reduce the influence of the large differences between variables, we normalize the data before the algorithms run; that is, the input variables are normalized to [-1, 1], and the output variables are normalized to [0, 1]. In all experiments, the number of algorithm iterations is 50, and the average of 50 experimental results is calculated. The experimental results of the four algorithms on the regression data sets are shown in Table 4.
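The min-max normalization used above can be sketched as follows; the raw `X` and `y` here are hypothetical stand-ins for a generic feature matrix and target vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_scale(a, lo, hi):
    """Linearly rescale each column of a into the interval [lo, hi]."""
    a = np.asarray(a, dtype=float)
    a_min, a_max = a.min(axis=0), a.max(axis=0)
    return lo + (a - a_min) * (hi - lo) / (a_max - a_min)

X = rng.uniform(0.0, 300.0, (100, 4))   # hypothetical raw feature matrix
y = rng.uniform(0.0, 50.0, 100)         # hypothetical raw target vector
X_scaled = minmax_scale(X, -1.0, 1.0)   # inputs normalized to [-1, 1]
y_scaled = minmax_scale(y, 0.0, 1.0)    # outputs normalized to [0, 1]
```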
From the results in Table 4, we can see that IABC-KELM obtained the smallest RMSE, the smallest MAE, and the largest RSD value in the four data set fitting experiments; its performance is the best and also the most stable. Judging from the results obtained with 50 iterations, the convergence of PSO-KELM, GWO-KELM, and ABC-KELM is slower, and their calculated results are also poorer. The accuracy of the fitting results calculated by the IABC-KELM algorithm proposed in this paper is the best. Therefore, across the different data sets, the algorithm proposed in this paper reaches the smallest RMSE. Taken together, the performance of IABC-KELM is superior.
6.2.3. Comparison of Classification Data Set Simulation Experiments
In this experiment, we again used the machine learning library of the University of California, Irvine. The names of the four real classification data sets are as follows: Blood Transfusion Service Center (Blood), E. coli, iris, and wine. As with the regression data sets, 70% of the experimental data is used as the training set and 30% as the test set. The input variables of the data sets are normalized to [-1, 1]. In the experiment, the number of iterations of the algorithms is gradually increased, and the accuracy and standard deviation (Std. Dev.) results of the four algorithms are shown in Table 5.
From the results in Table 5, the classification accuracy of the four algorithms on the Blood Transfusion Service Center (Blood) data set is above 80%, and their classification accuracy on E. coli, iris, and wine is above 90%. On the whole, among the four algorithms, the PSO-KELM algorithm has the lowest classification accuracy, the GWO-KELM algorithm is intermediate, and the ABC-KELM algorithm is higher. The IABC-KELM algorithm proposed in this paper has the best classification accuracy, which is 2.3%, 1.7%, and 1.3% higher than the average accuracies of the other three algorithms, respectively.
From the simulation results on the function test, regression, and classification data sets, the IABC-KELM algorithm proposed in this paper has the highest accuracy, the ABC-KELM algorithm is second, the GWO-KELM algorithm has poorer classification accuracy, and the PSO-KELM algorithm has the worst. It can be seen that the IABC-KELM algorithm proposed in this paper performs best in function testing, regression fitting, and data classification and finds the optimal solution fastest.
6.3. Comparative Analysis of Fault Diagnosis
At present, there are four main types of hardware faults in WSNs: impact faults, bias faults, short-circuit faults, and offset faults; this paper mainly studies these four common types. In order to obtain the output of the sensor components of a faulty node, acquiring a large amount of fault data through physical experiments takes time and cost; so, most of the existing literature uses Simulink simulation to obtain fault output data. We likewise use Simulink in MATLAB to simulate the hardware failure of the sensor node and use the results for the subsequent failure prediction. A mathematical model is used to describe the changes of the output voltage signal after the occurrence of the four common faults. Assume that the true value of the output voltage of a sensor node is $v(t)$ and that $n(t)$ is the measurement noise of the sensor node, so that the node has a measurement error conforming to a normal distribution. When the sensor node fails, the output voltage value can be expressed as $y(t) = \beta v(t) + \Delta + n(t)$, where the parameter $\Delta$ represents the offset value and the parameter $\beta$ represents the scaling factor. According to this mathematical model, four kinds of fault expressions are set, and Simulink is used to simulate the output signal of the sensor node.
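As a rough illustration of this fault model outside Simulink, the following Python sketch injects the four fault types into a synthetic voltage series. The fault magnitudes, spike interval, noise level, and signal shape are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fault(v, fault, t_fault=120, dt=1.0):
    """Inject a fault into the true voltage series v after t_fault seconds,
    following y = beta * v + delta + noise from the model above."""
    y = v + rng.normal(0.0, 0.01, v.size)          # normal measurement noise
    k = int(t_fault / dt)                          # index of first faulty sample
    if fault == "short":                           # output collapses toward zero
        y[k:] = rng.normal(0.0, 0.01, v.size - k)
    elif fault == "impact":                        # transient spikes
        y[k::20] += 2.0
    elif fault == "bias":                          # constant additive offset
        y[k:] += 0.5
    elif fault == "offset":                        # scaling-factor change
        y[k:] = 1.5 * v[k:] + rng.normal(0.0, 0.01, v.size - k)
    return y

t = np.arange(0.0, 200.0, 1.0)                     # 200 s simulation window
v_true = 2.0 + 0.5 * np.sin(0.05 * t)              # hypothetical true voltage
bias_sample = simulate_fault(v_true, "bias")
```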
Four kinds of hardware fault simulations are shown in Figure 7. The simulation time is set to 200 s, and the occurrence time of the four types of faults is set to 120 s. In order to simulate the actual signal more realistically, random noise is added to the signal collected by the sensor. The failure of a sensor node causes the signal to change suddenly, and the sudden change appears as a change in the mean value. Through simulation, the output signals of the sensing node under the four types of faults (impact fault, bias fault, short-circuit fault, and offset fault) are obtained. The simulation module is run 50 times, and the output of each run is sampled to obtain samples of the four kinds of node hardware faults: short-circuit fault, impact fault, bias fault, and offset fault. Figure 7 shows the simulated signal outputs of the four fault types. There are 50 groups of data samples for each type of failure, 200 groups in total. Each sample is composed of 10 sampled values; for each fault type, 40 groups are used for training and 10 groups for fault prediction.

(a) Short-circuit fault

(b) Impact fault

(c) Bias fault

(d) Offset fault
The steps for establishing the node hardware fault diagnosis model are as follows: first, preprocess the collected 200 sets of node fault data and divide them into a training sample set and a test sample set. Then, use the IABC-KELM algorithm to train on the training sample set and obtain the trained IABC-KELM fault prediction model. Finally, the trained model is used to predict the data of the test sample set, classifying the hardware failures of the sensor nodes. The hardware fault diagnosis results of the wireless sensor network nodes for the four algorithms are shown in Figure 8.

(a) PSO-KELM

(b) GWO-KELM

(c) ABC-KELM

(d) IABC-KELM
Figure 8 compares the classification results on the test set; it contains 60 test set samples, represented by circles, together with the prediction results of the four classification models. The PSO-KELM and GWO-KELM algorithms have good generalization performance; unlike a conventional neural network, the weights and thresholds of the KELM model are determined from the training samples in one step, so although the parameter search is iterative, the amount of calculation is small. It can be seen that all four algorithms obtain good prediction results. The final prediction accuracy of PSO-KELM is 93.3%, that of the GWO-KELM algorithm is 95%, and that of ABC-KELM is 96.7%. The final accuracy of the IABC-KELM algorithm proposed in this paper is 98.3%.
In order to give a more vivid comparison of the fault diagnosis results of the four algorithms, the specific fault diagnosis results of 60 test set samples are given. The specific fault diagnosis comparison results are shown in Figure 9. Figure 10 shows the comparison of the accuracy of the wireless sensor network fault diagnosis of the four algorithms.


It can be seen from Figure 9 that, among the 60 test set fault samples, the fault diagnosis method based on PSO-KELM misclassifies 4 samples, the method based on GWO-KELM misclassifies 3, the method based on ABC-KELM misclassifies 2, and the method based on IABC-KELM misclassifies only 1. The algorithm proposed in this paper has the lowest fault diagnosis error rate. It can be seen from Figure 10 that the fault diagnosis accuracies of the four algorithms are all above 90%. Among them, the fault diagnosis method based on PSO-KELM has the lowest average accuracy, the method based on GWO-KELM is the second lowest, the method based on ABC-KELM is higher, and the method based on IABC-KELM proposed in this paper is the highest.
In order to more comprehensively reflect the performance of the four algorithms, the simulation time comparison of the four algorithms is given, as shown in Figure 11.

It can be seen from Figure 11 that whether the initial population size of the intelligent algorithms is 30, 40, or 50, the simulation time of the four algorithms is about 2 s, with little difference, and all are within an acceptable fault diagnosis time range.
Based on the above experiments, compared with the other three fault prediction and classification methods, the IABC-KELM fault diagnosis method proposed in this paper has the highest diagnosis accuracy and the best overall performance. The proposed IABC-KELM is well suited to the hardware fault diagnosis of heterogeneous wireless sensor network nodes.
7. Conclusion
The problem of node fault diagnosis in WSNs is an urgent problem among the key technologies of HWSNs. This paper first analyzes the fault classification and fault causes of HWSN nodes. The regularization coefficient $C$ and the kernel parameter $s$ in KELM are model parameters that affect the accuracy of the KELM fault diagnosis model. A fault diagnosis method for HWSN nodes based on a kernel extreme learning machine optimized by an improved artificial bee colony algorithm (IABC-KELM) is proposed. Finally, the effect is verified by experiments. The experiments show that the improved KELM fault diagnosis method achieves better fault diagnosis accuracy than the extreme learning machine and neural network approaches. The proposed algorithm improves the accuracy of sensor node hardware fault diagnosis and can be effectively applied to the node hardware fault diagnosis of HWSNs.
In future work, we will study the failures of other node types in wireless sensor networks and explore new swarm intelligence optimization algorithms and machine learning methods.
Data Availability
The data used to support the findings of this study are included in the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
This work was supported in part by the Natural Science Foundation of Hubei Province under Grant 2020CFB304, the Open Fund of Distinctive Disciplines in Hubei University of Arts and Science under Grant XK2020047, and the Educational Science Planning Issue of Hubei Province under Grant 2020GA057, and in part by the Talent Introduction Project of Oujiang College of Wenzhou University.