Abstract

A wireless sensor network integrates sensor technology, computer technology, information processing technology, and communication technology. This paper studies how to analyze the routing optimization of wireless sensor networks based on deep learning and describes the neural network. It formulates the routing optimization problem based on dynamic programming for wireless sensor networks, elaborates on its concept and related algorithms, and designs and analyzes a wireless sensor network optimization case. A comparative analysis of five algorithms in computer simulation shows that, although the average network delay of DPER reached 0.47 s, the algorithm can effectively prolong the life cycle of the network. The DPER algorithm not only improves the network lifetime but also improves the network energy utilization rate, shortens the average path length of the network, and reduces the standard deviation of the remaining energy of the nodes.

1. Introduction

Wireless sensor networks have the advantages of low cost, good fault tolerance, rapid deployment, stable network support, and long-term monitoring, and they have broad application prospects. However, due to the limited energy, storage, and processing capabilities of sensor nodes, wireless sensor networks present many challenging research topics for scientific and technological workers.

Extending the lifetime of wireless sensor networks has always been a challenging and critical issue. In large-scale monitoring scenarios such as deserts, forests, and canyons, the acquisition nodes in wireless sensor networks are usually powered by batteries. When a battery is depleted, the node will fail if the battery is not replaced or recharged in time. Because the environments in which wireless sensor networks are deployed are complex, replacing batteries manually in a timely manner is expensive. Therefore, it is necessary to optimize the nodes of the wireless sensor network in a timely manner, establish a reasonable routing scheme, reduce the power overhead in the network, and thereby address the problem of limited power in wireless sensor networks.

Through computer simulation, the DPER algorithm is compared with two other classical algorithms and two improved algorithms, and the effectiveness of the algorithm's transmission is verified. The innovations of this paper are as follows: (1) this paper combines deep learning with wireless sensor networks and introduces the theories and related methods of both in detail. (2) For route optimization, five different algorithms are compared. By evaluating the experimental results and comparing the performance of the five methods, it is concluded that the research algorithm of the wireless sensor network optimization method based on dynamic programming is a feasible method.

2. Related Work

With technological advances in processors, communications, and integrated low-power computing devices, wireless sensor networks (WSNs) are emerging. Guleria and Verma proposed an energy-efficient load-balancing cluster routing protocol using Ant Colony Optimization, which ultimately improved the network lifetime of WSNs [1]. The primary methodology of Belkhira et al. was to examine the issue of network lifetime by introducing a new rule that combines a node's residual energy with its reachability to determine the ideal number of MPs [2]. Jain proposed a two-layer WSN architecture for routing and coverage hole detection and recovery based on dynamic clustering [3]. Kumar proposed a GA-based routing protocol for WBAN that was efficient in terms of energy efficiency and network lifetime [4]. Ramakrishnan and Shyry proposed an improved intelligent ant colony optimization algorithm that removed the concept of stagnation when identifying the best path for a packet from source to destination [5]. However, research at the level of routing optimization remained relatively shallow.

Deep learning is a technology based on artificial neural networks that has emerged in recent years. Kermany et al. developed a deep learning diagnostic tool designed to screen patients for common treatable blinding retinal diseases [6]. Combining deep learning with hyperspectral data classification, Chen et al. proposed a new classification method that exploits spatially dominant information [7]. Ravi et al. gave a comprehensive state-of-the-art review of research on the use of deep learning in health informatics, providing a critical analysis of the technology's relative merits, likely pitfalls, and future prospects [8]. Young et al. studied important models and methods related to deep learning used in many NLP tasks and traced their evolution [9]. These algorithms solve some problems to a certain extent, but their accuracy and timeliness need to be improved.

3. Wireless Sensor Network Optimization Method

3.1. Wireless Sensor Networks
3.1.1. Development

The acquisition of data information is the first step for information technology products to reach the market. Without it, there would be no information acquisition, transmission, storage, processing, or application, and hence no informatization. As the most basic means of obtaining information, the wireless sensor functions much like the five senses of a human being. It can directly perceive the state of external information and convert the physical quantities detected by the sensor (pressure, temperature, light, humidity, and so on) into an electrical output that bears a definite relationship to the measured quantity, so that the information can be transmitted [10–13]. The development of wireless sensor networks has successively gone through point-to-point direct connection, interface connection, bus connection, network connection, and wireless network connection [14, 15], as shown in Figure 1.

Sensor technology no longer relies on a single node as it did in the early days but develops at a higher, network level. This compensates for the limitations of a single node, for example, through the heterogeneity of node functions, the expansion of the spatial extent of information perception, the integration and processing of the collected information across the network, greater flexibility of user manipulation, and improved network reliability through redundant nodes [12, 16, 17].

3.1.2. Composition

In wireless sensor networks, sensor nodes can be deployed in the monitoring area by manual placement, aircraft seeding, or rocket launching, and the network is then formed in a self-organizing manner. The sensor network adopts a random deployment method, and the deployment process is shown in Figure 2.

A large number of nodes are deployed randomly and densely in the monitoring area, forming a network through self-organization. After a node preprocesses the detected information, it transmits the data to the sink node in a multihop relay mode. Then, through satellite communication, the Internet, mobile networks, and other channels, the data finally reach the management center where the user is located. Users can also manage, configure, query, and publish monitoring tasks through the management center and collect the data returned by the WSN [18]. The overall network structure diagram is shown in Figure 3.

The sensor nodes that constitute a wireless sensor network are micro embedded systems, as shown in Figure 4. The detection unit is responsible for data collection; the processing unit is responsible for data processing and controls the operation of the entire node, including processing locally collected data and data sent by other nodes; the communication unit is responsible for wireless communication with other nodes, exchanging control information and sending and receiving data; and the power supply device provides the power required for the operation of the node. How to reduce the energy consumption of a node, especially the energy consumed by node communication, so as to maximize the network lifetime remains a major issue for wireless sensor networks.

Wireless sensor networks have the following characteristics compared with existing wireless networks:

(1) The power, communication, computing, and storage capabilities of sensor nodes are very limited; (2) the deployment scale is large and the density is high; (3) the dynamics are strong; (4) it is data-centric; (5) it is reliable; and (6) it is application related.

3.1.3. Introduction of Typical Wireless Sensor Network Routing Protocols

Wireless sensor network routing protocols reflect the application characteristics of sensor networks from different angles. Different applications require different routing protocols, but the overall idea is to save energy [19–21]. The following analyzes and compares typical routing protocols from the perspective of protocol characteristics, as shown in Table 1. The comparison covers parameters such as whether the protocol is flat or hierarchical, whether it saves energy, and whether it finds the optimal route.

3.2. Deep Learning
3.2.1. Concept

(1) Deep Learning. In 1959, researchers proposed enabling computers to learn without being explicitly programmed. From the early stages of pattern recognition, researchers aimed to replace hand-designed features with trainable multilayer networks. Multilayer architectures can compute gradients using the backpropagation procedure. The training landscape contains a large number of saddle points, where the gradient is zero and the surface curves upward in most dimensions. From the early 2010s, DNN-based applications flourished, including Microsoft's speech recognition system in 2011 and the AlexNet image recognition system in 2012.

(2) Deep Reinforcement Learning. An RL agent aims to learn from the environment and take actions that maximize the long-term cumulative reward. The environment is modeled as a Markov decision process (MDP). The goal of an RL algorithm is to find the optimal policy. To find the optimal policy, the key is to determine the value of each state-action pair, also known as the Q function [22, 23].
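As a minimal illustrative sketch (not the algorithm proposed in this paper), a tabular Q-learning loop estimates the Q function by repeatedly applying the Bellman update; the state and action counts and the hyperparameters below are assumptions chosen only for illustration.

    import numpy as np

    # Tabular Q-learning sketch: Q[s, a] estimates the expected cumulative discounted
    # reward of taking action a in state s (all sizes and constants are illustrative).
    n_states, n_actions = 10, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.95, 0.1    # learning rate, discount factor, exploration rate

    def choose_action(s, rng=np.random.default_rng(0)):
        # epsilon-greedy policy derived from the current Q function
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    def update(s, a, r, s_next):
        # move Q(s, a) toward the target r + gamma * max_a' Q(s', a')
        target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])

In practice, the optimal policy is then read off by taking the action with the largest Q value in each state.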

3.2.2. Development

Many core concepts in deep learning, such as distributed representation, back propagation (BP), and CNN, appeared decades ago. However, DL has only achieved explosive development in recent years, thanks to breakthroughs in its three core elements:

(1) Technological progress: both experiments and theory have shown that the widely used sigmoid function causes the gradient to vanish during model training, and this phenomenon is more serious in deeper models. The Rectified Linear Unit (ReLU) greatly alleviates this problem; its mathematical expression is simpler and its computation faster, which speeds up training [24] (a minimal sketch contrasting the two activations follows this list). In addition, some simple but extremely effective techniques and algorithms in DL, such as mini-batch gradient descent, batch normalization, and adaptive optimization algorithms, have also greatly improved DL.

(2) Data explosion: the era of information explosion has brought big data, and the emergence of massive data has laid a data foundation for the development of DL. When the data set is small, some shallow models, especially SVM, may achieve better performance; a deep model may have too many training parameters relative to a small training sample, resulting in serious overfitting. DL is data driven: as shown in Figure 5, DL significantly outperforms other methods only when the training sample size is very large. On the other hand, DL relies less on manual intervention, and its high degree of automation is better suited to the processing and application of large-scale data.

(3) Breakthrough in computing power: compared with a few decades ago, the computing power of hardware, especially GPUs with massively parallel computing capability, has been greatly improved. Specialized hardware such as GPUs can greatly accelerate the computation of models with highly parallel structures. Fast computation speeds up the feedback of experimental results, shortens researchers' research cycles, and promotes the progress of algorithms. The development of GPUs and other hardware has also made the deployment and execution of DL more energy efficient, reducing the commercial cost of DL [25].
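The following sketch contrasts the gradients of the sigmoid and ReLU activations mentioned in item (1); it is a generic illustration, not code from this paper.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        return np.maximum(0.0, x)

    x = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])
    # The sigmoid gradient saturates toward 0 for large |x| (the vanishing gradient problem),
    # while the ReLU gradient stays at 1 for all positive inputs.
    sigmoid_grad = sigmoid(x) * (1.0 - sigmoid(x))
    relu_grad = (x > 0).astype(float)
    print(sigmoid_grad)   # values near 0 at the extremes
    print(relu_grad)      # [0. 0. 0. 1. 1.]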

3.2.3. Artificial Neural Network

First, consider a 3-layer neural network model. It is assumed that the input vector of the neural network is given, that the hidden layer contains a neurons, and that the output layer contains b neurons. Then there are

Here, the output of the neural network is given by the above expression, f(·) is the activation function, and the sigmoid function is commonly used, namely f(x) = 1/(1 + e^(−x)).

Neural networks can fit very complex functions, but the solution is more difficult. The structure of the neural network is shown in Figure 6.
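Since the paper's own notation is not reproduced here, the following is a hedged sketch of the forward pass of such a 3-layer network with a hidden neurons, b output neurons, and the sigmoid activation; all weight names and sizes are assumptions for illustration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_in, a, b = 5, 8, 3                      # input size, hidden neurons, output neurons
    W1, b1 = rng.normal(size=(a, n_in)), np.zeros(a)
    W2, b2 = rng.normal(size=(b, a)), np.zeros(b)

    x = rng.normal(size=n_in)                 # input vector
    h = sigmoid(W1 @ x + b1)                  # hidden-layer output
    y = sigmoid(W2 @ h + b2)                  # network output
    print(y)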

3.2.4. Convolutional Neural Network

CNNs were first proposed in the 1980s. The idea of CNNs was inspired by research on the cat visual system, but because of the limited computing resources of the time, development could go no further. One of the three deep learning giants later proposed the standard LeNet-5 network structure, trained by gradient descent, which achieved good experimental results and made training deep neural networks feasible. The essence of a convolutional neural network is to construct multiple interconnected convolution kernels, which extract data features and topological features. By applying pooling (merging) operations on the input side, the structure hidden in the data is captured [26, 27], as shown in Figure 7.

As the number of layers increases, the derived features become more and more abstract, and these abstract features are finally merged through a fully connected layer to address the classification problem. The output is handled by a Softmax or sigmoid activation function [28]. The Softmax classifier is defined as

The prediction function can be split into two steps, taking row 1 of Z and multiplying that row with b:

All are computed, and the Softmax function is applied to obtain normalized probabilities:

Softmax regression is an extension of logistic regression to the multiclass classification problem. In logistic regression, the training sample set consists of e labeled samples, where the input features are given. The dimension of the feature vector m is y + 1. The hypothesis function is assumed to be

The model parameters ϕ will be trained to minimize the cost function:
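As a generic illustration of the Softmax classifier and the cross-entropy cost discussed above (the notation is assumed, not taken from the paper's formulas), the two can be written as follows.

    import numpy as np

    def softmax(z):
        # subtract the row maximum for numerical stability, then normalize
        z = z - np.max(z, axis=-1, keepdims=True)
        e = np.exp(z)
        return e / np.sum(e, axis=-1, keepdims=True)

    def cross_entropy(probs, labels):
        # labels are integer class indices; average negative log-likelihood over the samples
        n = probs.shape[0]
        return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

    Z = np.array([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])   # raw class scores for 2 samples, 3 classes
    P = softmax(Z)
    print(P)
    print(cross_entropy(P, np.array([0, 1])))

At test time, the predicted class is simply the index of the largest normalized probability.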

In a convolutional network, the convolution operation involves two arguments: the first is the input, and the second is the kernel function (the convolution kernel).

The output unit of the convolution operation is called a feature map. Taking a two-dimensional image Z as input and a two-dimensional convolution kernel P, the convolution of Z and P is

Mathematically, convolution is commutative because the convolution kernel is flipped relative to the input. However, most existing neural network libraries implement the convolution operation as a cross-correlation, which does not flip the kernel, and both operations are commonly called convolution. The cross-correlation function is expressed as
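A minimal sketch of the two operations (an illustration, not the paper's implementation): cross-correlation slides the unflipped kernel over the input, while true convolution flips the kernel first.

    import numpy as np

    def cross_correlate2d(Z, P):
        # 'valid' cross-correlation: the kernel is not flipped
        H, W = Z.shape
        h, w = P.shape
        out = np.zeros((H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(Z[i:i + h, j:j + w] * P)
        return out

    def convolve2d(Z, P):
        # true convolution = cross-correlation with the kernel flipped in both axes
        return cross_correlate2d(Z, np.flip(P))

    Z = np.arange(16, dtype=float).reshape(4, 4)
    P = np.array([[1.0, 0.0], [0.0, -1.0]])
    print(cross_correlate2d(Z, P))
    print(convolve2d(Z, P))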

For training sets containing samples, the loss function can be expressed using cross entropy as

At test time, the predicted value of the convolutional neural network is

The gradient of the convolutional layer is calculated as follows. It is assumed that each convolutional layer μ is followed by a subsampling layer, and the sensitivity of each neuron in layer μ is required (the sensitivity is the derivative of the error with respect to the bias b). The sensitivities of the corresponding elements of all neurons in the next layer are summed, the result is multiplied by the derivative of the activation function evaluated at the activation input of the current layer μ, and this yields the sensitivity of the current layer. The downsampling "weights" of this layer are all defined as a constant α, so the result of the previous step simply needs to be scaled by α to obtain the sensitivity. The same calculation is performed for each map j in the convolutional layer, matching the result to the corresponding feature map of the downsampling layer:

An efficient algorithm for computing the above function is to use the Kronecker product:

With the sensitivity feature map known, the gradient of the bias can be calculated by summing over all elements of the sensitivity map:

Finally, the weight gradients are calculated using backpropagation. Because the same weights are shared among a large number of connections, the gradients over all connections sharing a given weight must be summed, and the bias value is calculated using formula (15): each patch of the input region is multiplied elementwise by the kernel during the convolution to compute the corresponding element of the convolutional feature map. Formula (16) is used in MATLAB to calculate this term:
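A hedged sketch of the Kronecker-product step described above: the sensitivity map of the subsampling layer is expanded back to the size of the convolutional layer by spreading each value over its pooling block. The 2 × 2 block size and the value of α are assumptions for illustration.

    import numpy as np

    delta_pool = np.array([[0.2, -0.1],
                           [0.4,  0.3]])             # sensitivities of the subsampling layer
    alpha = 0.25                                     # shared downsampling "weight" of the layer
    block = np.ones((2, 2))                          # pooling block size
    delta_conv = alpha * np.kron(delta_pool, block)  # each sensitivity is spread over its 2x2 block
    # delta_conv is then multiplied elementwise by the derivative of the activation function
    # at the convolutional layer's inputs to obtain that layer's sensitivities.
    print(delta_conv)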

A downsampling layer performs downsampling on the input maps. If there are H input maps, there must be H output maps, although the output maps may be slightly smaller. This can be expressed as formula (17):

As mentioned earlier, a feature map is obtained by passing the previous layer's maps through the kernel and the activation function, and the kernel can be trained and learned. An output feature map may combine multiple input feature maps; the relationship between the convolutional layer feature maps and the activation function is generally expressed by formula (18):

In the above formula, j indexes the feature maps of layer l.
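As an illustrative sketch of the downsampling operation in formula (17) (the 2 × 2 window and mean pooling are assumptions, since the paper does not fix them here):

    import numpy as np

    def mean_pool_2x2(X):
        # average every non-overlapping 2x2 block; the output map is half the size of the input
        H, W = X.shape
        return X.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

    X = np.arange(16, dtype=float).reshape(4, 4)     # one input map
    print(mean_pool_2x2(X))                          # one (smaller) output map, as described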

Common mathematical expressions of activation functions are given in Table 2.

4. Network Optimization Experiment of Wireless Sensor Network Based on Dynamic Programming

4.1. Construction of Simulation Environment

This paper chooses MATLAB 7.0 as the experimental simulation platform. In addition to its excellent numerical computing capabilities, the platform provides professional text processing, symbolic computation, implementation control, and visual modeling and simulation. The performance of the network algorithm can also be evaluated through experimental simulation on this platform.

In this simulation, the nodes were distributed over the experimental area by a random function. The transmission radius of the sensor nodes was determined according to the transmission radius of an actual network. The initial energy of each node in the network could either be chosen randomly or set to a fixed value; the source node and the sink node were designated in advance and recorded as the node with the smallest number and the node with the largest number, respectively; when node data were transmitted, node congestion and data queuing were ignored (treated as 0).

The simulated transmission area of the wireless sensor network is a 100 m × 100 m rectangular region with 50–130 sensor nodes randomly scattered in it. The sink node was located in the upper right corner of the area, and its receiving and sending range and energy were not limited. The transmission radius of the sensor nodes was R, and the nodes could detect and know their own geographic location information. The source node was a node in the trigger area, which could change according to the experimental needs. The specific delay simulation parameter settings are shown in Table 3:
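The following sketch reproduces this deployment in the spirit of the description above (it is not the paper's simulation code): nodes are scattered uniformly at random in a 100 m × 100 m area, the sink is placed in the upper right corner, and two nodes are neighbors if their distance is at most the transmission radius R. The node count, random seed, and R = 30 m are assumptions consistent with the ranges given above.

    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes, R = 100, 30.0
    nodes = rng.uniform(0.0, 100.0, size=(n_nodes, 2))   # random sensor node coordinates
    sink = np.array([[100.0, 100.0]])                    # sink in the upper right corner
    pts = np.vstack([nodes, sink])

    # adjacency matrix: two distinct nodes are neighbors if their distance is at most R
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adjacency = (d <= R) & (d > 0.0)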

In the wireless sensor network scenario, a random distribution was generated to simulate the random seeding of sensors and determine the locations of the sensor nodes. The geographic coordinates of the sink node were generated at the upper right corner of the network topology, and neighbor nodes within communication range were connected. The network topology generation diagram and the network neighbor node generation diagram are shown in Figure 8:

For wireless sensor networks, a good routing topology has a very obvious impact on network performance. In the generation graph of the dynamic programming-based wireless sensor network optimization method, the network states are partitioned by finding the minimum number of hops from each node to the sink node, and a global decision is made for the network in each state. Figure 9 shows the minimum-hop generation graphs for a 100 m × 100 m network topology with 100 network nodes and with 80 network nodes, with a transmission radius of R = 30 m:
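A small sketch of the hop-labeling step described above: the minimum number of hops from every node to the sink is obtained by a breadth-first search over the neighbor graph (for example, the adjacency matrix built in the previous deployment sketch). The example topology below is hypothetical.

    from collections import deque
    import numpy as np

    def min_hops_to_sink(adjacency, sink_index):
        # breadth-first search from the sink over the boolean adjacency matrix
        n = adjacency.shape[0]
        hops = np.full(n, np.inf)
        hops[sink_index] = 0
        queue = deque([sink_index])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adjacency[u]):
                if hops[v] == np.inf:
                    hops[v] = hops[u] + 1
                    queue.append(v)
        return hops

    # Example: a small line topology, node 0 -- node 1 -- node 2 (sink)
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=bool)
    print(min_hops_to_sink(A, sink_index=2))   # [2. 1. 0.]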

The upper left corner of each node in the figure is marked with the minimum number of hops from that node to the sink node. In Figure 9(a), the minimum number of hops from source to sink is 7, and in Figure 9(b), the minimum number of hops from source to sink is 8. It can be seen from the figure that when the number of nodes in the network topology is larger, the average number of hops in the network is smaller, and vice versa.

4.2. Simulation Results and Experimental Data

The energy consumption balance of the network refers to measuring the energy consumption of every node at a particular time. It reflects, from the perspective of the whole network, how each node's energy changes relative to its initial energy and how balanced the energy consumption is, and it also reflects whether the algorithm can prolong the network lifetime.

The smaller the energy consumption balance of the nodes in the network, the more uniform the overall network consumption, and the network lifetime will not be shortened by the energy exhaustion of individual edge nodes. Otherwise, the algorithm performs poorly at extending the network lifetime.

The energy usage of the network is analyzed by the energy usage balance, and the energy usage balance of the nodes in the network is defined as

In formula (19), m is the total number of nodes in the wireless sensor network, is the initial energy of the nodes in the network, and is the remaining energy of node at time t. The remaining energy is computed as , where and are the numbers of times the node receives and sends data packets, respectively, and , , , and represent the energy consumed by the node each time it receives data, sends data, and receives or sends routing control information. and are the numbers of times the node receives and sends routing control messages, and is the average remaining energy of all nodes in the network at time t. The energy usage balance is essentially the standard deviation of the remaining energy of all nodes in the network at time t; it reflects the uniformity of energy consumption at each node over the course of the network's operation and, further, the balance of the energy load across the nodes in the network.
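A plausible reading of this definition (an assumption, since formula (19) is not reproduced here) is that the energy usage balance is the standard deviation of the nodes' remaining energy at time t, with each node's remaining energy obtained from the bookkeeping described above. All numerical values in the sketch are illustrative.

    import numpy as np

    def energy_balance(remaining):
        # standard deviation of the remaining energy of all m nodes at time t
        m = remaining.size
        return np.sqrt(np.sum((remaining - remaining.mean()) ** 2) / m)

    E0 = 2.0                                               # assumed initial energy per node (J)
    rx = np.array([120, 80, 200])                          # packets received per node
    tx = np.array([100, 90, 150])                          # packets sent per node
    e_rx, e_tx = 0.0005, 0.0008                            # assumed energy per receive / send (J)
    remaining = E0 - (rx * e_rx + tx * e_tx)               # routing-control terms omitted for brevity
    print(energy_balance(remaining))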

Figure 10(a) shows the change of the network energy balance curve of the nodes when the simulation runs until the first dead node appears. It can be clearly seen from the figure that the energy balance performance of the dynamic programming-based wireless sensor network energy-saving routing algorithm (DPER) proposed in this paper is better than that of the MAODV, AODV, MGPSR, and GPSR routing protocols. This is because, compared with the two improved routing algorithms, the routing protocol proposed in this paper takes a global view of the nodes selected each time, especially when the energy consumption of nodes changes significantly. Moreover, this paper combines multihop transmission with direct transmission, which significantly increases the probability that surrounding nodes, especially nodes around the sink, are selected and reduces the energy consumption balance (that is, improves the uniformity of energy consumption) of nodes in the network. Therefore, compared with the other algorithms, the dynamic programming-based energy-saving routing algorithm for wireless sensor networks can better achieve balanced consumption of network energy.

Figure 10(b) is a comparison graph of the average energy consumption of a single route from the source node to the sink node for the dynamic programming-based energy-efficient wireless network routing algorithm and the four other algorithms. As can be seen from the figure, the single-route energy consumption of the dynamic programming-based energy-saving routing algorithm is significantly lower than that of the other algorithms. Among the other four algorithms, the AODV routing algorithm consumes the most energy, as seen from the shape of its curve. The main reason is that the algorithm relies on near-flooding in the route discovery stage, resulting in large energy consumption, and the AODV routing algorithm does not consider energy consumption. It can also be seen that the improved routing algorithms MAODV and MGPSR show obvious improvements over the original AODV and GPSR algorithms, which helps minimize the energy consumption of nodes in the network.

Figure 11(a) compares the network delay of the dynamic programming-based energy-saving routing algorithm for wireless sensor networks with the other four algorithms. As can be seen from the figure, compared with GPSR, MGPSR, AODV, and MAODV, the average network delay of DPER is worse. This is because DPER considers more factors and weighs the balanced energy consumption of the network globally through the dynamic programming algorithm so that the energy variance of the network is as small as possible, which complicates the network initialization and route discovery processes and takes longer to compute. The benefit, however, is that it can extend the life cycle of the network.

The analysis also shows that the average end-to-end delay when running the AODV routing protocol is significantly greater than that when running the GPSR and MGPSR routing protocols. This is because, in the AODV routing protocol, the node negotiates each route selection through a three-step handshake (a flooding routing method) and forwards data along the route that requires data forwarding. The route selected by the protocol may not be optimal, and the routing path may not be the shortest path to the sink node. The GPSR and MGPSR routing protocols always select the path with the shortest distance to the sink node. Therefore, the average end-to-end delay when running the AODV routing protocol is greater than that of the GPSR and MGPSR routing protocols.

Figure 11(b) is a comparison diagram of the network energy consumption with a network topology size of 100 m × 100 m and 300 nodes. It can be seen from the figure that, as the number of nodes in the network topology increases, the energy consumption of the network remains relatively uniform, which is consistent with the trend in Figure 10(a).

5. Discussion

This article mainly analyzes how to research wireless sensor network routing optimization based on deep learning. The concepts and algorithms of deep learning are expounded, artificial neural networks are studied, convolutional neural networks are explored, and the applicability of DPER in wireless sensor networks is analyzed through experiments.

The experimental simulation and description of the research algorithm of the dynamic programming-based wireless sensor network optimization method are the main focus. First, the simulation environment is described in detail, and the experimental steps are analyzed step by step. Then, with the help of MATLAB 7.0, the experimental simulation is carried out. Through experimental simulation of the energy-aggregation-based GPSR routing algorithm (MGPSR), the directional AODV routing algorithm (MAODV), and the research algorithm of the dynamic programming-based wireless sensor network optimization method (DPER), the relevant parameters are adjusted, the routing algorithm is optimized, and parameters of the wireless sensor network such as network lifetime, energy consumption, average network hops, and network delay are then compared [29].

The experimental analysis in this paper shows clearly that the research algorithm of the dynamic programming-based wireless sensor network optimization method is a feasible method. It is helpful for theoretical research on energy-efficient routing algorithms for various wireless sensor networks. The experimental results show that this method is indeed an effective routing algorithm based on reasonable inferences.

6. Conclusions

A wireless sensor network deployed in the field or on each floor of a large building works unattended for a long time, so the time that the wireless sensor network can work, that is, the lifespan of the wireless sensor network, is limited. Therefore, it is very necessary to use existing science and technology to extend the life of the wireless sensor network as much as possible, and even to keep it working continuously without dying from energy exhaustion. Routing in traditional networks hardly needs to consider the energy of nodes, but in wireless sensor networks the energy efficiency of a routing algorithm is often more important than finding the shortest path. This paper mainly studied the residual energy and energy uniformity of nodes and proposed an energy-aware routing algorithm for wireless sensor networks based on dynamic programming. In order to better prolong the lifespan of the network, the idea of dynamic programming is used to find an efficient way to transmit the data as far as possible while ensuring that the energy consumption of all network nodes is balanced. Due to the limitations of research time, research conditions, and the authors' academic level, this paper inevitably has some shortcomings, which we hope to improve in further work.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by National Key R&D Program of China (no. 2019YFB1600400).