Abstract

While modern communication, embedded-system, and sensor technologies are now in widespread use, the wireless sensor network (WSN), composed of micro distributed sensors, is favored for its communication, real-time computing, and sensing capabilities. Because GPS positioning cannot meet the needs of indoor positioning, WSN-based positioning has become a better option for indoor localization. In the field of WSN indoor positioning, how to cope with the impact of non-line-of-sight (NLOS) error on positioning remains an open problem. To mitigate the influence of NLOS errors, a Neural Network Modified Multiple Filter Localization (NNMML) algorithm is proposed in this paper. The algorithm first distinguishes LOS and NLOS cases. Then, the Kalman filter (KF) and the unscented Kalman filter (UKF) are applied in the LOS case and the NLOS case, respectively, and the NLOS data are grouped appropriately before filtering. Finally, the positioning results after multiple filtering are corrected by a neural network. Simulation results show that the localization accuracy of NNMML is better than that of KF, EKF, UKF, and the version without neural network correction, and that NNMML is particularly suitable for situations with large NLOS errors.

1. Introduction

Within the framework of modern Internet of Things (IoT) technology, the wireless sensor network (WSN) plays a key role. The technology relies on a network built from many sensors: the network collects the required information through the sensors, processes the data with embedded and information-distribution technology, and then transmits the data to upper-level devices. At the practical application level, positioning based on such a network has clear advantages over other positioning means. Satellite positioning meets the precision requirements of outdoor positioning well, but it performs poorly indoors. Therefore, WSN positioning [1], which is light and small, inexpensive, low in energy consumption, and flexible in topology, is a cost-effective choice for indoor positioning scenarios with higher accuracy requirements.

There are two kinds of nodes in WSN positioning: the mobile node, whose coordinates and motion information are unknown, and the beacon node, whose coordinates are known. By transmitting signals, the distance to the mobile node can be obtained by means of time of arrival (TOA) [2, 3], time difference of arrival (TDOA) [4], angle of arrival (AOA) [5], received signal strength indication (RSSI) [6, 7], and other methods, so as to achieve localization. In actual signal transmission, the ideal channel assumed in theoretical derivations does not exist. Even when the signal travels in a straight line, there is still a certain error, called the line-of-sight (LOS) error. Moreover, obstacles may block the straight LOS path, forcing the signal to travel along a non-line-of-sight path through refraction or reflection; the resulting error is known as the non-line-of-sight (NLOS) error [8, 9]. Therefore, it is worth studying how to effectively suppress the NLOS error in TOA, TDOA, AOA, and other measurements in practical algorithms.

The non-line-of-sight error, unlike the normally distributed line-of-sight error, is a positive error of indefinite form that makes the measured value greater than the actual value. If measurements containing such errors are not handled properly, the positioning performance degrades significantly. Although a variety of methods have been proposed to identify and mitigate NLOS errors, such as those reviewed in Related Work [10–25], how to reduce and weaken the impact of NLOS error on positioning is still an open problem.

In this paper, a Neural Network Modified Multiple Filter Localization (NNMML) algorithm is proposed. The proposed algorithm has the following improvements:
(1) Through multiple filtering after FCM grouping, its localization accuracy is better than that of the KF algorithm in the LOS case and the UKF algorithm in the NLOS case.
(2) Through parameter correction after each round of Kalman filtering, the error caused by deviation of the cyclic parameters is alleviated.
(3) Compared with traditional hybrid algorithms, the subsequent neural network correction reduces the likelihood that the estimated trajectory deviates from the real motion and reduces the error at most sampling moments.
(4) Because trajectory estimation is carried out not only by the least squares method but also by BP neural network training, the possibility of misjudgment is reduced.

2. Related Work

At present, many researchers have proposed methods to identify and reduce the error caused by NLOS, but no generally optimal solution exists yet. In [10], Liao and Chen proposed an interacting multiple model based on Kalman filtering. This model has good extensibility under LOS/NLOS transition conditions and opened up a new direction for later work. In [11], Chen proposed a method whose parameters are provided by both time of arrival and received signal strength. The algorithm in [11] performs extremely well in the specific LOS/NLOS environment for which it was designed; however, its generality is limited, and it is difficult to apply in environments where the NLOS conditions are unknown. In [12], Svečko et al. estimated the distance as the state of a stochastic system and implemented a particle filter; they used RSSI measurements to compute the importance weights and resampled the weighted particles to ensure a reasonable distribution and density.

In [13, 14], Hammes et al. proposed a robust extended Kalman filter based on NLOS detection and adjustment of probabilistic correlation coefficients, in which different distance subgroups are constructed additively. Their improvement not only surpasses the classical extended Kalman filter in the NLOS condition but also retains its performance in the LOS condition, improving overall robustness. In [15], Fang et al. used an adaptive Kalman filter to modify the noise parameters and improved robustness over the classical method; combined with various basic algorithms, the accuracy was significantly improved. In [16], Cheng et al. proposed a triple-filtering algorithm that uses FCM to divide the errors in the NLOS environment into soft NLOS and hard NLOS and then fuses them with the filtering results under LOS, which enhances the robustness of their algorithm in mixed environments. In [17], the Raccoon Optimization Algorithm-based Accurate Positioning Scheme (ROA-APS) was proposed to strengthen the local search involved in estimating the NLOS node. The authors of [18] proposed a robust positioning method that tackles this problem by detecting NLOS through a decision tree; it works well in mixed environments where the non-line-of-sight error is relatively small. The authors of [19] applied an unsupervised data clustering method to classify NLOS signals and exclude them; the resulting accuracy approaches that of a pure LOS environment, but the adaptability to large NLOS errors is poor.

In [20], attention was paid to selecting suitable base stations for hybrid TDOA/RTT/DOA location in a mixed LOS/NLOS environment. The authors of [20] stepped outside the framework of optimizing the algorithm itself and successfully extracted more accurate measurement data. Tian et al. proposed a distance and angle probability model to identify NLOS propagation in [21]; this model works well in more specific NLOS environments. In [22], deep learning (DL) was applied to NLOS identification. A localization method using a robust extended Kalman filter and track-quality-based (REKF-TQ) fusion algorithm was proposed to mitigate the effect of NLOS errors in [23]; compared with IMM-based Kalman filtering, this fusion algorithm achieves much higher accuracy. The study in [24] developed a coarse NLOS detection algorithm based on discrete power levels to efficiently achieve coarse NLOS mitigation, which automatically discards the most unreliable dynamic anchors, while in [25] the authors proposed a polynomial fitting-based adjusted Kalman filter (PF-AKF) to alleviate NLOS effects; the method employs polynomial fitting for both NLOS identification and distance prediction, and it is an inspiring approach. In addition, in our previous conference paper [26], we made some preliminary explorations of this topic.

Up to now, even though more and more new methods have been proposed, how to reduce and weaken the influence of NLOS error on positioning is still a problem worthy of further study.

3. Proposed Method

3.1. Signal Model

In the plane of the node to be located, several positions are randomly selected to place beacon nodes, and their coordinates, as known quantities, are respectively denoted as follows:

Set the number of observations, and fix the real motion track of the mobile node (whose coordinates are unknown to the positioning system during measurement), denoting its position at each sampling time. Then the true distance between the mobile node and each beacon node is given by the following expression, in which the two subscripts denote the sampling moment and the index of the beacon node, respectively.

In the LOS environment, due to the nonideal channel, the actual distance measured through the TOA/TDOA/RSSI model is the true distance plus a measurement noise term, where the noise follows a Gaussian distribution with a mean of 0 and a fixed standard deviation. Its distribution function is as follows.

In the NLOS environment, possible obstacles may block the straight LOS path and compel the signal to travel along a non-line-of-sight path through refraction or reflection, so the actual measured distance becomes more complex, as in [27, 28]. The additional NLOS error term has many possible forms; here it is briefly summarized as one of the Gaussian, uniform, and Poisson distributions.

If the NLOS error follows a Gaussian distribution, it obeys a Gaussian distribution with a given mean and standard deviation, and its distribution function is as follows:

When the NLOS error follows a uniform distribution, it lies between given minimum and maximum values, and its distribution function satisfies the following:

When the NLOS error follows an exponential distribution, it is specified by its rate parameter, and its distribution function satisfies the following:
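As an illustration of the measurement model described above, the following sketch simulates LOS and NLOS ranging between a beacon and a mobile node. The parameter values (noise standard deviation, NLOS bias parameters) and the exact form of the positive bias are illustrative assumptions, not the values used in the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_distance(true_dist, nlos=False, sigma=1.0,
                      nlos_kind="gaussian", nlos_params=(4.0, 2.0)):
    """Return one noisy range measurement.

    LOS:  d = d_true + n, with n ~ N(0, sigma^2).
    NLOS: d = d_true + n + e, with the positive bias e drawn from one of
    the distributions named in Section 3.1 (parameter values are assumed).
    """
    d = true_dist + rng.normal(0.0, sigma)            # LOS measurement noise
    if nlos:
        if nlos_kind == "gaussian":
            mu, std = nlos_params
            d += abs(rng.normal(mu, std))             # positive Gaussian bias
        elif nlos_kind == "uniform":
            low, high = nlos_params
            d += rng.uniform(low, high)
        elif nlos_kind == "exponential":
            rate = nlos_params[0]
            d += rng.exponential(1.0 / rate)          # mean bias = 1 / rate
        elif nlos_kind == "poisson":
            lam = nlos_params[0]
            d += rng.poisson(lam)                     # discrete bias, mean = lam
    return d

# Example: beacon at (0, 0), mobile node at (30, 40), true distance 50 m.
print(measured_distance(50.0))
print(measured_distance(50.0, nlos=True, nlos_kind="exponential", nlos_params=(0.25,)))
```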

3.2. General Concept

In this paper, a new algorithm named Neural Network Modified Multiple Filter Localization (NNMML) is put forward. Its flow is illustrated in Figure 1. First, NLOS judgment is carried out on the estimated distances. A residual-based method is adopted: maximum likelihood estimates of the coordinates are computed, and the mean distance residual is obtained from them. This method is practical and has a high confidence level, maintaining an error rate of less than 5% even in heavily mixed LOS/NLOS environments.

After the judgment, targeted filtering is performed on the data. In the LOS case, the updated results are obtained by conventional Kalman filtering, whose performance in a linear environment is consistently satisfactory. In the NLOS case, because NLOS errors vary widely, the measurements are first classified by FCM-based NLOS clustering into high, medium, and low NLOS groups; the filtered NLOS estimates are then obtained with the interacting multiple model (IMM).

After the updated results are obtained, the noise parameters, such as the covariance matrices, are adjusted adaptively with the corresponding correction formulae for KF and UKF. The filter itself is used continuously to judge whether the dynamics of the system have changed and to update the noise parameters for the next filtering round. This adaptive approach improves the robustness of the algorithm in complex environments.

Then, the distance data from multiple beacon nodes are fused by the least squares method to obtain preliminary positioning results, and a curve is fitted to the results over multiple sampling moments. FCM is used to remove preliminary positioning results that deviate seriously from the overall trend. Finally, a BP neural network takes the preliminary results that conform to the overall trend as its training set; gradient descent is used to back-propagate the network parameters. The trained network then fills the gaps left by the removed results, yielding the complete corrected trajectory.

3.3. NNMML Algorithm
3.3.1. NLOS Judgment Based on Residual Calculation

When the error introduced by NLOS is unknown, estimated distances from the beacon nodes can be obtained at a given measurement time. By taking combinations of these distances, multiple groups of distance estimates can be formed.

Through the basic Newton least squares method, the maximum likelihood coordinate estimates of the mobile node can be computed from each of the above groups of data. Then, the distance between each estimated coordinate and each beacon node is calculated, and the average residual with respect to the estimated distance is obtained as follows:

The calculated average residual between each beacon node and the mobile node at each time is compared with the standard deviation of the measurement noise to determine whether the link belongs to the NLOS category.

As shown in formula (10), if the indicator is zero, the link is judged to be in the LOS case; otherwise, it is judged to be in the NLOS environment.
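A minimal sketch of this residual test is given below, assuming a standard linearized least squares position fit over 3-beacon subsets and a simple threshold of a few noise standard deviations; the threshold factor k and the helper names are illustrative assumptions rather than the paper's exact formulation.

```python
import itertools
import numpy as np

def ls_position(beacons, dists):
    """Linearized least squares position estimate from >= 3 beacons."""
    x1, y1 = beacons[0]
    d1 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

def nlos_flags(beacons, dists, sigma, k=3.0):
    """Flag a beacon as NLOS when its mean residual over all 3-beacon
    subsets exceeds k * sigma (the factor k is an assumption)."""
    n = len(beacons)
    residuals = np.zeros(n)
    counts = np.zeros(n)
    for subset in itertools.combinations(range(n), 3):
        pos = ls_position([beacons[i] for i in subset],
                          [dists[i] for i in subset])
        for i in range(n):
            r = abs(np.linalg.norm(pos - np.array(beacons[i])) - dists[i])
            residuals[i] += r
            counts[i] += 1
    mean_res = residuals / counts
    return mean_res > k * sigma   # True -> treat the link as NLOS
```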

3.3.2. Kalman Filtering

Since the KF algorithm retains satisfactory accuracy in a linear environment, Kalman filtering is adopted in the LOS environment.

Firstly, we define the basic state vector between each beacon node and the mobile node at each time; it consists of the distance between the beacon node and the mobile node and the speed of the mobile node.

Thus, the state equation can be written as follows, where the coefficient matrices depend on the time interval between two measurements and the last term represents the process noise vector:

Moreover, the measurement equation is obtained as follows, where the coefficient is a column vector and the last term denotes the measurement noise vector:

From the above state equation and measurement equation, the following iterative formulae of the KF algorithm can be derived, where the covariance matrix of the measurement noise has a preset initial value and the covariance matrix of the observation noise is independent of the state.

From formula (19) below, the Kalman gain for this iteration is calculated:

The updated state parameters and covariance required for this round of iteration can be obtained from the Kalman gain as follows, where I denotes the identity matrix.
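For reference, a standard discrete-time Kalman filter recursion consistent with the description above is the following; the symbols x, P, F, H, Q, R, and K are our assumed notation and may differ from the paper's original symbols.

```latex
\begin{aligned}
\hat{x}_{k|k-1} &= F\,\hat{x}_{k-1|k-1}, \\
P_{k|k-1} &= F\,P_{k-1|k-1}F^{\mathsf T} + Q, \\
K_k &= P_{k|k-1}H^{\mathsf T}\left(H P_{k|k-1} H^{\mathsf T} + R\right)^{-1}, \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\left(z_k - H\,\hat{x}_{k|k-1}\right), \\
P_{k|k} &= \left(I - K_k H\right)P_{k|k-1}.
\end{aligned}
```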

3.3.3. Unscented Kalman Filtering

Unscented Kalman Filtering (UKF) is an improved variant of Kalman filtering. Although its advantages are not obvious in a LOS environment with linear signal propagation, it can significantly correct nonlinear errors and improve positioning accuracy in a nonlinear environment, so UKF is adopted in the NLOS environment in this paper.

(1) Initialization. Its state equation and measurement equation are the same as those of the KF, so they are not repeated here.

For convenience in the subsequent calculations, the basic state vector between each beacon node and the mobile node is defined, together with its mathematical expectation and its covariance matrix.

(2) Calculate the Sigma Points and Their Weights. Such a state, as an n-dimensional random variable, yields a total of 2n + 1 sigma points, which are obtained by the following formula, in which the subscript denotes the corresponding column of the matrix square root of the covariance matrix:

The sigma point weights follow the law given below.

The weights of the variances satisfy a similar law. In it, the composite scaling parameter follows its usual expression; one parameter is determined by the dispersion degree of the above sigma points; the optimal value of another parameter for a Gaussian distribution is two; and the remaining parameter is set to zero in this case.

(3) State Prediction. From the state equation and measurement equation, the following state prediction of the UKF algorithm can be derived, where the covariance matrices of the measurement noise and the observation noise appear, respectively:

(4) State Update. On the basis of the formulae of Part (3), the updated variance and covariance matrices can be obtained as follows:

Subsequently, the Kalman gain of the UKF can be computed:

In light of the above Kalman gain, the state parameters and their covariance are updated as follows:

At this point, one iteration of the UKF ends.
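For reference, the standard unscented transform and UKF update corresponding to the steps above are summarized below; as with the KF block, the symbols (χ, W, λ, α, β, κ, f, h, etc.) are our own assumed notation rather than the paper's.

```latex
\begin{aligned}
\chi_0 &= \hat{x},\qquad
\chi_i = \hat{x} + \bigl(\sqrt{(n+\lambda)P}\bigr)_i,\qquad
\chi_{i+n} = \hat{x} - \bigl(\sqrt{(n+\lambda)P}\bigr)_i,\quad i=1,\dots,n,\\
W_0^{m} &= \tfrac{\lambda}{n+\lambda},\qquad
W_0^{c} = \tfrac{\lambda}{n+\lambda} + \bigl(1-\alpha^{2}+\beta\bigr),\qquad
W_i^{m} = W_i^{c} = \tfrac{1}{2(n+\lambda)},\\
\lambda &= \alpha^{2}(n+\kappa)-n,\qquad \beta = 2\ \text{(Gaussian optimum)},\qquad \kappa = 0,\\
\hat{x}^{-} &= \sum_i W_i^{m} f(\chi_i),\qquad
P^{-} = \sum_i W_i^{c}\bigl(f(\chi_i)-\hat{x}^{-}\bigr)\bigl(f(\chi_i)-\hat{x}^{-}\bigr)^{\mathsf T} + Q,\\
\hat{z} &= \sum_i W_i^{m} h(\chi_i),\qquad
P_{zz} = \sum_i W_i^{c}\bigl(h(\chi_i)-\hat{z}\bigr)\bigl(h(\chi_i)-\hat{z}\bigr)^{\mathsf T} + R,\\
P_{xz} &= \sum_i W_i^{c}\bigl(f(\chi_i)-\hat{x}^{-}\bigr)\bigl(h(\chi_i)-\hat{z}\bigr)^{\mathsf T},\qquad
K = P_{xz}P_{zz}^{-1},\\
\hat{x} &\leftarrow \hat{x}^{-} + K\,(z-\hat{z}),\qquad
P \leftarrow P^{-} - K P_{zz} K^{\mathsf T}.
\end{aligned}
```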

3.3.4. Fuzzy C-Means Clustering

In practice, the distribution of NLOS is more complex, which will lead to great uncertainty in the error parameters of NLOS. In order to alleviate this problem, the FCM method is adopted for the NLOS group after NLOS judgment, and then, filtering is carried out after classification.

At each time, the distances between the beacon nodes and the mobile node that were judged as NLOS form the data set to be clustered.

Let the elements be divided into a given number of groups, with a cluster center matrix and a membership matrix whose entries represent the membership degree of each distance element to each group, that is, the degree to which it belongs to that group. In this paper, the number of groups is set to 3.

Then, the objective function and constraint conditions of FCM are obtained as follows, where the Euclidean distance between each cluster center and each distance element appears and the exponent is the fuzzy weighting factor:

In order to minimize the objective function, the augmented objective function is obtained by the Lagrange multiplier method, where λ is the Lagrange multiplier:

The cluster centers and the membership degrees of the corresponding elements can then be obtained by setting the partial derivatives to zero:
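For reference, the standard FCM objective and its update equations, which the description above follows, are given below; the symbols (u, v, d, m, c, N) are assumed notation.

```latex
\begin{aligned}
J &= \sum_{i=1}^{c}\sum_{j=1}^{N} u_{ij}^{\,m}\,\lVert d_j - v_i \rVert^{2},
\qquad \text{s.t.}\ \sum_{i=1}^{c} u_{ij} = 1,\ \ u_{ij}\in[0,1],\\
v_i &= \frac{\sum_{j=1}^{N} u_{ij}^{\,m}\, d_j}{\sum_{j=1}^{N} u_{ij}^{\,m}},\qquad
u_{ij} = \left[\sum_{k=1}^{c}\left(\frac{\lVert d_j - v_i\rVert}{\lVert d_j - v_k\rVert}\right)^{\frac{2}{m-1}}\right]^{-1}.
\end{aligned}
```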

According to the membership degrees, each distance element is assigned to the group with the largest membership. Therefore, with three clusters, the measurements can be divided into high, medium, and low NLOS groups.

3.3.5. Interacting Multiple Model Algorithm

The interacting multiple model (IMM) algorithm is based on Bayesian theory and achieves model adaptation through multiple filters. It consists of four steps: interactive input, filtering, probability update, and interactive output. In this paper, it is used to perform the grouped calculation in the NLOS case in order to achieve a more robust state estimate.

In the NLOS case, the state equation and measurement equation satisfy formula (22) of the UKF, so they are not elaborated further here.

(1) Interactive Input. In the first step, the mixing probabilities are computed as the ratio of the transition-weighted prior model probabilities to their normalization coefficient, where the transition probabilities obey a Markov transition probability matrix whose dimension equals the number of models.

Then, the state estimates of each model are weighted by the mixing probabilities, and the corresponding covariance is recalculated:

(2) Filter Use. The state parameters and covariance obtained from Equations (32) and (33) are used as the input of the filter, and UKF is selected as the filter in the NLOS environment. The corresponding results can then be calculated with the formulae in Section 3.3.3.

(3) Probability Update. In this step, the model probabilities are redistributed for the next iteration, and the updated mixing probability is given below, where the likelihood function of the measurement equation, a function of the residual, follows formula (35):

(4) Interactive Output. The probabilities obtained in (34) are used as weights to obtain the final state parameters and covariance.
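For reference, a standard IMM recursion matching the four steps above is summarized below; the notation (model indices r, s, transition matrix p, likelihood Λ, innovation covariance S) is assumed and may differ from the paper's original symbols.

```latex
\begin{aligned}
\text{(1) mixing: }&\mu_{r|s} = \frac{p_{rs}\,\mu_r}{\bar c_s},\qquad
\bar c_s = \sum_r p_{rs}\,\mu_r,\\
&\hat{x}_{0s} = \sum_r \mu_{r|s}\,\hat{x}_r,\qquad
P_{0s} = \sum_r \mu_{r|s}\Bigl[P_r + (\hat{x}_r-\hat{x}_{0s})(\hat{x}_r-\hat{x}_{0s})^{\mathsf T}\Bigr],\\
\text{(2) filtering: }&(\hat{x}_s, P_s) \leftarrow \mathrm{UKF}(\hat{x}_{0s}, P_{0s}, z),\\
\text{(3) update: }&\mu_s = \frac{\Lambda_s\,\bar c_s}{\sum_r \Lambda_r\,\bar c_r},\qquad
\Lambda_s = \mathcal N\bigl(z-\hat z_s;\,0,\,S_s\bigr),\\
\text{(4) output: }&\hat{x} = \sum_s \mu_s\,\hat{x}_s,\qquad
P = \sum_s \mu_s\Bigl[P_s + (\hat{x}_s-\hat{x})(\hat{x}_s-\hat{x})^{\mathsf T}\Bigr].
\end{aligned}
```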

3.3.6. Parameter Correction

In order to improve the overall adaptive performance of the algorithm, NNMML carries out an additional error correction for the observation noise covariance matrix and the measurement noise covariance matrix after each filtering iteration, so as to reduce the error caused by the initial covariance value in the iteration process and improve the accuracy of filtering. The relevant parameters and formulae will be briefly described below.

Let the correction constant, which does not change within a given localization run, be fixed. Then, the correction weight is obtained by the following formula:

The modified formulae for KF and UKF filtering methods are slightly different, which will be briefly described below.

(1) Parameter Correction Formulae Applied to KF. The modification of the measurement noise covariance matrix in the iteration is as follows:

The modification of the observation noise covariance matrix in the iteration is as follows, where the matrices involved are the same as those defined in the KF filtering above:

(2) Parameter Correction Formulae Applied to UKF. To simplify the formal expression of the UKF revision, we define the following for the time being:

The modification of the measurement noise covariance matrix in the iteration is as follows:

The modification of the observation noise covariance matrix in the iteration is as follows:

To prevent the above noise covariance matrices from losing positive definiteness during the iterations, positive definiteness should be tested after the parameters are corrected in each round. If positive definiteness is lost, that round of parameter correction is abandoned.

3.3.7. Preliminary Location Estimation

Through the above subalgorithms, the corrected distances, which combine the results of the two filtering methods, can be obtained; that is, we obtain the revised distance between the ith beacon node (with known coordinates) and the mobile node at every measurement time.

If the coordinates of the mobile node at a given time are taken as unknowns, the following system of distance equations can be listed:

By simple transposition and matrix operations, it can be rewritten as the following matrix equation, where the matrices satisfy the relations below:

Because the number of equations exceeds the number of unknowns, this system is overdetermined, so we use the Gauss-Newton least squares method to solve it.
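A minimal sketch of such a Gauss-Newton solution of the overdetermined range equations is shown below; the initial guess, iteration count, and stopping tolerance are illustrative assumptions.

```python
import numpy as np

def gauss_newton_position(beacons, dists, x0=None, iters=10):
    """Gauss-Newton solution of min_x sum_i (||x - b_i|| - d_i)^2.
    beacons: array-like of shape (N, 2); dists: corrected ranges, shape (N,)."""
    beacons = np.asarray(beacons, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x = np.mean(beacons, axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - beacons                      # (N, 2)
        ranges = np.linalg.norm(diff, axis=1)   # predicted distances
        r = ranges - dists                      # residual vector
        J = diff / ranges[:, None]              # Jacobian of ||x - b_i|| w.r.t. x
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-6:
            break
    return x

# Example with three beacons and (noisy) corrected distances:
beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
dists = [50.2, 70.9, 71.1]
print(gauss_newton_position(beacons, dists))
```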

3.3.8. Trajectory Correction Based on BP Neural Network

By analogy, the approximate position of the mobile node at all times can be obtained preliminarily.

After the above coordinate calculation, the approximate positions of mobile nodes at all times are obtained. However, in practical application, there are still a few moments when the positioning trajectory does not conform to the overall motion trend. Therefore, in order to alleviate such errors, this paper adopts the BP neural network to correct the overall trajectory.

We define the approximate trajectory of the mobile node as

namely, the trajectory is treated as a parametric equation of the sampling time. The coefficients of the parametric equation can be obtained by the following least squares method, in which the total number of moving steps and the order of the parametric equation appear; in general, the order is set to 3.

Therefore, the Euclidean distance from the estimated position of the mobile node to its approximate motion trajectory at each time is as follows:
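As an illustration of this step, a cubic parametric fit and the associated point-to-trajectory distance could take the following form; the coefficient symbols and the per-time evaluation of the distance are assumptions, since the original equations are not reproduced here.

```latex
\begin{aligned}
x(t) &= a_0 + a_1 t + a_2 t^2 + a_3 t^3,\qquad
y(t) = b_0 + b_1 t + b_2 t^2 + b_3 t^3,\\
(a, b) &= \arg\min_{a,b}\ \sum_{t=1}^{T}\Bigl[\bigl(\hat{x}_t - x(t)\bigr)^2 + \bigl(\hat{y}_t - y(t)\bigr)^2\Bigr],\\
e_t &= \sqrt{\bigl(\hat{x}_t - x(t)\bigr)^2 + \bigl(\hat{y}_t - y(t)\bigr)^2}.
\end{aligned}
```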

The Euclidean distances are divided into two groups through the fuzzy c-means clustering (FCM) described above; that is, the number of clusters is set to 2. Thus, the group of data that conforms to the trajectory and the group that deviates from it can be obtained. The specific process of FCM is similar to the previous one and will not be repeated here.

The BP (back propagation) neural network, a multilayer feedforward network trained by error back propagation, uses the adaptive mapping ability of the network to propagate errors backwards and can realize arbitrary nonlinear mappings from input to output. Structurally, it generally consists of an input layer, an output layer, and a hidden layer. The BP neural network compares the network output with the expected output, constructs the corresponding error function, solves it by the error gradient descent method, corrects the network weights, and returns to the input layer to recompute until the error is sufficiently small.

In this paper, the BP neural network takes the group of data that conforms to the motion trajectory as the training set and establishes the mapping from time to position on that group, so as to train and output corrected mobile node positions at the times corresponding to the deviating group.

The following formula (52) represents the information transfer relationship between the input layer and the output layer, where the numbers of nodes in the input layer and the hidden layer appear, respectively. In the input layer, each node has an input value and a weighting constant connecting it to each hidden layer node. In the hidden layer, each node has an additional offset and a transfer function; sigmoid-type and linear functions are common choices, and the specific function used here is given below. Each hidden layer node is connected to each output layer node by a weighting constant, and each output layer node produces an output value.

Back propagation calculates the output error of each layer starting from the output layer and adjusts the weights and additional offsets of each layer according to the error gradient descent method. The overall error objective function is as follows:

where , , and are the number of samples, the number of nodes in the output layer, and the output expectation, respectively.

Here, we take the error function with respect to the output of a hidden layer node as an example to introduce the gradient descent method:

We choose the transfer function as follows; the basic sigmoid function is a special, initial case of the following function:

This transfer function satisfies the following formula:

Then, the partial derivatives of the error function with respect to the weights and the offsets are computed as follows, with the intermediate quantity defined below:

We start from the last layer and proceed layer by layer, passing the error backwards through the weights to obtain the error of the previous layer, and repeat in turn. Meanwhile, the weights and offsets are updated according to the gradient descent method to minimize the error, where the learning rate uses the default value 0.05. The higher the learning rate, the less time is needed to reach the training target; however, if the learning rate is too high, training may become unstable or converge to a poor local solution, so a moderate value is taken here.

By setting a maximum number of iterations and a training target, the corrected positions at the times corresponding to the deviating group are obtained and combined with the data of the conforming group to obtain a more accurate overall trajectory.
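A minimal sketch of this trajectory correction is given below, assuming a single hidden layer with a sigmoid activation, a linear output layer, and plain gradient descent; the network size, learning rate, normalization, and the mapping from sampling time to (x, y) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(t_in, xy_out, hidden=5, lr=0.05, epochs=5000):
    """Train a 1-hidden-layer BP network mapping sampling time -> (x, y).
    t_in: shape (M, 1), times of the conforming group (training set).
    xy_out: shape (M, 2), corresponding preliminary positions."""
    rng = np.random.default_rng(1)
    W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 2)); b2 = np.zeros(2)
    m = len(t_in)
    for _ in range(epochs):
        h = sigmoid(t_in @ W1 + b1)          # hidden layer output
        y = h @ W2 + b2                      # linear output layer
        err = y - xy_out                     # (M, 2) output error
        # back propagation of the squared-error gradient
        gW2 = h.T @ err; gb2 = err.sum(0)
        dh = (err @ W2.T) * h * (1 - h)      # sigmoid derivative h * (1 - h)
        gW1 = t_in.T @ dh; gb1 = dh.sum(0)
        W2 -= lr * gW2 / m; b2 -= lr * gb2 / m
        W1 -= lr * gW1 / m; b1 -= lr * gb1 / m
    return W1, b1, W2, b2

def predict(t, params):
    W1, b1, W2, b2 = params
    return sigmoid(t @ W1 + b1) @ W2 + b2

# Usage: fit on the conforming moments, then fill in the deviating ones.
t_train = np.arange(1, 41, dtype=float).reshape(-1, 1) / 40.0   # normalized times
xy_train = np.column_stack([50 * t_train.ravel(), 30 * t_train.ravel() ** 2])
params = train_bp(t_train, xy_train)
t_missing = np.array([[0.5], [0.8]])
print(predict(t_missing, params))
```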

4. Simulation Results

In this paper, the positioning accuracy of the algorithm is evaluated through simulation in MATLAB.

The environment set by this simulation is roughly as follows:

All beacon nodes and the mobile node are located in a plane of fixed area during the measurement time. The mobile node moves along a smooth trajectory, and there is no nonphysical movement within a measurement interval. The coordinates of the beacon nodes are an array with one row per beacon and 2 columns, randomly generated by the MATLAB function "rand"; the horizontal and vertical coordinates are known and fixed during measurement. In addition, a fixed total number of measurements is made.

The error of LOS and NLOS satisfies as follows.

Since the LOS error is caused by white noise following a normal distribution, its mean value is 0 and its standard deviation is fixed. There are many kinds of NLOS errors, and three kinds of noise following the Gaussian distribution, uniform distribution, and Poisson distribution are selected as representatives. For the Gaussian case, the mean and standard deviation take preset values; the Poisson distribution is specified by its rate parameter.

To determine whether a certain beacon node and the mobile node are in the NLOS state at a certain moment, a number in the interval [0, 1] is randomly generated by the function "rand." If the random number is greater than the NLOS threshold, the link is considered to be in the NLOS environment; otherwise, it is deemed to be in the LOS environment.

In the correction by the BP neural network, three parameters are set partly empirically: the maximum number of iterations, the training target, and the number of nodes in the hidden layer. The training target is the gradient target set in the neural network. The maximum number of iterations prevents the neural network from falling into an endless loop when it cannot reach the gradient goal for a long time. Using too few nodes in the hidden layer results in underfitting; on the contrary, using too many nodes may lead to overfitting, so 5 is selected with reference to [29].

In order to show the localization performance of the NNMML algorithm in the LOS/NLOS mixed environment, the Kalman filter (KF), extended Kalman filter (EKF), unscented Kalman filter (UKF), and multiple-filter localization (ML) without the neural network were used as the comparison group, and one thousand repeated experiments were carried out. Refer to Table 1 for the basic parameters of the data and the neural network that are not varied in the comparisons. The root mean square error (RMSE) is used as the error metric, and its formula is as follows:
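The RMSE metric referred to above is assumed here to take its standard form over the Monte Carlo runs and sampling moments, for example:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{MT}\sum_{m=1}^{M}\sum_{t=1}^{T}
\Bigl[\bigl(\hat{x}_t^{(m)}-x_t\bigr)^{2}+\bigl(\hat{y}_t^{(m)}-y_t\bigr)^{2}\Bigr]}
```

where M denotes the number of repeated experiments and T the number of sampling moments; these symbols are our assumed notation.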

Figures 2–4, respectively, show the variation trend of the RMSE of the five algorithms when the NLOS error obeys a Gaussian distribution, as the number of beacon nodes, the mean of the NLOS error, and the standard deviation of the NLOS error are varied.

In Figure 2, as the number of beacon nodes increases from 4 to 7, the positioning errors of the five algorithms all show a downward trend. When the number of beacon nodes is small, the NNMML algorithm has an obvious advantage and handles the localization problem better. However, as the number of beacon nodes continues to rise, the positioning accuracy of the ML algorithm gradually approaches that of NNMML, and the advantage weakens.

In Figure 3, when the mean of the NLOS errors rises from 2 to 6, the positioning errors of the five algorithms all rise gradually, but the NNMML algorithm rises most gently. In this process, the RMSE of KF, EKF, UKF, and ML increases by 49.33%, 31.74%, 35.71%, and 19.40%, respectively, while that of NNMML increases by only 13.42%, starting from the lowest initial value. Compared with the other algorithms, its localization performance is the best.

In Figure 4, when the standard deviation of the NLOS errors rises steadily from 1 to 5, all five algorithms change significantly. When the standard deviation is 1, the NLOS error is close to a constant bias, and the positioning errors of NNMML and ML are similar. As the standard deviation rises, the error ratio of NNMML to ML decreases from 99.24% to 89.54%, indicating that NNMML is more suitable for cases with relatively large NLOS errors.

Then, Figure 5 shows how the RMSE of each algorithm changes with the rate parameter when the NLOS error follows the Poisson distribution. In this figure, when the rate parameter changes from 4 to 7, the RMSE of all algorithms shows an upward trend, while the RMSE of the NNMML algorithm remains the lowest and its upward trend is not obvious. With the increase of the error, the advantage of the NNMML algorithm over the other algorithms grows sharply. It can be preliminarily confirmed that the Poisson-distributed NLOS error is well suppressed by the NNMML algorithm.

Finally, Figure 6 shows the performance of the algorithms when the NLOS error is uniformly distributed.

In Figure 6, the upper parameter of the uniformly distributed NLOS error rises steadily from 3 to 6 while the other parameter remains 2. In this process, KF, UKF, and EKF change dramatically, while the ML and NNMML algorithms fluctuate less. Although KF and EKF perform well when the maximum and minimum values are 5 and 1, respectively, NNMML has an obvious advantage in universality from the perspective of the overall RMSE and its stability.

In addition to the above basic comparisons, we selected TF-FCM, JPDA, RAPF, and other improved algorithms from [16, 30, 31] for horizontal comparison.

From Figures 7–9, it can be seen that our NNMML algorithm is significantly stronger than TF-FCM and JPDA when the number of beacon nodes or the NLOS error parameters change. Compared with the PF-based improved algorithm RAPF, its overall filtering performance is similar, but it is better in some regimes; for example, when the number of beacon nodes is low, it has an advantage of 8.5% to 11.2%, and its performance improves progressively as the mean of the NLOS error increases. Compared with the RAPF algorithm, the optimization effect is enhanced in the high-error region.

All in all, the NNMML algorithm has stronger error suppression ability and stronger robustness for uniformly distributed NLOS.

5. Discussion

The NNMML algorithm performs excellently compared with conventional KF, EKF, and UKF in the LOS/NLOS mixed environment. At the same time, compared with the ML algorithm without the neural network, it also mitigates errors effectively, with high robustness and precision. However, because of the added neural network correction, the overall algorithm consumes somewhat more time. Therefore, our future work will focus on optimizing the time complexity and improving redundant parts to reduce the running time of the algorithm. We will also apply the NNMML algorithm to more specific and complex real environments, instead of representing the NLOS error purely through simple probability distributions, in order to improve its practical application value.

6. Conclusion

In this paper, an algorithm called neural network modified multiple-filter localization is proposed. First, LOS and NLOS cases are distinguished. Then, KF and UKF are applied in the respective environments, and appropriate grouping is carried out for the NLOS measurements. Finally, the positioning results after multiple filtering are corrected by the neural network. Simulation results show that it outperforms KF, EKF, UKF, and multiple filtering without the neural network, achieving better accuracy and robustness in LOS/NLOS hybrid environments.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province under Grant No. F2020501012.