Abstract
Unlike conventional particle filters, particle flow filters do not rely on a proposal density and importance sampling; instead, they move the particles through a flow derived from the log-homotopy scheme, which ensures the successful migration of the particles. Among the efficient implementations of particle flow filters, the Exact Daum-Huang (EDH) filter computes the migration parameters once for all particles together. An improved version of it, the Localized Exact Daum-Huang (LEDH) filter, calculates the migration parameters separately for each particle. The main objective of this study is to reduce the computational cost of the LEDH filter, which stems from the exhaustive calculation of every migration parameter. To this end, we propose the Clustered Exact Daum-Huang (CEDH) filter. The main idea of CEDH is to cluster the particles that produce similar errors and then to calculate the same migration parameters for all particles within each cluster. By clustering and handling the particles with high errors, their engagement and influence can be balanced, and the negative effects of such particles on the overall system can be greatly reduced. We implement the filter successfully for a high dimensional target tracking scenario. The results are compared to those obtained with the EDH and LEDH filters to validate its efficiency.
1. Introduction
For the analysis, inference, and comprehensive understanding of a dynamic system, two models are essentially required. The first is the system model, which describes how the states change with respect to time and embodies the information that characterizes the entire system. The second is the measurement model, which relates the noisy measurements to the state vector.
The probabilistic state-space representation of these models, together with the need to update the available information whenever new measurements arrive, fits naturally with a practical Bayesian approach [1]. The state-space approach offers great advantages over conventional time series techniques in handling multivariable nonlinear processes [2].
For dynamic state estimation, the Bayesian approach constructs a posterior probability density function (pdf) of the state based on all available information. A recursive filter is a good solution in this setting: an estimate is produced whenever a new measurement is obtained. In the recursive approach the received data are processed sequentially rather than as a whole batch, so there is no need to store the complete data set and reprocess it every time an estimate is required. State estimation then consists of a prediction stage and an update stage. At the prediction stage, the system model is used to propagate the pdf of the state to the next measurement time. Because the state evolves under unknown disturbances, the predicted pdf is usually deformed or distorted. The update stage employs Bayes' theorem to correct the predicted pdf, so that the latest measurement refines the comprehension of the target state [1].
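These two stages can be summarized by the standard Bayesian recursion, written here in the state-space notation used later in Section 2, where $x_k$ is the state and $z_{1:k}$ denotes the measurements received up to time $k$:

$$p(x_k \mid z_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid z_{1:k-1})\, dx_{k-1},$$
$$p(x_k \mid z_{1:k}) = \frac{p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})}{\int p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})\, dx_k}.$$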
Particle filters are sequential Monte Carlo algorithms that can be applied to any state-space model [3]; they can be regarded as a generalization of the Kalman filter. Particle filters represent the probability densities by sets of particles. The bootstrap particle filter (BPF) draws particles from the prior distribution and then updates the weight of each particle using the most recent measurement likelihood [4]. If the measurements are highly informative or the dimension of the state is high, many of the particles drawn from the prior will lie in regions of low likelihood, so many particles in the BPF are assigned low weights [5]. Hence the Monte Carlo approximation of the posterior distribution rests on too few effective particles; this weight degeneracy leads to faulty depictions of the posterior distribution [2, 3].
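As a rough illustration of this weighting-and-degeneracy mechanism, a single bootstrap filter step might be sketched as follows; the propagate and loglik callables, the 50% effective-sample-size threshold, and the multinomial resampling are illustrative choices rather than details taken from this paper.

```python
import numpy as np

def bpf_step(particles, weights, propagate, loglik, z, rng):
    """One bootstrap particle filter step: propagate from the prior,
    reweight with the measurement likelihood, then resample if needed."""
    particles = propagate(particles, rng)           # draw from the transition (prior) density
    logw = np.log(weights) + loglik(z, particles)   # weight by the likelihood of the new measurement
    logw -= logw.max()                              # shift in log-space for numerical stability
    weights = np.exp(logw) / np.exp(logw).sum()
    ess = 1.0 / np.sum(weights ** 2)                # effective sample size exposes weight degeneracy
    if ess < 0.5 * len(weights):                    # resample when too few particles carry the mass
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

In a high dimensional or highly informative setting, the effective sample size computed above collapses toward one, which is exactly the degeneracy discussed in the following paragraphs.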
The variance of the importance weights is minimized by the optimal proposal distribution [6]. State-of-the-art particle filters mimic this well-known strategy by constructing efficient approximations of the optimal proposal distribution [7]. Auxiliary particle filters (APF) were put forward to sample the particles more appropriately given the new measurements in hand [8]. The variances of Monte Carlo estimates are reduced with Rao-Blackwellised particle filters by marginalizing out some of the states analytically [9]. The unscented particle filter adopts the unscented transformation, an error-bounded deterministic sampling method [10]. The approaches mentioned so far can be used conveniently only under certain circumstances and within a certain range of problem settings [11]. Using particle filters for states in high dimensional spaces, as in multitarget tracking scenarios, is still a challenging task [12]. Conventional particle filter methods can be too naive to handle the high dimensionality problem [13]. There are promising methods with good performance, such as the well-known equivalent weights particle filter [14]; it fights against weight degeneracy but lacks the statistical consistency of conventional particle filters. The separation of the state space through partitioning or segmenting has been adopted in some studies [15, 16]. This technique relies essentially on the factorization of the conditional posterior and hence has a narrow scope of application. Another deliberate approach is the use of Markov chain Monte Carlo (MCMC) methods to aid the particle filters [17]. Even so, the number of particles, and consequently the cost of calculation, grows exponentially as higher state dimensions are dealt with [18].
An unconventional nonlinear filtering strategy was proposed by Daum and Huang [19–21]. At each step of the Daum-Huang (DH) filters, a homotopy between the logarithms of the unnormalized prior and posterior distributions is constructed. The homotopy defines a particle flow which can be regarded as the solution of a partial differential equation. The particle flow guarantees the gradual migration of the particles toward regions of higher posterior density [18].
Remarkable performance has been reported when DH filters are utilized in a number of nonlinear filtering systems. On the other hand, researchers also report that the unqualified dependence of the EDH filter on the extended or unscented Kalman filters (EKF/UKF), which are executed in parallel, leads to performance drawbacks [22]. A solution is proposed in [18] for the case in which the process and observation noises are both Gaussian and the system map is linear but the observations are highly nonlinear. The modified version is introduced as LEDH. The modification consists of linearizing the model at each particle and calculating the migration parameters of each particle individually, rather than performing a single calculation shared by all particles. Despite its remarkably good performance, the cost of calculation is the weakness of this eminent filter.
Standard particle filters relying on importance weights have been reported to require a sample size that grows exponentially with the dimension of the state space. Particle filtering therefore inherently suffers from weight degeneracy in high dimensional filtering scenarios. The problems arising from high dimensionality have been discussed in the literature from several different aspects.
Recent studies on particle flow filters provide solutions to the problems arising from weight degeneracy. Li et al. present new filters which integrate deterministic particle flows into a particle filter framework [23]; the proposed theoretical scheme preserves the statistical consistency of the particle filter while sustaining the efficiency of particle flow methods. Li and Coates tackle the computational burden of the particle flow particle filter by incorporating clustering of the particles [24]. In this study, we deal solely with improving the efficiency of the particle flow filter, which is inherently easier to implement and thus more suitable for practical scenarios. The main idea of CEDH is to cluster the particles that produce similar errors and then to calculate the same migration parameters for all particles within each cluster. By clustering and handling the particles with high errors, their influence can be balanced and the negative effects of such particles on the overall system can be greatly reduced.
Surace et al. [25] focus on various aspects of the curse of dimensionality in continuous time filtering. They investigate the use of an optimal feedback control scheme that deals with importance weights. Daum et al. [26] extended their studies by deriving a new exact stochastic particle flow filter using a theorem established by Gromov, and conducted numerical experiments for a number of different high dimensional problems. In [27], the researchers combined the strengths of invertible particle flow and sequential Markov chain Monte Carlo (SMCMC) by constructing a composite Metropolis-Hastings (MH) kernel. They also proposed a Gaussian mixture model- (GMM-) based particle flow algorithm to construct effective MH kernels.
Our main objective is to reduce and balance, through clustering, the influence of the particles that produce high errors in the particle flow filter. This provides a reduced cost of calculation while maintaining the performance of the LEDH filter. The error value of each particle is taken into account and the k-means++ algorithm [28] is employed for clustering. Such a clustering provides a significant benefit in the calculation of the migration parameters of the particles assigned to the same cluster.
Unlike the EDH filter, which calculates the migration parameters once for all particles together, and the LEDH filter, which calculates the migration parameters separately for each particle, the proposed filter uses the same migration parameters for all particles in the same cluster. The number of migration parameter sets is thus reduced to the number of clusters, which eases the calculations of the LEDH filter.
In this study, the position and velocity of each target along both axes are tracked; thus each target is described by a four-dimensional state. Four targets are tracked using our CEDH filter as well as the EKF, BPF, EDH, and LEDH filters, and the performances are compared.
The structure of this paper is as follows. Section 2 presents the theoretical basis of particle flow filters in terms of established techniques. Section 3 gives the implementation details of the k-means++ algorithm and describes its use in the CEDH filter. Section 4 details the application of the CEDH filter to a descriptive scenario and also gives the performance comparison of the filters employed for the multitarget tracking problem in a high dimensional state space. Section 5 offers concluding comments on the proposed filter, discusses problems that arose in the study and possible future work, and lists practical pieces of advice resulting from the scenario.
2. Particle Flow Filter
The particle flow filter is a modified version of the conventional particle filters. It essentially fights against particle degeneracy and, by employing a logarithmic homotopy function, ensures that the particles converge quickly toward the regions of highest posterior density at the next step. The homotopy function describes the transition from the prior to the posterior distribution along the flow of the particles. The filter relies on an Itô stochastic differential equation that relates the change of the state to the step parameter of the homotopy function.
We consider a discrete-time nonlinear filtering task with the following models:
$$x_k = g_k(x_{k-1}) + v_k,$$
$$z_k = h_k(x_k) + w_k,$$
where $x_k$ is the target state vector, $z_k$ is the measurement vector, $v_k$ is the process noise, and $w_k$ is the measurement noise. $g_k$ is a nonlinear system map and $h_k$ is a nonlinear measurement map.
The Bayesian rule defining the unnormalized marginal posterior distribution is
$$p(x_k \mid z_{1:k}) \propto p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1}).$$
Daum and Huang expressed the homotopy function in the form
$$\log \psi(x, \lambda) = \log g(x) + \lambda \log h(x),$$
where $g(x) = p(x_k \mid z_{1:k-1})$ is the prior, $h(x) = p(z_k \mid x_k)$ is the likelihood, and $\lambda$ is the real valued step parameter in the range $[0, 1]$ that represents the intensity or amount of particle flow. The homotopy function provides a continuous deformation from $\log g(x)$ (when $\lambda = 0$) to the logarithm of the unnormalized posterior distribution, $\log g(x) + \log h(x)$ (when $\lambda = 1$).
In the original particle flow filter, the flow is designed so that the homotopy relation remains satisfied as $\lambda$ evolves. A partial differential equation is required for this purpose, and the flow of the particles is obtained by solving
$$\frac{d\eta}{d\lambda} = f(\eta, \lambda),$$
where $\eta$ denotes the state of a particle in the flow. Daum and Huang derived the following expression utilizing the Fokker-Planck equation and, by solving it, developed and generalized the filter into the EDH filter [21]:
$$\log h(\eta) = \frac{d \log K(\lambda)}{d\lambda} - \frac{\partial \log \psi(\eta,\lambda)}{\partial \eta}\, f(\eta,\lambda) - \operatorname{div}\big(f(\eta,\lambda)\big),$$
where $K(\lambda)$ is the normalization constant of the density at step $\lambda$. The solution of this equation leads to the exact flow of the probability density. Under linear-Gaussian assumptions, the flow quantities of the particles can be calculated as given in [21]:
$$f(\eta,\lambda) = A(\lambda)\,\eta + b(\lambda),$$
$$A(\lambda) = -\tfrac{1}{2}\, P H^{T} \big(\lambda H P H^{T} + R\big)^{-1} H,$$
$$b(\lambda) = (I + 2\lambda A)\big[(I + \lambda A)\, P H^{T} R^{-1} z + A \bar{\eta}\big],$$
where $P$ and $R$ denote the covariance of the estimation error and of the measurement noise, respectively, $H$ is the measurement matrix, $\eta$ denotes the state variable that migrates in each cycle, $\bar{\eta}$ is its mean, and $d\eta/d\lambda$ represents the change of $\eta$ with respect to $\lambda$.
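As a concrete illustration of these equations, the sketch below evaluates $A(\lambda)$ and $b(\lambda)$ and performs one Euler step of the flow with numpy; the helper names (exact_flow_params, migrate) are ours, and the explicit Euler discretization of $d\eta/d\lambda$ is an implementation choice rather than part of the exact flow itself.

```python
import numpy as np

def exact_flow_params(lam, P, H, R, z, eta_bar):
    """Exact Daum-Huang flow parameters A(lambda) and b(lambda) for a
    (locally) linearized Gaussian model, following the equations above."""
    d = P.shape[0]
    S = lam * H @ P @ H.T + R                        # lambda-scaled innovation covariance
    A = -0.5 * P @ H.T @ np.linalg.solve(S, H)       # A(lambda)
    b = (np.eye(d) + 2.0 * lam * A) @ (
        (np.eye(d) + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ eta_bar
    )                                                # b(lambda)
    return A, b

def migrate(particles, lam, dlam, P, H, R, z):
    """One Euler step of d(eta)/d(lambda) = A(lambda) eta + b(lambda) for all particles."""
    eta_bar = particles.mean(axis=0)
    A, b = exact_flow_params(lam, P, H, R, z, eta_bar)
    return particles + dlam * (particles @ A.T + b)
```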
2.1. Implementation of Particle Flow Filter
Algorithm 1 gives the pseudocode of the EDH filter implementation based on the theory stated above. The particle migration task is realized by the steps on lines 7 to 16.
1 Initialization: Draw $\{\eta_0^i\}_{i=1}^{N_p}$ from the prior $p(x_0)$;
2 Set $\bar{\eta}_0$ and $m_0$ as the mean; $P_0$ as the covariance matrix.
3 for $k = 1$ to $T$ do
4 Propagate particles $\eta_k^i = g_k(\eta_{k-1}^i) + v_k^i$;
5 Calculate the mean value $\bar{\eta}_k = \frac{1}{N_p}\sum_{i=1}^{N_p}\eta_k^i$;
6 Apply UKF/EKF prediction: $(m_{k-1}, P_{k-1}) \rightarrow (m_{k|k-1}, P)$;
7 for $j = 1$ to $N_\lambda$ do
8 Set $\lambda_j = j\,\Delta\lambda$;
9 Calculate $H$ by linearizing $h_k$ at $\bar{\eta}_k$;
10 Calculate $A_j$ and $b_j$ from (8) and (9) using $\bar{\eta}_k$ and $P$;
11 for $i = 1$ to $N_p$ do
12 Evaluate the flow $d\eta/d\lambda$ for each particle from (7);
13 Migrate particles: $\eta_k^i = \eta_k^i + \Delta\lambda\,(A_j \eta_k^i + b_j)$;
14 endfor
15 Re-evaluate $\bar{\eta}_k$ using the updated particles $\{\eta_k^i\}_{i=1}^{N_p}$.
16 endfor
17 Apply UKF/EKF update: $(m_{k|k-1}, P) \rightarrow (m_k, P_k)$;
18 Estimate $\hat{x}_k$ from the particles $\{\eta_k^i\}_{i=1}^{N_p}$;
19 Redraw particles $\{\eta_k^i\}_{i=1}^{N_p}$; (Optional)
20 endfor
$N_p$ is the number of particles, $T$ is the number of time steps, and $N_\lambda$ is the number of pseudo-time intervals. The UKF/EKF state and covariance matrix estimates are represented by $m$ and $P$, respectively.
The steps of UKF/EKF prediction and UKF/EKF update are expressed on lines 6 and 17, respectively. The measurement matrix $H$ is calculated by linearizing the measurement function at the current estimate $\bar{\eta}_k$, as depicted on line 9. The same $A_j$ and $b_j$ values are used to update all particles.
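A compact sketch of the pseudo-time loop on lines 7-16 is given below. It reuses the exact_flow_params helper sketched in Section 2, assumes a uniform pseudo-time grid, and folds the linearization offset of the nonlinear measurement into a modified measurement vector; the latter is a common implementation detail and an assumption on our part, not a step stated explicitly in Algorithm 1.

```python
def edh_flow(particles, P, h, jac_h, R, z, n_lambda=29):
    """Lines 7-16 of Algorithm 1 (EDH): one common flow for all particles,
    linearized at the particle mean at every pseudo-time step."""
    dlam = 1.0 / n_lambda
    lam = 0.0
    for _ in range(n_lambda):
        lam += dlam
        eta_bar = particles.mean(axis=0)
        H = jac_h(eta_bar)                         # linearize h at the mean (line 9)
        e = z - h(eta_bar) + H @ eta_bar           # linearization offset folded into the measurement
        A, b = exact_flow_params(lam, P, H, R, e, eta_bar)     # common A, b (line 10)
        particles = particles + dlam * (particles @ A.T + b)  # migrate all particles (lines 11-13)
    return particles
```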
A simple but effective change proposed in [18] is to replace lines 7-19 of Algorithm 1 with the pseudocode given in Algorithm 2.
7 for $j = 1$ to $N_\lambda$ do
8 Set $\lambda_j = j\,\Delta\lambda$;
9 for $i = 1$ to $N_p$ do
10 Calculate $H^i$ by linearizing $h_k$ at $\eta_k^i$;
11 Calculate $A_j^i$ and $b_j^i$ from (8) and (9) using $\eta_k^i$ and $P$;
12 Evaluate the flow $d\eta/d\lambda$ for particle $i$ from (7);
13 Migrate particles: $\eta_k^i = \eta_k^i + \Delta\lambda\,(A_j^i \eta_k^i + b_j^i)$;
14 endfor
15 Re-evaluate $\bar{\eta}_k$ using the updated particles $\{\eta_k^i\}_{i=1}^{N_p}$.
16 endfor
17 Apply UKF/EKF update: $(m_{k|k-1}, P) \rightarrow (m_k, P_k)$;
18 Estimate $\hat{x}_k$ from the particles $\{\eta_k^i\}_{i=1}^{N_p}$;
19 Set $m_k = \hat{x}_k$;
20 Redraw particles $\{\eta_k^i\}_{i=1}^{N_p}$; (Optional)
There are two major changes in Algorithm 2. For each particle, the associated part of the measurement function is linearized at that particle and the $A_j^i$ and $b_j^i$ values are calculated using the resulting $H^i$ matrix; thus the migration parameters are calculated individually for each particle, with varying values of $A_j^i$ and $b_j^i$. The other change is that the mean estimate from the UKF/EKF is replaced with the state estimate from the Daum-Huang filter, as stated on line 19 [18].
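Under the same assumptions (uniform pseudo-time grid, helper functions from the earlier sketches), the per-particle variant of Algorithm 2 might look as follows.

```python
def ledh_flow(particles, P, h, jac_h, R, z, n_lambda=29):
    """Lines 7-16 of Algorithm 2 (LEDH): migration parameters are computed
    individually for every particle at every pseudo-time step."""
    dlam = 1.0 / n_lambda
    lam = 0.0
    for _ in range(n_lambda):
        lam += dlam
        migrated = np.empty_like(particles)
        for i, eta in enumerate(particles):
            H_i = jac_h(eta)                      # linearize h at each particle (line 10)
            e_i = z - h(eta) + H_i @ eta          # per-particle linearization offset
            A_i, b_i = exact_flow_params(lam, P, H_i, R, e_i, eta)  # per-particle A, b (line 11)
            migrated[i] = eta + dlam * (A_i @ eta + b_i)            # migrate particle i (line 13)
        particles = migrated
    return particles
```

The nested loops over pseudo-time and particles make the cost of this variant roughly $N_p$ times that of the common-flow computation, which is the burden the CEDH filter addresses in the next section.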
3. Clustered Particle Flow Filter
The EDH filter calculates one common set of migration parameters for all of the particles, whereas the LEDH filter calculates the migration parameters individually for each particle. The CEDH filter clusters the particles that produce similar errors, and the calculation cost is significantly reduced by calculating common migration parameters for the particles within each cluster.
In this study, the k-means++ algorithm [28] is adopted to fulfil the clustering demand of the CEDH filter. The well-known k-means class of methods aims at minimizing the mean squared distance between points in the same cluster [29]. Arthur and Vassilvitskii proposed an algorithm that is $O(\log k)$-competitive with the optimal clustering. It strengthens the clustering scheme with a randomized seeding technique and improves both the speed and the accuracy of clustering.
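A minimal sketch of the k-means++ seeding step is shown below for illustration; in practice a library implementation (for example, scikit-learn's KMeans with init='k-means++') would typically be used instead.

```python
import numpy as np

def kmeans_pp_seeds(points, k, rng):
    """k-means++ seeding: the first centre is chosen uniformly at random,
    and each subsequent centre is drawn with probability proportional to
    the squared distance to the nearest centre chosen so far."""
    centres = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centres], axis=0)
        centres.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centres)
```

In the CEDH filter the clustered quantities are scalar error values, so points would simply be the per-particle errors reshaped to a single column.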
3.1. Implementation of Clustered Particle Flow Filter
The cost of calculation is high in the LEDH filter, since it pursues the calculation of the flow for each particle in each pseudo-time interval. The main objective of the CEDH filter is to reduce this cost of calculation by introducing a clustering step. At the same time, it reduces the influence of particles with a high error margin on the overall system and thereby improves the performance of the LEDH filter. The pseudocode for the implementation of the CEDH filter is given in Algorithm 3.
7 for $j = 1$ to $N_\lambda$ do
8 Set $\lambda_j = j\,\Delta\lambda$;
9 for $i = 1$ to $N_p$ do
10 Calculate $H^i$ by linearizing $h_k$ at $\eta_k^i$;
11 endfor
12 Cluster the particles into $N_c$ clusters over their margin of error;
13 for $c = 1$ to $N_c$ do
14 Calculate $A_j^c$ and $b_j^c$ from (6) and (7) using the cluster representative $\bar{\eta}_k^c$ and $P$;
15 Evaluate the flow for each particle of cluster $c$ from (5);
16 Migrate the particles of cluster $c$: $\eta_k^i = \eta_k^i + \Delta\lambda\,(A_j^c \eta_k^i + b_j^c)$;
17 endfor
18 Eliminate the largest cluster with the biggest error;
19 Re-evaluate $\bar{\eta}_k$ using the updated particles $\{\eta_k^i\}_{i=1}^{N_p}$.
20 endfor
21 Apply UKF/EKF update: $(m_{k|k-1}, P) \rightarrow (m_k, P_k)$;
22 Estimate $\hat{x}_k$ from the particles $\{\eta_k^i\}_{i=1}^{N_p}$;
23 Redraw particles $\{\eta_k^i\}_{i=1}^{N_p}$; (Optional)
$N_c$ is the number of clusters. Algorithm 3, like the LEDH filter, replaces the steps on lines 7 to 19 of Algorithm 1. The clustering step operates on the error margins of the particles, as expressed on line 12. The migration parameters are calculated only as many times as there are clusters, through the steps depicted on lines 13 to 17. Therefore, in the CEDH filter the particles are neither tied to a single common parameter calculation nor to an individual calculation for every particle.
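To make the clustered migration concrete, lines 7-20 of Algorithm 3 might be sketched as follows. The measurement residual norm as the per-particle error, the nearest-seed cluster assignment, the single linearization per cluster at the cluster representative, and the reuse of the kmeans_pp_seeds and exact_flow_params helpers from the earlier sketches are our simplifying assumptions; the elimination of the worst cluster on line 18 is omitted here for brevity.

```python
def cedh_flow(particles, P, h, jac_h, R, z, n_clusters, rng, n_lambda=29):
    """Sketch of the CEDH migration: one (A, b) pair per error cluster."""
    particles = particles.copy()
    dlam = 1.0 / n_lambda
    lam = 0.0
    for _ in range(n_lambda):
        lam += dlam
        # Per-particle error; the measurement residual norm is an illustrative choice.
        errors = np.array([np.linalg.norm(z - h(eta)) for eta in particles]).reshape(-1, 1)
        centres = kmeans_pp_seeds(errors, n_clusters, rng)
        labels = np.argmin(np.abs(errors - centres.T), axis=1)   # nearest-centre assignment
        for c in range(n_clusters):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            eta_c = particles[members].mean(axis=0)              # cluster representative
            H_c = jac_h(eta_c)                                   # one linearization per cluster
            e_c = z - h(eta_c) + H_c @ eta_c
            A_c, b_c = exact_flow_params(lam, P, H_c, R, e_c, eta_c)
            particles[members] += dlam * (particles[members] @ A_c.T + b_c)
    return particles
```

With $N_c \ll N_p$, the number of $(A, b)$ evaluations per pseudo-time step drops from $N_p$ (LEDH) to $N_c$, which is the source of the savings reported in Section 4.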
The CEDH filter strives to reduce the cost of calculation of the LEDH filter, which arises from the exhaustive calculation of each migration parameter. In addition, while reducing the cost of calculation, systematically clustering the particles with high error margins together leads to a significant performance improvement. The performance and calculation speed obtained by implementing the CEDH filter for the multidimensional target tracking scenario are assessed in the next section.
4. Simulation and Results
The particle filters are applied to a multitarget tracking problem adapted from [30]. A wireless sensor network of 25 acoustic amplitude sensor nodes located at the grid intersections of a rectangular region is used. The model is shown in Figure 1. There are four targets moving independently along the two axes. The overall state vector contains the four-dimensional states of these four targets and therefore lies in a 16-dimensional state space.
(Figure 1: Wireless sensor network model with the true target routes and the routes estimated by CEDH.)
The independent movement model of the four targets ($c = 1, \ldots, 4$), each moving at a constant velocity, can be expressed as
$$x_k^c = F\, x_{k-1}^c + v_k^c,$$
where $x_k^c$ consists of the x-y position and x-y velocity components of the corresponding target, $F$ is the constant-velocity transition matrix, and $v_k^c$ is the process noise; the sampling interval and the process noise covariance used by the filters are fixed for all runs. All targets emit sound with amplitude $\Psi$, and each sensor records the sum of the received amplitudes. The measurement function for the $s$-th sensor located at $R_s$ is therefore additive:
$$h_s(x_k) = \sum_{c=1}^{4} \frac{\Psi}{\lVert p_k^c - R_s \rVert + d_0},$$
where $p_k^c$ denotes the position of target $c$ and $d_0$ is a small constant that keeps the amplitude finite at the sensor location. The measurements are perturbed by Gaussian noise. There are 25 sensors located at grid intersections, as shown in Figure 1. Each of the four targets is initialized with its own state.
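A sketch of this simulation model is given below; the sampling interval, the amplitude $\Psi$, the offset $d_0$, and the noise covariances are illustrative placeholders rather than the values used in the paper, and the state ordering [px, py, vx, vy] is likewise an assumption.

```python
import numpy as np

DT = 1.0     # sampling interval (illustrative value)
PSI = 10.0   # emitted sound amplitude Psi (illustrative value)
D0 = 0.1     # offset d0 keeping the amplitude finite at the sensor (illustrative value)

# Constant-velocity transition for one target with state [px, py, vx, vy].
F = np.array([[1.0, 0.0, DT, 0.0],
              [0.0, 1.0, 0.0, DT],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

def propagate_target(x, q_cov, rng):
    """x_k = F x_{k-1} + v_k for a single four-dimensional target state."""
    return F @ x + rng.multivariate_normal(np.zeros(4), q_cov)

def sensor_measurement(target_states, sensor_pos):
    """Acoustic amplitude at one sensor: sum over targets of Psi / (range + d0)."""
    total = 0.0
    for x in target_states:                              # list of per-target state vectors
        total += PSI / (np.linalg.norm(x[:2] - sensor_pos) + D0)
    return total
```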
The multitarget tracking scenario associated with the wireless sensor network model was simulated using the EKF, BPF, EDH, LEDH, and CEDH filters for different measurement sets. Each run starts with a different initial distribution. Figure 1 shows the true routes of the targets and the estimated routes obtained with CEDH. The simulation consists of 40 time steps and each algorithm is run separately 100 times. The optimal mass transfer (OMAT) metric is computed by averaging over the runs at each time step. The average OMAT error calculated for the EKF, BPF, EDH, LEDH, and CEDH filters is given in Figure 2.
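For reference, when the numbers of true and estimated targets are equal, the OMAT metric reduces to an optimal assignment of estimates to true positions followed by a p-th order mean of the matched distances; the sketch below uses scipy's linear_sum_assignment, and the order p is left as a parameter since the value used in the simulations is not restated here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def omat(true_pts, est_pts, p=1.0):
    """OMAT distance between two equal-size point sets."""
    cost = np.linalg.norm(true_pts[:, None, :] - est_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost ** p)        # optimal one-to-one assignment
    return np.mean(cost[rows, cols] ** p) ** (1.0 / p)
```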
(Figure 2: Average OMAT error of the EKF, BPF, EDH, LEDH, and CEDH filters over the simulation time steps.)
As shown in Figure 2, the EKF has the highest error. It is followed by the BPF, EDH, and LEDH, respectively. The CEDH filters with different cluster numbers are the best performing filters at all time intervals.
Detailed information about the filters is given in Table 1. The computer used for the simulations is a MacBook Air (Early 2015) with a 1.6 GHz Intel Core i5 processor, 4 GB of 1600 MHz DDR3 memory, and an Intel HD Graphics 6000 graphics card with 1536 MB of memory.
As shown in Figure 3, the CEDH algorithm provides a significant improvement in terms of calculation cost: its execution time is approximately half that of the LEDH algorithm. Increasing the number of clusters improves performance; however, this improvement brings an extra calculation cost to the filter.
(Figure 3: Calculation cost (execution time) comparison of the filters.)
5. Conclusions
We introduced the Clustered Exact Daum-Huang (CEDH) particle filter in this study. While improving the performance of the LEDH filter, CEDH also reduces the cost of calculation by utilizing a clustering strategy. Clustering the particles with high error margins ensures that the influence of these particles on the overall system can be reduced. In the CEDH filter, the particles are not all tied to the calculation of the same parameters, as they are in the EDH filter. The CEDH filter succeeds in reducing the cost of calculation of the LEDH filter, which arises from the comprehensive calculation of each migration parameter; the cost is significantly reduced by calculating common migration parameters for particles with similar error values. The given multidimensional target tracking problem demonstrates the success of the CEDH filter. In future work, the effect of clustering on the system will be examined further, with a focus on selecting the appropriate number of clusters and on eliminating the appropriate cluster with the help of machine learning.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.