Abstract

Recently, lattice theory has been widely used for integer ambiguity resolution in the Global Navigation Satellite System (GNSS). When lattice theory is used to resolve the integer ambiguity, the correlation between the lattice basis vectors must be reduced to ensure the efficiency of the solution. Lattice reduction consists of two operations: scale reduction and basis vector exchange. Scale reduction has no direct impact on the subsequent search efficiency, whereas basis vector exchange affects it directly. Hence, the Lenstra-Lenstra-Lovász (LLL) algorithm is applied in ambiguity resolution to improve efficiency, and the improved HLLL method based on the Householder transformation is also used. Moreover, to further improve the calculation speed, a Pivoting Householder LLL (PHLLL) method based on the Householder orthogonal transformation and rotation sorting is proposed here. The idea of the PHLLL method is as follows: First, a sort matrix is introduced into the lattice basis reduction process to sort the original matrix. Then, the sorted matrix is processed by the Householder transformation. After the transformation, the matrix is sorted again, and the procedure iterates until the diagonal elements of the matrix are in ascending order. In addition, when the Householder mirror operator is used for orthogonalization, the old column norms are updated to obtain the new norms, which reduces the number of column norm calculations. Compared with the LLL and HLLL reduction algorithms, the experimental results show that the PHLLL algorithm has higher reduction efficiency and effectiveness, and the theoretical superiority of the algorithm is demonstrated.

1. Introduction

Global Navigation Satellite System (GNSS) high-precision positioning usually uses carrier phase measurements to build the positioning equations [1, 2], so it involves the problem of integer ambiguity resolution. Fast and accurate integer ambiguity resolution has a significant impact on the final positioning results. The processing methods for integer ambiguity fall into four main categories [3]: solutions based on special operations, on the measurement domain, on the coordinate domain, and on the ambiguity domain. Because the ambiguity-domain approach resolves the integer ambiguity using the least squares principle, it has high efficiency and a high success rate and is widely used. Teunissen first proposed the least squares ambiguity decorrelation adjustment (LAMBDA) [4-6], which uses the least squares method to reduce the correlation of the integer ambiguities and then searches for the correct ambiguity, significantly improving the search efficiency. Subsequently, many scholars have studied the ambiguity reduction problem and put forward effective methods. Liu et al. [7] constructed a joint decorrelation algorithm based on an upper and lower triangulation process to reduce the correlation between ambiguities; Xu et al. [8] used the Cholesky decomposition to construct an integer Gaussian matrix and proposed an inverse integer Cholesky decorrelation method; Liu et al. [9] reduced the correlation of the search space by reducing the dimension of the ambiguity covariance matrix, overcoming the ill-conditioned matrix decomposition caused by the Z transformation and reducing the calculation time by 20%.

The GNSS integer ambiguity resolution problem is equivalent to the Closest Vector Problem (CVP) in lattice theory [10]. Research on lattice basis reduction originated from the Gauss reduction proposed by Lagrange and Gauss for quadratic homogeneous equations [11]. Korkine and Zolotareff improved the Gauss reduction, extended it to arbitrary dimensions, and proposed the KZ reduction algorithm [12]. Lenstra et al. proposed the Lenstra-Lenstra-Lovász (LLL) reduction algorithm, which can solve the shortest vector problem in a lattice in polynomial time [13, 14]. Schnorr proposed the Block KZ (BKZ) reduction to address the fact that the KZ reduction basis cannot guarantee polynomial running time in arbitrary dimensions [15].

Since Hassibi and Boyd applied the lattice reduction algorithm LLL to ambiguity decorrelation, many scholars have studied the decorrelation performance of LLL from the perspective of lattice basis reduction theory [16]. Xu et al. proposed an inverse integer Cholesky decorrelation method with better performance than integer Gaussian decorrelation and the LLL algorithm [17]. Fan Long et al. improved the LLL algorithm based on the block Gram-Schmidt orthogonalization algorithm, greatly reducing the search time while improving the average correlation coefficient and condition number [18]. Lu improved the LLL algorithm by reducing dimensions in blocks, effectively improving its operational efficiency [19]. The main idea of the LLL algorithm is to use the Gram-Schmidt orthogonal transformation to reduce the lattice basis vectors, with the aim of making them as orthogonal as possible. Schnorr therefore improved the efficiency of LLL reduction by using the Householder transform [20]. Kai et al. improved the LLL reduction algorithm based on the Householder orthogonal transformation with systematic rotation and proposed an HLLL reduction algorithm, which improves the reduction effect but sacrifices a certain amount of reduction time [21].

In this paper, we mainly address the problem that a large number of basis vector exchanges leads to poor ambiguity search efficiency in the lattice basis reduction process. Based on the influence of preordering on the decorrelation efficiency and on the idea of column norm downdating, we propose a Pivoting Householder LLL (PHLLL) algorithm with pivoted column selection, which greatly reduces the number of basis vector exchanges. The results show that the PHLLL algorithm achieves a better Hadamard ratio than the LLL and HLLL algorithms and is computationally more efficient.

2. Integer Ambiguity Reduction Model Based on Lattice Theory

The observation equations under the double difference mode of GNSS high-precision positioning can be linearized as follows:

$$y = Aa + Bb + e, \quad a \in \mathbb{Z}^{n}, \; b \in \mathbb{R}^{p} \tag{1}$$

In equation (1), the symbol $y$ denotes the double difference carrier phase observation vector; the symbol $a$ denotes the double difference integer ambiguity vector, with $n$ the number of ambiguities; the symbol $b$ denotes the unknown baseline vector after double differencing, with $p$ the number of baseline parameters; the symbols $A$ and $B$ denote the design matrices of the ambiguity and the baseline, respectively; the symbol $e$ denotes the error vector. The least squares problem of equation (1) can be reduced to the constrained minimization problem of the following formula:

$$\min_{a \in \mathbb{Z}^{n},\, b \in \mathbb{R}^{p}} \left\| y - Aa - Bb \right\|_{Q_{y}}^{2} \tag{2}$$

In equation (2), the symbol $\|\cdot\|_{Q_{y}}^{2}$ denotes $(\cdot)^{T} Q_{y}^{-1} (\cdot)$; the symbol $Q_{y}$ denotes the covariance matrix of the double difference carrier phase observations.

Since the integer constraint $a \in \mathbb{Z}^{n}$ in equation (2) cannot be handled directly, the problem is usually solved in two steps. First, we ignore the integer property of $a$ and treat it as a real-valued vector, transform problem (2) into an ordinary unconstrained least squares problem, and obtain the real-valued estimates $\hat{a}$ and $\hat{b}$ of the ambiguity and the baseline together with the variance-covariance matrix $Q_{\hat{a}}$ of the ambiguity. Then, the corresponding minimization criterion is constructed for the float solution as follows:

$$\check{a} = \arg\min_{a \in \mathbb{Z}^{n}} (\hat{a} - a)^{T} Q_{\hat{a}}^{-1} (\hat{a} - a) \tag{3}$$

Once the integer solution $\check{a}$ is obtained, the unconstrained baseline solution $\hat{b}$ can be adjusted with the residual $\hat{a} - \check{a}$ to obtain the baseline solution $\check{b}$:

$$\check{b} = \hat{b} - Q_{\hat{b}\hat{a}} Q_{\hat{a}}^{-1} (\hat{a} - \check{a}) \tag{4}$$

In equation (4), the symbols $\hat{a}$ and $\hat{b}$ are usually referred to as the float solution; the symbols $\check{a}$ and $\check{b}$ are called the fixed solution; and $Q_{\hat{b}\hat{a}}$ denotes the covariance matrix between $\hat{b}$ and $\hat{a}$. Solving equation (3) amounts to searching for the integer vector with the shortest distance to $\hat{a}$ in the ambiguity space.

If the covariance matrix $Q_{\hat{a}}$ in the ambiguity resolution process of equation (3) is Cholesky decomposed,

$$Q_{\hat{a}}^{-1} = R^{T} R \tag{5}$$

In equation (5), the symbol $R$ denotes an upper triangular matrix. Because $Q_{\hat{a}}^{-1}$ is a full-rank matrix, the column vectors of the decomposed matrix $R$ are linearly independent, and the objective function of equation (3) can be transformed into

$$\check{a} = \arg\min_{a \in \mathbb{Z}^{n}} \left\| R\hat{a} - Ra \right\|_{2}^{2} \tag{6}$$

In equation (6), the symbol $\hat{z}$ denotes $R\hat{a}$ and the symbol $z$ denotes $Ra$. Based on the knowledge of lattice theory, if $R$ is regarded as the basis matrix, that is, each column vector of $R$ is regarded as a lattice basis vector, then the $n$-dimensional full-rank lattice can be constructed as follows:

$$\Lambda(R) = \left\{ z = Ra \mid a \in \mathbb{Z}^{n} \right\} \tag{7}$$

Based on lattice theory, equation (6) is further equivalent to

$$\check{z} = \arg\min_{z \in \Lambda(R)} \left\| \hat{z} - z \right\|_{2} \tag{8}$$

Equation (8) shows that $\check{z}$ is the vector in the lattice $\Lambda(R)$ nearest to the target vector $\hat{z}$, which proves that the problem of solving the integer ambiguity is equivalent to solving the CVP on the lattice.
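To make the two-step procedure concrete, the following Python sketch (a minimal illustration assuming NumPy; the function name ambiguity_to_cvp and the small example values are ours, not from the original text) builds the lattice form of equation (8) from a float solution and its covariance matrix:

import numpy as np

def ambiguity_to_cvp(Qa, ahat):
    # Cholesky factor of the inverse covariance, equation (5):
    # Qa^{-1} = R^T R with R upper triangular.
    L = np.linalg.cholesky(np.linalg.inv(Qa))
    R = L.T
    # Target vector of equation (8): zhat = R @ ahat.
    return R, R @ ahat

# A small 3-dimensional example (values chosen only for illustration).
Qa = np.array([[6.290, 5.978, 0.544],
               [5.978, 6.292, 2.340],
               [0.544, 2.340, 6.288]])
ahat = np.array([5.45, 3.10, 2.97])
R, zhat = ambiguity_to_cvp(Qa, ahat)
# Every integer vector a yields a lattice point R @ a; the fixed
# ambiguity is the integer a whose lattice point lies closest to zhat.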

3. Lattice Basis Reduction Algorithm for Integer Ambiguity

3.1. LLL Reduction Algorithm

The essence of LLL reduction is to use the integer Gram-Schmidt orthogonal transformation to reduce the lattice basis. In order to diagonalize the basis matrix as much as possible, the column vectors of the upper triangular matrix $R$ obtained by equation (5) need to be processed so that they become as orthogonal to each other as possible. The LLL reduction algorithm adopts the Gram-Schmidt orthogonal transformation to decompose the basis matrix into

$$R = GZ \tag{9}$$

In equation (9), the symbol $G$ denotes an approximately orthogonal matrix; the symbol $Z$ denotes a unimodular matrix. The elements of the matrices $G$ and $Z$ are calculated as follows:

$$g_{i} = r_{i} - \sum_{j=1}^{i-1} \mu_{ij}^{(k)} g_{j}, \qquad \mu_{ij}^{(k)} = \left\lceil \frac{\langle r_{i}, g_{j} \rangle}{\langle g_{j}, g_{j} \rangle} \right\rfloor \tag{10}$$

where the symbol $r_{i}$ denotes the $i$-th column vector of the matrix $R$; the symbol $g_{j}$ denotes the $j$-th column vector of the matrix $G$; the symbol $\langle \cdot, \cdot \rangle$ denotes the inner product operation; the symbol $\lceil \cdot \rfloor$ denotes the rounding operation; and the superscript $(k)$ denotes the iteration count, with initial value 0; the iteration stops when all rounded coefficients $\mu_{ij}^{(k)}$ are zero, that is, when no further integer reduction is possible. After the Gram-Schmidt orthogonal transformation, the covariance matrix can be expressed as

$$Q_{\hat{z}} = Z^{T} Q_{\hat{a}} Z \tag{11}$$

Since $G$ is an approximately orthogonal matrix, the covariance matrix obtained by equation (11) is closer to diagonal, that is, more decorrelated. After the lattice basis is processed by Gram-Schmidt orthogonalization, the LLL algorithm is completed by iterating the scale reduction and the basis vector exchange.
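As an illustration of the loop structure just described, the following is a compact sketch of the textbook LLL procedure on a column basis (assuming NumPy; lll_reduce and delta are our illustrative names, and this is a sketch of the generic algorithm, not the authors' implementation):

import numpy as np

def lll_reduce(B, delta=0.75):
    B = B.astype(float).copy()
    n = B.shape[1]

    def gso(M):
        # Classical Gram-Schmidt: G holds the orthogonalized columns,
        # mu the projection coefficients of equation (10) before rounding.
        G = np.zeros_like(M)
        mu = np.zeros((n, n))
        for i in range(n):
            G[:, i] = M[:, i]
            for j in range(i):
                mu[i, j] = M[:, i] @ G[:, j] / (G[:, j] @ G[:, j])
                G[:, i] -= mu[i, j] * G[:, j]
        return G, mu

    G, mu = gso(B)
    k = 1
    while k < n:
        # Scale reduction: subtract rounded multiples of earlier columns.
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
        G, mu = gso(B)
        # Lovasz condition: exchange adjacent basis vectors if violated.
        lhs = delta * (G[:, k-1] @ G[:, k-1])
        rhs = G[:, k] @ G[:, k] + mu[k, k-1]**2 * (G[:, k-1] @ G[:, k-1])
        if lhs <= rhs:
            k += 1
        else:
            B[:, [k-1, k]] = B[:, [k, k-1]]
            G, mu = gso(B)
            k = max(k - 1, 1)
    return B

Recomputing the full Gram-Schmidt factorization in every iteration keeps the sketch short; practical implementations update G and mu incrementally.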

3.2. HLLL Reduction Algorithm

According to matrix theory, in addition to Gram-Schmidt orthogonalization, a matrix can also be decomposed into the product of a unitary matrix and an upper triangular matrix by the Householder transformation, but the matrix needs to be transformed column by column. The core idea of the Householder transformation is to select a matrix $H_{k}$ to process the $n$-dimensional matrix obtained from equation (5) column by column, so that the elements of the $k$-th column below the $k$-th row become zero while the zero elements introduced in previous steps remain unchanged. The matrix $H_{k}$ is constructed as follows:

$$H_{k} = \begin{pmatrix} I_{k-1} & 0 \\ 0 & \bar{H}_{k} \end{pmatrix} \tag{12}$$

where the symbol $I_{k-1}$ denotes the $(k-1)$-dimensional identity matrix; the symbol $\bar{H}_{k}$ denotes a unitary matrix, which is called the Householder mirror operator and is defined as follows:

$$\bar{H}_{k} = I - 2\frac{vv^{T}}{v^{T}v} \tag{13}$$

In equation (13), $v = x + \operatorname{sign}(x_{1})\|x\|_{2}\,e_{1}$, where $x$ denotes the subvector of the $k$-th column from the $k$-th row downward and $e_{1}$ denotes the first standard basis vector.

Based on the above analysis, the overall transformation process of the Householder method is as follows:

$$H_{n} \cdots H_{2} H_{1} B = R \tag{14}$$

Since each matrix $H_{k}$ is unitary (and symmetric), we can rewrite (14) as

$$B = QR, \qquad Q = H_{1} H_{2} \cdots H_{n} \tag{15}$$

So far, the basis matrix has been transformed into the product of an orthogonal matrix and an upper triangular matrix. If the upper triangular matrix were diagonal, the basis vectors would be directly orthogonal to each other. Therefore, the goal of reduction is to make the upper triangular matrix as close to diagonal as possible while keeping the lattice basis volume unchanged.
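The column-by-column elimination of equations (12)-(15) can be sketched as follows (assuming NumPy; householder_qr is an illustrative name):

import numpy as np

def householder_qr(B):
    m, n = B.shape
    R = B.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        sigma = np.linalg.norm(x)
        if sigma == 0.0:
            continue
        # Mirror vector of equation (13); the sign choice avoids cancellation.
        v = x.copy()
        v[0] += sigma if x[0] >= 0 else -sigma
        v /= np.linalg.norm(v)
        # Apply H_k = I - 2 v v^T to the trailing block of R (equation (14))
        # and accumulate Q = H_1 H_2 ... H_n (equation (15)).
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

After the loop, Q @ R reproduces B, the entries of R below the diagonal are numerically zero, and the zeros created in earlier columns are untouched at each step.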

In order to make $R$ as close to a diagonal matrix as possible, a transformation matrix can be constructed to reduce the elements above the diagonal:

$$Z_{ij} = I_{n} - \left\lceil \frac{r_{ij}}{r_{ii}} \right\rfloor e_{i} e_{j}^{T}, \quad i < j \tag{16}$$

The upper triangular matrix can be updated by right-multiplying the basis matrix by (16) to complete the scale reduction of the vectors. The length relation of adjacent vectors in the matrix can be judged by the elements of $R$ as follows:

$$\delta\, r_{k-1,k-1}^{2} \leq r_{k,k}^{2} + r_{k-1,k}^{2}, \quad \delta \in \left(\tfrac{1}{4}, 1\right] \tag{17}$$

If equation (17) is not satisfied, the two adjacent vectors are exchanged. After the exchange, a new orthogonal matrix needs to be computed to eliminate the elements below the diagonal of the new column vector.
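A short sketch of these two operations on an upper triangular R follows (same NumPy assumptions as above; the helper names are ours):

import numpy as np

def size_reduce_column(R, Z, k):
    # Scale reduction of column k against earlier columns, equation (16);
    # Z accumulates the unimodular transformation (starts as the identity).
    for i in range(k - 1, -1, -1):
        q = round(R[i, k] / R[i, i])
        if q != 0:
            R[:i+1, k] -= q * R[:i+1, i]
            Z[:, k] -= q * Z[:, i]

def swap_needed(R, k, delta=0.75):
    # True when the test of equation (17) fails for columns k-1 and k,
    # i.e., when the two adjacent basis vectors must be exchanged.
    return delta * R[k-1, k-1]**2 > R[k, k]**2 + R[k-1, k]**2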

3.3. PHLLL Reduction Algorithm

The purpose of scale reduction and basis vector exchange in the lattice reduction algorithm is to improve the efficiency of the subsequent ambiguity search. Hao [22] pointed out that scale reduction alone cannot improve the efficiency of the ambiguity search, so reducing the number of basis vector exchanges is the key to improving the lattice basis reduction algorithm. Therefore, we consider preprocessing the basis vectors to reduce the number of reduction operations. Preordering the covariance matrix can directly improve the decorrelation performance of the LAMBDA algorithm [23], and the LAMBDA algorithm is built on the same idea as lattice reduction. Therefore, we improve the HLLL algorithm with the symmetric rotation sorting strategy, which benefits the efficiency of the subsequent search the most, to reduce the number of basis vector exchanges. The modified algorithm introduces a sort matrix into the decomposition process to ensure that the diagonal elements of the matrix are arranged in ascending order. First, the diagonal elements of the original matrix are compared and a sort matrix is built to exchange the column vectors of the matrix. Then, the exchanged matrix is processed by the Householder transformation. After the transformation, the matrix is sorted again, and the procedure iterates until the diagonal elements satisfy the ascending-order requirement.

In addition, the following property of orthogonal transformations can be exploited: a unitary matrix preserves the Euclidean norm, so after the $k$-th Householder step the norms of the remaining columns satisfy

$$\left\| r_{j}^{(k+1)} \right\|_{2}^{2} = \left\| r_{j}^{(k)} \right\|_{2}^{2} - r_{kj}^{2}, \quad j > k \tag{18}$$

It can be seen from (18) that, when orthogonalizing with the Householder mirror operator, it is not necessary to recompute the column norms from scratch at every step; instead, a new norm is obtained by downdating the old column norm.
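The property can be checked numerically with one Householder step (NumPy assumed; the data are random and only for illustration):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))

# One mirror step on column 0, as in equation (13).
x = B[:, 0]
v = x.copy()
v[0] += np.linalg.norm(x) if x[0] >= 0 else -np.linalg.norm(x)
v /= np.linalg.norm(v)
B1 = B - 2.0 * np.outer(v, v @ B)

# Equation (18): the squared norm of each remaining subcolumn equals the
# old squared column norm minus the square of the newly created row entry.
for j in range(1, 5):
    old = B[:, j] @ B[:, j]
    new = B1[1:, j] @ B1[1:, j]
    assert np.isclose(new, old - B1[0, j]**2)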

Through the above two improvements, the workload of the column norm calculations in the Householder transformation with column pivoting is reduced from $\mathcal{O}(n^{3})$ to $\mathcal{O}(n^{2})$ flops. The specific process of the PHLLL reduction algorithm is given in Algorithm 1.

Input: basis matrix $R$ (upper triangular, from equation (5)), dimension $n$
Output: matrix $R$ after lattice basis reduction
1: function PHLLL($R$, $n$)
2:  for $j = 1$; $j \le n$; $j{+}{+}$ do
3:   $c_{j} \leftarrow \|r_{j}\|_{2}^{2}$ ▹ initial column norms
4:  end for
5:  for $k = 1$; $k \le n$; $k{+}{+}$ do
6:   $p \leftarrow \arg\min_{k \le j \le n} c_{j}$ ▹ pivoting: smallest column norm first
7:   if $c_{p} \le 0$ then
8:    break
9:   end if
10:   exchange columns $k$ and $p$ of $R$ and record the exchange in the sort matrix $P$
11:   exchange $c_{k}$ and $c_{p}$
12:   construct the mirror operator $\bar{H}_{k}$ from equation (13)
13:   $R \leftarrow H_{k}R$ ▹ eliminate the elements below $r_{kk}$
14:   $Q \leftarrow QH_{k}$
15:   for $j = k+1$; $j \le n$; $j{+}{+}$ do
16:    $c_{j} \leftarrow c_{j} - r_{kj}^{2}$ ▹ norm downdating, equation (18)
17:   end for
18:  end for
19:  $k \leftarrow 2$
20:  while $k \le n$ do
21:   for $i = k-1$; $i \ge 1$; $i{-}{-}$ do
22:    scale-reduce column $k$ with $Z_{ik}$ from equation (16)
23:   end for
24:   if $\delta\, r_{k-1,k-1}^{2} \le r_{k,k}^{2} + r_{k-1,k}^{2}$ then
25:    $k \leftarrow k + 1$
26:   else
27:    exchange columns $k-1$ and $k$ and restore the upper triangular form with a mirror operator
28:    $k \leftarrow \max(k-1, 2)$
29:   end if
30:  end while
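For reference, the following runnable Python sketch mirrors Algorithm 1 under the stated assumptions (NumPy; phlll, delta, and piv are our illustrative names; triangularity after a swap is restored here with a 2x2 rotation for brevity, where the algorithm above uses a mirror operator). It is an illustration, not the authors' reference implementation.

import numpy as np

def phlll(B, delta=0.75):
    m, n = B.shape
    R = B.astype(float).copy()
    piv = np.arange(n)                       # records the sort matrix P
    c = np.sum(R * R, axis=0)                # column norms (lines 2-4)
    for k in range(n):
        p = k + np.argmin(c[k:])             # pivot: smallest norm first (line 6)
        if c[p] <= 0.0:
            break                            # (lines 7-9)
        R[:, [k, p]] = R[:, [p, k]]          # sorting exchange (lines 10-11)
        c[[k, p]] = c[[p, k]]
        piv[[k, p]] = piv[[p, k]]
        x = R[k:, k]
        v = x.copy()
        v[0] += np.linalg.norm(x) if x[0] >= 0 else -np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv > 0.0:
            v /= nv
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # mirror step (lines 12-14)
        c[k+1:] -= R[k, k+1:] ** 2           # norm downdating, eq. (18) (lines 15-17)
    k = 1                                    # LLL stage (lines 19-30)
    while k < n:
        for i in range(k - 1, -1, -1):       # scale reduction, eq. (16)
            q = round(R[i, k] / R[i, i])
            if q != 0:
                R[:i+1, k] -= q * R[:i+1, i]
        if delta * R[k-1, k-1]**2 <= R[k, k]**2 + R[k-1, k]**2:  # eq. (17)
            k += 1
        else:
            R[:, [k-1, k]] = R[:, [k, k-1]]  # basis vector exchange
            a, b = R[k-1, k-1], R[k, k-1]
            r = np.hypot(a, b)
            G = np.array([[a / r, b / r], [-b / r, a / r]])
            R[k-1:k+1, k-1:] = G @ R[k-1:k+1, k-1:]  # restore triangularity
            k = max(k - 1, 1)
    return R, piv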

4. Results and Discussion

In order to compare the lattice basis reduction performance of the LLL, HLLL, and PHLLL algorithms, this paper uses the matrix condition number [24], the Hadamard ratio, and the reduction time to evaluate the advantages and disadvantages of each algorithm. The matrix condition number describes whether a matrix is ill-conditioned: the larger the condition number, the more ill-conditioned the matrix. The condition number is defined as follows:

$$\operatorname{cond}(Q) = \|Q\|_{2} \cdot \|Q^{-1}\|_{2} = \frac{\sigma_{\max}(Q)}{\sigma_{\min}(Q)} \tag{19}$$

The Hadamard ratio describes the degree of orthogonality of a group of vectors. Its value range is $(0, 1]$: the closer the vectors are to orthogonal, the closer their Hadamard ratio is to 1. The reciprocal of the Hadamard ratio is called the orthogonality defect. The Hadamard ratio of a lattice basis $B = (b_{1}, b_{2}, \ldots, b_{n})$ is defined as follows:

$$\mathcal{H}(B) = \left( \frac{|\det B|}{\|b_{1}\| \|b_{2}\| \cdots \|b_{n}\|} \right)^{1/n} \tag{20}$$
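Both metrics are straightforward to compute (NumPy assumed; the function names are ours):

import numpy as np

def condition_number(Q):
    # Equation (19): ratio of the largest to the smallest singular value.
    s = np.linalg.svd(Q, compute_uv=False)
    return s[0] / s[-1]

def hadamard_ratio(B):
    # Equation (20): equals 1 for an orthogonal basis and approaches 0
    # as the basis vectors become more correlated.
    n = B.shape[1]
    col_norms = np.linalg.norm(B, axis=0)
    return (abs(np.linalg.det(B)) / np.prod(col_norms)) ** (1.0 / n)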

Each set of GNSS receiver observation data has its own observation mode and specific geometric structure, so using only one set of receiver data for calculation and analysis cannot demonstrate the correctness and effectiveness of the algorithm in a general sense. Moreover, if only measured GNSS receiver data are used to analyze the algorithm, the results are subject to uncertainty from external factors such as weather and the ambient conditions of the receiver. Therefore, this paper combines simulated data with measured data. All calculations are performed on a computer running the Windows 10 operating system with an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz and 16 GB of memory.

4.1. Simulation Experiment

Following the simulation method proposed by Chang et al. [25], a group of 5-30 dimensional covariance matrices is randomly simulated. The three reduction algorithms are then compared and analyzed in terms of the condition number, Hadamard ratio, number of basis vector exchanges, and reduction time after the lattice basis reduction processing.

The comparison between the condition numbers after processing by the three algorithms and the unprocessed condition numbers is shown in Figure 1.

It can be seen from Figure 1 that the condition number of the matrix usually increases with the dimension, but because of the randomness of the simulated data the condition number can suddenly decrease, for example, at 25 dimensions, where the simulated matrix itself is less ill-conditioned. No matter how ill-conditioned the unprocessed simulation matrix is, the condition number after PHLLL reduction is smaller than that after the LLL and HLLL reduction algorithms.

The Hadamard ratios of the unprocessed covariance matrix and of the covariance matrices processed by the three reduction algorithms are shown in Figure 2.

The results in Figure 2 are consistent with the trend of the condition numbers in Figure 1: the larger the condition number, the more ill-conditioned the matrix, and the smaller the corresponding Hadamard ratio, that is, the less orthogonal the matrix. It can be seen from Figure 2 that the Hadamard ratio of the original data decreases as the dimension increases, and the steep increase at 25 dimensions again occurs because the randomly simulated matrix itself is more orthogonal there (the Hadamard ratio of the original data is 0.4728 at 25 dimensions). In low dimensions, or when the original matrix is relatively orthogonal (such as at 25 dimensions in Figure 2), there is nearly no difference among the Hadamard ratios after processing by the three methods. However, as the dimension increases and the orthogonality of the original matrix decreases, the reduction effect of PHLLL improves significantly over the other two algorithms.

The numbers of basis vector exchanges performed by the three reduction algorithms during the reduction processing are shown in Figure 3.

It can be seen from Figure 3 that the HLLL reduction algorithm does not reduce the number of basis vector exchanges relative to the LLL reduction algorithm, while the PHLLL reduction algorithm significantly reduces the number of basis vector exchanges in the reduction process. Whether reducing the number of basis vector exchanges also shortens the reduction time requires a further comparison of the reduction times of the three algorithms, shown in Figure 4.

It can be seen from Figure 4 that, apart from special cases, the reduction time increases with the dimension of the covariance matrix. The reduction time of the HLLL algorithm is significantly higher than that of the LLL and PHLLL algorithms, which is consistent with the conclusion of Kai et al. [21]. The improved PHLLL algorithm has a much shorter reduction time than the HLLL algorithm and also holds a certain advantage over the LLL algorithm.

4.2. Measured Experiment

A group of short baseline data is formed from the two CORS stations at the new library of the Henan University of Technology and the National Key Laboratory Building, using dual-frequency, multi-system combined observations with a sampling interval of 1 s. The covariance matrix obtained according to equation (2) is as follows.

The three lattice basis reduction algorithms are used to reduce this covariance matrix, and the results are shown in Table 1.

From the analysis of Table 1, we can see that the HLLL algorithm improves the reduction performance compared with the LLL algorithm but sacrifices reduction time. The PHLLL algorithm, with its pivoted column sorting and column norm downdating, further improves the reduction effectiveness, reduces the number of basis vector exchanges, and obtains better reduction efficiency. This also proves that the number of basis vector exchanges is the main factor affecting the reduction time.

5. Conclusions

In this paper, the idea of symmetric rotation sorting from the improved LAMBDA algorithm and the idea of column norm downdating are introduced into the lattice reduction algorithm based on the Householder transform. The reduction performance of the LLL, HLLL, and PHLLL algorithms is compared and analyzed in terms of condition number, Hadamard ratio, number of basis vector exchanges, and reduction time, using both simulation experiments and measured experiments. The conclusions are as follows: (1) Scale reduction has little effect on the ambiguity reduction efficiency, but the number of basis vector exchanges directly affects the efficiency of the lattice basis reduction algorithm. (2) The PHLLL reduction algorithm proposed in this paper overcomes the low efficiency of the HLLL reduction algorithm while retaining its better reduction effect, and achieves a reduction speed superior to the LLL algorithm.

Because the matrix dimension has a direct impact on the lattice basis reduction algorithm, how to further improve the reduction efficiency in high-dimensional cases is a problem to be considered in follow-up work on the PHLLL algorithm.

Data Availability

The datasets analyzed in this study are managed by the School of Surveying and Land Information Engineering, Henan Polytechnic University, and are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

Authors’ Contributions

C.T. designed the experiments and wrote the main manuscript. K.L. and Y.J. reviewed the paper. Z.Y. edited the paper. All components of this research were carried out under the supervision of C.T. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This work is funded by the National Natural Science Foundation of China (Nos. 41774039 and 42204040), the State Key Lab Project of China (No. 6142210200104), and the Key Project of Science and Technology of Henan (No. 212102210085). The authors would like to thank C.K., a professor at The Ohio State University, for providing valuable advice.