Abstract

For robust adaptive beamforming (RAB), the variable loading (VL) technique can provide a better trade-off between robustness and adaptivity than diagonal loading (DL). Despite its importance, few research efforts have explored how to choose the VL loading factor so that robustness is ensured in various environments. Moreover, the performance of VL is restricted by the sample covariance matrix in snapshot-deficient situations. This paper proposes a modified VL method for robust adaptive beamforming that accounts for imprecise steering vectors and finite sample size impairments. First, a novel subsampling method is used to construct a calibrated covariance matrix, improving the robustness of VL in sample-starved scenarios. Then, a parameter-free rule for the VL factor is proposed to further enhance insensitivity to steering vector mismatches of the antenna array. Simulation results verify the effectiveness and robustness of the proposed method compared with the traditional VL and other widely used robust techniques.

1. Introduction

Adaptive beamforming has been widely applied in radar, sonar, wireless communications, and other fields in the past decades. The representative adaptive beamformer, e.g., the Capon beamformer (or MVDR beamformer) [1], can extract the desired signal and suppress interferences simultaneously with the theoretical interference-plus-noise covariance matrix and actual signal steering vector (SV). However, in practical array antenna systems, model errors such as array calibration error, antenna shape distortion, the direction of arrival (DOA) error, and local scattering may cause a mismatch between the assumed SV and the actual SV. Moreover, the limited data samples may lead to the perturbation of the covariance matrix. Therefore, robust adaptive beamforming (RAB) techniques against the SV errors and small sample size problems mentioned above are needed.

Over the last several decades, numerous RAB techniques have been proposed. The most popular algorithms can be divided into four categories: the diagonal loading algorithms [2–8], the eigenspace projection (EP) algorithms [9–11], the steering vector estimation (SVE) algorithms [12–17], and the interference covariance matrix reconstruction (INCM) algorithms [18–26]. The diagonal loading RAB algorithm achieves performance improvement over the sample matrix inversion (SMI) algorithm by loading a scaled identity matrix onto the sample covariance matrix. Nevertheless, finding an appropriate loading level in various situations is not easy. An extended method for diagonal loading SMI (LSMI) has recently been developed in [27], aiming to further improve robustness by using a tridiagonal loading matrix. The EP algorithms project the presumed signal-of-interest (SOI) SV onto the signal-plus-interference subspace of the sample covariance matrix (SCM) to eliminate the noise perturbation caused by SV mismatches or sample data problems. However, the signal subspace may be corrupted by the noise subspace at low SNR, causing the projection subspace to be destroyed. The framework of SVE algorithms boils down to using imperfect prior knowledge to estimate the desired signal steering vector in tandem with the standard SMI beamformer [28]. Their performance relies on the choice of user-defined parameters. The series of INCM algorithms aims to eliminate the SOI components in the SCM, which can provide excellent output performance via a newly reconstructed interference-plus-noise matrix at high SNR. Nevertheless, owing to the loss of some actual array information during the reconstruction, most INCM-based RAB algorithms struggle to remain robust under array calibration errors.

Among the above RAB algorithms, diagonal loading (DL) is widely studied and utilized due to its effectiveness and versatility. However, DL methods have one inherent defect: they suffer a loss in adaptivity when achieving greater robustness. Thus, an effective remedy is to change the loading matrix for compatibility. The so-called variable loading (VL) [29] utilizes the inverse of the covariance matrix for loading and has the potential to maintain adaptivity when loaded. Subsequently, a more general form of VL was presented in [30], though its performance improvement over [29] is not apparent. Both of the above VL methods are unsuitable for small-snapshot cases, and they simply set the VL factor to twice the DL factor; this is not a principled rule, and the resulting performance relies on the DL method chosen. Recently, the VL in [31] considers finite sample effects and substitutes the ideal noise power for the small eigenvalues. Its VL factor is set in an ad hoc manner based on the noise power, which only suits the low SNR environment.

Based on the framework of VL, we propose several modifications to maintain adaptive antijamming ability and improve robustness. The main contributions are as follows:
(i) An enhanced covariance matrix is obtained via a novel subsampling method. That is, the sample data are divided into two parts, and the noise eigenvalues are regularized using the two subsamples. A similar method has been studied in statistics [32, 33], but only for the estimation of random matrices. In contrast, because of the deterministic signal components, array observations are not purely random samples. Fortunately, the subsampling method can still be applied to the array model, since it approximately preserves the signal eigenvalues while reducing the spread of the noise eigenvalues. Consequently, this matrix preprocessing is suitable for the VL beamformer under low snapshots.
(ii) The VL factor is computed automatically in an eigenvalue-based way from the enhanced covariance matrix estimated by subsampling. This rule is effective despite its simplicity. An attractive feature of our approach is that it requires neither a user-dependent parameter nor prior knowledge such as the noise power.
(iii) Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.

The remainder of this paper is organized as follows. Section 2 presents the signal model and background. Section 3 contains the presentation and analysis of the proposed method. The simulation results are provided in Section 4, and the conclusion is drawn in Section 5.

2. Background

Consider an array comprising M sensors. The minimum variance distortionless response (MVDR) beamformer is the solution to the following quadratic problem:

\min_{w} \; w^H R_{i+n} w \quad \text{subject to} \quad w^H a = 1.  (1)

After excluding the insignificant scaling factor, the solution of (1) is given by

w_{MVDR} = R_{i+n}^{-1} a,  (2)

where R_{i+n} denotes the theoretical interference-plus-noise covariance matrix and a represents the actual SOI steering vector (SV). The minimum power distortionless response (MPDR) criterion is closely related to the MVDR; it substitutes the theoretical array covariance matrix R for R_{i+n} in (2),

w_{MPDR} = R^{-1} a.  (3)

Here, the theoretical array covariance matrix R can be expressed as

R = \sigma_n^2 I + \sum_{l=1}^{Q} \sigma_l^2 a_l a_l^H,  (4)

where \sigma_n^2 is the noise power and \sum_{l=1}^{Q} \sigma_l^2 a_l a_l^H is the theoretical signal covariance matrix consisting of the SOI and interference components. \sigma_l^2 and a_l correspond to the power and SV of the lth signal, with Q the total number of signals.

In practical applications, the above theoretical R_{i+n} and R are always unavailable, and they are generally replaced by the sample covariance matrix \hat{R}, estimated from the array snapshots as follows:

\hat{R} = \frac{1}{K} \sum_{k=1}^{K} x(k) x^H(k),  (5)

where x(k) denotes the kth array snapshot and K is the sample size. Obviously, when K \to \infty, \hat{R} \to R. In addition, the actual SOI SV a is usually replaced by the presumed one \bar{a}. Then, the sample matrix inversion (SMI) based Capon adaptive beamformer is formed as

w_{SMI} = \hat{R}^{-1} \bar{a},  (6)

which is also the solution to the following optimization problem (omitting the nonessential scaling factor):

\min_{w} \; w^H \hat{R} w \quad \text{subject to} \quad w^H \bar{a} = 1.  (7)
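As a concrete illustration, the SCM and SMI beamformer above can be sketched in a few lines of NumPy. This is a hedged sketch of the standard Capon/SMI construction; the helper names and the toy scenario are ours, not from the paper:

```python
import numpy as np

def steering_vector(theta_deg, M, d=0.5):
    # Ideal ULA steering vector; d is the element spacing in wavelengths.
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

def smi_beamformer(X, a_bar):
    # Sample covariance matrix from the K snapshot columns of X.
    M, K = X.shape
    R_hat = X @ X.conj().T / K
    # Capon/SMI weights, normalized so that w^H a_bar = 1.
    Ria = np.linalg.solve(R_hat, a_bar)
    return Ria / (a_bar.conj() @ Ria)

# Toy example: one strong source at broadside in white noise.
rng = np.random.default_rng(0)
M, K = 10, 200
a0 = steering_vector(0.0, M)
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
noise = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
X = 10.0 * np.outer(a0, s) + 0.1 * noise
w = smi_beamformer(X, a0)
# The distortionless constraint w^H a_bar = 1 holds by construction.
```

The normalization step makes the distortionless constraint hold exactly regardless of the estimation error in the SCM.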

The above SMI beamformer suffers from performance degradation when contaminated by small snapshots or steering vector mismatch. To keep robustness, a popular technique with diagonal loading is presented, which adds a regularized constraint to (7), given by

\min_{w} \; w^H \hat{R} w + \xi \|w\|^2 \quad \text{subject to} \quad w^H \bar{a} = 1,  (8)

and its solution is w_{LSMI} = (\hat{R} + \xi I)^{-1} \bar{a}, where \xi is the loading level. This is the well-known diagonal loading SMI (LSMI) beamformer. However, the adaptive performance of RAB will be affected after diagonal loading. To maintain adaptivity along with the improvement of robustness, a more general quadratic term w^H \hat{R}^{-1} w is proposed in [29] to replace the norm term \|w\|^2 in (8). Hence, the optimization problem of the variable loading beamformer can be expressed as

\min_{w} \; w^H \hat{R} w + \gamma \, w^H \hat{R}^{-1} w \quad \text{subject to} \quad w^H \bar{a} = 1.  (9)
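The LSMI solution is easy to sketch, and the snippet below (our illustration, not the paper's code) also makes the robustness/adaptivity trade-off visible: as the loading level grows, the loaded matrix approaches a scaled identity, so the weights collapse toward the non-adaptive quiescent vector \bar{a}/\|\bar{a}\|^2:

```python
import numpy as np

def lsmi_weights(R_hat, a_bar, xi):
    # Diagonal-loading SMI: w proportional to (R_hat + xi*I)^{-1} a_bar,
    # normalized so that w^H a_bar = 1.
    M = R_hat.shape[0]
    w = np.linalg.solve(R_hat + xi * np.eye(M), a_bar)
    return w / (a_bar.conj() @ w)

rng = np.random.default_rng(1)
M = 8
A = rng.standard_normal((M, 3)) + 1j * rng.standard_normal((M, 3))
R_hat = A @ A.conj().T + np.eye(M)      # an arbitrary positive-definite SCM
a_bar = np.ones(M, dtype=complex)
# Heavy loading: (R_hat + xi*I) is nearly xi*I, so the adaptive structure
# of R_hat is washed out and w approaches a_bar / ||a_bar||^2 = a_bar / M.
w_heavy = lsmi_weights(R_hat, a_bar, xi=1e6)
```

This is exactly the loss in adaptivity the text describes: robustness is gained by moving the beamformer toward its data-independent limit.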

The weight vector can be calculated as

w_{VL} = (\hat{R} + \gamma \hat{R}^{-1})^{-1} \bar{a},  (10)

where \gamma is the VL factor. Suppose the eigenvalue decomposition (EVD) of the SCM is

\hat{R} = U \Lambda U^H = \sum_{m=1}^{M} \lambda_m u_m u_m^H,  (11)

where the unitary matrix U = [u_1, \ldots, u_M] contains the M orthonormal eigenvectors of \hat{R}, and the diagonal elements of \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_M), \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_M, are the corresponding eigenvalues. Thus, the VL beamformer can be rewritten as

w_{VL} = \sum_{m=1}^{P} \frac{\lambda_m}{\lambda_m^2 + \gamma} u_m u_m^H \bar{a} + \sum_{m=P+1}^{M} \frac{\lambda_m}{\lambda_m^2 + \gamma} u_m u_m^H \bar{a},  (12)

where u_1, \ldots, u_P are the dominant eigenvectors and u_{P+1}, \ldots, u_M are the subdominant eigenvectors. In this way, the loading level is set to \gamma / \lambda_m instead of the fixed \xi and is variable. Accordingly, the large eigenvalues are loaded slightly, resulting in less loss in adaptivity of antijamming. In contrast, the small eigenvalues (i.e., the noise components) receive extensive loading, yielding better robustness against noise perturbation. Additionally, the loading level can be controlled by the VL factor \gamma to improve the performance further.
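The eigen-domain VL form above can be checked numerically. In this hedged sketch (variable names are ours), the effective loading gamma/lam is small for large eigenvalues and large for small ones, and the eigen-domain expression matches the direct loaded inverse:

```python
import numpy as np

def vl_weights(R_hat, a_bar, gamma):
    # Variable loading via the EVD:
    # (R_hat + gamma*R_hat^{-1})^{-1} = U diag(lam/(lam^2+gamma)) U^H.
    lam, U = np.linalg.eigh(R_hat)
    w = U @ ((lam / (lam**2 + gamma)) * (U.conj().T @ a_bar))
    return w / (a_bar.conj() @ w)

gamma = 4.0
# Effective loading gamma/lam for illustrative eigenvalues:
lam_demo = np.array([100.0, 1.0, 0.01])
loading = gamma / lam_demo              # [0.04, 4.0, 400.0]

# Check the eigen-domain form against the direct loaded inverse.
rng = np.random.default_rng(2)
M = 6
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_hat = A @ A.conj().T + np.eye(M)
a_bar = np.ones(M, dtype=complex)
w_evd = vl_weights(R_hat, a_bar, gamma)
w_dir = np.linalg.solve(R_hat + gamma * np.linalg.inv(R_hat), a_bar)
w_dir = w_dir / (a_bar.conj() @ w_dir)
```

The identity 1/(lam + gamma/lam) = lam/(lam^2 + gamma) is what allows the loaded inverse to be applied eigenvalue by eigenvalue.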

However, for cases with low snapshots, the SCM would be ill-conditioned. The noise eigenvalues may then vary over a wide range, and very small eigenvalues may appear [3]. In particular, when the number of training data is less than the number of antenna elements, the SCM will be rank-deficient and zero eigenvalues will appear. This perturbation limits the performance of VL. To prevent this, in the work of [31], the minimal eigenvalues in (12) are replaced with the ideal noise power \sigma_n^2, and the VL factor is set as \gamma = \sigma_n^4. But this needs an estimate of the noise power as a priori information. In addition, since the VL factor is fixed only by the noise power, when the signal-to-noise ratio (SNR) is high enough, the loading level will be too small to overcome the problem of SOI self-cancellation.

3. The Proposed Method

To better coordinate robustness and adaptivity and improve the performance of the traditional VL methods, two major modifications are proposed in this paper. First, an enhanced covariance matrix is constructed to replace the original SCM, intending to reduce the noise perturbation in cases with limited snapshots. An analysis of the estimation method is given. Second, based on the enhanced covariance matrix, a parameter-free scheme for the VL factor is proposed, aiming to keep robustness against imprecise steering vectors under various SNRs.

3.1. Estimating the Covariance Matrix

The array snapshots X = [x(1), \ldots, x(K)] can be split into two nonoverlapping subsamples in the time domain, that is, X = [X_1, X_2], where X_1 = [x(1), \ldots, x(K_1)] and X_2 = [x(K_1 + 1), \ldots, x(K)]. Usually, the noise components of X_1 and X_2 are considered independent and identically distributed, and the following process utilizes this specific property.

The covariance matrix of each subsample can be calculated by \hat{R}_i = (1/K_i) X_i X_i^H, where i = 1, 2 and K_2 = K - K_1. Perform the EVD of the first subsample covariance matrix as

\hat{R}_1 = U_1 \Lambda_1 U_1^H,  (13)

where the unitary matrix U_1 = [u_{1,1}, \ldots, u_{1,M}] contains the orthonormal eigenvectors of \hat{R}_1, and the diagonal elements of \Lambda_1 are the corresponding eigenvalues. Further, using the eigenvector matrix U_1 of \hat{R}_1 and the second subsample covariance matrix \hat{R}_2 yields a new diagonal matrix

\tilde{\Lambda} = \mathrm{Diag}(U_1^H \hat{R}_2 U_1) = \mathrm{diag}(\tilde{\lambda}_1, \ldots, \tilde{\lambda}_M),  (14)

where \mathrm{Diag}(\cdot) retains only the diagonal elements of its matrix argument.

Then, substituting \tilde{\Lambda} for the SCM eigenvalue matrix \Lambda in (11), an enhanced covariance matrix can be constructed as follows:

\tilde{R} = U \tilde{\Lambda} U^H.  (15)
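The subsampling construction above admits a compact sketch (our own illustration, with names we chose). For a pure-noise input, the eigenvalue-spread reduction can be seen directly:

```python
import numpy as np

def enhanced_covariance(X, K1):
    # Split the snapshots into two nonoverlapping subsamples in time.
    X1, X2 = X[:, :K1], X[:, K1:]
    R1 = X1 @ X1.conj().T / X1.shape[1]
    R2 = X2 @ X2.conj().T / X2.shape[1]
    R_hat = X @ X.conj().T / X.shape[1]
    # Eigenvectors of the first subsample evaluated on the second one:
    # lam_tilde_m = u_{1,m}^H R2 u_{1,m}  (cross-validated eigenvalues).
    _, U1 = np.linalg.eigh(R1)
    lam_tilde = np.real(np.einsum('im,ij,jm->m', U1.conj(), R2, U1))
    # Keep the SCM eigenvectors, substitute the regularized eigenvalues.
    _, U = np.linalg.eigh(R_hat)
    R_tilde = U @ np.diag(lam_tilde) @ U.conj().T
    return R_tilde, lam_tilde

# Pure-noise check: the spread of the regularized eigenvalues is much
# smaller than that of the raw SCM eigenvalues.
rng = np.random.default_rng(3)
M, K = 20, 60
noise = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
R_tilde, lam_tilde = enhanced_covariance(noise, K1=K // 2)
lam_scm = np.linalg.eigvalsh(noise @ noise.conj().T / K)
```

Because the eigenvectors of the first subsample are statistically independent of the second subsample, each quadratic form concentrates near the true noise power rather than inheriting the wide Marchenko-Pastur-type spread of the raw SCM eigenvalues.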

3.2. Analyzing the Estimation of the Covariance Matrix

Here, we discuss the properties of the above enhanced covariance matrix. Obviously, compared to the original SCM \hat{R}, the matrix \tilde{R} reconstructs the eigenvalues while keeping the eigenvectors unchanged. Hence, we concentrate on the diagonal elements of the reconstructed eigenvalue matrix \tilde{\Lambda}, which are

\tilde{\lambda}_m = u_{1,m}^H \hat{R}_2 u_{1,m}, \quad m = 1, \ldots, M.  (16)

Performing the EVD of \hat{R}_2, (16) can be further written as

\tilde{\lambda}_m = \sum_{i=1}^{P} \lambda_{2,i} |u_{1,m}^H u_{2,i}|^2 + \sum_{j=P+1}^{M} \lambda_{2,j} |u_{1,m}^H u_{2,j}|^2,  (17)

where \sum_{i=1}^{P} \lambda_{2,i} u_{2,i} u_{2,i}^H is the signal subspace correlation matrix of \hat{R}_2 with the eigenpairs (\lambda_{2,i}, u_{2,i}) indexed by i = 1, \ldots, P, and \sum_{j=P+1}^{M} \lambda_{2,j} u_{2,j} u_{2,j}^H is the noise subspace correlation matrix with the eigenpairs (\lambda_{2,j}, u_{2,j}) indexed by j = P + 1, \ldots, M. Here, P denotes the dimension of the signal subspace. When the signal subspace swap occurs at low SNRs, we assume the dimension of the signal subspace is the number of interferences.

Let us divide the estimated eigenvalues into two parts. The first part is indexed by m = 1, \ldots, P, i.e., the dominant estimated eigenvalues; the second part is indexed by m = P + 1, \ldots, M, i.e., the subdominant estimated eigenvalues. From the derivation given in the Appendix, we can obtain

\tilde{\lambda}_m \approx \sum_{i=1}^{P} \lambda_{2,i} |u_{1,m}^H u_{2,i}|^2, \; m = 1, \ldots, P; \qquad \tilde{\lambda}_m \approx \sum_{j=P+1}^{M} \lambda_{2,j} |u_{1,m}^H u_{2,j}|^2, \; m = P + 1, \ldots, M.  (18)

In the above equation, the dominant estimated eigenvalues can be further approximated as

\tilde{\lambda}_i \approx \lambda_{2,i} |u_{1,i}^H u_{2,i}|^2 \approx \lambda_{2,i}, \quad i = 1, \ldots, P.  (19)

This is because the two estimated eigenvectors u_{1,i} and u_{2,i} (i = 1, \ldots, P) are both highly correlated with the steering vector of the ith signal. Thus, the dominant estimated eigenvalues are close to the dominant eigenvalues of the second subsample covariance matrix \hat{R}_2, so they are still an approximation of the true signal eigenvalues.

For the subdominant estimated eigenvalues corresponding to noise, since u_{1,m} and \hat{R}_2 are independent, the operation in (16) can be treated as the single-eigenvalue estimation for a random matrix in [32, 33], which is a process of regularization that upgrades the sample covariance matrix of a random sample to a well-conditioned one. Hence, the spread of the subdominant estimated eigenvalues is reduced. It is worth noting that the signal subspace swap occurs with high probability at low SNRs with finite snapshots. In this case, the noise subspace may contain the SOI component. Fortunately, since the SOI is weak, the regularization process is scarcely influenced, as will be shown in Simulation 1.

Another important issue is the choice of the subsample number K_1. It should satisfy two conditions: (a) reducing the noise spread as much as possible and (b) keeping the estimated eigenvalues lightly biased with respect to the true ones. The experimental data in [32] indicate that over a fairly wide range of subsampling ratios K_1/K, this subsampling estimator achieves stable performance in improving the condition number of a random sample covariance matrix. This strategy can be adopted to meet condition (a). Next, to satisfy condition (b), we must prevent distortion of \hat{R}_1 and \hat{R}_2. As a result, both the subsample size K_1 and the subsample size K_2 = K - K_1 must be greater than the number of signal sources Q. Based on the above remarks, K_1 can be roughly chosen in the range Q < K_1 < K - Q. Further discussion about the choice of the subsample dimension is given in Simulation 1.

Overall, if K_1 is chosen appropriately, the diagonal elements in (16) should be a good approximation to the eigenvalues of the ideal covariance matrix. In this way, an enhanced covariance matrix with less noise spread is constructed for variable loading.

3.3. Determining the VL Factor

The construction method for the enhanced covariance matrix in (15) can be rewritten as

\tilde{R} = \sum_{m=1}^{M} \tilde{\lambda}_m u_m u_m^H.  (20)

Equation (20) reveals that (\tilde{\lambda}_m, u_m), indexed by m = 1, \ldots, M, can be regarded as the eigenpairs of \tilde{R}. The presumed steering vector of the SOI can be projected onto each eigenvector of \tilde{R} to find the eigenvector with the largest projection value, and its index k can be given by

k = \arg\max_{m} |u_m^H \bar{a}|.  (21)

That is to say, the eigenvalue \tilde{\lambda}_k, which is most associated with the SOI power, can be found. Then we set the variable loading factor as the square of \tilde{\lambda}_k as follows:

\gamma = \tilde{\lambda}_k^2.  (22)

When we choose the loading factor as in (22), the loaded eigenvalue in (12) becomes \tilde{\lambda}_m + \tilde{\lambda}_k^2 / \tilde{\lambda}_m. At high SNR, the SOI variance is significantly larger than the noise variance, leading to \tilde{\lambda}_k \gg \sigma_n^2. So the aforementioned loaded eigenvalues for noise would be large enough as compared to those for signals. Accordingly, the contribution of the noise eigenvectors to the beamformer weight vector in (12) is trivial, and the impact of noise errors can be reduced. At low SNR, the SOI variance is close to the noise variance, leading to \tilde{\lambda}_k \approx \sigma_n^2. Therefore, the loading level is approximately \sigma_n^2 for the subdominant eigenvalues and \sigma_n^4 / \tilde{\lambda}_m (a much smaller value) for the dominant eigenvalues. In this case, the adaptive and robust performance of the proposed method will be similar to the traditional VL in [31].
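The selection rule above is a two-liner. In this toy check (our own construction, not the paper's code), the SOI-aligned eigenvector is correctly identified and the loading factor equals the square of its eigenvalue:

```python
import numpy as np

def vl_factor(U, lam, a_bar):
    # Pick the eigenvector most aligned with the presumed SOI steering
    # vector, then set gamma to the square of the associated eigenvalue.
    k = int(np.argmax(np.abs(U.conj().T @ a_bar)))
    return lam[k] ** 2, k

M = 16
n = np.arange(M)
a = np.exp(1j * np.pi * n * np.sin(np.deg2rad(10.0)))  # SOI at 10 degrees
R = 50.0 * np.outer(a, a.conj()) + np.eye(M)           # one source + unit noise
lam, U = np.linalg.eigh(R)                             # ascending order
gamma, k = vl_factor(U, lam, a)
# The SOI occupies the largest eigenvalue (last index for eigh), which
# here equals 50*M + 1 = 801, so gamma = 801^2.
```

Because only the magnitude of the projection is compared, the arbitrary phase of each numerically computed eigenvector does not affect the selection.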

With the calibrated covariance matrix \tilde{R} and the loading factor \gamma, the standard form of the corresponding beamformer can be expressed as

w = (\tilde{R} + \gamma \tilde{R}^{-1})^{-1} \bar{a}.  (23)

The direct computation of (23) may be costly. Note that the eigen-decomposition of the SCM has been performed in advance, and \tilde{R} shares the same eigenvectors with \hat{R}. Hence, we can draw support from the form in (12) to avoid extra matrix inversion operations. The weight vector in (23) can ultimately be calculated as

w = \sum_{m=1}^{M} \frac{\tilde{\lambda}_m}{\tilde{\lambda}_m^2 + \gamma} u_m u_m^H \bar{a}.  (24)

To summarize, the procedure for implementing the proposed modified VL-based RAB method consists of the following steps:
Step 1: Split the array snapshots into two nonoverlapping subsamples X_1 and X_2, and construct the SCM \hat{R} and the two subsample covariance matrices \hat{R}_1, \hat{R}_2.
Step 2: Eigen-decompose the SCM \hat{R} and the first subsample covariance matrix \hat{R}_1.
Step 3: Estimate the eigenvalues of the calibrated covariance matrix via (16).
Step 4: Calculate the loading factor \gamma in an eigenvalue-based way via (21) and (22).
Step 5: Compute the proposed beamformer using (24).
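Putting the five steps together, an end-to-end sketch looks as follows. This is our own NumPy rendering of the procedure under stated assumptions (K_1 = K/2, eigh ordering for the eigenpair matching), not the authors' code:

```python
import numpy as np

def modified_vl_beamformer(X, a_bar):
    M, K = X.shape
    K1 = K // 2
    # Step 1: SCM and the two nonoverlapping subsample covariance matrices.
    X1, X2 = X[:, :K1], X[:, K1:]
    R_hat = X @ X.conj().T / K
    R1 = X1 @ X1.conj().T / K1
    R2 = X2 @ X2.conj().T / (K - K1)
    # Step 2: eigen-decompose the SCM and the first subsample matrix.
    _, U = np.linalg.eigh(R_hat)
    _, U1 = np.linalg.eigh(R1)
    # Step 3: regularized eigenvalues lam_tilde_m = u_{1,m}^H R2 u_{1,m}.
    lam_tilde = np.real(np.einsum('im,ij,jm->m', U1.conj(), R2, U1))
    # Step 4: eigenvalue-based loading factor from the SOI-aligned eigenpair.
    k = int(np.argmax(np.abs(U.conj().T @ a_bar)))
    gamma = lam_tilde[k] ** 2
    # Step 5: variable-loading weights in the eigen-domain,
    # normalized so that w^H a_bar = 1.
    g = lam_tilde / (lam_tilde ** 2 + gamma)
    w = U @ (g * (U.conj().T @ a_bar))
    return w / (a_bar.conj() @ w)

# Toy scenario: SOI at broadside plus one strong interferer in white noise.
rng = np.random.default_rng(4)
M, K = 16, 64
sv = lambda t: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(t)))
a_soi, a_jam = sv(0.0), sv(40.0)
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
q = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
noise = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
X = np.outer(a_soi, s) + 30.0 * np.outer(a_jam, q) + noise
w = modified_vl_beamformer(X, a_soi)
# w^H a_soi = 1 (distortionless), while |w^H a_jam| is strongly attenuated.
```

In this scenario the strong interferer keeps its large eigenvalue, so it receives only light loading and stays nulled, while the noise eigenvalues are heavily loaded through gamma.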

In our approach, the dominant computational demand comes from Step 2, i.e., the eigen-decomposition of the matrices \hat{R} and \hat{R}_1, which requires O(M^3) operations. The conventional diagonal loading in [3] and the automatic tridiagonal loading in [27] require only a loaded matrix inversion, without the additional eigen-decompositions. Therefore, the complexity of our method is higher than that of the other two loading-based methods, but it can obtain much better performance.

4. Simulation Results

Assume a uniform linear array of M = 30 omnidirectional elements with half-wavelength spacing. The noise is spatially white Gaussian with unit variance. The desired signal impinges from a fixed nominal direction, and three interferences are incident on the array from three other fixed directions. The interference-to-noise ratios (INRs) of the interferences are equal to INR1 = 30 dB, INR2 = 40 dB, and INR3 = 50 dB, respectively. All the sources mentioned above are assumed to be narrowband and uncorrelated with each other. The model uncertainties are caused by DOA mismatch, sensor location perturbations, and magnitude and phase errors of the array elements. The mth element of the actual SV is modelled as

[a(\theta)]_m = (1 + \Delta g_m) \exp\{ j [ 2\pi (d_m + \Delta d_m) \sin\theta + \Delta\phi_m ] \},  (25)

where d_m = 0.5(m - 1) is the nominal sensor position in wavelengths, and \Delta g_m, \Delta\phi_m, and \Delta d_m denote the gain, phase, and position errors of the mth sensor, respectively.

The DOA mismatch is assumed to be random and uniformly distributed in [-2°, 2°], unless stated otherwise. The sensor position perturbations are uniformly drawn from [-0.02, 0.02], measured in wavelengths. The sensor gain errors are modelled as independent and identically distributed zero-mean Gaussian random variables with standard deviation 0.05. The sensor phase errors are assumed to be independent and uniformly distributed in [-5°, 5°]. The sample number of the first subsample is set as K_1 = \lfloor K/2 \rfloor, except where it is explicitly discussed in Simulation 1. 200 Monte-Carlo runs are performed for each simulation case.
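The error budget above can be reproduced with a short generator. The helper below is a hypothetical sketch of ours that draws one realization of the actual SV following the stated distributions:

```python
import numpy as np

def perturbed_steering_vector(theta_deg, M, rng):
    # Nominal half-wavelength ULA plus the error budget described above:
    # position errors U(-0.02, 0.02) wavelengths, gain errors N(0, 0.05^2),
    # phase errors U(-5, 5) degrees, DOA mismatch U(-2, 2) degrees.
    d = 0.5 * np.arange(M) + rng.uniform(-0.02, 0.02, M)   # wavelengths
    gain = 1.0 + 0.05 * rng.standard_normal(M)
    phase = np.deg2rad(rng.uniform(-5.0, 5.0, M))
    theta = np.deg2rad(theta_deg + rng.uniform(-2.0, 2.0))
    return gain * np.exp(1j * (2.0 * np.pi * d * np.sin(theta) + phase))

rng = np.random.default_rng(5)
a_err = perturbed_steering_vector(5.0, 30, rng)   # one draw of the actual SV
```

Each Monte-Carlo run would draw a fresh realization so that all four error sources vary jointly.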

4.1. Simulation 1: The Performance of the Enhanced Covariance Matrix with the Subsampling Method

In this subsection, we examine the performance of the enhanced covariance matrix. The following two SNR cases are considered:
(i) SNR = 20 dB, i.e., the high SNR case without signal subspace swaps. The number of dominant eigenvalues is set as P = 4 (the SOI plus the three interferences);
(ii) SNR = -20 dB, i.e., the low SNR case where signal subspace swaps occur with high probability. The number of dominant eigenvalues is set as P = 3 (the three interferences).

We discuss the choice of the subsampling dimension and test the estimation performance in comparison to the sample covariance matrix (SCM). To evaluate the regularization performance for the noise eigenvalues, we apply the noise eigenvalue spread (i.e., the ratio of the largest noise eigenvalue to the smallest, in dB). To test the deviation between the estimated and theoretical signal eigenvalues, we take the root mean square error (RMSE), which can be expressed as

\mathrm{RMSE} = \sqrt{ \frac{1}{P} \sum_{i=1}^{P} [ \mathrm{dB}(\hat{\lambda}_i) - \mathrm{dB}(\lambda_i) ]^2 },  (26)

where \hat{\lambda}_i denotes the ith signal eigenvalue estimated by our subsampling method or the SCM, \lambda_i is the ith signal eigenvalue of the theoretical array covariance matrix R, and \mathrm{dB}(\cdot) stands for the decibel operation. The decibel operation is chosen in order to balance the estimation errors of signal eigenvalues of different magnitudes (otherwise, the result would be dominated by the estimation error of the strongest signal eigenvalue).
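Both metrics are simple to compute; for instance, the eigenvalue spread in decibels can be sketched as follows (the helper name is ours):

```python
import numpy as np

def eig_spread_db(eigs):
    # Ratio of the largest to the smallest eigenvalue, in decibels.
    eigs = np.asarray(eigs, dtype=float)
    return 10.0 * np.log10(eigs.max() / eigs.min())

# A ratio of 4.0 / 0.4 = 10 corresponds to a spread of 10 dB.
spread = eig_spread_db([4.0, 2.0, 0.4])
```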

Figures 1 and 2 depict the estimation performance of different choices of the subsampling ratio versus the number of snapshots in the two SNR cases. It can be seen from these figures that the enhanced covariance matrix reduces the noise eigenvalue spread substantially compared with the SCM. Even if the number of snapshots is rich, like K = 200, there is still at least a 3 dB improvement. For the signal eigenvalue RMSE, provided the subsample dimension is higher than the signal number (excluding six outlier points, where the subsampling ratio takes 0.2, 0.3, 0.7, or 0.8 at certain snapshot numbers), our estimation of the signal eigenvalues is close to that of the SCM, with about a 1.2 dB gap in the worst case.

Here, we discuss the optimal choice of the subsample number. From Figures 1(a) and 2(a), we find that a subsampling ratio near the middle of the range obtains the best regularization performance for the noise eigenvalues. From Figures 1(b) and 2(b), we observe that when the total number of samples is larger than 100, several choices of K_1 are excellent, but they suffer at low snapshots, especially in the small-sample case. By contrast, the choice K_1 = K/2 ensures good performance for the signal eigenvalue estimation over the whole range. Overall, the optimal choice of the subsampling dimension is K_1 = K/2, and all the following simulations take this choice (considering that K may be an odd number, we take K_1 = \lfloor K/2 \rfloor).

4.2. Simulation 2: Output SINR of Beamformer with Steering Vector Mismatch and Sufficient Snapshots

In this subsection, we examine the performance of the proposed beamformer with SV mismatch under sufficient-snapshot conditions (i.e., the error is dominated by the SV mismatch alone), with the number of snapshots set sufficiently large. We compare the output SINR of the proposed beamformer with that of the LSMI beamformer [3], the WNC (weight norm constraint) [8], the USC (uncertainty set constraint) [12], the INCM [21], the ATL (automatic tridiagonal loading) [27], and the traditional VL beamformer [31]. For the LSMI beamformer [3], a fixed loading level is used. For the USC, the uncertainty-set parameter is tuned to the value we found most appropriate in this simulation scenario. For the INCM, the angular sectors of the SOI and the three interferences are taken as intervals centered at their presumed directions.

In Figure 3, the mean output SINR is illustrated versus the input SNR. Figure 4 plots the output SINR versus a fixed DOA error in the presence of random sensor position perturbations; the SNR is fixed at 10 dB. It can be seen from these figures that our method performs best among the compared beamformers. In Figure 3, the traditional VL suffers from SINR degradation at high SNR due to SOI self-cancellation. The INCM can keep the main lobe undistorted over a wide range of SNRs, but it is highly sensitive to array calibration errors (such as sensor position perturbations and sensor gain and phase errors), causing a decline in anti-interference capability. From Figure 4, we observe that the performance of the USC and WNC is very close to our method when the actual DOA is within the 3 dB main-lobe scope. For larger DOA mismatches within the simulated range, the proposed beamformer is obviously more robust than the other beamformers. A drawback of our algorithm is that it is sensitive to extremely large pointing errors, where the presumed direction is biased far outside the main lobe. This is because the eigenvalue selected in Subsection 3.3 may then be a noise eigenvalue, since the presumed SOI steering vector deviates too much from the true one. Nevertheless, this scarcely happens in practical scenarios.

4.3. Simulation 3: Output SINR of Beamformer with Steering Vector Mismatch and Finite Snapshots

In this subsection, we examine the performance of the proposed beamformer with SV mismatch under finite-snapshot conditions (i.e., the error is a mixture of SV mismatch and finite sample impairments). We replace the INCM [21] with the well-known parameter-free GLC (general linear combination) beamformer [5], because the latter performs better at low snapshots. For the USC [12], when the sample covariance matrix is rank-deficient, diagonal loading with the above fixed loading level is applied first.

Figure 5 shows the output SINR versus the number of snapshots at SNR = 0 dB, and Figure 6 plots the output SINR versus the input SNR with 30 snapshots (K = 30). From Figure 5, we observe that the proposed method outperforms the other robust adaptive beamforming methods when the number of snapshots is below 100. In Figure 6, when the SNR is less than -5 dB, the proposed beamformer achieves performance similar to the traditional VL and the LSMI, but unlike these two ad hoc approaches, the proposed automatic VL does not require the noise power as a priori knowledge. The two other automatic loading methods (the ATL and the GLC) perform well at high SNR, but their loading levels are sensitive to the interference power, causing performance degradation at low SNR in the presence of strong interference. Overall, our proposed beamformer suffers some performance reduction in this sample-starved environment compared with Simulation 2, but it still enjoys the strongest robustness among the methods tested.

5. Conclusion

A modified VL beamformer is proposed for robust adaptive beamforming. A novel subsampling method is utilized to estimate the calibrated covariance matrix, exploiting the independent and identically distributed property of the noise in the time domain. Then, the VL level is computed in an automatic way. The proposed method improves the robustness of the adaptive beamformer contaminated by finite snapshots or SV mismatches at different SNRs. Simulations verify the effectiveness of the covariance matrix estimation. As expected, compared with other loading-based beamformers and several well-known robust beamformers, the proposed beamformer achieves superior performance.

Appendix

Derivation of (18)

Suppose U_{1s} and U_{1n} are the signal subspace and noise subspace of the first subsample covariance matrix \hat{R}_1, and U_{2s} and U_{2n} are the signal subspace and noise subspace of the second subsample covariance matrix \hat{R}_2. As shown in Figure 7, we provide a three-dimensional geometric interpretation considering a one-signal model. The two sample-estimated signal subspaces U_{1s} and U_{2s} are both estimates of the ideal signal subspace. For insufficient snapshots, they are biased from the true one with high probability. Even so, the distance between U_{1s} and U_{2s} is short. Applying the orthogonality between U_{1s} and U_{1n} and the orthogonality between U_{2s} and U_{2n}, we can derive that U_{1s} is almost orthogonal to U_{2n}, and U_{1n} is almost orthogonal to U_{2s}. Hence, we have

U_{1s}^H U_{2n} \approx 0, \quad U_{1n}^H U_{2s} \approx 0,  (A.1)

so the cross terms in (17) are negligible, which yields (18).

Data Availability

The data that are used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62001227.