Abstract

In this paper, by inserting the logarithm cost function of the normalized subband adaptive filter algorithm with the step-size scaler (SSS-NSAF) into a sigmoid function structure, the proposed sigmoid-function-based SSS-NSAF (S-SSS-NSAF) algorithm yields improved robustness against impulsive interferences and a lower steady-state error. To identify sparse impulse responses further, a series of sparsity-aware algorithms, including the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF), the sigmoid step-size scaler improved proportionate NSAF (S-SSS-IPNSAF), and the sigmoid L0 norm constraint step-size scaler improved proportionate NSAF (SL0-SSS-IPNSAF), is derived by using the logarithm cost function embedded in the sigmoid function structure, together with the L0 norm of the weight coefficient vector, as a new cost function. Because the proposed SL0-SSS-IPNSAF algorithm uses a fixed step size, it must trade off fast convergence against low steady-state error. Thus, the convex combination version of the SL0-SSS-IPNSAF (CSL0-SSS-IPNSAF) algorithm is proposed. Simulations in the acoustic echo cancellation (AEC) scenario confirm the improved performance of the proposed algorithms in impulsive interference environments and even in impulsive-interference-free conditions.

1. Introduction

Adaptive filtering is well known for its numerous practical applications, such as system identification, acoustic echo cancellation, channel equalization, and signal denoising [15]. Owing to their easy implementation and low computational complexity, the least mean square (LMS) algorithm and the normalized least mean square (NLMS) algorithm are widely used. However, the main disadvantage of these two algorithms is their slow convergence when the input signal is colored. To address this issue, the subband adaptive filter (SAF) structure has been presented, in which the colored input signal is decomposed into multiple, approximately mutually independent white subband signals by an analysis filter bank [6]. Based on this structure, and by solving a multiple-constraint optimization problem, the normalized SAF (NSAF) algorithm was developed, which speeds up the convergence rate relative to the NLMS algorithm [7].

When identifying a sparse system, the traditional NSAF algorithm assigns the same step size to all components of the weight coefficient vector regardless of the sparsity of the system, so its convergence rate is dramatically degraded [8, 9]. To improve the convergence behavior of the NSAF algorithm for sparse systems, a family of proportionate NSAF algorithms [10, 11], such as the proportionate NSAF (PNSAF), the μ-law proportionate NSAF (MPNSAF), and the improved proportionate NSAF (IPNSAF), has been proposed, wherein each tap of the filter is updated independently with a step size proportional to the magnitude of the corresponding estimated filter coefficient.

However, all of the above-mentioned algorithms, including the NLMS algorithm, the NSAF algorithm, and its improved proportionate versions, are poorly robust against impulsive interferences. The classical sign subband adaptive filter (SSAF) algorithm, derived from an L1-norm optimization criterion, uses only the sign information of the subband error signal and therefore suppresses impulsive interference very effectively [12]; its weaknesses are a relatively high steady-state error and a slow convergence rate [13]. To decrease the steady-state error and speed up the convergence rate of the SSAF algorithm, the variable regularization parameter SSAF (VRP-SSAF) [12], several variable step-size SSAF algorithms [14, 15], and affine projection SSAF algorithms [16, 17] have been proposed. Many researchers have since demonstrated that exploiting the saturation property of error nonlinearities yields excellent robustness against impulsive interferences; examples include the normalized logarithmic SAF (NLSAF) [18], arctangent-based NSAF algorithms (Arc-NSAFs) [19], the maximum correntropy criterion (MCC) [20], adaptive algorithms based on the step-size scaler (SSS) [21, 22] and on the sigmoid function [23, 24], and the M-estimate-based subband adaptive filter algorithm [25].

In this paper, by inserting the logarithm cost function of the normalized subband adaptive filter algorithm with the step-size scaler (SSS-NSAF) [22] into a sigmoid function structure, the proposed sigmoid-function-based SSS-NSAF (S-SSS-NSAF) algorithm yields improved robustness against impulsive interferences and a lower steady-state error. To identify sparse impulse responses further, a series of sparsity-aware algorithms, including the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF), the sigmoid step-size scaler improved proportionate NSAF (S-SSS-IPNSAF), and the sigmoid L0 norm constraint improved proportionate NSAF (SL0-SSS-IPNSAF), are derived by using the logarithm cost function embedded in the sigmoid function structure, together with the L0 norm of the weight coefficient vector, as a new cost function. Because the proposed SL0-SSS-IPNSAF algorithm uses a fixed step size, it must trade off fast convergence against low steady-state error. Thus, its convex combination version, the CSL0-SSS-IPNSAF algorithm, is proposed. Simulations in the AEC scenario with impulsive interference confirm the improved performance of the proposed algorithms.

2. Review of the SSS-NSAF Algorithms

Suppose $\mathbf{w}^{o} = [w_{1}^{o}, w_{2}^{o}, \ldots, w_{L}^{o}]^{T}$ is the weight coefficient vector of the unknown system in the system identification model, $\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-L+1)]^{T}$ stands for the input signal vector, and L denotes the filter length, where T represents vector or matrix transposition. The desired output signal is usually modeled as $d(n) = \mathbf{u}^{T}(n)\mathbf{w}^{o} + v(n)$, where $v(n)$ is additive noise which contains Gaussian measurement noise $\eta(n)$ plus impulsive interferences $\varsigma(n)$, i.e., $v(n) = \eta(n) + \varsigma(n)$. Figure 1 shows the multiband structure of the NSAF algorithm. The input signal $u(n)$ and desired output signal $d(n)$ are, respectively, separated into N subband signals $u_{i}(n)$ and $d_{i}(n)$, $i = 0, 1, \ldots, N-1$, by the analysis filter bank $\{H_{i}(z)\}$. The subband output signals $y_{i}(n)$ are obtained by filtering the subband input signals $u_{i}(n)$ through an adaptive filter $\mathbf{w}(k)$, which is an estimate of the unknown $\mathbf{w}^{o}$. Then, the subband signals $d_{i}(n)$ and $y_{i}(n)$ are decimated to a lower sampling rate to generate the signals $d_{i,D}(k)$ and $y_{i,D}(k)$, respectively. Here, n and k are used to index the original sequences and the decimated sequences. The decimated subband output signal is expressed as $y_{i,D}(k) = \mathbf{u}_{i}^{T}(k)\mathbf{w}(k)$, where $\mathbf{u}_{i}(k) = [u_{i}(kN), u_{i}(kN-1), \ldots, u_{i}(kN-L+1)]^{T}$. Thus, the ith decimated subband error signal is computed as $e_{i,D}(k) = d_{i,D}(k) - y_{i,D}(k)$, where $d_{i,D}(k) = d_{i}(kN)$.
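
To make the multiband structure concrete, the following NumPy sketch computes the decimated subband errors for one iteration. It follows the notation above, but the per-iteration full convolutions (instead of running filter states) and the zero-padding at start-up are simplifications chosen for readability, not the implementation used in the paper.

```python
import numpy as np

def subband_errors(w_hat, h_analysis, u_hist, d_hist, k):
    """Decimated subband errors e_{i,D}(k) for one NSAF iteration (illustrative sketch).

    w_hat      : (L,)   current adaptive filter estimate of w^o
    h_analysis : (N, P) analysis filter bank, row i = impulse response of H_i(z)
    u_hist     : (T,)   full-rate input samples available up to time n = k*N
    d_hist     : (T,)   full-rate desired samples available up to time n = k*N
    """
    N, _ = h_analysis.shape
    L = w_hat.size
    n = k * N                                   # decimation by the number of subbands
    errors = np.zeros(N)
    for i in range(N):
        # subband signals u_i(.), d_i(.): filter the full-rate signals with H_i(z)
        u_i = np.convolve(u_hist[: n + 1], h_analysis[i])[: n + 1]
        d_i = np.convolve(d_hist[: n + 1], h_analysis[i])[: n + 1]
        # regressor u_i(k) = [u_i(kN), ..., u_i(kN - L + 1)]^T (zero-padded at start-up)
        u_vec = np.zeros(L)
        seg = u_i[max(0, n - L + 1): n + 1][::-1]
        u_vec[: seg.size] = seg
        # decimated subband error: e_{i,D}(k) = d_i(kN) - u_i(k)^T w_hat
        errors[i] = d_i[n] - u_vec @ w_hat
    return errors
```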

In [22], two types of cost functions, i.e., a tanh-type cost function and an ln-type cost function, both of which use the square of the subband error signal normalized by the subband input vector, are introduced into the subband structure to generate two novel SSS-NSAF algorithms. However, the tanh-type cost function requires an exponential function of the sum of the normalized subband output errors with respect to the subband input vectors, which brings about a heavy computational burden. In contrast, the ln-type cost function reduces the computational complexity to a large extent. Therefore, owing to its low computational cost, the algorithms proposed in this paper are based on the simplified ln-type version of the step-size scaler; for the convenience of the discussion in the next section, the tanh-type SSS-NSAF algorithm is not presented further. The ln-type cost function of the SSS-NSAF algorithm is given as follows, where a constant parameter controls the sharpness of the scaler. By using the gradient descent method, the SSS-NSAF algorithm is derived by minimizing this ln-type cost function of the normalized subband error signal with respect to the weight coefficient vector, and the update equation of its weight coefficient vector can be derived easily; in this update, the step size is multiplied by a step-size scaler, which shrinks the effective step size whenever impulsive noise occurs and thus eliminates the unfavorable effect of impulsive interferences on the weight update.
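
A minimal sketch of one SSS-NSAF weight update is given below. It assumes a common ln-type cost of the form J(k) = sum_i (1/(2*alpha)) * ln(1 + alpha * e_{i,D}^2(k) / ||u_i(k)||^2); the constant alpha, the step size mu, and the regularization delta are illustrative values, not the exact choices of [22].

```python
import numpy as np

def sss_nsaf_update(w, U, e, mu=0.5, alpha=10.0, delta=1e-6):
    """One SSS-NSAF weight update (sketch under the assumed ln-type cost).

    w : (L,)   current weights
    U : (N, L) rows are the decimated subband regressors u_i(k)
    e : (N,)   decimated subband errors e_{i,D}(k)
    """
    norms = np.sum(U * U, axis=1) + delta          # ||u_i(k)||^2 (regularized)
    scaler = 1.0 / (1.0 + alpha * e**2 / norms)    # step-size scaler: ~1 in normal operation,
                                                   # ~0 when an impulsive burst inflates e_i
    w = w + mu * np.sum((scaler * e / norms)[:, None] * U, axis=0)
    return w
```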

3. Proposed SL0-SSS-IPNSAF Algorithm

3.1. Derivation of the Proposed SL0-SSS-NSAF Algorithm

By inserting the ln-type cost function of the SSS-NSAF algorithm into the sigmoid function structure, a new sigmoid function is defined as follows, where a positive parameter determines the sharpness of the sigmoid function. The aim of embedding the cost function of the SSS-NSAF algorithm into the sigmoid structure is to better suppress the adverse influence of impulsive interferences, especially when the probability of impulsive interferences is large.

Combining the above sigmoid function and exploiting the L0 norm constraint of the estimated weight vector, a new robust cost function is introduced as follows; it consists of the sigmoid function plus the L0 norm constraint on the estimated weight vector, weighted by a small positive value that controls the balance between the sigmoid function and the L0 norm constraint term.

Taking the derivative of (6) with respect to the estimated weight vector, we get the following:

By employing the gradient descent rule, the update equation of the coefficient vector of the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF) algorithm is obtained as follows, where a fixed step size is used. Considering that L0 norm minimization is an NP-hard problem, the following continuous differentiable function is usually used to approximate the L0 norm [13, 26, 27], in which a positive parameter determines the attraction strength exerted on the small-magnitude coefficients. Therefore, the mth component of the derivative of this approximation is easily calculated as follows:
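
The sketch below illustrates an SL0-SSS-NSAF-style update under explicit assumptions: the L0 norm is approximated by the usual exponential surrogate from the L0-LMS literature, and the sigmoid weighting is taken as the logistic function of the assumed ln-type cost from the previous sketch. The exact embedding and constants in the paper may differ.

```python
import numpy as np

def l0_zero_attractor(w, theta=10.0):
    """Gradient of the common L0 surrogate F(w) = sum_m (1 - exp(-theta*|w_m|)),
    used as a zero-attracting term (standard approximation; assumed here)."""
    return theta * np.sign(w) * np.exp(-theta * np.abs(w))

def sl0_sss_nsaf_update(w, U, e, mu=0.5, alpha=10.0, beta=1.0, rho=1e-4, delta=1e-6):
    """One SL0-SSS-NSAF update (illustrative sketch, assumed sigmoid-of-cost form)."""
    norms = np.sum(U * U, axis=1) + delta
    nerr2 = e**2 / norms                                   # normalized squared subband errors
    J = np.sum(np.log1p(alpha * nerr2)) / (2.0 * alpha)    # assumed ln-type SSS cost
    sig = 1.0 / (1.0 + np.exp(-beta * J))                  # sigmoid of the cost
    gain = beta * sig * (1.0 - sig)                        # chain-rule factor: vanishes for large J
    grad = np.sum((e / (norms * (1.0 + alpha * nerr2)))[:, None] * U, axis=0)
    return w + mu * gain * grad - mu * rho * l0_zero_attractor(w)
```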

Discussion 1. If the L0 norm constraint of the estimated weight vector is not considered, i.e., the constraint weight is set to zero, the proposed SL0-SSS-NSAF algorithm becomes the sigmoid SSS-NSAF (S-SSS-NSAF) algorithm, whose coefficient vector update equation and cost function are expressed as follows. Comparing the cost function (1) of the original SSS-NSAF algorithm with the S-SSS-NSAF update formula (11), it is easy to see that, for a small subband error signal, the sigmoid-based scaling stays close to a constant value, so the performance of the S-SSS-NSAF algorithm is similar to that of the original SSS-NSAF algorithm. In contrast, whenever impulsive interferences occur, the subband error signal becomes very large, and so does the cost; the sigmoid function then approaches the constant one, so its gradient vanishes and the iteration of the SL0-SSS-NSAF algorithm is effectively halted. This demonstrates that the proposed sigmoid-function-based algorithms not only retain the outstanding performance of the original SSS-NSAF algorithm in the impulsive-interference-free condition but also possess strong robustness against impulsive noise.
In fact, the robustness of the SSS-NSAF algorithm against impulsive noise primarily relies on the step-size scaler: when impulsive noise appears, the step-size scaler instantly scales down the step size to restrain the adverse effect of the contaminated subband error signal. Contrasting the weight update equation (3) of the SSS-NSAF algorithm with the S-SSS-NSAF update equation (11), each contains its own step-size scaler term. The suppressing effect of the proposed algorithm on impulsive noise is stronger than that of the original SSS-NSAF algorithm, which can be observed from their cost functions. Figure 2 presents the stochastic cost functions of the proposed S-SSS-NSAF algorithm and the original SSS-NSAF algorithm. Evidently, the stochastic cost function of the proposed S-SSS-NSAF algorithm is less steep than that of the original SSS-NSAF algorithm for both large and small perturbations of the normalized subband error signal, which illustrates that the proposed S-SSS-NSAF algorithm can obtain improved performance over the original SSS-NSAF algorithm even in the impulsive-interference-free environment.
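
A quick numerical check of the saturation argument, using the same assumed forms as in the sketches above: for a small normalized subband error the overall gain stays near its nominal value, while an impulsive burst drives the gain toward zero and effectively freezes the update.

```python
import numpy as np

def gain(nerr2, alpha=10.0, beta=1.0):
    """Overall update gain for one normalized squared subband error (assumed forms)."""
    J = np.log1p(alpha * nerr2) / (2.0 * alpha)
    sig = 1.0 / (1.0 + np.exp(-beta * J))
    return beta * sig * (1.0 - sig) / (1.0 + alpha * nerr2)

print(gain(1e-4))   # ordinary operation: gain close to beta/4
print(gain(1e4))    # impulsive burst: gain close to zero, the update is essentially halted
```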

3.2. The Proportionate Version of the SL0-SSS-NSAF Algorithm

Inspired by the work in [10], adaptive filtering algorithms that contain both a zero-attracting term and a proportionate control matrix achieve improved performance in terms of convergence rate and steady-state error. Therefore, to obtain the fast convergence rate of the proportionate control matrix and the low steady-state error of the zero-attracting term simultaneously, a gain control matrix is introduced into the SL0-SSS-NSAF algorithm to further accelerate its convergence rate. As a result, the proportionate version of the SL0-SSS-NSAF algorithm (SL0-SSS-IPNSAF) is obtained in an analogous way, where the proportionate matrix is a diagonal matrix whose diagonal elements are the individual tap gains. Several methods of choosing these gains have been put forward [10]. Among them, owing to its robustness to different sparseness degrees of the unknown impulse response, the following strategy is the most widely used procedure to compute the diagonal elements of the proportionate matrix, in which each gain depends on the magnitude of the lth element of the estimated weight vector and a small positive constant is included to avoid division by zero.
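
As a sketch of such a gain rule, the following uses the widely cited IPNLMS-type assignment (assumed here; see [10]): each gain blends a uniform share with a share proportional to the magnitude of the corresponding coefficient, which keeps the rule usable for any sparseness degree.

```python
import numpy as np

def proportionate_gains(w, kappa=0.0, eps=1e-6):
    """Diagonal of the proportionate matrix G(k) via an IPNLMS-style rule (assumption).

    kappa in [-1, 1] balances the uniform share against the proportional share;
    eps is a small positive constant that avoids division by zero.
    """
    L = w.size
    g = (1.0 - kappa) / (2.0 * L) \
        + (1.0 + kappa) * np.abs(w) / (2.0 * np.sum(np.abs(w)) + eps)
    return g            # use as np.diag(g), or apply element-wise in the update
```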

Discussion 2. From the update formula (13) of the SL0-SSS-IPNSAF algorithm, several related algorithms can be derived:
(1) Setting the weight that controls the balance between the sigmoid function and the L0 norm constraint term to zero, the SL0-SSS-IPNSAF algorithm becomes the S-SSS-IPNSAF algorithm.
(2) When the proportionate matrix becomes the identity matrix, the SL0-SSS-IPNSAF algorithm reduces to the SL0-SSS-NSAF algorithm.
(3) If both the constraint weight is zero and the proportionate matrix is the identity matrix, the SL0-SSS-IPNSAF algorithm turns into the S-SSS-NSAF algorithm.

4. Adaptive Convex Combination of Two SL0-SSS-IPNSAF Algorithms (CSL0-SSS-IPNSAF)

Similar to all fixed-step-size adaptive filtering algorithms, the proposed SL0-SSS-IPNSAF algorithm with a large step size has a fast convergence rate but a high steady-state error, so there is always a conflict between fast convergence and low steady-state error in the proposed SL0-SSS-IPNSAF. To address this issue, the CSL0-SSS-IPNSAF algorithm is proposed by combining two SL0-SSS-IPNSAF component filters that use different step sizes; the diagram of the adaptive combination scheme for the ith subband is presented in Figure 3, where one component filter uses a large step size and the other uses a small step size. The coefficient vector of the overall filter is generated by a variable mixing parameter that lies between zero and one. Based on the convex combination strategy, the ith subband output signal of the overall filter is the correspondingly mixed combination of the decimated subband outputs of the two component filters, and the overall subband error signal is obtained in the same way.
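
The combination itself is simple to state in code. The sketch below forms the overall weights and the overall decimated subband errors from the two component filters (w1 with the large step size, w2 with the small step size); variable names are illustrative.

```python
import numpy as np

def combine_weights(w1, w2, lam):
    """Overall filter: w(k) = lambda(k)*w1(k) + (1 - lambda(k))*w2(k)."""
    return lam * w1 + (1.0 - lam) * w2

def overall_subband_errors(d_dec, U, w1, w2, lam):
    """Overall decimated subband errors, equal to the same convex mix of the component errors.

    d_dec : (N,)   decimated subband desired signals d_{i,D}(k)
    U     : (N, L) decimated subband regressors u_i(k)
    """
    y = lam * (U @ w1) + (1.0 - lam) * (U @ w2)
    return d_dec - y
```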

From (15), we know that the performance of the overall filter largely relies on the choice of the mixing parameter. Thus, an appropriate method to compute it recursively is important. To constrain its value to the interval [0, 1], a sigmoidal function of an auxiliary variable is applied.

According to the gradient descent method, the auxiliary variable is recursively updated by minimizing the power of the system output error, which is equal to the sum of the squared subband errors of the overall filter. In the resulting recursion, a dedicated step size controls the adaptation of the auxiliary variable, and an additional term is introduced to prevent the update from stalling whenever the mixing parameter is equal to 0 or 1. Besides, to keep a minimum level of adaptation, the auxiliary variable is suggested to lie in a limited interval [28].
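
A sketch of this recursion, assuming the standard convex-combination update: the mixing parameter is the logistic function of the auxiliary variable, which follows a gradient descent step on the summed squared subband errors; the small eps term and the clipping bound a_max reflect the common practice cited around [28], and their values here are illustrative.

```python
import numpy as np

def update_mixing(a, e_overall, e1, e2, mu_a=0.5, eps=1e-2, a_max=4.0):
    """Update of the auxiliary variable a(k) and mixing parameter lambda(k) (sketch).

    e_overall, e1, e2 : (N,) overall and component decimated subband errors at iteration k
    """
    lam = 1.0 / (1.0 + np.exp(-a))                 # lambda(k) = sigmoid(a(k))
    neg_grad = np.sum(e_overall * (e2 - e1))       # negative gradient of the error power w.r.t. lambda
    a = a + mu_a * (lam * (1.0 - lam) + eps) * neg_grad
    a = float(np.clip(a, -a_max, a_max))           # keep the adaptation from stalling
    return a, 1.0 / (1.0 + np.exp(-a))
```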

Actually, the component filter with a small step size may reduce the convergence rate of the overall filter in the initial phase of adaptation. To avoid this, the following weight transfer scheme is utilized.

If the mixing parameter exceeds a threshold close to one, the coefficients of the slow component filter are partially replaced by a weighted combination of the two component filters; otherwise, the two component filters are updated independently. The weighting parameter must lie between zero and one, and its recommended value is given in [29]. The remaining quantities in the transfer rule are calculated as follows:
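
A sketch of a weight transfer rule of this kind is given below; the trigger threshold on the mixing parameter and the smoothing factor eta are illustrative choices, not the values recommended in [29].

```python
import numpy as np

def transfer_weights(w1, w2, lam, lam_threshold=0.98, eta=0.9):
    """Transfer weights from the fast filter w1 to the slow filter w2 when the
    combination strongly favors w1 (illustrative sketch)."""
    if lam >= lam_threshold:
        w2 = eta * w2 + (1.0 - eta) * w1   # gradually copy the fast filter into the slow one
    return w2
```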

5. Simulation Results

To measure the performance of the proposed S-SSS-NSAF, SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms, simulations are presented in the system identification and acoustic echo cancellation contexts with impulsive interferences. A cosine-modulated filter bank with N subbands is utilized. The unknown impulse responses, of length L = 512 taps, are illustrated in Figure 4, and the adaptive filter is assumed to have the same length as the unknown vector. To examine the robustness of the proposed algorithms against impulsive interferences, an impulsive interference modeled as $\varsigma(n) = b(n)g(n)$ is added to the output of the identified unknown system, where $b(n)$ is a Bernoulli process with probability mass function $P[b(n)=1] = P_{r}$ and $P[b(n)=0] = 1-P_{r}$ ($P_{r}$ denotes the occurrence probability of impulsive interferences), and $g(n)$ is a zero-mean white Gaussian noise with variance $\sigma_{g}^{2}$. An independent white Gaussian measurement noise is added to the unknown system output at a 30 dB signal-to-noise ratio (SNR). In Sections 5.1 and 5.2, the input signal is an AR(1) process and the unknown impulse response is changed abruptly at the middle of the iterations to investigate the tracking capability of all algorithms, while in Section 5.3, a speech input signal is used.
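
For reference, the Bernoulli-Gaussian impulsive interference of this setup can be generated as follows; the value of sigma_g relative to the system output power is a simulation choice not reproduced here.

```python
import numpy as np

def impulsive_interference(n_samples, p_r, sigma_g, rng=None):
    """Bernoulli-Gaussian impulsive interference: zeta(n) = b(n) * g(n).

    b(n) ~ Bernoulli(p_r) marks the impulse instants; g(n) is zero-mean white
    Gaussian noise with standard deviation sigma_g.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = rng.random(n_samples) < p_r
    g = rng.normal(0.0, sigma_g, n_samples)
    return b * g
```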

The performance of all algorithms is measured by the normalized mean square deviation (NMSD), defined as $\mathrm{NMSD}(k) = 10\log_{10}\left(\|\mathbf{w}^{o} - \mathbf{w}(k)\|^{2}/\|\mathbf{w}^{o}\|^{2}\right)$. All learning curves are obtained by averaging over 10 independent trials (except for the speech input).
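
In code, this metric is a one-liner (assuming the standard definition in dB):

```python
import numpy as np

def nmsd_db(w_true, w_est):
    """Normalized mean square deviation in dB: 10*log10(||w_o - w(k)||^2 / ||w_o||^2)."""
    return 10.0 * np.log10(np.sum((w_true - w_est) ** 2) / np.sum(w_true ** 2))
```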

5.1. Impulsive Interference Environment

In this section, the proposed S-SSS-NSAF, SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms are compared with the conventional SSAF [12] and SSS-NSAF [22] algorithms in the impulsive interference environment.

Since the proposed S-SSS-NSAF algorithm does not belong to the sparsity-aware family, the identified unknown system is the dispersive impulse response illustrated in Figure 4(a). Figure 5 compares the performance of the proposed S-SSS-NSAF algorithm with that of the SSAF and SSS-NSAF algorithms. Compared with the conventional SSS-NSAF algorithms, the proposed S-SSS-NSAF algorithm obtains a lower steady-state error with almost the same initial convergence rate, although its tracking capability when the unknown system changes suddenly is not as good.

The performance comparison of the proposed SL0-SSS-NSAF and SL0-SSS-IPNSAF algorithms with the SSAF and SSS-NSAF algorithms is shown in Figure 6. The impulse response used here is sparse and is given in Figure 4(b). The SSS-NSAF-2 algorithm has almost the same performance as the SSS-NSAF-1 algorithm before the unknown system is changed abruptly, while the SSS-NSAF-2 algorithm has better tracking capability than the SSS-NSAF-1 algorithm. The proposed SL0-SSS-NSAF algorithm obtains a lower steady-state error and stronger robustness against impulsive interference than the SSS-NSAF algorithms with the same convergence rate. It is noted that, as a proportionate version of the proposed SL0-SSS-NSAF algorithm, the proposed SL0-SSS-IPNSAF algorithm achieves significantly improved convergence behavior and better tracking capability than the SL0-SSS-NSAF algorithm with the same steady-state error. This demonstrates that the role of the proportionate scheme is to speed up the convergence rate of the original algorithm.

Figure 7 compares the performance of the proposed S-SSS-IPNSAF and SL0-SSS-IPNSAF algorithms with that of the SSAF and SSS-NSAF algorithms for the sparse impulse response. The behavior of the SSS-NSAF-1 and SSS-NSAF-2 algorithms is similar to that in Figure 6. As can be observed from Figure 7, the proposed S-SSS-IPNSAF algorithm provides a faster convergence rate, a lower steady-state error, and better tracking ability than the SSS-NSAF algorithms. By adding the L0-norm constraint term to the S-SSS-IPNSAF algorithm, the proposed SL0-SSS-IPNSAF algorithm obtains the same convergence rate but a lower steady-state error and better tracking capability. It can be concluded that the L0-norm constraint term offers a lower steady-state error in comparison with the original algorithm.

As can be seen from Figure 8, since the step-size parameter of the proposed SL0-SSS-IPNSAF algorithm is fixed, the algorithm with a large step size has a fast convergence rate but a high steady-state error, whereas the one with a small step size obtains a lower steady-state error but a slower convergence rate. By utilizing the convex combination scheme, the proposed CSL0-SSS-IPNSAF algorithm simultaneously achieves the fast convergence rate of the large-step-size component filter and the low steady-state error of the small-step-size component filter.

5.2. Impulsive-interference-free Environment

Figures 9 and 10 illustrate the NMSD learning curves of the standard SSAF, SSS-NSAF-1, and SSS-NSAF-2 algorithms and the proposed SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms in the impulsive-interference-free environment. Obviously, even in this environment, the proposed algorithms with the sigmoid-function-based step-size scaler gain improved performance over the original SSS-NSAF algorithms in terms of convergence rate, steady-state error, and tracking capability. With the same steady-state error, compared with the SL0-SSS-IPNSAF algorithm, the SL0-SSS-NSAF algorithm converges more slowly in the initial phase of the iteration but obtains a faster convergence rate near steady state. In Figure 10, the proposed S-SSS-IPNSAF algorithm gains a faster convergence rate than the standard SSS-NSAF algorithms with the same steady-state error. As in the impulsive interference environment, the proposed SL0-SSS-IPNSAF algorithm obtains the same initial convergence rate but a lower steady-state error and better tracking capability than the S-SSS-IPNSAF algorithm.

5.3. AEC Scenario

The comparison of the NMSD learning curves of the standard SSAF, SSS-NSAF-1, and SSS-NSAF-2 algorithms and the proposed SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms in AEC is presented in Figures 11 and 12. The speech input signal used is shown in Figure 13. As shown, the proposed SL0-SSS-NSAF algorithm does not perform very well, while the proposed S-SSS-IPNSAF algorithm achieves a significantly faster convergence rate, a lower steady-state NMSD, and better tracking ability than the original SSS-NSAF algorithm. Furthermore, by combining the benefits of the proportionate scheme and the L0 norm constraint, the SL0-SSS-IPNSAF algorithm performs even better than the S-SSS-IPNSAF algorithm in the AEC scenario.

As observed from Figure 14, by utilizing the convex combination scheme and the weight transfer strategy, the proposed CSL0-SSS-IPNSAF algorithm simultaneously inherits the fast convergence rate of the large-step-size SL0-SSS-IPNSAF component filter and the low steady-state error of the small-step-size SL0-SSS-IPNSAF component filter.

6. Conclusion

To improve the performance of the SSS-NSAF algorithm when identifying sparse systems, a series of sparsity-aware algorithms, including the SL0-SSS-NSAF, S-SSS-NSAF, and SL0-SSS-IPNSAF algorithms, is proposed by inserting the logarithm cost function of the SSS-NSAF algorithm into the sigmoid function structure. In addition, the convex combination version of the SL0-SSS-IPNSAF algorithm is proposed so that it can obtain both a fast convergence rate and a low steady-state error. Simulations in the AEC scenario with impulsive interference have confirmed the improved performance of the proposed algorithms.

Although the sparsity-aware algorithms proposed in this paper essentially belong to the linear adaptive filtering framework, they can also be extended to active noise control in linear and/or nonlinear systems [30, 31] and to other fields [32] in the future.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant no. 61703060 and the Sichuan Science and Technology Program under grant no. 21YYJC0469.