Abstract
To resolve the engineering problem that the performance of traditional blind source separation (BSS) methods deteriorates or even fails when the unknown source signals are corrupted by impulse noise at a low signal-to-noise ratio (SNR), a more effective and robust BSS method is proposed. Based on the dual-parameter variable tailing (DPVT) transformation function, moving average filtering (MAF), and median filtering (MF), a filtering system that suppresses noise in an impulse noise environment is constructed, denoted MAF-DPVT-MF. A hybrid optimization objective function based on two independence criteria is designed to achieve more effective and robust BSS. Meanwhile, combining quantum computation theory with the slime mould algorithm (SMA), a quantum slime mould algorithm (QSMA) is proposed and used to solve the hybrid optimization objective function. The proposed method is called BSS based on QSMA (QSMA-BSS). Simulation results show that QSMA-BSS is superior to traditional methods: compared with previous BSS methods, it has a wider application range, more stable performance, and higher precision.
1. Introduction
BSS is a signal processing method that extracts or restores each component of the source signals solely from the received observation signals, without any prior knowledge of the source signals or the transmission channels. BSS has gradually become a research hotspot and has been successfully applied in various fields, such as image and voice signal processing, biomedical signal analysis and processing, and antenna array signal processing [1–3]. However, current BSS methods still have many shortcomings.
Firstly, traditional BSS methods require selecting nonlinear functions for the separation operation according to the probability density properties and kurtosis values of the source signals, which contradicts the assumption that the source signals and channels are unknown. Moreover, these algorithms find it difficult to escape local optima, and their convergence is slow, which degrades the separation performance [4–6]. Swarm intelligence algorithms require no source signal or channel information as prior knowledge and no selection of nonlinear functions; they also converge faster, escape local optima more easily, and achieve higher accuracy. Although swarm intelligence algorithms such as particle swarm optimization (PSO) [7], the genetic algorithm (GA) [8], the bacterial foraging algorithm (BFA) [9], the slime mould algorithm (SMA) [10], the artificial bee colony (ABC) algorithm [11], the whale optimization algorithm (WOA) [12], grey wolf optimization (GWO) [13], the crow search algorithm (CSA) [14], and the bat algorithm (BA) [15] make up for the deficiencies of traditional methods to some extent, those with many parameters still need improvement in convergence speed, convergence accuracy, and stability, and improper parameter tuning easily degrades performance. In 2020, a new intelligent algorithm named the slime mould algorithm (SMA) [10] was proposed; it has fewer parameters to set and lower complexity.
In addition, classical BSS algorithms often design the objective function based on a single independence criterion, such as maximum kurtosis (MK), maximum negative entropy (MNE), minimum mutual information, or the maximum likelihood criterion of the separated signals [16, 17]. However, a single criterion may not be the best one for a given algorithm, so the best result cannot be obtained; moreover, the best criterion may differ between algorithms. Shih-Hsiung and Yang [18] proposed independent component analysis based on gravitational particle swarm optimization (GPSO-ICA) and compared its performance under three different criteria. Ebrahimzadeh and Mavaddati [19] compared the performance of the artificial bee colony (ABC) algorithm under different efficient cost functions. Kumar and Jayanthi [20] compared three different independence criteria for fast independent component analysis (FAST-ICA).
In recent years, BSS in impulse noise has made great progress [21–23]. On the one hand, however, traditional BSS methods for impulse noise are often only suitable for weak impulse noise and high SNR; under strong impulse noise or low SNR, their performance deteriorates or they even fail. On the other hand, most existing research rarely applies filtering methods to BSS under impulse noise, because traditional impulse noise suppression methods often require information about the source signals or the noise as prior knowledge to achieve good performance. A dual-parameter variable tailing (DPVT) transformation function based on an exponential function was designed by Luo et al. [24], but its threshold must be adjusted according to the source signals and the noise situation. Arce and Gonzalez [25] proposed the weighted myriad filtering method, which needs the ideal signals as prior knowledge. A minimum mean square error (MMSE) method without any prior knowledge was proposed by Moon and Weissman [26], but its performance deteriorates in impulse noise. A transformation function named Gaussian-tailed zero-memory nonlinearity (GZMNL) was optimized by Luo et al. [27], but it still requires the standard deviation of the dispersion coefficient as prior knowledge.
Aiming at the above shortcomings, the main contributions of this paper are as follows:
(1) We design a new intelligent algorithm named QSMA, based on quantum coding, a simulated quantum rotation gate, and the slime mould algorithm (SMA) [10], to obtain the optimal solution. A quantum rotation angle is designed, and the simulated quantum rotation gate is introduced to update the quantum positions, which further improves search accuracy. Owing to the parallel nature of quantum computing, quantum evolutionary algorithms have several advantages: a small population size does not hurt performance, few iterations suffice while the global optimization ability remains strong, and an individual's past history is used effectively during evolution [28, 29]. Compared with previous algorithms applied to other engineering problems, QSMA achieves better convergence performance with fewer parameters to set.
(2) A new hybrid optimization objective function based on two different independence criteria, MK and MNE, is designed. Weight coefficients are assigned to the two criteria, and the best criterion for the algorithm is identified from how the performance evaluation index changes with the weight coefficient, yielding a more accurate result.
(3) A filtering system named MAF-DPVT-MF is proposed and introduced into the BSS model. Unlike the DPVT transformation function alone, MAF-DPVT-MF adjusts the threshold according to the filtered observation signals, and its combination with median filtering [30] enhances noise suppression. MAF-DPVT-MF requires no prior knowledge and can achieve BSS in impulse noise.
Simulation results illustrate that this filtering system is suitable not only for weak impulse noise but also for strong impulse noise and low-SNR environments.
The rest of this paper is organized as follows: the BSS model for source signals in impulse noise is presented in the next section. In Section 3, QSMA is proposed and applied to BSS, and its convergence is further analyzed. Simulation results and conclusions are presented in Sections 4 and 5, respectively.
2. BSS Model under Source Signals in Impulse Noise
2.1. α-Stable Distribution
The α-stable distribution is used for impulse noise modeling [31] and is usually defined by its characteristic function:

φ(t) = exp{jμt − γ|t|^α [1 + jβ sgn(t) ω(t, α)]},

where ω(t, α) = tan(απ/2) for α ≠ 1 and ω(t, α) = (2/π) log|t| for α = 1. Here j denotes the imaginary unit. α ∈ (0, 2] denotes the characteristic exponent: the smaller α is, the more impulsive the distribution, and the larger α is, the less impulsive it is. β represents the symmetry parameter, and the stable distribution is called the standard SαS distribution when β = 0. When α = 2, the SαS distribution becomes the Gaussian distribution, and the generated noise is Gaussian noise. When 1 < α < 2, the generated noise is weak impulse noise; when 0 < α < 1, it is strong impulse noise. γ denotes the dispersion of the impulse noise and is also called the scale parameter. μ represents the location parameter, and sgn(·) denotes the sign function. Since the SαS distribution with α < 2 does not have a finite second moment, the mixed signal-to-noise ratio (MSNR) is defined as

MSNR = 10 log₁₀(P_s / γ),

where P_s represents the power of the s-th signal and γ is the dispersion of the noise.
We can generate impulse noise obeying the standard SαS distribution through the algorithm obtained from the conversion formula in [32]. The specific steps are as follows:
Step 1. Generate a random variable V uniformly distributed on (−π/2, π/2).
Step 2. Generate an independent exponentially distributed random variable W with mean 1.
Step 3. Obtain a random variable X obeying the standard SαS distribution according to

X = [sin(αV) / (cos V)^(1/α)] · [cos(V − αV) / W]^((1−α)/α),

which reduces to X = tan(V) when α = 1. The impulse noise at each sampling point is obtained by drawing an independent standard SαS variate in this way and scaling it by the dispersion. Figure 1 illustrates discrete realizations of the process for α = 2 (the Gaussian case) and for successively smaller values of α: as α decreases, the impulsiveness of the noise keeps increasing. For 1 < α < 2 the standard SαS noise is weakly impulsive, and for 0 < α < 1 it is strongly impulsive. The MSNR is set to 10 dB.
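The three steps above can be sketched in pure Python. The paper's simulations use MATLAB; this is the standard Chambers-Mallows-Stuck construction for the symmetric case, shown only as an illustration, and the function and variable names are ours:

```python
import math
import random

def sas_sample(alpha, rng):
    """Draw one standard SaS variate via the Chambers-Mallows-Stuck transform."""
    # Step 1: V uniform on (-pi/2, pi/2)
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    # Step 2: W exponential with mean 1, independent of V
    w = rng.expovariate(1.0)
    # Step 3: combine V and W into a standard SaS variate
    if alpha == 1.0:
        return math.tan(v)  # Cauchy special case
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(42)
noise = [sas_sample(0.9, rng) for _ in range(4000)]  # alpha < 1: strong impulse noise
```

With a seed fixed, the same heavy-tailed realization is reproduced on every run, which is convenient for Monte Carlo comparisons.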

2.2. Construction of the Filtering System
2.2.1. DPVT Transformation Function
The dual-parameter variable tailing (DPVT) transformation [24] is a nonlinear transformation function proposed in 2019. One parameter controls the linear-region threshold, and the other controls the tail decay speed. The nonlinearity uses an exponential function as its tail: samples whose amplitude is within the linear-region threshold T pass through unchanged, while larger samples are suppressed, with the parameter c (c > 0) controlling the degree of suppression of large-value samples. However, since the noise situation is unknown in BSS, an improperly set threshold may cause the performance to deteriorate. Consequently, we need to design a filtering system that can adjust the threshold according to the noise situation.
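Since the exact DPVT expression is not reproduced above, the following Python sketch only mirrors the described behavior (a linear region plus an exponentially decaying tail); the true tail form in [24] may differ, and the parameter names are illustrative:

```python
import math

def dpvt(x, threshold, c):
    """Illustrative dual-parameter variable tailing (DPVT) nonlinearity.

    Samples inside the linear region pass unchanged; larger samples are
    clipped to the threshold and decayed exponentially. This is a sketch
    of the behavior described in [24], not the paper's exact formula.
    """
    if abs(x) <= threshold:
        return x  # linear region
    # exponential tail: larger c gives faster decay, stronger suppression
    return math.copysign(threshold * math.exp(-c * (abs(x) - threshold)), x)
```

Small inputs are passed through untouched, while a sample far above the threshold is attenuated to a small fraction of the threshold, which is exactly the variable-tailing behavior the text describes.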
2.2.2. MAF-DPVT-MF
The independent source signals, with impulse noise superimposed on them, are mixed by the unknown hybrid system matrix A to produce the observation signals; t denotes the sampling instant, and the total number of sampling points is finite. Because the impulse noise covers the signals, we cannot directly read off the peak of the observation signals to determine the threshold. Consequently, we first apply a moving average filter (MAF) [33], which reduces the amplitude of the impulse noise, to obtain a smoothed version of the observations.
In this way, the DPVT threshold can be determined from the peak amplitude of the filtered observation signals (equation (6)).
Extensive simulation experiments show that this threshold-setting method is effective and stable. The observation signals, after passing through the DPVT, enter the median filter to further suppress the noise. Finally, we acquire the filtered observation signals.
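The whole MAF-DPVT-MF chain can be sketched as below. The rule "threshold = peak absolute amplitude of the MAF output" is our assumption standing in for equation (6), which is not reproduced in the text, and the DPVT tail is the illustrative exponential form from above:

```python
import math
import statistics

def moving_average(x, win):
    # Causal sliding-window mean; the window shrinks at the left edge.
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - win + 1): i + 1]
        out.append(sum(seg) / len(seg))
    return out

def median_filter(x, win):
    # Causal sliding-window median, window shrinking at the left edge.
    return [statistics.median(x[max(0, i - win + 1): i + 1]) for i in range(len(x))]

def maf_dpvt_mf(x, maf_win=50, mf_win=10, c=0.8):
    # 1) MAF pass, used only to estimate the threshold from the damped envelope
    smoothed = moving_average(x, maf_win)
    threshold = max(abs(v) for v in smoothed)  # assumed stand-in for equation (6)
    # 2) DPVT pass on the raw observations using that threshold
    def dpvt(v):
        if abs(v) <= threshold:
            return v
        return math.copysign(threshold * math.exp(-c * (abs(v) - threshold)), v)
    shaped = [dpvt(v) for v in x]
    # 3) median filter to further suppress residual spikes
    return median_filter(shaped, mf_win)

# demo: a slow sinusoid with one strong impulse
x = [math.sin(0.01 * i) for i in range(1000)]
x[500] += 100.0
y = maf_dpvt_mf(x)
```

Because the threshold is derived from the MAF output rather than fixed in advance, the impulse at sample 500 is crushed while the underlying sinusoid passes almost unchanged, which matches the adaptive behavior claimed for the filtering system.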
2.2.3. Hybrid Optimization Objective Function
Preprocessing includes two steps: centering and whitening. Centering subtracts the mathematical expectation from each observation signal so that the resulting signals are zero-mean. Whitening then applies a linear transformation to the centered signals to obtain whitened signals whose covariance matrix equals the identity matrix. Concretely, the covariance matrix R of the centered signals is computed and its eigenvalue decomposition R = QHQᵀ is performed, where the orthogonal matrix Q is composed of the eigenvectors of R and the diagonal matrix H of the corresponding eigenvalues; the whitening matrix is then V = H^(−1/2)Qᵀ. The objective function is optimized by the algorithm to obtain the separation matrix W, and the separated signals are the estimates of the source signals. Since the characteristics of the source signals and the transmission channel are unknown, the estimates have randomness in amplitude and arrangement order, which is called the ambiguity of BSS. The separated signals can also be further processed by the median filter (Figure 2), and they are preprocessed in the same way before the objective function is evaluated. The absolute value of the kurtosis of a zero-mean signal y is calculated by

|kurt(y)| = |E[y⁴] − 3(E[y²])²|.

And the negative entropy (negentropy) is approximated by J(y) = (E[G(y)] − E[G(ν)])², where G(u) = (1/a) log cosh(au), a denotes the negative entropy function coefficient, and ν is a standard Gaussian variable. The hybrid optimization objective function can then be denoted as

F = λ₁ |kurt(y)| + λ₂ J(y),

where λ₁ and λ₂ are both numbers in [0, 1] and λ₁ + λ₂ = 1. The two terms are the absolute value of the kurtosis and the negentropy of the separated (whitened) signals, respectively; the same two quantities are also evaluated for the median-filtered separated signals.
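As a concrete sketch, the two criteria and their weighted mixture can be estimated from samples as follows. The log-cosh contrast and the sampled Gaussian reference are the usual FastICA-style approximations and are our assumptions here, since the paper's exact formulas are not reproduced:

```python
import math
import random

def abs_kurtosis(y):
    # |kurt(y)| with kurt(y) = E[y^4] - 3(E[y^2])^2 for a zero-mean signal
    n = len(y)
    m2 = sum(v * v for v in y) / n
    m4 = sum(v ** 4 for v in y) / n
    return abs(m4 - 3.0 * m2 * m2)

def negentropy(y, a=1.0, seed=0):
    # J(y) ~ (E[G(y)] - E[G(nu)])^2 with G(u) = log(cosh(a*u)) / a and nu a
    # standard Gaussian reference, estimated here by Monte Carlo sampling.
    g = lambda u: math.log(math.cosh(a * u)) / a
    g_y = sum(g(v) for v in y) / len(y)
    ref_rng = random.Random(seed)
    g_nu = sum(g(ref_rng.gauss(0.0, 1.0)) for _ in range(len(y))) / len(y)
    return (g_y - g_nu) ** 2

def hybrid_objective(y, lam1):
    # weighted mix of the two independence criteria, lam1 + lam2 = 1
    return lam1 * abs_kurtosis(y) + (1.0 - lam1) * negentropy(y)

# demo: a zero-mean, unit-variance uniform signal (sub-Gaussian, kurt ~ -1.2)
rng = random.Random(1)
y = [rng.uniform(-math.sqrt(3.0), math.sqrt(3.0)) for _ in range(5000)]
score = hybrid_objective(y, 0.5)
```

Sweeping `lam1` from 1 down to 0 reproduces the experiment of Section 4.2: at the endpoints the objective reduces to the single MK or MNE criterion.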
Figure 2 shows the BSS model after introducing MAF-DPVT-MF, which is the BSS model for source signals in impulse noise. Figure 2 describes the following process:
Step 1. The source signals corrupted by impulse noise enter the system, yielding the observation signals.
Step 2. The observation signals enter the DPVT and, in parallel, the MAF, from whose output the threshold is determined. This is the first parallel structure. After the signals are filtered by the DPVT, they enter the MF. MAF-DPVT-MF consists of this whole process.
Step 3. The filtered observation signals are preprocessed.
Step 4. The hybrid optimization objective function is established from the preprocessed signals, and QSMA is used to find its optimal solution; consequently, the separation system is determined, and the final separated signals are obtained after passing through it. This is the second parallel structure.
Step 5. The separated signals can also be further processed by the MF.
3. QSMA-BSS
3.1. The Proposed QSMA
The quantum slime mould algorithm (QSMA) is a novel intelligent optimization algorithm based on the oscillation mode of slime moulds in nature [10] and a quantum behavior metaphor for slime moulds. The population size of the quantum slime moulds is N, and the maximum number of iterations for the whole population is H. h denotes the iteration index, and d denotes the spatial dimension of each quantum slime mould. The quantum position of the k-th quantum slime mould is randomly initialized with each dimension in [0, 1]. The quantum position of a quantum slime mould is mapped to its position by a linear rule: each dimension is scaled between the lower and upper limits of the corresponding dimension of the search space.
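The mapping rule can be written directly; the names `lower` and `upper` for the per-dimension limits are illustrative:

```python
import math
import random

def map_to_position(q, lower, upper):
    # Linear mapping of a quantum position q in [0, 1]^d onto the search space
    return [lo + qr * (up - lo) for qr, lo, up in zip(q, lower, upper)]

rng = random.Random(7)
d = 3
q = [rng.random() for _ in range(d)]                   # quantum position in [0, 1]^d
x = map_to_position(q, [-math.pi] * d, [math.pi] * d)  # e.g. rotation angles
```

A quantum value of 0 maps to the lower limit and 1 to the upper limit, so the quantum population always stays inside the feasible region after mapping.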
The r-th dimensional quantum rotation angle of the k-th quantum slime mould is updated by equation (11), using two individuals randomly selected from the population, the weight of the k-th quantum slime mould, and uniform random numbers between 0 and 1. The k-th quantum slime mould's quantum position is selected for updating with the discovery probability, for which a random number uniformly distributed in [0, 1] is produced; the worst quantum position also takes part in the update. The oscillation weight coefficient shrinks in range as the iterations proceed, and the inertia weight coefficient decreases linearly between its maximum and minimum values.
The r-th dimensional quantum position of the k-th quantum slime mould is then updated by the simulated quantum rotation gate according to equation (14), in which the global optimal quantum position guides the rotation.
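Since equation (14) is not reproduced in the text, the following is an illustrative stand-in using one common form of the simulated quantum rotation gate from the quantum-inspired evolutionary algorithm literature:

```python
import math

def rotate(q_r, theta):
    # One common simulated quantum rotation gate:
    # q' = |q*cos(theta) - sqrt(1 - q^2)*sin(theta)|.
    # Since (q, sqrt(1 - q^2)) is a unit vector, the rotated value
    # always stays inside [0, 1], so no clipping is needed.
    return abs(q_r * math.cos(theta) - math.sqrt(1.0 - q_r * q_r) * math.sin(theta))
```

A zero rotation angle leaves the quantum bit unchanged, while a nonzero angle rotates it toward (or past) the guiding value, which is how the gate drives quantum positions toward the global optimum.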
QSMA can be used not only in BSS problems but also in other optimization problems.
3.2. QSMA-BSS
The separation matrix W is an orthogonal matrix, so using the Givens rotation transformation [34] it can be expressed as the product of a series of rotation matrices to reduce the amount of calculation (equation (15)). Each rotation matrix is a D × D identity matrix in which the elements at the intersections of rows and columns p and q are replaced by cos θ and ±sin θ, where D is the dimension of the separation matrix and p and q (1 ≤ p < q ≤ D) are the row and column numbers of the elements containing the rotation angle. The rotation matrices are arranged from left to right on the right-hand side of equation (15), and each rotation matrix's angle is indexed by its sequence number in that product. Using the rotation angles of the D(D − 1)/2 rotation matrices as the position information of a quantum slime mould, we can obtain, from the mould's position and equation (15), the separation matrix corresponding to the k-th quantum slime mould. Preprocessing the separated signals obtained from this separation matrix yields the signals used in the fitness evaluation.
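As a sketch, for D = 3 the three rotation angles parameterize an orthogonal separation matrix as a product of plane rotations; the (p, q) ordering used here is one natural choice and may differ from the paper's:

```python
import math

def givens(d, p, q, theta):
    # d x d identity with a plane rotation embedded in rows/columns p, q
    g = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
    g[p][p] = math.cos(theta)
    g[q][q] = math.cos(theta)
    g[p][q] = -math.sin(theta)
    g[q][p] = math.sin(theta)
    return g

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def separation_matrix(d, angles):
    # Product of the d*(d-1)/2 plane rotations, one angle per (p, q) pair
    w = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
    idx = 0
    for p in range(d - 1):
        for q in range(p + 1, d):
            w = matmul(w, givens(d, p, q, angles[idx]))
            idx += 1
    return w

w = separation_matrix(3, [0.3, -1.2, 0.7])       # 3 angles for a 3x3 orthogonal W
check = matmul(w, [list(r) for r in zip(*w)])    # W W^T should be ~ I
```

Because each factor is orthogonal, the product is automatically orthogonal, so the optimizer only has to search over D(D − 1)/2 angles instead of D² matrix entries.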
A fitness function can be constructed from the hybrid optimization objective function and the process above; the fitness value of the k-th quantum slime mould's position is calculated by this fitness function.
According to the fitness function, the fitness value of each quantum slime mould's position is calculated, and the fitness values are sorted. We then find the quantum positions with the largest and smallest fitness values in the population up to the current generation; these determine the global optimal quantum position and the worst quantum position, which can be mapped to the corresponding positions. The weight in equation (11) is calculated by equation (18) from a label sequence vector, obtained by sorting the individual labels of the quantum slime moulds in descending order of fitness, together with a uniform random number within [0, 1].
The discovery probability is calculated by equation (19).
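Equations (18) and (19) are not reproduced above. As a hedged sketch, the analogous quantities in the original SMA [10] are a rank-dependent weight and a tanh-shaped discovery probability, which would look roughly like:

```python
import math

def discovery_probability(fitness_k, best_fitness):
    # tanh of the fitness gap to the best individual, as in the original SMA [10];
    # the exact form of equation (19) may differ.
    return math.tanh(abs(fitness_k - best_fitness))

def sma_weight(rank, pop_size, fitness_k, best_f, worst_f, r):
    # Rank-dependent weight in the style of the original SMA: the better half of
    # the population is boosted, the worse half damped. r is uniform in [0, 1];
    # the small constant avoids division by zero when all fitnesses coincide.
    span = (best_f - fitness_k) / (best_f - worst_f + 1e-12)
    if rank < pop_size / 2:
        return 1.0 + r * math.log10(span + 1.0)
    return 1.0 - r * math.log10(span + 1.0)
```

The best individual gets a weight of exactly 1 and a discovery probability of 0, so it is never perturbed away from its current solution, which is consistent with the greedy selection strategy described next.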
The newly generated quantum position is mapped to a position, and its fitness value is calculated. The quantum positions are then selected by the greedy selection strategy: if the new position's fitness exceeds the old one's, the quantum position and position are replaced. After greedy selection, the quantum positions are sorted by fitness to find those with the largest and smallest fitness values, and these are recorded as the updated global optimal quantum position and the updated worst quantum position, respectively.
According to the introduction and analysis above, QSMA-BSS in impulse noise is described as follows:
Step 1. The observation signals are received, and the proposed filtering system is constructed to filter them.
Step 2. The filtered observation signals are preprocessed, and the hybrid optimization objective function is constructed.
Step 3. The quantum slime moulds' fitness function is constructed and evaluated; the global optimal quantum position and the worst quantum position are determined.
Step 4. Each quantum slime mould's quantum rotation angle is updated by equation (11), and its quantum position is updated by the simulated quantum rotation gate according to equation (14).
Step 5. The fitness values of the updated positions are calculated and evaluated with greedy selection to renew the global optimal quantum position and the worst quantum position.
Step 6. If the maximum number of iterations is reached, output the global optimal position and its corresponding separation matrix, from which the separated signals are obtained; otherwise, increment the iteration counter and return to Step 4.
3.3. Computational Complexity of QSMA
In each iteration of QSMA, every quantum position must be mapped to a position, with computational complexity O(Nd), where N represents the population size and d the dimension of each quantum position. The quantum rotation angle of each quantum position is calculated by equation (11) with complexity O(Nd), and each quantum position is updated by equation (14) with complexity O(Nd). Besides, the fitness value of each quantum slime mould is calculated, the individuals are updated according to the greedy selection strategy, and the global optimal quantum position is updated. The weight of each quantum slime mould is calculated by equation (18), and the discovery probability of each quantum slime mould by equation (19), each with complexity at most O(Nd).
Upon termination of QSMA after H iterations, the overall computational complexity is on the order of O(HNd), plus the cost of the H·N fitness evaluations.
3.4. Convergence Analysis of QSMA
We now analyze the convergence of QSMA as a continuous-space optimization algorithm. By using the oscillation operator and the greedy selection strategy of the quantum slime moulds, a better large population can be approximated from a finite population, and the convergence analysis can be performed from a probabilistic perspective. The oscillation operator, acting as a mutation operator, increases the population's diversity, while the greedy selection strategy retains the best quantum position found so far during evolution. Their combination enables a sequence of populations to search for the global optimum in a multidimensional space, and the oscillation equation is designed to ensure that the population converges to a neighborhood of the optimal point. Next, we prove that after a sufficient number of iterations, the population's probability density concentrates near the global optimum of the objective function.
Consider the continuous optimization problem shown in equation (17), where f is called the objective function, A denotes the feasible region, d is the dimension of the search space, and the variables range over the real numbers. Some common assumptions are formulated about f and A:
(i) f has finitely many global optimal points in the feasible region A, and the maximum is called f*
(ii) the objective function value is bounded between its maximum and minimum over the feasible region
(iii) the feasible region has a finite, positive d-dimensional Lebesgue measure
(iv) f satisfies a mild regularity condition near its optima
Based on the above conditions, a lemma is given in the following.
Lemma 1. For each iteration, denote by p_h the probability that the newly generated position falls into the near-optimal set. If the series formed by these probabilities diverges, the algorithm is convergent and obtains the global optimal solution, and this is irrelevant to the initialized population.
Proof. If the probability of the newly generated position falling into the near-optimal set at the h-th iteration is denoted p_h, the probability that this event does not occur at that iteration is 1 − p_h, and the probability that a position is updated through the iterations but never falls into the set is the product of these complements. Since the greedy selection strategy preserves the optimal position, once any position falls into the near-optimal set the best-so-far solution remains there. Consequently, the probability that the maximal objective function value at the h-th iteration is near-optimal is at least one minus that product.
When the iteration count in equation (21) tends to infinity, the product of the complement probabilities tends to zero precisely when their sum diverges; this is the statistical condition required.
Substituting (23) into (22), the probability of never reaching the near-optimal set tends to zero, since the corresponding series diverges.
Then the algorithm converges, and it is obvious that the convergence performance is independent of the initialized population.
During the oscillation process, the slime moulds are generated by two mechanisms to increase the diversity of the population, expressed by the oscillation equations (11) and (14). QSMA evolves toward the optimal solution by continuously updating the quantum positions, which can be linearly mapped to positions. The main generating equation of the quantum rotation angle is equation (11); consequently, we mainly analyze equations (11) and (14).
In the early stage, because of equation (12), the value range of the oscillation coefficient decreases from its initial interval and finally approaches zero, so the newly generated quantum position in each iteration constantly approaches the optimal quantum position of the previous iteration. Meanwhile, QSMA introduces a greedy selection strategy to retain the optimal solution, and the rotation characteristic of the quantum rotation gate makes the quantum positions evolve toward the optimal quantum position.
In the other case, dividing both sides of equation (25) by the same factor shows that equations (11) and (12) can be interpreted as the current quantum position being attracted by a quantum position composed of the selected individuals. The quantum position is constantly updated and can be linearly mapped to a position; the r-th dimension of a quantum position can be described as in [35]. The relevant random numbers are uniformly distributed between 0 and 1, and the effect of the quantum rotation gate operator is slight within the quantum interval [0, 1]; therefore, we can assume that the quantum rotation gate does not change the current probability distribution. It can then be considered that the quantum positions and their mapped positions are updated toward the attracting quantum position and position with a uniform random distribution: all dimensions are independent and identically distributed, following a uniform distribution whose probability density function is constant over the interval. Owing to the rotation characteristic of the quantum rotation gate, the quantum positions are led to evolve toward the optimal quantum position; meanwhile, the positions fall into the near-optimal set with a probability higher than that produced by the uniform random density alone. Thus, for positions generated by equations (11) and (12), the condition of Lemma 1 holds, and it can be concluded that QSMA converges to the optimal solution through these equations. What deserves attention is how the oscillation operator and the greedy selection strategy work: the oscillation operator expands the distribution and guides the solution toward the optimum, while the greedy selection strategy retains the optimal solution to avoid falling into local optima. This means that QSMA eventually converges to the global optimum, and its convergence performance is independent of the initial population.
3.5. Limitation Analysis
Like other BSS methods, although QSMA-BSS achieves superior performance, it is only suitable for BSS when the source signals are interfered by impulse noise; when the observation signals themselves are interfered by impulse noise, its performance decreases. Besides, when QSMA is applied to very high-dimensional optimization problems, its performance may also degrade.
4. Simulation Results
The simulation software is MATLAB R2018a. The hardware is an Intel(R) Core(TM) i5-10300H CPU (2.50 GHz) with 16 GB of RAM, running Microsoft Windows 10.
4.1. Implementation of the Proposed Method
In the experiment, three different basic communication signals are used as source signals: a binary signal, a low-frequency sinusoidal signal, and an amplitude-modulated signal. The three source signals are sampled at a frequency of 10 kHz, and the number of sampling points is 4,000. Strong impulse noise is added to the source signals, with the MSNR set to 10 dB. Figures 3(a)–3(f) show the complete process of BSS in impulse noise.

The waveform of the source signals is shown in Figure 3(a), and that of the noisy source signals in Figure 3(b); Figure 3(b) illustrates that the source signals are completely covered by strong impulse noise.
The hybrid system matrix A is set as .
Because we cannot obtain any information about the source signals, we have to observe the observation signals to determine the threshold. Their waveform, shown in Figure 3(c), demonstrates that the observation signals are completely covered by impulse noise, so the threshold cannot be determined by direct observation due to the large noise amplitude. Extensive experimentation shows that a randomly fixed threshold easily reduces the noise suppression ability or distorts the filtered signal waveform. Consequently, the observation signals must be filtered before the threshold is determined. The neighborhood window size of the moving average filter is set to 50.
The waveform of the MAF output is shown in Figure 3(d): the amplitude of the impulse noise is reduced by moving average filtering, so the threshold can be determined by equation (6).
The observation signals are then filtered through MAF-DPVT-MF. The waveform of the filtered observation signals, shown in Figure 3(e), demonstrates that the impulse noise is suppressed.
The DPVT tail coefficient c is set to 0.8, and the neighborhood window size of the median filter is set to 10; the remaining parameter settings follow the literature [27].
The negative entropy function coefficient is set to 48.
The parameter settings for the proposed QSMA are as follows. It is known from the convergence analysis of QSMA that the convergence performance is independent of the initialized population size, and the dimension is determined by the optimization problem. Consequently, the performance of QSMA is not limited by its parameter settings.
After many experiments, these parameter values are found to be suitable, so good simulation results can be achieved.
The separated signals are shown in Figure 3(f). The simulation result illustrates that the proposed method can achieve BSS when the source signals are interfered by strong impulse noise at a low MSNR; meanwhile, it can also recover the source signals covered by noise to a certain extent.
Next, instead of determining the threshold from the MAF-filtered signals, we delete the MAF and randomly set the threshold to a fixed value to observe the separation effect (Figure 4).

Figure 4 demonstrates that the method's performance deteriorates if the threshold is set to a fixed value: the fixed threshold may be too small for observation signals covered by noise, so that while the noise is suppressed the signal is suppressed as well, distorting the signal waveform.
4.2. Performance of QSMA Based on Hybrid Optimization Objective Function
To verify the superiority of the proposed method, nine other intelligent algorithms, namely GA, PSO, BFA, SMA, WOA, ABC, GWO, CSA, and BA, are used in the following simulations as comparisons.
The population size for all algorithms is set to 30, and the maximum number of iterations is set to 500. The simulation result is the mean of 100 runs. The other parameter settings for QSMA remain unchanged.
In PSO, the inertia weight decreases linearly from 0.9 to 0.1 over the course of the iterations; the two acceleration constants are both 2.0, and the maximum speed is 0.1 [7]. GA's crossover rate is 0.8 and its mutation rate 0.1, with the other settings from [8]. BFA's mutation rate is 0.5, with the other settings from [9]. SMA's parameter settings are taken from the original literature [10]. In ABC, onlooker bees make up 50% of the colony, employed bees the other 50%, and the number of scout bees is 1 [11]. In WOA, the constant defining the shape of the logarithmic spiral is 1, with the other settings from [12]. In GWO, the convergence coefficient decreases linearly from 2 to 0 over the course of the iterations [13]. In CSA, the flight step is 2 and the crow's perception probability is 0.1 [14]. In BA, the initial loudness is 1 and the initial pulse rate 0.5; the two constants governing these parameters are both 0.9, with the other settings from [15].
The characteristic exponent of the impulse noise is set to 0.9, and the MSNR is set to 10 dB.
Figure 5 demonstrates algorithm test results of the hybrid optimization objective function under different weight coefficients.

When the weight coefficient is chosen so that the objective function is constructed from the MK alone, the test results show that QSMA achieves the best convergence performance. As the weight shifts toward a mixture of the MK and the MNE, QSMA's convergence performance gradually deteriorates. When the objective function is constructed from the MNE alone, the convergence performance of the other algorithms gradually surpasses that of QSMA.
Figure 5 demonstrates that different algorithms favor different independence criteria. PSO and WOA show clearly better convergence performance when the objective function is designed based on the MNE, whereas QSMA and SMA show clearly better convergence performance when it is designed based on the MK. In addition, changing the independence criterion has little effect on the performance of some algorithms. In summary, designing a hybrid optimization objective function is worthwhile.
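The weighting described above can be sketched as a convex combination of the two independence criteria. The paper's exact MK and MNE definitions are not reproduced here, so the sketch uses the sample excess kurtosis magnitude and the classic log-cosh negentropy approximation as stand-ins; both choices are assumptions.

```python
import numpy as np

def kurtosis_measure(y):
    # Magnitude of sample excess kurtosis; a stand-in for the MK criterion.
    y = (y - y.mean()) / (y.std() + 1e-12)
    return abs(np.mean(y ** 4) - 3.0)

def negentropy_measure(y):
    # Approximate negentropy via the log-cosh contrast function;
    # a stand-in for the MNE criterion.
    y = (y - y.mean()) / (y.std() + 1e-12)
    g = np.mean(np.log(np.cosh(y)))
    g_gauss = 0.374567  # E[log cosh(v)] for standard normal v
    return (g - g_gauss) ** 2

def hybrid_objective(y, a):
    # Weight coefficient a in [0, 1] interpolates between pure MNE
    # (a = 0) and pure MK (a = 1).
    return a * kurtosis_measure(y) + (1.0 - a) * negentropy_measure(y)
```

Under this sketch, a super-Gaussian (e.g. Laplacian) signal scores much higher than a Gaussian one for any weight, which is the property an independence criterion exploits.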
4.3. Performance of QSMA-BSS Based on Hybrid Optimization Objective Function
The crosstalk error $E$ is used as the performance evaluation index and is calculated by the following formula:

$$E=\sum_{i=1}^{n}\left(\sum_{j=1}^{n}\frac{|c_{ij}|}{\max_{k}|c_{ik}|}-1\right)+\sum_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{|c_{ij}|}{\max_{k}|c_{kj}|}-1\right),$$

where $\mathbf{C}=\mathbf{W}\mathbf{A}$ denotes the $n\times n$ system matrix and $c_{ij}$ denotes the element in the $i$-th row and $j$-th column of $\mathbf{C}$. $\max_{k}|c_{ik}|$ is the largest absolute element in the $i$-th row of $\mathbf{C}$, and $\max_{k}|c_{kj}|$ is the largest absolute element in the $j$-th column. The separation accuracy of the signal increases as $E$ decreases.
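The crosstalk error can be computed directly from the system matrix; a minimal sketch, assuming a square system matrix:

```python
import numpy as np

def crosstalk_error(C):
    # Crosstalk error (performance index) of the system matrix C = W @ A.
    # Both terms vanish exactly when C is a scaled permutation matrix,
    # i.e. when separation is perfect up to scaling and ordering.
    C = np.abs(np.asarray(C, dtype=float))
    row_term = np.sum(C / C.max(axis=1, keepdims=True)) - C.shape[0]
    col_term = np.sum(C / C.max(axis=0, keepdims=True)) - C.shape[1]
    return row_term + col_term
```

A scaled permutation matrix yields zero error, while any residual mixing contributes positively to both terms.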
The population size for all algorithms is set to 30, and the maximum number of iterations is set to 40. Each experiment is run 100 times at the same MSNR, and the reported result is the mean of the 100 runs. The other parameter settings for all algorithms remain unchanged.
The BSS methods based on nine algorithms, named PSO-BSS, SMA-BSS, GA-BSS, BFA-BSS, WOA-BSS, ABC-BSS, GWO-BSS, CSA-BSS, and BA-BSS, are shown in Figure 6.

Figure 6 demonstrates the test results of the hybrid optimization objective function under different weight coefficients; the experiments are conducted under both strong and weak impulse noise environments. When the weight coefficient emphasizes the MK, Figure 6 shows that QSMA-BSS achieves higher accuracy than the other algorithms; when it emphasizes the MNE, the performance of QSMA-BSS worsens.
First, the proposed method achieves BSS under both strong and weak impulse noise. Moreover, the simulation results show that the average crosstalk error is still less than 0.3 when the MSNR is 0 dB, indicating that the method still performs well at low MSNR. Finally, besides QSMA, the other algorithms can also be applied within the framework proposed in this paper.
The BSS method based on MK and QSMA is named QSMA-MK-BSS. The BSS method based on MNE and PSO is named PSO-MNE-BSS. The other comparison methods are named ABC-MNE-BSS, WOA-MNE-BSS, GWO-MNE-BSS, CSA-MNE-BSS, and BA-MNE-BSS accordingly (Figure 7).

Figure 7(a) shows that QSMA-MK-BSS still outperforms the other algorithms whose objective functions are designed based on the MNE. Consequently, the proposed QSMA remains significant and useful. Meanwhile, designing a hybrid optimization objective function is necessary for BSS: it allows the optimal independence criterion to be chosen for each algorithm, with the objective function then designed around that criterion. Figure 7(b) shows that as the coefficient a increases, the performance of QSMA-BSS keeps improving. In summary, the optimal independence criterion for QSMA is the MK, and higher precision is achieved when the objective function is designed based on this criterion.
Fractional low-order moments (FLOM) were introduced into FAST-ICA in [23]; analogously, we introduce fractional low-order covariance (FLOC) [36] into FAST-ICA. These two methods, named FLOM-FAST-ICA and FLOC-FAST-ICA, serve as comparisons to show the superiority of the proposed method (Figure 8).
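The two fractional low-order statistics can be sketched as follows. The moment order p and the covariance orders a, b are free parameters; the requirement that they stay below the noise's characteristic exponent is the standard condition for finiteness under alpha-stable noise, and the particular defaults here are assumptions.

```python
import numpy as np

def flom(x, p=0.5):
    # Fractional low-order moment E[|x|^p]; finite for alpha-stable
    # noise when p < alpha, unlike the ordinary second moment.
    return np.mean(np.abs(x) ** p)

def floc(x, y, a=0.5, b=0.5):
    # Fractional low-order covariance built from the signed-power
    # transform z^<p> = |z|^p * sign(z); choose a + b < alpha.
    xa = np.abs(x) ** a * np.sign(x)
    yb = np.abs(y) ** b * np.sign(y)
    return np.mean(xa * yb)
```

With a = b = 1 the signed-power transform is the identity, so FLOC reduces to the ordinary sample correlation, which is a convenient sanity check.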

To verify the superiority of the proposed method, the probability of successful signal separation is calculated: an experiment is counted as successful when its crosstalk error is less than 0.3. During the simulation, each experiment is run 100 times at the same MSNR.
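The success probability described above is simply the fraction of Monte Carlo runs whose crosstalk error falls below the threshold:

```python
import numpy as np

def success_probability(errors, threshold=0.3):
    # Fraction of runs whose crosstalk error is below the success
    # threshold (0.3 in the experiments described in the text).
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors < threshold))
```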
Figure 8 shows that the performance of the traditional methods deteriorates in strong impulse noise and low MSNR environments, whereas QSMA-BSS remains effective and robust.
5. Conclusions
In this paper, a filtering system named MAF-DPVT-MF is designed. MAF-DPVT-MF requires no prior knowledge of the source signals or the environmental noise and can effectively suppress impulse noise, thereby eliminating the influence of noise interference on BSS. A hybrid optimization objective function is designed based on two different independence criteria. A new intelligent algorithm named QSMA is then proposed to search, within the search range, for the optimal solution of the hybrid optimization objective function and its corresponding separation matrix. According to how the performance evaluation index changes with the weight coefficient, QSMA's optimal independence criterion is determined to obtain more accurate results. Simulation results illustrate that the designed method is robust and effective when the unknown source signals are interfered with by impulse noise at low MSNR, and its performance is superior to that of several traditional methods. Besides the algorithms mentioned in this paper, other representative computational intelligence algorithms could be applied to this problem, such as monarch butterfly optimization (MBO), earthworm optimization algorithm (EWA), elephant herding optimization (EHO), moth search (MS) algorithm, and Harris hawks optimization (HHO).
In future research, first, the proposed filtering system can be applied to other engineering problems requiring noise suppression. Second, the proposed QSMA can also be used to solve other optimization problems. Finally, the work in this paper can be considered a stepping stone to more complicated future research in BSS, such as underdetermined BSS in impulse noise environments.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61571149), the Natural Science Foundation of Heilongjiang Province (no. LH2020F017), and the Initiation Fund for Postdoctoral Research in Heilongjiang Province (no. LBH-Q19098).