Abstract

The smoothed $\ell_0$ norm algorithm is a reconstruction algorithm for compressive sensing based on an approximate smoothed $\ell_0$ norm. It introduces a sequence of smoothed functions to approximate the $\ell_0$ norm and approaches the solution through a specific iteration process based on the steepest ascent method. In order to choose an appropriate sequence of smoothed functions and solve the optimization problem effectively, we employ an approximate hyperbolic tangent multiparameter function, whose steep transition near zero provides a close approximation to the $\ell_0$ norm. Simultaneously, we propose an algorithm based on minimizing a reweighted approximate $\ell_0$ norm in the null space of the measurement matrix. The unconstrained optimization involved is performed by a modified quasi-Newton algorithm. Numerical simulation results show that the proposed algorithms yield improved signal reconstruction quality and performance.

1. Introduction

Sparse representation of signals has been extensively studied for decades. The concept of signal sparsity and $\ell_1$-norm-based recovery techniques can be traced back to the work of Logan [1] in 1965, Santosa and Symes [2] in 1986, and Donoho and Stark [3] in 1989. It is generally agreed that the foundation of the current state of the art in compressive sensing (CS) theory was laid by Candès et al. [4], Donoho [5], and Candès and Tao [6] in 2006. Mathematically, under sparsity assumptions, one would want to recover a signal $x \in \mathbb{R}^N$, for example, the coefficient sequence of the signal in an appropriate basis, by solving the combinatorial optimization problem
$$\min_{x} \|x\|_0 \quad \text{subject to} \quad y = Ax, \quad (1)$$
where $y \in \mathbb{R}^M$ is the observed signal, $A \in \mathbb{R}^{M \times N}$ is the dictionary matrix, $x$ is the coefficient vector of the linear combination, and $\|x\|_0$ is the $\ell_0$ norm of the vector, indicating the number of its nonzero elements.

Exactly solving (1) is known to be NP-hard [5, 7]; thus a relaxed approach [8–10] was proposed to address this challenge. By replacing the nonconvex $\ell_0$ norm with the convex $\ell_1$ norm, the method finds the optimal solution using convex optimization [8, 11–13]. It was also shown [6, 14–17] that, for many problems, minimizing the $\ell_1$ norm is equivalent to minimizing the $\ell_0$ norm under certain conditions. However, experimental results in [13] raised serious questions about the equivalence of the two minimizations in practical problems. A sparse signal reconstruction (SSR) algorithm based on the optimization of a smoothed approximate $\ell_0$ norm is studied in [11], where simulation results are compared with those of several existing SSR algorithms in terms of reconstruction performance and computational complexity. These results favor the use of the approximate $\ell_0$ norm.

In this paper, we present a new signal reconstruction algorithm for CS, which is based on the minimization of a smoothed approximate $\ell_0$ norm but differs from previous algorithms in several respects. First, we use an approximate hyperbolic tangent function to approximate the $\ell_0$ norm, which is found to provide a better approximation than the standard Gaussian function. Second, the $\ell_0$ norm minimization is carried out in the null space of the measurement matrix, so that the measurement constraints are eliminated and the problem becomes unconstrained. Third, we use a reweighting technique in the null space of the measurement matrix to drive the algorithm toward the desired sparse solution faster. The rest of the paper is organized as follows. In Section 2, the smoothed $\ell_0$ norm algorithm is briefly presented. Section 3 describes signal reconstruction based on the reweighted approximate smoothed $\ell_0$ norm algorithm. Section 4 provides the experimental results. The paper is concluded in Section 5.

2. Smoothed $\ell_0$ Norm Algorithm (SL0)

The smoothed $\ell_0$ norm approach attempts to solve the problem in (1) by approximating the $\ell_0$ norm with a continuous function. Consider the continuous Gaussian function with parameter $\sigma$:
$$f_\sigma(x) = \exp\!\left(-\frac{x^2}{2\sigma^2}\right). \quad (2)$$

The parameter $\sigma$ determines the quality of the approximation: as $\sigma \to 0$, the family of functions $f_\sigma$ approaches the Kronecker delta function, that is, $\lim_{\sigma \to 0} f_\sigma(x) = 1$ for $x = 0$ and $\lim_{\sigma \to 0} f_\sigma(x) = 0$ for $x \neq 0$.

Define the continuous multivariate function
$$F_\sigma(x) = \sum_{i=1}^{N} f_\sigma(x_i). \quad (3)$$
It follows from (2) and (3) that $f_\sigma(x_i) \approx 1$ when $|x_i| \ll \sigma$ and $f_\sigma(x_i) \approx 0$ when $|x_i| \gg \sigma$, and hence
$$\lim_{\sigma \to 0} F_\sigma(x) = N - \|x\|_0. \quad (4)$$

Since the number of entries in $x$ is $N$ and the function $F_\sigma(x)$ is an indicator of the number of zero entries in $x$, the $\ell_0$ norm of the reconstructed vector is approximated by
$$\|x\|_0 \approx N - F_\sigma(x). \quad (5)$$

Substituting this approximation into (1) yields the problem
$$\max_{x} \; F_\sigma(x) \quad \text{subject to} \quad y = Ax. \quad (6)$$

The approach is then to solve the problem (6) for a decreasing sequence of $\sigma$'s. The underlying idea is to select a $\sigma$ that ensures that the initial solution lies in the region over which the approximation is well behaved (effectively convex), and then to gradually increase the accuracy of the approximation by decreasing $\sigma$, using each solution as the starting point for the next value of $\sigma$. The resulting smoothed $\ell_0$ algorithm is shown to be faster, with the possibility of recovering a sparser solution [11].

Algorithm 1. The final smoothed $\ell_0$ norm (SL0) algorithm is as follows.
(1) Initialization: choose an arbitrary solution $\hat{x}_0$ from the feasible set $\{x : y = Ax\}$, for example, the minimum $\ell_2$ norm solution of $y = Ax$, that is, $\hat{x}_0 = A^T (A A^T)^{-1} y$.
(2) Choose a decreasing sequence for $\sigma$, $[\sigma_1, \ldots, \sigma_J]$.
(3) For $j = 1, \ldots, J$, let $\sigma = \sigma_j$ and maximize the function in (6) using $L$ iterations of the steepest ascent algorithm:
(a) let $\delta = [x_1 e^{-x_1^2/2\sigma^2}, \ldots, x_N e^{-x_N^2/2\sigma^2}]^T$;
(b) let $x \leftarrow x - \mu\delta$ (where $\mu$ is a small positive constant); then project $x$ back onto the feasible set, $x \leftarrow x - A^T (A A^T)^{-1}(Ax - y)$.
(4) The final answer is $\hat{x} = \hat{x}_J$.
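For readers who wish to experiment with Algorithm 1, the following Python/NumPy sketch implements the SL0 iteration described above. It is a minimal illustration based on the standard SL0 formulation of [11]; the specific parameter values (the $\sigma$-decrease factor, the number of inner iterations $L$, and the step size $\mu$) are illustrative assumptions, not the values used in this paper.

```python
import numpy as np

def sl0(A, y, sigma_factor=0.5, sigma_min=1e-4, L=3, mu=2.0):
    """Minimal SL0 sketch: recover a sparse x from y = A x (Algorithm 1)."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)       # used for the feasibility projection
    x = A_pinv @ y                              # minimum l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))             # large initial sigma (common heuristic)
    while sigma > sigma_min:
        for _ in range(L):                      # L steepest-ascent steps on F_sigma
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta                  # ascent step (mu absorbs a sigma^2 factor)
            x = x - A_pinv @ (A @ x - y)        # project back onto {x : A x = y}
        sigma *= sigma_factor                   # shrink sigma and repeat
    return x
```

The projection step keeps every iterate feasible, so the algorithm moves along the constraint set while $\sigma$ is gradually decreased.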

3. Signal Reconstruction Based on the Approximation Function

3.1. Approximating the $\ell_0$ Norm with an Approximate Hyperbolic Tangent Sequence

The key to the SL0 algorithm is the selection of a high-quality smoothed continuous function to approximate the $\ell_0$ norm; the optimal value of the $\ell_0$ norm is then obtained by optimizing this approximate smoothed continuous function. In general, approximating the $\ell_0$ norm with the Gaussian function does not produce the desired result. To approximate the $\ell_0$ norm more effectively, we analyzed the inverse tangent function [18], the hyperbolic tangent function, and the approximate hyperbolic tangent sequence.

The inverse tangent function is given in (9), the hyperbolic tangent function in (10), and the approximate hyperbolic tangent function in (11), where $\sigma$ is a parameter and $x_i$ is a component of the vector $x$. The following properties can be obtained.

(1) For formula (11), let $\sigma \to 0$; then $f_\sigma(x_i) \to 1$ for $x_i \neq 0$ and $f_\sigma(x_i) \to 0$ for $x_i = 0$. According to the definition of the $\ell_0$ norm, as $\sigma$ goes to zero, the value of $\sum_i f_\sigma(x_i)$ tends to the number of nonzero elements of the vector $x$, that is, the $\ell_0$ norm of the vector $x$. Therefore, the $\ell_0$ norm can be represented by the smoothed sum $\sum_i f_\sigma(x_i)$. Obviously, formulas (9) and (10) have the same $\ell_0$ norm approximation property as (11); the approximation performance of each formula, however, is different. To further illustrate the advantage of the approximate hyperbolic tangent function as the smoothed continuous function, we compared the behavior of each approximation function over the same interval for the same value of $\sigma$. Figure 1 compares the approximation quality of the smoothed $\ell_0$ norm for the standard Gaussian function, the inverse tangent function, the hyperbolic tangent function, and the approximate hyperbolic tangent function. The estimation of the $\ell_0$ norm by the approximate hyperbolic tangent function outperforms the other functions: it has a better "steep nature" near the origin, so its estimation of the $\ell_0$ norm is more accurate.
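The comparison in Figure 1 can be reproduced qualitatively with a short script. Since the displayed formulas (9)-(11) are not repeated here, the sketch below uses assumed stand-in forms (a Gaussian-based indicator, an arctangent, and a hyperbolic tangent, each driven by $x^2/(2\sigma^2)$) purely to illustrate how the steepness of the transition near zero governs the quality of the $\ell_0$ norm estimate; they are not the paper's exact definitions.

```python
import numpy as np
import matplotlib.pyplot as plt

sigma = 0.1                                   # smoothing parameter (illustrative value)
x = np.linspace(-1.0, 1.0, 1001)
u = x**2 / (2.0 * sigma**2)

# Assumed stand-in smoothing functions: each is ~0 at x = 0 and -> 1 for |x| >> sigma,
# so summing it over the components of a vector approximates the vector's l0 norm.
curves = {
    "Gaussian-based (SL0)": 1.0 - np.exp(-u),
    "inverse tangent":      (2.0 / np.pi) * np.arctan(u),
    "hyperbolic tangent":   np.tanh(u),
}

for name, f in curves.items():
    plt.plot(x, f, label=name)
plt.xlabel("x")
plt.ylabel("approximate indicator of x != 0")
plt.legend()
plt.show()
```

A steeper rise toward 1 just outside the origin corresponds to the "steep nature" discussed above; the paper's approximate hyperbolic tangent function (11) can be plotted in the same way once substituted for one of the stand-ins.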

(2) Given $\sigma$, the approximate hyperbolic tangent function has better convergence than the other approximation functions; that is, for arbitrary nonzero $x_i$, its value approaches 1 faster than those of (9) and (10).

Proof. The claim follows by comparing the values of (9), (10), and (11) componentwise, treating the cases $x_i = 0$ and $x_i \neq 0$ separately.

From (9), (10), and (11), it is clear that the smaller $\sigma$ is, the more local extreme values the objective function has, and the more difficult it is to obtain the global optimum of the objective function. The parameter $\sigma$ determines the smoothness of the objective function: the larger $\sigma$ is, the smoother the objective function, and the less accurate the estimate of the $\ell_0$ norm; conversely, the smaller $\sigma$ is, the better the approximation of the $\ell_0$ norm. In the optimization process, we therefore build a decreasing sequence of $\sigma$ values and optimize the corresponding objective function for each of them until $\sigma$ is small enough, so as to eliminate the influence of local extreme values and obtain the global optimum of the smoothed function. We thus use the approximate hyperbolic tangent sequence for the approximation of the $\ell_0$ norm.

Problem (1) can then be approximated by the following model:
$$\min_{x} \; \sum_{i=1}^{N} f_\sigma(x_i) \quad \text{subject to} \quad y = Ax, \quad (14)$$
where $f_\sigma$ is the approximate hyperbolic tangent function in (11).

(3) Many algorithms can be used to solve problem (14), among which the most representative is the steepest descent method. It has, however, two drawbacks:
(a) the "notched" (zigzag) effect in the search direction strongly hinders the convergence speed;
(b) the step size cannot be estimated reliably; it is usually chosen by experience, which lacks theoretical support.

We therefore propose a modified quasi-Newton algorithm to solve the above problem. First, the Newton direction of the approximate hyperbolic tangent objective is calculated from its gradient and Hessian. The Hessian matrix, however, is not guaranteed to be positive definite, so the Newton direction is not necessarily a descent direction, and the Hessian must be modified. A new matrix is therefore constructed by adding a set of suitably chosen positive numbers, times the identity matrix, to the Hessian so that all of the diagonal elements of the modified matrix are positive. With these modification coefficients, the modified matrix is positive definite, and the corresponding modified Newton direction in (23) is guaranteed to be a descent direction.
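The idea of the modification can be sketched as follows. Because the smoothed objective in (14) is separable, its Hessian is diagonal, and the modification amounts to forcing the diagonal entries to be positive (diagonal loading). The Python function below illustrates this; the tanh-type smoothing function and the loading rule used here are illustrative assumptions, not the exact formulas (15)-(23).

```python
import numpy as np

def modified_newton_direction(x, sigma, eps=1e-3):
    """Descent direction for F(x) = sum_i tanh(x_i^2 / (2 sigma^2)), using a
    diagonally loaded (strictly positive) Hessian.  The tanh smoothing and the
    loading rule are illustrative assumptions, not formulas (15)-(23)."""
    u = x**2 / (2.0 * sigma**2)
    s2 = 1.0 / np.cosh(u)**2                                      # sech^2(u)
    grad = s2 * x / sigma**2                                      # dF/dx_i
    hess = s2 / sigma**2 - (2.0 * x**2 / sigma**4) * s2 * np.tanh(u)  # d^2F/dx_i^2 (diagonal)
    hess = np.maximum(hess, 0.0) + eps                            # force a positive diagonal
    return -grad / hess                                           # modified Newton (descent) direction
```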

(4) The parameter $\sigma$ is usually chosen as a decreasing sequence, each value obtained from the previous one by a fixed factor; the remaining parameters are chosen as follows.

For faster convergence of the algorithm, the step-size parameter should satisfy a suitable bound. Choosing $\sigma$ small makes the smoothed function reflect the sparsity of the vector $x$ more faithfully; at the same time, however, it becomes more sensitive to noise, so the value of $\sigma$ should not be too small. In summary, the steps are as follows.

Algorithm 2. The approximate smoothed $\ell_0$ norm (ASL0) algorithm is as follows.
(1) Initialization: choose a feasible starting point $x_0$, compute the initial residual $r_0 = y - Ax_0$, and set the smoothed function's initial parameter $\sigma_0$.
(2) Calculate the modified Newton direction with (23) and use the modified Newton step to obtain the next iterate $x_{k+1}$.
(3) Use gradient projection to bring $x_{k+1}$ back onto the constraint set, and calculate the residual $r_{k+1} = y - Ax_{k+1}$.
(4) If the residuals of adjacent iterations meet the stopping tolerance, go to step (5); otherwise let $k = k + 1$ and return to step (2).
(5) If $\sigma$ is still larger than the prescribed minimum, further decrease it, $\sigma \leftarrow c\sigma$ with a fixed factor $0 < c < 1$, and return to step (2); otherwise stop iterating and output the optimal value $\hat{x}$.
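As a rough illustration of how the steps of Algorithm 2 fit together, the following sketch combines the modified Newton direction from the previous listing with a gradient-projection step and a geometrically decreasing $\sigma$. The step size, tolerances, and $\sigma$-decrease factor are assumptions made for the sketch, not the paper's settings.

```python
import numpy as np

def asl0(A, y, sigma_factor=0.5, sigma_min=1e-4, tol=1e-6, max_inner=20):
    """Sketch of the ASL0 loop: modified-Newton steps on the smoothed objective,
    projection back onto {x : A x = y}, and a decreasing sigma sequence."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)
    x = A_pinv @ y                                  # initial feasible point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(max_inner):
            x_old = x
            d = modified_newton_direction(x, sigma)  # from the previous sketch
            x = x + 0.5 * d                          # damped Newton step (step size assumed)
            x = x - A_pinv @ (A @ x - y)             # gradient projection onto the constraint
            if np.linalg.norm(x - x_old) < tol:      # inner stopping rule
                break
        sigma *= sigma_factor                        # decrease sigma and repeat
    return x
```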

3.2. Reweighting the Approximate $\ell_0$ Norm Minimization

It is well known that all solutions of $y = Ax$ can be parameterized as
$$x = x_s + V\xi, \quad (25)$$
where $x_s$ is a particular solution of $y = Ax$ and $V \in \mathbb{R}^{N \times (N-M)}$ is a matrix whose columns constitute an orthogonal basis of the null space of $A$ [19]. The vector $x_s$ and the matrix $V$ can be evaluated by using the singular-value decomposition (SVD) or the QR decomposition of the matrix $A$ [20]. Using (25), the problem in (14) is reduced to the unconstrained problem
$$\min_{\xi} \; \sum_{i=1}^{N} f_\sigma\big((x_s + V\xi)_i\big). \quad (26)$$

Signal reconstruction based on the solution of the problem in (26) works well, but the technique can be considerably enhanced by incorporating a reweighting strategy; the reweighted unconstrained problem can be formulated as follows [21]:
$$\min_{\xi} \; \sum_{i=1}^{N} w_i\, f_\sigma\big((x_s + V\xi)_i\big), \quad (28)$$
where the $w_i$ are positive scalars collected in a weight vector $w$. Starting with an initial weight vector $w^{(0)} = \mathbf{1}$, where $\mathbf{1}$ is the all-one vector of dimension $N$, in the $l$th iteration the weight vector is updated to $w^{(l)}$, with its $i$th component given by (29), where $x_i^{(l)}$ denotes the $i$th component of the vector $x^{(l)} = x_s + V\xi^{(l)}$ obtained in the $l$th iteration and $\varepsilon$ is a small positive scalar used to prevent numerical instability when $x_i^{(l)}$ approaches zero. As long as $\varepsilon > 0$, the objective function in (28) remains differentiable and its gradient with respect to $\xi$ can be obtained in closed form. For a fixed value of $\sigma$, the problem in (26) is solved by using a quasi-Newton algorithm in which an approximation of the inverse of the Hessian is obtained with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update formula. It can be shown that the function in (28) remains convex in the region where the largest magnitude of the components of $x$ is below a bound determined by $\sigma$, where the vector $x_s$ is chosen to be the least squares solution of $y = Ax$, namely, $x_s = A^T(AA^T)^{-1}y$. Based on this, a reasonable initial value of $\sigma$ can be chosen slightly larger than the largest component magnitude of $x_s$, with a small positive margin $\beta$. As the algorithm starts at the origin $\xi = 0$, this choice of $\sigma$ ensures that the optimization starts in a convex region, which greatly facilitates the convergence of the proposed algorithm.
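The following Python sketch shows how the null-space parameterization (25) and the reweighted smoothed objective can be set up in practice. The SVD-based construction of $x_s$ and $V$ follows the description above; the Gaussian-type smoothing function and the $1/(|x_i| + \varepsilon)$ weight update are illustrative assumptions standing in for (11), (28), and (29).

```python
import numpy as np

def nullspace_parameterization(A, y):
    """Return x_s and V such that every solution of A x = y is x_s + V @ xi.
    The columns of V form an orthonormal basis of the null space of A (via SVD)."""
    m, n = A.shape
    _, _, Vt = np.linalg.svd(A)
    x_s = np.linalg.pinv(A) @ y          # least-squares / minimum-norm solution
    V = Vt[m:, :].T                      # last n - m right singular vectors span null(A)
    return x_s, V

def reweighted_objective_and_grad(xi, x_s, V, w, sigma):
    """Weighted smoothed objective F(xi) = sum_i w_i * (1 - exp(-x_i^2 / (2 sigma^2)))
    and its gradient with respect to xi.  The Gaussian-type smoothing is an
    illustrative choice; the paper's function (11) can be substituted directly."""
    x = x_s + V @ xi
    g = np.exp(-x**2 / (2.0 * sigma**2))
    F = np.sum(w * (1.0 - g))                      # weighted approximate l0 value
    dF_dx = w * g * x / sigma**2                   # chain rule through 1 - exp(.)
    return F, V.T @ dF_dx                          # gradient with respect to xi

def update_weights(x, eps=1e-3):
    """Reweighting heuristic: small-magnitude components receive large weights."""
    return 1.0 / (np.abs(x) + eps)
```

Because $V$ has orthonormal columns, the gradient with respect to $\xi$ is simply $V^T$ times the gradient with respect to $x$.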

Algorithm 3. The reweighted approximate smoothed $\ell_0$ norm (RASL0) algorithm is as follows.
(1) Initialization: input the measurement matrix $A$, the measurement vector $y$, the initial parameter $\sigma_0$, the decrease factor, the stopping threshold, and $\varepsilon$; set $l = 0$, $\xi^{(0)} = 0$, $w^{(0)} = \mathbf{1}$, and compute $x_s$.
(2) Perform the decomposition of $A$ and construct $V$ from the last $N - M$ columns of the resulting orthogonal matrix.
(3) With $\sigma_l$ and $\xi^{(l)}$ as the initial point, apply the BFGS algorithm to solve the problem in (28), where reweighting with parameter $\varepsilon$ is applied using (29) in each iteration; denote the solution by $\xi^{(l+1)}$.
(4) Compute $x^{(l+1)} = x_s + V\xi^{(l+1)}$ and update the weight vector to $w^{(l+1)}$ with (29).
(5) If $\sigma_l$ has reached the prescribed minimum, stop and output $x^{(l+1)}$ as the solution; otherwise set $\sigma_{l+1} = c\sigma_l$, $l = l + 1$, and repeat from step (3).
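Putting the pieces together, a compact sketch of Algorithm 3 using SciPy's BFGS implementation might look as follows. It reuses the helper functions from the previous listing; the $\sigma$-decrease factor, the initial-$\sigma$ rule, and the stopping threshold are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def rasl0(A, y, sigma_factor=0.5, sigma_min=1e-4, eps=1e-3, beta=0.01):
    """Sketch of RASL0: reweighted smoothed-l0 minimization in null(A) via BFGS."""
    x_s, V = nullspace_parameterization(A, y)      # from the previous sketch
    xi = np.zeros(V.shape[1])                      # start at the origin of null-space coordinates
    sigma = np.max(np.abs(x_s)) + beta             # start just above the convexity bound (assumed rule)
    w = np.ones(A.shape[1])                        # initial weights: all ones
    while sigma > sigma_min:
        fun = lambda z: reweighted_objective_and_grad(z, x_s, V, w, sigma)
        res = minimize(fun, xi, jac=True, method="BFGS")   # unconstrained quasi-Newton step
        xi = res.x
        w = update_weights(x_s + V @ xi, eps)               # reweight with the new estimate
        sigma *= sigma_factor                                # shrink sigma and repeat
    return x_s + V @ xi
```

A typical call is x_hat = rasl0(A, y) for a measurement pair generated as in Section 4.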

4. Simulation Results

This section describes experiments evaluating the performance of the proposed algorithms. All experiments were performed under Windows 7 and MATLAB V8.0 (R2012a) running on a Lenovo workstation with an Intel Core i7 CPU at 2.40 GHz and 6 GB of memory.

Simulation 1: Performance Comparison of the Algorithms on One-Dimensional Signals. In the experiment, the signal length and the number of measurements were set to $N$ and $M$, respectively, and a total of 16 sparse signals with different sparsity levels $K$ were used. Each $K$-sparse signal was constructed as follows.
(1) Set $x$ to the zero vector of length $N$.
(2) Generate a vector of length $K$ whose components are random values drawn from a normal distribution $\mathcal{N}(0, 1)$.
(3) Randomly select $K$ indices from the set $\{1, 2, \ldots, N\}$ and place the generated values at those positions of $x$.
The measurement matrix $A$ was of size $M \times N$ and was generated by drawing its elements from $\mathcal{N}(0, 1)$, followed by a normalization step to ensure that the $\ell_2$ norm of each column is unity.
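The setup above can be reproduced with a few lines of NumPy; the values of N, M, and K below are placeholders, not the values used in the paper.

```python
import numpy as np

def make_sparse_problem(N=256, M=100, K=20, rng=np.random.default_rng(0)):
    """Generate a K-sparse signal and a column-normalized Gaussian measurement
    matrix as in Simulation 1.  N, M, K are placeholder values."""
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)     # K random positions
    x[support] = rng.standard_normal(K)                # N(0, 1) amplitudes on the support
    A = rng.standard_normal((M, N))                    # Gaussian measurement matrix
    A /= np.linalg.norm(A, axis=0, keepdims=True)      # unit l2 norm for every column
    y = A @ x                                          # noiseless measurements
    return A, x, y
```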

The performance of the iteratively reweighted (IR) algorithm [22, 23], the SL0 algorithm, the ASL0 algorithm, and the RASL0 algorithm was measured in terms of the number of perfect reconstructions over 100 runs. The results obtained are plotted in Figure 2. It can be observed that the RASL0 algorithm outperforms the IR algorithm. Comparing the SL0 and ASL0 algorithms, we note that the two are comparable for sparsity levels smaller than 45, but the RASL0 algorithm performs better for sparsity levels larger than 45.

The computational complexity of the SL0, IR, ASL0, and RASL0 algorithms was measured in terms of the average CPU time over 100 runs for typical problem instances in which the signal length varies between 128 and 512. From Figure 3 it can be seen that the RASL0 and SL0 algorithms are more efficient than the IR algorithm and that the complexity of the RASL0 algorithm is slightly higher than that of the SL0 algorithm. The moderate increase in the computational complexity of the RASL0 algorithm is primarily due to the fact that the objective function in (26) must be modified in each iteration using (29). Typically the RASL0 algorithm converges in a small number of iterations; Figure 4 shows how the objective function in (26) converges within 24 iterations for a representative parameter setting.

Simulation 2: Performance Comparison of the Algorithms on Two-Dimensional Signals. To examine the performance of the algorithms on different images, the reconstruction quality of each method is evaluated in terms of two indicators: the reconstruction relative error (RE) and the peak signal-to-noise ratio (PSNR). The relative error of a reconstructed image is defined as
$$\mathrm{RE} = \frac{\|\hat{x} - x\|_2}{\|x\|_2},$$
where $x$ and $\hat{x}$ are the original and the reconstructed image, respectively. Clearly, the lower the relative error, the better the reconstruction. The PSNR of a reconstructed image is defined by
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(\hat{x}(i,j) - x(i,j)\big)^2,$$
where $M$ and $N$ are the dimensions of the image.
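These two indicators are straightforward to compute; the sketch below uses the standard definitions written out above (8-bit images with a peak value of 255 are assumed).

```python
import numpy as np

def relative_error(x_orig, x_rec):
    """RE = ||x_rec - x_orig||_2 / ||x_orig||_2."""
    return np.linalg.norm(x_rec - x_orig) / np.linalg.norm(x_orig)

def psnr(img_orig, img_rec, peak=255.0):
    """PSNR in dB for an M x N image with the given peak value."""
    mse = np.mean((img_orig.astype(float) - img_rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```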

In the experiment, the DCT basis is selected as the sparse representation dictionary, and a fixed compression ratio is used. Figure 5 shows that the quality of the reconstruction with the RASL0 algorithm is improved and that the details of the images are better reconstructed.

Table 1 shows the results for three different images at the same compression ratio. From the comparison of the PSNR, the RE, the signal-to-noise ratio (SNR), and the matching degree (MD) among the RASL0, SL0, and ASL0 algorithms, it is clear that the relative error of the RASL0 algorithm is 0.02% smaller than that of the SL0 algorithm, the SNR is 3 dB higher, the PSNR is improved by 2 dB, and the matching degree is also improved.

Figure 6 compares the reconstruction of the Lena image obtained with different algorithms. It shows that RASL0 outperforms the OMP, GP, GPSR, and NSL0 algorithms. Table 2 gives the quality comparison of the different algorithms, indicating that the RASL0 algorithm is better in all respects.

5. Conclusion and Future Work

In this paper, we use the approximate hyperbolic tangent function to estimate the $\ell_0$ norm and design the RASL0 algorithm, which minimizes an approximate $\ell_0$ norm of the signal in the null space of the measurement matrix; a reweighting technique is used to promote the sparsity of the solution, and a quasi-Newton algorithm is used to accelerate the optimization. Simulation results demonstrate that the proposed algorithm yields improved signal reconstruction performance and requires a reduced amount of computation relative to iteratively reweighted algorithms based on the $\ell_p$ norm. When compared with a known algorithm based on a smoothed $\ell_0$ norm, improved signal reconstruction is achieved, although the amount of computation increases somewhat. How to choose the decreasing sequence of the parameter $\sigma$ so as to eliminate the influence of local minima on the computation of the global minimum will be our future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program) under Grant no. 2013CB329404, the Major Research Project of the National Natural Science Foundation of China under Grant no. 91230101, the National Natural Science Foundation of China under Grant no. 61075006, and the Key Project of the National Natural Science Foundation of China under Grant no. 11131006.