Abstract

We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis tracks the evolution of the fraction of unrecovered signal elements in each iteration, similarly to the well-known density evolution analysis of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: if the density factor is below this threshold, the recovery algorithm succeeds; otherwise it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing analyses of the peeling decoding algorithm, which focus on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance under different parameters of the measurement matrices and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.

1. Introduction

Compressed sensing (CS) is a novel signal sampling theory that exploits the sparsity of a signal [1, 2]. In the noiseless setting, the CS problem can be considered as estimating an original sparse signal $x \in \mathbb{R}^n$ from the linear measurement vector $y \in \mathbb{R}^m$, $m < n$; that is, $y = \Phi x$, where $\Phi \in \mathbb{R}^{m \times n}$ is referred to as the measurement matrix; this measurement process is also referred to as encoding. Generally, without additional information, it is impossible to recover $x$ from $y$ in the case $m < n$. The signal is called $k$-sparse if it has no more than $k$ nonzero entries. For a $k$-sparse signal and an appropriate measurement matrix $\Phi$, even when $m \ll n$, $y$ essentially contains enough information to recover the original signal $x$; this recovery process is also referred to as decoding.
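To make the encoding step concrete, here is a minimal Python sketch of the measurement model $y = \Phi x$ with a dense Gaussian matrix; the sizes $n = 20$, $m = 10$, $k = 3$ are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 20, 10, 3                      # illustrative sizes, not from the paper
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)      # a k-sparse signal

Phi = rng.standard_normal((m, n))        # dense Gaussian measurement matrix
y = Phi @ x                              # encoding: m linear measurements, m < n
```

With $m < n$ the system is underdetermined, which is why the sparsity of $x$ is what makes recovery possible.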

Candès et al. in [1, 2] used the Gaussian random matrix as the measurement matrix and $\ell_1$ norm minimization to recover the original signal. However, most elements of a Gaussian random matrix are nonzero, which requires a lot of storage space. Furthermore, these random matrices are often difficult or expensive to implement in hardware. Recently, many researchers have brought excellent sparse matrices from channel coding into the CS system [3–5]; compared with random measurement matrices, the sparse matrices reduce the storage space and are easy to implement in hardware. As a special class of sparse matrices, the parity-check matrices of Low-Density Parity-Check (LDPC) codes can be used as good measurement matrices for CS under $\ell_1$ norm minimization [3]. In [3], the authors pointed out that the linear-programming decoding of LDPC codes is very similar to the $\ell_1$ norm minimization of CS. Inspired by the work of [3], the authors of [4, 5] constructed a class of deterministic measurement matrices from finite geometry LDPC (FG-LDPC) codes. However, these works only focus on the construction of measurement matrices.

According to the class of measurement matrix, decoding algorithms can be sorted into algorithms for dense matrices and algorithms for sparse matrices. Generally, decoding algorithms based on sparse matrices have lower complexity than those associated with dense matrices. As shown in our analysis in Section 3, decoding with sparse matrices is faster than decoding with dense matrices. Since random dense matrices satisfy the well-known restricted isometry property (RIP) with high probability, the standard decoding algorithms for these matrices are based on greedy methods or convex programming [1, 2, 6, 7]. The RIP is a sufficient condition which guarantees unique and perfect recovery of sparse signals via $\ell_1$-minimization [2]. It has been shown that sparse matrices do not satisfy the RIP unless the number of measurements is substantially increased [8]. However, the decoding algorithms for sparse matrices exploit insights from coding theory to assist in sparse recovery. In fact, through the joint design of sparse measurement matrices and decoding algorithms, both the measurement cost and the decoding complexity can be reduced simultaneously.

Regarding the complexity of the recovery algorithm, $\ell_1$ norm minimization has a computational complexity of $O(n^3)$. To reduce this complexity, various low-complexity message-passing algorithms using sparse measurement matrices have been introduced for the reconstruction of sparse signals in CS [9–22]. In [9], the authors used a random sparse matrix as a new sparse measurement matrix and proposed an iterative algorithm of complexity $O(k \log k \cdot \log n)$, while it only required $O(k \log n)$ measurements. It was essentially the same as the idea of verification decoding of packet-based LDPC codes in [23] and was rediscovered by Zhang in [11]. These verification-based (VB) recovery algorithms in the context of CS have been analyzed rigorously via density evolution in [10, 11]. In [12], the VB algorithm was applied to wideband spectrum sensing, where the measurement matrix is designed based on a block sparse matrix. For nonnegative sparse signals, a new message-passing algorithm, called the Interval-Passing (IP) algorithm, was proposed in [13], which has a better performance than the VB algorithm in [11]. In [14], a combination of the IP algorithm and the VB algorithm was proposed for nonnegative sparse signals, which can perform better than either algorithm alone. Based on expander graphs of the sparse measurement matrix, Jafarpour et al. in [15, 16] proposed a different iterative algorithm with a complexity of $O(n \log(n/k))$ using $O(k \log(n/k))$ measurements in the noiseless case. It is worth noting that the above fast recovery algorithms are restricted to the noiseless case.

For the noisy case, in [17], the authors modeled the signal with Gaussian mixture priors and proposed a belief propagation (BP) algorithm. The main disadvantage of this approach is that the decoding complexity grows exponentially. In a similar work [18], the authors modeled the signal with Jeffreys' priors and applied well-known least-squares algorithms to recover the signal. However, in [18], the sparsity was assumed to be known.

To overcome the restrictions of the decoding algorithms discussed above, Pawar et al. in [19–22] designed a hybrid mix of the LDPC codes and Discrete Fourier Transform (DFT) framework (LDPC-DFT) and proposed a fast Short-and-Wide Iterative Fast Transform (SWIFT) peeling decoding recovery algorithm. It only needs $O(k)$ measurements and $O(k)$ iterative steps to achieve exact signal recovery in the noiseless case. In the presence of noise, both the number of measurements and the computational complexity for exact signal recovery are $O(k \log n)$. These frameworks have also been used in spectrum sensing in [24]; Hassanieh et al. in [24] presented a prime sampling technology that can capture GHz of spectrum in real time with low-speed analog-to-digital converters (ADCs); this prime sampling technology is essentially identical to the framework of [19–22, 25]. The frameworks of [19–22] have also been used in compressed sensing phase retrieval [26].

In this study, we are interested in the sparse LDPC-DFT measurement matrices associated with the SWIFT recovery algorithm [19–22], which can reduce both the measurement cost and the computational complexity simultaneously. The authors in [19] analyzed the successful recovery probability of the SWIFT algorithm by exploiting the connection between the CS system and a packet-communication system. Furthermore, in [22] the authors investigated the generalized family of LDPC-DFT measurement matrices in terms of the measurement cost, computational complexity, and recovery performance via a rough density evolution, where a local neighborhood of every edge in the graph should be cycle-free (tree-like). However, they only focused on reducing the number of measurements required for successful recovery, for a given signal length $n$ and sparsity $k$.

In this paper, we investigate the performance of the SWIFT recovery algorithm in the asymptotic case in order to estimate its performance for a given sparse measurement matrix in the finite case; the proposed approach gives an accurate evolution of performance under different parameters of the measurement matrices and is easy to implement. The analysis shows that there exists a threshold on the density factor $\delta$: if the density factor is below this threshold, the recovery algorithm succeeds; otherwise it fails. It is shown that the threshold depends on the parameters of the LDPC matrix, which provides a basis for the design of sparse measurement matrices.

2. A Class of Hybrid LDPC-DFT Measurement Matrix and Their Fast Recovery Algorithm

In this section, we consider a special class of hybrid mix of the parity-check matrix of binary LDPC codes and DFT matrix (referred to as “LDPC-DFT measurement matrix”) and their fast recovery algorithm. We firstly introduce the sparse parity-check matrix of LDPC codes (referred to as “LDPC-based measurement matrix”) and its bipartite graph.

2.1. LDPC-Based Measurement Matrix and Measurement Graph

In channel coding, it is well known that LDPC codes can be defined by the null space of a sparse parity-check matrix; the codes are classified as binary codes and nonbinary codes. In this paper, we only consider binary codes. A sparse parity-check matrix can be represented by a bipartite graph. In a bipartite graph, the two sets of nodes correspond to the code symbols and the parity-check equations; they are labeled as “variable nodes” and “check nodes,” respectively. There is an edge between variable node $v_i$ and check node $c_j$ if and only if $v_i$ appears in the $j$th parity-check equation.

In CS, when the measurement matrix is a sparse parity-check matrix, the measurement also has its own bipartite graph (referred to as the “measurement graph”). We denote this measurement graph by $\mathcal{G} = (V, C, E)$, where the set of variable nodes $V$ represents the original signal, the set of check nodes $C$ represents the measurement observations, and $E$ is the set of edges. The number of edges associated with a node (in $V$ or $C$) is called the node degree. In this paper, we consider the case in which every variable node (check node) has the same degree $d_v$ ($d_c$), which is referred to as a regular degree $(d_v, d_c)$. Additionally, we denote by $\mathcal{N}(v)$ the set of all check nodes incident to variable node $v$. Similarly, $\mathcal{N}(c)$ denotes the set of all variable nodes incident to check node $c$. In particular, due to the mathematical connection between channel coding and CS, we can analyze the fast recovery algorithm over the measurement graph. Hereinafter, we refer to the recovery process as decoding.
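The neighbor sets of variable and check nodes can be read directly off the parity-check matrix. A small Python sketch with a hand-made $(d_v, d_c) = (2, 4)$ regular matrix, chosen only for illustration:

```python
import numpy as np

# a (dv, dc) = (2, 4) regular parity-check matrix, for illustration only
H = np.array([[1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 1, 1]])

# N(v): check nodes incident to variable node v
var_neighbors = {v: set(np.flatnonzero(H[:, v])) for v in range(H.shape[1])}
# N(c): variable nodes incident to check node c
check_neighbors = {c: set(np.flatnonzero(H[c, :])) for c in range(H.shape[0])}
```

Each column has exactly $d_v = 2$ ones and each row exactly $d_c = 4$ ones, matching the regular-degree assumption above.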

2.2. Decoding Algorithm Based on Sparse Measurement Graph

We first introduce the principle of the decoding algorithm based on the sparse measurement graph through an example. Consider a sparse signal of length $n$ with four nonzero elements, and let the number of measurements be $m$. According to the definition of the measurement graph, we obtain the sparse measurement graph shown in Figure 1.

In Figure 1, it is shown that the check nodes can be classified into three types.

Zero-Ton. A check node that does not contain any nonzero element of the signal, for example, nodes 1, 3, 4, and 9 on the right side of Figure 1.

Single-Ton. A check node that contains exactly one nonzero element of the signal, for example, nodes 5, 6, and 7 on the right side of Figure 1.

Multiton. A check node that contains more than one nonzero element of the signal, for example, nodes 2 and 8 on the right side of Figure 1.

Assume that there exists an “oracle” that can decide exactly which check nodes are zero-tons, single-tons, and multitons, and that can further decode the corresponding value and index of the variable node incident to each single-ton. In the above example, the “oracle” decides that nodes 5, 6, and 7 are single-tons and decodes the value and index of variable nodes 1, 13, and 8, respectively. Then, the “oracle” subtracts these variable nodes' contributions from the other check nodes, forming new single-tons; for example, in Figure 1, after check node 8 subtracts the contribution of its decoded variable node, check node 8 becomes a new single-ton.

Therefore, with the information of the single-tons, the “oracle” repeats the following steps:
(1) Select all the edges in the sparse measurement graph incident to single-ton check nodes.
(2) Remove these edges and the corresponding variable and check nodes connected to these edges.
(3) Remove the other edges adjacent to the variable nodes that were removed in step (2).
(4) Subtract the values of the removed variable nodes from the check nodes attached to the edges removed in step (3).

When all the edges have been removed, the decoding is successful. This decoding algorithm is called “peeling decoding” over the traditional erasure channel [23, 27].
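The oracle peeling steps (1)–(4) can be sketched as follows; the oracle is emulated by giving each check node its residual observation sum and the set of its still-unresolved nonzero neighbors (function and variable names are ours, chosen for illustration):

```python
import numpy as np

def oracle_peel(H, x, max_iter=50):
    """Peeling decoder with an 'oracle': each check node knows the sum of its
    unresolved nonzero neighbors and which variables they are (steps (1)-(4))."""
    m, n = H.shape
    x_hat = np.zeros(n)
    resid = H @ x  # residual observation of every check node
    # oracle knowledge: unresolved nonzero neighbors of each check node
    unresolved = [set(np.flatnonzero(H[c] * (x != 0))) for c in range(m)]
    for _ in range(max_iter):
        singles = [c for c in range(m) if len(unresolved[c]) == 1]
        if not singles:
            break
        for c in singles:
            if len(unresolved[c]) != 1:
                continue  # already resolved by an earlier subtraction
            v = next(iter(unresolved[c]))
            x_hat[v] = resid[c]                  # decode value and index
            for c2 in np.flatnonzero(H[:, v]):   # peel v from all its checks
                resid[c2] -= x_hat[v]
                unresolved[c2].discard(v)
    return x_hat
```

When the loop exits with every `unresolved` set empty, all edges have been peeled and decoding has succeeded, mirroring the stopping condition above.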

Luby et al. in [23, 27] analyzed the performance of the peeling decoding algorithm; it has been shown that the peeling decoding algorithm terminates successfully with high probability under a good variable degree distribution. However, in the decoding algorithm, the “oracle” must (1) decide the type of the check nodes and (2) exactly estimate the value and index of the variable node incident to a single-ton. In order to get rid of the oracle, Pawar and Ramchandran introduced the DFT matrix to decode the single-tons in [19]; they also proved that the corresponding SWIFT algorithm could exactly recover the original signal with probability approaching one in the noiseless setting.

2.3. Measurement Matrix Based on LDPC-DFT

Denote by $D$ the first two rows of the $n \times n$ DFT matrix; that is,
$$D = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & W & \cdots & W^{n-1} \end{pmatrix}, \quad W = e^{-i 2\pi/n}.$$
Since the matrix $D$ can decide the type of the check nodes and decode the value as well as the index of the corresponding variable nodes, the matrix $D$ is also referred to as the “detection matrix.” We design the required measurement matrix via the following definition.

Definition 1. Let $H$ be an $m \times n$ sparse parity-check matrix of an LDPC code, denote by $h_i$ the $i$th row of $H$ ($1 \le i \le m$), and let $D$ be a detection matrix with rows $d_1, d_2$. Then the measurement matrix $\Phi \in \mathbb{C}^{2m \times n}$ is designed by
$$\Phi = H \otimes_{\mathrm{r}} D = \begin{pmatrix} h_1 \circ d_1 \\ h_1 \circ d_2 \\ \vdots \\ h_m \circ d_1 \\ h_m \circ d_2 \end{pmatrix},$$
where $\otimes_{\mathrm{r}}$ is the row-tensor product, defined row by row through the standard Kronecker product $\otimes$, and $\circ$ is the element-wise product of rows.

From this definition, the value of every check node (observation) is a 2-length vector; that is, the $i$th check node observes $y_i = (y_i[0], y_i[1])^T = \sum_{\ell \in \mathcal{N}(c_i)} x_\ell (1, W^\ell)^T$, as also shown in Figure 2.
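A small Python sketch of this construction, using the convention $W = e^{-2\pi i/n}$ for the DFT (our assumption) and an illustrative binary matrix $H$ not taken from the paper:

```python
import numpy as np

n = 8
W = np.exp(-2j * np.pi / n)                      # DFT convention assumed here
D = np.vstack([np.ones(n), W ** np.arange(n)])   # first two rows of the n-point DFT matrix

# a small binary LDPC-style matrix H (illustrative, not from the paper)
H = np.array([[1, 0, 0, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 1, 0]], dtype=float)

# row-tensor product: every row of H is element-wise multiplied with both rows of D,
# so each check node contributes two measurement rows (a 2-length observation)
Phi = np.vstack([H[i] * D[r] for i in range(H.shape[0]) for r in range(2)])

x = np.zeros(n)
x[3] = 1.7            # single nonzero: every check containing index 3 is a single-ton
y = Phi @ x           # y[2*i], y[2*i+1] form the observation of check node i
```

For a single-ton on index $\ell$, the two observation entries are $x_\ell$ and $x_\ell W^\ell$, so their ratio reveals the index and the first entry reveals the value.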

We use the following criterions to decide the type of check nodes and estimate the value and index of corresponding variable nodes.

Zero-Ton. The $i$th check node is a zero-ton if $y_i = \mathbf{0}$, that is, $|y_i[0]| = |y_i[1]| = 0$.

Single-Ton. The $i$th check node is a single-ton if $|y_i[0]| = |y_i[1]| \neq 0$ and $y_i[1]/y_i[0] = W^{\ell}$ for some integer $0 \le \ell \le n-1$; its neighbor indexed by $\ell$ is nonzero with value given by $y_i[0]$.

Multiton. The $i$th check node is a multiton if $|y_i[0]| \neq |y_i[1]|$ or $y_i[1]/y_i[0] \neq W^{\ell}$ for every integer $0 \le \ell \le n-1$.
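The three criteria can be packaged into a small classifier, a sketch of the single-ton test; the function name, the tolerance, and the convention $W = e^{-2\pi i/n}$ are our assumptions:

```python
import numpy as np

def classify_check(y_i, n, tol=1e-9):
    """Classify a 2-length observation as 'zero-ton', 'single-ton' (with value
    and index), or 'multiton', following the three criteria above."""
    a, b = y_i
    if abs(a) < tol and abs(b) < tol:
        return ("zero-ton", None, None)
    if abs(abs(a) - abs(b)) < tol:
        # for a single-ton, b/a = W**l with W = exp(-2j*pi/n), so the phase
        # of b/a must correspond to an integer index l
        l = (-np.angle(b / a) * n / (2 * np.pi)) % n
        if abs(l - round(l)) < 1e-6:
            return ("single-ton", a, int(round(l)) % n)
    return ("multiton", None, None)
```

In the noiseless setting the tolerances could be zero; finite tolerances are included only so the sketch is robust to floating-point arithmetic.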

The VB algorithm can only claim successful decoding when the nonzero elements of the signal follow a continuous distribution [10, 11]. However, the above criterions hold with probability one whether the nonzero elements of the signal obey a continuous distribution or not. Hence, the measurement matrix based on LDPC-DFT is applicable to a wider range of signal types.

Theorem 2. Whether the nonzero elements of the original signal obey a continuous distribution or take values from a finite discrete set, the above criterions of zero-ton, single-ton, and multiton hold with probability one.

Proof. Firstly, we prove the case of the nonzero elements obeying a continuous distribution. Here we only prove the zero-ton criterion, which extends to the other types with no major changes. If the nonzero elements of the original signal obey a continuous distribution, then for an arbitrary nonzero element $x_\ell$ we have $\Pr(x_\ell = 0) = 0$. For the sparse measurement graph, the observation of the $i$th check node is $y_i = \sum_{\ell \in \mathcal{N}(c_i)} x_\ell (1, W^\ell)^T$; a nontrivial sum of continuously distributed values equals the zero vector with probability zero. Hence $y_i = \mathbf{0}$ if and only if all the variable nodes in $\mathcal{N}(c_i)$ are zero, and the decoder can decide the type of the check node as a zero-ton via the definition of zero-ton.
Then, we consider the case of the nonzero elements of the original signal taking values from a finite discrete set. Similarly, we only prove the zero-ton criterion. Consider the worst case, in which the values of the nonzero neighbors cancel in the first coordinate, that is, $\sum_{\ell} x_\ell = 0$. However, $y_i$ is a 2-length complex vector; $y_i = \mathbf{0}$ requires not only $\sum_{\ell} x_\ell = 0$ but also $\sum_{\ell} x_\ell W^\ell = 0$. Due to the distinct phases $W^\ell$ induced by the sparse measurement matrix, both equations hold simultaneously if and only if $x_\ell = 0$ for all $\ell \in \mathcal{N}(c_i)$. Hence, the decoder can decide the type of the check node as a zero-ton via the definition of zero-ton.

In the analysis of the framework and in the simulations, we adopt the following way to generate the original signal. Let $\mathcal{K}$ be the set of nonzero elements of the original signal and let $\delta$ be the density factor, that is, the probability that a signal element belongs to $\mathcal{K}$. For a given $\delta$, every element of the original signal takes the value 0 with probability $1 - \delta$ or a value drawn from the continuous distribution (or the discrete set) with probability $\delta$. In this sense, the sparsity $k$ and the density ratio $k/n$ are random variables. Furthermore, $E[k] = n\delta$ and $E[k/n] = \delta$, where $E[\cdot]$ denotes the expected value.
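The signal model just described, each entry nonzero independently with probability $\delta$, can be generated as follows (a Python sketch; the names are ours):

```python
import numpy as np

def generate_signal(n, delta, rng, discrete_set=None):
    """Each element is 0 with probability 1 - delta, otherwise drawn from
    N(0, 1) or from a given finite discrete set; the realized sparsity k is
    therefore random with E[k] = n * delta."""
    mask = rng.random(n) < delta
    if discrete_set is None:
        values = rng.standard_normal(n)
    else:
        values = rng.choice(discrete_set, size=n)
    return np.where(mask, values, 0.0)
```

For large $n$, the realized density $k/n$ concentrates around $\delta$, which is the regime the asymptotic analysis of Section 3 operates in.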

2.4. Decoding Algorithm

In the above subsections, we described an iterative decoding algorithm that decodes the sparse signal with the help of an “oracle” implemented via the LDPC-DFT measurement matrix and the single-ton criterions. In this subsection, the iterative decoding algorithm is presented using the steps described in Section 2.2 (see the pseudocode in Algorithms 1 and 2).

(1) Input: observation vector $y$, length of original signal $n$, measurement matrix $\Phi$.
(2) Output: estimation of original signal $\hat{x}$.
(3) Initialization: $\hat{x} = \mathbf{0}$.
(4) for each iteration do
(5)   for $i = 1$ to $m$ do
(6)     denote by $y_i$ the observation vector of the $i$th check node
(7)     if $\|y_i\|^2 = 0$ then
(8)       $c_i$ is a zero-ton.
(9)     else
(10)      (flag, value, location) = SingletonEstimator($y_i$)
(11)      if flag then
(12)        $y = y - \text{value} \cdot \Phi(:, \text{location})$, $\hat{x}(\text{location}) = \text{value}$
(13)      end if
(14)    end if
(15)  end for
(16) end for

(1) Input: observation vector $y_i$.
(2) Output: a boolean flag “single-ton”, estimation of signal: value and location.
(3) Initialization: flag = “false”.
(4) if $|y_i[0]| = |y_i[1]| \neq 0$ and $y_i[1]/y_i[0] = W^{\ell}$ for some integer $\ell$ then
(5)   flag = “true”
(6)   value = $y_i[0]$
(7)   location = $\ell$
(8) end if
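Algorithms 1 and 2 can be combined into a compact noiseless implementation. A Python sketch, with all identifiers ours and the DFT convention $W = e^{-2\pi i/n}$ assumed:

```python
import numpy as np

def swift_decode(y, H, n, max_iter=30, tol=1e-9):
    """Noiseless SWIFT-style peeling over LDPC-DFT measurements: y stacks one
    2-length observation per check node (row-major), H is the binary matrix."""
    W = np.exp(-2j * np.pi / n)
    m = H.shape[0]
    x_hat = np.zeros(n, dtype=complex)
    y = np.asarray(y, dtype=complex).reshape(m, 2).copy()
    for _ in range(max_iter):
        progress = False
        for i in range(m):
            a, b = y[i]
            if abs(a) < tol:                     # zero-ton (or nothing left to peel)
                continue
            if abs(abs(a) - abs(b)) > tol:       # multiton
                continue
            loc = (-np.angle(b / a) * n / (2 * np.pi)) % n
            if abs(loc - round(loc)) > 1e-6:     # phase not an integer index: multiton
                continue
            loc = int(round(loc)) % n
            x_hat[loc] = a                       # single-ton: decode value and index
            for c in np.flatnonzero(H[:, loc]):  # peel from every incident check
                y[c] -= a * np.array([1.0, W ** loc])
            progress = True
        if not progress:
            break
    return x_hat
```

The outer loop stops as soon as a full pass over the check nodes peels nothing, which corresponds either to success (all observations zero) or to a stuck set of multitons.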

Following the discussions in [19], it has been proven via the corresponding Theorem 2 in [27] that the SWIFT decoding algorithm recovers the original signal with high probability. In this paper, we propose a general framework for the asymptotic analysis of the SWIFT algorithm. Using the asymptotic analysis, we can track the evolution of the density factor, and the corresponding threshold can be determined for different regular degrees $(d_v, d_c)$.

3. Asymptotic Analysis of Decoding Algorithm

3.1. Analysis Model

In this paper, we only consider regular measurement bipartite graphs. However, the results can be generalized to the irregular case.

Consider a sparse bipartite graph $\mathcal{G} = (\mathcal{V}, \mathcal{C}, \mathcal{E})$, where $\mathcal{V}$ is the subset of variable nodes corresponding to the nonzero elements of the signal, $\mathcal{C}$ is the set of check nodes, and $\mathcal{E}$ is the set of corresponding edges. Since the locations of the nonzero elements in the signal are random, the graph is left-regular, as shown in Figure 1. In the SWIFT decoding algorithm, the criterions employed by the peeling decoding algorithm depend on the degrees of the check nodes. To describe the asymptotic analysis, we classify the check nodes into sets $\mathcal{C}_j$, $0 \le j \le d_c$, where $\mathcal{C}_j$ is the set of check nodes having degree $j$. Furthermore, the set of nonzero elements of the signal is classified into sets $\mathcal{V}_j$, $0 \le j \le d_v$, where $\mathcal{V}_j$ is the set of variable nodes having $j$ edges connected to the set $\mathcal{C}_1$.

Based on the above classification and the criterions employed in Section 2, in each iteration of the SWIFT algorithm, a variable node can be decoded successfully if the following condition holds.

Theorem 3. In each iteration, a variable node can be decoded successfully with probability almost 1 if and only if it belongs to $\mathcal{V}_j$ with $j \geq 1$, that is, if it is connected to at least one single-ton check node.

Proof.
Sufficiency. When $j = 1$, from the definition of single-ton, it is trivial that the variable node can be decoded successfully with probability almost 1. When $j > 1$, that is, when the variable node is connected to more than one single-ton, since the decoder can decode the corresponding variable node from each single-ton with probability almost 1, the values decoded from these single-tons must be identical. Hence, this case reduces to the case $j = 1$.
Necessity. For the SWIFT algorithm, when a variable node $v$ is decoded successfully, there must exist at least one single-ton check node connected to $v$; that is, $v \in \mathcal{V}_j$ with $j \geq 1$.

We remodel the sparse measurement graph via the definitions of $\mathcal{C}_j$ and $\mathcal{V}_j$ and Theorem 3, which allows us to analyze the evolution of these sets. Let $\lambda_j^{(\ell)}$ denote the probability that a check node belongs to $\mathcal{C}_j$ and let $q_j^{(\ell)}$ denote the probability that a variable node belongs to $\mathcal{V}_j$, where $\ell$ is the iteration number. Furthermore, we denote by $p^{(\ell)}$ the probability that a variable node remains undecoded. Given $\lambda_j^{(\ell)}$, $q_j^{(\ell)}$, and $p^{(\ell)}$ at iteration $\ell$, our asymptotic analysis calculates $\lambda_j^{(\ell+1)}$, $q_j^{(\ell+1)}$, and $p^{(\ell+1)}$ at the next iteration $\ell + 1$. For a given initial density factor $\delta$, we can track the evolution of $p^{(\ell)}$ via this analysis. If the probability $p^{(\ell)}$ decreases monotonically to zero as the iteration number increases, we say that the decoding algorithm recovers the original signal successfully; if the probability is bounded away from 0 for all iteration numbers, we call this case a decoding failure. In what follows, we provide a rigorous calculation of the above probabilities via combinatorial and probabilistic arguments.

3.2. Details of Asymptotic Analysis

Define $P_d^{(\ell)}$ as the probability that a variable node is decoded at iteration $\ell$. From Theorem 3, a variable node that belongs to $\bigcup_{j \geq 1} \mathcal{V}_j$ can be decoded successfully, and a variable node that belongs to $\mathcal{V}_0$ is left intact. Thus, the probability that a variable node is decoded successfully is
$$P_d^{(\ell)} = \sum_{j=1}^{d_v} q_j^{(\ell)}. \tag{4}$$
Hence, after the $\ell$th iteration, the probability of a variable node remaining undecoded is
$$p^{(\ell+1)} = p^{(\ell)} \bigl(1 - P_d^{(\ell)}\bigr). \tag{5}$$

Once a variable node is decoded successfully, as discussed in the SWIFT algorithm, the edges incident to the variable node are removed, which also reduces the degrees of the check nodes incident to these removed edges. Let $\lambda_j^{(\ell+1)}$ denote the probability that a check node has degree $j$ after these removals, where $0 \le j \le d_c$. Based on Theorem 3, when a variable node is decoded successfully, its edges to the set $\mathcal{C}_1$ as well as its edges to the sets $\mathcal{C}_j$, $j \ge 2$, are removed. To calculate this probability conveniently, we use the total probability law, which yields (6).

For the asymptotic analysis, we assume that the length of the signal tends to infinity and that the designed sparse matrix is random. Hence, both the distribution of the edges between a variable node and the set $\mathcal{C}_1$ and the distribution of the edges between a variable node and the sets $\mathcal{C}_j$, $j \ge 2$, can be regarded as uniform. Let $e_1^{(\ell)}$ and $e_2^{(\ell)}$ be the probabilities that an edge is removed given that the edge is incident to a check node in $\mathcal{C}_1$ and in $\mathcal{C}_j$, $j \ge 2$, respectively; they are given by (7) and (8).

Before calculating $e_1^{(\ell)}$ and $e_2^{(\ell)}$, we introduce a significant conditional probability. Let $P_j^{(\ell)}$ denote the probability that a check node incident to an edge belongs to $\mathcal{C}_j$, given that the variable node incident to the edge is an undecoded nonzero element; its diagram is shown in Figure 3. Then $P_j^{(\ell)}$ can be calculated by (9).

From the definition of $e_1^{(\ell)}$, we obtain (10). Similarly, the calculation of $e_2^{(\ell)}$ is given by (11). To sum up, we can calculate the probabilities $\lambda_j^{(\ell+1)}$ of a check node belonging to $\mathcal{C}_j$ via (6)–(11).

Next, the probability $q_j^{(\ell+1)}$ is discussed. When a variable node is decoded successfully and the corresponding edges are removed, the undecoded variable nodes should be reclassified into the sets $\mathcal{V}_j$. The probability that a variable node changes into $\mathcal{V}_j$ is given by (12); by Theorem 3, this reclassification is driven by the check nodes whose degree drops to one. On the other hand, check nodes in the sets $\mathcal{C}_j$, $j \ge 2$, should also be reclassified into the set $\mathcal{C}_1$; let $\widetilde{\mathcal{C}}_1$ denote the set of check nodes reclassified into $\mathcal{C}_1$. Similarly, we assume that the signal length tends to infinity and the designed sparse matrix is random, so that the distribution of the edges incident to the sets of check nodes $\mathcal{C}_1$ and $\widetilde{\mathcal{C}}_1$ can be regarded as uniform. The probability of an edge incident to the sets $\mathcal{C}_j$, $j \ge 2$, moving to the set $\widetilde{\mathcal{C}}_1$ is given by (13).

Based on the total probability law, $q_j^{(\ell+1)}$ can be expressed via (14), where a normalization factor, given in (15), makes the probabilities satisfy the standardization of probability, that is, sum to one. Thus, according to the calculations of (12)–(15), the probability $q_j^{(\ell+1)}$ can be obtained.

In what follows, we provide a precise summary of the above update rules.

(1) Initialization. Setting the initial value to $p^{(0)} = \delta$, in the random sparse bipartite graph the probability that a check node belongs to the set $\mathcal{C}_j$ is given by the binomial distribution
$$\lambda_j^{(0)} = \binom{d_c}{j} \delta^{j} (1-\delta)^{d_c - j}, \quad 0 \le j \le d_c. \tag{16}$$
From Figure 3 and the probability in (9), the probability that a variable node belongs to $\mathcal{V}_j$ is given by the binomial distribution
$$q_j^{(0)} = \binom{d_v}{j} \bigl(P_1^{(0)}\bigr)^{j} \bigl(1 - P_1^{(0)}\bigr)^{d_v - j}, \quad 0 \le j \le d_v. \tag{17}$$

(2) Calculating $\lambda_j^{(\ell+1)}$. Substituting (7)–(11) into (6), $\lambda_j^{(\ell+1)}$ is calculated.

(3) Calculating $q_j^{(\ell+1)}$. Substituting (13)–(15) into (12), $q_j^{(\ell+1)}$ is calculated.

(4) Calculating $p^{(\ell+1)}$. Substituting (4) and the probability obtained in step (3) into (5), $p^{(\ell+1)}$ is calculated.
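The update rules above track a fairly rich state. As a much simpler stand-in, the threshold behavior can be illustrated by an edge-perspective recursion under a Poisson approximation of the check-node degrees; this sketch is our simplification, not the exact recursion of this section:

```python
import math

def density_evolution(delta, dv, dc, iters=200):
    """Fraction z of edges attached to still-unrecovered nonzero variables,
    iterated under a tree-like / Poisson approximation (our simplification)."""
    z = 1.0
    for _ in range(iters):
        # prob. that a check node, seen through one edge, is a single-ton for
        # that edge: no other unrecovered nonzero neighbor (Poisson thinning
        # with mean dc * delta unrecovered nonzero neighbors per check)
        p_single = math.exp(-dc * delta * z)
        # a variable stays unrecovered iff none of its dv - 1 other check
        # nodes is a single-ton for it
        z = (1.0 - p_single) ** (dv - 1)
    return z
```

Sweeping `delta` locates an approximate threshold: below it `z` collapses to zero (successful recovery), above it `z` stalls at a nonzero fixed point (failure), mirroring the threshold behavior of the exact analysis.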

3.3. Comparison of VB Algorithm in the Asymptotic Regime

In this subsection, we compare the analytical success thresholds of the SWIFT algorithm with those of the VB algorithm in the asymptotic regime. Table 1 lists the analytical success thresholds of the SWIFT and VB algorithms for different graphs.

As seen in Table 1, for every graph, the SWIFT algorithm has a better performance than the VB algorithm. Overall, the oversampling ratio improves when both $d_v$ and $d_c$ decrease. Furthermore, when the compression ratio is kept constant, that is, $d_c/d_v$ is a constant, such as for $(3, 6)$, $(4, 8)$, and $(5, 10)$, both the SWIFT and VB algorithms in general perform better when $d_v$ decreases, as shown in Table 1. Based on the relationship between the threshold and $(d_v, d_c)$, a sparse measurement matrix with superior performance can be designed.

4. Simulation Results and Analysis

In this section, simulation results are provided to analyze the performance of the SWIFT decoding algorithm. We also provide the asymptotic analysis described in Section 3. The comparison between the SWIFT decoding algorithm and the asymptotic analysis shows a good agreement between them.

4.1. Asymptotic and Finite-Length Results for SWIFT Algorithm

Two scenarios are adopted to verify Theorem 2: (1) the nonzero elements obey a Gaussian distribution; (2) the nonzero elements take values from a finite discrete set. The reconstruction signal-to-noise ratio is defined as $\mathrm{RSNR} = 20 \log_{10} \left( \|x\|_2 / \|x - \hat{x}\|_2 \right)$. We declare that a recovery is successful if the RSNR exceeds a prescribed threshold; for each point, 1000 Monte Carlo trials are performed. In the asymptotic analysis, we declare that the recovery is successful if $p^{(\ell)}$ falls below a prescribed small value. To calculate the threshold of $\delta$, a fine search step is used.
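The reconstruction SNR used as the success metric can be computed as follows (a Python sketch with the standard definition; the exact success threshold used in the experiments is not reproduced in this text):

```python
import numpy as np

def reconstruction_snr_db(x, x_hat):
    """RSNR = 20 * log10(||x||_2 / ||x - x_hat||_2), in dB."""
    err = np.linalg.norm(x - x_hat)
    if err == 0:
        return np.inf
    return 20 * np.log10(np.linalg.norm(x) / err)
```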

In the first experiment, a regular sparse measurement matrix is designed. A signal element belongs to the support set with probability $\delta_0$, and each nonzero signal element is drawn from the Gaussian distribution (or the discrete set). The success recovery percentage of the SWIFT algorithm versus the initial density factor $\delta_0$ is shown in Figure 4. From Figure 4, it can be observed that the recovery performance decreases as the initial density factor increases. The figure also shows that the performance for the two types of signals is almost identical. In addition, the threshold of the algorithm for the regular sparse measurement matrix, obtained by the asymptotic analysis, is portrayed by the dotted line. As shown in Figure 4, the threshold has a good agreement with the simulation curves.

In the second experiment, we verify that SWIFT only requires $O(k)$ measurements and $O(k)$ iterative steps. In order to examine the measurement cost and run-time, a random sparse matrix with regular degree is adopted. Figure 5 illustrates the measurement cost and run-time as functions of the length of the signal $n$. As seen in the figure, the measurement and run-time costs remain almost constant independently of $n$.

To investigate the level of agreement between the asymptotic analysis and the SWIFT algorithm, we provide the evolution of $p^{(\ell)}$ over the iterations for the regular sparse measurement matrix in Figure 6. In this experiment, two values of the initial density factor are adopted: one above the threshold and one below it. As shown in Figure 6, there is also a good agreement between the simulation results and the asymptotic analysis for both values.

To further show the degree of agreement between the asymptotic analysis and the simulation results, we apply the SWIFT algorithm to a regular sparse graph with increasing signal length $n$. The successful recovery thresholds are shown in Table 2. As shown in Table 2, as $n$ increases, the simulation results move closer to the asymptotic threshold.

4.2. Comparison of SWIFT Algorithm and VB Algorithm at Finite-Length

In this subsection, we compare the SWIFT algorithm with the VB algorithm in the noiseless setting. In the VB algorithm, two regular sparse matrices are designed with different signal lengths, respectively. To have a fair comparison, in the SWIFT algorithm, two regular sparse matrices with the same degrees are generated with the same signal lengths, respectively. Each nonzero signal element is taken from a standard Gaussian distribution. The success recovery percentage of the SWIFT and VB algorithms versus the sparsity $k$ is shown in Figure 7. As shown in Figure 7, the SWIFT algorithm performs better than the VB algorithm, especially for large signal lengths, which coincides with the analysis in Section 3.3.

5. Conclusion

In this paper, we investigate a new class of LDPC-DFT measurement matrices and their recovery algorithm. These measurement matrices exploit the sparsity of the parity-check matrices of LDPC codes and the phases of the DFT matrix to recover the original signal. In addition, a general framework for the asymptotic analysis of this fast recovery algorithm is proposed. The analysis shows that there exists a threshold on the density factor $\delta$: if the density factor is below this threshold, the recovery algorithm succeeds; otherwise it fails. Moreover, the threshold depends on the parameters of the LDPC matrix, which provides a basis for the design of the measurement matrices. We also observe that the SWIFT algorithm performs better than other schemes based on LDPC codes.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research was supported by a research grant from the National Natural Science Foundation of China (no. 61271354).