Abstract

Iterative approaches are important for image reconstruction with ill-posed problems, especially for limited-angle reconstruction. Most iterative algorithms can be written in the general Landweber scheme, in which appropriate relaxation strategies and appropriately chosen weights are critical to yield reconstructed images of high quality. In this paper, based on reducing the condition number of the matrix involved in the iteration, we derive a method of choosing weighting matrices for the general Landweber method to improve the reconstruction results. For high-resolution images, an approximate iterative matrix is derived, and new weighting matrices and corresponding relaxation strategies are proposed for the general Landweber method with a large dimensional number. Numerical simulations show that the proposed weighting methods are effective in improving the quality of the reconstructed image for both complete projection data and limited-angle projection data.

1. Introduction

Image reconstruction plays a major role in many fields such as X-ray computed tomography (CT). Many imaging processes can be modeled as the following linear system:
$$Ax = b, \qquad (1)$$
where $x$ denotes the reconstructed image, $b$ is the measurement data, and $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, where $a_{ij}$ equals the intersection length of the $i$th projection ray with the $j$th pixel. Therefore, the reconstructed image can be obtained by solving this system of linear equations. If (1) has a solution, it is called consistent; otherwise, it is called inconsistent. Iterative algorithms are important for image reconstruction. In this paper, we use the superscript $T$ to denote the transpose of a vector or matrix and use $\|\cdot\|$ to denote the 2-norm of a vector or matrix. We use $H$ to denote the Hilbert space with the canonical inner product $\langle\cdot,\cdot\rangle$, where the number field can be the reals $\mathbb{R}$ or the complexes $\mathbb{C}$. The orthogonal complement of a subspace $S$ in $H$ is denoted by $S^{\perp}$. For a matrix $A$, the null space and the range of $A$ are denoted by $N(A)$ and $R(A)$, respectively. The orthogonal projection from $H$ onto the null space $N(A)$ is denoted by $P_{N(A)}$.

Because of the rapid increase in the computing power of computers, iterative reconstruction algorithms have become more and more effective. The general Landweber iterative method is an extensively used scheme, which is written as
$$x^{(k+1)} = x^{(k)} + \lambda_{k}\,D\,A^{T}M\,\big(b - Ax^{(k)}\big), \qquad (2)$$
where $\lambda_{k}$ is the relaxation coefficient and the weighting matrices $D$ and $M$ are symmetric positive definite [1]. The weighting and relaxation strategies are important for the quality of the reconstructed image. Formula (2) reduces to specific iterative methods for different choices of the weighting matrices. If $D = I$, where $I$ denotes the identity matrix, and $M = \operatorname{diag}\big(1/(m\|a^{i}\|^{2})\big)$, where $a^{i}$ denotes the $i$th row of $A$, then (2) is Cimmino's iterative method [2, 3]. If $D = \operatorname{diag}(1/s_{j})$ and $M = \operatorname{diag}(1/\|a^{i}\|^{2})$, where $s_{j}$ is the number of nonzeros in the $j$th column of $A$, (2) becomes the simultaneous version of the diagonally relaxed orthogonal projection (DROP) method [4, 5]. The component averaging (CAV) method is obtained with $D = I$ and $M = \operatorname{diag}\big(1/\sum_{j=1}^{n} s_{j}a_{ij}^{2}\big)$ [6]. With $D = I$ and $M = I$, the Landweber method is as follows:
$$x^{(k+1)} = x^{(k)} + \lambda_{k}A^{T}\big(b - Ax^{(k)}\big). \qquad (3)$$
Convergence conditions of the method (2) have been established [7-9]. The semiconvergence behavior means that initially the iteration vector approaches a regularized solution, while continuing the iteration often leads to iteration vectors corrupted by noise [10]. Based on the analysis of the semiconvergence behavior, T. Elfving et al. proposed two relaxation strategies [4]. Later, these two strategies were improved by them [11]. If the matrix $A^{T}A$ is positive definite, as a special case of the Richardson iteration, based on minimizing the spectral radius of the iterative matrix, the relaxation coefficient of the Landweber iteration (3) is given by $\lambda = 2/(\mu_{\max} + \mu_{\min})$, where $\mu_{\max}$ and $\mu_{\min}$ are the largest and smallest eigenvalues of $A^{T}A$ [12, 13]. However, if the matrix $A^{T}A$ is positive semidefinite and singular, it is proved that the iteration generated by the Landweber scheme (3) with this choice is divergent [8]. In our previous work, based on minimizing the spectral radius of the new iterative matrix, a new relaxation coefficient $\lambda = 2/(\mu_{1} + \mu_{r})$ and an accelerating convergence strategy were proposed for the positive semidefinite matrix $A^{T}A$, where $\mu_{1}$ and $\mu_{r}$ are the largest and smallest positive eigenvalues of $A^{T}A$, respectively [9].
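To make the role of the weighting matrices concrete, the following is a minimal NumPy sketch of the general scheme (2) with diagonal weights. The names D and M follow the reconstruction above, and the Cimmino, DROP, and CAV weight formulas shown here are the standard forms from the cited literature; the exact normalizations used in this paper may differ, and all problem data below are placeholder toy values, not the paper's experiments.

```python
import numpy as np

def weighted_landweber(A, b, D_diag, M_diag, lam, n_iter=1000, x0=None):
    """General Landweber scheme (2): x <- x + lam * D A^T M (b - A x),
    with diagonal weighting matrices D = diag(D_diag), M = diag(M_diag)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + lam * (D_diag * (A.T @ (M_diag * (b - A @ x))))
    return x

def weights(A, kind="landweber"):
    """Standard diagonal weight choices (assumed forms; normalizations may differ)."""
    m, n = A.shape
    row_norm2 = np.sum(A**2, axis=1)                 # ||a^i||^2
    s = np.count_nonzero(A, axis=0).astype(float)    # nonzeros per column, s_j
    eps = np.finfo(float).eps
    if kind == "cimmino":   # D = I, M = diag(1 / (m ||a^i||^2))
        return np.ones(n), 1.0 / (m * row_norm2 + eps)
    if kind == "drop":      # D = diag(1/s_j), M = diag(1 / ||a^i||^2)
        return 1.0 / (s + eps), 1.0 / (row_norm2 + eps)
    if kind == "cav":       # D = I, M = diag(1 / sum_j s_j a_ij^2)
        return np.ones(n), 1.0 / ((A**2) @ s + eps)
    return np.ones(n), np.ones(m)                    # plain Landweber

# toy usage with a random consistent system
rng = np.random.default_rng(0)
A = rng.random((60, 40)); x_true = rng.random(40); b = A @ x_true
D_d, M_d = weights(A, "cimmino")
x_rec = weighted_landweber(A, b, D_d, M_d, lam=1.0)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```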

For high-resolution images, the number of discretization points of the original image is large. The image reconstruction problem is ill-posed, and the coefficient matrix of the reconstruction linear system has a high condition number. Therefore, the motivation of this work is to improve the condition number by choosing applicable weighting matrices. In this paper, we derive an appropriate weighting matrix that can be used to improve the condition number. In addition, to obtain a suitably high resolution of the reconstructed image, the grid number needs to be increased and the dimension of the original discrete image becomes large. Given an adequately large number of projections, the discrete original image converges to the continuous original image and $A^{T}A$ approaches the operator $R^{*}R$, where $R$ is the Radon transform and $R^{*}$ is its adjoint. Here, we do not give a rigorous theoretical proof. Therefore, the smaller eigenvalues of the matrix $A^{T}A$ tend to zero. The discrete original image can be represented as a linear combination of the eigenvectors of $A^{T}A$, and the remainder in this combination tends to the zero vector. We then derive the approximate iterative matrix and propose new weighting algorithms for the Landweber method. Based on minimizing the spectral radius of the iterative matrix, we obtain the corresponding relaxation strategies for the proposed methods. The numerical simulations show the advantages of the proposed algorithms in both convergence speed and iteration error compared with the Landweber method and Cimmino's method. The proposed methods are also applied to reconstruction from limited-angle projection data and improve the condition number of the matrix $A^{T}A$.

The organization of the paper is as follows. Section 2 analyzes the relation between the condition number and the spectral radius of the iterative matrix. In Section 3, we give a method of choosing weighting matrices to improve the condition number. For higher-resolution images, based on the derived approximate iterative matrix, we propose a new weighting algorithm for the general Landweber method together with the corresponding relaxation strategies. In Section 4, numerical simulations of image reconstruction are carried out. The advantages of the proposed algorithms are verified in two settings: reconstruction from complete projection data and reconstruction from limited-angle projection data. Finally, some relevant conclusions are discussed.

2. Preliminaries

2.1. Preparations

In this section, using the newly derived iterative representation, the condition number of the iterative matrix is analyzed. For the sake of argument, some notations are explained as follows. Let $\mu_{1} > \mu_{2} > \cdots > \mu_{r} > 0$ be the distinct positive eigenvalues of $A^{T}A$. Each $\mu_{i}$ has algebraic multiplicity $m_{i}$, respectively, for $i = 1, \dots, r$. Let $u_{i,1}, \dots, u_{i,m_{i}}$ be the corresponding orthonormal eigenvectors. It follows that
$$A^{T}A\,u_{i,l} = \mu_{i}\,u_{i,l}, \qquad l = 1, \dots, m_{i},\; i = 1, \dots, r.$$
If $0$ is an eigenvalue of $A^{T}A$ with algebraic multiplicity $m_{0}$, let $u_{0,1}, \dots, u_{0,m_{0}}$ be the corresponding orthonormal eigenvectors for the eigenvalue $0$; then $Au_{0,l} = 0$ for $l = 1, \dots, m_{0}$, where $0$ is the zero vector. With the above notations, the 2-norm of the matrix $A$ satisfies $\|A\|^{2} = \mu_{1}$. Let $U = (u_{1,1}, \dots, u_{r,m_{r}}, u_{0,1}, \dots, u_{0,m_{0}})$ and $\Lambda = \operatorname{diag}(\mu_{1}I_{m_{1}}, \dots, \mu_{r}I_{m_{r}}, O_{m_{0}})$; then we have
$$A^{T}A = U\Lambda U^{T}, \qquad (9)$$
where $I_{m_{i}}$ represents the $m_{i}$-order unit matrix and $O_{m_{0}}$ denotes the $m_{0}$-order null matrix. The least-squares functional
$$f(x) = \tfrac{1}{2}\,\|Ax - b\|^{2}$$
is used to find the minimal 2-norm solution of (1). The gradient of $f$ is
$$\nabla f(x) = A^{T}(Ax - b).$$
All the minimizers of $f$ satisfy the equation $\nabla f(x) = 0$, i.e.,
$$A^{T}Ax = A^{T}b, \qquad (12)$$
which is called the normal equation. Equation (12) is always solvable and has a minimal 2-norm solution $x^{\dagger}$, which is equal to $A^{\dagger}b$, where $A^{\dagger}$ is the Moore-Penrose inverse of $A$ [14]. If (1) is inconsistent, which is caused only by noise, $b$ can be decomposed into two parts $b = \bar{b} + \delta b$, where $\bar{b} \in R(A)$, $\delta b \in R(A)^{\perp}$, and $\delta b$ is caused by the noise. By orthogonal decomposition, $A^{T}\delta b = 0$. Thus, for the inconsistent equation, the normal equation is treated as
$$A^{T}Ax = A^{T}\bar{b}.$$
The minimal 2-norm solution can be written as
$$x^{\dagger} = \sum_{i=1}^{r}\sum_{l=1}^{m_{i}} c_{i,l}\,u_{i,l},$$
where $c_{i,l} = \langle x^{\dagger}, u_{i,l}\rangle$ for $i = 1, \dots, r$, $l = 1, \dots, m_{i}$. The initial guess $x^{(0)}$ can be decomposed as
$$x^{(0)} = \sum_{i=1}^{r}\sum_{l=1}^{m_{i}} d_{i,l}\,u_{i,l} + P_{N(A)}x^{(0)},$$
where $d_{i,l} = \langle x^{(0)}, u_{i,l}\rangle$ for $i = 1, \dots, r$, $l = 1, \dots, m_{i}$. We use the following convergence results proposed in [8, 9].
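As a quick numerical illustration of the decomposition above, the following NumPy sketch builds a small rank-deficient toy system, splits a noisy right-hand side into its components in $R(A)$ and $R(A)^{\perp}$, and checks that the noise component is annihilated by $A^{T}$ and that the minimal 2-norm solution of the normal equation equals $A^{\dagger}b$. All sizes and data are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, rank = 30, 20, 12
A = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))   # rank-deficient A

b_bar = A @ rng.standard_normal(n)              # exact data, lies in R(A)
noise = rng.standard_normal(m)
P_RA = A @ np.linalg.pinv(A)                    # orthogonal projector onto R(A)
delta_b = noise - P_RA @ noise                  # noise component in R(A)^perp
b = b_bar + delta_b                             # b = b_bar + delta_b as in the text

print("||A^T delta_b||  =", np.linalg.norm(A.T @ delta_b))              # ~ 0
x_dag = np.linalg.pinv(A) @ b                   # minimal 2-norm solution A^+ b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)    # minimum-norm least-squares solution
print("||x_dag - x_ls|| =", np.linalg.norm(x_dag - x_ls))               # ~ 0
print("normal eq. resid =", np.linalg.norm(A.T @ A @ x_dag - A.T @ b))  # ~ 0
```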

Theorem 1. For $k \ge 1$, the $k$th iteration generated by the Landweber method (3) satisfies
$$x^{(k)} = \sum_{i=1}^{r}\sum_{l=1}^{m_{i}}\Big(c_{i,l} + \pi_{i}^{(k)}\big(d_{i,l} - c_{i,l}\big)\Big)u_{i,l} + P_{N(A)}x^{(0)}, \qquad (16)$$
where $\pi_{i}^{(k)} = \prod_{j=0}^{k-1}\big(1 - \lambda_{j}\mu_{i}\big)$ with $\pi_{i}^{(0)} = 1$, for $i = 1, \dots, r$ and $l = 1, \dots, m_{i}$, and $\lambda_{j}$ is the relaxation coefficient of the $j$th step.

In the statement below, $\mu_{1}$ denotes the largest eigenvalue of $A^{T}A$.

Theorem 2. (a) For $0 < \lambda_{k} < 2/\mu_{1}$, we have $|1 - \lambda_{k}\mu_{i}| < 1$ for $i = 1, \dots, r$.
(b) Assume that the relaxation coefficients are chosen such that $0 < \lambda_{k} < 2/\mu_{1}$ for $k \ge 0$. Then the iteration generated by the Landweber method (3) converges to a solution of the normal equation (12) for any $b$ and any initial guess $x^{(0)}$ if
$$\prod_{j=0}^{k-1}\big(1 - \lambda_{j}\mu_{i}\big) \longrightarrow 0 \quad (k \to \infty), \qquad i = 1, \dots, r.$$
Under these conditions, the limit of the iteration is equal to
$$\lim_{k\to\infty} x^{(k)} = x^{\dagger} + P_{N(A)}x^{(0)}.$$

For the Landweber method (3), by (16), we define the error vector $e^{(k)} = x^{(k)} - x^{\dagger} - P_{N(A)}x^{(0)}$ and the iterative matrix
$$B_{k} = I - \lambda_{k}A^{T}A. \qquad (22)$$
Then
$$e^{(k+1)} = B_{k}\,e^{(k)},$$
and, since $e^{(k)}$ lies in the span of the eigenvectors belonging to the positive eigenvalues,
$$\rho(B_{k}) = \max_{1 \le i \le r}\,\big|1 - \lambda_{k}\mu_{i}\big|,$$
where $\rho(B_{k})$ denotes the spectral radius of $B_{k}$ restricted to this subspace. The following two relaxation strategies are proposed in [9].

Proposition 3 (see [9]). For an initial guess $x^{(0)}$, based on minimizing the spectral radius of the iterative matrix $B_{k}$, the relaxation strategy for the Landweber method (3) is
$$\lambda_{k} = \frac{2}{\mu_{1} + \mu_{r}}. \qquad (25)$$

Lemma 4 (see [9]). For any , its symmetric point about is , where and

For the Landweber method, Proposition 3 gives the relaxation strategy and Lemma 4 yields an accelerating convergence strategy for the case in which only the largest eigenvalue $\mu_{1}$ is calculated.
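The following NumPy sketch illustrates the relaxation choice (25) of Proposition 3 on a small rank-deficient toy problem and checks the limit $x^{\dagger} + P_{N(A)}x^{(0)}$ stated in Theorem 2. The matrix is constructed with a controlled spectrum purely for illustration; it is not the paper's reconstruction problem.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, rank = 40, 25, 15
U, _ = np.linalg.qr(rng.standard_normal((m, rank)))
V, _ = np.linalg.qr(rng.standard_normal((n, rank)))
sv = np.linspace(1.0, 3.0, rank)               # controlled positive singular values
A = U @ np.diag(sv) @ V.T                      # rank-deficient A (A^T A is singular)
b = A @ rng.standard_normal(n)                 # consistent data

mu = np.linalg.eigvalsh(A.T @ A)               # eigenvalues of A^T A, ascending
mu_pos = mu[mu > 1e-10 * mu[-1]]               # keep the positive eigenvalues only
lam = 2.0 / (mu_pos[-1] + mu_pos[0])           # Proposition 3: lambda = 2/(mu_1 + mu_r)

x0 = rng.standard_normal(n)                    # nonzero initial guess
x = x0.copy()
for _ in range(500):                           # Landweber method (3)
    x = x + lam * (A.T @ (b - A @ x))

A_pinv = np.linalg.pinv(A)
limit = A_pinv @ b + (x0 - A_pinv @ (A @ x0))  # x^dagger + P_{N(A)} x^(0) (Theorem 2)
print("distance to predicted limit:", np.linalg.norm(x - limit))
```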

2.2. Relation between the Condition Number and Spectral Radius

Definition 5. The condition number of a symmetric positive semidefinite matrix $B$ is defined as
$$\kappa(B) = \frac{\mu_{\max}(B)}{\mu_{\min}^{+}(B)},$$
where $\mu_{\max}(B)$ and $\mu_{\min}^{+}(B)$ are the largest and smallest positive eigenvalues of the matrix $B$, respectively.

In the following, we analyze the relation between the condition number and the spectral radius. Let $Q$ denote a symmetric positive semidefinite matrix with largest and smallest positive eigenvalues $\mu_{1}(Q)$ and $\mu_{r}(Q)$; then the equivalent form of (3) can be written as
$$x^{(k+1)} = \big(I - \lambda_{k}Q\big)\,x^{(k)} + \lambda_{k}A^{T}b, \qquad (28)$$
with $Q = A^{T}A$. Based on minimizing the spectral radius of the newly derived iterative matrix in (22), if $\mu_{1}(Q)$ and $\mu_{r}(Q)$ are both known, then the optimal relaxation strategy is
$$\lambda_{k} = \frac{2}{\mu_{1}(Q) + \mu_{r}(Q)}. \qquad (29)$$
If only $\mu_{1}(Q)$ is known, then the second relaxation strategy is
$$\lambda_{k} = \frac{1}{\mu_{1}(Q)}. \qquad (30)$$

For clarity, we use $\rho(Q)$ to represent the spectral radius of the iterative matrix in (22) relative to the matrix $Q$. By (22) and Lemma 4, under the optimal relaxation strategy (29) we have
$$\rho(Q) = \frac{\mu_{1}(Q) - \mu_{r}(Q)}{\mu_{1}(Q) + \mu_{r}(Q)} = \frac{\kappa(Q) - 1}{\kappa(Q) + 1}, \qquad (31)$$
and under the second relaxation strategy (30),
$$\rho(Q) = 1 - \frac{\mu_{r}(Q)}{\mu_{1}(Q)} = 1 - \frac{1}{\kappa(Q)}. \qquad (32)$$
Both expressions are increasing functions of the condition number $\kappa(Q)$. According to (32), if two matrices $Q_{1}$ and $Q_{2}$ satisfy $\kappa(Q_{1}) \le \kappa(Q_{2})$, then $\rho(Q_{1}) \le \rho(Q_{2})$ under the strategy (30); by (31), the same ordering holds under the optimal strategy (29). Therefore, reducing the condition number reduces the spectral radius of the iterative matrix for both relaxation strategies.

If the matrix $Q$ of (28) is taken to be two given matrices $Q_{1}$ and $Q_{2}$, respectively, two iterative algorithms, (38) and (39), are obtained, in which the relaxation coefficients are determined by the largest eigenvalues of $Q_{1}$ and $Q_{2}$, respectively. The spectral radii of the iterative matrices corresponding to the two algorithms (38) and (39) are denoted by $\rho(Q_{1})$ and $\rho(Q_{2})$. By (31) and (32), we have the following proposition.

Proposition 6. Assume that $\kappa(Q_{1}) \le \kappa(Q_{2})$.
(1) If the optimal relaxation strategy (29) is selected, then $\rho(Q_{1}) \le \rho(Q_{2})$.
(2) If the relaxation strategy (30) is selected, then $\rho(Q_{1}) \le \rho(Q_{2})$ as well.

Proof. For part (1), the conclusion follows from (31), since $(\kappa - 1)/(\kappa + 1)$ is an increasing function of $\kappa$. For part (2), the conclusion follows from (32), since $1 - 1/\kappa$ is an increasing function of $\kappa$. The proposition is established.
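As a quick numerical check of the monotonicity expressed by (31), (32), and Proposition 6, the following NumPy sketch builds two symmetric positive definite matrices with different condition numbers and compares the spectral radii of the corresponding iteration matrices under both relaxation strategies. The matrices are arbitrary toy examples, not the matrices used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def spd_with_spectrum(eigs):
    """Random symmetric positive definite matrix with prescribed eigenvalues."""
    V, _ = np.linalg.qr(rng.standard_normal((len(eigs), len(eigs))))
    return V @ np.diag(eigs) @ V.T

def spectral_radii(Q):
    """Spectral radius of I - lam*Q under strategies (29) and (30)."""
    mu = np.linalg.eigvalsh(Q)
    mu1, mur = mu[-1], mu[0]                       # largest / smallest eigenvalue
    rho_opt = np.max(np.abs(1.0 - (2.0 / (mu1 + mur)) * mu))   # strategy (29)
    rho_one = np.max(np.abs(1.0 - (1.0 / mu1) * mu))           # strategy (30)
    return mu1 / mur, rho_opt, rho_one

Q1 = spd_with_spectrum(np.linspace(1.0, 10.0, 20))     # condition number 10
Q2 = spd_with_spectrum(np.linspace(0.1, 10.0, 20))     # condition number 100
for name, Q in [("Q1", Q1), ("Q2", Q2)]:
    kappa, r_opt, r_one = spectral_radii(Q)
    print(f"{name}: kappa={kappa:6.1f}  rho_opt={(kappa - 1)/(kappa + 1):.4f} ({r_opt:.4f})"
          f"  rho_1={1 - 1/kappa:.4f} ({r_one:.4f})")
```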

3. New Weighting Algorithms

3.1. Reduction of Condition Number

According to Proposition 6, we consider making the condition number smaller by means of the weighting matrices in (2). Since the sequence $\mu_{1} > \mu_{2} > \cdots > \mu_{r}$ is decreasing, by (9), using $U$ as the transformation matrix of the similarity diagonalization, the positive eigenvalues of the diagonalized matrix constitute an increasing sequence. With the same $U$ as in (9), we obtain a weighted representation, given in (45), which depends on a parameter $\gamma$. Therefore, we propose that the weighting matrix in (2) is taken as in (46), where $\gamma$ is the parameter and the resulting matrix is symmetric positive definite. By (16), (22), and Theorem 1, the corresponding iterative matrix is given in (47).

Proposition 7. The condition number of the weighted matrix is denoted by $\kappa(\gamma)$, a function of the parameter $\gamma$. Then
(1) it is monotonically decreasing for and monotonically increasing for ;
(2) for .

Proof. Using the same as in (9), we obtainwhere are the positive eigenvalues of and the algebraic multiplicity of every positive eigenvalue is denoted as , respectively. For , , letIt is easy to getFor and , since is the farthest point from , we obtainWe need to find the minimum point of the fractionor the maximum point of its reciprocalFor , is a monotonically increasing function of ,andTherefore,whereSimilarly, for , is the farthest point from ,andTherefore, we need to find the minimum point of the fractionor the maximum point of its reciprocalFor , is a monotonically decreasing function of ,andTherefore,whereFor , the function is monotonically increasing for , so we getandSincewe obtainBy (57), (66), and (71), we can readily compute thatandSincewe obtainThis completes the proof.

3.2. Weighting and Relaxation Strategies
3.2.1. Analysis as the Dimensional Number Increases

In this subsection, we analyze the relations between the original continuous image and the discrete image, the Radon transform, and the coefficient matrix of the reconstruction algebraic system. Based on these relations, the weighting methods are proposed. The original image is generally a continuous function, assumed to be square-integrable. Algebraic iterative reconstruction discretizes the original continuous image and the Radon transform into a discrete image and a linear system, respectively, and then finds an iterative solution of the linear system. To achieve a suitably high reconstruction resolution for applications, the discretization grid is refined and the grid number increases, so the dimensions of the iterative solution and the coefficient matrix become larger. If the projection data are sufficiently many, the discrete original image converges to the continuous original image and $A^{T}A$ approaches $R^{*}R$ as the grid number increases in a certain form. Since the imaging object lies in a bounded domain, the image function is compactly supported. For the Radon transform on a bounded domain, $R$ has a singular value decomposition and $R^{*}R$ is a positive definite compact operator [10]. By the spectral theory of positive definite compact operators, as the dimension of the matrix becomes very large, the smaller positive eigenvalues of $A^{T}A$ tend to zero [10]. Since the inversion of the Radon transform has a unique solution, the linear system (1) has a unique solution when the projection data are appropriately large [10]. Due to space limitations, we do not prove these statements here. Then the original discrete image can be expanded, as in (77) and (78), in terms of the eigenvectors of $A^{T}A$, which constitute an orthonormal basis of the Hilbert space. As the grid number increases, the number of retained terms increases and the original discrete image tends to the original image. Assume that a prescribed iterative error is allowed. The remainder in (77), denoted as in (79), is unknown; if the reconstruction error vector lies within the allowed tolerance, there exists an index beyond which this remainder can be neglected. By Theorem 2 and (47), the approximate iterative matrix is given in (81).
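The eigenvalue decay described above can be illustrated numerically. The sketch below uses a discretized one-dimensional integral operator with kernel $k(s,t) = \min(s,t)$ as a stand-in for the two-dimensional Radon-type operator, since discretizations of compact operators exhibit the same behavior: as the grid is refined, the small positive eigenvalues of the Gram matrix approach zero and the condition number grows. The kernel and grid sizes are arbitrary illustrative choices, not the operator used in the paper.

```python
import numpy as np

def gram_eigenvalues(n):
    """Eigenvalues of K^T K for a discretized integral operator (Kf)(s) = int k(s,t) f(t) dt
    with kernel k(s,t) = min(s,t) on [0, 1], using midpoint quadrature on an n-point grid."""
    t = (np.arange(n) + 0.5) / n            # grid-cell midpoints
    h = 1.0 / n                             # cell width (quadrature weight)
    S, T = np.meshgrid(t, t, indexing="ij")
    K = np.minimum(S, T) * h                # smooth kernel => compact operator
    return np.sort(np.linalg.eigvalsh(K.T @ K))[::-1]

for n in (16, 32, 64, 128):
    mu = gram_eigenvalues(n)
    print(f"n={n:4d}  mu_max={mu[0]:.3e}  mu_min={mu[-1]:.3e}  kappa={mu[0] / mu[-1]:.3e}")
```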

3.2.2. Weighting and Relaxation Strategies

Based on the analysis in Section 3.2.1 and on minimizing the condition number of the approximate iterative matrix (81), by Proposition 7, the parameter $\gamma$ in (46) is taken as in (82), and the corresponding iterative method is given by (83), which is marked as NWL-a in this paper. Since this optimal value cannot be computed exactly, the value of $\gamma$ is taken in a suitable interval.

Now we study the relaxation strategy after the weighting matrix has been determined. For the iterative method (83), by (25), (47), and Proposition 3, the relaxation coefficient takes the form given in (84), with the quantities defined in (85). We hope that the spectral radius of the iterative matrix is smaller. By Proposition 6, the relaxation coefficient is taken as in (86). It is then a monotonically decreasing function with respect to $\gamma$ over the chosen interval.

To compare the reconstruction results of different weighting algorithms, by Proposition 7, the parameter $\gamma$ in (46) is taken as two particular values, and the corresponding methods are marked as NWL-1 and NWL-2, respectively. By (25) in Proposition 3, the relaxation coefficient of NWL-1 is given in (87); for NWL-2, by (58), the relaxation coefficient is given in (88).

4. Numerical Simulation

In order to illustrate the performance of the proposed algorithms, we simulate reconstructions of a parallel scanning model from noise-free, noisy, and limited-angle projections. The MATLAB software is used in the numerical simulation. All the algorithms are implemented on a PC with a 1.70 GHz processor and 4 GB of memory. As the original image, the Shepp-Logan phantom is obtained by the MATLAB function "phantom" at the chosen pixel size. The original image is relabeled in a 1D sequential order as the exact solution $x^{*}$. The initial vector $x^{(0)}$ is selected as the zero vector. The projection data $b$ are obtained by the MATLAB function "radon". The element $a_{ij}$ of $A$ is the segment length of the $i$th projection ray within the $j$th pixel, which is calculated by the method proposed in [15]. In the process of computing, all iterative reconstruction algorithms are computed in the form given by (90).

For the comparison of different algorithms, four quantitative evaluation indices are used to evaluate the quality of the reconstructed images. The relative error is defined as
$$\frac{\|x^{(k)} - x^{*}\|}{\|x^{*}\|},$$
and the smaller its value is, the faster the convergence rate is. The other three image distance indices, $d$, $r$, and $e$, are calculated by the method proposed in [16]. The elements in each row and column of the original image and of the reconstructed image are obtained by rearranging the exact solution $x^{*}$ and the iterative solution $x^{(k)}$, respectively, and the formulas of the image distances follow [16]. The criterion $d$ describes the overall nearness between the original and reconstructed images, the criterion $r$ is the mean relative error of the reconstruction, and the criterion $e$ is the maximum difference between the original and reconstructed images. The smaller the values of $d$, $r$, and $e$, the better the reconstruction results.
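For reference, the relative error index is straightforward to compute; the sketch below also includes a simple maximum-difference measure in the spirit of the worst-case criterion of [16]. The exact formulas of $d$, $r$, and $e$ follow [16] and are not reproduced here, so the second helper is only an assumed illustrative stand-in.

```python
import numpy as np

def relative_error(x_k, x_true):
    """Relative error: ||x^(k) - x*|| / ||x*||."""
    return np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)

def max_difference(img_rec, img_true):
    """Maximum absolute pixel difference (illustrative worst-case measure)."""
    return np.max(np.abs(img_rec - img_true))

# toy usage with 64x64 arrays reshaped to vectors
rng = np.random.default_rng(4)
img_true = rng.random((64, 64))
img_rec = img_true + 0.01 * rng.standard_normal((64, 64))
print(relative_error(img_rec.ravel(), img_true.ravel()), max_difference(img_rec, img_true))
```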

Based on the relaxation strategies proposed in Section 3, for the Landweber, Cimmino, NWL-1, and NWL-2 algorithms, the corresponding relaxation coefficients are obtained from (25), (87), and (88), respectively. For the NWL-a algorithms, the values of the parameter $\gamma$ and of the relaxation coefficient satisfy the conditions given in Section 3.2.2.

The principle for determining the number of iterations is that the relative error no longer decreases significantly as the number of iterations increases. According to repeated experiments, the number of iterations is chosen as 200 for the complete projection data and 1000 for the limited-angle projection data.

4.1. Complete Projection Data

We use projections over the full angular range, with a fixed number of rays per projection, for the reconstruction from complete projection data. The calculated coefficient matrix $A$ has the corresponding dimensions. For the algorithm NWL-a, the corresponding values of $\gamma$ and of the relaxation coefficient are as follows.

The largest eigenvalue of $A^{T}A$ was computed, and for each tested value of $\gamma$ the corresponding relaxation coefficient was obtained from it.

We compare the performance of the different algorithms in terms of reconstruction error and image distances. Table 1 shows the computation time for completing 200 iterations by formula (90). It can be seen that the time spent by the algorithms is similar, except that the Cimmino algorithm takes a little longer.

4.1.1. Noise-Free Projection Data

The comparison of relative errors for the proposed algorithms is shown in Figure 1, from which we observe that the relative errors obtained from the NWL-a method are less than those obtained from the Cimmino method, the NWL-1 and NWL-2 methods, and the Landweber method. Figure 2 shows the reconstructed phantom images from the noise-free projections. It can be seen that the reconstructed images obtained from the NWL-a method are clearer than those obtained from the Cimmino method, the NWL-1 and NWL-2 methods, and the Landweber method. The image distances for the noise-free projection data are given in Table 2, from which we see that the image distances of the NWL-a algorithm are smaller than those of the Cimmino and Landweber algorithms. Figure 3 plots the curves of relative error and shows that the relative errors from the NWL-a method are less than those from the other methods. Figure 4 shows the comparison of the 32nd column data in the original and reconstructed images. It is observed that the main features of the reconstructed images obtained from the NWL-a method are closer to those of the original image. The image distances generated by the different algorithms are given in Table 3, from which we see that the image distances of the NWL-a algorithm are smaller than those of the DROP and CAV algorithms.

4.1.2. Noisy Projection Data

In this section, we test the performance of the different algorithms for projection data with noise. For this purpose, white Gaussian noise with a signal-to-noise ratio (SNR) of 10 is added to the projections $b$. The reconstruction results are shown in Figures 5, 6, and 7. The image distances for the noisy projection data are given in Table 4, from which we see that the image distances of the NWL-a algorithm are smaller than those of the Cimmino and Landweber algorithms. The conclusion is the same as before: the quality of the reconstructed images obtained from the NWL-a method is better than that of those obtained from the other methods.
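For completeness, the following sketch shows one common way to add white Gaussian noise at a prescribed SNR (in dB) to a projection vector. The paper does not specify its exact noise-generation code, so this is only an assumed, typical construction; MATLAB's awgn function performs a similar task.

```python
import numpy as np

def add_gaussian_noise(b, snr_db, seed=0):
    """Add white Gaussian noise so that 10*log10(signal power / noise power) = snr_db."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(b ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return b + rng.normal(0.0, np.sqrt(noise_power), size=b.shape)

# usage: projections with SNR = 10, as in the experiments
b = np.linspace(0.0, 1.0, 1000)        # placeholder projection data
b_noisy = add_gaussian_noise(b, snr_db=10)
```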

4.2. Limited Angle Reconstruction

In the continuous setting, the problem of limited-angle reconstruction has been discussed in [17-20]. Assume that the image function $f$ is compactly supported. In two-dimensional space, the Radon transform of $f$ integrates $f$ along lines whose direction is determined by an angle $\varphi$. If the projections are known only for angles in a proper subset of the full angular range, the data are called angle-limited. By the projection slice theorem, the limited-angle data determine the Fourier transform of $f$ only on the corresponding cone of directions. Introducing the characteristic functions of the known angular range and of the support of $f$, one obtains an operator equation of the form (101); it follows from (101) that the adjoint operator can be written down, and multiplying both sides of (101) by the adjoint yields (103). The integral equation (101) can be uniquely solved, since the operator in (103) is self-adjoint and positive definite [18]. Therefore the eigenvalues of its eigenfunctions satisfy $0 < \lambda \le 1$. It turns out that the eigenvalues split into two rather distinct subsets: some are "close to 1" and the rest are "close to 0" [18]. When the range of known angles becomes small, more eigenvalues are "close to 0", which makes the ill-posedness more severe and degrades the quality of the reconstructed image. Here we do not study the theoretical relation between the eigenvalue distribution of the derived equation (103) in the continuous case and that of the discrete algebraic linear systems for the limited-angle reconstruction problem; we simply use our methods to improve the condition number of the discrete algebraic linear systems, so as to improve the results of limited-angle reconstruction.
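For reference, the standard definition of the two-dimensional Radon transform and the projection slice theorem, which underlie the discussion above, are recalled below in a common normalization; the paper's exact conventions (angle parametrization, Fourier normalization) may differ.

```latex
% Standard 2D Radon transform along the line x . theta = s, with theta = (cos phi, sin phi):
\begin{align*}
  (Rf)(\varphi, s) &= \int_{\mathbb{R}} f\big(s\theta + t\theta^{\perp}\big)\, dt, \qquad
  \theta = (\cos\varphi, \sin\varphi),\ \ \theta^{\perp} = (-\sin\varphi, \cos\varphi),\\[4pt]
% Projection slice theorem (1D Fourier transform in s on the left, 2D transform on the right):
  \widehat{(Rf)}(\varphi, \sigma) &= \sqrt{2\pi}\,\hat{f}(\sigma\theta).
\end{align*}
% Consequence: if Rf is known only for angles phi in a limited set, then \hat{f} is known only
% on the cone of directions {sigma * theta(phi)}, which is the source of the ill-posedness
% discussed above.
```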

4.2.1. Limited Angle 0°−119°

To test the effectiveness of the new algorithms in this respect, image reconstruction using limited-angle projection data is simulated in this section. The projection angles are set from 0° to 119°, and we use 120 projections with 95 rays per projection. The calculated coefficient matrix $A$ has the corresponding dimensions. The number of iterations is 1000. For the NWL-a algorithms, the largest eigenvalue of the weighted matrix and the relaxation coefficients corresponding to the tested values of $\gamma$ are computed accordingly. Completing 1000 iterations, the Cimmino method and the proposed methods take about 15 seconds.
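The limited-angle sinogram itself is easy to reproduce. The sketch below uses scikit-image's phantom and radon functions, Python counterparts of the MATLAB functions named earlier, to generate projections over 0° to 119° and filtered/unfiltered backprojection baselines for comparison. The image size, angle grid, and the use of scikit-image are illustrative choices, not the paper's exact setup (MATLAB's radon uses a different detector padding).

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# 64x64 Shepp-Logan phantom and 120 projection angles covering 0-119 degrees
image = rescale(shepp_logan_phantom(), 64 / 400, anti_aliasing=True)
theta = np.arange(0.0, 120.0, 1.0)

sinogram = radon(image, theta=theta)                      # limited-angle projection data
fbp = iradon(sinogram, theta=theta, filter_name="ramp")   # filtered backprojection (FBP)
bp = iradon(sinogram, theta=theta, filter_name=None)      # plain (unfiltered) backprojection

print(sinogram.shape)   # (n_detector_bins, n_angles)
print(np.linalg.norm(fbp - image) / np.linalg.norm(image))
```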

From Figure 8, we see that the reconstructed images from the NWL-a method are clearer than those obtained from the other methods. Although the results obtained from the Cimmino method are good, it is time-consuming to calculate the iteration matrix of the Cimmino method before the iteration begins. Figure 9 shows that the NWL-a method makes the relative error smaller. The comparison of the 32nd column data is shown in Figure 10, from which we observe that, for incomplete projection data, the reconstructed image has a certain deviation from the original image. However, the reconstructed images obtained from the NWL-a method are the closest to the original image.

4.2.2. Four Kinds of Limited Angle Projection Data

To further test the performance of the new algorithms on limited-angle reconstruction, in comparison with the Cimmino method, we select four kinds of limited-angle scans covering four angular ranges. The number of iterations is 1000. Corresponding to the four limited-angle ranges, the calculated coefficient matrices have the corresponding dimensions. For the NWL-a algorithm, the relaxation coefficients are computed from the largest eigenvalue associated with each tested value of the parameter $\gamma$.

Figure 11 shows the reconstruction results of the methods FBP (filtered backprojection), NWL-1, NWL-2, and Cimmino. Figure 12 shows the reconstruction results from the NWL-a method with the tested values of $\gamma$. From these figures, it is observed that the quality of the reconstructed images becomes worse as the limited-angle range narrows. Corresponding to Figure 12, the image distances from the NWL-a method are reported in Tables 5 and 6. It can be seen that the image distances $d$, $r$, and $e$ obtained by the NWL-a method with $\gamma = 1.005$ and the other tested values are similar for each kind of limited-angle projection data. The values of $d$, $r$, and $e$ become larger as the limited-angle range narrows, which is consistent with expectations. However, the reconstructions of the main features in the original image are clear when the range of the limited angle is sufficiently large; in that case, the reconstructed images obtained via the NWL-a method are clearer. The reconstructed images are not clear when the range of the limited angle is too small. Figures 13 and 15 show the comparison of relative errors for the six iterative methods. It is observed that the convergence speed of the NWL-a method is the fastest. From Figures 14 and 16, we see that the reconstructed image obtained from the NWL-a method is closer to the original image. The numerical simulations of limited-angle reconstruction show that the NWL-a method improves the reconstruction results.

The simulation results show that the proposed weighting method NWL-a, for the tested values of $\gamma$, has better convergence behavior.

5. Conclusion

Based on the analysis of the relation between the condition number and the spectral radius of the iterative matrix, new weighting matrices have been derived to reduce the condition number. For higher-resolution reconstructed images, the approximate iterative matrix has been derived for the weighted Landweber method with a large dimensional number. Accordingly, the new weighting algorithm NWL-a and the corresponding relaxation strategies for the weighted Landweber method are proposed. Numerical simulations verify the feasibility and the better performance of the NWL-a method for complete projection data and especially for limited-angle projection data. For the tested values of $\gamma$, the reconstruction results obtained by NWL-a are similar. Therefore, when using the NWL-a method, the value of the parameter $\gamma$ can be selected from the tested interval. Since we have no imaging laboratory here, the proposed algorithm has not yet been applied to actual experiments; this will be part of our future work.

Data Availability

The numerical data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grants 61671004 and 61271012).