Abstract
Many applications in applied sciences and engineering can be formulated as a convex minimization problem involving the sum of two functions. One of the most popular techniques for solving this problem is the forward-backward algorithm. In this work, we present a new splitting algorithm that adapts Tseng's extragradient method and uses a linesearch technique together with an inertial step. We obtain a convergence result under mild assumptions. As an application, we provide numerical experiments for the image recovery problem. We also compare our algorithm with some known algorithms to demonstrate its efficiency.
1. Introduction and Preliminaries
In various fields of applied sciences and engineering, such as signal recovery, image restoration, and machine learning [1–9], many problems can be formulated as a convex minimization problem (CMP) in terms of the sum of nonsmooth and smooth functions. Let $H$ be a real Hilbert space. The CMP is modeled as follows:
$$\min_{x \in H} f(x) + g(x), \quad (1)$$
where $f : H \to \mathbb{R}$ and $g : H \to \mathbb{R} \cup \{+\infty\}$ are two proper lower semicontinuous convex functions such that $f$ is differentiable on $H$. For any $x^{*} \in H$, it is known that $x^{*}$ is an optimal solution to (1) if
$$0 \in \nabla f(x^{*}) + \partial g(x^{*}), \quad (2)$$
where $\nabla f(x)$ is the gradient of $f$ at $x$, the linear functional defined by $\langle \nabla f(x), d \rangle = f'(x; d)$, where the derivative of $f$ at $x$ in the direction $d$ is
$$f'(x; d) = \lim_{t \to 0^{+}} \frac{f(x + t d) - f(x)}{t}, \quad (3)$$
and $\partial g$ is the classical subdifferential of $g$, which is given by
$$\partial g(x) = \{ u \in H : g(y) \geq g(x) + \langle u, y - x \rangle \ \text{for all}\ y \in H \}. \quad (4)$$
It is known that $\partial g$ is maximal monotone and that if $g$ is differentiable, then $\partial g$ is the gradient of $g$, denoted by $\nabla g$. This leads to the classical forward-backward splitting algorithm (FBS) [10, 11], which is defined by $x_{1} \in H$ and
$$x_{n+1} = \mathrm{prox}_{\lambda_{n} g}\big( x_{n} - \lambda_{n} \nabla f(x_{n}) \big), \quad (5)$$
where $\lambda_{n} > 0$ and $\mathrm{prox}_{\lambda g}(x) = \operatorname{argmin}_{y \in H} \big\{ g(y) + \frac{1}{2\lambda} \| x - y \|^{2} \big\}$ is the proximal operator. On the one hand, (5) includes the gradient algorithm $x_{n+1} = x_{n} - \lambda_{n} \nabla f(x_{n})$, where $g = 0$ and $\nabla f$ is a Lipschitz continuous gradient. Moreover, (5) includes the proximal point algorithm $x_{n+1} = \mathrm{prox}_{\lambda_{n} g}(x_{n})$, where $f = 0$ and $g$ is a nondifferentiable function. We know that the proximal operator is single-valued and is characterized by
$$\frac{x - \mathrm{prox}_{\lambda g}(x)}{\lambda} \in \partial g\big( \mathrm{prox}_{\lambda g}(x) \big) \quad (6)$$
for all $x \in H$ and $\lambda > 0$. The iteration (5) has attracted extensive attention from many researchers; see, for example, [12–18]. One popular method for solving (1) is the modified forward-backward splitting method (MFBS), or Tseng's extragradient method [19]; MFBS is generated by $x_{1} \in H$ and
$$y_{n} = \mathrm{prox}_{\lambda_{n} g}\big( x_{n} - \lambda_{n} \nabla f(x_{n}) \big), \qquad x_{n+1} = y_{n} - \lambda_{n}\big( \nabla f(y_{n}) - \nabla f(x_{n}) \big), \quad (7)$$
where $(\lambda_{n})$ is a positive real sequence. The convergence rate $O(1/n)$ is well known for the speed of FBS. Later, various schemes were proposed to improve the convergence and accelerate the method. Among them, Lorenz and Pock [20] improved the convergence speed of FBS from the standard $O(1/n)$ to $O(1/n^{2})$.
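As a concrete illustration, the iteration (5) can be sketched for the LASSO instance $f(x) = \frac{1}{2}\|Ax - b\|^{2}$, $g(x) = \mu \|x\|_{1}$, where the proximal operator of $\lambda g$ is componentwise soft-thresholding. The sketch below is in Python; the function names and the fixed step size are our own illustrative choices, not part of the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, applied componentwise (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs(A, b, mu, lam, n_iter=500):
    # Classical forward-backward splitting (5) for the LASSO instance
    # f(x) = 0.5*||Ax - b||^2 (smooth), g(x) = mu*||x||_1 (nonsmooth).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                       # forward (gradient) step on f
        x = soft_threshold(x - lam * grad, lam * mu)   # backward (proximal) step on g
    return x
```

For convergence of this fixed-step variant, $\lambda$ must be small relative to the Lipschitz constant of $\nabla f$, i.e., $\lambda \in (0, 2/\|A^{T}A\|)$.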
Recently, Beck and Teboulle [21] introduced a fast iterative shrinkage-thresholding algorithm (FISTA-BT) by the following scheme.
Algorithm 1. FISTA-BT algorithm.
Initialization: $x_{0} = x_{1} \in H$ and $t_{1} = 1$.
Iterative step: let $n \in \mathbb{N}$ and calculate $x_{n+1}$ as follows:
Step 1. Compute the inertial step:
$$y_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}), \quad (8)$$
where $\theta_{n} = \dfrac{t_{n} - 1}{t_{n+1}}$ and
$$t_{n+1} = \frac{1 + \sqrt{1 + 4 t_{n}^{2}}}{2}. \quad (9)$$
Step 2. Compute the step:
$$x_{n+1} = \mathrm{prox}_{\frac{1}{L} g}\Big( y_{n} - \frac{1}{L} \nabla f(y_{n}) \Big), \quad (10)$$
where $L$ is a Lipschitz constant of $\nabla f$.
Set $n := n + 1$ and return to Step 1.
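Algorithm 1 can be sketched as follows (a Python illustration under our reading of the two steps; `grad_f` and `prox_g` are user-supplied callables, not names from the paper):

```python
import numpy as np

def fista_bt(grad_f, prox_g, L, x1, n_iter=100):
    # FISTA-BT (Algorithm 1): inertial step with t_{n+1} = (1 + sqrt(1 + 4 t_n^2))/2,
    # followed by a proximal-gradient step with fixed step size 1/L.
    # prox_g(v, lam) should return prox_{lam*g}(v).
    x_prev, x, t = x1.copy(), x1.copy(), 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        theta = (t - 1.0) / t_next
        y = x + theta * (x - x_prev)                          # inertial (momentum) step
        x_prev, x = x, prox_g(y - grad_f(y) / L, 1.0 / L)     # forward-backward step
        t = t_next
    return x
```

Note that the step size $1/L$ requires knowing a Lipschitz constant $L$ of $\nabla f$ in advance, which is exactly the assumption the linesearch-based methods below remove.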
Without the Lipschitz condition on the gradient of functions, Cruz and Nghia [22] proposed a new version of the forward-backward method (FISTA-CN) based on the linesearch rule.
Algorithm 2. FISTA-CN algorithm.
Initialization: $x_{0} = x_{1} \in \mathrm{dom}\, g$, $\sigma > 0$, $\theta \in (0, 1)$, and $\delta \in (0, 1/2)$.
Iterative step: let $n \in \mathbb{N}$ and calculate $x_{n+1}$ as follows:
Step 1. Compute the inertial step:
$$y_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}), \quad (11)$$
where $\theta_{n} = \dfrac{t_{n} - 1}{t_{n+1}}$ and $t_{n+1} = \dfrac{1 + \sqrt{1 + 4 t_{n}^{2}}}{2}$.
Step 2. Compute the step:
$$x_{n+1} = \mathrm{prox}_{\lambda_{n} g}\big( y_{n} - \lambda_{n} \nabla f(y_{n}) \big), \quad (12)$$
where $\lambda_{n} = \sigma \theta^{m_{n}}$ and $m_{n}$ is the smallest nonnegative integer such that
$$\lambda_{n} \big\| \nabla f(x_{n+1}) - \nabla f(y_{n}) \big\| \leq \delta \| x_{n+1} - y_{n} \|. \quad (13)$$
Stop criterion: if $x_{n+1} = y_{n}$, then stop.
If $x_{n+1} \neq y_{n}$, then set $n := n + 1$ and return to Step 1.
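The backtracking linesearch in Step 2 can be sketched as follows (a Python illustration under our reading of the rule; the function name `linesearch_step` is ours). The step size $\lambda = \sigma \theta^{m}$ is shrunk until the gradient-continuity test holds, so no global Lipschitz constant of $\nabla f$ is needed:

```python
import numpy as np

def linesearch_step(grad_f, prox_g, y, sigma, theta, delta):
    # Shrink lambda = sigma * theta^m until
    #   lambda * ||grad_f(x_new) - grad_f(y)|| <= delta * ||x_new - y||,
    # where x_new = prox_{lambda*g}(y - lambda*grad_f(y)).
    # prox_g(v, lam) should return prox_{lam*g}(v).
    lam = sigma
    while True:
        x_new = prox_g(y - lam * grad_f(y), lam)
        lhs = lam * np.linalg.norm(grad_f(x_new) - grad_f(y))
        rhs = delta * np.linalg.norm(x_new - y)
        if lhs <= rhs:
            return x_new, lam
        lam *= theta
```

When $\nabla f$ is uniformly continuous on bounded sets, the loop stops after finitely many trials; e.g., if $\nabla f$ is 1-Lipschitz, the accepted $\lambda$ is the first power $\sigma \theta^{m} \leq \delta$.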
In 2017, Verma and Shukla [23] introduced the new accelerated proximal gradient algorithm (NAGA) which is generated by the following.
Algorithm 3. NAGA algorithm.
Iterative step: let $n \in \mathbb{N}$ and calculate $x_{n+1}$ as follows:
Step 1. Compute the inertial step:
$$y_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}). \quad (14)$$
Step 2. Compute
$$x_{n+1} = \mathrm{prox}_{\lambda_{n} g}\big( y_{n} - \lambda_{n} \nabla f(y_{n}) \big), \quad (15)$$
where $\lambda_{n} \in (0, 2/L)$. Set $n := n + 1$ and return to Step 1.
This work presents a new splitting method, called the new modified forward-backward splitting algorithm (NMFBS), for convex minimization problems. Our results extend and improve the corresponding results of Tseng [19] and Cruz and Nghia [22]. The step size defined in this work does not require the Lipschitz condition on the gradient. Finally, we also present numerical experiments of our algorithm for solving image recovery problems and compare our proposed method with FISTA-BT [21], FISTA-CN [22], and NAGA [23].
2. Main Theorem
We assume that $f$ and $g$ are proper, lower semicontinuous, and convex functions; $\nabla f$ is uniformly continuous on bounded sets; and $\nabla f$ is bounded on bounded sets. The following is our algorithm.
Algorithm 4. The new modified forward-backward splitting algorithm (NMFBS)
Initialization: given $x_{0} = x_{1} \in H$, $\sigma > 0$, $\theta \in (0, 1)$, and $\delta \in (0, 1/2)$.
Iterative step: let $n \in \mathbb{N}$ and calculate $x_{n+1}$ as follows:
Step 1. Compute the inertial step:
$$y_{n} = x_{n} + \theta_{n}(x_{n} - x_{n-1}). \quad (16)$$
Step 2. Compute:
$$z_{n} = \mathrm{prox}_{\lambda_{n} g}\big( y_{n} - \lambda_{n} \nabla f(y_{n}) \big),$$
where $\lambda_{n} = \sigma \theta^{m_{n}}$ and $m_{n}$ is the smallest nonnegative integer such that
$$\lambda_{n} \big\| \nabla f(z_{n}) - \nabla f(y_{n}) \big\| \leq \delta \| z_{n} - y_{n} \|. \quad (17)$$
Step 3. Compute the step:
$$x_{n+1} = z_{n} - \lambda_{n}\big( \nabla f(z_{n}) - \nabla f(y_{n}) \big). \quad (18)$$
Set $n := n + 1$ and return to Step 1.
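Under our reading of Steps 1–3 (inertial step, proximal step with backtracking linesearch, Tseng-type correction), the algorithm can be sketched in Python as follows. The constant inertial parameter `theta_n` below is a simplification for illustration only; the convergence theorem places conditions on the inertial sequence:

```python
import numpy as np

def nmfbs(grad_f, prox_g, x1, sigma=1.0, theta=0.5, delta=0.4, theta_n=0.5, n_iter=100):
    # Sketch of NMFBS (Algorithm 4). prox_g(v, lam) should return prox_{lam*g}(v).
    x_prev, x = x1.copy(), x1.copy()
    for _ in range(n_iter):
        y = x + theta_n * (x - x_prev)             # Step 1: inertial step
        lam = sigma
        while True:                                # Step 2: linesearch (17)
            z = prox_g(y - lam * grad_f(y), lam)
            if lam * np.linalg.norm(grad_f(z) - grad_f(y)) <= delta * np.linalg.norm(z - y):
                break
            lam *= theta
        x_prev, x = x, z - lam * (grad_f(z) - grad_f(y))   # Step 3: Tseng correction
    return x
```

The only difference from FISTA-CN is Step 3, which reuses the two gradients already evaluated during the linesearch, so no extra gradient calls are needed.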
Following the proof as in [24], we can show the following lemma.
Lemma 1. The linesearch (17) stops after finitely many steps.
Theorem 2. Suppose that $\lambda_{n} \geq \lambda$ for some $\lambda > 0$, $\sum_{n=1}^{\infty} \theta_{n} \| x_{n} - x_{n-1} \| < \infty$, and $\operatorname{argmin}(f + g) \neq \emptyset$. Then, $(x_{n})$ generated by Algorithm 4 converges weakly to a minimizer of $f + g$.
Proof. Let , and set . Then, we obtain Moreover, we have Using (19), we see that Combining (20) and (21), we have Now, set . Then, we obtain Also, we have Since , we obtain . Thus, by (22), (24), and the monotonicity of , we have So, we have and by the monotonicity of . Thus, we have We note that for all Using (26), we have Using (27), we have From (26), (27), (28), and (29), we obtain Using (17), (19), (23), and (30), we obtain Next, we will show that exists. From (31), we see that By Lemma 5 in [1], we have where . Since , we have which is bounded. Thus, By Lemma 1 in [25] and (32), we have that exists. From (31), we see that Noting , exists and , we have Since is uniformly continuous on bounded sets, we have By definition of , it is easy to see that . Then, From (35), (36), (37), and (39), we obtain By the boundedness of , we assume that is a weak limit point of ; i.e., there is a subsequence of such that . Since , we also obtain as . Using (6), we obtain It follows that By passing and using (36) and (38), we have by Fact 2.2 in [22]. Hence, by Theorem 5.5 in [26], we can conclude that converges weakly to a point in . We thus complete the proof.
Remark 3. The condition that $\lambda_{n} \geq \lambda$ for some $\lambda > 0$ can be dropped in the case that $\nabla f$ is Lipschitz continuous on $H$, since $(\lambda_{n})$ is then bounded below away from zero (see Propositions 4.4 and 4.11 in [22]).
Remark 4. In the main theorem, we use a linesearch technique to calculate the step size at each iteration, unlike the results of [4, 5, 16, 17]. It is also worth mentioning that the choice of step size in our algorithm does not depend on the Lipschitz condition of the gradient. Our proposed algorithm can be applied to image recovery, which makes it more applicable than those of [4, 5, 16, 17].
3. Numerical Experiments
Medical imaging plays a crucial role in modern medicine; image data are found in various clinical specialties, for routine diagnostics in X-ray imaging, for monitoring intraoperative progress during surgical procedures, and for guidance and diagnosis of ailments. In practice, degradations are unavoidable because medical imaging systems limit the intensity of the incident radiation to protect the patient's health. Improving image quality is therefore important for medical analysis. Image processing mainly consists of image deblurring, image denoising, and image inpainting, a branch in which optimization techniques are commonly employed.
The image restoration problem can be formulated as follows:
$$b = Ax + w, \quad (43)$$
where $b$ is the observed image, $A$ is the blurring matrix, $x$ is the original image, and $w$ is additive noise. To solve problem (43), we aim to approximate the original image by transforming (43) into the following LASSO problem [27]:
$$\min_{x} \frac{1}{2} \| Ax - b \|_{2}^{2} + \mu \| x \|_{1}, \quad (44)$$
where $\mu > 0$ is a regularization parameter and $\| \cdot \|_{1}$ is the $\ell_{1}$-norm. In general, (44) can be written in the general form (1) by estimating the minimizer of the sum of two functions with $f(x) = \frac{1}{2} \| Ax - b \|_{2}^{2}$ and $g(x) = \mu \| x \|_{1}$. We next present our algorithm (NMFBS) for the LASSO problem (44) and compare its efficiency with FISTA-BT [21], FISTA-CN [22], and NAGA [23]. All computational experiments were written in Matlab 2020b and performed on a 64-bit MacBook Pro with an Apple M1 chip and 8 GB of RAM.
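To connect the LASSO model (44) with the general form (1), the smooth part, its gradient, and the proximal operator of the nonsmooth part can be packaged as follows. This is a Python sketch rather than the Matlab used in our experiments, and the helper name `make_lasso` is ours:

```python
import numpy as np

def make_lasso(A, b, mu):
    # LASSO instance (44): f(x) = 0.5*||Ax - b||_2^2 (smooth),
    # g(x) = mu*||x||_1 (nonsmooth), prox of lam*g is soft-thresholding at lam*mu.
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda v, lam: np.sign(v) * np.maximum(np.abs(v) - lam * mu, 0.0)
    objective = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.linalg.norm(x, 1)
    return grad_f, prox_g, objective
```

The returned `grad_f` and `prox_g` are exactly the two oracles consumed by each of the algorithms compared above.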
The two original images are shown in Figure 1. To measure the quality of the restored images, we use the peak signal-to-noise ratio (PSNR), in decibels (dB) [28], and the structural similarity index metric (SSIM) [29]. The number of iterations for all algorithms is 1200.
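The PSNR can be computed with the standard formula below (a Python sketch; the peak value 255 assumed here corresponds to 8-bit images, and SSIM, which requires a windowed computation, is omitted):

```python
import numpy as np

def psnr(x, x_ref, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    mse = np.mean((np.asarray(x, float) - np.asarray(x_ref, float)) ** 2)
    return 10.0 * np.log10(peak * peak / mse)
```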
All parameters are chosen as in Table 1. The initial points are vectors of ones with the size of original images for all algorithms. The blurred images are shown in Figures 2–4. The parameter of FISTA-BT, FISTA-CN, and NAGA is defined as in Algorithm 1.
The numerical results are reported in Table 2 and Figures 5–8.
From Table 2, we see that numerical experiments of NMFBS are better than those of FISTA-BT, FISTA-CN, and NAGA in terms of PSNR and SSIM for all blur types.
We next provide the recovered images in two cases to illustrate the convergence behavior of all algorithms in comparison. We plot PSNR and SSIM versus the number of iterations in Figures 6 and 8.
4. Conclusion
We have introduced a modified forward-backward algorithm for solving the convex minimization problem of the sum of two functions in a real Hilbert space. The proposed algorithm does not require computing the Lipschitz constant of the gradient. We have proved that the sequence generated by the algorithm converges weakly to a minimizer under mild conditions. Our result can be applied effectively to image recovery, as shown in the numerical experiments. The comparative experiments showed that the proposed algorithm is more efficient than FISTA-BT [21], FISTA-CN [22], and NAGA [23] in terms of PSNR and SSIM for all blur types.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Acknowledgments
K. Kankam was supported by School of Science, University of Phayao, under grant no. PBTSC65023. W. Cholamjiak was supported by the Thailand Science Research and Innovation Fund and the University of Phayao under grant no. FF65-UoE002 and no. FF65-RIM072. P. Cholamjiak was supported by the National Research Council of Thailand (NRCT) under grant no. N41A640094.