Abstract
In this paper, a new inexact parallel splitting algorithm based on the shrinkage algorithm is proposed. We introduce inexact terms and obtain new iteration points via a parallel splitting scheme. A new descent direction and a suitable step length are derived. Under mild assumptions, convergence of the algorithm is established. Experiments on matrix correction problems show that the algorithm is efficient and easy to implement.
1. Introduction
This paper considers a class of minimization problems of the following form, where , are convex functions defined on , and projection onto and is easy to compute.
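The displayed problem did not survive reproduction here. Based on the surrounding description (two convex component functions, cheap projections onto the constraint sets, and a Lagrangian with a single linear coupling constraint), the intended model is presumably the standard two-block separable convex program. A plausible form, with the symbols θ₁, θ₂, A, B, b, 𝒳, 𝒴 chosen for illustration rather than taken from the paper, is:

```latex
\min_{x \in \mathcal{X},\; y \in \mathcal{Y}} \; \theta_1(x) + \theta_2(y)
\qquad \text{s.t.} \qquad Ax + By = b,
```

where θ₁ and θ₂ are convex and the Euclidean projections onto 𝒳 and 𝒴 are computationally cheap.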
In practical applications, many structured optimization problems arise in fields such as electronic engineering and computer science, including matrix processing [1], traffic network analysis [2], least-squares semidefinite programming [3, 4], image restoration [5], and so on.
We use the following function to denote the Lagrangian function of (1):
Gabay and Mercier first introduced the alternating direction method (ADM) in [6], which is an efficient method for solving (2). For a given iterate , ADM generates a new iterate via the scheme
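For reference, since the displayed scheme did not survive reproduction, the classical ADM of Gabay and Mercier can be stated as follows (the symbols θ₁, θ₂, A, B, b, β, λ are assumed here, matching the illustrative two-block model above): with the augmented Lagrangian

```latex
\mathcal{L}_\beta(x, y, \lambda) = \theta_1(x) + \theta_2(y)
  - \lambda^\top (Ax + By - b) + \tfrac{\beta}{2}\,\|Ax + By - b\|^2,
```

one alternates exact minimizations and a multiplier update:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x \in \mathcal{X}} \; \mathcal{L}_\beta(x, y^k, \lambda^k), \\
y^{k+1} &= \arg\min_{y \in \mathcal{Y}} \; \mathcal{L}_\beta(x^{k+1}, y, \lambda^k), \\
\lambda^{k+1} &= \lambda^k - \beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}
```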
Recently, ADMM has found many efficient applications, especially in machine learning, image processing [7–9], and matrix correction problems. ADMM has also been investigated in depth in previous work (see, e.g., [10–16]).
However, ADMM may become impractical when its subproblems are difficult to solve. To avoid such situations, various remedies have been studied, such as linearization (see, e.g., [17, 18]), adding proximal terms (see, e.g., [15, 16]), and inexact solutions (see, e.g., [19–21]). Eckstein and Bertsekas first introduced an inexact technique for the ADM algorithm in [12], which has since been widely used and extended (for details, see [17–30]). Moreover, the penalty parameter has a great influence on the convergence speed of the algorithm and on the experimental results; He et al. [21] introduced a widely used method of adjusting this parameter to improve the convergence speed of the ADM. Recently, Chen et al. [22] proposed a new inexact projection and contraction method (IPCM), which does not need the explicit mathematical expression of the mapping; however, that algorithm imposes strict restrictions on its parameters. Based on the IPCM in [22] and the IPADM in [23], we derive an improved inexact parallel splitting method for model (2); our algorithm relaxes the constraints on the parameters and is computationally simple.
2. Preliminaries
First, we review some preliminary knowledge that can help readers better understand the analysis that follows.
Let and denote the subgradients of and , respectively. As described in [17], finding an optimal solution of the problem is equivalent to finding a point that makes the following formula hold:
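The displayed optimality condition is missing here. In this line of work (e.g., He-style projection and contraction methods), the standard reformulation of the optimality conditions is a mixed variational inequality; a plausible statement, with all symbols assumed from the illustrative two-block model rather than the paper, is: find w* = (x*, y*, λ*) ∈ Ω such that

```latex
\theta(u) - \theta(u^*) + (w - w^*)^\top F(w^*) \ge 0
\quad \forall\, w \in \Omega,
\qquad
F(w) = \begin{pmatrix} -A^\top \lambda \\ -B^\top \lambda \\ Ax + By - b \end{pmatrix},
```

where u = (x, y) and θ(u) = θ₁(x) + θ₂(y). Note that this F is affine with a skew-symmetric linear part, hence monotone, which is consistent with assumption (a) below.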
For the convenience of analysis, we let
Then, formula (4) can be rewritten in the following compact form, where
Throughout, we make the following assumptions to guarantee that the algorithm is well defined:
(a) and are monotone and Lipschitz continuous.
(b) Projection onto and is easy to compute.
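Assumption (b) asks that projections onto the constraint sets be cheap. For many common sets this is a closed-form operation; the sketch below (the function names and the particular sets are illustrative choices, not taken from the paper) shows the Euclidean projection onto a box and onto a Euclidean ball:

```python
import numpy as np

def project_box(v, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n: clip each coordinate."""
    return np.clip(v, lo, hi)

def project_ball(v, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius}:
    rescale v only if it lies outside the ball."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

v = np.array([2.0, -3.0, 0.5])
print(project_box(v, -1.0, 1.0))   # each entry clipped to [-1, 1]
print(project_ball(np.array([3.0, 4.0])))  # rescaled to unit norm
```

Both operations cost O(n), which is what makes projection-based splitting methods attractive on such sets.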
3. New Inexact Parallel Splitting Method (NIPSM)
For the convenience of later analysis, we denote
Now we present the implementation steps of the algorithm NIPSM.
Step 0 (Initialization). Let , , , .
Step 1 (Parallel prediction). Generate the predictor as follows. Solve via , where ; in parallel, solve via , where ; then update via .
Step 2 (Contraction). Form , where , and form , where , with .
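The prediction–correction structure above can be illustrated on a toy instance. The sketch below is only a minimal illustration, not the paper's exact NIPSM formulas: the instance min ½(x−p)² + ½(y−q)² s.t. x + y = b (so both subproblems have closed-form solutions), the fixed penalty `beta`, and the fixed relaxation factor `gamma` are all assumptions. It solves both subproblems in parallel from the previous iterate (Jacobi-type splitting) and then applies a damped correction step toward the predictor:

```python
# Toy instance: min 0.5*(x - p)^2 + 0.5*(y - q)^2  s.t.  x + y = b
# (coupling matrices are identities), so the subproblems are closed-form.
p, q, b = 1.0, 2.0, 1.0
beta = 1.0      # penalty parameter (assumed fixed here)
gamma = 0.5     # relaxation/damping factor for the correction step

x, y, lam = 0.0, 0.0, 0.0
for _ in range(100):
    # Parallel (Jacobi-type) prediction: both subproblems use the
    # previous iterate (x, y, lam), so they can be solved simultaneously.
    x_t = (p + lam + beta * (b - y)) / (1.0 + beta)
    y_t = (q + lam + beta * (b - x)) / (1.0 + beta)
    lam_t = lam - beta * (x_t + y_t - b)
    # Correction step: move from the current point toward the predictor.
    x += gamma * (x_t - x)
    y += gamma * (y_t - y)
    lam += gamma * (lam_t - lam)

print(x, y, lam)  # approaches the KKT point (0, 1, -1)
```

Without the correction step, the plain parallel (Jacobi) update oscillates on this instance; the damped correction is what restores convergence, which mirrors the role of the contraction step in NIPSM.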
Remark 1. A parameter satisfying the stated conditions can always be found because and are Lipschitz continuous.
Remark 2. The quantity is a suitable step length for the correction step of the algorithm, as we prove below.
4. Convergence Analysis of the Proposed NIPSM
Lemma 1. For a given point and , the sequence generated by NIPSM satisfies
Proof. Combining (9)–(13), Lemma 1 follows immediately; we omit the detailed proof and refer the reader to the corresponding proof in [22].
Lemma 2. For a given point , defined in (19) satisfies
Proof. By the Cauchy–Schwarz inequality, we have

From the definition of in (19) and the above formula, the conclusion follows immediately.
Lemma 3. For a given point and , the sequences generated by (9)–(13) satisfy
Proof. By and , we have

Then,

The second inequality holds because the mapping is monotone on .
Lemma 3 shows that is a descent direction of . Next, we show how to choose an optimal step size.
From (15), one can obtain that

where denotes the eigenvalues of the matrix , and we know that .
For the convenience of analysis, we denote . It follows that

By Lemma 2,

To explain why we choose as defined in (18), we let
Lemma 4. Let be an arbitrary point in . We have
Proof. Lemma 4 shows that the step length should be chosen as in (18). To speed up the iteration in practice, we generally introduce a relaxation factor .
Theorem 1. Let the sequence be generated by NIPSM with . Then the following conclusions hold:
Then,

and the algorithm converges to , which is a solution of (1).
Proof. By the preceding lemmas, we know that

Therefore, assertion (22) is proved, and we have

Theorem 1 shows that, for any given tolerance, the algorithm terminates in finitely many steps and converges to a solution of model (1).
5. Numerical Results
Consider the matrix correction problem

where and are two given symmetric matrices.
Obviously, problem (36) can be rewritten in the following form:
We choose , and , where denotes the matrix dimension. Table 1 reports the number of iterations and the CPU time for both the inexact projection and contraction method (IPCM) of [22] and the proposed method (NIPSM).
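Since the precise experimental model is not fully reproduced above, the following is only a sketch of a standard building block for matrix correction experiments of this kind: the Frobenius-norm projection of a symmetric matrix onto the positive semidefinite cone, computed by zeroing out negative eigenvalues. The function name `project_psd` and the random test matrix are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def project_psd(C):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    eigendecompose and clamp the negative eigenvalues to zero."""
    C = (C + C.T) / 2.0                  # symmetrize to guard against round-off
    w, Q = np.linalg.eigh(C)             # ascending eigenvalues, orthonormal Q
    return (Q * np.maximum(w, 0.0)) @ Q.T  # Q @ diag(max(w, 0)) @ Q.T

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
C = (M + M.T) / 2.0                      # a given symmetric (indefinite) matrix
X = project_psd(C)
print(np.linalg.eigvalsh(X).min() >= -1e-10)  # True: X is PSD
```

With such a projection in hand, each subproblem of a splitting method applied to a matrix correction model reduces to one eigendecomposition per iteration, which is what makes these problems a natural test case for projection-based algorithms.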
6. Conclusion
In this paper, we propose an inexact parallel splitting algorithm for problems with two separable operators. Numerical results show that the proposed method is efficient, and its advantage becomes more pronounced as the problem scale grows. The parameter-adjustment technique of [21] could also be applied to the algorithm of this paper. Extending the algorithm to convex optimization problems with more than two separable operators is a direction for future research.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.