Abstract

This study investigates a generalized variational inequality problem. We propose a new iterative algorithm and carry out its convergence analysis. Using this newly proposed iterative method, we approximate the common solution of the generalized variational inequality problem and the fixed points of a nonexpansive mapping. A numerical example is presented to verify our existence result. Further, we demonstrate that the considered iterative algorithm converges faster than the normal S-iterative scheme. Furthermore, we apply the proposed iterative algorithm to approximate the solutions of a convex minimization problem and a split feasibility problem.

1. Introduction

Throughout this study, we presume that $\mathcal{H}$ is a real Hilbert space equipped with the norm $\|\cdot\|$ induced by the inner product $\langle \cdot, \cdot \rangle$. Let $\mathcal{C}$ be a nonempty closed convex subset of $\mathcal{H}$ and $T, g: \mathcal{H} \to \mathcal{H}$ be nonlinear mappings. The generalized nonlinear variational inequality, introduced by Noor [18], is to locate a point $u \in \mathcal{H}$ with $g(u) \in \mathcal{C}$ such that
$$\langle Tu, g(v) - g(u) \rangle \geq 0, \quad \text{for all } v \in \mathcal{H} \text{ with } g(v) \in \mathcal{C}. \quad (1)$$
We denote the set of solutions of (1) by $\Omega$.

If $g = I$, the identity mapping, then the generalized nonlinear variational inequality (1) reduces to the classical variational inequality studied by Stampacchia [23], which is to locate a point $u \in \mathcal{C}$ such that
$$\langle Tu, v - u \rangle \geq 0, \quad \text{for all } v \in \mathcal{C}.$$

If $\mathcal{C}^{*} = \{u \in \mathcal{H} : \langle u, v \rangle \geq 0, \ \forall v \in \mathcal{C}\}$ is the dual cone of a convex cone $\mathcal{C}$, then the generalized nonlinear variational inequality (1) coincides with the generalized nonlinear complementarity problem, which is to locate a point $u \in \mathcal{H}$ such that
$$g(u) \in \mathcal{C}, \quad Tu \in \mathcal{C}^{*}, \quad \text{and} \quad \langle Tu, g(u) \rangle = 0.$$

It is worth noting that variational inequalities, a novel and remarkable extension of variational principles, provide a well-organized, unified framework for solving a wide range of nonlinear problems arising in optimization, economics, physics, engineering science, operations research, and control theory; see, for example, [2, 8, 15, 20, 21, 24, 26, 33] and the references cited therein.

Next, we recall the following definitions for a nonlinear mapping $T: \mathcal{H} \to \mathcal{H}$.

Definition 1. The mapping $T$ is said to be
(i) $\alpha$-inverse strongly monotone (or $\alpha$-cocoercive) if there exists a constant $\alpha > 0$ such that
$$\langle Tx - Ty, x - y \rangle \geq \alpha \|Tx - Ty\|^{2}, \quad \text{for all } x, y \in \mathcal{H};$$
(ii) $\delta$-Lipschitz continuous if there exists a constant $\delta > 0$ such that
$$\|Tx - Ty\| \leq \delta \|x - y\|, \quad \text{for all } x, y \in \mathcal{H}.$$

For $\delta = 1$, $T$ is nonexpansive, and if $\delta \in (0, 1)$, then $T$ is a contraction. Note that an $\alpha$-inverse strongly monotone mapping is $\frac{1}{\alpha}$-Lipschitz continuous.
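The last assertion follows in one line from the Cauchy–Schwarz inequality:

```latex
% For an alpha-inverse strongly monotone T and any x, y:
\alpha \|Tx - Ty\|^{2}
  \le \langle Tx - Ty,\, x - y \rangle
  \le \|Tx - Ty\|\,\|x - y\|,
\qquad \text{hence} \qquad
\|Tx - Ty\| \le \frac{1}{\alpha}\,\|x - y\|.
```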

It is customary to mention that variational inequalities, variational inclusions, and related optimization problems can be posed as fixed-point problems. This alternative formulation plays a dominant role in the study of variational inequalities and nonlinear problems via fixed-point iterative methods.

Lemma 1. Let $P_{\mathcal{C}}$ be the projection mapping of $\mathcal{H}$ onto $\mathcal{C}$. For a given $x \in \mathcal{H}$, $u = P_{\mathcal{C}}x$ satisfies the inequality
$$\langle x - u, v - u \rangle \leq 0, \quad \text{for all } v \in \mathcal{C}.$$
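As a quick numerical sanity check of Lemma 1 (a sketch of our own, not part of the original development), note that the projection onto the box $\mathcal{C} = [0,1]^{n}$ is coordinatewise clipping, and the characterizing inequality can be tested on random points:

```python
import random

def project_box(x, lo=0.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^n is coordinatewise clipping."""
    return [min(max(xi, lo), hi) for xi in x]

def inner(u, v):
    """Standard inner product on R^n."""
    return sum(ui * vi for ui, vi in zip(u, v))

random.seed(0)
x = [random.uniform(-3.0, 3.0) for _ in range(5)]  # arbitrary point of H = R^5
u = project_box(x)                                  # u = P_C(x)

# Lemma 1: <x - u, v - u> <= 0 for every v in C.
for _ in range(1000):
    v = [random.uniform(0.0, 1.0) for _ in range(5)]  # random point of C
    assert inner([xi - ui for xi, ui in zip(x, u)],
                 [vi - ui for vi, ui in zip(v, u)]) <= 1e-12
```

Replacing the box by any other closed convex set only changes `project_box`; the inequality is the defining property of the projection.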

Note that the projection mapping $P_{\mathcal{C}}$ is nonexpansive [16]. For more details on the projection mapping $P_{\mathcal{C}}$, we refer to [12]. By utilizing Lemma 1, the generalized nonlinear variational inequality (1) can be recast as a fixed-point problem as follows:

Lemma 2 (see [17]). Let $P_{\mathcal{C}}$ be the projection mapping. For any $\rho > 0$, $u$ solves the generalized nonlinear variational inequality (1) if and only if
$$g(u) = P_{\mathcal{C}}[g(u) - \rho Tu]. \quad (7)$$

Relation (7) can be rewritten as
$$u = u - g(u) + P_{\mathcal{C}}[g(u) - \rho Tu].$$

Let $S: \mathcal{C} \to \mathcal{C}$ be a nonexpansive mapping and $F(S)$ denote the set of fixed points of $S$. If $u \in F(S) \cap \Omega$, where $\Omega$ is the solution set of (1), then
$$u = Su = S[u - g(u) + P_{\mathcal{C}}(g(u) - \rho Tu)]. \quad (9)$$

It is significant to identify the better rate of convergence when two or more iterative algorithms converge to the same point for a given problem. We recall the following concepts, which are versatile tools for comparing the convergence rates of different iterative methods.

Definition 2 (see [3]). Let $\{a_{n}\}$ and $\{b_{n}\}$ be two real sequences converging to $a$ and $b$, respectively. Suppose that $l = \lim_{n \to \infty} \frac{|a_{n} - a|}{|b_{n} - b|}$ exists. Then,
(i) $\{a_{n}\}$ converges faster than $\{b_{n}\}$ if $l = 0$;
(ii) $\{a_{n}\}$ and $\{b_{n}\}$ converge with identical rates if $0 < l < \infty$.

Definition 3 (see [3]). Let $\{x_{n}\}$ and $\{y_{n}\}$ be two sequences converging to the same fixed point $p$, and let $\{a_{n}\}$ and $\{b_{n}\}$ be two sequences of positive real numbers converging to 0 such that $\|x_{n} - p\| \leq a_{n}$ and $\|y_{n} - p\| \leq b_{n}$ for all $n$. Then, $\{x_{n}\}$ converges to $p$ faster than $\{y_{n}\}$ if $\{a_{n}\}$ converges faster than $\{b_{n}\}$.
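Definition 2 can be illustrated numerically. The following sketch (our own toy example, not from the paper) compares the Picard and Mann iterations on the contraction $Tx = x/2$: the error sequences are $2^{-n}$ and $(3/4)^{n}$, so their ratio $(2/3)^{n} \to 0$, and Picard converges faster in the sense of Definition 2:

```python
def T(x):
    """A simple contraction on R with unique fixed point p = 0."""
    return 0.5 * x

p = 0.0
x_picard = x_mann = 1.0
ratios = []
for n in range(30):
    x_picard = T(x_picard)                    # Picard: x_{n+1} = T x_n
    x_mann = 0.5 * x_mann + 0.5 * T(x_mann)   # Mann with alpha_n = 1/2
    ratios.append(abs(x_picard - p) / abs(x_mann - p))

# The error ratio (2/3)^n tends to 0, i.e., l = 0 in Definition 2(i).
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-4
```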

Lemma 3 (see [4]). Let $\{a_{n}\}$ and $\{b_{n}\}$ be nonnegative sequences of real numbers satisfying
$$a_{n+1} \leq \lambda a_{n} + b_{n},$$
where $\lambda \in [0, 1)$ and $\lim_{n \to \infty} b_{n} = 0$. Then, $\lim_{n \to \infty} a_{n} = 0$.

Lemma 4 (see [31]). Let $\{a_{n}\}$, $\{b_{n}\}$, and $\{c_{n}\}$ be nonnegative sequences of real numbers satisfying
$$a_{n+1} \leq (1 - \lambda_{n}) a_{n} + \lambda_{n} b_{n} + c_{n},$$
where $\{\lambda_{n}\} \subset [0, 1]$ with $\sum_{n=0}^{\infty} \lambda_{n} = \infty$, $\lim_{n \to \infty} b_{n} = 0$, and $\sum_{n=0}^{\infty} c_{n} < \infty$. Then, $\lim_{n \to \infty} a_{n} = 0$.

Mann, Ishikawa, and Halpern iterative methods are fundamental tools for solving fixed-point problems of nonexpansive mappings. In the recent past, a number of fixed-point iterative methods have been constructed and implemented for various classes of nonlinear mappings [2, 9, 10, 19, 22, 25, 28–30, 34]. Agarwal et al. [1] introduced the S-iteration method, which converges faster than some well-known iterative algorithms, such as Mann, Ishikawa, and Picard, for contraction as well as nonexpansive mappings. Owing to its superior convergence rate, it has attracted a number of researchers to the study of fixed-point problems, minimization problems, variational inclusions, variational inequalities, and alternate point problems in different settings. In [18], Noor utilized formulation (9) to propose the following iterative algorithm: where $\{\alpha_{n}\}$ is a sequence in $[0, 1]$. The author proved strong convergence of the proposed iterative algorithm. Furthermore, it is well known that the normal S-iterative algorithm converges faster than the Mann and Picard iterative algorithms. Owing to its uncomplicated nature and faster convergence rate, Gursoy et al. [14] investigated the following normal S-iterative algorithm to examine (1):
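For intuition only, here is a minimal sketch of the normal S-iteration of Sahu for a standalone mapping $T$ (our own toy contraction; the specific operator used in [14] for problem (1) is not reproduced here), together with a Mann comparison:

```python
def T(x):
    """Toy contraction on R with fixed point p = 1."""
    return (x + 2.0) / 3.0

p, alpha = 1.0, 0.5
x_s = x_mann = 5.0
for n in range(25):
    # Mann iteration:      x_{n+1} = (1 - a_n) x_n + a_n T x_n
    x_mann = (1 - alpha) * x_mann + alpha * T(x_mann)
    # Normal S-iteration:  x_{n+1} = T((1 - a_n) x_n + a_n T x_n)
    x_s = T((1 - alpha) * x_s + alpha * T(x_s))

# Each normal S step applies T once more, so its error shrinks much faster.
assert abs(x_s - p) < abs(x_mann - p)
assert abs(x_s - p) < 1e-10
```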

Recently, Ullah and Arshad [27] introduced a more efficient iterative algorithm, called the K-iterative method, for Suzuki's generalized nonexpansive mappings as follows: where $\{\alpha_{n}\}$ is a sequence in $(0, 1)$. They analyzed its convergence and showed that their iterative procedure converges faster than the Picard-S [13] and S-iteration [1] processes. In recent work, Garodia and Uddin [11] developed a new iterative algorithm for Suzuki's generalized nonexpansive mappings as follows: where $\{\alpha_{n}\}$ is a sequence in $(0, 1)$. The authors approximated fixed points and inspected the convergence. They also proved that the proposed iterative method converges faster than the K-iterative method.

Stimulated by the work discussed in the above-mentioned references, in this study we investigate algorithm (15) to approximate the common solution of the fixed-point problem for a nonexpansive mapping and the generalized nonlinear variational inequality (1) as follows: where $\{\alpha_{n}\}$ is a sequence in $(0, 1)$ satisfying certain assumptions. We analyze the strong convergence of the proposed iterative algorithm (16) under some mild assumptions. We also pose a modified form of our iterative algorithm (16) to investigate convex optimization and split feasibility problems. The theoretical findings are validated by an illustrative numerical example. Our existence and convergence results can be seen as generalizations and refinements of some known results.

2. Convergence Results

Theorem 1. Let $T, g: \mathcal{H} \to \mathcal{H}$ be $\alpha$- and $\beta$-inverse strongly monotone mappings, respectively, and let $S: \mathcal{C} \to \mathcal{C}$ be a nonexpansive mapping such that $F(S) \cap \Omega \neq \emptyset$, where $F(S)$ is the fixed-point set of $S$ and $\Omega$ is the solution set of (1). Suppose that the assumption holds, where . Then, the iterative sequence defined by (16) converges strongly to $u \in F(S) \cap \Omega$ with the following error estimates: where

Proof. Note that Since $T$, being $\alpha$-inverse strongly monotone, is $\frac{1}{\alpha}$-Lipschitz continuous, the mappings and are nonexpansive. Then, from (16) and (20), we obtain Since $T$ is an $\alpha$-inverse strongly monotone mapping, we have Also, $g$ is a $\beta$-inverse strongly monotone mapping; then, we have Thus, from (21)–(23), we have where $\theta$ is the constant defined by (19), and from (17), we have $\theta < 1$. Again, following the same steps (21)–(24) and from (16), we obtain Next, we estimate which amounts to saying Since $\theta \in (0, 1)$, we get $\theta^{n} \to 0$ as $n \to \infty$. By repeating the process in this fashion, we obtain which gives that $\{x_{n}\}$ converges strongly to $u$.

Now, we exemplify the existence of a solution.

Example 1. Let be equipped with the norm and inner product . Let be defined by Then, for all , observe that Then, $T$ and $g$ are 2- and -inverse strongly monotone mappings, respectively, and $S$ is a nonexpansive mapping. One can easily verify that is the unique fixed point of $S$. Also, Thus, we have $F(S) \cap \Omega \neq \emptyset$.

Theorem 2. Let $\mathcal{H}$ be a real Hilbert space and $\mathcal{C}$ be a nonempty closed convex subset of $\mathcal{H}$. Let $T$, $g$, and $S$ be the same as defined in Theorem 1. Let and be the sequences defined by (13) and (16), respectively. Suppose that (17) holds and . Then, the following statements hold:
(i) If is bounded and , then the sequence converges strongly to 0 with the following error estimates: converges strongly to .
(ii) If converges strongly to , then converges strongly to 0 with the following error estimates:

Proof. (i) It follows from Theorem 1 that . Next, we prove that . Following (13) and (16) and the steps as in (21)–(24), we obtain where $\theta$ is the constant appearing in (19). Again, utilizing (13), (16), and (34), we have Let , and . It follows from the assumption of the theorem that is bounded; therefore, is also bounded. Then, there exists a constant , such that . Since and is bounded, we have as , i.e., , which amounts to saying that . Thus, all the assumptions of Lemma 4 are fulfilled. Hence, and . Thus, we have .
(ii) Next, we show that . Since converges to , then, following the same arguments as in (34) and (35), we obtain Let . By the assumption, converges to , and utilizing the fact that is bounded, we obtain as . Thus, all the assumptions of Lemma 3 are fulfilled. Hence, . Also, we know that . Thus, . Hence, as .

Theorem 3. Let $\mathcal{H}$ be a real Hilbert space and $\mathcal{C}$ be a closed convex subset of $\mathcal{H}$. Suppose $T$, $g$, and $S$ are the same as in Theorem 1. Let and be the sequences defined by (13) and (16), respectively. Suppose that assumption (17) holds and . If , then converges faster than to , such that .

Proof. It follows from (27) that Since is a sequence in , we can choose a constant , such that . Then, By repeating the process, we obtain Also, it follows from (13) that By following the arguments discussed in (21)–(24), we have Also, By combining (41) and (42), we get Since is a sequence in , we can choose a constant , such that . Then, Thus, by repeating the process, we obtain Set and ; then, Hence, converges faster than .

3. Applications

3.1. Convex Minimization Problem

Now, we solve a convex minimization problem as an application of Theorem 1.

Let $\mathcal{C}$ be a closed convex subset of a real Hilbert space $\mathcal{H}$, let $P_{\mathcal{C}}$ be the projection onto $\mathcal{C}$, and let $f: \mathcal{C} \to \mathbb{R}$ be a convex, Fréchet differentiable mapping. We consider the following convex minimization problem: locate $u \in \mathcal{C}$ such that
$$f(u) = \min_{v \in \mathcal{C}} f(v). \quad (47)$$

Clearly, $u \in \mathcal{C}$ is a solution of (47) if and only if
$$\langle \nabla f(u), v - u \rangle \geq 0, \quad \text{for all } v \in \mathcal{C}.$$

More precisely, $u$ solves problem (47) if and only if $u$ is a fixed point of the projection mapping, i.e.,
$$u = P_{\mathcal{C}}[u - \lambda \nabla f(u)], \quad \lambda > 0,$$
where $\nabla f$ is the gradient of the mapping $f$. This formulation is known as the gradient projection, and it plays a key role in solving problem (47). So far, several iterative methods have been employed to solve minimization problems [7, 26, 32]. By taking $T = \nabla f$ and assuming $g = I$, the identity mapping, we propose the following modified gradient projection algorithm for solving (47): where $\{\alpha_{n}\}$ is a sequence in $(0, 1)$. Now, we employ the proposed algorithm (50) to approximate the solution of (47).
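A minimal numerical sketch of the classical gradient projection step $u \leftarrow P_{\mathcal{C}}[u - \lambda \nabla f(u)]$ follows (our own toy instance: $f(x) = \frac{1}{2}\|x - b\|^{2}$ over the box $\mathcal{C} = [0,1]^{3}$; this is not the paper's algorithm (50), whose display is not reproduced here):

```python
def grad_f(x, b):
    """Gradient of f(x) = 0.5 * ||x - b||^2; it is 1-inverse strongly monotone."""
    return [xi - bi for xi, bi in zip(x, b)]

def project_box(x, lo=0.0, hi=1.0):
    """Projection onto C = [lo, hi]^n is coordinatewise clipping."""
    return [min(max(xi, lo), hi) for xi in x]

b = [2.0, -1.0, 0.3]   # unconstrained minimizer (partly outside the box)
x = [0.5, 0.5, 0.5]    # starting point in C
lam = 0.5              # step size lambda in (0, 2*alpha) with alpha = 1
for _ in range(100):
    g = grad_f(x, b)
    x = project_box([xi - lam * gi for xi, gi in zip(x, g)])

# For this f, the constrained minimizer over C is the projection of b onto C.
assert all(abs(xi - mi) < 1e-8 for xi, mi in zip(x, project_box(b)))
```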

Theorem 4. Let $\mathcal{C}$ be a nonempty closed convex subset of a real Hilbert space $\mathcal{H}$. Let $f$ be a convex, Fréchet differentiable mapping whose gradient $\nabla f$ is an inverse strongly monotone mapping. Suppose that the convex minimization problem (47) has a solution and that condition (17) holds. Then, the sequence generated by (50) converges strongly to $u$, which solves the convex minimization problem (47), with the following error estimates: where

Proof. The desired conclusion is accomplished by taking and in Theorem 1.

Example 2. Let . Then, is a Hilbert space given by Consider a closed convex subset of . Define by . Then, is the unique minimum of the convex function , and is Fréchet differentiable at . The gradient is evaluated as . Then, for all , we get i.e., is inverse strongly monotone. Also, for . Thus, all the assumptions of Theorem 4 are satisfied, and for , the sequence generated by (50) is given as where . Then, the sequence generated by (50) converges to the zero function.

3.2. Split Feasibility Problem

This subsection is devoted to the utilization of Theorem 1 to examine a split feasibility problem (SFP). Let $\mathcal{C}$ and $\mathcal{Q}$ be nonempty closed convex subsets of real Hilbert spaces $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, respectively. Let $A: \mathcal{H}_{1} \to \mathcal{H}_{2}$ be a bounded linear operator. The SFP is to locate a point $u$ such that
$$u \in \mathcal{C} \quad \text{and} \quad Au \in \mathcal{Q}. \quad (56)$$

Let $\Gamma = \{u \in \mathcal{C} : Au \in \mathcal{Q}\}$ denote the solution set of SFP (56).

A class of inverse problems can be solved by using the SFP; see, for example, [6]. In [32], Xu established the relationship between SFP (56) and a fixed-point problem: for $\gamma > 0$, $u$ solves SFP (56) if and only if
$$u = P_{\mathcal{C}}[u - \gamma A^{*}(I - P_{\mathcal{Q}})Au].$$
Byrne [5] posed the following iterative algorithm (the CQ algorithm) for solving SFP (56):
$$x_{n+1} = P_{\mathcal{C}}[x_{n} - \gamma A^{*}(I - P_{\mathcal{Q}})Ax_{n}],$$
where $\gamma \in (0, 2/\|A\|^{2})$, $A^{*}$ is the adjoint of the operator $A$, and $P_{\mathcal{C}}$ and $P_{\mathcal{Q}}$ are the projections onto $\mathcal{C}$ and $\mathcal{Q}$, respectively.
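The CQ iteration can be sketched in the simplest scalar case (our own toy instance, not from [5]: $A$ is multiplication by 2, so $A^{*} = A$ and $\|A\|^{2} = 4$; $\mathcal{C} = [0,1]$ and $\mathcal{Q} = [1,2]$, hence the feasible set is $[1/2, 1]$):

```python
def proj(x, lo, hi):
    """Projection of a real number onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

A = lambda x: 2.0 * x   # bounded linear operator on R, ||A||^2 = 4
A_adj = A               # the adjoint of scalar multiplication is itself
C = (0.0, 1.0)
Q = (1.0, 2.0)
gamma = 0.4             # step size gamma in (0, 2/||A||^2) = (0, 0.5)

x = 0.0                 # start outside the feasible set [0.5, 1]
for _ in range(200):
    # CQ step: x <- P_C[x - gamma * A*(A x - P_Q(A x))]
    residual = A(x) - proj(A(x), *Q)
    x = proj(x - gamma * A_adj(residual), *C)

assert C[0] <= x <= C[1]                   # x lies in C
assert abs(A(x) - proj(A(x), *Q)) < 1e-6   # A x lies in Q (up to tolerance)
```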

Note that the operator $P_{\mathcal{C}}[I - \gamma A^{*}(I - P_{\mathcal{Q}})A]$ with $\gamma \in (0, 2/\|A\|^{2})$ is nonexpansive. Now, we propose the following iterative algorithm to solve SFP (56): where $\{\alpha_{n}\}$ is a sequence in $(0, 1)$ and .

Theorem 5. Suppose that the solution set of SFP (56) is nonempty and condition (17) holds. Then, the sequence initiated in (59) converges weakly to $u$, which solves SFP (56), with the following error estimates: where

Proof. The desired conclusion follows by taking and in Theorem 1.

4. Conclusion

In this study, a new iterative algorithm (16) has been proposed and its convergence analysis has been carried out. Using this newly constructed iterative procedure, a common solution of the generalized variational inequality problem and the fixed points of a nonexpansive mapping has been investigated, and the theoretical findings have been verified by a numerical example. Furthermore, we have shown that our iterative algorithm converges faster than the normal S-iteration process for contraction mappings. Finally, we applied the newly constructed iterative algorithm to investigate a convex optimization problem and a split feasibility problem.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The first and fourth authors would like to thank the Deanship of Scientific Research, Prince Sattam bin Abdulaziz University, for supporting this work.