College of Mathematics and Information, China West Normal University, Nanchong 637002, China
yml2002cn@aliyun.com
Abstract
Gibali [J. Nonlinear Anal. Optim., 2015, 6(1): 41‒51] presented a self-adaptive subgradient extragradient projection method for solving variational inequalities without Lipschitz continuity, in which the next iterative point is obtained by projecting a vector onto a specific half-space. In this paper, we present new kinds of self-adaptive subgradient extragradient projection methods that use a new descent direction. With the help of the techniques in the method of He and Liao [J. Optim. Theory Appl., 2002, 112(1): 111‒128], we obtain a longer step-size for these algorithms and prove the global convergence of the generated sequence. Numerical results show that these subgradient extragradient projection methods are less dependent on the choice of the initial point, the dimension of the variational inequality, and the tolerance of accuracy than the known methods. Moreover, the new methods proposed in this paper outperform the method presented by Gibali with respect to the number of iterations and CPU time.
1 Introduction

The variational inequality problem considered in this paper is to find a point $x^* \in C$ such that
$$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \qquad (1.1)$$
where $C \subseteq \mathbb{R}^n$ is a non-empty closed convex set, $F \colon \mathbb{R}^n \to \mathbb{R}^n$ is a continuous map, and $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ are the inner product and the norm in the corresponding space, respectively. Denote by $S$ the solution set of the variational inequality (1.1), by $\operatorname{dist}(x, C)$ the distance from the point $x$ to the set $C$, and by $P_C(x)$ the projection of the point $x$ onto $C$. Since $C$ is a closed convex set, $P_C(x)$ exists uniquely and $\operatorname{dist}(x, C) = \|x - P_C(x)\|$.
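Since every method below is built from Euclidean projections, a minimal numerical illustration may help; the box constraint and all names in this sketch are ours, not from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box C = [lo, hi]^n (componentwise clipping)."""
    return np.clip(x, lo, hi)

x = np.array([2.0, -0.5])
p = project_box(x, 0.0, 1.0)            # P_C(x) = [1.0, 0.0]
print(p, np.linalg.norm(x - p))         # the projection and dist(x, C)
```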
The variational inequality model has been applied in economics, financial management, network planning, transportation, and game theory; see [1, 3, 16‒17]. Projection algorithms are easy to implement for solving variational inequality problems; regarding projection algorithms, readers may refer to the articles [6, 12, 14‒15, 19‒20, 22], the surveys [18, 23‒24], the monograph [4], and the references therein.
Censor et al. [2] proposed a projection algorithm for the case where F is Lipschitz continuous. When computing the new iterative point, this algorithm replaces the projection onto the non-empty closed convex set C with the projection onto a specific half-space. Since this half-space is constructed by using properties of the subdifferential, the algorithm is called the subgradient extragradient algorithm. Moreover, because the projection onto a half-space can be computed directly, this algorithm saves one projection computation per iteration compared with the extragradient projection algorithm [12].
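The half-space projection that makes this saving possible has a closed form; the following sketch (our notation) computes $P_H(x)$ for $H = \{z : \langle a, z\rangle \le b\}$.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form Euclidean projection onto H = {z : <a, z> <= b}."""
    viol = a @ x - b
    if viol <= 0.0:
        return x.copy()                  # x already lies in H
    return x - (viol / (a @ a)) * a      # shift back along the normal a

x = np.array([3.0, 1.0])
a = np.array([1.0, 0.0])                 # H = {z : z_1 <= 1}
print(project_halfspace(x, a, 1.0))      # -> [1. 1.]
```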
However, since it may not be easy to compute the Lipschitz constant of the map F, and in order to weaken the Lipschitz continuity assumption on F, Iusem and Svaiter [11] introduced an algorithm for solving pseudomonotone variational inequalities using a line search. This line search does not require the computation of projections, and the algorithm needs only two projections per iteration. However, the step-size produced by this line search may tend to zero. Wang et al. [22] proposed a better step-size rule for this line search. Similar line-search methods can also be found in [10] and [20].
To solve this kind of variational inequality problem, Han and Lo [6] adopted a different line search (this line search requires computing projections onto the feasible set C, and the number of projections is not fixed in advance). The advantage of this line search is that, under the assumption that F is Lipschitz continuous, the step-size can be shown to be bounded below by a constant strictly greater than 0 (see [6, Lemma 3.1]). Han and Lo [6] utilized this property to construct a self-adaptive projection algorithm whose next iterative point is obtained by projecting onto the closed convex set C. Similar self-adaptive algorithms can also be found in [3]. In addition, He and Liao [9] introduced a method to optimize the step-size of this type of algorithm.
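As a concrete illustration, here is a sketch of an Armijo-type step-size search in the spirit of [6]; the acceptance test $\beta\|F(x) - F(y)\| \le \nu\|x - y\|$ with $y = P_C(x - \beta F(x))$ is our reading, and the exact test in [6] may differ in its constants.

```python
import numpy as np

def han_lo_search(F, proj_C, x, r=1.0, l=0.5, nu=0.98, max_trials=60):
    """Shrink beta = r * l**m until beta*||F(x) - F(y)|| <= nu*||x - y||,
    where y = P_C(x - beta*F(x)).  One projection per trial."""
    Fx = F(x)
    beta = r
    for _ in range(max_trials):
        y = proj_C(x - beta * Fx)
        if beta * np.linalg.norm(Fx - F(y)) <= nu * np.linalg.norm(x - y):
            return beta, y               # accepted step-size and trial point
        beta *= l
    return beta, y                       # fallback after max_trials shrinks
```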
Gibali [5] proposed a self-adaptive subgradient extragradient projection algorithm for solving pseudomonotone, non-Lipschitz-continuous variational inequalities.
The descent direction of this algorithm is $\beta_k F(y^k)$, and under the following assumptions (collectively referred to as Assumption H):
(A) The solution set S is non-empty;
(B) F is pseudomonotone on the non-empty closed convex set C;
(C) F is continuous on $\mathbb{R}^n$,
the global convergence of this algorithm is proved. Li et al. [13] constructed a subgradient extragradient projection algorithm for solving pseudomonotone variational inequalities by using this descent direction.
In this paper, with the help of the line search method in [6], a class of self-adaptive subgradient extragradient projection algorithms for solving non-Lipschitz-continuous variational inequalities is proposed. The main differences between these algorithms and existing ones are as follows:
1) The descent directions of the algorithms in [5] and [13] are both $\beta_k F(y^k)$. This paper considers not only Algorithm 3.1, whose descent direction is $d(x^k, \beta_k) = x^k - y^k - \beta_k\big(F(x^k) - F(y^k)\big)$, but also a class of algorithms (Algorithm 4.1) using a non-negative combination of $d(x^k, \beta_k)$ and $\beta_k F(y^k)$ as the descent direction. See Note 3.1(b) and Note 4.1(b).
2) For these types of algorithms, the step-size is optimized by using the techniques in [9].
3) Under certain conditions, Algorithm 4.1 degenerates into the algorithm in [13], and thus generalizes the existing algorithms. In addition, under the same assumptions as in [5], it is proved that the infinite sequence generated by these algorithms converges to a solution of the variational inequality. Numerical experiments show that these algorithms are less affected by the dimension of the variational inequality problem, the accuracy tolerance, and the choice of the initial point. Moreover, in terms of the number of iterations, the number of projections, and the CPU time, the new algorithms outperform the algorithm of Gibali [5]. See Note 4.1.
2 Preliminaries
First, we introduce the definition of the projection operator, some of its properties, and some basic conclusions. For any non-empty closed convex set $\Omega \subseteq \mathbb{R}^n$, denote by $P_\Omega(x)$ the projection of the point x onto Ω, where $\|\cdot\|$ is the norm in $\mathbb{R}^n$.
Lemma 2.1 [4, 24] Let $\Omega \subseteq \mathbb{R}^n$ be a non-empty closed convex set. Then the following conclusions hold:
(i) $\langle P_\Omega(x) - x,\ y - P_\Omega(x) \rangle \ge 0$ for all $x \in \mathbb{R}^n$ and $y \in \Omega$;
(ii) $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|$ for all $x, y \in \mathbb{R}^n$;
(iii) $\|P_\Omega(x) - y\|^2 \le \|x - y\|^2 - \|P_\Omega(x) - x\|^2$ for all $x \in \mathbb{R}^n$ and $y \in \Omega$.
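These properties are easy to spot-check numerically; the snippet below verifies (iii) for a box (the test set-up is ours).

```python
import numpy as np

# Spot-check Lemma 2.1(iii) for Omega = [0, 1]^2:
# ||P(x) - y||^2 <= ||x - y||^2 - ||P(x) - x||^2 whenever y lies in Omega.
rng = np.random.default_rng(1)
P = lambda x: np.clip(x, 0.0, 1.0)
for _ in range(1000):
    x = 3.0 * rng.standard_normal(2)
    y = rng.uniform(0.0, 1.0, 2)         # y in Omega
    Px = P(x)
    assert np.sum((Px - y)**2) <= np.sum((x - y)**2) - np.sum((Px - x)**2) + 1e-12
print("Lemma 2.1(iii) holds on all sampled pairs")
```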
To give a necessary and sufficient condition for the solution of the variational inequality (1.1), denote the natural residual mapping by
$$r(x, \beta) := x - P_C\big(x - \beta F(x)\big), \quad \beta > 0. \qquad (2.1)$$
It is well known (Lemma 2.2) that $x \in S$ if and only if $r(x, \beta) = 0$ for some (equivalently, all) $\beta > 0$.
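In code, the residual and the induced stopping test are one line each; the normalization below follows (2.1) as reconstructed above.

```python
import numpy as np

def residual(F, proj_C, x, beta=1.0):
    """r(x, beta) = x - P_C(x - beta*F(x)); x solves the VI iff r(x, beta) = 0."""
    return x - proj_C(x - beta * F(x))

def is_solution(F, proj_C, x, tol=1e-10):
    return np.linalg.norm(residual(F, proj_C, x)) <= tol
```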
3 Algorithms

We first state the algorithm proposed in this paper.
Algorithm 3.1 Select an initial point $x^0 \in \mathbb{R}^n$ and parameters $\nu \in (0, 1)$, $l \in (0, 1)$, $\gamma \in (0, 2)$, and $r = 1$. Set $k := 0$.
Step 1: Compute $\beta_k = r l^{m_k}$ and $y^k = P_C(x^k - \beta_k F(x^k))$, where $m_k$ is the smallest non-negative integer m for which the following inequality holds (with $\beta = r l^m$ and $y = P_C(x^k - \beta F(x^k))$):
$$\beta\,\|F(x^k) - F(y)\| \le \nu\,\|x^k - y\|. \qquad (3.1)$$
Step 2: Compute $r(x^k, \beta_k) = x^k - y^k$. If $\|r(x^k, \beta_k)\| = 0$, stop; otherwise, go to the next step.
Step 3: Compute
$$x^{k+1} = P_{T_k}\big(x^k - \alpha_k d(x^k, \beta_k)\big), \qquad d(x^k, \beta_k) = x^k - y^k - \beta_k\big(F(x^k) - F(y^k)\big), \qquad \alpha_k = \gamma\,\frac{\langle x^k - y^k, d(x^k, \beta_k)\rangle}{\|d(x^k, \beta_k)\|^2},$$
where $T_k$ is a half-space:
$$T_k = \{x \in \mathbb{R}^n : \langle x^k - \beta_k F(x^k) - y^k,\ x - y^k\rangle \le 0\}.$$
Step 4: If
$$\beta_k\,\|F(x^k) - F(y^k)\| \le \frac{\nu}{2}\,\|x^k - y^k\|,$$
set $r := \beta_k / l$; otherwise, set $r := \beta_k$. Let $k := k + 1$ and go back to Step 1.
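Putting the steps together, the following is a compact sketch of one possible reading of Algorithm 3.1, under our reconstruction of the lost formulas: line search (3.1) as in [6], direction $d = x - y - \beta(F(x) - F(y))$ as in [9], relaxed step $\alpha_k = \gamma\langle x - y, d\rangle/\|d\|^2$, and the usual subgradient extragradient half-space $T_k$. The Step 4 update of r is a guess and is marked as such in the comments.

```python
import numpy as np

def algorithm_3_1(F, proj_C, x0, r=1.0, l=0.5, nu=0.98, gamma=1.9,
                  tol=1e-8, max_iter=100_000):
    """Sketch of a self-adaptive subgradient extragradient iteration
    (our reading of Algorithm 3.1; the details of Step 4 are a guess)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        beta = r
        while True:                                  # Step 1: line search (3.1)
            y = proj_C(x - beta * Fx)
            if (beta * np.linalg.norm(Fx - F(y)) <= nu * np.linalg.norm(x - y)
                    or beta < 1e-16):                # safeguard for the sketch
                break
            beta *= l
        if np.linalg.norm(x - y) <= tol:             # Step 2: stopping test
            return x, k
        Fy = F(y)
        d = x - y - beta * (Fx - Fy)                 # descent direction (He-Liao [9])
        alpha = gamma * ((x - y) @ d) / (d @ d)      # relaxed optimal step-size
        # Step 4 analogue (our guess): enlarge the trial step-size when the
        # line search accepted with a comfortable margin, otherwise keep it.
        margin = beta * np.linalg.norm(Fx - Fy) <= 0.5 * nu * np.linalg.norm(x - y)
        r = beta / l if margin else beta
        # Step 3: project x - alpha*d onto the half-space
        # T_k = {u : <a, u - y> <= 0} with normal a = x - beta*F(x) - y.
        z = x - alpha * d
        a = x - beta * Fx - y
        viol = a @ (z - y)
        x = z if viol <= 0.0 else z - (viol / (a @ a)) * a
    return x, max_iter
```

With box constraints as in Example 5.1 below, proj_C is simply np.clip; the same skeleton with the combined direction of Section 4 gives a sketch of Algorithm 4.1.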
The following notes illustrate the main differences between this algorithm and existing algorithms, as well as the specific calculation of the new iterative point.
Note 3.1 (a) The next iterative point of the algorithm is obtained by projecting onto the half-space $T_k$. Since the projection onto a half-space can be calculated directly (for the specific calculation, see Note 3.2), this algorithm requires one less projection per iteration than Algorithm 3.1 in [6]. This can reduce computing time when the projection onto the feasible set C is difficult to calculate.
(b) This algorithm adopts a new descent direction $d(x^k, \beta_k) = x^k - y^k - \beta_k\big(F(x^k) - F(y^k)\big)$, which is different from the descent direction $\beta_k F(y^k)$ in [5] and [13].
(c) This algorithm optimizes the step-size by using the method in [9], thus obtaining a relatively longer step-size.
Note 3.2 Assume $a^k := x^k - \beta_k F(x^k) - y^k$ and $z^k := x^k - \alpha_k d(x^k, \beta_k)$. Then $x^{k+1} = P_{T_k}(z^k)$ is calculated as follows:
(a) If $\langle a^k, z^k - y^k \rangle \le 0$, then $z^k \in T_k$, and we have $x^{k+1} = z^k$.
(b) If $\langle a^k, z^k - y^k \rangle > 0$, then $z^k \notin T_k$. In this case, it can be assumed that $a^k \ne 0$, and according to Lemma 2.5, we have
$$x^{k+1} = z^k - \frac{\langle a^k, z^k - y^k \rangle}{\|a^k\|^2}\, a^k.$$
The following lemmas show that Algorithm 3.1 is well defined.
Lemma 3.1 If condition (B) in Assumption H holds, then for any fixed positive numbers $r > 0$, $l \in (0, 1)$, and $\nu \in (0, 1)$, there exists a smallest non-negative integer m such that the following inequality holds:
$$r l^m\,\|F(x) - F(y_m)\| \le \nu\,\|x - y_m\|, \qquad y_m := P_C\big(x - r l^m F(x)\big). \qquad (3.2)$$
Proof Assume that (3.2) cannot be achieved within a finite number of steps. Then, for any positive integer m, we have
$$r l^m\,\|F(x) - F(y_m)\| > \nu\,\|x - y_m\|. \qquad (3.3)$$
If $x = y_m$ for some m, then both sides of (3.2) vanish, and (3.2) is naturally true. Therefore, it can be assumed that $x \ne y_m$ for all m. Note that $r l^m \to 0$ as $m \to \infty$, and we distinguish the following cases:
(i) If $x \in C$, then $y_m \to P_C(x) = x$ as $m \to \infty$. From the continuity of the projection operator and the assumption that F is a continuous mapping, it is obtained that $\|F(x) - F(y_m)\| \to 0$. Furthermore, dividing (3.3) by $r l^m$ yields
$$\nu\,\frac{\|x - y_m\|}{r l^m} < \|F(x) - F(y_m)\| \to 0.$$
On the other hand, for sufficiently large m, according to (2.2), the quantity $\|x - y_m\| / (r l^m)$ admits a positive lower bound. This contradicts (2.3).
(ii) If $x \notin C$, then $\operatorname{dist}(x, C) > 0$. According to (3.3), we have $r l^m\,\|F(x) - F(y_m)\| > \nu\,\|x - y_m\|$ for all m. By the continuity of F and $r l^m \to 0$, the left-hand side of this inequality tends to 0. But the right-hand side satisfies $\nu\,\|x - y_m\| \ge \nu\,\operatorname{dist}(x, C) > 0$, since $y_m \in C$. Thus, a contradiction is obtained. □
First, in Algorithm 3.1, for any $x \in C$, according to the definition of $T_k$ and Lemma 2.1(i), we have $\langle x^k - \beta_k F(x^k) - y^k,\ x - y^k\rangle \le 0$, so $C \subseteq T_k$.
Second, as is known from Lemma 2.2, if $x^k = y^k$, then $x^k$ is a solution of (1.1), and the algorithm stops at this point. Therefore, if $\{x^k\}$ is an infinite sequence generated by Algorithm 3.1, it can be assumed that $x^k \ne y^k$ for all k.
Lemma 3.2 [9] The infinite sequence $\{x^k\}$ generated by Algorithm 3.1 satisfies the following inequalities:
(i) $\|d(x^k, \beta_k)\| \le (1 + \nu)\,\|x^k - y^k\|$;
(ii) $\langle x^k - y^k,\ d(x^k, \beta_k)\rangle \ge (1 - \nu)\,\|x^k - y^k\|^2$.
Proof Based on the definition of $d(x^k, \beta_k)$, the triangle inequality, and (3.1), we have
$$\|d(x^k, \beta_k)\| \le \|x^k - y^k\| + \beta_k\,\|F(x^k) - F(y^k)\| \le (1 + \nu)\,\|x^k - y^k\|.$$
This proves (i).
By the Cauchy–Schwarz inequality and (3.1), we have
$$\langle x^k - y^k,\ d(x^k, \beta_k)\rangle = \|x^k - y^k\|^2 - \beta_k\,\langle x^k - y^k,\ F(x^k) - F(y^k)\rangle \ge (1 - \nu)\,\|x^k - y^k\|^2.$$
This proves (ii). □
In addition, since $x^k \ne y^k$, it can also be seen from Lemma 3.2(ii) that $\langle x^k - y^k,\ d(x^k, \beta_k)\rangle > 0$, and hence $\alpha_k > 0$.
4 Convergence of algorithms
Lemma 4.1 If Assumption H holds and $\{x^k\}$ is an infinite sequence generated by Algorithm 3.1, then for any fixed $x^* \in S$,
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \gamma(2 - \gamma)\,\frac{(1 - \nu)^2}{(1 + \nu)^2}\,\|x^k - y^k\|^2. \qquad (4.1)$$
Proof By Lemma 2.1(iii), combined with $x^* \in S \subseteq C \subseteq T_k$, for any fixed $x^* \in S$ we have
$$\|x^{k+1} - x^*\|^2 \le \|x^k - \alpha_k d(x^k, \beta_k) - x^*\|^2 - \|x^{k+1} - x^k + \alpha_k d(x^k, \beta_k)\|^2. \qquad (4.2)$$
Let $z^k := x^k - \alpha_k d(x^k, \beta_k)$. We have
$$\langle x^k - x^*,\ d(x^k, \beta_k)\rangle \ge \langle x^k - y^k,\ d(x^k, \beta_k)\rangle, \qquad (4.3)$$
where the inequality can be obtained from (2.5) and the pseudomonotonicity of F (in fact, it is known from the selection of $\alpha_k$ that $\alpha_k > 0$). It is noted that
$$\|z^k - x^*\|^2 = \|x^k - x^*\|^2 - 2\alpha_k\,\langle x^k - x^*,\ d(x^k, \beta_k)\rangle + \alpha_k^2\,\|d(x^k, \beta_k)\|^2 \qquad (4.4)$$
and $\|x^{k+1} - z^k\|^2 \ge 0$. According to (4.2)‒(4.4), we obtain
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - 2\alpha_k\,\langle x^k - y^k,\ d(x^k, \beta_k)\rangle + \alpha_k^2\,\|d(x^k, \beta_k)\|^2. \qquad (4.5)$$
To obtain a relatively long step-size, note that the decrease term on the right-hand side of inequality (4.5), namely $2\alpha_k\langle x^k - y^k, d(x^k, \beta_k)\rangle - \alpha_k^2\|d(x^k, \beta_k)\|^2$, is a quadratic function of $\alpha_k$, which attains its maximum value at
$$\alpha_k^* = \frac{\langle x^k - y^k,\ d(x^k, \beta_k)\rangle}{\|d(x^k, \beta_k)\|^2}.$$
Similar to the method in [9], a relaxation factor γ is introduced in this paper, where $\gamma \in (0, 2)$. Let $\alpha_k = \gamma\,\alpha_k^*$ in (4.5); combined with Lemma 3.2(i) and 3.2(ii), we obtain
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \gamma(2 - \gamma)\,\frac{\langle x^k - y^k,\ d(x^k, \beta_k)\rangle^2}{\|d(x^k, \beta_k)\|^2}.$$
Hence, (4.1) holds.
This completes the proof. □
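For completeness, the elementary computation behind the choice $\alpha_k = \gamma\,\alpha_k^*$ is the following (our notation, with $\varphi_k := \langle x^k - y^k, d(x^k, \beta_k)\rangle$):
$$2(\gamma\alpha_k^*)\,\varphi_k - (\gamma\alpha_k^*)^2\,\|d(x^k, \beta_k)\|^2 = \gamma(2 - \gamma)\,\frac{\varphi_k^2}{\|d(x^k, \beta_k)\|^2} \ge \gamma(2 - \gamma)\,\frac{(1 - \nu)^2}{(1 + \nu)^2}\,\|x^k - y^k\|^2,$$
where the last inequality uses Lemma 3.2(i) and (ii); since $\gamma \in (0, 2)$, the guaranteed decrease is strictly positive whenever $x^k \ne y^k$.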
Theorem 4.1 Assume that Assumption H holds and $\{x^k\}$ is an infinite sequence generated by Algorithm 3.1. Then the sequence $\{x^k\}$ converges to a solution of the variational inequality (1.1).
Proof Arbitrarily take a fixed $x^* \in S$. According to (4.1) and the values of the parameters $\gamma \in (0, 2)$ and $\nu \in (0, 1)$ in the algorithm, it is known that $\{\|x^k - x^*\|\}$ is a strictly monotonically decreasing sequence with a lower bound, so it converges. Thus, the sequence $\{x^k\}$ is bounded and
$$\lim_{k \to \infty} \|x^k - y^k\| = 0. \qquad (4.6)$$
Based on the definition of $y^k$ in Step 1 of Algorithm 3.1, the sequence $\{y^k\}$ is also bounded.
Let $\bar{x}$ be any cluster point of the sequence $\{x^k\}$. Then there exists an index set $K \subseteq \{0, 1, 2, \ldots\}$ such that $\lim_{k \in K,\, k \to \infty} x^k = \bar{x}$. First, we prove that
$$\lim_{k \in K,\, k \to \infty} \big\|x^k - P_C\big(x^k - F(x^k)\big)\big\| = 0. \qquad (4.7)$$
To this end, the following cases are discussed:
(a) If $\liminf_{k \in K} \beta_k =: \bar{\beta} > 0$, then for all sufficiently large $k \in K$, we have $\beta_k \ge \bar{\beta}/2 > 0$. Further, for any such k, we have
$$\big\|x^k - P_C\big(x^k - F(x^k)\big)\big\| \le \max\Big\{1, \frac{1}{\beta_k}\Big\}\,\big\|x^k - P_C\big(x^k - \beta_k F(x^k)\big)\big\| \le \Big(1 + \frac{2}{\bar{\beta}}\Big)\,\|x^k - y^k\|,$$
where the first inequality can be deduced from Lemma 2.3 (the monotonicity properties of $\|r(x, \beta)\|$ with respect to β), and from (4.6), it is known that the limit is zero. Thus, if $\liminf_{k \in K} \beta_k > 0$, (4.7) holds.
(b) If $\liminf_{k \in K} \beta_k = 0$, then there exists a subsequence, still denoted by $\{\beta_k\}_{k \in K}$, such that $\beta_k \to 0$. Write $u^k := P_C\big(x^k - (\beta_k / l) F(x^k)\big)$. Noting the definition of $\beta_k$ in Step 1, for $k \in K$ large enough (such that $m_k \ge 1$), the integer $m_k - 1$ violates the line search criterion (3.1), that is,
$$\frac{\beta_k}{l}\,\|F(x^k) - F(u^k)\| > \nu\,\|x^k - u^k\|, \qquad (4.8)$$
where the violation follows from the minimality of $m_k$ together with (2.2). In addition, based on the definition of the residual function in (2.1), the definition of $y^k$ in Step 1, and the non-expansiveness of the projection operator, we have
$$\|x^k - u^k\| \le \|x^k - y^k\| + \beta_k\Big(\frac{1}{l} - 1\Big)\,\|F(x^k)\|,$$
and from (4.6) and $\beta_k \to 0$ it is known that the limit is zero. Here we use the boundedness of the sequence $\{x^k\}$ and the continuity of F on $\mathbb{R}^n$ (so the sequence $\{F(x^k)\}$ is bounded, and $u^k \to \bar{x}$ along K as well). From this, combined with (4.8), Lemma 2.3, and the continuity of F on $\mathbb{R}^n$, it follows that
$$\nu\,\big\|x^k - P_C\big(x^k - F(x^k)\big)\big\| \le \nu\,\frac{l}{\beta_k}\,\|x^k - u^k\| < \|F(x^k) - F(u^k)\| \to 0 \quad (k \in K,\ k \to \infty),$$
which shows that (4.7) holds.
Based on the above, (4.7) holds in both cases.
Next, we prove the main conclusion of this paper.
Since $\{x^k\}$ is a bounded sequence, let $\bar{x}$ be one of its cluster points, with index set K as above. Due to the continuity of the mapping $x \mapsto x - P_C(x - F(x))$ and (4.7), it is known that $\bar{x} - P_C(\bar{x} - F(\bar{x})) = 0$. Then, by Lemma 2.2, $\bar{x} \in S$. In (4.1), replace $x^*$ with $\bar{x}$; it follows that $\{\|x^k - \bar{x}\|\}$ is a convergent sequence. Since $\bar{x}$ is a cluster point of $\{x^k\}$, $\{\|x^k - \bar{x}\|\}$ must converge to 0. Therefore, $\{x^k\}$ converges to $\bar{x}$. Thus, the proof of the theorem is completed. □
According to [7], a new search direction is considered:
$$d_k := t_1\,d(x^k, \beta_k) + t_2\,\beta_k F(y^k),$$
where $t_1 \ge 0$, $t_2 \ge 0$, and $t_1 + t_2 > 0$.
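A direct implementation of this combined direction (our reading of the formula, with $d(x^k, \beta_k)$ as in Algorithm 3.1) is:

```python
import numpy as np

def combined_direction(x, y, Fx, Fy, beta, t1=1.0, t2=1.0):
    """d_k = t1*d1 + t2*d2 (our reading of the direction from [7]):
    d1 = x - y - beta*(Fx - Fy)   # He-Liao direction [9], as in Algorithm 3.1
    d2 = beta*Fy                  # direction used in [5] and [13]
    """
    d1 = x - y - beta * (Fx - Fy)
    d2 = beta * Fy
    return t1 * d1 + t2 * d2
```

Setting t1 = 1, t2 = 0 recovers the direction of Algorithm 3.1, while t1 = 0, t2 = 1 recovers that of [5] and [13], matching Note 4.1 below.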
The following presents an algorithm based on the direction $d_k$.
Algorithm 4.1 Select an initial point $x^0 \in \mathbb{R}^n$ and parameters $\nu, l \in (0, 1)$, $\gamma \in (0, 2)$, $t_1, t_2 \ge 0$ with $t_1 + t_2 > 0$, and $r = 1$. Set $k := 0$.
Step 1: Compute $\beta_k = r l^{m_k}$ and $y^k = P_C(x^k - \beta_k F(x^k))$, where $m_k$ is the smallest non-negative integer m such that (3.1) holds.
Step 2: Compute $r(x^k, \beta_k) = x^k - y^k$. If $\|r(x^k, \beta_k)\| = 0$, stop; otherwise, proceed to the next step.
Step 3: Compute
$$x^{k+1} = P_{T_k}\big(x^k - \alpha_k d_k\big), \qquad \alpha_k = \gamma\,\frac{\langle x^k - y^k,\ d_k\rangle}{\|d_k\|^2},$$
where $d_k = t_1\,d(x^k, \beta_k) + t_2\,\beta_k F(y^k)$ and $T_k$ is the half-space defined as in Algorithm 3.1.
Step 4: If
$$\beta_k\,\|F(x^k) - F(y^k)\| \le \frac{\nu}{2}\,\|x^k - y^k\|,$$
set $r := \beta_k / l$; otherwise, set $r := \beta_k$. Let $k := k + 1$ and go back to Step 1.
The following notes are used to illustrate the relationship between Algorithm 4.1 and existing algorithms.
Note 4.1 (a) When $t_1 = 1$ and $t_2 = 0$, Algorithm 4.1 degenerates into Algorithm 3.1. Therefore, Algorithm 4.1 can be regarded as a generalization of Algorithm 3.1.
(b) When $t_1 = 0$, $t_2 = 1$, and $\gamma = 1$, the next iterative point of Algorithm 4.1 becomes
$$x^{k+1} = P_{T_k}\big(x^k - \alpha_k\,\beta_k F(y^k)\big), \qquad \alpha_k = \frac{\langle x^k - y^k,\ \beta_k F(y^k)\rangle}{\|\beta_k F(y^k)\|^2}.$$
At this point, Algorithm 4.1 degenerates into Algorithm 1 in [13]. Consequently, Algorithm 4.1 can also be regarded as a generalization of Algorithm 1 in [13].
Lemma 4.2 If Assumption H holds and $\{x^k\}$ is an infinite sequence generated by Algorithm 4.1, then for any fixed $x^* \in S$, there exists a constant $c > 0$, depending only on $\nu$, $\gamma$, $t_1$, and $t_2$, such that
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - c\,\|x^k - y^k\|^2. \qquad (4.9)$$
Proof By Lemma 2.1(iii) and $x^* \in S \subseteq C \subseteq T_k$, for arbitrarily taken $x^* \in S$, we have
$$\|x^{k+1} - x^*\|^2 \le \|x^k - \alpha_k d_k - x^*\|^2 - \|x^{k+1} - x^k + \alpha_k d_k\|^2. \qquad (4.10)$$
Let $z^k := x^k - \alpha_k d_k$. We have
$$\langle x^k - x^*,\ d(x^k, \beta_k)\rangle \ge \langle x^k - y^k,\ d(x^k, \beta_k)\rangle, \qquad (4.11)$$
which can be obtained from (2.3), (2.5), and $\alpha_k \ge 0$, and
$$\langle x^k - x^*,\ \beta_k F(y^k)\rangle \ge \langle x^k - y^k,\ \beta_k F(y^k)\rangle, \qquad (4.12)$$
which is derived from the pseudomonotonicity of F. Multiplying (4.11) by $t_1 \ge 0$ and (4.12) by $t_2 > 0$ and adding the results yields $\langle x^k - x^*,\ d_k\rangle \ge \langle x^k - y^k,\ d_k\rangle$.
According to (4.10)‒(4.12), we obtain
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - 2\alpha_k\,\langle x^k - y^k,\ d_k\rangle + \alpha_k^2\,\|d_k\|^2. \qquad (4.13)$$
The decrease term on the right-hand side of (4.13) is a quadratic function with respect to $\alpha_k$, which attains its maximum value at $\alpha_k^* = \langle x^k - y^k,\ d_k\rangle / \|d_k\|^2$. Substituting $\alpha_k = \gamma\,\alpha_k^*$ into (4.13), we obtain
$$\|x^{k+1} - x^*\|^2 \le \|x^k - x^*\|^2 - \gamma(2 - \gamma)\,\frac{\langle x^k - y^k,\ d_k\rangle^2}{\|d_k\|^2}.$$
Then (4.9) can be obtained through discussions similar to those in Lemma 4.1. □
Similarly, the following conclusion can be reached.
Theorem 4.2 If Assumption H holds and $\{x^k\}$ is an infinite sequence generated by Algorithm 4.1, then the sequence $\{x^k\}$ converges to a solution of the variational inequality (1.1).
5 Numerical experiments
This section presents numerical results for Algorithm 3.1 and Algorithm 4.1 on the following four simple examples, with comparisons against Algorithm 3.1 in [5] (referred to as the Gibali algorithm) and Algorithm 2.2 in [20] (referred to as the S‒S algorithm). All results are obtained by running MATLAB on a laptop with an Intel(R) Core(TM) i5-3210M CPU (dual core, 2.50 GHz) and 2.0 GB RAM. The parameters for Algorithm 3.1 and Algorithm 4.1 are: ν = 0.98, r = 1, l = 0.5, γ = 1.9, t1 = 1, t2 = 1. The parameters for the Gibali algorithm are the same as in [5], namely ε = 0.2 and β = 0.5. The parameters for the S‒S algorithm are the same as in [20], namely σ = 0.3, γ = 0.5, and θ = 4. The termination condition of the algorithms is $\|x^k - y^k\| \le \epsilon$; two tolerance levels, the stricter being $10^{-8}$, are used as stopping criteria. Denote by CPU the running time (in seconds), by iter the number of iterations, by np the total number of projections, and by nf the total number of function evaluations. The initial point is denoted by x0.
Example 5.1 This example is used in [14] to test algorithms for continuous monotone variational inequality problems; the feasible set C and the mapping F are taken as in [14].
Example 5.2 This example is used in [21] to test the algorithms for the nonlinear complementarity problem (NCP), where the mapping, the n × n matrix M, and the initial point $x^0$ are taken as in [21].
Example 5.3 This example is used in [9] to test the algorithms for the nonlinear complementarity problem, where $F(x) = Mx + q$ with $M = A^{\mathsf T}A + B + D$. Here, A is an n × n square matrix randomly generated from the interval (−5, 5), B is an anti-symmetric matrix generated in the same way, the vector q is uniformly generated from the interval (−500, 0), and $D = \operatorname{diag}(d_1, \ldots, d_n)$, where each $d_i$ is a random variable on the interval (0, 1). The initial point $x^0$ is chosen as in [9].
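A generator for this test problem might look as follows (the seed handling and the skew-symmetrization $S - S^{\mathsf T}$ are our choices):

```python
import numpy as np

def make_example_5_3(n, seed=0):
    """Random NCP test problem in the style of [9]: F(x) = M x + q."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, (n, n))
    S = rng.uniform(-5.0, 5.0, (n, n))
    B = S - S.T                               # anti-symmetric matrix
    D = np.diag(rng.uniform(0.0, 1.0, n))     # D = diag(d_1, ..., d_n)
    M = A.T @ A + B + D                       # positive definite, so F is monotone
    q = rng.uniform(-500.0, 0.0, n)
    F = lambda x: M @ x + q
    proj_C = lambda x: np.maximum(x, 0.0)     # NCP feasible set: R^n_+
    return F, proj_C
```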
Example 5.4 A nonlinear variational inequality problem. In this example, three cases are tested with the above four algorithms. The Mathiesen case was tested in [20]. Harker [8] defined and tested the 5-dimensional Harnash5 problem and the 10-dimensional Harnash10 problem. For the Mathiesen case, the initial point is taken as in [20]; the initial points for the other cases in this example are chosen correspondingly.
Note 5.1 Judging from the results of the numerical experiments, we have the following conclusions:
(A) Algorithm 4.1, Algorithm 3.1, the Gibali algorithm, and the S‒S algorithm can all find solutions of the variational inequality problems. However, the subgradient extragradient projection algorithms are less affected by the dimension of the problem, the accuracy tolerance, and the choice of the initial point, as shown in Tables 1 and 3. We believe this is mainly because the next iterative point of a subgradient extragradient projection algorithm is obtained by projecting onto a half-space, which does not require the condition $x^{k+1} \in C$. In contrast, the convergence analysis of the S‒S algorithm requires $x^{k+1} \in C$. Thus, when there is an error in the computer's calculation of the next iterative point, $\tilde{x}^{k+1} = x^{k+1} + \varepsilon_k$ ($\varepsilon_k$ is the error vector), it is not necessarily guaranteed that $\tilde{x}^{k+1} \in C$. This may affect the convergence of the algorithm, which is why the S‒S algorithm does not perform well when the stopping criterion is $10^{-8}$.
(B) In low-dimensional cases and under the looser stopping criterion, the S‒S algorithm outperforms the Gibali algorithm [5], demonstrating an advantage in the number of iterations. See Tables 1, 2, and 4.
(C) In terms of the number of iterations, the number of function evaluations, and the number of projections, both Algorithm 3.1 and Algorithm 4.1 outperform the algorithm in [5], as shown in Tables 1‒4.
References

[1] Bertsekas, D.P. and Gafni, E.M., Projection methods for variational inequalities with application to the traffic assignment problem, In: Nondifferential and Variational Techniques in Optimization (Sorensen, D.C. and Wets, R.J.-B., eds.), Math. Programming Stud., No. 17, Amsterdam: North-Holland, 1982, 139–159
[2] Censor, Y., Gibali, A. and Reich, S., Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 2012, 61(9): 1119–1132
[3] Chen, A., Lo, H.K. and Yang, H., A self-adaptive projection and contraction algorithm for the traffic assignment problem with path-specific costs, European J. Oper. Res., 2001, 135(1): 27–41
[4] Facchinei, F. and Pang, J.-S., Finite-dimensional Variational Inequalities and Complementarity Problems, Vol. I, Springer Series in Operations Research, New York: Springer-Verlag, 2003
[5] Gibali, A., A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces, J. Nonlinear Anal. Optim., 2015, 6(1): 41–51
[6] Han, D.R. and Lo, H.K., Two new self-adaptive projection methods for variational inequality problems, Comput. Math. Appl., 2002, 43(12): 1529–1537
[7] Han, D.R., Zhang, H.C., Qian, G. and Xu, L.L., An improved two-step method for solving generalized Nash equilibrium problems, European J. Oper. Res., 2012, 216(3): 613–623
[8] Harker, P.T., Accelerating the convergence of the diagonalization and projection algorithms for finite-dimensional variational inequalities, Math. Programming, Ser. A, 1988, 41(1): 29–59
[9] He, B.S. and Liao, L.Z., Improvements of some projection methods for monotone nonlinear variational inequalities, J. Optim. Theory Appl., 2002, 112(1): 111–128
[10] He, Y.R., A new double projection algorithm for variational inequalities, J. Comput. Appl. Math., 2006, 185(1): 166–173
[11] Iusem, A.N. and Svaiter, B.F., A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization, 1997, 42(4): 309–321
[12] Khobotov, E.N., Modification of the extra-gradient method for solving variational inequalities and certain optimization problems, USSR Comput. Math. Math. Phys., 1987, 27(5): 120–127
[13] Li, H., Yang, L. and Li, J., A subgradient extragradient projection method for pseudomonotone variational inequalities, J. China West Norm. Univ. (Nat. Sci.), 2016, 37(2): 189–194
[14] Malitsky, Yu., Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim., 2015, 25(1): 502–520
[15] Malitsky, Yu.V. and Semenov, V.V., An extragradient algorithm for monotone variational inequalities, Cybernet. Systems Anal., 2014, 50(2): 271–277
[16] Nagurney, A., Network Economics: A Variational Inequality Approach, revised second edition, Advances in Computational Economics, Vol. 10, Dordrecht: Springer Science+Business Media, 1999
[17] Nagurney, A. and Ramanujam, P., Transportation network policy modeling with goal targets and generalized penalty functions, Transport. Sci., 1996, 30(1): 3–13
[18] Noor, M.A., Some recent advances in variational inequalities, II: Other concepts, New Zealand J. Math., 1997, 26(2): 229–255
[19] Sibony, M., Méthodes itératives pour les équations et inéquations aux dérivées partielles non linéaires de type monotone, Calcolo, 1970, 7: 65–183
[20] Solodov, M.V. and Svaiter, B.F., A new projection method for variational inequality problems, SIAM J. Control Optim., 1999, 37(3): 765–776
[21] Sun, D.F., A projection and contraction method for the nonlinear complementarity problem and its extensions, Math. Numer. Sinica, 1994, 16(2): 183–194
[22] Wang, Y.J., Xiu, N.H. and Wang, C.Y., A new version of extragradient method for variational inequality problems, Comput. Math. Appl., 2001, 42(6/7): 969–979
[23] Wang, Y.J., Xiu, N.H. and Wang, C.Y., Unified framework of extragradient-type methods for pseudomonotone variational inequalities, J. Optim. Theory Appl., 2001, 111(3): 641–656
[24] Xiu, N.H. and Zhang, J.Z., Some recent advances in projection-type methods for variational inequalities, J. Comput. Appl. Math., 2003, 152(1/2): 559–585
© Higher Education Press 2025