New kinds of self-adaptive subgradient extragradient projection methods for solving pseudomonotone variational inequalities

Minglu YE, Yuncheng LIU

Front. Math. China, 2025, 20(4): 169–185. DOI: 10.3868/s140-DDD-025-0014-x

RESEARCH ARTICLE


Abstract

Gibali [J. Nonlinear Anal. Optim., 2015, 6(1): 41‒51] presented a self-adaptive subgradient extragradient projection method for solving variational inequalities without Lipschitz continuity, where the next iterative point is obtained by projecting a vector onto a specific half-space. In this paper, we present new kinds of self-adaptive subgradient extragradient projection methods that use a new descent direction. With the help of the techniques in the method of He and Liao [J. Optim. Theory Appl., 2002, 112(1): 111‒128], we obtain a longer step-size for these algorithms and prove the global convergence of the generated sequence. Numerical results show that these subgradient extragradient projection methods are less dependent on the choice of the initial point, the dimension of the variational inequalities, and the tolerance of accuracy than the known methods. Moreover, the new methods proposed in this paper outperform (with respect to the number of iterations and cpu-time) the method presented by Gibali.

Keywords

Variational inequalities / subgradient extragradient projection method / half-space / pseudomonotone / non-Lipschitz continuous

Cite this article

Minglu YE, Yuncheng LIU. New kinds of self-adaptive subgradient extragradient projection methods for solving pseudomonotone variational inequalities. Front. Math. China, 2025, 20(4): 169–185. DOI: 10.3868/s140-DDD-025-0014-x


1 Introduction

The variational inequality problem VI(F, C) considered in this paper is: find a point $x^* \in C$ such that

$$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1.1}$$

where $C \subseteq \mathbb{R}^n$ is a non-empty closed convex set, $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous map, and $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the inner product and the norm in the corresponding space, respectively. Denote by $S$ the solution set of VI(F, C), by $d(x, C)$ the distance from the point $x$ to the set $C$, and by $P_C(x)$ the projection of the point $x$ onto $C$. Since $C$ is a closed convex set, $P_C(x)$ exists and is unique, and $\|x - P_C(x)\| = d(x, C)$.

The variational inequality model has been applied in the research of economics, financial management, network planning, transportation, and game theory. See [1, 3, 16-17]. The projection algorithm is an easy-to-implement algorithm for solving variational inequality problems. Regarding the projection algorithm, readers can refer to articles [6, 12, 14-15, 19-20, 22], review literature [18, 23-24], monograph [4], and relevant references.

Censor et al. [2] proposed a projection algorithm for the case where F is Lipschitz continuous. When calculating the new iterative point, this algorithm replaces the projection onto the non-empty closed convex set C with the projection onto a specific half-space. Since this half-space is constructed by using the characteristics of the subdifferential, the algorithm is called the subgradient extragradient algorithm. Meanwhile, as the projection onto a half-space can be computed directly, this algorithm saves one projection calculation compared with the extragradient projection algorithm [12].

However, since it may not be easy to calculate the Lipschitz constant of the map F, and in order to weaken the Lipschitz continuity requirement on F, Iusem and Svaiter [11] introduced an algorithm for solving pseudomonotone variational inequalities using a line search. This line search does not require the calculation of projections, and the algorithm needs only two projections per iteration. However, the step-size of this line search may tend to zero. Wang et al. [22] proposed a better step-size rule for this line search. Similar line-search methods can also be found in [10] and [20].

To solve this kind of variational inequality problem, Han and Lo [6] adopted a different line search method (this line search requires calculating the projection onto the feasible set C, and the number of projections is not fixed). The advantage of this line search is that, under the assumption that F is Lipschitz continuous, the step-size can be shown to be bounded below by a constant strictly greater than 0 (see [6, Lemma 3.1]). Based on this property, [6] constructs a self-adaptive projection algorithm. The next iterative point of this algorithm is obtained by projecting onto the closed convex set C. Similar self-adaptive algorithms can also be found in [3], etc. In addition, He and Liao [9] introduced a method to optimize the step-size of this type of algorithm.

Gibali [5] proposes a self-adaptive subgradient extragradient projection algorithm for solving pseudomonotone non-Lipschitz continuous variational inequalities. The algorithm is as follows:

$$\begin{cases} \text{(line search) compute } y_k = P_C(x_k - \beta_k F(x_k)), \text{ where } \beta_k = \beta_{k-1}\beta^{m_k} \text{ and } m_k \text{ is the smallest}\\ \quad \text{non-negative integer } m \text{ such that } \beta_k \langle x_k - y_k, F(x_k) - F(y_k) \rangle \le (1-\varepsilon)\|x_k - y_k\|^2; \\ T_k := \{ w \in \mathbb{R}^n \mid \langle x_k - \beta_k F(x_k) - y_k, \, w - y_k \rangle \le 0 \}; \\ x_{k+1} = P_{T_k}\big(x_k - \beta_k d_2(x_k, \beta_k)\big). \end{cases}$$

The descent direction of this algorithm is $d_2(x_k,\beta_k) = F(y_k)$, and the following assumptions are imposed:

(A) the solution set $S$ is non-empty;

(B) $F$ is pseudomonotone on the non-empty closed convex set $C$;

(C) $F$ is Lipschitz continuous on $\mathbb{R}^n$.

Under these assumptions, the global convergence of this algorithm is proved. Using the same descent direction, [13] constructs a subgradient extragradient projection algorithm for solving pseudomonotone variational inequalities.

In this paper, with the help of the line search method in [6], a class of self-adaptive subgradient extragradient projection algorithms for solving non-Lipschitz continuous variational inequalities is proposed. The main differences between this algorithm and the existing algorithms are as follows:

1) The descent directions of the algorithms in [5] and [13] are both $d_2(x_k,\beta_k) = F(y_k)$. This paper considers not only Algorithm 3.1, whose descent direction is $d_1(x_k,\beta_k) = x_k - y_k - \beta_k(F(x_k) - F(y_k))$, but also a class of algorithms (Algorithm 4.1) using a non-negative combination of $d_1(x_k,\beta_k)$ and $d_2(x_k,\beta_k)$ as the descent direction. See Note 3.1(b) and Note 4.1(b).

2) For these types of algorithms, the step-size is optimized by using the techniques in [9].

3) Under certain conditions, Algorithm 4.1 degenerates into the algorithm in [13], thus generalizing the existing algorithms. Additionally, under the same assumptions as in [5], it is proven that the infinite sequence generated by these algorithms converges to a solution of the variational inequality. Numerical experiments show that these algorithms are less affected by the dimension of the variational inequality problem, the accuracy tolerance, and the selection of the initial point. Moreover, in terms of the number of iterations, the number of projections, and the computation time, the new algorithms outperform the algorithm of Gibali [5]. See Note 4.1.

2 Preliminaries

First, we recall the definition and some properties of the projection operator, together with some basic results. For any non-empty closed set $\Omega \subseteq \mathbb{R}^n$, denote by $P_\Omega(x) := \arg\min_{y \in \Omega} \|x - y\|$ the projection of the point $x$ onto $\Omega$, where $\|\cdot\|$ is the $\ell_2$ norm in $\mathbb{R}^n$.

Lemma 2.1 [4, 24]  Let $\Omega \subseteq \mathbb{R}^n$ be a non-empty closed convex set. Then the following conclusions hold:

(i) $\langle P_\Omega(x) - x, \, y - P_\Omega(x) \rangle \ge 0, \ \forall x \in \mathbb{R}^n, \ \forall y \in \Omega$;

(ii) $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|, \ \forall x, y \in \mathbb{R}^n$;

(iii) $\|P_\Omega(x) - x\|^2 \le \|x - y\|^2 - \|P_\Omega(x) - y\|^2, \ \forall x \in \mathbb{R}^n, \ \forall y \in \Omega$.

To give a necessary and sufficient condition for a point to solve the variational inequality VI(F, C), denote the natural residual mapping by

$$e(x,\lambda) = x - P_C(x - \lambda F(x)), \quad \text{where } \lambda > 0 \text{ is a real number}. \tag{2.1}$$

Lemma 2.2 [4]  Let $\lambda > 0$. Then $x \in S$ if and only if $\|e(x,\lambda)\| = 0$.

Lemma 2.3 [4]  Let $x \in \mathbb{R}^n$. Then for any $0 < \lambda_1 < \lambda_2$,

$$\|e(x,\lambda_1)\| \le \|e(x,\lambda_2)\|, \qquad \frac{\|e(x,\lambda_1)\|}{\lambda_1} \ge \frac{\|e(x,\lambda_2)\|}{\lambda_2}. \tag{2.2}$$

Lemma 2.4 [4]  If $x \notin S$, then for a fixed $\nu > 0$, there exists $\tilde{\beta} > 0$ such that for any $\beta \in (0, \tilde{\beta})$, the following inequality holds:

$$\beta \| F(x) - F(x - e(x,\beta)) \| \le \nu \| e(x,\beta) \|.$$
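To make the residual mapping concrete, the following Python sketch (our own illustration, not part of the paper) evaluates $e(x,\lambda)$ for the simple feasible set $C = \mathbb{R}^n_+$ and numerically checks the two monotonicity properties of Lemma 2.3 on a toy affine map; the helper names proj_nonneg and residual are hypothetical.

```python
import numpy as np

def proj_nonneg(x):
    """Projection onto the non-negative orthant C = R^n_+ (a simple closed convex set)."""
    return np.maximum(x, 0.0)

def residual(x, lam, F, proj_C):
    """Natural residual e(x, lam) = x - P_C(x - lam * F(x)) from (2.1)."""
    return x - proj_C(x - lam * F(x))

# Check Lemma 2.3 on a toy affine map F(x) = x - 1 over C = R^3_+:
# ||e(x, l1)|| <= ||e(x, l2)|| and ||e(x, l1)||/l1 >= ||e(x, l2)||/l2 for 0 < l1 < l2.
F = lambda x: x - 1.0
x = np.array([2.0, -1.0, 0.5])
for l1, l2 in [(0.1, 1.0), (0.5, 2.0)]:
    e1 = residual(x, l1, F, proj_nonneg)
    e2 = residual(x, l2, F, proj_nonneg)
    assert np.linalg.norm(e1) <= np.linalg.norm(e2) + 1e-12
    assert np.linalg.norm(e1) / l1 >= np.linalg.norm(e2) / l2 - 1e-12
```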

It is noted that $P_H(z) = \arg\min_{y \in H} \frac{1}{2}\|z - y\|^2$, and by applying the KKT conditions to this constrained optimization problem the following lemma can be obtained.

Lemma 2.5  Let $H := \{x \in \mathbb{R}^n \mid h(x) \le 0\}$ be a half-space, where $u \in \mathbb{R}^n$, $b \in \mathbb{R}$, and $h(x) := \langle u, x \rangle - b$. If $z \notin H$ and $u \ne 0$, then $P_H(z) = z - \frac{\langle u, z \rangle - b}{\|u\|^2}\, u$.
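For illustration, the closed form of Lemma 2.5 translates directly into a few lines of code. The following Python helper (ours, not taken from the paper) also returns $z$ unchanged in the trivial case $z \in H$:

```python
import numpy as np

def proj_halfspace(z, u, b):
    """Projection onto the half-space H = {x : <u, x> <= b} (Lemma 2.5).
    If z already lies in H, the projection is z itself; otherwise the excess
    along u is removed (assumes u != 0 whenever z lies outside H)."""
    viol = np.dot(u, z) - b
    if viol <= 0:
        return z.copy()
    return z - (viol / np.dot(u, u)) * u

# Quick check: the projected point satisfies the constraint with equality
# whenever z was outside H.
z, u, b = np.array([3.0, 1.0]), np.array([1.0, 1.0]), 2.0
assert abs(np.dot(u, proj_halfspace(z, u, b)) - b) < 1e-12
```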

Definition 2.1  A mapping $F: \mathbb{R}^n \to \mathbb{R}^n$ is said to be pseudomonotone on $C$ if, for all $x, y \in C$,

$$\langle F(x), y - x \rangle \ge 0 \ \Longrightarrow \ \langle F(y), y - x \rangle \ge 0.$$

The following shows the assumptions proposed in this paper.

Assumption H  (A) $S$ is non-empty;

(B) $F$ is continuous on $\mathbb{R}^n$;

(C) $F$ is pseudomonotone on $C$.

From assumptions (A) and (C), we have

$$\langle F(y), y - x^* \rangle \ge 0, \quad \forall y \in C, \ \forall x^* \in S. \tag{2.3}$$

Let $y = P_C(x - \lambda F(x))$. Noting that $S \subseteq C$, according to Lemma 2.1(i) we have

$$\langle x - \lambda F(x) - y, \, y - x^* \rangle \ge 0, \quad \forall x^* \in S, \ \forall x \in \mathbb{R}^n, \ \forall \lambda \in \mathbb{R}. \tag{2.4}$$

Moreover, based on (2.3) and (2.4), we have

$$\langle x - y - \lambda(F(x) - F(y)), \, y - x^* \rangle \ge 0, \quad \forall x^* \in S, \ \forall x \in \mathbb{R}^n, \ \forall \lambda > 0. \tag{2.5}$$

3 Self-adaptive subgradient extragradient algorithm

We first present the algorithm proposed in this paper.

Algorithm 3.1  Select an initial point $x_0 \in \mathbb{R}^n$ and parameters $\nu, l \in (0,1)$, $\gamma \in (0,2)$, $r = 1$. Set $k := 0$.

Step 1: Compute $y_k := P_C(x_k - \beta_k F(x_k))$, $\beta_k := r l^{m_k}$, where $m_k$ is the smallest non-negative integer $m$ for which the following inequality holds:

$$\beta_k \| F(x_k) - F(x_k - e(x_k,\beta_k)) \| \le \nu \| e(x_k,\beta_k) \|. \tag{3.1}$$

Step 2: Compute $e(x_k,\beta_k) = x_k - y_k$. If $e(x_k,\beta_k) = 0$, stop; otherwise, go to the next step.

Step 3: Compute

$$x_{k+1} = P_{T_k}\big(x_k - \alpha_k d_1(x_k,\beta_k)\big),$$

where $T_k$ is the half-space

$$T_k := \{ w \in \mathbb{R}^n : \langle x_k - \beta_k F(x_k) - y_k, \, w - y_k \rangle \le 0 \},$$

$$\alpha_k = \gamma \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{\| d_1(x_k,\beta_k) \|^2}, \qquad d_1(x_k,\beta_k) = x_k - y_k - \beta_k\big(F(x_k) - F(y_k)\big).$$

Step 4: If

$$\beta_k \| F(x_k) - F(x_k - \beta_k F(x_k)) \| \le 0.4\, \| e(x_k,\beta_k) \|,$$

set $r = \beta_k / 0.7$; otherwise, set $r = \beta_k$. Let $k := k+1$ and go back to Step 1.
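For readers who want to experiment with the method, the following Python sketch implements Algorithm 3.1 as stated above, assuming an exact projector onto $C$ is available; the function name alg_3_1, the proj_C oracle, and the iteration cap are our own choices, and the parameter defaults follow Section 5.

```python
import numpy as np

def alg_3_1(F, proj_C, x0, nu=0.98, l=0.5, gamma=1.9, r=1.0,
            tol=1e-4, max_iter=10000):
    """Sketch of Algorithm 3.1: F maps R^n to R^n, proj_C is an exact projector onto C."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        # Step 1: line search for beta_k = r * l**m, m the smallest integer satisfying (3.1).
        beta = r
        y = proj_C(x - beta * Fx)
        e = x - y                                    # e(x_k, beta_k) = x_k - y_k
        while beta * np.linalg.norm(Fx - F(y)) > nu * np.linalg.norm(e):
            beta *= l
            y = proj_C(x - beta * Fx)
            e = x - y
        # Step 2: stopping test on the natural residual.
        if np.linalg.norm(e) <= tol:
            return x, k
        # Step 3: descent direction d1, optimized step-size, projection onto T_k (Note 3.2).
        d1 = e - beta * (Fx - F(y))                  # d1(x_k, beta_k)
        alpha = gamma * np.dot(e, d1) / np.dot(d1, d1)
        z = x - alpha * d1
        u = x - beta * Fx - y                        # T_k = {w : <u, w - y_k> <= 0}
        viol = np.dot(u, z - y)
        x_next = z if viol <= 0 else z - (viol / np.dot(u, u)) * u
        # Step 4: self-adaptive update of the next trial step-size r.
        if beta * np.linalg.norm(Fx - F(x - beta * Fx)) <= 0.4 * np.linalg.norm(e):
            r = beta / 0.7
        else:
            r = beta
        x = x_next
    return x, max_iter
```

For an NCP such as Example 5.2 below, proj_C is simply the componentwise maximum with zero.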

The following notes are used to illustrate the main differences between this algorithm and existing algorithms, as well as the specific calculation methods of the new iterative points.

Note 3.1 (a) The next iterative point of the algorithm is obtained by projecting onto the half-space $T_k$. Since the projection onto a half-space can be computed directly (for the specific calculation, see Note 3.2), this algorithm requires one less projection per iteration than Algorithm 3.1 in [6]. This reduces the computation time when the projection onto the feasible set $C$ is difficult to calculate.

(b) This algorithm adopts a new descent direction $d_1(x_k,\beta_k) = x_k - y_k - \beta_k(F(x_k) - F(y_k))$, which is different from the descent direction $d_2(x_k,\beta_k) = F(y_k)$ used in [5] and [13].

(c) This algorithm optimizes the step-size by using the method in [9], thus obtaining a relatively longer step-size.

Note 3.2  Let $z_k = x_k - \alpha_k d_1(x_k,\beta_k)$ and $u_k = x_k - \beta_k F(x_k) - y_k$. Then $x_{k+1}$ is computed as follows:

(a) If $\langle x_k - \beta_k F(x_k) - y_k, \, z_k - y_k \rangle \le 0$, then $z_k \in T_k$, and

$$x_{k+1} = z_k;$$

(b) otherwise $\langle x_k - \beta_k F(x_k) - y_k, \, z_k - y_k \rangle > 0$ and $\|u_k\| \ne 0$. In this case, take $u = u_k$, $b = \langle u_k, y_k \rangle$; according to Lemma 2.5,

$$x_{k+1} = z_k - \frac{\langle z_k - y_k, u_k \rangle}{\|u_k\|^2}\, u_k.$$

The following lemmas show that Algorithm 3.1 is well defined.

Lemma 3.1  If condition (B) in Assumption H holds, then for any fixed $r > 0$ and any $x_k \in \mathbb{R}^n$, there exists a smallest non-negative integer $m$ such that the following inequality holds:

$$r l^m \| F(x_k) - F(x_k - e(x_k, r l^m)) \| \le \nu \| e(x_k, r l^m) \|. \tag{3.2}$$

Proof  Assume that (3.2) cannot be achieved for any finite $m$. Then, for any positive integer $m$,

$$\| F(x_k) - F(x_k - e(x_k, r l^m)) \| > \frac{\nu \| e(x_k, r l^m) \|}{r l^m}. \tag{3.3}$$

If $\|e(x_k, 1)\| = 0$, then $x_k \in S$. Consequently, $\|e(x_k, \beta_k)\| = 0$, and in this case (3.2) holds trivially. Therefore, we may assume that $\|e(x_k, 1)\| > 0$. Noting that $l \in (0,1)$, we distinguish the following cases:

(i) If $x_k \in C$, then as $m \to \infty$ we have $r l^m \to 0$. From the continuity of $P_C(\cdot)$ and the assumption that $F$ is a continuous mapping, it follows that $e(x_k, r l^m) \to 0$ ($m \to \infty$). Furthermore,

$$\| F(x_k) - F(x_k - e(x_k, r l^m)) \| \to 0.$$

On the other hand, for sufficiently large $m$ we have $r l^m \le 1$, and according to (2.2),

$$\frac{\nu \| e(x_k, r l^m) \|}{r l^m} \ge \frac{\nu \| e(x_k, 1) \|}{1} > 0.$$

This contradicts (3.3).

(ii) If $x_k \notin C$, then $e(x_k, r l^m) \to x_k - P_C(x_k) \ne 0$ ($m \to \infty$). By (3.3), $r l^m \| F(x_k) - F(x_k - e(x_k, r l^m)) \| > \nu \| e(x_k, r l^m) \|$. By the continuity of $F$, the left-hand side of this inequality satisfies $r l^m \| F(x_k) - F(x_k - e(x_k, r l^m)) \| \to 0$ ($m \to \infty$), while the right-hand side satisfies $\nu \| e(x_k, r l^m) \| \to \nu \| x_k - P_C(x_k) \| > 0$ ($m \to \infty$). This is a contradiction. $\Box$

First, in Algorithm 3.1, for any $z \in C$, according to the definition $y_k = P_C(x_k - \beta_k F(x_k))$ and Lemma 2.1(i), we have $\langle x_k - \beta_k F(x_k) - y_k, \, z - y_k \rangle \le 0$, so $C \subseteq T_k$.

Second, by Lemma 2.2, if $\|e(x_k,\beta_k)\| = 0$, then $x_k$ is a solution of VI(F, C), and the algorithm stops. Therefore, if $\{x_k\}$ is an infinite sequence generated by Algorithm 3.1, it can be assumed that $\|e(x_k,\beta_k)\| > 0$.

Lemma 3.2 [9]  The infinite sequence $\{x_k\}$ generated by Algorithm 3.1 satisfies the following inequalities:

(i) $\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle > \frac{1}{2} \| d_1(x_k,\beta_k) \|^2$;

(ii) $\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle \ge (1-\nu) \| e(x_k,\beta_k) \|^2$.

Proof  Based on the definition of $d_1(x_k,\beta_k)$, $\nu \in (0,1)$, and (3.1), we have

$$\begin{aligned} \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle &= \| e(x_k,\beta_k) \|^2 - \beta_k \langle e(x_k,\beta_k), F(x_k) - F(x_k - e(x_k,\beta_k)) \rangle \\ &> \tfrac{1}{2}\| e(x_k,\beta_k) \|^2 - \beta_k \langle e(x_k,\beta_k), F(x_k) - F(x_k - e(x_k,\beta_k)) \rangle + \tfrac{1}{2}\nu^2 \| e(x_k,\beta_k) \|^2 \\ &\ge \tfrac{1}{2}\| e(x_k,\beta_k) \|^2 - \beta_k \langle e(x_k,\beta_k), F(x_k) - F(x_k - e(x_k,\beta_k)) \rangle + \tfrac{1}{2}\beta_k^2 \| F(x_k) - F(x_k - e(x_k,\beta_k)) \|^2 \\ &= \tfrac{1}{2}\| d_1(x_k,\beta_k) \|^2. \end{aligned}$$

This proves (i).

By the Cauchy–Schwarz inequality and (3.1), we have

$$\begin{aligned} \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle &= \| e(x_k,\beta_k) \|^2 - \beta_k \langle e(x_k,\beta_k), F(x_k) - F(x_k - e(x_k,\beta_k)) \rangle \\ &\ge \| e(x_k,\beta_k) \|^2 - \beta_k \| F(x_k) - F(x_k - e(x_k,\beta_k)) \| \, \| e(x_k,\beta_k) \| \\ &\ge \| e(x_k,\beta_k) \|^2 - \nu \| e(x_k,\beta_k) \|^2 = (1-\nu)\| e(x_k,\beta_k) \|^2. \end{aligned}$$

This proves (ii). $\Box$

In addition, it follows from Lemma 3.2(ii) that $\| d_1(x_k,\beta_k) \| > 0$.

4 Convergence of algorithms

Lemma 4.1  If Assumption H holds and $\{x_k\}$ is an infinite sequence generated by Algorithm 3.1, then for any fixed $x^* \in S$,

$$\| x_{k+1} - x^* \|^2 \le \| x_k - x^* \|^2 - \tfrac{1}{2}\gamma(2-\gamma)(1-\nu)\| e(x_k,\beta_k) \|^2. \tag{4.1}$$

Proof  By Lemma 2.1(iii), combined with $S \subseteq C \subseteq T_k$, for any fixed $x^* \in S$ we have

$$\begin{aligned} \| x_{k+1} - x^* \|^2 &\le \| x_k - \alpha_k d_1(x_k,\beta_k) - x^* \|^2 - \| x_k - \alpha_k d_1(x_k,\beta_k) - x_{k+1} \|^2 \\ &= \| x_k - x^* \|^2 - 2\alpha_k \langle x_k - x^*, d_1(x_k,\beta_k) \rangle + \alpha_k^2 \| d_1(x_k,\beta_k) \|^2 \\ &\quad - \big( \| x_k - x_{k+1} \|^2 - 2\alpha_k \langle x_k - x_{k+1}, d_1(x_k,\beta_k) \rangle + \alpha_k^2 \| d_1(x_k,\beta_k) \|^2 \big) \\ &= \| x_k - x^* \|^2 - 2\alpha_k \langle x_{k+1} - x^*, d_1(x_k,\beta_k) \rangle - \| x_k - x_{k+1} \|^2. \end{aligned}$$

Let $\vartheta(x_k) = \| x_k - x^* \|^2 - \| x_{k+1} - x^* \|^2$. Then

$$\begin{aligned} \vartheta(x_k) &\ge \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - x^*, d_1(x_k,\beta_k) \rangle \\ &= \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - y_k + y_k - x^*, d_1(x_k,\beta_k) \rangle \\ &= \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - y_k, d_1(x_k,\beta_k) \rangle + 2\alpha_k \langle y_k - x^*, d_1(x_k,\beta_k) \rangle \\ &\ge \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - y_k, d_1(x_k,\beta_k) \rangle. \end{aligned} \tag{4.2}$$

The second inequality follows from (2.5) and $\alpha_k \ge 0$ (in fact, it follows from the choice of $\alpha_k$ and Lemma 3.2(i) that $\alpha_k > \frac{1}{2}\gamma$).

It is noted that

$$2\alpha_k \langle x_{k+1} - y_k, d_1(x_k,\beta_k) \rangle = 2\alpha_k \langle x_{k+1} - x_k, d_1(x_k,\beta_k) \rangle + 2\alpha_k \langle x_k - y_k, d_1(x_k,\beta_k) \rangle \tag{4.3}$$

and

$$\begin{aligned} \| x_k - x_{k+1} \|^2 &= \| x_k - x_{k+1} - \alpha_k d_1(x_k,\beta_k) + \alpha_k d_1(x_k,\beta_k) \|^2 \\ &\ge 2\langle x_k - x_{k+1} - \alpha_k d_1(x_k,\beta_k), \, \alpha_k d_1(x_k,\beta_k) \rangle + \alpha_k^2 \| d_1(x_k,\beta_k) \|^2 \\ &= -\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2\alpha_k \langle x_k - x_{k+1}, d_1(x_k,\beta_k) \rangle. \end{aligned} \tag{4.4}$$

According to (4.2)‒(4.4), we have

$$\begin{aligned} \vartheta(x_k) &\ge -\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2\alpha_k \langle x_k - x_{k+1}, d_1(x_k,\beta_k) \rangle + 2\alpha_k \langle x_{k+1} - x_k, d_1(x_k,\beta_k) \rangle + 2\alpha_k \langle x_k - y_k, d_1(x_k,\beta_k) \rangle \\ &= -\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2\alpha_k \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle. \end{aligned} \tag{4.5}$$

To obtain a relatively long step-size, note that the right-hand side of inequality (4.5) is a quadratic function of $\alpha_k$, which attains its maximum at $\alpha_k^* = \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{\| d_1(x_k,\beta_k) \|^2}$. Similar to the method in [9], a relaxation factor $\gamma \in (0,2)$ is introduced in this paper. Letting $\alpha_k = \gamma\alpha_k^*$ in (4.5) gives $\vartheta(x_k) \ge \gamma(2-\gamma)\frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle^2}{\| d_1(x_k,\beta_k) \|^2}$; combining this with Lemma 3.2(i) and (ii), we obtain

$$\vartheta(x_k) \ge \tfrac{1}{2}\gamma(2-\gamma)(1-\nu)\| e(x_k,\beta_k) \|^2.$$

Hence,

$$\| x_{k+1} - x^* \|^2 \le \| x_k - x^* \|^2 - \tfrac{1}{2}\gamma(2-\gamma)(1-\nu)\| e(x_k,\beta_k) \|^2.$$

This completes the proof. $\Box$

Theorem 4.1  Assume that Assumption H holds and $\{x_k\}$ is an infinite sequence generated by Algorithm 3.1. Then the sequence $\{x_k\}$ converges to a solution of the variational inequality (1.1).

Proof  Take any fixed $x^* \in S$. According to (4.1) and the parameter values $\gamma \in (0,2)$ and $\nu \in (0,1)$, the sequence $\{\| x_k - x^* \|\}$ is strictly monotonically decreasing and bounded below, so it converges. Thus the sequence $\{x_k\}$ is bounded and

$$0 \le \tfrac{1}{2}\gamma(2-\gamma)(1-\nu)\| e(x_k,\beta_k) \|^2 \le \| x_k - x^* \|^2 - \| x_{k+1} - x^* \|^2 \to 0, \quad k \to \infty.$$

Based on the definition of $y_k$ in Step 1 of Algorithm 3.1, we have

$$\lim_{k\to\infty} \| e(x_k,\beta_k) \| = \lim_{k\to\infty} \| x_k - y_k \| = 0. \tag{4.6}$$

Let $\bar{x}$ be any cluster point of the sequence $\{x_k\}$. Then there exists an index set $I \subseteq \{1,2,\ldots\}$ such that $x_i \to \bar{x}$ ($i \in I$, $i \to \infty$). We first prove that there exists an index set $J \subseteq I$ such that

$$\lim_{j \in J,\, j \to \infty} \| e(x_j, 1) \| = 0. \tag{4.7}$$

To this end, the following cases are discussed:

(a) If $\beta_0 := \inf\{\beta_i \mid i \in I\} > 0$, then $\beta_i \ge \beta_0$ for any $i \in I$. Further, for any $i \in I$,

$$0 \le \| e(x_i, 1) \| \le \frac{\| e(x_i, \beta_i) \|}{\min\{\beta_i, 1\}} \le \frac{\| e(x_i, \beta_i) \|}{\min\{\beta_0, 1\}} \to 0, \quad i \to \infty,$$

where the second inequality follows from Lemma 2.3 $\big(\| e(x_i,1) \| = \frac{\| e(x_i,1) \|}{1} \le \frac{\| e(x_i, \min\{\beta_i,1\}) \|}{\min\{\beta_i,1\}} \le \frac{\| e(x_i,\beta_i) \|}{\min\{\beta_i,1\}}\big)$, and the convergence to zero follows from (4.6). Taking $J = I$, (4.7) holds.

(b) If $\beta_0 := \inf\{\beta_i \mid i \in I\} = 0$, then there exists $J \subseteq I$ such that $\beta_j \to 0$ ($j \to \infty$). Noting the definition of $\beta_k$ in Step 1 of Algorithm 3.1, for large enough $j \in J$ (such that $l^{-1}\beta_j < 1$), we have

$$\| F(x_j) - F(x_j - e(x_j, l^{-1}\beta_j)) \| > \frac{\nu \| e(x_j, l^{-1}\beta_j) \|}{l^{-1}\beta_j} \ge \frac{\nu \| e(x_j, 1) \|}{1} = \nu \| e(x_j, 1) \|, \tag{4.8}$$

where the second inequality is derived from (2.2) and $l^{-1}\beta_j < 1$. In addition, based on the definition of the residual function in (2.1), the definition $y_j = P_C(x_j - \beta_j F(x_j))$ in Step 1, and the non-expansiveness of the projection operator (Lemma 2.1(ii)), we have

$$\begin{aligned} \| x_j - (x_j - e(x_j, l^{-1}\beta_j)) \| &= \| x_j - P_C(x_j - l^{-1}\beta_j F(x_j)) \| \\ &\le \| x_j - y_j \| + \| y_j - P_C(x_j - l^{-1}\beta_j F(x_j)) \| \\ &\le \| x_j - y_j \| + \| x_j - \beta_j F(x_j) - (x_j - l^{-1}\beta_j F(x_j)) \| \\ &= \| x_j - y_j \| + \beta_j |1 - l^{-1}| \, \| F(x_j) \| \to 0, \quad j \in J, \ j \to \infty, \end{aligned}$$

where the convergence to zero follows from (4.6), $\beta_j \to 0$ ($j \to \infty$), and the boundedness of the sequence $\{x_k\}$ together with the continuity of $F$ on $\mathbb{R}^n$ (so the sequence $\{F(x_j)\}$ is bounded). From this, combined with (4.8) and the continuity of $F$ on $\mathbb{R}^n$, it follows that $\| e(x_j, 1) \| \to 0$.

In both cases, (4.7) holds.

Next, we prove the main conclusion of the theorem.

Since $\{x_k\}$ is a bounded sequence, let $\bar{x}$ be one of its cluster points. By the continuity of the mapping $e(\cdot, 1)$ and (4.7), we obtain $e(\bar{x}, 1) = 0$. Then, by Lemma 2.2, $\bar{x} \in S$. Replacing $x^*$ with $\bar{x}$ in (4.1), we obtain that $\{\| x_k - \bar{x} \|\}$ is a convergent sequence. Since $\bar{x}$ is a cluster point of $\{x_k\}$, $\{\| x_k - \bar{x} \|\}$ must converge to $0$. Therefore, $\{x_k\}$ converges to $\bar{x} \in S$. This completes the proof of the theorem. $\Box$

According to [7], a new search direction is considered:

$$d(x_k,\beta_k) = t_1 d_1(x_k,\beta_k) + t_2 d_2(x_k,\beta_k),$$

where $d_1(x_k,\beta_k) = x_k - y_k - \beta_k(F(x_k) - F(y_k))$, $d_2(x_k,\beta_k) = \beta_k F(y_k)$, $t_1, t_2 \in [0, \infty)$, and $t_1^2 + t_2^2 \ne 0$.

The following presents the algorithm based on the direction $d(x_k,\beta_k) = t_1 d_1(x_k,\beta_k) + t_2 d_2(x_k,\beta_k)$.

Algorithm 4.1  Select an initial point $x_0 \in \mathbb{R}^n$ and parameters $\nu, l \in (0,1)$, $\gamma \in (0,2)$, $r = 1$. Set $k := 0$.

Step 1: Compute $y_k = P_C(x_k - \beta_k F(x_k))$, $\beta_k := r l^{m_k}$, where $m_k$ is the smallest non-negative integer $m$ such that the following inequality holds:

$$\beta_k \| F(x_k) - F(x_k - e(x_k,\beta_k)) \| \le \nu \| e(x_k,\beta_k) \|.$$

Step 2: Compute $e(x_k,\beta_k) = x_k - y_k$. If $e(x_k,\beta_k) = 0$, stop; otherwise, proceed to the next step.

Step 3: Compute

$$x_{k+1} = P_{T_k}\big(x_k - \alpha_k d(x_k,\beta_k)\big),$$

where

$$T_k = \{ w \in \mathbb{R}^n : \langle x_k - \beta_k F(x_k) - y_k, \, w - y_k \rangle \le 0 \}, \qquad \alpha_k = \gamma \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{(t_1+t_2)\| d_1(x_k,\beta_k) \|^2},$$

$$d(x_k,\beta_k) = t_1 d_1(x_k,\beta_k) + t_2 d_2(x_k,\beta_k), \qquad d_1(x_k,\beta_k) = x_k - y_k - \beta_k\big(F(x_k) - F(y_k)\big), \qquad d_2(x_k,\beta_k) = \beta_k F(y_k),$$

with $t_1, t_2 \in [0, \infty)$ and $t_1^2 + t_2^2 \ne 0$.

Step 4: If

$$\beta_k \| F(x_k) - F(x_k - \beta_k F(x_k)) \| \le 0.4\, \| e(x_k,\beta_k) \|,$$

set $r = \beta_k / 0.7$; otherwise, set $r = \beta_k$. Let $k := k+1$ and go back to Step 1.
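Relative to Algorithm 3.1, only Step 3 changes (the direction and the step-size). The following Python sketch of Step 3 of Algorithm 4.1 (the helper name step3_alg_4_1 is ours) makes the modification explicit:

```python
import numpy as np

def step3_alg_4_1(x, y, Fx, Fy, beta, gamma=1.9, t1=1.0, t2=1.0):
    """One Step 3 update of Algorithm 4.1: combined direction d = t1*d1 + t2*d2,
    the step-size alpha_k given above, and the projection onto the half-space T_k."""
    e = x - y                                    # e(x_k, beta_k)
    d1 = e - beta * (Fx - Fy)                    # d1(x_k, beta_k)
    d2 = beta * Fy                               # d2(x_k, beta_k)
    d = t1 * d1 + t2 * d2
    alpha = gamma * np.dot(e, d1) / ((t1 + t2) * np.dot(d1, d1))
    z = x - alpha * d
    u = x - beta * Fx - y                        # T_k = {w : <u, w - y_k> <= 0}
    viol = np.dot(u, z - y)
    return z if viol <= 0 else z - (viol / np.dot(u, u)) * u
```

With $t_1 = 1$, $t_2 = 0$ this reduces to Step 3 of Algorithm 3.1, and with $t_1 = 0$, $t_2 = 1$, $\gamma = 1$ it gives the update in Note 4.1(b).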

The following notes are used to illustrate the relationship between Algorithm 4.1 and existing algorithms.

Note 4.1 (a) When $t_1 = 1$, $t_2 = 0$, Algorithm 4.1 degenerates into Algorithm 3.1. Therefore, Algorithm 4.1 can be regarded as a generalization of Algorithm 3.1.

(b) When $t_1 = 0$, $t_2 = 1$, and $\gamma = 1$, the next iterative point of Algorithm 4.1 is

$$x_{k+1} = P_{T_k}\left(x_k - \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{\| d_1(x_k,\beta_k) \|^2}\, d_2(x_k,\beta_k)\right).$$

At this point, Algorithm 4.1 degenerates into Algorithm 1 in [13]. Consequently, Algorithm 4.1 can also be regarded as a generalization of Algorithm 1 in [13].

Lemma 4.2  If Assumption H holds and $\{x_k\}$ is an infinite sequence generated by Algorithm 4.1, then for any fixed $x^* \in S$, the following conclusion holds:

$$\| x_{k+1} - x^* \|^2 \le \| x_k - x^* \|^2 - \tfrac{1}{2}\gamma(2-\gamma)(1-\nu)\| e(x_k,\beta_k) \|^2. \tag{4.9}$$

Proof  By Lemma 2.1(iii) and $S \subseteq C \subseteq T_k$, for an arbitrarily chosen $x^* \in S$ we have

$$\begin{aligned} \| x_{k+1} - x^* \|^2 &\le \| x_k - \alpha_k d(x_k,\beta_k) - x^* \|^2 - \| x_k - \alpha_k d(x_k,\beta_k) - x_{k+1} \|^2 \\ &= \| x_k - x^* \|^2 - 2\alpha_k \langle x_k - x^*, d(x_k,\beta_k) \rangle + \alpha_k^2 \| d(x_k,\beta_k) \|^2 \\ &\quad - \big( \| x_k - x_{k+1} \|^2 - 2\alpha_k \langle x_k - x_{k+1}, d(x_k,\beta_k) \rangle + \alpha_k^2 \| d(x_k,\beta_k) \|^2 \big) \\ &= \| x_k - x^* \|^2 - 2\alpha_k \langle x_{k+1} - x^*, d(x_k,\beta_k) \rangle - \| x_k - x_{k+1} \|^2. \end{aligned}$$

Let $\vartheta(x_k) = \| x_k - x^* \|^2 - \| x_{k+1} - x^* \|^2$. Then

$$\begin{aligned} \vartheta(x_k) &\ge \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - x^*, d(x_k,\beta_k) \rangle \\ &= \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - y_k, d(x_k,\beta_k) \rangle + 2\alpha_k \langle y_k - x^*, d(x_k,\beta_k) \rangle \\ &\ge \| x_k - x_{k+1} \|^2 + 2\alpha_k \langle x_{k+1} - y_k, d(x_k,\beta_k) \rangle, \end{aligned} \tag{4.10}$$

where the second inequality follows from (2.3), (2.5), and $\alpha_k \ge 0$.

Note that

$$\begin{aligned} 2\alpha_k \langle x_{k+1} - y_k, d(x_k,\beta_k) \rangle &= 2\alpha_k \langle x_{k+1} - y_k, \, t_1(x_k - y_k) - t_1\beta_k F(x_k) + (t_1+t_2)\beta_k F(y_k) \rangle \\ &\ge 2\alpha_k \langle x_{k+1} - y_k, \, t_1(x_k - y_k) - t_1\beta_k F(x_k) + (t_1+t_2)\beta_k F(y_k) \rangle + 2\alpha_k \langle x_{k+1} - y_k, \, t_2(x_k - \beta_k F(x_k) - y_k) \rangle \\ &= 2\alpha_k (t_1+t_2) \langle x_{k+1} - y_k, \, x_k - y_k - \beta_k(F(x_k) - F(y_k)) \rangle \\ &= 2\alpha_k (t_1+t_2) \langle x_{k+1} - y_k, \, d_1(x_k,\beta_k) \rangle, \end{aligned} \tag{4.11}$$

where the inequality is derived from $x_{k+1} \in T_k$, $\alpha_k > 0$, and $t_2 \ge 0$.

$$\begin{aligned} \| x_k - x_{k+1} \|^2 &= \| x_k - x_{k+1} - \alpha_k(t_1+t_2) d_1(x_k,\beta_k) + \alpha_k(t_1+t_2) d_1(x_k,\beta_k) \|^2 \\ &\ge 2\langle x_k - x_{k+1} - (t_1+t_2)\alpha_k d_1(x_k,\beta_k), \, (t_1+t_2)\alpha_k d_1(x_k,\beta_k) \rangle + (t_1+t_2)^2\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 \\ &= -(t_1+t_2)^2\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2(t_1+t_2)\alpha_k \langle x_k - x_{k+1}, d_1(x_k,\beta_k) \rangle. \end{aligned} \tag{4.12}$$

According to (4.10)‒(4.12), we have

$$\begin{aligned} \vartheta(x_k) &\ge -(t_1+t_2)^2\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2(t_1+t_2)\alpha_k \langle x_k - y_k, d_1(x_k,\beta_k) \rangle \\ &= -(t_1+t_2)^2\alpha_k^2 \| d_1(x_k,\beta_k) \|^2 + 2(t_1+t_2)\alpha_k \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle. \end{aligned} \tag{4.13}$$

The right-hand side of inequality (4.13) is a quadratic function of $\alpha_k$, which attains its maximum at $\alpha_k^* = \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{(t_1+t_2)\| d_1(x_k,\beta_k) \|^2}$. Substituting $\alpha_k^*$ into (4.13), we get

$$\vartheta(x_k) \ge -(t_1+t_2)^2(\alpha_k^*)^2 \| d_1(x_k,\beta_k) \|^2 + 2(t_1+t_2)\alpha_k^* \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle = (t_1+t_2)\,\alpha_k^* \langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle.$$

Let $\alpha_k = \gamma \frac{\langle e(x_k,\beta_k), d_1(x_k,\beta_k) \rangle}{(t_1+t_2)\| d_1(x_k,\beta_k) \|^2}$; then (4.9) can be obtained through a discussion similar to that in Lemma 4.1. $\Box$

Similarly, the following conclusion can be reached.

Theorem 4.2  If Assumption H holds and {xk} is an infinite sequence generated by Algorithm 4.1, then the sequence {xk} converges to a solution of the variational inequality (1.1).

5 Computer verification of the algorithm

This section presents numerical results for Algorithm 3.1 and Algorithm 4.1 on the following four simple examples, and compares them with Algorithm 3.1 in [5] (referred to as the Gibali algorithm) and Algorithm 2.2 in [20] (referred to as the S‒S algorithm). All results were obtained by running MATLAB on a laptop with an Intel(R) Core(TM) i5-3210M CPU (dual core, 2.50 GHz) and 2.0 GB RAM. The parameters for Algorithm 3.1 and Algorithm 4.1 are $\nu = 0.98$, $r = 1$, $l = 0.5$, $\gamma = 1.9$, $t_1 = 1$, $t_2 = 1$. The parameters for the Gibali algorithm are the same as those in [5], namely $\alpha_1 = 0.7$, $\varepsilon = 0.2$, $\beta = 0.5$. The parameters for the S‒S algorithm are the same as those in [20], namely $\sigma = 0.3$, $\gamma = 0.5$, $\eta_1 = 1$, $\theta = 4$. The termination condition of the algorithms is $\| x_n - y_n \| \le \varepsilon$, with $\varepsilon = 10^{-4}$ and $\varepsilon = 10^{-8}$ taken as the stopping criteria. Denote the running time by cpu (unit: s), the number of iterations by iter, the total number of projections by np, and the total number of function evaluations by nf. Denote the initial point by $x_0$.

Example 5.1  This example is used in [14] to test algorithms for continuous monotone variational inequality problems. The feasible set and the mapping are $C = \{x \in \mathbb{R}^n_+ : x_1 + x_2 + x_3 + x_4 = 4\}$ and

$$F(x_1,x_2,x_3,x_4) = \begin{pmatrix} 3x_1^2 + 2x_1x_2 + 2x_2^2 + x_3 + 3x_4 - 6 \\ 2x_1^2 + x_1 + x_2^2 + 10x_3 + 2x_4 - 2 \\ 3x_1^2 + x_1x_2 + 2x_2^2 + 2x_3 + 9x_4 - 9 \\ x_1^2 + 3x_2^2 + 2x_3 + 3x_4 - 3 \end{pmatrix}.$$
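Assuming the reconstruction of $F$ above, Example 5.1 can be coded as follows; the projection onto $C$ (a scaled simplex) is computed here by the standard sort-and-threshold routine, which is our own choice and is not prescribed in [14].

```python
import numpy as np

def F_ex51(x):
    """Mapping of Example 5.1."""
    x1, x2, x3, x4 = x
    return np.array([
        3*x1**2 + 2*x1*x2 + 2*x2**2 + x3 + 3*x4 - 6,
        2*x1**2 + x1 + x2**2 + 10*x3 + 2*x4 - 2,
        3*x1**2 + x1*x2 + 2*x2**2 + 2*x3 + 9*x4 - 9,
        x1**2 + 3*x2**2 + 2*x3 + 3*x4 - 3,
    ])

def proj_simplex(v, s=4.0):
    """Euclidean projection onto {x >= 0 : sum(x) = s} by sorting and thresholding."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```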

Example 5.2  This example is used in [21] to test algorithms for the nonlinear complementarity problem (NCP). Define $C = \mathbb{R}^n_+$, $F(x) = F_1(x) + F_2(x)$, where

$$F_1(x) = (f_1(x), f_2(x), \ldots, f_n(x)), \qquad F_2(x) = Mx + q,$$
$$f_i(x) = x_{i-1}^2 + x_i^2 + x_{i-1}x_i + x_i x_{i+1}, \quad i = 1, 2, \ldots, n, \qquad x_0 = x_{n+1} = 0.$$

The matrix $M$ is the $n \times n$ tridiagonal matrix below and $q = (-1, -1, \ldots, -1)$:

$$M = \begin{pmatrix} 4 & -2 & & & \\ 1 & 4 & -2 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 4 & -2 \\ & & & 1 & 4 \end{pmatrix}.$$

The initial point is $x_0 = (0, 0, \ldots, 0)$.
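Under the reconstruction above (tridiagonal $M$ and $q = (-1, \ldots, -1)$), the data of Example 5.2 can be generated as in the following sketch; the helper name make_example_5_2 is ours.

```python
import numpy as np

def make_example_5_2(n):
    """Builds F = F1 + F2 of Example 5.2 on C = R^n_+ (an NCP)."""
    M = 4*np.eye(n) - 2*np.eye(n, k=1) + np.eye(n, k=-1)   # tridiag(1, 4, -2)
    q = -np.ones(n)

    def F(x):
        xm = np.concatenate(([0.0], x[:-1]))    # x_{i-1}, with the convention x_0 = 0
        xp = np.concatenate((x[1:], [0.0]))     # x_{i+1}, with the convention x_{n+1} = 0
        F1 = xm**2 + x**2 + xm*x + x*xp
        return F1 + M @ x + q

    return F

# Usage sketch with the Algorithm 3.1 code above and proj_C(x) = max(x, 0):
# F = make_example_5_2(500); x, k = alg_3_1(F, lambda v: np.maximum(v, 0.0), np.zeros(500))
```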

Example 5.3  This example is used in [9] to test algorithms for the nonlinear complementarity problem, where

$$F(x) = Mx + D(x) + q,$$

$M = A^{\mathsf T} A + B$, $A$ is an $n \times n$ matrix whose entries are randomly generated from the interval $(-5, 5)$, $B$ is a skew-symmetric matrix generated in the same way, the vector $q$ is uniformly generated from the interval $(-500, 0)$, and $D_i(x) = d_i \arctan(x_i)$, where $d_i$ is a random variable on the interval $(0, 1)$. The initial point is $x_0 = (1, 1, \ldots, 1)$.
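The random data of Example 5.3 can be generated as in the following sketch; the seed argument and the particular way of producing the skew-symmetric matrix (half the difference of a random matrix and its transpose, so the entries stay in $(-5,5)$) are our assumptions.

```python
import numpy as np

def make_example_5_3(n, seed=0):
    """Builds F(x) = M x + D(x) + q of Example 5.3 with M = A^T A + B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, size=(n, n))
    G = rng.uniform(-5.0, 5.0, size=(n, n))
    B = (G - G.T) / 2.0                          # skew-symmetric, entries in (-5, 5)
    M = A.T @ A + B
    q = rng.uniform(-500.0, 0.0, size=n)
    d = rng.uniform(0.0, 1.0, size=n)
    return lambda x: M @ x + d * np.arctan(x) + q
```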

Example 5.4  Nonlinear variational inequality problems. In this example, three cases are tested for the above four algorithms. The Mathiesen case is tested in [20]. Harker [8] defined and tested the 5-dimensional problem Harnash5 and the 10-dimensional problem Harnash10. For the Mathiesen case, the initial point is taken as $x_0 = (0.3, 0.4, 0.3)$, and $x_0 = (1, 1, \ldots, 1)$ for the other cases in this example.

Note 5.1 Judging from the results of the numerical experiments, we have the following conclusions:

(A) Algorithm 4.1, Algorithm 3.1, the Gibali algorithm, and the S‒S algorithm can all find the solutions of the variational inequality problems. However, the subgradient extragradient projection algorithms are less affected by the dimension of the variational inequality problem, the accuracy tolerance, and the selection of the initial point, as shown in Tables 1 and 3. We believe this is mainly because the next iterative point of the subgradient extragradient projection algorithm is obtained by projecting onto a half-space and does not require the condition $x_k \in C$. In contrast, the convergence analysis of the S‒S algorithm requires the condition $x_k \in C$. Thus, when there is an error in the computer's calculation of the next iterative point, $x_{k+1} = P_{C \cap H_k}(x_k) + \epsilon_k$ ($\epsilon_k$ is the error vector), it is not necessarily guaranteed that $x_{k+1} \in C$. This may affect the convergence of the algorithm, and therefore this algorithm does not perform well when the stopping criterion is $10^{-8}$.

(B) For low-dimensional problems, under the stopping criteria considered, the S‒S algorithm outperforms the Gibali algorithm [5] in terms of the number of iteration steps. See Tables 1, 2, and 4.

(C) In terms of the number of iterations, the number of function evaluations, and the number of projections, both Algorithm 3.1 and Algorithm 4.1 outperform the algorithm in [5], as shown in Tables 1‒4.

(D) Algorithm 4.1 outperforms Algorithm 3.1.

References

[1] Bertsekas, D. P., Gafni, E. M. Projection methods for variational inequalities with application to the traffic assignment problem. In: Sorensen, D. C., Wets, R. J.-B. (eds.) Nondifferential and Variational Techniques in Optimization, Math. Programming Stud., No. 17. Amsterdam: North-Holland, 1982, 139–159

[2] Censor, Y., Gibali, A., Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization, 2012, 61(9): 1119–1132

[3] Chen, A., Lo, H. K., Yang, H. A self-adaptive projection and contraction algorithm for the traffic assignment problem with path-specific costs. European J. Oper. Res., 2001, 135(1): 27–41

[4] Facchinei, F., Pang, J.-S. Finite-dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer Series in Operations Research. New York: Springer-Verlag, 2003

[5] Gibali, A. A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces. J. Nonlinear Anal. Optim., 2015, 6(1): 41–51

[6] Han, D. R., Lo, H. K. Two new self-adaptive projection methods for variational inequality problems. Comput. Math. Appl., 2002, 43(12): 1529–1537

[7] Han, D. R., Zhang, H. C., Qian, G., Xu, L. L. An improved two-step method for solving generalized Nash equilibrium problems. European J. Oper. Res., 2012, 216(3): 613–623

[8] Harker, P. T. Accelerating the convergence of the diagonalization and projection algorithms for finite-dimensional variational inequalities. Math. Programming, Ser. A, 1988, 41(1): 29–59

[9] He, B. S., Liao, L. Z. Improvements of some projection methods for monotone nonlinear variational inequalities. J. Optim. Theory Appl., 2002, 112(1): 111–128

[10] He, Y. R. A new double projection algorithm for variational inequalities. J. Comput. Appl. Math., 2006, 185(1): 166–173

[11] Iusem, A. N., Svaiter, B. F. A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization, 1997, 42(4): 309–321

[12] Khobotov, E. N. Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys., 1987, 27(5): 120–127

[13] Li, H., Yang, L., Li, J. A subgradient extragradient projection method for pseudomonotone variational inequalities. J. China West Norm. Univ. (Nat. Sci.), 2016, 37(2): 189–194

[14] Malitsky, Yu. Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim., 2015, 25(1): 502–520

[15] Malitsky, Yu. An extragradient algorithm for monotone variational inequalities. Cybernet. Systems Anal., 2014, 50(2): 271–277

[16] Nagurney, A. Network Economics: A Variational Inequality Approach, Revised Second Edition. Advances in Computational Economics, Vol. 10. Dordrecht: Springer Science+Business Media, 1999

[17] Nagurney, A., Ramanujam, P. Transportation network policy modeling with goal targets and generalized penalty functions. Transport. Sci., 1996, 30(1): 3–13

[18] Noor, M. A. Some recent advances in variational inequalities. II. Other concepts. New Zealand J. Math., 1997, 26(2): 229–255

[19] Sibony, M. Méthodes itératives pour les équations et inéquations aux dérivées partielles non linéaires de type monotone. Calcolo, 1970, 7: 65–183

[20] Solodov, M. V., Svaiter, B. F. A new projection method for variational inequality problems. SIAM J. Control Optim., 1999, 37(3): 765–776

[21] Sun, D. F. A projection and contraction method for the nonlinear complementarity problem and its extensions. Math. Numer. Sinica, 1994, 16(2): 183–194

[22] Wang, Y. J., Xiu, N. H., Wang, C. Y. A new version of extragradient method for variational inequality problems. Comput. Math. Appl., 2001, 42(6/7): 969–979

[23] Wang, Y. J., Xiu, N. H., Wang, C. Y. Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J. Optim. Theory Appl., 2001, 111(3): 641–656

[24] Xiu, N. H., Zhang, J. Z. Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math., 2003, 152(1/2): 559–585

RIGHTS & PERMISSIONS

Higher Education Press 2025
