Optimality conditions for a class of variational inequalities with cone constraints

Wen DONG, Junrong ZHANG, Yiyun WANG, La HUANG

Front. Math. China, 2025, Vol. 20, Issue 1: 39–53. DOI: 10.3868/s140-DDD-025-0004-x

RESEARCH ARTICLE
Abstract

In this paper, optimality conditions for a class of variational inequalities with cone constraints are established by means of image space analysis. By virtue of the nonlinear scalarization function known as the Gerstewitz function, three nonlinear weak separation functions, two nonlinear regular weak separation functions, and a nonlinear strong separation function are introduced. Based on these nonlinear separation functions, optimality conditions of the weak and strong alternative for variational inequalities with cone constraints are derived.

Keywords

Variational inequalities with constraints / image space analysis / nonlinear separation function / optimality condition

Cite this article

Wen DONG, Junrong ZHANG, Yiyun WANG, La HUANG. Optimality conditions for a class of variational inequalities with cone constraints. Front. Math. China, 2025, 20(1): 39–53. DOI: 10.3868/s140-DDD-025-0004-x


1 Introduction and preliminary knowledge

As is well known, variational inequalities have been widely applied in control theory, equilibrium problems, complementarity problems, fixed-point problems, and game theory, among other areas. In recent years, the theory and methods of variational inequalities have received increasing attention [5, 6, 11]. Castellani and Giannessi [2] first introduced image space analysis, which has proved to be a powerful tool for studying constrained extremum problems, classical variational inequalities, and Ky Fan inequalities, and which was explored further in [8]. Image space analysis studies a constrained optimization problem by equivalently expressing it as the infeasibility of a parametric system and then separating two suitable sets in the image space. With the help of image space analysis, Li and Huang [12] studied optimality conditions for variational inequality problems and applied them to traffic equilibrium problems. Mastroeni et al. [14] studied saddle points and gap functions for vector quasi-equilibrium problems with cone constraints. Guu and Li [10] used linear separation functions to investigate optimality conditions and error bounds for vector quasi-equilibrium problems. Based on image space analysis, Chen et al. [4] studied vector variational-like inequalities with cone constraints and derived optimality conditions and the continuity of gap functions. Xu [15] employed image space analysis to study nonlinear separation for inverse variational inequalities with cone constraints. However, the approach of [15] does not transform the variational inequality problem considered in this paper into the infeasibility of a parametric system, nor does it separate the corresponding sets in the image space, which is the core of studying a problem by means of image space analysis. To resolve this issue, the approach of [15] is improved in this paper.

Some symbols and definitions are reviewed first. The closure, relative interior, interior, and boundary of a set $M \subseteq \mathbb{R}^n$ are denoted by $\operatorname{cl} M$, $\operatorname{ri} M$, $\operatorname{int} M$, and $\partial M$, respectively. Let $K \subseteq \mathbb{R}^n$ be a closed convex cone with non-empty interior.

Given a function $f : \mathbb{R}^n \to \mathbb{R}^n$, the following variational inequality with cone constraints is considered: find $x \in \mathbb{R}^n$ such that

$$f(x) \in \Omega, \qquad (y - f(x))^{T} x \ge 0, \quad \forall\, y \in \Omega, \tag{1.1}$$

where $\Omega := \{y \in \mathbb{R}^n : g(y) \in D\}$, with $g : \mathbb{R}^n \to \mathbb{R}^m$ a vector-valued mapping and $D \subseteq \mathbb{R}^m$ a closed convex cone with non-empty interior. In [15], the variational inequality (1.1) is also referred to as the inverse variational inequality.

The following shows the main characteristics of the image of the variational inequality (1.1). Given $x \in \mathbb{R}^n$, define the map

$$A_x : \mathbb{R}^n \to \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m, \qquad A_x(y) := \big((f(x) - y)^{T} x,\; g(y),\; g(f(x))\big), \quad \forall\, y \in \mathbb{R}^n.$$

Consider the sets

$$K(x) := \{(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m : (u, v, \tau) = A_x(y),\ y \in \mathbb{R}^n\},$$

$$H := \{(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m : u > 0,\ (v, \tau) \in D \times D\}.$$

$K(x)$ is called the image of the variational inequality (1.1), and $\mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m$ is referred to as the image space. Clearly, $x \in \mathbb{R}^n$ is a solution to the variational inequality (1.1) if and only if $f(x) \in \Omega$ and the generalized system

$$A_x(y) \in H, \quad y \in \mathbb{R}^n \tag{1.2}$$

is infeasible, or equivalently, $f(x) \in \Omega$ and $K(x) \cap H = \emptyset$.
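For the reader's convenience, this equivalence can be unpacked as follows (a routine verification from the definitions above, supplied here as a sketch):

$$\begin{aligned} x \ \text{solves (1.1)} \ &\iff\ f(x) \in \Omega \ \text{and} \ \nexists\, y \in \Omega : (y - f(x))^{T} x < 0 \\ &\iff\ f(x) \in \Omega \ \text{and} \ \nexists\, y \in \mathbb{R}^n : (f(x) - y)^{T} x > 0,\ g(y) \in D,\ g(f(x)) \in D \\ &\iff\ f(x) \in \Omega \ \text{and} \ K(x) \cap H = \emptyset, \end{aligned}$$

where the second equivalence uses that $g(f(x)) \in D$ holds automatically once $f(x) \in \Omega$, and that $g(y) \in D$ means exactly $y \in \Omega$.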

For a function $h$ defined on a set $X \subseteq \mathbb{R}^n$ and $\alpha \in \mathbb{R}$, the sets

$$\operatorname{lev}_{\ge \alpha} h := \{x \in X : h(x) \ge \alpha\},$$

$$\operatorname{lev}_{> \alpha} h := \{x \in X : h(x) > \alpha\}$$

are called the non-negative level set and the positive level set of the function $h$, respectively.

Definition 1.1 [9] Given a function $\omega : \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m \times \Pi \to \mathbb{R}$, the function $\omega$ is called a weak separation function; the class of all such functions is denoted by $W(\Pi)$. It must satisfy the following conditions:

(i) $\operatorname{lev}_{\ge 0} \omega(\cdot\,; \pi) \supseteq H$, $\forall\, \pi \in \Pi$;

(ii) $\bigcap_{\pi \in \Pi} \operatorname{lev}_{> 0} \omega(\cdot\,; \pi) \subseteq H$,

where $\Pi$ is a certain parameter set.

Definition 1.2 [9] Given a function $\omega : \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m \times \Pi \to \mathbb{R}$, the function $\omega$ is called a regular weak separation function, the class being denoted by $W_R(\Pi)$, if it satisfies $\bigcap_{\pi \in \Pi} \operatorname{lev}_{> 0} \omega(\cdot\,; \pi) = H$, where $\Pi$ is a certain parameter set.

Definition 1.3 [9] Given a function $s : \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m \times \Pi \to \mathbb{R}$, the function $s$ is called a strong separation function, the class being denoted by $S(\Pi)$, if it satisfies the following conditions:

(i) $\operatorname{lev}_{> 0} s(\cdot\,; \pi) \subseteq H$, $\forall\, \pi \in \Pi$;

(ii) $\bigcup_{\pi \in \Pi} \operatorname{lev}_{> 0} s(\cdot\,; \pi) = \operatorname{ri} H$,

where $\Pi$ is a certain parameter set.

Note 1.1 When $\operatorname{int} H \ne \emptyset$, $\operatorname{ri} H = \operatorname{int} H$. Thus, when $\operatorname{int} H \ne \emptyset$,

$$\bigcup_{\pi \in \Pi} \operatorname{lev}_{> 0} s(\cdot\,; \pi) = \operatorname{ri} H \iff \bigcup_{\pi \in \Pi} \operatorname{lev}_{> 0} s(\cdot\,; \pi) = \operatorname{int} H.$$

Lemma 1.1 [1] Let $Y$ be a normed space, let $A \subseteq Y$, and let $E \subseteq Y$ be a convex cone with non-empty interior. Then $\operatorname{int}(A + E) = A + \operatorname{int} E$.

Given $e \in \operatorname{int} K$, define the Gerstewitz function $\xi_{e,K} : \mathbb{R}^n \to \mathbb{R}$ by $\xi_{e,K}(y) = \min\{r \in \mathbb{R} : y \in -re + K\}$, $y \in \mathbb{R}^n$.

Proposition 1.1 [3, 7, 13] For any given $e \in \operatorname{int} K$, $y \in \mathbb{R}^n$, and $r \in \mathbb{R}$:

(i) $\xi_{e,K}(y) < r \iff y \in -re + \operatorname{int} K$;

(ii) $\xi_{e,K}(y) \le r \iff y \in -re + K$;

(iii) the function $\xi_{e,K}$ is decreasing on $\mathbb{R}^n$, that is, $y_2 - y_1 \in K \implies \xi_{e,K}(y_2) \le \xi_{e,K}(y_1)$.
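As a concrete illustration (a standard special case, which reappears in Example 3.1 below), take $K = \mathbb{R}^m_+$ and $e = (1, \dots, 1)^T \in \operatorname{int} K$. Then

$$\xi_{e,K}(y) = \min\{r \in \mathbb{R} : y_i + r \ge 0,\ i = 1, \dots, m\} = \max_{1 \le i \le m} (-y_i),$$

so $\xi_{e,K}(y) \le 0$ exactly when $y \in \mathbb{R}^m_+$, in accordance with Proposition 1.1(ii), and $\xi_{e,K}$ is decreasing in each coordinate, in accordance with Proposition 1.1(iii).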

2 Separation functions

This section introduces four nonlinear functions and discusses their properties.

$$\omega_1(u, v, \tau; \theta, \lambda, \beta) := \theta u - \lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau), \quad (u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m,\ (\theta, \lambda, \beta) \in \Pi_1, \tag{2.1}$$

where $\Pi_1 := \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}$ and $e \in \operatorname{int} D$.
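To see what $\omega_1$ looks like in the simplest case (an illustration only, not needed in the sequel), take $m = 1$, $D = \mathbb{R}_+$, and $e = 1$; then $\xi_{e,D}(v) = -v$, so

$$\omega_1(u, v, \tau; \theta, \lambda, \beta) = \theta u + \lambda v + \beta \tau,$$

that is, $\omega_1$ reduces to a linear separation function of the kind used in the linear image space analysis of [9, 10].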

$$\omega_2(z; \pi) := -\xi_{e^0, \operatorname{cl} H}(z + \pi), \quad z = (u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m,\ \pi \in \Pi_2, \tag{2.2}$$

$$s(z; \pi) := -\xi_{e^0, \operatorname{cl} H}(z - \pi), \quad z = (u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m,\ \pi \in \Pi_2, \tag{2.3}$$

where $\Pi_2 := \operatorname{cl} H$ and $e^0 \in \operatorname{int} H$.

$$\omega_3(u, v, \tau; \theta, \lambda, \beta) := \theta u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big), \quad (\theta, \lambda, \beta) \in \Pi_1, \tag{2.4}$$

where $e \in \operatorname{int} D$, $r$ is a positive real number, and the expansion function $\sigma : \mathbb{R}^m \to \mathbb{R}$ is upper semicontinuous with

$$\operatorname*{arg\,min}_{z \in \mathbb{R}^m} \sigma(z) = \{0_{\mathbb{R}^m}\}, \qquad \sigma(0_{\mathbb{R}^m}) = 0.$$

In particular, these two conditions imply that $\sigma(z) \ge 0$ for all $z \in \mathbb{R}^m$.
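For instance (an illustrative choice on our part; the paper only imposes the conditions above), $\sigma(z) = \|z\|^2$ is continuous, equals $0$ at $z = 0_{\mathbb{R}^m}$, and is positive elsewhere, so it qualifies as an expansion function.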

Proposition 2.1 (i) The nonlinear function $\omega_1$ is a weak separation function.

(ii) When $(\theta, \lambda, \beta) \in \Pi_3 := \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+$, the nonlinear function $\omega_1$ is a regular weak separation function.

(iii) The nonlinear function $\omega_2$ is a weak separation function.

(iv) If $\operatorname{int} H \ne \emptyset$, then the nonlinear function $s$ is a strong separation function.

(v) The nonlinear function $\omega_3$ is a weak separation function.

(vi) If $(\theta, \lambda, \beta) \in \Pi_3$, then the nonlinear function $\omega_3$ is a regular weak separation function.

Proof (i) For any $(\theta, \lambda, \beta) \in \Pi_1$ and $(u, v, \tau) \in H$, we have $\theta u \ge 0$; moreover, since $v, \tau \in D$, Proposition 1.1(ii) gives $\xi_{e,D}(v) \le 0$ and $\xi_{e,D}(\tau) \le 0$. Hence

$$\omega_1(u, v, \tau; \theta, \lambda, \beta) = \theta u - \lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau) \ge 0.$$

Thus,

$$\operatorname{lev}_{\ge 0} \omega_1(\cdot\,; \theta, \lambda, \beta) \supseteq H, \quad \forall\, (\theta, \lambda, \beta) \in \Pi_1. \tag{2.5}$$

Now, it is required to prove

$$\bigcap_{(\theta, \lambda, \beta) \in \Pi_1} \operatorname{lev}_{> 0} \omega_1(\cdot\,; \theta, \lambda, \beta) \subseteq H. \tag{2.6}$$

To prove this, it is sufficient to show that if $(u, v, \tau) \notin H$, then $(u, v, \tau) \notin \bigcap_{(\theta, \lambda, \beta) \in \Pi_1} \operatorname{lev}_{> 0} \omega_1(\cdot\,; \theta, \lambda, \beta)$, i.e., that there exists $(\theta, \lambda, \beta) \in \Pi_1$ such that $\omega_1(u, v, \tau; \theta, \lambda, \beta) \le 0$. Let $(u, v, \tau) \notin H$. The following three cases are discussed.

Case 1: If $u \le 0$ and $(v, \tau) \in D \times D$, then there exists $(\theta, \lambda, \beta) = (1, 0, 0) \in \Pi_1$ such that $\omega_1(u, v, \tau; \theta, \lambda, \beta) = \theta u = u \le 0$.

Case 2: If $u > 0$, $v \notin D$, and $\tau \in D$, then there exists $(\theta, \lambda, \beta) = (0, 1, 0) \in \Pi_1$. According to Proposition 1.1(ii), $\xi_{e,D}(v) > 0$, so $\omega_1(u, v, \tau; \theta, \lambda, \beta) = -\xi_{e,D}(v) < 0$.

Case 3: If $u > 0$, $v \in D$, and $\tau \notin D$, then there exists $(\theta, \lambda, \beta) = (0, 0, 1) \in \Pi_1$. According to Proposition 1.1(ii), $\xi_{e,D}(\tau) > 0$, so $\omega_1(u, v, \tau; \theta, \lambda, \beta) = -\xi_{e,D}(\tau) < 0$.

(The remaining configurations of $(u, v, \tau) \notin H$ are covered by the same choices: the parameter of Case 2 works whenever $v \notin D$, that of Case 3 whenever $\tau \notin D$, and $(\theta, \lambda, \beta) = (0, 1, 1)$ works when both $v \notin D$ and $\tau \notin D$.)

According to Equations (2.5) and (2.6), $\omega_1 \in W(\Pi_1)$.

(ii) By the argument in (i), for any $\pi = (\theta, \lambda, \beta) \in \Pi_3$ and any $(u, v, \tau) \in H$, we have $\theta u > 0$ and $-\lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau) \ge 0$, so that $\omega_1(u, v, \tau; \pi) > 0$. This proves

$$\bigcap_{\pi \in \Pi_3} \operatorname{lev}_{> 0} \omega_1(\cdot\,; \pi) \supseteq H. \tag{2.7}$$

Now, it is required to prove

$$\bigcap_{\pi \in \Pi_3} \operatorname{lev}_{> 0} \omega_1(\cdot\,; \pi) \subseteq H. \tag{2.8}$$

As in (i), it suffices to show that if $(u, v, \tau) \notin H$, there exists $\pi \in \Pi_3$ such that $\omega_1(u, v, \tau; \pi) \le 0$. Let $(u, v, \tau) \notin H$. The following three cases are discussed:

Case 1: If $u \le 0$, take $\pi = (1, 0, 0) \in \Pi_3$; then $\omega_1(u, v, \tau; \pi) = u \le 0$.

Case 2: If $u > 0$ and $v \notin D$, then Proposition 1.1(ii) gives $\xi_{e,D}(v) > 0$, so that $\pi = \big(1,\, u/\xi_{e,D}(v),\, 0\big) \in \Pi_3$ and

$$\omega_1(u, v, \tau; \pi) = u - \frac{u}{\xi_{e,D}(v)}\, \xi_{e,D}(v) = 0.$$

Case 3: If $u > 0$, $v \in D$, and $\tau \notin D$, then Proposition 1.1(ii) gives $\xi_{e,D}(\tau) > 0$, so that $\pi = \big(1,\, 0,\, u/\xi_{e,D}(\tau)\big) \in \Pi_3$ and

$$\omega_1(u, v, \tau; \pi) = u - \frac{u}{\xi_{e,D}(\tau)}\, \xi_{e,D}(\tau) = 0.$$

From Equations (2.7) and (2.8), it is concluded that $\omega_1 \in W_R(\Pi_3)$.

(iii) For any $\pi \in \Pi_2$ and $(u, v, \tau) \in H$, there is $(u, v, \tau) + \pi \in H + \operatorname{cl} H \subseteq \operatorname{cl} H$. Moreover, according to Proposition 1.1(ii), it is known that $\omega_2(u, v, \tau; \pi) = -\xi_{e^0, \operatorname{cl} H}\big((u, v, \tau) + \pi\big) \ge 0$, which implies

$$\operatorname{lev}_{\ge 0} \omega_2(\cdot\,; \pi) \supseteq H, \quad \forall\, \pi \in \Pi_2. \tag{2.9}$$

Now, it is required to prove

$$\bigcap_{\pi \in \Pi_2} \operatorname{lev}_{> 0} \omega_2(\cdot\,; \pi) \subseteq H. \tag{2.10}$$

In fact, let $\pi^0 = 0_{\mathbb{R}^{2m+1}} \in \operatorname{cl} H$. According to Proposition 1.1(i), it is known that

$$\operatorname{lev}_{> 0} \omega_2(\cdot\,; \pi^0) = \operatorname{int} H \subseteq H. \tag{2.11}$$

Furthermore, from $\bigcap_{\pi \in \Pi_2} \operatorname{lev}_{> 0} \omega_2(\cdot\,; \pi) \subseteq \operatorname{lev}_{> 0} \omega_2(\cdot\,; \pi^0)$ and Equation (2.11), it is concluded that Equation (2.10) holds. Therefore, according to Equations (2.9) and (2.10), $\omega_2 \in W(\Pi_2)$.

(iv) First, it will follow from Equation (2.14) below that

$$\operatorname{lev}_{> 0} s(\cdot\,; \pi) \subseteq H, \quad \forall\, \pi \in \Pi_2. \tag{2.12}$$

Now, it is required to prove

$$\bigcup_{\pi \in \Pi_2} \operatorname{lev}_{> 0} s(\cdot\,; \pi) = \operatorname{int} H. \tag{2.13}$$

First, it is required to prove

$$\bigcup_{\pi \in \Pi_2} \operatorname{lev}_{> 0} s(\cdot\,; \pi) \subseteq \operatorname{int} H. \tag{2.14}$$

In fact, take any $(u, v, \tau) \in \bigcup_{\pi \in \Pi_2} \operatorname{lev}_{> 0} s(\cdot\,; \pi)$. Then there exists $\pi^0 \in \Pi_2$ such that $(u, v, \tau) \in \operatorname{lev}_{> 0} s(\cdot\,; \pi^0)$. According to Proposition 1.1(i), $(u, v, \tau) - \pi^0 \in \operatorname{int} H$. Then, by Lemma 1.1,

$$(u, v, \tau) \in \pi^0 + \operatorname{int} H \subseteq \operatorname{cl} H + \operatorname{int} H = \operatorname{int} H.$$

Therefore, Equation (2.14) holds, and Equation (2.12) follows since $\operatorname{int} H \subseteq H$.

Next, it is required to prove

$$\bigcup_{\pi \in \Pi_2} \operatorname{lev}_{> 0} s(\cdot\,; \pi) \supseteq \operatorname{int} H. \tag{2.15}$$

In fact, let $\pi^0 = 0_{\mathbb{R}^{2m+1}} \in \operatorname{cl} H$. From Proposition 1.1(i), it is known that $\operatorname{lev}_{> 0} s(\cdot\,; \pi^0) = \operatorname{int} H$. Thus,

$$\operatorname{int} H = \operatorname{lev}_{> 0} s(\cdot\,; \pi^0) \subseteq \bigcup_{\pi \in \Pi_2} \operatorname{lev}_{> 0} s(\cdot\,; \pi).$$

From Equations (2.14) and (2.15), it is concluded that Equation (2.13) holds. Therefore, based on Equations (2.12) and (2.13) together with Note 1.1, $s \in S(\Pi_2)$.

(v) For any $(\theta, \lambda, \beta) \in \Pi_1$ and $(u, v, \tau) \in H$, we have $\theta u \ge 0$. Since $v, \tau \in D$, it holds that $0_{\mathbb{R}^m} \in (\{v\} - D) \cap (\{\tau\} - D)$; together with $\sigma(0_{\mathbb{R}^m}) = 0$ and Proposition 1.1(ii), which gives $\xi_{e,D}(0_{\mathbb{R}^m}) \le 0$, it is deduced that

$$\omega_3(u, v, \tau; \theta, \lambda, \beta) = \theta u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big) \ge 0,$$

which implies

$$\operatorname{lev}_{\ge 0} \omega_3(\cdot\,; \theta, \lambda, \beta) \supseteq H, \quad \forall\, (\theta, \lambda, \beta) \in \Pi_1. \tag{2.16}$$

Now, it is required to prove

$$\bigcap_{(\theta, \lambda, \beta) \in \Pi_1} \operatorname{lev}_{> 0} \omega_3(\cdot\,; \theta, \lambda, \beta) \subseteq H. \tag{2.17}$$

To do this, it suffices to show that if $(u, v, \tau) \notin H$, there exists $(\theta, \lambda, \beta) \in \Pi_1$ such that $\omega_3(u, v, \tau; \theta, \lambda, \beta) \le 0$. Let $(u, v, \tau) \notin H$. The following three cases are discussed:

Case 1: If $u \le 0$ and $(v, \tau) \in D \times D$, let $(\theta, \lambda, \beta) = (1, 0, 0) \in \Pi_1$. Since $\operatorname*{arg\,min}_{z \in \mathbb{R}^m} \sigma(z) = \{0_{\mathbb{R}^m}\}$ and $\sigma(0_{\mathbb{R}^m}) = 0$, it follows that $\sigma(z) \ge 0$ for all $z \in \mathbb{R}^m$. Also, since $0_{\mathbb{R}^m} \in (\{v\} - D) \cap (\{\tau\} - D)$ and $\sigma(0_{\mathbb{R}^m}) = 0$, there is

$$\sup_{z \in \{v\} - D} \big(-r\sigma(z)\big) = 0, \qquad \sup_{t \in \{\tau\} - D} \big(-r\sigma(t)\big) = 0.$$

Therefore, it is obtained that $\omega_3(u, v, \tau; \theta, \lambda, \beta) = u \le 0$.

Case 2: If $u > 0$, $v \notin D$, and $\tau \in D$, let $(\theta, \lambda, \beta) = \big(0,\, -\xi_{e,D}(\tau)/\xi_{e,D}(v),\, 1\big)$. From Proposition 1.1(ii), $\xi_{e,D}(v) > 0$ and $\xi_{e,D}(\tau) \le 0$, so $(\theta, \lambda, \beta) \in \Pi_1$. Furthermore, by Proposition 1.1(iii) and since $\sigma(z) \ge 0$ for all $z \in \mathbb{R}^m$, it is deduced that

$$\omega_3(u, v, \tau; \theta, \lambda, \beta) \le \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t)\big) \le -\lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau) = 0.$$

Case 3: If $u > 0$, $v \in D$, and $\tau \notin D$, let $(\theta, \lambda, \beta) = \big(0,\, 1,\, -\xi_{e,D}(v)/\xi_{e,D}(\tau)\big)$. Based on Proposition 1.1(ii), $(\theta, \lambda, \beta) \in \Pi_1$. Additionally, by Proposition 1.1(iii) and since $\sigma(z) \ge 0$ for all $z \in \mathbb{R}^m$, there is

$$\omega_3(u, v, \tau; \theta, \lambda, \beta) \le -\lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau) = 0.$$

(As in (i), the remaining configurations of $(u, v, \tau) \notin H$ are handled similarly; e.g., if $v \notin D$ and $\tau \notin D$, take $(\theta, \lambda, \beta) = (0, 1, 1)$.)

From Equations (2.16) and (2.17), it is concluded that $\omega_3 \in W(\Pi_1)$.

(vi) The proof of (vi) is similar to that of (v). □

Note 2.1 (i) From the definitions of $\omega_1$ and $\omega_3$, it is known that $\omega_3 \le \omega_1$. In fact, by Proposition 1.1(iii) and since $\sigma(z) \ge 0$ for all $z \in \mathbb{R}^m$, there is

$$\begin{aligned} \omega_3(u, v, \tau; \theta, \lambda, \beta) &= \theta u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big) \\ &\le \theta u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t)\big) \\ &\le \theta u - \lambda \xi_{e,D}(v) - \beta \xi_{e,D}(\tau) \\ &= \omega_1(u, v, \tau; \theta, \lambda, \beta), \quad \forall\, (u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m,\ (\theta, \lambda, \beta) \in \Pi_1. \end{aligned}$$

(ii) Let $\bar{\theta} \in \mathbb{R}_{++}$, and define

$$\bar{\omega}(u, v, \tau; \bar{\theta}, \lambda, \beta) = \bar{\theta} u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big),$$

where $\lambda, \beta \in \mathbb{R}_+$ and $(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m$. Then the function $\bar{\omega}$ is a regular weak separation function.

3 Optimality conditions

This section establishes some theorems of the weak and strong alternative and discusses optimality conditions for the variational inequality (1.1).

Theorem 3.1 (i) System (1.2) with respect to the variable $y$ is not simultaneously feasible with the system

$$\exists\, (\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \Pi_1,\ \text{s.t.}\ \bar{\theta}\big[(f(x) - y)^T x\big] - \bar{\lambda}\, \xi_{e,D}(g(y)) - \bar{\beta}\, \xi_{e,D}(g(f(x))) < 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.1}$$

(ii) System (1.2) with respect to the variable $y$ is not simultaneously feasible with the system

$$\exists\, \pi^0 \in \Pi_2,\ \text{s.t.}\ \xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) + \pi^0\big] > 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.2}$$

(iii) System (1.2) with respect to the variable $y$ is not simultaneously feasible with the system

$$\exists\, (\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \Pi_1,\ \text{s.t.}\ \bar{\theta}\big[(f(x) - y)^T x\big] + \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) < 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.3}$$

(iv) System (1.2) with respect to the variable $y$ and the following system cannot both be infeasible, i.e., at least one of them is feasible:

$$\exists\, \pi^0 \in \Pi_2,\ \text{s.t.}\ \xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) - \pi^0\big] \ge 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.4}$$

Proof (i) Assume that System (1.2) is feasible. Then there exists $y \in \mathbb{R}^n$ such that $\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) \in H$. From Proposition 2.1(i), specifically Equation (2.5), it is known that

$$\theta\big[(f(x) - y)^T x\big] - \lambda\, \xi_{e,D}(g(y)) - \beta\, \xi_{e,D}(g(f(x))) \ge 0, \quad \forall\, (\theta, \lambda, \beta) \in \Pi_1.$$

Thus, System (3.1) is infeasible.

Conversely, assume that System (3.1) is feasible. Then,

$$K(x) = \{(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m : (u, v, \tau) = A_x(y),\ y \in \mathbb{R}^n\} \subseteq \{(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m : \omega_1(u, v, \tau; \bar{\theta}, \bar{\lambda}, \bar{\beta}) < 0\}.$$

That is,

$$K(x) \cap \{(u, v, \tau) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^m : \omega_1(u, v, \tau; \bar{\theta}, \bar{\lambda}, \bar{\beta}) \ge 0\} = \emptyset,$$

which implies $K(x) \cap \operatorname{lev}_{\ge 0} \omega_1(\cdot\,; \bar{\theta}, \bar{\lambda}, \bar{\beta}) = \emptyset$. From Proposition 2.1(i), it is known that $K(x) \cap H = \emptyset$, which means that $A_x(y) \notin H$ for any $y \in \mathbb{R}^n$. Hence, System (1.2) with respect to the variable $y$ is infeasible.

(ii) By combining Proposition 2.1(iii) and the proof of part (i), the conclusion follows easily.

(iii) By combining Proposition 2.1(v) and the proof of part (i), the conclusion follows easily.

(iv) Assume that System (1.2) is infeasible. Then $A_x(y) \notin H$, $\forall\, y \in \mathbb{R}^n$. Therefore, from Proposition 2.1(iv), in particular $\operatorname{lev}_{> 0} s(\cdot\,; \pi) \subseteq H$ for every $\pi \in \Pi_2$, it follows that $s(A_x(y); \pi^0) \le 0$ for every $y \in \mathbb{R}^n$ and every $\pi^0 \in \Pi_2$, i.e.,

$$\xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) - \pi^0\big] \ge 0, \quad \forall\, y \in \mathbb{R}^n.$$

Thus, System (3.4) is feasible.

Conversely, assume that System (3.4) is infeasible. Then, for any $\pi \in \Pi_2$, there exists $\bar{y} \in \mathbb{R}^n$ such that

$$\xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - \bar{y})^T x,\ g(\bar{y}),\ g(f(x))\big) - \pi\big] < 0,$$

i.e., $s(A_x(\bar{y}); \pi) > 0$. From Proposition 2.1(iv), it is known that $A_x(\bar{y}) \in H$. Therefore, System (1.2) with respect to the variable $y$ is feasible. □

Theorem 3.2 (i) System (1.2) with respect to the variable $y$ is not simultaneously feasible with the system

$$\exists\, (\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+,\ \text{s.t.}\ \bar{\theta}\big[(f(x) - y)^T x\big] - \bar{\lambda}\, \xi_{e,D}(g(y)) - \bar{\beta}\, \xi_{e,D}(g(f(x))) \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.5}$$

(ii) System (1.2) with respect to the variable $y$ is not simultaneously feasible with the system

$$\exists\, (\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+,\ \text{s.t.}\ \bar{\theta}\big[(f(x) - y)^T x\big] + \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.6}$$

Proof The proof is similar to that of Theorem 3.1(i), now using the regular weak separation property. □

From Theorems 3.1 and 3.2, necessary and sufficient optimality conditions for the variational inequality (1.1) are derived.

Theorem 3.3 (i) Let $f(x) \in \Omega$, and suppose there exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+$ such that

$$\bar{\theta}\big[(f(x) - y)^T x\big] - \bar{\lambda}\, \xi_{e,D}(g(y)) - \bar{\beta}\, \xi_{e,D}(g(f(x))) \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.7}$$

Then $x$ is a solution to the variational inequality (1.1).

(ii) Let $f(x) \in \Omega$, and suppose there exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+$ such that

$$\bar{\theta}\big[(f(x) - y)^T x\big] + \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.8}$$

Then $x$ is a solution to the variational inequality (1.1).

(iii) If $x$ is a solution to the variational inequality (1.1) and $\operatorname{int} H \ne \emptyset$, then there exists $\pi \in \operatorname{cl} H$ such that

$$\xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) - \pi\big] \ge 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.9}$$

Proof By combining Theorem 3.1 and Theorem 3.2, the conclusion can be drawn. □

The following example illustrates the applicability of Theorem 3.3.

Example 3.1 Let $n = 1$, $m = 3$, $D = \mathbb{R}^3_+$, and $e = (1, 1, 1)^T$. Define

$$f(x) = \begin{cases} x^2 - 1, & x \le 1, \\ 0, & 1 < x \le 2, \\ x - 2, & x > 2, \end{cases}$$

and $g(y) = (y,\ y - 3,\ f(y))^T$, so that $\Omega = [3, +\infty)$.

Let $x = 5$; then $f(x) = 3 \in \Omega$ and $(f(x) - y)^T x = 5(3 - y)$. Let $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) = (1, 5, 0)$. By the definition of $\xi_{e,D}$ and Note 2.1(i), it is known that

$$\omega_3(A_x(y); \bar{\theta}, \bar{\lambda}, \bar{\beta}) \le \omega_1(A_x(y); \bar{\theta}, \bar{\lambda}, \bar{\beta}) = -5(y - 3) - 5\, \xi_{e,D}\big((y,\ y - 3,\ f(y))^T\big) = -5(y - 3) + 5(y - 3) = 0, \quad \forall\, y \in \mathbb{R}.$$

Thus, Equations (3.7) and (3.8) hold. By direct calculation, $x = 5$ is indeed a solution to the variational inequality (1.1): for every $y \in \Omega = [3, +\infty)$, $(y - f(x))^T x = 5(y - 3) \ge 0$.
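The middle equality above can be checked from the special form of $\xi_{e,D}$ recorded after Proposition 1.1 (a routine computation, supplied here for the reader's convenience):

$$\xi_{e,D}\big((y,\ y - 3,\ f(y))^T\big) = \max\{-y,\ 3 - y,\ -f(y)\} = 3 - y, \quad \forall\, y \in \mathbb{R},$$

since $y - 3 \le y$ for all $y$ and $y - 3 \le f(y)$ on each piece of $f$: for $y \le 1$, $(y^2 - 1) - (y - 3) = y^2 - y + 2 > 0$; for $1 < y \le 2$, $0 > y - 3$; and for $y > 2$, $y - 2 > y - 3$.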

Let $\pi = \big(0,\ (0, 0, 0)^T,\ (1, 1, 1)^T\big) \in \operatorname{cl} H$. Since $g(f(x)) = g(3) = (3, 0, 1)^T$, then

$$\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) - \pi = \big(5(3 - y),\ g(y),\ (2, -1, 0)^T\big) \notin \operatorname{cl} H, \quad \forall\, y \in \mathbb{R}.$$

Based on Proposition 1.1(ii), it is known that

$$\xi_{e^0, \operatorname{cl} H}\big[\big((f(x) - y)^T x,\ g(y),\ g(f(x))\big) - \pi\big] \ge 0, \quad \forall\, y \in \mathbb{R},$$

which implies that Equation (3.9) holds.

Using the nonlinear regular weak separation functions (2.1) and (2.4), the following necessary and sufficient optimality conditions for the variational inequality (1.1) can be obtained.

Theorem 3.4 Let $\bar{\theta} \in \mathbb{R}_{++}$. Then $x \in \mathbb{R}^n$ is a solution to the variational inequality (1.1) if and only if $f(x) \in \Omega$ and

$$\sup_{y \in \mathbb{R}^n}\ \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta}\big((f(x) - y)^T x\big) + \sup_{z \in g(y) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] = 0, \tag{3.10}$$

or, equivalently,

$$\sup_{y \in \mathbb{R}^n}\ \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta}\big((f(x) - y)^T x\big) - \lambda\, \xi_{e,D}(g(y)) - \beta\, \xi_{e,D}(g(f(x)))\Big] = 0. \tag{3.11}$$

Proof We only need to prove Equation (3.10); the proof for Equation (3.11) is similar. Let

$$\omega_4(u, v, \tau; \bar{\theta}, \lambda, \beta) := \bar{\theta} u + \sup_{z \in \{v\} - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in \{\tau\} - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big), \quad \lambda, \beta \in \mathbb{R}_+.$$

First, it is required to prove

$$H = \bigcap_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \operatorname{lev}_{> 0} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta) \supseteq \operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta). \tag{3.12}$$

In fact, by Note 2.1(ii), it is known that $\omega_4$ is a regular weak separation function, so the equality in Equation (3.12) holds; the inclusion is obviously valid. Now it is required to prove

$$\operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta) \supseteq H.$$

For any $(u, v, \tau) \in H$, by Proposition 1.1(ii) and $0_{\mathbb{R}^m} \in (\{v\} - D) \cap (\{\tau\} - D)$, there is

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(u, v, \tau; \bar{\theta}, \lambda, \beta) \ge \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \big[\bar{\theta} u - \lambda\, \xi_{e,D}(0_{\mathbb{R}^m}) - \beta\, \xi_{e,D}(0_{\mathbb{R}^m}) - 2r\sigma(0_{\mathbb{R}^m})\big] = \bar{\theta} u > 0.$$

This implies that

$$\operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta) \supseteq H.$$

From Equation (3.12), it is known that

$$H = \operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta). \tag{3.13}$$

If $x \in \mathbb{R}^n$ is a solution to the variational inequality (1.1), then $H \cap K(x) = \emptyset$. According to Equation (3.13), there is

$$\operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta) \cap K(x) = \emptyset,$$

which means

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta}\big((f(x) - y)^T x\big) + \sup_{z \in g(y) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.14}$$

In particular, when $y = f(x)$, there is

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta} \times 0 + \sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] \le 0.$$

On the other hand, since $0_{\mathbb{R}^m} \in g(f(x)) - D$, there is

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta} \times 0 + \sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] \ge \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \big[-\lambda\, \xi_{e,D}(0_{\mathbb{R}^m}) - \beta\, \xi_{e,D}(0_{\mathbb{R}^m}) - 2r\sigma(0_{\mathbb{R}^m})\big] = 0.$$

Thus, there is

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta} \times 0 + \sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] = 0.$$

By combining this with Equation (3.14), it is concluded that Equation (3.10) holds.

Conversely, assume that Equation (3.10) holds. Then, there is

$$\inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \Big[\bar{\theta}\big((f(x) - y)^T x\big) + \sup_{z \in g(y) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big)\Big] \le 0, \quad \forall\, y \in \mathbb{R}^n.$$

At the same time, from Equation (3.13), it is known that

$$H \subseteq \operatorname{lev}_{> 0} \inf_{\lambda \in \mathbb{R}_+,\, \beta \in \mathbb{R}_+} \omega_4(\cdot\,; \bar{\theta}, \lambda, \beta).$$

Therefore, $H \cap K(x) = \emptyset$, and since $f(x) \in \Omega$, it is concluded that $x \in \mathbb{R}^n$ is a solution to the variational inequality (1.1). □

Next, the definition of nonlinear separation is introduced in order to further investigate the optimality conditions for the variational inequality (1.1).

Definition 3.1 The sets $K(x)$ and $H$ satisfy a nonlinear separation if and only if there exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}$ such that

$$\omega_4(A_x(y); \bar{\theta}, \bar{\lambda}, \bar{\beta}) = \bar{\theta}\big((f(x) - y)^T x\big) + \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \le 0, \quad \forall\, y \in \mathbb{R}^n. \tag{3.15}$$

Additionally, if $\bar{\theta} > 0$, the separation is called regular.

Proposition 3.1 If $K(x)$ and $H$ satisfy a regular nonlinear separation and $f(x) \in \Omega$, then $x \in \mathbb{R}^n$ is a solution to the variational inequality (1.1).

Proof The proof is similar to that of Proposition 4.1 in [15]. □

Assume $x \in \mathbb{R}^n$ is given. Define the generalized Lagrange function $L : \mathbb{R}^n \times \mathbb{R}^n \times \big(\mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}\big) \to \mathbb{R}$ by

$$L(x, y; \theta, \lambda, \beta) := \theta (y - f(x))^T x - \sup_{z \in g(y) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big), \quad y \in \mathbb{R}^n,\ (\theta, \lambda, \beta) \in \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}. \tag{3.16}$$
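Observe that, directly from the definitions (a one-line computation recorded here because it drives the saddle-point arguments below),

$$L(x, y; \theta, \lambda, \beta) = -\,\omega_4\big(A_x(y); \theta, \lambda, \beta\big), \quad \forall\, y \in \mathbb{R}^n,\ (\theta, \lambda, \beta) \in \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\},$$

so the nonlinear separation condition (3.15) states precisely that $L(x, \cdot\,; \bar{\theta}, \bar{\lambda}, \bar{\beta})$ is non-negative on $\mathbb{R}^n$.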

Theorem 3.5 Let $f(x) \in \Omega$. The following two statements are equivalent:

(i) $K(x)$ and $H$ satisfy a nonlinear separation.

(ii) There exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}$ such that $(f(x), \bar{\lambda}, \bar{\beta})$ is a saddle point of $L(x, y; \bar{\theta}, \lambda, \beta)$ on $\mathbb{R}^n \times \mathbb{R}_+ \times \mathbb{R}_+$, i.e.,

$$L(x, f(x); \bar{\theta}, \lambda, \beta) \le L(x, f(x); \bar{\theta}, \bar{\lambda}, \bar{\beta}) \le L(x, y; \bar{\theta}, \bar{\lambda}, \bar{\beta}), \quad \forall\, y \in \mathbb{R}^n,\ \forall\, \lambda, \beta \in \mathbb{R}_+. \tag{3.17}$$

Proof (i) $\Rightarrow$ (ii). Assume that $K(x)$ and $H$ satisfy a nonlinear separation. Then there exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}^3_+ \setminus \{0_{\mathbb{R}^3}\}$ such that Equation (3.15) holds. In Equation (3.15), let $y = f(x)$. Then

$$\sup_{z \in g(f(x)) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \le 0.$$

Since $f(x) \in \Omega$, there is $0_{\mathbb{R}^m} \in g(f(x)) - D$, and thus

$$\sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big) \ge 0, \quad \forall\, \lambda, \beta \in \mathbb{R}_+.$$

Therefore,

$$\sup_{z \in g(f(x)) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) = 0.$$

Since the preceding supremum sum is non-negative for all $\lambda, \beta \in \mathbb{R}_+$, there is

$$L(x, f(x); \bar{\theta}, \lambda, \beta) = \bar{\theta} \times 0 - \sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big) \le 0 = L(x, f(x); \bar{\theta}, \bar{\lambda}, \bar{\beta}), \quad \forall\, \lambda, \beta \in \mathbb{R}_+,$$

which is the first inequality of Equation (3.17). Next, it is required to prove the second inequality of Equation (3.17). In fact, since Equation (3.15) holds, there is

$$L(x, y; \bar{\theta}, \bar{\lambda}, \bar{\beta}) \ge 0 = L(x, f(x); \bar{\theta}, \bar{\lambda}, \bar{\beta}), \quad \forall\, y \in \mathbb{R}^n.$$

(ii) $\Rightarrow$ (i). Assume that $(f(x), \bar{\lambda}, \bar{\beta})$ is a saddle point of $L(x, y; \bar{\theta}, \lambda, \beta)$ on $\mathbb{R}^n \times \mathbb{R}_+ \times \mathbb{R}_+$, i.e., for all $y \in \mathbb{R}^n$ and $\lambda, \beta \in \mathbb{R}_+$,

$$\begin{aligned} &\bar{\theta} \times 0 - \sup_{z \in g(f(x)) - D} \big(-\lambda \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\beta \xi_{e,D}(t) - r\sigma(t)\big) \\ \le\ &\bar{\theta} \times 0 - \sup_{z \in g(f(x)) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \\ \le\ &\bar{\theta} (y - f(x))^T x - \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big). \end{aligned} \tag{3.18}$$

In the first inequality of Equation (3.18), let $\lambda = \beta = 0$; the corresponding suprema then vanish, so

$$\sup_{z \in g(f(x)) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) + \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \le 0.$$

From this and the second inequality of Equation (3.18), it is obtained that

$$\bar{\theta} (y - f(x))^T x - \sup_{z \in g(y) - D} \big(-\bar{\lambda} \xi_{e,D}(z) - r\sigma(z)\big) - \sup_{t \in g(f(x)) - D} \big(-\bar{\beta} \xi_{e,D}(t) - r\sigma(t)\big) \ge 0, \quad \forall\, y \in \mathbb{R}^n.$$

Therefore, Equation (3.15) holds, and $K(x)$ and $H$ satisfy a nonlinear separation. □

Corollary 3.1 Let $f(x) \in \Omega$, and suppose there exists $(\bar{\theta}, \bar{\lambda}, \bar{\beta}) \in \mathbb{R}_{++} \times \mathbb{R}_+ \times \mathbb{R}_+$ such that $(f(x), \bar{\lambda}, \bar{\beta})$ is a saddle point of $L(x, y; \bar{\theta}, \lambda, \beta)$ on $\mathbb{R}^n \times \mathbb{R}_+ \times \mathbb{R}_+$. Then $x$ is a solution to the variational inequality (1.1).

Proof By combining Theorem 3.5 and Proposition 3.1, the conclusion follows. □

References

[1] Breckner, W. W., Kassay, G. A systematization of convexity concepts for sets and functions. J. Convex Anal., 1997, 4(1): 109–127

[2] Castellani, G., Giannessi, F. Decomposition of mathematical programs by means of theorems of alternative for linear and nonlinear systems. In: Survey of Mathematical Programming (Proc. Ninth Internat. Math. Programming Sympos., Budapest, 1976), Vol. 2. Amsterdam: North-Holland, 1979, 423–439

[3] Chen, G. Y., Huang, X. X., Yang, X. Q. Vector Optimization: Set-Valued and Variational Analysis. Lecture Notes in Economics and Mathematical Systems, Vol. 541. Berlin: Springer-Verlag, 2005

[4] Chen, J. W., Li, S. J., Wan, Z. P., Yao, J. C. Vector variational-like inequalities with constraints: separation and alternative. J. Optim. Theory Appl., 2015, 166(2): 460–479

[5] Facchinei, F., Pang, J. S. Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research and Financial Engineering. New York: Springer-Verlag, 2003

[6] Ferris, M. C., Pang, J. S. Engineering and economic applications of complementarity problems. SIAM Rev., 1997, 39(4): 669–713

[7] Gerth, C., Weidner, P. Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl., 1990, 67(2): 297–320

[8] Giannessi, F. Theorems of the alternative and optimality conditions. J. Optim. Theory Appl., 1984, 42(3): 331–365

[9] Giannessi, F. Constrained Optimization and Image Space Analysis, Vol. 1: Separation of Sets and Optimality Conditions. Mathematical Concepts and Methods in Science and Engineering, Vol. 49. New York: Springer, 2005

[10] Guu, S. M., Li, J. Vector quasi-equilibrium problems: separation, saddle points and error bounds for the solution set. J. Global Optim., 2014, 58(4): 751–767

[11] Harker, P. T., Pang, J. S. Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program., 1990, 48: 161–220

[12] Li, J., Huang, N. J. Image space analysis for variational inequalities with cone constraints and applications to traffic equilibria. Sci. China Math., 2012, 55(4): 851–868

[13] Luc, D. T. Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 319. Berlin: Springer-Verlag, 1989

[14] Mastroeni, G., Panicucci, B., Passacantando, M., Yao, J. C. A separation approach to vector quasi-equilibrium problems: saddle point and gap function. Taiwanese J. Math., 2009, 13(2): 657–673

[15] Xu, Y. D. Nonlinear separation approach to inverse variational inequalities. Optimization, 2016, 65(7): 1315–1335

RIGHTS & PERMISSIONS

Higher Education Press 2025
