
Mathematical Programming Study 19 (1982) 140-152. North-Holland Publishing Company

NECESSARY CONDITIONS FOR ε-OPTIMALITY

P. LORIDAN

Université de Dijon, Dijon, France

Received 4 October 1979. Revised manuscript received 18 October 1980.

This paper is a study of necessary conditions in mathematical programming with errors, introducing the notion of regular approximate solutions up to ε. These solutions are 'almost' stationary and we obtain Kuhn-Tucker conditions up to ε with no constraint qualification. A duality result is given by using an ε-Lagrangian functional.

Key words: Mathematical Programming, Approximate Solution, Kuhn-Tucker Conditions up to ε, ε-Semiconvex Function.

1. Introduction

In this paper we are concerned with necessary conditions for approximate solutions of mathematical programming problems. We first recall a result due to Ekeland [4]. Then, in Section 3, we introduce the notion of regular approximate solutions and we show that these solutions are stationary up to ε.

In Section 4, we obtain further properties by introducing the concept of ε-semiconvexity, which extends the definition given by Mifflin [9].

In the remaining sections, we give Kuhn-Tucker conditions up to ε with no constraint qualification and we present some duality results by introducing an ε-Lagrangian functional.

2. Preliminary results

In order to prove the main results of this paper, we shall use the following theorem of Ekeland [4, 5]:

Theorem 2.1. Let V be a real Banach space and J a lower semicontinuous functional defined on V, with values in R ∪ {+∞}, J : V → R ∪ {+∞}, not identically +∞ and bounded from below. Let ε > 0. For every point u ∈ V satisfying J(u) ≤ inf{J(v) | v ∈ V} + ε, there exists some point u_ε ∈ V such that:

J(u_ε) ≤ J(u) ≤ inf{J(v) | v ∈ V} + ε, (2.1)

‖u − u_ε‖ ≤ √ε, (2.2)

J(u_ε) ≤ J(v) + √ε ‖v − u_ε‖ for all v ∈ V. (2.3)



If J is Gâteaux-differentiable, then u_ε satisfies:

‖J'(u_ε)‖_* ≤ √ε, (2.4)

where J'(u_ε) denotes the Gâteaux-derivative of J at u_ε and ‖·‖_* the dual norm.

Remark 2.1. No compactness assumptions are made, so the infimum is not necessarily attained.

Remark 2.2. From the proof given by Ekeland [4], the result (2.1) can be written more precisely:

J(u_ε) + √ε ‖u − u_ε‖ ≤ J(u). (2.5)

Corollary 2.1. Let K be a closed subset of V and let J : K → R be lower semicontinuous and bounded from below on K. If u ∈ K satisfies J(u) ≤ inf{J(v) | v ∈ K} + ε, then there exists u_ε ∈ K such that:

‖u − u_ε‖ ≤ √ε,

J(u_ε) ≤ J(v) + √ε ‖v − u_ε‖ for all v ∈ K. (2.6)

The proof uses Theorem 2.1, replacing the functional J by the functional H defined by:

H(v) = J(v) for all v ∈ K,

H(v) = +∞ for all v ∉ K.

This corollary has been used in particular by Clarke [2].
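As a purely illustrative aside, the following minimal Python sketch checks conditions (2.1)-(2.3) on a one-dimensional example; the functional J, the grid standing in for V and the choice of u_ε as the grid minimizer are assumptions made for this sketch only (in general the point u_ε given by the theorem need not minimize J).

```python
import numpy as np

# Illustrative data (not from the paper): J is bounded from below on V = R,
# discretized on a grid so conditions (2.1)-(2.3) can be checked directly.
def J(v):
    return (v - 1.0) ** 2 + 0.1 * np.sin(5.0 * v)

eps = 0.05
grid = np.linspace(-3.0, 3.0, 4001)        # stand-in for V
Jg = J(grid)
J_inf = Jg.min()                           # approximate inf{J(v) | v in V}
u = grid[np.argmax(Jg <= J_inf + eps)]     # some u with J(u) <= inf J + eps
u_eps = grid[np.argmin(Jg)]                # candidate u_eps (here: the grid minimizer)

print("(2.1):", bool(J(u_eps) <= J(u) <= J_inf + eps))
print("(2.2):", bool(abs(u - u_eps) <= np.sqrt(eps)))
print("(2.3):", bool(np.all(J(u_eps) <= Jg + np.sqrt(eps) * np.abs(grid - u_eps))))
```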

3. Necessary conditions for regular approximate solutions. Stationarity up to ε

We consider the mathematical programming problem:

(P) inf{J(v) | v ∈ K}

and we introduce the following hypothesis (H):

Hypothesis (H):
(1) K is a non-empty closed subset of V.
(2) J is a real-valued functional, lower semicontinuous and bounded from below on K.

We begin with some definitions.

3.1. Approximate solutions: Definitions

Definition 3.1. Let ε > 0. If u ∈ K satisfies J(u) ≤ J(v) + ε for all v ∈ K, we say that u is an approximate solution up to ε for the problem (P).


Definition 3.2. We shall say that u is an ε-quasisolution of (P) if u ∈ K satisfies

J(u) ≤ J(v) + √ε ‖v − u‖ for all v ∈ K.

Definition 3.3. An element u ∈ K is said to be a regular approximate solution up to ε for the problem (P) if the following two conditions are satisfied:
(1) u is an approximate solution up to ε (according to Definition 3.1),
(2) u is an ε-quasisolution of (P) (according to Definition 3.2).

Remark 3.1. Obviously, every ε-quasisolution of (P) is 'locally optimal up to ε'. That is to say, if u is an ε-quasisolution, there exists a ball B around u with radius equal to √ε such that J(u) ≤ J(v) + ε for all v ∈ B ∩ K.

By Corollary 2.1, there exists at least one regular approximate solution up to ε for the problem (P). In other words, there exists an ε-quasisolution u which is also 'globally optimal up to ε': J(u) ≤ J(v) + ε for all v ∈ K.
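For concreteness, here is a small Python sketch testing Definitions 3.1-3.3 at two points of a discretized problem; the functional J and the set K are example data, not taken from the paper. The grid minimizer is trivially a regular approximate solution up to ε, whereas a point that is merely optimal up to ε can fail to be an ε-quasisolution.

```python
import numpy as np

# Illustrative check of Definitions 3.1-3.3 (J and K below are example data).
def J(v):
    return np.cos(3.0 * v) + 0.5 * v * v

eps = 0.1
K = np.linspace(-2.0, 2.0, 2001)                     # stand-in for the closed set K
JK = J(K)

def approx_up_to_eps(u):                             # Definition 3.1
    return bool(np.all(J(u) <= JK + eps))

def eps_quasisolution(u):                            # Definition 3.2
    return bool(np.all(J(u) <= JK + np.sqrt(eps) * np.abs(K - u)))

u_min = K[np.argmin(JK)]                             # minimizer over the grid
u_other = K[np.argmax(JK <= JK.min() + eps)]         # first eps-minimizer encountered

for name, u in [("grid minimizer", u_min), ("some eps-minimizer", u_other)]:
    print(name, "-> regular approximate solution up to eps (Definition 3.3):",
          approx_up_to_eps(u) and eps_quasisolution(u))
```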

3.2. Stationarity up to ε and necessary conditions

With the definition of a regular approximate solution, we shall introduce the notion of 'stationarity up to ε', generalizing the classical one.

We denote by J'(u; d) the directional derivative of J at u in the direction d:

J'(u; d) = lim_{t→0, t>0} [J(u + td) − J(u)] / t

and by using the cone of feasible directions to K at u, denoted by F(K, u) as in [1], we have the following definition:

Definition 3.4. If J has a directional derivative at u ∈ K for every d in F(K, u), we shall say that u is stationary up to ε if J'(u; d) + √ε ‖d‖ ≥ 0 for every direction d ∈ F(K, u).

Proposition 3.1. If the hypothesis (H) is satisfied, if J has a directional derivative at u_ε for all feasible directions d ∈ F(K, u_ε), and if u_ε is a regular approximate solution up to ε for (P), then u_ε is also a stationary point up to ε.

Proof. By using Corollary 2.1 with v = u_ε + td, where d is some feasible direction at u_ε and t > 0 is such that u_ε + td ∈ K, we obtain:

J(u_ε) ≤ J(u_ε + td) + √ε t ‖d‖.

Dividing by t and letting t → 0, we obviously have:

J'(u_ε; d) + √ε ‖d‖ ≥ 0

and the proof is achieved.
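To illustrate Definition 3.4 and Proposition 3.1, the following sketch checks the inequality J'(u; d) + √ε ‖d‖ ≥ 0 over sampled unit feasible directions at a constrained minimizer of a smooth example; the quadratic J, the box K and the point u are assumptions of the illustration.

```python
import numpy as np

# Example data (not from the paper): K is the box [0,1] x [0,1] in V = R^2,
# J(v) = (v1 + 0.1)^2 + (v2 - 0.4)^2 is smooth, and u lies on the face {v1 = 0} of K.
def grad_J(v):
    return np.array([2.0 * (v[0] + 0.1), 2.0 * (v[1] - 0.4)])

eps = 0.09
u = np.array([0.0, 0.4])     # constrained minimizer of J over K

# Feasible directions at u: d with d[0] >= 0 (v1 sits at its lower bound, v2 is interior).
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit directions, d[0] >= 0

# Definition 3.4: J'(u; d) + sqrt(eps) * ||d|| >= 0 for every feasible direction d.
values = dirs @ grad_J(u) + np.sqrt(eps)                    # here ||d|| = 1
print("stationary up to eps at u:", bool(np.all(values >= -1e-12)))
```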


Now, following [10], we shall extend the notion of an approximate stationary point to the case where K has an empty interior.

A vector y is called a tangent direction to K at u if there exists a sequence {u_k} ⊂ K satisfying u_k ≠ u, ‖u_k − u‖ → 0 and

(u_k − u) / ‖u_k − u‖ → y when k → +∞.

The sequence {u_k} is said to define the tangent direction y. As in [10], the set of all tangent directions to K at u will be denoted Y(K, u).

It is noted that the cone of tangents to K at u is the cone consisting of non-negative multiples of directions y ∈ Y(K, u).

Now, we recall the notion of y-derivative as defined in [10].

Definition 3.5. Let y ∈ Y(K, u). If there exists a real number δJ(u, y) such that

lim_{k→+∞} [J(u_k) − J(u)] / ‖u_k − u‖ = δJ(u, y)

for every sequence {u_k} ⊂ K which defines y, then δJ(u, y) is called the y-derivative of J at u.

By using Definition 3.5, we introduce another concept of stationarity up to ε with the next definition.

Definition 3.6. If J has a y-derivative at u ∈ K for every y ∈ Y(K, u), we shall say that u is a stationary point up to ε if

δJ(u, y) + √ε ≥ 0 for all y ∈ Y(K, u).

Proposition 3.2. If the hypothesis (H) is satisfied, if J has a y-derivative at u_ε for every y ∈ Y(K, u_ε), and if u_ε is a regular approximate solution up to ε for the problem (P), then u_ε is also a stationary point up to ε, according to Definition 3.6.

Proof. Let {u_k} be a sequence which defines the direction y at u_ε. From Corollary 2.1, with v = u_k, we have:

J(u_ε) ≤ J(u_k) + √ε ‖u_k − u_ε‖.

Then, dividing by ‖u_k − u_ε‖ and letting k → +∞, we obtain:

δJ(u_ε, y) + √ε ≥ 0 for all y ∈ Y(K, u_ε).

Remark 3.2. In the case where K = V and J has a Gâteaux-derivative J'(u_ε) at u_ε, then, from Theorem 2.1, we have

‖J'(u_ε)‖_* ≤ √ε.

This is another necessary condition, for the case where (P) is an unconstrained optimization problem, and we shall again say that u_ε is a stationary point up to ε.
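As a rough illustration only (not a method proposed in the paper), the sketch below runs a plain gradient iteration on an example functional until ‖J'(v)‖ ≤ √ε, which produces a stationary point up to ε in the sense of this remark.

```python
import numpy as np

# Illustrative only: J(v) = v1^2 - v1 + v2^4 (example data), stopped as soon as
# the dual-norm condition ||J'(u_eps)|| <= sqrt(eps) of Remark 3.2 holds.
def grad_J(v):
    return np.array([2.0 * v[0] - 1.0, 4.0 * v[1] ** 3])

eps = 1e-2
v = np.array([2.0, 1.0])
while np.linalg.norm(grad_J(v)) > np.sqrt(eps):
    v = v - 0.1 * grad_J(v)              # fixed small step, enough for this example
print("u_eps ~", v, "  ||J'(u_eps)|| =", np.linalg.norm(grad_J(v)))
```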

4. ε-semiconvex functionals

In order to obtain a characterization of ε-quasisolutions of (P), we now introduce the concept of 'ε-semiconvexity' by extending the definition given by Mifflin [9]. We define the generalized directional derivative J°(u; d) at u, in the direction d, as in [2] by:

J°(u; d) = lim sup_{h→0, t→0+} [J(u + h + td) − J(u + h)] / t.

In the sequel, we shall refer to the following definition (with ε > 0).

Definition 4.1. Let K be a subset of V; J : V → R is ε-semiconvex at u ∈ K if:
(1) J is Lipschitz on a ball about u,
(2) J is ε-quasidifferentiable (or ε-regular), that is to say, J'(u; d) exists and 0 ≤ J°(u; d) − J'(u; d) ≤ √ε ‖d‖ for each d ∈ V,
(3) u + d ∈ K and J'(u; d) + √ε ‖d‖ ≥ 0 imply that J(u + d) + √ε ‖d‖ ≥ J(u).

Remark 4.1. A convex functional on V is also ε-semiconvex. For ε = 0, we recover the concept of semiconvexity defined by Mifflin [9].
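The following sketch spot-checks condition (3) of Definition 4.1 for a smooth convex functional on K = V = R, consistently with this remark; the functional, the point u and the sampled directions are assumptions of the illustration (for a smooth functional J° = J', so condition (2) holds with equality).

```python
import numpy as np

# Sampling check of condition (3) of Definition 4.1 for a smooth convex J (example data).
def J(u):
    return u ** 2 + np.exp(-u)

def J_prime(u):                      # ordinary derivative; here J'(u; d) = J'(u) * d
    return 2.0 * u - np.exp(-u)

eps, u = 0.04, 0.7
violations = 0
for d in np.linspace(-3.0, 3.0, 1201):
    if J_prime(u) * d + np.sqrt(eps) * abs(d) >= 0.0:        # hypothesis of (3)
        if not (J(u + d) + np.sqrt(eps) * abs(d) >= J(u)):   # conclusion of (3)
            violations += 1
print("violations of condition (3) found:", violations)     # expected: 0
```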

By using the notion of ε-quasidifferentiability, we obtain the next proposition, similar to [9, Theorem 8].

Proposition 4.1. If J is ε-semiconvex on a convex set K, u ∈ K and u + d ∈ K, then

J(u + d) + √ε ‖d‖ ≤ J(u)

implies that J'(u; d) + √ε ‖d‖ ≤ 0.

Proof. The proof is similar to the one given in [9]. Suppose, for purposes of contradiction, that J(u + d) + √ε ‖d‖ ≤ J(u) and J'(u; d) + √ε ‖d‖ > 0. Then there exists t > 0 such that t < 1 and

J(u + td) − J(u) + √ε t ‖d‖ > 0,

and, by setting f(t) = J(u + td) + √ε t ‖d‖, we obtain f(t) > f(0). But we have supposed f(1) ≤ f(0). So if t̄ maximizes the continuous function f over [0, 1], we obtain:

f(1) = J(u + d) + √ε ‖d‖ ≤ J(u) = f(0) < f(t̄).


Then, by the maximality of t̄, we have

J'(u + t̄d; d) + √ε ‖d‖ ≤ 0, (4.1)

J'(u + t̄d; −d) + √ε ‖d‖ ≤ 0. (4.2)

By the ε-quasidifferentiability of J, we obtain from (4.1) and (4.2):

J°(u + t̄d; d) ≤ J'(u + t̄d; d) + √ε ‖d‖ ≤ 0, (4.3)

J°(u + t̄d; −d) ≤ J'(u + t̄d; −d) + √ε ‖d‖ ≤ 0. (4.4)

As in [9], we then obviously have J°(u + t̄d; d) = J°(u + t̄d; −d) = 0, and we obtain:

J'(u + t̄d; d) + √ε ‖d‖ ≥ 0.

Finally, we have

J(u + d) + √ε ‖d‖ ≥ J(u + t̄d) + √ε t̄ ‖d‖ (4.5)

by using arguments similar to those given in [9]. The inequality (4.5) gives f(1) ≥ f(t̄), which contradicts f(1) ≤ f(0) < f(t̄).

Corollary 4.1. If J is ε-semiconvex on K, if the hypothesis (H) is satisfied, and if u ∈ K satisfies J(u) ≤ J(v) + ε for all v ∈ K, then there exists a regular approximate solution up to ε, denoted by u_ε, such that:

J'(u; u_ε − u) + √ε ‖u_ε − u‖ ≤ 0.

Proof. Use Remark 2.2 and Proposition 4.1 with d = u_ε − u.

Proposition 4.2. If the hypothesis (H) is satisfied (with K convex) and if J is ε-semiconvex on K, then every ε-quasisolution u_ε of (P) is characterized by

J'(u_ε; v − u_ε) + √ε ‖v − u_ε‖ ≥ 0 for all v ∈ K.

Proof. Obviously, if u_ε is an ε-quasisolution, then u_ε is a stationary point up to ε according to Proposition 3.1, with d = v − u_ε, v ∈ K, and so we have

J'(u_ε; v − u_ε) + √ε ‖v − u_ε‖ ≥ 0 for all v ∈ K.

This condition is sufficient from the definition of ε-semiconvexity (J is ε-semiconvex at every point of the convex set K).
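A hedged numerical check of this characterization on a small convex example (convex, hence ε-semiconvex by Remark 4.1; the data are illustrative): the constrained minimum of J(v) = (v − 2)² over K = [0, 1] is attained at the boundary point u_ε = 1, which is an ε-quasisolution, and the inequality of Proposition 4.2 holds on a grid.

```python
import numpy as np

# Example data: J(v) = (v - 2)^2 on the convex set K = [0, 1]; u_eps = 1.
def J(v):
    return (v - 2.0) ** 2

def J_dir(u, d):                       # directional derivative of the smooth J
    return 2.0 * (u - 2.0) * d

eps = 0.04
K = np.linspace(0.0, 1.0, 1001)
u_eps = 1.0

char_ok = np.all(J_dir(u_eps, K - u_eps) + np.sqrt(eps) * np.abs(K - u_eps) >= -1e-12)
quasi_ok = np.all(J(u_eps) <= J(K) + np.sqrt(eps) * np.abs(K - u_eps))
print("characterization of Proposition 4.2 holds:", bool(char_ok))
print("u_eps is an eps-quasisolution (Definition 3.2):", bool(quasi_ok))
```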

5. Generalized Kuhn-Tucker conditions up to ε

Hypothesis (H'): Let V be a real Banach space and let J be a real-valued functional defined on V, locally Lipschitz and bounded from below on V. We are concerned with the following mathematical programming problem:

(P) inf{J(v) | v ∈ K}.

K is a non-empty subset of V defined by:

K = {v ∈ V | G_i(v) ≤ 0, 1 ≤ i ≤ n; H_j(v) = 0, 1 ≤ j ≤ p},

where the G_i and H_j are real-valued, locally Lipschitz functionals defined on V, 1 ≤ i ≤ n, 1 ≤ j ≤ p.

Let ε > 0. We introduce the ε-feasible set K_ε:

K_ε = {v ∈ V | G_i(v) ≤ √ε, 1 ≤ i ≤ n; −√ε ≤ H_j(v) ≤ √ε, 1 ≤ j ≤ p}.

Definition 5.1. We shall say that u_ε is an almost regular approximate solution (up to ε) for the problem (P) if u_ε satisfies the following conditions:
(1) u_ε ∈ K_ε,
(2) J(u_ε) ≤ J(v) + ε for all v ∈ K,
(3) J(u_ε) ≤ J(v) + √ε ‖v − u_ε‖ for all v ∈ K.

Remark 5.1. If u_ε ∈ K, then u_ε is a regular approximate solution up to ε, according to Definition 3.3.

Lemma 5.1. Let f be a locally Lipschitz functional defined on V. Let h be the functional defined by h(v) = [f(v)]² for all v ∈ V. Then the generalized gradient of h at v, denoted ∂h(v), satisfies the following inclusion:

∂h(v) ⊂ 2 f(v) ∂f(v).

Proof. It is a consequence of [2, Proposition 10].

Remark 5.2. In the sequel, we shall also use this lemma with a functional f(v) = max{0, g(v)}, where g is a locally Lipschitz functional on V. Then, following [2, Proposition 9], with h(v) = [f(v)]² we shall have:

∂h(v) ⊂ {0} if g(v) ≤ 0,

∂h(v) ⊂ 2 g(v) ∂g(v) if g(v) > 0.

So, we can write ∂h(v) ⊂ 2 max{0, g(v)} ∂g(v).
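For a continuously differentiable g, the inclusion of Remark 5.2 reduces to the ordinary derivative formula h'(v) = 2 max{0, g(v)} g'(v); the following sketch compares this formula with central finite differences on an example g (illustrative data only).

```python
# Finite-difference check of h'(v) = 2*max{0, g(v)}*g'(v) for h(v) = [max{0, g(v)}]^2,
# with the smooth example g(v) = v^2 - 1 (illustrative data only).
def g(v):
    return v ** 2 - 1.0

def h(v):
    return max(0.0, g(v)) ** 2

def h_prime(v):
    return 2.0 * max(0.0, g(v)) * (2.0 * v)        # 2*max{0, g(v)} * g'(v)

for v in (-2.0, -0.5, 0.0, 0.9, 1.5):
    fd = (h(v + 1e-6) - h(v - 1e-6)) / 2e-6        # central difference
    print(f"v = {v:5.2f}   formula = {h_prime(v):10.6f}   finite difference = {fd:10.6f}")
```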

Theorem 5.1. Suppose the hypothesis (H') is satisfied. Then there exist an almost regular approximate solution u_ε up to ε and real numbers λ_i(ε) ≥ 0, 1 ≤ i ≤ n, μ_j(ε), 1 ≤ j ≤ p, such that:

λ_i(ε) > 0 for all i ∈ I(ε) = {i | 0 < G_i(u_ε) ≤ √ε}, (5.1)

λ_i(ε) = 0 if G_i(u_ε) ≤ 0, (5.2)

0 ∈ ∂J(u_ε) + Σ_{i∈I(ε)} λ_i(ε) ∂G_i(u_ε) + Σ_{j=1}^p μ_j(ε) ∂H_j(u_ε) + √ε B*, (5.3)

where B* is the unit ball in V*, the topological dual space of V.

Proof. We introduce a penalized functional (see [6]):

J_{r,s}(v) = J(v) + Σ_{i=1}^n (1/r_i) [max{0, G_i(v)}]² + Σ_{j=1}^p (1/s_j) [H_j(v)]²,

where the r_i and s_j are positive real numbers, 1 ≤ i ≤ n, 1 ≤ j ≤ p, and we consider the mathematical programming problem:

(P_{r,s}) inf{J_{r,s}(v) | v ∈ V}.

Obviously, J_{r,s} is a locally Lipschitz functional defined on V, bounded from below. Then, by using Theorem 2.1, there exists u_ε ∈ V (u_ε depending on r and s) such that:

J_{r,s}(u_ε) ≤ J_{r,s}(v) + ε for all v ∈ V, (5.4)

J_{r,s}(u_ε) ≤ J_{r,s}(v) + √ε ‖v − u_ε‖ for all v ∈ V. (5.5)

With inequality (5.4), we also obtain:

J(u_ε) ≤ J(v) + ε for all v ∈ K

and, with inf_{v∈V} J(v) ≤ J(u_ε), we have:

Σ_{i=1}^n (1/r_i) [max{0, G_i(u_ε)}]² + Σ_{j=1}^p (1/s_j) [H_j(u_ε)]² ≤ inf_{v∈K} J(v) − inf_{v∈V} J(v) + ε.

So, max{0, G_i(u_ε)} → 0 if r_i → 0, 1 ≤ i ≤ n, and H_j(u_ε) → 0 if s_j → 0, 1 ≤ j ≤ p. Consequently, there exist r_i(ε), 1 ≤ i ≤ n, and s_j(ε), 1 ≤ j ≤ p, such that

r_i ≤ r_i(ε) implies that max{0, G_i(u_ε)} ≤ √ε, 1 ≤ i ≤ n,

s_j ≤ s_j(ε) implies that [H_j(u_ε)]² ≤ ε, 1 ≤ j ≤ p, that is to say, −√ε ≤ H_j(u_ε) ≤ √ε, 1 ≤ j ≤ p.

So, with such a choice of r_i, 1 ≤ i ≤ n, and s_j, 1 ≤ j ≤ p, we can define an element u_ε ∈ K_ε which is an almost regular approximate solution, according to Definition 5.1 (by using the inequalities (5.4) and (5.5)). Besides, u_ε satisfies the necessary condition (see [2])

0 ∈ ∂J_{r,s}(u_ε) + √ε B*.

With Lemma 5.1 and the results of [2], we then deduce

0 ∈ ∂J(u_ε) + 2 Σ_{i=1}^n (G_i^+(u_ε)/r_i) ∂G_i(u_ε) + 2 Σ_{j=1}^p (H_j(u_ε)/s_j) ∂H_j(u_ε) + √ε B*,

where G_i^+(v) = max{0, G_i(v)}, 1 ≤ i ≤ n.

Finally, we obtain the result (5.3) by setting:

μ_j(ε) = (2/s_j) H_j(u_ε), 1 ≤ j ≤ p,

λ_i(ε) = (2/r_i) G_i^+(u_ε), 1 ≤ i ≤ n.

The conditions (5.1) and (5.2) are then trivially satisfied.
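The construction used in this proof can be mimicked numerically. The sketch below is an illustration only: the problem data, the fixed penalty parameters and the use of a BFGS routine are assumptions, not part of the paper. It minimizes the penalized functional J_{r,s} for a small smooth problem, recovers λ_1(ε) = (2/r_1) G_1^+(u_ε) and μ_1(ε) = (2/s_1) H_1(u_ε), and checks the smooth form of condition (5.3).

```python
import numpy as np
from scipy.optimize import minimize

# Example problem (illustrative data):
#   minimize J(v) = v1 + v2  s.t.  G1(v) = v1^2 + v2^2 - 1 <= 0,  H1(v) = v1 - v2 = 0.
eps = 1e-2
r1 = s1 = 1e-3                                       # penalty parameters r_i, s_j

def G1(v): return v[0] ** 2 + v[1] ** 2 - 1.0
def H1(v): return v[0] - v[1]

def J_rs(v):                                         # penalized functional J_{r,s}
    return v[0] + v[1] + (1.0 / r1) * max(0.0, G1(v)) ** 2 + (1.0 / s1) * H1(v) ** 2

def grad_J_rs(v):
    return (np.array([1.0, 1.0])
            + (2.0 / r1) * max(0.0, G1(v)) * np.array([2.0 * v[0], 2.0 * v[1]])
            + (2.0 / s1) * H1(v) * np.array([1.0, -1.0]))

u = minimize(J_rs, x0=np.array([0.0, 0.0]), jac=grad_J_rs, method="BFGS").x

lam1 = 2.0 * max(0.0, G1(u)) / r1                    # lambda_1(eps) = (2/r1) G1+(u_eps)
mu1 = 2.0 * H1(u) / s1                               # mu_1(eps)     = (2/s1) H1(u_eps)

combined = np.array([1.0, 1.0]) + lam1 * 2.0 * u + mu1 * np.array([1.0, -1.0])
print("u_eps ~", u, "  lambda_1(eps) ~", lam1, "  mu_1(eps) ~", mu1)
print("||J' + lam*G1' + mu*H1'|| =", np.linalg.norm(combined),
      " <= sqrt(eps):", bool(np.linalg.norm(combined) <= np.sqrt(eps)))
```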

Remark 5.3. Conditions (5.1) to (5.3) are called generalized Kuhn-Tucker conditions up to ε. For related work we refer to [2], where Clarke derives necessary conditions in Lagrange multiplier form for a local solution of (P) by making use of a normality condition.

Remark 5.4. If we consider the problem (Q):

(Q) inf{J(v) | v ∈ K ∩ C},

where C is a closed subset of V, and if the hypothesis (H') is satisfied, then, by using the results of Clarke [2] about the distance function d_C(v) and the normal cone N_C(v), it is easy to prove the following theorem:

Theorem 5.2. Let ε be a positive real number. There exist an almost regular approximate solution u_ε (up to ε) for the problem (Q), real numbers λ_i(ε), 1 ≤ i ≤ n, μ_j(ε), 1 ≤ j ≤ p, and a point z_ε in V* such that:

(1) λ_i(ε) = 0 if G_i(u_ε) ≤ 0,

(2) λ_i(ε) > 0 for all i ∈ I(ε) = {i | 0 < G_i(u_ε) ≤ √ε},

(3) z_ε ∈ ∂J(u_ε) + Σ_{i∈I(ε)} λ_i(ε) ∂G_i(u_ε) + Σ_{j=1}^p μ_j(ε) ∂H_j(u_ε) + √ε B*,

(4) −z_ε ∈ N_C(u_ε).

Remark 5.5. In the differentiable case we obtain, with the next theorem, Kuhn-Tucker conditions up to ε with no constraint qualification [8]. For related work, we refer to [4], where a regularity assumption is made.

Theorem 5.3. If the hypothesis (H') is satisfied and if J, G_i, 1 ≤ i ≤ n, and H_j, 1 ≤ j ≤ p, are Gâteaux-differentiable, then there exist an almost regular approximate solution u_ε ∈ K_ε and real numbers λ_i(ε) ≥ 0, 1 ≤ i ≤ n, μ_j(ε), 1 ≤ j ≤ p, such that

λ_i(ε) > 0 for all i ∈ I(ε),

λ_i(ε) = 0 if G_i(u_ε) ≤ 0,

‖J'(u_ε) + Σ_{i∈I(ε)} λ_i(ε) G_i'(u_ε) + Σ_{j=1}^p μ_j(ε) H_j'(u_ε)‖_* ≤ √ε. (5.6)


Remark 5.6. (1) Theorem 5.3 is not necessarily a corollary of Theorem 5.1, but the proof is similar. It remains valid when the hypothesis 'locally Lipschitz' is deleted and replaced by 'J, G_i, 1 ≤ i ≤ n, and H_j, 1 ≤ j ≤ p, are lower semicontinuous functionals'.

(2) Theorem 5.3 can be considered as a corollary of Theorem 5.1 when the following hypothesis is satisfied: J is bounded from below on V, and J, G_i, 1 ≤ i ≤ n, and H_j, 1 ≤ j ≤ p, are continuously differentiable on V. In effect, with these conditions, the hypothesis (H') is verified, and we derive the inequality (5.6) from condition (5.3) by making use of the following property [2]: if a functional f admits a continuous derivative f', then the generalized gradient is ∂f(v) = {f'(v)}.

6. Lagrangian duality up to ε

We now consider the problem (P) with no equality constraints, that is to say:

K = {v ∈ V | G_i(v) ≤ 0, 1 ≤ i ≤ n}.

The ε-feasible set is K_ε = {v ∈ V | G_i(v) ≤ √ε, 1 ≤ i ≤ n}.

With this problem (P), we associate the Lagrangian functional

L(v, λ) = J(v) + Σ_{i=1}^n λ_i G_i(v) for all v ∈ V and λ_i ∈ R, 1 ≤ i ≤ n.

We denote by λ the vector λ = (λ_1, λ_2, ..., λ_n). We define an ε-Lagrangian functional by setting:

L_ε(v, λ; z, μ) = L(v, λ) + √ε ‖v − z‖ − √ε ‖λ − μ‖_n,

where (z, μ) ∈ V × R^n and ‖·‖_n denotes a norm on R^n. In the sequel, we shall choose ‖λ‖_n = Σ_{i=1}^n |λ_i|.

Definition 6.1. Let K* = {λ ∈ R^n | λ_i ≥ 0, 1 ≤ i ≤ n}. We shall say that (ū, λ̄) is a quasi-saddlepoint for the ε-Lagrangian L_ε on V × K* if ū ∈ V, λ̄ ∈ K* and

L_ε(ū, λ; ū, λ̄) ≤ L_ε(ū, λ̄; ū, λ̄) ≤ L_ε(v, λ̄; ū, λ̄) for all v ∈ V and λ ∈ K*,

that is to say:

L(ū, λ) − √ε ‖λ − λ̄‖_n ≤ L(ū, λ̄) ≤ L(v, λ̄) + √ε ‖v − ū‖.

If the hypothesis (H') is satisfied, there exist u_ε ∈ K_ε and λ_i(ε), 1 ≤ i ≤ n, verifying Theorem 5.1. Let us denote by λ(ε) the vector (λ_1(ε), λ_2(ε), ..., λ_n(ε)). Then, we have the following theorem:


Theorem 6.1. If the hypothesis (H') is satisfied, if J'(u_ε; d) exists and J°(u_ε; d) = J'(u_ε; d) for each d ∈ V, if G_i'(u_ε; d) exists and G_i°(u_ε; d) = G_i'(u_ε; d), 1 ≤ i ≤ n, for each d ∈ V (J and the G_i are then said to be quasidifferentiable, as in [9]), and if the functional J(v) + Σ_{i=1}^n λ_i(ε) G_i(v) is ε-semiconvex at u_ε ∈ V, then (u_ε, λ(ε)) is a quasi-saddlepoint for the ε-Lagrangian L_ε on V × K*.

Proof. Let I = {1, 2, ..., n} and I(ε) = {i ∈ I | G_i(u_ε) > 0}. With Theorem 5.1, λ_i(ε) = 0 if i ∉ I(ε). So we have:

Σ_{i=1}^n (λ_i(ε) − λ_i) G_i(u_ε) = Σ_{i∈I(ε)} (λ_i(ε) − λ_i) G_i(u_ε) − Σ_{i∉I(ε)} λ_i G_i(u_ε)

and, since Σ_{i∉I(ε)} λ_i G_i(u_ε) ≤ 0 for all λ ∈ K*, we obtain the following inequality:

Σ_{i=1}^n (λ_i(ε) − λ_i) G_i(u_ε) ≥ Σ_{i∈I(ε)} (λ_i(ε) − λ_i) G_i(u_ε).

By using 0 < G_i(u_ε) ≤ √ε for i ∈ I(ε), we then deduce:

Σ_{i∈I(ε)} (λ_i(ε) − λ_i) G_i(u_ε) ≥ −√ε Σ_{i∈I(ε)} |λ_i − λ_i(ε)|.

We finally obtain:

Σ_{i=1}^n (λ_i(ε) − λ_i) G_i(u_ε) ≥ −√ε ‖λ − λ(ε)‖_n for all λ ∈ K*,

that is,

J(u_ε) + Σ_{i=1}^n λ_i(ε) G_i(u_ε) ≥ L(u_ε, λ) − √ε ‖λ − λ(ε)‖_n for all λ ∈ K*.

This gives the first inequality for a quasi-saddlepoint. We now use the result (5.3), noting that there exist z* ∈ ∂J(u_ε), z_i* ∈ ∂G_i(u_ε), 1 ≤ i ≤ n, and b* ∈ B* such that

z* + Σ_{i=1}^n λ_i(ε) z_i* + √ε b* = 0.

By using the characterization of the generalized gradient given by Clarke [2] and the hypothesis of quasidifferentiability for J and the G_i, we finally obtain:

J'(u_ε; d) + Σ_{i=1}^n λ_i(ε) G_i'(u_ε; d) ≥ ⟨z*, d⟩ + Σ_{i=1}^n λ_i(ε) ⟨z_i*, d⟩ = −√ε ⟨b*, d⟩ ≥ −√ε ‖d‖

for all d ∈ V, where ⟨·,·⟩ denotes the canonical bilinear form on V* × V, V* being the topological dual space of V.

By using the definition of ε-semiconvexity at u_ε, with d = v − u_ε, v ∈ V, we then deduce:


J(v) + Σ_{i=1}^n λ_i(ε) G_i(v) + √ε ‖v − u_ε‖ ≥ J(u_ε) + Σ_{i=1}^n λ_i(ε) G_i(u_ε).

So we obtain the second inequality for a quasi-saddlepoint:

L(v, λ(ε)) + √ε ‖v − u_ε‖ ≥ L(u_ε, λ(ε)) for all v ∈ V

and the proof is achieved.

We also have, trivially, the following result:

Corollary 6.1. If the hypothesis (H') is satisfied and if J and the G_i, 1 ≤ i ≤ n, are convex functionals, then (u_ε, λ(ε)) is a quasi-saddlepoint for the ε-Lagrangian on V × K*.
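A numerical illustration of Corollary 6.1 on a small convex problem (the data, the grids and the tolerance are assumptions of this sketch): for J(v) = v_1 + v_2 and G_1(v) = v_1² + v_2² − 1 on V = R², the optimal solution paired with its multiplier satisfies both quasi-saddlepoint inequalities of Definition 6.1.

```python
import numpy as np

# Convex example data: J(v) = v1 + v2, G1(v) = v1^2 + v2^2 - 1, K* = {lam >= 0}.
eps = 0.05
u_bar = np.array([-1.0, -1.0]) / np.sqrt(2.0)     # solution of (P); here an eps-quasisolution
lam_bar = 1.0 / np.sqrt(2.0)                      # associated multiplier lambda_1(eps)

def L(v, lam):                                    # ordinary Lagrangian L(v, lambda)
    return v[0] + v[1] + lam * (v[0] ** 2 + v[1] ** 2 - 1.0)

# First inequality: L(u, lam) - sqrt(eps)*|lam - lam_bar| <= L(u, lam_bar) for lam in K*.
lams = np.linspace(0.0, 5.0, 501)
first_ok = all(L(u_bar, lam) - np.sqrt(eps) * abs(lam - lam_bar) <= L(u_bar, lam_bar) + 1e-12
               for lam in lams)

# Second inequality: L(u, lam_bar) <= L(v, lam_bar) + sqrt(eps)*||v - u|| for all v in V.
grid = np.linspace(-2.0, 2.0, 81)
second_ok = all(L(u_bar, lam_bar) <= L(np.array([x, y]), lam_bar)
                + np.sqrt(eps) * np.linalg.norm(np.array([x, y]) - u_bar) + 1e-12
                for x in grid for y in grid)

print("quasi-saddlepoint inequalities of Definition 6.1 hold:", first_ok and second_ok)
```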

Theorem 6.2. If (u_ε, λ(ε)) is a quasi-saddlepoint for the ε-Lagrangian on V × K*, then u_ε is such that:

(1) J(u_ε) ≤ J(v) + √ε ‖v − u_ε‖ for all v verifying G_i(v) ≤ G_i(u_ε), 1 ≤ i ≤ n,

(2) G_i(u_ε) ≤ √ε, 1 ≤ i ≤ n,

(3) λ_j(ε) ≠ 0 implies −√ε ≤ G_j(u_ε).

Proof. From the definition of a quasi-saddlepoint, we have:

J(u_ε) + Σ_{i=1}^n λ_i(ε) G_i(u_ε) ≤ J(v) + Σ_{i=1}^n λ_i(ε) G_i(v) + √ε ‖v − u_ε‖

and we then deduce the inequality (1). Now, we use

J(u_ε) + Σ_{i=1}^n λ_i G_i(u_ε) − √ε Σ_{i=1}^n |λ_i − λ_i(ε)| ≤ J(u_ε) + Σ_{i=1}^n λ_i(ε) G_i(u_ε)

for all λ ∈ K*, and, in particular, with λ_k = 1 + λ_k(ε) and λ_i = λ_i(ε) for all i ≠ k, we obtain:

λ_k G_k(u_ε) − √ε ≤ λ_k(ε) G_k(u_ε),

G_k(u_ε) ≤ √ε.

This result can be obtained for all k, 1 ≤ k ≤ n. So we have demonstrated (2). Finally, if λ_j(ε) ≠ 0 for some j, let us choose λ_j such that 0 < λ_j < λ_j(ε) and λ_i = λ_i(ε) for i ≠ j. We obtain:

λ_j G_j(u_ε) − √ε (λ_j(ε) − λ_j) ≤ λ_j(ε) G_j(u_ε)

and the next result:

−√ε ≤ G_j(u_ε).


Remark 6.1. Theorem 6.2 is somewhat analogous to the 'epsilon theorem' of Everett [3].

References

[1] M. Bazaraa and C. Shetty, "Foundations of optimization", in: Lecture Notes in Economics and Mathematical Systems 122 (Springer, Berlin, 1976).

[2] F. Clarke, "A new approach to Lagrange multipliers", Mathematics of Operations Research 1 (1976) 165-174.

[3] H. Everett, "Generalized Lagrange multiplier method for solving problems of optimum allocation of resources", Operations Research 11 (1963) 399-417.

[4] I. Ekeland, "On the variational principle", Journal of Mathematical Analysis and Applications 47 (1974) 324-353.

[5] I. Ekeland and R. Temam, Analyse convexe et problèmes variationnels (Dunod, Paris, 1974).

[6] A. Fiacco and G. McCormick, Nonlinear programming (Wiley, New York, 1968).

[7] P. Loridan, "Solutions approchées de problèmes d'optimisation", Communication au Colloque d'Analyse Numérique, Imbours (1977).

[8] P. Loridan, "Conditions de Kuhn et Tucker à ε près pour des solutions approchées de problèmes d'optimisation avec contraintes", Comptes Rendus de l'Académie des Sciences de Paris 285 (1977) 449-450.

[9] R. Mifflin, "Semismooth and semiconvex functions in constrained optimization", SIAM Journal on Control and Optimization 15 (1977) 959-972.

[10] I. Zang, E. Choo and M. Avriel, "On functions whose stationary points are global minima", Journal of Optimization Theory and Applications 22 (1977) 195-208.