
Journal of Computational and Applied Mathematics 274 (2015) 1–10

Contents lists available at ScienceDirect

Journal of Computational and Applied Mathematics

journal homepage: www.elsevier.com/locate/cam

Optimality and duality for minimax fractional programming with support function under (C, α, ρ, d)-convexity

S.K. Mishra a, K.K. Lai b,c,∗, Vinay Singh a

a Department of Mathematics, Banaras Hindu University, Varanasi-221005, India
b International Business School, Shaanxi Normal University, Xian, China
c Department of Management Sciences, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong

Article history: Received 31 August 2012; Received in revised form 13 June 2014

Keywords: Generalized convexity; Minimax fractional programming; Support function; Sufficiency; Duality

Abstract

Sufficient optimality conditions are established for a class of nondifferentiable generalized minimax fractional programming problems with support functions. Further, two dual models are considered, and weak, strong and strict converse duality theorems are established under the assumptions of (C, α, ρ, d)-convexity. The results presented in this paper generalize several results from the literature to a more general model of the problems as well as to a more general class of generalized convexity.

© 2014 Published by Elsevier B.V.

1. Introduction

Chinchuluun and Pardalos [1] considered optimality conditions and duality for multiobjective programming problems, multiobjective fractional programming problems and multiobjective variational programming problems under the assumptions of (C, α, ρ, d)-convexity. Liang et al. [2] introduced a unified formulation of generalized convexity and obtained some results on optimality conditions and duality theorems for a single objective programming problem. Yuan et al. [3] studied a nondifferentiable minimax fractional programming problem for locally Lipschitz functions under the assumptions of (C, α, ρ, d)-convexity. Yuan et al. [4] considered (C, α, ρ, d)-type-I functions and presented sufficient optimality conditions and duality results for a nondifferentiable multiobjective programming problem for Lipschitz functions. Chinchuluun et al. [5] extended the results of [4] to the multiobjective fractional case.

Long et al. [6] studied a class of nondifferentiable multiobjective fractional programs in which every component of the objective function contains a term involving the support function of a compact convex set, and obtained Kuhn–Tucker necessary and sufficient optimality conditions, duality and saddle point results for weakly efficient solutions of the nondifferentiable multiobjective fractional programming problems. Recently, Long [7] considered a class of nondifferentiable multiobjective fractional programming problems in which the numerator of every component of the objective function contains a term involving the support function of a compact convex set. Long [7] established sufficient optimality conditions and duality results for the problem involving (C, α, ρ, d)-convexity. Kim and Kim [8] established sufficient optimality conditions and duality results for a nondifferentiable generalized fractional programming problem in which the numerator as well as the denominator of every component of the objective function contains a term involving the support function of a compact convex set. Kim and Kim [8] obtained these results under (V, ρ)-invexity assumptions.

∗ Corresponding author at: Department of Management Sciences, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong.
E-mail addresses: [email protected] (S.K. Mishra), [email protected] (K.K. Lai), [email protected] (V. Singh).

http://dx.doi.org/10.1016/j.cam.2014.06.025
0377-0427/© 2014 Published by Elsevier B.V.


In this paper, we consider a class of nondifferentiable generalized minimax fractional programming problems in which the numerator as well as the denominator of the objective function contains a term involving the support function of a compact convex set. We obtain sufficient optimality conditions and duality results for the problem under the assumption of (C, α, ρ, d)-convexity.

2. Notations and preliminaries

We consider the following nondifferentiable generalized minimax fractional programming problem with support function (GMFPS):

$$\text{(GMFPS)}\qquad \min_{x\in\mathbb{R}^n}\ \sup_{y\in Y}\ \frac{F(x,y)}{G(x,y)}\;=\;\min_{x\in\mathbb{R}^n}\ \sup_{y\in Y}\ \frac{f(x,y)+s(x|C)}{g(x,y)-s(x|D)}$$

$$\text{subject to}\quad h_j(x)+s(x|E_j)\le 0,\quad j=1,\dots,p,\tag{1}$$

where $Y$ is a compact subset of $\mathbb{R}^m$, $f,g:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$ and $h_j:\mathbb{R}^n\to\mathbb{R}$ $(j=1,\dots,p)$ are continuously differentiable functions, $C$, $D$ and $E_j$ $(j=1,\dots,p)$ are compact convex sets in $\mathbb{R}^n$, $s(x|C)$, $s(x|D)$ and $s(x|E_j)$ $(j=1,\dots,p)$ designate the support functions of these compact sets, and $F(x,y)\ge 0$ and $G(x,y)>0$ for all feasible $x$. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and $\mathbb{R}^n_+$ its nonnegative orthant. Let $X$ be an open subset of $\mathbb{R}^n$. Assume that $\alpha:X\times X\to\mathbb{R}_+\setminus\{0\}$, $\rho\in\mathbb{R}$, and $d:X\times X\to\mathbb{R}_+$ satisfies $d(x,x_0)=0\Leftrightarrow x=x_0$. Let $C:X\times X\times\mathbb{R}^n\to\mathbb{R}$ be a function which satisfies $C_{(x,x_0)}(0)=0$ for any $(x,x_0)\in X\times X$.

Let $S=\{x\in\mathbb{R}^n : h_j(x)+s(x|E_j)\le 0,\ j=1,\dots,p\}$ denote the set of all feasible solutions of (GMFPS). For each $(x,y)\in\mathbb{R}^n\times\mathbb{R}^m$, we define
$$\varphi(x,y)=\frac{f(x,y)+s(x|C)}{g(x,y)-s(x|D)},$$
so that for each $(x,y)\in S\times Y$, $f(x,y)+s(x|C)\ge 0$ and $g(x,y)-s(x|D)>0$.
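For intuition only, the minimax value of (GMFPS) can be approximated numerically on a toy instance. The data below are our own hypothetical choices, not from the paper: $n=1$, $Y=[0,1]$, $f(x,y)=x^2+y$, $g(x,y)=2+y$, and $C=D=[-1,1]$, so that $s(x|C)=s(x|D)=|x|$.

```python
import numpy as np

# Toy instance of (GMFPS), purely illustrative; all data are assumptions:
#   f(x, y) = x^2 + y,  g(x, y) = 2 + y,  Y = [0, 1],
#   C = D = [-1, 1], hence s(x|C) = s(x|D) = |x|.
Y_GRID = np.linspace(0.0, 1.0, 101)

def inner_sup(x):
    """Approximate sup_{y in Y} (f(x,y) + s(x|C)) / (g(x,y) - s(x|D))."""
    num = x**2 + Y_GRID + abs(x)   # f(x, y) + s(x|C)
    den = 2.0 + Y_GRID - abs(x)    # g(x, y) - s(x|D), positive for |x| <= 1
    return float(np.max(num / den))

# Outer minimization over a grid of x values.
xs = np.linspace(-1.0, 1.0, 201)
x_best = xs[int(np.argmin([inner_sup(x) for x in xs]))]
```

For this instance the ratio grows with $|x|$, so the grid minimizer is $x=0$, where the inner supremum is attained at $y=1$ with value $1/3$.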

Let us define the following sets for every $x\in S$:
$$J(x)=\{\,j\in J : h_j(x)+s(x|E_j)=0\,\},$$
$$Y(x)=\left\{\,y\in Y : \frac{f(x,y)+s(x|C)}{g(x,y)-s(x|D)}=\sup_{z\in Y}\frac{f(x,z)+s(x|C)}{g(x,z)-s(x|D)}\,\right\},$$
$$K(x)=\Big\{\,(s,t,y)\in\mathbb{N}\times\mathbb{R}^s_+\times\mathbb{R}^{ms} : 1\le s\le n+1,\ t=(t_1,\dots,t_s)\in\mathbb{R}^s_+\ \text{with}\ \sum_{i=1}^{s}t_i=1,\ y=(y_1,\dots,y_s)\ \text{and}\ y_i\in Y(x),\ i=1,2,\dots,s\,\Big\}.$$

Since $f$ and $g$ are continuously differentiable and $Y$ is a compact subset of $\mathbb{R}^m$, it follows that for each $x^*\in S$, $Y(x^*)\neq\emptyset$. Thus for any $y_i\in Y(x^*)$, we have a positive constant
$$k_0=\varphi(x^*,y_i)=\frac{f(x^*,y_i)+s(x^*|C)}{g(x^*,y_i)-s(x^*|D)}.$$

The problem (GMFPS) is more general than the problems considered by Kim and Kim [8] as well as by Long [7].

Definition 2.1. A function $C:X\times X\times\mathbb{R}^n\to\mathbb{R}$ is said to be convex on $\mathbb{R}^n$ iff for any fixed $(x,x_0)\in X\times X$ and for any $y_1,y_2\in\mathbb{R}^n$, one has
$$C_{(x,x_0)}\big(\lambda y_1+(1-\lambda)y_2\big)\le\lambda C_{(x,x_0)}(y_1)+(1-\lambda)C_{(x,x_0)}(y_2)\quad\text{for all }\lambda\in(0,1).$$

Definition 2.2 ([9]). A differentiable function $h:X\to\mathbb{R}$ is said to be $(C,\alpha,\rho,d)$-convex at $x_0\in X$ iff for any $x\in X$,
$$\frac{h(x)-h(x_0)}{\alpha(x,x_0)}\ge C_{(x,x_0)}\big(\nabla h(x_0)\big)+\rho\,\frac{d(x,x_0)}{\alpha(x,x_0)}.$$
The function $h$ is said to be $(C,\alpha,\rho,d)$-convex on $X$ iff it is $(C,\alpha,\rho,d)$-convex at every point in $X$. In particular, $h$ is said to be strongly $(C,\alpha,\rho,d)$-convex on $X$ iff $\rho>0$.
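For orientation (our remark, not the paper's), Definition 2.2 contains ordinary convexity as a special case: taking $C_{(x,x_0)}(y)=\langle y,\,x-x_0\rangle$ (which is convex in $y$ and vanishes at $0$), $\alpha\equiv 1$ and $\rho=0$, the defining inequality reduces to the classical gradient inequality for differentiable convex functions:

```latex
% Special case of Definition 2.2 with
%   C_{(x,x_0)}(y) = \langle y, x - x_0 \rangle, \quad \alpha \equiv 1, \quad \rho = 0:
\frac{h(x)-h(x_0)}{\alpha(x,x_0)} \;\ge\; C_{(x,x_0)}\bigl(\nabla h(x_0)\bigr)
  + \rho\,\frac{d(x,x_0)}{\alpha(x,x_0)}
\quad\Longrightarrow\quad
h(x) - h(x_0) \;\ge\; \langle \nabla h(x_0),\, x - x_0 \rangle .
```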

Remark 2.1. If the function $C$ is sublinear with respect to the third argument, then $(C,\alpha,\rho,d)$-convexity is the same as the $(F,\alpha,\rho,d)$-convexity introduced by Liang et al. [10].

Let $K$ be a compact convex set in $\mathbb{R}^n$. The support function of $K$ is denoted by $s(x|K)$ and defined by
$$s(x|K):=\max\{\,x^{T}y : y\in K\,\}.$$
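As a hedged numerical illustration (ours, not the paper's): for a polytope $K=\mathrm{conv}\{k_1,\dots,k_m\}$, a linear function attains its maximum over $K$ at a vertex, so $s(x|K)=\max_i x^{T}k_i$. The vertex list below is a hypothetical example set.

```python
import numpy as np

# Support function of a polytope K = conv{k_1, ..., k_m}:
#   s(x|K) = max_i <x, k_i>,
# since a linear function attains its maximum over K at a vertex.
VERTICES = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [-1.0, -1.0]])  # hypothetical example polytope

def support(x, vertices=VERTICES):
    """Evaluate s(x|K) for the polytope spanned by `vertices`."""
    return float(np.max(vertices @ np.asarray(x, dtype=float)))
```

For example, `support([2.0, 1.0])` picks the vertex $(1,0)$ and returns $2.0$.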


The support function $s(x|K)$, being convex and everywhere finite, has a subdifferential in the sense of convex analysis [3]. The subdifferential of $s(x|K)$ is given by
$$\partial s(x|K)=\{\,z\in K : z^{T}x=s(x|K)\,\}.$$
For any set $S\subseteq\mathbb{R}^n$, the normal cone to $S$ at a point $x\in S$ is denoted by $N_S(x)$ and defined by
$$N_S(x)=\{\,y\in\mathbb{R}^n : y^{T}(z-x)\le 0,\ \forall z\in S\,\}.$$
It is readily verified that for a compact convex set $K\subseteq\mathbb{R}^n$, $y\in N_K(x)$ if and only if $s(y|K)=x^{T}y$, or equivalently, $x$ is in the subdifferential of $s(\cdot|K)$ at $y$.
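A hedged sketch of the subdifferential formula in the polytope case (our illustration, with a hypothetical vertex set): $\partial s(x|K)$ is the convex hull of the vertices attaining $\max_{z\in K}z^{T}x$, and each attaining vertex $z$ satisfies $z^{T}x=s(x|K)$.

```python
import numpy as np

# For a polytope K = conv{k_1, ..., k_m}, the subdifferential of the
# support function, ∂s(x|K) = {z in K : z^T x = s(x|K)}, is the convex
# hull of the maximizing vertices.  We return those vertices.
VERTICES = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [-1.0, -1.0]])  # hypothetical example polytope

def subdiff_vertices(x, vertices=VERTICES, tol=1e-12):
    """Vertices of K attaining max <x, z>; their hull is ∂s(x|K)."""
    vals = vertices @ np.asarray(x, dtype=float)
    return vertices[vals >= vals.max() - tol]
```

At $x=(1,1)$ two vertices tie, so the subdifferential is a segment rather than a single gradient, which is where the nondifferentiability in (GMFPS) comes from.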

Theorem 2.1 (Necessary Optimality Conditions). Let $x^*$ be an optimal solution of (GMFPS) satisfying $\langle w,x^*\rangle>0$, $\langle v,x^*\rangle>0$, and suppose that $\nabla\big(h_j(x^*)+\langle u_j,x^*\rangle\big)$, $j\in J(x^*)$, are linearly independent. Then there exist $(s,t^*,y)\in K(x^*)$, $k_0\in\mathbb{R}_+$, $w,v\in\mathbb{R}^n$, $u_j\in\mathbb{R}^n$ and $\mu^*\in\mathbb{R}^p_+$ such that
$$\sum_{i=1}^{s}t_i^*\Big\{\nabla\big(f(x^*,y_i)+\langle w,x^*\rangle\big)-k_0\nabla\big(g(x^*,y_i)-\langle v,x^*\rangle\big)\Big\}+\sum_{j=1}^{p}\mu_j^*\nabla\big(h_j(x^*)+\langle u_j,x^*\rangle\big)=0,\tag{2}$$
$$f(x^*,y_i)+\langle w,x^*\rangle-k_0\big(g(x^*,y_i)-\langle v,x^*\rangle\big)=0,\quad i=1,\dots,s,\tag{3}$$
$$\sum_{j=1}^{p}\mu_j^*\big(h_j(x^*)+\langle u_j,x^*\rangle\big)=0,\tag{4}$$
$$\langle w,x^*\rangle=s(x^*|C),\tag{5}$$
$$\langle v,x^*\rangle=s(x^*|D),\tag{6}$$
$$\langle u_j,x^*\rangle=s(x^*|E_j),\quad j=1,\dots,p,\tag{7}$$
$$t_i^*\ge 0,\ i=1,\dots,s,\qquad\sum_{i=1}^{s}t_i^*=1.$$

3. Sufficient optimality conditions

In this section, we establish sufficient Karush–Kuhn–Tucker optimality conditions for (GMFPS).

Theorem 3.1. Let $x^*$ be a feasible solution of (GMFPS) and let there exist a positive integer $s$, $1\le s\le n+1$, $t^*\in\mathbb{R}^s_+$, $y_i\in Y(x^*)$, $i=1,\dots,s$, $k_0\in\mathbb{R}_+$, $w,v,u_j\in\mathbb{R}^n$ and $\mu^*\in\mathbb{R}^p_+$ satisfying relations (2)–(7). Suppose that $f(\cdot,y_i)+\langle w,\cdot\rangle$ is $(C,\alpha,\rho_i,d_i)$-convex, $-g(\cdot,y_i)+\langle v,\cdot\rangle$ is $(C,\alpha,\bar\rho_i,\bar d_i)$-convex and $h_j(\cdot)+\langle u_j,\cdot\rangle$ is $(C,\beta_j,\eta_j,\delta_j)$-convex at $x^*$, and
$$\sum_{i=1}^{s}t_i^*\left\{\rho_i\,\frac{d_i(x,x^*)}{\alpha(x,x^*)}+k_0\bar\rho_i\,\frac{\bar d_i(x,x^*)}{\alpha(x,x^*)}\right\}+\sum_{j=1}^{p}\mu_j^*\eta_j\,\frac{\delta_j(x,x^*)}{\beta_j(x,x^*)}\ge 0,\tag{8}$$
then $x^*$ is an optimal solution of (GMFPS).

Proof. Suppose to the contrary that $x^*$ is not an optimal solution of (GMFPS). Then there exists $x\in S$ such that
$$\sup_{y\in Y}\frac{f(x,y)+\langle w,x\rangle}{g(x,y)-\langle v,x\rangle}<\sup_{y\in Y}\frac{f(x^*,y)+\langle w,x^*\rangle}{g(x^*,y)-\langle v,x^*\rangle}.$$
We have
$$\sup_{y\in Y}\frac{f(x^*,y)+\langle w,x^*\rangle}{g(x^*,y)-\langle v,x^*\rangle}=\frac{f(x^*,y_i)+\langle w,x^*\rangle}{g(x^*,y_i)-\langle v,x^*\rangle}=k_0,$$
for $y_i\in Y(x^*)$, $i=1,\dots,s$, and
$$\frac{f(x,y_i)+\langle w,x\rangle}{g(x,y_i)-\langle v,x\rangle}\le\sup_{y\in Y}\frac{f(x,y)+\langle w,x\rangle}{g(x,y)-\langle v,x\rangle}.$$
Therefore, we have
$$\frac{f(x,y_i)+\langle w,x\rangle}{g(x,y_i)-\langle v,x\rangle}<k_0,\quad i=1,\dots,s.$$
It follows that
$$f(x,y_i)+\langle w,x\rangle-k_0\big(g(x,y_i)-\langle v,x\rangle\big)<0.\tag{9}$$
From (3)–(7) and (9), we get
$$\phi_0(x)=\sum_{i=1}^{s}t_i^*\big(f(x,y_i)+\langle w,x\rangle-k_0(g(x,y_i)-\langle v,x\rangle)\big)<0=\sum_{i=1}^{s}t_i^*\big(f(x^*,y_i)+\langle w,x^*\rangle-k_0(g(x^*,y_i)-\langle v,x^*\rangle)\big)=\phi_0(x^*).$$
That is, $\phi_0(x)<\phi_0(x^*)$. Since $\alpha(x,x^*)>0$, the above inequality yields
$$\frac{\phi_0(x)-\phi_0(x^*)}{\alpha(x,x^*)}<0.\tag{10}$$

We use the $(C,\alpha,\rho_i,d_i)$-convexity of $f(\cdot,y_i)+\langle w,\cdot\rangle$ and the $(C,\alpha,\bar\rho_i,\bar d_i)$-convexity of $-g(\cdot,y_i)+\langle v,\cdot\rangle$ at $x^*$ for $i=1,\dots,s$, i.e.
$$\frac{f(x,y_i)+\langle w,x\rangle-f(x^*,y_i)-\langle w,x^*\rangle}{\alpha(x,x^*)}\ge C_{(x,x^*)}\big(\nabla(f(x^*,y_i)+\langle w,x^*\rangle)\big)+\rho_i\,\frac{d_i(x,x^*)}{\alpha(x,x^*)},\tag{11}$$
$$\frac{-g(x,y_i)+\langle v,x\rangle+g(x^*,y_i)-\langle v,x^*\rangle}{\alpha(x,x^*)}\ge C_{(x,x^*)}\big(\nabla(-g(x^*,y_i)+\langle v,x^*\rangle)\big)+\bar\rho_i\,\frac{\bar d_i(x,x^*)}{\alpha(x,x^*)},\tag{12}$$
where $i=1,2,\dots,s$. Multiplying (11) by $t_i^*$, (12) by $t_i^*k_0$ and then summing up these inequalities, we have by (10)
$$\sum_{i=1}^{s}t_i^*C_{(x,x^*)}\big(\nabla(f(x^*,y_i)+\langle w,x^*\rangle)\big)+\sum_{i=1}^{s}t_i^*k_0C_{(x,x^*)}\big(\nabla(-g(x^*,y_i)+\langle v,x^*\rangle)\big)+\sum_{i=1}^{s}t_i^*\left\{\rho_i\,\frac{d_i(x,x^*)}{\alpha(x,x^*)}+k_0\bar\rho_i\,\frac{\bar d_i(x,x^*)}{\alpha(x,x^*)}\right\}\le\frac{\phi_0(x)-\phi_0(x^*)}{\alpha(x,x^*)}<0.\tag{13}$$

On the other hand, by the $(C,\beta_j,\eta_j,\delta_j)$-convexity of $h_j(\cdot)+\langle u_j,\cdot\rangle$, for $j=1,\dots,p$, we have
$$\frac{h_j(x)+\langle u_j,x\rangle-\big(h_j(x^*)+\langle u_j,x^*\rangle\big)}{\beta_j(x,x^*)}\ge C_{(x,x^*)}\big(\nabla(h_j(x^*)+\langle u_j,x^*\rangle)\big)+\eta_j\,\frac{\delta_j(x,x^*)}{\beta_j(x,x^*)}.$$
Since $\mu^*\ge 0$, we have
$$\sum_{j=1}^{p}\mu_j^*\,\frac{h_j(x)+\langle u_j,x\rangle-\big(h_j(x^*)+\langle u_j,x^*\rangle\big)}{\beta_j(x,x^*)}\ge\sum_{j=1}^{p}\mu_j^*C_{(x,x^*)}\big(\nabla(h_j(x^*)+\langle u_j,x^*\rangle)\big)+\sum_{j=1}^{p}\mu_j^*\eta_j\,\frac{\delta_j(x,x^*)}{\beta_j(x,x^*)}.\tag{14}$$
The feasibility of $x$, $\beta_j(x,x^*)>0$, and (4) imply that
$$\sum_{j=1}^{p}\mu_j^*\,\frac{h_j(x)+\langle u_j,x\rangle-\big(h_j(x^*)+\langle u_j,x^*\rangle\big)}{\beta_j(x,x^*)}\le 0.$$
Then (14) leads to
$$\sum_{j=1}^{p}\mu_j^*C_{(x,x^*)}\big(\nabla(h_j(x^*)+\langle u_j,x^*\rangle)\big)+\sum_{j=1}^{p}\mu_j^*\eta_j\,\frac{\delta_j(x,x^*)}{\beta_j(x,x^*)}\le 0.\tag{15}$$

Adding (13) and (15) and using condition (8), we have
$$\sum_{i=1}^{s}t_i^*C_{(x,x^*)}\big(\nabla(f(x^*,y_i)+\langle w,x^*\rangle)\big)+\sum_{i=1}^{s}t_i^*k_0C_{(x,x^*)}\big(\nabla(-g(x^*,y_i)+\langle v,x^*\rangle)\big)+\sum_{j=1}^{p}\mu_j^*C_{(x,x^*)}\big(\nabla(h_j(x^*)+\langle u_j,x^*\rangle)\big)<0.$$
Using the convexity of $C$, we conclude that
$$C_{(x,x^*)}\left((1/\gamma)\left\{\sum_{i=1}^{s}t_i^*\nabla\big(f(x^*,y_i)+\langle w,x^*\rangle\big)+\sum_{i=1}^{s}t_i^*k_0\nabla\big(-g(x^*,y_i)+\langle v,x^*\rangle\big)+\sum_{j=1}^{p}\mu_j^*\nabla\big(h_j(x^*)+\langle u_j,x^*\rangle\big)\right\}\right)<0,$$
where $\gamma=1+k_0+\sum_{j=1}^{p}\mu_j^*$. Since $C_{(x,x^*)}(0)=0$, this contradicts (2). Hence, the proof is complete. $\square$

4. First duality model

In this section, we consider the following dual to (GMFPS):
$$\text{(DI)}\qquad\max_{(s,t,y)\in K(z)}\ \sup_{(z,\mu,k,v,w,u)\in H_1(s,t,y)}k,$$
where $H_1(s,t,y)$ denotes the set of all $(z,\mu,k,v,w,u)\in\mathbb{R}^n\times\mathbb{R}^p_+\times\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{np}$ satisfying
$$\sum_{i=1}^{s}t_i\big\{\nabla\big(f(z,y_i)+\langle w,z\rangle\big)-k\nabla\big(g(z,y_i)-\langle v,z\rangle\big)\big\}+\sum_{j=1}^{p}\mu_j\nabla\big(h_j(z)+\langle u_j,z\rangle\big)=0,\tag{16}$$
$$\sum_{i=1}^{s}t_i\big\{\big(f(z,y_i)+\langle w,z\rangle\big)-k\big(g(z,y_i)-\langle v,z\rangle\big)\big\}\ge 0,\tag{17}$$
$$\sum_{j=1}^{p}\mu_j\big(h_j(z)+\langle u_j,z\rangle\big)\ge 0,\tag{18}$$
$$(s,t,y)\in K(z).\tag{19}$$
For a triplet $(s,t,y)\in K(z)$, if the set $H_1(s,t,y)=\emptyset$, then we define the supremum over it to be $-\infty$. In this section, we denote
$$\phi_1(\cdot)=\sum_{i=1}^{s}t_i\big\{\big(f(\cdot,y_i)+\langle w,\cdot\rangle\big)-k\big(g(\cdot,y_i)-\langle v,\cdot\rangle\big)\big\}.$$

Theorem 4.1 (Weak Duality). Let $x$ and $(z,\mu,k,v,w,u,s,t,y)$ be feasible solutions of (GMFPS) and (DI), respectively. Suppose that $f(\cdot,y_i)+\langle w,\cdot\rangle$ and $-g(\cdot,y_i)+\langle v,\cdot\rangle$ for $i=1,\dots,s$ are, respectively, $(C,\alpha,\rho_i,d_i)$-convex and $(C,\alpha,\bar\rho_i,\bar d_i)$-convex at $z$, that $h_j(\cdot)+\langle u_j,\cdot\rangle$ for $j=1,\dots,p$ is $(C,\beta_j,\eta_j,\delta_j)$-convex at $z$, and that the inequality
$$\sum_{i=1}^{s}t_i\left\{\rho_i\,\frac{d_i(x,z)}{\alpha(x,z)}+k\bar\rho_i\,\frac{\bar d_i(x,z)}{\alpha(x,z)}\right\}+\sum_{j=1}^{p}\mu_j\eta_j\,\frac{\delta_j(x,z)}{\beta_j(x,z)}\ge 0\tag{20}$$
holds. Then
$$\sup_{y\in Y}\frac{f(x,y)+\langle w,x\rangle}{g(x,y)-\langle v,x\rangle}\ge k.$$

Proof. Suppose, contrary to the result, that
$$\sup_{y\in Y}\frac{f(x,y)+\langle w,x\rangle}{g(x,y)-\langle v,x\rangle}<k.$$
Then we get
$$f(x,y_i)+\langle w,x\rangle-k\big(g(x,y_i)-\langle v,x\rangle\big)<0,\quad\text{for all }y_i\in Y.$$
It follows from $t_i\ge 0$, $i=1,\dots,s$, with $\sum_{i=1}^{s}t_i=1$, that
$$t_i\big[f(x,y_i)+\langle w,x\rangle-k\big(g(x,y_i)-\langle v,x\rangle\big)\big]\le 0,\tag{21}$$
with at least one strict inequality because $t=(t_1,\dots,t_s)\neq 0$.

Then, we have
$$\phi_1(x)=\sum_{i=1}^{s}t_i\big\{\big(f(x,y_i)+\langle w,x\rangle\big)-k\big(g(x,y_i)-\langle v,x\rangle\big)\big\}<0\le\sum_{i=1}^{s}t_i\big\{\big(f(z,y_i)+\langle w,z\rangle\big)-k\big(g(z,y_i)-\langle v,z\rangle\big)\big\}=\phi_1(z).$$
That is, $\phi_1(x)<\phi_1(z)$. Since $\alpha(x,z)>0$, the above inequality yields
$$\frac{\phi_1(x)-\phi_1(z)}{\alpha(x,z)}<0.$$
Similar to the proof of Theorem 3.1, we have
$$\sum_{i=1}^{s}t_iC_{(x,z)}\big(\nabla(f(z,y_i)+\langle w,z\rangle)\big)+\sum_{i=1}^{s}t_ikC_{(x,z)}\big(\nabla(-g(z,y_i)+\langle v,z\rangle)\big)+\sum_{i=1}^{s}t_i\left\{\rho_i\,\frac{d_i(x,z)}{\alpha(x,z)}+k\bar\rho_i\,\frac{\bar d_i(x,z)}{\alpha(x,z)}\right\}<0\tag{22}$$

and
$$\sum_{j=1}^{p}\mu_j\,\frac{h_j(x)+\langle u_j,x\rangle-\big(h_j(z)+\langle u_j,z\rangle\big)}{\beta_j(x,z)}\ge\sum_{j=1}^{p}\mu_jC_{(x,z)}\big(\nabla(h_j(z)+\langle u_j,z\rangle)\big)+\sum_{j=1}^{p}\mu_j\eta_j\,\frac{\delta_j(x,z)}{\beta_j(x,z)}.\tag{23}$$
Utilizing the feasibility of $x$ for (GMFPS) and (18), we get
$$\sum_{j=1}^{p}\mu_j\big(h_j(x)+\langle u_j,x\rangle\big)\le 0\le\sum_{j=1}^{p}\mu_j\big(h_j(z)+\langle u_j,z\rangle\big).\tag{24}$$
Therefore, from (23), (24) and $\beta_j(x,z)>0$, we obtain
$$\sum_{j=1}^{p}\mu_jC_{(x,z)}\big(\nabla(h_j(z)+\langle u_j,z\rangle)\big)+\sum_{j=1}^{p}\mu_j\eta_j\,\frac{\delta_j(x,z)}{\beta_j(x,z)}\le 0.\tag{25}$$

Adding (22) and (25) and using condition (20), we have
$$\sum_{i=1}^{s}t_iC_{(x,z)}\big(\nabla(f(z,y_i)+\langle w,z\rangle)\big)+\sum_{i=1}^{s}t_ikC_{(x,z)}\big(\nabla(-g(z,y_i)+\langle v,z\rangle)\big)+\sum_{j=1}^{p}\mu_jC_{(x,z)}\big(\nabla(h_j(z)+\langle u_j,z\rangle)\big)<0.$$
Using the convexity of $C$, we conclude that
$$C_{(x,z)}\left((1/\gamma)\left\{\sum_{i=1}^{s}t_i\nabla\big(f(z,y_i)+\langle w,z\rangle\big)+\sum_{i=1}^{s}t_ik\nabla\big(-g(z,y_i)+\langle v,z\rangle\big)+\sum_{j=1}^{p}\mu_j\nabla\big(h_j(z)+\langle u_j,z\rangle\big)\right\}\right)<0,$$
where $\gamma=1+k+\sum_{j=1}^{p}\mu_j$. Thus, we have a contradiction to (16). Hence, the proof is complete. $\square$

Theorem 4.2 (Strong Duality). Assume that $x^*$ is an optimal solution of (GMFPS) and that $\nabla\big(h_j(x^*)+\langle u_j,x^*\rangle\big)$, $j\in J(x^*)$, are linearly independent. Then there exist $(\bar s,\bar t,\bar y^*)\in K(x^*)$ and $(x^*,\bar\mu,\bar k,\bar v,\bar w,\bar u)\in H_1(\bar s,\bar t,\bar y^*)$ such that $(x^*,\bar\mu,\bar k,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ is feasible for (DI). Further, if weak duality (Theorem 4.1) holds for all feasible solutions $(z,\mu,k,v,w,u,s,t,y)$ of (DI), then $(x^*,\bar\mu,\bar k,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ is optimal for (DI), and the two problems have the same optimal values.


Proof. By Theorem 2.1, there exist $(\bar s,\bar t,\bar y^*)\in K(x^*)$ and $(x^*,\bar\mu,\bar k,\bar v,\bar w,\bar u)\in H_1(\bar s,\bar t,\bar y^*)$ such that $(x^*,\bar\mu,\bar k,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ is feasible for (DI) and
$$\bar k=\frac{f(x^*,\bar y_i^*)+\langle\bar w,x^*\rangle}{g(x^*,\bar y_i^*)-\langle\bar v,x^*\rangle}.$$
Since (GMFPS) and (DI) have the same objective values, the optimality of this feasible solution follows from Theorem 4.1. $\square$

Theorem 4.3 (Strict Converse Duality). Let $x^*$ and $(\bar z,\bar\mu,\bar k,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ be optimal solutions of (GMFPS) and (DI), respectively. Suppose that $f(\cdot,\bar y_i^*)+\langle\bar w,\cdot\rangle$ and $-g(\cdot,\bar y_i^*)+\langle\bar v,\cdot\rangle$ for $i=1,\dots,\bar s$ are, respectively, $(C,\alpha,\rho_i,d_i)$-convex and $(C,\alpha,\bar\rho_i,\bar d_i)$-convex at $\bar z$ for all $(\bar s,\bar t,\bar y^*)\in K(x^*)$ and $(\bar z,\bar\mu,\bar k,\bar v,\bar w,\bar u)\in H_1(\bar s,\bar t,\bar y^*)$. Let $h_j(\cdot)+\langle\bar u_j,\cdot\rangle$ for $j=1,\dots,p$ be $(C,\beta_j,\eta_j,\delta_j)$-convex at $\bar z$, let the inequality
$$\sum_{i=1}^{\bar s}\bar t_i\left\{\rho_i\,\frac{d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\bar k\bar\rho_i\,\frac{\bar d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}\right\}+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)}>0\tag{26}$$
hold, and let $\nabla\big(h_j(x^*)+\langle\bar u_j,x^*\rangle\big)$, $j\in J(x^*)$, be linearly independent. Then $x^*=\bar z$; that is, $\bar z$ is optimal for (GMFPS) and
$$\sup_{y\in Y}\frac{f(\bar z,y)+\langle\bar w,\bar z\rangle}{g(\bar z,y)-\langle\bar v,\bar z\rangle}=\bar k.$$

Proof. Suppose to the contrary that $x^*\neq\bar z$. From Theorem 4.2, we know that
$$\sup_{y\in Y}\frac{f(x^*,y)+\langle\bar w,x^*\rangle}{g(x^*,y)-\langle\bar v,x^*\rangle}=\bar k.$$
Similar to the proof of Theorem 4.1, we have
$$\frac{\phi_1(x^*)-\phi_1(\bar z)}{\alpha(x^*,\bar z)}\ge\sum_{i=1}^{\bar s}\bar t_iC_{(x^*,\bar z)}\big(\nabla(f(\bar z,\bar y_i^*)+\langle\bar w,\bar z\rangle)\big)+\sum_{i=1}^{\bar s}\bar t_i\bar kC_{(x^*,\bar z)}\big(\nabla(-g(\bar z,\bar y_i^*)+\langle\bar v,\bar z\rangle)\big)+\sum_{i=1}^{\bar s}\bar t_i\left\{\rho_i\,\frac{d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\bar k\bar\rho_i\,\frac{\bar d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}\right\},\tag{27}$$
and
$$\sum_{j=1}^{p}\bar\mu_j\,\frac{h_j(x^*)+\langle\bar u_j,x^*\rangle-\big(h_j(\bar z)+\langle\bar u_j,\bar z\rangle\big)}{\beta_j(x^*,\bar z)}\ge\sum_{j=1}^{p}\bar\mu_jC_{(x^*,\bar z)}\big(\nabla(h_j(\bar z)+\langle\bar u_j,\bar z\rangle)\big)+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)}.\tag{28}$$
By both the feasibility of $x^*$ and (18), we have
$$\sum_{j=1}^{p}\bar\mu_j\big(h_j(x^*)+\langle\bar u_j,x^*\rangle\big)\le 0\le\sum_{j=1}^{p}\bar\mu_j\big(h_j(\bar z)+\langle\bar u_j,\bar z\rangle\big).$$
Thus, from (28) and $\beta_j(x^*,\bar z)>0$, we obtain
$$\sum_{j=1}^{p}\bar\mu_jC_{(x^*,\bar z)}\big(\nabla(h_j(\bar z)+\langle\bar u_j,\bar z\rangle)\big)+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)}\le 0.\tag{29}$$
The inequalities (27) and (29) together yield
$$\frac{\phi_1(x^*)-\phi_1(\bar z)}{\alpha(x^*,\bar z)}\ge\sum_{i=1}^{\bar s}\bar t_iC_{(x^*,\bar z)}\big(\nabla(f(\bar z,\bar y_i^*)+\langle\bar w,\bar z\rangle)\big)+\sum_{i=1}^{\bar s}\bar t_i\bar kC_{(x^*,\bar z)}\big(\nabla(-g(\bar z,\bar y_i^*)+\langle\bar v,\bar z\rangle)\big)+\sum_{j=1}^{p}\bar\mu_jC_{(x^*,\bar z)}\big(\nabla(h_j(\bar z)+\langle\bar u_j,\bar z\rangle)\big)+\sum_{i=1}^{\bar s}\bar t_i\rho_i\,\frac{d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\sum_{i=1}^{\bar s}\bar t_i\bar k\bar\rho_i\,\frac{\bar d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)}.\tag{30}$$
Using the convexity of $C$, we conclude that
$$\frac{\phi_1(x^*)-\phi_1(\bar z)}{\alpha(x^*,\bar z)}\ge C_{(x^*,\bar z)}\left((1/\gamma)\left\{\sum_{i=1}^{\bar s}\bar t_i\nabla\big(f(\bar z,\bar y_i^*)+\langle\bar w,\bar z\rangle\big)+\sum_{i=1}^{\bar s}\bar t_i\bar k\nabla\big(-(g(\bar z,\bar y_i^*)-\langle\bar v,\bar z\rangle)\big)+\sum_{j=1}^{p}\bar\mu_j\nabla\big(h_j(\bar z)+\langle\bar u_j,\bar z\rangle\big)\right\}\right)+\sum_{i=1}^{\bar s}\bar t_i\left\{\rho_i\,\frac{d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\bar k\bar\rho_i\,\frac{\bar d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}\right\}+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)},$$
where $\gamma=1+\bar k+\sum_{j=1}^{p}\bar\mu_j$. From (16), (26) and the above inequality, we have
$$\frac{\phi_1(x^*)-\phi_1(\bar z)}{\alpha(x^*,\bar z)}>0.\tag{31}$$
Hence, from (31) and $\alpha(x^*,\bar z)>0$, we get $\phi_1(x^*)-\phi_1(\bar z)>0$. Now, by (17), we have
$$\sum_{i=1}^{\bar s}\bar t_i\big\{f(x^*,\bar y_i^*)+\langle\bar w,x^*\rangle-\bar k\big(g(x^*,\bar y_i^*)-\langle\bar v,x^*\rangle\big)\big\}>\sum_{i=1}^{\bar s}\bar t_i\big\{f(\bar z,\bar y_i^*)+\langle\bar w,\bar z\rangle-\bar k\big(g(\bar z,\bar y_i^*)-\langle\bar v,\bar z\rangle\big)\big\}\ge 0.$$
Therefore, there exists a certain $i_0$ such that
$$f(x^*,\bar y_{i_0}^*)+\langle\bar w,x^*\rangle-\bar k\big(g(x^*,\bar y_{i_0}^*)-\langle\bar v,x^*\rangle\big)>0.$$
It follows that
$$\sup_{y\in Y}\frac{f(x^*,y)+\langle\bar w,x^*\rangle}{g(x^*,y)-\langle\bar v,x^*\rangle}\ge\frac{f(x^*,\bar y_{i_0}^*)+\langle\bar w,x^*\rangle}{g(x^*,\bar y_{i_0}^*)-\langle\bar v,x^*\rangle}>\bar k.$$
This is a contradiction, and the proof is complete. $\square$

5. Second duality model

In this section, we consider the following form of Theorem 2.1.

Theorem 5.1. Let $x$ be an optimal solution of (GMFPS) and let $\nabla\big(h_j(x)+\langle u_j,x\rangle\big)$, $j\in J(x)$, be linearly independent. Then there exist $(s,t,y)\in K(x)$ and $\mu\in\mathbb{R}^p_+$ such that
$$\sum_{i=1}^{s}t_i\big[\big(g(x,y_i)-\langle v,x\rangle\big)\nabla\big(f(x,y_i)+\langle w,x\rangle\big)-\big(f(x,y_i)+\langle w,x\rangle\big)\nabla\big(g(x,y_i)-\langle v,x\rangle\big)\big]+\sum_{j=1}^{p}\mu_j\nabla\big(h_j(x)+\langle u_j,x\rangle\big)=0,$$
$$\sum_{j=1}^{p}\mu_j\big(h_j(x)+\langle u_j,x\rangle\big)\ge 0,$$
$$\mu\in\mathbb{R}^p_+,\quad t_i\ge 0,\quad\sum_{i=1}^{s}t_i=1,\quad y_i\in Y(x),\ i=1,2,\dots,s.$$

Now we consider the Wolfe type dual model to problem (GMFPS) as follows:
$$\text{(DII)}\qquad\max_{(s,t,y)\in K(z)}\ \sup_{(z,\mu,v,w,u)\in H_2(s,t,y)}F(z),$$
where $F(z)=\sup_{y\in Y}\frac{f(z,y)+\langle w,z\rangle}{g(z,y)-\langle v,z\rangle}$, and $H_2(s,t,y)$ denotes the set of all $(z,\mu,v,w,u)\in\mathbb{R}^n\times\mathbb{R}^p_+\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{np}$ satisfying
$$\sum_{i=1}^{s}t_i\big[\big(g(z,y_i)-\langle v,z\rangle\big)\nabla\big(f(z,y_i)+\langle w,z\rangle\big)-\big(f(z,y_i)+\langle w,z\rangle\big)\nabla\big(g(z,y_i)-\langle v,z\rangle\big)\big]+\sum_{j=1}^{p}\mu_j\nabla\big(h_j(z)+\langle u_j,z\rangle\big)=0,\tag{32}$$
$$\sum_{j=1}^{p}\mu_j\big(h_j(z)+\langle u_j,z\rangle\big)\ge 0,\tag{33}$$
$$(s,t,y)\in K(z),\tag{34}$$
$$\langle w,z\rangle=s(z|C),\quad\langle v,z\rangle=s(z|D),\quad\langle u_j,z\rangle=s(z|E_j),\ j=1,\dots,p.\tag{35}$$
For a triplet $(s,t,y)\in K(z)$, if the set $H_2(s,t,y)$ is empty, then we define the supremum over it to be $-\infty$. In this section, we denote
$$\phi_2(\cdot)=\sum_{i=1}^{s}t_i\big[\big(g(z,y_i)-\langle v,z\rangle\big)\big(f(\cdot,y_i)+\langle w,\cdot\rangle\big)-\big(f(z,y_i)+\langle w,z\rangle\big)\big(g(\cdot,y_i)-\langle v,\cdot\rangle\big)\big].$$

We state Theorems 5.2, 5.3 and 5.4 without proof, as their proofs are similar to those of Theorems 4.1, 4.2 and 4.3, respectively.

Theorem 5.2 (Weak Duality). Let $x$ and $(z,\mu,v,w,u,s,t,y)$ be feasible solutions of (GMFPS) and (DII), respectively. Suppose that $f(\cdot,y_i)+\langle w,\cdot\rangle$ and $-g(\cdot,y_i)+\langle v,\cdot\rangle$ for $i=1,\dots,s$ are, respectively, $(C,\alpha,\rho_i,d_i)$-convex and $(C,\alpha,\bar\rho_i,\bar d_i)$-convex at $z$, that $h_j(\cdot)+\langle u_j,\cdot\rangle$ for $j=1,\dots,p$ is $(C,\beta_j,\eta_j,\delta_j)$-convex at $z$, and that the inequality
$$\sum_{i=1}^{s}t_i\left\{\rho_i\big(g(z,y_i)-\langle v,z\rangle\big)\frac{d_i(x,z)}{\alpha(x,z)}+\bar\rho_i\big(f(z,y_i)+\langle w,z\rangle\big)\frac{\bar d_i(x,z)}{\alpha(x,z)}\right\}+\sum_{j=1}^{p}\mu_j\eta_j\,\frac{\delta_j(x,z)}{\beta_j(x,z)}\ge 0\tag{36}$$
holds. Then $\sup_{y\in Y}\frac{f(x,y)+\langle w,x\rangle}{g(x,y)-\langle v,x\rangle}\ge F(z)$.

Theorem 5.3 (Strong Duality). Assume that $x^*$ is an optimal solution of (GMFPS) and let $\nabla\big(h_j(x^*)+\langle u_j,x^*\rangle\big)$, $j\in J(x^*)$, be linearly independent. Then there exist $(\bar s,\bar t,\bar y^*)\in K(x^*)$ and $(x^*,\bar\mu,\bar v,\bar w,\bar u)\in H_2(\bar s,\bar t,\bar y^*)$ such that $(x^*,\bar\mu,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ is feasible for (DII). Further, if weak duality (Theorem 5.2) holds for all feasible solutions $(z,\mu,v,w,u,s,t,y)$ of (DII), then $(x^*,\bar\mu,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ is optimal for (DII), and the two problems have the same optimal values.

Theorem 5.4 (Strict Converse Duality). Let $x^*$ and $(\bar z,\bar\mu,\bar v,\bar w,\bar u,\bar s,\bar t,\bar y^*)$ be optimal solutions of (GMFPS) and (DII), respectively. Suppose that $f(\cdot,\bar y_i^*)+\langle\bar w,\cdot\rangle$ and $-g(\cdot,\bar y_i^*)+\langle\bar v,\cdot\rangle$ for $i=1,\dots,\bar s$ are, respectively, $(C,\alpha,\rho_i,d_i)$-convex and $(C,\alpha,\bar\rho_i,\bar d_i)$-convex at $\bar z$ for all $(\bar s,\bar t,\bar y^*)\in K(x^*)$ and $(\bar z,\bar\mu,\bar v,\bar w,\bar u)\in H_2(\bar s,\bar t,\bar y^*)$. Let $h_j(\cdot)+\langle\bar u_j,\cdot\rangle$ for $j=1,\dots,p$ be $(C,\beta_j,\eta_j,\delta_j)$-convex at $\bar z$, let the inequality
$$\sum_{i=1}^{\bar s}\bar t_i\left\{\rho_i\big(g(\bar z,\bar y_i^*)-\langle\bar v,\bar z\rangle\big)\frac{d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}+\bar\rho_i\big(f(\bar z,\bar y_i^*)+\langle\bar w,\bar z\rangle\big)\frac{\bar d_i(x^*,\bar z)}{\alpha(x^*,\bar z)}\right\}+\sum_{j=1}^{p}\bar\mu_j\eta_j\,\frac{\delta_j(x^*,\bar z)}{\beta_j(x^*,\bar z)}>0$$
hold, and let $\nabla\big(h_j(x^*)+\langle\bar u_j,x^*\rangle\big)$, $j\in J(x^*)$, be linearly independent. Then $x^*=\bar z$; that is, $\bar z$ is optimal for (GMFPS) and
$$\sup_{y\in Y}\frac{f(\bar z,y)+\langle\bar w,\bar z\rangle}{g(\bar z,y)-\langle\bar v,\bar z\rangle}=F(\bar z).$$

Acknowledgment

Author Vinay Singh is thankful to the National Board for Higher Mathematics (NBHM), Department of Atomic Energy (DAE), Government of India, for financial support in the form of a Post Doctoral Fellowship.


References

[1] A. Chinchuluun, P.M. Pardalos, Multiobjective programming problems under generalized convexity, in: Models and Algorithms for Global Optimization, in: Springer Optim. Appl., vol. 4, Springer, New York, 2007, pp. 3–20.
[2] Z.A. Liang, H.X. Huang, P.M. Pardalos, Optimality conditions and duality for a class of nonlinear fractional programming problems, J. Optim. Theory Appl. 110 (2001) 611–619.
[3] D.H. Yuan, X.L. Liu, A. Chinchuluun, P.M. Pardalos, Nondifferentiable minimax fractional programming problems, J. Optim. Theory Appl. 129 (2006) 185–199.
[4] D. Yuan, A. Chinchuluun, X. Liu, P.M. Pardalos, Optimality conditions and duality for multiobjective programming involving (C, α, ρ, d)-convexity type-I functions, in: Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, Berlin, 2007, pp. 73–87.
[5] A. Chinchuluun, P.M. Pardalos, Optimality conditions and duality for nondifferentiable multiobjective fractional programming with generalized convexity, Ann. Oper. Res. 154 (2007) 133–147.
[6] X.J. Long, N.J. Huang, Z.B. Liu, Optimality conditions, duality and saddle points for nondifferentiable multiobjective fractional programs, J. Ind. Manag. Optim. 4 (2008) 287–298.
[7] X.J. Long, Optimality conditions and duality for nondifferentiable multiobjective fractional programming problems with (C, α, ρ, d)-convexity, J. Optim. Theory Appl. 148 (2011) 197–208.
[8] M.H. Kim, G.S. Kim, Optimality and duality for generalized nondifferentiable fractional programming with generalized invexity, J. Appl. Math. Inform. 28 (2010) 1535–1544.
[9] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, New York, 1983.
[10] Z.A. Liang, Z.W. Shi, Optimality conditions and duality for a minimax fractional programming with generalized convexity, J. Math. Anal. Appl. 277 (2003) 474–488.