KKT optimality conditions of interval-valued optimization problem with sub-differentiable functions


Chenxi Ouyang, Dong Qiu∗
Key Laboratory of Intelligent Analysis and Decision on Complex Systems,
Chongqing University of Posts and Telecommunications,
Nanan, Chongqing, 400065, P. R. China

Abstract
This paper addresses optimization problems with interval-valued objective functions. We consider two types of order relation on the interval space. For each order relation, we obtain KKT conditions using the concept of the sub-derivative for interval-valued functions. The sub-derivative is a more general notion of derivative for this class of functions than other derivative concepts.
Keywords: sub-differentiable; interval-valued optimization problem; KKT optimality conditions

1 Introduction
In modern times, optimization problems with uncertainty have received considerable attention and have great value in economics and control [1, 2, 5, 11]. From this point of view, Ishibuchi and Tanaka [7] introduced interval-valued optimization as an attempt to handle problems with imprecise parameters. Since then, a collection of papers by Chanas, Kuchta, Bitran and others [6, 9, 10] has offered many different approaches to this subject. In addition, the importance of derivatives in interval-valued optimization problems cannot be ignored. The gH-derivative is a very important concept in the study of interval-valued functions [3]. The most common approach is to reduce the derivative of an interval-valued function to the derivatives of its endpoint functions [12]. The gH-derivative concept has an important drawback: the existence conditions for the gH-derivative of an interval-valued function are very strict [12]. The sub-derivative concept is slightly more general than the notion of gH-derivative for interval-valued functions. Based on the sub-derivative and its properties, we give KKT optimality conditions for interval-valued optimization problems.
The paper is organized as follows. In Section 2, we recall some preliminaries. In Section 3, new KKT-type optimality conditions are derived and some illustrative examples are given. Finally, Section 4 contains some conclusions.

∗Corresponding author. Tel.: +86-15123126186; Fax: +86-23-62471796; E-mail: dongqiumath@163.com; qiudong@cqupt.edu.cn (D. Qiu).

2 Preliminaries
In this section, we introduce some definitions that will be used throughout the paper. We denote by R the set of all real numbers and by Kc the family of bounded closed intervals of R, that is,

Kc = {[aL, aU] | aL, aU ∈ R, aL ≤ aU}.

For A = [aL, aU], B = [bL, bU] and λ ∈ R we consider the following operations:

A + B = [aL, aU] + [bL, bU] = [aL + bL, aU + bU],   (1)

λ · A = [λaL, λaU] if λ ≥ 0;  λ · A = [λaU, λaL] if λ < 0.   (2)
From (1) and (2) we get −A = [−aU, −aL] and B − A = [bL − aU, bU − aL]. However, this definition of the difference has the drawback that Kc equipped with operations (1) and (2) is not a linear space: an interval generally has no additive inverse. The following difference between two intervals was introduced by Stefanini and Bede [8].

Definition 2.1 [8] The generalized Hukuhara difference (gH-difference for short) of two intervals A and B is defined as follows:

A ⊖gH B = C ⇔ (i) A = B + C, or (ii) B = A − C.

This difference has many interesting properties; for example, A ⊖gH A = [0, 0]. Moreover, the gH-difference of two intervals A = [aL, aU] and B = [bL, bU] always exists and equals

A ⊖gH B = [min{aL − bL, aU − bU}, max{aL − bL, aU − bU}].
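These operations are easy to experiment with. The following Python sketch (hypothetical helper code, not part of the paper, representing an interval as a (lo, hi) pair) implements operations (1), (2) and the gH-difference:

```python
# A minimal sketch of interval operations (1), (2) and the gH-difference,
# assuming an interval is represented as a (lo, hi) tuple with lo <= hi.
def add(A, B):                      # Equation (1): endpoint-wise addition
    return (A[0] + B[0], A[1] + B[1])

def scale(lam, A):                  # Equation (2): scalar multiplication
    return (lam * A[0], lam * A[1]) if lam >= 0 else (lam * A[1], lam * A[0])

def gh_diff(A, B):                  # gH-difference: always exists for intervals
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (min(lo, hi), max(lo, hi))

A, B = (1.0, 4.0), (0.0, 2.0)
print(gh_diff(A, A))   # (0.0, 0.0): illustrates A ⊖gH A = [0, 0]
print(gh_diff(A, B))   # (1.0, 2.0)
```

Note that `gh_diff(A, A)` returns the degenerate interval (0.0, 0.0), unlike the difference induced by (1) and (2), for which A − A = [aL − aU, aU − aL] ≠ [0, 0] in general.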

Let M be an open and nonempty subset of Rn, and let F : M → Kc be an interval-valued function. We write F(x) = [FL(x), FU(x)], where FL(x) ≤ FU(x), and FS(x) = FU(x) − FL(x) for x ∈ M. Based on the gH-difference, we introduce the following derivative of interval-valued functions.

Definition 2.2 [8] Let x0 ∈ M ⊂ R and let h be such that x0 + h ∈ M. The gH-derivative of an interval-valued function F : M ⊂ R → Kc at x0 is defined as

F′gH(x0) = lim_{h→0} (1/h) [F(x0 + h) ⊖gH F(x0)].   (3)

If F′gH(x0) ∈ Kc satisfying Equation (3) exists, we say that F is generalized Hukuhara differentiable (gH-differentiable for short) at x0.

The following necessary and sufficient condition for the existence of the gH-derivative of an interval-valued function was introduced in [12].

Theorem 2.1 [12] Let F(x) be an interval-valued function. F(x) is gH-differentiable at x0 ∈ M if and only if one of the following cases holds:
(a) FL(x) and FU(x) are differentiable at x0 and

F′gH(x0) = [min{(FL)′(x0), (FU)′(x0)}, max{(FL)′(x0), (FU)′(x0)}].

(b) (FL)′−(x0), (FL)′+(x0), (FU)′−(x0) and (FU)′+(x0) exist and satisfy (FL)′−(x0) = (FU)′+(x0) and (FL)′+(x0) = (FU)′−(x0). Moreover,

F′gH(x0) = [min{(FL)′+(x0), (FU)′+(x0)}, max{(FL)′+(x0), (FU)′+(x0)}]
         = [min{(FL)′−(x0), (FU)′−(x0)}, max{(FL)′−(x0), (FU)′−(x0)}].
In order to extend the applicability of the gH-derivative, we introduce the following concept.

Definition 2.3 [4] Let x0 ∈ M ⊂ R. If (FL)′−(x0), (FL)′+(x0), (FU)′−(x0) and (FU)′+(x0) exist, then the sub-derivative of an interval-valued function F : M → Kc at x0 is defined as

∂F(x0) = [min{(FL)′−(x0), (FL)′+(x0), (FU)′−(x0), (FU)′+(x0)},
          max{(FL)′−(x0), (FL)′+(x0), (FU)′−(x0), (FU)′+(x0)}].   (4)

If ∂F(x0) ∈ Kc satisfying Equation (4) exists, we say that F(x) is sub-differentiable at x0.
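Definition 2.3 can be checked numerically by approximating the four one-sided derivatives with difference quotients. The sketch below (hypothetical code, not from the paper) uses F(x) = [−|x|, 2|x|], a function that is sub-differentiable but not gH-differentiable at x0 = 0, since (FL)′−(0) = 1 ≠ 2 = (FU)′+(0):

```python
# Numerical sketch of Definition 2.3: approximate the four one-sided
# derivatives of FL and FU and assemble the sub-derivative interval.
# The example F(x) = [-|x|, 2|x|] is hypothetical (chosen so that the
# gH-derivative fails to exist at 0 while the sub-derivative exists).
def one_sided(f, x0, h):            # h > 0: right derivative, h < 0: left
    return (f(x0 + h) - f(x0)) / h

def sub_derivative(FL, FU, x0, h=1e-7):
    ds = [one_sided(FL, x0, -h), one_sided(FL, x0, h),
          one_sided(FU, x0, -h), one_sided(FU, x0, h)]
    return (min(ds), max(ds))       # Equation (4)

FL, FU = lambda x: -abs(x), lambda x: 2 * abs(x)
lo, hi = sub_derivative(FL, FU, 0.0)
print(lo, hi)  # -2.0 2.0: the sub-derivative ∂F(0) = [-2, 2]
```

Here the four one-sided derivatives at 0 are 1, −1, −2 and 2, so Equation (4) yields ∂F(0) = [−2, 2] even though Theorem 2.1 shows F is not gH-differentiable there.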

Definition 2.4 [4] Let F : M ⊂ Rn → Kc. Suppose that all partial sub-derivatives of F(x) exist at x0 = (x1, x2, ..., xn). The n-dimensional vector

∇∂F(x0) = (∂F(x0)/∂x1, ∂F(x0)/∂x2, ..., ∂F(x0)/∂xn)

is called the sub-gradient of F(x) at x0. For any d = (d1, ..., dn) ∈ Rn, we define

dT ∇∂F(x0) = (d1 ∂F(x0)/∂x1, d2 ∂F(x0)/∂x2, ..., dn ∂F(x0)/∂xn).

It should be noted that real-valued functions are special interval-valued functions, so Definitions 2.3 and 2.4 apply equally to real-valued functions.
For A = [aL, aU], B = [bL, bU], the order relation ≤LU is defined by A ≤LU B if and only if aL ≤ bL and aU ≤ bU. It is known that ≤LU is a partial order relation on Kc. We write A <LU B if and only if A ≤LU B and A ≠ B; equivalently, A <LU B if and only if

(aL < bL and aU ≤ bU), or (aL ≤ bL and aU < bU), or (aL < bL and aU < bU).

Definition 2.5 Let F : M ⊂ Rn → Kc be an interval-valued function. For x0 ∈ M, we say that x0 is the LU-optimal solution of F(x) if there exists no x ∈ M such that F(x) <LU F(x0).
Next we introduce the second solution concept. For A = [aL, aU], the width of A is defined by w(A) = aS = aU − aL. The order relation ≤LS is defined by A ≤LS B if and only if aL ≤ bL and aS ≤ bS. We write A <LS B if and only if A ≤LS B and A ≠ B; equivalently, A <LS B if and only if

(aL < bL and aS ≤ bS), or (aL ≤ bL and aS < bS), or (aL < bL and aS < bS).

Definition 2.6 Let F : M ⊂ Rn → Kc be an interval-valued function. For x0 ∈ M, we say that x0 is the LS-optimal solution of F(x) if there exists no x ∈ M such that F(x) <LS F(x0).

Definition 2.7 [13] Let F(x) be an interval-valued function on a convex set M ⊂ Rn. Then
(a) we say that F(x) is LU-convex at x0 if

F(λx0 + (1 − λ)x) ≤LU λF(x0) + (1 − λ)F(x)   (5)

for all λ ∈ (0, 1) and each x ∈ M;
(b) we say that F(x) is LS-convex at x0 if

F(λx0 + (1 − λ)x) ≤LS λF(x0) + (1 − λ)F(x)   (6)

for all λ ∈ (0, 1) and each x ∈ M.

Remark 2.1 [13] Let M be a convex subset of Rn and let F(x) be an interval-valued function defined on M. Then we have the following properties:
(a) F(x) is LU-convex at x0 if and only if FL(x) and FU(x) are convex at x0;
(b) F(x) is LS-convex at x0 if and only if FL(x) and FS(x) are convex at x0.

Corollary 2.1 Let F(x) be sub-differentiable on M and Fm(x) = λ1 FL(x) + λ2 FU(x) for λ1, λ2 ∈ R. Then Fm(x) is sub-differentiable on M and ∂Fm(x) ⊂ λ1 ∂FL(x) + λ2 ∂FU(x).

Proof. From Definition 2.3, we know

∂FL(x) = [min{(FL)′−(x), (FL)′+(x)}, max{(FL)′−(x), (FL)′+(x)}],
∂FU(x) = [min{(FU)′−(x), (FU)′+(x)}, max{(FU)′−(x), (FU)′+(x)}],
∂Fm(x) = [min{λ1(FL)′−(x) + λ2(FU)′−(x), λ1(FL)′+(x) + λ2(FU)′+(x)},
          max{λ1(FL)′−(x) + λ2(FU)′−(x), λ1(FL)′+(x) + λ2(FU)′+(x)}].

So ∂Fm(x) ⊂ λ1 ∂FL(x) + λ2 ∂FU(x). □

Corollary 2.2 Let F (x) be sub-differentiable on M and Fn (x) = λ1 FL (x) + λ2 FS (x),
for λ1 , λ2 ∈ R. Then Fn (x) is sub-differentiable on M and ∂Fn (x) ⊂ λ1 ∂FL (x) +
λ2 ∂FS (x).

Proof. The proof is similar to that of Corollary 2.1. □

Corollary 2.3 [4] Let the real-valued function f(x) be sub-differentiable and convex on M. Then f(y) − f(x) ≥ k · (y − x) for all y ∈ M and every k ∈ ∂f(x).

3 Optimization conditions of interval-valued functions

We first consider the real-valued optimization problem, which can be formulated as follows:

(FP)  min f(x)
      subject to gi(x) ≤ 0, i = 1, ..., m,

where f : M ⊂ Rn → R and the gi(x) (i = 1, ..., m) are sub-differentiable and convex on M. We then study the optimal solutions of (FP).

Theorem 3.1 Suppose f : M ⊂ Rn → R and gi(x) (i = 1, ..., m) are sub-differentiable and convex on M. If there exist 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = ∇∂f(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the optimal solution of (FP).

Proof. According to Corollary 2.3, for all k ∈ ∇∂f(x0),

f(x) − f(x0) ≥ k · (x − x0).

So, by condition (1), for all h ∈ Σ_{i=1}^m ui ∇∂gi(x0),

f(x) − f(x0) ≥ −h · (x − x0).

Now, since each gi(x) is convex, again by Corollary 2.3, for i = 1, ..., m and all ki ∈ ∇∂gi(x0),

−ki · (x − x0) ≥ gi(x0) − gi(x).

Therefore

f(x) − f(x0) ≥ Σ_{i=1}^m ui (gi(x0) − gi(x)) = −Σ_{i=1}^m ui gi(x),

where the equality uses condition (2). But ui ≥ 0 and gi(x) ≤ 0 for each i, so f(x) − f(x0) ≥ 0, and x0 is the optimal solution of (FP). □

Example 3.1 Consider the following optimization problem:

min f(x) = |x − 1|
subject to −|x| + 1 ≤ 0,
           x − 1 ≤ 0.

We have

∂f(x0) = [−1, 1],  ∂g1(x0) = [1, −1],  ∂g2(x0) = [1, 1].

For x0 = 1 and u1 = 1, u2 = 0, it is easy to get
(1) [0, 0] = [−1, 1] + 1 · [1, −1] + 0 · [1, 1],
(2) 1 · (−|1| + 1) + 0 · (1 − 1) = [0, 0].
By Theorem 3.1, x0 = 1 is the optimal solution of (FP).
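As a hypothetical numerical cross-check (not part of the paper), a coarse grid search over the feasible set confirms the KKT conclusion:

```python
# Grid-search sanity check for Example 3.1 (a sketch, not a proof).
# Feasible set: -|x| + 1 <= 0 and x - 1 <= 0, i.e. x <= -1 or x = 1.
f = lambda x: abs(x - 1)
xs = [k / 100 for k in range(-500, 501)]              # grid on [-5, 5]
feasible = [x for x in xs if -abs(x) + 1 <= 1e-9 and x - 1 <= 1e-9]
best = min(feasible, key=f)
print(best, f(best))  # 1.0 0.0 -- the KKT point x0 = 1 attains the minimum
```

On this grid the feasible points are [−5, −1] ∪ {1}, and the objective is minimized at x = 1 with value 0, agreeing with the KKT analysis.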

Next, we discuss the following interval-valued optimization problem:

(LP)  min F(x)
      subject to gi(x) ≤ 0, i = 1, ..., m,

where F : M ⊂ Rn → Kc and the gi(x) (i = 1, ..., m) are sub-differentiable and convex on M. We then study the optimal solutions of (LP).

Theorem 3.2 Suppose F : M ⊂ Rn → Kc is sub-differentiable and LU-convex on M. If there exist (Lagrange) multipliers 0 < λ1, λ2 ∈ R and 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = λ1 ∇∂FL(x0) + λ2 ∇∂FU(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LU-optimal solution of (LP).

Proof. Define Fm(x) = λ1 FL(x) + λ2 FU(x). Since F(x) is LU-convex and sub-differentiable at x0, Fm(x) is convex and sub-differentiable at x0, and

∂Fm(x) ⊂ λ1 ∂FL(x) + λ2 ∂FU(x).

Because of (1), we get

∇∂Fm(x0) + Σ_{i=1}^m ui ∇∂gi(x0) ⊂ λ1 ∇∂FL(x0) + λ2 ∇∂FU(x0) + Σ_{i=1}^m ui ∇∂gi(x0),

and hence the following conditions hold:

(1) ∇∂Fm(x0) + Σ_{i=1}^m ui ∇∂gi(x0) = [0, 0]^n,
(2) Σ_{i=1}^m ui gi(x0) = [0, 0].

By Theorem 3.1, x0 is the optimal solution of Fm(x). To complete the proof, suppose that x0 is not the LU-optimal solution of F(x). Then there exists an x such that

(FL(x) < FL(x0) and FU(x) ≤ FU(x0)), or (FL(x) ≤ FL(x0) and FU(x) < FU(x0)), or (FL(x) < FL(x0) and FU(x) < FU(x0)).

Since λ1, λ2 > 0, in each case Fm(x) < Fm(x0), which contradicts the fact that x0 is the optimal solution of Fm(x). □

Theorem 3.3 Suppose F : M ⊂ Rn → Kc is sub-differentiable and LS-convex on M. If there exist (Lagrange) multipliers 0 < λ1, λ2 ∈ R and 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = λ1 ∇∂FL(x0) + λ2 ∇∂FS(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LS-optimal solution of (LP).

Proof. Define Fn(x) = λ1 FL(x) + λ2 FS(x). Since F(x) is LS-convex and sub-differentiable at x0, Fn(x) is convex and sub-differentiable at x0, and

∂Fn(x) ⊂ λ1 ∂FL(x) + λ2 ∂FS(x).

As in the proof of Theorem 3.2, x0 is the optimal solution of Fn(x). To complete the proof, suppose that x0 is not the LS-optimal solution of F(x). Then there exists an x such that

(FL(x) < FL(x0) and FS(x) ≤ FS(x0)), or (FL(x) ≤ FL(x0) and FS(x) < FS(x0)), or (FL(x) < FL(x0) and FS(x) < FS(x0)).

Since λ1, λ2 > 0, in each case Fn(x) < Fn(x0), which contradicts the fact that x0 is the optimal solution of Fn(x). □
Example 3.2 Consider the objective function

F(x) = [x/4 − 1, x],     if x > 0,
F(x) = [−x/4 − 1, −x],   if x ≤ 0,

and the optimization problem

min F(x)
subject to −x − 2 ≤ 0,
           x ≤ 0.

We have

FL(x) = x/4 − 1 if x > 0,   FL(x) = −x/4 − 1 if x ≤ 0,
FU(x) = x if x > 0,         FU(x) = −x if x ≤ 0,
FS(x) = 3x/4 + 1 if x > 0,  FS(x) = −3x/4 + 1 if x ≤ 0,

and we get

∂FL(x) = [1/4, 1/4] if x > 0,   ∂FL(x) = [−1/4, −1/4] if x ≤ 0,
∂FU(x) = [1, 1] if x > 0,       ∂FU(x) = [−1, −1] if x ≤ 0,
∂g1(x) = [−1, −1],   ∂g2(x) = [1, 1].

FL(x), FU(x) and FS(x) are all convex and sub-differentiable. Conditions (1) and (2) of Theorem 3.2 are satisfied at x0 = 0 with λ1 = 4, λ2 = 0, u1 = 0, u2 = 1, hence x0 = 0 is the LU-optimal solution of (LP). Since

∂FS(x) = [3/4, 3/4] if x > 0,   ∂FS(x) = [−3/4, −3/4] if x ≤ 0,

conditions (1) and (2) of Theorem 3.3 are also satisfied at x0 = 0 with λ1 = 4, λ2 = 0, u1 = 0, u2 = 1. Hence x0 = 0 is also the LS-optimal solution of (LP).
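As a hypothetical grid check (not part of the paper), one can verify that no feasible point dominates x0 = 0 under either order relation:

```python
# Grid check for Example 3.2 (a sketch, not a proof): over the feasible
# set [-2, 0], no point should LU-dominate or LS-dominate x0 = 0.
def F(x):                                # interval as (lo, hi) pair
    return (x / 4 - 1, x) if x > 0 else (-x / 4 - 1, -x)

def lu_less(A, B):                       # A <_LU B
    return A[0] <= B[0] and A[1] <= B[1] and A != B

def ls_less(A, B):                       # A <_LS B (lower endpoint and width)
    wA, wB = A[1] - A[0], B[1] - B[0]
    return A[0] <= B[0] and wA <= wB and (A[0], wA) != (B[0], wB)

x0 = 0.0
grid = [k / 100 for k in range(-200, 1)]  # feasible points in [-2, 0]
dominated_lu = any(lu_less(F(x), F(x0)) for x in grid)
dominated_ls = any(ls_less(F(x), F(x0)) for x in grid)
print(dominated_lu, dominated_ls)  # False False
```

Both checks return False: on the grid, F(0) = [−1, 0] has the smallest lower endpoint, upper endpoint and width among feasible points, consistent with x0 = 0 being both LU- and LS-optimal.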

Theorem 3.4 Suppose F : M ⊂ Rn → Kc is sub-differentiable and LU-convex on M. If there exist 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = ∇∂F(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LU-optimal solution of (LP).
Proof. The equation [0, 0]^n = ∇∂F(x0) + Σ_{i=1}^m ui ∇∂gi(x0) can be interpreted as

[0, 0]^n = ∇∂FL(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
[0, 0]^n = ∇∂FU(x0) + Σ_{i=1}^m ui ∇∂gi(x0),

which implies, for any λ1, λ2 > 0,

[0, 0]^n = λ1 ∇∂FL(x0) + λ2 ∇∂FU(x0) + Σ_{i=1}^m vi ∇∂gi(x0),

where vi = λ1 ui + λ2 ui, i = 1, ..., m. Hence all the conditions of Theorem 3.2 are met, which completes the proof. □

Theorem 3.5 Suppose F : M ⊂ Rn → Kc is sub-differentiable and LS-convex on M. If there exist 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = ∇∂F(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LS-optimal solution of (LP).

Proof. From the equation [0, 0]^n = ∇∂F(x0) + Σ_{i=1}^m ui ∇∂gi(x0) we know

[0, 0]^n = ∇∂FL(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
[0, 0]^n = ∇∂FU(x0) + Σ_{i=1}^m ui ∇∂gi(x0).

Because FS(x) = FU(x) − FL(x) and, by Corollary 2.2,

∂FS(x) ⊂ ∂FU(x) − ∂FL(x),

we also have

[0, 0]^n = ∇∂FS(x0) + Σ_{i=1}^m ui ∇∂gi(x0),

which implies, for any λ1, λ2 > 0,

[0, 0]^n = λ1 ∇∂FL(x0) + λ2 ∇∂FS(x0) + Σ_{i=1}^m vi ∇∂gi(x0),

where vi = λ1 ui + λ2 ui, i = 1, ..., m. Hence all the conditions of Theorem 3.3 are met. □

Example 3.3 Consider the following optimization problem:

min F(x1, x2) = [x1², x1² + x2²]
subject to g1(x1, x2) = x1 + x2 − 1 ≤ 0,
           g2(x1, x2) = −x1 + 1 ≤ 0,
           g3(x1, x2) = −x2 ≤ 0.

We know FL(x) = x1², FU(x) = x1² + x2² and FS(x) = x2². Since FL(x), FU(x) and FS(x) are convex,

∇∂F(x1, x2) = ([2x1, 2x1], [min{0, 2x2}, max{0, 2x2}]),
∇∂g1(x1, x2) = ([1, 1], [1, 1]),
∇∂g2(x1, x2) = ([−1, −1], [0, 0]),
∇∂g3(x1, x2) = ([0, 0], [−1, −1]).

For x0 = (1, 0) and u1 = 0, u2 = 2, u3 = 0, it is easy to get

(1) [0, 0]^n = ([2, 2], [0, 0]) + 0 · ([1, 1], [1, 1]) + 2 · ([−1, −1], [0, 0]) + 0 · ([0, 0], [−1, −1]),
(2) 0 · (1 + 0 − 1) + 2 · (−1 + 1) + 0 · (−0) = [0, 0].

Therefore, by Theorem 3.4, x0 = (1, 0) is the LU-optimal solution of (LP), and by Theorem 3.5 it is also the LS-optimal solution of (LP).
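A small hypothetical script (not part of the paper) shows that the feasible set of this problem in fact collapses to the single point found by the KKT conditions:

```python
# Grid check for Example 3.3 (a sketch): enumerate feasible grid points.
# Constraints: x1 + x2 - 1 <= 0, -x1 + 1 <= 0, -x2 <= 0, which force
# x1 >= 1, x2 >= 0 and x1 + x2 <= 1, i.e. the single point (1, 0).
feasible = [(x1 / 10, x2 / 10)
            for x1 in range(0, 21) for x2 in range(0, 21)
            if x1 / 10 + x2 / 10 - 1 <= 1e-12
            and -x1 / 10 + 1 <= 1e-12
            and -x2 / 10 <= 1e-12]
print(feasible)  # [(1.0, 0.0)] -- the LU- and LS-optimal solution
```

Since (1, 0) is the only feasible point, it is trivially both LU- and LS-optimal, matching the conclusion from Theorems 3.4 and 3.5.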

Theorem 3.6 Suppose F : M ⊂ Rn → Kc is sub-differentiable and FL + FU is convex on M. If there exist a (Lagrange) multiplier 0 < λ ∈ R and 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = λ ∇∂(FL + FU)(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LU-optimal solution of (LP).

Proof. Define Ft(x) = λ(FL + FU)(x). Since FL + FU is convex, Ft(x) is convex and

∂Ft(x) = [min{λ(FL + FU)′−(x), λ(FL + FU)′+(x)}, max{λ(FL + FU)′−(x), λ(FL + FU)′+(x)}],

so

∇∂Ft(x) = λ ∇∂(FL + FU)(x).

By condition (1) we then get

[0, 0]^n = ∇∂Ft(x0) + Σ_{i=1}^m ui ∇∂gi(x0).

By Theorem 3.1, x0 is the optimal solution of Ft(x), and since λ > 0 it follows that x0 is the LU-optimal solution of (LP). □

Theorem 3.7 Suppose F : M ⊂ Rn → Kc is sub-differentiable and FL + FS is convex on M. If there exist a (Lagrange) multiplier 0 < λ ∈ R and 0 ≤ ui ∈ R, i = 1, ..., m, such that

(1) [0, 0]^n = λ ∇∂(FL + FS)(x0) + Σ_{i=1}^m ui ∇∂gi(x0),
(2) Σ_{i=1}^m ui gi(x0) = [0, 0],

then x0 is the LS-optimal solution of (LP).

Proof. Define Fk(x) = λ(FL + FS)(x). The rest of the proof is similar to that of Theorem 3.6 and is omitted. □

Example 3.4 Consider

F(x) = [3x² + x − 16, 2x² + 2x],  if x ∈ (−1, 0),
F(x) = [2x − 16, 2x],             if x ∈ [0, 1),

and the optimization problem

min F(x)
subject to −x ≤ 0,
           x − 1 ≤ 0.

We have

FL(x) = 3x² + x − 16 if x ∈ (−1, 0),   FL(x) = 2x − 16 if x ∈ [0, 1),
FU(x) = 2x² + 2x if x ∈ (−1, 0),       FU(x) = 2x if x ∈ [0, 1),
FS(x) = −x² + x + 16 if x ∈ (−1, 0),   FS(x) = 16 if x ∈ [0, 1).

Since FL + FU and FL + FS are convex, we get

∂(FL + FU)(x) = [10x + 3, 10x + 3] if x ∈ (−1, 0),   ∂(FL + FU)(x) = [4, 4] if x ∈ [0, 1),
∂(FL + FS)(x) = [4x + 2, 4x + 2] if x ∈ (−1, 0),     ∂(FL + FS)(x) = [2, 2] if x ∈ [0, 1),
∂g1(x) = [−1, −1],   ∂g2(x) = [1, 1].

Conditions (1) and (2) of Theorem 3.6 are satisfied at x0 = 0 with λ = 1, u1 = 4, u2 = 0, hence x0 = 0 is the LU-optimal solution of (LP). Conditions (1) and (2) of Theorem 3.7 are satisfied at x0 = 0 with λ = 1, u1 = 2, u2 = 0. Hence x0 = 0 is also the LS-optimal solution of (LP).
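A hypothetical grid check (not part of the paper) again confirms that no feasible point dominates x0 = 0 under either order relation:

```python
# Grid check for Example 3.4 (a sketch): the feasible set is [0, 1)
# (intersected with the domain (-1, 1)), and x0 = 0 should be both
# LU- and LS-optimal there.
def F(x):                                # interval as (lo, hi) pair
    if -1 < x < 0:                       # branch unused on the feasible grid,
        return (3 * x**2 + x - 16, 2 * x**2 + 2 * x)   # kept for completeness
    return (2 * x - 16, 2 * x)           # x in [0, 1)

def lu_less(A, B):                       # A <_LU B
    return A[0] <= B[0] and A[1] <= B[1] and A != B

def ls_less(A, B):                       # A <_LS B (lower endpoint and width)
    wA, wB = A[1] - A[0], B[1] - B[0]
    return A[0] <= B[0] and wA <= wB and (A[0], wA) != (B[0], wB)

x0 = 0.0
grid = [k / 100 for k in range(0, 100)]  # feasible points in [0, 1)
dominated_lu = any(lu_less(F(x), F(x0)) for x in grid)
dominated_ls = any(ls_less(F(x), F(x0)) for x in grid)
print(dominated_lu, dominated_ls)  # False False
```

On [0, 1) we have FL(x) = 2x − 16 and FU(x) = 2x, both increasing, with constant width 16, so F(0) = [−16, 0] is undominated in both orders, as the KKT conditions predict.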

4 Conclusions
We have considered two order relations on the interval space: the LU order relation and the LS order relation. We have defined the gradient of an interval-valued function using the sub-derivative and have used it to obtain KKT optimality conditions under the LU and LS order relations. These results are more general than similar results based on the gH-derivative. Examples are given to illustrate the conclusions. We hope that the results of this study will lead to further work in related fields.
Author contributions: These authors contributed equally to this work. Concep-
tualization, Dong Qiu; methodology and writing, Chenxi Ouyang.

Funding: This work was supported by the National Natural Science Foundation of China (11671001, 61876201) and the Natural Science Foundation of Chongqing Science and Technology Commission (cstc2019jcyj-msxmX0716).
Acknowledgments: The authors thank the anonymous reviewers for their valu-
able comments.
Conflicts of interest: The authors declare no conflict of interest.

References
[1] A. Mahanipour, H. Nezamabadi-Pour. GSP: an automatic programming technique with
gravitational search algorithm. Applied Intelligence, 2019, 49, 1502-1516.
[2] B. D. Chung, T. Yao, C. Xie, et al. Robust Optimization Model for a Dynamic Network
Design Problem Under Demand Uncertainty. Netw Spat Econ, 2011, 11 (2), 371-389.
[3] B. Bede, L. Stefanini. Generalized differentiability of fuzzy-valued functions. Fuzzy Sets
Syst, 2013, 230, 119-141.
[4] Chenxi Ouyang, Dong Qiu, Senlin Xiang, Jiafeng Xiao. Optimization conditions of interval-
valued problems based on sub-differentials, CGCKD2020, (under review).
[5] G. M. Ostrovsky, Y. M. Volin, D. V. Golovashkin. Optimization problem of complex system
under uncertainty. Computers & Chemical Engineering, 1998, 22 (7-8), 1007-1015.
[6] G. R. Bitran. Linear Multiple Objective Problems with Interval Coefficients. Management
Science, 1980, 26 (7), 694-706.
[7] H. Ishibuchi, H. Tanaka. Multiobjective programming in optimization of the interval ob-
jective function. European Journal of Operational Research, 1990, 48 (2), 219-225.
[8] L. Stefanini, B. Bede. Generalized Hukuhara differentiability of interval-valued functions
and interval differential equations. Nonlinear Analysis, 2008, 71 (3-4), 1311-1328.
[9] M. Ida. Multiple objective linear programming with interval coefficients and its all efficient
solutions. In Proceedings of the 35th IEEE Conference on Decision and Control, Kobe,
Japan, 13 December 1996.
[10] S. Chanas, D. Kuchta. Multiobjective programming in optimization of interval objective
functions − A generalized approach. European Journal of Operational Research, 1996, 94,
594-598.
[11] T. Q. Bao, B. S. Mordukhovich. Set-valued optimization in welfare economics. Advances
in Mathematical Economics, 2010, 13, 113-153.
[12] Y. Chalco-Cano, H. Román-Flores, M. D. Jiménez-Gamero. Generalized derivative and π-derivative for set-valued functions. Information Sciences, 2011, 181 (11), 2177-2188.
[13] Y. Chalco-Cano, W. A. Lodwick, A. Rufian-Lizana. Optimality conditions of type KKT
for optimization problem with interval-valued objective function via generalized derivative.
Fuzzy Optim Decis Making, 2013, 12 (3): 305-322.

