KKT Optimality Conditions of Interval-Valued Optimization Problem With Sub-Differentiable Functions
Abstract
This paper addresses optimization problems with interval-valued objective functions. We consider two types of order relation on the interval space. For each order relation, we obtain KKT conditions using the concept of the sub-derivative for interval-valued functions. For this class of functions, the sub-derivative is a more general notion than other concepts of derivative.
Keywords: sub-differentiable; interval-valued optimization problem; KKT op-
timality conditions
1 Introduction
In modern times, optimization problems with uncertainty have received considerable attention and are of great value in economics and control [1, 2, 5, 11]. From this point of view, Ishibuchi and Tanaka [7] introduced interval-valued optimization as an attempt to handle problems with imprecise parameters. Since then, a collection of papers by Chanas, Kuchta, Bitran and others [6, 9, 10] has offered many different approaches to this subject. In addition, the importance of derivatives in interval-valued optimization problems cannot be ignored. The gH-derivative is a central concept in the study of interval-valued functions [3]. The most common approach is to reduce the derivative of an interval-valued function to the derivatives of its endpoint functions [12]. However, the gH-derivative has an important drawback: the existence conditions of the gH-derivative of interval-valued functions are very strict [12]. The sub-derivative is slightly more general than the gH-derivative for interval-valued functions. Based on the sub-derivative and its properties, we give KKT optimality conditions for interval-valued optimization problems.
The paper is organized as follows. In Section 2, we recall some preliminaries. In Section 3, new KKT-type optimality conditions are derived and some illustrative examples are given. Finally, Section 4 contains some conclusions.
∗
Corresponding author. Tel.:+86-15123126186; Fax:+86-23-62471796; E-mail: dongqiu-
math@163.com; qiudong@cqupt.edu.cn (D. Qiu).
2 Preliminaries
In this section, we introduce some definitions that will be used throughout the
paper. We denote by R the family of all real numbers and by Kc the set of all bounded and closed intervals of R, that is,
Kc = {[aL, aU] | aL, aU ∈ R, aL ≤ aU}.
Definition 2.1 [8] The generalized Hukuhara difference of two intervals A and B (gH-difference for short) is defined as follows:
A ⊖gH B = C ⇔ (i) A = B + C, or (ii) B = A − C.
This difference has many interesting new properties, for example A ⊖gH A = [0, 0]. Also, the gH-difference of two intervals A = [aL, aU] and B = [bL, bU] always exists and equals
A ⊖gH B = [min{aL − bL, aU − bU}, max{aL − bL, aU − bU}].
Definition 2.2 [8] Let x0 ∈ M ⊂ R and h be such that x0 + h ∈ M. Then the gH-derivative of an interval-valued function F : M ⊂ R → Kc at x0 is defined as
F′gH(x0) = lim_{h→0} (1/h)[F(x0 + h) ⊖gH F(x0)].   (3)
If F′gH(x0) ∈ Kc satisfying Equation (3) exists, we say that F is generalized Hukuhara differentiable (gH-differentiable for short) at x0.
We now present the necessary and sufficient condition for the existence of the gH-derivative of interval-valued functions, introduced in [12].
Theorem 2.1 [12] Let F(x) be an interval-valued function. F(x) is gH-differentiable at x0 ∈ M if and only if one of the following cases holds:
(a) FL(x) and FU(x) are differentiable at x0 and
F′gH(x0) = [min{(FL)′(x0), (FU)′(x0)}, max{(FL)′(x0), (FU)′(x0)}].
(b) (FL)′−(x0), (FL)′+(x0), (FU)′−(x0) and (FU)′+(x0) exist and satisfy (FL)′−(x0) = (FU)′+(x0) and (FL)′+(x0) = (FU)′−(x0). Moreover,
F′gH(x0) = [min{(FL)′+(x0), (FU)′+(x0)}, max{(FL)′+(x0), (FU)′+(x0)}]
= [min{(FL)′−(x0), (FU)′−(x0)}, max{(FL)′−(x0), (FU)′−(x0)}].
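Case (a) can be checked numerically. The sketch below (an illustration, not the paper's code) approximates the endpoint derivatives by central differences and assembles F′gH(x0); the example function F(x) = [−x, x²] is our own choice for illustration:

```python
def num_diff(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def gh_derivative_case_a(FL, FU, x0):
    """Case (a) of Theorem 2.1: when FL and FU are differentiable at x0,
    F'_gH(x0) = [min{FL'(x0), FU'(x0)}, max{FL'(x0), FU'(x0)}]."""
    dL, dU = num_diff(FL, x0), num_diff(FU, x0)
    return (min(dL, dU), max(dL, dU))

# Illustrative F(x) = [-x, x^2]: at x0 = 1, FL' = -1 and FU' = 2,
# so F'_gH(1) should be approximately [-1, 2]
lo, hi = gh_derivative_case_a(lambda x: -x, lambda x: x * x, 1.0)
```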
In order to extend the application of gH-derivative, we introduce the following
concept.
Definition 2.3 [4] Let x0 ∈ M ⊂ R. If (FL)′−(x0), (FL)′+(x0), (FU)′−(x0) and (FU)′+(x0) exist, then the sub-derivative of an interval-valued function F : M → Kc at x0 is defined as
∂F(x0) = [min{(FL)′−(x0), (FL)′+(x0), (FU)′−(x0), (FU)′+(x0)},
max{(FL)′−(x0), (FL)′+(x0), (FU)′−(x0), (FU)′+(x0)}].   (4)
If ∂F(x0) ∈ Kc satisfying Equation (4) exists, we say that F(x) is sub-differentiable at x0.
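Equation (4) is straightforward to approximate numerically. The following sketch (illustrative only, with an example function of our own choosing) estimates the four one-sided endpoint derivatives by difference quotients. The example F(x) = [|x|, 2|x|] is sub-differentiable at 0 with ∂F(0) = [−2, 2], yet neither case of Theorem 2.1 applies there, showing how the sub-derivative is more general than the gH-derivative:

```python
def one_sided(f, x, sign, h=1e-6):
    """One-sided derivative of f at x: sign = +1 forward, -1 backward."""
    return (f(x + sign * h) - f(x)) / (sign * h)

def sub_derivative(FL, FU, x, h=1e-6):
    """Equation (4): min and max of the four one-sided endpoint derivatives."""
    vals = [one_sided(FL, x, s, h) for s in (-1, 1)] + \
           [one_sided(FU, x, s, h) for s in (-1, 1)]
    return (min(vals), max(vals))

# Illustrative F(x) = [|x|, 2|x|]: one-sided derivatives at 0 are
# -1, +1 for FL and -2, +2 for FU, so the sub-derivative is [-2, 2]
lo, hi = sub_derivative(lambda x: abs(x), lambda x: 2 * abs(x), 0.0)
```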
Definition 2.5 Let F : M ⊂ Rn → Kc be an interval-valued function. For x0 ∈ M, we say that x0 is the LU-optimal solution of F(x) if there exists no x ∈ M such that F(x) <LU F(x0).
Next we introduce the second solution concept. For A = [aL, aU], the width of A is defined by w(A) = aS = aU − aL. The order relation ≤LS is defined by A ≤LS B if and only if aL ≤ bL and aS ≤ bS. We write A <LS B if and only if A ≤LS B and A ≠ B. Equivalently, A <LS B if and only if
(aL < bL and aS ≤ bS), or (aL ≤ bL and aS < bS), or (aL < bL and aS < bS).
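As a sanity check, the ≤LS relation can be coded directly (a sketch, not from the paper), with intervals again represented as (lower, upper) pairs:

```python
def width(a):
    """w(A) = aS = aU - aL."""
    return a[1] - a[0]

def le_LS(a, b):
    """A <=_LS B iff aL <= bL and w(A) <= w(B)."""
    return a[0] <= b[0] and width(a) <= width(b)

def lt_LS(a, b):
    """A <_LS B iff A <=_LS B and A != B."""
    return le_LS(a, b) and a != b
```

For example, [0, 1] <LS [1, 3] holds, since 0 ≤ 1, the widths satisfy 1 ≤ 2, and the intervals are distinct.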
∂FL(x) = [min{(FL)′−(x), (FL)′+(x)}, max{(FL)′−(x), (FL)′+(x)}],
∂FU(x) = [min{(FU)′−(x), (FU)′+(x)}, max{(FU)′−(x), (FU)′+(x)}],
∂Fm(x) = [min{λ1(FL)′−(x) + λ2(FU)′−(x), λ1(FL)′+(x) + λ2(FU)′+(x)},
max{λ1(FL)′−(x) + λ2(FU)′−(x), λ1(FL)′+(x) + λ2(FU)′+(x)}].
So ∂Fm(x) ⊂ λ1∂FL(x) + λ2∂FU(x). □
Corollary 2.2 Let F(x) be sub-differentiable on M and Fn(x) = λ1FL(x) + λ2FS(x) for λ1, λ2 ∈ R. Then Fn(x) is sub-differentiable on M and ∂Fn(x) ⊂ λ1∂FL(x) + λ2∂FS(x).
Corollary 2.3 [4] Let the real-valued function f(x) be sub-differentiable and convex on M. Then f(y) − f(x) ≥ k · (y − x) for every k ∈ ∂f(x).
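For instance, Corollary 2.3 can be verified numerically for the convex function f(x) = |x − 1|, whose sub-derivative at x0 = 1 is [−1, 1]. This is an illustrative check of our own, not part of the paper:

```python
def f(x):
    return abs(x - 1)

x0 = 1.0
# Subgradient inequality: f(y) - f(x0) >= k * (y - x0) for every k in [-1, 1]
ks = [-1.0, -0.5, 0.0, 0.5, 1.0]       # samples from the sub-derivative
ys = [-2.0, -0.5, 0.0, 1.0, 1.5, 2.5]  # arbitrary test points
holds = all(f(y) - f(x0) >= k * (y - x0) - 1e-12 for k in ks for y in ys)
# holds is True: every sampled k acts as a subgradient at x0
```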
f(x) − f(x0) ≥ k · (x − x0).
So, for all h ∈ ∑_{i=1}^{m} ui ∇̃∂gi(x),
Now, since each gi(x) is convex, we have again by Corollary 2.3, for i = 1, ..., m,
But for each i, ui ≥ 0 and gi(x) ≤ 0. Thus f(x) − f(x0) ≥ 0 and x0 is the optimal solution of (FP). □
Example 3.1 Consider the following optimization problem:
min f(x) = |x − 1|
subject to −|x| + 1 ≤ 0,
x − 1 ≤ 0.
We have
∂f(x) = [−1, 1], ∂g1(x) = [1, −1], ∂g2(x) = [1, 1].
For x0 = 1 with u1 = 1 and u2 = 0, it is easy to get
(1) [0, 0] = [−1, 1] + 1 · [1, −1] + 0 · [1, 1],
(2) 1 · (−|1| + 1) + 0 · (1 − 1) = [0, 0].
Based on Theorem 3.1, we know x0 = 1 is the optimal solution of (FP).
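Independently of Theorem 3.1, the claim can be confirmed by a direct search over the feasible set (a quick numerical check, not part of the paper's argument): the constraints −|x| + 1 ≤ 0 and x − 1 ≤ 0 force x ≤ −1 or x = 1, and f(1) = 0 is clearly minimal.

```python
def f(x):
    return abs(x - 1)

def feasible(x):
    """Constraints of Example 3.1: -|x| + 1 <= 0 and x - 1 <= 0."""
    return -abs(x) + 1 <= 0 and x - 1 <= 0

# Grid search over [-3, 1]; only x <= -1 and x = 1 survive the constraints
grid = [x / 100 for x in range(-300, 101)]
best = min((x for x in grid if feasible(x)), key=f)
# best is 1.0, with f(best) = 0.0
```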
(LP) min F(x)
subject to gi(x) ≤ 0, i = 1, ..., m.
Proof. We define Fm(x) = λ1FL(x) + λ2FU(x). Since F(x) is LU-convex and sub-differentiable at x0, Fm(x) is convex and sub-differentiable at x0.
Based on Theorem 3.1, x0 is the optimal solution of Fm(x). Now, to complete the proof, suppose that x0 is not the LU-optimal solution of F(x). Then there exists an x such that
(FL(x) < FL(x0) and FU(x) ≤ FU(x0)), or (FL(x) ≤ FL(x0) and FU(x) < FU(x0)), or (FL(x) < FL(x0) and FU(x) < FU(x0)).
Therefore Fm(x) < Fm(x0), which contradicts the fact that x0 is the optimal solution of Fm(x). □
Proof. We define Fn(x) = λ1FL(x) + λ2FS(x). Since F(x) is LS-convex and sub-differentiable at x0, Fn(x) is convex and sub-differentiable at x0.
According to the proof of Theorem 3.2, we know x0 is the optimal solution of Fn(x). Now, to complete the proof, suppose that x0 is not the LS-optimal solution of F(x). Then there exists an x such that
(FL(x) < FL(x0) and FS(x) ≤ FS(x0)), or (FL(x) ≤ FL(x0) and FS(x) < FS(x0)), or (FL(x) < FL(x0) and FS(x) < FS(x0)).
Therefore Fn(x) < Fn(x0), which contradicts the fact that x0 is the optimal solution of Fn(x). □
Example 3.2 Consider the objective function
F(x) = [(1/4)x − 1, x] if x > 0, and F(x) = [−(1/4)x − 1, −x] if x ≤ 0,
and the problem
min F(x)
subject to −x − 2 ≤ 0,
x ≤ 0.
We have
FL(x) = (1/4)x − 1 if x > 0, and FL(x) = −(1/4)x − 1 if x ≤ 0;
FU(x) = x if x > 0, and FU(x) = −x if x ≤ 0;
FS(x) = (3/4)x + 1 if x > 0, and FS(x) = −(3/4)x + 1 if x ≤ 0.
And we get
∂FL(x) = [1/4, 1/4] if x > 0, and ∂FL(x) = [−1/4, −1/4] if x ≤ 0;
∂FU(x) = [1, 1] if x > 0, and ∂FU(x) = [−1, −1] if x ≤ 0;
∂g1(x) = [−1, −1], ∂g2(x) = [1, 1].
FL(x), FU(x) and FS(x) are all convex and sub-differentiable. Furthermore, conditions (1) and (2) of Theorem 3.2 are satisfied at x0 = 0 when λ1 = 4, λ2 = 0, u1 = 0, u2 = 1. Hence x0 = 0 is the LU-optimal solution of (LP). Moreover, since
∂FS(x) = [3/4, 3/4] if x > 0, and ∂FS(x) = [−3/4, −3/4] if x ≤ 0,
conditions (1) and (2) of Theorem 3.3 are satisfied at x0 = 0 when λ1 = 4, λ2 = 0, u1 = 0, u2 = 1. Hence x0 = 0 is also the LS-optimal solution of (LP).
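The LU- and LS-optimality of x0 = 0 can also be confirmed by brute force over the feasible set [−2, 0] (an illustrative check, not part of the paper): no feasible x should strictly dominate x0 in either order.

```python
FL = lambda x: 0.25 * x - 1 if x > 0 else -0.25 * x - 1
FU = lambda x: x if x > 0 else -x
FS = lambda x: FU(x) - FL(x)   # width of F(x)

x0 = 0.0
grid = [x / 100 for x in range(-200, 1)]  # feasible set: -x - 2 <= 0, x <= 0

def dominates(lo, wd, x):
    """x strictly dominates x0 in the order with components lo, wd."""
    return (lo(x) <= lo(x0) and wd(x) <= wd(x0)
            and (lo(x), wd(x)) != (lo(x0), wd(x0)))

lu_dominated = any(dominates(FL, FU, x) for x in grid)  # LU order
ls_dominated = any(dominates(FL, FS, x) for x in grid)  # LS order
# both are False: x0 = 0 is LU- and LS-optimal on the grid
```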
where vi = λ1ui + λ2ui, i = 1, ..., m. Then the result satisfies all the conditions of Theorem 3.2. This completes the proof. □
Theorem 3.5 Suppose F : M ⊂ Rn → Kc is sub-differentiable and LS-convex on M. If there exist ui ∈ R with ui ≥ 0, i = 1, ..., m, such that
(1) [0, 0]n = ∇̃∂F(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0),
(2) ∑_{i=1}^{m} ui gi(x0) = [0, 0],
then x0 is the LS-optimal solution of (LP).
Proof. From the equation [0, 0]n = ∇̃∂F(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0) we know
[0, 0]n = ∇̃∂FL(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0),
[0, 0]n = ∇̃∂FU(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0).
So
[0, 0]n = ∇̃∂FS(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0),
which implies
[0, 0]n = λ1∇̃∂FL(x0) + λ2∇̃∂FS(x0) + ∑_{i=1}^{m} vi ∇̃∂gi(x0),
where vi = λ1ui + λ2ui, i = 1, ..., m. Then the result satisfies all the conditions of Theorem 3.3. □
We know FL(x) = x1², FU(x) = x1² + x2² and FS(x) = x2². Because FL(x), FU(x) and FS(x) are convex,
∇̃∂F(x1, x2) = ([2x1, 2x1], [min{0, 2x2}, max{0, 2x2}]),
∇̃∂g1(x1, x2) = ([1, 1], [1, 1]),
∇̃∂g2(x1, x2) = ([−1, −1], [0, 0]),
∇̃∂g3(x1, x2) = ([0, 0], [−1, −1]).
And we get
[0, 0]n = λ∇̃∂Ft(x0) + ∑_{i=1}^{m} ui ∇̃∂gi(x0).
Based on Theorem 3.1, x0 is the optimal solution of Ft(x). It is easy to see that x0 is the LU-optimal solution of (LP). □
Proof. We define Fk(x) = λ(FL + FS)(x). The rest of the proof is similar to that of Theorem 3.6, so it is omitted. □
Example 3.4 Consider the problem
min F(x)
subject to −x ≤ 0,
x − 1 ≤ 0.
We have
FL(x) = 3x² + x − 16 if x ∈ (−1, 0), and FL(x) = 2x − 16 if x ∈ [0, 1);
FU(x) = 2x² + 2x if x ∈ (−1, 0), and FU(x) = 2x if x ∈ [0, 1);
FS(x) = −x² + x + 16 if x ∈ (−1, 0), and FS(x) = 16 if x ∈ [0, 1).
Because FL(x), FU(x) and FS(x) are convex, we get
∂(FL + FU)(x) = [10x + 3, 10x + 3] if x ∈ (−1, 0), and ∂(FL + FU)(x) = [4, 4] if x ∈ [0, 1);
∂(FL + FS)(x) = [4x + 2, 4x + 2] if x ∈ (−1, 0), and ∂(FL + FS)(x) = [2, 2] if x ∈ [0, 1);
∂g1(x) = [−1, −1], ∂g2(x) = [1, 1].
Conditions (1) and (2) of Theorem 3.6 are satisfied at x0 = 0 when λ = 1, u1 = 4, u2 = 0. Hence x0 = 0 is the LU-optimal solution of (LP). Conditions (1) and (2) of Theorem 3.7 are satisfied at x0 = 0 when λ = 1, u1 = 2, u2 = 0. Hence x0 = 0 is also the LS-optimal solution of (LP).
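As with Example 3.2, a brute-force check over the feasible region confirms the result (illustrative only, not the paper's argument): on [0, 1), FL(x) = 2x − 16 and FU(x) = 2x are both minimized at x0 = 0, and FS(x) = 16 is constant.

```python
FL = lambda x: 3 * x * x + x - 16 if x < 0 else 2 * x - 16
FU = lambda x: 2 * x * x + 2 * x if x < 0 else 2 * x
FS = lambda x: FU(x) - FL(x)   # width of F(x)

x0 = 0.0
grid = [x / 100 for x in range(0, 100)]  # feasible grid: 0 <= x < 1

lu_dominated = any(FL(x) <= FL(x0) and FU(x) <= FU(x0)
                   and (FL(x), FU(x)) != (FL(x0), FU(x0)) for x in grid)
ls_dominated = any(FL(x) <= FL(x0) and FS(x) <= FS(x0)
                   and (FL(x), FS(x)) != (FL(x0), FS(x0)) for x in grid)
# both are False: no feasible grid point dominates x0 = 0
```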
4 Conclusions
We have considered two order relations on the interval space: the LU order relation and the LS order relation. We have defined the gradient of an interval-valued function using the sub-derivative, and we have used it to obtain KKT optimality conditions under the LU and LS order relations. These results are more general than similar results based on the gH-derivative. Examples are given to help the reader better understand the conclusions. In the future, we hope that the results of this study will lead to innovative work in related fields.
Author contributions: These authors contributed equally to this work. Concep-
tualization, Dong Qiu; methodology and writing, Chenxi Ouyang.
Funding: This work is supported by the National Natural Science Foundation of China (1167100161876201) and the Natural Science Foundation of Chongqing Science and Technology Commission (cstc2019jcyj-msxmX0716).
Acknowledgments: The authors thank the anonymous reviewers for their valu-
able comments.
Conflicts of interest: The authors declare no conflict of interest.
References
[1] A. Mahanipour, H. Nezamabadi-Pour. GSP: an automatic programming technique with
gravitational search algorithm. Applied Intelligence, 2019, 49, 1502-1516.
[2] B. D. Chung, T. Yao, C. Xie, et al. Robust Optimization Model for a Dynamic Network
Design Problem Under Demand Uncertainty. Netw Spat Econ, 2011, 11 (2), 371-389.
[3] B. Bede, L. Stefanini. Generalized differentiability of fuzzy-valued functions. Fuzzy Sets
Syst, 2013, 230, 119-141.
[4] Chenxi Ouyang, Dong Qiu, Senlin Xiang, Jiafeng Xiao. Optimization conditions of interval-
valued problems based on sub-differentials, CGCKD2020, (under review).
[5] G. M. Ostrovsky, Y. M. Volin, D. V. Golovashkin. Optimization problem of complex system
under uncertainty. Computers & Chemical Engineering, 1998, 22 (7-8), 1007-1015.
[6] G. R. Bitran. Linear Multiple Objective Problems with Interval Coefficients. Management
Science, 1980, 26 (7), 694-706.
[7] H. Ishibuchi, H. Tanaka. Multiobjective programming in optimization of the interval ob-
jective function. European Journal of Operational Research, 1990, 48 (2), 219-225.
[8] L. Stefanini, B. Bede. Generalized Hukuhara differentiability of interval-valued functions
and interval differential equations. Nonlinear Analysis, 2008, 71 (3-4), 1311-1328.
[9] M. Ida. Multiple objective linear programming with interval coefficients and its all efficient
solutions. In Proceedings of the 35th IEEE Conference on Decision and Control, Kobe,
Japan, 13 December 1996.
[10] S. Chanas, D. Kuchta. Multiobjective programming in optimization of interval objective
functions − A generalized approach. European Journal of Operational Research, 1996, 94,
594-598.
[11] T. Q. Bao, B. S. Mordukhovich. Set-valued optimization in welfare economics. Advances
in Mathematical Economics, 2010, 13, 113-153.
[12] Y. Chalco-Cano, H. Román-Flores, M. D. Jiménez-Gamero. Generalized derivative and π-derivative for set-valued functions. Information Sciences, 2011, 181 (11), 2177-2188.
[13] Y. Chalco-Cano, W. A. Lodwick, A. Rufian-Lizana. Optimality conditions of type KKT
for optimization problem with interval-valued objective function via generalized derivative.
Fuzzy Optim Decis Making, 2013, 12 (3): 305-322.