Numerical Functional Analysis and Optimization, 32(7):806–820, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 0163-0563 print/1532-2467 online
DOI: 10.1080/01630563.2011.577262

A SMOOTHING OBJECTIVE PENALTY FUNCTION ALGORITHM FOR INEQUALITY CONSTRAINED OPTIMIZATION PROBLEMS

Zhiqing Meng,1 Chuangyin Dang,2 Min Jiang,1 and Rui Shen1


1 College of Business and Administration, Zhejiang University of Technology, Zhejiang, P. R. China
2 Department of Manufacturing Engineering & Engineering Management, City University of Hong Kong, Kowloon, Hong Kong

In this article, a smoothing objective penalty function for inequality constrained optimization problems is presented. It is proved that this type of smoothing objective penalty function has good properties for solving inequality constrained optimization problems. Moreover, based on the penalty function, an algorithm for solving such problems is presented, and its convergence is proved under some conditions. Two numerical experiments show that the proposed algorithm obtains satisfactory approximate optimal solutions.

Keywords Algorithm; Constrained optimization problems; Exact penalty function; Objective parameter; Objective penalty function; Smoothing.

AMS Subject Classification Primary 65K05; Secondary 90C30.

1. INTRODUCTION
The problem considered in this article is the following inequality constrained optimization problem:

$$(P)\quad \min f_0(x) \quad \text{s.t.}\ f_i(x) \le 0,\ i \in I = \{1, 2, \ldots, m\},$$

where $f_i : \mathbb{R}^n \to \mathbb{R}$, $i \in I_0 = \{0, 1, 2, \ldots, m\}$, are differentiable functions. Its feasible set is denoted by $X = \{x \in \mathbb{R}^n \mid f_i(x) \le 0,\ i \in I\}$.

Received 16 September 2010; Revised 29 November 2010; Accepted 29 March 2011.


Address correspondence to Zhiqing Meng, College of Business and Administration, Zhejiang
University of Technology, Hangzhou, Zhejiang 310023, P. R. China; E-mail: mengzhiqing@zjut.edu.cn


The penalty function method provides an important approach to solving (P). Its main idea is to transform (P) into a sequence of unconstrained optimization problems which are easier to solve. It is well known that a penalty function for (P) is defined as

$$F(x, \rho) = f_0(x) + \rho \sum_{i \in I} \max\{f_i(x), 0\}^2,$$

and the corresponding penalty optimization problem for (P) is defined as

$$(P_\rho)\quad \min F(x, \rho) \quad \text{s.t.}\ x \in \mathbb{R}^n.$$

The penalty function $F(x, \rho)$ is smooth, but not exact. A penalty function $F(x, \rho)$ is exact if there is some $\rho^*$ such that an optimal solution to $(P_\rho)$ is also an optimal solution to (P) for all $\rho \ge \rho^*$. In 1967, Zangwill [14] presented the following penalty function:

$$F_1(x, \rho) = f_0(x) + \rho \sum_{i \in I} \max\{f_i(x), 0\},$$

with the corresponding penalty optimization problem of (P)

$$(EP_\rho)\quad \min F_1(x, \rho) \quad \text{s.t.}\ x \in \mathbb{R}^n.$$

The penalty function $F_1(x, \rho)$ is exact under certain assumptions, but it is not smooth.
Exact penalty functions have attracted many researchers. For example, Han and Mangasarian [5] presented an exact penalty function for nonlinear programming. Rosenberg [12] gave a globally convergent algorithm for convex programming based on an exact penalty function. Many researchers have observed that the existing exact penalty function algorithms need to increase the penalty parameter in order to find a better solution, and that the penalty functions are not differentiable [5, 11, 12, 14, 15]. Hence, in order to use efficient algorithms such as Newton's method, it is necessary and important to smooth exact penalty functions for solving constrained optimization problems. Zenios et al. [15] discussed a smooth penalty function algorithm for network-structured problems. Pinar and Zenios [11] presented a smooth exact penalty function for convex constrained optimization problems, in which all the objective functions and the constraint functions were convex, and the smoothing penalty function was first-order differentiable. Yang et al. [13] studied a kind of smoothing nonlinear penalty function for constrained optimization. Meng et al. [8] also proposed a smoothing of the square-root exact penalty function for inequality constrained optimization.

In fact, most exact penalty functions are not smooth, so smoothing is necessary and important. In order to overcome the above drawbacks, many researchers have sought novel penalty functions. One kind of penalty function method with an objective penalty parameter has been discussed in [1–4, 6, 7, 10], where the penalty function is defined as

$$\phi(x, M) = (f_0(x) - M)^p + \sum_{i \in I} \max\{f_i(x), 0\}^p,$$

where $p > 0$. Suppose $x^*$ is an optimal solution and $f^*$ is the optimal value of the objective function; then a sequential penalty function method can be envisaged, in which a convergent sequence $M_k \to f^*$ is generated so that the minimizers $x(M_k) \to x^*$. Morrison [10] considered the problem $\min\{f(x) \mid g(x) = 0\}$ and defined a penalty function problem: $\min (f(x) - M)^2 + |g(x)|^2$. Without convexity or continuity assumptions, a sequence of problems was constructed by choosing an appropriate convergent sequence $\{M_k\}$. Fletcher [3, 4] discussed a similar type of $\phi(x, M)$, and Burke [1] considered a more general type. Fiacco and McCormick [2] gave a general introduction to sequential unconstrained minimization techniques. Mauricio and Maculan [6] discussed a Boolean penalty method for zero-one nonlinear programming and defined another type of penalty function:

$$H(x, M) = \max\{f_0(x) - M, f_1(x), \ldots, f_m(x)\}.$$

Meng et al. [7] also studied an objective penalty function method with

$$F(x, M) = (f_0(x) - M)^2 + \sum_{i \in I} \max\{f_i(x), 0\}^p,$$

which is a good smooth penalty function for $p > 1$.


In [9], a new general objective penalty function was presented and proved to have good prospects for finding a global approximate solution to constrained optimization problems:

$$F(x, M) = Q(f_0(x) - M) + \sum_{i \in I} P(f_i(x)),$$

where the objective parameter $M \in \mathbb{R}$, and the functions $Q : \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ and $P : \mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ satisfy

$$Q(t) > 0 \text{ for all } t \in \mathbb{R}\setminus\{0\}, \qquad Q(0) = 0, \qquad Q(t_1) < Q(t_2) \text{ for } 0 \le t_1 < t_2,$$

and

$$P(t) = 0 \text{ if and only if } t \le 0, \qquad P(t) > 0 \text{ if and only if } t > 0.$$

In [9], the case in which the objective penalty function $F(x, M)$ is differentiable, i.e., when $Q(t)$, $P(t)$, and $f_i(x)$ $(i \in I_0)$ are all differentiable, was studied. In this article, we study the other case, in which $F(x, M)$ is not differentiable because the function $P(t) = \rho \max\{t, 0\}^p$ $(\rho > 0,\ 0 < p \le 1)$ is not differentiable. In [8], a smoothing of the similar exact penalty term $\rho \max\{t, 0\}^p$ $(\rho > 0,\ 0 < p \le 1)$ with better convergence was presented. This article discusses smoothing the objective penalty function $F(x, M)$.
The remainder of this article is organized as follows. Section 2 gives some theorems for the smoothing penalty function. An algorithm for solving the original problem (P), with global convergence and without any convexity conditions, is presented in Section 3. Finally, two numerical examples are given, showing that the algorithm needs only a few iterations to obtain a global approximate solution to (P).

2. A SMOOTHING OBJECTIVE PENALTY FUNCTION


Let $p^\sigma : \mathbb{R}^1 \to \mathbb{R}^1$ be defined by

$$p^\sigma(t) = \begin{cases} 0, & \text{if } t \le 0, \\ t^\sigma, & \text{if } t \ge 0, \end{cases}$$

where $0 < \sigma \le 1$. Then $p^\sigma(t)$ is not $C^1$ on $\mathbb{R}^1$ for $0 < \sigma \le 1$. The function $p^\sigma(t)$ is useful in defining exact penalty functions for nonlinear programming [13]. In order to smooth the function $p^\sigma(t)$, we define $p_\epsilon : \mathbb{R}^1 \to \mathbb{R}^1$ by

$$p_\epsilon(t) = \begin{cases} 0, & \text{if } t \le 0, \\[2pt] \dfrac{1}{a}\,\epsilon^{\sigma-a} t^a, & \text{if } 0 \le t \le \epsilon, \\[2pt] t^\sigma - \Big(1 - \dfrac{1}{a}\Big)\epsilon^\sigma, & \text{if } t \ge \epsilon, \end{cases}$$

where $0 < \sigma \le 1$ and $a > 1$. It is clear that $\lim_{\epsilon \to 0} p_\epsilon(t) = p^\sigma(t)$ and that $p_\epsilon(t)$ is differentiable at each $t$.
In this article, it is assumed that $a > 1$ and $0 < \sigma \le 1$, and that $f_i : \mathbb{R}^n \to \mathbb{R}^1$, $i \in \{0\} \cup I$, are $C^{1,1}$, where $I = \{1, 2, \ldots, m\}$. It is then easy to see that $p_\epsilon(f_i(x))$ $(i \in \{0\} \cup I)$ is $C^1$.
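To make the two functions concrete, here is a minimal numerical sketch (not code from the paper; it assumes the reconstruction of $p_\epsilon$ above). For $t \ge \epsilon$, the gap $p^\sigma(t) - p_\epsilon(t)$ equals exactly $(1 - 1/a)\epsilon^\sigma$, and the gap vanishes everywhere as $\epsilon \to 0$:

```python
# Minimal sketch of p^sigma and its smoothing p_eps, assuming the
# reconstruction above; not code from the paper.
import numpy as np

def p_sigma(t, sigma=0.5):
    """Nonsmooth term p^sigma(t) = max(t, 0)**sigma."""
    return np.maximum(t, 0.0) ** sigma

def p_eps(t, eps, sigma=0.5, a=4.0):
    """Smoothed approximation of p_sigma (0 < sigma <= 1, a > 1)."""
    t = np.asarray(t, dtype=float)
    mid = (1.0 / a) * eps ** (sigma - a) * np.clip(t, 0.0, eps) ** a
    top = np.maximum(t, eps) ** sigma - (1.0 - 1.0 / a) * eps ** sigma
    return np.where(t >= eps, top, mid)   # mid already handles t <= 0

sigma, a = 0.5, 4.0
ts = np.linspace(-1.0, 2.0, 200001)
for eps in (1e-1, 1e-2, 1e-3):
    gap = p_sigma(ts, sigma) - p_eps(ts, eps, sigma, a)
    # gap is nonnegative, equals (1 - 1/a) * eps**sigma for t >= eps,
    # and shrinks with eps, matching lim p_eps = p_sigma
    assert gap.min() >= -1e-12
    assert abs(gap[-1] - (1.0 - 1.0 / a) * eps ** sigma) < 1e-12
    print(eps, gap.max())
```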

Consider the following optimization problem:

$$(P)\quad \min f_0(x) \quad \text{s.t.}\ x \in X,$$

where $X = \{x \in Y \mid f_i(x) \le 0,\ i = 1, 2, \ldots, m\}$ $(Y \subset \mathbb{R}^n)$, with the corresponding objective penalty functions for (P):

$$F^\sigma(x, M) = Q(f_0(x) - M) + \rho \sum_{i \in I} p^\sigma(f_i(x)),$$

$$F(x, M, \epsilon) = Q(f_0(x) - M) + \rho \sum_{i \in I} p_\epsilon(f_i(x)),$$

where $\rho > 0$ is given. Hence, the following two objective penalty problems can be defined:

$$(P(M))\quad \min F^\sigma(x, M) \quad \text{s.t.}\ x \in Y,$$

$$(P(M, \epsilon))\quad \min F(x, M, \epsilon) \quad \text{s.t.}\ x \in Y.$$

Since $\lim_{\epsilon \to 0} F(x, M, \epsilon) = F^\sigma(x, M)$ for all $x$, the relationship between $(P(M))$ and $(P(M, \epsilon))$ must be studied first.
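As an illustration, the following sketch assembles $F^\sigma(x, M)$ and $F(x, M, \epsilon)$ for a small two-constraint toy instance; the choices of $Q$, $\rho$, $f_0$, and the constraints are illustrative assumptions, not the paper's test problems:

```python
# Sketch: building F^sigma(x, M) and F(x, M, eps) from Q, p^sigma, p_eps.
# Q, rho, f0, and the constraints are illustrative choices only.
import numpy as np

def Q(t):                # Q(t) > 0 for t != 0, Q(0) = 0, increasing on t >= 0
    return t * t

def p_sigma(t, sigma=0.5):
    return max(t, 0.0) ** sigma

def p_eps(t, eps, sigma=0.5, a=4.0):
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return (1.0 / a) * eps ** (sigma - a) * t ** a
    return t ** sigma - (1.0 - 1.0 / a) * eps ** sigma

f0 = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2              # objective f_0
cons = [lambda x: x[0] - 1.0, lambda x: -x[1]]            # f_i(x) <= 0, i in I

def F_sig(x, M, rho=1.0):                                 # nonsmooth F^sigma(x, M)
    return Q(f0(x) - M) + rho * sum(p_sigma(c(x)) for c in cons)

def F_smooth(x, M, eps, rho=1.0):                         # smoothed F(x, M, eps)
    return Q(f0(x) - M) + rho * sum(p_eps(c(x), eps) for c in cons)

x = np.array([1.2, 0.3])                                  # violates the first constraint
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, F_sig(x, 0.0) - F_smooth(x, 0.0, eps))     # gap shrinks as eps -> 0
```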

Proposition 2.1. For any $x \in Y$ and $\epsilon > 0$, we have

$$0 \le F^\sigma(x, M) - F(x, M, \epsilon) \le \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m, \qquad (1)$$

where $\rho > 0$.

Proof. By the definition of $p_\epsilon(t)$, we have

$$0 \le p^\sigma(t) - p_\epsilon(t) \le \Big(1 - \frac{1}{a}\Big)\epsilon^\sigma.$$

As a result,

$$0 \le p^\sigma(f_i(x)) - p_\epsilon(f_i(x)) \le \Big(1 - \frac{1}{a}\Big)\epsilon^\sigma \quad \forall x \in Y,\ i = 1, 2, \ldots, m.$$

Adding up over all $i$, we obtain

$$0 \le \sum_{i \in I} p^\sigma(f_i(x)) - \sum_{i \in I} p_\epsilon(f_i(x)) \le \Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m.$$

Hence,

$$0 \le F^\sigma(x, M) - F(x, M, \epsilon) \le \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m. \qquad \square$$

Corollary 2.1. Let $\{\epsilon_j\} \to 0$ be a sequence of positive numbers and assume that $x_j$ is a solution to $\min_{x \in X} F(x, M, \epsilon_j)$ for some $M$. Let $\bar{x}$ be an accumulation point of the sequence $\{x_j\}$; then $\bar{x}$ is an optimal solution to $\min_{x \in X} F^\sigma(x, M)$.

Definition 2.1. A vector $x_\epsilon \in Y$ is $\epsilon$-feasible if

$$f_i(x_\epsilon) \le \epsilon, \quad \forall i \in I.$$
Theorem 2.1. Let $x^*$ be an optimal solution to $(P(M))$ and $\bar{x} \in Y$ be an optimal solution to $(P(M, \epsilon))$. Then

$$0 \le F^\sigma(x^*, M) - F(\bar{x}, M, \epsilon) \le \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m. \qquad (2)$$

Proof. From Proposition 2.1 we have

$$F^\sigma(x, M) \le F(x, M, \epsilon) + \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m \quad \forall x \in Y.$$

Consequently,

$$\inf_{x \in X} F^\sigma(x, M) \le \inf_{x \in X} F(x, M, \epsilon) + \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m,$$

which proves the right-hand inequality of (2). The left-hand inequality of (2) can be proved similarly to Proposition 2.1. $\square$

Theorem 2.2. Let $x^*$ be an optimal solution to $(P(M))$ and $\bar{x} \in Y$ be an optimal solution to $(P(M, \epsilon))$. Furthermore, let $x^*$ be feasible to (P) and $\bar{x}$ be $\epsilon$-feasible to (P). Then

$$0 \le Q(f_0(x^*) - M) - Q(f_0(\bar{x}) - M) \le \rho m \epsilon^\sigma. \qquad (3)$$

If $M < \min_{x \in X} f_0(x)$ and $M < f_0(\bar{x})$, then $f_0(\bar{x}) \le f_0(x^*)$ and $x^*$ is an optimal solution to (P).

Proof. Since $\bar{x}$ is $\epsilon$-feasible to (P), it follows that

$$\sum_{i \in I} p_\epsilon(f_i(\bar{x})) \le \frac{1}{a} m \epsilon^\sigma.$$

As $x^*$ is a feasible solution to (P), we have

$$\sum_{i \in I} p^\sigma(f_i(x^*)) = 0.$$

By Proposition 2.1, we get

$$0 \le \Big(Q(f_0(x^*) - M) + \rho \sum_{i \in I} p^\sigma(f_i(x^*))\Big) - \Big(Q(f_0(\bar{x}) - M) + \rho \sum_{i \in I} p_\epsilon(f_i(\bar{x}))\Big) \le \rho\Big(1 - \frac{1}{a}\Big)\epsilon^\sigma m,$$

which implies $0 \le Q(f_0(x^*) - M) - Q(f_0(\bar{x}) - M) \le \rho m \epsilon^\sigma$.
According to (3) and Theorem 2.2 in [9], it follows that $f_0(\bar{x}) \le f_0(x^*)$ and that $x^*$ is an optimal solution of (P). $\square$

Based on Theorem 2.2, an algorithm is developed herein to compute an optimal solution to (P) by solving the smoothing problem $(P(M, \epsilon))$ sequentially; it is called the smoothing objective penalty function algorithm (SOPFA).

3. A SMOOTHING OBJECTIVE PENALTY FUNCTION ALGORITHM
In this section, we consider the following nonlinear optimization problem:

$$(P(M, \epsilon))\quad \min F(x, M, \epsilon) \quad \text{s.t.}\ x \in Y,$$

where $Y \subset \mathbb{R}^n$ and the feasible set $X \subset Y$; in particular, when $Y = \mathbb{R}^n$, $(P(M, \epsilon))$ is an unconstrained optimization problem. In the SOPFA, it is necessary to find an optimal solution to $\min_{x \in Y} F(x, M_k, \epsilon_k)$, which is a difficult task. To avoid this, one may replace it with the stationarity condition

$$\nabla F(x, M_k, \epsilon_k) = 0$$

if $Q(\cdot)$ and $f_i(x)$ $(i \in I_0)$ are all differentiable. The SOPFA then proceeds as follows.

SOPFA.

Step 1: Choose $0 < \epsilon < 1$, $\rho > 0$, $x^0 \in X$, and $a_1, b_1$ satisfying $a_1 < \min_{x \in X} f_0(x) < b_1$. Let $k = 1$ and $M_1 = \frac{a_1 + b_1}{2}$. Choose a sequence $\epsilon_k > 0$ such that $\epsilon_k \to 0$. Go to step 2.
Step 2: Solve $\min_{x \in Y} F(x, M_k, \epsilon_k)$. Let $x^k$ be a point satisfying $\nabla F(x^k, M_k, \epsilon_k) = 0$.
Step 3: If $F^\sigma(x^k, M_k) > 0$, let $b_{k+1} = b_k$, $a_{k+1} = M_k$, $M_{k+1} = \frac{a_{k+1} + b_{k+1}}{2}$, and go to step 5. Otherwise, $F^\sigma(x^k, M_k) = 0$; go to step 4.
Step 4: Let $a_{k+1} = a_k$, $b_{k+1} = M_k$, $M_{k+1} = \frac{a_{k+1} + b_{k+1}}{2}$, and go to step 5.
Step 5: If $|b_{k+1} - a_{k+1}| < \epsilon$ and $f_i(x^k) \le \epsilon$, $i = 1, 2, \ldots, m$, stop: $x^k$ is an $\epsilon$-solution to (P). Otherwise, let $k := k + 1$ and go to step 2.
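The following is a compact sketch of this loop under the reconstruction above, assuming $Y = \mathbb{R}^n$. It is not the authors' code: SciPy's BFGS minimizer stands in for "find $x^k$ with $\nabla F(x^k, M_k, \epsilon_k) = 0$" in step 2, the exact test $F^\sigma(x^k, M_k) > 0$ of step 3 is replaced by a small numerical threshold, and $\epsilon_k = 1/(10k)$ is an arbitrary choice of sequence with $\epsilon_k \to 0$:

```python
# A sketch of the SOPFA, assuming Y = R^n; the smoothing sequence eps_k,
# the threshold for "F^sigma > 0", and the inner solver are assumptions.
import numpy as np
from scipy.optimize import minimize

def sopfa(f0, cons, x0, a1, b1, Q, rho=1.0, sigma=0.5, a=4.0,
          tol=1e-6, max_iter=60):
    def p_eps(t, eps):                      # smoothed penalty term
        if t <= 0.0:
            return 0.0
        if t <= eps:
            return (1.0 / a) * eps ** (sigma - a) * t ** a
        return t ** sigma - (1.0 - 1.0 / a) * eps ** sigma

    def p_sig(t):                           # nonsmooth penalty term
        return max(t, 0.0) ** sigma

    ak, bk = a1, b1                         # a1 < min f0 < b1 (step 1)
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        Mk = 0.5 * (ak + bk)
        eps_k = 1.0 / (10.0 * k)            # some sequence with eps_k -> 0
        F = lambda z: Q(f0(z) - Mk) + rho * sum(p_eps(c(z), eps_k) for c in cons)
        x = minimize(F, x, method="BFGS").x        # step 2: grad F approx 0
        F_sig = Q(f0(x) - Mk) + rho * sum(p_sig(c(x)) for c in cons)
        if F_sig > tol:                     # step 3: F^sigma(x^k, M_k) "> 0"
            ak = Mk                         # raise the lower bound
        else:                               # step 4: F^sigma(x^k, M_k) "= 0"
            bk = Mk                         # lower the upper bound
        if abs(bk - ak) < tol and all(c(x) <= tol for c in cons):
            break                           # step 5: eps-solution found
    return x, f0(x)
```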

Remark. In the SOPFA, it is assumed that we can always obtain $a_1 < \min_{x \in X} f_0(x)$. Let

$$S(L, f_0) = \{x^k \mid L \ge Q(f_0(x^k) - y_k),\ k = 1, 2, \ldots\},$$

which is called a Q-level set. $S(L, f_0)$ is said to be bounded if, for any given $L > 0$ and any convergent sequence $y_k \to y^*$, the set $S(L, f_0)$ is bounded.
If $\{x^k\}$ is a finite sequence (i.e., the SOPFA stops at the $\bar{k}$-th iteration), then $x^{\bar{k}}$ is an $\epsilon$-solution to (P) by step 5.
The convergence of the SOPFA is proved in the following theorem.

Theorem 3.1. Let $Y = \mathbb{R}^n$ or let $Y$ be an open set, and suppose that $Y$ is connected and compact. Let $\{x^k\}$ be an infinite sequence generated by the SOPFA. Suppose the Q-level set $S(L, f_0)$ and the sequence $\{F(x^k, M_k, \epsilon_k)\}$ are bounded.
Then a general conclusion can be reached: the sequence $\{x^k\}$ is bounded, and for any limit point $x^*$ of it there exist $\mu \in \mathbb{R}$ and $\lambda_i \ge 0$, $i = 1, 2, \ldots, m$, such that

$$\mu \nabla f_0(x^*) + \sum_{i \in I} \lambda_i \nabla f_i(x^*) = 0.$$

A more specific conclusion, namely that $\{x^k\}$ is bounded and any limit point of it is an optimal solution to (P), can be reached under the conditions that

(i) $a_k < M_*$ for all $k = 1, \ldots, \bar{k}$,
(ii) $x^k$ is (up to the perturbation given by $\epsilon_k$) a global solution of the auxiliary problem $(P(M, \epsilon_k))$ [with $M := M_k$] for all $k \ge \bar{k} + 1$.

Proof. It is clear that the sequence $\{a_k\}$ increases and $\{b_k\}$ decreases, with

$$a_k < M_k = \frac{a_k + b_k}{2} < b_k, \quad k = 1, 2, \ldots \qquad (4)$$

and

$$b_{k+1} - a_{k+1} = \frac{b_k - a_k}{2}, \quad k = 1, 2, \ldots \qquad (5)$$

Thus, both $\{a_k\}$ and $\{b_k\}$ converge. Let $a_k \to a^*$ and $b_k \to b^*$. By (5), we have $a^* = b^*$. Therefore, $\{M_k\}$ also converges to $a^*$.
Since $\{F(x^k, M_k, \epsilon_k)\}$ is bounded and $M_k$ converges to $a^*$ as $k \to +\infty$, there must be some $L > 0$ such that

$$L > F(x^k, M_k, \epsilon_k) \ge Q(f_0(x^k) - M_k), \quad k = 1, 2, \ldots,$$

so every $x^k$ lies in the Q-level set $S(L, f_0)$; by its assumed boundedness, the sequence $\{x^k\}$ is bounded. For $x \in \mathbb{R}^n$, we define

$$I^0(x) = \{i \mid f_i(x) \le 0,\ i \in I\},$$
$$I_+^\epsilon(x) = \{i \mid f_i(x) \ge \epsilon,\ i \in I\},$$
$$I_-^\epsilon(x) = \{i \mid 0 < f_i(x) < \epsilon,\ i \in I\}.$$

Without loss of generality, suppose $x^k \to x^*$. By the assumption, we have

$$\nabla F(x^k, M_k, \epsilon_k) = Q'(f_0(x^k) - M_k)\nabla f_0(x^k) + \rho \sum_{i \in I} p'_{\epsilon_k}(f_i(x^k))\nabla f_i(x^k) = 0, \quad k = 1, 2, \ldots,$$

where $p'_{\epsilon_k}(f_i(x^k)) \ge 0$; that is,

$$Q'(f_0(x^k) - M_k)\nabla f_0(x^k) + \rho \sum_{i \in I_-^{\epsilon_k}(x^k)} \epsilon_k^{\sigma-a} f_i(x^k)^{a-1}\nabla f_i(x^k) + \rho \sum_{i \in I_+^{\epsilon_k}(x^k)} \sigma f_i(x^k)^{\sigma-1}\nabla f_i(x^k) = 0. \qquad (6)$$

For $k = 1, 2, \ldots$, let

$$\ell_k = 1 + \rho \sum_{i \in I_-^{\epsilon_k}(x^k)} \epsilon_k^{\sigma-a} f_i(x^k)^{a-1} + \rho \sum_{i \in I_+^{\epsilon_k}(x^k)} \sigma f_i(x^k)^{\sigma-1}.$$

Then $\ell_k > 0$. From (6), we have

$$\frac{1}{\ell_k} Q'(f_0(x^k) - M_k)\nabla f_0(x^k) + \sum_{i \in I_-^{\epsilon_k}(x^k)} \frac{\rho \epsilon_k^{\sigma-a} f_i(x^k)^{a-1}}{\ell_k}\nabla f_i(x^k) + \sum_{i \in I_+^{\epsilon_k}(x^k)} \frac{\rho \sigma f_i(x^k)^{\sigma-1}}{\ell_k}\nabla f_i(x^k) = 0. \qquad (7)$$
Let

$$\lambda_k = \frac{1}{\ell_k}, \qquad \mu_k = \frac{1}{\ell_k} Q'(f_0(x^k) - M_k),$$
$$\lambda_i^k = \frac{\rho \epsilon_k^{\sigma-a} f_i(x^k)^{a-1}}{\ell_k}, \quad i \in I_-^{\epsilon_k}(x^k),$$
$$\lambda_i^k = \frac{\rho \sigma f_i(x^k)^{\sigma-1}}{\ell_k}, \quad i \in I_+^{\epsilon_k}(x^k),$$
$$\lambda_i^k = 0, \quad i \in I \setminus \big(I_+^{\epsilon_k}(x^k) \cup I_-^{\epsilon_k}(x^k)\big).$$

Then,

$$\lambda_k + \sum_{i \in I} \lambda_i^k = 1, \quad \forall k, \qquad \lambda_i^k \ge 0,\ i \in I,\ \forall k. \qquad (8)$$

Clearly, as $k \to \infty$, we have $\lambda_k \to \lambda > 0$ and $\lambda_i^k \to \lambda_i \ge 0$, $\forall i \in I$. Since $Q(\cdot)$ and $f_i(x)$ $(i \in I)$ are all continuously differentiable, $Q'(f_0(x^k) - M_k) \to Q'(f_0(x^*) - a^*)$ and $\mu_k \to \mu$. By (7) and (8), as $k \to +\infty$, we obtain

$$\mu \nabla f_0(x^*) + \sum_{i \in I} \lambda_i \nabla f_i(x^*) = 0.$$

Next, the more specific conclusion will be proved. Starting from the above conclusion that the sequence $\{x^k\}$ is bounded, let $M_* = \min_{x \in X} f_0(x)$. By step 3 and Theorem 2.3 (ii) in [9], we get $a_k \le M_*$, $k \ge \bar{k} + 1$. By step 4 and Theorem 2.3 (i) in [9], we get $M_* \le b_k$, $k \ge \bar{k} + 1$. Thus, $a_k \le M_* \le b_k$. Letting $k \to +\infty$, we obtain $a^* = M_*$. Let $y^*$ be an optimal solution to (P). Then $M_* = f_0(y^*)$. Note that

$$F(x^k, M_k, \epsilon_k) \le F(y^*, M_k, \epsilon_k) = Q(f_0(y^*) - M_k).$$

By letting $k \to +\infty$ in the above inequality, we obtain

$$F(x^*, M_*, 0) \le 0,$$

which implies $M_* = f_0(x^*)$. Therefore, $x^*$ is an optimal solution to (P). $\square$

Remark. The appropriate choice of $Q$ in the SOPFA is very important; this choice distinguishes it from other penalty algorithms.

4. NUMERICAL EXAMPLES
The feasible set $X$ of the problem (P) is often unbounded. If an optimal solution to the problem

$$(P')\quad \min f(x) \quad \text{s.t.}\ x \in X \cap Y$$

is an optimal solution to the problem (P), where $Y \subset \mathbb{R}^n$ is bounded, then in the SOPFA we only need to solve

$$(P(M, \epsilon))\quad \min F(x, M, \epsilon) \quad \text{s.t.}\ x \in Y.$$

The SOPFA provides a good method for solving (P). In the following examples, we choose

$$Q(t) = c^{t^2} - 1,$$

with $c > 1$ and $\epsilon \in (10^{-7}, 10^{-1})$. Letting $\epsilon = 10^{-6}$, an $\epsilon$-solution to (P) is expected from the SOPFA, implemented in Matlab 6.5.
The SOPFA is used to solve the following problems, with a starting point $x^0 \in \mathbb{R}^n$ and $a_1 < \min_{x \in X} f_0(x) < b_1$.
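For reference, a small sketch of this choice of $Q$ follows; `make_Q` is a hypothetical helper, not code from the paper, and `expm1` is used because $c$ is very close to 1:

```python
# Q(t) = c**(t**2) - 1 with c > 1, as used in the experiments below;
# expm1 keeps precision when c - 1 is tiny (e.g., c = 1.000001).
import math

def make_Q(c):
    assert c > 1.0
    log_c = math.log(c)
    return lambda t: math.expm1(t * t * log_c)   # equals c**(t**2) - 1

Q = make_Q(1.000001)
print(Q(0.0), Q(10.0), Q(-10.0))   # 0 at t = 0, positive elsewhere, even in t
```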
In order to compare the convergence of the SOPFA with that of algorithms based on different penalty functions, we adopt the exact penalty function

$$F_1(x, \rho) = f_0(x) + \rho \sum_{i=1}^m \max\{f_i(x), 0\}, \qquad (9)$$

and the penalty function

$$F_2(x, \rho) = f_0(x) + \rho \sum_{i=1}^m \max\{f_i(x), 0\}^2, \qquad (10)$$

to define the following algorithms I and II, respectively.

Algorithm I.

Step 1: Choose $x^0$, $\epsilon > 0$, $\rho_0 > 0$, and $N > 1$. Let $k = 1$.
Step 2: Using $x^{k-1}$ as the starting point, solve the problem $\min_{x \in X} F_1(x, \rho_k) = f_0(x) + \rho_k \sum_{i=1}^m \max\{f_i(x), 0\}$. Let $x^k$ be the resulting solution.
Step 3: If $x^k$ is $\epsilon$-feasible to (P), then stop with an approximate solution $x^k$ of (P); otherwise, let $\rho_{k+1} = N\rho_k$, set $k := k + 1$, and go to step 2.

Algorithm II.

Step 1: Choose $x^0$, $\epsilon > 0$, $\rho_0 > 0$, and $N > 1$. Let $k = 1$.
Step 2: Using $x^{k-1}$ as the starting point, solve the problem $\min_{x \in X} F_2(x, \rho_k) = f_0(x) + \rho_k \sum_{i=1}^m \max\{f_i(x), 0\}^2$. Let $x^k$ be the resulting solution.
Step 3: If $x^k$ is $\epsilon$-feasible to (P), then stop with an approximate solution $x^k$ of (P); otherwise, let $\rho_{k+1} = N\rho_k$, set $k := k + 1$, and go to step 2.
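Algorithms I and II share one loop and differ only in the exponent on the constraint violation; a sketch (with SciPy's Nelder–Mead standing in for the unconstrained subproblem solver, since $F_1$ is nonsmooth) might look as follows:

```python
# Sketch of algorithms I (q = 1, penalty F1) and II (q = 2, penalty F2);
# the inner solver and its settings are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def classical_penalty(f0, cons, x0, q=1, rho1=1.0, N=10.0,
                      tol=1e-6, max_iter=30):
    x, rho = np.asarray(x0, dtype=float), rho1
    for _ in range(max_iter):
        Fq = lambda z: f0(z) + rho * sum(max(c(z), 0.0) ** q for c in cons)
        x = minimize(Fq, x, method="Nelder-Mead").x   # step 2, warm-started at x^{k-1}
        e = sum(max(c(x), 0.0) for c in cons)         # constraint error e(x^k)
        if e < tol:
            return x, f0(x)                           # step 3: eps-feasible, stop
        rho *= N                                      # otherwise enlarge rho and repeat
    return x, f0(x)
```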

The constraint error $e(x^k)$ at the $k$-th step is defined as

$$e(x^k) = \sum_{i=1}^m \max\{f_i(x^k), 0\}.$$

It is clear that $x^k$ is $\epsilon$-feasible to (P) when $e(x^k) < \epsilon$. In the following examples we apply algorithms I and II, programmed in Matlab, and compare the convergence of the SOPFA with that of the modified OPFA in [9].

Example 4.1. Consider the following problem:

$$(P3.1)\quad \min f(x) = x_1^2 + x_2^2 + 2x_3^2 + x_4^2 - 5x_1 - 5x_2 - 21x_3 + 7x_4$$
$$\text{s.t.}\ g_1(x) = 2x_1^2 + x_2^2 + x_3^2 + 2x_1 + x_2 + x_4 - 5 \le 0,$$
$$g_2(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_1 - x_2 + x_3 - x_4 - 8 \le 0,$$
$$g_3(x) = x_1^2 + 2x_2^2 + x_3^2 + 2x_4^2 - x_1 - x_4 - 10 \le 0.$$

The nonlinear penalty function of (P3.1) is defined as

$$F(x, M, \epsilon) = 1.000001^{(f(x) - M)^2} - 1 + \rho\big(p_\epsilon(g_1(x)) + p_\epsilon(g_2(x)) + p_\epsilon(g_3(x))\big),$$

where $\rho = 1$, $\sigma = 0.5$, $a = 4$. Let $Y = \{(x_1, x_2, x_3, x_4) \mid -100 \le x_i \le 100,\ i = 1, 2, 3, 4\}$ and let $x^0 = (0, 0, 0, 0) \in X$, $a_1 = -200$, $b_1 = 0$, $M_1 = -100$. Then the SOPFA gives the results in Table 1.
Table 1 shows that an approximate $\epsilon$-feasible solution is obtained at the second iteration, namely $x^2 = (0.170056, 0.841066, 2.004907, -0.968785)$, with objective value $f(x^2) = -44.225989$ $(\epsilon = 0.0000001)$. It is easy to check that $x^2$ is a feasible solution to (P3.1).
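For reference, here is how (P3.1) could be wired into the `sopfa` and `make_Q` sketches given earlier (they are assumed to be in scope; the parameters follow the text: $\rho = 1$, $\sigma = 0.5$, $a = 4$, $c = 1.000001$, $a_1 = -200$, $b_1 = 0$, $x^0 = 0$):

```python
# Running the SOPFA sketch on (P3.1); `sopfa` and `make_Q` are the sketch
# functions defined in Sections 3 and 4 above, not code from the paper.
import numpy as np

f = lambda x: (x[0]**2 + x[1]**2 + 2*x[2]**2 + x[3]**2
               - 5*x[0] - 5*x[1] - 21*x[2] + 7*x[3])
g = [lambda x: 2*x[0]**2 + x[1]**2 + x[2]**2 + 2*x[0] + x[1] + x[3] - 5,
     lambda x: (x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2
                + x[0] - x[1] + x[2] - x[3] - 8),
     lambda x: x[0]**2 + 2*x[1]**2 + x[2]**2 + 2*x[3]**2 - x[0] - x[3] - 10]

x_best, f_best = sopfa(f, g, x0=np.zeros(4), a1=-200.0, b1=0.0,
                       Q=make_Q(1.000001), rho=1.0, sigma=0.5, a=4.0)
print(x_best, f_best)   # expected near (0.17, 0.84, 2.00, -0.97), f about -44.2
```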
If the nonlinear penalty function of (P3.1) is instead defined as

$$F(x, M, \epsilon) = 1.000001^{(f(x) - M)^2} - 1 + \rho \sum_{i \in I} p_\epsilon(g_i(x)),$$

TABLE 1 Numerical results of the SOPFA with 2 iterations

k   g1(x^k)     g2(x^k)     g3(x^k)     e(x^k)     x^k                                         f(x^k)       M_k
1   0.000085    0.000200    −1.886545   0.000294   (0.170286, 0.841358, 2.005303, −0.968966)   −44.234088   −100.000000
2   −0.002726   −0.002810   −1.889745   0.000000   (0.170056, 0.841066, 2.004907, −0.968785)   −44.225989   −50.000000

Note: k is the iteration number in the SOPFA. x^k is the solution at the k-th iteration in the SOPFA. g_i(x^k) (i = 1, 2, 3) is the constraint value at x^k. e(x^k) is the constraint error at x^k, used to judge whether or not x^k is a feasible solution. f(x^k) is the objective value at x^k. M_k is the objective penalty parameter.
TABLE 2 Numerical results of the SOPFA with 2 iterations

k   g1(x^k)     g2(x^k)     g3(x^k)     e(x^k)     x^k                                         f(x^k)       M_k
1   0.000526    0.000316    −2.384617   0.000174   (0.268081, 0.730804, 2.001145, −0.949561)   −44.148631   −100.000000
2   −0.000101   −0.004258   −1.641114   0.000000   (0.138865, 0.891228, 1.995794, −0.985107)   −44.207498   −50.000000

where $\rho = 1$, $\sigma = 0.25$, $a = 6$, then the SOPFA yields the better results shown in Table 2.
Table 3 shows the results of the modified OPFA in [9] and of algorithms I and II. From Table 3, it is found that the convergence of the SOPFA is almost the same as that of the modified OPFA, but the penalty parameter $\rho$ may be smaller in the SOPFA. The results in Table 3 also show that the exact penalty function $F_1(x, \rho)$ does not converge stably when the penalty parameter $\rho$ becomes large, and that the convergence of algorithm II is very slow.

TABLE 3 Numerical results of the modified OPFA in [9] and algorithms I and II

Penalty function   k   ρ_k   e(x^k)     f(x^k)       x^k
Modified OPFA      1   100   0.000747   −44.235044   (0.170866, 0.834766, 2.008375, −0.965080)
                   2   100   0.000000   −44.224200   (0.170445, 0.834404, 2.007750, −0.965163)
Algorithm I        1   1     3.088841   −48.675003   (0.271487, 0.665847, 2.284397, −1.202172)
F1(x, ρ)           2   10    0.067519   −43.965729   (0.224944, 0.681818, 1.986078, −1.025888)
                   3   100   0.000000   −42.873678   (0.255930, 0.821528, 1.925370, −0.846347)
Algorithm II       1   1     1.277340   −46.205029   (0.202688, 0.832115, 2.108992, −1.075412)
F2(x, ρ)           2   10    0.135699   −44.455547   (0.171664, 0.835793, 2.019720, −0.978234)
                   3   100   0.013660   −44.256315   (0.169746, 0.835565, 2.009759, −0.966241)

Note: k is the iteration number. ρ_k is the constraint penalty parameter at the k-th iteration. e(x^k) is the constraint error at x^k, used to judge whether or not x^k is a feasible solution. x^k is the solution at the k-th iteration. f(x^k) is the objective value at x^k.

TABLE 4 Numerical results of the SOPFA, the modified OPFA in [9], and algorithms I and II

Penalty function   k   ρ_k   e(x^k)     f(x^k)       x^k
SOPFA              1   10    0.000006   944.215759   (2.500000, 4.223369, 0.955589)
                   2   10    0.000012   944.215768   (2.500000, 4.223370, 0.955589)
Modified OPFA      1   10    0.014463   944.186404   (2.501241, 4.222113, 0.964877)
                   2   10    0.000345   944.214915   (2.500033, 4.221319, 0.964693)
Algorithm I        1   10    0.000289   944.241903   (2.500013, 4.251487, 0.821586)
F1(x, ρ)           2   100   0.000007   944.240828   (2.499999, 4.250654, 0.825795)
Algorithm II       1   10    0.114892   943.980301   (2.510162, 4.227377, 0.967794)
F2(x, ρ)           2   100   0.011481   944.192098   (2.501018, 4.221964, 0.964759)

Example 4.2. Consider the following problem:

$$(P3.3)\quad \min f(x) = 1000 - x_1^2 - 2x_2^2 - x_3^2 - x_1 x_2 - x_1 x_3$$
$$\text{s.t.}\ g_1(x) = x_1^2 + x_2^2 + x_3^2 - 25 = 0,$$
$$g_2(x) = (x_1 - 5)^2 + x_2^2 + x_3^2 - 25 = 0,$$
$$g_3(x) = (x_1 - 5)^2 + (x_2 - 5)^2 + (x_3 - 5)^2 - 25 \le 0.$$

Let $Y = \{(x_1, x_2, x_3) \mid 0 \le x_i \le 100,\ i = 1, 2, 3\}$. We have the nonlinear penalty function

$$F(x, M, \epsilon) = 1.000000001^{(f(x) - M)^2} - 1 + \rho\big(p_\epsilon(g_1(x)) + p_\epsilon(-g_1(x)) + p_\epsilon(g_2(x)) + p_\epsilon(-g_2(x)) + p_\epsilon(g_3(x))\big),$$

where $\rho = 10$, $\sigma = 0.5$, $a = 4$. Let $x^0 = (2, 4, 0) \in X$, $a_1 = -1200$, $b_1 = 956$, $M_1 = -122.0$; the results of the SOPFA are given in Table 4. In order to compare their convergence, the results of the modified OPFA in [9] and of algorithms I and II are given in Table 4 as well.
As shown in Table 4, under the same parameter $\rho$ the SOPFA obtains a more accurate approximate solution than the modified OPFA; the SOPFA obtains an approximate solution within fewer iterations than algorithm I; and the SOPFA obtains a better approximate solution than algorithm II. In [9], it was shown that the modified OPFA converges better than algorithms I and II.
The above numerical experiments show that the results obtained by the SOPFA are better than, or at least the same as, those of the modified OPFA and algorithms I and II. Therefore, there exists a function $Q(t)$ for which the SOPFA converges quickly to a global approximate solution, which means that the SOPFA can be efficient for solving constrained optimization problems.

5. CONCLUSIONS
This article presents a smoothing objective penalty function method,
with some error estimations of the smoothing objective penalty function
proved. Based on the penalty function, we develop a SOPFA to solve
constrained optimization problems and prove its global convergence.
Numerical experiments show that the SOPFA has a good convergence for
a global approximate solution.

ACKNOWLEDGMENTS
This research was supported by the National Natural Science Foundation of China under grant 10971193 and the Natural Science Foundation of Zhejiang Province under grant Y6090063.
The authors would like to express their gratitude to the anonymous referees, whose detailed comments and remarks helped us improve the presentation of this article considerably.

REFERENCES
1. J.V. Burke (1991). An exact penalization viewpoint of constrained optimization. SIAM J. Control
Optimiz. 29:968–998.
2. A.V. Fiacco and G.P. McCormick (1968). Nonlinear Programming: Sequential Unconstrained
Minimization Techniques. Wiley, New York.
3. R. Fletcher (1981). Practical Methods of Optimization. Wiley-Interscience, New York.
4. R. Fletcher (1983). Penalty functions. In: Mathematical Programming: The State of the Art.
(A. Bachem, M. Grotschel, and B. Korte, eds.). Springer, Berlin, pp. 87–114.
5. S.P. Han and O.L. Mangasarian (1979). Exact penalty functions in nonlinear programming. Math. Prog. 17:251–269.
6. D. Mauricio and N. Maculan (2000). A boolean penalty method for zero-one nonlinear
programming. J. Global Optimiz. 16:343–354.
7. Z.Q. Meng, Q.Y. Hu, and C.Y. Dang (2004). An objective penalty function method for nonlinear
programming. Appl. Math. Lett. 17:683–689.
8. Z.Q. Meng, C.Y. Dang, and X.Q. Yang (2006). On the smoothing of the square-root exact
penalty function for inequality constrained optimization. Compu. Optimiz. Appls. 35:375–398.
9. Z.Q. Meng, Q.Y. Hu, and C.Y. Dang (2009). A penalty function algorithm with objective
parameters for nonlinear mathematical programming. J. Indus. Mgmt. Optimiz. 5:585–601.
10. D.D. Morrison (1968). Optimization by least squares. SIAM J. Numer. Anal. 5:83–88.
11. M.C. Pinar and S.A. Zenios (1994). On smoothing exact penalty functions for convex constrained optimization. SIAM J. Optimiz. 4:486–511.
12. E. Rosenberg (1981). Globally convergent algorithms for convex programming. Math. Oper. Res.
6:437–443.
13. X.Q. Yang, Z.Q. Meng, X.X. Huang, and G.T.Y. Pong (2003). Smoothing nonlinear penalty
functions for constrained optimization. Numer. Funct. Anal. Optimiz. 24:351–364.
14. W.I. Zangwill (1967). Nonlinear programming via penalty function. Mgmt. Sci. 13:334–358.
15. S.A. Zenios, M.C. Pinar, and R.S. Dembo (1993). A smooth penalty function algorithm for
network-structured problems. Eur. J. Oper. Res. 64:258–277.
