An Approach for a Continuous Method for the General Convex Programming Problem
ABSTRACT
In this paper, we introduce a new continuous method for the general convex problem (1). Our method overcomes all the shortfalls of the previous methods, yet maintains strong convergence. As pointed out in [2], the projection model (3) is difficult or even impossible to implement for a general convex constraint; therefore, our approach first transforms problem (1) into a monotone variational inequality problem. Then, a continuous method is introduced for this variational inequality problem. Our continuous model differs from the one in [3] in two aspects. First, we employ a merit function in our continuous model; the structure of our model is similar to the one in [23, 9]. Second, the bound for the scaling parameter (called \beta in [3]) is constant in [3], but it is variable in our continuous model; in particular, an ODE for this variable as a function of time is established.
1. INTRODUCTION

The problem under consideration is

min_{x \in R^n} f(x)    (1a)

such that

g(x) \le 0, \quad g: R^n \to R^m,    (1b)
where f(x) and g(x) are convex functions with continuous first-order derivatives. Obviously, for the convex problem (1), every local solution is also a global solution. For problem (1), we define
\Omega_0 = \{ x \in R^n \mid g(x) \le 0 \}

as the feasible set, and the corresponding KKT conditions are

\nabla f(x) + (\nabla g(x))^T y = 0,    (2a)

y \ge 0, \quad g(x) \le 0, \quad y^T g(x) = 0.    (2b)
Even though continuous methods can be traced back to the 1950s [1], the most significant theoretical contribution to continuous methods comes from Hopfield [16-18]. In recent years, the restriction that the merit function be a Lyapunov function has been removed to allow wider applications [25, 20, 10].
As in Flam [7], consider the ODE

dx(t)/dt = -\lambda x(t), \quad \lambda > 0.

The solution of the above ODE is

x(t) = x_0 e^{-\lambda t}.

Thus, it is easy to see that x(t) \to 0 as t \to \infty. But x(t) will never reach x^* = 0 in any finite time if x_0 \ne 0. Therefore, the conditions and result of Theorem 3.2 in [7] do not hold.
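This finite-time obstruction is easy to see numerically. Below is a minimal sketch (the decay rate lam and starting point x0 are illustrative assumptions, not values from [7]): the trajectory approaches x^* = 0 but remains strictly positive at every finite time.

```python
import math

lam, x0 = 1.0, 1.0  # assumed decay rate and initial point for illustration
# dx/dt = -lam * x has the closed-form solution x(t) = x0 * exp(-lam * t).
for T in (1.0, 10.0, 100.0):
    xT = x0 * math.exp(-lam * T)
    print(f"x({T:6.1f}) = {xT:.3e}")  # strictly positive for every finite T
```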
The first convergence proof of a continuous model for the general convex problem (1) was given in [2]. The ODE suggested in [2] is based on the gradient projection method and has the following form:

dx(t)/dt = -[ x - P_{\Omega_0}( x - \nabla f(x) ) ],    (3)

where \Omega_0 is the feasible set of (1) and P_{\Omega_0}(\cdot) is the projection onto \Omega_0. Even though this method is supported by a convergence proof, it is difficult or even impossible to implement, because in general there is no explicit expression for the projection onto an arbitrary convex set \Omega_0.
2. PROBLEM EQUIVALENT TO VI
First, let us look at the following lemma.

Lemma 2.1. Solving the KKT system (2) is equivalent to solving the following variational inequality problem: with

u = (x^T, y^T)^T, \quad F(u) = ( (\nabla f(x) + (\nabla g(x))^T y)^T, (-g(x))^T )^T,    (4)

find u^* \in \Omega such that

(u - u^*)^T F(u^*) \ge 0, \quad \forall u \in \Omega,    (5)

where

\Omega = \{ u = (x^T, y^T)^T \mid x \in R^n, \ y \in R^m_+ \}, \quad R^m_+ = \{ y \in R^m \mid y \ge 0 \}.
Then, the following lemma shows that F(u) defined in (4) is monotone.
Lemma 2.2. The mapping F(u) defined in (4) is monotone.

Proof. For any u = (x^T, y^T)^T and \bar u = (\bar x^T, \bar y^T)^T in \Omega, we have

(u - \bar u)^T ( F(u) - F(\bar u) )
  = (x - \bar x)^T ( \nabla f(x) - \nabla f(\bar x) + (\nabla g(x))^T y - (\nabla g(\bar x))^T \bar y ) - (y - \bar y)^T ( g(x) - g(\bar x) )
  = (x - \bar x)^T ( \nabla f(x) - \nabla f(\bar x) ) + y^T \{ g(\bar x) - g(x) - \nabla g(x)(\bar x - x) \} + \bar y^T \{ g(x) - g(\bar x) - \nabla g(\bar x)(x - \bar x) \}.    (6)

Since f(x) is convex,

(x - \bar x)^T ( \nabla f(x) - \nabla f(\bar x) ) \ge 0, \quad \forall x, \bar x \in R^n.    (7)

Since g(x) is convex,

g(\bar x) \ge g(x) + \nabla g(x)(\bar x - x),    (8a)

g(x) \ge g(\bar x) + \nabla g(\bar x)(x - \bar x).    (8b)

Combining (6), (7), (8a), and (8b) with y \ge 0 and \bar y \ge 0, we have

(u - \bar u)^T ( F(u) - F(\bar u) ) \ge 0.

This completes the proof of Lemma 2.2.
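Lemma 2.2 is easy to check numerically. The sketch below uses a toy instance with f(x) = (x - 2)^2 and g(x) = x - 1, i.e. n = m = 1 (these data are illustrative assumptions, not from the analysis above); it samples pairs of points in \Omega and evaluates (u - \bar u)^T (F(u) - F(\bar u)).

```python
import numpy as np

def F(u):
    """F(u) = (grad f(x) + (grad g(x))^T y, -g(x)) as in (4)."""
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

rng = np.random.default_rng(0)
for _ in range(5):
    # Sample u, u_bar in Omega = R^n x R^m_+ (nonnegative y-components).
    u    = np.array([rng.normal(), abs(rng.normal())])
    ubar = np.array([rng.normal(), abs(rng.normal())])
    # Monotonicity (Lemma 2.2): (u - u_bar)^T (F(u) - F(u_bar)) >= 0.
    print((u - ubar) @ (F(u) - F(ubar)) >= 0.0)
```

For this particular F, the product equals 2(x - \bar x)^2 exactly, so every line prints True.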
Lemma 2.2 indicates that (5) is a monotone VI problem. The next lemma shows that any solution of (5) satisfies the KKT system (2).

Lemma 2.3. Let u^* = (x^{*T}, y^{*T})^T be a solution of (5). Then u^* satisfies (2).

Proof. From the definition of F(u) in (4),

(u - u^*)^T F(u^*) = (x - x^*)^T [ \nabla f(x^*) + (\nabla g(x^*))^T y^* ] - (y - y^*)^T g(x^*),    (9)

so (5) is equivalent to

(x - x^*)^T [ \nabla f(x^*) + (\nabla g(x^*))^T y^* ] - (y - y^*)^T g(x^*) \ge 0, \quad \forall u \in \Omega.    (10)

Since the x-part of u ranges over all of R^n, (10) forces

\nabla f(x^*) + (\nabla g(x^*))^T y^* = 0.

Consequently,
(y - y^*)^T g(x^*) \le 0, \quad \forall y \ge 0.    (11)

In (11), let y = y^* + e_i, i = 1, 2, ..., m, where e_i is the i-th column vector of the identity matrix I_m. Then (11) becomes

e_i^T g(x^*) \le 0, \quad i = 1, 2, ..., m.

These inequalities imply that g(x^*) \le 0. Now, we need only verify that y^{*T} g(x^*) = 0. Since y^* \ge 0 and (11) is true for all y \ge 0, taking y = 0 and y = 2y^* in (11) yields y^{*T} g(x^*) \ge 0 and y^{*T} g(x^*) \le 0, respectively. This implies that y^{*T} g(x^*) = 0. Therefore, u^* satisfies (2). This proves Lemma 2.3.
3. PROJECTION
For any v \in R^{m+n}, we denote by

P_\Omega(v) = \arg\min \{ \| v - u \| \mid u \in \Omega \}

the unique solution of the minimization problem \min \{ \| v - u \| \mid u \in \Omega \}. Notice that the projection onto \Omega satisfies

(v - P_\Omega(v))^T ( u - P_\Omega(v) ) \le 0, \quad \forall v \in R^{n+m}, \ \forall u \in \Omega.    (12)
Since the early work of Eaves [6] and others, we know that a variational inequality VI(\Omega, F) is equivalent to the following projection equation:

u = P_\Omega[ u - F(u) ].    (13)

In other words, solving VI(\Omega, F) is equivalent to solving the nonsmooth equation (13), i.e., finding the zero point of the residue function

e(u) = u - P_\Omega[ u - F(u) ],    (14)

in the sense that

e(u^*) = 0 \iff u^* \in \Omega^*,

where \Omega^* denotes the solution set of VI(\Omega, F).
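For the toy instance used earlier (f(x) = (x - 2)^2, g(x) = x - 1; an illustrative assumption), the projection onto \Omega = R x R_+ simply clips the y-component at zero, so the residue (14) is a one-liner. At the KKT point u^* = (1, 2) it vanishes, and away from u^* it does not:

```python
import numpy as np

def F(u):
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

def proj(v):
    """P_Omega(v): the x-part is free, the y-part is clipped onto R_+."""
    return np.array([v[0], max(v[1], 0.0)])

def e(u):
    """Residue function e(u) = u - P_Omega[u - F(u)] from (14)."""
    return u - proj(u - F(u))

print(np.linalg.norm(e(np.array([1.0, 2.0]))))  # 0.0 at the KKT point
print(np.linalg.norm(e(np.array([0.0, 0.0]))))  # nonzero elsewhere
```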
Let u^* \in \Omega^*. Then, for any u \in R^{n+m},

F(u^*)^T \{ P_\Omega[u - F(u)] - u^* \} \ge 0,    (15)

\{ u - F(u) - P_\Omega[u - F(u)] \}^T \{ P_\Omega[u - F(u)] - u^* \} \ge 0,    (16)

\{ P_\Omega[u - F(u)] - u^* \}^T \{ F(P_\Omega[u - F(u)]) - F(u^*) \} \ge 0.    (17)

(15) follows from the definition of variational inequalities; (16) follows from a basic property of the projection mapping; (17) follows from the assumption of monotonicity of the mapping F.
Besides e(u), following [7] we also use

d(u) = e(u) - \{ F(u) - F(P_\Omega[u - F(u)]) \}.    (18)

Adding (15), (16), and (17) and using the notation e(u) and d(u), we have

(u - u^*)^T d(u) \ge e(u)^T d(u), \quad \forall u \in R^{n+m}.    (19)
Lemma 3.1. For a constant L \in (0, 1), if

\| F(u) - F(P_\Omega[u - F(u)]) \| \le L \| e(u) \|, \quad \forall u \in R^{n+m},    (20)

then we have

e(u)^T d(u) \ge (1 - L) \| e(u) \|^2    (21)

and

\| d(u) \| \ge (1 - L) \| e(u) \|.
In what follows, we attach a scaling parameter \beta > 0 to F and denote

e(u, \beta) = u - P_\Omega[ u - \beta F(u) ],

d(u, \beta) = e(u, \beta) - \beta ( F(u) - F(P_\Omega[u - \beta F(u)]) ).

It is trivial to see that, for any \beta > 0,

e(u, \beta) = 0 \iff e(u) = 0 \iff u \in \Omega^*.
Lemma 3.2. For any u \in R^{n+m} and \tilde\beta \ge \beta > 0, we have

\| e(u, \tilde\beta) \| \ge \| e(u, \beta) \|    (22)

and

\| e(u, \tilde\beta) \| / \tilde\beta \le \| e(u, \beta) \| / \beta.    (23)

Proof. If e(u, \tilde\beta) = 0, then e(u, \beta) = 0 and the results hold trivially. Now, we assume e(u, \tilde\beta) \ne 0. Let

t = \| e(u, \tilde\beta) \| / \| e(u, \beta) \|.

Then (22) and (23) together are equivalent to

(t - 1)( t - \tilde\beta / \beta ) \le 0.    (24)
Recall the basic property of the projection mapping:

( v - P_\Omega(v) )^T ( P_\Omega(v) - w ) \ge 0, \quad \forall v \in R^{n+m}, \ \forall w \in \Omega.    (25)

Substituting w = P_\Omega[u - \tilde\beta F(u)], v = u - \beta F(u) in (25) and using

P_\Omega[u - \beta F(u)] - P_\Omega[u - \tilde\beta F(u)] = e(u, \tilde\beta) - e(u, \beta),

we get

( e(u, \beta) - \beta F(u) )^T ( e(u, \tilde\beta) - e(u, \beta) ) \ge 0.    (26)

Similarly, substituting w = P_\Omega[u - \beta F(u)], v = u - \tilde\beta F(u) in (25), we have

( \tilde\beta F(u) - e(u, \tilde\beta) )^T ( e(u, \tilde\beta) - e(u, \beta) ) \ge 0.    (27)

Multiplying (26) by \tilde\beta / \beta and adding the result to (27) yields

( (\tilde\beta / \beta) e(u, \beta) - e(u, \tilde\beta) )^T ( e(u, \tilde\beta) - e(u, \beta) ) \ge 0.    (28)

Consequently, by the Cauchy-Schwarz inequality,

(\tilde\beta / \beta) \| e(u, \beta) \|^2 + \| e(u, \tilde\beta) \|^2 \le ( \tilde\beta / \beta + 1 ) \| e(u, \beta) \| \| e(u, \tilde\beta) \|.    (29)

Dividing (29) by \| e(u, \beta) \|^2, we obtain

\tilde\beta / \beta + t^2 \le ( \tilde\beta / \beta + 1 ) t;

thus, (24) holds and Lemma 3.2 is proved.
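Lemma 3.2 can also be observed numerically on the same toy instance (still an illustrative assumption): along a grid of \beta values, \| e(u, \beta) \| never decreases while \| e(u, \beta) \| / \beta never increases.

```python
import numpy as np

def F(u):
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

def proj(v):
    return np.array([v[0], max(v[1], 0.0)])

def e(u, beta):
    """Scaled residue e(u, beta) = u - P_Omega[u - beta F(u)]."""
    return u - proj(u - beta * F(u))

u = np.array([0.0, 0.5])                      # an arbitrary non-solution point
betas = [0.01, 0.1, 0.5, 1.0, 2.0, 10.0]
norms = [np.linalg.norm(e(u, b)) for b in betas]
print(all(a <= b + 1e-12 for a, b in zip(norms, norms[1:])))    # (22) holds
ratios = [n / b for n, b in zip(norms, betas)]
print(all(a >= b - 1e-12 for a, b in zip(ratios, ratios[1:])))  # (23) holds
```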
Lemma 3.3. For any u \in R^{n+m} with e(u) \ne 0 and any L \in (0, 1), there always exists a small \bar\beta > 0 such that

\beta \| F(u) - F(P_\Omega[u - \beta F(u)]) \| \le L \| e(u, \beta) \|, \quad \forall \beta \in (0, \bar\beta].    (30)

Proof. Since \Omega is a closed convex set, there exist an \epsilon > 0 and a \delta > 0 such that

\| e(u, \beta) \| / \beta \ge \delta > 0, \quad \forall \beta \in (0, \epsilon].    (31)

On the other hand, the continuity of F implies

\lim_{\beta \to 0^+} \| F(u) - F(P_\Omega[u - \beta F(u)]) \| = 0.

Together with \| e(u, \beta) \| / \beta \ge \delta > 0 and (23), this indicates that (30) is true for all sufficiently small \beta. Consequently, there exists a \bar\beta > 0 such that, by the argument of Lemma 3.1,

e(u, \beta)^T d(u, \beta) \ge (1 - L) \| e(u, \beta) \|^2, \quad \forall \beta \in (0, \bar\beta].    (32)
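Lemma 3.3 suggests a simple discrete analogue of the scaling rule: shrink \beta until (30) holds. The halving loop below is only an illustrative sketch on the toy instance (the continuous model in Section 4 instead drives \beta = e^\nu through an ODE):

```python
import numpy as np

def F(u):
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

def proj(v):
    return np.array([v[0], max(v[1], 0.0)])

def choose_beta(u, L=0.9, beta=1.0):
    """Halve beta until condition (30) holds; Lemma 3.3 guarantees
    termination whenever e(u) != 0."""
    while True:
        pu = proj(u - beta * F(u))
        e = u - pu                                  # e(u, beta)
        if beta * np.linalg.norm(F(u) - F(pu)) <= L * np.linalg.norm(e):
            return beta
        beta *= 0.5

print(choose_beta(np.array([0.0, 0.5])))            # 0.25 for this data
```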
4. MONOTONE VI PROBLEM
In the continuous method, we adopt a technique to adjust the parameter \beta so that a condition of the form (30) can be satisfied along the trajectory. We define

\beta = e^\nu    (33)

and let

w = (u^T, \nu)^T, \quad dw/dt = ( (du/dt)^T, d\nu/dt )^T.
Let

T_1(u, F, \nu) = \| u - P_\Omega[ u - e^\nu F(u) ] \|,

T_2(u, F, \nu) = \| e^\nu F(u) - e^\nu F( P_\Omega[u - e^\nu F(u)] ) \|.

Then, we define

du/dt = -d(u, e^\nu) if T_2 \le 0.9 T_1, and du/dt = 0 otherwise,

together with

d\nu/dt = -\phi(u, F, \nu),

where

\phi(u, F, \nu) = 0 if T_2 \le 0.9 T_1, and \phi(u, F, \nu) = 1 otherwise;

\psi(u, F, \nu) = 1 if T_2 \le 0.9 T_1, and \psi(u, F, \nu) = 0 otherwise.

Therefore, we have

dw/dt = ( -\psi(u, F, \nu) d(u, e^\nu), \ -\phi(u, F, \nu) )^T.    (34)
In (34), the right-hand-side term is not continuous. To make this term continuous, we can define

dw/dt = ( -\psi(u, F, \nu) d(u, e^\nu), \ -\phi(u, F, \nu) )^T,    (35)

where now

\psi(u, F, \nu) = 1 if T_0 \le 0.8, \psi = 9 - 10 T_0 if 0.8 < T_0 \le 0.9, and \psi = 0 if T_0 > 0.9;    (36a)

\phi(u, F, \nu) = 0 if T_0 \le 0.8, \phi = 10 T_0 - 8 if 0.8 < T_0 \le 0.9, and \phi = 1 if T_0 > 0.9;    (36b)

T_0(u, F, \nu) = T_2(u, F, \nu) / T_1(u, F, \nu);    (36c)

and the trajectory starts from

w(0) = w_0 = (u_0^T, \nu_0)^T.    (37)
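The whole model (33)-(37) is easy to prototype. The sketch below integrates (35) with scipy's solve_ivp on the toy instance min (x - 2)^2 s.t. x - 1 \le 0 (the problem data, u_0, \nu_0, and the time horizon are illustrative assumptions); the trajectory u(t) approaches the KKT point (x^*, y^*) = (1, 2).

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(u):
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

def proj(v):
    return np.array([v[0], max(v[1], 0.0)])

def rhs(t, w):
    u, nu = w[:2], w[2]
    beta = np.exp(nu)                          # (33): beta = e^nu
    pu = proj(u - beta * F(u))
    T1 = np.linalg.norm(u - pu)
    T2 = beta * np.linalg.norm(F(u) - F(pu))
    T0 = T2 / T1 if T1 > 0.0 else 0.0          # (36c)
    psi = np.clip(9.0 - 10.0 * T0, 0.0, 1.0)   # (36a): ramps 1 -> 0 on [0.8, 0.9]
    phi = np.clip(10.0 * T0 - 8.0, 0.0, 1.0)   # (36b): ramps 0 -> 1 on [0.8, 0.9]
    d = (u - pu) - beta * (F(u) - F(pu))       # d(u, e^nu)
    return np.concatenate([-psi * d, [-phi]])  # (35)

w0 = np.array([0.0, 0.0, 0.0])                 # u0 = (0, 0), nu0 = 0 (assumed)
sol = solve_ivp(rhs, (0.0, 50.0), w0, rtol=1e-8, atol=1e-10)
print(sol.y[:2, -1])                           # close to (x*, y*) = (1, 2)
```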
5. CONVERGENCE PROPERTIES
Here we will discuss the convergence properties of (35). In the following, we use T_0 in place of T_0(u, F, \nu).

Theorem 5.1. For any w_0 \in R^{n+m+1}, the solution w(t) of (35) exists in [0, \infty).

Proof. From our assumption that problem (1) has a finite optimal solution x^*, there is a finite u^* \in \Omega^*. Then,

d\|u - u^*\|^2 / dt = -2 (u - u^*)^T d(u, e^\nu) if T_0 \le 0.8,

d\|u - u^*\|^2 / dt = -2 (9 - 10 T_0)(u - u^*)^T d(u, e^\nu) if 0.8 < T_0 \le 0.9,

d\|u - u^*\|^2 / dt = 0 if T_0 > 0.9.    (38)

For all cases of (38), from (19) and (32) with L = 0.9, it is easy to see that

d\|u - u^*\|^2 / dt \le 0,    (39)

and hence

u(t) \in B(u_0, u^*) = \{ u \in R^{n+m} \mid \| u - u^* \| \le \| u_0 - u^* \| \}.
The set B(u_0, u^*) is bounded, and d\nu/dt \le 0. From the continuity of d(u, e^\nu), it is easy to verify that the right-hand-side function of (35) is bounded. From the Cauchy-Peano theorem, the solution w(t) of (35) exists in [0, \infty). This proves Theorem 5.1.

Lemma 5.1. For any w_0 \in R^{n+m+1}, let w(t) be the solution of (35), and suppose that u(t) \notin \Omega^*, \forall t \ge 0. Then there exists a \tilde t \ge 0 such that

T_0 \le 0.8, \quad \forall t \ge \tilde t.    (40)

Proof. From (35), d\nu/dt \le 0. Therefore, \nu is always monotonically non-increasing in t. From Lemma 3.3 and the compactness of B(u_0, u^*), once \beta = e^\nu becomes sufficiently small we have

T_0 \le 0.8, and hence \psi(u, F, \nu) = 1 and \phi(u, F, \nu) = 0,    (41)

so \nu stops decreasing and (40) holds for all subsequent t. This proves Lemma 5.1.
Theorem 5.2. For any w_0 \in R^{n+m+1}, let w(t) be the solution of (35). Then u(t) converges to a solution of VI(\Omega, F).

Proof. By Lemma 5.1, there exists a \tilde t \ge 0 such that T_0 \le 0.8, \forall t \ge \tilde t. From (35), (38), and (32), we can see easily that

d\|u - u^*\|^2 / dt \le -2 (1 - L) \| e(u, e^\nu) \|^2, \quad \forall t \ge \tilde t    (42)

and

d\nu / dt = 0, \quad \forall t \ge \tilde t.

Therefore, \nu(t) is a constant, say \nu^*, for all t \ge \tilde t. Let

w^* = (u^{*T}, \nu^*)^T.
From (42), we have

d\|w - w^*\|^2 / dt \le -2 (1 - L) \| e(u, e^\nu) \|^2, \quad \forall t \ge \tilde t.    (43)

From the proof of Theorem 5.1, we know that u(t) \in B(u_0, u^*), which is a compact set. Therefore, there exists a sequence \{t_k\}, with t_k \to \infty as k \to \infty, such that \lim_{k \to \infty} u(t_k) exists. Let \hat u be the limit. From (42), it is easy to see that \hat u \in \Omega^*. Replacing u^* by \hat u in (42), we have

d\|u - \hat u\|^2 / dt \le -2 (1 - L) \| e(u, e^\nu) \|^2, \quad \forall t \ge \tilde t,

so \| u(t) - \hat u \| is non-increasing and, along \{t_k\}, tends to zero. Hence \lim_{t \to \infty} u(t) = \hat u \in \Omega^*, which proves Theorem 5.2.
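The decrease property (39)/(42) is easy to watch in a discretized simulation. The explicit-Euler loop below (the step size h and the fixed scaling \beta = 0.3 are assumed purely for illustration) tracks \| u(t) - u^* \| on the toy instance and confirms that it never increases while u(t) is driven toward u^*.

```python
import numpy as np

def F(u):
    x, y = u
    return np.array([2.0 * (x - 2.0) + y, -(x - 1.0)])

def proj(v):
    return np.array([v[0], max(v[1], 0.0)])

u, u_star = np.array([0.0, 0.0]), np.array([1.0, 2.0])
beta, h = 0.3, 1e-2                           # assumed scaling and Euler step
dists = []
for _ in range(20000):
    pu = proj(u - beta * F(u))
    d = (u - pu) - beta * (F(u) - F(pu))      # d(u, beta) as in Section 3
    u = u - h * d                             # Euler step for du/dt = -d(u, beta)
    dists.append(np.linalg.norm(u - u_star))

print(all(a >= b - 1e-12 for a, b in zip(dists, dists[1:])))  # monotone, cf. (39)
print(dists[-1])                              # ~0: u(t) -> u* = (1, 2)
```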
7. CONCLUSIONS
In this paper, we have discussed a continuous method for convex programming (CP) problems which combines a merit function with an ordinary differential equation (ODE) for the resulting variational inequality problem. Our approach is different from the existing continuous methods in the literature. Strong convergence results for our continuous method are obtained under very mild assumptions. No Lipschitz condition is required in our convergence results.
REFERENCES
1. Arrow, K. J., Hurwicz, L., and Uzawa, H., Studies in Linear and Nonlinear Programming, Stanford University Press, Stanford, California, 1958.

2. Antipin, A. S., Minimization of Convex Functions on Convex Sets by Means of Differential Equations, Differential Equations, Vol. 30, pp. 1365-1375, 1994.

3. Antipin, A. S., Solving Variational Inequalities with Coupling Constraints with the Use of Differential Equations, Differential Equations, Vol. 36, pp. 1443-1451, 2000.

4. Antipin, A. S., Feedback-Controlled Saddle Gradient Process, Automation and Remote Control, Vol. 55, pp. 311-320, 1994.

5. Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, Belmont, Massachusetts, 1996.

6. Eaves, B. C., On the Basic Theorem of Complementarity, Mathematical Programming, Vol. 1, pp. 68-75, 1971.

7. Flam, S. D., Solving Convex Programs by Means of Ordinary Differential Equations, Mathematics of Operations Research, Vol. 17, pp. 290-302, 1992.

8. Fletcher, R., Practical Methods of Optimization, 2nd Edition, John Wiley and Sons, New York, NY, 1987.

9. Glazos, M. P., Hui, S., and Zak, S. H., Sliding Modes in Solving Convex Programming Problems, SIAM Journal on Control and Optimization, Vol. 36, pp. 680-697, 1998.
10. Han, Q. M., Liao, L. Z., Qi, H. D., and Qi, L. Q., Stability Analysis of Gradient-Based Neural Networks for Optimization Problems, Journal of Global Optimization, Vol. 19, pp. 363-381, 2001.

11. He, B. S., Liao, L. Z., Han, D. R., and Yang, H., A New Inexact Alternating Directions Method for Monotone Variational Inequalities, Mathematical Programming, Vol. 92, pp. 103-118, 2002.

12. He, B. S., and Liao, L. Z., Improvements of Some Projection Methods for Monotone Nonlinear Variational Inequalities, Journal of Optimization Theory and Applications, Vol. 112, pp. 111-128, 2002.

13. He, B. S., Inexact Implicit Methods for Monotone General Variational Inequalities, Mathematical Programming, Vol. 86, pp. 199-217, 1999.

14. He, B. S., A Class of Projection and Contraction Methods for Monotone Variational Inequalities, Applied Mathematics and Optimization, Vol. 35, pp. 69-76, 1997.

15. He, B. S., Some Predictor-Corrector Methods for Monotone Variational Inequalities, Technical Report 95-65, Faculty of Technical Mathematics and Informatics, TU Delft, Delft, Netherlands, 1995.

16. Hopfield, J. J., Neural Networks and Physical Systems with Emergent Collective Computational Ability, Proceedings of the National Academy of Sciences of the USA, Vol. 79, pp. 2554-2558, 1982.

17. Hopfield, J. J., Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons, Proceedings of the National Academy of Sciences of the USA, Vol. 81, pp. 3088-3092, 1984.

18. Hopfield, J. J., and Tank, D. W., Neural Computation of Decisions in Optimization Problems, Biological Cybernetics, Vol. 52, pp. 141-152, 1985.

19. Liao, L. Z., Qi, H. D., and Qi, L. Q., Neurodynamical Optimization, Journal of Global Optimization (to appear).

20. Leung, Y., Chen, K. Z., Jiao, Y. C., Gao, X. B., and Leung, K. S., A New Gradient-Based Neural Network for Solving Linear and Quadratic Programming Problems, IEEE Transactions on Neural Networks, Vol. 12, pp. 1074-1083, 2001.

21. Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Addison-Wesley Publishing Company, Reading, Massachusetts, 1984.

22. Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.

23. Rodríguez-Vázquez, A., Domínguez-Castro, R., Rueda, A., Huertas, J. L., and Sánchez-Sinencio, E., Nonlinear Switched-Capacitor Neural Networks for Optimization Problems, IEEE Transactions on Circuits and Systems, Vol. 37, pp. 384-398, 1990.

24. Slotine, J. J. E., and Li, W., Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, New Jersey, 1991.

25. Xia, Y., A New Neural Network for Solving Linear Programming Problems and Its Applications, IEEE Transactions on Neural Networks, Vol. 7, pp. 525-529, 1996.