A Sequential Linear Programming Method For Generalized Linear Complementarity Problem
Introduction
The generalized linear complementarity problem (GLCP) is to find vectors x, y \in R^n such that
\[
x \ge 0,\quad y \ge 0,\quad x^\top y = 0, \tag{1.1}
\]
where x and y are coupled through the linear relation Mx + Ny + Qz + q = 0 for some z \in R^l, with given data M, N \in R^{m \times n}, Q \in R^{m \times l}, and q \in R^m.
This work is supported by the Natural Science Foundation of China (Grant No. 10771120).
First, we give some equivalent expressions of the solution set of the GLCP.
Certainly, (x^*, y^*) is a solution of the GLCP if and only if there exists z^* \in R^l such that
\[
x^* \ge 0,\quad y^* \ge 0,\quad Mx^* + Ny^* + Qz^* + q = 0,\quad (x^*)^\top y^* = 0.
\]
Equivalently, setting \omega^* = ((x^*)^\top, (y^*)^\top, (z^*)^\top)^\top, the point \omega^* satisfies
\[
(I,0,0)\omega^* \ge 0,\quad (0,I,0)\omega^* \ge 0,\quad
[(I,0,0)\omega^*]^\top [(0,I,0)\omega^*] = 0,\quad
(M,N,Q)\omega^* + q = 0. \tag{2.1}
\]
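The paper itself contains no code; as an illustration only, the conditions in (2.1) can be verified numerically as follows. The function name and the tolerance are our own choices, not from the paper.

```python
import numpy as np

def is_glcp_solution(x, y, z, M, N, Q, q, tol=1e-8):
    """Check the conditions of (2.1): nonnegativity of x and y,
    the linear equation Mx + Ny + Qz + q = 0, and complementarity."""
    feasible = (x >= -tol).all() and (y >= -tol).all()
    linear = np.linalg.norm(M @ x + N @ y + Q @ z + q) <= tol
    complementary = abs(x @ y) <= tol
    return feasible and linear and complementary
```

For instance, with M = N = I, Q = 0, and q = -(x + y), any complementary nonnegative pair (x, y) passes the check.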
Let s^* \in R^n be defined by
\[
s_i^* = \begin{cases} 1, & i \in I^*, \\ 0, & i \in J^*, \end{cases}
\]
and let r^* be such that r^* + s^* = e, where
\[
I^* := \{\, i \in \{1, 2, \dots, n\} \mid ((I,0,0)\omega^*)_i \le ((0,I,0)\omega^*)_i \,\},\qquad
J^* := \{\, j \in \{1, 2, \dots, n\} \mid ((I,0,0)\omega^*)_j > ((0,I,0)\omega^*)_j \,\}.
\]
Combining this with (2.1), we know that (\omega^*, r^*, s^*) satisfies the constraints in (2.2) and renders its objective r^\top (I,0,0)\omega + s^\top (0,I,0)\omega equal to zero.
On the other hand, let (\omega^*, r^*, s^*) solve (2.2) with vanishing objective. Then
\[
(r^*)^\top (I,0,0)\omega^* + (s^*)^\top (0,I,0)\omega^* = 0.
\]
Since r^* + s^* = e > 0, it follows that [(I,0,0)\omega^*]^\top [(0,I,0)\omega^*] = 0, and \omega^* solves the GLCP. □
In the following, we prove the existence of a vertex solution for (2.2), which will be used to design our algorithm for solving (1.1).
Theorem 2.3 If (2.2) is feasible, then it has a vertex solution.
Proof. Let (\bar\omega, \bar r, \bar s) be a solution of (2.2). With (r, s) fixed at (\bar r, \bar s), the linear program
\[
\begin{array}{ll}
\min & \bar r^\top (I,0,0)\omega + \bar s^\top (0,I,0)\omega \\
\text{s.t.} & (I,0,0)\omega \ge 0,\quad (0,I,0)\omega \ge 0,\quad (M,N,Q)\omega + q = 0
\end{array}
\]
has a vertex \hat\omega of its feasible region as a solution ([9]), and
\[
\bar r^\top (I,0,0)\hat\omega + \bar s^\top (0,I,0)\hat\omega
= \bar r^\top (I,0,0)\bar\omega + \bar s^\top (0,I,0)\bar\omega.
\]
Similarly, the linear program
\[
\begin{array}{ll}
\min & r^\top (I,0,0)\hat\omega + s^\top (0,I,0)\hat\omega \\
\text{s.t.} & r \ge 0,\quad s \ge 0,\quad r + s = e
\end{array}
\]
has a vertex (\hat r, \hat s) of its feasible region as a solution, and
\[
\hat r^\top (I,0,0)\hat\omega + \hat s^\top (0,I,0)\hat\omega
= \bar r^\top (I,0,0)\hat\omega + \bar s^\top (0,I,0)\hat\omega
= \bar r^\top (I,0,0)\bar\omega + \bar s^\top (0,I,0)\bar\omega.
\]
Hence (\hat\omega, \hat r, \hat s) is a vertex of the feasible region of (2.2) and a vertex solution of (2.2). □
Now, we can give the following equivalent reformulation of (2.2).
\[
\begin{array}{ll}
\min & (e - s(\omega))^\top (I,0,0)\omega + s(\omega)^\top (0,I,0)\omega \\
\text{s.t.} & (I,0,0)\omega \ge 0,\quad (0,I,0)\omega \ge 0,\quad (M,N,Q)\omega + q = 0,
\end{array} \tag{2.3}
\]
where
\[
s_i(\omega) = \begin{cases} 1, & i \in I(\omega), \\ 0, & i \in J(\omega), \end{cases}
\qquad r(\omega) + s(\omega) = e,
\]
with I(\omega) and J(\omega) defined as above.
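For a given ω, the vector s(ω) in (2.3) is just the indicator of the components where (I,0,0)ω does not exceed (0,I,0)ω. A minimal sketch (the function name is our own):

```python
import numpy as np

def s_of_omega(omega, n):
    """s_i = 1 where x_i <= y_i (i in I(omega)), 0 where x_i > y_i
    (i in J(omega)), with x = (I,0,0)omega and y = (0,I,0)omega."""
    x, y = omega[:n], omega[n:2 * n]
    return (x <= y).astype(float)
```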
At the k-th step of Algorithm 3.1, given the current iterate \omega^k, one solves the linear program
\[
\begin{array}{ll}
\min & (e - s(\omega^k))^\top (I,0,0)\omega + s(\omega^k)^\top (0,I,0)\omega \\
\text{s.t.} & (I,0,0)\omega \ge 0,\quad (0,I,0)\omega \ge 0,\quad (M,N,Q)\omega + q = 0,
\end{array} \tag{3.1}
\]
where s(\omega^k) is defined in (2.3), and takes as \omega^{k+1} a vertex solution such that
\[
\big( (e - s(\omega^k))^\top (I,0,0) + s(\omega^k)^\top (0,I,0) \big)(\omega^{k+1} - \omega^k) < 0.
\]
Stop when no such \omega^{k+1} exists.
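Algorithm 3.1 thus reduces to solving one linear program per iteration. A sketch using scipy.optimize.linprog follows; the solver choice, the stopping tolerance, and the function name are our own assumptions, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def slp_glcp(M, N, Q, q, omega0, max_iter=100, tol=1e-10):
    """Sequential-LP sketch of Algorithm 3.1: at each step solve the
    linear program (3.1) and accept the new point only if it strictly
    decreases the linearized objective."""
    n = M.shape[1]
    l = Q.shape[1]
    A_eq = np.hstack([M, N, Q])          # (M, N, Q) omega = -q
    bounds = [(0, None)] * (2 * n) + [(None, None)] * l  # x, y >= 0, z free
    omega = np.asarray(omega0, dtype=float)
    for _ in range(max_iter):
        x, y = omega[:n], omega[n:2 * n]
        s = (x <= y).astype(float)       # s(omega^k) as in (2.3)
        c = np.concatenate([1.0 - s, s, np.zeros(l)])
        res = linprog(c, A_eq=A_eq, b_eq=-q, bounds=bounds, method="highs")
        if not res.success or c @ (res.x - omega) >= -tol:
            break                        # "stop when impossible"
        omega = res.x
    return omega
```

On a small LCP instance with M = I, N = -I, Q empty, and q = (-1, 2)^⊤, started from the feasible point (2, 1, 1, 3)^⊤, this sketch returns the solution x = (1, 0), y = (0, 2).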
We are now in a position to show the global convergence of Algorithm 3.1.
Theorem 3.1 The sequence \{\omega^k\} generated by Algorithm 3.1 terminates in a finite number of steps at a solution of the GLCP, or at a point \bar\omega such that (\bar\omega, r(\bar\omega), s(\bar\omega)) is a stationary point of (2.2), where s(\bar\omega) is defined in (2.3).
Proof. The result is a direct consequence of Theorem 2.1 in [7]: the minimization over (r, s) in (2.2) has been carried out, resulting in the objective of (2.3), which does not depend on (r, s). □
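The elimination of (r, s) referred to in this proof can be made explicit. For fixed feasible ω, minimizing the objective of (2.2) over the simplex constraints on (r, s) gives, componentwise,

```latex
\min_{\substack{r + s = e \\ r \ge 0,\; s \ge 0}}
  \; r^\top (I,0,0)\omega + s^\top (0,I,0)\omega
  \;=\; \sum_{i=1}^{n} \min\!\bigl( ((I,0,0)\omega)_i,\; ((0,I,0)\omega)_i \bigr),
```

with the minimum attained at s = s(\omega) and r = e - s(\omega) as in (2.3); substituting this minimizer turns (2.2) into (2.3), whose objective indeed no longer involves (r, s).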
Theorem 3.2 A stationary point \bar\omega of (2.2) satisfies the following necessary optimality conditions: for some y_1, y_2 \in R^n_+ and y_3 \in R^m,
\[
(I,0,0)\bar\omega \ge 0,\quad (0,I,0)\bar\omega \ge 0,\quad (M,N,Q)\bar\omega + q = 0,
\]
\[
(I,0,0)^\top y_1 + (0,I,0)^\top y_2 + (M,N,Q)^\top y_3
= \big[ (e - s(\bar\omega))^\top (I,0,0) + s(\bar\omega)^\top (0,I,0) \big]^\top, \tag{3.2}
\]
\[
y_1^\top \big( (I,0,0)\bar\omega \big) + y_2^\top \big( (0,I,0)\bar\omega \big) = 0.
\]
The stationary point \bar\omega solves (2.2) if and only if there exist y_1 \in R^n_+, y_2 \in R^n_+, y_3 \in R^m with (M,N,Q)^\top y_3 = 0 such that
\[
(\bar\omega, y_1, y_2) = (\bar\omega,\; e - s(\bar\omega),\; s(\bar\omega))
\]
satisfies (3.2).
Proof. If \bar\omega is a solution of the GLCP, then condition (3.2) is satisfied by
\[
(\bar\omega, y_1, y_2) = (\bar\omega,\; e - s(\bar\omega),\; s(\bar\omega)),\qquad (M,N,Q)^\top y_3 = 0.
\]
Conversely, if
\[
(\bar\omega, y_1, y_2) = (\bar\omega,\; e - s(\bar\omega),\; s(\bar\omega)),\qquad (M,N,Q)^\top y_3 = 0
\]
satisfies condition (3.2), it follows that \bar\omega is feasible for the GLCP and that
\[
(e - s(\bar\omega))^\top (I,0,0)\bar\omega + s(\bar\omega)^\top (0,I,0)\bar\omega = 0.
\]
Since (e - s(\bar\omega)) + s(\bar\omega) = e, it follows that [(I,0,0)\bar\omega]^\top [(0,I,0)\bar\omega] = 0, and hence \bar\omega solves the GLCP. □
Computational Experiments
Example 4.1 Consider the problem with
\[
N = \begin{pmatrix}
1 & 2 & 2 & \cdots & 2 & 2 \\
0 & 1 & 2 & \cdots & 2 & 2 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 2 \\
0 & 0 & 0 & \cdots & 0 & 1
\end{pmatrix},
\qquad
q = \begin{pmatrix} -1 \\ -1 \\ \vdots \\ -1 \end{pmatrix}.
\]
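The test data above can be generated for any dimension n. In this sketch (the helper name is our own) we take q = -e, the usual reading of this classical test problem; the sign of q is inferred, since minus signs did not survive in the source.

```python
import numpy as np

def example41_data(n):
    """Upper-triangular matrix with ones on the diagonal and twos
    above it, together with q = -e (all entries -1)."""
    N = np.eye(n) + np.triu(2 * np.ones((n, n)), k=1)
    q = -np.ones(n)
    return N, q
```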
For this problem, Harker and Pang ([10]) used the damped-Newton method (DNA), and Wang ([11]) used the Newton-type method (NTA). The results for these two methods and several values of the dimension n are summarized in Table 1. In Table 2, we take the initial point x0 = (1, 1, \dots, 1)^\top and summarize the results of our algorithm for several values of the dimension n. From Table 1 and Table 2, we conclude that our algorithm outperforms the two methods listed above.
To illustrate the stability of our algorithm, we use it to solve Example 4.1 with the initial point x0 generated randomly in (0, 1) and dimension n = 64; the results are listed in Table 3. Table 2 and Table 3 indicate that our algorithm is not sensitive to the choice of initial point; thus Algorithm 3.1 performs well for this problem.
Table 1. Numerical Results by DNA and NTA for Example 4.1

Dimension n    8     16    32    64     128
DNA iter       9     20    72    208    > 300
NTA iter       13    12    18    99     99
Table 3. Numerical Results of Our Algorithm with Random Initial Points for Example 4.1

Trial   Iter   f (x 1e-14)   CT       Trial   Iter   f (x 1e-14)   CT
1       1      2.1932        0.2810   6       1      2.1932        0.2970
2       1      2.1932        0.2810   7       1      2.1932        0.2970
3       1      2.1932        0.2810   8       1      2.1932        0.2970
4       1      2.1932        0.2970   9       2      2.1932        0.3750
5       1      2.1932        0.2970   10      1      2.1932        0.2970
References
[1] O. L. Mangasarian, J. S. Pang. The extended linear complementarity problem. SIAM J. Matrix Anal. Appl., 1995, 16: 359-368.
[2] M. S. Gowda. On the extended linear complementarity problem. Math. Programming, 1996, 72: 33-50.