Necessary Conditions for Minimax Control Problems of Second Order Elliptic Partial Differential Equations
Abstract. In this paper, we study an optimal control problem whose state equation is a second order semilinear elliptic partial differential equation containing a distributed control. The control domain is not necessarily convex. The cost functional, which is to be minimized, is the essential supremum over the domain of some function of the state and the control. Our main result is a set of Pontryagin-type necessary conditions for the optimal control. The Ekeland variational principle and the spike variation technique are used in proving our result.
§1. Introduction.

In this paper, we consider the following state equation:

(1.1)    Δy(x) = f(x, y(x), u(x))  in Ω,    y|_∂Ω = 0,

where Ω is a bounded region in ℝⁿ with a smooth boundary ∂Ω and f : Ω × ℝ × U → ℝ is a given map satisfying some conditions. Here, U is a metric space in which the control variable u(·) takes its values. The function y(·) is called the state of the system, which is to be controlled. Under certain conditions, for each measurable function u(·), problem (1.1) admits a unique
* This work was partially supported by the NSF of China under Grant 19131050 and the Fok Ying Tung Education Foundation. Current address: IMA, University of Minnesota, Minneapolis, MN 55455.
solution y(·) ≡ y(·; u(·)) in some function space, say Y. Then, for a given set Q ⊂ Y, we may consider a state constraint of the following type:

(1.2)    y(·; u(·)) ∈ Q.

Next, we let h : Ω × ℝ × U → ℝ and consider a cost functional

(1.3)    J(u(·)) = esssup_{x∈Ω} h(x, y(x; u(·)), u(x)).

Then, our optimal control problem is to find a control u(·) such that the corresponding state y(·; u(·)) satisfies (1.1)–(1.2) and minimizes the cost functional (1.3).
One motivation for our problem is the following: Suppose we would like to control the state y(·), which is subject to (1.1)–(1.2), so that its largest deviation from a desired state, say z(·), is minimized. In this case, we only need to take

h(x, y, u) = |y − z(x)|².
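As a toy numerical illustration (the grid, target, and state below are all invented, not from the paper), the resulting minimax cost is just the largest pointwise squared deviation:

```python
import numpy as np

# Hypothetical 1-D grid standing in for the domain (assumed for illustration).
x = np.linspace(0.0, 1.0, 101)
z = np.sin(np.pi * x)              # desired state z(x)
y = np.sin(np.pi * x) + 0.05 * x   # some state y(x; u) produced by a control

# With h(x, y, u) = |y - z(x)|^2, the cost (1.3) is the essential supremum
# of h along the state, approximated on the grid by a plain maximum.
J = np.max(np.abs(y - z) ** 2)
print(J)   # largest squared deviation, attained here at x = 1
```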
Since the problem is to minimize a "maximum", it is usually referred to as a minimax control problem. Similar problems for ordinary differential equations were studied by several authors; see [2,3,14]. The purpose of this work is to give a Pontryagin-type maximum principle for optimal controls of our problem. Since the cost functional is not smooth (in some sense), we adopt an idea from [2,3] to approximate it by the Lᵖ-norm of the function h(x, y(x), u(x)) and then let p → ∞. In order for the Ekeland variational principle to apply, we need the stability of the optimal cost value. Namely, we need to show that the optimal cost value of the approximating problem approaches that of the original one. Due to the state constraint, we have to impose a condition to ensure such stability. On the other hand, since U is merely a metric space, no convexity is available. Thus, we use the spike variation technique, as in [11,21], in deriving the maximum principle. We have proved the nontriviality of the Lagrange multiplier. We should admit that, at the present time, we are only able to prove the result for the case where h has the form

(1.4)    h(x, y, u) = g(x, y) + ℓ(x, u),    ∀(x, y, u) ∈ Ω × ℝ × U.

Our approach seems not applicable to the general case. However, we conjecture that the result should hold for general h. We notice that in [2] (for ODE systems with no state constraint),
the same problem also exists, and the proof provided there, for the case of general h, seems questionable to us. Also, the nontriviality of the Lagrange multiplier was not mentioned in [2]. Finally, we note that the proofs of [2] relied heavily on dynamic programming and viscosity solutions of HJB equations. These techniques, however, are not applicable to the elliptic systems considered here. Besides, unlike [2], we will not use epi-convergence. For completeness, in this paper, we also present an existence result for optimal controls, whose proof is basically similar to that given in [20], which was for a Lagrange type cost functional.
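The approximation step described above can be seen numerically: on a domain of measure one, the Lᵖ-norms of a bounded function increase toward its essential supremum as p → ∞. A small sketch (the sample function is arbitrary, chosen only for illustration):

```python
import numpy as np

# On a measure-one domain, ||h||_p -> ||h||_infinity as p -> infinity.
x = np.linspace(0.0, 1.0, 20001)
h = x * (1.0 - x)                 # arbitrary bounded sample; sup = 0.25 at x = 1/2

def lp_norm(h, p):
    m = np.max(np.abs(h))
    # Rescale by the max to avoid underflow of |h|^p for large p;
    # the mean over a uniform grid approximates the integral (|Omega| = 1).
    return m * np.mean((np.abs(h) / m) ** p) ** (1.0 / p)

for p in (2, 10, 100, 1000):
    print(p, lp_norm(h, p))       # increases toward 0.25 from below
```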
We refer the readers to [4,14,15] for standard optimal control theory in finite dimensions, and to [1,5–7,11,12,18–21] for the infinite dimensional counterpart.
To conclude this introduction, let us point out that our approach applies to general second order elliptic partial differential equations; namely, we may replace the Laplacian by a general second order elliptic operator with smooth coefficients. Also, we may consider quasilinear systems, boundary control problems, as well as problems for parabolic equations. Some interesting extensions will appear elsewhere.
§2. Preliminaries.

Let us start with some assumptions, which will hold throughout the paper.
Let Ω ⊂ ℝⁿ be a bounded region with a smooth boundary ∂Ω and let U be a Polish space ([10]). Let f : Ω × ℝ × U → ℝ, g : Ω × ℝ → ℝ and ℓ : Ω × U → ℝ be given maps satisfying the following (h is defined by (1.4)):

(i) f(x, y, u) and g(x, y) are differentiable in y, and for some constant L > 0,

(2.1)    |f(x, y, u)| + |f_y(x, y, u)| ≤ L,    ∀(x, y, u) ∈ Ω × ℝ × U,

(2.2)    |g(x, y)| + |g_y(x, y)| + |ℓ(x, u)| ≤ L,    ∀(x, y, u) ∈ Ω × ℝ × U.

(ii) The maps f(x, y, u), f_y(x, y, u), g(x, y), g_y(x, y) and ℓ(x, u) are continuous on Ω × ℝ × U.
Remark 2.1. The above conditions can be relaxed substantially; say, f and f_y need only be measurable in x, the boundedness of f and h can be replaced by linear growth in y, etc. We prefer not to get into such generality, since the result would be similar and the main idea is the same.

Next, we define

𝒰 = { u(·) : Ω → U | u(·) is measurable }.
Any element u() 2 U is referred as a control. From [9,17], we know that under the
above (i), for any u() 2 U , problem (1.1) admits a unique solution y() y(; u()) 2
T
W 2;p(
) W01;p(
), for all p 1. Moreover, there exists a constant C = C (p; L;
) such
that
(2:3)
ky(; u())kW ;p (
) C;
2
8u() 2 U :
ky(; u())kC ; (
) C;
1
8u() 2 U ;
with the constant C and 2 (0; 1) being independent of u(). Hereafter, C will be generic
constants which could be dierent in dierent places. Now, we let p > n be xed and let
Y be a separable Banach space containing W01;p(
) (topologically):
Y W01;p(
):
(2:5)
We let Q ⊂ Y be convex and closed. Then, by the above analysis, we see that for any u(·) ∈ 𝒰, we have

(2.6)    y(·; u(·)) ∈ Y.

Thus, the state constraint (1.2) makes sense. It is important to notice that (1.2) includes many interesting cases. Let us point out some of them. We let Y = C(Ω̄) and

(2.7)    Q = { y(·) ∈ Y | y(xᵢ) ≤ aᵢ, 1 ≤ i ≤ m },

for some xᵢ ∈ Ω̄, aᵢ ∈ ℝ, 1 ≤ i ≤ m. Under this choice, (1.2) gives a pointwise constraint for the state. If we take Y = Lᵖ(Ω) and let

(2.8)    Q = { y(·) ∈ Y | y(x) ≥ 0, a.e. x ∈ Ω;  ∫_Ω y(x)h(x)dx ≤ 1 },

then (1.2) gives an integral-type constraint together with a sign condition on the state.
If we instead take

(2.9)    Y = W^{2,p}(Ω) ∩ W_0^{1,p}(Ω),

and let

(2.10)    Q = { y(·) ∈ Y | ∇y(xᵢ) = bᵢ, 1 ≤ i ≤ m },

for some xᵢ ∈ Ω, bᵢ ∈ ℝⁿ, 1 ≤ i ≤ m, then (1.2) provides a pointwise constraint for the gradient of the state. Similar constraints to (2.7) and (2.10), for other types of optimal control problems, were considered in [5,7,21]. One may cook up other interesting examples of state constraints. In what follows, we will restrict ourselves to the case (2.5). The case (2.9) will not be treated in this paper; some relevant results will appear elsewhere.
Of course, for any given u(·) ∈ 𝒰, the corresponding state y(·; u(·)) does not necessarily satisfy the constraint (1.2). Thus, let us introduce

(2.11)    𝒜 = { (y(·; u(·)), u(·)) | u(·) ∈ 𝒰 },    𝒜_Q = { (y(·), u(·)) ∈ 𝒜 | y(·) ∈ Q },

and set

(2.12)    m = inf_{(y(·),u(·)) ∈ 𝒜_Q} J(u(·)).

We refer to the problem of minimizing J(u(·)) over 𝒜_Q as Problem C. Now, let us make some reductions. First of all, by scaling, we may assume

(2.13)    |Ω| ≡ meas(Ω) = 1.
Next, we let

(2.14)    ĥ(x, y, u) = ( h(x, y, u) + L + 1 ) / ( 2(L + 1) ),    ∀(x, y, u) ∈ Ω × ℝ × U.

Then, by (2.1)–(2.2),

(2.15)    0 < 1/(2(L + 1)) ≤ ĥ(x, y, u) ≤ (2L + 1)/(2(L + 1)) < 1,    ∀(x, y, u) ∈ Ω × ℝ × U.

Since minimizing the essential supremum of h(·, y(·), u(·)) is equivalent to minimizing that of ĥ(·, y(·), u(·)), we may and will assume, without loss of generality, that

(2.16)    0 < 1/(2(L + 1)) ≤ h(x, y, u) ≤ (2L + 1)/(2(L + 1)) < 1,    ∀(x, y, u) ∈ Ω × ℝ × U.

We will keep assumptions (2.13) and (2.16) in the rest of the paper. Clearly, under these assumptions,

(2.17)    0 < 1/(2(L + 1)) ≤ m ≤ (2L + 1)/(2(L + 1)) < 1.
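The effect of the normalization (2.14) can be checked directly: whenever |h| ≤ L, the normalized values land in [1/(2(L+1)), (2L+1)/(2(L+1))] ⊂ (0, 1). A quick sketch (the value of L and the sample values are arbitrary):

```python
import numpy as np

L = 3.0                                    # the constant from assumption (i); arbitrary here
h = np.linspace(-L, L, 1001)               # any sample of values with |h| <= L

h_hat = (h + L + 1.0) / (2.0 * (L + 1.0))  # the normalization (2.14)

print(h_hat.min(), h_hat.max())            # 1/(2(L+1)) = 0.125 and (2L+1)/(2(L+1)) = 0.875
```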
To conclude this section, let us present an existence result for optimal controls. In this result, we do not need h to be of the form (1.4). Now, we define

(2.18)    E(x, y) = { (f(x, y, u), h(x, y, u) + r) | u ∈ U, r ≥ 0 },    (x, y) ∈ Ω × ℝ.

Theorem 2.2. Let E(x, y) be convex and closed for each (x, y) ∈ Ω × ℝ, and let 𝒜_Q be nonempty. Then, Problem C admits at least one solution.

Proof. Let (y_k(·), u_k(·)) ∈ 𝒜_Q be a minimizing sequence for Problem C. By (2.3)–(2.4), extracting a subsequence if necessary, we may assume

(2.19)    y_k(·) → ỹ(·) strongly in C(Ω̄),    y_k(·) ⇀ ỹ(·) weakly in W^{2,p}(Ω),

and

(2.20)    f(·, y_k(·), u_k(·)) ⇀ f̃(·),    h(·, y_k(·), u_k(·)) ⇀ h̃(·),    weakly in Lᵖ(Ω).

Thus, we have

(2.21)    Δỹ(x) = f̃(x)  in Ω,    ỹ|_∂Ω = 0.

Moreover, since h(x, y_k(x), u_k(x)) ≤ J(u_k(·)) a.e. x ∈ Ω, we obtain h̃(x) ≤ lim_{k→∞} J(u_k(·)) = m, a.e. x ∈ Ω. By Mazur's theorem and the convexity and closedness of E(x, ỹ(x)),

(f̃(x), h̃(x)) ∈ E(x, ỹ(x)),    a.e. x ∈ Ω.

Hence, by Filippov's lemma ([10]), there exists a ũ(·) ∈ 𝒰 such that

f̃(x) = f(x, ỹ(x), ũ(x)),    h̃(x) ≥ h(x, ỹ(x), ũ(x)),    a.e. x ∈ Ω.

Then ỹ(·) = y(·; ũ(·)) ∈ Q and J(ũ(·)) ≤ esssup_{x∈Ω} h̃(x) ≤ m, i.e., (ỹ(·), ũ(·)) solves Problem C. ∎
§3. The Maximum Principle.

In this section, we state the main result of this paper. We need the following stability assumption:

(H) For any sequence (y_k(·), u_k(·)) ∈ 𝒜 with

(3.1)    lim_{k→∞} d(y_k(·), Q) = 0,

where

d(y(·), Q) = inf_{q(·)∈Q} ‖y(·) − q(·)‖_Y,

it holds that

(3.2)    lim inf_{k→∞} J(u_k(·)) ≥ m.
This assumption seems crucial in our approach; we do not know whether it can be removed. A similar condition was used by the author in [19] for treating a nonsmooth problem. We will make some remarks on this assumption a little later. Next, let (ȳ(·), ū(·)) ∈ 𝒜_Q be an optimal pair. For any u(·) ∈ 𝒰, we introduce the variational system associated with (ȳ(·), ū(·)):

(3.3)    Δz(x) = f_y(x, ȳ(x), ū(x)) z(x) + f(x, ȳ(x), u(x)) − f(x, ȳ(x), ū(x))  in Ω,    z|_∂Ω = 0.

Clearly, under our assumptions, for any u(·) ∈ 𝒰, there exists a unique solution z(·; u(·)) ∈ Y of (3.3). We define

(3.4)    ℛ = { z(·; u(·)) | u(·) ∈ 𝒰 }.

This is called the reachable set of the variational system (3.3). It is clear that 0 ∈ ℛ. We say that Q ⊂ Y is finite codimensional in Y if there exists a point z ∈ Q such that span{Q − z} is a finite codimensional subspace of Y and the relative interior of co{Q − z} in span{Q − z} is nonempty. It is not hard to see that, in this definition of finite codimensionality, the choice of the point z ∈ Q is irrelevant. Now, let us state the main result of this paper.
Theorem 3.1. Let (ȳ(·), ū(·)) ∈ 𝒜_Q be optimal and let (H) hold. Moreover, let Q be finite codimensional in Y. Then, there exist (ψ⁰, φ) ∈ ([−1, 0] × Y*) \ {0} and λ ∈ (L^∞(Ω))* \ {0}, together with ψ(·) ∈ W_0^{1,p'}(Ω), p' = p/(p − 1) ∈ (1, n/(n − 1)), (ψ⁰, ψ(·)) ≠ 0, such that

(3.5)    Δψ(x) = f_y(x, ȳ(x), ū(x)) ψ(x) + ψ⁰ g_y(x, ȳ(x)) λ + φ  in Ω,    ψ|_∂Ω = 0,

(3.6)    supp λ ⊂ { x ∈ Ω | h(x, ȳ(x), ū(x)) = ‖h(·, ȳ(·), ū(·))‖_{L^∞(Ω)} },

(3.7)    λ ≥ 0,

(3.8)    ⟨φ, q(·) − ȳ(·)⟩ ≥ 0,    ∀q(·) ∈ Q,

(3.9)    ψ(x) f(x, ȳ(x), ū(x)) = max_{u∈U} ψ(x) f(x, ȳ(x), u),    a.e. x ∈ Ω₀,

where

(3.10)    Ω₀ = { x ∈ Ω | h(x, ȳ(x), ū(x)) < ‖h(·, ȳ(·), ū(·))‖_{L^∞(Ω)} },

and (3.6) is understood in the sense that

(3.11)    λ(S) ≡ ⟨λ, χ_S⟩ = 0,    for every measurable S ⊂ Ω₀.

We refer to (3.8) as the transversality condition and to (3.9) as the maximum condition. We see that if ψ(·) ≠ 0, then (3.9) gives a necessary condition for the optimal control ū(·). Whereas, if ψ(·) = 0, then (3.5) tells us that

(3.13)    ψ⁰ g_y(·, ȳ(·)) λ + φ = 0.

This implicitly gives a necessary condition for ū(·). Since (ψ⁰, φ) ≠ 0, we know that (3.13) is a nontrivial condition. A corresponding remark applies in the case where h genuinely depends on u.
To conclude this section, let us make some comments on (H). First of all, if Q = Y, i.e., there is no state constraint, then (H) holds. Secondly, if for each (x, y) ∈ Ω × ℝ the set f(x, y, U) is convex and closed, and h(x, y, u) ≡ h(x, y), then (H) holds. In fact, in this case, if (y_k(·), u_k(·)) ∈ 𝒜 satisfies (3.1), then we can show that there exists a pair (ỹ(·), ũ(·)) ∈ 𝒜_Q such that, for some subsequence,

(3.15)    y_k(·) → ỹ(·) strongly in C(Ω̄).

Thus,

(3.16)    m ≤ J(ũ(·)) ≤ lim inf_{k→∞} J(u_k(·)).

It is not hard to see that, actually, if the conditions of Theorem 2.2 hold, then (H) holds. Hence, assumption (H) is very general.
§4. Stability of the Optimal Cost Value.

We first present some preliminary lemmas.

Lemma 4.1. Let r > n and let c(·) ∈ L^∞(Ω) with ‖c(·)‖_{L^∞(Ω)} ≤ L. Let z(·) be the solution of

(4.1)    Δz(x) = c(x) z(x) + g̃(x)  in Ω,    z|_∂Ω = 0,

with g̃(·) ∈ L^r(Ω). Then it holds that

(4.2)    ‖z(·)‖_{W^{2,r}(Ω)} ≤ C ‖g̃(·)‖_{L^r(Ω)}.

Proof. By [16], we have

(4.3)    ‖z(·)‖_{L^∞(Ω)} ≤ C ‖g̃(·)‖_{L^r(Ω)}.

Hence, by the standard Lᵖ-estimates for elliptic equations,

‖z(·)‖_{W^{2,r}(Ω)} ≤ C ( ‖c(·)z(·)‖_{L^r(Ω)} + ‖g̃(·)‖_{L^r(Ω)} ) ≤ C ( ‖z(·)‖_{L^∞(Ω)} + ‖g̃(·)‖_{L^r(Ω)} ) ≤ C ‖g̃(·)‖_{L^r(Ω)}. ∎
Corollary 4.2. Let u(·), û(·) ∈ 𝒰 and set

(4.4)    E = { x ∈ Ω | u(x) ≠ û(x) }.

Let y(·) and ŷ(·) be the states corresponding to u(·) and û(·), respectively. Then, for any r > n,

(4.5)    ‖y(·) − ŷ(·)‖_{W^{2,r}(Ω)} ≤ C |E|^{1/r}.

Proof. The function z(·) = y(·) − ŷ(·) satisfies an equation of the form (4.1), and hence, by Lemma 4.1 and the boundedness of f,

(4.6)    ‖z(·)‖_{W^{2,r}(Ω)} ≤ C ‖[f(·, y(·), u(·)) − f(·, y(·), û(·))] χ_E(·)‖_{L^r(Ω)} ≤ C |E|^{1/r}. ∎

Next, for r > n, we define

(4.9)    J_r(u(·)) = ( ∫_Ω h(x, y(x; u(·)), u(x))^r dx )^{1/r},    ∀u(·) ∈ 𝒰,

and let

(4.10)    m_r = inf_{(y(·),u(·)) ∈ 𝒜_Q} J_r(u(·)).

The following result gives the stability of the optimal cost value. This result is crucial in the sequel. Also, we will find that h does not have to be of the form (1.4) in this result.

Theorem 4.3. Let (H) hold. Then

(4.11)    lim_{r→∞} m_r = m ≡ inf_{(y(·),u(·)) ∈ 𝒜_Q} J(u(·)).
Lemma 4.4. There exists a nondecreasing continuous function ω̂ : [0, ∞) → [0, ∞) with ω̂(0) = 0 such that for any (y(·), u(·)) ∈ 𝒜 and β ∈ ℝ, there exists a (ỹ(·), ũ(·)) ∈ 𝒜 such that

(4.12)    h(x, ỹ(x), ũ(x)) ≤ β + ω̂(|M_β|),    a.e. x ∈ Ω,

where

(4.13)    M_β = { x ∈ Ω | h(x, y(x), u(x)) > β },

and

(4.14)    E ≡ { x ∈ Ω | u(x) ≠ ũ(x) } ⊂ M_β.

Proof. Write M = M_β. If |M| = 0 or |M| = 1 (recall |Ω| = 1), (4.12) is trivially true. Thus, we let 0 < |M| < 1. Let δ > 0 be such that

(4.15)    |O_δ(x) ∩ Ω| > |M|,    ∀x ∈ Ω̄,

where O_δ(x) is the open ball centered at x with radius δ. Then, we can choose xᵢ ∈ Ω̄, i ≥ 1, such that

(4.16)    Ω ⊂ ∪_{i≥1} O_δ(xᵢ).

By (4.15), we know that for each i ≥ 1 there exists an x̃ᵢ ∈ O_δ(xᵢ) \ M. For such x̃ᵢ, we know that

(4.17)    h(x̃ᵢ, y(x̃ᵢ), u(x̃ᵢ)) ≤ β.

Then, we define

(4.18)    ũ(x) = u(x),  x ∈ Ω \ M;    ũ(x) = u(x̃ᵢ),  x ∈ [O_δ(xᵢ) \ ∪_{j=1}^{i−1} O_δ(xⱼ)] ∩ M.

Clearly, (4.14) holds. By Corollary 4.2, we know that there exists a constant C, independent of u(·) and ũ(·), such that (with ỹ(·) = y(·; ũ(·)))

(4.19)    ‖y(·) − ỹ(·)‖_{W^{2,p}(Ω)} ≤ C |E|^{1/p} ≤ C |M|^{1/p}.

Finally, for x ∈ [O_δ(xᵢ) \ ∪_{j=1}^{i−1} O_δ(xⱼ)] ∩ M, combining (4.17), (4.19) and the continuity of h, we obtain (4.12) for a suitable nondecreasing continuous ω̂. ∎
Lemma 4.5. Let (H) hold. Then, for any sequence (y_r(·), u_r(·)) ∈ 𝒜 with

(4.22)    lim_{r→∞} d(y_r(·), Q) = 0,

it holds that

(4.23)    lim inf_{r→∞} J_r(u_r(·)) ≥ m.

Proof. Suppose (4.23) is not the case. Then, for some ε > 0 and some subsequence (still denoted by itself) (y_r(·), u_r(·)), we have

(4.24)    J_r(u_r(·)) = ( ∫_Ω h(x, y_r(x), u_r(x))^r dx )^{1/r} ≤ m − 2ε,    ∀r ≥ r₀.

Let

(4.25)    M̂_r = { x ∈ Ω | h(x, y_r(x), u_r(x)) > m − ε }.

Then,

(4.26)    m − 2ε ≥ J_r(u_r(·)) ≥ ( ∫_{M̂_r} h(x, y_r(x), u_r(x))^r dx )^{1/r} ≥ (m − ε) |M̂_r|^{1/r}.

Thus,

(4.27)    |M̂_r| ≤ ( (m − 2ε)/(m − ε) )^r → 0,    (r → ∞).

On the other hand, by Lemma 4.4 (with β = m − ε), there exists a (ỹ_r(·), ũ_r(·)) ∈ 𝒜 such that

(4.28)    h(x, ỹ_r(x), ũ_r(x)) ≤ m − ε + ω̂(|M̂_r|),    a.e. x ∈ Ω,

and

(4.29)    |{ x ∈ Ω | u_r(x) ≠ ũ_r(x) }| ≤ |M̂_r| → 0.

Hence, by (4.22), (4.29) and Corollary 4.2, lim_{r→∞} d(ỹ_r(·), Q) = 0, while (4.28) yields

lim sup_{r→∞} esssup_{x∈Ω} h(x, ỹ_r(x), ũ_r(x)) ≤ m − ε < m.

This contradicts (H), proving (4.23). ∎

Proof of Theorem 4.3. Since |Ω| = 1, we have

(4.32)    J_r(u(·)) ≤ J(u(·)),    ∀u(·) ∈ 𝒰,

and thus

(4.33)    lim sup_{r→∞} m_r ≤ m.

On the other hand, there exists (y_r(·), u_r(·)) ∈ 𝒜_Q such that

(4.34)    J_r(u_r(·)) ≤ m_r + 1/r,    r ≥ 1.

Noting that y_r(·) ∈ Q, by Lemma 4.5, we have

(4.35)    m ≤ lim inf_{r→∞} J_r(u_r(·)) ≤ lim inf_{r→∞} m_r.

Hence lim_{r→∞} m_r = m. ∎
§5. Proof of Theorem 3.1.

In this section, we prove Theorem 3.1. First, we introduce the Ekeland distance

d̂(u(·), û(·)) = |{ x ∈ Ω | u(x) ≠ û(x) }|,    ∀u(·), û(·) ∈ 𝒰.

From [8,11], we know that (𝒰, d̂) is a complete metric space. Now, for r > 1, we define

(5.1)    F_r(u(·)) = { [ (J_r(u(·)) − m_r + 1/r)⁺ ]² + d_Q(y(·; u(·)))² }^{1/2},    ∀u(·) ∈ 𝒰,

where d_Q(y(·)) = d(y(·), Q). Then, we see that F_r(·) is continuous on (𝒰, d̂) and

(5.2)    F_r(u(·)) > 0,    ∀u(·) ∈ 𝒰,

(5.3)    F_r(ū(·)) = J_r(ū(·)) − m_r + 1/r ≡ ε_r ≤ inf_{u(·)∈𝒰} F_r(u(·)) + ε_r,    ε_r → 0,  (r → ∞).

Here, we have used Theorem 4.3. Thus, by the Ekeland variational principle, we can find a u_r(·) ∈ 𝒰 such that

(5.4)    d̂(u_r(·), ū(·)) ≤ √ε_r,    F_r(û(·)) + √ε_r d̂(û(·), u_r(·)) ≥ F_r(u_r(·)),    ∀û(·) ∈ 𝒰.
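The Ekeland distance between two controls (the measure of the set where they differ) can be approximated on a grid. The sketch below (controls invented) also illustrates why the triangle inequality holds: {u₁ ≠ u₃} ⊂ {u₁ ≠ u₂} ∪ {u₂ ≠ u₃}.

```python
import numpy as np

# Grid approximation of d(u, v) = meas{x in Omega : u(x) != v(x)}, |Omega| = 1.
x = np.linspace(0.0, 1.0, 10000, endpoint=False)

def d(u, v):
    return np.mean(u != v)          # fraction of grid points where the controls differ

u1 = np.where(x < 0.3, 1, 0)        # bang-bang style controls, chosen arbitrarily
u2 = np.where(x < 0.5, 1, 0)
u3 = np.where(x < 0.9, 1, 0)

print(d(u1, u2), d(u2, u3), d(u1, u3))
```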
Let s > 0 be such that

Ω_s ≡ { x ∈ Ω | h(x, ȳ(x), ū(x)) ≥ m − s } ≠ ∅.

Then, we let

(5.5)    U_s(x) = { u ∈ U | h(x, ȳ(x), u) ≥ m − s },    x ∈ Ω_s,

and define

(5.6)    Ũ_s(x) = U_s(x),  x ∈ Ω_s;    Ũ_s(x) = U,  x ∈ Ω \ Ω_s.

Let 𝒱_s be the set of measurable selections of Ũ_s(·) and set

(5.7)    𝒰_s^r = { u(·) ∈ 𝒰 | u(x) = v(x), x ∈ Ω_s;  u(x) = u_r(x), x ∈ Ω \ Ω_s },

with v(·) ∈ 𝒱_s. Next, we fix any u(·) ∈ 𝒰_s^r. For any ρ ∈ (0, 1), we make a spike variation of the control: let E_ρ ⊂ Ω be as yet undetermined, satisfying

(5.8)    |E_ρ| = ρ|Ω| = ρ,

and define

(5.9)    u_r^ρ(x) = u_r(x),  x ∈ Ω \ E_ρ;    u_r^ρ(x) = u(x),  x ∈ E_ρ.

Thus, u_r^ρ(·) is obtained by changing the values of u_r(·) only on the set E_ρ. Due to the fact that no convexity is available for U, only such perturbations are allowed.
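In a discretized setting the spike variation is easy to picture. The sketch below (controls invented) uses a single small interval for the perturbation set; the paper chooses the set satisfying (5.8) more carefully so that the expansion (5.11) holds.

```python
import numpy as np

# Spike variation: replace u by a competing control v only on a set E of measure rho.
n = 10000                            # uniform grid on a measure-one domain
rho = 0.01

u = np.zeros(n)                      # current control (invented values)
v = np.ones(n)                       # competing control
E = np.zeros(n, dtype=bool)
E[4000:4000 + int(rho * n)] = True   # a single small interval with |E| = rho

u_rho = np.where(E, v, u)            # the spike-perturbed control agrees with u off E

print(np.mean(u_rho != u))           # measure of the perturbation set = 0.01
```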
We let y_r(·) and y_r^ρ(·) be the states corresponding to the controls u_r(·) and u_r^ρ(·), respectively. From [21] (see also [11]), we know that there exists a set E_ρ satisfying (5.8) such that the following hold:

(5.11)    y_r^ρ(·) = y_r(·) + ρ z_r(·) + η_r^ρ(·),    ‖η_r^ρ(·)‖_{W^{2,p}(Ω)} = o(ρ),    (ρ → 0),

and

(5.12)    ∫_Ω ( 1 − (1/ρ) χ_{E_ρ}(x) ) [ h(x, y_r(x), u(x))^r − h(x, y_r(x), u_r(x))^r ] dx = o(1),    (ρ → 0),

where z_r(·) is the solution of the following problem:

(5.13)    Δz_r(x) = f_y(x, y_r(x), u_r(x)) z_r(x) + f(x, y_r(x), u(x)) − f(x, y_r(x), u_r(x))  in Ω,
          z_r|_∂Ω = 0.
By Corollary 4.2 and the estimate d̂(u_r(·), ū(·)) ≤ √ε_r, we have

(5.14)    y_r(·) → ȳ(·) in W^{2,p}(Ω), and hence in C(Ω̄),    (r → ∞).

Moreover, along a subsequence,

(5.15)    z_r(·) → z(·) in W^{2,p}(Ω),    (r → ∞),

where z(·) solves

(5.16)    Δz(x) = f_y(x, ȳ(x), ū(x)) z(x) + f(x, ȳ(x), u(x)) − f(x, ȳ(x), ū(x))  in Ω,    z|_∂Ω = 0.

Now, we take û(·) = u_r^ρ(·) in (5.4), divide by ρ and let ρ → 0. Setting

(5.17)    ψ⁰_r = ( J_r(u_r(·)) − m_r + 1/r )⁺ / F_r(u_r(·)) ∈ [0, 1],    φ_r = d_Q(y_r(·)) ∇d_Q(y_r(·)) / F_r(u_r(·)) ∈ Y*,

and using (5.11)–(5.12), we obtain

(5.18)    ψ⁰_r J_r(u_r(·))^{1−r} ∫_Ω ĥ_r(x)^{r−1} [ h_y(x, y_r(x), u_r(x)) z_r(x) + h_r(x) ] dx + ⟨φ_r, z_r(·)⟩ ≥ −√ε_r,

where

(5.19)    h_r(x) = h(x, y_r(x), u(x)) − h(x, y_r(x), u_r(x)),    ĥ_r(x) = h(x, y_r(x), u_r(x)),

and, by (5.1) and (5.17),

(ψ⁰_r)² + ‖φ_r‖²_{Y*} = 1.
Next, we define the measures

(5.20)    λ_r(dx) = ( ĥ_r(x)^{r−1} / ∫_Ω ĥ_r(y)^{r−1} dy ) dx.

Then λ_r ≥ 0 and

(5.21)    λ_r(Ω) = 1,    ∀r > 1.

Hence, extracting a further subsequence if necessary, we may assume

(5.22)    λ_r ⇀ λ,    weak-* in (L^∞(Ω))*.

Moreover, using (5.14) and the definition of Ω_s, one can show that

(5.25)    λ_r(Ω \ Ω_s) → 0,    (r → ∞),

and consequently, by (5.14) and (5.22),

(5.26)    ⟨λ_r, g_y(·, y_r(·)) z(·)⟩ → ⟨λ, g_y(·, ȳ(·)) z(·)⟩,    (r → ∞).

Finally, since ∇d_Q(y_r(·)) is a subgradient of the convex function d_Q(·) at y_r(·), we have

(5.27)    ⟨φ_r, q(·) − y_r(·)⟩ ≤ 0,    ∀q(·) ∈ Q.
Thus, combining (5.14), (5.16) and (5.25)–(5.27), we have (it is here that we need h_y = g_y to be independent of u!)

(5.28)    ψ⁰_r ⟨λ_r, g_y(·, y_r(·)) z(·) + ℓ(·, u(·)) − ℓ(·, u_r(·))⟩ + ⟨φ_r, z(·)⟩ ≥ −δ_r,    ∀z(·) ∈ ℛ_s,

with δ_r → 0 as r → ∞. Here, ℛ_s is the reachable set of the variational system (3.3) with controls taking values in Ũ_s(·). Hence, by the finite codimensionality of Q, similarly to [21], we may let

(5.29)    ψ⁰_r → ψ̃⁰,    φ_r ⇀ φ̃  weakly in Y*,

for some (ψ̃⁰, φ̃) ∈ ([0, 1] × Y*) \ {0}. Thus, taking limits in (5.27) gives ⟨φ̃, q(·) − ȳ(·)⟩ ≤ 0 for all q(·) ∈ Q. Next, we take limits in (5.28) to obtain

(5.30)    0 ≤ ψ̃⁰ ⟨λ, g_y(·, ȳ(·)) z(·) + ℓ(·, u(·)) − ℓ(·, ū(·))⟩ + ⟨φ̃, z(·)⟩,    ∀z(·) ∈ ℛ_s.

Then, we let

(5.31)    ψ⁰ = −ψ̃⁰,    φ = −φ̃,

so that (ψ⁰, φ) ∈ ([−1, 0] × Y*) \ {0}, the transversality condition (3.8) holds, and (5.30) becomes

(5.32)    ψ⁰ ⟨λ, g_y(·, ȳ(·)) z(·) + ℓ(·, u(·)) − ℓ(·, ū(·))⟩ + ⟨φ, z(·)⟩ ≤ 0,    ∀z(·) ∈ ℛ_s.
We let ψ_r(·) ∈ W_0^{1,p'}(Ω) be the solution of

(5.33)    Δψ_r(x) = f_y(x, y_r(x), u_r(x)) ψ_r(x) − ψ⁰_r λ_r(x) g_y(x, y_r(x)) − φ_r  in Ω,    ψ_r|_∂Ω = 0,

where λ_r(x) denotes the density in (5.20). By [1,10,16],

(5.34)    ‖ψ_r(·)‖_{W^{1,p'}(Ω)} ≤ C ( ‖λ_r‖_{L¹(Ω)} + ‖φ_r‖_{Y*} ) ≤ C.

Then, we may let (along a further subsequence)

(5.35)    ψ_r(·) ⇀ ψ(·)  weakly in W_0^{1,p'}(Ω),    ψ_r(·) → ψ(·)  strongly in L^{p'}(Ω).
Clearly, ψ(·) is the solution of (3.5). Then, combining (5.32) with some straightforward computations, we can obtain

(5.36)    ψ(x) f(x, ȳ(x), ū(x)) = max_{u∈U} ψ(x) f(x, ȳ(x), u),    a.e. x ∈ Ω₀.

Thus, the maximum condition (3.9) follows. Now, we show (3.6). If it is not the case, then there exists a measurable set S ⊂ Ω₀ such that

(5.37)    λ(S) > 0,

and, for some ε > 0 and r₀ ≥ 1,

(5.38)    h(x, y_r(x), u_r(x)) ≤ m − 2ε,    x ∈ S, ∀r ≥ r₀.

Hence,

λ(S) = lim_{r→∞} ∫_S λ_r(x) dx ≤ lim_{r→∞} ( (m − 2ε)/(m − ε) )^{r−1} |S| = 0.

This contradicts (5.37). Thus, (3.6) holds. Finally, from (5.20) we see that (3.7) holds.

In the case where h is independent of u, we have h_r ≡ 0, and the proof can be carried out without considering Ω_s, 𝒰_s^r, etc. Thus, the final conclusion of Theorem 3.1 follows. ∎
References
[1] V. Barbu, Optimal Control of Variational Inequalities, Pitman, Boston, 1984.
[2] E. N. Barron, The Pontryagin maximum principle for minimax problems of optimal
control, Nonlinear Anal., 15 (1990), 1155{1165.
[3] E. N. Barron and H. Ishii, The Bellman equation for minimizing the maximum cost,
Nonlinear Anal., 13 (1989), 1067{1090.
[4] L. D. Berkovitz, Optimal Control Theory, Springer-Verlag, New York, 1974.
[5] J. F. Bonnans and E. Casas, Optimal control of semilinear multistate systems with state constraints, SIAM J. Control & Optim., 27 (1989), 446–455.
[6] J. F. Bonnans and E. Casas, Un principe de Pontryagine pour le contrôle des systèmes semilinéaires elliptiques, J. Diff. Equ., 90 (1991), 288–303.
[7] E. Casas and L. A. Fernandez, Optimal control of semilinear elliptic equations with pointwise constraints on the gradient of the state, preprint.
[8] I. Ekeland, Nonconvex minimization problems, Bull. Amer. Math. Soc. (New Series), 1 (1979), 443–474.
[9] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd Edition, Springer-Verlag, 1983.
[10] C. J. Himmelberg, M. Q. Jacobs and F. S. Van Vleck, Measurable multifunctions,
selectors and Filippov's implicit functions lemma, J. Math. Anal. Appl., 25 (1969),
276{284.
[11] X. Li and J. Yong, Necessary conditions of optimal control for distributed parameter
systems, SIAM J. Control & Optim., 29 (1991), 895{908.
[12] J. L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer-Verlag, New York, 1971.
[13] C. Miranda, Partial Differential Equations of Elliptic Type, Springer-Verlag, New York, 1970.
[14] L. Neustadt, Optimization, Princeton Univ. Press, NJ, 1976.
[15] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mischenko, Mathematical Theory of Optimal Processes, Wiley, New York, 1962.
[16] G. Stampacchia, Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus, Ann. Inst. Fourier Grenoble, 15 (1965), 189–258.
[17] G. M. Troianiello, Elliptic Differential Equations and Obstacle Problems, Plenum Press, New York, 1987.
[18] Y. Yao, Optimal control for a class of elliptic equations, Control Theory & Appl., 1
(1984) No.3, 17{23 (in Chinese).
[19] J. Yong, A maximum principle for the optimal controls for a nonsmooth semilinear evolution system, Analysis and Optimization of Systems, A. Bensoussan and J. L. Lions eds., Lecture Notes in Control & Inform. Sci., Vol. 144, Springer-Verlag, 1990, 559–569.
[20] J. Yong, Existence theory for optimal control of distributed parameter systems, Kodai
Math. J., 15 (1992), 193{220.
[21] J. Yong, Pontryagin maximum principle for second order partial differential equations and variational inequalities, Diff. Int. Equ., 5 (1992), 1307–1334.