
Automatica 38 (2002) 1591 – 1599

www.elsevier.com/locate/automatica

Brief Paper
Robust control of nonlinear systems with parametric uncertainty 
Qian Wang, Robert F. Stengel ∗
Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544, USA
Received 26 August 1999; received in revised form 7 September 2001; accepted 8 March 2002

Abstract

Probabilistic robustness analysis and synthesis for nonlinear systems with uncertain parameters are presented. Monte Carlo simulation
is used to estimate the likelihood of system instability and violation of performance requirements subject to variations of the probabilistic
system parameters. Stochastic robust control synthesis searches the controller design parameter space to minimize a cost that is a function
of the probabilities that design criteria will not be satisfied. The robust control design approach is illustrated by a simple nonlinear example.
A modified feedback linearization control is chosen as controller structure, and the design parameters are searched by a genetic algorithm
to achieve the tradeoff between stability and performance robustness. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Stochastic robustness; Monte Carlo simulation; Input-to-state stability; Genetic algorithm

This paper was not presented at the IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Zhihua Qu under the direction of Editor Hassan Khalil. Corresponding author: R. F. Stengel; fax: +1-609-258-6109; e-mail addresses: wangqian@alumni.princeton.edu (Q. Wang), stengel@princeton.edu (R.F. Stengel).

1. Introduction

The problem of designing a robust state feedback control that provides global stability for an uncertain nonlinear system has been the subject of considerable research over the last decade. For nonlinear systems with unknown static nonlinearities satisfying known functional bounds, the current literature focuses on deterministic worst-case robust analysis and synthesis. Early nonlinear design methods (Gutman, 1979; Corless & Leitmann, 1981; Barmish, Corless, & Leitmann, 1983) were based on the "matching condition" assumption that the uncertainty enters into the state equation at the same point as the control input. For nonlinear systems containing unmatched uncertainty, recent results (Marino & Tomei, 1993; Slotine & Hedrick, 1993; Freeman & Kokotovic, 1993a; and references therein) are based on the technique of integrator backstepping (Kanellakopoulous, Kokotovic, & Morse, 1991).

For parametric uncertainty, guaranteed stability-bound estimates often are unduly conservative, and the resulting controller usually needs very high control effort (Freeman & Kokotovic, 1993b). Even for linear systems, many worst-case deterministic robust control problems are non-polynomial (NP) hard (Braatz, Young, Doyle, & Morari, 1994). If, instead of worst-case guaranteed conclusions, probabilistic conclusions are acceptable, computation complexity can be reduced significantly. Probabilistic control design uses randomized algorithms with polynomial complexity to characterize system robustness (Ray & Stengel, 1993; Marrison, 1995; Tempo, Bai, & Dabbene, 1997; Khargonekar & Tikku, 1996; and references therein) and to identify satisfactory controllers. Furthermore, probabilistic robustness analysis and control design directly attack the real engineering problem of interest by evaluating the likelihood that the design requirements would fail subject to the uncertainties, while deterministic robust control theories usually need to transform the engineering design criteria to fit their own frameworks.

Since many problems in robustness analysis and synthesis can be formulated as the minimization of an objective function with respect to the controller parameters, we believe that creative combinations of a variety of pre-existing control methodologies and the probabilistic approach will result in powerful tools that can address real engineering control problems. The probabilistic approach has been applied to a linear benchmark robust control problem (Wie & Bernstein, 1990) for which linear–quadratic–Gaussian regulators (Marrison & Stengel, 1995) and transfer function sweep designs (Wang & Stengel, 2001) were given to minimize a probabilistic robustness design cost.


The probabilistic approach is readily applied to nonlinear designs as well as to linear designs. In this paper, we present a framework for nonlinear robust control by merging the stochastic point of view with feedback linearization and backstepping. In Section 2, we propose the general approach for probabilistic robust control of nonlinear systems. In Section 3, for a nonlinear model that is an adaptation of a linear benchmark problem, the state feedback controller structure and robustness cost function are formulated. In Section 4, a genetic algorithm searches the design parameters of the probabilistic robust feedback control law. Simulation results are presented for stability and performance robustness of the closed-loop system.

2. Stochastic robust control design for nonlinear systems

2.1. Stochastic robustness

Denote a plant structure as H(q), where q is a vector of varying plant parameters selected randomly throughout the parameter space, Q, reflecting the expected distribution, pr(q). Stochastic robustness characterizes a control law, C(d), with d as the design parameter, in terms of the probabilities that the closed-loop system will have unacceptable stability and performance when subjected to parametric uncertainties. For each design requirement on stability or performance, define the corresponding binary indicator function I[·] as one if H(q) and C(d) form an unacceptable system and as zero otherwise. For example, if the closed-loop system is unstable, its indicator function equals one; otherwise it equals zero. If a step response lies outside a desired performance envelope, its indicator function is one; otherwise it is zero. Then, for each design requirement, the probability of design requirement violation, P, can be calculated as the integral of the corresponding indicator function over the expected system parameter space:

P(d) = ∫_Q I[H(q), C(d)] pr(q) dq.   (1)

This probability can be viewed as the expected value of the indicator function.

The stochastic robustness cost function, J(d), is formalized by combining the probabilities for different kinds of design requirements with certain tradeoffs:

J(d) = fcn[P_1(d), P_2(d), ...].   (2)

The goal is to find the optimal controller parameter d* that minimizes the cost function J. For example, the design cost may trade off the probability of system instability, of unsatisfactory performance, and of excess use of control.

In most cases, Eq. (1) cannot be integrated analytically. Monte Carlo evaluation (MCE) is a practical and flexible alternative to estimate the probabilities (Kalos & Whitlock, 1986). The estimates of the probability and cost based on N samples are

P̂(d) = (1/N) Σ_{k=1}^{N} I[H(q_k), C(d)],   (3)

Ĵ(d) = fcn[P̂_1(d), P̂_2(d), ...].   (4)

The estimated cost Ĵ approaches the actual cost J in the limit as N → ∞. For a finite N, because the probabilities defined above are all binary variables, with the outcome of a trial taking one of two possible values (acceptable or unacceptable) for each MCE, the binomial test can be applied to determine exact confidence intervals for the estimates of the probabilities. Stengel and Ray (1991) provide the formula to compute the lower and upper bounds of the confidence interval for the MCE corresponding to any specified confidence coefficients.
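As a concrete illustration of Eqs. (3) and (4) and the exact binomial confidence bounds, the sketch below estimates a violation probability by Monte Carlo sampling and attaches a Clopper–Pearson interval. The plant model, indicator function, and parameter distribution shown here are placeholders, not the paper's; the paper relies on the formula in Stengel and Ray (1991), and the scipy-based interval used here is simply one standard way to obtain an equivalent exact binomial interval.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(n_fail, n_trials, conf=0.95):
    """Exact binomial confidence interval for a Monte Carlo probability estimate."""
    alpha = 1.0 - conf
    lo = beta.ppf(alpha / 2, n_fail, n_trials - n_fail + 1) if n_fail > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, n_fail + 1, n_trials - n_fail) if n_fail < n_trials else 1.0
    return lo, hi

def estimate_violation_probability(indicator, sample_q, n_samples=500, seed=0):
    """Monte Carlo estimate of Eq. (3): P_hat = (1/N) * sum of indicator outcomes."""
    rng = np.random.default_rng(seed)
    failures = sum(indicator(sample_q(rng)) for _ in range(n_samples))
    p_hat = failures / n_samples
    return p_hat, clopper_pearson(failures, n_samples)

# Placeholder example: "unacceptable" whenever a crude stand-in stability test fails.
def sample_q(rng):                                  # hypothetical uniform distribution of [k1, k2]
    return rng.uniform([0.5, -0.5], [2.0, 0.2])

def indicator(q):                                   # hypothetical indicator I[H(q), C(d)]
    k1, k2 = q
    return 1 if (k1 + 5.0 * k2) <= 0 else 0         # stand-in instability criterion

p_hat, (lo, hi) = estimate_violation_probability(indicator, sample_q)
print(f"P_hat = {p_hat:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```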
for each 4xed t; (·; t) is of class K and for each 4xed
and as zero otherwise. For example, if the closed-loop sys-
r; (r; ·) is decreasing in t with (r; ·) → 0 as t → ∞.
tem is unstable, its indicator function equals one; otherwise
Consider a nonlinear system with the same input and out-
it equals zero. If a step response lies outside a desired per-
put dimensions in the presence of an uncertain parameter
formance envelope, its indicator function is one; otherwise
q ∈ Q:
it is zero. Then for each design requirement, the probabil-
ity of design requirement violation, P, can be calculated as ẋ = f (x; q) + G (x; q)u + P(x; q)w;
the integral of the corresponding indicator function over the G = [g1 g2 ··· gm ]; (5)
expected system parameter space:
 P = [p1 p2 ··· pl ];
P(d) = I [H (q); C(d )]pr(q) dq: (1) y = h(x; q); (6)
Q

This probability can be viewed as the expected value of the where G = [ g 1 g 2 · · · g m ]; P = [ P 1 P 2 · · · P l ]; f ; gj


indicator function. (j = 1; 2; : : : ; m) and pk (k = 1; 2; : : : ; l) are smooth vector
The stochastic robustness cost function, J (d), is formal- 4elds on Rn ; h is a smooth function mapping Rn → Rm ,
ized by combining the probabilities for di7erent kinds of and w is an essentially bounded unknown disturbance
design requirements with certain tradeo7s: vector.
With state feedback control u = u(x(t)), the input-to-state
J (d) = fcn[P1 (d ); P2 (d ); : : : ]: (2) stability (Sontag, 1989) of the closed-loop system with re-
spect to the disturbance w is de4ned as follows:
The goal is to 4nd the optimal controller parameter d ∗
that minimizes the cost function J . For example, the design Denition. The system is robust input-to-state stable with
cost may tradeo7 the probability of system instability, of respect to the disturbance w if ∀q ∈ Q; there exist a class
unsatisfactory performance, and of excess use of control. K function and class KL functions  and ; such that; for
In most cases, Eq. (1) cannot be integrated analytically. each x0 ∈ Rn ; and any continuous bounded w; the solution
Monte Carlo evaluation (MCE) is a practical and Kexible exists and satis4es
alternative to estimate the probabilities (Kalos & Whitlock,
1986). The estimates of the probability and cost based on |x(t)| 6 (|x0 |; t) + (w[0; t ∗ ] ; t − t ∗ ) + (w(t ∗ ; t] ) (7)
Q. Wang, R.F. Stengel / Automatica 38 (2002) 1591–1599 1593

for all t ∗ and t such that 0 6 t ∗ 6 t. | · | denotes the Eu- simulations are used to evaluate the probability of metric
clidean norm; and  ·  denotes the sup norm for which violation. Optimization algorithms are applied to search the
w:=ess sup {|w(t)|; t ¿ 0}. design parameter space to minimize the stochastic robust-
2.2.1. Nominal control system design

The following theorem describes the conditions under which our design method can be applied.

Theorem (proof is given in the appendix). In the nominal system of Eqs. (5)–(6), if we assume that (f, G) is exact feedback linearizable, and (f, P) takes a strict feedback form (i.e., lower triangular form), then there exists a nonlinear coordinate transformation ξ = T(x) and a feedback control u = −[G*(x)]⁻¹ f*(x) + [G*(x)]⁻¹ v (the definitions of T(x), f*(x), and G*(x) are given in the appendix) such that the system takes the following form:

ξ̇_1^i = ξ_2^i + [φ_1^i(ξ_1^i)]^T w,
⋮
ξ̇_{ρ_i−1}^i = ξ_{ρ_i}^i + [φ_{ρ_i−1}^i(ξ_1^i, ..., ξ_{ρ_i−1}^i)]^T w,
ξ̇_{ρ_i}^i = v_i + [φ_{ρ_i}^i(ξ_1^i, ..., ξ_{ρ_i}^i)]^T w,
i = 1, 2, ..., m.   (8)

The closed-loop system is input-to-state stable with respect to the disturbance w if the control is designed as

u = −[G*(x)]⁻¹ f*(x) + [G*(x)]⁻¹ [α_{ρ_1}^1 ··· α_{ρ_m}^m]^T   (9)

with intermediate controls α_j^i and error variables z_j^i calculated in terms of the backstepping process (Kristic, Kanellakopoulos, & Kokotovic, 1995)

z_j^i = ξ_j^i − α_{j−1}^i(ξ_1^i, ..., ξ_{j−1}^i),   (10)

α_j^i(ξ_1^i, ..., ξ_j^i) = −κ_j^i z_j^i − z_{j−1}^i + Σ_{k=1}^{j−1} (∂α_{j−1}^i/∂ξ_k^i) ξ_{k+1}^i
      − e_j^i z_j^i | φ_j^i − Σ_{k=1}^{j−1} (∂α_{j−1}^i/∂ξ_k^i) φ_k^i |²,   (11)

where z_0^i ≡ α_0^i ≡ 0, and κ_j^i, e_j^i, i = 1, 2, ..., m, j = 1, 2, ..., ρ_i, are positive design parameters.

2.2.2. Robust control design

Now we consider the parametric uncertainty q ∈ Q in Eqs. (5) and (6). We adopt controller (9) as our stochastic robust control law structure, which is parameterized by the design parameter vector d = {κ_j^i, e_j^i; i = 1, 2, ..., m; j = 1, 2, ..., ρ_i}. According to the system design requirements, we define corresponding robustness metrics and formulate a stochastic robustness cost function as in Eq. (2). For each candidate vector of the design parameters, Monte Carlo simulations are used to evaluate the probability of metric violation. Optimization algorithms are applied to search the design parameter space to minimize the stochastic robustness cost function.

2.2.3. Simplification of control design for special cases

If, in Eq. (8), the disturbance enters in a simpler way, specifically, if only the disturbance contained in ξ̇_{ρ_i}^i has a nonlinear coefficient, then the control law can be generated in a more conventional fashion.

Corollary. If Eq. (8) takes the form

ξ̇^i = A_i ξ^i + b_i v_i + C_i w + b_i [ψ_i(ξ^i)]^T w,   (12)

where

A_i = [0 1 0 ··· 0;  0 0 1 ··· 0;  ··· ;  0 0 0 ··· 1;  0 0 0 ··· 0]  (ρ_i × ρ_i),   b_i = [0 ··· 0 1]^T  (ρ_i × 1),   (13)

C_i is a (ρ_i × l) constant matrix, and ψ_i is an (l × 1) nonlinear function vector, then the closed-loop system is input-to-state stable with respect to the disturbance w with the control law v_i:

v_i = −r_i⁻¹ b_i^T P_i ξ^i − ε_i |ψ_i|² b_i^T P_i ξ^i,   r_i, ε_i > 0,   (14)

where P_i is the positive-definite solution to the Riccati equation with design parameters Q_i and r_i:

A_i^T P_i + P_i A_i − r_i⁻¹ P_i b_i b_i^T P_i + Q_i = 0,   Q_i > 0,  r_i > 0.   (15)

Considering system robustness to parameter uncertainty, we optimize the design parameter vector d = {Q_i, r_i, ε_i; i = 1, 2, ..., m} of control law (14) to minimize the stochastic robustness cost function.

In the following sections, we present an example to illustrate the design procedure. First, we derive the parameterized control law for the nominal system and formulate the stochastic robustness cost function. Then, we apply an optimization algorithm to search the controller parameter space so that the Monte Carlo estimate of the probabilistic robustness cost function is minimized.
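To make the corollary concrete, the sketch below solves the Riccati equation (15) for a single chain of integrators and assembles the state-feedback portion of the control law (14). The chain length, the weights Q_i and r_i, and the damping gain are arbitrary placeholder values, and scipy's general continuous-time algebraic Riccati solver stands in for whatever solution method the authors used.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def chain_of_integrators(n):
    """A_i, b_i of Eq. (13): an n-state chain of integrators."""
    A = np.diag(np.ones(n - 1), k=1)
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0
    return A, b

def corollary_control(xi, psi_value, P, r, eps):
    """Control law of Eq. (14): v = -(1/r) b'P xi - eps*|psi|^2 * b'P xi."""
    _, b = chain_of_integrators(len(xi))
    bPxi = float(b.T @ P @ xi)
    return -(1.0 / r) * bPxi - eps * np.linalg.norm(psi_value) ** 2 * bPxi

# Placeholder design parameters (in the paper these are tuned by the stochastic search).
n, r, eps = 4, 1.0, 0.5
Q = np.diag([1.0, 1.0, 1.0, 1.0])
A, b = chain_of_integrators(n)

# Eq. (15) is the CARE  A'P + PA - (1/r) P b b' P + Q = 0, i.e. R = r in standard form.
P = solve_continuous_are(A, b, Q, np.array([[r]]))

xi = np.array([0.2, -0.1, 0.0, 0.05])   # example transformed state
psi = np.array([0.03])                  # example value of the nonlinear coefficient
print("v =", corollary_control(xi, psi, P, r, eps))
```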
3. Stochastic robust control design for a nonlinear spring–mass system

As a simple example to demonstrate the design method proposed in Section 2, a nonlinear spring–mass system is considered (Fig. 1). This is an adaptation of the linear benchmark problem posed by Wie and Bernstein (1990). The linear spring of the earlier problem is replaced by a linear-cubic spring; therefore, the state-space model becomes

ẋ = [x3;  x4;  −(k1/m1)(x1 − x2) − (k2/m1)(x1 − x2)³;  (k1/m2)(x1 − x2) + (k2/m2)(x1 − x2)³]
    + [0; 0; 1/m1; 0] u + [0; 0; 0; 1/m2] w
  = f(x, q) + g(x, q)u + p(x, q)w,   (16)

y = x2 = h(x).   (17)

Fig. 1. A nonlinear system consisting of two masses connected by a linear-cubic spring.

We denote x1 and x2 as the positions of the masses, and x3 and x4 as their velocities. The plant is disturbed by w on the second mass, and the control u is applied to the first mass. The vector of system parameters is denoted by q = [k1 k2 m1 m2]^T, with nominal values k1⁰ = m1⁰ = m2⁰ = 1 and k2⁰ = −0.1. The limits of mass and spring-constant variations are

0.5 < k1 < 2,   −0.5 < k2 < 0.2,
0.5 < m1 < 1.5,   0.5 < m2 < 1.5.   (18)
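A short simulation sketch of the model (16)–(17) follows; it is illustrative only. The parameter values, disturbance signal, and integration settings are placeholder choices (the nominal parameters of Eq. (18) with zero control and a short pulse approximating an impulse), not the evaluation conditions used later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def spring_mass(t, x, q, u_fn, w_fn):
    """Nonlinear two-mass model of Eqs. (16)-(17): linear-cubic spring coupling."""
    k1, k2, m1, m2 = q
    x1, x2, x3, x4 = x
    s = x1 - x2
    force = k1 * s + k2 * s**3              # linear-cubic spring force
    u, w = u_fn(t, x), w_fn(t)
    return [x3,
            x4,
            (-force + u) / m1,              # control acts on the first mass
            (force + w) / m2]               # disturbance acts on the second mass

q_nominal = (1.0, -0.1, 1.0, 1.0)           # nominal k1, k2, m1, m2
u_open_loop = lambda t, x: 0.0              # placeholder: no control applied
w_pulse = lambda t: 1.0 if t < 0.1 else 0.0 # short pulse approximating an impulse

sol = solve_ivp(spring_mass, (0.0, 30.0), [0.0, 0.0, 0.0, 0.0],
                args=(q_nominal, u_open_loop, w_pulse), max_step=0.01)
print("output y = x2 at final time:", sol.y[1, -1])
```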
The single-input/single-output form of the transformation ξ = T(x) to Lie-derivative coordinates (defined in the appendix) is calculated as

ξ1 = h(x) = x2,
ξ2 = L_f h = x4,
ξ3 = L_f² h = (k1⁰/m2⁰)(x1 − x2) + (k2⁰/m2⁰)(x1 − x2)³,
ξ4 = L_f³ h = [k1⁰/m2⁰ + (3k2⁰/m2⁰)(x1 − x2)²](x3 − x4).   (19)

By this nonlinear transformation, the nominal system of Eq. (16) becomes

ξ̇1 = ξ2,
ξ̇2 = ξ3 + (1/m2⁰) w,
ξ̇3 = ξ4,
ξ̇4 = L_f⁴ h + L_g L_f³ h · u + L_p L_f³ h · w,   (20)

where L_f⁴ h, L_g L_f³ h, and L_p L_f³ h are Lie derivatives given below:

L_f⁴ h = (6k2⁰/m2⁰)(x1 − x2)(x3 − x4)²
       − (1/m1⁰ + 1/m2⁰)[k1⁰/m2⁰ + (3k2⁰/m2⁰)(x1 − x2)²][k1⁰(x1 − x2) + k2⁰(x1 − x2)³],   (21)

L_g L_f³ h = (1/(m1⁰ m2⁰))[k1⁰ + 3k2⁰(x1 − x2)²],   (22)

L_p L_f³ h = −(1/(m2⁰)²)[k1⁰ + 3k2⁰(x1 − x2)²].   (23)
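The Lie derivatives (19)–(23) can also be reproduced symbolically. The sketch below is offered only as a cross-check: it uses sympy (an assumption of this illustration, not a tool mentioned in the paper) to differentiate h = x2 repeatedly along the drift field f of Eq. (16) and then along g and p.

```python
import sympy as sp

x1, x2, x3, x4, k1, k2, m1, m2 = sp.symbols('x1 x2 x3 x4 k1 k2 m1 m2')
x = sp.Matrix([x1, x2, x3, x4])
s = x1 - x2

# Nominal drift, control, and disturbance vector fields of Eq. (16).
f = sp.Matrix([x3, x4, -(k1*s + k2*s**3)/m1, (k1*s + k2*s**3)/m2])
g = sp.Matrix([0, 0, 1/m1, 0])
p = sp.Matrix([0, 0, 0, 1/m2])

def lie(field, scalar):
    """Lie derivative of a scalar function along a vector field."""
    return sp.simplify(sp.Matrix([scalar]).jacobian(x) * field)[0, 0]

h = x2
Lfh = [h]
for _ in range(4):                                   # build h, Lf h, ..., Lf^4 h
    Lfh.append(lie(f, Lfh[-1]))

print("Lf^3 h  =", sp.factor(Lfh[3]))                # compare with xi_4 in Eq. (19)
print("Lf^4 h  =", sp.simplify(Lfh[4]))              # compare with Eq. (21)
print("LgLf^3 h =", sp.simplify(lie(g, Lfh[3])))     # compare with Eq. (22)
print("LpLf^3 h =", sp.simplify(lie(p, Lfh[3])))     # compare with Eq. (23)
```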
If L_g L_f³ h is not singular, the control law u for the nominal system can be defined as

u = (L_g L_f³ h)⁻¹ (−L_f⁴ h + v).   (24)

Then Eq. (20) takes the form of Eq. (12),

ξ̇ = Aξ + bv + cw − (3k2⁰/(m2⁰)²)(x1 − x2)² b w,   (25)

with

c = [0;  1/m2⁰;  0;  −k1⁰/(m2⁰)²].   (26)

By Eq. (14), we formulate the control law as

u = (1/(L_g L_f³ h)) [−L_f⁴ h − r⁻¹ b^T Pξ − ε(x1 − x2)² b^T Pξ],   (27)

where P is the positive-definite solution to the Riccati equation parameterized by Q and r as in Eq. (15). For simplicity, the matrix Q is chosen as a diagonal matrix, Q = diag{q1, q2, q3, q4}; therefore, the design parameter vector is defined as

d = {q1, q2, q3, q4, r, ε}.   (28)

We adopt the nominal control law (27) as our stochastic robust controller structure. Three aspects of system robustness are of concern in this design: stability, settling time, and control effort. The appropriate design parameter vector d achieves tradeoffs between stability and performance robustness by minimizing a stochastic robustness cost function

J = w_i P_i² + w_ts P_ts² + w_u P_u²,   (29)

where P_i, P_ts, and P_u represent the probability of instability, of settling-time exceedance, and of excess control effort, respectively. The probabilities are squared in Eq. (29) to place a higher penalty on large probabilities of unsatisfactory behavior, which may be of greater concern than low probabilities of design metric violation. The choice of the weights w_i, w_ts, and w_u has been addressed extensively in Marrison and Stengel (1995). In this paper, we choose the cost function weights as w_i = 1, w_ts = 0.01, and w_u = 0.01, which are the same weights as those of Design 1 in Marrison and Stengel (1995).
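For concreteness, a sketch of the controller structure (27) is given below; it assembles the Riccati gains from Eq. (15) and the nominal Lie derivatives (21)–(23). The numerical values of Q, r, and ε here are arbitrary placeholders rather than the optimized parameters reported later, and scipy's Riccati solver again stands in for whatever solution method the authors used.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

K1, K2, M1, M2 = 1.0, -0.1, 1.0, 1.0        # nominal k1, k2, m1, m2

# Chain-of-integrators pair (A, b) of Eq. (13) for the transformed system (20).
A = np.diag(np.ones(3), k=1)
b = np.array([[0.0], [0.0], [0.0], [1.0]])

def riccati_gain(q_diag, r):
    """Positive-definite P from Eq. (15), returned as the row vector b'P."""
    P = solve_continuous_are(A, b, np.diag(q_diag), np.array([[r]]))
    return (b.T @ P).ravel()

def transform(x):
    """Lie-derivative coordinates xi = T(x) of Eq. (19), nominal parameters."""
    x1, x2, x3, x4 = x
    s = x1 - x2
    return np.array([x2, x4, (K1*s + K2*s**3)/M2, (K1/M2 + 3*K2/M2*s**2)*(x3 - x4)])

def control(x, bP, r, eps):
    """Feedback-linearizing control of Eq. (27)."""
    x1, x2, x3, x4 = x
    s = x1 - x2
    Lf4h = (6*K2/M2)*s*(x3 - x4)**2 \
        - (1/M1 + 1/M2)*(K1/M2 + 3*K2/M2*s**2)*(K1*s + K2*s**3)    # Eq. (21)
    LgLf3h = (K1 + 3*K2*s**2) / (M1*M2)                            # Eq. (22)
    v_lin = float(bP @ transform(x))                               # b'P xi
    return (-Lf4h - v_lin/r - eps*s**2*v_lin) / LgLf3h

bP = riccati_gain(q_diag=[1.0, 1.0, 1.0, 1.0], r=1.0)   # placeholder design parameters
print("u at x = [0.3, 0.1, 0, 0]:", control([0.3, 0.1, 0.0, 0.0], bP, r=1.0, eps=0.5))
```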
4. Searching design parameters and simulation results

In Monte Carlo estimation of P_i, P_ts, and P_u, each candidate control law is evaluated subject to variations of the system parameters, variations of the initial state, and variations of the magnitude of the disturbance. We separate the design procedure into several stages. In the first stage, we fix the initial state at the origin and the disturbance as a unit impulse at the initial time, estimating only the effect of the system parameter variations on the closed-loop system. In the second stage, we change the initial condition as well as the system parameters, using a random number generator to specify the initial condition over the region of interest. In the final stage, we add variations of the magnitude of the disturbance, using a random number generator to specify the size of the disturbance within the interval of interest.

In this paper, we address just the first stage, i.e., we assume that the initial state is at the origin and that the disturbance is a unit impulse at the initial time. In addition to the definition of system stability, we identify the measures of system performance for the design at this specific stage as follows (a sketch of these indicator metrics is given after the list):

• Probability of exceeding the settling time, P_ts: m_ts = 1 if the time history of the output satisfies |x2(t)| > 0.1 for any t ≥ 15 s; m_ts = 0 otherwise.
• Probability of exceeding the control limit, P_u: m_u = 1 if the time history of the control input satisfies |u(t)| > 1 for any t; m_u = 0 otherwise.
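The sketch below shows one way these binary metrics could be evaluated on a simulated trajectory; the thresholds come from the definitions above, while the time grid, the toy trajectory data, and the divergence-based instability test are placeholder assumptions of this illustration.

```python
import numpy as np

def settling_metric(t, y, threshold=0.1, settle_after=15.0):
    """m_ts = 1 if |x2(t)| exceeds the threshold at any time beyond 15 s."""
    late = t >= settle_after
    return int(np.any(np.abs(y[late]) > threshold))

def control_metric(u, limit=1.0):
    """m_u = 1 if |u(t)| exceeds the control limit at any time."""
    return int(np.any(np.abs(u) > limit))

def instability_metric(x_norm, blowup=1e3):
    """Placeholder stability indicator: flag trajectories whose norm diverges."""
    return int(np.any(x_norm > blowup))

# Toy trajectory data standing in for one closed-loop Monte Carlo run.
t = np.linspace(0.0, 30.0, 3001)
y = 0.5 * np.exp(-0.2 * t) * np.cos(t)        # output x2(t)
u = 0.8 * np.exp(-0.3 * t)                    # control u(t)
x_norm = np.abs(y) + np.abs(u)                # crude surrogate for |x(t)|

print("m_ts =", settling_metric(t, y),
      " m_u =", control_metric(u),
      " m_i =", instability_metric(x_norm))
```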
 2
×   − [0:0967 0:5115 1:078 0:449]
Because Mount Carlo estimation of probability is used  3 
in the process of searching control design parameters, 4
the discrepancy between the Monte Carlo estimate and the   
1
true value results in apparent “noise” in the evaluation of 

  
the cost function. Furthermore, the cost function is not con-  2
×   (x1 − x2 )2 : (30)
vex; therefore, a genetic algorithm (Goldberg, 1989) is used  3  


to search for the control design parameters. The genetic 4
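A minimal sketch of such a binary-coded genetic algorithm is shown below. The encoding widths, parameter ranges, crossover and mutation rates, and the noisy quadratic stand-in for the cost are illustrative assumptions only; in the actual design, the fitness evaluation would be the 500-sample Monte Carlo estimate of Eq. (29).

```python
import numpy as np

rng = np.random.default_rng(1)
N_BITS, N_PARAMS, POP, GENS = 8, 6, 50, 20        # six parameters: q1..q4, r, eps
LOW, HIGH = 0.01, 10.0                            # placeholder search range

def decode(chrom):
    """Map each 8-bit slice of the chromosome to a real design parameter."""
    vals = []
    for i in range(N_PARAMS):
        bits = chrom[i*N_BITS:(i+1)*N_BITS]
        frac = int("".join(map(str, bits)), 2) / (2**N_BITS - 1)
        vals.append(LOW + frac * (HIGH - LOW))
    return np.array(vals)

def cost(d):
    """Stand-in for the Monte Carlo estimate of Eq. (29), including sampling noise."""
    return float(np.sum((d - 1.0)**2) + 0.01 * rng.standard_normal())

pop = rng.integers(0, 2, size=(POP, N_PARAMS * N_BITS))
for gen in range(GENS):
    costs = np.array([cost(decode(c)) for c in pop])          # evaluation
    parents = pop[np.argsort(costs)[:POP // 2]]               # selection (best half)
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        if rng.random() < 0.8:                                # crossover: swap tails
            cut = rng.integers(1, a.size)
            a = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(a.size) < 0.02                      # mutation: flip bits
        children.append(np.where(flip, 1 - a, a))
    pop = np.vstack([parents, *children])

best = decode(pop[np.argmin([cost(decode(c)) for c in pop])])
print("best design parameters found:", np.round(best, 3))
```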
We examine the effects of the probability distributions of the uncertainties on control law design and robustness. Stengel and Ray (1991) show that linear–quadratic regulator design is relatively insensitive to the assumed probability distribution of the parametric uncertainty. Barmish and Lagoa (1996) suggest that the uniform distribution is the worst case among bounded unimodal distributions on which robustness analysis may be based. In this paper, we design two controllers based on uniform (bounded) and Gaussian (unbounded) distributions, respectively, then compare the robustness of the two controllers when the actual distributions are either uniform or Gaussian.

First, we assume that the system's uncertain parameters have uniform probability distributions over the range given in Eq. (18). During the process of optimization, each candidate control law is evaluated by 500 Monte Carlo simulations. After 20 generations, the genetic algorithm produces control law (27) as

u = (1/(L_g L_f³ h)) {−L_f⁴ h − [0.2616  1.3831  2.9157  1.214] [ξ1; ξ2; ξ3; ξ4]
      − [0.0967  0.5115  1.078  0.449] [ξ1; ξ2; ξ3; ξ4] (x1 − x2)²}.   (30)

This control law contains two terms that are linear in the Lie-derivative coordinate vector, whose gains are determined by linear–quadratic regulation of the transformed system. The second term is weighted by the square of the difference in positions of the two masses, a direct effect of the cubic nonlinearity.

The design process is repeated with the assumption that the uncertain parameters have Gaussian distributions as

follows:

k1 = N(1.25, 0.1875),   k2 = N(−0.15, 0.0408),
m1 = N(1, 0.0833),   m2 = N(1, 0.0833).   (31)

For each system parameter, the mean and variance of the Gaussian distribution are chosen to be the same as those of the uniform distribution listed in Eq. (18).
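As a quick check of these numbers (an illustration, not part of the paper), the mean and variance of a uniform distribution on (a, b) are (a + b)/2 and (b − a)²/12, which reproduces the values in Eq. (31) from the bounds in Eq. (18):

```python
# Mean and variance of a uniform distribution on (a, b): (a+b)/2 and (b-a)^2/12.
bounds = {"k1": (0.5, 2.0), "k2": (-0.5, 0.2), "m1": (0.5, 1.5), "m2": (0.5, 1.5)}
for name, (a, b) in bounds.items():
    print(f"{name}: mean = {(a + b) / 2:.4f}, variance = {(b - a) ** 2 / 12:.4f}")
# Matches Eq. (31): 1.25/0.1875, -0.15/0.0408, 1.0/0.0833, 1.0/0.0833.
```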
The design for Gaussian distributions produces control law (27) as

u = (1/(L_g L_f³ h)) {−L_f⁴ h − [0.1403  0.7705  2.0611  0.9214] [ξ1; ξ2; ξ3; ξ4]
      − [0.0657  0.3297  0.8086  0.9835] [ξ1; ξ2; ξ3; ξ4] (x1 − x2)²}.   (32)

Table 1 shows the comparison of the robustness profiles of the two controllers. Controller 1 (Eq. (30)) assumes uniform uncertainty, and Controller 2 (Eq. (32)) assumes Gaussian uncertainty. Each evaluation is based on 25,000 Monte Carlo simulations, against either uniform or Gaussian evaluation distributions. In each cell, the first number is the probability or cost, and the numbers in brackets denote the 95% confidence interval of the Monte Carlo estimate of the probability or cost.

Table 1
Comparison of system robustness of controllers designed with uniform and Gaussian distributions (entry [95% confidence interval])

Controller 1 (designed for uniform uncertainty):
  J_cost:  uniform eval. 4.15e-4 [3.54e-4, 4.81e-4];  Gaussian eval. 2.5e-3 [2.29e-3, 2.79e-3]
  P_i:     uniform eval. 0.016 [0.0144, 0.0176];      Gaussian eval. 0.044 [0.0415, 0.0465]
  P_ts:    uniform eval. 0.0615 [0.0585, 0.0645];     Gaussian eval. 0.0952 [0.0916, 0.0988]
  P_u:     uniform eval. 0.11 [0.1061, 0.1139];       Gaussian eval. 0.225 [0.2198, 0.2302]

Controller 2 (designed for Gaussian uncertainty):
  J_cost:  uniform eval. 6.22e-4 [5.7e-4, 6.79e-4];   Gaussian eval. 1.4e-3 [1.2e-3, 1.6e-3]
  P_i:     uniform eval. 0.0107 [0.0094, 0.012];      Gaussian eval. 0.0319 [0.0297, 0.0341]
  P_ts:    uniform eval. 0.0809 [0.0775, 0.0843];     Gaussian eval. 0.0968 [0.0931, 0.1005]
  P_u:     uniform eval. 0.2104 [0.2053, 0.2155];     Gaussian eval. 0.1721 [0.1674, 0.1768]

As should be expected, if the evaluation distribution is uniform, the controller designed under that assumption yields a lower design cost than that of the controller designed for Gaussian uncertainty. Conversely, Controller 2 produces a lower cost than Controller 1 when the uncertainty is Gaussian. For all but one case, evaluation with uniform distributions produces a lower metric than evaluation with Gaussian distributions, contradicting the notion that the assumption of uniform uncertainty is conservative. For the present case, the reason is quite clear: the tails of Gaussian distributions are unbounded, and unsatisfactory values outside the bounds of the uniform distributions are likely to occur.

Controller 2 has a lower probability of instability than Controller 1 for both uniform and Gaussian evaluations, and it is less likely to use too much control with Gaussian uncertainty. However, it is twice as likely to use excessive control if the uncertainty is uniform, and its probability of unsatisfactory settling time is marginally higher than that of Controller 1.

The design procedure proposed in this paper can be applied very easily to any nonlinear system with feedback-linearizable nominal vector fields. The probabilistic robust control framework can be further extended to nonlinear systems that are not feedback linearizable by utilizing approximate feedback linearization (Hauser, Sastry, & Kokotovic, 1992), backstepping, and related techniques. In Wang and Stengel (2000), for the longitudinal motion of a hypersonic aircraft containing 28 inertial and aerodynamic uncertain parameters and 39 design metrics, the probabilistic design approach produces an efficient flight control system that achieves better stability and performance robustness than a comparable stochastic robust linear controller (Marrison & Stengel, 1998).

5. Conclusions

A general framework for designing robust controllers for uncertain nonlinear systems is presented. The method combines probabilistic design and evaluation with feedback linearization and backstepping techniques. The design approach reveals new insights about the robustness of feedback-linearization-type controllers, and it shows the relationship between nominal system characteristics and the probability of maintaining satisfactory performance when parameters of the system are uncertain. The proposed probabilistic approach is illustrated with a nonlinear design example that shows excellent stability and performance robustness. The example illustrates the effects of uniform

and Gaussian uncertainties on controller design and evaluation. For example, evaluation with Gaussian uncertainty produces more conservative results, while the assumption of Gaussian uncertainty in control design leads to a lower probability of instability when evaluated against actual uncertainties that are uniform or Gaussian. The method is not restricted by the number of uncertainties or the degree of nonlinearity of the system, and it provides a logical extension to prior results for robust linear control.

Acknowledgements

This research has been supported by the Federal Aviation Administration and the National Aeronautics and Space Administration under FAA Grant No. 95-G-0011.

Appendix.

Proof of the theorem. Define a transformation from the state to Lie-derivative coordinates ξ = T(x) as

ξ_1^i = h_i,
ξ_2^i = L_f h_i,
⋮
ξ_{ρ_i}^i = L_f^{ρ_i − 1} h_i,
i = 1, 2, ..., m,   (A.1)

where

L_f h_i = (∂h_i/∂x) f(x),   L_f^s h_i = L_f(L_f^{s−1} h_i).   (A.2)

For each h_i, ρ_i is defined as the smallest integer such that at least one j ∈ {1, 2, ..., m} satisfies L_{g_j}(L_f^{ρ_i − 1} h_i) ≠ 0. With the assumption that (f, G) is exact feedback linearizable (Isidori, 1989), Eq. (A.1) defines a set of coordinates that transforms the nominal system of Eq. (5) into

ξ̇_1^i = ξ_2^i + Σ_{k=1}^{l} (L_{p_k} h_i) w_k,
⋮
ξ̇_{ρ_i − 1}^i = ξ_{ρ_i}^i + Σ_{k=1}^{l} (L_{p_k} L_f^{ρ_i − 2} h_i) w_k,
ξ̇_{ρ_i}^i = L_f^{ρ_i} h_i + Σ_{j=1}^{m} (L_{g_j} L_f^{ρ_i − 1} h_i) u_j + Σ_{k=1}^{l} (L_{p_k} L_f^{ρ_i − 1} h_i) w_k,
i = 1, 2, ..., m.   (A.3)

The exact feedback linearization assumption also allows the existence of a feedback control law u = −[G*(x)]⁻¹ f*(x) + [G*(x)]⁻¹ v with f*(x) and a nonsingular G*(x) defined as

f*(x) = [L_f^{ρ_1} h_1;  L_f^{ρ_2} h_2;  ⋮ ;  L_f^{ρ_m} h_m],

G*(x) = [ L_{g_1} L_f^{ρ_1 − 1} h_1   L_{g_2} L_f^{ρ_1 − 1} h_1   ···   L_{g_m} L_f^{ρ_1 − 1} h_1 ;
          L_{g_1} L_f^{ρ_2 − 1} h_2   L_{g_2} L_f^{ρ_2 − 1} h_2   ···   L_{g_m} L_f^{ρ_2 − 1} h_2 ;
          ⋮ ;
          L_{g_1} L_f^{ρ_m − 1} h_m   L_{g_2} L_f^{ρ_m − 1} h_m   ···   L_{g_m} L_f^{ρ_m − 1} h_m ],   (A.4)

such that Eq. (A.3) becomes

ξ̇_1^i = ξ_2^i + Σ_{k=1}^{l} (L_{p_k} h_i) w_k,
⋮
ξ̇_{ρ_i − 1}^i = ξ_{ρ_i}^i + Σ_{k=1}^{l} (L_{p_k} L_f^{ρ_i − 2} h_i) w_k,
ξ̇_{ρ_i}^i = v_i + Σ_{k=1}^{l} (L_{p_k} L_f^{ρ_i − 1} h_i) w_k.   (A.5)

By the assumption that (f, P) takes a strict feedback form (i.e., Eq. (A.5) is in a lower-triangular form), which is a general assumption for nonlinear control system designs, there exist functions φ_1^i(ξ_1^i), φ_2^i(ξ_1^i, ξ_2^i), ..., φ_{ρ_i}^i(ξ_1^i, ξ_2^i, ..., ξ_{ρ_i}^i), which are l × 1 vectors, such that Eq. (A.5) becomes

ξ̇_1^i = ξ_2^i + [φ_1^i(ξ_1^i)]^T w,
⋮
ξ̇_{ρ_i − 1}^i = ξ_{ρ_i}^i + [φ_{ρ_i − 1}^i(ξ_1^i, ..., ξ_{ρ_i − 1}^i)]^T w,
ξ̇_{ρ_i}^i = v_i + [φ_{ρ_i}^i(ξ_1^i, ..., ξ_{ρ_i}^i)]^T w.   (A.6)

The new input v_i is designed to control a linear system subjected to disturbances with nonlinear coefficients. Following the backstepping procedure in Kristic, Kanellakopoulos, and Kokotovic (1995), error variables z_j^i and intermediate controls α_j^i are defined to aid in the expression of the control law v_i:

z_j^i = ξ_j^i − α_{j−1}^i(ξ_1^i, ..., ξ_{j−1}^i),   (A.7)

α_j^i(ξ_1^i, ..., ξ_j^i) = −κ_j^i z_j^i − z_{j−1}^i + Σ_{k=1}^{j−1} (∂α_{j−1}^i/∂ξ_k^i) ξ_{k+1}^i
      − e_j^i z_j^i | φ_j^i − Σ_{k=1}^{j−1} (∂α_{j−1}^i/∂ξ_k^i) φ_k^i |²,   (A.8)

where z0i ≡ 0i ≡ 0; and ij ; eji ; i = 1; 2; : : : ; m; j = 1; 2; : : : ; i stable with respect to the disturbance w. The controller is
are positive design parameters. The control law vi is de4ned u = −[G ∗ (x)]−1 f ∗ (x)
as vi = i i (i1 ; : : : ; ii ). The derivative of the error variable is
+ [G ∗ (x)]−1 [ 11 ··· mm ]T : (A.12)
j−1
 @ j−1
i
z˙ij = ij+1 + (’ij+1 )T w − [ik+1 + (’ik+1 )T w] References
@ik
k=1
Barmish, B. R., Corless, M., & Leitmann, G. (1983). A new class of
j−1
 i
@ j−1 stabilizing controllers for uncertain dynamical systems. SIAM Journal
= ji + i
zj+1 − ik+1 on Control and Optimization, 21, 246–255.
@ik
k=1 Barmish, B. R., & Lagoa, C. M. (1996). The uniform distribution: a
rigorous justi4cation for its use in robustness analysis. Proceedings
 j−1
T of the 35th conference on decision and control, Kobe, Japan (pp.
 i
@ j−1
+ ’ij+1 − ’ik+1 w 3418–3423).
@ik Corless, M. J., & Leitmann, G. (1981). Continuous state feedback
k=1
guaranteeing uniform ultimate boundedness for uncertain dynamic
systems. IEEE Transactions on Automatic Control, 26, 1139–1144.
= −ij zji − zj−1
i i
+ zj+1
Freeman, R. A., & Kokotovic, P. V. (1993a). Global robustness of
 T nonlinear systems to state measurement disturbance. Proceedings of
j−1
 i
@ j−1 the 32nd conference on decision and control, San Antonio (pp.
+ ’ij+1 − ’ik+1 w 1507–1512).
@ik Freeman, R., & Kokotovic, P. V. (1993b). Design of ‘softer’ robust
k=1
nonlinear control laws. Automatica, 29, 1425–1437.
 2 Goldberg, D. E. (1989). Genetic algorithm in search, optimization, and
 j−1
 @ i 
 j−1 i machine learning. Reading, MA: Addison Wesley Publishing Co.
− eji zji ’ij − ’ k  : (A.9)
 @ik  Gutman, S. (1979). Uncertain dynamical systems—a Lyapunov
k=1 min-max approach. IEEE Transactions on Automatic Control, 24,
437–443.
Hauser, J., Sastry, S., & Kokotovic, P. (1992). Nonlinear control via
De4ne the quadratic Lyapunov function
approximate input–output linearization: the ball and beam example.
IEEE Transactions on Automatic Control, 37, 392–398.
i
1 i 2 Isidori, A. (1989). Nonlinear control systems. Berlin: Springer.
Vi = (zj+1 ) : (A.10) Kalos, M. H., & Whitlock, P. A. (1986). Monte Carlo methods.
2 New York: Wiley.
j=1
Kanellakopoulous, I., Kokotovic, P. V., & Morse, A. S. (1991). Systematic
design of adaptive controllers for feedback linearizable systems. IEEE
The derivative of (A.10) along the solutions of (A.9) is Transactions on Automatic Control, 36, 1241–1253.
Khargonekar, P., & Tikku, A. (1996). Randomized algorithms for robust
 control analysis have polynomial time complexity. Proceedings of the
i
 j−1
T 35th conference on decision and control, Kobe (pp. 3470 –3475).
  i
@ j−1
V̇ i = −ij (zji )2 + zji ’ij+1 − ’ik+1 w Kristic, M., Kanellakopoulos, I., & Kokotovic, P. (1995). Nonlinear and
@ik adaptive control design. New York: Wiley (pp. 73–85).
j=1 k=1
Marino, R., & Tomei, P. (1993). Robust stabilization of feedback
 2  linearizable time-varying uncertain nonlinear systems. Automatica, 29,
 j−1
 @ i  181–189.
 j−1 i  
− eji (zji )2 ’ij − ’  Marrison, C. I. (1995). The design of control laws for uncertain dynamic
 @ik k  systems using stochastic robustness metrics. Ph.D. thesis, Princeton
k=1
NJ: Princeton University.
   Marrison, C. I., & Stengel, R. F. (1995). Stochastic robustness synthesis
i
  j−1
 i
@ j−1 
  applied to a benchmark control problem. International Journal of
6 − ij (zji )2 + |zji | ’ij+1 − ’ik+1  w∞
 @ik  Robust and Nonlinear Control, 5, 13–31.
j=1 k=1 Marrison, C. I., & Stengel, R. F. (1998). Design of robust control systems
 2  for a hypersonic aircraft. Journal of Guidance, Control, and Dynamics,
 j−1
 @ i  21(1), 58–63.
 j−1 
− eji (zji )2 ’ij − i ’ik  Ray, L. R., & Stengel, R. F. (1993). A Monte Carlo approach to the
 @k  analysis of control system robustness. Automatica, 29, 229–236.
k=1
Slotine, J. -J. E., & Hedrick, J. K. (1993). Robust input–output feedback
i
 
 2 linearization. International Journal of Control, 57, 1133–1139.
w ∞
6 −ij (zji )2 + : (A.11) Sontag, E. D. (1989). Smooth stabilization implies coprime factorization.
4eji IEEE Transactions on Automatic and Control, 34, 435–443.
j=1
Stengel, R. F., & Ray, L. R. (1991). Stochastic robustness of linear
time-invariant control systems. IEEE Transactions on Automatic and
This implies that zji (t) is globally uniformly bounded. Since Control, 36(1), 82–87.
Tempo, R., Bai, E. W., & Dabbene, F. (1997). Probabilistic robustness
ji are smooth functions; by Eq. (A.7); i is globally uni- analysis: explicit bounds for the minimum number of samples. Systems
formly bounded and the closed-loop system is input-to-state and Control Letters, 30, 237–242.

Wang, Q., & Stengel, R. F. (2000). Robust nonlinear control of a Robert Stengel is Professor and former
hypersonic aircraft. Journal of Guidance, Control, and Dynamics, Associate Dean of Engineering and Ap-
23(4), 577–585. plied Science at Princeton University. He
directs the Laboratory for Control and Au-
Wang, Q., & Stengel, R. F. (2001). Searching for robust minimal-order
tomation and the undergraduate Program
compensators. Transactions of the ASME Journal of Dynamic in Robotics and Intelligent Systems. Cur-
Systems, Measurement, and Control, 123(2), 233–236. rent research interests include bioinformat-
Wie, B., & Bernstein, B. S. (1990). A benchmark problem for robust ics, nonlinear, robust, and adaptive con-
control design. Proceedings of the 1990 American control conference, trol systems, dynamics of aerospace ve-
San Diego (pp. 961–962). hicles, optimization, and machine intelli-
gence. He teaches courses on control and
estimation, aircraft dynamics, and robotics
Qian Wang received her B.S. degree in and intelligent systems. He also has served with The Analytic Sciences
Mechanical engineering from Beijing Uni- Corporation, Charles Stark Draper Laboratory, USAF, and NASA. He was
versity, China in 1992 and the M.A. and a principal designer of the Apollo Lunar Module manual control logic,
Ph.D. degrees in mechanical and aerospace and he contributed to Space Shuttle control system design. Dr. Stengel
engineering from Princeton University received the S.B. degree from M.I.T. (1960) and M.S.E., M.A., and
in 1997 and 2001, respectively. She is Ph.D. degrees from Princeton University (1965 –1968). He is a Fellow
now with HP Labs as a post-doctoral of the Institute of Electrical and Electronics Engineers and a Fellow
research associate in the Storage Systems of the American Institute of Aeronautics and Astronautics. He received
Department. Her research interests include the AIAA Mechanics and Control of Flight Award (2000) and is a
probabilistic approaches to robust control, recipient of the FAA’s 4rst annual Excellence in Aviation Award (1997).
nonlinear control theory, and optimization Dr. Stengel wrote the book, Optimal Control and Estimation (Dover,
algorithms, with applications to aerospace, 1994) and has authored or co-authored numerous technical papers and
mechanical, information and computer systems. reports.
