
Nonlinear Quadratic Dynamic Matrix Control with State Estimation

Hao-Yeh Lee
Process System Engineering Laboratory
Department of Chemical Engineering
National Taiwan University
Reference
 Gattu, G., and E. Zafiriou, “Nonlinear Quadratic Dynamic Matrix Control with State Estimation,” Ind. Eng. Chem. Res., 31, 1096-1104 (1992).
 Ali, Emad, and E. Zafiriou, “On the Tuning of Nonlinear Model Predictive Control Algorithms,” American Control Conference, 786-790 (1993).
 Henson, M. A., and D. E. Seborg, Nonlinear Process Control, Prentice-Hall PTR (1997).

2
Outline
 Introduction
 Linear and Nonlinear QDMC
 Algorithm Formulation with State Estimation
 Example
 Tuning parameters
 Conclusions

3
Introduction
 Model predictive control (MPC)
 Dynamic matrix control (DMC; Cutler and Ramaker, 1979)
 An extension of DMC that handles constraints explicitly as linear inequalities was introduced by Garcia and Morshedi (1986) and denoted quadratic dynamic matrix control (QDMC).
 Garcia (1984) proposed an extension of QDMC to nonlinear processes.

4
Linear and nonlinear QDMC
 Linear QDMC uses a step or impulse response model of the process, while NLQDMC uses a model of the process given by nonlinear ordinary differential equations, which is linearized at each sampling point.
 This approximation is necessary for the on-line optimization to reduce to a single QP at each sampling point.

5
Algorithm formulation with state estimation
 For the general case of MIMO systems, consider process and measurement models of the form

    dx/dt = f(x, u) + w
    y = h(x) + v

– where x is the state vector, y is the output vector, u is the vector of manipulated variables, and w ~ (0, Q) and v ~ (0, R) are white noise. Q and R are the covariance matrices associated with the process and measurement noise.
6
Algorithm formulation with state estimation (cont’d)
 Known at sampling instant k: y(k), the plant measurement; x̂(k|k−1), the estimate of the state vector at k based on information at k−1; and u(k−1), the manipulated variable.

7
Effect of future manipulated variables
– Step 1: Linearize dx/dt = f(x, u) at x̂(k|k−1) and u(k−1) to obtain

    dx̂/dt = A_k x̂ + B_k u
    y = C_k x̂

  where

    A_k = [∂f/∂x] at x = x̂(k|k−1), u = u(k−1)
    B_k = [∂f/∂u] at x = x̂(k|k−1), u = u(k−1)
    C_k = [∂h/∂x] at x = x̂(k|k−1)

8
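When analytic derivatives are inconvenient, Step 1's Jacobians can be approximated numerically. A minimal sketch using forward differences; the function names and the perturbation size `eps` are illustrative assumptions, not from the paper:

```python
import numpy as np

def jacobian(fun, z0, eps=1e-6):
    """Forward-difference Jacobian of fun at z0."""
    f0 = np.asarray(fun(z0), dtype=float)
    J = np.zeros((f0.size, z0.size))
    for i in range(z0.size):
        z = z0.copy()
        z[i] += eps
        J[:, i] = (np.asarray(fun(z), dtype=float) - f0) / eps
    return J

def linearize(f, h, x_hat, u_prev):
    """Step 1: A_k, B_k, C_k at the current estimate and past input."""
    Ak = jacobian(lambda x: f(x, u_prev), x_hat)
    Bk = jacobian(lambda u: f(x_hat, u), u_prev)
    Ck = jacobian(h, x_hat)
    return Ak, Bk, Ck
```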
Effect of future manipulated variables (cont’d)
– Step 2: Discretize the above equations to obtain

    x̂(j+1) = Φ_k x̂(j) + Γ_k u(j)
    y(j) = C_k x̂(j)

– where Φ_k and Γ_k are discrete state-space matrices (e.g., Åström and Wittenmark, 1984), obtained from A_k, B_k, and the sampling time.

9
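Step 2's zero-order-hold discretization can be sketched with the Van Loan augmented-matrix construction. The truncated Taylor series for the matrix exponential is an assumption made to keep the sketch dependency-free (in practice `scipy.linalg.expm` or `scipy.signal.cont2discrete` would be used) and is adequate only for small, well-scaled matrices:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (small M only)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def discretize(Ak, Bk, Ts):
    """Step 2: Phi_k = e^{A Ts}, Gamma_k from the augmented exponential."""
    n, m = Bk.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ak * Ts
    M[:n, n:] = Bk * Ts
    E = expm_series(M)
    return E[:n, :n], E[:n, n:]
```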
Effect of future manipulated variables (cont’d)
– Step 3: Compute the step response coefficients S_{i,k} (i = 1, 2, ..., P), where P is the prediction horizon. S_{i,k} ∈ R^(ny×nu) can be obtained from

    S_{i,k} = Σ_{j=1}^{i} C_k Φ_k^(j−1) Γ_k

– Step response coefficients can also be obtained by numerical integration of the linearized model over P sampling intervals with u = 1.0 and x(t_k) = 0.0, where t_k is the time at any sampling point k.

10
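Step 3's summation can be sketched directly from the discrete matrices, accumulating C_k Φ_k^(j−1) Γ_k as the power of Φ_k is built up:

```python
import numpy as np

def step_coeffs(Phi, Gamma, C, P):
    """Step 3: S_i = sum_{j=1}^{i} C Phi^{j-1} Gamma for i = 1..P."""
    S = []
    acc = np.zeros((C.shape[0], Gamma.shape[1]))
    power = np.eye(Phi.shape[0])  # Phi^{j-1}, starting at identity
    for _ in range(P):
        acc = acc + C @ power @ Gamma
        power = power @ Phi
        S.append(acc.copy())
    return S
```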
Computation of filter gain
– Step 4: Compute the steady-state Kalman gain using the recursive relation (Åström and Wittenmark, 1984):

    P_(j+1)k = Φ_k [ P_jk − P_jk C_kᵀ (C_k P_jk C_kᵀ + R)⁻¹ C_k P_jk ] Φ_kᵀ + Q
    K_k = Φ_k P_∞k C_kᵀ (C_k P_∞k C_kᵀ + R)⁻¹

 where P_jk is the state covariance at iteration j for the model obtained by linearization at sampling point k, and P_∞k is the steady-state value of the state covariance for that model.

11
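Step 4's recursion can be sketched by iterating the Riccati update to (approximate) convergence and then forming the gain; the fixed iteration count is an illustrative assumption in place of a convergence test:

```python
import numpy as np

def kalman_gain(Phi, C, Q, R, iters=500):
    """Step 4: iterate the Riccati recursion to steady state, form K_k."""
    n = Phi.shape[0]
    P = np.eye(n)
    for _ in range(iters):
        S = C @ P @ C.T + R
        P = Phi @ (P - P @ C.T @ np.linalg.solve(S, C @ P)) @ Phi.T + Q
    S = C @ P @ C.T + R
    return Phi @ P @ C.T @ np.linalg.inv(S)
```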
Effect of past manipulated variables
– Step 5: The effect of past inputs on the future output predictions y*(k+1), y*(k+2), ..., y*(k+P) is computed as follows. The superscript * indicates that input values in the future are kept constant and equal to u(k−1).

 Set x̂*(k|k−1) = x̂(k|k−1)
 Define d(k|k) = y(k) − h(x̂(k|k−1))
 Assume d(k+i|k) = d(k|k), i = 1, 2, ..., P
 For i = 1, 2, ..., P, successively integrate dx/dt = f(x, u) over one sampling time from x̂*(k+i−1|k−1) with u(k+i−1) = u(k−1), and then add K_k d(k|k) to obtain x̂*(k+i|k−1). The addition of K_k d provides a correction to the state. We can then write

    y*(k+i) = h(x̂*(k+i|k−1)),  i = 1, 2, ..., P
12
Output prediction
– Step 6: The predicted output is computed as the sum of the effect of past and future manipulated variables and the future predicted disturbances:

    ŷ(k+l) = y*(k+l) + Σ_{i=1}^{l} S_{i,k} Δu(k+l−i) + d(k|k),  l = 1, 2, ..., P

  where y*(k+l) is the past effect, the summation is the future effect, and d(k|k) is the future disturbance term.
13
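Step 6's assembly can be sketched as below. The convention that moves beyond the supplied `du_future` list are zero (i.e., Δu = 0 past the control horizon) is an assumption of the sketch:

```python
import numpy as np

def predict_outputs(y_star, S, du_future, d):
    """Step 6: y_hat(k+l) = y*(k+l) + sum_{i=1}^{l} S_i du(k+l-i) + d."""
    P = len(y_star)
    y_hat = []
    for l in range(1, P + 1):
        acc = y_star[l - 1] + d
        for i in range(1, l + 1):
            j = l - i  # index of du(k+l-i); du_future[0] = du(k)
            if j < len(du_future):
                acc = acc + S[i - 1] @ du_future[j]
        y_hat.append(acc)
    return y_hat
```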
Optimization
– The optimization problem:

    min over Δu(k), ..., Δu(k+M−1) of
      Σ_{l=k+1}^{k+P} [ŷ(l) − r(l)]ᵀ Γ² [ŷ(l) − r(l)] + Σ_{l=1}^{M} Δu(k+l−1)ᵀ Λ² Δu(k+l−1)

 where P is the prediction horizon
 M is the number of future moves
 It is assumed that u(k+M−1) = u(k+M) = ... = u(k+P−1).
 Γ and Λ are diagonal weight matrices.

14
Optimization (cont’d)
 The above optimization problem with constraints can be written as a standard quadratic programming problem:

    min_U J = (1/2) Uᵀ G U + gᵀ U

 subject to

    Dᵀ U ≥ b

 where

    U = [Δu(k), ..., Δu(k+M−1)]ᵀ

 and D and b depend on the constraints on the manipulated variables, the changes in the manipulated variables, and the outputs.
15
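In practice the QP is handed to a standard solver. As a dependency-free illustration only, here is a projected-gradient sketch for the special case of simple bounds on U; it is a stand-in, not the solver used in the paper, and it does not handle the general inequality constraints:

```python
import numpy as np

def solve_qp_bounds(G, g, lo, hi, steps=2000):
    """Minimize 0.5 U^T G U + g^T U subject to lo <= U <= hi
    by projected gradient descent (illustrative stand-in for a QP solver)."""
    U = np.clip(np.zeros_like(g), lo, hi)
    step = 1.0 / np.linalg.norm(G, 2)  # 1/L for the quadratic's gradient
    for _ in range(steps):
        U = np.clip(U - step * (G @ U + g), lo, hi)
    return U
```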
Estimation of state
– Step 7: The M future manipulated variables are computed, but only the first move is implemented (Garcia and Morshedi, 1986).
– Step 8: Integrate dx/dt = f(x, u) from x̂(k|k−1) and u(k) over one sampling time and add K_k d to obtain x̂(k+1|k).

16
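Steps 5 and 8 both integrate the nonlinear model over one sampling time and then add the K_k d correction. A minimal sketch; the choice of a single RK4 step as the integrator is an illustrative assumption:

```python
import numpy as np

def rk4_step(f, x, u, ts):
    """One RK4 step of dx/dt = f(x, u) over the sampling time ts."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * ts * k1, u)
    k3 = f(x + 0.5 * ts * k2, u)
    k4 = f(x + ts * k3, u)
    return x + ts / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def state_update(f, x_hat, u_k, Kk, d, ts):
    """Step 8: integrate the nonlinear model one sampling time, add K_k d."""
    return rk4_step(f, x_hat, u_k, ts) + Kk @ d
```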
NLQDMC Simulation Procedure
[Flowchart: starting from the initial condition, the nonlinear model dx/dt = f(x, u) + w, y = h(x) + v is linearized to dx̂/dt = A_k x̂ + B_k u, y = C_k x̂; discretized with sampling time Ts to x̂(j+1) = Φ_k x̂(j) + Γ_k u(j), y(j) = C_k x̂(j); the Kalman filter gain K_k and the step response coefficients S_{i,k} = Σ_{j=1}^{i} C_k Φ_k^(j−1) Γ_k are computed; the effect of past manipulated variables is obtained by integrating the nonlinear model; the outputs are predicted; and the optimization of the objective function, given the setpoint, weighting matrices, horizons P and M, and constraints, yields Δu, which is applied to the plant model to update the state and controlled variables.]
17
Example
 For the reaction A + B ↔ P, the rate of decomposition of B is

    r_B = k1 CB / (1 + k2 CB)^2

– The system is described by a dynamic model of the form:

    dx1/dt = u1 + u2 − 0.2 x1^0.5
    dx2/dt = (u1/x1)(CB1 − x2) + (u2/x1)(CB2 − x2) − k1 x2 / (1 + k2 x2)^2
    y1 = x1
    y2 = x2
18
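The example's model equations can be sketched directly, using the parameter values given on the next slide (k1 = k2 = 1.0, CB1 = 24.9, CB2 = 0.1):

```python
import numpy as np

# Model constants from the example
K1, K2, CB1, CB2 = 1.0, 1.0, 24.9, 0.1

def f(x, u):
    """Isothermal CSTR: x = [liquid level, C_B], u = [u1, u2]."""
    x1, x2 = x
    u1, u2 = u
    dx1 = u1 + u2 - 0.2 * np.sqrt(x1)
    dx2 = (u1 / x1) * (CB1 - x2) + (u2 / x1) * (CB2 - x2) \
          - K1 * x2 / (1.0 + K2 * x2) ** 2
    return np.array([dx1, dx2])
```

Evaluating f near the middle steady state ([100, 2.787] at u1 = u2 = 1.0) gives derivatives close to zero, which is a quick consistency check of the transcription.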
Example (cont’d)
 Isothermal CSTR
[Schematic: two inlet streams, u1 with concentration CB1 and u2 with concentration CB2, feed a tank with liquid level x1 and reactor concentration x2.]
19
Example (cont’d)
– u1 is the inlet flow rate with concentrated B,
– u2 is the inlet flow rate with dilute B,
– x1 is the liquid height in the tank,
– x2 is the concentration of B in the reactor.
 The control problem is simulated with the values
– k1 = 1.0, k2 = 1.0,
– CB1 = 24.9, and CB2 = 0.1.

20
Multi-equilibrium points at steady state
 Multi-equilibrium points of CB at u1 = 1.0, u2 = 1.0:
– Lower steady state: α = [100, 0.6327]
– Middle steady state: β = [100, 2.7870]
– Upper steady state: γ = [100, 7.0747]
[Plot: x1 vs. x2, showing the three steady states along x1 = 100.]
21
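The three equilibria can be recovered numerically as zeros of the dx2/dt residual at the steady-state level x1 = 100 with u1 = u2 = 1.0. A sketch using simple bisection; the bracketing intervals are chosen by inspection of the residual's sign, and the recovered roots land close to the α, β, γ values above:

```python
def residual(x2, u1=1.0, u2=1.0, x1=100.0, k1=1.0, k2=1.0, cb1=24.9, cb2=0.1):
    """Steady-state residual of dx2/dt at fixed level x1."""
    return (u1 / x1) * (cb1 - x2) + (u2 / x1) * (cb2 - x2) \
        - k1 * x2 / (1.0 + k2 * x2) ** 2

def bisect(f, a, b, tol=1e-10):
    """Plain bisection on a sign-changing bracket [a, b]."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(m) * fa > 0:
            a, fa = m, f(m)
        else:
            b = m
        if b - a < tol:
            break
    return 0.5 * (a + b)

roots = [bisect(residual, *ab) for ab in [(0.0, 1.5), (1.5, 5.0), (5.0, 10.0)]]
```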
Simulation results
 A setpoint change from an initial condition of x10 = 40.00 and x20 = 0.1 to the unstable steady-state point with values x1 = 100.00 and x2 = 2.787.
– The lower bounds on u1 and u2 are kept at zero.
– The upper bounds are varied from 5 and 10 to ∞.
– Sampling time Ts = 1.0 min.
– Tuning parameter values P = 5 and M = 5.
– Weight matrices Λ = diag[0.0, 0.0] and Γ = diag[1.0, 1.0].

22
[Plot: level response for the setpoint change, rising from 40 to 100 within about 40 min.]
23
[Plot: concentration response for the setpoint change over 40 min.]

24
Simulation results (cont’d)
 The plant is running at the unstable steady state. Consider a step disturbance of 0.5 unit in u1.
 Sampling time Ts = 1.0 min.
 The tuning parameter values P = 5, M = 5, Λ = 0.0, u10 = 1.0, and u20 = 1.0 are used in the simulations.
 The lower bounds on u1 and u2 are kept at zero, and there are no upper bounds.

25
[Plot: level response to the step disturbance, ranging between about 99.9 and 100.5 over 40 min.]

26
[Plot: concentration response to the step disturbance, ranging between about 2.78 and 2.96 over 40 min.]

27
Tuning parameters
 System parameter
– Sampling time
 Tuning parameters
– Prediction horizon
 Longer horizons tend to produce more aggressive control action and greater sensitivity to disturbances.
– Control horizon
 Shortening the control horizon relative to the prediction horizon tends to produce less aggressive controllers, slower system response, and less sensitivity to disturbances.
– Penalty weights

28
Some problems of NLQDMC
 Truncation error in NLQDMC
 Different sampling times
 If the system has widely different responses in each loop
 Tuning problems in NLQDMC

29
Optimization-based tuning method

30
Optimization-based tuning method (cont’d)

31
Conclusion
 The proposed algorithm stabilizes open-loop unstable plants, and the incorporation of a Kalman filter also results in better disturbance rejection when compared to Garcia's algorithm.
 The major advantage of the proposed algorithm compared to nonlinear programming approaches is that only a single quadratic program is solved on-line at each sampling time.
 The software package CONSOLE can be used to solve an off-line optimization that tunes the NLQDMC parameters.

32
