ECE5590: Model Predictive Control

Model Predictive Control with Constraints

• We begin this section with a straightforward example:

Example 5.1

A continuous-time model of an undamped oscillator is given by

$$
\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -4 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} +
\begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)
$$

$$
y(t) = \begin{bmatrix} 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
$$
• The eigenvalues of the system matrix are located at $\lambda_{1,2} = \pm j2$ (purely imaginary), so the natural frequency is $\omega = 2$ rad/sec
• Assuming a sampling interval $\Delta t = 0.1$, we compute the corresponding discrete-time model:
" # " #" # " #
x1.k C 1/ 0:9801 :0993 x1.k/ 0:0993
D C u.k/
x2.k C 1/ ":3973 0:9801 x2.k/ "0:0199
" #
h i x .k/
1
y.k/ D 0 1
x2.k/
• The eigenvalues of the discrete-time system are located at $\lambda_{1,2} = 0.9801 \pm j\,0.1986$

– As we expect, both have modulus 1.0 and lie on the unit circle
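As a quick numerical check, the discretization and the unit-circle property can be reproduced with scipy (a sketch, not part of the original notes; it assumes numpy and scipy are available):

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time undamped oscillator from Example 5.1
Ac = np.array([[0.0, 1.0], [-4.0, 0.0]])
Bc = np.array([[1.0], [0.0]])
Cc = np.array([[0.0, 1.0]])
Dc = np.array([[0.0]])

# Zero-order-hold discretization at dt = 0.1 s
Ad, Bd, Cd, Dd, dt = cont2discrete((Ac, Bc, Cc, Dc), dt=0.1, method='zoh')

print(Ad)                             # ~[[0.9801, 0.0993], [-0.3973, 0.9801]]
print(Bd)                             # ~[[0.0993], [-0.0199]]
print(np.abs(np.linalg.eigvals(Ad)))  # both moduli ~1.0 (on the unit circle)
```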


• Our objective is to design a predictive control system so that the output of the plant tracks a unit step reference as fast as possible
• Tuning parameters for the problem are selected as:
  – $N_c = 3$
  – $N_p = 10$
  – $\bar{R} = 0$

⇒ What happens if the control magnitude saturates at $\pm 20$?


• Computing the data matrices for this problem yields
$$
\Phi^T \Phi = \begin{bmatrix} 6.0067 & 4.8853 & 3.8150 \\ 4.8853 & 4.0013 & 3.1475 \\ 3.8150 & 3.1475 & 2.4952 \end{bmatrix}
$$
$$
\Phi^T F = \begin{bmatrix} 65.5285 & -25.2099 & -6.1768 \\ 53.1281 & -19.6709 & -4.7606 \\ 41.3553 & -14.6974 & -3.5334 \end{bmatrix}
$$
$$
\Phi^T \bar{R}_s = \begin{bmatrix} -6.1768 \\ -4.7606 \\ -3.5334 \end{bmatrix}
$$
• Computing the state feedback gain matrix we obtain
$$
K_{mpc} = \begin{bmatrix} 17.9064 & -39.0664 & -29.9659 \end{bmatrix}
$$
  – This gives closed-loop eigenvalues at: $\{-0.1946,\ 0,\ 0\}$
• We now design an observer with poles at $\{0, 0, 0\}$ and examine two cases: (i) no constraint on control; and (ii) with constraint on control.


Case (i): No Constraint [simulation plots omitted]

Case (ii): With Constraint


• Assuming the control amplitude has saturation limits at $\pm 20$, we write
$$
-20 \le u(k) \le 20
$$
  – That is, we set $u(k) = 20$ when the computed $u(k) \ge 20$, and $u(k) = -20$ when the computed $u(k) \le -20$

• As we might expect, the closed-loop performance deteriorates when these limits are reached
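In simulation, this clipping rule is a one-line clamp; a minimal sketch (function name hypothetical):

```python
import numpy as np

def saturate(u, u_min=-20.0, u_max=20.0):
    # Clamp the computed control to the actuator limits
    return float(np.clip(u, u_min, u_max))
```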


• So we ask: is it possible to achieve better performance by incorporating the constraints directly into the model predictive control problem?
• Let's try it...

Formulation of Constrained Control Problems

• The basic idea here is to take the control increment $\Delta u(k)$ obtained for the unconstrained problem and modify it when a constraint becomes active
• This involves formulating an optimization problem directly in the presence of constraints
• Three of the most common types of constraints are placed on:
  – control variable increment
  – control variable amplitude
  – output variable
• It is also sometimes desirable to constrain state variables internal to the dynamic process; we'll cover this case as well

Numerical Solutions

• The standard optimization problem for constrained model predictive control can be cast as a quadratic programming problem
• For consistency with the prevailing literature, we write the objective function $J$ together with a set of constraint equations as:
$$
J = \frac{1}{2} x^T E x + x^T F
$$
$$
M x \le \gamma
$$
where we assume $E$ to be symmetric and positive definite.

The Unconstrained Case

• General optimization problems involve finding a local solution to the problem
$$
\min J(x), \quad x \in \mathbb{R}^n
$$
• Here, $J(x)$ is the objective function and the minimizing point, denoted by $x^*$, is found among all $x$ vectors in $\mathbb{R}^n$
• We must first make some assumptions on the existence and uniqueness of $x^*$
  – Clearly $x^*$ may not exist if $J$ is unbounded below

– In general, it is only really practical to locate a local minimizer, and
this may not be a global minimizer

• In this treatment we will assume that first and second derivatives exist, are continuous, and are given by:
$$
g(x) = \nabla J(x)
$$
$$
G(x) = \nabla^2 J(x)
$$

• At a local minimizer, the following simple stationarity conditions hold:
$$
g(x^*) = 0
$$
$$
s^T G(x^*) s \ge 0, \quad \forall s
$$

• These are first-order and second-order necessary conditions for a local solution
  – The first condition identifies a stationary point
  – The second implies the matrix $G(x^*)$ is positive semi-definite and indicates the presence of non-negative curvature in all directions from the stationary point


• Sufficient conditions for a strict and isolated minimizer $x^*$ are that the above hold and that $G(x^*)$ is positive definite.
  – How can we determine numerically whether or not $G(x^*)$ is positive definite? (One standard answer appears in the sketch below.)
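One standard numerical test: attempt a Cholesky factorization, which succeeds if and only if a symmetric matrix is positive definite. A sketch:

```python
import numpy as np

def is_positive_definite(G):
    """Numerical test: a symmetric matrix is positive definite
    iff its Cholesky factorization exists."""
    try:
        np.linalg.cholesky(G)
        return True
    except np.linalg.LinAlgError:
        return False
```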

Quadratic Programming: Equality Constraints

• For unconstrained optimization, the necessary conditions above illustrate the requirement for zero slope and non-negative curvature in any direction at $x^*$
• In constrained optimization, there is the additional consideration of a feasible region, i.e., a local minimizer must be a feasible point that satisfies the constraints
• Additionally, there must be no feasible descent directions at $x^*$
• The simplest problem involves finding the constrained minimum of a positive definite quadratic function with linear equality constraints.

  – Each linear equality constraint defines a hyperplane
  – Positive definite quadratic functions have level surfaces that are hyperellipsoids
• The constrained minimum is then found at the point of tangency between the boundary of the feasible set and the minimizing hyperellipsoid

Example 5.2

Minimize
$$
J = (x_1 - 2)^2 + (x_2 - 2)^2
$$
subject to
$$
x_1 + x_2 = 1
$$

• By inspection we see that the unconstrained minimum occurs at $x_1 = 2$, $x_2 = 2$
• Feasible solutions are the combinations of $x_1$ and $x_2$ that satisfy the linear equality $x_1 + x_2 = 1$
• So we have
$$
x_2 = 1 - x_1
$$

• Substituting into the expression for $J$ gives
$$
J = (x_1 - 2)^2 + (1 - x_1 - 2)^2 = 2x_1^2 - 2x_1 + 5
$$
• To minimize $J$, we satisfy the stationarity condition
$$
\frac{\partial J}{\partial x_1} = 4x_1 - 2 = 0
$$
giving
$$
x_1 = 0.5, \qquad x_2 = 0.5
$$
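The same answer can be checked numerically with a general-purpose solver; a sketch using scipy (not part of the original notes):

```python
from scipy.optimize import minimize

# Minimize (x1-2)^2 + (x2-2)^2 subject to x1 + x2 = 1
res = minimize(lambda x: (x[0] - 2)**2 + (x[1] - 2)**2,
               x0=[0.0, 0.0],
               constraints=[{'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1}])
print(res.x)   # ~[0.5, 0.5]
```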

Lagrange Multipliers

• One approach to minimizing the objective function subject to equality constraints is to augment the original function with the equality constraint multiplied by a Lagrange multiplier vector, $\lambda$:
$$
J = \frac{1}{2} x^T E x + x^T F + \lambda^T (M x - \gamma)
$$
  – This creates a new objective function in the $n + m$ variables given by the vectors $x$ $(n \times 1)$ and $\lambda$ $(m \times 1)$
• The solution is found by taking the first partial derivatives with respect to the vectors $x$ and $\lambda$ and then equating the derivatives to zero:
$$
\frac{\partial J}{\partial x} = E x + F + M^T \lambda = 0
$$
$$
\frac{\partial J}{\partial \lambda} = M x - \gamma = 0
$$
• From these expressions we can build the Lagrangian matrix and write the corresponding linear system
$$
\begin{bmatrix} E & M^T \\ M & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix} =
\begin{bmatrix} -F \\ \gamma \end{bmatrix}
$$
• If the inverse exists and is expressed as
$$
\begin{bmatrix} E & M^T \\ M & 0 \end{bmatrix}^{-1} =
\begin{bmatrix} H & T \\ T^T & -U \end{bmatrix}
$$
then the solution can be written
$$
x^* = -H F + T \gamma
$$
$$
\lambda^* = -T^T F - U \gamma
$$


• Explicit expressions for $H$, $T$ and $U$ (when the inverse exists) are given by
$$
H = E^{-1} - E^{-1} M^T (M E^{-1} M^T)^{-1} M E^{-1}
$$
$$
T = E^{-1} M^T (M E^{-1} M^T)^{-1}
$$
$$
U = (M E^{-1} M^T)^{-1}
$$

• This set of equations contains the $n + m$ variables in the vectors $x$ and $\lambda$, which form the necessary conditions for minimizing the objective function with equality constraints
• The optimizing $\lambda^*$ and $x^*$ can be found directly from the partial derivatives above, where
$$
\lambda^* = -(M E^{-1} M^T)^{-1} (\gamma + M E^{-1} F)
$$
$$
x^* = -E^{-1} (M^T \lambda^* + F)
$$
• Note that
$$
x^* = -E^{-1} F - E^{-1} M^T \lambda^* = x^o - E^{-1} M^T \lambda^*
$$
where the first term $x^o$ gives the optimal solution in the absence of constraints, and the second term is a correction due to the equality constraint.
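These relations translate directly into a small solver. A sketch (function name hypothetical) that forms and solves the KKT (Lagrangian) system rather than computing $H$, $T$, and $U$ explicitly:

```python
import numpy as np

def eq_constrained_qp(E, F, M, gamma):
    """Minimize 0.5 x'Ex + x'F subject to Mx = gamma by solving the
    Lagrangian system  [[E, M'], [M, 0]] [x; lam] = [-F; gamma]."""
    n, m = E.shape[0], M.shape[0]
    K = np.block([[E, M.T], [M, np.zeros((m, m))]])
    rhs = np.concatenate([-F, gamma])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # x*, lambda*
```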

Example 5.3
Minimize
$$
J = \frac{1}{2} x^T E x + x^T F
$$
subject to
$$
x_1 + x_2 + x_3 = 1
$$
$$
3x_1 - 2x_2 - 3x_3 = 1
$$
where
$$
E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} -2 \\ -3 \\ -1 \end{bmatrix}
$$

• The unconstrained optimizing solution is given by
$$
x^o = -E^{-1} F = \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix}
$$
• Note that the cost at the unconstrained minimum is:
$$
J^o = \frac{1}{2} (x^o)^T E x^o + (x^o)^T F
= \frac{1}{2}\begin{bmatrix} 2 & 3 & 1 \end{bmatrix}
\begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix}
+ \begin{bmatrix} 2 & 3 & 1 \end{bmatrix}
\begin{bmatrix} -2 \\ -3 \\ -1 \end{bmatrix}
= \frac{1}{2}(14) - 14 = -7
$$
2
• From the constraint equations, we write
$$
M = \begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -3 \end{bmatrix}, \qquad
\gamma = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
$$


• Computing the minimizing Lagrangian vector $\lambda^*$ gives
$$
\lambda^* = -(M E^{-1} M^T)^{-1} (\gamma + M E^{-1} F)
= -\begin{bmatrix} 3 & -2 \\ -2 & 22 \end{bmatrix}^{-1}
\left( \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} -6 \\ 3 \end{bmatrix} \right)
= \begin{bmatrix} 1.6452 \\ -0.0323 \end{bmatrix}
$$
• Hence, $x^*$ is given by
$$
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x^o - E^{-1} M^T \lambda^*
= \begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix}
- \begin{bmatrix} 1 & 3 \\ 1 & -2 \\ 1 & -3 \end{bmatrix}
\begin{bmatrix} 1.6452 \\ -0.0323 \end{bmatrix}
= \begin{bmatrix} 0.4516 \\ 1.2903 \\ -0.7419 \end{bmatrix}
$$
• For the constrained optimizing solution,
$$
J^* = \frac{1}{2} (x^*)^T E x^* + (x^*)^T F = -2.8226
$$
indicating the cost function cannot achieve the same value as the unconstrained optimum.
• Alternatively, building the Lagrangian matrix,
$$
L = \begin{bmatrix} E & M^T \\ M & 0 \end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & 1 & 3 \\
0 & 1 & 0 & 1 & -2 \\
0 & 0 & 1 & 1 & -3 \\
1 & 1 & 1 & 0 & 0 \\
3 & -2 & -3 & 0 & 0
\end{bmatrix}
$$


• Computing the eigenvalues of $L$ we have
$$
\lambda\{L\} = \{\, 5.2390, \; 2.2441, \; 1.0000, \; -1.2441, \; -4.2390 \,\}
$$
indicating the matrix is non-singular and invertible.


• Solving the linear equation
$$
L \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -F \\ \gamma \end{bmatrix}
$$
we again obtain the solution vector
$$
\begin{bmatrix} x \\ \lambda \end{bmatrix} =
\begin{bmatrix} 0.4516 \\ 1.2903 \\ -0.7419 \\ 1.6452 \\ -0.0323 \end{bmatrix}
$$

Example 5.4

Consider once again the objective function
$$
J = \frac{1}{2} x^T E x + x^T F
$$
where
$$
E = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} -2 \\ -2 \end{bmatrix}
$$
and the constraints are given by:
$$
x_1 + x_2 = 1
$$
$$
2x_1 + 2x_2 = 6
$$

• Find the feasible set of solutions
  – Note that the two constraints describe parallel lines ($x_1 + x_2$ cannot equal both 1 and 3), so no point satisfies both simultaneously
• The plot shows contours of equal $J$ for corresponding values of $x_1$ and $x_2$ [contour plot omitted]


• Note that in general, the number of equality constraints is required to be less than or equal to the number of decision variables.
• What happens when the number of equality constraints exactly equals the number of decision variables?
• What happens when the number of equality constraints is greater than the number of decision variables?

Quadratic Programming: Inequality Constraints

• In the previous case, it was necessary that all equality constraints be active at the solution
• Now we must consider the case where the inequality constraints $M x \le \gamma$ comprise both active and inactive constraints
  – active denotes equality
  – inactive denotes strict inequality
Kuhn-Tucker Conditions

• The Kuhn-Tucker conditions outline a set of necessary conditions for the optimization problem:
$$
E x + F + M^T \lambda = 0
$$
$$
M x - \gamma \le 0
$$
$$
\lambda^T (M x - \gamma) = 0
$$
$$
\lambda \ge 0
$$

• In terms of the active constraints, the conditions become:
$$
E x + F + \sum_{i \in S_{act}} \lambda_i M_i^T = 0
$$
$$
M_i x - \gamma_i = 0, \quad i \in S_{act}
$$
$$
M_i x - \gamma_i < 0, \quad i \notin S_{act}
$$
$$
\lambda_i \ge 0, \quad i \in S_{act}
$$
$$
\lambda_i = 0, \quad i \notin S_{act}
$$
where $M_i$ is the $i$th row of the $M$ matrix.


• For an active constraint (i.e., $M_i x - \gamma_i = 0$), the corresponding Lagrange multiplier is non-negative
• Otherwise, if the constraint is inactive, the Lagrange multiplier is zero

⇒ So, for the set of active constraints, the problem looks exactly like the equality constraint optimization!

• Assume the previous problem with the equality constraints replaced with inequality constraints,
$$
J = \frac{1}{2} x^T E x + x^T F
$$
where
$$
E = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} -2 \\ -2 \end{bmatrix}
$$
and the inequality constraints are now given by:
$$
x_1 + x_2 \le 1
$$
$$
2x_1 + 2x_2 \le 6
$$
• It is clear that the set of variables that satisfy the first inequality will also satisfy the second

  – Thus the first of these is an active constraint, while the second is inactive
• We find the constrained optimum by minimizing $J$ subject to the equality constraint $x_1 + x_2 = 1$, which gives $x_1 = 0.5$ and $x_2 = 0.5$, as found previously
Active Set Methods

• The idea here is to construct an algorithm where we define at each step a set of constraints to be treated as the active set (a sketch follows this list)
  – Chosen to be a subset of the constraints that are active at the current point
  – The current point is feasible for the working set
  – The algorithm moves along a surface defined by the working set of constraints to an improved point
  – If all $\lambda_i \ge 0$, the point is a local solution
  – If some $\lambda_i < 0$, then the objective function value can be further decreased by relaxing the corresponding constraint
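A simplified sketch of this constraint-dropping loop, in the spirit of the worked example that follows: it starts with every constraint in the working set and only drops constraints, so a production active-set solver would also need to add violated constraints back and verify feasibility (function name hypothetical):

```python
import numpy as np

def active_set_qp(E, F, M, gamma, max_iter=20):
    """Minimize 0.5 x'Ex + x'F s.t. Mx <= gamma by repeatedly solving the
    equality-constrained subproblem on a working set and dropping one
    constraint with a negative multiplier until all multipliers are >= 0."""
    Einv = np.linalg.inv(E)
    work = list(range(M.shape[0]))           # indices of the working set
    for _ in range(max_iter):
        if not work:                         # no active constraints left
            return -Einv @ F, np.zeros(M.shape[0])
        Ma, ga = M[work], gamma[work]
        lam = -np.linalg.solve(Ma @ Einv @ Ma.T, ga + Ma @ Einv @ F)
        if np.all(lam >= 0):                 # KKT satisfied on working set
            x = -Einv @ (F + Ma.T @ lam)
            full = np.zeros(M.shape[0])
            full[work] = lam
            return x, full
        work.pop(int(np.argmin(lam)))        # relax most negative multiplier
    raise RuntimeError("did not converge")
```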

Example 5.5

Optimize the objective function where
$$
E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} -2 \\ -3 \\ -1 \end{bmatrix}
$$
subject to the inequality constraints
$$
x_1 + x_2 + x_3 \le 1
$$
$$
3x_1 - 2x_2 - 3x_3 \le 1
$$
$$
x_1 - 3x_2 + 2x_3 \le 1
$$

• A feasible solution with all three constraints active exists and is the solution of the linear equations
$$
x_1 + x_2 + x_3 = 1
$$
$$
3x_1 - 2x_2 - 3x_3 = 1
$$
$$
x_1 - 3x_2 + 2x_3 = 1
$$

• The three constraints, treated as equalities, are taken as the first working set
• Calculating the Lagrange multipliers we get
$$
\lambda = -(M E^{-1} M^T)^{-1} (\gamma + M E^{-1} F) = \begin{bmatrix} 1.6873 \\ 0.0309 \\ -0.4352 \end{bmatrix}
$$
• Since the third element is negative, the third constraint is inactive and is dropped from the constrained equation set
• Thus the problem becomes:
$$
J = \frac{1}{2} x^T E x + x^T F
$$

subject to
$$
x_1 + x_2 + x_3 = 1
$$
$$
3x_1 - 2x_2 - 3x_3 = 1
$$

• Solving this equality constraint problem gives
$$
\lambda = \begin{bmatrix} 1.6452 \\ -0.0323 \end{bmatrix}
$$
  – Here the second element is negative

• The problem collapses once again to
$$
J = \frac{1}{2} x^T E x + x^T F
$$
now subject only to the single constraint
$$
x_1 + x_2 + x_3 = 1
$$
• Solving this problem yields $\lambda = 5/3$, leading to
$$
x^* = \begin{bmatrix} 0.3333 \\ 1.3333 \\ -0.6667 \end{bmatrix}
$$
  – Clearly, the optimal solution $x^*$ satisfies the equality constraint

• Checking the others, we obtain
$$
M x^* = \begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -3 \\ 1 & -3 & 2 \end{bmatrix}
\begin{bmatrix} 0.3333 \\ 1.3333 \\ -0.6667 \end{bmatrix}
= \begin{bmatrix} 1.0 \\ 0.3334 \\ -5.0 \end{bmatrix}
\le \begin{bmatrix} 1.0 \\ 1.0 \\ 1.0 \end{bmatrix}
$$
and indeed the rest of the inequality constraints are also satisfied.
• Some observations:
  – The maximum number of active equality constraints equals the number of decision variables
  – The number of inequality constraints can exceed the number of decision variables if they are not all active
  – An iterative procedure is required to solve the optimization problem with inequality constraints
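Running the active_set_qp sketch from earlier on Example 5.5 reproduces the sequence of working sets and the final solution:

```python
import numpy as np

E = np.eye(3)
F = np.array([-2.0, -3.0, -1.0])
M = np.array([[1.0, 1.0, 1.0],
              [3.0, -2.0, -3.0],
              [1.0, -3.0, 2.0]])
gamma = np.array([1.0, 1.0, 1.0])

x_star, lam = active_set_qp(E, F, M, gamma)
print(np.round(x_star, 4))      # ~[ 0.3333  1.3333 -0.6667]
print(np.round(lam, 4))         # ~[ 1.6667  0  0 ]  (lambda = 5/3 on the active constraint)
print(np.round(M @ x_star, 4))  # all entries <= 1, so x* is feasible
```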

Primal-Dual Methods

• A dual method can be used to identify the constraints that are not active in the optimization problem so that they can be systematically eliminated from the solution
• Assuming feasibility (i.e., there exists an $x$ such that $M x < \gamma$), the primal problem is equivalent to
$$
\max_{\lambda \ge 0} \; \min_{x} \left[ \frac{1}{2} x^T E x + x^T F + \lambda^T (M x - \gamma) \right]
$$
• The minimum over $x$ is unconstrained and attained by
$$
x = -E^{-1} (F + M^T \lambda)
$$

• Substituting this into the above expression gives the dual problem
$$
\max_{\lambda \ge 0} \left( -\frac{1}{2} \lambda^T P \lambda - \lambda^T K - \frac{1}{2} F^T E^{-1} F \right)
$$
where
$$
P = M E^{-1} M^T
$$
$$
K = \gamma + M E^{-1} F
$$

• Hence the dual is another quadratic programming problem, only now with $\lambda$ as the decision variable instead of $x$
• Note the dual problem is equivalent to
$$
\min_{\lambda \ge 0} \left( \frac{1}{2} \lambda^T P \lambda + \lambda^T K + \frac{1}{2} F^T E^{-1} F \right)
$$
Hildreth’s Procedure

• Hildreth's quadratic programming procedure is a simple variant of the Gauss-Seidel method for solving a linear set of equations
• Some features:
  – Direction vectors are equal to the standard basis vectors,
$$
e_i = \begin{bmatrix} 0 & 0 & \dots & 1 & \dots & 0 & 0 \end{bmatrix}^T
$$
  – The $\lambda$ vector is varied one component at a time
  – The objective function can be regarded as a quadratic function in each component
  – Adjust $\lambda_i$ to minimize the objective function
  – When $\lambda_i < 0$, set $\lambda_i = 0$

• The general method can be expressed as:
$$
\lambda_i^{m+1} = \max(0, w_i^{m+1})
$$
where
$$
w_i^{m+1} = -\frac{1}{p_{ii}} \left[ k_i + \sum_{j=1}^{i-1} p_{ij} \lambda_j^{m+1} + \sum_{j=i+1}^{n} p_{ij} \lambda_j^{m} \right]
$$
• $p_{ij}$ is the $ij$th element of the matrix $P = M E^{-1} M^T$
• $k_i$ is the $i$th element of the vector $K = \gamma + M E^{-1} F$
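A direct implementation of this sweep; a sketch (function name hypothetical) that iterates the component-wise update above until $\lambda$ stops changing, then recovers $x$:

```python
import numpy as np

def hildreth_qp(E, F, M, gamma, n_iter=100):
    """Minimize 0.5 x'Ex + x'F subject to Mx <= gamma by solving the
    dual over lambda >= 0 with Hildreth's cyclic (Gauss-Seidel-like)
    coordinate sweep, then recovering x = -E^{-1}(F + M'lambda)."""
    Einv = np.linalg.inv(E)
    P = M @ Einv @ M.T              # dual Hessian, entries p_ij
    K = gamma + M @ Einv @ F        # dual linear term, entries k_i
    lam = np.zeros(M.shape[0])
    for _ in range(n_iter):
        lam_prev = lam.copy()
        for i in range(len(lam)):
            # w_i uses the newest lambda_j for j < i (updated in place)
            w = -(K[i] + P[i] @ lam - P[i, i] * lam[i]) / P[i, i]
            lam[i] = max(0.0, w)    # clip negative multipliers to zero
        if np.allclose(lam, lam_prev, atol=1e-12):
            break
    return -Einv @ (F + M.T @ lam), lam
```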

Example 5.6
Minimize the cost function $J = \frac{1}{2} x^T E x + F^T x$ where
$$
E = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}, \qquad
F = \begin{bmatrix} -1 \\ 0 \end{bmatrix}
$$
and the constraints are
$$
0 \le x_1
$$
$$
0 \le x_2
$$
$$
3x_1 + 2x_2 \le 4
$$

• Forming the linear inequality constraint equation $M x \le \gamma$, we have
$$
\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ 3 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le
\begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}
$$
• The unconstrained optimal solution is found as
$$
x^o = -E^{-1} F = -\begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
$$
• Substituting $x^o$ into the expression above shows that the third constraint is violated:
$$
\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ 3 & 2 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \end{bmatrix} =
\begin{bmatrix} -1 \\ -1 \\ 5 \end{bmatrix} \not\le
\begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix}
$$
• Now, find the optimal $\lambda^*$:
$$
P = M E^{-1} M^T =
\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ 3 & 2 \end{bmatrix}
\begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} -1 & 0 & 3 \\ 0 & -1 & 2 \end{bmatrix}
= \begin{bmatrix} 1 & 1 & -5 \\ 1 & 2 & -7 \\ -5 & -7 & 29 \end{bmatrix}
$$
$$
K = \gamma + M E^{-1} F =
\begin{bmatrix} 0 \\ 0 \\ 4 \end{bmatrix} +
\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ 3 & 2 \end{bmatrix}
\begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} -1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}
$$
• Iteration:

k = 0:
$$
\lambda_1^0 = \lambda_2^0 = \lambda_3^0 = 0
$$

k = 1:
$$
w_1^1 + 1 = 0
$$
$$
\lambda_1^1 + 2w_2^1 + 1 = 0
$$
$$
-5\lambda_1^1 - 7\lambda_2^1 + 29w_3^1 - 1 = 0
$$
giving
$$
\lambda_1^1 = \max(0, w_1^1) = 0, \quad
\lambda_2^1 = \max(0, w_2^1) = 0, \quad
\lambda_3^1 = \max(0, w_3^1) = 0.0345
$$

k = 2:
$$
w_1^2 + \lambda_2^1 - 5\lambda_3^1 + 1 = 0
$$
$$
\lambda_1^2 + 2w_2^2 - 7\lambda_3^1 + 1 = 0
$$
$$
-5\lambda_1^2 - 7\lambda_2^2 + 29w_3^2 - 1 = 0
$$
giving
$$
\lambda_1^2 = \max(0, w_1^2) = 0, \quad
\lambda_2^2 = \max(0, w_2^2) = 0, \quad
\lambda_3^2 = \max(0, w_3^2) = 0.0345
$$

• We see that the procedure has converged, giving the optimal Lagrange vector:
$$
\lambda^* = \begin{bmatrix} 0 \\ 0 \\ 0.0345 \end{bmatrix}
$$
which generates the optimal solution:
$$
x^* = x^o - E^{-1} M^T \lambda^* =
\begin{bmatrix} 1 \\ 1 \end{bmatrix} -
\begin{bmatrix} 0.1724 \\ 0.2414 \end{bmatrix} =
\begin{bmatrix} 0.8276 \\ 0.7586 \end{bmatrix}
$$
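Running the hildreth_qp sketch on Example 5.6 reproduces these values:

```python
import numpy as np

E = np.array([[2.0, -1.0], [-1.0, 1.0]])
F = np.array([-1.0, 0.0])
M = np.array([[-1.0, 0.0], [0.0, -1.0], [3.0, 2.0]])
gamma = np.array([0.0, 0.0, 4.0])

x_star, lam = hildreth_qp(E, F, M, gamma)
print(np.round(lam, 4))     # ~[0, 0, 0.0345]
print(np.round(x_star, 4))  # ~[0.8276, 0.7586]
```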

Constraints on Rate of Change

Example 5.7

Consider the continuous-time plant described by
$$
G(s) = \frac{10}{s^2 + 0.1s + 3}
$$
whose system poles are located at $-0.05 \pm j\,1.7313$.
Assume a sampling interval of $\Delta T = 0.1$ and design a discrete-time model predictive control system with
  – $N_c = 3$
  – $N_p = 20$
  – $\bar{R} = 0.01 \, I$
A constraint is imposed on the rate of change of the control signal:
$$
-1.5 \le \Delta u(k) \le 3.0
$$

• First, obtain a discrete-time state-space model
  – Continuous-time state-space:
$$
A_c = \begin{bmatrix} -0.1 & -3 \\ 1 & 0 \end{bmatrix}, \quad
B_c = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad
C_c = \begin{bmatrix} 0 & 10 \end{bmatrix}, \quad
D_c = 0
$$
  – Discrete-time state-space:
$$
A_d = \begin{bmatrix} 0.9752 & -0.2970 \\ 0.0990 & 0.9851 \end{bmatrix}, \quad
B_d = \begin{bmatrix} 0.0990 \\ 0.0050 \end{bmatrix}, \quad
C_d = \begin{bmatrix} 0 & 10 \end{bmatrix}, \quad
D_d = 0
$$


• Next, form the augmented state-space model:
$$
A = \begin{bmatrix} 0.9752 & -0.2970 & 0 \\ 0.0990 & 0.9851 & 0 \\ 0.9900 & 9.8509 & 1 \end{bmatrix}, \quad
B = \begin{bmatrix} 0.0990 \\ 0.0050 \\ 0.0497 \end{bmatrix}, \quad
C = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}, \quad
D = 0
$$
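The discretization and augmentation steps can be scripted; a sketch assuming numpy/scipy:

```python
import numpy as np
from scipy.signal import cont2discrete

# G(s) = 10 / (s^2 + 0.1 s + 3) in controllable canonical form
Ac = np.array([[-0.1, -3.0], [1.0, 0.0]])
Bc = np.array([[1.0], [0.0]])
Cc = np.array([[0.0, 10.0]])

Ad, Bd, Cd, _, _ = cont2discrete((Ac, Bc, Cc, [[0.0]]), dt=0.1, method='zoh')

# Augmented (velocity-form) model with state [dx(k); y(k)]
n = Ad.shape[0]
A = np.block([[Ad, np.zeros((n, 1))], [Cd @ Ad, np.ones((1, 1))]])
B = np.vstack([Bd, Cd @ Bd])
C = np.array([[0.0, 0.0, 1.0]])
print(A)   # ~[[0.9752, -0.2970, 0], [0.0990, 0.9851, 0], [0.990, 9.851, 1]]
print(B)   # ~[[0.0990], [0.0050], [0.0497]]
```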

• Now, write the objective function:
$$
J = \Delta U^T (\Phi^T \Phi + \bar{R}) \Delta U - 2 \Delta U^T \Phi^T (\bar{R}_s - F x(k_i))
$$
where
$$
\Phi^T \Phi = \begin{bmatrix} 0.1760 & 0.1553 & 0.1361 \\ 0.1553 & 0.1373 & 0.1204 \\ 0.1361 & 0.1204 & 0.1057 \end{bmatrix}, \qquad
\Phi^T F = \begin{bmatrix} 0.1972 & -0.1758 & 1.4187 \\ 0.1740 & -0.1552 & 1.2220 \\ 0.1522 & -0.1359 & 1.0443 \end{bmatrix}
$$
and
$$
\Phi^T \bar{R}_s = \begin{bmatrix} 1.4187 \\ 1.2220 \\ 1.0443 \end{bmatrix} r(k_i)
$$
• Select observer poles at $\{0, 0, 0\}$
• Computing the closed-loop compensated eigenvalues for a range of control weighting values $r_w$ results in the following values for $K_{mpc}(r_w)$:
$$
K_{mpc}(0.01) = \begin{bmatrix} 13.1552 & 84.1009 & 3.7417 \end{bmatrix}
$$
$$
K_{mpc}(0.1) = \begin{bmatrix} 11.3704 & 58.4467 & 1.1666 \end{bmatrix}
$$
$$
K_{mpc}(1.0) = \begin{bmatrix} 9.5534 & 43.5058 & 0.6567 \end{bmatrix}
$$

$$
K_{mpc}(10) = \begin{bmatrix} 5.5692 & 14.6499 & 0.2460 \end{bmatrix}
$$
$$
K_{mpc}(100) = \begin{bmatrix} 3.6823 & 3.0537 & 0.0868 \end{bmatrix}
$$

• These give rise to the corresponding closed-loop eigenvalues:
$$
\lambda(0.01) = \{\, 0.3357 \pm j\,0.4000, \; 0.3824 \,\}
$$
$$
\lambda(0.1) = \{\, 0.3654 \pm j\,0.2650, \; 0.7552 \,\}
$$
$$
\lambda(1.0) = \{\, 0.4697 \pm j\,0.3063, \; 0.8262 \,\}
$$
$$
\lambda(10) = \{\, 0.7710 \pm j\,0.2439, \; 0.7819 \,\}
$$
$$
\lambda(100) = \{\, 0.9240 \pm j\,0.1610, \; 0.7283 \,\}
$$

• The output response to a unit set-point change for each of the control weighting values, and the corresponding control inputs, are depicted in the accompanying plots [figures not reproduced here]

• Let us now establish our problem constraint, $M \Delta U \le \gamma$, as:
$$
\begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} \Delta u(k_i) \\ \Delta u(k_i+1) \\ \Delta u(k_i+2) \end{bmatrix}
\le \begin{bmatrix} 3.0 \\ 1.5 \end{bmatrix}
$$
  – Note that only the first increment $\Delta u(k_i)$ is constrained here; the remaining increments in $\Delta U$ are left free

• Recall that we constructed our original cost function as:
$$
J = (\bar{R}_s - Y)^T (\bar{R}_s - Y) + \Delta U^T \bar{R} \Delta U
$$
where
$$
Y = F x(k_i) + \Phi \Delta U
$$
• Substituting and multiplying out, we obtained:
$$
J = (\bar{R}_s - F x(k_i))^T (\bar{R}_s - F x(k_i)) - 2 \Delta U^T \Phi^T (\bar{R}_s - F x(k_i)) + \Delta U^T (\Phi^T \Phi + \bar{R}) \Delta U
$$

• We found the optimizing control input sequence from the stationarity condition $\dfrac{\partial J}{\partial \Delta U} = 0$ as:
$$
\Delta U = (\Phi^T \Phi + \bar{R})^{-1} \Phi^T (\bar{R}_s - F x(k_i))
$$

• Using this expression with the assumption that the initial state $x(0) = 0$, we obtain
$$
\Delta U = \begin{bmatrix} 3.7417 & -6.2794 & 2.7112 \end{bmatrix}^T
$$
  – Here we see that the constraint is violated, since $\Delta u(1) = 3.7417 > 3.0$

• Using Hildreth's quadratic programming algorithm we obtain
$$
\Delta U = \begin{bmatrix} 3.000 & -4.7241 & 1.8843 \end{bmatrix}^T
$$


• Using the first component, $\Delta u(1) = 3.000$, and $y(1) = 0$, the new estimated state variable is
$$
x(2) = \begin{bmatrix} 0.2791 \\ 0.0149 \\ 0.1491 \end{bmatrix}
$$
• Computing the next incremental input sequence gives
$$
\Delta U = \begin{bmatrix} -1.9777 & -4.1326 & 3.3456 \end{bmatrix}^T
$$
and the constrained solution via Hildreth's algorithm gives
$$
\Delta U_c = \begin{bmatrix} -1.5 & -5.1342 & 3.8781 \end{bmatrix}^T
$$

• This generates the new state update
$$
x(3) = \begin{bmatrix} 0.1367 \\ 0.0366 \\ 0.5155 \end{bmatrix}
$$
• Continuing,
$$
\Delta U = \begin{bmatrix} -3.0671 & -0.6187 & 2.4816 \end{bmatrix}^T
$$
$$
\Delta U_c = \begin{bmatrix} -1.5 & -3.9045 & 4.2285 \end{bmatrix}^T
$$
and
$$
x(4) = \begin{bmatrix} -0.0261 \\ 0.0422 \\ 0.9372 \end{bmatrix}
$$
:9372
• And again
$$
\Delta U = \begin{bmatrix} -2.9689 & 2.2251 & 1.0865 \end{bmatrix}^T
$$
$$
\Delta U_c = \begin{bmatrix} -1.5 & -0.8547 & 2.7239 \end{bmatrix}^T
$$

and the updated state,
$$
x(5) = \begin{bmatrix} -0.1865 \\ 0.0315 \\ 1.2523 \end{bmatrix}
$$
• So we obtain the optimal incremental input sequence for the first four time samples as:
$$
\Delta U = \begin{bmatrix} 3.0 & -1.5 & -1.5 & -1.5 & \dots \end{bmatrix}
$$
giving the output sequence:
$$
Y = \begin{bmatrix} 0 & 0.1491 & 0.5155 & 0.9372 & 1.2523 \end{bmatrix}
$$
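The whole receding-horizon loop of Example 5.7 can be sketched by combining the pieces above. The prediction-matrix builder below follows the standard Φ/F construction for the augmented model; for brevity the sketch assumes full state feedback (the notes use a deadbeat observer), reuses A, B, C from the augmented-model snippet and hildreth_qp from earlier, and, as in the notes, constrains only the first increment of each computed ΔU:

```python
import numpy as np

def mpc_matrices(A, B, C, Np, Nc):
    """Prediction model Y = F x(k) + Phi dU for the augmented system."""
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
    Phi = np.zeros((Np, Nc))
    for i in range(Np):
        for j in range(min(i + 1, Nc)):
            Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    return F, Phi

Np, Nc, rw, r = 20, 3, 0.01, 1.0
Fp, Phi = mpc_matrices(A, B, C, Np, Nc)
H = Phi.T @ Phi + rw * np.eye(Nc)            # QP Hessian (Phi'Phi + Rbar)
Mc = np.array([[1.0, 0, 0], [-1.0, 0, 0]])   # constrain first increment only
gamma = np.array([3.0, 1.5])                 # -1.5 <= du(k) <= 3.0

x = np.zeros((3, 1))
y_hist = []
for k in range(60):
    f = -(Phi.T @ (r * np.ones((Np, 1)) - Fp @ x)).ravel()  # QP linear term
    dU, _ = hildreth_qp(H, f, Mc, gamma)     # constrained solve each step
    x = A @ x + B * dU[0]                    # apply only the first move
    y_hist.append((C @ x).item())

print(np.round(y_hist[:5], 4))   # first samples of the constrained step response
```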

Lecture notes prepared by M. Scott Trimboli. Copyright © 2010-2015, M. Scott Trimboli.