Introduction to MPC
1. Introduction to MPC

1.1. System class
Nonlinear discrete-time dynamical control system
x(k+1) = f(x(k), u(k)),  k = 0, 1, 2, ...,  x(0) = x0
with state x(k) ∈ R^n and control input u(k) ∈ R^m.
Introducing time invariance: x⁺ = f(x, u)
Notation for a prediction horizon of N:
u^N := {u(0), u(1), ..., u(N−1)}
= controls to be applied within the prediction horizon (only u(0) is actually applied)
x_u^N(x0) := {x0, x_u(1, x0), ..., x_u(N, x0)}
= succession of states resulting from a start at x0 and the application of the controls u^N,
or, if the context is clear, for brevity:
x_u^N(x0) := {x0, x(1), ..., x(N)}

1.2. Cost functions
Infinite horizon:
J∞(x0, u^∞) = Σ_{k=0}^{∞} l(x_u(k, x0), u(k))
Finite horizon:
JN(x0, u^N) = Jf(x_u(N, x0)) + Σ_{k=0}^{N−1} l(x_u(k, x0), u(k))
with stage cost l(x, u) and terminal cost Jf(x).

1.3. Constraints and nomenclature
• Input constraints: u(k) ∈ U. U is the set of all admissible control inputs.
• State constraints: x(k) ∈ X, k = 1, 2, ...; x(N) ∈ Xf.
X: set of all admissible states x, Xf: set of all admissible final states x.
X and U result from a consideration of all constraints for states and inputs.
• Admissible controls: UN(x0) := {u | (x0, u) ∈ ZN}, or
UN(x0) := {u ∈ U^N | x_u(1, x0), ..., x_u(N−1, x0) ∈ X, x_u(N, x0) ∈ Xf}
= control sequences of length N that make the system pass only through permitted states and end in a permitted final state.
• Feasible initial values: XN := {x0 ∈ X | UN(x0) ≠ ∅}
= initial states x0 ∈ X for which an admissible control u^N (a control sequence that passes only through valid states and ends in a valid final state) exists.
with
ZN := {(x0, u) | u(k) ∈ U, x_u(k, x0) ∈ X, k = 0, 1, ..., N−1; x_u(N, x0) ∈ Xf}

1.4. Recursive elimination
The process of replacing x⁺ by f(x, u) until the whole function is expressed only in terms of the initial state x(0) and the control inputs u.

1.5. Calculating the feasible values set
One step of the feasible-set evolution is
X_{k+1} = {x ∈ X | x⁺ = f(x, u) ∈ X_k}
with constraints on the input u:
• OK: set numerical best-case and worst-case values for x⁺ or u to calculate boundaries for the state variables x.
• NOT OK: using the numerical best- and worst-case values for x (obtained from x⁺ and u) to calculate boundaries for other state variables x!

1.6. Optimization problem
PN(x0):  J*N(x0) = min_u {JN(x0, u) | u ∈ UN(x0)}
Assumption 1: f, l, Jf are continuous with f(0, 0) = 0, l(0, 0) = 0, Jf(0) = 0.
Assumption 2: X is closed, Xf and U are compact, and all sets contain the origin.
Under Assumption 1 and Assumption 2, the optimization problem PN(x0) has a solution for all x0 ∈ XN.

1.7. Controller
κN(x0) = u*(0, x0)
with u*(0, x0) from u* = {u*(0, x0), u*(1, x0), ..., u*(N−1, x0)} as the solution of PN(x0).
→ Only the first control input is applied before the predicted trajectory and controls are recalculated!

1.8. Basic time-invariant MPC algorithm
System: x⁺ = f(x, u)
Cost: J(x, u) = Jf(x(N)) + Σ_{k=0}^{N−1} l(x(k), u(k))
Constraints: x(k) ∈ X, u(k) ∈ U for all k ∈ N0 and x(N) ∈ Xf, where N is the prediction horizon.
One iteration of MPC:
• Measure x, determine UN(x)
• Solve PN(x) and obtain u*(x)
• Control with κN(x) such that x⁺ = f(x, κN(x))
• Repeat for x := x⁺

1.9. Output prediction
Predict the output for a given control sequence over horizon N:
• Formulate the prediction from the system equation such that it depends only on the initial state x0 and the control inputs u^k:
x(k+1) = f(x(k), u(k)) → x(k+1) = f̃(x0, u^k)
The prediction is valid for any x0, even with a receding horizon.
• Reformulate the cost function so that it depends only on x0 and the control inputs u^N: J(x0, u^N)
→ J might be restricted to non-negative values only.
• Solve by optimization and predict the output.

1.10. Constrained optimization in a nutshell
min F(z) (cost function)
s.t. g(z) = 0 (equality constraints)
h(z) ≤ 0 (inequality constraints)
1. Unconstrained:
If z* is a minimum, then ∇F(z*) = 0.
2. Equality constrained:
Lagrange function L = F(z) + λᵀ g(z) with Lagrange multiplier λ.
If z* is the minimum, then ∇z L(z*, λ*) = 0 and ∇λ L(z*, λ*) = 0.
Note: always calculate all λi for completeness.
3. Inequality constrained:
Lagrange function L = F(z) + μᵀ h(z) with KKT multiplier μ.
If z* is the minimum, then ∇z L(z*, μ*) = 0, μ* ≥ 0, h(z*) ≤ 0 and μi* hi(z*) = 0 for all i.
μi* hi(z*) = 0 has two possible cases:
• μi* = 0: the inequality constraint is inactive and can be omitted during optimization.
• μi* > 0 → hi(z*) = 0: the inequality constraint is active and is treated as an equality constraint.
Attention: always calculate all μi multipliers and check that they fulfil μi ≥ 0!
Attention: for h(z) ∈ R^n there are 2^n cases to check: always solve the optimization problem for all possible combinations of active and inactive constraints and take the valid solution that minimizes the cost function.
Attention: changes between inequality constraints being active and inactive might require case distinctions, leading to discontinuities in the cost function. Case distinctions affect all following optimization steps!
Attention: any additional variable that is defined in a constrained way, e.g. α ≥ 0, introduces an additional inequality constraint!

1.11. Softened constraint minimization
Add a penalty term to the cost function that is minimized:
min_z F(z) + l(ε)  s.t. h(z) ≤ ε, ε ≥ 0
Quadratic penalty: l(ε) = v·ε², v > 0
→ Problem: this softened problem does not yield the same solution as the unsoftened problem even when the optimal solution is feasible without softening the constraints.
Linear penalty: l(ε) = v·ε, v > μ* ≥ 0,
with μ* the optimal KKT multiplier of the original optimization problem.
→ Problem: the resulting cost function is not smooth.
Solution: combine quadratic and linear penalty terms.
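The MPC iteration of 1.8 can be sketched for a toy constrained scalar system by brute-force search over a discretized input set. All numbers below (a, b, the input grid, the horizon, the cost weights) are made-up illustration values, not from the notes:

```python
import itertools

# Hypothetical scalar system x+ = a*x + b*u with u restricted to a grid
# (a coarse stand-in for the admissible input set U).
a, b = 1.2, 1.0
N = 3                                  # prediction horizon
U = [-1.0, -0.5, 0.0, 0.5, 1.0]        # discretized input set

def cost(x0, u_seq):
    """Finite-horizon cost J_N = Jf(x(N)) + sum of stage costs l(x,u) = x^2 + u^2."""
    x, J = x0, 0.0
    for u in u_seq:
        J += x * x + u * u             # stage cost l(x, u)
        x = a * x + b * u              # recursive elimination: predict next state
    return J + 10.0 * x * x            # terminal cost Jf(x(N))

def kappa_N(x):
    """MPC control law: solve P_N(x) over the input grid, return u*(0, x)."""
    best = min(itertools.product(U, repeat=N), key=lambda s: cost(x, s))
    return best[0]                     # only the first input is applied

# Closed loop: measure, solve, apply the first input, repeat.
x = 2.0
for _ in range(8):
    x = a * x + b * kappa_N(x)
print(round(x, 4))
```

Even though a = 1.2 makes the open loop unstable, the receding-horizon feedback keeps the state bounded near the origin (it chatters on the coarse input grid rather than converging exactly).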
Homepage: www.latex4ei.de – Please report mistakes immediately. from Elena Zhelondz – Mail: elena.zhelondz@tum.de Last revised: February 13, 2020 1/4
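The active/inactive case check of 1.10 can be walked through on a hand-picked one-constraint problem (F and h below are illustration choices, not from the notes): minimize F(z) = (z − 2)² subject to h(z) = z − 1 ≤ 0.

```python
# Enumerate the 2^n = 2 cases for a single inequality constraint.
def F(z): return (z - 2.0) ** 2
def h(z): return z - 1.0

candidates = []

# Case mu* = 0 (constraint inactive): solve grad F(z) = 2(z - 2) = 0.
z = 2.0
if h(z) <= 0:                      # primal feasibility must still hold
    candidates.append((z, 0.0))    # here h(2) = 1 > 0, so this case is rejected

# Case mu* > 0 (constraint active): treat h(z) = 0 as an equality, i.e.
# z = 1, and recover mu from grad_z L = 2(z - 2) + mu = 0.
z = 1.0
mu = -2.0 * (z - 2.0)
if mu >= 0:                        # dual feasibility: mu* >= 0
    candidates.append((z, mu))

# Take the valid candidate that minimizes the cost.
z_star, mu_star = min(candidates, key=lambda c: F(c[0]))
print(z_star, mu_star)
```

The inactive case is infeasible, the active case gives z* = 1 with μ* = 2 ≥ 0, so z* = 1 is the constrained minimum.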
2. Dynamic Programming
Idea: calculation of the optimal control input (or policy) and cost starting at the end state x(N) and going backwards to the initial state x0.

2.1. Problem statement
Time-invariant discrete-time dynamical control system
x(k+1) = f(x(k), u(k)),  k = 0, 1, 2, ...,  x(0) = x0
with cost
V(x0, u) = Vf(x(N)) + Σ_{k=0}^{N−1} l(x(k), u(k))
and constraints x(k) ∈ X, k = 0, 1, ..., N−1, x(N) ∈ Xf, u(k) ∈ U, k = 0, 1, ..., N−1.
→ Find u = {u(0), u(1), ..., u(N−1)} where u(k) = μk(x(k)) are control laws.

2.2. Notation
Cost-to-go:
Vi(x, u^i) = Vf(x(N)) + Σ_{k=i}^{N−1} l(x(k), u(k))
with u^i = {u(i), u(i+1), ..., u(N−1)} (control sequence starting from instant i).
Optimal cost-to-go:
Vi*(x) = min_{u^i ∈ Υi(x)} Vi(x, u^i)
with
Υi(x) := {u^i | for initial state x(i) = x: u(k) ∈ U, k = i, ..., N−1; x(k) ∈ X, k = i+1, ..., N−1; x(N) ∈ Xf}
= sequences of controls that, starting from state x at time i, permit reaching a final state at time N,
and Ξi := {x ∈ X | Υi(x) ≠ ∅}
= states for which a control exists such that a desired final state can be reached in the given time.
Recursive construction of the feasible sets Ξi from behind:
ΞN = Xf
Ξi = {x(i) ∈ X | x(i+1) ∈ Ξi+1 with u(i) ∈ U},  i = N−1, N−2, ..., 0

2.3. Bellman Recursion
Idea: suppose that the solution from state i+1 to state N is optimal and calculate the optimal control in state i.
Recursive calculation of the optimal cost-to-go Vi* from behind:
VN*(x(N)) = Vf(x(N))
Vi*(x(i)) = min_{u(i)} { l(x(i), u(i)) + V*_{i+1}(f(x(i), u(i))) | u(i) ∈ U, x(i) ∈ X, f(x(i), u(i)) ∈ Ξi+1 }
for i = N−1, N−2, ..., 0, and delivers u(i) = μi*(x(i)).
u(i) ∈ U, x(i) ∈ X, f(x(i), u(i)) ∈ Ξi+1 are the optimization constraints.
Attention: Vi*(x(i)) depends only on the state x(i) and nothing else!
→ u(i) = u(x(i)); always calculate u(i) explicitly, as it might be needed for cost calculation or prediction.
Generally: express all unknown variables in state i only via the variables that are relevant in that state!

3. Stability

3.1. Stability concepts
System class: x⁺ = f̃(x)
Equilibrium point: xeq = f̃(xeq)
Assumption: if f̃ is not continuous, it is at least locally bounded.

Definition of stability in the sense of Lyapunov:
The equilibrium point xeq = 0 of x⁺ = f̃(x) is locally stable if for all ε > 0 there exists a δ(ε) > 0 such that for all ||x(0)|| ≤ δ(ε) it holds that ||x(k)|| ≤ ε for all k > 0.
It is locally asymptotically stable if in addition lim_{k→∞} ||x(k)|| = 0 for x(0) close to the origin.
It is globally asymptotically stable if in addition lim_{k→∞} ||x(k)|| = 0 for all x(0) ∈ R^n.
It is asymptotically stable in X if in addition lim_{k→∞} ||x(k)|| = 0 for all x(0) ∈ X, where X is positive invariant. The set X is the region of attraction of the equilibrium.

Definition of positive invariance:
X is positive invariant for x⁺ = f̃(x) if f̃(x) ∈ X for all x ∈ X.

Definition of comparison functions:
A function α is a class K function if it is continuous and strictly increasing with α(0) = 0.
A function α is a class K∞ function if it is a class K function and in addition unbounded.
A function α is a class PD function if it is continuous with α(0) = 0 and α(x) > 0 for all x ≠ 0.

Lyapunov's direct method:
A function V: R^n → R is a global Lyapunov function for the equilibrium of x⁺ = f̃(x) if α1, α2 ∈ K∞ and α3 ∈ PD exist such that for all x ∈ R^n:
1. α1(||x||) ≤ V(x) ≤ α2(||x||)
2. V(f̃(x)) − V(x) ≤ −α3(||x||)
If V is a global Lyapunov function for the equilibrium of x⁺ = f̃(x), then the equilibrium is globally asymptotically stable.

Lyapunov's direct method (constrained):
A function V: X → R with X invariant is a Lyapunov function for the equilibrium of x⁺ = f̃(x) on X if α1, α2 ∈ K∞ and α3 ∈ PD exist such that for all x ∈ X:
1. α1(||x||) ≤ V(x) ≤ α2(||x||)
2. V(f̃(x)) − V(x) ≤ −α3(||x||)
If V is a Lyapunov function for the equilibrium of x⁺ = f̃(x) on X, then the equilibrium is asymptotically stable on X.

3.2. Assumptions for stability of MPC
Assumption 3:
l(x, u) ≥ αl(||x||) ∀x ∈ XN, ∀u ∈ U
Jf(x) ≤ αf(||x||) ∀x ∈ Xf
where αl, αf are class K∞ functions.
Assumption 4:
Jf is a Control Lyapunov Function (CLF), that means Jf(0) = 0, Jf > 0, and there exists u ∈ U such that Jf(f(x, u)) − Jf(x) ≤ −l(x, u) ∀x ∈ Xf.
Assumption 5:
Xf is control invariant, that means: if x ∈ Xf, then there exists u ∈ U such that f(x, u) ∈ Xf.

3.3. Stability of MPC
Under Assumptions 1 to 5 (cf. 1.6 for Assumptions 1 and 2), the equilibrium xeq is asymptotically stable in XN for x⁺ = f(x, κN(x)).
Proof: choose the Lyapunov function VN(x) = JN(x, u*) and show properties 1 and 2 of Lyapunov's direct method.

3.4. Recursive feasibility
Definition:
MPC is said to be recursively feasible if one can assure that PN(x⁺) has a solution whenever PN(x) has a solution.
Recursive feasibility:
If Xf is control invariant, then
• X_{j−1} ⊆ X_j, j = 1, ..., N
• X_j is control invariant, j = 1, ..., N
• MPC is recursively feasible

3.5. Closed-loop stability
For a linear closed-loop system of the form x(k+1) = Ax(k), the following stability statements apply:
• For a scalar A ≥ 0, the system is critically stable if A = 1 and asymptotically stable if A < 1.
• If A is such that xeq = 0 is reached in one timestep, this is called dead-beat control.
• If A is a matrix, the system is stable if |λA| < 1 for all eigenvalues λA of A.

4. MPC for Linear Systems

4.1. System class for linear MPC
System class: x⁺ = Ax + Bu
Cost:
JN(x, u) = ½ ||x(N)||²_Pf + ½ Σ_{k=0}^{N−1} (||x(k)||²_Q + ||u(k)||²_R)
Constraints: x(k) ∈ X, u(k) ∈ U, x(N) ∈ Xf, where X, U and Xf are convex polytopes.

4.2. LQ control (no constraints, non-receding finite horizon)
System class: x⁺ = Ax + Bu
Cost:
V0(x0, u) = ½ ||x(N)||²_Pf + ½ Σ_{k=0}^{N−1} (||x(k)||²_Q + ||u(k)||²_R)
Control law: u(k) = K(k)x(k) for k = 0, ..., N−1, with
K(k) = −(Bᵀ P(k+1) B + R)⁻¹ Bᵀ P(k+1) A
and the Riccati difference equation
P(k) = Aᵀ P(k+1) A + Q − Aᵀ P(k+1) B (Bᵀ P(k+1) B + R)⁻¹ Bᵀ P(k+1) A
with P(N) = Pf. In addition, V0*(x0) = ½ x0ᵀ P(0) x0.
Note: the Riccati matrix P(k) is symmetric and positive definite, as it describes the cost-to-go at time k.

4.3. LQ control (no constraints, infinite horizon)
System class: x⁺ = Ax + Bu
Cost:
V(x, u) = ½ Σ_{k=0}^{∞} (||x(k)||²_Q + ||u(k)||²_R)
Control law: u(k) = K∞ x(k), with
K∞ = −(Bᵀ P∞ B + R)⁻¹ Bᵀ P∞ A
and the Riccati equation
P∞ = Q + K∞ᵀ R K∞ + (A + B K∞)ᵀ P∞ (A + B K∞)
   = Q + Aᵀ P∞ A + K∞ᵀ Bᵀ P∞ A

4.4. MPC (constrained, receding horizon)
Stability of the equilibrium xeq = 0 under MPC if:
• unconstrained: Pf = P∞
• constrained:
1. Pf = P∞
2. constraint admissibility: Xf ⊆ {x ∈ X | K∞ x ∈ U}
3. positive invariance: x ∈ Xf → x⁺ = (A + B K∞)x ∈ Xf
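The backward Riccati recursion of 4.2 can be checked numerically on a scalar system (the numbers A, B, Q, R, Pf, N and x0 below are assumed illustration values): simulating u(k) = K(k) x(k) must reproduce the optimal cost V0*(x0) = ½ P(0) x0².

```python
# Scalar finite-horizon LQ: backward Riccati recursion, then forward simulation.
A, B, Q, R, Pf, N = 1.1, 1.0, 1.0, 1.0, 1.0, 20

P = Pf
gains = []
for _ in range(N):                       # k = N-1, ..., 0
    K = -(B * P * A) / (B * P * B + R)   # K(k)
    P = A * P * A + Q - A * P * B * (B * P * B + R) ** -1 * B * P * A
    gains.append(K)
gains.reverse()                          # gains[k] = K(k); P is now P(0)

# Simulate u(k) = K(k) x(k) and accumulate the cost.
x, V = 2.0, 0.0
for k in range(N):
    u = gains[k] * x
    V += 0.5 * (Q * x * x + R * u * u)
    x = A * x + B * u
V += 0.5 * Pf * x * x                    # terminal cost
print(round(V, 6), round(0.5 * P * 2.0 ** 2, 6))
```

The two printed numbers agree (up to floating-point error), confirming V0*(x0) = ½ x0ᵀ P(0) x0 in the scalar case.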
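For the infinite-horizon case of 4.3, a minimal sketch (scalar system, assumed numbers) is to iterate the Riccati recursion to a fixed point and verify the identity P∞ = Q + A P∞ A + K∞ B P∞ A together with closed-loop stability |A + B K∞| < 1:

```python
# Scalar infinite-horizon LQ via fixed-point iteration of the Riccati equation.
A, B, Q, R = 1.1, 1.0, 1.0, 1.0

P = 0.0
for _ in range(500):                     # iterate until P converges to P_inf
    P = A * P * A + Q - A * P * B * (B * P * B + R) ** -1 * B * P * A

K = -(B * P * A) / (B * P * B + R)       # K_inf
residual = P - (Q + A * P * A + K * B * P * A)   # identity from 4.3
closed_loop = A + B * K                  # closed-loop pole, must satisfy |.| < 1
print(round(residual, 9), round(closed_loop, 4))
```

The residual of the identity is (numerically) zero, and the closed-loop pole lies strictly inside the unit circle even though the open-loop A = 1.1 is unstable.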
4.5. The underlying optimization problem of linear MPC is a QP problem
Using the predictions
x(k) = A^k x(0) + A^{k−1} B u(0) + ... + A B u(k−2) + B u(k−1)
for all k in the horizon, the cost function
JN(x0, u) = ½ ||x(N)||²_Pf + ½ Σ_{k=0}^{N−1} (||x(k)||²_Q + ||u(k)||²_R)
can be transformed into
JN(x, u) = ½ uᵀ H(x0) u + c(x0)ᵀ u + d(x0)
with u ∈ UN(x0), where the constraint sets are polytopes.
A QP problem (= quadratic cost with linear constraints) allows for efficient numerics.

5. Generalized Predictive Control (GPC)

5.1. System class for GPC
System class:
A(z⁻¹) y(t) = B(z⁻¹) z⁻ᵈ u(t−1) + C(z⁻¹) e(t)
• Denominator A(z⁻¹) = 1 + a1 z⁻¹ + a2 z⁻² + ... + an z⁻ⁿ
• Numerator B(z⁻¹) = 1 + b1 z⁻¹ + b2 z⁻² + ... + bm z⁻ᵐ, m < n
• Shift operator z⁻ᵏ y(t) = y(t−k)
• Dead time z⁻ᵈ; in the following d = 0
• e(t): white noise with zero mean
• Δ = 1 − z⁻¹
• C(z⁻¹) for colored noise; in the following C(z⁻¹) = 1
Cost:
J = Σ_{j=1}^{N} δ(j) (ŷ(t+j|t) − w(t+j))² + Σ_{j=1}^{M} λ(j) (Δu(t+j−1))²
• Horizons: N is the prediction horizon, M is the control horizon; in the following M = N.
• Weights δ(j), λ(j)
• Reference trajectory w(t)
• Prediction for time t+j made at time t: ŷ(t+j|t)
• Control increment Δu(t) = u(t) − u(t−1)

5.2. Diophantine equation
For C(z⁻¹) = 1:
1 = Ej(z⁻¹) Ã(z⁻¹) + z⁻ʲ Fj(z⁻¹)
with Ã = ΔA,
Ej(z⁻¹): polynomial of degree j−1,
Fj(z⁻¹): polynomial of the degree of A,
Gj(z⁻¹) = B(z⁻¹) Ej(z⁻¹), and gi,j the j-th coefficient of the polynomial Gi.

5.3. Solving the Diophantine equation
1 = Ej(z⁻¹) Ã(z⁻¹) + z⁻ʲ Fj(z⁻¹)
⇕
1 / Ã(z⁻¹) = Ej(z⁻¹) + z⁻ʲ Fj(z⁻¹) / Ã(z⁻¹)
where Ej(z⁻¹) is the whole part of the polynomial long division and z⁻ʲ Fj(z⁻¹) / Ã(z⁻¹) is its remainder.
→ Continue the division until Ej is of degree j−1 and restructure the remainder to get Fj.

5.4. Prediction
Scalar case:
ŷ(t+j|t) = B(z⁻¹) Ej(z⁻¹) Δu(t+j−1) + Fj(z⁻¹) y(t),  with Gj(z⁻¹) = B(z⁻¹) Ej(z⁻¹)
Vector case:
y = G u + p
with y = [ŷ(t+1|t), ŷ(t+2|t), ..., ŷ(t+N|t)]ᵀ and u = [Δu(t), Δu(t+1), ..., Δu(t+N−1)]ᵀ.
Attention: the u vector contains the input differences! If the absolute input values are needed, split up all Δu(·) terms according to the definition of Δ.
The choice of y and u defines G and p:
G = [ g1,0     0        ...  0    ]
    [ g2,1     g2,0     ...  0    ]
    [ ...      ...      ...  ...  ]
    [ gN,N−1   gN,N−2   ...  gN,0 ]
p = F(z⁻¹) y(t) + G′(z⁻¹) Δu(t−1)
with F(z⁻¹) = [F1(z⁻¹), F2(z⁻¹), ..., FN(z⁻¹)]ᵀ and
G′(z⁻¹) = [ (G1(z⁻¹) − g1,0) z,
            (G2(z⁻¹) − g2,0 − g2,1 z⁻¹) z²,
            ...,
            (GN(z⁻¹) − gN,0 − ... − gN,N−1 z⁻⁽ᴺ⁻¹⁾) z^N ]ᵀ

5.5. QP problem
The cost
J = uᵀ (Gᵀ Q G + R) u + 2 (p − w)ᵀ Q G u + (p − w)ᵀ Q (p − w)
is minimized by the sequence of future controls
u = −(Gᵀ Q G + R)⁻¹ Gᵀ Q (p − w)
of which only u1 = Δu(t) is applied as control.

6. Numerics

6.1. Nonlinear Programming (NP)
min F(z) (cost function)
s.t. g(z) = 0 (equality constraints)
h(z) ≤ 0 (inequality constraints)
Necessary conditions for a minimum:
If z* is a feasible minimum, then ∇z L(z*, λ*, μ*) = 0, ∇λ L(z*, λ*, μ*) = 0, μ* ≥ 0, h(z*) ≤ 0 and μi* hi(z*) = 0 for all i, with Lagrange function
L = F(z) + λᵀ g(z) + μᵀ h(z)
and multipliers λ and μ. Active set: A = {j | μj > 0}.

6.2. Unconstrained minimization: Newton's method
Find the minimum z* numerically as follows:
• Initialize: guess z(0) close to z*, then for k = 0, 1, 2, ...:
• Update: z(k+1) = z(k) + α(k) d(k), with search direction
d(k) = −(∇²F(z(k)))⁻¹ ∇F(z(k))
and line search
α(k) = arg min_α F(z(k) + α d(k))
• Stop if ||∇F(z(k))|| < εtol.
As Newton's method is based on approximating the function by a quadratic Taylor expansion, only one iteration of the algorithm is necessary when the cost function is quadratic.

6.3. Constrained minimization: Quadratic Programming (QP) method
Applies to the QP problem class:
F(z) = ½ zᵀ H z + cᵀ z (quadratic cost function)
g(z) = E z + e = 0 (linear equality constraints)
h(z) = I z + i ≤ 0 (linear inequality constraints)
Find the minimum z* numerically as follows:
• Initialize: guess an initial active set A0, then for k = 0, 1, 2, ...:
• Update the optimization variables:
[ H     Eᵀ  I(k)ᵀ ] [ z(k+1) ]   [ −c    ]
[ E     0   0     ] [ λ(k+1) ] = [ −e    ]
[ I(k)  0   0     ] [ μ(k+1) ]   [ −i(k) ]
where I(k) z + i(k) ≤ 0 are the inequality constraints from the active set Ak that are treated as equality constraints.
• Update the active set:
if μj(k+1) < 0 for a j ∈ Ak: delete constraint j from the active set;
if Ij z(k+1) + ij ≥ 0 for a j ∉ Ak: add constraint j to the active set;
and thus get Ak+1.
• Stop if Ak does not change any more.

6.4. Constrained minimization: Sequential Quadratic Programming (SQP) method
Combine Newton's method (linearization of the NP problem to obtain a QP problem) and the QP method (choice of a feasible active set of the linearized problem).
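The one-step property of Newton's method on a quadratic cost (6.2) is easy to check: for F(z) = ½ zᵀHz + cᵀz the Hessian is H, so a single full step z − H⁻¹∇F(z) lands on the minimum from any start. H and c below are hand-picked illustration values; the 2×2 solve uses Cramer's rule:

```python
# Newton's method on a 2-D quadratic: one full step reaches the minimum.
H = [[4.0, 1.0], [1.0, 3.0]]
c = [-1.0, -2.0]

def grad(z):
    """Gradient of F(z) = 1/2 z^T H z + c^T z, i.e. H z + c."""
    return [H[0][0]*z[0] + H[0][1]*z[1] + c[0],
            H[1][0]*z[0] + H[1][1]*z[1] + c[1]]

def solve2(M, r):
    """Solve the 2x2 linear system M d = r by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(r[0]*M[1][1] - r[1]*M[0][1]) / det,
            (M[0][0]*r[1] - M[1][0]*r[0]) / det]

z = [5.0, -7.0]                      # arbitrary starting guess z(0)
g = grad(z)
d = solve2(H, [-g[0], -g[1]])        # d = -(grad^2 F)^-1 grad F, grad^2 F = H
z = [z[0] + d[0], z[1] + d[1]]       # full step, alpha = 1

g = grad(z)                          # gradient after one iteration
print(g)
```

After the single step the gradient is zero (up to floating-point error), so the stopping criterion ||∇F(z)|| < ε_tol is met immediately.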
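The Diophantine-equation procedure of 5.2/5.3 can be sketched in code: divide 1 by Ã(z⁻¹) for j steps of polynomial long division, take the quotient as Ej and the remainder as z⁻ʲFj. The plant A(z⁻¹) below is an assumed example; polynomials are stored as coefficient lists in ascending powers of z⁻¹:

```python
# Solve 1 = Ej(z^-1) Atilde(z^-1) + z^-j Fj(z^-1) by long division.
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for k, qk in enumerate(q):
            out[i + k] += pi * qk
    return out

A = [1.0, -0.8]                          # example plant A(z^-1) = 1 - 0.8 z^-1
Atilde = poly_mul([1.0, -1.0], A)        # Atilde = Delta * A = (1 - z^-1) A

def diophantine(atilde, j):
    """Divide 1 by atilde for j steps; return (Ej, Fj)."""
    rem = [1.0] + [0.0] * (j + len(atilde) - 2)   # current remainder
    E = []
    for step in range(j):
        q = rem[step]                    # quotient coefficient of z^-step
        E.append(q)
        for k, a in enumerate(atilde):   # subtract q * z^-step * atilde
            rem[step + k] -= q * a
    return E, rem[j:]                    # remainder = z^-j * Fj

E2, F2 = diophantine(Atilde, 2)

# Verify the identity: Ej * Atilde + z^-j * Fj must equal 1.
check = poly_mul(E2, Atilde)
for k, f in enumerate(F2):
    check[2 + k] += f
print(E2, F2, check)
```

Here Ej is of degree j−1 = 1 and Fj has the degree of A, as stated in 5.2, and the reconstructed product returns the constant polynomial 1.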
6.5. Constrained minimization: Interior Point (IP) method
Replace the NP problem by
min F(z) (cost function)
s.t. g(z) = 0 (equality constraints)
h(z) + s = 0 (former inequality constraints, now equality constraints)
s ≥ 0 (new inequality constraints)
Lagrange function:
L = F(z) + yᵀ g(z) + wᵀ (h(z) + s) − μᵀ s
Necessary conditions related to the inequality constraints: s ≥ 0, μ ≥ 0 and sᵀμ = 0.
Relax the complementarity condition sᵀμ = 0 by introducing a barrier parameter ε, i.e. sᵀμ = ε, and solve with ε → 0.

7. Robust MPC

7.1. Types of uncertainties
Parametric uncertainty, modeling errors, measurement noise, etc.
E.g. additive disturbance: x⁺ = f(x, u) + w, where w is the disturbance and w ∈ W with W bounded.

7.2. Robust stability
Under bounded small disturbances, does the state reach a small vicinity of the origin?
Nominal robust stability holds only if the Lyapunov function is continuous (e.g. for linear MPC!).

7.3. Tube-based MPC for linear systems
Nominal model: z⁺ = Az + Bv
Disturbed (real) model: x⁺ = Ax + Bu + w
Use a tube S(k), k = 0, 1, 2, ..., to check constraint admissibility: S(k) is a set such that all possible trajectories x(k) of the disturbed system fulfil x(k) ∈ {z(k)} ⊕ S(k) (→ apply all possible disturbances!).
(The ⊕ operator denotes the Minkowski sum.)
MPC feedback control: u* = v* + K(x − z), where K is found offline. An appropriate choice of K reduces the size of the tube.
Optimize the cost to find v* such that x* is constraint admissible, by tightening the constraints by the tube.

8. Math Annex

8.1. Set theory
Closed set: a set is closed if it also contains all limit points of the set. The complement of a closed set is always an open set.
Compact set: a set S of real numbers is called compact if every sequence in S has a subsequence that converges to an element again contained in S. A set is compact if and only if it is closed and bounded.
Convex set: a set is convex if, for any two points, it contains the whole line segment that joins them.

8.2. Convex polytope
A polytope is an object of any dimension with only flat sides. A polytope is called convex if the set it describes is a convex set.

8.3. Fixed point equation
For a recursively defined sequence x(k+1) = f(x(k)), possible final values result from the solution of the fixed point equation
x∞ = f(x∞)

8.4. Binomial formulas
(a ± b)² = a² ± 2ab + b²
(a + b + c)² = a² + b² + c² + 2ab + 2bc + 2ac
(a ± b)³ = a³ ± 3a²b + 3ab² ± b³
(a + b)(a − b) = a² − b²
(ax + b)(cx + d) = ac·x² + (ad + bc)·x + bd

8.5. Polynomial long division
Given two polynomials A(x) and B(x), calculate A(x) : B(x) as follows:
• Order the summands in both polynomials by descending degree. If necessary, factor out x^c in one of the polynomials so that the highest degree in both polynomials is the same.
• Divide the highest-degree summand of A(x) by the highest-degree summand of B(x) and note the resulting factor d(x).
• Multiply the whole of B(x) by d(x) and subtract the result from A(x).
• Repeat the last two steps until done.

8.6. Matrix inverse
The inverse of a matrix A ∈ C^{2×2} is given by
A = [ a  b ]   →   A⁻¹ = 1/(ad − bc) [  d  −b ]
    [ c  d ]                         [ −c   a ]

8.7. Multidimensional derivative
The derivatives of matrix-vector products are
d/dx (xᵀ A x) = (A + Aᵀ) x
d/dx (xᵀ B y) = B y

8.8. Binomial expansions
(a ± b)² = a² ± 2ab + b²
(a ± b)³ = a³ ± 3a²b + 3ab² ± b³
(a ± b)⁴ = a⁴ ± 4a³b + 6a²b² ± 4ab³ + b⁴
In general, the coefficients of the summands can be found via Pascal's triangle; the sign of b determines the alternating signs:
1
1 2 1
1 3 3 1
1 4 6 4 1
...
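The long-division recipe of 8.5 can be followed mechanically with polynomials stored as coefficient lists in descending degree (the example polynomials below are made up): for A(x) = x³ − 2x² + 3x − 4 and B(x) = x − 1, the result must satisfy A = d·B + r.

```python
# Polynomial long division A(x) : B(x), coefficients in descending degree.
def poly_divide(Ac, Bc):
    """Return quotient d and remainder r with A = d*B + r."""
    rem = list(Ac)
    d = [0.0] * (len(Ac) - len(Bc) + 1)
    for i in range(len(d)):
        f = rem[i] / Bc[0]            # divide the highest-degree summands
        d[i] = f
        for k, b in enumerate(Bc):    # multiply B by f and subtract from A
            rem[i + k] -= f * b
    return d, rem[len(d):]            # what is left is the remainder

A_ = [1.0, -2.0, 3.0, -4.0]           # x^3 - 2x^2 + 3x - 4
B_ = [1.0, -1.0]                      # x - 1
d, r = poly_divide(A_, B_)
print(d, r)
```

This yields d(x) = x² − x + 2 and remainder −2, which checks out: (x − 1)(x² − x + 2) − 2 = x³ − 2x² + 3x − 4.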