Model Predictive Control:
Theory, Computation, and Design
2nd Edition

ISBN 9780975937730

Model Predictive Control:
Theory, Computation, and Design
2nd Edition

James B. Rawlings
Department of Chemical Engineering
University of California
Santa Barbara, California, USA

David Q. Mayne
Department of Electrical and Electronic Engineering
Imperial College London
London, England

Moritz M. Diehl
Department of Microsystems Engineering and

Department of Mathematics
University of Freiburg
Freiburg, Germany

Nob Hill Publishing

Santa Barbara, California


This book was set in Lucida using LaTeX, and printed and bound by Worzalla. It was printed on Forest Stewardship Council certified acid-free recycled paper.

Cover design by Cheryl M. and James B. Rawlings.

Copyright © 2019 by Nob Hill Publishing, LLC

All rights reserved.

Nob Hill Publishing, LLC


Cheryl M. Rawlings, publisher
Santa Barbara, CA 93117
orders@nobhillpublishing.com
http://www.nobhillpublishing.com

No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Library of Congress Control Number: 2017909542

Printed in the United States of America.

First Edition
First Printing August 2009
Electronic Download November 2013
Electronic Download (2nd) April 2014
Electronic Download (3rd) July 2014
Electronic Download (4th) October 2014
Electronic Download (5th) February 2015
Second Edition
First Printing October 2017
Electronic Download October 2018
Electronic Download (2nd) February 2019
To Cheryl, Josephine, and Stephanie,

for their love, encouragement, and patience.


Preface to the Second Edition

In the eight years since the publication of the first edition, the field of model predictive control (MPC) has seen tremendous progress. First and foremost, the algorithms and high-level software available for solving challenging nonlinear optimal control problems have advanced significantly. For this reason, we have added a new chapter, Chapter 8, “Numerical Optimal Control,” and a coauthor, Professor Moritz M. Diehl. This chapter gives an introduction to methods for the numerical solution of the MPC optimization problem. Numerical optimal control builds on two fields: simulation of differential equations, and numerical optimization. Simulation is often covered in undergraduate courses and is therefore only briefly reviewed. Optimization is treated in much more detail, covering topics such as derivative computations, Hessian approximations, and handling inequalities. Most importantly, the chapter presents some of the many ways that the specific structure of optimal control problems arising in MPC can be exploited algorithmically.

We have also added a software release with the second edition of the text. The software enables the solution of all of the examples and exercises in the text that require numerical calculation. The software is based on the freely available CasADi language and a high-level set of Octave/MATLAB functions, MPCTools, that serves as an interface to CasADi. These tools have been tested in several MPC short courses with audiences composed of researchers and practitioners. The software can be downloaded from www.chemengr.ucsb.edu/~jbraw/mpc.
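As a point of reference, the short Octave/MATLAB fragment below shows the kind of CasADi call on which the software release builds. It is only a sketch: it uses CasADi's documented nlpsol interface to solve a small nonlinear program of the sort that arises in each MPC optimization step; the test problem and variable names are illustrative only, and the MPCTools functions themselves are not shown.

    % Minimal CasADi sketch (assumes CasADi's Octave/MATLAB interface is on the path).
    % Solve: minimize (x1 - 1)^2 + x2^2 subject to x1 + x2 >= 1.
    x = casadi.MX.sym('x', 2);                       % decision variables
    f = (x(1) - 1)^2 + x(2)^2;                       % objective function
    g = x(1) + x(2);                                 % constraint function
    nlp = struct('x', x, 'f', f, 'g', g);
    solver = casadi.nlpsol('solver', 'ipopt', nlp);  % construct an NLP solver (IPOPT)
    sol = solver('x0', [0; 0], 'lbg', 1, 'ubg', inf);
    disp(full(sol.x))                                % optimal point, approximately [1; 0]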
In Chapter 2, we have added sections covering the following topics:
• economic MPC
• MPC with discrete actuators
We also present a more recent form of suboptimal MPC that is provably robust as well as computationally tractable for online solution of nonconvex MPC problems.
In Chapter 3, we have added a discussion of stochastic MPC, which
has received considerable recent research attention.
In Chapter 4, we have added a new treatment of state estimation with persistent, bounded process and measurement disturbances. We have also removed the discussion of particle filtering. There are two reasons for this removal: first, we wanted to maintain a manageable total length of the text; second, all of the available sampling strategies in particle filtering come up against the “curse of dimensionality,” which renders the state estimates inaccurate for dimensions higher than about five. The material on particle filtering remains available on the text website.
In Chapter 6, we have added a new section for distributed MPC of
nonlinear systems.
In Chapter 7, we have added the software to compute the critical
regions in explicit MPC.
Throughout the text, we use the stronger KL-definition of asymptotic stability in place of the classical definition used in the first edition. The most significant notational change is to denote a sequence with (a, b, c, . . .) instead of {a, b, c, . . .} as in the first edition.

JBR DQM MMD


Madison, Wis., USA London, England Freiburg, Germany
Preface

Our goal in this text is to provide a comprehensive and foundational treatment of the theory and design of model predictive control (MPC). By now several excellent monographs emphasizing various aspects of MPC have appeared (a list appears at the beginning of Chapter 1), and the reader may naturally wonder what is offered here that is new and different. By providing a comprehensive treatment of the MPC foundation, we hope that this text enables researchers to learn and teach the fundamentals of MPC without continuously searching the diverse control research literature for omitted arguments and requisite background material. When teaching the subject, it is essential to have a collection of exercises that enables the students to assess their level of comprehension and mastery of the topics. To support the teaching and learning of MPC, we have included more than 200 end-of-chapter exercises. A complete solution manual (more than 300 pages) is available for course instructors.
Chapter 1 is introductory. It is intended for graduate students in engineering who have not yet had a systems course. But it serves a second purpose for those who have already taken the first graduate systems course. It derives all the results of the linear quadratic regulator and optimal Kalman filter using only those arguments that extend to the nonlinear and constrained cases to be covered in the later chapters. Instructors may find that this tailored treatment of the introductory systems material serves both as a review and a preview of arguments to come in the later chapters.
Chapters 2–4 are foundational and should probably be covered in any graduate level MPC course. Chapter 2 covers regulation to the origin for nonlinear and constrained systems. This material presents in a unified fashion many of the major research advances in MPC that took place during the last 20 years. It also includes more recent topics such as regulation to an unreachable setpoint that are only now appearing in the research literature. Chapter 3 addresses MPC design for robustness, with a focus on MPC using tubes or bundles of trajectories in place of the single nominal trajectory. This chapter again unifies a large body of research literature concerned with robust MPC. Chapter 4 covers state estimation with an emphasis on moving horizon estimation, but also covers extended and unscented Kalman filtering, and particle filtering.


Chapters 5–7 present more specialized topics. Chapter 5 addresses
the special requirements of MPC based on output measurement instead
of state measurement. Chapter 6 discusses how to design distributed
MPC controllers for large-scale systems that are decomposed into many
smaller, interacting subsystems. Chapter 7 covers the explicit optimal
control of constrained linear systems. The choice of coverage of these
three chapters may vary depending on the instructor’s or student’s own
research interests.
Three appendices are included, again, so that the reader is not sent off to search a large research literature for the fundamental arguments used in the text. Appendix A covers the required mathematical background. Appendix B summarizes the results used for stability analysis, including the various types of stability and Lyapunov function theory. Since MPC is an optimization-based controller, Appendix C covers the relevant results from optimization theory. In order to reduce the size and expense of the text, the three appendices are available on the web: www.chemengr.ucsb.edu/~jbraw/mpc. Note, however, that all material in the appendices is included in the book’s printed table of contents, and subject and author indices. The website also includes sample exams and homework assignments for a one-semester graduate course in MPC. All of the examples and exercises in the text were solved with Octave. Octave is freely available from www.octave.org.

JBR DQM
Madison, Wisconsin, USA London, England
Acknowledgments

Both authors would like to thank the Department of Chemical and Biological Engineering of the University of Wisconsin for hosting DQM’s visits to Madison during the preparation of this monograph. Funding from the Paul A. Elfers Professorship provided generous financial support.
JBR would like to acknowledge the graduate students with whom he has had the privilege to work on model predictive control topics: Rishi Amrit, Dennis Bonné, John Campbell, John Eaton, Peter Findeisen, Rolf Findeisen, Eric Haseltine, John Jørgensen, Nabil Laachi, Scott Meadows, Scott Middlebrooks, Steve Miller, Ken Muske, Brian Odelson, Murali Rajamani, Chris Rao, Brett Stewart, Kaushik Subramanian, Aswin Venkat, and Jenny Wang. He would also like to thank many colleagues with whom he has collaborated on this subject: Frank Allgöwer, Tom Badgwell, Bhavik Bakshi, Don Bartusiak, Larry Biegler, Moritz Diehl, Jim Downs, Tom Edgar, Brian Froisy, Ravi Gudi, Sten Bay Jørgensen, Jay Lee, Fernando Lima, Wolfgang Marquardt, Gabriele Pannocchia, Joe Qin, Harmon Ray, Pierre Scokaert, Sigurd Skogestad, Tyler Soderstrom, Steve Wright, and Robert Young.
DQM would like to thank his colleagues at Imperial College, especially Richard Vinter and Martin Clark, for providing a stimulating and congenial research environment. He is very grateful to Lucien Polak and Graham Goodwin, with whom he has collaborated extensively and fruitfully over many years; he would also like to thank many other colleagues, especially Karl Åström, Roger Brockett, Larry Ho, Petar Kokotovic, and Art Krener, from whom he has learned much. He is grateful to past students who have worked with him on model predictive control: Ioannis Chrysochoos, Wilbur Langson, Hannah Michalska, Sasa Raković, and Warren Schroeder; Hannah Michalska and Sasa Raković, in particular, contributed very substantially. He owes much to these past students, now colleagues, as well as to Frank Allgöwer, Rolf Findeisen, Eric Kerrigan, Konstantinos Kouramus, Chris Rao, Pierre Scokaert, and Maria Seron for their collaborative research in MPC.
Both authors would especially like to thank Tom Badgwell, Bob Bird, Eric Kerrigan, Ken Muske, Gabriele Pannocchia, and Maria Seron for their careful and helpful reading of parts of the manuscript. John Eaton again deserves special mention for his invaluable technical support during the entire preparation of the manuscript.
Added for the second edition. JBR would like to acknowledge the most recent generation of graduate students with whom he has had the privilege to work on model predictive control research topics: Doug Allan, Travis Arnold, Cuyler Bates, Luo Ji, Nishith Patel, Michael Risbeck, and Megan Zagrobelny.
In preparing the second edition, and, in particular, the software release, the current group of graduate students far exceeded expectations to help finish the project. Quite simply, the project could not have been completed in a timely fashion without their generosity, enthusiasm, professionalism, and selfless contribution. Michael Risbeck deserves special mention for creating the MPCTools interface to CasADi, and for updating and revising the tools used to create the website that distributes the text- and software-supporting materials. He also wrote code to calculate explicit MPC control laws in Chapter 7. Nishith Patel made a major contribution to the subject index, and Doug Allan contributed generously to the presentation of moving horizon estimation in Chapter 4.
A research leave for JBR in Fall 2016, again funded by the Paul A.
Elfers Professorship, was instrumental in freeing up time to complete
the revision of the text and further develop computational exercises.
MMD wants to especially thank Jesus Lago Garcia, Jochem De Schutter, Andrea Zanelli, Dimitris Kouzoupis, Joris Gillis, Joel Andersson, and Robin Verschueren for help with the preparation of exercises and examples in Chapter 8; and also wants to acknowledge the following current and former team members who contributed to research and teaching on optimal and model predictive control at the Universities of Leuven and Freiburg: Adrian Bürger, Hans Joachim Ferreau, Jörg Fischer, Janick Frasch, Gianluca Frison, Niels Haverbeke, Greg Horn, Boris Houska, Jonas Koenemann, Attila Kozma, Vyacheslav Kungurtsev, Giovanni Licitra, Rien Quirynen, Carlo Savorgnan, Quoc Tran-Dinh, Milan Vukov, and Mario Zanon. MMD also wants to thank Frank Allgöwer, Alberto Bemporad, Rolf Findeisen, Larry Biegler, Hans Georg Bock, Stephen Boyd, Sébastien Gros, Lars Grüne, Colin Jones, John Bagterp Jørgensen, Christian Kirches, Daniel Leineweber, Katja Mombaur, Yurii Nesterov, Toshiyuki Ohtsuka, Goele Pipeleers, Andreas Potschka, Sebastian Sager, Johannes P. Schlöder, Volker Schulz, Marc Steinbach, Jan Swevers, Philippe Toint, Andrea Walther, Stephen Wright, Joos Vandewalle, and Stefan Vandewalle for inspiring discussions on numerical optimal control methods and their presentation during the last 20 years.


All three authors would especially like to thank Joel Andersson and
Joris Gillis for having developed CasADi and for continuing its support,
and for having helped to improve some of the exercises in the text.
Contents

1 Getting Started with Model Predictive Control 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Models and Modeling . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Linear Dynamic Models . . . . . . . . . . . . . . . . . 2
1.2.2 Input-Output Models . . . . . . . . . . . . . . . . . . 3
1.2.3 Distributed Models . . . . . . . . . . . . . . . . . . . 4
1.2.4 Discrete Time Models . . . . . . . . . . . . . . . . . . 5
1.2.5 Constraints . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.6 Deterministic and Stochastic . . . . . . . . . . . . . 9
1.3 Introductory MPC Regulator . . . . . . . . . . . . . . . . . . 11
1.3.1 Linear Quadratic Problem . . . . . . . . . . . . . . . 11
1.3.2 Optimizing Multistage Functions . . . . . . . . . . . 12
1.3.3 Dynamic Programming Solution . . . . . . . . . . . 18
1.3.4 The Infinite Horizon LQ Problem . . . . . . . . . . . 21
1.3.5 Controllability . . . . . . . . . . . . . . . . . . . . . . 23
1.3.6 Convergence of the Linear Quadratic Regulator . . 24
1.4 Introductory State Estimation . . . . . . . . . . . . . . . . . 26
1.4.1 Linear Systems and Normal Distributions . . . . . 27
1.4.2 Linear Optimal State Estimation . . . . . . . . . . . 29
1.4.3 Least Squares Estimation . . . . . . . . . . . . . . . 33
1.4.4 Moving Horizon Estimation . . . . . . . . . . . . . . 39
1.4.5 Observability . . . . . . . . . . . . . . . . . . . . . . . 41
1.4.6 Convergence of the State Estimator . . . . . . . . . 43
1.5 Tracking, Disturbances, and Zero Offset . . . . . . . . . . 46
1.5.1 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.5.2 Disturbances and Zero Offset . . . . . . . . . . . . . 49
1.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

2 Model Predictive Control—Regulation 89


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.2 Model Predictive Control . . . . . . . . . . . . . . . . . . . . 91
2.3 Dynamic Programming Solution . . . . . . . . . . . . . . . 107
2.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 112


2.4.2 Stabilizing Conditions . . . . . . . . . . . . . . . . . 114


2.4.3 Exponential Stability . . . . . . . . . . . . . . . . . . 120
2.4.4 Controllability and Observability . . . . . . . . . . . 120
2.4.5 Time-Varying Systems . . . . . . . . . . . . . . . . . 123
2.5 Examples of MPC . . . . . . . . . . . . . . . . . . . . . . . . . 131
2.5.1 The Unconstrained Linear Quadratic Regulator . . 132
2.5.2 Unconstrained Linear Periodic Systems . . . . . . . 133
2.5.3 Stable Linear Systems with Control Constraints . 134
2.5.4 Linear Systems with Control and State Constraints 136
2.5.5 Constrained Nonlinear Systems . . . . . . . . . . . 139
2.5.6 Constrained Nonlinear Time-Varying Systems . . . 141
2.6 Is a Terminal Constraint Set Xf Necessary? . . . . . . . . 144
2.7 Suboptimal MPC . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.7.1 Extended State . . . . . . . . . . . . . . . . . . . . . . 150
2.7.2 Asymptotic Stability of Difference Inclusions . . . 150
2.8 Economic Model Predictive Control . . . . . . . . . . . . . 153
2.8.1 Asymptotic Average Performance . . . . . . . . . . 155
2.8.2 Dissipativity and Asymptotic Stability . . . . . . . 156
2.9 Discrete Actuators . . . . . . . . . . . . . . . . . . . . . . . . 160
2.10 Concluding Comments . . . . . . . . . . . . . . . . . . . . . 163
2.11 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
2.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

3 Robust and Stochastic Model Predictive Control 193


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
3.1.1 Types of Uncertainty . . . . . . . . . . . . . . . . . . 193
3.1.2 Feedback Versus Open-Loop Control . . . . . . . . 195
3.1.3 Robust and Stochastic MPC . . . . . . . . . . . . . . 200
3.1.4 Tubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
3.1.5 Difference Inclusion Description of Uncertain Sys-
tems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
3.2 Nominal (Inherent) Robustness . . . . . . . . . . . . . . . . . 204
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 204
3.2.2 Difference Inclusion Description of Discontinu-
ous Systems . . . . . . . . . . . . . . . . . . . . . . . 206
3.2.3 When Is Nominal MPC Robust? . . . . . . . . . . . . 207
3.2.4 Robustness of Nominal MPC . . . . . . . . . . . . . 209
3.3 Min-Max Optimal Control: Dynamic Programming Solution 214
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 214
3.3.2 Properties of the Dynamic Programming Solution 216

3.4 Robust Min-Max MPC . . . . . . . . . . . . . . . . . . . . . . 220


3.5 Tube-Based Robust MPC . . . . . . . . . . . . . . . . . . . . 223
3.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 223
3.5.2 Outer-Bounding Tube for a Linear System with
Additive Disturbance . . . . . . . . . . . . . . . . . . 224
3.5.3 Tube-Based MPC of Linear Systems with Additive
Disturbances . . . . . . . . . . . . . . . . . . . . . . . 228
3.5.4 Improved Tube-Based MPC of Linear Systems with
Additive Disturbances . . . . . . . . . . . . . . . . . 234
3.6 Tube-Based MPC of Nonlinear Systems . . . . . . . . . . . 236
3.6.1 The Nominal Trajectory . . . . . . . . . . . . . . . . 238
3.6.2 Model Predictive Controller . . . . . . . . . . . . . . 238
3.6.3 Choosing the Nominal Constraint Sets Ū and X̄ . . 242
3.7 Stochastic MPC . . . . . . . . . . . . . . . . . . . . . . . . . . 246
3.7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 246
3.7.2 Stabilizing Conditions . . . . . . . . . . . . . . . . . 248
3.7.3 Stochastic Optimization . . . . . . . . . . . . . . . . 248
3.7.4 Tube-Based Stochastic MPC for Linear Constrained
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262

4 State Estimation 269


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
4.2 Full Information Estimation . . . . . . . . . . . . . . . . . . 269
4.2.1 State Estimation as Optimal Control of Estimate
Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
4.2.2 Duality of Linear Estimation and Regulation . . . . 281
4.3 Moving Horizon Estimation . . . . . . . . . . . . . . . . . . 283
4.3.1 Zero Prior Weighting . . . . . . . . . . . . . . . . . . 283
4.3.2 Nonzero Prior Weighting . . . . . . . . . . . . . . . . 287
4.3.3 Constrained Estimation . . . . . . . . . . . . . . . . 294
4.3.4 Smoothing and Filtering Update . . . . . . . . . . . 295
4.4 Bounded Disturbances . . . . . . . . . . . . . . . . . . . . . 300
4.5 Other Nonlinear State Estimators . . . . . . . . . . . . . . . 308
4.5.1 Particle Filtering . . . . . . . . . . . . . . . . . . . . . 308
4.5.2 Extended Kalman Filtering . . . . . . . . . . . . . . . 309
4.5.3 Unscented Kalman Filtering . . . . . . . . . . . . . . 310
4.5.4 EKF, UKF, and MHE Comparison . . . . . . . . . . . 312
4.6 On combining MHE and MPC . . . . . . . . . . . . . . . . . 318

4.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325


4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327

5 Output Model Predictive Control 339


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
5.2 A Method for Output MPC . . . . . . . . . . . . . . . . . . . 341
5.3 Linear Constrained Systems: Time-Invariant Case . . . . 344
5.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 344
5.3.2 State Estimator . . . . . . . . . . . . . . . . . . . . . . 344
5.3.3 Controlling x̂ . . . . . . . . . . . . . . . . . . . . . . . 346
5.3.4 Output MPC . . . . . . . . . . . . . . . . . . . . . . . . 348
5.3.5 Computing the Tightened Constraints . . . . . . . 352
5.4 Linear Constrained Systems: Time-Varying Case . . . . . 353
5.5 Offset-Free MPC . . . . . . . . . . . . . . . . . . . . . . . . . 353
5.5.1 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 355
5.5.2 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
5.5.3 Convergence Analysis . . . . . . . . . . . . . . . . . 360
5.6 Nonlinear Constrained Systems . . . . . . . . . . . . . . . . 363
5.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366

6 Distributed Model Predictive Control 369


6.1 Introduction and Preliminary Results . . . . . . . . . . . . 369
6.1.1 Least Squares Solution . . . . . . . . . . . . . . . . . 370
6.1.2 Stability of Suboptimal MPC . . . . . . . . . . . . . . 375
6.2 Unconstrained Two-Player Game . . . . . . . . . . . . . . . 380
6.2.1 Centralized Control . . . . . . . . . . . . . . . . . . . 382
6.2.2 Decentralized Control . . . . . . . . . . . . . . . . . 383
6.2.3 Noncooperative Game . . . . . . . . . . . . . . . . . 384
6.2.4 Cooperative Game . . . . . . . . . . . . . . . . . . . . 392
6.2.5 Tracking Nonzero Setpoints . . . . . . . . . . . . . . 398
6.2.6 State Estimation . . . . . . . . . . . . . . . . . . . . . 405
6.3 Constrained Two-Player Game . . . . . . . . . . . . . . . . 406
6.3.1 Uncoupled Input Constraints . . . . . . . . . . . . . 408
6.3.2 Coupled Input Constraints . . . . . . . . . . . . . . 411
6.3.3 Exponential Convergence with Estimate Error . . . 413
6.3.4 Disturbance Models and Zero Offset . . . . . . . . 415
6.4 Constrained M-Player Game . . . . . . . . . . . . . . . . . . 419
6.5 Nonlinear Distributed MPC . . . . . . . . . . . . . . . . . . . 421
6.5.1 Nonconvexity . . . . . . . . . . . . . . . . . . . . . . . 421
6.5.2 Distributed Algorithm for Nonconvex Functions . 423

6.5.3 Distributed Nonlinear Cooperative Control . . . . 425


6.5.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 428
6.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435

7 Explicit Control Laws for Constrained Linear Systems 451


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.2 Parametric Programming . . . . . . . . . . . . . . . . . . . . 452
7.3 Parametric Quadratic Programming . . . . . . . . . . . . . 457
7.3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . 457
7.3.2 Preview . . . . . . . . . . . . . . . . . . . . . . . . . . 458
7.3.3 Optimality Condition for a Convex Program . . . . 459
7.3.4 Solution of the Parametric Quadratic Program . . 462
7.3.5 Continuity of V^0 (·) and u^0 (·) . . . . . . . . . . . . 466
7.4 Constrained Linear Quadratic Control . . . . . . . . . . . . 467
7.5 Parametric Piecewise Quadratic Programming . . . . . . . 469
7.6 DP Solution of the Constrained LQ Control Problem . . . 475
7.7 Parametric Linear Programming . . . . . . . . . . . . . . . 476
7.7.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . 476
7.7.2 Minimizer u^0 (x) is Unique for all x ∈ X . . . . . . 478
7.8 Constrained Linear Control . . . . . . . . . . . . . . . . . . 481
7.9 Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
7.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
7.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484

8 Numerical Optimal Control 491


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
8.1.1 Discrete Time Optimal Control Problem . . . . . . 492
8.1.2 Convex Versus Nonconvex Optimization . . . . . . 493
8.1.3 Simultaneous Versus Sequential Optimal Control 496
8.1.4 Continuous Time Optimal Control Problem . . . . 498
8.2 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . 501
8.2.1 Explicit Runge-Kutta Methods . . . . . . . . . . . . . 502
8.2.2 Stiff Equations and Implicit Integrators . . . . . . . 506
8.2.3 Implicit Runge-Kutta and Collocation Methods . . 507
8.2.4 Differential Algebraic Equations . . . . . . . . . . . 511
8.2.5 Integrator Adaptivity . . . . . . . . . . . . . . . . . . 513
8.3 Solving Nonlinear Equation Systems . . . . . . . . . . . . . 513
8.3.1 Linear Systems . . . . . . . . . . . . . . . . . . . . . . 513
8.3.2 Nonlinear Root-Finding Problems . . . . . . . . . . 514
8.3.3 Local Convergence of Newton-Type Methods . . . 517

8.3.4 Affine Invariance . . . . . . . . . . . . . . . . . . . . . 519


8.3.5 Globalization for Newton-Type Methods . . . . . . 519
8.4 Computing Derivatives . . . . . . . . . . . . . . . . . . . . . 520
8.4.1 Numerical Differentiation . . . . . . . . . . . . . . . 521
8.4.2 Algorithmic Differentiation . . . . . . . . . . . . . . 522
8.4.3 Implicit Function Interpretation . . . . . . . . . . . 523
8.4.4 Algorithmic Differentiation in Forward Mode . . . 526
8.4.5 Algorithmic Differentiation in Reverse Mode . . . 528
8.4.6 Differentiation of Simulation Routines . . . . . . . 531
8.4.7 Algorithmic and Symbolic Differentiation Software 533
8.4.8 CasADi for Optimization . . . . . . . . . . . . . . . . 533
8.5 Direct Optimal Control Parameterizations . . . . . . . . . 536
8.5.1 Direct Single Shooting . . . . . . . . . . . . . . . . . 538
8.5.2 Direct Multiple Shooting . . . . . . . . . . . . . . . . 540
8.5.3 Direct Transcription and Collocation Methods . . 544
8.6 Nonlinear Optimization . . . . . . . . . . . . . . . . . . . . . 548
8.6.1 Optimality Conditions and Perturbation Analysis 549
8.6.2 Nonlinear Optimization with Equalities . . . . . . . 552
8.6.3 Hessian Approximations . . . . . . . . . . . . . . . . 553
8.7 Newton-Type Optimization with Inequalities . . . . . . . 556
8.7.1 Sequential Quadratic Programming . . . . . . . . . 557
8.7.2 Nonlinear Interior Point Methods . . . . . . . . . . 558
8.7.3 Comparison of SQP and Nonlinear IP Methods . . 560
8.8 Structure in Discrete Time Optimal Control . . . . . . . . 561
8.8.1 Simultaneous Approach . . . . . . . . . . . . . . . . 562
8.8.2 Linear Quadratic Problems (LQP) . . . . . . . . . . . 564
8.8.3 LQP Solution by Riccati Recursion . . . . . . . . . . 564
8.8.4 LQP Solution by Condensing . . . . . . . . . . . . . 566
8.8.5 Sequential Approaches and Sparsity Exploitation 568
8.8.6 Differential Dynamic Programming . . . . . . . . . 570
8.8.7 Additional Constraints in Optimal Control . . . . . 572
8.9 Online Optimization Algorithms . . . . . . . . . . . . . . . 573
8.9.1 General Algorithmic Considerations . . . . . . . . . 574
8.9.2 Continuation Methods and Real-Time Iterations . 577
8.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
8.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583

Author Index 601

Citation Index 608



Subject Index 614

A Mathematical Background 624


A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
A.2 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
A.3 Range and Nullspace of Matrices . . . . . . . . . . . . . . . 624
A.4 Linear Equations — Existence and Uniqueness . . . . . . 625
A.5 Pseudo-Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . 625
A.6 Partitioned Matrix Inversion Theorem . . . . . . . . . . . . 628
A.7 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . 629
A.8 Norms in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
A.9 Sets in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
A.10 Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
A.11 Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
A.12 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
A.13 Convex Sets and Functions . . . . . . . . . . . . . . . . . . . 641
A.13.1 Convex Sets . . . . . . . . . . . . . . . . . . . . . . . . 641
A.13.2 Convex Functions . . . . . . . . . . . . . . . . . . . . 646
A.14 Differential Equations . . . . . . . . . . . . . . . . . . . . . . 648
A.15 Random Variables and the Probability Density . . . . . . 654
A.16 Multivariate Density Functions . . . . . . . . . . . . . . . . 659
A.16.1 Statistical Independence and Correlation . . . . . . 668
A.17 Conditional Probability and Bayes’s Theorem . . . . . . . 672
A.18 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678

B Stability Theory 693


B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
B.2 Stability and Asymptotic Stability . . . . . . . . . . . . . . 696
B.3 Lyapunov Stability Theory . . . . . . . . . . . . . . . . . . . 700
B.3.1 Time-Invariant Systems . . . . . . . . . . . . . . . . 700
B.3.2 Time-Varying, Constrained Systems . . . . . . . . . 707
B.3.3 Upper bounding K functions . . . . . . . . . . . . . 709
B.4 Robust Stability . . . . . . . . . . . . . . . . . . . . . . . . . 709
B.4.1 Nominal Robustness . . . . . . . . . . . . . . . . . . 709
B.4.2 Robustness . . . . . . . . . . . . . . . . . . . . . . . . 711
B.5 Control Lyapunov Functions . . . . . . . . . . . . . . . . . . 713
B.6 Input-to-State Stability . . . . . . . . . . . . . . . . . . . . . 717
B.7 Output-to-State Stability and Detectability . . . . . . . . . 719
B.8 Input/Output-to-State Stability . . . . . . . . . . . . . . . . 720
B.9 Incremental-Input/Output-to-State Stability . . . . . . . . 722
B.10 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . 722

B.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724

C Optimization 729
C.1 Dynamic Programming . . . . . . . . . . . . . . . . . . . . . 729
C.1.1 Optimal Control Problem . . . . . . . . . . . . . . . 731
C.1.2 Dynamic Programming . . . . . . . . . . . . . . . . . 733
C.2 Optimality Conditions . . . . . . . . . . . . . . . . . . . . . 737
C.2.1 Tangent and Normal Cones . . . . . . . . . . . . . . 737
C.2.2 Convex Optimization Problems . . . . . . . . . . . . 741
C.2.3 Convex Problems: Polyhedral Constraint Set . . . 743
C.2.4 Nonconvex Problems . . . . . . . . . . . . . . . . . . 745
C.2.5 Tangent and Normal Cones . . . . . . . . . . . . . . 746
C.2.6 Constraint Set Defined by Inequalities . . . . . . . 750
C.2.7 Constraint Set; Equalities and Inequalities . . . . . 753
C.3 Set-Valued Functions and Continuity of Value Function . 755
C.3.1 Outer and Inner Semicontinuity . . . . . . . . . . . 757
C.3.2 Continuity of the Value Function . . . . . . . . . . . 759
C.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
List of Figures

1.1 System with input u, output y, and transfer function matrix G connecting them; the model is y = Gu. . . . . . . . . 3
1.2 Typical input constraint sets U for (a) continuous actua-
tors and (b) mixed continuous/discrete actuators. . . . . . 9
1.3 Output of a stochastic system versus time. . . . . . . . . . . 10
1.4 Two quadratic functions and their sum. . . . . . . . . . . . . 15
1.5 Schematic of the moving horizon estimation problem. . . . 39
1.6 MPC controller consisting of: receding horizon regulator,
state estimator, and target selector. . . . . . . . . . . . . . . 52
1.7 Schematic of the well-stirred reactor. . . . . . . . . . . . . . . 54
1.8 Three measured outputs versus time after a step change
in inlet flowrate at 10 minutes; nd = 2. . . . . . . . . . . . . 57
1.9 Two manipulated inputs versus time after a step change
in inlet flowrate at 10 minutes; nd = 2. . . . . . . . . . . . . 57
1.10 Three measured outputs versus time after a step change
in inlet flowrate at 10 minutes; nd = 3. . . . . . . . . . . . . 58
1.11 Two manipulated inputs versus time after a step change
in inlet flowrate at 10 minutes; nd = 3. . . . . . . . . . . . . 59
1.12 Plug-flow reactor. . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.13 Pendulum with applied torque. . . . . . . . . . . . . . . . . . 62
1.14 Feedback control system with output disturbance d, and
setpoint ysp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

2.1 Example of MPC. . . . . . . . . . . . . . . . . . . . . . . . . . . 101


2.2 Feasible region U2 , elliptical cost contours and ellipse
center a(x), and constrained minimizers for different
values of x. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.3 First element of control constraint set U3 (x) (shaded)
and control law κ3 (x) (line) versus x = (cos(θ), sin(θ)),
θ ∈ [−π , π ] on the unit circle for a nonlinear system with
terminal constraint. . . . . . . . . . . . . . . . . . . . . . . . . 106
2.4 Optimal cost V3^0 (x) versus x on the unit circle. . . . . . . 107


2.5 Closed-loop economic MPC versus tracking MPC starting at x = (−8, 8) with optimal steady state (8, 4). Both controllers asymptotically stabilize the steady state. Dashed contours show cost functions for each controller. . . . . . . 159
2.6 Closed-loop evolution under economic MPC. The rotated cost function Ṽ_e^0 is a Lyapunov function for the system. . . 160
2.7 Diagram of tank/cooler system. Each cooling unit can be
either on or off, and if on, it must be between its (possibly
nonzero) minimum and maximum capacities. . . . . . . . . 163
2.8 Feasible sets XN for two values of Q̇min . Note that for
Q̇min = 9 (right-hand side), XN for N ≤ 4 are discon-
nected sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
2.9 Phase portrait for closed-loop evolution of cooler system
with Q̇min = 9. Line colors show value of discrete actuator u2 . 165
2.10 Region of attraction (shaded region) for constrained MPC
controller of Exercise 2.6. . . . . . . . . . . . . . . . . . . . . . 174
2.11 The region Xf , in which the unconstrained LQR control
law is feasible for Exercise 2.7. . . . . . . . . . . . . . . . . . . 175
2.12 The region of attraction for terminal constraint x(N) ∈ Xf and terminal penalty Vf (x) = (1/2)x'Πx and the estimate of X̄N for Exercise 2.8. . . . . . . . . . . . . . . . . . . 177
2.13 Inconsistent setpoint (xsp , usp ), unreachable stage cost ℓ(x, u), and optimal steady states (xs , us ), and stage costs ℓs (x, u) for constrained and unconstrained systems. . . . 181
2.14 Stage cost versus time for the case of unreachable setpoint. 182
2.15 Rotated cost-function contour ℓ̃(x, u) = 0 (circles) for λ = 0, −8, −12. Shaded region shows feasible region where ℓ̃(x, u) < 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

3.1 Open-loop and feedback trajectories. . . . . . . . . . . . . . . 198


3.2 The sets XN , Rb , and Rc . . . . . . . . . . . . . . . . . . . . . . 214
3.3 Outer-bounding tube X(z, ū). . . . . . . . . . . . . . . . . . . . 228
3.4 Minimum feasible α for varying N. Note that we require
α ∈ [0, 1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
3.5 Bounds on tightened constraint set Z̄ for varying N. Bounds
are |x1 | ≤ χ1 , |x2 | ≤ χ2 , and |u| ≤ µ. . . . . . . . . . . . . . . 233
3.6 Comparison of 100 realizations of standard and tube-
based MPC for the chemical reactor example. . . . . . . . . 244
3.7 Comparison of standard and tube-based MPC with an ag-
gressive model predictive controller. . . . . . . . . . . . . . . 245

3.8 Concentration versus time for the ancillary model predictive controller with sample time ∆ = 12 (left) and ∆ = 8 (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
3.9 Observed probability εtest of constraint violation. Distri-
bution is based on 500 trials for each value of ε. Dashed
line shows the outcome predicted by formula (3.23), i.e.,
εtest = ε. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
3.10 Closed-loop robust MPC state evolution with uniformly
distributed |w| ≤ 0.1 from four different x0 . . . . . . . . . . 263

4.1 MHE arrival cost Ẑk (p), underbounding prior weighting Γk (p), and MHE optimal value V̂k^0 . . . . . . . . . . . . . . . 289
4.2 Concatenating two MHE sequences to create a single state
estimate sequence. . . . . . . . . . . . . . . . . . . . . . . . . . 292
4.3 Smoothing update. . . . . . . . . . . . . . . . . . . . . . . . . . 297
4.4 Comparison of filtering and smoothing updates for the
batch reactor system. Second column shows absolute es-
timate error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
4.5 Evolution of the state (solid line) and EKF state estimate
(dashed line). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
4.6 Evolution of the state (solid line) and UKF state estimate
(dashed line). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
4.7 Evolution of the state (solid line) and MHE state estimate
(dashed line). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
4.8 Perturbed trajectories terminating in Xf . . . . . . . . . . . . 321
4.9 Closed-loop performance of combined nonlinear MHE/MPC
with no disturbances. First column shows system states,
and second column shows estimation error. Dashed line
shows concentration setpoint. Vertical lines indicate times
of setpoint changes. . . . . . . . . . . . . . . . . . . . . . . . . 323
4.10 Closed-loop performance of combined nonlinear MHE/MPC
for varying disturbance size. The system is controlled be-
tween two steady states. . . . . . . . . . . . . . . . . . . . . . . 324
4.11 Relationships between i-IOSS, FSO, and observable for K-
continuous nonlinear systems. . . . . . . . . . . . . . . . . . . 330

5.1 State estimator tube. The solid line x̂(t) is the center of
the tube, and the dashed line is a sample trajectory of x(t). 342
5.2 The system with disturbance. The state estimate lies in
the inner tube, and the state lies in the outer tube. . . . . . 343

6.1 Convex step from (u1^p , u2^p ) to (u1^(p+1) , u2^(p+1) ). . . . . . . . . . 386
6.2 Ten iterations of noncooperative steady-state calculation. . 403
6.3 Ten iterations of cooperative steady-state calculation. . . . 403
6.4 Ten iterations of noncooperative steady-state calculation;
reversed pairing. . . . . . . . . . . . . . . . . . . . . . . . . . . 404
6.5 Ten iterations of cooperative steady-state calculation; re-
versed pairing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
6.6 Cooperative control stuck on the boundary of U under
coupled constraints . . . . . . . . . . . . . . . . . . . . . . . . 412
6.7 Cost contours for a two-player, nonconvex game. . . . . . . 422
6.8 Nonconvex function optimized with the distributed gra-
dient algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
6.9 Closed-loop state and control evolution with (x1 (0), x2 (0)) =
(3, −3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
6.10 Contours of V (x(0), u1 , u2 ) for N = 1. . . . . . . . . . . . . . 432
6.11 Optimizing a quadratic function in one set of variables at
a time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
6.12 Constrained optimality conditions and the normal cone. . 445

7.1 The sets Z, X, and U(x). . . . . . . . . . . . . . . . . . . . . . 454


7.2 Parametric linear program. . . . . . . . . . . . . . . . . . . . . 454
7.3 Unconstrained parametric quadratic program. . . . . . . . . 455
7.4 Parametric quadratic program. . . . . . . . . . . . . . . . . . . 455
7.5 Polar cone. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
7.6 Regions Rx , x ∈ X for a second-order example. . . . . . . . 468
7.7 Solution to a parametric LP. . . . . . . . . . . . . . . . . . . . 479
7.8 Solution times for explicit and implicit MPC for N = 20. . . 486

8.1 Feasible set and reduced objective ψ(u(0)) of the nonlinear MPC Example 8.1. . . . . . . . . . . . . . . . . . . . . . 496
8.2 Performance of different integration methods. . . . . . . . . 505
8.3 Polynomial approximation x̃1 (t) and true trajectory x1 (t) of the first state and its derivative. . . . . . . . . . . . . . . . 510
8.4 Performance of implicit integration methods on a stiff ODE. 512
8.5 Newton-type iterations for solution of R(z) = 0 from Ex-
ample 8.5. Left: exact Newton method. Right: constant
Jacobian approximation. . . . . . . . . . . . . . . . . . . . . . 516
8.6 Convergence of different sequences as a function of k. . . 518
8.7 A hanging chain at rest. See Exercise 8.6(b). . . . . . . . . . 587

8.8 Direct single shooting solution for (8.63) without path con-
straints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
8.9 Open-loop simulation for (8.63) using collocation. . . . . . . 592
8.10 Gauss-Newton iterations for the direct multiple-shooting
method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594

A.1 The four fundamental subspaces of matrix A . . . . . . . . 626


A.2 Matrix A maps into R(A). . . . . . . . . . . . . . . . . . . . . . 627
A.3 Pseudo-inverse of A maps into R(A'). . . . . . . . . . . . . . 627
A.4 Subgradient. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
A.5 Separating hyperplane. . . . . . . . . . . . . . . . . . . . . . . 642
A.6 Polar cone. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
A.7 A convex function. . . . . . . . . . . . . . . . . . . . . . . . . . 646
A.8 Normal distribution. . . . . . . . . . . . . . . . . . . . . . . . . 658
A.9 Multivariate normal in two dimensions. . . . . . . . . . . . . 660
A.10 The geometry of quadratic form x'Ax = b. . . . . . . . . . . 661
A.11 A nearly singular normal density in two dimensions. . . . . 665
A.12 The region X(c) for y = max(x1 , x2 ) ≤ c. . . . . . . . . . . . 667
A.13 A joint density function for the two uncorrelated random
variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
A.14 The probability distribution and inverse distribution for
random variable ξ. . . . . . . . . . . . . . . . . . . . . . . . . . 687

B.1 Stability of the origin. . . . . . . . . . . . . . . . . . . . . . . . 697


B.2 An attractive but unstable origin. . . . . . . . . . . . . . . . . 698

C.1 Routing problem. . . . . . . . . . . . . . . . . . . . . . . . . . . 730


C.2 Approximation of the set U . . . . . . . . . . . . . . . . . . . . 738
C.3 Tangent cones. . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
C.4 Normal at u. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
C.5 Condition of optimality. . . . . . . . . . . . . . . . . . . . . . . 745
C.6 Tangent and normal cones. . . . . . . . . . . . . . . . . . . . . 747
C.7 Condition of optimality. . . . . . . . . . . . . . . . . . . . . . . 749
C.8 FU (u) ≠ TU (u). . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
C.9 Graph of set-valued function U(·). . . . . . . . . . . . . . . . 756
C.10 Graphs of discontinuous set-valued functions. . . . . . . . . 757
C.11 Outer and inner semicontinuity of U(·). . . . . . . . . . . . . 758
C.12 Subgradient of f (·). . . . . . . . . . . . . . . . . . . . . . . . . 762
List of Examples and Statements

1.1 Example: Sum of quadratic functions . . . . . . . . . . . . . 15


1.2 Lemma: Hautus lemma for controllability . . . . . . . . . . . 24
1.3 Lemma: LQR convergence . . . . . . . . . . . . . . . . . . . . . 24
1.4 Lemma: Hautus lemma for observability . . . . . . . . . . . 42
1.5 Lemma: Convergence of estimator cost . . . . . . . . . . . . 43
1.6 Lemma: Estimator convergence . . . . . . . . . . . . . . . . . 44
1.7 Assumption: Target feasibility and uniqueness . . . . . . . 48
1.8 Lemma: Detectability of the augmented system . . . . . . . 50
1.9 Corollary: Dimension of the disturbance . . . . . . . . . . . 50
1.10 Lemma: Offset-free control . . . . . . . . . . . . . . . . . . . . 52
1.11 Example: More measured outputs than inputs and zero
offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.12 Lemma: Hautus lemma for stabilizability . . . . . . . . . . . 68
1.13 Lemma: Hautus lemma for detectability . . . . . . . . . . . . 72
1.14 Lemma: Stabilizable systems and feasible targets . . . . . . 83

2.1 Proposition: Continuity of system solution . . . . . . . . . . 94


2.2 Assumption: Continuity of system and cost . . . . . . . . . 97
2.3 Assumption: Properties of constraint sets . . . . . . . . . . 98
2.4 Proposition: Existence of solution to optimal control problem 98
2.5 Example: Linear quadratic MPC . . . . . . . . . . . . . . . . . 99
2.6 Example: Closer inspection of linear quadratic MPC . . . . 101
2.7 Theorem: Continuity of value function and control law . . 104
2.8 Example: Discontinuous MPC control law . . . . . . . . . . . 105
2.9 Definition: Positive and control invariant sets . . . . . . . . 109
2.10 Proposition: Existence of solutions to DP recursion . . . . . 110
2.11 Definition: Asymptotically stable and GAS . . . . . . . . . . 112
2.12 Definition: Lyapunov function . . . . . . . . . . . . . . . . . . 113
2.13 Theorem: Lyapunov stability theorem . . . . . . . . . . . . . 113
2.14 Assumption: Basic stability assumption . . . . . . . . . . . . 114
2.15 Proposition: The value function VN^0 (·) is locally bounded . 115
2.16 Proposition: Extension of upper bound to XN . . . . . . . . 115
2.17 Assumption: Weak controllability . . . . . . . . . . . . . . . . 116
2.18 Proposition: Monotonicity of the value function . . . . . . . 118
2.19 Theorem: Asymptotic stability of the origin . . . . . . . . . 119


2.20 Definition: Exponential stability . . . . . . . . . . . . . . . . . 120


2.21 Theorem: Lyapunov function and exponential stability . . 120
2.22 Definition: Input/output-to-state stable (IOSS) . . . . . . . . 121
2.23 Assumption: Modified basic stability assumption . . . . . . 121
2.24 Theorem: Asymptotic stability with stage cost ℓ(y, u) . . . 122
2.25 Assumption: Continuity of system and cost; time-varying
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2.26 Assumption: Properties of constraint sets; time-varying case 124
2.27 Definition: Sequential positive invariance and sequential
control invariance . . . . . . . . . . . . . . . . . . . . . . . . . . 125
2.28 Proposition: Continuous system solution; time-varying case 125
2.29 Proposition: Existence of solution to optimal control prob-
lem; time-varying case . . . . . . . . . . . . . . . . . . . . . . . 125
2.30 Definition: Asymptotically stable and GAS for time-varying
systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
2.31 Definition: Lyapunov function: time-varying, constrained
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
2.32 Theorem: Lyapunov theorem for asymptotic stability (time-
varying, constrained) . . . . . . . . . . . . . . . . . . . . . . . . 126
2.33 Assumption: Basic stability assumption; time-varying case 127
2.34 Proposition: Optimal cost decrease; time-varying case . . . 127
2.35 Proposition: MPC cost is less than terminal cost . . . . . . . 127
2.36 Proposition: Optimal value function properties; time-varying
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.37 Assumption: Uniform weak controllability . . . . . . . . . . 128
2.38 Proposition: Conditions for uniform weak controllability . 128
2.39 Theorem: Asymptotic stability of the origin: time-varying
MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
2.40 Lemma: Entering the terminal region . . . . . . . . . . . . . . 145
2.41 Theorem: MPC stability; no terminal constraint . . . . . . . 146
2.42 Proposition: Admissible warm start in Xf . . . . . . . . . . . 149
2.43 Algorithm: Suboptimal MPC . . . . . . . . . . . . . . . . . . . 149
2.44 Proposition: Linking warm start and state . . . . . . . . . . . 150
2.45 Definition: Asymptotic stability (difference inclusion) . . . 150
2.46 Definition: Lyapunov function (difference inclusion) . . . . 151
2.47 Proposition: Asymptotic stability (difference inclusion) . . 151
2.48 Theorem: Asymptotic stability of suboptimal MPC . . . . . 151
2.49 Assumption: Continuity of system and cost . . . . . . . . . 154
2.50 Assumption: Properties of constraint sets . . . . . . . . . . 154
2.51 Assumption: Cost lower bound . . . . . . . . . . . . . . . . . 154

2.52 Proposition: Asymptotic average performance . . . . . . . . 155


2.53 Definition: Dissipativity . . . . . . . . . . . . . . . . . . . . . . 156
2.54 Assumption: Continuity at the steady state . . . . . . . . . . 157
2.55 Assumption: Strict dissipativity . . . . . . . . . . . . . . . . . 157
2.56 Theorem: Asymptotic stability of economic MPC . . . . . . 157
2.57 Example: Economic MPC versus tracking MPC . . . . . . . . 158
2.58 Example: MPC with mixed continuous/discrete actuators . 162
2.59 Theorem: Lyapunov theorem for asymptotic stability . . . 177
2.60 Proposition: Convergence of state under IOSS . . . . . . . . 178
2.61 Lemma: An equality for quadratic functions . . . . . . . . . 178
2.62 Lemma: Evolution in a compact set . . . . . . . . . . . . . . . 179

3.1 Definition: Robust global asymptotic stability . . . . . . . . 207


3.2 Theorem: Lyapunov function and RGAS . . . . . . . . . . . . 208
3.3 Theorem: Robust global asymptotic stability and regular-
ization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
3.4 Proposition: Bound for continuous functions . . . . . . . . . 211
3.5 Proposition: Robustness of nominal MPC . . . . . . . . . . . 214
3.6 Definition: Robust control invariance . . . . . . . . . . . . . 217
3.7 Definition: Robust positive invariance . . . . . . . . . . . . . 217
3.8 Assumption: Basic stability assumption; robust case . . . . 218
3.9 Theorem: Recursive feasibility of control policies . . . . . . 218
3.10 Definition: Set algebra and Hausdorff distance . . . . . . . . 224
3.11 Definition: Robust asymptotic stability of a set . . . . . . . 230
3.12 Proposition: Robust asymptotic stability of tube-based
MPC for linear systems . . . . . . . . . . . . . . . . . . . . . . 230
3.13 Example: Calculation of tightened constraints . . . . . . . . 231
3.14 Proposition: Recursive feasibility of tube-based MPC . . . . 235
3.15 Proposition: Robust exponential stability of improved tube-
based MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3.16 Proposition: Implicit satisfaction of terminal constraint . . 239
3.17 Proposition: Properties of the value function . . . . . . . . . 240
3.18 Proposition: Neighborhoods of the uncertain system . . . . 241
3.19 Proposition: Robust positive invariance of tube-based MPC
for nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . 241
3.20 Example: Robust control of an exothermic reaction . . . . . 243
3.21 Assumption: Feasibility of robust control . . . . . . . . . . . 251
3.22 Assumption: Robust terminal set condition . . . . . . . . . 252
3.23 Example: Constraint tightening via sampling . . . . . . . . . 254

4.1 Definition: i-IOSS . . . . . . . . . . . . . . . . . . . . . . . . . . 272


4.2 Proposition: Convergence of state under i-IOSS . . . . . . . 272
4.3 Definition: β-convergent sequence . . . . . . . . . . . . . . . 272
4.4 Proposition: Bounded, convergent sequences are β-convergent 272
4.5 Proposition: Bound for sum cost of convergent sequence. . 273
4.6 Assumption: β-convergent disturbances . . . . . . . . . . . . 273
4.7 Assumption: Positive definite stage cost . . . . . . . . . . . . 273
4.8 Definition: Robustly globally asymptotically stable esti-
mation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.9 Proposition: Boundedness and convergence of estimate error 274
4.10 Theorem: FIE with β-convergent disturbances is RGAS . . . 277
4.11 Lemma: Duality of controllability and observability . . . . . 282
4.12 Theorem: Riccati iteration and regulator stability . . . . . . 282
4.13 Definition: Observability . . . . . . . . . . . . . . . . . . . . . 284
4.14 Definition: Final-state observability . . . . . . . . . . . . . . . 285
4.15 Definition: Globally K-continuous . . . . . . . . . . . . . . . 285
4.16 Proposition: Observable and global K-continuous imply FSO 285
4.17 Definition: RGAS estimation (observable case) . . . . . . . . 285
4.18 Theorem: MHE is RGAS (observable case) with zero prior
weighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
4.19 Definition: Full information arrival cost . . . . . . . . . . . . 287
4.20 Lemma: MHE and FIE equivalence . . . . . . . . . . . . . . . . 287
4.21 Definition: MHE arrival cost . . . . . . . . . . . . . . . . . . . 288
4.22 Assumption: Prior weighting . . . . . . . . . . . . . . . . . . . 288
4.23 Proposition: Arrival cost of full information greater than
MHE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
4.24 Definition: MHE-detectable system . . . . . . . . . . . . . . . 290
4.25 Theorem: MHE is RGAS . . . . . . . . . . . . . . . . . . . . . . 290
4.26 Assumption: Estimator constraint sets . . . . . . . . . . . . 294
4.27 Theorem: Constrained full information is RGAS . . . . . . . 295
4.28 Theorem: Constrained MHE is RGAS . . . . . . . . . . . . . . 295
4.29 Assumption: Prior weighting for linear system . . . . . . . . 296
4.30 Assumption: Polyhedral constraint sets . . . . . . . . . . . . 296
4.31 Corollary: Constrained MHE is RGAS . . . . . . . . . . . . . . 296
4.32 Example: Filtering and smoothing updates . . . . . . . . . . 299
4.33 Definition: i-IOSS (max form) . . . . . . . . . . . . . . . . . . . 301
4.34 Assumption: Positive definite cost function . . . . . . . . . 302
4.35 Assumption: Lipschitz continuity of stage-cost bound com-
positions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
4.36 Assumption: Uniform i-IOSS contractivity . . . . . . . . . . . 302
4.37 Proposition: Locally Lipschitz upper bound . . . . . . . . . . 302
4.38 Proposition: The nameless lemma . . . . . . . . . . . . . . . 303
4.39 Theorem: MHE is RAS . . . . . . . . . . . . . . . . . . . . . . . 304
4.40 Example: MHE of linear time-invariant system . . . . . . . . 306
4.41 Example: EKF, UKF, and MHE performance comparison . . 313
4.42 Definition: i-UIOSS . . . . . . . . . . . . . . . . . . . . . . . . . 319
4.43 Assumption: Bounded estimate error . . . . . . . . . . . . . 319
4.44 Definition: Robust positive invariance . . . . . . . . . . . . . 320
4.45 Definition: Robust asymptotic stability . . . . . . . . . . . . 320
4.46 Definition: ISS Lyapunov function . . . . . . . . . . . . . . . . 320
4.47 Proposition: ISS Lyapunov stability theorem . . . . . . . . . 320
4.48 Theorem: Combined MHE/MPC is RAS . . . . . . . . . . . . . 322
4.49 Example: Combined MHE/MPC . . . . . . . . . . . . . . . . . . 323

5.1 Definition: Positive invariance; robust positive invariance . 345
5.2 Proposition: Proximity of state and state estimate . . . . . 345
5.3 Proposition: Proximity of state estimate and nominal state 347
5.4 Assumption: Constraint bounds . . . . . . . . . . . . . . . . . 348
5.5 Algorithm: Robust control algorithm (linear constrained
systems) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
5.6 Proposition: Exponential stability of output MPC . . . . . . 351
5.7 Algorithm: Robust control algorithm (offset-free MPC) . . . 359

6.1 Algorithm: Suboptimal MPC (simplified) . . . . . . . . . . . . 375
6.2 Definition: Lyapunov stability . . . . . . . . . . . . . . . . . . 376
6.3 Definition: Uniform Lyapunov stability . . . . . . . . . . . . 377
6.4 Definition: Exponential stability . . . . . . . . . . . . . . . . . 377
6.5 Lemma: Exponential stability of suboptimal MPC . . . . . . 378
6.6 Lemma: Global asymptotic stability and exponential con-
vergence with mixed powers of norm . . . . . . . . . . . . . 379
6.7 Lemma: Converse theorem for exponential stability . . . . 380
6.8 Assumption: Unconstrained two-player game . . . . . . . . 386
6.9 Example: Nash equilibrium is unstable . . . . . . . . . . . . . 389
6.10 Example: Nash equilibrium is stable but closed loop is
unstable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
6.11 Example: Nash equilibrium is stable and the closed loop
is stable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
6.12 Example: Stability and offset in the distributed target cal-
culation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
6.13 Assumption: Constrained two-player game . . . . . . . . . . 407
6.14 Lemma: Global asymptotic stability and exponential con-
vergence of perturbed system . . . . . . . . . . . . . . . . . . 414
6.15 Assumption: Disturbance models . . . . . . . . . . . . . . . . 415
6.16 Lemma: Detectability of distributed disturbance model . . 415
6.17 Assumption: Constrained M-player game . . . . . . . . . . . 420
6.18 Lemma: Distributed gradient algorithm properties . . . . . 424
6.19 Assumption: Basic stability assumption (distributed) . . . . 426
6.20 Proposition: Terminal constraint satisfaction . . . . . . . . 427
6.21 Theorem: Asymptotic stability . . . . . . . . . . . . . . . . . . 429
6.22 Example: Nonlinear distributed control . . . . . . . . . . . . 429
6.23 Lemma: Local detectability . . . . . . . . . . . . . . . . . . . . 443

7.1 Definition: Polytopic (polyhedral) partition . . . . . . . . . . 456
7.2 Definition: Piecewise affine function . . . . . . . . . . . . . . 456
7.3 Assumption: Strict convexity . . . . . . . . . . . . . . . . . . . 457
7.4 Definition: Polar cone . . . . . . . . . . . . . . . . . . . . . . . 459
7.5 Proposition: Farkas’s lemma . . . . . . . . . . . . . . . . . . . 459
7.6 Proposition: Optimality conditions for convex set . . . . . . 459
7.7 Proposition: Optimality conditions in terms of polar cone 461
7.8 Proposition: Optimality conditions for linear inequalities . 461
7.9 Proposition: Solution of P(w), w ∈ Rx0 . . . . . . . . . . . . 463
7.10 Proposition: Piecewise quadratic (affine) cost (solution) . . 464
7.11 Example: Parametric QP . . . . . . . . . . . . . . . . . . . . . . 464
7.12 Example: Explicit optimal control . . . . . . . . . . . . . . . . 465
7.13 Proposition: Continuity of cost and solution . . . . . . . . . 467
7.14 Assumption: Continuous, piecewise quadratic function . . 470
7.15 Definition: Active polytope (polyhedron) . . . . . . . . . . . 471
7.16 Proposition: Solving P using Pi . . . . . . . . . . . . . . . . . 471
7.17 Proposition: Optimality of u0x (w) in Rx . . . . . . . . . . . . 474
7.18 Proposition: Piecewise quadratic (affine) solution . . . . . . 474
7.19 Proposition: Optimality conditions for parametric LP . . . 478
7.20 Proposition: Solution of P . . . . . . . . . . . . . . . . . . . . . 481
7.21 Proposition: Piecewise affine cost and solution . . . . . . . 481

8.1 Example: Nonlinear MPC . . . . . . . . . . . . . . . . . . . . . 495
8.2 Example: Sequential approach . . . . . . . . . . . . . . . . . . 498
8.3 Example: Integration methods of different order . . . . . . 504
8.4 Example: Implicit integrators for a stiff ODE system . . . . 511
8.5 Example: Finding a fifth root with Newton-type iterations . 516
8.6 Example: Convergence rates . . . . . . . . . . . . . . . . . . . 517
8.7 Theorem: Local contraction for Newton-type methods . . . 518
8.8 Corollary: Convergence of exact Newton’s method . . . . . 519
8.9 Example: Function evaluation via elementary operations . 522
8.10 Example: Implicit function representation . . . . . . . . . . 524
8.11 Example: Forward algorithmic differentiation . . . . . . . . 526
8.12 Example: Algorithmic differentiation in reverse mode . . . 528
8.13 Example: Sequential optimal control using CasADi from
Octave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
8.14 Theorem: KKT conditions . . . . . . . . . . . . . . . . . . . . . 549
8.15 Theorem: Strong second-order sufficient conditions for
optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
8.16 Theorem: Tangential predictor by quadratic program . . . 551

A.1 Theorem: Schur decomposition . . . . . . . . . . . . . . . . . 629
A.2 Theorem: Real Schur decomposition . . . . . . . . . . . . . . 630
A.3 Theorem: Bolzano-Weierstrass . . . . . . . . . . . . . . . . . . 632
A.4 Proposition: Convergence of monotone sequences . . . . . 633
A.5 Proposition: Uniform continuity . . . . . . . . . . . . . . . . . 634
A.6 Proposition: Compactness of continuous functions of com-
pact sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
A.7 Proposition: Weierstrass . . . . . . . . . . . . . . . . . . . . . 636
A.8 Proposition: Derivative and partial derivative . . . . . . . . 637
A.9 Proposition: Continuous partial derivatives . . . . . . . . . . 638
A.10 Proposition: Chain rule . . . . . . . . . . . . . . . . . . . . . . 638
A.11 Proposition: Mean value theorem for vector functions . . . 638
A.12 Definition: Convex set . . . . . . . . . . . . . . . . . . . . . . . 641
A.13 Theorem: Caratheodory . . . . . . . . . . . . . . . . . . . . . . 641
A.14 Theorem: Separation of convex sets . . . . . . . . . . . . . . 642
A.15 Theorem: Separation of convex set from zero . . . . . . . . 643
A.16 Corollary: Existence of separating hyperplane . . . . . . . . 643
A.17 Definition: Support hyperplane . . . . . . . . . . . . . . . . . 644
A.18 Theorem: Convex set and halfspaces . . . . . . . . . . . . . . 644
A.19 Definition: Convex cone . . . . . . . . . . . . . . . . . . . . . . 644
A.20 Definition: Polar cone . . . . . . . . . . . . . . . . . . . . . . . 644
A.21 Definition: Cone generator . . . . . . . . . . . . . . . . . . . . 645
A.22 Proposition: Cone and polar cone generator . . . . . . . . . 645
A.23 Theorem: Convexity implies continuity . . . . . . . . . . . . 647
A.24 Theorem: Differentiability and convexity . . . . . . . . . . . 647
A.25 Theorem: Second derivative and convexity . . . . . . . . . . 647
A.26 Definition: Level set . . . . . . . . . . . . . . . . . . . . . . . . 648
A.27 Definition: Sublevel set . . . . . . . . . . . . . . . . . . . . . . 648
A.28 Definition: Support function . . . . . . . . . . . . . . . . . . . 648
A.29 Proposition: Set membership and support function . . . . . 648
A.30 Proposition: Lipschitz continuity of support function . . . 648
A.31 Theorem: Existence of solution to differential equations . 651
A.32 Theorem: Maximal interval of existence . . . . . . . . . . . . 651
A.33 Theorem: Continuity of solution to differential equation . 651
A.34 Theorem: Bellman-Gronwall . . . . . . . . . . . . . . . . . . . 651
A.35 Theorem: Existence of solutions to forced systems . . . . . 653
A.36 Example: Fourier transform of the normal density. . . . . . 659
A.37 Definition: Density of a singular normal . . . . . . . . . . . . 662
A.38 Example: Marginal normal density . . . . . . . . . . . . . . . 663
A.39 Example: Nonlinear transformation . . . . . . . . . . . . . . . 666
A.40 Example: Maximum of two random variables . . . . . . . . . 667
A.41 Example: Independent implies uncorrelated . . . . . . . . . 668
A.42 Example: Does uncorrelated imply independent? . . . . . . 669
A.43 Example: Independent and uncorrelated are equivalent
for normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
A.44 Example: Conditional normal density . . . . . . . . . . . . . 674
A.45 Example: More normal conditional densities . . . . . . . . . 675

B.1 Definition: Equilibrium point . . . . . . . . . . . . . . . . . . . 694
B.2 Definition: Positive invariant set . . . . . . . . . . . . . . . . . 694
B.3 Definition: K, K∞ , KL, and PD functions . . . . . . . . . . 695
B.4 Definition: Local stability . . . . . . . . . . . . . . . . . . . . . 696
B.5 Definition: Global attraction . . . . . . . . . . . . . . . . . . . 696
B.6 Definition: Global asymptotic stability . . . . . . . . . . . . . 696
B.7 Definition: Various forms of stability . . . . . . . . . . . . . . 697
B.8 Definition: Global asymptotic stability (KL version) . . . . . 698
B.9 Proposition: Connection of classical and KL global asymp-
totic stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
B.10 Definition: Various forms of stability (constrained) . . . . . 699
B.11 Definition: Asymptotic stability (constrained, KL version) . 700
B.12 Definition: Lyapunov function (unconstrained and con-
strained) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
B.13 Theorem: Lyapunov function and GAS (classical definition) 701
B.14 Lemma: From PD to K∞ function (Jiang and Wang (2002)) 702
B.15 Theorem: Lyapunov function and global asymptotic sta-
bility (KL definition) . . . . . . . . . . . . . . . . . . . . . . . . 703
B.16 Proposition: Improving convergence (Sontag (1998b)) . . . 705
B.17 Theorem: Converse theorem for global asymptotic stability 705
B.18 Theorem: Lyapunov function for asymptotic stability (con-
strained) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
B.19 Theorem: Lyapunov function for exponential stability . . . 706
B.20 Lemma: Lyapunov function for linear systems . . . . . . . . 706
B.21 Definition: Sequential positive invariance . . . . . . . . . . . 707
B.22 Definition: Asymptotic stability (time-varying, constrained) 707
B.23 Definition: Lyapunov function: time-varying, constrained
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
B.24 Theorem: Lyapunov theorem for asymptotic stability (time-
varying, constrained) . . . . . . . . . . . . . . . . . . . . . . . . 708
B.25 Proposition: Global K function overbound . . . . . . . . . . 709
B.26 Definition: Nominal robust global asymptotic stability . . . 710
B.27 Theorem: Nominal robust global asymptotic stability and
Lyapunov function . . . . . . . . . . . . . . . . . . . . . . . . . 710
B.28 Definition: Positive invariance with disturbances . . . . . . 711
B.29 Definition: Local stability (disturbances) . . . . . . . . . . . . 712
B.30 Definition: Global attraction (disturbances) . . . . . . . . . . 712
B.31 Definition: GAS (disturbances) . . . . . . . . . . . . . . . . . . 712
B.32 Definition: Lyapunov function (disturbances) . . . . . . . . . 712
B.33 Theorem: Lyapunov function for global asymptotic sta-
bility (disturbances) . . . . . . . . . . . . . . . . . . . . . . . . 713
B.34 Definition: Global control Lyapunov function (CLF) . . . . . 714
B.35 Definition: Global stabilizability . . . . . . . . . . . . . . . . . 714
B.36 Definition: Positive invariance (disturbance and control) . . 715
B.37 Definition: CLF (disturbance and control) . . . . . . . . . . . 715
B.38 Definition: Positive invariance (constrained) . . . . . . . . . 715
B.39 Definition: CLF (constrained) . . . . . . . . . . . . . . . . . . . 715
B.40 Definition: Control invariance (disturbances, constrained) 716
B.41 Definition: CLF (disturbances, constrained) . . . . . . . . . . 716
B.42 Definition: Input-to-state stable (ISS) . . . . . . . . . . . . . . 717
B.43 Definition: ISS-Lyapunov function . . . . . . . . . . . . . . . . 718
B.44 Lemma: ISS-Lyapunov function implies ISS . . . . . . . . . . 718
B.45 Definition: ISS (constrained) . . . . . . . . . . . . . . . . . . . 718
B.46 Definition: ISS-Lyapunov function (constrained) . . . . . . . 718
B.47 Lemma: ISS-Lyapunov function implies ISS (constrained) . 719
B.48 Definition: Output-to-state stable (OSS) . . . . . . . . . . . . 719
B.49 Definition: OSS-Lyapunov function . . . . . . . . . . . . . . . 719
B.50 Theorem: OSS and OSS-Lyapunov function . . . . . . . . . . 720
B.51 Definition: Input/output-to-state stable (IOSS) . . . . . . . . 720
B.52 Definition: IOSS-Lyapunov function . . . . . . . . . . . . . . . 720
B.53 Theorem: Modified IOSS-Lyapunov function . . . . . . . . . 721
B.54 Conjecture: IOSS and IOSS-Lyapunov function . . . . . . . . 721
B.55 Definition: Incremental input/output-to-state stable . . . . 722
B.56 Definition: Observability . . . . . . . . . . . . . . . . . . . . . 722
B.57 Assumption: Lipschitz continuity of model . . . . . . . . . . 723
B.58 Lemma: Lipschitz continuity and state difference bound . 723
B.59 Theorem: Observability and convergence of state . . . . . . 723

C.1 Lemma: Principle of optimality . . . . . . . . . . . . . . . . . 734
C.2 Theorem: Optimal value function and control law from DP 734
C.3 Example: DP applied to linear quadratic regulator . . . . . . 736
C.4 Definition: Tangent vector . . . . . . . . . . . . . . . . . . . . 739
C.5 Proposition: Tangent vectors are closed cone . . . . . . . . 739
C.6 Definition: Regular normal . . . . . . . . . . . . . . . . . . . . 739
C.7 Proposition: Relation of normal and tangent cones . . . . . 740
C.8 Proposition: Global optimality for convex problems . . . . 741
C.9 Proposition: Optimality conditions—normal cone . . . . . . 742
C.10 Proposition: Optimality conditions—tangent cone . . . . . 743
C.11 Proposition: Representation of tangent and normal cones . 743
C.12 Proposition: Optimality conditions—linear inequalities . . 744
C.13 Corollary: Optimality conditions—linear inequalities . . . . 744
C.14 Proposition: Necessary condition for nonconvex problem . 746
C.15 Definition: General normal . . . . . . . . . . . . . . . . . . . . 748
C.16 Definition: General tangent . . . . . . . . . . . . . . . . . . . . 748
C.17 Proposition: Set of regular tangents is closed convex cone 748
C.18 Definition: Regular set . . . . . . . . . . . . . . . . . . . . . . . 749
C.19 Proposition: Conditions for regular set . . . . . . . . . . . . 749
C.20 Proposition: Quasiregular set . . . . . . . . . . . . . . . . . . 751
C.21 Proposition: Optimality conditions nonconvex problem . . 752
C.22 Proposition: Fritz-John necessary conditions . . . . . . . . . 753
C.23 Definition: Outer semicontinuous function . . . . . . . . . . 757
C.24 Definition: Inner semicontinuous function . . . . . . . . . . 758
C.25 Definition: Continuous function . . . . . . . . . . . . . . . . . 758
C.26 Theorem: Equivalent conditions for outer and inner semi-
continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
C.27 Proposition: Outer semicontinuity and closed graph . . . . 759
C.28 Theorem: Minimum theorem . . . . . . . . . . . . . . . . . . . 760
C.29 Theorem: Lipschitz continuity of the value function, con-
stant U . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
C.30 Definition: Subgradient of convex function . . . . . . . . . . 762
C.31 Theorem: Clarke et al. (1998) . . . . . . . . . . . . . . . . . . 762
C.32 Corollary: A bound on d(u, U(x 0 )) for u ∈ U (x) . . . . . . 763
C.33 Theorem: Continuity of U(·) . . . . . . . . . . . . . . . . . . . 765
C.34 Theorem: Continuity of the value function . . . . . . . . . . 765
C.35 Theorem: Lipschitz continuity of the value function—U (x) 766
Notation

Mathematical notation
∃ there exists
∈ is an element of
∀ for all
=⇒ ⇐= implies; is implied by
⇏ ⇍ does not imply; is not implied by
a := b a is defined to be equal to b.
a =: b b is defined to be equal to a.
≈ approximately equal
V (·) function V
V :A→B V is a function mapping set A into set B
x , V (x) function V maps variable x to value V (x)
x+ value of x at next sample time (discrete time system)
ẋ time derivative of x (continuous time system)
fx partial derivative of f (x) with respect to x
∇ nabla or del operator
δ unit impulse or delta function
|x| absolute value of scalar; norm of vector (two-norm unless
stated otherwise); induced norm of matrix
x sequence of vector-valued variable x, (x(0), x(1), . . .)
‖x‖ sup norm over a sequence, supi≥0 |x(i)|
‖x‖a:b maxa≤i≤b |x(i)|
tr(A) trace of matrix A
det(A) determinant of matrix A
eig(A) set of eigenvalues of matrix A
ρ(A) spectral radius of matrix A, max i |λi | for λi ∈ eig(A)
A−1 inverse of matrix A
A† pseudo-inverse of matrix A
A′ transpose of matrix A
inf infimum or greatest lower bound
min minimum
sup supremum or least upper bound
max maximum
arg argument or solution of an optimization
s.t. subject to
I integers
I≥0 nonnegative integers
In:m integers in the interval [n, m]
R real numbers
R≥0 nonnegative real numbers
Rn real-valued n-vectors
Rm×n real-valued m × n matrices
C complex numbers
B ball in Rn of unit radius
x ∼ px random variable x has probability density px
E(x) expectation of random variable x
var(x) variance of random variable x
cov(x, y) covariance of random variables x and y
N(m, P ) normal distribution (mean m, covariance P ), x ∼ N(m, P )
n(x, m, P ) normal probability density, px (x) = n(x, m, P )
∅ the empty set
aff(A) affine hull of set A
int(A) interior of set A
co(A) convex hull of the set A
A closure of set A
epi(f ) epigraph of function f
leva V sublevel set of function V , {x | V (x) ≤ a}
f ◦g composition of functions f and g, f ◦ g (s) := f (g(s))
a⊕b maximum of scalars a and b, Chapter 4
A⊕B set addition of sets A and B, Chapters 3 and 5
A ⊖ B set subtraction of set B from set A
A\B elements of set A not in set B
A∪B union of sets A and B
A∩B intersection of sets A and B
A⊆B set A is a subset of set B
A⊇B set A is a superset of set B
A⊂B set A is a proper (or strict) subset of set B
A⊃B set A is a proper (or strict) superset of set B
d(a, B) Distance between element a and set B
dH (A, B) Hausdorff distance between sets A and B
x & y (x % y) x converges to y from above (below)
sat(x) saturation, sat(x) = x if |x| ≤ 1, −1 if x < −1, 1 if x > 1

Symbols
A, B, C system matrices, discrete time, x + = Ax + Bu, y = Cx
Ac , Bc system matrices, continuous time, ẋ = Ac x + Bc u
Aij state transition matrix for player i to player j
Ai state transition matrix for player i
ALi estimate error transition matrix Ai − Li Ci
Bd input disturbance matrix
Bij input matrix of player i for player j’s inputs
Bi input matrix of player i
Cij output matrix of player i for player j’s interaction states
Ci output matrix of player i
Cd output disturbance matrix
C controllability matrix
C∗ polar cone of cone C
d integrating disturbance
E, F constraint matrices, F x + Eu ≤ e
f,h system functions, discrete time, x + = f (x, u), y = h(x)
fc (x, u) system function, continuous time, ẋ = fc (x, u)
F (x, u) difference inclusion, x + ∈ F (x, u), F is set valued
G input noise-shaping matrix
Gij steady-state gain of player i to player j
H controlled variable matrix
I(x, u) index set of constraints active at (x, u)
I0 (x) index set of constraints active at (x, u0 (x))
k sample time
K optimal controller gain
ℓ(x, u) stage cost
ℓN (x, u) final stage cost
L optimal estimator gain
m input dimension
M cross-term penalty matrix x′Mu
M number of players, Chapter 6
M class of admissible input policies, µ ∈ M
n state dimension
N horizon length
O observability matrix, Chapters 1 and 4
O compact robust control invariant set containing the origin,
Chapter 3
p output dimension
p optimization iterate, Chapter 6
pξ probability density of random variable ξ
ps (x) sampled probability density, ps (x) = Σi wi δ(x − xi )
P covariance matrix in the estimator
Pf terminal penalty matrix
P polytopic partition, Chapter 3
P polytopic partition, Chapter 7
PN (x) MPC optimization problem; horizon N and initial state x
q importance function in importance sampling
Q state penalty matrix
r controlled variable, r = Hy
R input penalty matrix
s number of samples in a sampled probability density
S input rate of change penalty matrix
S(x, u) index set of active polytopes at (x, u)
S0 (x) index set of active polytopes at (x, u0 (x))
t time
T current time in estimation problem
u input (manipulated variable) vector
ũ+ warm start for input sequence
u+ improved input sequence
UN (x) control constraint set
U input constraint set
v output disturbance, Chapters 1 and 4
v nominal control input, Chapters 3 and 5
VN (x, u) MPC objective function
VN0 (x) MPC optimal value function
VT (χ, ω) Full information state estimation objective function at time T
with initial state χ and disturbance sequence ω
V̂T (χ, ω) MHE objective function at time T with initial state χ and distur-
bance sequence ω
Vf (x) terminal penalty
VN (z) nominal control input constraint set
V output disturbance constraint set
w disturbance to the state evolution
wi weights in a sampled probability density, Chapter 4
wi convex weight for player i, Chapter 6
wi normalized weights in a sampled probability density
W class of admissible disturbance sequences, w ∈ W
W state disturbance constraint set
x state vector
xi sample values in a sampled probability density
xij state interaction vector from player i to player j
x(0) mean of initial state density
X(k; x, µ) state tube at time k with initial state x and control policy µ
Xj set of feasible states for optimal control problem at stage j
X state constraint set
Xf terminal region
y output (measurement) vector
Y output constraint set
z nominal state, Chapters 3 and 5
ZT (x) full information arrival cost
ẐT (x) MHE arrival cost
Z̃T (x) MHE smoothing arrival cost
Z system constraint region, (x, u) ∈ Z
Zf terminal constraint region, (x, u) ∈ Zf
ZN (x, u) constraint set for state and input sequence

Greek letters

ΓT (χ) MHE prior weighting on state at time T
∆ sample time
κ control law
κj control law at stage j
κf control law applied in terminal region Xf
µi (x) control law at stage i
µ(x) control policy or sequence of control laws
ν output disturbance decision variable in estimation problem
Π cost-to-go matrix in regulator, Chapter 1
Π covariance matrix in the estimator, Chapter 5
ρi objective function weight for player i
Σi Solution to Lyapunov equation for player i
φ(k; x, u) state at time k given initial state x and input sequence u
φ(k; x, i, u) state at time k given state at time i is x and input sequence u
φ(k; x, u, w) state at time k given initial state is x, input sequence is u, and
disturbance sequence is w
χ state decision variable in estimation problem
ω state disturbance decision variable in estimation problem

Subscripts, superscripts, and accents
x̂ estimate
x̂− estimate before measurement
x̃ estimate error
xs steady state
xi subsystem i in a decomposed large-scale system
xsp setpoint
V0 optimal
V uc unconstrained
V sp unreachable setpoint

Acronyms

AD algorithmic (or automatic) differentiation
AS asymptotically stable
BFGS Broyden-Fletcher-Goldfarb-Shanno
CLF control Lyapunov function
DAE differential algebraic equation
DARE discrete algebraic Riccati equation
DDP differential dynamic programming
DP dynamic programming
END external numerical differentiation
FIE full information estimation
FLOP floating point operation
FSO final-state observable
GAS globally asymptotically stable
GES globally exponentially stable
GL Gauss-Legendre
GPC generalized predictive control
EKF extended Kalman filter
i-IOSS incrementally input/output-to-state stable
IND internal numerical differentiation
i-OSS incrementally output-to-state stable
IOSS input/output-to-state stable
IP interior point
ISS input-to-state stable
i-UIOSS incrementally uniformly input/output-to-state stable
KF Kalman filter
KKT Karush-Kuhn-Tucker
LAR linear absolute regulator
LICQ linear independence constraint qualification
LP linear program
LQ linear quadratic
LQG linear quadratic Gaussian
LQP linear quadratic problem
LQR linear quadratic regulator
MHE moving horizon estimation

* Agrostemma githago.—The
common corn cockle (cockle;
mullein pink) is a weed common to
both the United States and Europe.
Poultry and household animals are
occasionally poisoned by eating the
seeds or the bread made from wheat
contaminated with the seeds.

MAGNOLIACEÆ (MAGNOLIA FAMILY).

Illicium floridanum.—The
leaves of this species of anisetree are
supposed to be poisonous to stock.

RANUNCULACEÆ (CROWFOOT FAMILY).

Fig. 77.—Slender nettle (Urtica gracilis).
* Aconitum napellus.—Aconite (monkshood; wolfsbane) is very commonly cultivated in gardens, and is therefore capable of doing great damage to stock. Horses and cattle have frequently been poisoned by eating the leaves and flowering tops.
* Aconitum columbianum.—The Western aconite, or
monkshood, is native in the north-western portion of America, where
it sometimes poisons sheep.
Anemone quinquefolia.—The common wind flower, which
grows throughout most of the United States, is extremely acrid and
poisonous. Cattle seldom touch it. The plant loses most of its poison
in drying.
Fig. 78.—Pokeweed (Phytolacca decandra), one-half natural size.

* Delphinium tricorne.—The dwarf larkspur, or stagger weed,
of the north-eastern quarter of the United States has been especially
reported from Ohio as fatal to cattle in April, when the fresh leaves
appear.
* Delphinium consolida.—The seeds of the commonly
introduced field larkspur are well known to be poisonous; the leaves
are known in Europe to be fatal to cattle.
* Delphinium menziesii.—The purple larkspur of the north-
western quarter of the United States is very common throughout
Montana. In one case of poisoning reported by Dr. E. V. Wilcox, of
the Montana Experiment Station, over 600 sheep were affected, 250
of which were claimed to have been killed by the weed. An
experiment made by Dr. S. B. Nelson, Professor of Veterinary
Sciences in the Washington State Agricultural College, shows that it
is possible to feed as much as 24¾ lbs. of the fresh leaves to a sheep
within a period of five days without any apparent ill effect taking
place. An experiment made by Dr. Wilcox shows that the extract
from less than an ounce of the dried leaves killed a yearling lamb in
two hours, the dose having been given by way of the mouth.

LARKSPUR POISONING IN SHEEP.[3]

3. The following account is summarised from a bulletin of the Montana Experiment Station by Dr. Wilcox.

Fig. 79.—Corn cockle (Agrostemma githago). a, Sprays showing flowers and seed capsule, one-third natural size; b, seed, natural size; b′, seed, four times natural size.
Fig. 80.—Aconite (Aconitum columbianum). a, Flowering plant; b, seed capsule—both one-third natural size.

Severe losses have from time to time been recorded, especially in America, from larkspur poisoning, the number of animals lost amounting to thousands. The first signs of poisoning are slight
general stiffness and straddling gait, especially of the hind legs. The
stiffness becomes more and more pronounced, until walking is
difficult and evidently painful. Soon there are manifested various
involuntary twitchings of the muscles of the legs and sides of the
body, and loss of control or co-ordination of the muscles. Ordinarily
there is no increase in the quantity of the saliva, no dribbling of
saliva from the mouth, no champing of the jaws or attempts at
swallowing. The sheep manifest none of the mental disturbances
frequently seen in cases of poisoning from other sources, as for
example loco weed and lupine. There is no impairment of the special
senses. The sheep seem to hear and see as well and as correctly as
under normal conditions of health.
No indications of any disturbances of the digestive functions are to
be seen. The appetite remains good, and the sheep eat up to the very
last. They were observed eating industriously during the intervals
between the attacks of spasms which they have during the last stages.
At first the frequency of the pulse and of the respiratory
movements is lessened and the temperature is lowered. The pulse
remains very weak, but in the later stages becomes very rapid, in
some cases 130 per minute. Toward the last also the respiration is
very shallow and rapid. During the final convulsions the respiration
is sometimes 120 per minute, but so shallow that the air is simply
pumped up and down the windpipe. The air in the lungs is therefore
not renewed, and the animal dies by asphyxia or suffocation.
So long as the sheep can stand on its feet, or walk, it keeps up with
the flock as nearly as possible. The exercise, however, excites it,
makes its respiration more rapid, and it has frequently to lie down
for a moment and then get up and hobble along after the flock. The
worst cases can thus easily be detected, since they straggle behind
the rest of the flock.
The later stages follow rather rapidly. The involuntary movements
become more frequent and more severe. All four legs tremble and
shake violently. In fact, all the muscles of the body contract
spasmodically until the animal totters over on its side and dies in the
most violent spasms.
Larkspur has the effect of arresting the heart’s action and
respiration and of paralysing the spinal cord.
Treatment. Place the animal by itself in a cool, quiet, shaded
place and avoid all excitement. Of the drugs tested, atropine sulphate
dissolved in camphor water has given the best results. Wilcox (Bull.
15, Montana Ex. Station) recommends for sheep from ¹⁄₂₀ to ¹⁄₁₅
grain in the earlier, and ⅙ to ¼ grain in the later convulsive stages.
Cattle require from four to five times these doses. Inhalations of
ammonia vapour, and small doses of alcohol and ether, are also
useful.

Fig. 81.—Delphinium menziesii. (To illustrate “Larkspur Poisoning.” From the Annual Report, U.S.A. Department of Agriculture, 1898.)
Fig. 82.—Delphinium menziesii. (To illustrate “Larkspur Poisoning.” From the Annual Report, U.S.A. Department of Agriculture, 1898.)
Fig. 83.—Delphinium scopulorum. (To illustrate “Larkspur Poisoning.” From the Annual Report, U.S.A. Department of Agriculture, 1898.)

In other cases very good results have been obtained from giving
an ox, dissolved in a pint or two pints of water.

* Delphinium geyeri.—The Wyoming larkspur is well known


throughout Wyoming, Colorado, and Nebraska under the name of
poison weed. It is reported to be the most troublesome plant to stock
in Wyoming, the dark-green tufts of foliage being especially tempting
in spring when the prairies are otherwise dry and barren.
Del
phini
um
recur
vatu
m.—
This
species
of
larksp
ur
grows
in wet
subsali
ne soil
in the
southe
rn half
of
Califor
nia. It
has
Fig. 84.—Dwarf larkspur been Fig. 85.—Cursed crowfoot
(Delphinium tricorne), one- report (Ranunculus sceleratus.)
third natural size. ed
from
San Luis Obispo county as fatal to
animals.
Delphinium scopulorum.—The tall mountain larkspur of the
Rocky Mountains has been reported to the Canadian Department of
Agriculture as poisonous to cattle in the high western prairies of
Canada.
Delphinium trolliifolium.—This plant is common throughout
the coast region of northern California, Oregon, and Washington. In
Humboldt County, Cal., it is known as cow poison, on account of its
fatal effect on cattle. Its toxic character has been questioned. Perhaps
it is not equally poisonous throughout all stages of its growth.
* Helleborus viridis.—The green hellebore is a European plant,
sometimes self-sown from gardens. All parts of the plant are
poisonous. Cattle have been killed by eating the leaves.

POISONING BY HELLEBORE.

This form of poisoning is of slow


progress, the plant producing
irritation of the digestive mucous
membrane. The symptoms
consist in loss of appetite, blackish,
glairy diarrhœa, and intermittence
of the pulse.

* Ranunculus sceleratus.—
The cursed crowfoot, or celery-
leafed crowfoot, is found
throughout the eastern half of the
United States and also in Europe.
Cattle generally avoid all of the
buttercups, but fatal cases of
poisoning from this plant are
recorded in European literature.
When dried in hay, the plant
appears to be non-poisonous. The
bulbous crowfoot (R. bulbosus) and
the tall crowfoot (R. acris) are well-
Fig. 86.—Mandrake known to be very acrid in taste, and
(Podophyllum peltatum). it is probable that all of the species
which grow in water or in very
marshy land are poisonous.

POISONING BY RANUNCULACEÆ.

Poisoning only occurs when the green plants are eaten. Drying
causes certain essences contained in them to disappear, and thus
destroys their toxicity.
This form of poisoning is indicated by yawning, colic, blackish,
fœtid diarrhœa, and rapid loss of strength.
The animals suffer from stertorous breathing, weakness of the
pulse, and aberration of vision. They die in convulsions.

BERBERIDACEÆ (BARBERRY FAMILY).

Podophyllum peltatum.—The leaves of the common mandrake,


or May apple, of the eastern half of the United States, are sparingly
eaten by some cattle. Cases of poisoning are very rare, but the
experience of one correspondent shows that the milk from a cow that
had been feeding on the plant off and on for about three weeks was
so extremely laxative as to be positively poisonous. The accident
occurred to a baby, fed exclusively on cow’s milk. The physiological
effect of the milk was precisely like that of mandrake. It was shown
that the cow ate the plant, which was abundant in one pasture, and
when the animal was removed to a pasture free from the plant the
child’s illness stopped at once.

BUTNERIACEÆ (STRAWBERRY-SHRUB FAMILY).

Butneria fertilis.—The large oily seeds of the calycanthus, or


sweet-scented shrub, contain a poisonous alkaloid, and are strongly
reputed to be poisonous to cattle in Tennessee.

PAPAVERACEÆ (poppy family).

Argemone mexicana.—The Mexican poppy is reputed to be


poisonous to stock both in the United States and in New South
Wales. The seeds are narcotic, like opium.
* Chelidonium majus.—The yellow milky sap of the celandine,
an introduced weed common in the eastern United States, contains
both an acrid and a narcotic poison. Both are powerfully active, but
cases of poisoning are rare, as stock refuse to touch the plant. Reeks,
of Spalding, however, describes (J. Comp. Path. and Therap., Dec.
1903, p. 367) an outbreak of poisoning by common celandine in
which twenty-one valuable cows were affected and three died. The
symptoms comprised excessive salivation and thirst, convulsions,
unconsciousness and epileptiform movements.
* Papaver somniferum, opium poppy, or garden poppy: P.
rhœas, field poppy, red poppy, or corn poppy.—These plants are
sometimes self-sown from gardens. Both contain acrid and narcotic
poisons, and European literature records the death of various
animals from eating their leaves and seed pods.

POISONING BY POPPIES.

The consumption of poppies causes arrest of peristalsis, secretion


of foamy saliva, colic, depression, coma, and in severe cases death by
stoppage of respiration.

PRUNACEÆ (PLUM FAMILY).

* Prunus caroliniana.—The laurel cherry, or mock orange, is


native in the south-eastern quarter of the United States, and is there
often cultivated for hedges. The half-withered leaves and the seeds
yield prussic acid, and are poisonous when eaten by animals.
* Prunus serotina.—The wild black cherry is a valuable forest
tree which ranges throughout the eastern half of the United States.
Cattle are killed by eating the partially withered leaves from branches
thrown carelessly within their reach or ignorantly offered as food.
The leaves of various other wild and cultivated cherries are probably
poisonous to cattle in the same way.

VICIACEÆ (PEA FAMILY).

Aragallus lambertii.—The Lambert, or stemless loco weed, is,


next to the following species, the best known representative of a large
group of closely related plants which are native to the western half of
the United States, and are known as
loco weeds on account of the
peculiar excited condition which
they induce in animals that eat of
their leaves. Horses and cattle are
both affected, but the chief damage
is done to horses. After being
permitted to graze on any of these
plants the animal acquires an
unnatural appetite for them, and
soon refuses all other kinds of food.
It rapidly becomes unmanageable,
shows brain symptoms, and finally
dies from lack of proper
nourishment.
Astragalus mollissimus.—This,
the woolly loco weed, is perhaps the
best known of all the loco weeds. It is
the species most abundant in
Colorado, where from 1881 to 1885
nearly $200,000 was paid out in
bounties in an attempt to
exterminate it. The plant is still Fig. 87.—Black cherry
abundant in that State, and reports (Prunus serotina), one-third
of the damage done by it continue natural size.
frequent. Specimens of the three
following species of Astragalus have been forwarded to the Division
of Botany with the information that they were causing great financial
loss in the districts noted. It is quite probable that other species are
dangerous also.
Fig. 88.—White loco weed (Aragallus
spicatus) in flower.

(From the Annual Report, U.S.A.


Department of Agriculture, 1900.)
Fig. 89.—White loco weed (Aragallus
spicatus), showing seed pods.

(From the Annual Report, U.S.A. Department


of Agriculture, 1900.)
Fig. 90.—Loco weed (Astragalus
splendens).

(From the Annual Report, U.S.A.


Department of Agriculture, 1900.)

POISONING BY WHITE LOCO WEED (ARAGALLUS SPICATUS).

This is an erect tufted perennial, 4 to 18 inches high, with


pinnately divided leaves and spikes of white or cream-coloured
flowers, shaped like those of the pea. The pod is one-celled, and
when shaken produces a rattling sound, which gives the plant the
name of “rattle weed” in some localities. The white loco weed is
exceedingly common throughout Montana. It occurs most
abundantly on the northern slopes of foothills up to an altitude of
about 8,000 feet. Its preferred habitat is for the most part in rather
dry situations. The habit of the plant varies in different parts of
Montana. In
some localities
the flowers are
pure white, while
in others they are
decidedly yellow.
In Colorado
the plant which
is most
ordinarily known
as loco weed is
Astragalus
mollissimus,
while in Montana
the species
already named is
perhaps most
important; but
there are others
which have a
rather wide
Fig. 91.— distribution and
Stemless loco are known to Fig. 92.—Woolly loco weed
weed (Aragallus produce the (Astragalus mollissimus). a,
same effects. Whole plant; b, section of
lambertii). a,
Flowering plant; Among these pod—both one-third natural
may be size.
b, seed pods; c,
cross-section of mentioned A.
seed pod—all splendens, A.
one-third natural lagopus, and A. besseyi.
size. The losses caused from the loco disease are
very heavy in nearly all the Rocky Mountain
States. The locoed condition is so commonly
observed among sheep and horses that cases are not reported, and it
is practically impossible to learn the exact extent of the disease. In
the Judith Basin one prominent stockman was nearly ruined
financially by the prevalence for a number of years of the loco habit
among his sheep. In another instance the raising of horses was
abandoned over a large tract of country on account of the loco weeds.
The loco disease occurs under two forms—an acute and a chronic.
An acute case of loco disease was observed by Dr. Wilcox in a two-
year-old ewe with a lamb at its side. The ewe was observed eating
large quantities of white loco weed on May 22nd, 1900. During the
afternoon of the same day it became unmanageable, and the lamb
was badly affected. An examination of the ewe at this time showed
that it was completely blind and was affected with dizziness. It
walked around in long circles to the right, and after a short period
remained standing for a few moments in a sort of stupor. At the
beginning of each attack the head was elevated and drawn to the
right; eyelids, lips, and jaws were moved rapidly. Each attack lasted
from one to two minutes and the intervals between the attacks lasted
about five minutes. The second day the attacks became more severe
and of longer duration, the head being turned more decidedly to the
right and the animal sometimes falling upon the ground. Similar
symptoms, accompanied by digestive disturbances, were manifested
by the lamb during the second day, and it died during the afternoon.
On the morning of the third day it was found that the ewe was
pushing against the fold, and had apparently been in that position
during the greater portion of the night. The animal then began to
whirl round to the right. Later it became unable to stand, and the
spasmodic movements were largely confined to the legs. On the
morning of the fourth day it died. The pupil of the eye was at no time
dilated, and the expression was nearly normal. The pulse was at first
very irregular, but on the second day became again regular and of
normal frequency. The only remedy which was tried was frequent
injections of one-quarter grain doses of morphine, but this was
without effect. Two other ewes ate smaller quantities of loco weed at
the same time and were similarly affected, but less severely. In these
cases morphine was tried with better success. The lambs, however,
died from the poisonous properties contained in the milk of the
mother.
The general symptoms of loco disease are quite familiar to all
stock raisers. Perhaps the most characteristic are those of cerebral
origin, and are shown in peculiarities of gait and action, which may
be compared to a drunken condition. The brain disturbances may
consist in impairment of the special senses or in irregular motor
impulses, which produce incoherent muscular action. In some cases
the animal becomes blind. More frequently the animal makes errors
in judgment of the size and distance of objects. These visual
disturbances are often quite ludicrous. The animal often takes fright,
apparently at imaginary objects, or at objects which under ordinary
circumstances would cause no alarm. Locoed horses are somewhat
dangerous for driving purposes on account of their tendency to run
away. Such horses are frequently attacked with kicking fits without
any apparent cause. The sense of hearing is often affected, and the
response to sounds is irregular and out of proportion to the volume
and character of the sound. Irregularities in muscular movements of
sheep may assume a variety of forms. The animal may simply carry
its head in an extended or otherwise unnatural condition. In some
cases the back is arched. Trembling is a characteristic symptom. In
locoed horses a great difficulty is sometimes experienced in
persuading them to go backward. Locoed sheep are exceedingly
difficult to manage. The different members of the flock may suddenly
take a notion to run away in different directions, with the result that
it is almost impossible for the shepherd to prevent their becoming
separated. In cattle the disease appears to be rare, although
symptoms, so far as observed, are essentially the same as those in
sheep and horses. Occasionally locoed cattle manifest dangerous
symptoms, and attack men and other animals.
In chronic cases of loco the animal gradually becomes more
emaciated and crazy. In sheep the fleece may be shed in patches or as
a whole. The animal becomes unable to care for itself, and is apt to
fall into the water while attempting to drink. Fits of trembling are of
frequent occurrence, and the animal finally dies of inadequate
nutrition and total exhaustion. In chronic cases of loco disease in
horses the animal is usually left to its own resources on the range.
During the later stages it may remain for weeks at a time upon a
small area of ground without taking water. Dr. Wilcox saw a number
of such cases in horses that were almost unable to walk. Under such
circumstances the animals seldom or never lie down. One horse
which was seen remained for a period of two weeks, in 1897, upon a
piece of ground about 150 feet square. During this time the horse had
no water.
Numerous autopsies on locoed sheep and horses revealed slight
congestion of the brain membranes in all cases. The lungs and heart
were in normal condition. Fatty tissue was considerably reduced in
quantity, and the muscles were paler in colour than under normal
conditions.
The most serious mistake in connection with loco disease is made
in allowing locoed sheep to remain with the rest of the flock. The loco
habit is apparently learned by imitation of locoed animals, and so
long as locoed sheep are allowed to remain with other sheep the loco
habit rapidly spreads. An experienced sheep raiser, after being nearly
ruined financially through the loco disease, adopted the method of
immediate isolation and the feeding of locoed sheep for mutton. His
stock was replaced with sheep that were free from the loco habit, and
the trouble has been entirely eradicated from his range.
No specific remedy for the loco disease has been discovered, and in
the nature of the case no such remedy is likely to be found. In the
present state of knowledge concerning the subject the only rational
treatment to be recommended is that of confinement and feeding
with a nutritious diet. By separating the locoed sheep at once from
other sheep the spreading of the habit will be prevented, and the
locoed animals may be fattened and thus prevented from becoming a
total loss. Although locoed animals may readily be fattened and sold
for mutton, their recovery from the loco habit is apparent only, and
is due to their inability to obtain the loco weed. Such animals when
allowed to run upon the range again almost invariably return to their
old habit of eating loco weed. Animals which have once been locoed
are, therefore, unsuitable for stocking the range.
In combatting the loco disease the most rational methods include
providing salt for the sheep, the immediate removal of locoed sheep
from the band, confining them in a fold, and feeding them upon a
nutritious diet. They may thus be fed for market, and their
pernicious habit will not spread to other sheep. In the case of locoed
horses, an apparent recovery takes place if they are confined in a
stable and fed on ordinary cultivated forage or allowed to run in
pastures where no loco weeds are found. Such horses are always
somewhat dangerous, and more apt to run away or become
unmanageable than horses which have not become affected with this
disease.
* Crotalaria sagittalis.—The rattlebox (rattle weed; wild pea) is
an annual weed which grows on sandy soil throughout most of the
eastern half of the United States. In some years it is especially
abundant in the bottom lands of the Missouri Valley. Horses and
sometimes cattle are killed in this region by eating grass or meadow
hay which is contaminated with the plant.
Lupinus leucophyllus.—This
herbaceous shrub is a representative
of a very large genus of plants, many
of which are widely and abundantly
distributed throughout the western
United States, and are generally
known as lupines. The above species
is very abundant in Montana, where
it is said to have caused the death of a
very large number of sheep. There is
some question whether the animals
are killed by a poisonous constituent
of the plant or merely by tympanites.
The seeds of all the lupines are
probably deleterious in the raw state.
In Europe, however, the seeds of
Lupinus albus, after the bitter taste
has been removed by steeping and
boiling, are eaten by human beings as
well as by cattle.

Fig. 93.—Rattle box (Crotalaria sagittalis). a, Whole plant; b, cross-section of seed pod—both one-third natural size.