Download as pdf or txt
Download as pdf or txt
You are on page 1of 204

Books must follow sciences, and not sciences books.

FRANCIS BACON

,

Flyball Governor. one of the earliest automatic control devices. was invented by James Wa tt to govern the speed of the steam engine (1788). Reprinted by permission of Scientific American. Diagram courtesy of Roy De Carava.

Computer Control System is utilized in the closed-loop control of a 650 million-watt power plant at the Paradise Station of the Tennessee Valley Authority (1964). Reprinted by permission of Scientific American. Photograph courtesy of Paul 'Yeller.

I

- ,

Time-Domain Analysis

and Design oj' Control Systems

...

Time .. Domain Analysis

and Design of Control Systems

/

RICHARD C. DORF, Unioersity of Santa Clara :=::

ADD1SON-IVESLEY PUBLISHIKG COl\IPA::\Y, 1::\C.

READING, MASSACHUSETTS . PALO ALTO • LO?\TDOX NEW YORK . DALLAS . Al'LANTA . BARRINGTON, ILLINOIS

VNIVEIlSIO.·\D CPiTR1\L OE VENEZUELA Facu lt .. d de Ingenieri a

BIBLIOTECA.

Copyright © 1965 Philippines Copyright 1965

ADDISO"i/~WESLEY PUBLISHING CO]HPAl"\Y, INC.

Printed in the United States of America

ALL RIGHTS RESERVED.

THIS BOOK, OR P.'\.RTS THEREOF,

MAY NOT BE REPRODUCED 1::<1 ANY FORM

WITHOUT WEI"fTEr, PERMISSION OF THE PUBLISHER.

Library of Congress Catalog Card No. 65-19231

-,

i(

Preface

Education is the language of content and the language of relationship. A book, as a prime means to education, should provide the reader with both content and a relationship. That is, a good text book provides information and also the conceptual philosophy underlying the information. These dual purposes can often be best served by engaging the reader in a dialogue with the author. Since the author attempts to provide the thoughts and concepts advanced by many, he introduces the reader to a dialogue or conversation with the contributors to the field of interest. The book that succeeds in this purpose is a valuable aid to any educated man.

This book is designed to introduce the student of control theory to analysis and design in the time domain. In the past, the analysis and design of control systems has been accomplished utilizing a transformation of the system equations and specifications to a frequency domain. These methods involved the use of the Laplace or Fourier transform and usually assumed the existence of a linear, time-invariant system. However, as the digital computer became a readily available computational aid, it became possible, if not. mandatory, that the control designer reconsider the time-domain methods,

The purpose of this book is to provide the student and the practicing engineer with an elementary exposition of time-domain methods. The book is intended to be tutorial, that is, to provide a learning experience. Therefore the book can be used as an introductory text for a semester course at the senior or first-year graduate level. The book may also be used as a supplementary text book for graduate courses in control systems, sampled-data systems, and computer systems.

The author has attempted to avoid the formalism and tediousness which is often found in the modern control theory literature, while maintaining the necessary correctness of statement. The approach is based on the belief that once the basic concepts of the theory are clearly understood, the reader can then appreciate the rigorousness and generality of the modern theory on which so much

\"ii

VHi PREFACE

emphasis has been placed in some recent publications. The presentation requires a basic understanding of differential equations, systems analysis, and the ability to perform basic matrix operations.

This book is, then, not for experts. It consists of a simple yet hopefully correct presentation: but if it. provides some understanding of the value of timedomain analysis and design, together with some idea of the unification which exists in this diverse and exciting field, then it will have achieved its object.

The thesis which is the foundation of the author's intent is that the ease of automatic digital computation allows the analyst and designer to readily use the time-domain methods. It appears wise to devise new methods of approach when one is provided with new tools. These methods allow the investigation of linear and nonlinear systems, time-invariant and time-varying, multi variable and optimum systems.

The material of this book has been developed by utilizing it as notes in several courses in control systems and as a separate course for first-year graduate students. I wish to acknowledge the invaluable aid and suggestions that my students have provided.

Finally, I wish to acknowledge the fortunate circumstances which have provided me with the inspiration of my colleagues Professors George J. Thaler, Harold Titus, Gene 'Franklin, and Stanley Schmidt: the assistance of my secretary, 1\Jr8. :"IcKenna; and the cont.inual encouragement of my wife Joy to whom I wish to dedicate this book.

Santa Clara, California A. ugW'lt 1964

KC.D.

..

Contents

1. Introduction

2. Introduction to the State Space Technique

2-1 Introduction

2-2 The Phase Plane and the Phase Space 2-3 The State Space

2-4 The State Vector Differential Equation.

2-5 The Solution of the Linear Vector Differential Equation 2-6 Properties of the Fundamental Matrix .

2-7 The Fundamental State Transition Equation 2-8 Examples

2 3 4 5 5

10 12 13

3. Useful State Space :\lethods

3-1 State Transition Flow Graphs

3-2 Linearization of Nonlinear Systems

3-3 Linear Transformation of the State Vector

24

33

4. System Stahility and LiapUllOY'S ::Hethod

4-1 Introduction 43

4-2 The Concept of Stability 43

4-3 The Direct Method of Liapunov . 44

4-4 The Liapunov Function in Terms of Transformed State Variables 50

4-5 Krasoviskii's Theorem Oil Asymptotic Stability of Nonlinear Systems 53

4-6 The Variable-Gradient. :Hethod for Generating Liapunov Functions 56

4-7 Stability Regions for Yonlinear Systems 59

4-8 Numerical Applications of Liapunov's Direct :\lethod 62

ix

x CONTENTS

o, Controllability and Observability of Linear Systems

5-1 Introduction

5-2 Controllability and Observability of Linear Systems 5-3 Observing the State of a Linear System

66 67 75

6. The Forrmalatfon of the State Variable Equations for Discrete-Time Systems

6-1 Formulation of the Discrete-Time State Equations from the Con-

tinuous- Time State Equations 84

6-2 Formulation of the Discrete State Equations from the System Flow

Graph 88

6-3 Formulation of the Discrete State Equations from Difference

Equations 90

6-4 Formulation of the State Equations Using Discrete Signal Flow

Graph Techniques 93

6-5 Formulation of the State Equations Using Hybrid Flow Graph

Techniques 96

6-6 Time Response of Sampled Systems Between the Sampling Instants 99

6-7 Stability of Discrete-Time Systema Using Liapunov's Method 100

7. Analysis of Discrete-Trme Systems by :\Oleans of Tfme-Dornairr l\'latrices

7-1 The Time-Domain Matrix 105

7-2 Evaluat.ion of the System Transfer Matrix of a Transfer Function 107

7-3 Response Between the Sampling Instants 109

7-4 Formulation of the Matrix Equation for Closed-Loop Systems. 110

7-5 A Method of Evaluating the Response of a Closed-Loop System

wi thou t Inversion of Matrices l11

7-6 Digital Computer Calculation of the Response of Closed-Loop

Systems 114

8. Analysis of Nonlinear Systellls

8-1 Introduction 119

8-2 Analysis of Nonlinear Systems Using State Variable Methods 120

8-3 Analysis of Nonlinear Systems Utilizing the Time-Domain Matrix 126

9. Design and Synthesis of Control Systems

9-1 Introduction

9-2 Design of Control Systems Using State Space Techniques

134 135

CONTENTS xi

9-3 The Design of Control Systems Using Time-Domain Matrix Tech-

niques 145

9-4 Synthesis at the Sampling Instants . 149

9-5 Synthesis of the Discrete Compensation Taking into Account the

Intersample Response . 153

10. Opt.irnurn Control Systems

10-1 Introduction 158

10-2 The Formulation of the Optimal Control Problem 159

10-3 Pontryagin's Maximum Principle 163

10--4 Dynamic Programming 171

10-5 Optimization of Linear Systems wi th Quadratic Performance Indices 177 10-6 Adaptive and Learning Control Systems 179

Appendix A

A-I Use of the Impulse Response to Evaluate the System Transfer

Matrix 187

A-2 Evaluation of the System Transfer Matrix for Intersample Response

Using the Impulse Response 188

Appendix B

B-1 Digital Computer Program for the System

GhG(s) = (1 - E-·T)/s2(s -+- 1) 190

B-2 Response of the System of Section B-1 for the First 17 Sampling

Instants . 191

Index

192

CHAPTER ONE

Introduction

The analysis and design of automatic control systems is of vital interest to engineers. The mathematical and experimental methods available enable engineers to design and construct automatic control equipment for systems that range from the Toronto Traffic Control System to the Apollo spacecraft. The control of large complex systems is imperative for a complex industrial economy as exists in the United States today.

The methods most engineers use today are based on the Laplace transformation and the real frequency variable. The Bode, Nichols, and root locus methods have provided outstanding service to engineers. However, with the introduction of the digital computer, one must reconsider the utility of the existing methods and the possibility of new or refurbished techniques. The insight of analysis and design provided by the investigation of system dynamics using time-domain methods is considered within these covers.

The insight provided by time-domain techniques was obvious for several centuries and was utilized by such masters of applied mathematics as K owton, Lagrange, and Cauchy. However, the resurgence of interest at present can be attributed to several factors of our modern society. The high-speed digital computer is now able to solve hundreds of simultaneous differential equations in a relatively short time. Furthermore, as society and its industry become more complex, the necessity of describing these systems demands a large number of equations and new techniques of solution. For example, the control of large chemical plants is now possible by using a digital computer as the controlling element. Also, the exploration of interplanetary space has spawned a new' technology based on high-speed, complex trajectory control systems. Another factor is the demand for continually more efficient systems in industry as well as for space exploration, enlivening the field of optimum control which uses timedomain techniques. Furthermore, the interest in the method of studying the stability of systems according to the Russian mathematician Liapunov has aroused interest in the time-domain formulation of t.he system describing equations. Also, time-domain methods facilitate the analysis and design of systems with nonlinear elements. The readcr can clearly see the strong motivation for studying the formulation of the dynamical equations of the system using time-domain methods.

1

"I:

r ! i.i i,1

!

CHAPTER T'YO

Introduction to the State Space Technique

2-1 Introduction

The study of the dynamics of lumped element systems. can be carried out in the time domain by a number of standard techniques. The most, common method utilized is the classical method of differential equations, Other methods used less frequently- are the superposition integral and the convolution integral, The difficulty encountered with these time-domain methods for relatively complex systems was the motivating force which led to the development of transform methods such as the Laplace and Fourier transform.



i(l)

FIG. 2-1. RLC Circuit.

FIG. 2-2. Spring, mass, damper system.

The classical formulation of differential equations is a powerful technique of analysis [I], The integra-differential equation for the linear RLC electric circuit shown in Fig. 2-1 is

v(t) = Ri(t) + L d~~t) + ~ 101 i(t) dt, (2~1)

where vet) is the forcing function. Similarly, the differential equation for a linear mechanical spring, mass, damper system shown in Fig. 2-2 is

r(t) = M d2y(t) + f dy(t) + Ky(t)

1 dt2 dt' ,

2

(2-2)

3

2-2J

THE PHASE PLAXE AND THE PHASE SPACE

where rCt) is the forcing function. The solution for these differential equations can be obtained by use of an integrating factor or the selection of a suitable exponential function. The resulting solution provides the variation of the dependent variable i or y with respect to the independent variable, time.

2-2 The phase plane and the phase space

Examining the second-order differential equation for the spring, mass, and damper, One notes that the two physical variables are position and velocity. These two variables may be called phase variables, and the plane for which these two variables are the axes is called the phase plane. The curve defined by the relation between the two phase variables is called the phase trajectory, The phase plane for this system is shown in Fig. 2-3. It may be used to determine the trajectories of nonlinear systems as well as linear systems, Consider the linear second-order differen-

dy = v dt

Initial position Yo

tial equation

. d2y dy

M dt2 + f dt + Ky = r(t). (2-3)

We let dy/dt = v, and then

dv

Jlf dt + fv + Ky = ret).

FIG. 2-3. The phase plane.

Then, writing the second differential equation as a set of two first-order differential equations, we obtain

du f K, r(t)

dt - - MV - MY T 11-1 .

dy

dt = t',

(2-4)

Therefore,

dv/dt d» r - ft' - Ky

dy/dt = dy = )'111)

(2-5)

which provides an equation of the slope of the phase trajectory dv/dy on the 'phase plane as a function of the phase variables. Typically, the determination of a phase trajectory for a second-order system is obtained by the isocline graphical construction method [2]. However, the point of interest here is the concept, of phase variables and the phase plane.

In reality, the engineer is usually faced with systems which are represented by differential equations greater than second-order. However, the extrapolation of the concept of phase variables to higher derivatives and the phase plane to

4

XNTROD'C"CTION TO THE STATE SPACE TECHNIQ"GE

[CHAP. 2

a phase space is not. difficult. For example, for a third-order mechanical system, we could have the phase variables position y, velocity dyjdt, and acceleration d'2y/dt2. Now, however, a method of solution must be provided other than a graphical solution on a plane.

2-3 The state space

Reeonsidering the set of equations (2-4), one notes that the original secondorder differential equation (2-3) was resolved into two first-order differential equations in terms of the phase variables y and v. In general, considering the differential equation

dny dn-Iy ,dy ,

an d(; + an-l dtn-1 -\- ••• T al dt T GoY = r(t), (2-6)

the phase variables may be defined as the n variables

dy d2y an-Iy

y, dt l dt2 ' .. 4 'dtn-1 .

More generally these variables are called the state variables of the system. This nomenclature is necessary since the dynamics of a system are not always described by the phase variables of a mechanical system, but might include such variables as temperature. in the case of a chemical plant.

The state of a dynamical system is a set of numbers such that the knowledge of these numbers and the input or forcing functions will, with the equations describing the dynamics, provide the future state and output of the system. For example, given the present state of a mass on a spring and using the knowledge of the equation of dynamics, we find that the future state of the mass can be determined for any future time. The state of a spring, mass, and damper is described by the position and velocity of the mass.

The state variables constitute a set which is written as a column matrix called the state vector J( = {Xl! X2, X3, ... xn}T, where xT is the transpose of the vector x [3]. The introduction of the vector notation follows intuitively from the phase plane where the line from the origin to the state (y, IJ) at. a particular time may be considered a vector. The extension of this visualization to higherorder systems results in an n dimensional vector.

Now, let us reconsider the spring and mass system. The dynamical equations which describe the future state as a function of the present state are given in Eq. (2-4), and rewritten as

I • r,

!"

f'l

!

dXl

at = X2,

(2-7)

In this case the state vector is

x = {Xl}'

X2

2-5J

5

SOLUTIO::\" OF LINEAR VECTOH DIFFEREN'.rIAL EQUATIO:\'

Furthermore, the state space is then defined as the set of all possible numbers the state variables can assume. In the case of the two-dimensional space for the mass and spring, the state space is equivalent to the phase plane. For example, the state space is mathematically 3. vector space which is defined as a set of elements called vectors satisfying certain axioms which assure commutativity and associativity [4].

2-4 The state vector differential equation

The concept of state is based on the obtainability of the future state given the present state and all the system inputs. The calculation of future states is based on the determination of a relationship between the present and future state. This relationship in the time domain for a dynamical system is the set of system differential eq ua tions. Reconsidering Eq. (2-7) I we have

dXl

dt = X2,

(2-8)

It is now convenient to introduce vector matrix notation, Therefore we obtain

(2-9)

where the notation

~[~~J

indicates the differentiation of each element of the state vector. Reducing the vector matrix equation (2-9) to the more compact vector matrix form, we have

d.,.. = Ax ..L Br dt "

(2-10)

where x is the state vector, A is an n X n square matrix, and B is an n element column vector. Therefore, the solution of this vector matrix differential equation will provide the desired future state of the system. It is interesting to note the physical significance of this equation which relates the rate of change of the state of the system to the state of the system and the input signals.

2-5 The solution of the linear vector differential equation

Initially, consider the familiar solution of the first-order differential equation

dx = ax...!.. br dt .,

(2-11)

where x(t) and ret) are scalar functions of time. The solution of this equation can be obtained by using an integrating factor or by using the Laplace trans-

6

IKTRODt:CTION TO 'l'HE STATE SPACE TECHNIQUE

[CHAP. 2

form method. First, taking the Laplace transform of Eq. (2-11), we obtain

sX(s) - x(O) = aXes) + bR(s).

(2-12)

Solving for X(s) , we are led to

X(s) = x(O) + bR(s) .

S-Q s-a

(2-13)

The inverse Laplace transform of equation (2-13) gives the solution .T(t) = eatx(O) + J: ea{l-T)br(T) dr,

(2-14)

where the second term on the right-hand side of the equation is the familiar convolution integral.

Now let us obtain the solution by using an integrating factor pet). Multiplying Eq. (2-11) by the integrating factor and rearranging, we find

dx

P dt - apx = bpr,

or

d (dP)

dt (px) - di + ap x = pbr.

(2-15)

i

I"

f,1

[I

I

I

, I

r

In order for Eq. (2-15) to be an exact differential in x, we require that

dp

di + ap = O.

(2-16)

This equation is called the adjoint differential equation of Eq. (2~11). The solution of the adjoint differential equation is

pet) = p(O)e-at.

(2-17)

So we find that the integrating factor as we might expect, is an exponential function. Substituting this integrating factor in to Eq. (2-15), we find

d

dt (px) = pbr

or

d t« " ) = e-atbr. dt e X

(2-18)

Integrating this equation, we get

(2-19)

SOLUTION OF LINEAR VECTOR DIFFERENTIAL EQUATION

7

Therefore we obtain

",-alx(t) _ e-atox(to) = t' e-aTbr(T) dr.

J 10

(2-20)

Solving for the dependent variable x(t), we find

",-atx(O = ",-atox(to) + (I e-aTbr(T) dr, J to

and therefore

x(t) = e+a(t-Io)x(to) + (I ea(t-~)br(T) dr, t.;

(2-21)

where the first term on the right side is the natural response of the system and the second term, the forced response.

Now considering the vector matrix differential equation, we expect the solution to be of a form similar to Eq. (2-11). In fact, a solution is expected which involves a matrix exponential function. Consider the vector equation

dx

dt = Ax + Br,

(2-22)

where we 'will consider' the possibility of multiple input functions by including an input vector r which is an m. element column vector. Then the matrix B is the n X m matrix which relates the m inputs to the rate of change of the n states. In order to develop the matrix exponential function, consider the Laplace transform of Eq. (2...,.22):

sX(s) - x(O) = AX(s) + BR(s).

(2-23)

The vector differential equation of Eq. (2-23) is represented by the flow graph shown in Fig. 2-4. The matrix solution to this equation is

Xes) = (sI - A)-lx(O) + (sI - A)-lBR(s),

(2-24)

where I is the identity matrix and ( ) -1 indicates the matrix inversion.

x(~)

FIG. 2-4. Vector flow graph of linear, time-invariant system.

..L .. _

8

INTRODUCTION TO THE STATE SPACE TECHNIQ'C"E

[CHAP. 2

Considering the homogeneous unforced vector differential equation, we have

8X(S) = AX(s).

(2-25)

The n homogeneous linear algebraic equations have a nonzero solution if and only if their determinant vanishes; that is,

det [sl - A] = O,

Expanding the determinant leads to the nth~degree characteristic equation in the Laplace variable e. The characteristic roots S; of this equation are often called the eigenvalues of the matrix A. Alternatively, the roots may be thought of as the complex natural frequencies of the system represented by the matrix A.

In general it is difficult to obtain the required inverse for the solution. Therefore, consider a solution of the vector differential equation by a series expansion. Initially, consider the homogeneous equation, that is, with no inputs:

dx

dt = Ax.

(2-26)

Now assume a Taylor series expansion about the origin of the state space

(2-27)

where the E/s are column vectors, whose elements are constants. In order to determine the unknown vector coefficients, we successively differentiate Eq. (2-27) and evaluate when t is equal to to. It is convenient for this derivation to consider the initial time to equal to zero. Initially, we take the zeroth derivative and set t equal to zero, obtaining

x(O) = Eo.

Then taking the first derivative of Eq. (2-27) and setting t = 0, we obtain

dx(O) _ E

dt - 1·

However, we note that Eq. (2-26) at t = 0 is

d~~O) = Ax(O),

and therefore

El = Ax(O).

Differentiating Eq. (2-27) twice and setting t = 0, we have

(!

SOLUTION OF LINEAR VECTOR DIFFERENTIAL EQUATION 9

the derivative of the original system equation (2-26) J and setting t = 0, we obtain

d2x(O) = A dx(O) = A(A -(0)

dt2 dt x.

Thus we obtain the equation

.. where A 2 implies the matrix multiplication of AX A. Now continuing the process of differentiating, the series solution for xU) is obtained as

(2-28)

This infinite series defines the matric exponential function, exp (At) J which may be shown to be convergent for all square matrices A [5]. Therefore we have

(2-29)

Comparing this solution with the solution to the scalar differential equation· found to be Eq. (2-21), one notes the equivalence of the unforced part of the solution.

The mat-ric exponential function, exp (At), describes the unforced natural response of the system and is called the fundamental Or transition matrix q.,(t). The exponential matrix possesses the important properties which the scalar exponential function possesses. That is, the derivative of the function yields the function itself multiplied by a constant. Using this property, let us attempt to determine the forced or particular solution for the original equation (2-26).

, We shall utilize the method of variation of parameters to determine the particular solution.

Consider the particular solution stated in a general form as

(2-30)

Rewriting the original differential equation (2-22), we have

dx(t)

dt = Ax(t) + Br(t).

(2-31)

Substituting the selected particular function Eq, (2-30) into Eq. (2-31) J we obtain

dq., .J_.1.. dq - A' .J_ B

dt q , 0/ dt - <pq; r.

(2-32)

10

INTRODUCTION TO THE STATE SPACE TECHNIQUE

[CHAP. 2

Considering the derivative of the fundamental matrix, we obtain

Substituting this relation in Eq. (2-32), we find

A.pq + .p ~i = A4>q + Bl'

or

(2-33)

We are now able to integrate this equation and determine the particular solution as we did in the scalar case. Therefore,

and the particular solution is

:Xp(t) = ¢(t) Jot 4>-I(r)B(r)r(r) dr.

Therefore the total solution, which is the sum of the particular and complimentary solutions, is

:x(t) = 1>(t)x(O) + Jot </>(t)</>-l(r)B(r)r(r) dr. (2-35)

This solution is analogous to the familiar total solution for the scalar first-order differential equation given in Eq. (2-21). The properties of the fundamental matrix 4>(t) allow a further simplification of Eq. (2-35), and therefore these properties shall be considered,

2-6 Properties of the furrdarnerrtal manrix

I t has been shown that an important property of the fundamental exponential matrix is that d¢/dt = A1>. Therefore it is noted that the fundamental matrix solves the original homogeneous (unforced) vector differential equation.

Consider the solution to the unforced vector differential equation which was found to be

xCi) = 1>(t)x(O)

or

• [Xl (0)1

: Xz(O).

cf>"n(t) Xn(O)

(2-36)

[Xl (t)1 [cf>1l (t)

x2:(t) = cf>2~(t)

x,,(tj cf>nl(t)

PROPKRTIES .OF THE FUNDAMENTAL ~[ATRIX 11

Hence one notes that in order to determine the fundamental matrix, all initial conditions are set to zero except. for one state, and the output of the states is evaluated with respect to time. For example, if

and

Xz(O) = X3(O) = ... = x,,(O) = 0,

we have Xl (0 = <PI I (0, xz(t) = 1>21 (0, and so on. That is, in general the term ¢ij(t) is the transient response of the ith state due to an initial condition on the fth state when there are zero initial conditions on all the other states. In the next chapter, this property will be utilized to determine the state equations of any system represented by an analog computer simulation.

Reconsidering the series expansion used in Eq. (2-27), in order to obtain the matric exponential function, we note that the Taylor expansion was about t = O. If, more generally, we expand the series about an arbitrary initial time t = to, we obtain

A(t-tol

e ,

(2-37)

and' the complementary solution is

(2-38)

Since the dynamics of the system are not changing with time, we use the argument of rP as t - to. If the system is time-varying, then we must use the fundsmental matrix .p(t, to), which is dependent upon the present time t and the initial time to and not only the time difference between t and to. Considering the evaluation of the system transient response at various times tl and tz while the initial time was to, we write for a time invariant system

and

X(t2) = .p(t2 - tr)X(t1)

= .p(t2 - tr)q,(tl - to)x(to)·

(2-39)

,Ve note that we can write the state at t2 as

so that

(2-40)

This important relationship illustrates the nature of the transition matrix q,(t2 - to) as a sequence of state transitions. Considering that Eq. (2-40) must still hold when t2 = to, we obtain from the left-hand side of Eq. (2-40)

12

INTRODUCTION TO THE STATE SPACE TECHNIQUE

[CHAP. 2

and from the right-hand side of (2-40), we get

Therefore

and

(2--41)

If the initial time of interest is to = 0, then

or, in general,

(2-42)

'Ve can now use this property to determine a simplified relation for the fundamental transition equation.

2-7 The fundamental state transition equation

Rewriting the total solution to the vector differential equation as found in Eq, (2-35) for an arbitrary initial time to, we have

x(t) = <jJ(t - to)x(to) + rt <jJ(t)<jJ-l(T)B(T)r(r) dr, J10

Substituting t.he equation for' the inverse of the transition matrix using the property stated in Eq, (2-42), we have the fundamental state transition equation for a time-invariant system:

xCi) = <t>(t - to)xCto) + t' <t>Ci - T)B(T)r(T) dr, J to

(2-43)

In the case of a time-varying system, 'VB must use the more general form of the transition matrix ¢(t, to). In this case the system matrix A is time-varying and written as A(t). One assumes a solution in terms of .p(t, to) and, as in the time-invariant case, determines the total solution for the differential equation. Then the solution is found to be

xU) = .p(t, to)x(to) + {t ¢(t, T)B( T)r( T) dr, J to

(2--44)

It. is very interesting to note that the solutions given in Eqs. (2-43) and (2-44) contain t,he unforced natural response due to the initial conditions plus a matrix convolution integral containing the matrix of the inputs reT) .

...

2--8]

13

EX.,IJ\((PLES

2-8 Examples

Ex.nlPLE 2-1. It is theoretically possible to utilize the definition of the transition matrix in order to obtain the transition matrix and therefore the solution to the vector differential equation. However, only in very simple cases would this method be of real value. Let us consider the simple open -loop system shown in Fig. 2~5. The differential matrix equation for the unforced system is

dx [0 1J

dt = 0 0 x.

The definition of the transition matrix given in Eq. (2-28) is

A2t Akl

q,(t) = eft = I + At + 2 + ... + k! + ....

(2-45)

Substituting the system matrix A of this system, we obtain

~(t) = [~ ~] + [~ ~] t + [~ ~] t2 + all null matrices,

since A2 = A3 = ... = Ak = ... = O. Therefore the solution to this very simple matrix differential equation is

x(1) = [~ :] x(to).

(2-46)

Of course, while we have determined the solution of this differential equation, we shall seldom have the opportunity to use this simple method of determining the transition matrix.

FIG. 2-5. Open-loop system.

Ex-e..:MPLE 2-2. Let us reconsider the spring, mass, damper system previously discussed. The equations representing the dynamics were found/to be

:t[:I]=[_; _ ~][:I]+[~]r(t).

2 A[ J[ 2 .(11

Let K/A! = 2 andflM = 3, and 11M = 1, then

(2--47)

A _ [0 1],

-2 -3

· .. _

14

I~TRODUCTlON TO THE STATE SPACE TECHNIQUE

[CHAP. 2

and in order to determine the transition matrix, we must determine the characteristic roots of the system. Recalling that

we find that

[SI-AJ=[8 0]_[ 0 1]=[S -1]

o S -2 -3 2 8 + 3

(2-48)

The determinant of this equation is the characteristic equation

det lsI - AI = s(s + 3) + 2 = 82 + 38 + 2 = (s + 1) (s + 2),

which has two real distinct. roots. Therefore, the inverse of IsI - A] may be determined as

lsI - A]-I

adjoint lsI - A] det lsI - A]

1 [8 + 3

- -::-(8-+""""""'1 )--;-( s-_C-: 2) - 2

(2-49)

The adjoint of an n X n matrix A = Gij is equal to

[:~~ ::~ ...

al" aZn .••

a"l]

: ,

. ann

where OIij are the cofactors of Gij. Then, obtaining the inverse transform of lsI - A]-\ we have

[ (? -I -2t)

.p(t) = ~e - e

(-2e-1 -+- 2e-2t)

(+,,-1 _ e-2t) ] (_e-t + 2e-21) ,

(2-50)

where we have assumed that to = 0. First, consider the ease where the system is unforced and initially displaced to the position Xl (0) = 1 and held at that position. If the mass is released without a velocity component, that is X2(0) = 0, we find that

x(t) = rp(t)x(O) = .p(t) . [1] = [cf>ll (t)] ,

o <P21 (t)

or

(2-51)

EXAMPLES

15

Xz

-1

FIG. 2-6. System transient solution.

FIG. 2-7. The phase plane trajectory.

The transient solution is shown in Fig. 2-6, and the state-space (phase-plane) trajectory is shown in Fig. 2-7. If the initial condition of the unforced system is set. to Xl (0) = 1 and X2(0) = -1, we obtain

(2-52)

The state space trajectory for this case moves along the characteristic vector for the characteristic root -1 in the state plane as shown in Fig. 2-8.

Now, let us consider the system response to a step input of unit magnitude.

The transient response will be obtained from the fundamental state transition equation (2-43), which is

x(t) = rp(t - tolx(iol + It rp(t - r)B(r)r(r) dr. (2-53)

to

Considering the case where the initial conditions are zero and the input is a step of unit magnitude, we have

x(t) = lJ,~ <pdt - r) dT] J

c (2-54)

1 <pzz(t - r) dr

10

since

B(r)r(r)

Therefore

(t) 11(+ -(t-n -2{t-7») d

Xl = e - e r,

to

(2-55)

FIG. 2-8. Phase plane portrait for x(O) = (1, -1):1".

16

IN'l'RODUCTIO;c.r TO THE STATE SPACE TECHNIQUE

[CHAP. 2

Completing the integration, we have

(t) - l -(I-to) + 1 -2(I-to)

Xl -"2 - e "2e ,

(2-56)

The transient response is shown in Fig. 2-9 and the phase plane trajectory in Fig. 2-10.

------~L_------~l--Xl 2"

FIG. 2-9. Response for a unit step input. Fig. 2-10. Phase plane portrait for a unit step input.

EXAMPLE 2-3. Consider the RC network shown in Fig. 2-11. In order to determine the transition matrix for this system we must determine the state variables and the describing differential equations. The dynamic character of the network is determined by the storage of energy in the capacitors. The

differential equations are obtained utilizing 4G ';L.

Kirchhoff's current law as follows:

dVl

C -d = 4G(V2 - ~'l) at node 1; t

dvz

2C - = -4G(vz - Vl) - 8Gvz dt

+ 8Gr(t) at node 2.

(2-57)

r(t)

FIG, 2-11. sc network.

Therefore the state variables are identified as the capacitor voltages. This is intuitively satisfactory, since the energy storage equations are of the form E; = tCc;. The state differential matrix equation is

d [Xl] [-4G/C

dt X2 = 2G/C

where

4G/C] [Xl] [0 ]

+ ret),

-6G/C X2 4G/C

(2-58)

:\

-~---

80

EXAMPLES 17

, Now let G Ie = k and evaluate the transition matrix as follows:

4k] = [(8 + 4k)

-6k -2k

-4k ]

(8 + 6k)

We note that there are no zeros in this matrix as there were in the last example. The determinan t is

[(8 + 4k) det

-2k

-4k 1

= 82 + lOk8 + 16k2 =

(s + 6k)

(8 + 2k)(8 + 8k),

and hence the charaeteristio roots of the system. are - 2k and -Slc. Then the transition matrix is

{I [es + 6k)

¢J(t) = £-1 (8 + 2k)(s + Sk) 2k

4k ]}

(s + 4k)

(!e-2kl _ !e-Skt)] (te-2kt + ie-Ski)

(2-59)

For example .. if with no input. we have the initial conditions Xl(O) = 1 and X2(O) = -1, we obtain

[ -Skt]

x(t) = cp(t)x(O) = e .

_e-Skt

(2-60)

EXA:l.fPLE 2-4. We have seen that the state variables are the variables which describe the energy of the storage elements. Therefore, considering the RLC network shown in Fig. 2-12, we select the state variables as the capacitor voltage and the inductor current. Utilizing Kirchhoff's current laws we must determine the vector differential equation

~; = [a11 au] x + [bll] e(t), (2-61)

a21 a22 b21

where

To determine the matrix element all we need to determine the relation between dxddt and Xl while X2 = eel) = O.

FIG. 2-12. The RLC network of

Example 2-4.

I i

! I

. I

i I

.

18

INTRODUCTION TO THE STATE SPACE TECHXIQUE

iL=O

[CHAP. 2

FIG. 2-13. Reduced network of Example 2--4 with X2 = O.

FIG. 2-14. Reduced network of Example 2--4 with Xl = O.

Examining the reduced network as shown in Fig. 2-13, we obtain

Hence the element

1

all = - C(RI + R2) •

The matrix element al2 is determined by setting Xl = e(t) = 0 and obtaining the relationship between dxddt and X2. The reduced network is shown in Fig. 2-14, and the resulting equation is

or

Therefore the element

(2-62)

In order to evaluate the element bll, we set Xl = X2 = 0 and determine the relation between dxddt and e(t) as

1

Z 0 = --;Rc:-1-+-----=Ro--2 e (t)

or

dXI 1

at = C(RI + R2) e(t).

Hence the element

(2---{)3)

'I RI t'c L2 i2
'(04' '\Nv\r-----j I !' '000' t~
c
~. .:
.;":
c,
I"
FIG. 2-1.5 . The RLC network of Example 2-5. .. ~. EXAMPLE 2-5. Consider the RLC network shown in Fig. 2-15. \Ve know from

• l: the previous examples that the capacitor voltage and inductor current are state

f variables. Therefore in this case we have three state variables, i2, is, and Va·

:~ The matrix relationship for which the matrix elements are to be determined is

2-8]

19

EXAMPLES

After evaluating the elements of the second row of the matrix, we obtain the total vector differential equation as

(2-64)

Noting that Eq. (2-64) is of the same form as the vector differential equation (2-58) of the previous example, we see that the ensuing mathematical steps are similar and will be left as an exercise for the reader.

r au al2
dx
dt = an an
an an al31 rb 111

a23 x + b21 e(t),

a33 bal

(2-65)

where x = {Xl, X2, Xa}T = {vo i2, isV. In order to determine the relationships, we utilize Kirchhoff's current law to obtain

(2-66)

Since

C dVe . • + .

dt = '1.1 = l2 la,

we have the relationship required to complete the first. row of the matrix equation. To obtain the relationship for diz/dt in terms of the state variables, we write the Kirchhoff loop equation for the outside loop of the network, which is

L di2 - R· R· +

2 - - -Vo - 2t2 - ltr e.

dt

(2-67)

20

INTRODUCTION TO THE STATE SPACE TECHNIQUE

[CHAP. 2

Substituting Eq. (2-66) in order to eliminate i1 in this equation, we obtain

L diz - R' R (. + . ) +

2 - - -Vc - 2~2 - 1 ~2 I3 e,

dt

(2-68)

which provides the desired relationship. To obtain the relationship for dis/dt in terms of the state variables, 'we obtain the voltage equation for the loop on the left-hand side of the network, which is

(2-69)

Therefore the complete vector matrix differential equation is

0 1 1 0
C C
dx 1 (R1 + R2) Rl x+ 1 e(t) . (2-70)
dt L2 £2 L2 L2
1 RI Rr 1
L3 L3 L3 L3 ·t

In order to obtain the transition matrix, the inverse of the matrix [sf - A] must be obtained. This process requires the determination of the roots of the third-order characteristic equation. Finally, the inverse Laplace transform is used to determine each term of the transition matrix, as illustrated in the previous examples. It can be seen that this process of determining the inverse of the matrix lsI - A] and the inverse Laplace transform of each element will be an increasingly tedious task for higher-order systems. In the next chapter we can look forward to devising a simpler method of determining the transition matrix of the system.

: i

PROBLEMS

2-1. Determine the transition rna trices for the following coefficient matrices A:

(a) A = [1' 2] 3 2

(b) A _ r: ~ :1 l-2 -1 .•

(C)A=r~ : ~l

lo -2 .;

2-2. An aircraft. landing system can be represented by a set. of linearized differential equations. The pitch angle B(t) and the elevator deflection o(t) are related by

d3e d2e 2 dB 2 d8 2

di3 + 2two &2 + wo & = KTowo dt + Kwo 0,

PROBLEMS 21

where

t = damping factor,

wo short-period resonant frequency,

K gain constant,

To path time constant.

The pitch angle I)(t) and the altit.ude h(l) are related by the differential equation

d2h dh _ T

To dt2 + di - T I)(t),

where the aircraft velocity F is assumed to be constant. Assume the initial conditions are zero. The elevator deflection oCt) is the control input. (a) Define a set of state variables as the altitude and its derivatives and obtain the vector differential equation. (h) Usually only the altitude and the derivative of the altitude are readily measurable. Therefore, choose a set of measurable state variables and determine the vector differential equation in terms of these state variables.

FIGURE P2-3

2-3. An important problem in Aerospace Guidance and Control is the control of the attitude of orbiting satellites. This problem is similar to controlling a cart. with an inverted pendulum. The object of the control system is to maintain the inverted pendulum in a vertical position by moving the cart. The cart and pendulum are shown in Fig. P2-3. Determine the state variables and the vector differential equation.

F(l)

?/J//wh//}///Y//Y//Y//Y//Y//?Y//'P//d1'/??0??!; Friction = B1

(a)

(b)

FIG'GEE P2-4

2-4. Determine the state variables and vector differential equation for the mechanical system shown in Fig. P2-4(a). The electrical circuit: analog of the mechanical system is shown in Fig. P2-4(b).

---- ... """.

\

I:'<TRODUCTlON TO THE STATE SPACE TECHNIQUE

[CHAP. 2

22

FIGURE P2-5

2-5. A dynamic vibration absorber system is shown in Fig. P2-5. This system is representative of many situations involving the vibration of machines containing unbalanced components rotating at angular frequency w. The parameters ilh and k12 may be chosen so that the main mass M 1 does not vibrate whenj(i) = a sin wi. Det-ermine the state variables and the vector differential equation.

FIGURE P2-6

r .II

2-6. Consider the pressure process shown in Fig. P2-6. The input pressures are Ul andu2 and the vessel pressures are Xl and xz. The system is represented by the lumped parameters, orifice resistance, and vessel capacitance, as shown. Specifically, let Rl = 2, R2 = 1, Rs = 1, Cl = 3, and C2 = 2. Obtain the vector differential equation representing this process. Assume that the flow is laminar and the pressure changes within the tanks are isothermal.

ij=lj constant

C

FIGURE P2-7

! ~.

f = f motor+ fhad

2-7. An armature-controlled DC motor is often used in electrical control systems as the load actuator. A schematic diagram representing the DC motor and internal load is shown in Fig. P2-7. The field current is maintained constant during operation. Assume the motor is operating in the linear region. (a) For the state variables xT = {e, e, iq}, determine the vector differential equation. (b) Determine the vector differential equation when xT = fe, 8, e} are the state variables.

i ~

23

BIBLIOGRAPHY

1. M. E. VAN VALKENBURG, ]'/etwOrk Analysis, Prentice-Hall, Englewood Cliffs, N. J., 1955.

2. G. J. THALER and M. P. PASTEL, Nonlinear Control Systems, Chapter 3, lVIcGrawHill, New York, 1962.

3. F. E. HOHN, "Elementary Matrix Algebra", Macmillan, New York, 1958.

4. P. R. HALMOS, Finite-Dimensional Vector Spaces, Van Nostrand, Princeton, N. J., 1958.

5. L. A. PIPES, ])I oirix .1.11 eihods for Engineers, Prentice-Hall, Englewood Cliffs, N. J., 1963.

6. L. A. ZADEH and C. A. DESOER, Linear System Theory, :\IcGraw-Hill, New York, 1963.

CHAPTER THREE

Useful State Space Methods

I'

I

3-1 State transition flow graphs

It was noted in the preceding chapter that the determination of the state transition matrix for a higher-order system is a tedious process of matrix inversion. It would be worthwhile therefore to develop a method of determination of the transition matrix which both simplifies the necessary steps and intuitively illustrates the physical foundation of the vector transition equation. The transition flow graph provides us with such a method.

Consider an analog computer simulation of a system. Note that the only dynamic element in the computer is the integrator, that is, the storage element. The output of each integrator can be considered to be a state variable, and the outputs of all the integrators constitute the set of state variables called the state vector x. Therefore if we can represent the system we are investigating by an analog computer diagram, we can determirre the state vector x.

To determine the state transition equation directly from an analog computer diagram, we recognize that the computer diagram can be represented by a signal flow graph. It then remains to deduce a method for the determination of the state transition equation, which is

.' ..

j I'

!.

x(t) = </I(t - to)x(to) + (t </I(t - r)Br(r) dr, J 10

(3-1)

The Laplace transform of this equation was previously found to be

(3-2)

The key point to recognize is that if we can determine Eq. (3-2) directly from the signal flow graph, we can avoid the matrix inversion. Once Eq. (3-2) is determined for a specific system, the inverse Laplace transform of each element is found in order to obtain Eq. (3-1) directly. The system equation is obtained

24

3-1]

STATE TRAXSITION FLOW GRAPHS

25

in the form of Eq. (3-2) by utilizing Mason's signal flow graph gain formula [1],

which is ~

(3-3)

where 1\[ k = gain of the leth forward path,

b.(s) = system determinant or characteristic equation 1 - (sum of all individual loop gains)

+ (sum of gain products of all possible combinations of two non-touching loops)

- (sum of gain products of combinations of three nontouching loops) + ... ,

b.k(.s) = the value of b.(.s) for that part of the graph not touching the kth forward path.

This method can best be illustrated by reconsidering the example of the spring, mass, and damper. The equations describing the dynamics were found to be

[Xl] [ 0

d

- - K

dt Xz - --

M

- ~] x + [ : ] ret),

j_1I M

(3-4)

where

Setting KIM = 2, flM = 3 and ljM = 1, as before, we have

~~ ~ [_: J x + [l(tl

(3-5)

This set of equations is represented by the analog computer (or signal flow) diagram shown in Fig. 3-1.

The initial conditions of . the state variables are shown on the signal flow graph. The initial time is usually chosen to be to. Utilizing Mason's gain

FIG. 3-1. Signal flow diagram for the spring and mass system.

26

USEFL"L STATE SPACE METHODS

[CHAP. 3

formula, we first obtain the system characteristic ~quation by setting .6.(s) = O. Therefore

( 3 2)

LJ.(s) = 1 - - - - - = 0

S 82

= SZ + 3s + 2 = (s + 1)(s + 2).

(3-6)

Evaluating the first term of the transition equation ¢11(8) and utilizing Mason's formula, we have

XI') _ 1· LJ.l(S) Xl(tO)

11.8 - LJ.(8) 8'

1 + (3/s) Xl (to)

X 1(8) = 1 + (3/s) + (2/S2) -s- ,

since

Ii I

LJ.l(S) = (1 + ~) .

(8 + 3)

XI(8) = (s + 1)(8 + 2) XI(tO)'

Hence

(3-7)

(3-8)

(3-9)

i I.~

Next, considering the second term in the first row of the matrix, 4>12(8), we obtain

x ( ) _ 0/8) 6.2(S) X2(tO) _ l/s Xz(to) _ Xz(to)

~ I 8 - 6(s) S - 1 + (3/s) + (2/S2) S - (8 + l)(s + 2)

(3-10)

In order to determine 4>21(S), we write

-2· S-l Xl(tO) -2

Xz(s) = (1 + 38-1 + 2s-Z) -s - = (s + 1)(s + 2) XICtO)'

Considering the effect of the input R(s) on Xl (S), we obtain

8-2 R(s)

X1(8) = (1 + 38-1 + 2s-2) R(s) = (8 + 1)(s + 2) .

i.

(3-11)

(3-12)

Continuing in this manner, we obtain the total transition equation in the Laplace variable as

-.- .. ---.-.--------...;.._-- ....-.l

8 + 3 1

X1(s) = (s + 1)(8 + 2) Xl(tO) + (8 + 1)(s + 2) xz(to)

+ (8 + 1)\8 + 2) R(s),

-2 s

Xz(s) = (8 + l)(s + 2) XI(tO) + (s + 1)(8 + 2) xz(to)

+ (8 + l)s(s + 2) R(s).

i

1_L

(3-13)

3-1]

27

STATE TRAKSITIO);" FLO"W GRAPHS

In order to obtain the state transition equation (3-1), we need only take the inverse Laplace transform of Eq. (3-l3). <Then we obtain

XI(t) = (2e-1 - e-21)xl(tO) + (e-I - e-2t)xz(to) + £-1 {(oS + ~«(; + 2)}' X2(t) = (-2e-t + 2e-Zf)xl(lo) + (_e-t + 2e-2f)xz(to)

r -1 { sR(s) }

, £ (8 + 1) (8 + 2) ,

(3-14)

where we have assumed to = O.

We now have the transition matrix, which is the identical matrix ,ve found in Eq. (2-50) using the inverse matrix process. Fortunately, we gain an extra benefit from this signal flow graph method since we have included the effect of the input signal. It is a distinct advantage that we have avoided the necessity of evaluating the integral

(I q,(t - T)Br(T) dr . J 10

Using the signal flow method, we directly obtain the response due to the input signal. For example, consider t.he case discussed in Section 2-8, Example 2-2. 'where the system 'was subjected to a unit step input at the time to and the initial conditions were equal to zero. It is obviously easy to solve for the response directly using Eq. (3-14) as follows:

(t) - ('-1 { (l/s)e-sto } _ .! _ -(I-iol + 1:. -2(/-10)

Xl - "-' (8 + 1)(8 + 2) - 2 e 2 e )

(t) = ",-I { oS· O/s)e-SIO } = -(I-tol _ -2(t-Io)

xz. do_, (s + 1)(8 + 2) e e .

(3-15)

This solution is identical to that obtained in Eq. (2-56) utilizing the integral equation form.

R(S)------tl 8(S+1~(8+2) 1-1-- ..... C(s) FIG. 3-2. Open-loop third-order system.

The dominant advantage of the signal flow method of obtaining the state transition equation is that it does not become mare difficult or tedious as the order of the system increases. As an example to illustrate the method for a third-order system, consider the open-loop control system shown in block diagram form in Fig. 3-2. The differential equation describing the system is

(3-16)

-------

28

USEFUL STATE SPACE METHODS

[CHAP. 3

Fro. 3-3. Signal flow graph for the open-loop third-order system.

The state variables are chosen as

T ( 1 {1.. de

x = ,Xl) X2, X3J = c, c, cJ, where c = dt'

Then the set of first-order differential equations describing the system is

dxs

- = -2X2 - 3xs + 1". dt

(3-17)

Rewriting this equation in matrix form, we obtain

~~ = l~ ~ ~1 x -+- l~1 r.

o -2 -3 1

(3-18)

The transition signal flow graph of this system is shown in Fig. 3-3. The characteristic equation of the system is

.,

6.(s) = 1 + 3s-1 + 2s-2 = S2 + 3s + 2 = (s + l)(s + 2) = O.

Using Mason's gain formula, we obtain the equation for X,(s) which is

(3-19)

where

6.(s) = 6.1 (8) = 1 + 38-1 + 28-2,

~zCs) = 1 + 38-1, ~3(S) = 1.

Therefore we have

+ R(s)

8(82 + 38 + 2)

(3-20)

."

J

3-1]

STATE TRANSITION FLOW GRAPHS

Continuing this process, we obtain the state transition equation as follows:

0) (8 + 3) Cq~S)) R(s)
sq(s) sq(s)
x(t) = £-1 0 (s + 3) CtS)) :x(to) + £-1 R(s) (3-21)
q(s) q(s)
0 (-28) CtS») sR(s) J
q(s) q(8) where q(s) = 82 + 3s -+- 2. Completing the inverse Laplace transformation for a step input of magnitude equal to r(to), we find

x(t)

(!n(T) - 2e-T + te-2T) (2e-T _ C2T) (2e-T _ 4e-2T)

(3-22)

where T = t - to in order to simplify the notation.

R(s)

FIG. 3-4. Signal flow graph for the closed-loop third-order system.

NO\v let us consider the closed-loop feedback system where the open-loop transfer function is that shown in Fig. 3-3. Closing the open-loop control system with a unity negative feedback path, the signal flow diagram is shown in Fig. 3-4. Then the matrix differential equation becomes

dx = l ~

dt

-1

(3-23)

29

30

USEFUL STATE SPACE l't'lETHODS

[CHAP. 3

Comparing Eq. (3-2:3) with the open-loop system equation ~-18), we note the only altered term is 031 which accounts for the closed-loop feedback of Xl· The determinant or the characteristic equation is

.6.(s) = 1 + 3s-1 + 2s-2 + S-3 = S3 + 3s2 + 2s + 1

= (s + 0.34 + ).55)(8 + 0.34 - j.55)(8 + 2.34) = 0, (3-24)

which is third order compared with the open-loop system second-order characteristic equation. Comparing the signal flow graphs, Fig. 3-3 and Fig. 3-4, we note that the terms due to Xl (s) will be added to the matrix equation (3-21). Instead of ¢>21(S) = <P31(S) = 0, we find these transition elements are

-X1(tO) -Xl (to)

S3 + 382 + 2s + I = ----peS)

and

l

II.

Thus the state transition equation is

0) (Spt)3) C~S»)

x(t) = £-1 ( p~S») (S(Spt) 3») C~S») x(to)

(;(S~) (p(~;) (psc:»)

R(s) pes)

pes) 1

82 R(s) I pes) )

(3-25)

Hence it can be seen that the determination of the state transition equation is not more difficult for a closed-loop system than for an open-loop system. The evaluation of th0 inverse Laplace transform of Eq. (3-25) follows directly in a manner similar to Eq. (3-22) for the open-loop system.

Finally, in this section we will consider an open-loop system with a transfer function of the form

G(s) = KIIi(S + Zi) . 8NIIj(s + Pi)

3-1J

STATE TRAXSITlON FLOW GRAPHS

31

The transfer functions of the systems considered previously possessed a zerothorder polynomial in the numerator. It would be worthwhile to reconsider the open-loop system of Fig. 3-2 when

C(s) _ G( ) R(s) - 13

(13 + 3)

(3-26)

13(13 + 1)(13 + 2)

The zero in the numerator necessitates an additional node in the signal flow graph of Fig. 3-3. We rewrite Eq. (3-26) as

C(s) _ (13 + 3)X(s)

R(s) - 13(13 + 1)(13 + 2)X(s)

(3-27)

Examining the denominator, we have

Xes) 1

R(s) = 13(13 + 1)(13 + 2) ,

(3-28)

which is represented by the flow graph of Fig. 3-3. written as

The numerator can be

C(s) = (13 + 3)X(s)

= sX(s) + 3X(s) = X2(S) + 3Xl(S).

(3-29)

Therefore, for this case, the output C(s) is not equal to the state variable Xl (13) but rather to a sum of two state variables. In matrix form, the output e(t) is

(3-30)

The total open-loop system is represented by the signal flow graph shown in Fig. 3-5. In general, with zeros present in the denominator of G(s), we obtain the output matrix equation

e(t) = ex,

(3-31)

where C is an In X n matrix and c is an m element column vector.

R(s)

C(s)

-2

1

FIG. 3-5. Signal flow graph for an open-loop system with a zero.

--- - - - -- -- -

32

VSEFrL STATE SPACE METHODS

{CHAP. 3

3-2 Linearization of nonlinear systerns

It would be well at this time to point out that although only linear systems have been discussed at this point, the methods considered can be utilized if the system's equations can be linearized. In a large percentage of investigations, one can consider the linear case since the system variables experience small changes about the equilibrium point, and the lineal' small signal case results. The linear system vector differential equation was written as

dx

dt =Ax + Br.

(3-32)

Now if the system is nonlinear and the coefficient matrix is itself a function of the state variables, we write

dx

dt = F'(x, t) + Br,

(3-33)

where F(x, t) indicates the function of x and t. In the more general nonlinear case, we may have to consider the equation

dx

dt = F(x, r, t).

(3-34)

Fortunately .. we can oftcn consider the approximate small signal case for the nonlinear system and linearize the system equation about the equilibrium point. In the case of servomechanism, regulation, and guidance problems, we desire that the equilibrium x, be equal to the reference or desired position so that. we may write x, = r; and if r = 0, then we have

F(x, r) - F(x,O) = 0,

(3-35)

where 0 equals the null vector whose elements are all zeros. In order to linearize the equations of the system, we obtain a Taylor series expansion about the equilibrium position (xe) re). The equilibrium position is determined from the relation

(3-36)

We define the small variation about the equilibrium point as x'" = x - x, and r * = r - r e, where x * and r * equal the variation from the sta te vector and the input vector, respectively. Then the Taylor series expansion is

d;t* = Ax* + Br* + F(xe, r8) + g(x, r),

(3-37)

where A is an n X n matrix whose elements are

_ af;(x, 1') /!

aij ax'

J evaluated at

3-3]

33

LIXEAR TRA_NSFORMATIOX OF THE STATE VECTOR

and B is an n X 1n matrix whose elements [2J are

The term g(x, r) contains the second and higher derivatives in x and r. The matrix of elements af;/axj is called the Jacobian matrix of F with respect to x; and the matrix of elements aj;j Br k is called the Jacobian matrix of F with respect to x, 1£ the variation from equilibrium is sufficiently small, we may neglect the higher-order terms g(x, r), and we obtain the matrix equation

dx'"

dt = Ax'" + Br",

(3-38)

Hence we obtain an equation identical to Eq. (3-32) which represents a linear system. The accuracy of the system response solution obtained utilizing Eq. (3-38) is a function of the accuracy of the assumption of small variations of x and r and small nonlinear effects.

This linearization method, utilizing the Jacobian matrix, is very useful for systems possessing nonlinearities such as gain saturation and gear backlash. If the nonlinear elements result. in large nonlinear signal effects, such as for a relay or on-off control, then other methods of analysis must be considered. Several time-domain methods of analysis will be considered in the following chapters,

3-3 Linear transformation of the state vector

Often in the solution of differential equations, we found it effective to change the variables of interest to another coordinate system. This change of variables often led to a clarification of the system solution, This is also true for t.he vector differential equation we have been considering, that is,

~; = Ax + Br.

(3-39)

A change of the coordinate system from the phase variables x to a new set of related state variables y often aids in the comprehension of the solution. Also, the linear transformation is useful in Liapunov's method of determination of stability to be considered in the following chapter.

A linear matrix transformation may be written as

x = Ty,

(3-40)

where the matrix T transforms the state variable y into the state variable x. Also, we may write the inverse transformation in terms of y as

(3-41)

34

l'SEFCL STATE SPACE METHODS

[CHAP. 3

illustrating the fact that the new variables yare linear combinations of the original state variables x. For example, in general, Yl is written as

where ti? equals the ijth element of the inverse T matrix. Taking the derivative of Eq. (3-40), we obtain

dx = T dy.

dt dt

(3-42)

Substituting Eqs. (3-42) and (3-40) into the original vector differential equation (3-39), we obtain a differential equation in terms of the transformed variables y as follows:

T dy = ATy + Br

dt .

(3-43)

Premultiplying by T-1, we obtain

(3-44)

Since this differential equation represents the same system as Eq. (3-39), the characteristic equations must be equal, To show this fact, we consider the determinant of Eq. (3-44):

det IT-1AT - }.II = det IT-lCA - AI)TI = det IT-IliA - AIIITI = det IA - All = O.

(3-4.5)

i'

i

.L

: i

,.

Ii

Therefore the. characteristic roots and characteristic vectors of the original differential equation and the transformed differential equation are identical.

A very simple and useful state space is the one for which the matrix T-1AT contains only diagonal elements. This space is sometimes called the normal coordinate space, and T is called the normal or diagonalizing transformation. The diagonal matrix is called the Jordan matrix and is written [3J

(3-46)

Hence the differential equation in y is

(3-47)

The Jordan matrix formulation has the advantage of uncoupling the response due to each characteristic vector. Let us first consider the case where we have

.j

"

3-3]

LIXEAR TRANSFORMATION OF THE STATE VECTOR

35

real unrepeated roots. Then, we expect the Jordan matrix to be diagonal as. follows:

Al 0 0 ···0
0 A2 0 ···0
A= 0 0 A3"'O
0 0 0 ... An where A b A2, ... An are the characteristic roots of the nth-order system.

(a)

FIG. 3-6. A second-order system.

An example will clearly illustrate the concept of the uncoupling transforrnation. Consider the system shown in block diagram forrn in Fig. 3-6(a) and signal flow graph form in Fig. 3-6(b). The differential equation representing the system is

(3-48)

This equation is identical to that considered for the spring, mass, damper in Chapter 2. Of course, the equation could represent an open-loop control system as shown in Fig. 3-6(a). The original state vector is

and we wish to determine the transformation

(3-49)

A simple, yet illustrative, method of determining the transformation is to use a partial fraction expansion. For Xl (s), we have

R(s)

R(s) - R(s)

(s + 1) + (S + 2) .

(3-50)

(s + ll(s + 2)

For X2(s), we obtain

X (s) = sR(s)

2 (s__l_1)(s+2)

(3-51)

-R(8) _l_ 2R(8)

(s + 1) I (s + 2) .

36

USEFVL STATE SP.~CE METHODS

[CRAP. 3

Equations (3-50) and (3-.51) are represented graphically by the signal flow diagrams in Fig. :3-7. Note that YI(S) is dependent only upon the transfer function l/(s + 1), or, in other words, is dependent. only upon the characteristic root of -1. The relation for Y2(s) is dependent only upon the root of -2; that is, the definition of Y (s) is

Y ( ) = R(s) 1 S S + 1

and

(3-52)

In differential equation form, these equations are

dYl

dt + Yl = ret),

Written in matrix form, we have

(3-53)

~~ = .: _:] Y + GJ r(t)

As expected, the elements on the main diagonal of the A. matrix arc the characteristic roots of the system. We note that we may solve Eq. (3-46) for T since we know A and A.. However, it is simpler to determine T from the signal flow graph, Fig. 3-7. Writing the signal flow equations, we obtain

or

dy A. + T-1B

dt = Y f.

I

I.

r

1

or

[ 1 -1]

x = -1 2 y.

(3-56)

Hence

T = [ 1 -1]

-1 2

(3-57)

and

.. "

j.

(3-54)

(3-55)

(0)

FIG. 3-7. Signal flow graph of the partial fraction expansion .

3-3]

LINEAR TRANSFORMATION OF THE STATE VECTOR

R_(S_)_--I .. I (s+ 1)~(S+2) I

FIG. 3-8. Third-order system with repeated real roots.

One can obtain the Jordan matrix using T, T-I, and A. In this case, we obtain

(3-58)

which is identical to the solution, Eq. (3-154).

Let. us now consider the linear uncoupling transformation when the system possesses repeated real roots. For example, consider the third-order system represented by the block diagram shown in Fig. 3-8. The equation for X I (8) is

Expanding Xl (8) in a partial fraction expansion, we have

X1(8) = __ 1_ + -1 + _1_.

J?(8) (8 + 1)2 (8 + 1) 8 + 2

Then for X2(s), we obtain

X2(s) R(s)

s -1 2 -2

- ---+---'----.

(s + 1)2(8 + 2) - (8 + 1)2 S + 1 ' s + 2

And finally for X3(S), vre have

X3(S) S2 1 -3 4

R(s) = (s + 1)2(s + 2) = (3 + 1)2 + (s + 1) + s + 2·

(3-59)

(3-60)

(3-61)

(3-62)

The signal flow diagram for the transformed states is shown in Fig. (3-9). This figure illustrates that for the repeated roots the states Y 1 (s) and Y 2(S) cannot be decoupled. The equations for Yes) are written as

R(s) Y3(s)=s+2·

17 ( . R(s)

2 s) = S + 1 '

Therefore the vector differential equation in y is

_ ~ ~] y + r~] r.

o -2 L

(3-63)

(3-64)

37

38

USEFUL STATE SPACE METHODS

[CHAP. 3

R{s)

1

:1

,i

FIG. 3-9. Signal flow diagram for the transformed states.

The repeated real root at -1 has the effect on the diagonal matrix of adding a one in the element position above the repeated root in the first row, and deleting the one in the forcing function matrix in the same row that the one is added to in the A matrix. In general, if the following matrix A was obtained :

Al 1 0 0 0 0
0 Al 0 0 0 0
0 0 A2 1 0 0
A 0 0 0 A2 1 0 (3-65)
0 0 0 0 A2 0
0 0 0 0 0 A3 we would deduce that the system possessed two real repeated roots Al, and three real repeated roots ).2.

Finally, for systems with a pair of states possessing complex conjugate roots, we shall leave the roots coupled in the complex conjugate form. In order to illustrate this approach, we shall consider the system shown in Fig. (:3-10)

FIG. 3-10. Closed-loop control system.

j.

3-3J

39

LI:,\TEAR TRANSFORMATION OF THE STATE VECTOR

where the damping ratio 1 is less than one. The vector differential equation for the system is

dx = [ °

dt 2

-w",

1 ) x + [ °2) ret).

-2.1w", w"

(3-66)

If we consider -the simplified system where Wn = 1 and 1 = 0, \ve have

~: = [_~ ~) x + [~]r(t).

(3-67)

This system illustrates the problem very well since the roots of the system are imaginary, yet the states are real. However, if we attempt to transform the state variables to a new coordinate system following the procedure previously utilized, we obtain

Res)

=--.'

(3-68)

and

8 -)

Therefore the transformed differential equation is

(3-69)

The uncoupling transformation for complex roots results in a diagonal matrix with imaginary coefficients. Normally, complex roots are not completely deeoupled, but only partially declmpled, so that only matrices with real elements result.

In the case of complex roots, one may partially uncouple the states by means of a transformation. The transformed state variables yare set equal to a linear combination of the original state variables x. Let us reconsider Eq. (3-66) and write the equation for Xl (13) as

1

(8 + .\Wn + jw"p,)(13 + .\Wn - jwnp,) ,

where J.!. = (1 obtain

(3-70) 12)1/2. Expanding X1(s)/R(s) into partial fractions, we

X1(s) ...L K1. + 1?1. = "

---= V1+V1,

w~R(s) S I tw" + JWnP, S + .\w" - )WnJ.!.

where K 1 = bWnfL and K 1 = the conjugate of K I' However, we learned from the development of Eqs. (3-67) and (3-69) that if we 'want to avoid complex

(3-71)

40

USEFUL STATE SPACE METHODS

[CHAP. 3

coefficients, we cannot set Y 1 (s) = VI (s). Thus, if we choose

and

(3-72)

we have

(3-73)

Hence

T = [~

(3-74)

and

(3-75) (3-76)

Therefore Eq. (3-76) is

. [-twn

y=

-Wn!1-

(3-77)

In this case, the variables x and y were equated to linear combinations of each other as shown in Eq, (3-72) for x and Eq. (3-75) for y. Of course this can be deduced from the fact that the T and T-1 matrices are constant coefficient matrices. The resulting transformed differential equation does not have imaginary coefficients as did Bq. (3-69) for the completely uncoupled y variables. Also, examining Eq. (3-77), we do not have the diagonal matrix as results for the completely uncoupled y variables. For this example, we have succeeded in obtaining transformed variables, but not uncoupled variables. In the following chapter we shall have an opportunity to use several linear transformations of variables.

PROBLEl\IS

3-1. A unity feedback control system has a plant with the transfer function

. ) 1

G(s = s(s + 2)

Draw the flow graph and determine the vector differential equation. Then obtain the state transition equation.

3-2. An open-loop control system has a plant with the transfer function

1

G( 8) = -8(,-82:-+.,----.,.7-s -'+----'12"")

Draw the state flow graph and determine the vector differential equation. Finally, obtain the state transition equation.

PHOBLEMS 41

3-3, An open-loop control system has a plant with the transfer function '/+3s+2

G(s) = S(82 + is + 12) .

Draw the state flow graph and determine the vector differential equation. Discuss

;:, the difference, if any, between the state transition equations of this problem and Problem 3-2.

3-4. The block diagram of a fecdback control system is shown in Fig. P3-4. The process transfer function is

G(8) = 2(s + 1) . 8(S + 2)

(a) Determine the vector differential equation. (b) Determine the state transition equation.

R(s) ;1-9 -I G(,) 1 C(~) R(s) ;1-9 -IK/" I C(s)
I .. •
FIGURE P3-4 FIGURE P3-5 3-5. The block diagram of a feedback control system is shown in Fig. P3-5. Determine the transition matrix for the system.

3-6. Consider the computation of the trajectory of a cannon ball with air resistance as an introduction to ballistic computations. The cannon ball is fired with an initial velocity v(O) and at an initial elevation angle e(O). It is desired to find the trajectory of the cannon ball with respect to a rectangular

cartesian coordinate system as shown in Fig. Y

P3-B. The aerodynamic drag force is assumed

to be proportional to the square of the instantaneous velocity of the projectile. It is usually convenient. to write the dynamic equations with respect to the velocity axis and the axis perpendicular to the velocity axis. Determine the vector differential equation. Obtain the linearized vector differential equation and state the assumptions concerning the linear equations.

3-i. A section of a panoramic receiver syst-em is shown in Fig, P3-7, Determine the vector differential equation. Obtain the linearized vector differential equation.

~----------------------~x

FIGURE P3-6

1'(1)

c

R

L

FIGIJRE P3-i

___ ---

42

USEFUL STATE SPACE METHODS

[CHAP. 3

3-8. An open-loop system has a transfer function G(s) 1/(82 + 4s + 3). Deter-

mine the diagonalizing transformation T and the differential equation in the transformed variable. Then draw the flow diagram for the transformed state variables. 3-9. Repeat Problem 3-8 when the open-loop transfer function is

1

(a) G(s)

(s + 1)(8 + 2)(8 + 3) , 1

(b) G(s)

(c) G(s)

(8 + 1)2

BIBLIOGRAPHY

1. 8. J. 1-1A80:"1, "Feedback Theory-Further Properties of Signal Flow Graphs," Proe. I.R.E. 44,7,920-926 (July, 1956).

2. W. KAPLAN, Advanced Calculus, Addison-'Wesley, Reading, Mass., 1952.

3. J. S. FR;.,.:ME, "Matrix Functions and Application;;," IEEE Spectrum, pp. 102-108, April, 1964.

j 1 ·1 ,

J

:1 J

~

,

CHAPTER FOUR

System Stability and Liapunov's Afethod

4-1 Introduction

A fundamental aspect of the study of dynamic systems is the determination of their stability. There are many powerful methods for the study of stability available such as Routh's method and the Nyquist technique. However, these methods utilize the complex variable transformation and are only practical for linear systems. Considering our general interest in high-order nonlinear systems we turn to a stability method utilizing the matrix formulation developed in the preceding chapters. Perhaps the most general method of the determination of stability is Liapunov's direct method developed 70 years ago in Russia [1]. If one utilizes the state space formulation of the dynamic equations of the sys.tem, the system stability may be determined without solving the vector differential equation. This is a distinct advantage for higher-order nonlinear systems. While the method is not mechanical and often requires considerable ingenuity, it often provides information about the stability when other methods are ineffectual.

The mathematical proofs and a discussion of t.he mathematical issues are presented in detail in the literature [2, 3]. It is the purpose of this chapter to present Liapunov's direct method and to illustrate its use in the study of system stability.

4-2 The concept of stability

The physical idea which is the foundation of the mathematical concept of stability isthe idea of a bounded system response. That is, if a system is subjected to a sudden disturbance or input and the response of the system is bounded in magnitude, then the system may be said to be stable. In other words, if an equilibrium condition exists, it is asymptotically stable if the system ultimately returns to the equilibrium position following any small disturbance. vVe shall employ a definition of stability in the sense of Liapunov's method.

43

,.

i : {

44

S,{STEl\'! STABILITY AND LIAPUNOY'S :METHOD

[CHAP. 4

In order to illustrate the concept of asymptotic stability consider a system whose equilibrium state exists at the origin x = O. The Euclidean length of the vector from the origin, oft.en called the norm, may be written as

Jlxl'l = exT x//2

= (xi + x~ + ... + x;;li2.

This idea is easily illustrated on the phase plane for a second-order system and the norm is simply

[[xII = ({Xl) X2} [::])1/2

= (xi + X~)1!2.

Unstable

FIG. 4-1. Stability regions in state space.

Within the n-dimensional state space, vre let S(R) be a spherical region of radius R. Then the region S(R) is said to be stable if for any Set) a transient starting in SeE) does not leave S(m as shown in Fig. 4-1. If, in addition, there exists a 0 > 0, and x (0) is in the sphere S ( 0), and the transient solution approaches the equilibrium state x = 0 as time approaches infinity, then the system solution (or response) is asymptotically stable.

Furthermore, if 0 can be arbitrarily large then the solution x = 0 is asymptotically stable in the large, often called global stability,

The region of stability is not limited for a linear system, but is limited for a nonlinear system. The nature of the nonlinearity governs the size of the sphere, 8( B).

4-3 The direct method of Liapunov

The direct method of Liapunov is based on the concept of energy and the relation of stored energy and system stability. We recall that in our previous determination of the state variables of the RLC network we utilized the inductor current and the capacitor voltage. Both of these variables represent the energy state of the network. The energy stored in the capacitor is

Ee = tCv~,

and the energy stored in the inductor is

4-3]

THE DIRECT METHOD OF LIAPUNOV

45

In the case of the mechanical system comprised of the spring and mass, we used the position and velocity as the state variables. For the dynamic mechanical systems the potential energy in the spring is

and the kinetic energy is

E - .!.il-[ (dXI)2

2 - 2.1. dt

where K = spring constant and ;1[ = mass.

Fundamental to Liapunov's method is the idea that for a stable system the stored energy will decay with time. That is, since the system is characterized by the state variables which represent the energy state of the system, the stability may be determined by examining a function of the state variables. In fact, the energy of a system is a positive quantity and if the time-derivative of the energy is negative we may denote the system as asymptotically stable.

Consider a simple RC circuit which is represented by the dynamic equation

dVe

C-+Gv =0

dt C

(4-1)

or

The solution of this differential equation is

xI(l) = XI(O) exp (-tiRe).

(4-2)

The system is stable. The energy for the system is

B; = ~Dxi = tCxi(o) exp (-2t/ Re),

(4-3)

which is positive. The time derivative of the energy is

(4-4)

which is negative. Also, the ratio

RC 2

(4-5)

-(dEc/dO

is one half the magnitude of the circuit time constant.

The reason we have investigated the circuit is to extend these familiar concepts of energy and energy change to a more generalized form. Let us postulate a scalar function called a Liapunov function V(:s:), where rex) = 0 when x equals the equilibrium state, "We will require that V(x) be greater than zero

... _-- . "'"" ... - - ._ .. - _._ .. _ .... _.

SYSTEM STABILITY AND LrAPUNOY'S METHOD

46

[CHAP. 4

for all x other than the equilibrium state and that the time-derivative of VeX) be negative. Then the system represented by Vex) is asymptotically stable. That is, a system is asymptotically stable in some region of the state space if, in that region,

Vex) > 0 for x r£ x€) (4-6a)
dY /dt = 17(x) < 0 for x ¢ x., (4-6b)
"V(x) = 0 for x = x€) (4-6c)
"V(x) ___. co for 1.lxll ___. co , (4-6d) where x, = the equilibrium state. The Liapunov function V may be thought of as a measure of the distance from the equilibrium state. Equation (4-6) is a sufficient condition for asymptotic stability.

In addition to providing a stability test, the Liapunov function may provide information about the rate of decay of V as the time constant does for the RC circuit. A parameter 1] may be defined as the ratio (- -y" / V). Therefore 1 / 1] is the largest time constant relating to changes in Vex) and largc values of 1] are desired for rapid decay of the system energy. However, 1] should only be thought of as a figure of merit, since its value depends upon the choice of Vex).

Initially, we shall investigate the linear case where the system homogeneous differential equation is

x = Ax.

(4-7)

Let us choose the Liapunov function to be the square of the norm multiplied by an n X n symmetric positive definite matrix P so that

(4-8)

The time derivative of F is then

(4-9)

Substituting Eq. (4-7) into this equation, we obtain T' = xTpAx T (Ax) TpX.

(4-10)

Since (Ax) T = xT AT, we have

(4-11)

Now since T'(x) was defined as positive, we require, for an asymptotically stable system, that l;~(x) be negative. Therefore we require that

(4-12)

where

(4-13)

4-3]

THE DIRECT METHOD OF LIAPUNOV

47

Consequently, for asymptotic stability for a linear system it is sufficient that Q be positive definite. A necessary condition for a positive definite Q is that each element on the diagonal of a symmetric Q matrix be positive.

Sylvester's rule states that a necessary and sufficient condition that a matrix be positive definite is that the determinant of all principal minors of the matrix be positive [4]. For example, for a 3 X 3 symmetric Q matrix, it is required that the following determinants be positive:

det (Q).

It is often useful to construct a negative definite time derivative of the Liapunov function and then determine if the resulting Liapunov function is positive definite. That is, for a linear system, we choose Q as a symmetrical positive definite matrix. Then, for a stable system, the resulting P matrix must be positive definite.

For a nonlinear system, Eq. (4-6) may not be satisfied, although the system is stable. If we choose a T(x), determine the T7(x) for this Liapunov function, and find that Vex) is not negative definite! we cannot conclude that the system is unstable. The Liapunov theorem of Eq. (4-6) for stability is sufficient but not necessary. Therefore the only recourse in choosing the Liapunov function is another trial function.

Fortunately there exists another approach to utilizing Liapunov's method.

This approach is based on the initial selection of a negative definite v""(x). It can be shown that if there exists a time derivative of 'V which is negative definite, that is,

dV(x) < 0

dt '

and the limitx___'", 17(x) = -r:r;" then the system is unstable if V(x) is not positive [2]. For unstable systems, regions of instability can be established using this theorem.

EXAMPLE 4-1. Before proceeding any further, it will be helpful to re-examine the linear RC network described in Example 2-3. The system matrix for the network was

A = [-4k 2k

4kJ

-6k

We wish to identify the restrictions on the system parameters in order to guarantee stability. It is required that. Eq. (4-13) be satisfied. Since Q is an arbitrary symmetric positive definite matrix, in this case we choose Q = I, the identity matrix. The matrix I is positive definite since all the principal minors are positive. Then Eq. (4-13) becomes

[PH Pl2J [-4k

Pl2 P22 2k

4kJ .:

-6k + 4k

2kJ [Pll

-6k P12

P12J=[-1 OJ. (4-14)

P22 0-1

48

SYSTEM STABILITY AND LIAPUNOV'S METHOD

[CHAP. 4

The elements PIll P12, and P22 are to be determined and we obtain three simultaneous equations as follows:

= -1,

4kpll - 10kpI2 + 2kpn = 0, 8kp12 - 12kp22 = -1,

or in matrix form, we have

r -:: -1~: ~kj r ::~j = r -~j .

l 0 8k -12k lp22 l-l

Solving for the elements of P we obtain

p = 4~k[:

(4-15)

(4-16)

Investigating for positive definiteness, we find the principal minors are det 171 > 0

and

det [: :] = 42 - 16 > O.

Therefore the system is asymptotically stable as long as k > O. The Liapunov function for this investigation was

.'

(4--17)

:] .

(4-18)

The Liapunov function contains two energy terms plus an added term of the states multiplied together. The state variables were chosen as the capacitor voltages in the state equation formulation.

Knowledge of the rate of decay of the Liapunov function is determined by evaluating 1/, the ratio (-V IV). Substituting the Liapunov function Eq. (4-8) and the derivative of the Liapunov function Eq. (4-12), we obtain

xTQX

1) = -_. (4-19)

xTpx

4-3]

THE DIRECT METHOD OF LIAl'UNOV

49

Since we chose Q = I, we have

The quotient ().:Tpx) / (xT x). often called the Rayleigh quotient, satisfies the relation Amax ::::: (n"x)/(xTx) ::::: Amin, where Amax and Amin are the characteristic roots such that Amax ::::: Al ::::: A2' •• ::::: Amin [5J. Thus

1 :s; 11 < _1_.

Amax Amin

(4-20)

A conservative estimate of 11 is the lower bound

1

1]=--'

Ama"

(4-21)

Therefore, for the RC network under consideration, the roots of P are found from the relation

(4-22)

The two characteristic roots are found to be Amax = 0.263/k and Amin = 0.062/k. Therefore, the figure of merit is 3.8k :s; 11 :s; 16.1k. This figure of merit was obtained without solving the original matrix differential equation. Since we did solve this differential equation for Example 2-3, we know that the longest time constant of the system was 1/2k and therefore the energy would decay 'asymptotically with a time constant of 1/4k. Therefore, the figure of merit provided by the Liapunov function is a good indication of the rate of decay of energy for this simple linear system.

The general solution for P for the linear second-order system is determined from the set of equations obtained in the manner of Eq. (4-lEi). That is, if

A = [all a12]

aZl aZ2

and the equation

PA + ATp = -I

"(4-23)

is multiplied out and rearranged, we obtain

o 1[P111 [-11

a21 PI2 = o·

2a22 PZ2 -1

(4-24)

SYSTEM STABILITY AXD LIA:?UNOV'S METHOD

50

[CHAP. 4

Now we may solve for P if the inverse of the square matrix in Eq. (4-24) exists requiring that the determinant of the square matrix be nonzero. Then solving for P we are led to

-(aI2a22 + a21a11)], (4-25) (det A + a~l + a~2)

where trace A = the sum of the diagonal terms of A = all + a22 and det A = determinant of A = allan - aI2a21' The system is asymptotically stable if and only if the matrix P is positive definite. Therefore the principal minors of P must be positive, requiring that

detA + a~l + a§2

Pll = - 2 (tr A) (det A) > 0

(4-26)

and

(4-27)

Equation (4-27) implies that since the numerator of the inequality is always positive, we must require that

(4-28)

Then examining Eq. (4-26), we find the other requirement

trace A = all + a22 < O.

(4-29)

For example, let us acquire the requirements for asymptotic stability for the spring, mass, and damper system investigated in Chapter 2, where

We determine immediately that (Kllli) > 0 and (fIAi) > 0 for a stable system response.

4-4 The Liapunov function in terms of transformed state variables

There is an advantage in choosing a Liapunov function in terms of the canonical or uncoupled variables discussed in Section 3-3. We recall that we defined the transformed variable y as

(4-30)

and

x = Ty.

i

4-4J

LIAPU!>,'ov FUNCT(OX -IN TRANSFORMED STATE VARIABLES

51

Then the matrix differential equation was found to be

or

(4-31)

Now consider a Liapunov function which is the square of the norm of the canonical variable y:

(4-32)

Then the time derivative of the Liapunov function for an unforced system is

(4-33)

Therefore for an asymptotically stable linear system it is necessary and sufficient that

Q=-(AT+A).

(4-34)

For a linear system with real, nonrepeated roots we know that A is a diagonal matrix with the roots of the system lying 011 the diagonal. Therefore, for that case AT = A and

Q = -2A,

(4-35)

where Q is a positive definite symmetric matrix. For example for a third-order linear system it would be necessary and sufficient that.

o

~ 1,

-2AJ

(4-36)

where qi is a positive number. Therefore we find that this equation is satisfied if all the roots are negative or lie in the left-hand s-plane. This fact is, of course, common knowledge.

Furthermore, the figure of merit 1] may be readily evaluated from the definition

(4-37)

Substituting Eqs, (4-32) and (4-33), we are led to

1) = ('-),TeAT + A)Y) , yTJy

For a system 'with real nonrepeated roots we obtain

(4-38)

-- ------_. __ .. _ .. ..-lij-----------

52

SYSTEM STABILITY AND LIAPUNOV'S METHOD

[CHAP. 4

lJ(s)

(a) (b)

FIG. 4-2. Closed-loop system of Example 4-2.

A conservative estimate of 1) is, then,

1) = minimum root (- 2A.).

(4-39)

EXAMPLE 4-2. Let us investigate the system shown in Fig. 4-2. The response of the uncontrolled system is obviously a constant amplitude sine wave. We shall use Liapunov's direct method to determine a control signal u(t) which will result in a stable system response. The matrix differential equation is

(4-40)

The Liapunov function to be used is

Vex) = xTIx = xi + x~.

(4-41)

The time derivative of the Liapunov function is then 17 = 2(Xli:l + X2X2)

2(X1X2 + X2 (-Xl + u)) = 2xzU.

(4-42)

Therefore in order to assure stability we require that

(4-43)

where Kl is a constant of a magnitude to be chosen on the basis of the desired response. If an external input R(s) exists then we would use

(4-44)

Control engineers recognize this as velocity feedback, a common compensation approach.

Ex.nIPLE 4-3. Now let us consider the compensation of the more general system represented by the differential equation

x = Ax + Bu(t).

(4-45)

4-5]

KRASOVSKIl'S 'l'BBOREM ox S'l'ABILITY OF :'\'ONLINEAR SYSTEMS

53

If we choose the Liapunov function

(4-46)

then

(4--47)

In order to improve stability we can see that we will require a u(t) of the form

(4-48)

Let us examine the same problem in terms of the canonical state variables y.

The differential equation in terms of y was found to be

(4-49)

where u is a vector representing j control signals. Then the time derivative of V(y) is

(4-50)

In order to improve the stability of the system, the control vector should be

u = -Dy

(4-51)

within the physical constraints of the signals, where D is a positive definite matrix. If we can select the matrix B so that T-1 B is diagonal, then each control variable is a function of only one state, that is,

(4-52)

It is important to note that the solutions obtained in this example result in a control signal which is a function of the state variables. In order for Eqs. (4--48) or (4-51) to be a practical solution, the state vector x or y must be available. This implies the necessity of measuring all the state variables which for the majority of syst.ems is not possible. Therefore, it is usually necessary to measure the available state variables and use a computation procedure to obtain the remaining state variables. This problem will be discussed in Chapt.er 5.

4-5 Krasovskii+s t.heorern on asyrnptot.ic stability of nonlinear systellls

The previous sections of this chapter have been concerned with the use of Liapunov's direct method for linear systems. An extension of Liapunov's method by Krasovskii provides a useful method of evaluation of the stability of nonlinear systems [6]. The differential equation of the nonlinear system may be written as

x = F(x),

(4-53)

-

- ~~--

- ------

SYS'tEM STABILITY AND LIAPUNOV'S METHOD

[CHAP. 4

where F(x) is a column matrix whose elements may be nonlinear functions of the state variable x. For the nonlinear system there may be more than one equilibrium condition x = Xe. In any case, following the development of Section 3-2, the linearized formulation of the differential equation (Eq. 3-31) was obtained using the Jacobian matrix

;
j
I aft afl ... aft
i
ax! aX2 aXn
J =aF = a12 a12 (4-54)
ax aXl aX2
afn afn ... af",
ax! aX2 ax", The Liapunov function is chosen to be the quadratic form

(4-55)

where P = a symmetric positive definite matrix. Since

(4-56)

the derivative of the Liapunov function is

TT = FTpF + :i<TpF

= FTpJF + (JF)TpF = FT(PJ + JTp)F.

(4-57)

Therefore since V is positive definite, in order to determine the stability it is necessary to examine the relation

(4-58)

It is sufficient that Q be negative definite in order to prove asymptotic stability. We use Sylvester's theorem which states that all the principal minors of Q must be negative in order to determine the stability conditions. This method can best be illustrated by an example.

EX_>\_MPLE 4-4. Consider the nonlinear system shown in Fig. 4-3, where the nonlinear function is single valued. The differential equation for the system is

x = F(x) = [.fI(X)] = [ Xz ].

fz(x) -Kg(Xl) - X2

(4-59)

4-5J

KRASOVSKU'S THEOREM ON STABILITY OF NONLINEAR SYSTEMS

55

(4-60)

(4-61)

('

(4-62)

(4-63)

In order to determine the requirements for the coefficients, we will constrain - l' to be positive definite. ,"Ve note that Eq, (4---{j3) is of the form

(4-64)

FIG. 4-3. Nonlinear second-order system.

Using the form of the Liapunov function of Eq. (4-55), we obtain

v = FTpF = [JI]T [Pll P12] [it]

12 P12 pzz fz

= PlIft + 2Plzfdz + P2ZJ~,'

'where P is a positive definite symmetric matrix. Then

Since

av

aft = 2Pllir + 2p12h

aV

aiz = 2p12iI + 2pzz/z,

aiz = -1, aiz = -K ag(Xl) ah

axz aXl ax) axz

1 ail = 0

'axr '

we obtain after simplification

( ag(xr)) '2 ( ag(xr))

-2P12K ~ Ii + 2pu - 2p'22K ~ - 2p12 h12

+ (2p12 - 2Pz2)f~.

• 2 2

V = ail + 2bhh + Cf2 = FQF,

where

, ,

!

56

SYSTEM STABILITY AND LlAP"GNOV'S METHOD

[CHAP. 4

For -Q to be positive definite, it is required that

a < 0

(4-65)

and

(4-66)

ac - b2 > O.

Equation (4-65) requires that

2p K ag(XI) > 0

1Z aXl

or

(4-67)

PI2 > 0

and

when K > O. Equation (4-66) requires that

(-2P1ZK a~~ld) (2p12 - 2pzz) - (Pll - PZ2K a~~ll) - PIZY > o.

(4-68)

If g(X1) = xl, then it is necessary that

12p12Kxi(P22 - pd > (Pu - 3P22Kx¥ - pd2.

(4-69)

If pzz = fJp12, where fJ > 1, and Pll = P12, then we may write Eq. (4-69) as follows:

(4-70)

where Xl equals the magnitude of Xl at the operating point.

4-6 The variable-gradient met.hod for generating Liapuriov functions

In the previous sections of this chapter we have considered the selection of a Liapunov function in order to test for system stability. A method has been developed which provides the generation of a Liapunov function rather than a trial selection [8]. Heretofore, we have utilized Liapunov functions of a quadratic form, but this restriction is not necessary. The generation method is based on the concept of the gradient of the Liapunov function. We wish to find a Liapunov function

Vex) = V(Xll X2, ... , x".).

As usual, we take the time derivative of V remembering that V may be nonlinear in x:

T' = aY dx} + BV dX2 + ... + BV dx".

ax I dt QX2 dt ax" dt

(4-71)

4-6]

57

METHOD FOR GENERATING LIAPUNOYFUXCTIONS

Expressing this equation in terms of the gradient of V, we obtain

aV T
ax} Xl
V= aV X2 (VV)Tx = oT;., (4-72)
aX2
aV Xn
aXn where 0 = gradient of V = (VV) is a vector. The Liapunov function is obtained by evaluating the line integral of 0 along x as follows:

v = fox o· dx.

The simplest path of integration is that given by

, . .-- ...

(4-73)

where On equals the component of 0 in the Xn direction. Now for a unique scalar function F to be obtained by line integration of the vector 0 = VV, the curl of the gradient must equal zero [9]. That is, it is required that

v X 0 = V X (VV) = o.

(4-74)

Therefore for a third-order system it is necessary that

a02 80}

-=-,

8xl 8x2

(4-75)

For a second-order system the curl equation provides the following relation:

(4-76)

In general, the curl equations may be written as

(4-77)

Examining this procedure we notice that the problem of determining a Liapunov function which satisfies Liapunov's theorem has been transformed into the

58

SYSTEM STABILITY AND LIAPUNOV'S METHOD

[CHAP. 4

problem of determining the gradient of V such that the curl of the gradient of V is equal to zero. Furthermore, the V and V must prove or disprove stability, that is, satisfy Liapunov's theorem.

To obtain the gradient of V, we shall assume an arbitrary column vector (VV) = 0 whose coefficients may be functions of the state variables, that is,

l (Cl:llXl + Cl:12X2 + Cl:lnXn) j

s = (Cl:2tX1 + Cl:22X2 + Cl:2~xn) •

(Cl:nlXl + Cl:n2X2 + . .. 2xn)

(4-78)

The procedure used to evaluate the Liapunov function is as follows: (1) from s determine "(" using Eq. (4-72);

(2) constrain T' to be at least semi-definite, that is, possess the same sign throughout the state space except perhaps at isolated points at which it may be zero;

(3) use the curl equations to determine the remaining unknown coefficients

of 0;

(4) recheck V since step 3 may have altered it; (5) determine V by integrating as in Eq. (4-73).

Now let us use this procedure to investigate the stability of a nonlinear system.

EXAMPLE 4-5. Consider the sample problem examined in the preceding section in Example 4-4 using Krasovskii's method. The system block diagram is shown in Fig. 4-3. The system differential equation is

• [ (X2) 1 [it (X)]

x = (-Kg(Xl) _ X2) = hex) ,

where Kg(Xl) = xt. We select an arbitrary 0 of the form

(4-79)

(4-80)

From Eq. (4-54) we obtain

V = oT X = [(<XlIXl + <X12X2)]T [ X2 J

(<X21Xl + 2X2) (-xf - X2)

= XrX2(Cl:ll - <X21 - 2xD + X~(<X12 - 2) - <X21Xi.

(4-81)

If the system is asymptotically stable, there are a large number of Liapunov functions that could be used. To constrain V in order to prove stability we shall

4-7]

59

STABILITY REGIONS FOR NONLINEAR SYSTEMS

cause V to be at least negative semi-definite, Let us choose the coefficient of the XIX2 term to be equal to zero and the coefficients of the x~ and xi terms to have zero or negative coefficients. Therefore we would set

o < a12 < 2,

and

(4-82)

In this case we will choose Ct:lZ = 1 and determine the required value of Ct:21 from the curl equation. Thus we obtain

{j = [(Ct:ZIXl + 2x1 + X2)]. (azlxl + 2xz)

(4-83)

Using the curl relat-ion for the second-order system, Eq. (4-76), we find that

or

(4-84)

Therefore we acquire the equation for the gradient as follows:

(VV) = s = [(2Xr + Xl + .<:z)].

(Xl + 2X2)

We note that }' is negative definite and it only remains to show that Y is positive definite in order to prove stability. Int.egrating from the origin to the arbitrary point in the state space x, we obtain

(4-85)

(4-86)

We have determined that V is positive definite for all points in x, and therefore the system is asymptotically stable.

4-7 Stability regions for nonlinear systems

It. must be emphasized that for nonlinear systems, if the Liapunov test fails, there is no certainty of system instability. The Liapunov method simply indicates asymptotic stability when the requirements are satisfied. If a test fails with a chosen Liapunov function, the analyst must choose another Liapunov function and repeat the test. Furthermore, for nonlinear systems, the Liapunov

SYSTEM STABXLITY A!\'D LIAPUNOY'S METHOD

60

(CHAP. 4

method will often provide the requirements for stability for a region of the state space. However, even though the test fails outside the specified region in the space the system may be stable in that area of the space. The idea of investigating stability and determining the regions of stability is a very useful concept.

In order to illustrate this concept of stability regions let. us examine an illustrative example by using several Liapunov functions.

Ex.nlPLE 4-6. A simple example representing a nonlinear system may be stated in the form of the following nonlinear differential equations:

(4-87) (4-88)

It. is interesting to note that these equations might represent two colonies of animals of different species. The population of the colonies are Xl and X2 and the interaction of the colonies results in the second term in each equation. These equations apply to a case where, if there were no interaction, the populations would decay with time. The effect of the interaction is to decrease the rate of decay, and, in fact, if the product of the two populations is large enough, the colonies grow in size [11]. Since the populations must be positive integers, the first quadrant. of the state plane is of interest, However, for interest. we shall examine the solution on the complete state plane.

'Ve note that there are two equilibrium points, that is, when x = 0 and when

xT = {1,2}.

First let us evaluate the simplest Liapunov function, a quadratic function which is always positive definite:

(4--89)

Then

(4-90)

Substituting Eqs. (4-87) and (4-88) into Eq. (4-90), we obtain 11- = 2XI( -2.Y1 + /lX2) + 2X2( -X2 + XIX2) = 2Xr(X2 - 2) + 2X~(Xl - 1).

(4-91)

For asymptotic stability, we require that

T;' = 2Xr(X2 - 2) + 2X~(Xl - 1) < o.

The limiting condition for stability is

Xr(X2 - 2) + X~(XI - 1) = O.

(4-92)

(4-93)

Investigation of Eq. (4-93) shows that there are curves dividing the stable region from t.he region where T-;' > O.

4-7J

61

STABILITY REGIONS FOR :t\'ONLINEAR SYSTEMS

The dividing line in the vicinity of the origin in the first quadrant is shown in Fig. 4-4. Furthermore, Eq. (4-92) can be tested in the third quadrant by setting Xl = -ixil and Xz = -lx21. Then we obtain

(4-94)

which is always less than zero for all values of Xl and Xz. Hence the system is stable in the third quadrant. Information about the second and fourth quadrant is as easily obtained from Eq. (4-92). However, LaSalle has shown that the region within the curve V = constant, tangent to the V = 0 curve in quadrant one, contains stable solutions [10j. In this case V = xf +- x~ = constant is a circle. Therefore, from this first selection of a Liapunov function we have the information summarized on Fig. 4-4.

X2
Possibly 3 I 1:>0 Possibly
\
unstable \ unstable
\
T>=O
,
,
St.abl~
,
Xl
1 2 3 Stable ----'"'"'~ quadrant

FIG. 4-4. Boundaries of assuredly stable solutions.

In order to further explore the regions of possible instability, we shall use the gradient method to generate a Liapunov function. Since the gradient method allows the analyst to have an influence on the coefficients of V, we will attempt to obtain a ""(~ which is easily tested. Let

s = [allXl ~ a1zX2] , 0:12X1 T 2X2

(4-95)

where 0:12 has been equated to 0:21 in order to satisfy the curl equation. Then using Eq. (4-72), we obtain

T:' = bIiI +- 0212

= (allX1 +- a12X2)( -2Xl +- X1X2) +- (a12Xl +- 2X2)( -X2 -1- XIX2)

= xi( -2a11) +- XIX2( -3ad - 2x~ +- (2 +- adxlx~ +- (all + adxIx2' (4-96)

Choosing 0:11 = 1 and 0:12 = -2, ,ve have

• 2, 2 2

V = -2XI +- 6XIX2 - 2X2 - XIX2,

(4-97)

---- --_ ------

~~

SYSTEM STABILITY AND LIAPUNOV'S METHOD

62

[CHAP. 4

which is not always negative. Solving for "IT, we get

t' ("'I ("'2 xi 2

"IT = Jo (I' dx = Jo 01 dXI + JO 02 dxz = 2 - 2XIXZ + X2. (4-98)

Unfortunately, F is not always positive. However, information about stability can definitely be obtained from Eqs. (4-97) and (4-98). For example, for the second quadrant we set Xl = -IXII and X2 = +Ixzl. Then Eq. (4-97) becomes

" 2 Z 2

V = -2lxll - 61xlllxzi - 21x21 - IXII Ixz!, (4-99)

which is always negative. Equation (4-98) is then

V = IXdl2 -+- 21xlllxzI + IXzI2,

(4-100)

which is always positive. Therefore the system is always stable in the second quadrant. In the fourth quadrant, we let Xl = I x 11 and x z = -IX21 and obtain from Eq. (4-98)

Ix 12

17= + + 21x111xzl + ixzlz, (4-101)

which is always positive. Then Eq. (4-97) becomes

X2

~ Unstable \/ region

\ -/J--i~;___( e~~J~~I~~

:'" \" point

I I

,+

---i----t---+---+---+---+- XI.

/!

3

..

. ~ 12

V = -2Ixd- - 61xlllx21 - 21x2 ,

(4-102)

which is always negative. Therefore by using Liapunov's method we have found that the second, third, arid fourth quadrants are stable and part of the first quadrant. is assuredly stable. This is in fact the case and the state plane solution obtained by phase plane isocline methods as shown in Fig. 4-5.

Fro. 4-5. Phase plane trajectories,

4-8 Nnrnerteal applications of Liapunov's direct met.hod

One of the theses of this book is the idea that new methods of analysis should be developed for utilization of the new computational aids. In fact, Liapunov's method is such a method, While not. new, it has found maturity and interest only in the last few years. It is clear to the reader, after studying the preceding sections, that Liapunov's method, being an algebraic technique, is well suited for automatic numerical calculation. As discussed in the preceding section, the analyst desires to determine the regions of stability in the state space. For a nonlinear system, this process of evaluating F and 17" throughout the state space often becomes a tedious algebraic process. Therefore automatic digital computer programs for testing the Liapunov functions would be of great value. Furthermore, if a program could be written which generates the Liapunov

PROBLEMS 63

function, then the analyst would have a complete Liapunov stability test program. In fact, this is the subject of research at this time [12].

The computational method utilized by Rodden determines the minimum region of stability by using the technique of tangency discussed in Section 4-7. That is, it has been shown that if a region in the state space can be found within which 17 > 0 and J;'" < 0, except at the origin, and within the region V is always less than some maximum value, then any solution is asymptotically stable [10]. Therefore as shown in Fig. 4-4, solutions within the circle tangent to the -r:" = 0 curve are stable. The curve V = constant is a circle for Eq. (4-89).

Therefore numerical analysis t.echniques are utilized to generate the Liapunov function and then to test for regions of stability. The formulation of the Liapunov function by computer techniques could logically follow the steps outlined for the variable gradient method in Section 4-6. Alternatively, one might use the method of Ingwerson to generate the Liapunov function [13]. Then, in order to test. for stability regions a step by step numerical procedure to find the point of tangency of the V and V = 0 surfaces is programmed for the computer. First a search is performed to find the surface V = 0 closest to the origin. Then the point of tangency of a surface V = constant and the V = 0 surface is located. Finally the region of the surface V = constant is evaluated. Within this surface lies a region of stability. However, it is not known if the region outside this surface contains unstable solutions.

Rodden has formulated a procedure, coded it in Fortran, and evaluated the procedure on an IB:NI 7090 computer for second- and third-order systems. While the results were salutary, the convergence to the stability boundaries was slow. However, it does appear that the reader might consider the potential of computer evaluation of stability regions in state space using Liapunov's method.

PROBLEl\IS

4-1. A linear control system is shown in Fig. P4-1. (a) Using Liapunov's direct method, determine the limiting value of gain K for a stable system. (b) Determine the figure of merit 1) when K is equal to one-half the limiting value.

e(a)

FIGURE P4-1

4-2. A simple position servomechanism is shown in Fig. P4-2. The system consists of a solenoid of resistance R and inductance L which produces a force 1m proportional to the current in the solenoid. The mass .ill to be positioned is mounted on a spring of stiffness k. The feedback voltage proportional to position is Cj = klY. The

64

SYSTEM STABILITY AND LIAPUNOY'S METHOD

[CHAP. 4

amplifier has a gain of k2. It is required that the deflection y be proportional to the input voltage ret). The parameters of the system, measured in a consistent set of units, are R = 10, L = 0.1, iff = 10, k = 5000, k3 = 1000, kl = 10, and k2 = 50. Using Liapunov's method, (a) determine if the system is stable, and (b) determine the effect of a change of kz on the stability.

FIGURE P4-2

-G-:tIlI'!\ _l

1,"[ ----

- k

% % %

4-3. A nonlinear control system is shown in Fig. P4-3(a). The nonlinear element is represented by the characteristic shown in Fig. P4-3(b). Determine the conditions necessary for system stability.

Nonlinear element

(a)

FIGURE P4-3

(b)

4-4. A nonlinear system may be represented by the nonlinear differential equation

2 ( )

d Y dy dy -

&2 + 1 + E dt at + y = O.

Determine the region of stability of the system and the limitations On E, if any, for stability.

4-5. Reconsider Example 4-6 when the interaction of the two colonies is repre-

sented by the set of equations

Determine the region of stability of the system.

4-6. A mass on a nonlinear spring may be represented by the equation

2

dd; + b ~~ + ky2 = 0,

where b is the friction constant and k is the spring constant. Determine the conditions necessary for system stability and the region of stability.

BIBLIOGRAPHY 65

4-7. A nonlinear system is descri bed by the following set of simultaneous equations:

Xz = 3X1 - xf - Xz.

Find the equilibrium points and determine whether each point is a stable or unstable equilibrium point.

BIBLIOGRAPHY

1. A. A. LrAPuNov, Probleme general de la stabilil6 du mouoemeni, Princeton University Press, 1907 (French translation of 1892 Russian paper).

2. \V. HAHN, Theory and Application of Liapunov's Direct Melhod, Prentice-Hall, Englewood Cliffs, N. ,J., 1963.

3. R. KAL)'fAN and J. BERTRAM, "Control System Analysis and Design via the Second Method of Liapunov," Journal of Basic Enqr., Trans. A.8.M.E., June 1960, pp. 371-393.

4. F. E. HaHN, Elementary Moiri» Algebra, Macmillan, New York, 1958.

5. R. BELL:l.1AK, Introduction to },fatrix Analysis, McGraw-Hill, New York, 1960, p.ll1.

6. N. N. KRASOVSKII, "On the Global Stability of a System of Nonlinear Differential Eq ua tions," Prikladnaja ill atematika i M cJ(anika, Vol. 18, pp. 735-737, 1954. 7. 1. G. MALKIN, Teoriya Ustoichivocli Duizhema (Theory oj Stability oj Motion), Moscow, 1952; English Translation: Commerce Dept., AEC Trans. 3352, Washington,

D. C., 1959.

8. D. G. SCHULTZ and J. E. GIBSON, "The Variable Gradient Method for Generating Liapunov Functions," AI EE Trons., Part II, .pp. 203-210, September 1962.

9. 11. R. SPIEGEL, Vector .1.nalysis, Schaum, New York, 1959, p. 83.

10. J. LASALLE and S. LEFSCHETZ, Stability by Liapunov's Direct Method, Academic Press, New York, 1961.

11. \Y. J. CUN~INGHAM, "The Concept of Stability," American Scientist, Vol. 51, pp. 425-436, December 1963.

12. J. J. RODDEN, "Numerical Applications of Liapunov Stability Theory," Proc. oj J.A.C.C., June 1964, pp. 261-268.

13. D. R. IKGWERSOi'" "A Modified Liapunov Method for Nonlinear Stability Analysis," I RE Trans. on Automatic Control, Vol. AC-6, pp. 199-210, May 1961.

14. A. I. LUR'E, "Some Nonlinear Problems in the Theory of Automatic Control," Gostekhizdat, 1951 (Russian); English Translation: Her Majesty's Stationery, London, 1957.

15. A. M. LETOV, Stability in Nonlinear Control Systems, Princeton University Press, 1961, English Translation; 1955, Russian Edition .

....,..

CHAPTER FIVE

Controllability and Observability of Linear Systems

5-1 Introduction

The concepts of controllability and observability of linear systems were introduced by Kalman and playa central role in modern control theory [1]. These concepts are particularly useful in the study of optimal control theory. Furthermore, in the interconnection of multivariable plants and controllers, the designer must ascertain if the plant is controllable. Essentially, a plant is completely controllable if every desired transition of the plant's state can be effected in finite time by the unconstrained control inputs. Although most, if not all, physical systems are controllable, the linearized model of the system may not be controllable [2].

A plant is said to be completely observable if every transition of the plant's state eventually affects some of the plant's outputs. Determining the observability of a multivariable system allows the designer to choose the measurable outputs of the system model. Furthermore, if an unobservable plant is connected in a feedback loop, then instability of the unobservable states will not be detected in the output. Also, if the plant is not completely controllable, then unless the unstable states are contained in the subset of controllable states, they cannot be affected by the control signals. In any case, the feedback loop 'will not stabilize the system if it is designed assuming complete observability and controllability.

In contrast to the theoretical concepts of the first sections of this chapter, the final section is concerned with the practical problem of measuring the states of a linear system. In all the previous discussion, we have assumed that the state of the system x is available for measurement. In fact, many optimal control systems are designed on this basis. Actually, we realize that it is exceedingly difficult to measure all the state variables of a system. For example, for a servomechanism, we usually are only able to measure the position and velocity, that is, Xl and X2. If the differential equation representing the servomechanism was

66

....,..

5-2]

67

CONTROLLABILITY AND OBSERVABILITY OF LIXEAR SYSTEMS

fourth order, we would require the measurement of four state variables. The question is then, how do we construct an estimate of the remaining unmeasured state variables based on the measurement of the available state variables? It is the purpose of the final section to show how the available system inputs and outputs may be used to construct an estimate of the system state vector.

5-2 Controllability and observability of linear systems

The dual concepts of controllability and observability are based on the concept of the modes of oscillation of a lineal' system. For a linear time-invariant system with distinct characteristic roots, the response of the unforced system is the superposition of all the modes of the system. The question of controllability of a given mode is concerned with the existence of the coupling of the mode to the control input. The question of observability is concerned with the existence of the coupling of a given mode to the output variables. Intuitively, we realize that we cannot control a mode of the system response if we cannot affect it due to a Jack of coupling between the input and the mode. Also, we realize, we cannot observe the response of a mode if it is not coupled to any output variable.

We will consider the lineal' time-invariant system which is represented by the vector differential equation

x = Ax + Bu.

(5-1)

In general, the system output v is

v = Cx + Du,

(5-2)

where C and D are matrices which relate the output vector to the state vector and the control vector.

A plant is said to be completely state controllable if, knowing the matrices A and B and the state x(to) at time to, we can construct some control u over thc time interval (If - 10) which will bring the state of the system to a final state X(tf). The phrase "some control" indicates that no constraints arc placed on the amplitude or energy of u. In other words, a system is controllable if it can be transferred from its initial state to a final state in a finite time. For the time-invariant plant, the question of controllability and observability does not depend upon the initial time 10 and we may t.ake to = O.

We will assume that the system possesses distinct. modes of response due to distinct characteristic roots. This is a useful assumption since it allows us to use the uncoupling transformation discussed in Chapter 3. Even with repeated roots we may assume that they may be separated by a negligible amount in order to assume distinct roots. This allows us to use the Jordan matrix and to uncouple each response mode. We will utilize the linear transformation

x = Ty.

(6-3)

68

CONTROLLABILITY AN» OBSERVABILITY OF LINEAH SYSTEMS

[CHAP. 5

We found in Section 3-3 that the differential equation for the transformed variable y is

(.5-4)

The diagonal matrix called the Jordan matrix is written as

(5-5)

Therefore the transformed differential equation may be written as

(5-6)

Since the Jordan matrix is diagonal, each mode or transformed state variable Yi is uncoupled from the other state variables Yj. For example, the equation for the first state variable is

(5-7)

where G = T-1B. Obviously, in order to control this mode at least one element of the first row of the G matrix must not be zero. Therefore, in general, a linear time-invariant system is controllable if each row of G = T-1B contains at least one nonzero element. This requirement is a necessary and sufficient condition.

Now we shall determine the necessary and sufficient conditions for a linear time-invariant system to be observable. An unforced plant is said to be observable if knowledge of the matrices characterizing the system and the unforced system response over a finite time interval (t, - to) is sufficient to determine uniquely the initial state of the system; that is, a system is not observable if it possesses dynamic modes of response which cannot be ascertained from measurement of the available outputs.

The equation for the output vector v in terms of the transformed uncoupled state variables is obtained by substituting Eq. (5-3) into Eq. (5-2). Then we have

v = CTy + Du = Hy + Du,

(5-8)

where H = CT. For the unforced system, u = 0, and we may write

rVl]

Y2

V = [hI h2 h3••· hnl y~ ,

(5-9)

where hi equals the it.h column of the matrix H. Hence

(5-10)

5-2]

69

CONTROLLABILITY A~D OBSERVABlLITY OF LINEAR SYSTBiliS

For example, for a third-order system with three output variables, we have

[VI] [hll h12 hiS] [YI]

V'l = hZl h22 h23 Yz = hlYl + h2yz + h3Y3.

1!3 h31 h3Z h33 Y3

(5-11)

Therefore in order to determine the state YI it. is necessary that at least one element of the first column of H be nonzero. In general, for a system to be observable, it is necessary and sufficient for at least one element of each column of H = CT to be nonzero.

To clearly illustrate these concepts, let us study a physical system which we know to be controllable and observable.

-2

FIG. 5-1. Open-loop second-order system.

gXAlI.IPL}~ 5-1. Consider the second-order system shown in Fig. 5-1 and investigated in several preceding sections. The output vector is simply the state vector for this physical system. Then the equations describing the behavior of the system arc

. [0 1] _ [0]

x = x + u,

-2 -3 1

v = [~ ~] x = h.

(5-12)

(5-13)

The canonical transformation of this system was discussed in Section 3-3. The transformation matrix was found in Eq. (3-Ei6) as

[ 1 -1]

x = y = Ty.

-1 2

(5--14)

Then follo-wing Section 3-3, the transformed equations describing the uncoupled state variables arc

(.5-15)

iO

CONTROLLABILITY AND onSERVABILITY OF LINEAR SYSTE!I'lS

[CHAP. 5

Therefore, as we expected, the system is controllable since all the rows of T-1B have nonzero elements. The equation for the output is

v = CTy

ITy = Ty.

(5-16)

Hence the system is observable, since all the columns of CT possess at least one nonzero element. The canonical variable flow graph is shown in Fig. 5-2. The input is connected to each canonical state variable, and therefore each variable is controllable. Furthermore, the canonical state variables are connected to the output variables, and therefore the system is observable.

FIG. 5-2. Flow graph of canonical state variables.

It is in the interconnection of muitivariable systems that the analyst must be particularly concerned with testing for controllability and observability. For example, consider the cascaded open-loop system shown in Fig. 5-3. The system is a realistic interconnected system and illustrates the point well. That is, the system appears to be controllable and observable, but this fact must be ascertained in each case.

U_(-t)--~I~I~'_Sa_= __ :_·!_~~~--~~LIS_b_= __ s s_l~ v~~t)

FIG. 5-3. Open-loop cascaded system.

EXAMPLE 5-2. The equations of the separate systems may be written as

S_a = 1 - \frac{1}{s + 2}   and   S_b = 1 + \frac{1}{s - 1}.   (5-17)

Then the signal flow diagram shown in Fig. 5-4 may be constructed from these equations. The state variables are defined as the outputs of the integrators.



FIG. 5-4. Signal flow graph of cascade system.

Therefore the state differential equation is

\dot{x} = \begin{bmatrix} 1 & -1 \\ 0 & -2 \end{bmatrix} x + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u.   (5-18)

The equation for the output is

v = [1, \; -1] x = Cx.   (5-19)

The roots of the characteristic equation are 1 and -2. Therefore we have

\dot{y} = \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix} y + T^{-1}Bu.   (5-20)

Also the transformation matrix and its inverse are

T = \begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{3} \\ 0 & 1 \end{bmatrix}   and   T^{-1} = \begin{bmatrix} 3 & -1 \\ 0 & 1 \end{bmatrix}.   (5-21)

Completing the necessary multiplication of matrices, we obtain

T^{-1}B = \begin{bmatrix} 2 \\ 1 \end{bmatrix}   (5-22)

and

v = CTy = [\tfrac{1}{3}, \; -\tfrac{2}{3}] y.   (5-23)

Since T^{-1}B contains all nonzero elements, the system is controllable. Furthermore, since CT contains all nonzero elements, the system is observable. This situation is also clearly illustrated by the flow graph shown in Fig. 5-5.


FIG. 5-5. Canonical flow graph of cascaded system.

Examining Fig. 5-5, and the differential equation in terms of the canonical state variables, we note that each state is uncoupled and may be classified as either controllable or not controllable, and either observable or not observable. In fact, Gilbert has shown that a system S may always be partitioned into four possible subsystems, as shown in Fig. 5-6 [2]. The four possible subsystems are (1) a system S_CN which is controllable and not observable, (2) a system S_CO which is controllable and observable, (3) a system S_NO which is not controllable and is observable, and (4) a system S_NN which is not controllable and is not observable.


FIG. 5-6. Partitioned system.

FIG. 5-7. Fourth-order partitioned system.

For example, consider four first-order interconnected systems represented by the differential equation in canonical variables as follows:

\dot{y} = \begin{bmatrix} -\lambda_1 & 0 & 0 & 0 \\ 0 & -\lambda_2 & 0 & 0 \\ 0 & 0 & -\lambda_3 & 0 \\ 0 & 0 & 0 & -\lambda_4 \end{bmatrix} y + \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} u.   (5-24)

The equation for the output is

v = [0, \; 1, \; 1, \; 0] y.   (5-25)



This system is shown in Fig. 5-7 in terms of canonical variables. In this case

S_CN = \frac{1}{s + \lambda_1},   S_CO = \frac{1}{s + \lambda_2},   S_NO = \frac{1}{s + \lambda_3},   and   S_NN = \frac{1}{s + \lambda_4}.   (5-26)

Furthermore, we note that in general

u = u_CN = u_CO,   v = v_CO + v_NO.   (5-27)

Also the order of the total system is the sum of the orders of each subsystem; that is,

n = n_CN + n_CO + n_NO + n_NN.   (5-28)
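When the system is already in canonical variables, Gilbert's partition can be read off mechanically. A brief sketch (assuming NumPy; the function name is ours) classifies each canonical state by its row of the input matrix and its column of the output matrix, reproducing Fig. 5-7 for the system of Eqs. (5-24) and (5-25).

```python
import numpy as np

def gilbert_partition(G, H, tol=1e-9):
    """Classify each canonical state as CN, CO, NO, or NN."""
    labels = []
    for i in range(G.shape[0]):
        c = bool(np.any(np.abs(G[i, :]) > tol))   # state i driven by the input?
        o = bool(np.any(np.abs(H[:, i]) > tol))   # state i visible at the output?
        labels.append({(True, False): "CN", (True, True): "CO",
                       (False, True): "NO", (False, False): "NN"}[(c, o)])
    return labels

# The fourth-order example of Eqs. (5-24) and (5-25):
G = np.array([[1.0], [1.0], [0.0], [0.0]])
H = np.array([[0.0, 1.0, 1.0, 0.0]])
print(gilbert_partition(G, H))   # ['CN', 'CO', 'NO', 'NN'], matching Fig. 5-7
```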

Let us investigate the case of interconnected systems and state the necessary and sufficient conditions for the interconnected system to be controllable (or observable). First consider the case of two parallel systems as shown in Fig. 5-8. In terms of canonical state variables, the interconnected system may be written as

\dot{y} = \begin{bmatrix} \Lambda_a & 0 \\ 0 & \Lambda_b \end{bmatrix} y + \begin{bmatrix} G_a \\ G_b \end{bmatrix} u,   v = [H_a \;\; H_b] y.

Then the matrices of concern are

G = \begin{bmatrix} G_a \\ G_b \end{bmatrix}   and   H = [H_a \;\; H_b].   (5-29)

Examining these matrices, we find that the necessary and sufficient condition that S = S_a + S_b be controllable (observable) is that both S_a and S_b be controllable (observable).


FIG. 5-8. Interconnected parallel systems.

FIG. 5-9. Cascaded systems.

Now consider the cascaded system shown in Fig. 5-9. A necessary, but not sufficient, condition for the controllability (observability) of S = S_aS_b is that both S_a and S_b be controllable (observable). Furthermore, if S is unobservable, then S_a is unobservable; and if S is uncontrollable, then S_b is uncontrollable.



Finally, consider the feedback system shown in Fig. 5-10. A necessary and sufficient condition that S be controllable is that S_aS_b be controllable. Also, a necessary and sufficient condition that S be observable is that S_bS_a be observable. A necessary but insufficient condition that S be controllable (observable) is that both S_a and S_b be controllable (observable). If S_a and S_b are both controllable, any uncontrollable states of S are uncontrollable states of S_aS_b and originate in S_b. Also, if S_a and S_b are both observable, any unobservable states of S are unobservable states of S_bS_a and originate in S_b.

FIG. 5-10. Feedback system.

The most important necessary and sufficient condition for the feedback system is that stated in terms of the controllability of S_aS_b and the observability of S_bS_a. We note that closed-loop controllability and observability may be determined from the open-loop cascade systems. Therefore we do not have to examine the more complex closed-loop equations.
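These interconnection rules are readily checked numerically. The sketch below (assuming NumPy and strictly proper subsystems, i.e. no direct feedthrough; the function names are ours) forms the state equations of the cascade S = S_aS_b and applies the standard rank tests, which agree with the diagonal-form tests of this section.

```python
import numpy as np

def series(Aa, Ba, Ca, Ab, Bb, Cb):
    """State equations of the cascade u -> Sa -> Sb -> v."""
    n, m = Aa.shape[0], Ab.shape[0]
    A = np.block([[Aa, np.zeros((n, m))], [Bb @ Ca, Ab]])
    B = np.vstack([Ba, np.zeros((m, Ba.shape[1]))])
    C = np.hstack([np.zeros((Cb.shape[0], n)), Cb])
    return A, B, C

def controllable(A, B):
    n = A.shape[0]
    P = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(P) == n

def observable(A, C):
    return controllable(A.T, C.T)   # by duality

# Two first-order lags in cascade, 1/(s+1) followed by 1/(s+2):
Aa = np.array([[-1.0]]); Ba = np.array([[1.0]]); Ca = np.array([[1.0]])
Ab = np.array([[-2.0]]); Bb = np.array([[1.0]]); Cb = np.array([[1.0]])
A, B, C = series(Aa, Ba, Ca, Ab, Bb, Cb)
print(controllable(A, B), observable(A, C))   # True True
```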

Let us consider an example which illustrates that while each subsystem may be individually controlled and observed, this may not be possible for the interconnected system.

EXAMPLE 5-3. The equations representing S_a are

\dot{y}_{1a} = -y_{1a} + u_{1a}   and   v_{1a} = v_{2a} = y_{1a}.   (5-30)

This subsystem is obviously controllable and observable. The equations representing S_b are

\dot{y}_{1b} = -2y_{1b} + u_{1b} - u_{2b},   v_{1b} = y_{1b}.   (5-31)

Again, this subsystem is obviously controllable and observable. Now, let us cascade the two subsystems by letting u_{1b} = v_{1a} and u_{2b} = v_{2a}. The total interconnected flow graph is shown in Fig. 5-11. The differential equation for the interconnected system is

\dot{y} = \begin{bmatrix} \dot{y}_{1a} \\ \dot{y}_{1b} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix} y + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u_{1a}.   (5-32)


FIG. 5-11. Interconnected cascaded system example.

Thus the interconnected system is uncontrollable, since the second row of T^{-1}B is zero. The equation for the output is

v_{1b} = [0, \; 1] y = CTy.   (5-33)

Hence the system is also unobservable, since the first column of CT is zero. In this system, the input u_{1a} does not reach the state y_{1b} in order to effect control, and the variable y_{1a} never reaches the output for observation. Of course, this is a special case, since

u_{1b} - u_{2b} = v_{1a} - v_{2a} = 0.   (5-34)

5-3 Observing the state of a linear system

In contrast to the theoretical concepts of the preceding section, this section is concerned with the practical problem of measuring the state of a linear system. We wish to construct the state vector of a system from observations of the system inputs and outputs. In other words, if a system is observable, we wish to observe the state of the system. We will construct an observer which supplies the state vector at the expense of adding poles to the over-all system.

In modern control theory, especially in the field of optimum control, the state vector is assumed to be available for use in the calculation of optimum control signals. In fact, this is very seldom the practical case, and measurement of only a few of the state variables is usually practical. However, a satisfactory estimate of the state vector can be obtained at the expense of suitable computation. Kalman has investigated this problem for systems whose output signals are corrupted by measurement noise [1, 4]. In this section, we shall develop the observation theory for only the noiseless case, following the recent work of Luenberger [5].

We shall endeavor to construct an estimate of the state vector using the available system inputs and outputs. The device which constructs the state vector will be called an observer. The observer itself is a time-invariant linear system and is driven by the inputs and outputs of the system it observes. In order to obtain the equations defining the observer, we will require that the observer construct a linear transformation of the state vector. Then finally we shall let the transformation matrix be equal to the identity matrix in order to obtain the state vector.


Initially consider the unforced system S_1 represented by the matrix equation

\dot{x} = Ax.   (5-35)

A second system S_2 is driven by x and is represented by the differential equation

\dot{z} = Dz + Cx,   (5-36)

where it is assumed that the state variables z and x are related by a linear transformation. That is,

z = Tx.

(5-37)

Substituting Eq. (5-37) into Eq. (5-36), we have

T\dot{x} = DTx + Cx.   (5-38)

Multiplying Eq. (5-35) by the matrix T, we obtain

T\dot{x} = TAx.   (5-39)

Equations (5-38) and (5-39) imply that

TA - DT = C.   (5-40)

We shall require that A and D do not have common characteristic roots, so that T has a unique solution. Then we may obtain the solution for the state of the observer. Subtracting T\dot{x} from both sides of Eq. (5-36), we have

\dot{z} - T\dot{x} = Dz + Cx - T\dot{x} = Dz - TAx + Cx.   (5-41)

Substituting Eq. (5-40) into Eq. (5-41), we get

\dot{z} - T\dot{x} = D(z - Tx),   (5-42)

which is a familiar first-order vector differential equation in the variable z - Tx. As we found in Chapter 2, the solution of this differential equation is

z - Tx = e^{Dt}[z(0) - Tx(0)],   (5-43)

or

z = Tx + \phi(t)[z(0) - Tx(0)],   (5-44)

where \phi(t) = e^{Dt}. Therefore, in order to obtain the system state vector, we require that

z(0) = Tx(0).   (5-45)

Then we have the desired result

z = Tx.   (5-46)



FIG. 5-12. First-order system and observer.


More realistically, we set z(0) as close as possible to Tx(0) and require that \phi(t) approach zero as rapidly as possible.

For example, consider the system S_1 being observed by the system S_2 as shown in Fig. 5-12. We note that

x(t) = x(0)e^{\lambda t}   (5-47)

and that the signal ax(t) = ax(0)e^{\lambda t} drives S_2. Using Laplace transform algebra, we find that the state of system S_2 is

z(t) = \left(\frac{a}{\lambda - \mu}\right)x(0)e^{\lambda t} + e^{\mu t}\left[z(0) - \left(\frac{a}{\lambda - \mu}\right)x(0)\right].   (5-48)

Obviously, in order to reconstruct x(t), we need to make

z(0) = \left(\frac{a}{\lambda - \mu}\right)x(0)

as closely as possible. Furthermore, we would make \mu a large negative number. Then we have

z(t) = \left(\frac{a}{\lambda - \mu}\right)x(t).   (5-49)

Assuming the unique solution for T is obtainable, the whole development of the required equations for the observer can be carried out. The system with an input u is represented by the equation

\dot{x} = Ax + Bu.   (5-50)

A measure of the input vector u will need to be added to the observer, resulting in the equation

\dot{z} = Dz + Cx + Gu.   (5-51)

Since Eq. (5-40) is still satisfied, following the same procedure as for the unforced system, we have

\dot{z} - T\dot{x} = D(z - Tx) + (G - TB)u.   (5-52)

Therefore we will require that G = TB, so that the solution of Eq. (5-52) is the same as that obtained for the unforced system, that is, Eq. (5-43). In other words, the observer is driven by the signal TBu, which is the transformation of the signal driving the original system S_1. Since it is assumed that G = TB is satisfied in the design of the observer, the remaining design steps are involved with determining the D and T matrices.
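In practice, the T matrix can be found by treating Eq. (5-40) as a Sylvester equation. A minimal sketch, assuming SciPy is available (scipy.linalg.solve_sylvester solves AX + XB = Q, so TA - DT = C is first rewritten as (-D)T + TA = C; the function name is ours):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def design_observer(A, B, C, D):
    """Given a choice of D sharing no characteristic roots with A,
    solve TA - DT = C for T (Eq. 5-40) and form G = TB."""
    T = solve_sylvester(-D, A, C)   # (-D)T + TA = C  <=>  TA - DT = C
    G = T @ B
    return T, G
```

The no-common-roots requirement stated above is exactly the condition under which this Sylvester equation has a unique solution.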



The state vector of the observer is related to the state vector of the original system by a linear transformation. In order to determine the state vector of S_1, we must solve the equation

x = T^{-1}z   (5-53)

after the observer has obtained z. The solution of Eq. (5-53) depends upon the existence of the inverse of the matrix T. In order to ensure the existence of the inverse matrix, we might set T equal to I, the identity matrix. Then Eq. (5-40) becomes

A - D = C.

(5-54)

Thus the solution of this equation prescribing the observer is simply

D = A - C.   (5-55)


Examining Eq. (5-53), we note that the observer constructs the entire system state vector. However, several of the system's state variables are available by direct measurement, and the purpose of the observer is simply to construct the unmeasurable state variables. Therefore there is a certain degree of redundancy in the observer system. This redundancy may be eliminated by reducing the dynamic order of the observer. It has been shown that if the system S_1 is a completely observable nth-order system with m independent outputs, then an observer S_2 may be constructed using only n - m dynamic elements [5]. It must be noted, however, that in selecting a D matrix of order n - m, the T matrix may not be equated to the identity matrix. This fact can be ascertained by examining Eq. (5-40).

To illustrate the construction of an observer, let us investigate an example of a second-order system.

U(s) → [1/(s + 1)] → X_2(s) → [1/(s + 2)] → X_1(s)

FIG. 5-13. Open-loop second-order system.

EXAMPLE 5-4. Consider the second-order system shown in Fig. 5-13. We shall construct an observer when only the state variable x_1 is available as an output. The system is represented by the differential equation

\dot{x} = \begin{bmatrix} -2 & 1 \\ 0 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u.   (5-56)


Since only x_1 is measurable, the output is

v = [1, \; 0] x = Cx.   (5-57)


As discussed in the preceding paragraph, we may choose an observer of order n - m = 1. The state variable z will satisfy the first-order differential equation

\dot{z} = \lambda z + Cx + Gu.   (5-58)

The characteristic root \lambda should be larger in magnitude than the roots of S_1 and will be chosen to be equal to minus three. The transformation matrix T must satisfy Eq. (5-40), which is

TA - DT = C,

where here D = \lambda = -3 and T = [t_1, \; t_2]. Therefore it is required that

[t_1, \; t_1 + 2t_2] = [1, \; 0].   (5-59)

Thus

T = [1, \; -\tfrac{1}{2}].   (5-60)

Also it is required that G = TB. Hence we have

G = [1, \; -\tfrac{1}{2}] \begin{bmatrix} 0 \\ 1 \end{bmatrix} = -\tfrac{1}{2}.   (5-61)

Finally, we must obtain x_2 from z using the T matrix. Examining Eq. (5-60), we find that

\hat{x}_2 = 2(x_1 - z).   (5-62)

Therefore we obtain the total system as shown in Fig. 5-14, where \hat{x}_2 represents the observer's estimate of x_2.

Alternatively, the state vector may be obtained by letting T = I and obtaining an observer of the same order as the original system S_1. If we let T = I, then Eq. (5-55) must be satisfied; that is,

D = A - C.   (5-63)


FIG. 5-14. Original system and observer.




FIG. 5-15. Original system and observer for T = I.

The differential equation for the transformed state vector is

\dot{z} = Dz + Cx + Gu,   (5-64)

where

C = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}

and

G = TB = B = [0, \; 1]^T.

Then, completing the matrix subtraction in Eq. (5-63), we have

D = \begin{bmatrix} -3 & 1 \\ 0 & -1 \end{bmatrix}.   (5-65)

Therefore the observer and the original plant are as shown in Fig. 5-15. Examining this figure, we find that we have reconstructed the state x_2 = z_2 by simply building a transfer function 1/(s + 1) which is equal to the transfer function of the first state variable of the plant.
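A quick numerical check of the first-order observer of this example (assuming NumPy; the plant matrices are those reconstructed above, and simple Euler integration is used, so this is a sketch rather than a precise simulation):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -1.0]])   # plant of Fig. 5-13
B = np.array([0.0, 1.0])
lam, G = -3.0, -0.5                        # observer pole and G = TB, Eq. (5-61)

dt = 1e-3
x = np.array([1.0, 1.0])                   # arbitrary plant initial state
z, u = 0.0, 1.0                            # z(0) != Tx(0), deliberately
for _ in range(5000):                      # integrate for 5 seconds
    z += dt * (lam * z + x[0] + G * u)     # zdot = lam*z + x1 + G*u, Eq. (5-58)
    x += dt * (A @ x + B * u)
x2_hat = 2.0 * (x[0] - z)                  # Eq. (5-62)
print(x2_hat, x[1])                        # the estimate has converged to x2
```

Since the error z - Tx decays as e^{-3t}, the deliberate mismatch in z(0) is negligible well before the run ends, which is the behavior Eq. (5-44) predicts.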

It is important to consider the effect of the inclusion of an observer in a feedback system. We expect that the use of an estimated state vector rather than the actual state vector will degrade the performance of the feedback system. Consider the linear system shown in Fig. 5-16 and represented by the matrix equation

\dot{x} = Ax + Bu.   (5-66)

FIG. 5-16. Linear multivariable system.

In this case, we are assuming all the state variables are available for measurement. Then we may design a linear feedback law

u = Fx.   (5-67)

This feedback system is shown in Fig. 5-17. Then the closed-loop system is represented by the following equation:

\dot{x} = (A + BF)x.   (5-68)

The characteristic roots of (A + BF) determine the closed-loop response.

FIG. 5-17. Linear feedback system.




FIG. 5-18. Linear system and observer.

Now let us consider the case where the control signal is based on the estimated state vector. The control signal is

u = F\hat{x},   (5-69)

and the estimated state vector is

\hat{x} = Hx + Mz.   (5-70)

The matrix H relates the available state variables to the estimated state vector. For example, if x_3 were unavailable for measurement, then h_{i3} = 0 for i = 1, 2, ..., n. Also it is necessary that

H + MT = I.   (5-71)

Substituting Eqs. (5-69) and (5-70) into Eq. (5-66), we find

\dot{x} = Ax + BF(Hx + Mz).   (5-72)

In order to obtain the differential equation for the observer, we substitute Eq. (5-69) into Eq. (5-51), obtaining

\dot{z} = Dz + Cx + TBF(Hx + Mz).   (5-73)

The linear feedback system, using the estimated state vector, is shown in Fig. 5-18. Inspecting Eqs. (5-72) and (5-73) and Fig. 5-18, we find that the observer has no effect on the closed-loop system's characteristic roots other than to add the characteristic roots of the observer itself. That is, the characteristic roots of the system are the roots of (A + BF) and the roots of D. This may be shown by determining the characteristic roots of the system matrix equation, which is

\frac{d}{dt}\begin{bmatrix} x \\ z \end{bmatrix} = \begin{bmatrix} A + BFH & BFM \\ C + TBFH & D + TBFM \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix}.   (5-74)

Thus we have shown in this section that an observer may be constructed which will provide the state vector for feedback control. Finally the observer


has the effect of adding its poles to the feedback system's poles. In order to cause a minimum amount of deterioration of the system response, we will normally attempt to select characteristic roots for the observer which are much greater in magnitude than the characteristic roots of the system. Then the effects of the observer will decay rapidly and cause a negligible deterioration of the system performance.
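The separation of roots implied by Eq. (5-74) can be checked numerically. In the sketch below (assuming NumPy), we take T = I, H = 0, and M = I, so that the estimate comes entirely from the observer and the composite matrix reduces to [[A, BF], [C, D + BF]]; the feedback gain F is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-1.0, -2.0]])                # hypothetical feedback gain
C = np.array([[1.0, 0.0], [0.0, 0.0]])      # observer driven by x1 only
D = A - C                                   # Eq. (5-55) with T = I

comp = np.block([[A, B @ F], [C, D + B @ F]])
roots = np.sort_complex(np.linalg.eigvals(comp))
expected = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A + B @ F), np.linalg.eigvals(D)]))
print(np.allclose(roots, expected))         # True: roots of (A+BF) plus roots of D
```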

PROBLEMS

5-1. Investigate the controllability and observability of the systems of (a) Problem 3-8, (b) Problem 3-9a, (c) Problem 3-9b, (d) Problem 3-9c.


FIGURE P5-2

5-2. Two systems S_a and S_b are shown in Fig. P5-2. Determine the controllability and observability of (a) the cascade connection, (b) the parallel connection, and (c) the feedback connection with S_b in the feedback channel.

R(s) → [G(s) = 6/(s(s + 5))] → C(s) = x_1(s)

FIGURE P5-3

5-3. A second-order feedback control system is shown in Fig. P5-3. Only the state variable x_1 is available for measurement. Construct an observer system in order to determine the other state variable by using (a) a first-order observer and (b) a second-order observer.

R(s) → [G(s) = 6/(s(s + 3)(s + 4))] → C(s)

FIGURE P5-4

5-4. A third-order feedback control system is shown in Fig. P5-4. The output x_1(t) and the derivative of the output \dot{x}_1(t) are available for measurement. Construct an observer system in order to determine the other state variable by using (a) a first-order observer and (b) a third-order observer.


BIBLIOGRAPHY

1. R. E. KALMAN, "On the General Theory of Control Systems," Proc. 1st International Congress on Automatic Control, Butterworth, London, Vol. 1, pp. 481-492, 1961.
2. E. G. GILBERT, "Controllability and Observability in Multivariable Control Systems," J. SIAM Control, Vol. 1, pp. 128-151, 1963.
3. E. KREINDLER and P. E. SARACHIK, "On the Concepts of Controllability and Observability of Linear Systems," IEEE Trans. on Automatic Control, Vol. AC-9, pp. 129-136, April 1964.
4. R. E. KALMAN and R. S. BUCY, "New Results in Linear Filtering and Prediction Theory," J. Basic Engr., Trans. ASME, Series D, Vol. 83, pp. 95-108, March 1961.
5. D. G. LUENBERGER, "Observing the State of a Linear System," IEEE Trans. on Military Electronics, pp. 74-80, April 1964.

CHAPTER SIX

The Formulation of the State Variable Equations For Discrete-Time Systems


Dynamic systems in which one or more variables can change only at discrete instants of time are known as sampled-data or discrete-time systems. These systems differ from the systems discussed in the preceding chapters in that the signals for a discrete-time system are in sampled or pulse-data form. Some important examples of this type of system are pulsed radar units, time-shared data-link systems, and especially digital computer control systems.

There are several techniques of analysis and design for sampled-data systems using the Laplace transform and a special form of the Laplace transform called the z-transform, where z = e^{sT}. These techniques have served the analyst well but are limited to linear time-invariant systems. Therefore the time-domain techniques have immediate appeal, since we have learned that they possess commanding attributes for nonlinear and time-varying systems. Further, we know that the formulation of the equations of discrete-time systems in the time domain provides the potential of using a digital computer to evaluate the system response.

There are at least three ways of obtaining the discrete-time state equations.

The path the analyst uses will depend upon the form of the original equations representing the system. The discrete-time state equations may be formulated from (1) an original continuous-time state equation, (2) a z-transform formulation of the equations, or (3) a set of difference equations representing some process. These three cases will be treated in this chapter.


6-1 Formulation of the discrete-time state equations from the continuous-time state equations

Sampling operations in a system occur in discrete dynamic elements as well as in sample and hold elements. Initially let us consider the simple open-loop system shown in Fig. 6-1. The input signal r(t) is sampled at the discrete time t_k and then passed through a smoothing filter called a hold element.



r(t) → sampler, R*(s) → hold element, u(t) → continuous system G(s) → output c(t)

FIG. 6-1. Open-loop sampled system.

This smoothing filter provides the necessary link between the sampled signal and the continuous signal required for the system G(s). The simplest form of a hold element is a zero-order hold, which holds or provides the last sampled value until the next sample is available. That is,

u(t_k + \tau) = r(t_k),   0 < \tau \le t_{k+1} - t_k.

There are several types of sampling operations, including constant-frequency sampling, multirate sampling, and random sampling [1]. If T_k = t_{k+1} - t_k is the sampling period, then with constant-frequency sampling T_k = T, where T is the sampling period. In order to clearly delineate the development of the discrete state equations, we shall first consider the constant-sampling-frequency case.

In Chapter 2 we found the state transition equation for a time-varying system, Eq. (2-44), which is

x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t, \tau)B(\tau)r(\tau)\, d\tau.   (6-1)

The equation for the time-invariant case was Eq. (2-43), which is

x(t) = \phi(t - t_0)x(t_0) + \int_{t_0}^{t} \phi(t - \tau)B(\tau)r(\tau)\, d\tau.   (6-2)

Examining the sampled system shown in Fig. 6-1, we note that the input signal to the plant is u(t), and for a zero-order hold

u(t_k + \tau) = r(t_k),   0 < \tau \le t_{k+1} - t_k.

Also, for a constant sampling frequency, t_0 = t_k = kT, and we wish to determine the value of x(t) at the next sampling instant, which is t = t_{k+1} = (k + 1)T. Substituting these relations into Eq. (6-2), we obtain

x[(k + 1)T] = \phi(T)x(kT) + \left[\int_{kT}^{(k+1)T} \phi[(k + 1)T - \tau]B\, d\tau\right] r(kT),   (6-3)

where B(\tau) is assumed to be a constant matrix. For the time-invariant case, the integral is independent of k, and therefore for k = 0

\int_{kT}^{(k+1)T} \phi[(k + 1)T - \tau]B\, d\tau = \int_0^T \phi(T - \tau)B\, d\tau = \int_0^T \phi(\tau)B\, d\tau.   (6-4)



We shall designate the integral as D(T). Then Eq. (6-3) becomes

x[(k + 1)T] = \phi(T)x(kT) + D(T)r(kT),   (6-5)

which is the discrete transition equation.
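Equation (6-5) is easily evaluated numerically. A sketch assuming SciPy is available (the function name is ours): \phi(T) = e^{AT}, and D(T) is obtained with the standard augmented-matrix identity rather than by explicit quadrature.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, T):
    """Return phi(T) = e^{AT} and D(T) = integral from 0 to T of e^{A*tau} B d tau."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)        # expm([[A, B], [0, 0]] * T) = [[phi(T), D(T)], [0, I]]
    return E[:n, :n], E[:n, n:]
```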

Examining the solution to the homogeneous equation, we obtain for the first sampling instant

x(T) = \phi(T)x(0)   (6-6)

and for the second sampling instant

x(2T) = \phi(T)x(T).   (6-7)

Substituting Eq. (6-6) into Eq. (6-7), we obtain

x(2T) = \phi^2(T)x(0).   (6-8)

Since \phi(T) = e^{AT}, we have \phi^2(T) = \phi(2T), and therefore

x(2T) = \phi(2T)x(0).

The solution for the system with no input may be written as

x[(k + 1)T] = \phi(T)x(kT) = \phi[(k + 1)T]x(0).   (6-9)

The discrete process is then seen to be a sequence of operations \phi(kT) on the initial state x(0). In some cases the output is not the state vector x itself but rather some combination of the state variables and the input signals. In that case the output vector c(kT) is

c(kT) = Hx(kT) + Mr(kT),   (6-10)

where H and M are matrices.

For a time-varying system, the state transition equation is obtained by substituting t = (k + 1)T and t_0 = kT into Eq. (6-1). Therefore we obtain

x[(k + 1)T] = \phi[(k + 1)T, kT]x(kT) + \left[\int_{kT}^{(k+1)T} \phi[(k + 1)T, \tau]B\, d\tau\right] r(kT).   (6-11)

We shall define

D(kT) = \int_{kT}^{(k+1)T} \phi[(k + 1)T, \tau]B\, d\tau.

Then the state transition equation for a time-varying system is

x[(k + 1)T] = \phi[(k + 1)T, kT]x(kT) + D(kT)r(kT).   (6-12)

Since the system is time varying, we find that the transition matrix depends upon time and therefore upon the sampling instant of interest, that is, kT. If



the time-varying system were reduced to a time-invariant system, the \phi and D matrices would be independent of k, the specific sampling instant, and Eq. (6-12) would reduce to Eq. (6-5).

It will be of interest to examine the linear time-invariant system studied in Chapter 2 when the input signal is sampled and held.

r(t) → sampler (period T) → zero-order hold, u(t) → 1/[(s + 1)(s + 2)] → output

FIG. 6-2. Open-loop sampled second-order system.

EXAMPLE 6-1. Consider the system of Example 2-2 when the input signal is sampled and held as shown in Fig. 6-2. The transition matrix for the continuous system was determined to be

\phi(t) = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix},   (6-13)

where t_0 was assumed to be zero. Then the discrete transition matrix is

\phi(T) = \begin{bmatrix} 2e^{-T} - e^{-2T} & e^{-T} - e^{-2T} \\ -2e^{-T} + 2e^{-2T} & -e^{-T} + 2e^{-2T} \end{bmatrix}.   (6-14)

Furthermore,

D(T) = \int_0^T \phi(\tau)B\, d\tau,   where B = [0, \; 1]^T,   (6-15)

so that

D(T) = \begin{bmatrix} \tfrac{1}{2} - e^{-T} + \tfrac{1}{2}e^{-2T} \\ e^{-T} - e^{-2T} \end{bmatrix}.   (6-16)

For the specific case where T = 1, we obtain

\phi(T) = \begin{bmatrix} 0.6004 & 0.2325 \\ -0.4651 & -0.0972 \end{bmatrix}

and

D(T) = [0.1998, \; 0.2325]^T.

Then, if the input is a unit step and the initial conditions are zero, we may evaluate the state at the first sampling instant from Eq. (6-5) as follows:

x(T) = D(T)r(0) = D(T) = [0.1998, \; 0.2325]^T.   (6-17)

Furthermore, the response at any instant is

x[(k + 1)T] = \phi(T)x(kT) + D(T),   (6-18)

since r(kT) = 1.
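Example 6-1 can be checked numerically (assuming SciPy; the augmented-matrix evaluation is the sketch given after Eq. (6-5)):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
M = np.block([[A, B], [np.zeros((1, 3))]])
E = expm(M * 1.0)                      # sampling period T = 1
phi, D = E[:2, :2], E[:2, 2:]
print(np.round(D.ravel(), 4))          # [0.1998 0.2325], as in Eq. (6-17)

x = np.zeros((2, 1))
for k in range(3):                     # unit-step response by Eq. (6-18)
    x = phi @ x + D
    print(k + 1, np.round(x.ravel(), 4))
```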



6-2 Formulation of the discrete state equations from the system flow graph

The advantageous use of signal flow graph techniques for formulating the state equations was demonstrated in Chapter 3. The analyst may readily determine the transition matrix by using the techniques outlined in Chapter 2. This technique is also very useful for systems with sampling and hold elements. In order to demonstrate the direct applicability of the method, let us consider the second-order system with feedback and error-signal sampling shown in Fig. 6-3.

r(t) → (+) → sampler, e*(t) → zero-order hold, U(s) → G(s) = 1/[s(s + 1)] → c(t) = x_1(t), with unity feedback

FIG. 6-3. Closed-loop sampled system.

EXAMPLE 6-2. The sample and hold element is described by the following equation:

u(t_k + \tau) = e(t_k) = r(t_k) - x_1(t_k),   0 < \tau \le t_{k+1} - t_k,   (6-19)


or, in sampled form,

u(kT) = r(kT) - x_1(kT).   (6-20)

The state transition flow graph for the sampled system is shown in Fig. 6-4. Equation (6-19) provides the connection between the sampled signal e(t_k) and the continuous signal U(s). The portion of the graph to the right of U(s) is simply the flow graph of the continuous system G(s). Using Mason's gain formula, we obtain the state equations as follows:

X_1(s) = \frac{1}{s}x_1(t_0) + \frac{1}{s(s + 1)}x_2(t_0) + \frac{1}{s(s + 1)}U(s),
X_2(s) = \frac{1}{s + 1}x_2(t_0) + \frac{1}{s + 1}U(s).   (6-21)


FIG. 6-4. Flow graph of closed-loop sampled system.



Taking the inverse Laplace transform of Eq. (6-21), we are led to

x_1(t) = x_1(t_0) + (1 - e^{-(t - t_0)})x_2(t_0) + (t - t_0 - 1 + e^{-(t - t_0)})u(t_0),
x_2(t) = e^{-(t - t_0)}x_2(t_0) + (1 - e^{-(t - t_0)})u(t_0).   (6-22)

In order to obtain the discrete transition equation, we let t = (k + 1)T and t_0 = kT. In this way, we acquire the following transition equation:

x[(k + 1)T] = \begin{bmatrix} 2 - T - e^{-T} & 1 - e^{-T} \\ -(1 - e^{-T}) & e^{-T} \end{bmatrix} x(kT) + \begin{bmatrix} T - 1 + e^{-T} \\ 1 - e^{-T} \end{bmatrix} r(kT).   (6-23)

If the sampling period is T = 1, we have

x[(k + 1)T] = \begin{bmatrix} 0.6321 & 0.6321 \\ -0.6321 & 0.3679 \end{bmatrix} x(kT) + \begin{bmatrix} 0.3679 \\ 0.6321 \end{bmatrix} r(kT).   (6-24)

The response of the system to a unit step input at the first two sampling instants is

x(T) = [0.3679, \; 0.6321]^T

and

x(2T) = \phi(T)x(T) + D(T) = \begin{bmatrix} 0.6321 & 0.6321 \\ -0.6321 & 0.3679 \end{bmatrix} \begin{bmatrix} 0.3679 \\ 0.6321 \end{bmatrix} + \begin{bmatrix} 0.3679 \\ 0.6321 \end{bmatrix} = [1.000, \; 0.6321]^T.   (6-25)

The evaluation of the transient response can be rapidly accomplished by using a digital computer. Equations of the form of Eq. (6-23) can be easily programmed in the Fortran computer language. The programming of Eq. (6-23) allows the analyst to change the value of T and evaluate the effect of the sampling rate on the system response. Also, the system response may be determined for several inputs by changing r(kT). The calculation of the response at each sampling instant t_k, as k = 0, 1, 2, ..., is seen to be a sequence of operations on x(kT) and r(kT). This type of sequential calculation is advantageously carried out on a digital computer.
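For example, the iteration of Eq. (6-24) might be coded as follows (a sketch in a present-day language, Python with NumPy, rather than the Fortran of the original text):

```python
import numpy as np

T = 1.0
e = np.exp(-T)
phi = np.array([[2.0 - T - e, 1.0 - e],
                [-(1.0 - e),  e]])       # phi(T) of Eq. (6-23)
D = np.array([T - 1.0 + e, 1.0 - e])     # D(T) of Eq. (6-23)

x = np.zeros(2)
for k in range(5):                       # unit step input, r(kT) = 1
    x = phi @ x + D
    print(k + 1, np.round(x, 4))         # [0.3679 0.6321], [1. 0.6321], ...
```

Changing T or the sequence r(kT) here reproduces exactly the parameter studies the text describes.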

Furthermore, the evaluation of the transient response of time-varying systems is greatly facilitated by using the state transition method and digital computer calculation. Assuming that the analyst possesses reasonable knowledge of the


system time variation, this variation can be programmed using Eq. (6-12). For example, suppose the coefficient of the differential equation of the system of Example 6-2 changed abruptly from +1 to -1 as follows:

\ddot{c} + \dot{c} = u,   k = even integer,
\ddot{c} - \dot{c} = u,   k = odd integer.

Then the \phi and D matrices must be evaluated for the condition k = odd, which leads to

\phi(kT) = \begin{bmatrix} 2 + T - e^{T} & e^{T} - 1 \\ -(e^{T} - 1) & e^{T} \end{bmatrix}   and   D(kT) = \begin{bmatrix} e^{T} - 1 - T \\ e^{T} - 1 \end{bmatrix}.   (6-26)

For the condition when k is even, the \phi and D matrices of Eq. (6-23) hold. Then, using

x[(k + 1)T] = \phi(kT)x(kT) + D(kT)r(kT),   (6-27)


the proper index k, and the set of matrices for each index, one may obtain the state response. The indexing and numerical calculation of this type is most advantageously accomplished by a digital computer routine. Furthermore, the effect of variation of the parameters may be determined by evaluating the response for a range of parameter values.
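A sketch of such a routine for the alternating system above (assuming NumPy; phi_e, D_e are the matrices of Eq. (6-23) and phi_o, D_o those of Eq. (6-26), both for T = 1):

```python
import numpy as np

T, e, E = 1.0, np.exp(-1.0), np.exp(1.0)
phi_e = np.array([[2.0 - T - e, 1.0 - e], [-(1.0 - e), e]])   # k even
D_e   = np.array([T - 1.0 + e, 1.0 - e])
phi_o = np.array([[2.0 + T - E, E - 1.0], [1.0 - E, E]])      # k odd
D_o   = np.array([E - 1.0 - T, E - 1.0])

x = np.zeros(2)
for k in range(6):
    phi, D = (phi_e, D_e) if k % 2 == 0 else (phi_o, D_o)
    x = phi @ x + D                    # unit step input, r(kT) = 1
    print(k + 1, np.round(x, 4))
```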

6-3 Formulation of the discrete state equations from difference equations

Frequently the equations describing the dynamic behavior of a discrete data system are available as difference equations. This is in contrast to the development of the preceding sections of this chapter which utilize the equations describing continuous signals and sampling elements. If information about the system behavior is only available at discrete times, we must consider the resulting difference equations. Difference equations are usually encountered in the study of economic, physiological and discrete physical systems. An example of a physical component whose behavior must be described by difference equations is a digital computer. A digital computer accepts input signals at discrete intervals of time and also provides output signals only at discrete intervals of time.

The state difference equations express the behavior of the state variables at an instant in time t_{k+1}, given the past performance at time t_k and the input signals at t_k. The general formulation of the state difference equations is

x(t_{k+1}) = A(t_k)x(t_k) + B(t_k)r(t_k).   (6-28)
