
STATE SPACE MODEL REPRESENTATION

Frequency Domain
Classical Approach: Laplace Transform

This approach is based on converting a system's differential equation to a transfer function.
It generates a mathematical model of the system that algebraically relates a representation of the output to a representation of the input.
The primary disadvantage: it can be applied only to linear, time-invariant systems or systems that can be approximated as such.

Why State-Space Model


Modern Approach: State Space Model
The state-space approach can be used to represent nonlinear systems that have backlash, saturation, and dead zone.
It can also conveniently handle systems with nonzero initial conditions.
Time-varying systems (for example, missiles with varying fuel levels, or lift in an aircraft flying through a wide range of altitudes) can be represented in state space.
Multiple-input, multiple-output systems (such as a vehicle with
input direction and input velocity yielding an output direction and
an output velocity) can be compactly represented in state space.

Some Observations
1. We select a particular subset of all possible system variables and call the variables in this subset state variables.
2. For an nth-order system, we write n simultaneous, first-order differential equations in terms of the state variables. We call this system of simultaneous differential equations the state equations.
3. If we know the initial condition of all of the state variables at t0 as well as the system input for t > t0, we can solve the simultaneous differential equations for the state variables for t > t0.
4. We algebraically combine the state variables with the system's input and find all of the other system variables for t > t0. We call this algebraic equation the output equation.
5. We consider the state equations and the output equations a viable representation of the system. We call this representation of the system a state-space representation.

Example
Let us now follow these steps through an example. Consider the RL network shown in the figure, with an initial current of i(0).
1. We select the current, i(t), for which we will write and solve a differential equation using Laplace transforms.
2. We write the loop equation, L di/dt + R i = v(t), where v(t) is the applied source voltage; a numerical sketch of this first-order state equation follows below.
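As a concrete illustration of step 2 (not part of the original example), here is a minimal Python sketch that integrates the loop equation L di/dt + R i = v(t) numerically and compares the result with the closed-form solution; the values of R, L, the source voltage V, and the initial current i(0) are placeholders, not taken from the figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed series RL loop: L di/dt + R i = V (constant source), initial current i0.
# R, L, V, i0 are placeholder values for illustration only.
R, L, V, i0 = 2.0, 0.5, 1.0, 0.2

def didt(t, i):
    # First-order state equation solved for the derivative of the state i(t)
    return (V - R * i[0]) / L

sol = solve_ivp(didt, (0.0, 2.0), [i0], dense_output=True)
t = np.linspace(0.0, 2.0, 5)
i_numeric = sol.sol(t)[0]
i_exact = (V / R) * (1 - np.exp(-R * t / L)) + i0 * np.exp(-R * t / L)
print(np.round(i_numeric, 3))   # should match the closed-form values below
print(np.round(i_exact, 3))     # i(t) = (V/R)(1 - e^{-Rt/L}) + i(0) e^{-Rt/L}
```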

Example 2

Restrictions
Typically, the minimum number of state variables required to
describe a system equals the order of the differential equation.
Thus, a second-order system requires a minimum of two state
variables to describe it.
We can define more state variables than the minimal set;
however, within this minimal set the state variables must be
linearly independent. For example, if vR(t) is chosen as a state
variable, then i(t) cannot be chosen, because vR(t) can be written as
a linear combination of i(t), namely vR(t) = Ri(t).
State variables must be linearly independent; that is, no state
variable can be written as a linear combination of the other state
variables, or else we would not have enough information to solve
for all other system variables, and we could even have trouble
writing the simultaneous equations themselves.

Another way to determine the number of state variables is to count the number of independent energy-storage elements in the system.
In the figure below there are two energy-storage elements, the capacitor and the inductor. Hence, two state variables and two state equations are required for the system.
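As a sketch of this counting rule (assuming a series RLC circuit driven by a voltage source, which may not be the exact circuit in the figure), the two energy-storage elements suggest the capacitor voltage and inductor current as the two state variables; the component values below are placeholders.

```python
import numpy as np

# Assumed series RLC circuit driven by v(t); states: x1 = capacitor voltage vC,
# x2 = inductor current iL.
#   C dvC/dt = iL              (capacitor equation)
#   L diL/dt = v - R iL - vC   (KVL around the loop)
R, L, C = 1.0, 0.5, 0.25   # placeholder component values

A = np.array([[0.0,      1.0 / C],
              [-1.0 / L, -R / L]])
B = np.array([[0.0],
              [1.0 / L]])
Cout = np.array([[1.0, 0.0]])   # output: capacitor voltage
D = np.array([[0.0]])

# Two energy-storage elements -> two states -> a 2x2 A matrix
print(A)
print(B)
```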

State Space Model Representation


A time-varying control system is a system in which one or more of the parameters of the system may vary as a function of time.
The state of a system is a set of variables whose values, together with the input signals and the equations describing the dynamics, provide the future state and output of the system.
The state variables describe the present configuration of a system and can be used to determine the future response, given the excitation inputs and the equations describing the dynamics.

The state differential equation

State Eq.: dx/dt = A x + B u
Output Eq.: y = C x + D u

Linearized state and output Eq.

General continuous-time linear dynamical system: dx/dt = A(t) x(t) + B(t) u(t), y(t) = C(t) x(t) + D(t) u(t)

Linear time-invariant (LTI) state dynamics: A, B, C, D constant, giving dx/dt = A x + B u, y = C x + D u

Block diagram of the linear, continuous-time control system represented in state space.
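The LTI form dx/dt = Ax + Bu, y = Cx + Du can be simulated directly. The sketch below uses scipy.signal.lsim with placeholder matrices and a nonzero initial condition, purely to illustrate the representation (the matrices are not taken from the slides).

```python
import numpy as np
from scipy.signal import lsim

# Placeholder LTI state-space model  x' = A x + B u,  y = C x + D u
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 200)
u = np.ones_like(t)                # unit-step input
x0 = np.array([0.5, 0.0])          # nonzero initial condition, handled directly

tout, y, x = lsim((A, B, C, D), U=u, T=t, X0=x0)
print(round(float(y[-1]), 3))      # step response settles near the DC gain (0.5 here)
```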

Example 1:

Example

Example 2:

Assume voltage v(t) is the output.

Apply Kirchhoff's Voltage and Current Laws

TF to state space

Y(s)/U(s) = 5 / (s^3 + 2s^2 + 3s + 5), i.e. (s^3 + 2s^2 + 3s + 5) Y(s) = 5 U(s)

so the corresponding differential equation is

d^3y/dt^3 + 2 d^2y/dt^2 + 3 dy/dt + 5y = 5u

If the second derivative of y is designated as x3, the first derivative as x2, and y itself as x1, then

dx1/dt = x2
dx2/dt = x3
dx3/dt = -5x1 - 3x2 - 2x3 + 5u

or, in matrix form,

[dx1/dt]   [ 0  1  0] [x1]   [0]
[dx2/dt] = [ 0  0  1] [x2] + [0] u
[dx3/dt]   [-5 -3 -2] [x3]   [5]

y = [1 0 0] [x1 x2 x3]^T
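As a sanity check on this realization (not part of the original slides), the phase-variable matrices can be converted back to a transfer function with scipy.signal.ss2tf; the result should be 5/(s^3 + 2s^2 + 3s + 5).

```python
import numpy as np
from scipy.signal import ss2tf

# Phase-variable form derived above for  y''' + 2y'' + 3y' + 5y = 5u
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-5.0, -3.0, -2.0]])
B = np.array([[0.0], [0.0], [5.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(np.round(num, 6))   # ~ [[0. 0. 0. 5.]]  -> numerator 5
print(np.round(den, 6))   # ~ [1. 2. 3. 5.]    -> s^3 + 2s^2 + 3s + 5
```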

TF to state space

State-Space Models to TFs

Example

Linearization
Linearization is the process of finding a linear model of a system that
approximates a nonlinear one. Over 100 years ago, Lyapunov
proved that if a linearized model of a system is valid near an
equilibrium point of the system and if this linearized model is
stable, then there is a region around this equilibrium point that
contains the equilibrium, within which the nonlinear system is also
stable.
Basically this tells us that, at least within a region of an equilibrium
point, we can investigate the behavior of a nonlinear system by
analyzing the behavior of a linearized model of that system.
This form of linearization is also called small-signal linearization.
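In practice the Jacobians of f(x, u) at the operating point give the matrices of the small-signal model. The sketch below computes them by central differences for an illustrative nonlinear system (a damped pendulum-like model used here as a placeholder, not the example treated in the slides).

```python
import numpy as np

def f(x, u):
    # Placeholder nonlinear dynamics x' = f(x, u); not the system from the slides
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1] + u])

def jacobians(f, xQ, uQ, eps=1e-6):
    # Central-difference approximations of A = df/dx and B = df/du at (xQ, uQ)
    n = len(xQ)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(xQ + dx, uQ) - f(xQ - dx, uQ)) / (2 * eps)
    B = ((f(xQ, uQ + eps) - f(xQ, uQ - eps)) / (2 * eps)).reshape(n, 1)
    return A, B

xQ, uQ = np.array([0.0, 0.0]), 0.0   # equilibrium point: f(xQ, uQ) = 0
A, B = jacobians(f, xQ, uQ)
print(A)   # ~ [[0, 1], [-1, -0.5]]
print(B)   # ~ [[0], [1]]
```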

Linearization

Equilibrium points

Example
Consider the nonlinear time-invariant system shown. Assume that the input u(t) fluctuates around u = 2.
Find an operating point with uQ = 2 and a linearized model around it.

Figure: y_nl(t), the nonlinear system output, and y_l(t), the linearized system output, for a square-wave input u(t).

Example

Solution of state differential equation


dx/dt = a x + b u
sX(s) - x(0) = a X(s) + b U(s)
X(s) = x(0)/(s - a) + [b/(s - a)] U(s)
x(t) = e^{at} x(0) + ∫_0^t e^{a(t - τ)} b u(τ) dτ

exp(At) = I + At + A^2 t^2/2! + ... + A^k t^k/k! + ...

Converges for all finite t and any A.

The solution of the state differential equation

x(t) = exp(At) x(0) + ∫_0^t exp(A(t - τ)) B u(τ) dτ

dx/dt = A x + B u
X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s)

(sI - A)^{-1} = Φ(s) is the Laplace transform of Φ(t) = exp(At).
Φ(t) is called the fundamental or state-transition matrix.

x(t) = Φ(t) x(0) + ∫_0^t Φ(t - τ) B u(τ) dτ

The solution to the unforced system (that is, when u = 0) is simply x(t) = Φ(t) x(0).
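Numerically, Φ(t) = exp(At) can be evaluated with scipy.linalg.expm, which makes the unforced response easy to compute; the A matrix and initial state below are placeholders for illustration.

```python
import numpy as np
from scipy.linalg import expm

# State-transition matrix Phi(t) = exp(At) and the unforced response x(t) = Phi(t) x(0)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

for t in (0.0, 0.5, 1.0):
    Phi = expm(A * t)                 # sums the series I + At + A^2 t^2/2! + ...
    print(t, np.round(Phi @ x0, 4))   # Phi(0) = I, so x(0) is returned unchanged
```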

Example

Example

Block Diagram Algebra

Introduction
A graphical tool can help us visualize the model of a system and evaluate the mathematical relationships between its elements, using their transfer functions.
In many control systems, the system of equations can be
written so that their components do not interact except by
having the input of one part be the output of another part.
In these cases, it is very easy to draw a block diagram that
represents the mathematical relationships in a similar manner
to that used for the component block diagram.

Reminder: Component Block Diagram

Block Diagram
It represents the mathematical relationships between the
elements of the system.

U1(s) → [ G1(s) ] → Y1(s)


The transfer function of each component is placed in a box, and the
input-output relationships between components are indicated by
lines and arrows.

Block Diagram Algebra


Using a block diagram, we can solve the equations by
graphical simplification, which is often easier and more
informative than algebraic manipulation, even though the
methods are in every way equivalent.
It is convenient to think of each block as representing an
electronic amplifier with the transfer function printed
inside.
The interconnections of blocks include summing points,
where any number of signals may be added together.

Block Diagram Representations for LTI Control Systems

(a) Cascaded system; (b) parallel system; (c) feedback (closed-loop) system.

1st & 2nd Elementary Block Diagrams


Blocks in series:

Y2(s)/U1(s) = G1 G2

Blocks in parallel with their outputs added:

Y2(s)/U1(s) = G1 + G2

3rd Elementary Block Diagram


Single-loop negative feedback

The overall transfer function is given by:

Y(s)/R(s) = G1 / (1 + G1 G2)

Two blocks are connected in a feedback arrangement so that each feeds into the other:

Feedback Rule

Y(s)/R(s) = G1 / (1 + G1 G2)

The gain of a single-loop negative feedback system is given by the forward gain divided by 1 plus the loop gain.

Closed Loop (Feedback) System


E(s) = R(s) - H(s) Y(s)

Y(s) = G1(s) G2(s) E(s)
     = G1(s) G2(s) [R(s) - H(s) Y(s)]
Y(s) [1 + G1(s) G2(s) H(s)] = G1(s) G2(s) R(s)
Y(s)/R(s) = G1(s) G2(s) / [1 + G1(s) G2(s) H(s)]
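The same algebra can be carried out on numerator/denominator coefficient arrays. The helper below is an illustrative sketch (not a library routine): it forms Y(s)/R(s) = G1G2 / (1 + G1G2H) with numpy polynomial operations and reproduces the single-loop result worked in Example 1 further below.

```python
import numpy as np

def feedback_tf(G1, G2, H):
    # Each block is (numerator, denominator) in descending powers of s.
    # Negative feedback:  Y/R = G1*G2 / (1 + G1*G2*H)
    (n1, d1), (n2, d2), (nh, dh) = G1, G2, H
    num = np.polymul(np.polymul(n1, n2), dh)
    den = np.polyadd(np.polymul(np.polymul(d1, d2), dh),
                     np.polymul(np.polymul(n1, n2), nh))
    return num, den

# G1 = (2s + 4)/s^2, G2 = 1, unity feedback H = 1
num, den = feedback_tf(([2, 4], [1, 0, 0]), ([1], [1]), ([1], [1]))
print(num, den)   # -> [2 4] [1 2 4], i.e. T(s) = (2s + 4)/(s^2 + 2s + 4)
```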


1st Elementary Principle of Block Diagram Algebra

2nd Elementary Principle of Block Diagram Algebra

3rd Elementary Principle of Block Diagram Algebra

Example 1: Transfer function from a Simple Block Diagram

T(s) = Y(s)/R(s)

With forward-path gain G(s) = (2s + 4)/s^2 and unity negative feedback:

T(s) = [(2s + 4)/s^2] / [1 + (2s + 4)/s^2]

T(s) = (2s + 4) / (s^2 + 2s + 4)

Block Diagram and its corresponding Signal Flow Graph

A compact alternative notation to the block diagram.
It characterizes the system by a network of directed branches and associated transfer functions.
The two ways of depicting the signal flow are equivalent.

Closed-Loop System Subjected to a Disturbance

where |G1(s)H(s)| >> 1 and |G1(s)G2(s)H(s)| >> 1. In this case, the closed-loop transfer function CD(s)/D(s) becomes almost zero, and the effect of the disturbance is suppressed. This is an advantage of the closed-loop system.

Procedures for Drawing a Block Diagram
To draw a block diagram for a system,
Write the equations that describe the dynamic behavior of
each component.
Then take the Laplace transforms of these equations,
assuming zero initial conditions,
Represent each Laplace-transformed equation individually in
block form.
Assemble the elements into a complete block diagram.

Example

Example 2: TF from the Block Diagram


Block Diagram Reduction:


T(s) = (G1 G2 G5 + G1 G6) / (1 + G1 G3 + G1 G2 G4)

Example: Find the equivalent transfer function

Basic Control Actions

Industrial Controllers
On-off Controllers
Proportional Controllers
Integral Controllers
Proportional-plus-Integral Controllers
Proportional-plus-Derivative Controllers
Proportional-plus-Integral-plus-Derivative Controllers

Basic Operations of a Feedback Control
Think of what goes on in a domestic hot-water thermostat:
The temperature of the water is measured.
Comparison of the measured and the required values
provides an error, e.g. too hot or too cold.
On the basis of error, a control algorithm decides what to
do.
Such an algorithm might be:
If the temperature is too high then turn the heater off.
If it is too low then turn the heater on.
The adjustment chosen by the control algorithm is applied
to some adjustable variable, such as the power input to the
water heater.

Two-Position or On-Off Control Action


In a two-position control system, the actuating element has only two fixed
positions, which are, in many cases, simply on and off.
Let the output signal from the controller be u(t) and the actuating error signal be
e(t).
In two-position control, the signal u(t) remains at either a maximum or a minimum value, depending on whether the actuating error signal is positive or negative, so that

u(t) = U1, for e(t) > 0
u(t) = U2, for e(t) < 0

where U1 and U2 are constants. The minimum value U2 is usually either zero or -U1.
Two-position controllers are generally electrical devices, and an electric
solenoid-operated valve is widely used in such controllers.

(a) Block diagram of an on-off controller; (b) block diagram of an on-off controller with differential gap.

(a) Liquid-level control system; (b) electromagnetic valve.

Level h(t) versus t curve for the system

The range through which the


actuating error signal must move
before the switching occurs is
called the differential gap.
Such a differential gap causes the
controller output u(t) to maintain
its present value until
the actuating error signal has
moved slightly beyond the zero
value.
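The sketch below simulates this behaviour for an assumed first-order liquid-level plant under on-off control with a differential gap; all plant and controller numbers are placeholders, chosen only to show the hysteresis-driven oscillation around the set point.

```python
# Assumed first-order liquid-level plant  dh/dt = (qin - h/Rout) / Area,
# with the inflow qin switched between two fixed values by an on-off controller.
Area, Rout = 1.0, 2.0
setpoint, gap = 1.0, 0.1        # differential gap (hysteresis band) of 0.1
Q_ON, Q_OFF = 1.0, 0.0          # the two fixed actuator positions
dt, h, qin = 0.01, 0.0, Q_ON

for k in range(3000):
    error = setpoint - h
    if error > gap / 2:         # level well below set point -> open valve
        qin = Q_ON
    elif error < -gap / 2:      # level well above set point -> close valve
        qin = Q_OFF
    # while the error stays inside the gap, qin keeps its previous value
    h += dt * (qin - h / Rout) / Area

print(round(h, 3))              # the level ends up cycling inside the gap
```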

Proportional Control Action


For a controller with proportional control action, the relationship between the output of the controller u(t) and the actuating error signal e(t) is

u(t) = Kp e(t)

where Kp is termed the proportional gain.

Integral Control Action


In a controller with integral control action, the value of the controller output u(t) is changed at a rate proportional to the actuating error signal e(t). That is,

du(t)/dt = Ki e(t), or equivalently u(t) = Ki ∫_0^t e(τ) dτ

where Ki is an adjustable constant.

Proportional-Plus-Integral Control
Action
The control action of a proportional-plus-integral controller is defined by

u(t) = Kp e(t) + (Kp/Ti) ∫_0^t e(τ) dτ

where Ti is called the integral time.

Proportional-Plus-Derivative Control
Action
The control action of a proportional-plus-derivative controller is defined by

u(t) = Kp e(t) + Kp Td de(t)/dt

where Td is called the derivative time.

Proportional-Plus-Integral-Plus-Derivative Control Action

The combination of proportional control action, integral control action, and derivative control action is termed proportional-plus-integral-plus-derivative control action. This combined action has the advantages of each of the three individual control actions. The equation of a controller with this combined action is given by

u(t) = Kp e(t) + (Kp/Ti) ∫_0^t e(τ) dτ + Kp Td de(t)/dt

The PID Algorithm


The PID algorithm is the most popular feedback control algorithm in use. It is a robust, easily understood algorithm that can provide excellent control performance despite the varied dynamic characteristics of processes.

PID Controller

In the s-domain, the PID controller may be represented as:

U(s) = (Kp + Ki/s + Kd s) E(s)

In the time domain:

u(t) = Kp e(t) + Ki ∫_0^t e(τ) dτ + Kd de(t)/dt

where Kp is the proportional gain, Ki the integral gain, and Kd the derivative gain.

Definitions
In the time domain:

u(t) = Kp e(t) + Ki ∫_0^t e(τ) dτ + Kd de(t)/dt
     = Kp [ e(t) + (1/Ti) ∫_0^t e(τ) dτ + Td de(t)/dt ]

where Ti = Kp/Ki is the integral time constant, Td = Kd/Kp is the derivative time constant, Kp is the proportional gain, Ki the integral gain, and Kd the derivative gain.
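A discrete-time implementation of this textbook form, applied to an assumed first-order plant, might look like the sketch below; the gains, time constants, and plant model are placeholders, not values from the slides.

```python
# Discrete-time PID sketch:  u = Kp * ( e + (1/Ti) * integral(e) + Td * de/dt ),
# driving an assumed first-order plant  dy/dt = (u - y)/tau  (Euler integration).
Kp, Ti, Td = 2.0, 1.0, 0.1      # placeholder controller settings
tau, dt = 0.5, 0.01             # placeholder plant time constant and step size
setpoint = 1.0
y, integral, prev_error = 0.0, 0.0, 0.0

for k in range(1000):
    error = setpoint - y
    integral += error * dt                   # integral action
    derivative = (error - prev_error) / dt   # derivative action (on the error)
    u = Kp * (error + integral / Ti + Td * derivative)
    prev_error = error
    y += dt * (u - y) / tau                  # plant update

print(round(y, 3))   # integral action drives the output to the set point (about 1.0)
```

Note that differentiating the error directly produces a large spike when the set point changes; practical PID implementations often differentiate the measured output instead.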
