
EMT 4203: CONTROL ENGINEERING II

BSc. Mechatronic Engineering

DeKUT

Lecture Notes

By

Dr. Inno Oduor Odira

February 2024
Table of contents

I Classical Control

1 Control Problem and Control Actions
1.1 Control Problem
1.2 Control actions
1.2.1 State feedback control configuration
1.2.2 Output feedback control configuration

2 PID Controllers
2.1 Key features of the PID controller
2.2 Structures and main properties of PID controllers
2.2.1 PD Controller
2.2.2 PI Controller
2.2.3 PID Controller
2.3 PID Controller Tuning
2.3.1 PID controller actions selection general guide
2.3.2 Ziegler-Nichols Tuning: Reaction curve method (Open loop method)
2.3.3 Ziegler-Nichols Tuning: Continuous cycling (Closed loop method)
2.3.4 Analytical Ziegler-Nichols continuous cycling method
2.4 PI Controller Design
2.4.1 PID and Phase-Lag-Lead Controller Designs

References
COURSE OUTLINE

EMT 4203: CONTROL ENGINEERING II


Prerequisites: EMT 4102 Control Engineering I

Course Purpose

The aim of this course is to enable the student to understand state space and polynomial representations of linear dynamical systems, develop skills for the design of linear multivariable control systems by pole placement and linear quadratic optimization, and grasp basic nonlinear system analysis and design methods.

Expected Learning Outcomes

By the end of this course, the learner should be able to:

i) Analyse linear dynamical systems by state space and polynomial methods

ii) Design controllers for linear and nonlinear systems

iii) Derive state space representations for nonlinear systems

Course description

Control problem and basic control actions: Proportional (P) control action, Derivative (D) control action, Integral (I) control action, Proportional plus Derivative (PD) control action, Proportional plus Integral (PI) control action. Controllers based on PID actions. Observability and state estimation. Pole placement by state feedback. Observer design by pole placement. Polynomial approach for pole placement. State variable feedback controller design: controllability, observability, eigenvalue placement, observer design for linear systems. Linear Quadratic Regulator (LQR), Algebraic Riccati Equation (ARE), disturbance attenuation problem, tracking problem, Kalman filter as an observer; digital implementation of state-space models, dynamic programming.

Mode of delivery

Two (2) hour lectures and two (2) hour tutorial per week, and at least five 3-hour laboratory
sessions per semester organized on a rotational basis.

Instructional Materials/Equipment

White Board, LCD Projector

Course Assessment

1. Practicals: 15%
2. Assignments: 5%
3. CATs: 10%
4. Final Examination: 70%
5. Total: 100%

Reference books

1. Franklin, Gene F. (2006). Feedback Control of Dynamic Systems. 5th ed. India: Prentice Hall.
2. Golnaraghi, Farid. (2010). Automatic Control Systems. 9th ed. New Jersey: Wiley.
3. Geering, Hans P. (2007). Optimal Control with Engineering Applications. Berlin Heidelberg: Springer.
4. Ogata, Katsuhiko. (2010). Modern Control Engineering. 5th ed. Boston: Pearson.
5. Bolton, W. (2006). Control Systems. Oxford: Newnes.

Course Journals

1. Journal of Dynamic Systems, Measurement, and Control, ISSN: [0022-0434]


2. IRE Transactions on Automatic Control, ISSN: [0096-199X]
3. SIAM Journal on Control and Optimization, ISSN: [0363-0129]
4. Transactions of the Institute of Measurement and Control, ISSN: [0142-3312]
Part I

(Classical Control)
Chapter 1

Control Problem and Control Actions

1.1 Control Problem


In any control system where a dynamic variable has to be maintained at a desired set-point value, it is the controller that enables the control objective to be met.
The control design problem is the problem of determining the characteristics of the controller so that the controlled output can be:
1. Set to prescribed values, called the reference.
2. Maintained at the reference values despite unknown disturbances.
3. Conditions (1) and (2) met despite the inherent uncertainties and changes in the plant dynamic characteristics.
4. Maintained within some constraints.
The first requirement is called tracking or stabilization (regulation), depending on whether or not the set point changes continuously. The second condition is called disturbance rejection. The third condition is called robust tracking/stabilization and disturbance rejection. The fourth condition is called optimal tracking/stabilization and disturbance rejection.

1.2 Control actions


The liquid level control system in a buffer tank shown in Fig. 1.1 will be used for illustration. This can be represented as a general plant, shown in Fig. 1.2. The manner in which the automatic controller produces the control signal is called the control action.
The control signal is produced by the controller, thus a controller has to be connected to the plant. The configuration may be either closed loop or open loop, as shown in Fig. 1.3 and Fig. 1.4 respectively. These may also be configured as either an output feedback control configuration or a state feedback control configuration.

Fig. 1.1 Liquid level control system in a buffer tank

Fig. 1.2 General plant

Fig. 1.3 Closed-loop controlled system.

Fig. 1.4 Open-loop controlled system.



1.2.1 State feedback control configuration.


The general mathematical model of state feedback takes the form

ẋ = Ax + Bu    (State Equation)
y = Cx + Du    (Output Equation)

The associated block diagram is the following Fig. 1.5. Two typical control problems of interest:

Fig. 1.5 Regulation and Tracking configuration.

• The regulator problem, in which r = 0 and we aim to keep y(t) → 0 as t → ∞ (i.e., a pure stabilisation problem).
• The tracking problem, in which y(t) is specified to track r(t) ≠ 0.
When r(t) = R ̸= 0, constant, the regulator and tracking problems are essentially the same.
Tracking a nonconstant reference r(t) is a more difficult problem, called the servomechanism
problem.
The control law for state feedback then takes the form

u(t) = K2 r − K1 x
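As a quick illustration, the sketch below simulates this control law for a made-up second-order plant; the matrices A, B, C and the gains K1, K2 are illustrative assumptions, not values from these notes (MATLAB, assuming the Control System Toolbox).

A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;   % illustrative plant (assumed values)
K1 = [8 3];                          % state feedback gain (assumed)
Acl = A - B*K1;                      % closed-loop dynamics under u = K2*r - K1*x
K2 = 1/dcgain(ss(Acl, B, C, D));     % precompensator so that y -> r in steady state
sys_cl = ss(Acl, B*K2, C, D);
step(sys_cl)                         % unit-step reference tracking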

1.2.2 Output feedback control configuration.


The controller compares the actual value of the system output with the reference input (desired value), determines the deviation, and produces a control signal that reduces the deviation to zero or to a small value, as illustrated in Fig. 1.6.
The algorithm that relates the error and the control signal is called the control action (law, or strategy).
A controller is required to shape the error signal such that certain control criteria, or specifications, are satisfied. These criteria may involve:
• Transient response characteristics,
• Steady-state error,
• Disturbance rejection,
• Sensitivity to parameter changes.

Fig. 1.6 Expanded Close loop Output feedback configuration.

Fig. 1.7 Control action.

The most commonly used control actions are:

1. Two position (on-off, bang-bang)

2. Proportional (P-control)

3. Derivative (D-control)

4. Integral (I-control)

Two position (on-off, bang-bang)

In a two position control action system, the actuating element has only two positions which are
generally on and off. Generally these are electric devices. These are widely used as they are simple
and inexpensive. The output of the controller is given by Eqn.1.1.
u(t) = U1,  if e(t) ≥ 0
     = U2,  if e(t) < 0        (1.1)

where U1 and U2 are constants.
The block diagram of on-off controller is shown in Fig. 1.8
The value of U2 is usually either:
• zero, in which case the controller is called the on-off controller (Fig. 1.9), or
• equal to −U1 , in which case the controller is called the bang-bang controller Fig. 1.10.
Two-position controllers suffer from cyclic oscillations, which are mitigated by introducing a differential gap (neutral zone) such that the output switches to U1 only after the actuating error

Fig. 1.8 Block diagram of an on-off controller.

Fig. 1.9 On-off controller (U2 = 0).

Fig. 1.10 Bang-bang controller (U2 = −U1).

becomes positive by an amount d. Similarly it switches back to U2 only after the actuating error
becomes equal to −d.

Fig. 1.11 On-off controller with a differential gap.

The existence of a differential gap reduces the accuracy of the control system, but it also
reduces the frequency of switching which results in longer operational life.
With reference to Fig. 1.12, assume at first that the tank is empty. In this case, the solenoid will be energized, opening the valve fully.

Fig. 1.12 Water level control system.

If, at some time t0, the solenoid is de-energized, closing the valve completely (qi = 0), then the water in the tank will drain off. The variation of the water level in the tank is then given by the emptying curve.

Fig. 1.13 Water level control system.

If the switch is adjusted for a desired water level, the input qi will be on or off (either a positive constant or zero) depending on the difference between the desired and the actual water levels, so as to create a differential gap.
Therefore, during actual operation, the input will be on until the water level exceeds the desired level by half the differential gap.
Then the solenoid valve will be shut off until the water level drops below the desired level by half the differential gap. The water level will therefore continuously oscillate about the desired level.
It should be noted that the smaller the differential gap, the smaller the deviation from the desired level; on the other hand, the number of switching operations increases.
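A minimal simulation sketch of this behaviour is given below; the tank model and all numerical values (area, outflow coefficient, inflow rate, gap width) are assumptions for illustration only, not data from the notes (plain MATLAB).

Atank = 1.0;  k = 0.05;  qon = 0.2;     % assumed tank area, outflow coeff., inflow when on
href = 1.0;  d = 0.05;                  % desired level and half the differential gap
dt = 0.1;  t = 0:dt:60;                 % Euler integration grid
h = zeros(size(t));  valve = 1;         % start with an empty tank and the valve open
for i = 1:numel(t)-1
    e = href - h(i);                    % actuating error
    if e >  d, valve = 1; end           % switch on only after the error exceeds +d
    if e < -d, valve = 0; end           % switch off only after the error falls below -d
    h(i+1) = h(i) + dt*(qon*valve - k*sqrt(h(i)))/Atank;
end
plot(t, h), xlabel('time (s)'), ylabel('level h(t)')   % level oscillates within the gap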

Fig. 1.14 Water level control system.

Fig. 1.15 Water level control system.

Proportional Control Action

The proportional controller is essentially an amplifier with an adjustable gain. For a controller with proportional control action, the relationship between the controller output u(t) and the actuating error signal e(t) is

u(t) = Kp e(t)

where Kp is the proportional gain.


Whatever the actual mechanism may be, the proportional controller is essentially an amplifier with an adjustable gain. The block diagram of a proportional controller is shown in Fig. 1.16.
The proportional action is shown in Fig. 1.17. In general:
• For small values of Kp, the corrective action is slow, particularly for small errors.
• For large values of Kp, the performance of the control system is improved, but this may lead to instability.
Proportional control is said to look at the present error signal.
Usually, a compromise is necessary in selecting a proper gain. If this is not possible, then
proportional control action is used with some other control action(s).
The value of K p should be selected to satisfy the requirements of

Fig. 1.16 Block diagram of a proportional controller.

Fig. 1.17 Proportional action.

• stability,
• accuracy, and
• satisfactory transient response, as well as
• satisfactory disturbance rejection characteristics.

Integral Control Action

The value of the controller output u(t) is changed at a rate proportional to the actuating error signal e(t), as given by Eqn. (1.2):

du(t)/dt = Ki e(t)    (1.2)

or

u(t) = Ki ∫₀ᵗ e(τ) dτ    (1.3)

where Ki is an adjustable constant.
With this type of control action, control signal is proportional to the integral of the error signal.
It is obvious that even a small error can be detected, since integral control produces a control
signal proportional to the area under the error signal.

Hence, integral control increases the accuracy of the system.


Integral control is said to look at the past of the error signal.
If the value of e(t) is doubled, then u(t) varies twice as fast. For zero actuating error, the value
of u(t) remains stationary. The integral control action is also called reset control. Fig. 1.18 shows the block diagram of the integral controller.
Remember that each s term in the denominator of the open loop transfer function increases the
type of the system by one, and thus reduces the steady state error.
The use of integral controller will increase the type of the open loop transfer function by one.

Fig. 1.18 Block diagram of an integral controller.

Fig. 1.19 Integral control action.

Derivative Control Action

In this case the control signal of the controller is proportional to the derivative (slope) of the error
signal.
Derivative control action is never used alone, since it does not respond to a constant error,
however large it may be.
Derivative control action responds to the rate of change of error signal and can produce a
control signal before the error becomes too large.

Fig. 1.20 Block diagram of a Derivative controller.

Fig. 1.21 Derivative control action.

As such, derivative control action anticipates the error, takes early corrective action, and tends
to increase the stability of the system.
Derivative control is said to look at the future of the error signal and to apply brakes to the system.
Derivative control action has no direct effect on steady state error.
But it increases the damping in the system and allows a higher value for the open loop gain K
which reduces the steady state error.
Derivative control, however, has disadvantages as well.
It amplifies noise signals coming in with the error signal and may saturate the actuator.
It cannot be used if the error signal is not differentiable.
Thus derivative control is used only together with some other control action!
Chapter 2

PID Controllers

2.1 Key features of the PID controller.


The basic PID controller has the form

u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt = Kp [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt ]

where u is the control signal, e is the control error, and the variable r is often called the set point.

Fig. 2.1 PID control structure

The control signal is thus a sum of three terms: the P-term (which is proportional to the error),
the I-term (which is proportional to the integral of the error), and the D-term (which is proportional
to the derivative of the error). The controller parameters are the proportional gain Kp, the integral time Ti, and the derivative time Td.
The integral, proportional and derivative parts can be interpreted as control actions based on the past, the present and the future, as illustrated in Fig. 2.2. The derivative part can also be
interpreted as prediction by linear extrapolation over the time Td . Using this interpretation it is
easy to understand that derivative action does not help if the prediction time Td is too large.

Fig. 2.2 PID terms interpretation illustration

Integral action guarantees that the process output agrees with the reference in steady state and
provides an alternative to including a feedforward term for tracking a constant reference input.

In the Laplace domain the integral part of the controller can be written as U_I(s) = (Kp/(Ti s)) E(s), where Ti = Kp/Ki is the integral time.
Derivative action provides predictive action, where Td = Kd/Kp is the derivative time constant. The action of a controller with proportional and derivative action can be interpreted as making the control proportional to the predicted process output, where the prediction is made by extrapolating the error Td time units into the future using the tangent to the error curve.
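The three terms are easy to see in a discrete-time implementation. The sketch below uses a forward-Euler integral and a backward-difference derivative; the first-order plant and the gains are illustrative assumptions, not values from the notes (plain MATLAB).

Kp = 2.0;  Ti = 1.5;  Td = 0.1;          % assumed controller parameters
dt = 0.01;  N = 2000;  r = 1.0;          % sample time, number of steps, set point
y = 0;  I = 0;  eprev = 0;  yhist = zeros(1, N);
for k = 1:N
    e = r - y;                           % present error (P-term)
    I = I + e*dt;                        % accumulated past error (I-term)
    D = (e - eprev)/dt;                  % estimated error trend (D-term)
    u = Kp*(e + I/Ti + Td*D);            % PID law in the form given above
    eprev = e;
    y = y + dt*(-y + u)/0.5;             % assumed first-order plant, time constant 0.5 s
    yhist(k) = y;
end
plot((1:N)*dt, yhist), xlabel('time (s)'), ylabel('y(t)')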

2.2 Structures and main properties of PID controllers


Several common dynamic controllers appear very often in practice. They are known as P, PD,
PI, PID, phase-lag, phase-lead, and phase-lag-lead controllers. In this section we introduce their
structures and indicate their main properties.
In most cases these controllers are placed in the forward path, in front of the plant (system), as presented in Fig. 2.3.

Fig. 2.3 A common controller-plant configuration

P Controller

In some cases it is possible to achieve the desired system performance by changing only the static gain Kp. In general, as Kp increases, the steady-state errors decrease, but the maximum percent overshoot increases. However, very often a static controller is not sufficient and one is faced with
the problem of designing dynamic controllers.

2.2.1 PD Controller
PD stands for a proportional and derivative controller. The output signal of this controller is equal
to the sum of two signals: the signal obtained by multiplying the input signal by a constant gain
K p and the signal obtained by differentiating and multiplying the input signal by Kd , i.e.
 
u(t) = Kp e(t) + Kd de(t)/dt = Kp ( e(t) + Td de(t)/dt )

Its transfer function is given by

Gc(s) = Kp + Kd s = Kd (s + Kp/Kd) = Kd (s + zc)

This controller is equivalent to adding a zero to the system with a result of positive phase
contribution. It is used to improve the system transient response.
The PD controller is equivalent to the phase-lead compensator, which adds not only a zero but also a pole, in a way that the phase contribution is positive.

Phase-Lead compensator

The phase-lead compensator is designed such that its phase contribution to the feedback loop is
positive. It is represented by

Gc(s) = (s + z2)/(s + p2),   p2 > z2 > 0

arg Gc(s) = arg(s + z2) − arg(s + p2) = θz2 − θp2 > 0
where θz2 and θp2 are shown in Fig. 2.4(b). This controller introduces a positive phase shift in the loop (phase lead). It is used to improve the transient behaviour of the system response.

Fig. 2.4 Poles and zeros of phase-lag (a) and phase-lead (b) controllers

2.2.2 PI Controller
Similarly to the PD controller, the PI controller produces as its output a weighted sum of the input
signal and its integral:

u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ

Its transfer function is

Gc(s) = Kp + Ki/s = (Kp s + Ki)/s = Kp (s + zc)/s
It is equivalent to adding a pole at the origin and a zero to the system.
In practical applications the PI controller zero is placed very close to its pole located at the
origin so that the angular contribution of this "dipole" to the root locus is almost zero. A PI
controller is used to improve the system response steady state errors since it increases the control
system type by one .
Equivalent to PI controller is the Phase-Lag compensator.

Phase-Lag Compensator

The phase-lag controller belongs to the same class as the PI controller. The phase-lag controller
can be regarded as a generalization of the PI controller. It introduces a negative phase into the
feedback loop, which justifies its name. It has a zero and pole with the pole being closer to the
imaginary axis, that is
 
Gc(s) = (p1/z1) (s + z1)/(s + p1),   z1 > p1 > 0

arg Gc(s) = arg(s + z1) − arg(s + p1) = θz1 − θp1 < 0
where p1 /z1 is known as the lag ratio. The corresponding angles θz1 and θ p1 are given in
Figure 2.4(a). The phase-lag controller is used to improve steady state errors.

2.2.3 PID Controller


The PID controller is a combination of the PD and PI controllers; its control law and transfer function are given by

u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt

Gc(s) = Kp + Kd s + Ki/s = (Kd s² + Kp s + Ki)/s = Kd (s + z1)(s + z2)/s
The PID controller can be used to improve both the system transient response and steady state
errors. This controller is very popular for industrial applications.

Phase-Lag-Lead Compensator

The phase-lag-lead controller is obtained as a combination of phase-lead and phase-lag controllers


and is equivalent to PID controller. Its transfer function is given by

Gc(s) = [(s + z1)(s + z2)] / [(s + p1)(s + p2)],   p2 > z2 > z1 > p1 > 0,   z1 z2 = p1 p2
It has features of both phase-lag and phase-lead controllers, i.e. it can be used to improve
simultaneously both the system transient response and steady state errors. However, it is harder to
design phase-lag-lead controllers than either phase-lag or phase-lead controllers.
Note that all controllers presented in this section can be realized by using active networks
composed of operational amplifiers (see, for example, Dorf, 1992; Nise, 1992; Kuo, 1995).

2.3 PID Controller Tuning

2.3.1 PID controller actions selection general guide


• Determine what characteristics of the system need to be improved (transient and steady-state requirements).

• Use Kp to decrease the rise time Tr.

• Use Kd to reduce the overshoot (MP, %OS) and the settling time Ts.

• Use Ki to eliminate the steady-state error.

Several tuning methods exist, including the Ziegler-Nichols methods, the Cohen-Coon method, the Lambda method, etc.

2.3.2 Ziegler-Nichols Tuning: Reaction curve method (Open loop method)


Also referred to as the step response method. The step response method characterizes the open-loop response by the parameters a and τ illustrated in Fig. 2.5:

Type    Kp      Ti    Td
P       1/a     -     -
PI      0.9/a   3τ    -
PID     1.2/a   2τ    0.5τ
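A small helper that encodes these rules is sketched below; a and τ are read off the measured reaction curve (Fig. 2.5), and the numbers used here are placeholders for illustration (plain MATLAB).

a = 0.2;  tau = 1.0;                        % placeholder measurements from the reaction curve
Kp_P   = 1.0/a;                             % P controller
Kp_PI  = 0.9/a;   Ti_PI  = 3*tau;           % PI controller
Kp_PID = 1.2/a;   Ti_PID = 2*tau;   Td_PID = 0.5*tau;   % PID controller
fprintf('PID: Kp = %.2f, Ti = %.2f, Td = %.2f\n', Kp_PID, Ti_PID, Td_PID)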

2.3.3 Ziegler-Nichols Tuning: Continuous cycling (Closed loop method)


Also referred to as Frequency response method.
The frequency response method characterizes the process dynamics by the point where the
Nyquist curve of the process transfer function first intersects the negative real axis and the frequency
ωc where this occurs (Fig. 2.6).

Fig. 2.5 Reaction curve

Fig. 2.6 Nyquist curve

The frequency response method follows these steps, starting from a closed-loop system with a proportional controller:

1. Begin with a low value of the gain Kp.

2. Reduce the integral and derivative gains to 0.

3. Increase Kp to some critical value Kp = Kcr at which sustained oscillations occur. If they do not occur, another method has to be applied.

4. Note the value Kcr and the corresponding period of sustained oscillation, Pcr.

The Ziegler-Nichols frequency response method gives the controller parameters in terms of the critical gain Kcr and the critical period Pcr. The corresponding gains are given in the following table:

Type of Controller    Kp         Ti         Td
P                     0.5Kcr     ∞          0
PI                    0.45Kcr    Pcr/1.2    0
PID                   0.6Kcr     0.5Pcr     0.125Pcr
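The sketch below converts a measured critical gain and period into the parallel-form gains Ki = Kp/Ti and Kd = Kp Td; the values of Kcr and Pcr used here are the ones found in Example 1 further below (plain MATLAB).

Kcr = 192;  Pcr = 1.31;                         % critical gain and period (from Example 1 below)
Kp = 0.6*Kcr;  Ti = 0.5*Pcr;  Td = 0.125*Pcr;   % PID row of the table
Ki = Kp/Ti;  Kd = Kp*Td;                        % parallel-form gains
fprintf('Kp = %.1f, Ki = %.1f, Kd = %.2f\n', Kp, Ki, Kd)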

Fig. 2.7 Cyclic oscillation

2.3.4 Analytical Ziegler-Nichols continuous cycling method


Steps to design PID controller:

1. Consider the system under pure proportional control.

2. Consider the closed loop characteristic equation of the system under pure proportional
control.

3. Form the Routh Array and establish the critical gain Kc that produces an all zero row.

4. Note the value of Kc and use the auxiliary polynomial to calculate the period of oscillation T.

5. Obtain the controller settings from the table given above.

Example 1

Consider a process with transfer function

G(s) = 1 / [(s + 1)(s + 3)(s + 5)]

Fig. 2.8 PID

The closed-loop transfer function is T(s) = KG(s)/[1 + KG(s)], so the characteristic equation is

1 + KG(s) = 0
(s + 1)(s + 3)(s + 5) + K = 0
p(s) = s³ + 9s² + 23s + 15 + K = 0
The corresponding Routh array is

s³    1          23        0
s²    9          15 + K    0
s¹    192 − K    0
s⁰    15 + K
From this we see that the range of K for stability is
15 + K > 0 ⇒ K > −15, and
192 − K > 0 ⇒ K < 192. So Kcr = 192.
When K = 192, we have imaginary roots since the s¹ row is identically 0.
The corresponding auxiliary equation is

9s² + (15 + 192) = 9s² + 207 = 0

with roots at s = ±j4.8.

Since this is a quadratic factor of the characteristic polynomial, the sustained oscillation at the limiting value of K, Kcr, occurs at ωcr = 4.8 rad/s.

Type of Controller    Kp         Ti = Kp/Ki    Td = Kd/Kp
P                     0.5Kcr     ∞             0
PI                    0.45Kcr    Pcr/1.2       0
PID                   0.6Kcr     0.5Pcr        0.125Pcr

Thus, Pcr = 2π/ωcr = 2π/4.8 = 1.31 s and Kcr = 192.

This gives, for full PID control, from the table:
Kp = 0.6Kcr = 115.2
Ki = Kp/Ti = Kp/(0.5Pcr) = 175.87
Kd = Kp Td = Kp × 0.125Pcr = 18.86
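These values can be cross-checked numerically: for pure proportional feedback the critical gain equals the gain margin of G(s). The sketch below uses this fact and then simulates the Ziegler-Nichols PID loop (MATLAB, assuming the Control System Toolbox).

s = tf('s');
G = 1/((s + 1)*(s + 3)*(s + 5));
[Gm, ~, Wcg, ~] = margin(G);         % gain margin and phase-crossover frequency
Kcr = Gm;  Pcr = 2*pi/Wcg;           % expect Kcr ~ 192 and Pcr ~ 1.31 s
Kp = 0.6*Kcr;  Ki = Kp/(0.5*Pcr);  Kd = Kp*0.125*Pcr;
C = pid(Kp, Ki, Kd);                 % Ziegler-Nichols PID from the table
step(feedback(C*G, 1))               % closed-loop step response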

Example 2
G(s) = 6 / [(s + 1)(s + 2)(s + 3)]
Consider the characteristic equation of the system

1 + Kp G(s) = 0
s³ + 6s² + 11s + 6(1 + Kp) = 0

The Routh array is formed:

s³    1                11
s²    6                6(Kp + 1)    ← auxiliary polynomial row
s¹    11 − (Kp + 1)
s⁰    6(Kp + 1)

For stability: 11 − (Kp + 1) > 0 ⇒ Kp < 10, and
6(Kp + 1) > 0 ⇒ Kp > −1.
Hence 0 < Kp < 10 for positive gains, and the critical gain is Kc = 10.

The auxiliary polynomial at Kp = Kc is formed as 6s² + 6(10 + 1) = 0, i.e. s² + 11 = 0.


At critical gain the system is oscillatory or marginally stable i.e. only imaginary roots are
present. Hence in the above equation

s = jω ⇒ ω² = 11 ⇒ ω = √11 = 3.317 rad/s

T = 2π/ω ⇒ T = 1.895 s

Hence, for the given system, Kc = 10 and T = 1.895 s.
Controller settings are obtained from the table given below:

Type of Controller    Kp         Ti = Kp/Ki    Td = Kd/Kp
P                     0.5Kcr     ∞             0
PI                    0.45Kcr    Pcr/1.2       0
PID                   0.6Kcr     0.5Pcr        0.125Pcr

On solving:

Type of controller    Kp     Ti      Td
P                     5      ∞       0
PI                    4.5    1.572   0
PID                   6      0.947   0.237
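As in Example 1, the Routh-array result can be cross-checked numerically: the gain margin of G(s) equals the critical gain Kc, and 2π divided by the phase-crossover frequency gives the period T (MATLAB, assuming the Control System Toolbox).

s = tf('s');
G = 6/((s + 1)*(s + 2)*(s + 3));
[Gm, ~, Wcg, ~] = margin(G);
Kc = Gm;                             % expect Kc ~ 10
T  = 2*pi/Wcg;                       % expect T ~ 1.895 s (Wcg ~ sqrt(11) rad/s)
fprintf('Kc = %.2f, T = %.3f s\n', Kc, T)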

Improvement of Transient Response


The transient response can be improved by using either the PD or phase-lead controllers. In the
following, we consider these two controllers independently. However, both of them have the
common feature of introducing a positive phase shift, and both of them can be implemented in a
similar manner.

PD Controller Design
The PD controller is represented by

Gc (s) = s + zc , zc > 0

which indicates that the compensated system open-loop transfer function will have one additional zero. The effect of this zero is to introduce a positive phase shift. The phase shift and
position of the compensator’s zero can be determined by using simple geometry. That is, for the
chosen dominant complex conjugate poles that produce the desired transient response we apply
the root locus angle rule. This rule basically says that for a chosen point, sd , on the root locus the
difference of the sum of the angles between the point sd and the open-loop zeros, and the sum of
the angles between the point sd and the open-loop poles must be 180◦ . Applying the root locus
angle rule to the compensated system, we get
∡Gc(sd)G(sd) = ∡(sd + zc) + ∑(i=1..m) ∡(sd + zi) − ∑(i=1..n) ∡(sd + pi) = 180°

which implies

∡(sd + zc) = 180° − ∑(i=1..m) ∡(sd + zi) + ∑(i=1..n) ∡(sd + pi) = αc

From the obtained angle ∡(sd + zc), the location of the compensator's zero is found by applying simple geometry, as demonstrated in Fig. 2.9. Using this figure it can be shown that the value of zc is given by

zc = (ωn / tan αc) ( ζ tan αc + √(1 − ζ²) )
An algorithm for the PD controller design can be formulated as follows.

Fig. 2.9 Determination of a PD controller’s zero location



Design Algorithm

1. Choose a pair of complex conjugate dominant poles in the complex plane that produces
the desired transient response (damping ratio and natural frequency). These are obtained
through Transient performance specifications, ie. Percentage overshoot and Settling time.

2. Find the required phase contribution αc of the PD controller by using the angle formula above.

3. Find the location of the PD controller's zero by using the formula for zc above.

4. Check that the compensated system has a pair of dominant complex conjugate closed-loop
poles.

Example 3

Let the design specifications be set such that the desired maximum percent overshoot is less than 20% and the 5%-settling time is 1.5 s. Then the standard formula for the maximum percent overshoot implies

−ζπ/√(1 − ζ²) = ln(OS) ⇒ ζ = √( ln²(OS) / (π² + ln²(OS)) ) = 0.456
We take ζ = 0.46 so that the expected maximum percent overshoot is less than 20%. In order
to have the 5%-settling time of 1.5 s, the natural frequency should satisfy

ts ≈ 3/(ζωn) ⇒ ωn ≈ 3/(ζ ts) = 4.348 rad/s
The desired dominant poles are given by
sd = λd = −ζωn ± jωn√(1 − ζ²) = −2.00 ± j3.86

Consider now the open-loop control system

G(s) = K(s + 10) / [(s + 1)(s + 2)(s + 12)]
The root locus of this system is represented in Figure 2.10.
It is obvious from the above figure that the desired dominant poles do not belong to the original
root locus since the breakaway point is almost in the middle of the open-loop poles located at -1
and -2 .
In order to move the original root locus to the left such that it passes through sd , we design a
PD controller by following Design Algorithm.
Step 1 has already been completed in the previous paragraph. Since we have determined the desired operating point sd, we now use the angle formula to determine the phase contribution of the PD controller.
Using the MATLAB function angle (or just a calculator), we find the following angles:

Fig. 2.10 Root loci of the original (a) and compensated (b) systems

∡(sd + z1) = 0.4495 rad, ∡(sd + p1) = 1.8243 rad
∡(sd + p2) = 1.5708 rad, ∡(sd + p3) = 0.3684 rad

Note that the MATLAB function angle produces results in radians. Using the angle formula, we get

∡(sd + zc) = π − 0.4495 + 1.8243 + 1.5708 + 0.3684 = 6.4556 rad ≡ 0.1723 rad (mod 2π) = 9.8734° = αc
Having obtained the angle αc, the zero-location formula gives zc = 24.1815, so that the required PD controller is given by

Gc(s) = s + 24.1815
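The angle bookkeeping and the zero location can be reproduced with a few lines of MATLAB (base MATLAB only); sd, ζ and ωn are the values obtained above, and the mod-2π reduction corresponds to the step shown in the angle calculation.

sd = -2 + 3.86i;  zeta = 0.46;  wn = 4.348;
ang_z = angle(sd + 10);                                  % open-loop zero at -10
ang_p = angle(sd + 1) + angle(sd + 2) + angle(sd + 12);  % open-loop poles at -1, -2, -12
alpha_c = mod(pi - ang_z + ang_p, 2*pi);                 % required PD-zero contribution, ~0.1723 rad
zc = (wn/tan(alpha_c))*(zeta*tan(alpha_c) + sqrt(1 - zeta^2));   % ~24.18
fprintf('alpha_c = %.4f rad, zc = %.4f\n', alpha_c, zc)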

The root locus of the compensated system is presented in Figure 2.10b. It can be seen from Figure 2.11 that the point sd = −2 ± j3.86 lies on the root locus of the compensated system.
At the desired point sd, the static gain K, obtained by applying the root-locus magnitude rule, is K = 0.825. This value can be obtained either with a calculator or with the MATLAB function abs as follows:

Fig. 2.11 Enlarged portion of the root loci in the neighborhood of the desired operating point of
the original (a) and compensated (b) systems

% distances from sd to the open-loop poles (-1, -2, -12), the open-loop zero (-10)
% and the PD zero (-24.1815); sd and the pole/zero values are those of this example
sd = -2 + 3.86i;
p1 = 1;  p2 = 2;  p3 = 12;  z1 = 10;  zc = 24.1815;
d1 = abs(sd + p1);
d2 = abs(sd + p2);
d3 = abs(sd + p3);
d4 = abs(sd + z1);
d5 = abs(sd + zc);
K = (d1*d2*d3)/(d4*d5)      % magnitude rule gives K ~ 0.825
For this value of the static gain K, the steady state errors for the original and compensated
systems are given by ess = 0.7442, essc = 0.1074. Note that in the case when zc > 1, this controller
can also improve the steady state errors. In addition, since the controller’s zero will attract one of
the system poles for large values of K, it is not advisable to choose small values for zc since it may
damage the transient response dominance by the pair of complex conjugate poles closest to the
imaginary axis.
The closed-loop step response for this value of the static gain is presented in Figure 2.12. It
can be observed that both the maximum percent overshoot and the settling time are within the
specified limits.
The values for the overshoot, peak time, and settling time are obtained by the MATLAB routine shown in Fig. 2.13. Using this program, we have found that ts = 1.125 s and MPOS = 20.68%. Our starting
assumptions have been based on a model of the second-order system. Since the second-order
systems are only approximations for higher-order systems that have dominant poles, the obtained
results are satisfactory.
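The routine of Fig. 2.13 is not reproduced in these notes; an equivalent check can be sketched as below (MATLAB, assuming the Control System Toolbox; stepinfo with a 5% threshold reports the 5%-settling time).

s = tf('s');
G  = (s + 10)/((s + 1)*(s + 2)*(s + 12));
Gc = s + 24.1815;  K = 0.825;                 % PD controller and static gain found above
T  = feedback(K*Gc*G, 1);                     % unity-feedback compensated system
S  = stepinfo(T, 'SettlingTimeThreshold', 0.05);
fprintf('MPOS = %.2f %%, ts = %.3f s\n', S.Overshoot, S.SettlingTime)
pole(T)                                       % confirm dominance of -2 +/- j3.86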
Finally, we have to check that the system response is dominated by a pair of complex conjugate
poles. Finding the closed-loop eigenvalues we get λ1 = −11.8251, λ2,3 = −2.000 ± j3.8600,

Fig. 2.12 Step response of the compensated system for Example 3

Fig. 2.13 Matlab subroutine

which indicates that the presented controller design results are correct since the transient response
is dominated by the eigenvalues λ2,3 .

2.4 PI Controller Design


As we have already indicated, the PI controller represents a stable dipole with a pole located at
the origin and a stable zero placed near the pole. Its impact on the transient response is negligible
since it introduces neither significant phase shift nor gain change (see the standard root-locus rules). Thus, the transient response parameters with the PI controller are almost the same as
those for the original system, but the steady state errors are drastically improved due to the fact
that the feedback control system type is increased by one.

The PI controller is represented, in general, by

Gc(s) = Kp (s + Ki/Kp)/s,   Ki ≪ Kp
where K p represents its static gain and Ki /K p is a stable zero near the origin. Very often it is
implemented as

Gc(s) = (s + zc)/s
This implementation is sufficient to justify its main purpose. The design algorithm for this
controller is extremely simple.

1. Set the PI controller’s pole at the origin and locate its zero arbitrarily close to the pole, say
zc = 0.1 or zc = 0.01.

2. If necessary, adjust the static loop gain to compensate for the case when Kp is different from one. Hint: use Kp = 1 and avoid the gain adjustment problem.

Comment: Note that while drawing the root locus of a system with a PI controller (compensator), the stable open-loop zero of the compensator will attract the compensator's pole located at
the origin as the static gain increases from 0 to +∞ so that there is no danger that the closed-loop
system may become unstable due to addition of a PI compensator (controller).
The following example demonstrates the use of a PI controller in order to reduce the steady
state errors.

Example 4

Consider the following open-loop transfer function

G(s) = K(s + 6) / [(s + 10)(s² + 2s + 2)]
Let the choice of the static gain K = 10 produce a pair of dominant poles on the root locus,
which guarantees the desired transient specifications. The corresponding position constant and the
steady state unit step error are given by

Kp = (10 × 6)/(10 × 2) = 3 ⇒ ess = 1/(1 + Kp) = 0.25
Using a PI controller with the zero at −0.1 (zc = 0.1), we obtain the improved values as K p = ∞
and ess = 0. The step responses of the original system and the compensated system, now given by

Gc(s)G(s) = 10(s + 0.1)(s + 6) / [s(s + 10)(s² + 2s + 2)]
are presented in Fig. 2.14.
The closed-loop poles of the original system are given by

Fig. 2.14 Step responses of the original (a) and compensated (b) systems for Example 4

λ1 = −9.5216, λ2,3 = −1.2392 ± j2.6204

For the compensated system they are

λ1c = −9.5265, λ2c,3c = −1.1986 ± j2.6109

Having obtained the closed-loop system poles, it is easy to check that the dominant system
poles are preserved for the compensated system and that the damping ratio and natural frequency
are only slightly changed. Using information about the dominant system poles we get

ζωn = 1.2392,   ωn² = (1.2392)² + (2.6204)² ⇒ ωn = 2.9019,   ζ = 0.4270

and

ζc ωnc = 1.1986,   ωnc² = (1.1986)² + (2.6109)² ⇒ ωnc = 2.8901,   ζc = 0.4147
In Figure 2.15 we draw the step response of the compensated system over a long period of
time in order to show that the steady state error of this system is theoretically and practically equal
to zero.
Figures 2.15 and 2.14 are obtained by using the same MATLAB functions as those used in Example 3.
The root loci of the original and compensated systems are presented in Figures 2.16 and 2.17. It can be seen from these figures that the root loci are almost identical, with the exception of a tiny dipole branch near the origin.
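The comparison can be reproduced with the short sketch below (MATLAB, assuming the Control System Toolbox): the PI dipole leaves the transient essentially unchanged while driving the steady-state step error to zero.

s = tf('s');
G   = 10*(s + 6)/((s + 10)*(s^2 + 2*s + 2));  % open-loop system with K = 10
Gpi = (s + 0.1)/s;                            % PI dipole with zc = 0.1
T0 = feedback(G, 1);  T1 = feedback(Gpi*G, 1);
fprintf('ess original    = %.4f\n', 1 - dcgain(T0))   % ~0.25
fprintf('ess compensated = %.4f\n', 1 - dcgain(T1))   % 0
step(T0, T1, 50), legend('original', 'PI compensated')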

Fig. 2.15 Step response of the compensated system for Example 4

Fig. 2.16 Root locus of the original system for Example 4

2.4.1 PID and Phase-Lag-Lead Controller Designs


It can be observed from the previous design algorithms that implementation of a PI (phase-lag) controller does not interfere with implementation of a PD (phase-lead) controller. Since these two groups of controllers are used for different purposes, one to improve the transient response and the other to improve the steady-state errors, implementing them jointly and independently will take care of both controller design requirements.
Consider first a PID controller. It is represented as

GPID(s) = Kp + Kd s + Ki/s = Kd (s² + (Kp/Kd)s + Ki/Kd)/s = Kd (s + zc1)(s + zc2)/s = GPD(s) GPI(s)

Fig. 2.17 Root locus of the compensated system for Example 4

which indicates that the transfer function of a PID controller is the product of transfer functions
of PD and PI controllers. Since the design algorithms for the PD and PI controllers have no conflicting steps,
the design algorithm for a PID controller is obtained by combining the design algorithms for PD
and PI controllers.

Design Algorithm: PID Controller

1. Check the transient response and steady state characteristics of the original system.

2. Design a PD controller to meet the transient response requirements.

3. Design a PI controller to satisfy the steady state error requirements.

4. Check that the compensated system has the desired specifications.

Example

Consider the problem of designing a PID controller for the open-loop control system studied in
Example 3, that is

G(s) = K(s + 10) / [(s + 1)(s + 2)(s + 12)]
In fact, in that example, we have designed a PD controller of the form

GPD (s) = s + 24.1815

such that the transient response has the desired specifications. Now we add a PI controller in
order to reduce the steady state error. The corresponding steady state error of the PD compensated

system in Example 3 is essc = 0.1074. Since a PI controller is a dipole that has its pole at the
origin, we propose the following PI controller

GPI(s) = (s + 0.1)/s
In effect, we are using a PID controller with Kd = 1, zc1 = 24.1815, zc2 = 0.1.
The corresponding root locus of this system compensated by a PID controller is represented in
Figure 2.18.

Fig. 2.18 Root locus for the system from Example 3 compensated by the PID controller

It can be seen that the PI controller does not affect the root locus, and hence Figures 2.10 and
2.18 are almost identical except for a dipole branch.
On the other hand, the step responses of the system compensated by the PD controller and by the PID controller (see Figures 2.12 and 2.19) differ in their steady-state parts. In Figure 2.12 the steady-state step response tends to yss = 0.8926, while the response in Figure 2.19 tends to 1, since, due to the presence of an open-loop pole at the origin, the steady-state error is reduced to zero. Thus, we can conclude that the transient response is the same as that obtained by the PD controller in Example 3, but the steady-state error is improved due to the presence of the PI controller.
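A short sketch of the full PID-compensated loop is given below (MATLAB, assuming the Control System Toolbox); it shows the steady-state value moving from yss = 0.8926 with PD only to yss = 1 with the added PI dipole.

s = tf('s');
G   = (s + 10)/((s + 1)*(s + 2)*(s + 12));
Gpd = s + 24.1815;  Gpi = (s + 0.1)/s;  K = 0.825;
Tpd  = feedback(K*Gpd*G, 1);             % PD only
Tpid = feedback(K*Gpd*Gpi*G, 1);         % full PID
fprintf('yss with PD  = %.4f\n', dcgain(Tpd))    % ~0.8926
fprintf('yss with PID = %.4f\n', dcgain(Tpid))   % 1.0000
step(Tpd, Tpid, 10), legend('PD compensated', 'PID compensated')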

Fig. 2.19 Step response of the system from Example 3 compensated by the PID controller
