
INTRODUCTION TO CONTROL SYSTEMS
BY
Omotosho Temidayo Victor
Professor of Atmospheric and Radio Communication Physics
PHY451 COURSE OUTCOMES
At the end of the whole course, the student is expected to
develop the:
1. Ability to apply various mathematical principles (from
calculus and linear algebra) to solve control system
problems.
2. Ability to obtain mathematical models for mechanical, electrical and electromechanical systems.
3. Ability to derive the equivalent differential equation, transfer function and state-space model for a given system.
4. Ability to perform time- and frequency-domain analysis of a system's response to test inputs. The analysis includes the determination of system stability.
PHY421 COURSE EVALUATION

Practical project: 10%
Assignments: 10%
Mid-Term: 10%
Final Examination: 70%
Total Mark: 100%


LIST OF TEXTBOOKS
1. Dorf & Bishop, "Modern Control Systems", 13th ed., Pearson Education, Hoboken, 2017.
2. W. Bolton, "Instrumentation & Control Engineering", 2nd ed., Elsevier/Newnes, 2015.
3. Benjamin C. Kuo and Farid Golnaraghi, "Automatic Control Systems", John Wiley, 2003.
4. Norman S. Nise, "Control Systems Engineering", 6th ed., 2015.
Today we look at the following:
What is a control system?
History of control systems,
Introductory concepts and terms,
Types of control systems,
Comparison between open-loop and closed-loop control systems,
Transfer function and its properties,
Types of signals, and
Examples.
Control Systems
What is Control? The word "control" is usually taken to mean regulate, direct, or command. When we use the word "control" in everyday life, we are referring to the act of producing a desired result. In this sense, control covers all artificial and natural processes.
What is a system? A system is an arrangement, set, or collection of things connected or related in such a manner as to form an entirety or a whole.
Definition 2: A system is an arrangement of
physical components connected or related in such a
manner as to form and/or act as an entire unit.
Combining the above definitions, we have
Definition 3: A control system is an arrangement of physical components connected or related in such a manner as to command, direct, or regulate itself or another system.
 The temperature inside a refrigerator is controlled
by a thermostat.
 The picture we see on the television is a result of a
controlled beam of electrons made to scan the
television screen in a selected pattern.
 A compact-disc player focuses a fine laser beam at
the desired spot on the rotating compact-disc in
order to produce the desired music.
While driving a car, the driver is controlling the
speed and direction of the car.
Whether the control is automatic (as in the refrigerator, television or compact-disc player) or manual (as in driving a car), it is an integral part of our daily existence. So too is the natural environment we live in: the composition, temperature and pressure of the earth's atmosphere are kept stable in their liveable state by an intricate set of natural processes.
The daily variation of temperature caused by the
sun controls the metabolism of all living organisms
The ultimate control system is the human
body, where the controlling mechanism is
so complex that even while sleeping, the
brain regulates the heartbeat, body
temperature and blood-pressure by
countless chemical and electrical impulses
per second, in a way not quite understood
yet. (You have to wonder who designed
that control system?) Hence, control is
everywhere we look, and is crucial for the
existence of life itself.
Introduction to Control Systems
What are control systems?
Why do we study them?
How do we identify them?
A study of control involves developing a
mathematical model for each
component of the control system. A
system is a set of self-contained
processes under study.
• A control system by definition consists of the
system to be controlled - called the plant - as well
as the system which exercises control over the
plant, called the controller. A controller could be
either human, or an artificial device. The
controller is said to supply a signal to the plant,
called the input to the plant (or the control input),
in order to produce a desired response from the
plant, called the output from the plant. When
referring to an isolated system, the terms input
and output are used to describe the signal that
goes into a system, and the signal that comes out
of a system, respectively.
What are Control Systems?
• The study and design of automatic Control
Systems, a field known as control engineering, is a
large and expansive area of study.
• Control systems, and control engineering
techniques have become a pervasive part of
modern technical society.
1. From devices as simple as a toaster to complex machines like space shuttles and rockets, control engineering is a part of our everyday life.
2. In splashing cooling water, a metallic part is automatically machined.
3. A self-guided vehicle delivering material to workstations in an aerospace assembly plant glides along the floor seeking its destination.
Automatic control systems also exist in nature. Within our own bodies are numerous control systems, such as:
1. the pancreas, which regulates our blood sugar;
2. adrenaline, which in times of fight or flight increases along with our heart rate, causing more oxygen to be delivered to our cells;
3. our eyes, which follow a moving object to keep it in view; and our hands, which grasp the object and place it precisely at a predetermined location.
Even the nonphysical world appears to be
automatically regulated. Models have been suggested
showing automatic control of student performance.
The input to the model is the student’s available
study time, and the output is the grade. The model
can be used to predict the time required for the grade
to rise if a sudden increase in study time is available.
Using this model, you can determine whether an increase in study time is worth the effort during the last week of the term.
History of the Control Systems
James Watt’s flyball governor 1769
History of the Control Systems
Technology Roadmap to the Internet of Things with Applications to Control
Engineering
Manual and automatic control system analogy: (a) human controlled, (b) computer
controlled
Control Systems in Robotics Perspective
Autonomous planning and Exploration
Autonomous control
Industry
There is Control Everywhere
Control system definition
• A control system consists of subsystems and processes (or plants) assembled for the purpose of obtaining a desired output with desired performance, given a specified input.
A Simple Example of Control System
• A car and its driver. If we select the car to be the
plant, then the driver becomes the controller,
who applies an input to the plant in the form of
pressing the gas pedal if it is desired to increase
the speed of the car. The speed increase can then
be the output from the plant. Note that in a control
system, what control input can be applied to the
plant is determined by the physical processes of
the plant (in this case, the car's engine), but the
output could be anything that can be directly
measured (such as the car's speed or its position)
Consider an Elevator response
• Two major measures of performance are apparent:
1. The transient response and
2. The steady-state error.
SOME BASIC DEFINITIONS
Before we can discuss control systems, some
basic terminologies must be defined.
1. Controlled variable
2. Manipulated variable
3. Plants
4. Processes
5. Systems
6. Disturbances
7. Feedback Control
8. Input
9. Output
10. Open-loop
11. Closed-loop
1. Controlled variable: The controlled variable is the quantity or condition that is measured and controlled. Normally, the controlled variable is the output of the system.
2. Manipulated variable: The manipulated variable
or control signal is the quantity or condition that is
varied by the controller so as to affect the value of
the controlled variable.
3. Plants: A plant may be a piece of equipment,
perhaps just a set of machine parts functioning
together, the purpose of which is to perform a
particular operation.
4. Processes: a natural, progressively continuing
operation or development marked by a series of gradual
changes that succeed one another in a relatively fixed
way and lead toward a particular result or end.
5. Systems: A system is a combination of components
that act together and perform a certain objective. A
system should, therefore, be interpreted to imply
physical, biological, economic, and the like, systems.
6. Disturbances: A disturbance is a signal that tends to adversely affect the value of the output of a system. A disturbance generated within the system is called internal, while an external disturbance is generated outside the system and is an input.
7. Feedback Control: refers to an operation that, in
the presence of disturbances, tends to reduce the
difference between the output of a system and
some reference input and does so on the basis of
this difference.
8. Input: is the stimulus, excitation or command
applied to a control system, typically from an
external energy source, usually in order to produce
a specified response from the control system.
9. Output: is the actual response obtained from a
control system. It may or may not be equal to the
specified response implied by the input
Inputs and outputs can have many different forms.
Inputs, for example, may be physical variables,
or more abstract quantities such as reference,
setpoint, or desired values for the output of the
control system. Control systems may have more
than one input or output. Often all inputs and
outputs are well defined by the system description.
10. An open-loop system is one in which the control action is independent of the output.
11. A closed-loop system is one in which the control action is somehow dependent on the output.
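To make the distinction concrete, here is a minimal Python sketch (not from the slides), using an assumed first-order "room temperature" plant: the open-loop controller applies a pre-calibrated heater power and never looks at the output, while the closed-loop controller computes its action from the measured output.

# Minimal sketch: open-loop vs closed-loop control of a simple
# first-order "room temperature" plant. All numbers are illustrative.

def plant_step(temp, heater_power, dt=1.0, ambient=15.0):
    # First-order thermal model: the temperature drifts toward ambient,
    # while heater power pushes it up.
    return temp + dt * (-0.1 * (temp - ambient) + 0.5 * heater_power)

setpoint = 22.0

# Open-loop: the heater power is fixed by prior calibration; the controller
# never looks at the actual temperature, so disturbances go uncorrected.
temp = 15.0
for _ in range(50):
    temp = plant_step(temp, heater_power=1.4)          # calibrated guess
print("open-loop final temperature:", round(temp, 2))

# Closed-loop: the measured output feeds back and the control action
# depends on the error between setpoint and measurement.
temp = 15.0
for _ in range(50):
    error = setpoint - temp                            # feedback
    temp = plant_step(temp, heater_power=2.0 * error)  # proportional action
print("closed-loop final temperature:", round(temp, 2))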
Advantages of control systems
We build control systems for the
following primary reasons and
advantages:
i. Power Amplification (Gain)
   - Positioning of a large radar antenna by low-power rotation of a knob
ii. Remote Control
   - Robotic arm used to pick up radioactive materials
iii. Convenience of Input Form
   - Changing room temperature by thermostat position
iv. Compensation for Disturbances
   - Controlling antenna position in the presence of large wind disturbance torques
Examples of Control Systems
1. Speed control systems
2. Temperature control and steam pressure control systems
3. Business control systems
4. Liquid level control
5. Stability, stabilization and steering
6. Guidance, navigation, and control of missiles, spacecraft, planes and ships at sea
7. Computer control systems in industries and the space shuttle
8. Home entertainment systems: CD or DVD machines
9. Human body control systems
Practical Examples of Control Systems
- In ships and airplanes
- In the ASIMO humanoid robot
- In manufacturing processes and CNC machines
Application Examples
- Drones (unmanned aerial vehicles)
- In cruise missiles
- In battle tanks
- In robot warriors
1. SPEED CONTROL SYSTEM
Car and Driver
- Objective: To control the direction and speed of the car
- Outputs: Actual direction and speed of the car
- Control inputs: Road markings and speed signs
- Disturbances: Road surface and grade, wind, obstacles
- Possible subsystems: The car alone, power steering system, braking system
TRANSPORTATION
Functional block diagram: the desired course of travel is compared with the actual course of travel; the error drives the driver, the steering mechanism and the automobile, with measurement (visual and tactile) in the feedback path. A time response plot accompanies the diagram.
TRANSPORTATION
- Consider using a radar to measure distance and velocity to autonomously maintain the distance between vehicles.
- Automotive: engine regulation, active suspension, anti-lock braking system (ABS)
- Steering of missiles, planes, aircraft and ships at sea.
1. Speed control system
2. Temperature Control of the Passenger Compartment of a Car: (a) an open-loop system; it has no feedback and is not self-correcting. (b) A closed-loop system; it has feedback and is self-correcting.
4. A Liquid Level Control System
Control Surfaces for an Airplane: ROLL, YAW, PITCH
Autopilot Engaged & Disengaged
Ship Stabilization Control System: (a) Ship in roll position; (b) simplified block diagram of ship stabilization control system
5. Stability, Stabilization and Steering
   Aircraft wing control system
6. Missile Launching and Guidance System
   Missile direction control system
   Guidance, Navigation, and Control of Missiles
7. Computer Control Systems in Industries and Space Exploration
   Remote robot control system: (a) overall picture of remote robot control system; (b) schematic diagram of remote robot control
   Aviation: Automatic Aircraft Landing System - (a) schematic diagram; (b) lateral landing system
8. Home Entertainment Systems: CD or DVD Machines
   CD players: the position of the laser spot in relation to the microscopic pits in a CD is controlled.
9. Human Body Control Systems
i. Pancreas: Regulates blood glucose level
ii. Adrenaline: Automatically generated to increase the heart rate and oxygen delivery in times of fight or flight
iii. Eye: Follows a moving object
iv. Hand: Picks up an object and places it at a predetermined location
v. Temperature: Regulates body temperature at 36°C to 37°C
vi. Reproduction: Female menstrual cycle
9. The human body: Temperature control Systems
HUMAN BODY REPRODUCTION CONTROL SYSTEM
HUMAN REPRODUCTION CONTROL SYSTEM
TWO MAJOR CONFIGURATIONS OF CONTROL SYSTEMS
(a) Open-loop Control Systems
Open Loop - Turntable example
Example: Open-Loop Control - riding with eyes closed
Two Outstanding Features of Open-loop Control Systems
1. Their ability to perform accurately is determined by their calibration. To calibrate means to establish or re-establish the input-output relation to obtain a desired system accuracy.
2. They are not usually troubled with problems of instability, a concept we shall subsequently discuss in detail.
Under certain circumstances, such as where no disturbances exist or the output is hard to measure, open-loop control systems may be preferred. It is therefore worthwhile to summarize the advantages and disadvantages of open-loop control systems as follows.
Four Advantages of Open-loop Control Systems
1. Simple construction and ease of maintenance.
2. Less expensive than a corresponding closed-loop
system
3. There is no stability problem.
4. Convenient when output is hard to measure
or measuring the output precisely is
economically not feasible.
For example, in the washing machine
system, it would be quite expensive to
provide a device to measure the quality of
the washer's output, cleanliness of the
clothes.
Two Disadvantages of Open-loop Control Systems
The major disadvantages of open-loop control systems are as follows:
1. Disturbances and changes in calibration
cause errors, and the output may be different
from what is desired.
2. To maintain the required quality in the
output, recalibration is necessary from time to
time.
Feedback or Closed-loop Systems
Example: Closed-Loop Control - the eyes sense, the brain computes, the arms steer.
CONTROL SYSTEM IS ALSO AUTOMATION
Processor (brain), feedback (eyes), controlling unit (hand), action unit (leg), output.
When a human operator runs a machine, takes feedback on the machine's performance, and can change or adjust it according to requirements, the arrangement can also be called automation.
Thus we can define automation as any systematic operation that requires intelligence as well as monitoring action.
Closed Loop Control
Car Cruise control Example : Closed loop
Oven example: Closed loop
DISK DRIVE READ SYSTEM

 Goal of the system: Position the reader head in order to read data
stored on a track.
 Variables to control: Position of the reader head
DISK DRIVE READ SYSTEM
 Specification:
i. Speed of disk: 1800 rpm to 7200 rpm
ii. Distance head-disk: Less than 100nm
iii. Position accuracy: 1 µm
iv. Move the head from track ‘a’ to track ‘b’ within 50ms
 System Configuration:
Six Characteristics of Closed-loop Systems
1. Increased accuracy. For example, the ability to
faithfully reproduce the input.
2. Tendency toward oscillation or instability
3. Reduced sensitivity of the ratio of output to input to variations in system parameters and other characteristics
4. Reduced effects of nonlinearities.
5. Reduced effects of external disturbances or noise.
6. Increased bandwidth. The bandwidth of a system
is a frequency response measure of how well the
system responds to variations in the input signal.
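As a brief illustration (not from the slides): for a feedback loop with forward-path gain G(s) and feedback gain H(s), the closed-loop transfer function and its sensitivity to changes in G are

T(s) = G(s) / (1 + G(s)H(s)),    S = (dT/T) / (dG/G) = 1 / (1 + G(s)H(s)),

so a large loop gain G(s)H(s) makes the output/input ratio insensitive to plant parameter variations; the price, as item 2 notes, is a tendency toward oscillation if the loop is poorly designed.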
Three Advantages of Closed-loop systems
1. Greater accuracy than the open-loop
systems.
2. They are less sensitive to noise,
disturbances and changes in environment.
3. Transient response & steady-state error can
be controlled more conveniently and with
greater flexibility in a closed-loop system,
often by a simple adjustment of gain
(amplification) in the loop and sometimes by
redesigning the controller.
Example: Open-loop vs Closed Loop
Process of Walking:
Desired output: a point where you want to be.
Controller: the brain
Plant or process: the legs

Open-loop control: walking with your eyes closed.
Closed-loop feedback control: walking with your eyes open. The eyes sense the actual output (where you are and where you are heading), the brain computes the error in position and in direction, and issues commands to the plant, meaning the legs, to move in such a way as to reduce the error. (Figure: a walking man)
Example: Open-loop vs Closed Loop
Dropping a Bomb: the objective of dropping a bomb from a height is to hit a target below.
Desired output: the target below (photo courtesy U.S. Air Force)
Plant or process: the bomb with its control fins
Open-loop control, or "dumb bombs": The controller, meaning the pilot or bombardier, needs to estimate his own height, velocity, distance to target, wind conditions, and the characteristics of the bomb to decide when and where to release it. Often, hundreds of bombs are needed to hit a specific target.
Example: Open-loop vs Closed Loop
Dropping a Bomb: the objective of dropping a bomb from a height is to hit a target below.
Desired output: the target below
Plant or process: the bomb with its control fins
Closed-loop control, or "smart bombs": Sensors are incorporated into the bomb to give feedback on its actual position relative to the target. The "error" information is then used to steer the bomb, using its control fins, to the target. Result: one target only needs one bomb.
Sensors: TV, Infrared, laser guided, or GPS.

See also: http://science.howstuffworks.com/smart-bomb1.htm


Comparison of Open and Closed Loop Systems
The distinguishing characteristic of an open-loop system is that it cannot compensate for any disturbances. The output of an open-loop system is corrupted not only by signals that add to the controller's commands but also by disturbances at the output.
Open-loop systems, then, do not correct for disturbances and are simply commanded by the input.
For example, toasters are open-loop systems, as
anyone with burnt toast can attest. The controlled
variable (output) of a toaster is the colour of the toast.
CONTROL SYSTEM COMPONENTS
i. System, plant or process - to be controlled
ii. Actuators - convert the control signal to a power signal
iii. Sensors - provide measurement of the system output
iv. Reference input - represents the desired output
GENERAL CONTROL SYSTEM
Block diagram: the set-point (reference input) is compared with the feedback signal from the sensor to produce the error signal; the controller acts on the error, the actuator produces the manipulated variable, and the process, subject to disturbances, produces the actual output (controlled variable), which the sensor feeds back.
Components in a Closed Loop System
Computer-controlled systems
In modern systems, the controller/compensator is a
digital computer. The advantage of using a computer
is that many loops can be controlled or compensated
by the same computer through time sharing.
Furthermore, any adjustments of the compensator parameters required to yield a desired response can be made through changes in software rather than hardware. The computer can also perform
supervisory functions, such as scheduling many
required applications
Computer control systems
A very important category of real-time systems is automatic control systems. In fact, all control systems are real-time systems because they must react to external events within a specified amount of time.
(Block diagram: the setpoint and the controlled variable are compared to give the control error e(k), which the controller turns into the manipulated variable applied to the process.)

The operation of computer control systems is usually synchronized by a clock signal that determines the sampling period. This sampling period specifies the maximum total amount of time that is available for A/D and D/A conversions and control computations.
Control loop variables
y(t) or y(k) - controlled variable (temperature, pressure, water level,
flow, speed etc.)
r(k) - reference or setpoint i.e. the desired value of the controlled
variable
e(k) - control error, the difference between the desired value of the controlled variable and its actual value: e(k) = r(k) - y(k)
u(t) or u(k) - manipulated variable, the action that is used by the controller to change the controlled variable (control valve position, power input of a heating element, speed of a cooling fan, fuel flow to an engine or to a boiler)
Besides these, there is usually also one or more disturbance variable(s) d(t): external influences affecting the controlled variable (changing temperature of the environment, changing load of an electric drive or of an engine, etc.)
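A minimal Python sketch (not from the slides) of one such sampled control loop, using the variable names above; the sensor and actuator functions and the proportional control law are assumed placeholders.

import time

Kp = 2.0          # assumed proportional gain
r = 50.0          # setpoint, e.g. desired temperature
T_SAMPLE = 0.1    # sampling period set by the clock, in seconds

def read_sensor():
    # placeholder for an A/D conversion of the controlled variable y(k)
    return 47.3

def write_actuator(u):
    # placeholder for a D/A conversion driving the manipulated variable u(k)
    print("manipulated variable u(k) =", round(u, 2))

for k in range(3):            # a few sampling instants
    y = read_sensor()         # controlled variable y(k)
    e = r - y                 # control error e(k) = r(k) - y(k)
    u = Kp * e                # control computation (here simply proportional)
    write_actuator(u)
    time.sleep(T_SAMPLE)      # wait for the next clock tick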
SUPPLEMENTARY DEFINITIONS
A transducer is a device that converts one energy form into another. For example, one of the most common transducers in control system applications is the potentiometer, which converts mechanical position into an electrical voltage.
The command v is an input signal, usually equal to
the reference input r. But when the energy form of
the command v is not the same as that of the primary
feedback b, a transducer is required between the
command v and the reference input r as shown
A stimulus, or test input, is any externally introduced
input signal affecting the controlled output c. Note:
The reference input r is an example of a stimulus, but
it is not the only kind of stimulus.
The time response of a system, subsystem, or
element is the output as a function of time, usually
following application of a prescribed input under
specified operating conditions.
A multivariable system is one with more than one
input (multi-input, MI-), more than one output
(multioutput, -MO), or both (multi-input-multi-
output, MIMO).
Five Definitions of Control laws / Algorithms
An on-off controller (two-position, binary controller) has
only two possible values at its output u, depending on the
input e to the controller.
PD, PI, DI, and PID controllers are combinations of
Proportional (P), Derivative (D), and Integral (I)
controllers.
The output u of a PD controller has the standard form:
u(t) = Kp e(t) + Kd de(t)/dt
The output of a PID controller has the standard form:
u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt
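A minimal discrete-time Python sketch (not from the slides) of a PID control law in the form above, using a rectangular sum for the integral term and a backward difference for the derivative; the gains and sampling period are illustrative only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # u(k) = Kp*e(k) + Ki*sum(e)*dt + Kd*(e(k) - e(k-1))/dt
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# example use with illustrative gains
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
print(controller.update(error=1.0))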


SERVOMECHANISMS
Definition A servomechanism is a power-amplifying
feedback control system in which the controlled
variable c is mechanical position, or a time derivative
of position such as velocity or acceleration. An
automobile power-steering apparatus is a
servomechanism
Examples:
- Digital control of an air heater
- Digital speed control of a DC motor
Two types of real-time control
systems:
1. Embedded Systems
• dedicated control systems
• the computer is an embedded part of some piece of equipment
• microprocessors, real-time kernels, RTOS
• aerospace, industrial robots, vehicular
systems

2. Industrial Control Systems


• distributed control systems (DCS),
programmable controllers (PLC),
Soft-PLCs
• hierarchically organized, distributed
control systems
• process industry, manufacturing industry,
Automation Applications
- Power generation: hydro, coal, gas, oil, shale, nuclear, wind, solar
- Transmission: electricity, gas, oil
- Distribution: electricity, water
- Process: paper, food, pharmaceutical, metal processing, glass, cement, chemical, refinery, oil & gas
- Manufacturing: computer-integrated manufacturing (CIM), flexible fabrication, appliances, automotive, aircraft
- Storage: silos, elevators, harbours, deposits, luggage handling
- Building: heating, ventilation, air conditioning (HVAC), access control, fire, energy supply, tunnels, highways, ...
- Transportation: rolling stock, street cars, sub-urban trains, busses, cars, ships, airplanes, satellites, ...
Smart grids are distribution networks that measure and control usage
Examples of Automated Plants
Cars
Appliances control (windows, seats, radio,..)
Motor control (exhaust regulations)
ABS and EPS, brake-by-wire, steer-by-wire
19% of the price is electronics, (+10% per year)
Examples of Automated Plants
• Airplanes Avionics
 flight control, autopilot
 flight management
 flight recording, black boxes
 diagnostics
 “fly-by-wire”
Problems Based Learning: Input
and output of a System
D 1.5

FIG 1-2
The system shown in Fig. 1-2, consisting of a mirror
pivoted at one end and adjusted up and down with a
screw at the other end, is properly termed a control
system.
Identify (1) the input and (2) the output for the pivoted, screw-adjusted mirror system.
D1.6. Identify a possible input and a possible output for a rotational generator of electricity.
D1.7. Identify the input and output for an automatic
washing machine.
D1.8. Identify the organ-system components, and the
input and output, and describe the operation of the
biological control system consisting of a human
being reaching for an object.
D1.9. Explain how a closed-loop automatic washing
machine might operate.
D1.10. How are the following open-loop systems
calibrated: ( a ) automatic washing machine,
( b ) automatic toaster, ( c ) voltmeter?
D1.11. Identify the control action in the systems of
Problems D1.5, D1.6, and D1.8.
D1.12. Which of the control systems in Problems
D1.5, D1.6, and D1.8 are open-loop? Closed-loop?
Assignments
1. ( a ) Explain the operation of ordinary traffic
signals which control automobile traffic at roadway
intersections. ( b ) Why are they open-loop control
systems? (c) How can traffic be controlled more
efficiently? ( d ) Why is the system of (c) closed-
loop?
2. Devise a control system to fill a container with
water after it is emptied through a stopcock at the
bottom. The system must automatically shut off the
water when the container is filled
Assignments continues
3. Devise a simple control system which automatically
turns on a room lamp at dusk, and turns it off in daylight.
4. Devise a closed-loop automatic toaster.
5. Is the system that controls the total cash value of a
bank account a continuous or a discrete-time
system? Why? Assume a deposit is made only once, and
no withdrawals are made.
6. What type of control system, open-loop or closed-loop, continuous or discrete, is used by an ordinary stock market investor whose objective is to profit from his or her investment?
How Can Computers Control Things?
The best way to understand how a computer can
control things is to think about how a person controls
something. For example, how does a human control a
car when he/she is driving? The person looks ahead
at the road to see what is approaching, thinks about
what he/she has seen, then acts upon it (turns the
steering wheel and/or presses the pedals). In other
words the person reacts to what is happening in the
world around them. Computer-controlled systems
work in a similar way – the system detects what is
happening in the world around it, processes this
information, and then acts upon what it has
detected.
1. Input devices called sensors feed data into the
computer
2. The computer then processes the input data (by
following a set of instructions)
3. As a result of the processing, the computer can turn
on or off output devices called actuators.
Sensors: We use our eyes, our ears, our mouth,
our nose and our skin - our senses. A normal PC
has no senses, but we can give it some: We can
connect sensors to it. A sensor is a device that
converts a real-world property (e.g. temperature)
into data that a computer can process. Examples
of sensors and the properties they detect are
Sensor - What it Detects
Temperature - Temperature
Light - Light / dark
Pressure - Pressure (e.g. someone standing on it)
Moisture - Dampness / dryness
Water-level - How full / empty a container is
Movement - Movement nearby
Proximity - How close / far something is
Switch or button - If something is touching / pressing it
Note: many sensors are analogue devices and so need to be connected to the computer using an analogue-to-digital convertor.
Actuators

We use our muscles to move things, press things, lift things, etc. (and we can also make sound using our voice). A normal PC has no muscles, but we can give it some. In fact, we can give it the ability to do lots of things by connecting a range of actuators to it. An actuator is a device, controlled by a computer, that can affect the real world. Examples of actuators and what they can do are...
Actuator - What it Can Do
Light bulb or LED - Creates light
Heater - Increases temperature
Cooling Unit - Decreases temperature
Motor - Spins things around
Pump - Pushes water / air through pipes
Buzzer / Bell / Siren - Creates noise
Note: some of these devices require an analogue signal to operate them. This means that they need to be connected to the computer using a digital-to-analogue converter.
Making Decisions (The Process)
The steps followed by the computer in a control system are just about the same for all systems:
1. Check the data from the sensors
2. If necessary, turn on/off one or more of the actuators
3. Go back to step 1
Examples of Computer Control system
Many of the devices that we use in our everyday lives are
controlled by small computers...
1. Washing machines
2. Air-conditioning systems
3. Programmable microwave ovens
If we look beyond our homes, we come across even more systems
that operate automatically under the control of a computer...
4. Modern cars have engines, brakes, etc. that are managed and
controlled by a computer
5. Most factory production lines are computer-controlled,
manufacturing products with little or no human input
6. Traffic lights are switched on and off according to programs
running on computers which manage traffic flow through cities.
Why we use Computers to Control Thing?
It is often far better to have a system that is managed and controlled by a computer rather than by a human because...
1. Computers never need breaks - they can control a system without stopping, all day, every day
2. Computers don't need to be paid. To buy and install a computerized control system can be very expensive, but, in the long term, money is saved by not having to employ staff to do the work
3. Computers can operate in conditions that would be very
hazardous to human health, e.g. nuclear power stations,
chemical factories, paint-spraying areas
4. Computers can control systems far more accurately, and
respond to changes far more quickly than a human could
An Example Control System - An Automated
Greenhouse
A computer-controlled greenhouse might have a number
of sensors and actuators:
1. A light sensor to detect how much light the plants are
getting
2. A temperature sensor to see how cold or hot the
greenhouse is
3. A moisture sensor to see how wet or dry the soil is
4. Lights to illuminate the plants if it gets too dark
5. A heater to warm up the greenhouse if it gets too cold
6. A water pump for the watering system
7. A motor to open the window if it gets too warm inside
An Example Control System - An Automated
Greenhouse
The process for this system would be...
1. Check light sensor
If it is dark, turn on the lights
If it is not dark, turn off the lights
2. Check temperature sensor
If it is too cold, turn on heater and use motor to close
window
If it is too warm, turn off heater and use motor to open
window
3. Check the moisture sensor
If soil is too dry, turn on the water pump
If soil is too wet, turn off the water pump
4. Go back to step 1 and repeat
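A minimal Python sketch (not from the slides) of this greenhouse loop; the sensor-reading and actuator functions, thresholds and readings are hypothetical placeholders for whatever hardware interface is actually used.

import time

# Hypothetical hardware-interface functions; real sensor/actuator I/O would
# replace these placeholders.
def read_light():       return 120.0    # e.g. lux
def read_temperature(): return 18.5     # degrees C
def read_moisture():    return 0.3      # fraction of saturation

def set_lights(on):     print("lights", "on" if on else "off")
def set_heater(on):     print("heater", "on" if on else "off")
def set_window(open_):  print("window", "open" if open_ else "closed")
def set_pump(on):       print("water pump", "on" if on else "off")

# Illustrative thresholds
DARK, TOO_COLD, TOO_WARM, TOO_DRY, TOO_WET = 100.0, 15.0, 28.0, 0.2, 0.8

for _ in range(3):                       # in practice this loop runs forever
    set_lights(read_light() < DARK)      # step 1: light sensor -> lights
    t = read_temperature()               # step 2: temperature sensor
    if t < TOO_COLD:
        set_heater(True);  set_window(False)
    elif t > TOO_WARM:
        set_heater(False); set_window(True)
    m = read_moisture()                  # step 3: moisture sensor -> pump
    if m < TOO_DRY:
        set_pump(True)
    elif m > TOO_WET:
        set_pump(False)
    time.sleep(1.0)                      # step 4: go back and repeat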
An Example Control System - An Automated
Greenhouse
Note: if you have to describe a control process, never say anything like "the temperature sensor switches on the heater".
This is totally wrong! Sensors cannot control anything - all they can do is pass data to the computer.
The computer takes the actions and turns the actuators on/off.
Automatic Controllers
An automatic controller compares the actual value
of the plant output with the reference input
(desired value), determines the deviation, and
produces a control signal that will reduce the
deviation to zero or to a small value. The manner in
which the automatic controller produces the
control signal is called the control action. Figure
below is a block diagram of an industrial control
system, which consists of an automatic controller,
an actuator, a plant, and a sensor (measuring
element).
Block diagram of an industrial control system, which consists of
an automatic controller, an actuator, a plant, and a sensor.
The controller detects the actuating error
signal, which is usually at a very low power
level, and amplifies it to a sufficiently high
level. The output of an automatic controller is
fed to an actuator, such as an electric motor, a
hydraulic motor, or a pneumatic motor or
valve. (The actuator is a power device that
produces the input to the plant according to
the control signal so that the output signal
will approach the reference input signal.)
The sensor or measuring element is a device
that converts the output variable into another
suitable variable, such as a displacement,
pressure, voltage, etc., that can be used to
compare the output to the reference input
signal. This element is in the feedback path of
the closed-loop system. The set point of the
controller must be converted to a reference
input with the same units as the feedback
signal from the sensor or measuring element.
Practical Automatic Control Systems Examples
Some of the examples below show how the
principles of control can be used to understand and
solve control problems in other fields, such as
economics, medicine, politics, and sociology. Many
control systems are designed in such a way as to
control automatically certain variables of the
system (e.g., the voltage across an element, the
position or velocity of a mass, the temperature of a
chamber, etc.)
(a)Automatic steering control system (b) The controller uses the difference between the actual and
desired direction of travel to adjust the steering wheel accordingly
(C) Typical Direction-of-Travel Response
A typical autopilot system
AUTO PILOT CONTROL SYSTEM
- An autopilot is an example of a control system. Control systems apply an action based on a measurement and almost always have an impact on the value they are measuring.
- It is called a negative feedback loop because the result of a certain action inhibits further performance of that action. All negative feedback loops require a receptor, a control center and an effector.
Autopilots are devices for controlling spacecraft, aircraft, watercraft, missiles and vehicles without constant human intervention. Most people associate autopilots with aircraft, but the same principles apply to autopilots that control any kind of vessel.
Ailerons are used for ROLL
The rudder is used for YAW
Elevators are used for PITCH
Three Basic Control Surfaces of an Aircraft
There are three basic control surfaces that affect an airplane's attitude. The first are the elevators,
which are devices on the tail of a plane that
control pitch (the swaying of an aircraft around a
horizontal axis perpendicular to the direction of
motion). The rudder is also located on the tail of a
plane. When the rudder is tilted to starboard
(right), the aircraft yaws -- twists on a vertical axis
-- in that direction. When the rudder is tilted to
port (left), the craft yaws in the opposite direction.
Finally, ailerons on the rear edge of each wing roll
the plane from side to side.
Autopilot Type
 Autopilots can control any or all of these
surfaces. A single-axis autopilot manages
just one set of controls, usually the
ailerons. This simple type of autopilot is
known as a "wing leveler" because, by
controlling roll, it keeps the aircraft wings
on an even keel. A two-axis
autopilot manages elevators and ailerons.
Finally, a three-axis autopilot manages all
three basic control systems: ailerons,
elevators and rudder.
Modern Automatic Flight Control System
 The heart of a modern automatic flight control system is
a computer with several high-speed processors. To
gather the intelligence required to control the plane, the
processors communicate with sensors located on the
major control surfaces. They can also collect data from
other airplane systems and equipment,
including gyroscopes, accelerometers,
altimeters, compasses and airspeed indicators.
The processors in the AFCS then take the input data and, using complex calculations, compare it to a set of control modes. A control mode is a setting entered by
the pilot that defines a specific detail of the flight. For
example, there is a control mode that defines how an
aircraft's altitude will be maintained. There are also
control modes that maintain airspeed, heading and
flight path.
The Processors & Servomechanism
 These calculations determine if the plane is
obeying the commands set up in the control
modes. The processors then send signals to
various servomechanism units. A
servomechanism, or servo for short, is a
device that provides mechanical control at a
distance. One servo exists for each control
surface included in the autopilot system. The
servos take the computer's instructions and
use motors or hydraulics to move the craft's
control surfaces, making sure the plane
maintains its proper course and attitude.
Basic Elements of An Autopilot System
 The above illustration shows how the basic
elements of an autopilot system are related. For
simplicity, only one control surface - the rudder - is
shown, although each control surface would have a
similar arrangement. Notice that the basic
schematic of an autopilot looks like a loop, with
sensors sending data to the autopilot computer,
which processes the information and transmits
signals to the servo, which moves the control
surface, which changes the attitude of the plane,
which creates a new data set in the sensors, which
starts the whole process again. This type of
feedback loop is central to the operation of
autopilot systems. It's so important that we're
going to examine how feedback loops work in the
next section.
Automatic aircraft landing system
Metal Sheet Thickness Control System
Laser Eye Surgery Control System
PRACTICAL CONTROL EXAMPLES
- Schematic Diagram of Inflation Control System
- A Feedback Control System for National Income
- Block Diagram of Human Speech
- Schematic Diagram of Teaching
Smart Grids are Distribution Networks that Measure
and Control Usage
Traffic Light Control System

Traffic Light Control System
 Distributed System
 A set of intersections
 A set of connection
(roads)
 Traffic lights regulating
 Traffic lights are
controlled independently

Traffic Control and Command Centre

Traffic Light Control System
 No obvious optimal
solution
 In practice most traffic
lights are controlled by
fixed-cycle controllers
 Fixed controllers need
manual changes to
adapted specific
situation

Green Waves
 Offset of cycle can be adjusted to create
green waves.

Driver Detector - Sonar Sensor

•Few drivers
•Unusual

Driver Detector - Camera
 Identification image
 Expensive
 Complex Traffic System

Driver Detector - Loop Detector
• Measures inductance
•Most popular
•Cheap

Traffic Light Control System
What does it do ?

Classifications of Industrial Controllers
Most industrial controllers may be classified
according to their control actions as:
1. Two-position or on–off controllers
2. Proportional controllers
3. Integral controllers
4. Proportional-plus-integral controllers
5. Proportional-plus-derivative controllers
6. Proportional-plus-integral-plus-derivative
controllers
Most industrial controllers use electricity or
pressurized fluid such as oil or air as power sources.
Consequently, controllers may also be classified
according to the kind of power employed in the
operation, such as
1. pneumatic controllers,
2. hydraulic controllers,
3. electronic controllers.
What kind of controller to use must be decided based on the nature of the plant and the operating conditions, including such considerations as safety, cost, availability, reliability, accuracy, weight, and size.
Analysis and Design Objective
• Analysis is the process by which a system's performance is determined. For example, we evaluate its transient response and steady-state error to determine whether they meet the desired specifications.
• Design is the process by which a system's performance is created or changed. A control system is dynamic: it responds to an input by undergoing a transient response before reaching a steady-state response that generally resembles the input. We have already identified these two responses and cited a position control system (an elevator) as an example.
Three major objectives of system analysis and design
1. Producing the desired transient response,
2. Reducing steady-state error, and
3. Achieving stability.
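As a rough illustration (not from the slides), the short Python simulation below applies a step input to an assumed first-order plant under proportional feedback and reports the transient and the steady-state error; the plant model and gain are illustrative only.

# Proportional control of an assumed first-order plant dy/dt = -y + u,
# simulated with simple Euler integration.
Kp = 4.0          # illustrative proportional gain
r = 1.0           # step input (desired output)
dt = 0.01
y = 0.0
history = []
for k in range(1000):          # 10 seconds of simulated time
    u = Kp * (r - y)           # control action from the error
    y += dt * (-y + u)         # plant dynamics
    history.append(y)

# Transient response: how quickly y rises toward r.
# Steady-state error: what is left after the transient dies out.
print("output after 1 s  :", round(history[99], 3))
print("steady-state error:", round(r - history[-1], 3))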
Other Important Design Considerations
1. Hardware selection, such as motor sizing to fulfil power requirements, and
2. Choice of sensors for accuracy, must be considered early in the design.
3. Finances: a control system designer cannot create designs without considering their economic impact.
4. Robust design: the engineer must create a robust design so that the system will not be sensitive to parameter changes.
System Integration
Success in control Systems depends on taking a
holistic viewpoint. Some of the issues are:
1. plant, i.e. the process to be controlled
2. objectives
3. sensors
4. actuators
5. communications
6. computing
7. architectures and interfacing
8. algorithms
9. accounting for disturbances and uncertainty
A modern industrial plant: A section of the OMV Oil Refinery in Austria
Plant
The physical layout of a plant is an intrinsic part of
control problems. Thus a control Scientist / engineer
needs to be familiar with the "physics" of the
process under study. This includes a rudimentary
knowledge of the basic energy balance, mass
balance and material flows in the system.
All of the above issues are relevant to the control of an
integrated plant such as that shown below.

Figure 1: Process schematic of a Kellogg ammonia plant


Objectives
Before designing sensors, actuators or control
architectures, it is important to know the goal, that is,
to formulate the control objectives. This includes
1. what does one want to achieve (energy
reduction, yield increase,...)
2. what variables need to be controlled to achieve
these objectives
3. what level of performance is necessary (accuracy,
speed,...)
Sensors
A sensor is a device that provides a measurement of
a desired external signal. For example, resistance
temperature detectors (RTDs) are sensors used to
measure temperature.
Sensors are the eyes of control enabling one to see
what is going on. Indeed, one statement that is
sometimes made about control is:
If you can measure it, you can control it.
Actuators
An actuator is a device employed by the control
system to alter or adjust the environment. If sensors
provide the eyes of control, then actuators provide
the muscle. Actuators are also a source of
limitations in control performance. Two aspects of
actuator limitations are maximal movement, and
minimal movement.
Once sensors are in place to report on the state of a
process, then the next issue is the ability to affect, or
actuate, the system in order to move the process
from the current state to a desired state
Watt’s fly ball governor
Speed control using a Fly-ball
A typical industrial control problem will usually
involve many different actuators - see below:

Typical flatness control set-up for rolling mill


A modern rolling mill
Reversing Mill
Figure 6: Feedforward controller for reversing mill
Figure 7: Magnitudes and conventions for ship motion description
Communications
Interconnecting sensors to actuators involves the use of communication systems. A typical plant can have many thousands of separate signals to be sent over long distances. Thus the design of communication systems and their associated protocols is an increasingly important aspect of modern control engineering.
Computing
In modern control systems, the connection between sensors and actuators is invariably made via a computer of some sort. Thus, computer issues are necessarily part of the overall design. Current control systems use a variety of computational devices including DCSs (Distributed Control Systems), PLCs (Programmable Logic Controllers), PCs (Personal Computers), etc.
A modern computer based rapid prototyping system
Architectures and interfacing
The issue of what to connect to what is a non-trivial
one in control system design. One may feel that the
best solution would always be to bring all signals to a
central point so that each control action would be
based on complete information (leading to so called,
centralized control). However, this is rarely (if ever)
the best solution in practice. Indeed, there are very
good reasons why one may not wish to bring all
signals to a common point. Obvious objections to
this include complexity, cost, time constraints in
computation, maintainability, reliability, etc.
Table 1.1: Typical control hierarchy
Algorithms
Finally, we come to the real heart of control engineering, i.e. the algorithms that connect the sensors to the actuators. It is all too easy to underestimate this final aspect of the problem.
As a simple example from our everyday experience, consider the problem of playing tennis at top international level. One can readily accept that one needs good eyesight (sensors) and strong muscles (actuators) to play tennis at this level, but these attributes are not sufficient. Indeed, eye-hand coordination (i.e. control) is also crucial to success.
In summary:
Sensors provide the eyes and actuators the muscle, but control science provides the finesse.
- Better sensors provide better vision
- Better actuators provide more muscle
- Better control provides more finesse by combining sensors and actuators in more intelligent ways
Disturbances and Uncertainty
One of the things that makes control science
interesting is that all real life systems are acted
on by noise and external disturbances. These
factors can have a significant impact on the
performance of the system. As a simple
example, aircraft are subject to disturbances in
the form of wind-gusts, and cruise controllers in
cars have to cope with different road gradients
and different car loadings.
Homogeneity
A final point is that all interconnected systems,
including control systems, are only as good as
their weakest element. The implications of this
in control system design are that one should
aim to have all components (plant, sensors,
actuators, communications, computing,
interfaces, algorithms, etc) of roughly
comparable accuracy and performance.
In order to make progress in control engineering
(as in any field) it is important to be able to
justify the associated expenditure. This usually
takes the form of a cost benefit analysis.
Typical steps include:
1. Assessment of a range of control
opportunities;
2. Developing a short list for closer examination;
3. Deciding on a project with high economic or
environmental impact
Cost benefit analysis
4. Consulting appropriate personnel
(management, operators, production staff,
maintenance staff etc.);
5. Identifying the key action points;
6. Collecting base case data for later
comparison;
7. Deciding on revised performance
specifications;
8. Updating actuators, sensors etc.;
Cost benefit analysis (Contd.)
9. Development of algorithms;
10. Testing the algorithms via simulation;
11. Testing the algorithms on the plant using a rapid
prototyping system;
12. Collecting preliminary performance data for
comparison with the base case;
13. Final implementation;
14. Collection of final performance data;
15. Final reporting on project.
Signals and systems terminology
The Design of Feedback Control Systems
Step 1: Transform requirements into a physical system
Step 2: Draw a functional block diagram
Step 3: Create a schematic
Step 4: Develop a mathematical model (block diagram)
Step 5: Reduce the block diagram
Step 6: Analyze and design
The Control System Design
Process
Antenna Positioning Control System
• Original system: the antenna with
electric motor drive systems.
• Control objective: to point the
antenna in a desired reference direction.
• Control inputs: drive motor voltages.
• Outputs: the elevation and azimuth of the
antenna.
• Disturbances: wind, rain, snow.
215
Antenna Azimuth Position
Control System

STEP 1
Detailed layout
STEP 2: Functional block diagram
Antenna Control System Functional Block Diagram: the reference input (volts) is compared with the feedback voltage from the angle sensor to give the error; a differential amplifier and power amplifier drive the motor, which applies torque to the antenna; the output is the antenna's angular position, with wind force acting as a disturbance on the antenna system. Information variables are voltages; physical variables are power, torque and angular position.
STEP 3 Schematic layout
Step 5: Reduce the Block Diagram
Step 6: Analyze and Design
• Ramp inputs represent constant-velocity inputs
to a position control system by their linearly
increasing amplitude. These waveforms can be
used to test a system’s ability to follow a linearly
increasing input or, equivalently, to track a
constant velocity target. For example, a position
control system that tracks a satellite that moves
across the sky at a constant angular velocity, as
shown above, would be tested with a ramp input
to evaluate the steady-state error between the
satellite’s angular position and that of the control
system.
Finally, parabolas, whose second derivatives are
constant, represent constant acceleration inputs
to position control systems and can be used to
represent accelerating targets, such as the missile
above, to determine the steady-state error
performance.
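A minimal Python sketch (not from the slides) of these standard test inputs - step, ramp and parabola - generated as discrete-time signals; amplitudes are illustrative only.

# Standard test inputs for a position control system.
t = [0.01 * k for k in range(500)]            # time axis, 0 to 5 s
step     = [1.0 for _ in t]                   # constant position command
ramp     = [1.0 * tk for tk in t]             # constant-velocity command (slope = velocity)
parabola = [0.5 * tk ** 2 for tk in t]        # constant-acceleration command

# Any of these can be fed to a simulated position control system to measure
# the steady-state tracking error for that class of target motion.
print(step[:3], ramp[:3], parabola[:3])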
SYSTEM MODELING
System Modeling: An Introduction
• Any physical system or process may be defined as one which operates on one or more inputs to produce one or more outputs. All physical processes are dynamic in nature and depend on time. Before designing any process, we have to predict the output of the process for a defined input. A good design should predict the output of the system for all possible inputs. In practice, the inputs will not be as accurate as we think and design for, so there should be some "pilot running" before operating the actual process. This may be done by simulation, emulation or experimentation.
• There are two ways of analyzing a system:
  • By experimentation
  • By modeling
Experimentation
Experimentation is done by constructing the system and testing it with some definite inputs. This method has some demerits:
- The cost of construction is wasted if the design is not appropriate
- There may be loss, damage or breakage of the system while experimenting on it
So experimentation is not a suitable solution for analyzing every system. There must be some alternative to this, which is nothing but modeling.
Modeling
• Modeling, as the literal meaning denotes, is generating a model for a real process or system. Example: a model examination is a typical process equivalent to a university examination.
• But a question arises: "How do we produce an exact model for any process?"
• Suppose we want to explain how a fan works; we use a small 12 V DC motor with some cardboard blades attached (a miniature model). These are some example models used in schools for basic understanding.
• From these examples, it should be obvious that modeling is a process of understanding and analyzing any physical process easily.
System Modeling
(Block diagram: X2W helicopter pitch control model, with controller Gc(s), motor and rotor dynamics involving Ra, La, Kh, Bm and Jm, feedback gain Ks, and wind torque Tw(s) acting as a disturbance.)
System Modeling
• Purpose of models in control systems:
1. The mathematical model of a system is the
basis for all control system analysis and
design methods.

2. A detailed model allows some verification of the performance of the control system through simulation, before it is implemented on the actual system.
Types of Models
• Physical Models
  – scale models
  – analogue models
• Mathematical Models
  – analytically based
  – experimentally based
A model for a given system depends upon:
• defined system boundaries
• objective of the study
• level of approximation required
Types of Models
• A design model will often have many
assumptions and simplifications made to allow
the use of analytical methods (we will normally
require linear, time-invariant models).

• For verification studies, all model details are included and the model equations are then solved numerically, i.e. by computer simulation.
The Modeling Process
1. Define the purpose or objective of the model.
Identify system boundaries, functional blocks,
interconnecting variables, inputs and outputs.
Construct a functional block diagram.

2. Determine the model for each component or subsystem. Apply known physical laws when possible, otherwise use experimental data to identify input-output relationships.
The Modeling Process
3. Integrate the subsystem models into an
overall system model.
Combine equations, eliminate variables, check
for sufficient equations to solve the system.

4. Verify the model validity and accuracy. Implement a simulation of the model equations and compare with experimental data for the same conditions.
The Modeling Process
5. Make simplifications to create an
approximate model suitable for control
design.
– Linearization of model equations
– Reduce the order of the model by eliminating
unimportant dynamics
– Use lumped parameter approximations for distributed
parameter system.
Trade-off: model complexity vs. model accuracy.
Mathematical Modeling
• The most powerful tool in designing any system is
the mathematical modeling. Mathematical modeling
is representing the physical system by means of
mathematical equation. Most of the systems are
dynamic in nature. So systems are represented as
mathematical function of time. Example x(t), y(t)
etc. There are lots of easy and powerful tools
available in mathematics which further reduces the
complexity in analysis. Differential equations are
the equations in mathematics that are used to
represent dynamic functions. Physical systems are
thus modeled as differential equations in
mathematical modeling.
6/9/22 239
There are several types of physical systems: mechanical, electrical, electromechanical, thermal, hydraulic, pneumatic, etc. Every system has its own way of modeling, which is dealt with in detail in the forthcoming sessions. Before modeling any physical system, we should understand that the modeling is based on the energy storage capacity of the process. The energy storage capacity is the factor that makes the system highly dynamic.
• In an electrical system, any electrical component
will be one of or a combination of the three
elements
• Resistance (Energy Non storing / Energy Dissipating
element)
• Inductance (Energy Storage element)
• Capacitance (Energy Storage element)
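As a brief illustration (not from the slides), the Python sketch below models an assumed series RC circuit - one energy-storing element (the capacitor) and one dissipating element (the resistor) - by its first-order differential equation RC dVc/dt + Vc = Vin and integrates it numerically.

# First-order model of a series RC circuit driven by a step input voltage.
# RC * dVc/dt + Vc = Vin   (illustrative parameter values)
R, C = 1e3, 1e-3          # 1 kOhm, 1 mF  -> time constant RC = 1 s
Vin = 5.0                 # step input, volts
dt = 0.001
Vc = 0.0                  # capacitor initially discharged
for k in range(int(3.0 / dt)):           # simulate 3 time constants
    dVc = (Vin - Vc) / (R * C)           # from the differential equation
    Vc += dt * dVc                       # Euler integration step
print("capacitor voltage after 3 s:", round(Vc, 2))   # approx. 5*(1 - e^-3)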
Mathematical Modeling
• To understand system performance, a mathematical model of the plant is required.
• This will eventually allow us to design control systems to achieve a particular specification.
Mathematical Modeling
• The input-output relationship is usually expressed in terms of a differential equation, in the standard form
  d^n y/dt^n + a_(n-1) d^(n-1)y/dt^(n-1) + ... + a_0 y(t) = b_m d^m u/dt^m + ... + b_0 u(t)
• (a_(n-1), ..., a_0, b_m, ..., b_0) are the system's parameters, with n ≥ m
• The system is LTI if the parameters are time-invariant
• n is the order of the system
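A minimal Python sketch (not from the slides) that simulates one concrete instance of such an LTI model - an assumed second-order system y'' + a1 y' + a0 y = b0 u - so its step response can be inspected numerically.

# Second-order LTI example: y'' + a1*y' + a0*y = b0*u, with a unit step input.
a1, a0, b0 = 2.0, 4.0, 4.0        # illustrative parameters (n = 2, m = 0)
dt = 0.001
y, ydot = 0.0, 0.0
u = 1.0                            # unit step input
for k in range(int(5.0 / dt)):     # 5 seconds of simulated time
    yddot = b0 * u - a1 * ydot - a0 * y   # solve the ODE for the highest derivative
    ydot += dt * yddot                    # Euler integration
    y    += dt * ydot
print("output y(5) ~", round(y, 3))       # settles near b0/a0 = 1.0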


Physical Systems
• Hydraulic Systems
• Thermal Systems
• Pneumatic Systems
• Mechanical Systems
• Electrical Systems
Linear System
• What is a Linear System?
A system whose steady-state output is always proportional to its input is a linear system. During the transient state, this proportionality need not be maintained.
• A system whose transient output is proportional to the input at the instant the input is applied is a zero-order linear system.
• A zero-order system is the simplest linear system to analyze, but the most difficult to construct.
• Examples of Physical Systems
Non-linear System
• What is Non-linear System?
In the real world, all elements, devices and processes are non-linear, but they differ in their degree of nonlinearity. Some are slightly nonlinear, some moderately nonlinear and many are totally non-linear.
• They are governed by non-linear differential equations
• Example – the Van der Pol equation
How to detect non-linear system?
• A linear system driven by a sinusoidal input gives a sinusoidal output of the same frequency, generally with a different magnitude and a phase shift
• Only a zero-order linear system gives a sinusoidal output that is simply the input magnified by a constant with zero phase shift
• For a non-linear system, the output for a sinusoidal input of fixed frequency is non-sinusoidal in nature, although the fundamental frequency remains the same
• If the differential equation is available, then applying the principles of linearity and superposition allows the difference between a linear and a nonlinear system to be identified
MODELING OF PHYSICAL SYSTEMS
• One of the common tasks in the analysis and
design of control systems is Mathematical
Modeling of the Systems
• Mathematical model of a dynamic system is
defined as a set of equations that represents the
dynamics of the system accurately, or at least
fairly well
• Dynamics of any system (mechanical, electrical, thermal, economic, biological, etc.) may be described in terms of differential equations. Such equations are obtained from the physical laws governing the particular system, e.g. Kirchhoff's laws for electrical systems.
• Once a mathematical model of a system is obtained, various analytical and computer tools can be used for analysis and synthesis purposes.

• Two common methods of modeling linear systems are:
– Transfer function method
– State variable method

• The transfer function is valid only for linear time-invariant systems, whereas state equations can be applied to both linear and non-linear systems.
• A system is called linear if the principle of superposition applies. This principle states that the response produced by the simultaneous application of two different forcing functions is the sum of the two individual responses.
• Dynamic systems that are composed of linear time-invariant lumped-parameter components may be described by linear time-invariant (constant-coefficient) differential equations.
• A system is said to be nonlinear if the principle of superposition does not apply.
Linear Systems
A system is called linear if the principle of superposition
applies. The principle of superposition states that the
response produced by the simultaneous application of
two different forcing functions is the sum of the two
individual responses. Hence, for the linear system, the
response to several inputs can be calculated by treating
one input at a time and adding the results. It is this
principle that allows one to build up complicated solutions
to the linear differential equation from simple solutions. In
an experimental investigation of a dynamic system, if
cause and effect are proportional, thus implying that the
principle of superposition holds, then the system can be
considered linear.
Linear Time-Invariant Systems and Linear Time-Varying
Systems. A differential equation is linear if the
coefficients are constants or functions only of the
independent variable. Dynamic systems that are
composed of linear time-invariant lumped-parameter
components may be described by linear time-invariant differential equations, that is, constant-coefficient differential equations. Such systems are called linear
time-invariant (or linear constant-coefficient) systems.
Systems that are represented by differential equations
whose coefficients are functions of time are called linear
time-varying systems. An example of a time-varying
control system is a spacecraft control system. (The mass
of a spacecraft changes due to fuel consumption.)
Back to our Case Study: Antenna control
Antenna Azimuth: A position control
system converts a position input
command to a position output response.
Position control systems find widespread
applications in antennas, robot arms, and
computer disk drives. The radio telescope
antenna is one example of a system that
uses position control systems.
“ System Concept ”
Response of a position control system
Antenna Control: Transfer Functions
Physical systems can be modeled mathematically
with transfer functions. Typically, systems are
composed of subsystems of different types, such
as electrical, mechanical, and electromechanical.
The first case study uses our ongoing example of
the antenna azimuth position control system to
show how to represent each subsystem as a
transfer function.
• PROBLEM: Find the transfer function for each
subsystem of the antenna azimuth position
control system schematic shown below
Detailed layout
Computer-aided design modern control systems
In the past, control system design was labour
intensive & implemented through hand calculations
& plastic graphical aid tools. The process was slow,
and the results not always accurate. Large
mainframe computers were then used to simulate
the designs. Today, desktop computers can perform
analysis, design & simulations with one program.
With the ability to simulate a design rapidly, we can
play what-if games and try alternate solutions to
see if they yield better results, such as reduced
sensitivity to parameter changes. We can include
nonlinearities and other effects and test our models
for accuracy
MATLAB
We use MATLAB and MATLAB Control System Toolbox, which
expands MATLAB to include system-specific commands. In
addition, presented are several MATLAB enhancements that
give added functionality to MATLAB and the Control System
Toolbox. Included are:
1. Simulink, which uses a graphical user interface (GUI)
2. the LTI Viewer, which permits measurements to be made
directly from time and frequency response curves
3. the Symbolic Math Toolbox, which saves labour when making
symbolic calculations required in control system analysis and
design.   MATLAB is presented as an alternate method of solving
control system problems. You are encouraged to solve problems
first by hand and then by MATLAB so that insight is not lost
through mechanized use of computer programs.
Mathematical Modeling of Control Systems
 In studying control systems you must be able to model
dynamic systems in mathematical terms and analyze
their dynamic characteristics.
 A mathematical model of a dynamic system is defined
as a set of equations that represents the dynamics of
the system accurately, or at least fairly well.
 Note that a mathematical model is not unique to a
given system. A system may be represented in many
different ways and, therefore, may have many
mathematical models, depending on one’s perspective
 The dynamics of many systems, whether they are
mechanical, electrical, thermal, economic,
biological, and so on, may be described in terms
of differential equations.
 Such differential equations may be obtained by
using physical laws governing a particular system
—for example, Newton’s laws for mechanical
systems and Kirchhoff’s laws for electrical
systems. We must always keep in mind that
deriving reasonable mathematical models is the
most important part of the entire analysis of
control systems.
MODELING OF ELECTRICAL SYSTEMS
Electrical circuits involving resistors, capacitors and
inductors are considered. The behavior of such
systems is governed by Ohm’s law and Kirchhoff’s
laws
Resistor: Consider a resistance of R Ω carrying a current of i amps, as shown in Fig (a); then the voltage drop across it is v = R i
Inductor: Consider an inductor of L H carrying a current of i amps, as shown in Fig (a); then the voltage drop across it can be written as v = L di/dt
Capacitor: Consider a capacitor of C F carrying a current of i amps, as shown in Fig (a); then the voltage drop across it can be written as v = (1/C) ∫ i dt
ELECTRICAL COMPONENT MODELS
Component        voltage/current          voltage/charge
Inductance       v = L di/dt              v = L d²q/dt²
Resistance       v = R i                  v = R dq/dt
Capacitance      v = (1/C) ∫ i dt         v = (1/C) q
BASIC LAWS
– Kirchhoff's voltage law: The algebraic sum of voltages around any closed loop in an electrical circuit is zero

– Kirchhoff's current law: The algebraic sum of currents into any junction in an electrical circuit is zero
ELEMENTS OF ELECTRICAL MODELING
• To describe and analyze electrical systems, we use the concepts of voltage, charge, current, and flux

• Voltage is the electromotive force (EMF) or "effort" needed to produce a flow of current in a wire

• Current refers to the rate of flow of charge

• Charge is the time integral of current

• Electrical flux is the accumulation of voltage (effort) over time, i.e. the time integral of voltage
STEPS FOR MODELING OF ELECTRICAL SYSTEMS
• Apply Kirchhoff's voltage law or Kirchhoff's current law to form the differential equations describing electrical circuits comprising resistors, capacitors, and inductors
• Form transfer functions from the describing differential equations
• Then simulate the model
Mathematical Models
• Mathematical models may assume many different
forms. Depending on the particular system and
the particular circumstances, one mathematical
model may be better suited than other models.
Once a mathematical model of a system is
obtained, various analytical and computer tools
can be used for analysis and synthesis purposes.
Simplicity Versus Accuracy. In obtaining a
mathematical model, we must make a
compromise between the simplicity of the model
and the accuracy of the results of the analysis.
TYPES OF MATHEMATICAL MODELS
Several types of mathematical models have been
proposed for the description of systems. The most popular
ones, which we will present in this chapter, are the
following:
1. The differential equations
2. The transfer function
3. The impulse response
4. The state equations
Each model has advantages and disadvantages over the
others. Hence, the knowledge of all four models offers the
flexibility of using the most appropriate model among the
four for a specific system or for a specific application
The above four mathematical models are based on
mathematical relationships. Note, however, that
there are other ways of describing a system, aiming
to give a schematic overview of the system. Two
such popular schematic system descriptions are
1. The block diagrams
2. The signal-flow graphs
These diagrams and graphs are particularly useful in
giving a simplified overall picture of a system
1. TRANSFER FUNCTION
 Transfer functions are commonly used to
characterize the input-output relationships of
components or systems that can be described by
linear, time-invariant, differential equations.
 Transfer Function. The transfer function of a linear,
time-invariant, differential equation system is
defined as the ratio of the Laplace transform of
the output (response function) to the Laplace
transform of the input (driving function) under the
assumption that all initial conditions are zero.
Comments on Transfer Function
1. The transfer function of a system is a mathematical
model in that it is an operational method of expressing
the differential equation that relates the output variable
to the input variable.
2. The transfer function is a property of a system itself,
independent of the magnitude and nature of the input or
driving function.
3. The transfer function includes the units necessary to
relate the input to the output; however, it does not
provide any information concerning the physical structure
of the system. (The transfer functions of many physically
different systems can be identical.)
Comments on Transfer Function
4. If the transfer function of a system is known, the
output or response can be studied for various
forms of inputs with a view toward understanding
the nature of the system.
5. If the transfer function of a system is unknown, it
may be established experimentally by introducing
known inputs and studying the output of the
system. Once established, a transfer function gives
a full description of the dynamic characteristics of
the system, as distinct from its physical description.
General Form of Transfer Function
Example: Obtain the pole-zero map of the
following transfer function.
Pole-zero Map
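The transfer function referred to above appears only as a figure on the slide; as a hedged illustration, the MATLAB (Control System Toolbox) command pzmap produces such a map for an assumed example transfer function:

% Sketch with an assumed transfer function (not the one on the slide):
% G(s) = (s + 3)/(s^2 + 2s + 5)
num = [1 3];
den = [1 2 5];
G = tf(num, den);
pzmap(G)            % plots the poles (x) and zeros (o) in the s-plane
[p, z] = pzmap(G)   % also returns the pole and zero locations numerically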
Consider the linear time-invariant system defined by the following differential equation, where y is the output of the system and x is the input.

• The transfer function of this system is the ratio of the Laplace-transformed output to the Laplace-transformed input when all initial conditions are zero, or

  G(s) = Y(s)/X(s)  (zero initial conditions)
Convolution Integral. For a linear, time-invariant system the transfer function G(s) is

  G(s) = Y(s)/X(s)

where X(s) is the Laplace transform of the input to the system and Y(s) is the Laplace transform of the output of the system, and where we assume that all initial conditions involved are zero. It follows that the output Y(s) can be written as the product of G(s) and X(s), or

  Y(s) = G(s) X(s)
Types of Control Systems
1. Open-loop control system: A system that utilizes a
device to control the process without using feedback.
Thus the output has no effect upon the signal to the
process.
2. Closed-loop feedback control system: A system that
uses a measurement of the output and compares it with
the desired output.
3. Regulator: The control system where the desired values
of the controlled outputs are more or less fixed and the
main problem is to reject disturbance effects.
4. Servo system: The control system where the outputs are
mechanical quantities like acceleration, velocity or
position.
5. Multivariable Control System: A system with
more than one input variable or more than one
output variable
Classification: Natural control system and Man-made
control system:
6. Natural control system: It is a control system that is created by nature, e.g. the solar system, the digestive system of an animal, etc.
7. Man-made control system: It is a control system that is created by humans, e.g. an automobile, a power plant, etc.
8. Automatic control system: It is a control system that is made by using basic theories from mathematics and engineering. This system mainly has sensors, actuators and responders.
9. Combinational control system: It is a control system that is a combination of natural and man-made control systems, e.g. driving a car.
10. Time-variant control system: It is a control system where one or more parameters of the control system vary with time, e.g. driving a vehicle.
11. Time-invariant control system: It is a control system where none of its parameters vary with time, e.g. a control system made up of inductors, capacitors and resistors only.
12. Linear control system: It is a control system that satisfies the properties of homogeneity and additivity.
13. Non-linear control system: It is a control system that does not satisfy the properties of homogeneity and additivity, e.g. f(x) = x^3.
14. Continuous-time control system: It is a control system where all of its variables are functions of continuous time, e.g. armature-type speed control of a motor.
15. Discrete-time control system: It is a control system where its variables are functions of discrete time, e.g. microprocessor-based speed control of a motor.
16. Deterministic control system: It is a control system whose output is predictable or repeatable for a given input signal or disturbance signal.
17. Stochastic control system: It is a control system whose output is unpredictable or non-repeatable for a given input signal or disturbance signal.
18. Lumped-parameter control system: It is a control system whose mathematical model is represented by ordinary differential equations.
19. Distributed-parameter control system: It is a control system whose mathematical model is represented by partial differential equations.
20. SISO control system: It is a control system that has only one input and one output.
21. MIMO control system: It is a control system that has more than one input and more than one output.
Electrical Component Models

Component        voltage/current          voltage/charge
Inductance       v = L di/dt              v = L d²q/dt²
Resistance       v = R i                  v = R dq/dt
Capacitance      v = (1/C) ∫ i dt         v = (1/C) q
Electrical Network Transfer Functions
We now apply the transfer function to the mathematical modeling of electric circuits, including passive networks and operational amplifier circuits. Subsequent sections cover mechanical and electromechanical systems. Equivalent circuits for the electric networks that we work with first consist of three passive linear components: resistors, capacitors, and inductors. The table below summarizes the components and the relationships between voltage and current and between voltage and charge under zero initial conditions.

TABLE: Voltage-current, voltage-charge, and impedance relationships for capacitors, resistors, and inductors. Note: the following set of symbols and units is used throughout this example:
v(t) in V (volts),
i(t) in A (amps),
q(t) in Q (coulombs),
C in F (farads),
R in Ω (ohms),
G in ℧ (mhos),
L in H (henries).
Example: An RC Circuit
Find the transfer function Vo(s)/Vi(s) for the series RC network with input voltage vi and output voltage vo taken across the capacitor.

  vi = R i + (1/C) ∫ i dt ,   vo = (1/C) ∫ i dt

Taking the Laplace transform (zero initial conditions):

  Vi(s) = ( R + 1/(sC) ) I(s) ,   Vo(s) = (1/(sC)) I(s)

Then, eliminating I(s):

  Vo(s) = Vi(s) / (1 + RCs)   or   Vo(s)/Vi(s) = 1/(1 + τs)

where τ = RC.
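As a quick check (a minimal sketch; the numerical values R = 1 kΩ and C = 1 µF are assumptions, not from the slide), the same first-order transfer function can be built and simulated in MATLAB:

R = 1e3;  C = 1e-6;      % assumed values, so tau = R*C = 1 ms
G = tf(1, [R*C 1]);      % Vo(s)/Vi(s) = 1/(R*C*s + 1)
pole(G)                  % single pole at -1/(R*C) = -1000 rad/s
step(G)                  % first-order step response with time constant tau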
Transfer Function—Single Loop via the Differential
Equation
• PROBLEM: Find the transfer function relating
the capacitor voltage, Vc(s), to the input
voltage, V(s) in Figure below
SOLUTION: In any problem, the designer must
first decide what the input and output should
be. In this network, several variables could have
been chosen to be the output—for example, the
inductor voltage, the capacitor voltage, the
resistor voltage, or the current. The problem
statement, however, is clear in this case: We are
to treat the capacitor voltage as the output and
the applied voltage as the input. Summing the
voltages around the loop, assuming zero initial
conditions, yields the integral-differential
equation for this network as
Block diagram of
series RLC electrical network
• Let us now develop a technique for
simplifying the solution for future problems.
First, take the Laplace transform of the
equations in the voltage-current column of
Table assuming zero initial conditions
 Notice that this function is similar to the definition of
resistance, that is, the ratio of voltage to current. But,
unlike resistance, this function is applicable to capacitors and
inductors and carries information on the dynamic behavior
of the component, since it represents an equivalent
differential equation. We call this particular transfer function
impedance. The impedance for each of the electrical
elements is shown in Table 2.3. Let us now demonstrate how
the concept of impedance simplifies the solution for the
transfer function. The Laplace transform of Eq. (2.61),
assuming zero initial conditions, is
FIGURE 2.5 Laplace-transformed
network
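A minimal sketch of the impedance idea just described, applied to the series RLC loop (the element values below are assumptions, not taken from the figure):

L = 1;  R = 2;  C = 0.5;        % assumed element values
s = tf('s');                     % Laplace variable as a transfer-function object
Z = L*s + R + 1/(C*s);           % total loop impedance: V(s) = Z(s) I(s)
Gc = (1/(C*s)) / Z;              % Vc(s)/V(s), since Vc(s) = (1/(Cs)) I(s)
Gc = minreal(Gc)                 % cancels the common factor: 1/(L*C*s^2 + R*C*s + 1)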
Transfer Functions of Dynamic
Elements and Networks
Basic Laws
Newton’s Laws
– A body will accelerate when acted on by a
net force
– Force equals the rate of change of
momentum
– Every applied force is reacted by an equal
and opposite force
Mechanical Translation Models

Element              force/velocity        force/position
Mass M               f = M dv/dt           f = M d²x/dt²
Viscous friction B   f = B v               f = B dx/dt
Spring k             f = k ∫ v dt          f = k x
Models of Mechanical Systems
Mechanical translational systems
• Newton's second law:  f(t) = M a(t) = M dv(t)/dt

• Device with friction (shock absorber):  f(t) = B [ dx1(t)/dt − dx2(t)/dt ],  where B is the damping coefficient.

• The remaining translational element to be defined is the spring (Hooke's law):  f(t) = K [ x1(t) − x2(t) ],  where K is the spring coefficient.
Mechanical Rotational Models

Element               torque/velocity       torque/position
Inertia J             T = J dω/dt           T = J d²θ/dt²
Viscous friction B    T = B ω               T = B dθ/dt
Stiffness s           T = s ∫ ω dt          T = s θ
Transformation Models

Transformer:   v1/v2 = N1/N2 ,    i1/i2 = N2/N1
Lever:         f1/f2 = L2/L1 ,    x1/x2 = L1/L2
Gears:         T1/T2 = N1/N2 ,    θ1/θ2 = N2/N1
Example 1 Mechanical system: Spring-mass-damper system.

(a) Diagram of the mechanical system components.

(b) Free body diagram of the mechanical system.


• In this example we model the spring-mass-system
shown in Figure (a) above. The mass, m is
subjected to an external force f. Let's suppose that
we are interested in controlling the position of m.
The way to control the position of the mass is by
choosing f. We first identify the input and output.
Input: external force, f,
Output: mass position, x.
We apply Newton's second law to obtain the differential equation of this mechanical system. Using the free-body diagram shown in Figure (b), we have

  m d²x/dt² + b dx/dt + k x = f        (2)

• where b is the damping coefficient and k is the spring stiffness. Equation (2) is the differential equation that describes the dynamics of the spring-mass-damper system. Note that the input and output both appear in this equation. If we know the input, f, then we can solve equation (2) for the output, x.
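Taking the Laplace transform of equation (2) with zero initial conditions gives X(s)/F(s) = 1/(m s² + b s + k). As a hedged illustration with assumed parameter values, this can be explored in MATLAB:

m = 1;  b = 0.5;  k = 2;     % assumed mass, damping and stiffness values
G = tf(1, [m b k]);          % transfer function from force f to position x
damp(G)                      % natural frequency and damping ratio of the poles
step(G)                      % position response to a unit step force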
Example 2 Fluid System: Water Level Control
• The system is a storage tank of cross-sectional
area A whose liquid level or height is h. The liquid
enters the tank from the top and leaves the tank
at the bottom through the valve, whose fluid
resistance is R. The volume flow rate in and the
volume flow rate out are qi and qo, respectively.
The fluid density ρ is constant. Figure below
shows the schematic diagram of the system. In
such a system it is desired to regulate the water
level in the tank. Assume that the variable that
we can change to control the water level is qi.
Diagram of the fluid system components and signals
• We first identify the input and output of the
system
• Input: volume flow rate in, qi,
• output: water level, h.
• In order to obtain the differential equation of the
system we use the conservation of mass principle
which states that;
• The time rate of change of fluid mass inside the
tank = the mass flow rate in - mass flow rate out
• where A is the cross-sectional area of the tank, g is the acceleration due to gravity and R is the fluid resistance through the valve. Equation (1) is the differential equation that describes the dynamics of our fluid system. Note that the input and output appear in this equation. If we know the input, qi, then we can solve equation (1) for the output, h. A block diagram representation of the fluid system is shown in Figure 2.
• This shows how to go from a physical problem to a mathematical description in state-space form. Pictorially:
Analogies between Electrical and Mechanical Systems
• Mechanical force is analogous to electrical voltage and mechanical velocity is analogous to electrical current.
• Mechanical displacement is analogous to electrical charge. We
also see that the spring is analogous to the
capacitor, the viscous damper is analogous to the
resistor, and the mass is analogous to the inductor.
Thus, summing forces written in terms of velocity is
analogous to summing voltages written in terms of
current, and the resulting mechanical differential
equations are analogous to mesh equations.
Analogous Quantities Base on Force Voltage
Analogy
Force Voltage Analogy
Analogous Quantities Based on Force - Current
Analogy
Force - Current Analogy
Thermal Electrical Analog
Summary of Governing Differential equations for Ideal elements
Translational Mechanical System Transfer Functions
In this section we concentrate on
translational mechanical systems. In the
next section we extend the concepts to
rotational mechanical systems.
Hence, an electrical network can be
interfaced to a mechanical system by
cascading their transfer functions, provided
that one system is not loaded by the other
Mechanical systems, like electrical networks, have
three passive, linear components. Two of them,
the spring and the mass, are energy-storage
elements; one of them, the viscous damper,
dissipates energy. The two energy-storage
elements are analogous to the two electrical
energy-storage elements, the inductor and
capacitor. The energy dissipator is analogous to
electrical resistance. Let us take a look at these
mechanical elements, which are shown in Table
2.4. In the table, K, fv, and M are called the spring constant, the coefficient of viscous friction, and the mass, respectively.
FIGURE a.) Mass, spring, and damper system; b.)
Block diagram
Mechanical Systems
Mechanical systems can be divided into two basic
systems.
(a) Translational systems and (b) Rotational systems.
(a) Translational systems:
1. Mass: This represents an element which resists motion due to inertia. According to Newton's second law of motion, the inertia force is equal to mass times acceleration:

  f = M a = M dv/dt = M d²x/dt²

where a, v and x denote the acceleration, velocity and displacement of the body respectively. Symbolically, this element is represented by a block as shown here.

2. Dash pot: This is an element which opposes motion due to friction. If the friction is viscous friction, the frictional force is proportional to velocity. This force is also known as the damping force. Thus we can write

  f = B dx/dt

where B is the damping coefficient. This element is called a dash pot and is symbolically represented above.

3. Spring: The third element which opposes motion is the spring. The restoring force of a spring is proportional to the displacement. Thus

  f = K x

where K is known as the stiffness of the spring or simply the spring constant. The symbol used for this element is shown before.
(b) Rotational systems: Corresponding to the three
basic elements of translation systems, there are three
basic elements representing rotational systems.
1. Moment of Inertia: This element opposes rotational motion due to moment of inertia. The opposing inertia torque is given by

  T = J α = J dω/dt = J d²θ/dt²

where α, ω and θ are the angular acceleration, angular velocity and angular displacement respectively. J is known as the moment of inertia of the body.

2. Friction: The damping or frictional torque which opposes the rotational motion is given by

  T = B dθ/dt

where B is the rotational friction coefficient.

3. Spring: The restoring torque of a spring is proportional to the angular displacement θ and is given by

  T = K θ

where K is the torsional stiffness of the spring. The three elements defined above are shown below.
Rotational elements

Since the three elements of rotational systems are similar in nature to those of translational systems, no separate symbols are necessary to represent these elements. Having defined the basic elements of mechanical systems, we must now be able to write differential equations for the system when these mechanical systems are subjected to external forces. This is done by using D'Alembert's principle, which is similar to Kirchhoff's laws in electrical networks. This principle is a modified version of Newton's second law of motion. D'Alembert's principle states that, "For any body, the algebraic sum of externally applied forces and the forces opposing the motion in any given direction is zero."

A mechanical translational system

This is the differential equation governing the motion of the mechanical translational system. The transfer function can be easily obtained by taking the Laplace transform of this equation. Thus,

If velocity is chosen as the output variable, we can write eqn

Similarly, the differential equation governing the motion of


rotational system can also be obtained. For the system in
Fig below, we have
Mechanical rotational system
Rotational Mechanical System Transfer Functions

Rotational mechanical systems are handled the same way as translational mechanical systems, except that torque replaces force and angular displacement replaces translational displacement. The mechanical components for rotational systems are the same as those for translational systems, except that the components undergo rotation instead of translation.
Transfer Functions for Systems with Gears
Gears provide mechanical advantage to rotational
systems. Anyone who has ridden a 10-speed
bicycle knows the effect of gearing. Going uphill,
you shift to provide more torque and less speed.
On the straightaway, you shift to obtain more
speed and less torque. Thus, gears allow you to
match the drive system and the load—a trade-off
between speed and torque.
Thus, the torques are directly proportional to the ratio of the number
of teeth. All results are summarized in Figure 2.28.
Transfer Functions of Common System Elements
1 Gear train: For the relationship between the input
speed and output speed with a gear train having a gear
ratio N: transfer function = N
2. Amplifier: For the relationship between the output
voltage and the input voltage with G as the constant
gain: transfer function = G
3. Potentiometer: For the potentiometer acting as a
simple potential divider circuit the relationship between
the output voltage and the input voltage is the ratio of the
resistance across which the output is tapped to the total
resistance across which the supply voltage is applied and
so is a constant and hence the transfer function is a
constant K: transfer function = K
4. Armature-controlled d.c. motor: The relationship between the drive shaft speed and the input voltage to the armature is:
5. Valve controlled hydraulic actuator: The output
displacement of the hydraulic cylinder is related to
the input displacement of the valve shaft by a
transfer function of the form:

6. Heating system: The relationship between the resulting temperature and the input to a heating element is typically of the form:
where C is a constant representing the thermal
capacity of the system and R a constant representing
its thermal resistance.
7. Tachogenerator: The relationship between the
output voltage and the input rotational speed is
likely to be a constant K and so represented by:
transfer function = K
8. Displacement and rotation: For a system where
the input is the rotation of a shaft and the output,
as perhaps the result of the rotation of a screw, a
displacement, since speed is the rate of
displacement we have v =
9. Height of liquid level in a container: The height of liquid in a container depends on the rate at which liquid enters the container and the rate at which it leaves. The relationship between the input rate of liquid entering and the height of liquid in the container is of the form:

where A is the constant cross-sectional area of the container, ρ the density of the liquid, g the acceleration due to gravity and R the hydraulic resistance offered by the pipe through which the liquid leaves the container.
The Laplace Transform and the z-Transform
Two very important transformation techniques for
linear control system analysis are presented : the
Laplace transform and the z-transform. The Laplace
transform relates time functions to frequency-
dependent functions of a complex variable. The z-
transform relates time sequences to a different, but
related, type of frequency-dependent function.
Applications of these mathematical transformations
to solving linear constant-coefficient differential and
difference equations are also discussed here. Together
these methods provide the basis for the analysis and
design techniques.
The Laplace Transform
The Laplace transform is defined in the following manner:

  F(s) = L{ f(t) } = ∫ f(t) e^(−st) dt,  integrated from t = 0⁻ to ∞
Thus, we can find the Laplace transform of impulse
functions. This property has distinct advantages
when applying the Laplace transform to the
solution of differential equations where the initial
conditions are discontinuous at t = 0.
Laplace Transform of a Time Function
THE INVERSE LAPLACE TRANSFORM
The Laplace transform transforms a problem from
the real variable time domain into the complex
variable s-domain. After a solution of the
transformed problem has been obtained in terms of
s, it is necessary to “invert” this transform to obtain
the time domain solution. The transformation from
the s-domain into the t-domain is called the
inverse Laplace transform
SOME PROPERTIES OF THE LAPLACE TRANSFORM AND ITS INVERSE
Inverse Laplace Transform: Example
We demonstrate the use of the Laplace transform table (Table 2.2) to find f(t) given F(s).
Partial-Fraction Expansion
To find the inverse Laplace transform of a complicated
function, convert the function to a sum of simpler
terms for which we know the Laplace transform of
each term. The result is called a partial-fraction
expansion. If F1(s)= N(s)/D(s), where the order of
N(s) is less than the order of D(s), then a partial-
fraction expansion can be made. If the order of N(s)
is greater than or equal to the order of D(s), then
N(s) must be divided by D(s) successively until the
result has a remainder whose numerator is of order
less than its denominator.
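As a hedged MATLAB illustration of this idea (the F(s) below is an assumed example, not one from the slides), the residue function returns the residues, poles and direct term of a partial-fraction expansion:

N = [1 3];                 % numerator of F(s) = (s + 3)/(s^2 + 3s + 2)
D = [1 3 2];               % denominator with roots at -1 and -2
[r, p, k] = residue(N, D)  % residues r, poles p, direct term k (empty here,
                           % since the order of N(s) is less than that of D(s))
% Hand check: F(s) = 2/(s + 1) - 1/(s + 2), so f(t) = 2e^(-t) - e^(-2t), t >= 0.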
Partial-Fraction Expansion
ASSIGNMENT

Show that the inverse Laplace transform solution


is equal to
More Examples
If the order of the numerator of F(s) is greater than or equal to the order of the denominator, we must perform the indicated division until we obtain a remainder whose numerator is of order less than its denominator. Hence,

Taking the inverse Laplace transform, using Item 1 of Table 2.1, along with the differentiation theorem (Item 7) and the linearity theorem (Item 3) of Table 2.2, we obtain
Using partial-fraction expansion, we will be able to expand functions of this kind into a sum of terms and then find the inverse Laplace transform of each term. We will now consider three cases and show, for each case, how an F(s) can be expanded into partial fractions.
Laplace Transform Solution of a Differential Equation
Given the following differential equation, solve
for y(t) if all initial conditions are zero. Use the
Laplace transform.
Using MATLAB in Control Problem
 This is your first MATLAB exercise. You will learn
how to use MATLAB to
 (1) represent polynomials,
 (2) find roots of polynomials,
 (3) multiply polynomials, and
 (4) find partial-fraction expansions.
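A minimal sketch of the four items above (the polynomials are assumed examples, not the exercise's own data):

P1 = [1 7 -3 23];              % (1) represents s^3 + 7s^2 - 3s + 23
r  = roots(P1)                 % (2) roots of the polynomial
P2 = conv([1 7 10], [1 3])     % (3) multiply (s^2 + 7s + 10)(s + 3)
[res, p, k] = residue([7 9 12], [1 3 2 0])   % (4) partial-fraction expansion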
Characteristic Equation, Poles & Zeros
An s-plane pole and zero plot.
Graphical evaluation of the
residues.
Partial-fraction Expansions with Matlab.
Solution of a Differential Equation using Matlab
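The worked solution on this slide is shown as an image; as a hedged sketch of the same workflow, the Symbolic Math Toolbox can invert the transformed output of an assumed equation, say d²y/dt² + 12 dy/dt + 32 y = 32 u(t) with zero initial conditions and a unit-step input:

% Laplace transform of the assumed equation: (s^2 + 12s + 32) Y(s) = 32/s,
% so Y(s) = 32/(s (s + 4)(s + 8)).
syms s t
Y = 32 / (s*(s^2 + 12*s + 32));
y = ilaplace(Y, s, t)        % expected: y(t) = 1 - 2*exp(-4*t) + exp(-8*t)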
State-Space Representation
The limitation of transfer function representation
becomes obvious as we tackle more complex problems.
For complex systems with multiple inputs and outputs,
transfer function matrices can become very clumsy. In
modern control, the method of choice is state-space or
state variables in the time domain – essentially a matrix
representation of the model equations.
What Are We Up to?
 Learning how to write the state-space representation of a model.
 Understanding how a state-space representation is related to the transfer function representation.
 Understand state variables, state differential
equations, and output equations.
 Recognize that state variable models can describe
the dynamic behavior of physical systems and can be
represented by block diagrams and signal flow
graphs.
 Know how to obtain the transfer function model
from a state variable model, and vice versa.
 Be aware of solution methods for state variable
models and the role of the state transition matrix in
obtaining the time responses.
 Understand the important role of state variable
modeling in control system design.
A time-varying control system is a system in
which one or more of the parameters of the
system may vary as a function of time. For
example, the mass of an airplane varies as a
function of time as the fuel is expended
during flight. A multivariable system is a
system with several input and output signals.
The time-domain representation of control
systems is an essential basis for modern control
theory and system optimization. The time-
domain analysis and design of control systems
uses the concept of the state of a system
Advantages of State Variable Analysis
1. It can be applied to nonlinear systems.
2. It can be applied to time-varying systems.
3. It can be applied to multiple-input multiple-output systems.
4. It gives an idea about the internal state of the system.
State Space Modeling is a method to convert a set of
differential equation(s) into a form of matrix equation from
which we can extract physical/practical meaning of a system.
The logic behind the State Space Modeling is as follows. For
most of differential equations, there would be terms that can
be interpreted as an input to a system and terms that can be
interpreted as output of the system. For example, if you have
a system with an input function labeled as u(t) and output
function labeled as y(t) as shown below.
Giving you more practical examples, the very common
spring system and spring-damper systems can also
be described as single input and single output system
and can be described in a form of differential equation
The state of a system is a set of variables whose
values, together with the input signals and the
equations describing the dynamics, will provide the
future state and output of the system.
Example 1: Given the electrical network shown
below, find a state-space representation if the
output is the current through the resistor
SOLUTION: The following steps will yield a viable
representation of the network in state space.
Step 1 Label all of the branch currents in the
network. These include
Step 2 Select the state variables by writing the
derivative equation for all energy-storage
elements, that is, the inductor and the capacitor. Thus,
The General State-Space Representation
In a state space system, the internal state of the system
is explicitly accounted for by an equation known as the
state equation. The system output is given in terms of
a combination of the current system state, and the
current system input, through the output equation.
These two equations form a linear system of
equations known collectively as state-space
equations. The state-space is the linear vector space
that consists of all the possible internal states of the
system. Because the state-space must be finite, a
system can only be described by state-space equations
if the system is lumped.
For a system to be modeled using the state-
space method, the system must meet these
requirements:
1. The system must be linear
2. The system must be lumped
Definitions System Variable: Any variable that
responds to an input or initial conditions in a
system.
Definitions State variable
The state of a system refers to the past, present, and
future conditions of the system. From a mathematical
perspective, it is convenient to define a set of state
variables and state equations to model dynamic systems.
As stated earlier, the variables x1(t), x2(t),..., xn(t) are the
state variables of the nth-order system , and the n first-order
differential equations, , are the state equations . In general,
there are some basic rules regarding the definition of a
state variable and what constitutes a state equation. The
state variables must satisfy the following conditions:
1. At any initial time t = t0 , the state variables x1(t0),
x2(t0),..., xn(t0) define the initial states of the system.
Definitions State variable
2. Once the inputs of the system for t ≥ t0 and the initial
states just defined are specified, the state variables
should completely define the future behavior of the
system.
The state variables of a system are defined as a minimal
set of variables, x1(t ), x2 (t),..., xn(t), such that
knowledge of these variables at any time t0 and
information on the applied input at time t0 are sufficient
to determine the state of the system at any time t >
t0 .Hence, the space state form for n state variables is
where x(t) is the state vector having n rows,
• State vector: A vector whose elements are the
state variables.
• State space: The n-dimensional space whose axes
are the state variables.

Graphic representation of state space and a state vector


where the state variables are assumed to be a
resistor voltage, vR, and a capacitor voltage, vC. These
variables form the axes of the state space. A
trajectory can be thought of as being mapped out by
the state vector,
x(t), for a range of t. Also shown is the state vector at
the particular time t = 4.
State equations: A set of n simultaneous, first-order differential equations with n variables, where the n variables to be solved are the state variables.
Output equation: The algebraic equation that expresses the output variables of a system as linear combinations of the state variables and the inputs. In general, an output variable can be expressed as an algebraic combination of the state variables.

State equation (a set of linear, 1st-order ODEs):    dx/dt = A x + B u
Output equation (linear algebraic equations):        y = C x + D u

The Output Equation
One should not confuse the state variables with the
outputs of a system. An output of a system is a variable
that can be measured , but a state variable does not
always need to satisfy this requirement. For instance, in
an electric motor, such state variables as the winding
current, rotor velocity, and displacement can be
measured physically, and these variables all qualify as
output variables. On the other hand, magnetic flux can
also be regarded as a state variable in an electric motor
because it represents the past, present, and future
states of the motor, but it cannot be measured directly
during operation and therefore does not ordinarily
qualify as an output variable.
State-Space Description

[Block diagram: the dynamic process and the observation process of the plant; controllable inputs u enter through B, an integrator 1/s produces the state x with feedback through A (the state equation), and the output y is formed through C with a direct feedthrough D from u (the output equation); a disturbance w acts on the dynamic process and measurement noise n on the observations.]
State Space Model Example: RLC Circuit

• Let's take an RLC circuit as another example, as shown below.

Theoretically, you can determine the state variables in any way you like, as long as you can define proper differential equations using those variables.

The state space model becomes as shown below.
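The matrices on the slide are not reproduced here, but as an illustrative sketch, one common choice of state variables for a series RLC circuit driven by a voltage u is x1 = inductor current and x2 = capacitor voltage (the element values below are assumptions):

% From KVL: L di/dt = u - R i - vC,  and  C dvC/dt = i
R = 1;  L = 0.5;  C = 0.1;          % assumed element values
A = [-R/L  -1/L;
      1/C   0  ];
B = [1/L; 0];
Cout = [0 1];                        % output is the capacitor voltage vC
D = 0;
sys = ss(A, B, Cout, D);             % state-space model object
step(sys)                            % step response of vC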
Given a transfer function, we can obtain an
equivalent state-space representation and vice versa.
The function tf can be used to convert a state-space
representation to a transfer function representation;
the function ss can be used to convert a transfer
function representation to a state-space
representation. These functions are shown below,
where sys_tf represents a transfer function model and
sys_ss is a state-space representation.
(a) The ss function. (b) Linear system model conversion.
(a) m-file script for the conversion of the equation to a state-space
representation.
Output print out.
Block diagram with X1(s) defined as the leftmost state variable.
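As a brief hedged illustration of the tf and ss conversion functions described above (the numerical system here is assumed, not the one in the figure):

sys_tf = tf([2 8 6], [1 8 16 6])   % a transfer-function model
sys_ss = ss(sys_tf)                % equivalent state-space representation
sys_back = tf(sys_ss)              % and converted back to a transfer function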
LTI - State-Space Description
Fact: (instead of using the impulse response representation)
• Every (lumped, noise-free) Linear, Time-Invariant (LTI) system can be described by a set of equations of the form:

  dx/dt = A x + B u    (linear, 1st-order ODEs)
  y = C x + D u        (linear algebraic equations)

[Block diagram of the plant: controllable inputs u enter the dynamic process through B, an integrator 1/s produces the state x with feedback through A, and the observation process forms the output y through C plus the feedthrough D; a disturbance w acts on the dynamic process and measurement noise n corrupts the observations.]
Very few processes (and systems) have an input that has a direct influence on the output; hence D is usually zero. When we discuss single-input single-output (SISO) models, scalar variables should be used for the input, output, and feedthrough: u, y, and D. For convenience, we keep the notation for B and C, but keep in mind that, in this case, B is a column vector and C is a row vector. If x is of order n, then A is n × n, B is n × 1, and C is 1 × n. The idea behind this form is that we can make use of linear system theory to analyse complex systems much more effectively. There is no unique way to define the state variables; what will be shown is just one of many possibilities.
Example 1b: Find the state equations for the
translational mechanical system shown below
SOLUTION: First write the differential equations for
the network, using the methods of the Laplace-
transformed equations of motion. Next take the
inverse Laplace transform of these equations,
assuming zero initial conditions, and obtain
Examples 2
Derive the state-space representation of the second-order differential equation below.

• We can write down, almost blindfolded, the transfer function of this equation with zero initial conditions:     (2)

• Now let's do something different. First, we rewrite the differential equation as     (3)

which allows us to redefine the second-order equation as a set of two coupled first-order equations. The first differential equation is the definition of the state variable x2 in Eq. (3); the second is based on the differential equation itself, together with the statement that x1 is our output variable:     (6)
Compare the results with Eqs. (1) and (2), and we see
that, in this case,

We can use the MATLAB function tf2ss() to convert the transfer function in Eq. (2) to state-space form.

However, you will find that the MATLAB result is not identical to Eq. (5). This has to do with the fact that there is no unique representation of a state-space model. The characteristic polynomial of the matrix A in Eq. (7) is identical to that of the transfer function in Eq. (2). Needless to say, the eigenvalues of A are the poles of the transfer function. It is a reassuring thought that different mathematical techniques provide the same information. It should come as no surprise if we remember our linear algebra.
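A minimal sketch of that point, using an assumed second-order transfer function rather than the slide's equation:

num = 1;  den = [1 3 2];        % assumed G(s) = 1/(s^2 + 3s + 2)
[A, B, C, D] = tf2ss(num, den)  % one possible state-space realization
eig(A)                          % eigenvalues of A ...
roots(den)                      % ... match the poles of G(s): -1 and -2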
Example 3: Draw the block diagram of the state-space
representation of the second-order differential equation in Example 1.
The result is in Fig below It is quite easy to understand if we note that
the transfer function of an integrator is 1/s. Thus the second-order
derivative is located before the two integrations are made. The
information at the summing point also adds up to the terms of the
second-order differential equation. The resulting transfer function is
identical to Eq. (E4.2), which can be reduced to a closed-loop transfer function.
Example 4: Let’s try another model with a slightly more complex
input. Derive the state space representation of the differential equation
With Example 1 as the hint, we define the state variables x1 = x1 (i.e., the same) and x2 = dx1/dt. Using steps similar to those of Example 1, we find that the result, equivalent to Eq. (1), is given by Eqs. (8) and (9), which are in the form of Eq. (2). With MATLAB, the statements for this example are:

Comments at the end of Example 1 also apply here. The result should be correct, and we should find that both the roots of the characteristic polynomial p and the eigenvalues of the matrix a are −0.2 ± 0.98j.
We can also check by going backward:

Example 5: Derive the state-space representation of the lead–lag transfer function.

We follow the hint in Example 3 and write the transfer function as Eqs. (10) and (11).

Thus all the coefficients in Eqs. (1) and (2) are scalar, with A = −3, B = 1, C = −1, and D = 1. Furthermore, Eqs. (10) and (11) can be represented by the block diagram in the figure below.
We may note that the coefficient D is not zero, meaning that, with a
lead–lag element, an input can have an instantaneous effect on the
output. Thus, although the state variable x has zero initial condition, it is
not necessarily so with the output y. This analysis explains the mystery
with the inverse transform of this transfer function. The MATLAB
statement for this example is
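A quick check of the scalar coefficients quoted above (A = −3, B = 1, C = −1, D = 1): converting them back to a transfer function gives a lead–lag element with a nonzero feedthrough, since C(sI − A)⁻¹B + D = −1/(s + 3) + 1 = (s + 2)/(s + 3).

A = -3;  B = 1;  C = -1;  D = 1;   % the scalar coefficients from this example
sys_ss = ss(A, B, C, D);
tf(sys_ss)                         % returns (s + 2)/(s + 3)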
State Space Analysis

State Space Models
For continuous time systems

For discrete time systems


Linear State Space Models
Example 3.3
Consider the simple electrical network shown in Figure 3.1. Assume we want to model the voltage v(t). On applying fundamental network laws we obtain the following equations:

Figure 3.1: Electrical network. State space model.
These equations can be rearranged as follows:

We have a linear state space model with


Linearization
Although almost every real system includes
nonlinear features, many systems can be
reasonably described, at least within certain
operating ranges, by linear models.
Example 3.7 (Inverted pendulum)

In Figure 3.5, we have used the following notation:

y(t) - distance from some reference point
θ(t) - angle of the pendulum
M - mass of the cart
m - mass of the pendulum (assumed concentrated at the tip)
ℓ - length of the pendulum
f(t) - force applied to the pendulum
Example of an Inverted Pendulum
Application of Newtonian physics to this system
leads to the following model: where m = (M/m)
This is a linear state space model in which A, B and C
are:
Summary
 In order to systematically design a controller for a
particular system, one needs a formal - though
possibly simple - description of the system. Such a
description is called a model.
 A model is a set of mathematical equations that
are intended to capture the effect of certain
system variables on certain other system
variables.
• The italicized expressions above should be
understood as follows:
– Certain system variables: It is usually neither
possible nor necessary to model the effect of
every variable on every other variable; one
therefore limits oneself to certain subsets.
Typical examples include the effect of input on
output, the effect of disturbances on output,
the effect of a reference signal change on the
control signal, or the effect of various
unmeasured internal system variables on each
other.
– Capture: A model is never perfect and it is
therefore always associated with a modeling error.
The word capture highlights the existence of
errors, but does not yet concern itself with the
precise definition of their type and effect.
– Intended: This word is a reminder that one does
not always succeed in finding a model with the
desired accuracy and hence some iterative
refinement may be needed.
– Set of mathematical equations: There are
numerous ways of describing the system behavior,
such as linear or nonlinear differential or
difference equations.
• Models are classified according to properties of the
equation they are based on. Examples of
classification include:
Attribute vs. contrasting attribute, and what the classification asserts:
• Single input single output vs. multiple input multiple output: whether or not the model equations have one input and one output only.
• Linear vs. nonlinear: whether or not the model equations are linear in the system variables.
• Time varying vs. time invariant: whether or not the model parameters are constant.
• Continuous vs. sampled: whether the model equations describe the behavior at every instant of time, or only at discrete samples of time.
• Input-output vs. state space: whether the model equations rely on functions of input and output variables only, or also include the so-called state variables.
• Lumped parameter vs. distributed parameter: whether the model equations are ordinary or partial differential equations.

In many situations nonlinear models can be linearized around a user-defined operating point.
Control Systems in State Space
State-Space Formulation of Transfer-
Function Systems

Consider the transfer function

  G(s) = (10s + 10) / (s³ + 6s² + 5s + 10)

num = [10 10]
den = [1 6 5 10]
[A,B,C,D] = tf2ss(num,den)
A=
-6 -5 -10
1 0 0
0 1 0
B=
1
0
0
C=
0 10 10
D=
0
Transformation from State Space to Transfer Function

A = [-6 -5 -10; 1 0 0; 0 1 0]
B = [1; 0; 0]
C = [0 10 10]
D = [0]

[num,den] = ss2tf(A,B,C,D)

num =
     0    0.0000   10.0000   10.0000

den =
     1.0000    6.0000    5.0000   10.0000

Controllability

• Consider the system with state equation ẋ = A x + B u. A composite matrix Q can be formed such that

  Q = [ B   AB   A²B   …   A^(n−1)B ]

where n is the order of the system.
The system is state controllable if the rank of the composite matrix Q is n.
Controllability

A = [-6 -5 -10; 1 0 0; 0 1 0]
B = [1; 0; 0]
Q = [B A*B (A^2)*B]
rank(Q)

A =
    -6  -5  -10
     1   0    0
     0   1    0
B =
     1
     0
     0
Q =
     1  -6   31
     0   1   -6
     0   0    1

ans =
     3
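The same check can be written with the Control System Toolbox helper ctrb, which builds the controllability matrix [B AB A²B …] directly (a minimal sketch using the matrices above):

Qc = ctrb(A, B);    % controllability matrix [B  A*B  A^2*B]
rank(Qc)            % = 3 = n, so the system is state controllable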
Observability

• Consider the system with state equations ẋ = A x + B u and y = C x + D u. A composite matrix Q can be formed such that

  Q = [ Cᵀ   AᵀCᵀ   (Aᵀ)²Cᵀ   …   (Aᵀ)^(n−1)Cᵀ ]

where n is the order of the system.
The system is state observable if the rank of the composite matrix Q is n.
Observability

A = [-6 -5 -10; 1 0 0; 0 1 0]
C = [0 10 10]
Q = [C' A'*C' (A'^2)*C']
rank(Q)

A =
    -6  -5  -10
     1   0    0
     0   1    0
C =
     0  10   10
Q =
     0  10  -50
    10  10  -50
    10   0 -100

ans =
     3
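Equivalently, the toolbox helper obsv builds the observability matrix, stacking C, CA, CA², … as rows; its rank gives the same conclusion (a minimal sketch using the matrices above):

Qo = obsv(A, C);    % observability matrix [C; C*A; C*A^2]
rank(Qo)            % = 3 = n, so the system is state observable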
Control design methodology

[Flow chart: Modeling (analytical models, system identification) yields a dynamic model; Controller Design (e.g. root-locus, PI control) yields a control algorithm; Analysis checks the result against the performance specifications / design goals, iterating until the requirements are satisfied.]
Performance Specifications

• Stability
• Transient response
• Steady-state error
• Robustness
– Disturbance rejection
– Sensitivity
Stability
Concept of Stability

A stable system is a dynamic system with a bounded response to a bounded input.

Absolute stability is a stable/not-stable characterization for a closed-loop feedback system. Given that a system is stable, we can further characterize the degree of stability, or the relative stability.
 The concept of stability can
be illustrated by a cone
placed on a plane horizontal
surface.
 A necessary and sufficient
condition for a feedback
system to be stable is that
all the poles of the system
transfer function have
negative real parts.

A system is considered marginally stable if only certain bounded


inputs will result in a bounded output.
Types of Stability
1) Stable system
2) Unstable system
3) Marginally stable
Stable System
 A system is said to be stable if for a bounded
input, the response of system is bounded.
 In absence of an input, a stable system
approaches zero as time approaches infinity
irrespective of the initial condition.
 It important that every working system is stable.
To obtain a bounded response, the poles of the closed-
loop system must be in the left-hand portion of the s-
plane. Thus, a necessary and sufficient condition
for a feedback system to be stable is that all the
poles of the system transfer function have negative
real parts. A system is stable if all the poles of the
transfer function are in the left-hand s-plane
Unstable System
A system is said to be unstable if for a bounded input, the system
produces an output which goes on increasing without any bounds and
the designer has no control over it.
An unstable system whose response grows without bounds can cause
damage to the system, adjacent property and also to human life.
One will not find an unstable system in working condition
Marginally Stable System

• A system is said to be marginally stable if the output of the system neither goes down to zero (like a stable system) nor goes on increasing (like an unstable system).
• The output of a marginally stable system oscillates within a finite range.
Performance Specs
Stability

• BIBO stability: bounded input results in bounded


output.
– A LTI system is BIBO stable if all poles of its
transfer function are in the LHP (pi, Re[pi]<0).

im1 ( s  zi ) C1 C2 Cn
Y ( s )  G ( s )U ( s )  K n    ... 
i 1 ( s  pi ) s  p1 s  p2 s  pn
n
 y (t )   Ci e pit
i 1
pi t t 
Note : Ci e 
  if Re[ pi ]  0
Example
• Consider the transfer function calculated in previous slides:

  G(s) = X(s)/Y(s) = C / (A s + B)

  The denominator polynomial is A s + B = 0.

• The only pole of the system is

  s = − B / A
Examples
• Consider the following transfer functions.
– Determine
  • whether the transfer function is proper or improper
  • the poles of the system
  • the zeros of the system
  • the order of the system

  i)   G(s) = (s + 3) / ( s (s + 2) )
  ii)  G(s) = s / ( (s + 1)(s + 2)(s + 3) )
  iii) G(s) = (s + 3)² / ( s (s² + 10) )
  iv)  G(s) = s²(s + 1) / ( s (s + 10) )
Stability of Control Systems
• The poles and zeros of the system are plotted in the s-plane to check the stability of the system.

  [s-plane: recall s = σ + jω; the jω axis separates the left half plane (LHP) from the right half plane (RHP)]
Stability of Control Systems
1. If all the poles of the system lie in left half
plane (LHP) the system is said to be Stable.
2. If any of the poles lie in right half plane(RHP)
the system is said to be unstable.
3. If pole(s) lie on the imaginary axis, the system is said to be marginally stable.
Stability of Control Systems
• For example,

  G(s) = C / (A s + B) ,  if A = 1, B = 3 and C = 10

• Then the only pole of the system lies at

  s = −3

  [s-plane: the pole is marked with an X at s = −3, in the LHP, so the system is stable]
Examples
• Consider the following transfer functions.
– Determine whether the transfer function is proper or improper
– Calculate the poles and zeros of the system
– Determine the order of the system
– Draw the pole-zero map
– Determine the stability of the system

  i)   G(s) = (s + 3) / ( s (s + 2) )
  ii)  G(s) = s / ( (s + 1)(s + 2)(s + 3) )
  iii) G(s) = (s + 3)² / ( s (s² + 10) )
  iv)  G(s) = s²(s + 1) / ( s (s + 10) )
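As a hedged MATLAB sketch for case (i) above, G(s) = (s + 3)/(s(s + 2)):

G = tf([1 3], [1 2 0]);   % denominator s(s + 2) = s^2 + 2s
pole(G)                   % poles at 0 and -2
zero(G)                   % zero at -3
pzmap(G)                  % pole-zero map
% One pole on the imaginary axis (s = 0) and the other in the LHP, so by the
% rules above the system is marginally stable.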
Performance Specs: Stability
The "Roots" of a Response

[s-plane sketch: pole locations with Re(s) < 0 give stable responses, poles on the imaginary axis give marginally stable responses, and poles with Re(s) > 0 give unstable responses.]
Types of system description and their corresponding definitions of
asymptotic stability, marginal stability, and instability.
Various Stability criteria have been developed.
These criteria give pertinent information regarding
the stability of a system without directly applying the
definitions for stability and without requiring
complicated numerical procedures. The most
popular criteria are the following:
1. The algebraic criteria (Routh, Hurwitz, and the continued-fraction criteria),
2. the Nyquist criterion,
3. the Bode criterion,
4. the Nichols criterion,
5. the root locus technique, and
6. the Lyapunov criterion.
Criteria 1 to 5 are in the frequency domain (criteria 2 to 5 are graphical methods); only the Lyapunov criterion is in the time domain.
Performance Specs & Steady-state error
• Steady-state (tracking) error of a stable system:

  ess = lim_{t->inf} e(t) = lim_{t->inf} (r(t) - y(t))

  where r(t) is the reference input and y(t) is the system output.

How accurately can a system achieve the desired state?
Final value theorem: if all poles of sF(s) are in the open left half of the s-plane, then

  lim_{t->inf} f(t) = lim_{s->0} sF(s)

This makes it easy to evaluate the system's long-term behavior without solving for the full response.
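A minimal numeric illustration of the final value theorem (a hypothetical first-order example, not taken from the slides; assumes the Control System Toolbox):

% For G(s) = 5/(s + 2) with a unit step input R(s) = 1/s,
% lim_{t->inf} y(t) = lim_{s->0} s*G(s)*(1/s) = 5/2.
G = tf(5, [1 2]);
yss_fvt = dcgain(G)        % evaluates G(s) at s = 0 -> 2.5
[y, t] = step(G, 10);      % simulate long enough for the transient to decay
yss_sim = y(end)           % approximately 2.5, matching the final value theorem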
Performance Specs: Robustness
• Disturbance rejection: steady-state error caused by external disturbances
– Can a system track the reference input despite external disturbances?
– Denial-of-service attacks
• Sensitivity: relative change in steady-state output divided by the relative change of a system parameter
– Can a system track the reference input despite variations in the system?
– Increased task execution times
– Device failures
Performance Specs Goal of Feedback Control
• Guarantee stability
• Improve transient response
– Short settling time
– Small overshoot
• Small steady state error
• Improve robustness wrt uncertainties
– Disturbance rejection
– Low sensitivity
The Design and Analysis of a Control System:
The first step in analyzing a control system is to derive a mathematical model of the system. Once such a model is obtained, various methods are available for the analysis of system performance.
Three requirements enter into the design of a control system:
1. Transient response/performance
2. Stability: the degree or extent of system stability
3. Steady-state errors/performance
Transient response/performance
For an elevator:
Effects on passengers
• Slow transient response -> passengers become impatient
• Excessively fast response -> uncomfortable
• If the elevator oscillates about the arrival floor, how do you feel?
Structural reasons
• Too fast -> may cause permanent physical damage
Steady-state response
What remains after the transients have decayed to zero.
We care about accuracy!
• No error – the elevator accurately stops at the 4th floor, with the car floor level with the building floor
• Steady-state error – some inaccuracy occurs
• Therefore, analyze the error and design corrective action to reduce it
Stability
For a linear system:
Total response = Natural response + Forced response
Control systems must be designed to be stable. For a control system to be useful, the natural response must decay to zero as time approaches infinity (or, at worst, remain as a bounded oscillation).
In an unstable system, the natural response grows without bound rather than diminishing to zero or oscillating. Eventually, the natural response becomes so much greater than the forced response that the system is no longer controlled. This condition, called instability, may lead to self-destruction of the physical device if there are no limit stops.
(Figure: pole locations in the left-hand s-plane versus the right-hand s-plane and the corresponding responses.)
The following example illustrates the stability
conditions of systems with reference to the poles of
the system transfer functions that are also the roots
of the characteristic equation. EXAMPLES: The
following closed-loop transfer functions and their
associated stability conditions are given.
Stable system vs. unstable system; stability vs. instability.
(*SAS – system of active stability)
(Figure: vigorous exercising on the 12th floor likely led to mechanical resonance of the building, triggering a two-day evacuation.)
In practice, the input signal to a control system is
not known ahead of time but is random in nature,
and the instantaneous input cannot be expressed
analytically. In analyzing and designing control
systems, we must have a basis of comparison of
performance of various control systems. This basis
may be set up by specifying particular test input
signals and by comparing the responses of various
systems to these input signals. Typical Test
Signals. Six commonly used test input signals are
step functions, ramp functions, acceleration
functions, impulse functions, sinusoidal
functions, and white noise.
With these test signals, mathematical and
experimental analyses of control systems can be
carried out easily, since the signals are very
simple functions of time. If the inputs to a control
system are gradually changing functions of time,
then a ramp function of time may be a good test
signal. Similarly, if a system is subjected to
sudden disturbances, a step function of time
may be a good test signal; and for a system
subjected to shock inputs, an impulse function
may be best.
• Once a control system is designed on the
basis of test signals, the performance of
the system in response to actual inputs is
generally satisfactory. The use of such test
signals enables one to compare the
performance of many systems on the
same basis. The time response of a
control system consists of two parts: the
transient response and the steady-state
response.
Transient Response and Steady-State Response
By transient response, we mean the response that goes from the initial state to the final state.
By steady-state response, we mean the manner in which the system output behaves as t approaches infinity. Thus the system response c(t) may be written as

  c(t) = ctr(t) + css(t)

where the first term on the right-hand side of the equation is the transient response and the second term is the steady-state response.
Stability
• Absolute Stability, Relative Stability, and Steady-State Error. In designing a control system, we must be able to predict the dynamic behavior of the system from a knowledge of the components. The most important characteristic of the dynamic behavior of a control system is absolute stability, that is, whether the system is stable or unstable. A control system is in equilibrium if, in the absence of any disturbance or input, the output stays in the same state.
A linear time-invariant control system is stable if
the output eventually comes back to its
equilibrium state when the system is subjected to
an initial condition.
A linear time-invariant control system is critically
stable if oscillations of the output continue
forever.
It is unstable if the output diverges without bound
from its equilibrium state when the system is
subjected to an initial condition.
Relative Stability and Steady-state Error. Since a
physical control system involves energy storage, the
output of the system, when subjected to an input,
cannot follow the input immediately but exhibits a
transient response before a steady state can be
reached. The transient response of a practical control
system often exhibits damped oscillations before
reaching a steady state. If the output of a system at
steady state does not exactly agree with the input, the
system is said to have steady state error. This error is
indicative of the accuracy of the system. In analyzing a
control system, we must examine transient-response
behavior and steady-state behavior.
Steady-State Error
• Steady-state error is the difference between the input and the output for a prescribed test input as t→∞. Test inputs used for steady-state error analysis and design are summarized in the table below. In order to explain how these test signals are used, let us assume a position control system, where the output position follows the input commanded position. Step inputs represent constant position and thus are useful in determining the ability of the control system to position itself with respect to a stationary target, such as a satellite in geostationary orbit. An antenna position control is an example of a system that can be tested for accuracy using step inputs.
Examples on Steady-state error

1. Let us determine the appropriate value of K1 and


calculate the steady-state error for a unit step input
for the system shown when
Example 2: Non-unity Feedback Control System
Let us consider the system in the Figure below,
where we assume we cannot insert a gain K1
following R1(s). Then the actual error is given by
MATLAB and Control
MATLAB Toolboxes for Control
• Linear Control: Control System Toolbox, Simulink®, Mu Toolbox
• Nonlinear Control: Nonlinear Control Toolbox, Fuzzy Toolbox, Simulink®
• Identification: Identification Toolbox, Frequency-Domain ID Toolbox, Simulink®


MATLAB and Control
• Modeling Tools



Control System Toolbox
Core Features
 Tools to manipulate LTI models

 Classical analysis and design


 Bode, Nyquist, Nichols diagrams
 Step and impulse response
 Gain/phase margins
 Root locus design

 Modern state-space techniques


 Pole placement
 LQG regulation



Control System Toolbox
LTI Objects (Linear Time Invariant)
 4 basic types of LTI models
 Transfer Function (TF)
 Zero-pole-gain model (ZPK)
 State-Space models (SS)
 Frequency response data model (FRD)

 Conversion between models

 Model properties (dynamics)



Control System Toolbox
Transfer Function

  H(s) = (p1 s^n + p2 s^(n-1) + ... + p(n+1)) / (q1 s^m + q2 s^(m-1) + ... + q(m+1))

where
  p1, p2, ..., p(n+1) are the numerator coefficients
  q1, q2, ..., q(m+1) are the denominator coefficients

Control System Toolbox
Transfer Function

• Consider a linear time-invariant (LTI) single-input/single-output system

  y'' + 6y' + 5y = 4u' + 3u

• Applying the Laplace transform to both sides with zero initial conditions:

  G(s) = Y(s)/U(s) = (4s + 3)/(s^2 + 6s + 5)
Control System Toolbox
Transfer Function

>> num = [4 3];
>> den = [1 6 5];
>> sys = tf(num,den)

Transfer function:
     4 s + 3
  -------------
  s^2 + 6 s + 5

>> [num,den] = tfdata(sys,'v')
num =
     0     4     3
den =
     1     6     5


Control System Toolbox
Zero-pole-gain model (ZPK)

  H(s) = K (s - p1)(s - p2)...(s - pn) / ((s - q1)(s - q2)...(s - qm))

where
  p1, p2, ..., pn are the zeros of H(s)
  q1, q2, ..., qm are the poles of H(s)


Control System Toolbox
Zero-pole-gain model (ZPK)

 Consider the same LTI single-input/single-output system

  y'' + 6y' + 5y = 4u' + 3u

 Applying the Laplace transform to both sides with zero initial conditions:

  G(s) = Y(s)/U(s) = (4s + 3)/(s^2 + 6s + 5) = 4(s + 0.75)/((s + 1)(s + 5))
Control System Toolbox
Zero-pole-gain model (ZPK)

>> sys1 = zpk(-0.75,[-1 -5],4)

Zero/pole/gain:
  4 (s+0.75)
  -----------
  (s+1) (s+5)

>> [ze,po,k] = zpkdata(sys1,'v')
ze =
   -0.7500
po =
    -1
    -5
k =
     4


Control System Toolbox
State-Space Model (SS)

  x' = A x + B u
  y  = C x + D u

where
  x is the state vector
  u and y are the input and output vectors
  A, B, C and D are the state-space matrices


Control System Toolbox
State-Space Models
• Consider the same LTI single-input/single-output system

  y'' + 6y' + 5y = 4u' + 3u

• A state-space model for this system is

  [x1'; x2'] = [0 1; -5 -6][x1; x2] + [0; 1] u,   [x1(0); x2(0)] = [0; 0]
  y = [3 4][x1; x2]


Control System Toolbox
State-Space Models
>> sys = ss([0 1; -5 -6],[0;1],[3,4],0)
a =
        x1   x2
   x1    0    1
   x2   -5   -6
b =
        u1
   x1    0
   x2    1
c =
        x1   x2
   y1    3    4
d =
        u1
   y1    0


Control System Toolbox
Conversion between different models (see the example after this list):
• tf2ss: Transfer Function -> State Space
• ss2tf: State Space -> Transfer Function
• tf2zp: Transfer Function -> Zero-pole-gain
• zp2tf: Zero-pole-gain -> Transfer Function
• ss2zp: State Space -> Zero-pole-gain
• zp2ss: Zero-pole-gain -> State Space
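For example, the conversions can be exercised on the system G(s) = (4s + 3)/(s^2 + 6s + 5) used above (a sketch; assumes the Control System Toolbox):

num = [4 3]; den = [1 6 5];
[A, B, C, D] = tf2ss(num, den);     % transfer function -> state space
[n2, d2] = ss2tf(A, B, C, D);       % state space -> transfer function (recovers num/den)
[z, p, k] = tf2zp(num, den);        % transfer function -> zero-pole-gain: z = -0.75, p = [-1 -5], k = 4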


Control System Toolbox

Time Responses of Systems

• Impulse Response (impulse)


• Step Response (step)
• General Time Response (lsim)
• gensig - Generate input signal for lsim.



Control System Toolbox
Time Response of Systems
 The impulse response of a system is its output
when the input is a unit impulse.
 The step response of a system is its output when
the input is a unit step.
 The general response of a system to any input can
be computed using the lsim command.



Control System Toolbox
Time Response of Systems
Assignment 5. Given the LTI system

  G(s) = (3s + 2)/(2s^3 + 4s^2 + 5s + 1)

plot the following responses (a sketch of the commands follows):
 The impulse response, using the impulse command.
 The step response, using the step command.
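One possible way to produce the requested plots (a sketch, not the only solution):

G = tf([3 2], [2 4 5 1]);     % G(s) = (3s + 2)/(2s^3 + 4s^2 + 5s + 1)
figure; impulse(G); title('Impulse response')
figure; step(G);    title('Step response')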


Control System Toolbox
Time Response of Systems
(Figure: the resulting impulse and step response plots.)


Graphical Methods Available To Analyse Control
Systems
Four primarily graphical methods are available to
the control system analyst which are simpler and
more direct than time-domain methods for practical
linear models of feedback control systems. They
are:
1. Bode- Plot Representations
2. Nyquist Diagrams
3. Nichols Charts
4. The Root-Locus Method
Basic Goal of Control System Design
The basic goal of control system design is meeting performance
specifications. Performance specifications are the constraints
put on system response characteristics. They may be stated in
any number of ways. Generally they take two forms:
1. Frequency-domain specifications (pertinent quantities
expressed as functions of frequency)
2. Time-domain specifications (in terms of time response)
The desired system characteristics may be prescribed in either
or both of the above forms. In general, they specify three
important properties of dynamic systems:
1. Speed of response
2. Relative stability
3. System accuracy or allowable error
Frequency-domain specifications for both
continuous and discrete-time systems are often
stated in one or more of the following seven ways.
To maintain generality, we define a unified open-
loop frequency response function GH(ω):

1. Gain Margin, 2. Phase Margin,


3. Delay Time Td 4. Bandwidth (BW)
5. Cutoff Rate 6. Resonance Peak Mp
7. Resonant Frequency ωp
3. Delay Time Td: interpreted as a frequency-domain specification, it is a measure of the speed of response and is given by Td(ω) = -dγ/dω, where γ = arg(C/R). The average value of Td(ω) over the frequencies of interest is usually specified.
4. Bandwidth (BW): The bandwidth of a system is defined as that range of frequencies over which the system responds satisfactorily.
5. Cutoff Rate: the frequency rate at which the magnitude ratio decreases beyond the cutoff frequency ωc. For example, the cutoff rate may be specified as 6 dB/octave. An octave is a factor-of-two change in frequency.
6. Resonance Peak Mp: a measure of relative stability; it is the maximum value of the magnitude of the closed-loop frequency response, Mp = max over ω of |(C/R)(ω)|.
7. Resonant Frequency ωp: the frequency at which Mp occurs.
(Figure: the frequency response of an underdamped second-order continuous system.)


TIME-DOMAIN SPECIFICATIONS
Time-domain specifications are customarily defined
in terms of unit step, ramp, and parabolic
responses. Each response has a steady state and a
transient component.
Steady state performance, in terms of steady state
error, is a measure of system accuracy when a
specific input is applied.
Transient Response is often described in terms of
the unit step function response. Typical
specifications are:
1. Delay time td
2. Rise time tr
3. Peak time tp
4. Maximum Overshoot Mp
5. Settling Time ts
The transient response of a practical control system
often exhibits damped oscillations before reaching
steady state.
The Plot of Unit step response of an Underdamped Continuous Second-order System
PERFORMANCE SPECIFICATIONS
(Figure: typical step response of the controlled variable versus time, showing the reference level, the overshoot (%), the steady-state error, the settling time, and the transient-state and steady-state regions.)
TRANSIENT-RESPONSE ANALYSIS WITH MATLAB
Unit-step response curve.
EXAMPLE
 A higher-order system is defined by

STABILITY ANALYSIS
 BIBO: Bounded Input Bounded Output systems.
 For LTI systems this requires that all poles of the closed-loop

transfer function lie in the left half of the complex plane.


 Determine if the transfer function has any poles either on the
imaginary axis or in the right half of the s-plane.

 Routh-Hurwitz criterion: determine if any roots of a polynomial


lie outside the left half of the complex plane. It does not find the
exact locations of the roots.

 Other methods find the exact locations of the roots. For first and
second order systems, analytical method can be used. For higher
order systems, computer programs or simulation are required.
The Routh-Hurwitz Stability Criterion
Consider the following equation of the nth degree in s:

  an s^n + a(n-1) s^(n-1) + … + a1 s + a0 = 0,    (1)

where the coefficients an, a(n-1), …, a1, a0 are all of the same sign and none is zero.
If the coefficients of the characteristic equation do not all have the same sign, or if not all the terms are present, some roots will have positive real parts and the system will be unstable.
Assignment 10: Make a Routh table and tell how
many roots of the following polynomial are in the
right half-plane and in the left half-plane.

Routh-Hurwitz Criterion: Special Cases


Two special cases can occur: (1) The Routh table
sometimes will have a zero only in the first column of
a row, or (2) the Routh table sometimes will have an
entire row that consists of zeros. Let us examine the
first case.
Zero Only in the First Column
If the first element of a row is zero, division by zero
would be required to form the next row. To avoid
this phenomenon, an epsilon, ϵ, is assigned to
replace the zero in the first column. The value ϵ is
then allowed to approach zero from either the
positive or the negative side, after which the signs of
the entries in the first column can be determined.
Let us look at an example.
Entire Row is Zero: We now look at the second
special case. Sometimes while making a Routh table,
we find that an entire row consists of zeros because
there is an even polynomial that is a factor of the
original polynomial. This case must be handled
differently from the case of a zero in only the first
column of a row. Let us look at an example that
demonstrates how to construct and interpret the Routh
table when an entire row of zeros is present.
Example: Determine the number of right-half-
plane poles in the closed-loop transfer
function
(The worked Routh table for this example appeared here as a figure, with its rows labeled alternately even and odd.)
  an s^n + a(n-1) s^(n-1) + … + a1 s + a0 = 0    (1)

 Step 1. Arrange the coefficients of the given equation in two rows in the following fashion:
  Row 1: an      a(n-2)  a(n-4) …
  Row 2: a(n-1)  a(n-3)  a(n-5) …

 Step 2. Form a third row from the first and second rows as follows:
  Row 1: an      a(n-2)  a(n-4) …
  Row 2: a(n-1)  a(n-3)  a(n-5) …
  Row 3: b(n-1)  b(n-3)  b(n-5) …
where
  b(n-1) = -(1/a(n-1)) * det[ an, a(n-2) ; a(n-1), a(n-3) ]
  b(n-3) = -(1/a(n-1)) * det[ an, a(n-4) ; a(n-1), a(n-5) ]
Step 3. Form a fourth row from the second and third rows:
  Row 2: a(n-1)  a(n-3)  a(n-5) …
  Row 3: b(n-1)  b(n-3)  b(n-5) …
  Row 4: c(n-1)  c(n-3)  c(n-5) …
where
  c(n-1) = -(1/b(n-1)) * det[ a(n-1), a(n-3) ; b(n-1), b(n-3) ]
  c(n-3) = -(1/b(n-1)) * det[ a(n-1), a(n-5) ; b(n-1), b(n-5) ]
Step 4. continue this procedure of forming a
new row from the two preceding rows until only zeros are
obtained.

In general an array of n+1 rows will result with the last two
rows each containing a single element.

Statement : Routh-Hurwitz criterion states that the system whose
characteristic equation is eqn (1), is stable if and only if all the
elements in the first column of the array formed by the coefficients
in the above manner have the same algebraic sign.

If the elements in the first column are not all of the same sign, the
number of sign changes in the first column = the number of roots
with +ve real parts.

 Problem 1
 Apply the Routh-Hurwitz criterion to the following equation and find the stability of the system:
  s^4 + 2s^3 + 3s^2 + 4s + 5 = 0
Solution:
Routh array
  1st row: 1    3   5
  2nd row: 2    4   0
  3rd row: 1    5
  4th row: -6
  5th row: 5
Since the number of sign changes in the first column is two, the given equation has two roots with positive real parts. Hence the system is unstable.
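The result of Problem 1 can be cross-checked numerically in MATLAB (a sketch using the built-in roots function):

r = roots([1 2 3 4 5]);       % roots of s^4 + 2s^3 + 3s^2 + 4s + 5
sum(real(r) > 0)              % returns 2, in agreement with the two sign changes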
 Problem 2
 Apply the Routh-Hurwitz criterion to the following equation and find the stability of the system:
  s^4 + 7s^3 + 17s^2 + 17s + 6 = 0
Solution:
Routh array
  1st row: 1      17   6
  2nd row: 7      17
  3rd row: 14.58  6
  4th row: 14.12
  5th row: 6
Since all the coefficients in the first column are of the same sign (+ve), the given equation has no roots with positive real parts. Hence the system is stable.
 Problem 3
 Apply the Routh-Hurwitz criterion to the following equation and find the stability of the system:
  s^5 + 2s^4 + 2s^3 + 4s^2 + 11s + 10 = 0
 Solution
Here all the coefficients are positive and none is zero. Hence we proceed to write the Routh array as follows:
  1st row: 1   2   11
  2nd row: 2   4   10
  3rd row: 0   6
 We cannot proceed further because the first element in the 3rd row is zero.
 The situation can be remedied by multiplying the original eqn by a
factor (s+a), where a>0, and then applying the same procedure.
 The reasoning is that multiplication by (s+a) will change the
vanishing element without changing the number of roots with +ve
real parts.
 Any +ve real number can be used for a; the simplest choice is a=1.

 Multiplying the given eqn by (s+1), we have the new eqn

 S6+3s5+4s4+6s3+15s2+21s+10=0

 s^6 + 3s^5 + 4s^4 + 6s^3 + 15s^2 + 21s + 10 = 0
The Routh array now becomes:
   1    4   15   10
   1    2    7        (after dividing by 3)
   1    4    5        (after dividing by 2)
  -1    1             (after dividing by 2)
   1    1             (after dividing by 5)
   2
   1
 There are two changes of sign in the first column of the Routh
array, which indicates that there are two roots having +ve real
parts in the given eqn.
 Hence the system is unstable.

 Problem
 Apply the Routh-Hurwitz criterion to the following equation and find the stability of the system:
  s^5 + s^4 + 4s^3 + 24s^2 + 3s + 63 = 0
 Solution
Here all the coefficients are positive and none is zero. Hence we proceed to write the Routh array as follows:
  1st row: 1    4    3
  2nd row: 1   24   63
  3rd row: -1   -3       (after dividing by 20)
  4th row: 1    3        (after dividing by 21)
  5th row: 0
We cannot proceed further because of the zero in the fifth row.

The earlier technique of multiplying the original eqn by a factor


(s+1) fails here because the whole fifth row (which happens to
have only one element) vanishes.

 Examination reveals that this situation will occur whenever the array
has two consecutive rows in which the ratio of the corresponding
elements is constant.

 When this happens, all the elements in the following row will vanish.

 This is an indication that the given eqn has at least one pair of roots
which lie radially opposite to each other and equidistant from the
origin.

 The row can be completed by forming an auxiliary polynomial in s^2, using the elements in the last non-vanishing row as its coefficients; the coefficients of the derivative of this auxiliary polynomial are then taken as the elements of the following row.

 In this problem, the elements in the last non-vanishing row are 1 and 3; hence the auxiliary polynomial in s^2 is
  s^2 + 3
 The derivative of this polynomial is 2s, whose coefficient 2 is taken as the element in the row following the last non-vanishing row.
 The Routh array now becomes:
   1    4    3
   1   24   63
  -1   -3
   1    3
   2
   3
 There are two changes of sign in the first column and hence there are two roots with positive real parts. Hence the system is unstable.
 The roots of the equation formed by the auxiliary polynomial, s^2 + 3 = 0, are also roots of the original equation. In this problem they are s = ±j√3, which have zero real parts.
Some basic results:
 Second-order system:

  P2(s) = s^2 + a1 s + a0 = (s - p1)(s - p2)
        = s^2 - (p1 + p2)s + p1 p2

 For a third-order system:

  P3(s) = s^3 + a2 s^2 + a1 s + a0 = (s - p1)(s - p2)(s - p3)
        = s^3 - (p1 + p2 + p3)s^2 + (p1 p2 + p1 p3 + p2 p3)s - p1 p2 p3

 We see that the coefficients of the polynomial are given by:
  a(n-1) = negative of the sum of all roots,
  a(n-2) = sum of the products of all possible combinations of roots taken 2 at a time,
  a(n-3) = negative of the sum of the products of all possible combinations of roots taken 3 at a time.

 Suppose that all the roots are real and on the left half plane, then
all coefficients of the polynomial are positive.
 If all the roots are real and in the left half plane then no
coefficient can be zero.
 The only case for which a coefficient can be negative is when
there is at least one root in the right half plane.
 The above is also true for complex roots.
1) If any coefficient is equal to zero, then not all roots are in
the left half plane.
2) If any coefficient is negative, then at least one root is in
the right half plane.
3) The converse of rule 2) is not always true.

 Example:

  P(s) = s^3 + s^2 + 2s + 8 = (s + 2)(s^2 - s + 4)

All coefficients are positive, but two (complex) roots are in the right half-plane.
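This example is easy to verify numerically (a sketch using MATLAB's roots function):

r = roots([1 1 2 8])          % roots of s^3 + s^2 + 2s + 8: -2 and about 0.5 +/- 1.94j
sum(real(r) > 0)              % returns 2: two RHP roots despite all-positive coefficients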
ROUTH-HURWITZ STABILITY
CRITERION
 All the coefficients must be positive if all the roots are in the left
half plane. Also it is necessary that all the coefficients for a stable
system be nonzero.
 These requirements are necessary but not sufficient. That is we
know the system is unstable if they are not satisfied; yet if they are
satisfied, we must proceed further to ascertain the stability of the
system.
 For example,

  q(s) = s^3 + s^2 + 2s + 8 = (s + 2)(s^2 - s + 4)

is unstable, yet all of its coefficients are positive.

 The Routh-Hurwitz criterion is a necessary and sufficient criterion for the stability of linear systems.
 The Routh-Hurwitz criterion applies to a polynomial of the form:

  P(s) = an s^n + a(n-1) s^(n-1) + ... + a1 s + a0,   with a0 ≠ 0 assumed.

 The Routh-Hurwitz array:

  s^n      an      a(n-2)  a(n-4)  a(n-6)  ....
  s^(n-1)  a(n-1)  a(n-3)  a(n-5)  a(n-7)  ....
  s^(n-2)  b1      b2      b3      b4      ....
  s^(n-3)  c1      c2      c3      c4      ....
   .        .       .
   .        .       .
  s^2      k1      k2
  s^1      l1
  s^0      m1
 Columns of s are only for accounting.
 The b row is calculated from the two rows above it.

 The c row is calculated from the two rows directly above it.

 Etc…

 The equations for the coefficients of the array are:

  b1 = -(1/a(n-1)) * det[ an, a(n-2) ; a(n-1), a(n-3) ],   b2 = -(1/a(n-1)) * det[ an, a(n-4) ; a(n-1), a(n-5) ],  ...

  c1 = -(1/b1) * det[ a(n-1), a(n-3) ; b1, b2 ],   c2 = -(1/b1) * det[ a(n-1), a(n-5) ; b1, b3 ],  ...

 Note: the determinant in the expression for the ith coefficient in a row is formed from the first column and the (i+1)th column of the two preceding rows.
The number of polynomial roots in the right half plane is equal to the
number of sign changes in the first column of the array.

P ( s )  s 3  s 2  2 s  8  ( s  2)( s 2  s  4)
Example:
The Routh array is :
s3 1 2
s2 1 8
s1 -6
s0 8

Since there are two sign changes on the first column, there are two
roots of the polynomial in the right half plane: system is unstable.
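The first-column bookkeeping above can be automated for the basic case. The sketch below (a hypothetical helper, not part of any toolbox) builds the Routh array and counts sign changes; it assumes Case 1, i.e. no zero ever appears in the first column and no row is entirely zero:

function numRHP = routh_rhp_count(p)
% p is the coefficient vector [an ... a1 a0] of the characteristic polynomial.
n = numel(p) - 1;                        % polynomial order
cols = ceil((n + 1)/2);
R = zeros(n + 1, cols);
R(1, :) = p(1:2:end);                    % s^n row
r2 = p(2:2:end);
R(2, 1:numel(r2)) = r2;                  % s^(n-1) row
for i = 3:n + 1                          % each later row from the two rows above it
    for j = 1:cols - 1
        R(i, j) = (R(i-1,1)*R(i-2,j+1) - R(i-2,1)*R(i-1,j+1)) / R(i-1,1);
    end
end
numRHP = sum(diff(sign(R(:, 1))) ~= 0);  % sign changes in the first column
end

% e.g. routh_rhp_count([1 1 2 8]) returns 2, matching the example above.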

Note: The Routh-Hurwitz criterion shows only the stability of the


system, it does not give the locations of the roots, therefore no
information about the transient response of a stable system is
derived from the R-H criterion. Also it gives no information about
the steady state response. Obviously other analysis techniques in
addition to the R-H criterion are needed.
From the equations, the array cannot be completed if the first element in
a row is zero. Because the calculations require divisions by zero. We
have 3 cases:
Case 1: none of the elements in the first column of the array is zero. This
is the simplest case. Follow the algorithm as shown in the previous
slides.

Case 2: The first element in a row is zero, with at least one nonzero element in the same row. In this case, replace the zero first element by a small number ϵ. All the elements that follow will be functions of ϵ. After all the elements are calculated, the signs of the elements in the first column are determined by letting ϵ approach zero.
Example:

  P(s) = s^5 + 2s^4 + 2s^3 + 4s^2 + 11s + 10

  s^5    1       2    11
  s^4    2       4    10
  s^3    ϵ       6
  s^2   -12/ϵ   10
  s^1    6
  s^0   10

When we calculate the elements, b1 = 0 and b2 = 6; we therefore put b1 = ϵ and calculate the other coefficients (you should verify the results).

 There are 2 sign changes regardless of whether ϵ is positive or negative. Therefore the system is unstable.
 Case 3: All elements in a row are zero.
 Example:
  P(s) = s^2 + 1
  s^2   1   1
  s^1   0
  s^0
 Here the array cannot be completed because of the zero element in the first column.
 Another example:
  P(s) = s^3 + s^2 + 2s + 2
The array is:
  s^3   1   2
  s^2   1   2
  s^1   0
  s^0
 A Case 3 polynomial contains an even polynomial as a factor, called the auxiliary polynomial. In the first example the auxiliary polynomial is s^2 + 1, and in the second example it is s^2 + 2.
 A Case 3 polynomial may be analyzed as follows: suppose that the row of zeros is the s^i row. The auxiliary polynomial is differentiated with respect to s, and the coefficients of the resulting polynomial are used to replace the zeros in the s^i row. The calculation of the array then continues as in Case 1.
 Example:
  P(s) = s^4 + s^3 + 3s^2 + 2s + 2
The Routh array is:
  s^4   1   3   2
  s^3   1   2
  s^2   1   2
  s^1   0
  s^0
Since the s^1 row contains zeros, the auxiliary polynomial is obtained from the s^2 row:
  Paux(s) = s^2 + 2
Its derivative is 2s, so 2 replaces the 0 in the s^1 row, and the Routh array is then completed.
P ( s )  s 4  s 3  3s 2  2 s  2
The Routh array now becomes :
s4 1 3 2
s3 1 2
s2 1 2
s1 2
s0 2
Hence there are no roots in the right half plane.

Note : When the re is a row of zeros in the Routh array, the


system is nonstable. That is it will have roots either on the
imaginary axis (as in this example), or it has roots on the
right half plane.
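A quick numerical confirmation of this example (a sketch using MATLAB's roots function):

r = roots([1 1 3 2 2])        % roots of s^4 + s^3 + 3s^2 + 2s + 2
sum(real(r) > 0)              % 0 roots in the RHP
% two of the roots are at +/- j*sqrt(2), i.e. on the imaginary axis, so the
% system is marginally stable rather than asymptotically stable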
Frequency Response
• By the term frequency response, we mean the
steady-state response of a system to a sinusoidal
input. In frequency-response methods, we vary
the frequency of the input signal over a certain
range and study the resulting response.
Frequency-response methods were developed in
1930s and 1940s by Nyquist, Bode, Nichols, and
many others. The frequency-response methods
are most powerful in conventional control theory.
They are also indispensable to robust control
theory.
Advantage Of The Frequency-response Approach
 1.One advantage of the frequency-response
approach is that we can use the data obtained
from measurements on the physical system
without deriving its mathematical model.
 2. Another advantage of the frequency-response
approach is that frequency-response tests are, in
general, simple and can be made accurately by
use of readily available sinusoidal signal
generators and precise measurement equipment.
 The transfer functions of complicated
components can be determined experimentally
by frequency-response tests. In addition, the
frequency-response approach has
 3. the advantages that a system may be designed
so that the effects of undesirable noise are
negligible and that such analysis and design can
be extended to certain nonlinear control systems.
Advantages of frequency response analysis
4. The tests involve measurements under steady-state conditions, which are simpler to analyze than measurements of transient responses.
5. The tests are made on open-loop systems, which are not subject to instability problems.
6. The results give convenient access to control system order, gain, error constants, resonant frequencies, etc.
Disadvantages of frequency response analysis
1. It is not always easy to deduce transient response characteristics from a knowledge of the frequency response.
2. In completing tests it can be difficult to generate low-frequency signals and obtain the necessary measurements. Frequencies of 0.1 to 10 Hz are normally used; however, for process control, frequencies of one cycle over several hours may be required, while for fluid servos frequencies of more than 100 Hz may be encountered.
Frequency-Response Characteristics in Graphical Forms
The sinusoidal transfer function, a complex function
of the frequency ω, is characterized by its
magnitude and phase angle, with frequency as
the parameter. There are three commonly used
representations of sinusoidal transfer functions:
1. Bode diagram or logarithmic plot
2. Nyquist plot or polar plot
3. (Nichols plots) Log-magnitude-versus-phase plot
• Bode Diagrams or Logarithmic Plots. A Bode diagram consists of two graphs: one is a plot of the logarithm of the magnitude of a sinusoidal transfer function; the other is a plot of the phase angle; both are plotted against the frequency on a logarithmic scale. The standard representation of the logarithmic magnitude of G(jω) is 20 log|G(jω)|, where the base of the logarithm is 10. The unit used in this representation of the magnitude is the decibel, usually abbreviated dB.
Plotting Bode Diagrams with MATLAB.
The command bode computes magnitudes and
phase angles of the frequency response of
continuous-time, linear, time invariant systems.
Control System Toolbox
Frequency Response: Bode and Nyquist Plots
• Typically, the analysis and design of a control
system requires an examination of its frequency
response over a range of frequencies of interest.

• The MATLAB Control System Toolbox provides


functions to generate two of the most common
frequency response plots: Bode Plot (bode
command) and Nyquist Plot (nyquist command).



Control System Toolbox
Frequency Response: Bode Plot
Problem
• Given the LTI system

  G(s) = 1/(s(s + 1))

draw the Bode diagram for 100 values of frequency in the interval [10^-1, 10^1].


Control System Toolbox
Frequency Response: Bode Plot
>>bode(tf(1, [1 1 0]), logspace(-1,1,100));



Control System Toolbox
Frequency Response: Nyquist Plot
 The loop gain Transfer function G(s)
 The gain margin is defined as the multiplicative amount
that the magnitude of G(s) can be increased before the
closed loop system goes unstable
 Phase margin is defined as the amount of additional
phase lag that can be associated with G(s) before the
closed-loop system goes unstable



Control System Toolbox
Frequency Response: Nyquist Plot
Problem
Given the LTI system

  G(s) = (1280s + 640)/(s^4 + 24.2s^3 + 1604.81s^2 + 320.24s + 16)

draw the Bode and Nyquist plots for 100 values of frequency in the interval [10^-4, 10^3]. In addition, find the gain and phase margins.


Control System Toolbox
Frequency Response: Nyquist Plot
w=logspace(-4,3,100);
sys=tf([1280 640], [1 24.2 1604.81 320.24 16]);
bode(sys,w)
[Gm,Pm,Wcg,Wcp]=margin(sys)
%Nyquist plot
figure
nyquist(sys,w)



Control System Toolbox
Frequency Response: Nyquist Plot

The values of gain and phase margin and corresponding frequencies are

Gm = 29.8637 Pm = 72.8960 Wcg = 39.9099 Wcp = 0.9036


Control System Toolbox
Frequency Response Plots
bode - Bode diagrams of the frequency response.
bodemag - Bode magnitude diagram only.
sigma - Singular value frequency plot.
nyquist - Nyquist plot.
nichols - Nichols plot.
margin - Gain and phase margins.
allmargin - All crossover frequencies and related gain/phase
margins.
freqresp - Frequency response over a frequency grid.
evalfr - Evaluate frequency response at given frequency.
interp - Interpolates frequency response data.



The open-loop transfer function of a closed-loop control system may assume the following general form:

  G(s)H(s) = K (1 + sT1)(1 + sT2) … ωn^2 / ( s^N (1 + sTa)(1 + sTb) … (s^2 + 2ζωn s + ωn^2) )

The sinusoidal transfer function is obtained by substituting s = jω, thus

  G(jω)H(jω) = K (1 + jωT1)(1 + jωT2) … ωn^2 / ( (jω)^N (1 + jωTa)(1 + jωTb) … (ωn^2 - ω^2 + j2ζωn ω) )

where N is the type of the system.
• Initial slope of the Bode plot:

  Type of system N | Initial slope (-20N dB/decade) | Intersection with the 0 dB axis at
  0                | 0 dB/decade                    | parallel to the 0 dB axis
  1                | -20 dB/decade                  | ω = K
  2                | -40 dB/decade                  | ω = K^(1/2)
  3                | -60 dB/decade                  | ω = K^(1/3)
  …                | …                              | …
  N                | -20N dB/decade                 | ω = K^(1/N)
• Each first-order term (1 + jωT) in the denominator will contribute -20 dB/decade.
• Each first-order term (1 + jωT) in the numerator will contribute +20 dB/decade.
• Each second-order term in the denominator will contribute -40 dB/decade.
• Corner frequency:
Determination of the corner frequencies is vital, since the slope of the magnitude plot changes at each corner frequency.
• Each first-order term (1 + jωT) has a corner frequency of ω = 1/T.
• Each second-order term ωn^2/(s^2 + 2ζωn s + ωn^2) has a corner frequency of ωn.
Draw the Bode plot for the following open-loop transfer function:

  G(s)H(s) = 4/(s(1 + 0.5s)(1 + 0.08s))

and determine (a) the gain margin, (b) the phase margin, (c) the closed-loop stability.
  G(s)H(s) = 4/(s(1 + 0.5s)(1 + 0.08s))

Step 1: Check whether the given transfer function is in normalized form. Here, G(s)H(s) is in normalized form.
Then put s = jω:

  G(jω)H(jω) = 4/(jω(1 + 0.5jω)(1 + 0.08jω))

Step 2: The corner frequencies are
  ω = 1/0.5 = 2    and    ω = 1/0.08 = 12.5
The starting frequency of the Bode plot is taken lower than the lowest corner frequency.
Step 3: Determine the initial slope and the point of intersection with the frequency (ω) axis.
Since the system is type 1, the initial slope of the Bode plot is -20 dB/decade.
The point of intersection with the frequency (ω) axis occurs at ω = K = 4.
Step 4: Determine change in slope at each
corner frequency and plot the magnitude plot.
The initial slope of -20 db/ decade continues
till the lowest corner frequency (ω = 2) is
reached.
The corner frequency (ω = 2) is due to the
first order term in the denominator, (1+j0.5ω)
which will contribute -20 db/ decade.

Hence total slope becomes (-20 + -20) db/
decade = -40 db/ decade
( Note that a decade is a frequency band
from ω1 to 10ω1, where ω1 is any
frequency)
( Also note that an octave is a frequency
band from ω1 to 2ω1, where ω1 is any
frequency)
( Octave = 1/0.301decade)
Hence 20 db/ decade = 20 x 0.3 db/ octave
 = 6 db/ octave

The slope of -40 db/decade will continue till
the next corner frequency (ω = 12.5) is
reached.
The corner frequency (ω = 12.5) is due to the
first order term in the denominator,
(1+j0.08ω) which will contribute -20 db/
decade.
Hence total slope becomes (-40 + -20) db/
decade = -60 db/ decade.
The slope of -60 db/decade will continue for
frequencies greater than ω = 12.5.
Step 5: Calculate ∠G(jω)H(jω) for frequencies between ω = 1 and 100:

  ∠G(jω)H(jω) = -90° - tan^-1(0.5ω) - tan^-1(0.08ω)

  ω (rad/sec)              1     2     8     10    20    50
  ∠G(jω)H(jω) (degrees)  -121  -144  -202  -207  -234  -252
(Figure: Bode magnitude and phase plot — the magnitude starts with a -20 dB/decade slope, changes to -40 dB/decade at ω = 2 and to -60 dB/decade at ω = 12.5; the phase curve runs from about -120° down toward -270°. The gain crossover is at about ω = 2.7 and the phase crossover at about ω = 5.)
• The gain margin is defined as the negative of the gain in dB at the phase crossover frequency.
• The phase crossover frequency is 5 rad/sec, and the corresponding gain value is -13 dB.
• Hence the gain margin = 13 dB.
• The phase margin is given by 180° + ∠G(jω)H(jω) at the gain crossover frequency.
• The gain crossover frequency is 2.7 rad/sec, and the corresponding phase angle is -150°.
• Hence phase margin = 180° + (-150°) = 30°.
• The phase margin and gain margin are both positive. Hence the system is stable.
• If the phase crossover frequency is greater than the gain crossover frequency, the system will be stable.
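The graphical estimates above can be cross-checked with the Control System Toolbox (a sketch; the exact values returned by margin() will differ somewhat from values read off a hand-drawn Bode plot):

den = conv([1 0], conv([0.5 1], [0.08 1]));   % s(1 + 0.5s)(1 + 0.08s)
GH  = tf(4, den);
[Gm, Pm, Wcg, Wcp] = margin(GH);
Gm_dB = 20*log10(Gm)      % gain margin in dB (positive -> closed loop stable)
Pm                        % phase margin in degrees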
• Draw the Bode plot for the following open-loop transfer function:

  G(s)H(s) = 30/(s(1 + 0.5s)(1 + 0.08s))

and determine (a) the gain margin, (b) the phase margin, (c) the closed-loop stability.
• Draw the Bode plot for the following open-loop transfer function:

  G(s)H(s) = 48(s + 10)/(s(s + 20)(s^2 + 2.4s + 16))

and determine (a) the gain margin, (b) the phase margin, (c) the closed-loop stability.
  G(s)H(s) = 48(s + 10)/(s(s + 20)(s^2 + 2.4s + 16))

• Step 1: Check whether the given transfer function is in normalized form. Here, G(s)H(s) is not in normalized form:

  G(s)H(s) = (48 × 10 (s/10 + 1) × 16) / (20 × 16 × s (s/20 + 1)(s^2 + 2.4s + 16))

           = 1.5(0.1s + 1) × 16 / (s(0.05s + 1)(s^2 + 2.4s + 16))
Then put s = jω:

  G(jω)H(jω) = 1.5(0.1jω + 1) × 16 / (jω(0.05jω + 1)[j2.4ω + (16 - ω^2)])

• Step 2: The corner frequencies are
  ω = 1/0.1 = 10,
  ω = 1/0.05 = 20, and
  ω = ωn = √16 = 4, since ωn^2 = 16 in the quadratic term.
• The starting frequency of the Bode plot is taken lower than the lowest corner frequency.
• Step 3: Determine the initial slope and the point of intersection with the frequency (ω) axis.
• Since the system is type 1, the initial slope of the Bode plot is -20 dB/decade.
• The point of intersection with the frequency (ω) axis occurs at ω = K = 1.5.
• Step 4: Determine the change in slope at each corner frequency and plot the magnitude plot.
• The initial slope of -20 dB/decade continues until the lowest corner frequency (ω = 4) is reached.
• The corner frequency ω = 4 is due to the second-order (quadratic) term 16/[j2.4ω + (16 - ω^2)] in the denominator, which contributes -40 dB/decade.
• Hence total slope becomes (-20 + -40) db/ decade
= -60 db/ decade
• ( Note that a decade is a frequency band from ω1
to 10ω1, where ω1 is any frequency)
• ( Also note that an octave is a frequency band from
ω1 to 2ω1, where ω1 is any frequency)
• ( Octave = 1/0.301decade)
• Hence -60 db/ decade = -60 x 0.3 db/ octave
• = -18 db/ octave

• The slope of -60 db/decade will continue
till the next corner frequency (ω = 10) is
reached.
• The corner frequency (ω = 10) is due to
the first order term in the numerator,
(1+j0.1ω) which will contribute +20 db/
decade.
• Hence total slope becomes (-60 + +20)
db/ decade = -40 db/ decade.
• The slope of -40 db/decade will continue
till the next corner frequency (ω = 20) is
reached.

• The corner frequency (ω = 20) is due to
the first order term in the denominator,
(1+j0.05ω) which will contribute -20db/
decade.

• Hence total slope becomes (-40 + -20)


db/ decade = -60 db/ decade.

• The slope of -60 db/decade will continue


for frequencies greater than ω =20

• Step 5: Calculate ∠G(jω)H(jω) for frequencies between ω = 1 and 100:

  ∠G(jω)H(jω) = tan^-1(0.1ω) - 90° - tan^-1(0.05ω) - tan^-1(2.4ω/(16 - ω^2))

  ω (rad/sec)              1     4     5     10    20    50
  ∠G(jω)H(jω) (degrees)  -96   -169  -204  -235  -244  -256
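As with the previous example, the hand-drawn plot can be checked in MATLAB (a sketch; assumes the Control System Toolbox):

num = 48*[1 10];                              % 48(s + 10)
den = conv(conv([1 0], [1 20]), [1 2.4 16]);  % s(s + 20)(s^2 + 2.4s + 16)
GH  = tf(num, den);
margin(GH)        % draws the Bode plot and annotates the gain and phase margins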
Frequency Response Analysis
• Consider the process G(s) = 1/(s + 1).
(Figure: a sinusoidal input and the resulting output of the process plotted against time; at steady state the output is a sine wave of the same frequency with reduced amplitude and a phase lag.)
Frequency domain specifications
The gain margin is defined as the change in open-loop gain required to make the system unstable. Systems with greater gain margins can withstand greater changes in system parameters before becoming unstable in closed loop. Keep in mind that unity gain in magnitude is equal to a gain of zero in dB.
 The phase margin is defined as the change in open-loop phase shift required to make a closed-loop system unstable.
 The phase margin is the difference in phase between the phase curve and -180 deg at the point corresponding to the frequency that gives us a gain of 0 dB (the gain crossover frequency, Wgc).
 Likewise, the gain margin is the difference between the magnitude curve and 0 dB at the point corresponding to the frequency that gives us a phase of -180 deg (the phase crossover frequency, Wpc).
Gain and Phase Margin
(Figure: Bode plot with the gain and phase margins marked relative to the 0 dB and -180 deg lines.)
Developing a Bode Plot from the Transfer Function

  Gp(iω) = R(ω) + i I(ω)

  Ar(ω) = sqrt( R^2(ω) + I^2(ω) )

  φ(ω) = tan^-1( I(ω)/R(ω) )
Derivation of the Bode Plot for a First-Order Process

  Gp(s) = Kp/(τp s + 1),    Gp(iω) = Kp/(i τp ω + 1)

After rationalization:

  Gp(iω) = Kp/(τp^2 ω^2 + 1) - i Kp τp ω/(τp^2 ω^2 + 1)

  Ar(ω) = sqrt( Kp^2 + Kp^2 τp^2 ω^2 )/(τp^2 ω^2 + 1) = Kp/sqrt(τp^2 ω^2 + 1)

  φ(ω) = -tan^-1(τp ω)
Bode Plots
(Figure: Bode diagram comparing a constant factor, an integral factor, and a derivative factor — magnitude in dB and phase in degrees versus frequency in rad/sec.)
Bode Plots
(Figure: Bode diagram comparing a first-order factor in the numerator, a first-order factor in the denominator, and a first-order factor in both numerator and denominator.)
Bode Plots
(Figure: Bode diagram of a second-order system for the underdamped, critically damped and overdamped cases — magnitude and phase versus frequency in rad/sec.)
The Polar Plot or the Nyquist Plot: The Nyquist Criterion
Mathematical background:
• Consider the denominator of the overall transfer function:

  Do(s) = 1 + G(s)H(s) = K(s - s1)(s - s2)(s - s3)…(s - sm) / ((s - sa)(s - sb)(s - sc)…(s - sn))     Equation (1)

• where s1, s2, …, sm are the zeros of Do(s), sa, sb, …, sn are the poles of Do(s), and s is a complex variable: s = σ + jω.
• The function Do(s) apparently will also be
complex.
• Let us write Do(s) = u(s) + j v(s)
• When s takes on different values, both u(s)
and v(s) will vary.
• It is manifestly impossible to represent the
function Do(s) graphically in a single diagram
for all complex values of s.
• However, since the zeros, s1,s2,…,sm and the
poles, sa, sb,…, sn, completely characterize
the function Do(s), a simple plot in the s-plane
showing the location of zeros and poles,
provides a neat way of specifying properties of
Do(s) graphically.
• For each value , so , of s, there corresponds a
value of Do(so) = u(so) + j v(so) which can be
represented by a point in Do(s) plane.
• If we now allow s to take on successive
values along a closed contour C in the
• s-plane , a corresponding closed contour г,
as shown in the next slide, will be traced.
• We say that the contour C in the s-plane is
mapped onto the Do(s) plane as the contour
г by the functional relationship given in
• Equation(1)
• Mapping of a contour C in the s-plane onto a contour Γ in the Do(s) plane.
(Figure: (a) the s-plane, with axes σ and jω, showing a closed contour C; (b) the Do(s) plane, with axes u and jv, showing the corresponding mapped contour Γ.)
• We know that the exact shape of contour Γ depends critically on that of contour C; but why have we drawn contour Γ clockwise for a clockwise contour C, as shown?
• In order to answer this question we have to examine the four different cases listed below:
• (1) contour C encircles neither zeros nor poles of Do(s), (2) contour C encircles zeros but not poles, (3) contour C encircles poles but not zeros, (4) contour C encircles both poles and zeros.
• (1) Contour C encircles neither zeros nor poles:
• It is clear that a closed contour in the s-plane
maps onto a closed contour in the Do(s) plane.
• Contours C and г essentially divide the s &
Do(s) plane into two regions, one inside and
one outside the contours.
• If the contour C in the s-plane does not encircle
any poles or zeros, then the contour г in the
Do(s)-plane cannot encircle either the origin or
the point at infinity.
• As we trace the contour C in the clockwise direction (in the direction of the arrow), the interior of the contour, which contains neither zeros nor poles, is always on the right of the contour.
• Correspondingly, in the Do(s) plane the region
which contains neither the origin nor the point
at infinity should also be on the right of the
contour as г is traced.
• In other words, the interior of the contour C in
the s-plane is mapped onto the Do(s) plane as
the interior of the contour г, and if C is
described in clockwise direction, so is г .
Mapping of a contour C which encircles a zero of Do(s).
(Figure: (a) the s-plane with contour C enclosing the zero s1; (b) the Do(s) plane with the mapped contour Γ encircling the origin once.)
Mapping of a contour C which encircles two zeros of Do(s).
(Figure: (a) the s-plane with contour C enclosing the zeros s1 and s2; (b) the Do(s) plane with the mapped contour Γ encircling the origin twice.)
• (2) Contour C encircles zeros but not poles:
• As we trace the contour C in the direction of
arrow, the interior of the contour, which is
always on the right of the contour and which
contains a zero, s1, maps onto Do(s)-plane
as the contour г.
• г is correspondingly traced with its interior,
which contains the origin, always on its right.
• Hence if C is described in a clockwise
direction, so is г .

Mapping of a contour C which encircles a pole of Do(s).
(Figure: (a) the s-plane with contour C enclosing the pole sa; (b) the Do(s) plane with the mapped contour Γ.)
• The contour г encircles the origin of the
Do(s) plane once in clockwise direction
because the contour C encircles one zero in
the s-plane in the clockwise direction.
• In general, if the contour C encircles Z zeros
in the clockwise direction, the corresponding
• contour г encircles the origin of the Do(s)
plane Z times in clockwise direction .
• A zero of multiplicity k counts as k zeros.

• (3) Contour C encircles poles but not zeros:
• Here, as we trace the contour C in the direction of the arrow, the region on the right, which contains a pole sa, maps onto the Do(s)-plane as the region also on the right of the contour Γ, which contains the point at infinity as Γ is described.
• Hence the interior of contour C maps onto the exterior of the contour Γ.
Contour C which encloses the entire right half of the complex s-plane.
(Figure: the s-plane with a contour running up the entire jω axis from s = -j∞ through s = 0 to s = +j∞ and closing with a semicircle of infinite radius, |s| → ∞, around the right half-plane.)
Nyquist Diagram (Complex Plane Plot)
(Figure: a point on the Nyquist diagram at distance Ar from the origin and angle φ from the positive real axis.)

  R(ω) = Ar cos φ
  I(ω) = Ar sin φ

Therefore, you can use the same equations used to develop a Bode plot to make a Nyquist diagram.
Nyquist Plot
(Figure: Nyquist diagrams of a constant factor, an integral factor, and a derivative factor in the complex plane.)
Nyquist Plot
(Figure: Nyquist diagrams of a first-order factor in the denominator, a first-order factor in the numerator, and a first-order factor in both numerator and denominator.)
Nyquist Plot
(Figure: Nyquist diagrams of a second-order system for the underdamped, critically damped and overdamped cases.)
Nichols Plot
(Figure: Nichols chart of a constant factor, an integral factor, and a derivative factor — open-loop gain in dB versus open-loop phase in degrees.)
Nichols Plot
(Figure: Nichols chart of a first-order factor in the denominator, in the numerator, and in both numerator and denominator.)
Nichols Plot
(Figure: Nichols chart of a second-order system for the underdamped, critically damped and overdamped cases.)
The Nyquist Stability Criterion
 

The Nyquist plot allows us also to predict the stability and


performance of a closed-loop system by observing its open-
loop behavior. The Nyquist criterion can be used for design
purposes regardless of open-loop stability (Bode design
methods assume that the system is stable in open loop).
Therefore, we use this criterion to determine closed-loop
stability when the Bode plots display confusing information.
 The Nyquist diagram is basically a plot of G(jω), where G(s) is the open-loop transfer function and ω is a vector of frequencies which encloses the entire right-half plane. In drawing the Nyquist diagram, both positive and negative frequencies (from zero to infinity) are taken into account. In the illustration below we represent positive frequencies in red and negative frequencies in green.
The frequency vector used in plotting the Nyquist
diagram usually looks like this (if you can imagine
the plot stretching out to infinity):
However, if we have open-loop poles or zeros on the
jw axis, G(s) will not be defined at those points, and
we must loop around them when we are plotting the
contour. Such a contour would look as follows:
The Nyquist Stability Criterion
(Mathcad example: over the frequency range w = -100 to 100 with s(w) = jw, the loop gain

  G(w) = 50/(s(w)^3 + 9 s(w)^2 + 30 s(w) + 40)

is evaluated and Im(G(w)) is plotted against Re(G(w)) to form the Nyquist plot.)
The Cauchy criterion
The Cauchy criterion (from complex analysis) states that when taking a closed contour in
the complex plane, and mapping it through a complex function G(s), the number of times
that the plot of G(s) encircles the origin is equal to the number of zeros of G(s) enclosed
by the frequency contour minus the number of poles of G(s) enclosed by the frequency
contour. Encirclements of the origin are counted as positive if they are in the same
direction as the original closed contour or negative if they are in the opposite direction.
 
When studying feedback controls, we are not as interested in G(s) as in the closed-loop
transfer function:
  G(s)/(1 + G(s))
If 1+ G(s) encircles the origin, then G(s) will enclose the point -1.
Since we are interested in the closed-loop stability, we want to know if there are any
closed-loop poles (zeros of 1 + G(s)) in the right-half plane.
 
Therefore, the behavior of the Nyquist diagram around the -1 point in the real axis is very
important; however, the axis on the standard nyquist diagram might make it hard to see
what's happening around this point
The Nyquist Stability Criterion - Application
Knowing the number of right-half plane (unstable) poles in open loop (P), and the
number of encirclements of -1 made by the Nyquist diagram (N), we can determine
the closed-loop stability of the system.

If Z = P + N is a positive, nonzero number, the closed-loop system is unstable.

We can also use the Nyquist diagram to find the range of gains for a closed-loop unity
feedback system to be stable. The system we will test looks like this:

where G(s) is:

  (s^2 + 10s + 24)/(s^2 - 8s + 15)
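A sketch of the Z = P + N bookkeeping for this loop gain in MATLAB (assuming unity feedback and the Control System Toolbox):

G = tf([1 10 24], [1 -8 15]);
P = sum(real(pole(G)) > 0)        % open-loop RHP poles: 2 (poles at s = 3 and s = 5)
nyquist(G)                        % count the encirclements N of the -1 point from the plot
% closed-loop RHP poles Z = P + N; the closed loop is stable only if Z = 0
Z = sum(real(pole(feedback(G, 1))) > 0)   % direct check on the closed-loop poles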
Nichols chart
Nichols chart analysis is a modification of the Nyquist and
Bode methods. It is a frequency response method. The
Nichols chart is a useful technique for determining the
stability and the closed-loop frequency response of a
feedback system. Nichols chart is basically a
transformation of the M- and N-circles on the polar plot
into non-circular M and N counters on a dB magnitude
versus phase angle plot in rectangular coordinates. If the
open-loop frequency response function of a continuous-
time system is represented by GH(ω), then GH(ω) is
plotted on a Nichols chart is called a Nichols chart plot of
GH(ω). The Nichols chart has two advantages over the
polar plot. They are:
Nichols chart
(a) since |GH(ω)| is plotted on a logarithmic scale, a
much wider range of magnitude can be graphed
(b) the graph of GH(ω) is essentially a summation of
the individual magnitude and phase angle
contributions of its poles and zeros.
The stability is obtained from a plot of the open-loop
gain versus phase characteristics
Gain Margin, Phase Margin, Phase Crossover
Frequency, And Gain Crossover Frequency
The MATLAB command
  [Gm, Pm, Wcg, Wcp] = margin(sys)
returns Gm, the gain margin; Pm, the phase margin; Wcg, the frequency at which the gain margin is measured (the phase-crossover frequency); and Wcp, the frequency at which the phase margin is measured (the gain-crossover frequency). The following MATLAB commands are commonly used for obtaining the resonant peak and resonant frequency:
[mag, phase, w] = bode (num, den, w)
[mag, phase, w] = bode (sys, w)
[Mp, k] = max (mag),
resonant peak = 20 * log 10 (Mp)
resonant frequency = w(k)
The following lines are used in MATLAB program to
obtain bandwidth:
n=1
while 20 * log 10 (mag (n)) > – 3
n=n+1
end
bandwidth = w(n)
The Nichols Stability Method
Polar Stability Plot – Nichols, Mathcad implementation

This example makes a polar plot of a transfer function and draws one contour of constant closed-loop magnitude. To draw the plot, enter a definition for the transfer function G(s):

  G(s) = 45000/(s(s + 2)(s + 30))

The frequency range defined by the next two equations provides a logarithmic frequency scale running from 1 to 100; you can change this range by editing the definitions of m and ωm:

  m = 0 … 100,   ωm = 10^(0.02 m)

Now enter a value for M to define the closed-loop magnitude contour that will be plotted, M = 1.1, and calculate the points on the M-circle:

  MCm = -M^2/(M^2 - 1) + (M/(M^2 - 1)) exp(2π j (0.01 m))

The first plot shows G, the contour of constant closed-loop magnitude M, and the Nyquist plot of the open-loop system.
(Figure: Nyquist plot of G(jωm) together with the M-circle MCm and the -1 point on the real axis.)
The Nichols Stability Method: Mpω

  G(jω) = 1/(jω(jω + 1)(0.2jω + 1))

From the Nichols chart, Mpω = 2.5 dB at ωr = 0.8. The closed-loop phase angle at ωr is equal to -72°, and ωB = 1.33; the closed-loop phase angle at ωB is equal to -142°.
(Figure: Nichols chart with the -3 dB contour, the -72° point at ωr = 0.8, and the -142° point marked.)

The Nichols Stability Method

  G(jω) = 0.64/(jω((jω)^2 + jω + 1))

Phase margin = 30 degrees. On the basis of the phase margin we estimate ζ ≈ 0.30. However, Mpω = 9 dB = 2.8 at ωr = 0.88, and from the equation

  Mpω = 1/(2ζ sqrt(1 - ζ^2))

we obtain ζ = 0.18. We are confronted with conflicting estimates of ζ. The apparent conflict is caused by the nature of G(jω), which slopes rapidly toward the -180° line from the 0-dB axis. The designer must use the frequency-domain to time-domain correlations with caution.

The Nichols Stability Method
(Figure: Nichols chart with the gain margin (GM) and phase margin (PM) indicated.)
ROOT LOCUS (THEORY)
Consider the following standard control system, with characteristic equation 1 + KG(s)H(s) = 0, where K is varied.
(Figure: feedback block diagram — the reference R(s) enters a summing junction, the error drives G(s) to produce C(s), and H(s) feeds the output back to the summing junction.)
• The transient response and stability of a system depend upon its CLOSED-LOOP POLES.
• To build a better system, the movement of these poles can be adjusted by modifying the system parameters.
• The ROOT LOCUS method is therefore of help.

What is a ROOT LOCUS?
It is the plot of the LOCI of the closed-loop poles as a function of the open-loop gain K, as K is varied from −∞ to +∞.
When K is varied from zero to +∞, it is the direct root locus.
When K is varied from −∞ to zero, it is the inverse (complementary) root locus.


CONSTRUCTION OF A ROOT LOCUS
There are 8 rules for constructing a ROOT LOCUS, and they must be followed in order to reach a conclusion.
RULE 1: Symmetry
• The root locus is always symmetrical about the x-axis (the real axis).
• The roots are either real, or complex-conjugate pairs, or both.
• Hence the poles and zeros of the open-loop transfer function are found first. Say,
G(s) = (s + 1) / [(s² + 4s + 5)(s + 3)]
The poles are given by
s + 3 = 0, i.e. s = −3
s² + 4s + 5 = 0, i.e. s = −2 + j and s = −2 − j
and the zero is given by
s + 1 = 0, i.e. s = −1
RULE 2: Number of LOCI
• Let the number of poles be n and the number of zeros be m.
• If n > m, then the total number of loci = n.
• Each locus starts from a pole and ends at a zero.
• So, the number of loci that start from a pole and end at infinity is n − m.
E.g. in the previous case (n = 3, m = 1):
total number of loci = 3, and the number of loci starting from a pole and ending at infinity is 3 − 1 = 2.
RULE 3: Real-axis loci
• Some of the loci will lie on the real axis.
• A point on the real axis lies on the root locus if and only if the total number of real poles and zeros to its right is odd.


RULE 4: Angle of asymptotes
• Usually the number of poles exceeds the number of zeros, i.e. n > m.
• Hence n − m branches move to infinity along asymptotes.
• Asymptote: a straight line that a branch of the root locus approaches as it tends to infinity.
• The angles of the asymptotes are given by
θ = (2q + 1)·180° / (n − m)   for K > 0
θ = 2q·180° / (n − m)         for K < 0
where q = 0, 1, 2, …, (n − m − 1).
E.g., using the same example as before, q can take the values 0 and 1, n − m = 2,
so θ = 90°, 270°.
RULE 5: Centre of asymptotes (centroid)
• Knowledge of the asymptote angles alone is insufficient; the location of the asymptotes in the s-plane is also needed.
• The point where the asymptotes intersect the real axis is known as the centroid.
• Centroid = [sum(real parts of poles) − sum(real parts of zeros)] / (n − m)
E.g., in the above example,
G(s) = (s + 1) / [(s² + 4s + 5)(s + 3)]
sum(real parts of poles) = −2 + (−2) + (−3) = −7
sum(real parts of zeros) = −1
n − m = 2
• Centroid = (−7 − (−1)) / 2 = −3
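As a quick numerical cross-check of Rules 2, 4 and 5 (a hedged sketch, not part of the original notes), the poles, zeros, centroid and asymptote angles of the example system can be computed in MATLAB:
% Example from Rules 1-5: G(s) = (s+1)/((s^2+4s+5)(s+3))
numG = [1 1];
denG = conv([1 4 5], [1 3]);
p = roots(denG);                            % open-loop poles: -3, -2+j, -2-j
z = roots(numG);                            % open-loop zero: -1
n = length(p);  m = length(z);
centroid = (sum(real(p)) - sum(real(z)))/(n - m)   % expected -3
q = 0:(n - m - 1);
asymptote_angles = (2*q + 1)*180/(n - m)           % expected 90 and 270 degrees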
RULE 6: Breakaway and break-in points
• A break-in point is a point where the root locus enters the real axis.
• A breakaway point is a point where the root locus leaves the real axis.
• Breakaway and break-in points are points on the real axis at which multiple roots of the characteristic equation occur.
• Observation: if two adjacent poles lie on the real axis and the real axis between them is part of the root locus, then at least one breakaway point exists between the two poles.
• From 1 + G(s)H(s) = 0 we obtain K as a function of s; setting dK/ds = 0 gives the candidate breakaway and break-in points.
E.g. G(s)H(s) = K / (s² + 4s + 5). Hence
s² + 4s + 5 + K = 0, so K = −(s² + 4s + 5)
dK/ds = −(2s + 4) = 0
hence, s = −2.
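The same calculation can be sketched in MATLAB (a hedged example) by treating K(s) = −(s² + 4s + 5) as a polynomial and differentiating it with polyder:
% K(s) = -(s^2 + 4s + 5); the breakaway candidates satisfy dK/ds = 0
Kpoly = -[1 4 5];                           % coefficients of K as a polynomial in s
candidates = roots(polyder(Kpoly))          % expected: s = -2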
RULE 7: Intersection of the root locus with the jω axis
• To calculate the intersection with the jω axis, proceed as follows:
1. Construct 1 + G(s)H(s) = 0.
2. Develop the Routh array in terms of K.
3. Find the value of K for which the Routh array contains a row of zeros.
4. Form the auxiliary equation from the coefficients of the row above the row of zeros.
5. Substitute the K value found in step 3 into the auxiliary equation, equate it to zero and solve for s. The roots give the intersection points.
E.g., say G(s)H(s) = K / [s(s + 1)(s + 3)].
The characteristic equation is s³ + 4s² + 3s + K = 0. The Routh array is
s³ : 1          3
s² : 4          K
s¹ : (12 − K)/4
s⁰ : K
Hence 12 − K = 0, i.e. K = 12. Substituting K = 12 into the auxiliary equation 4s² + K = 0,
hence we get s = ±j√3, which are the intersection points.
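A hedged MATLAB check of this jω-axis crossing: substituting K = 12 into the characteristic polynomial, its roots should include the pair at ±j√3 ≈ ±j1.732.
% Characteristic equation s^3 + 4s^2 + 3s + K = 0 with K = 12
K = 12;
r = roots([1 4 3 K])                        % expected: -4 and approximately +-1.732i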
RULE 8: Angle of departure and angle of arrival
• The root locus leaves a complex pole at an angle known as the departure angle, given by
θd = 180° + arg[G(s)H(s)]
• The root locus arrives at a complex zero at an angle known as the arrival angle, given by
θa = 180° − arg[G(s)H(s)]
where arg[G(s)H(s)] is the angle of G(s)H(s) evaluated at the complex pole (or zero) in question, excluding the contribution of that pole (or zero) itself.
Steps of a ROOT LOCUS method:

1. Determine the number of branches and the number of loci ending at infinity, using Rules 1 and 2.

2. Plot the open-loop poles and zeros.

3. Mark the real-axis segments of the locus (dark lines) using Rule 3.

4. Find the asymptotes and their angles using Rule 4.

5. Determine the centre of the asymptotes using Rule 5 and draw the asymptotes found in steps 4 and 5.

6. Calculate the break-in or breakaway points using Rule 6.

7. Calculate the angles of departure and arrival using Rule 8.

8. Determine the jω-axis crossover using Rule 7 if the root locus crosses the imaginary axis.
Root Locus
A locus is defined as a set of all points satisfying a set of
conditions. The term root refers to the roots of the
characteristic equation, which are the poles of the closed-
loop transfer function. These poles define the time response
of the system and hence its performance and stability.
The root locus is therefore a graph of the poles of the
closed-loop transfer function as a system parameter, such
as the gain, is varied. Evans' root-locus method, or simply
the root-locus method, gives all closed-loop poles graphically,
using the knowledge provided by the open-loop poles and
open-loop zeros. A root-locus plot is composed of as many
individual loci as there are poles; the individual loci are
referred to as branches of the root locus.
Root Locus
Closed-loop poles
Plotting the root locus of a transfer function
Choosing a value of K from root locus
Closed-loop response
Key Matlab commands used in this
tutorial: cloop, rlocfind, rlocus, sgrid, step
A plant to be controlled is described by the transfer function
G(s) = (s + 5) / (s² + 7s + 25)
Obtain the root locus plot using MATLAB.
Solution.
>> %MATLAB Program
>> clf
>> num = [1 5];
>> den = [1 7 25];
>> rlocus(num, den);
Root-locus Method
• Based on characteristic eqn of closed-loop transfer
function
• Plot location of roots of this eqn
– Same as poles of closed-loop transfer function
– Parameter (gain) varied from 0 to ∞
• Multiple parameters are ok
– Vary one-by-one
– Plot a root “contour” (usually for 2-3 params)
• Quickly get approximate results
– Range of parameters that gives desired response
Closed-Loop Poles
The root locus of an (open-loop) transfer function H(s) is a
plot of the locations (locus) of all possible closed-loop
poles with proportional gain K and unity feedback.
The closed-loop transfer function is
T(s) = K H(s) / [1 + K H(s)]
and thus the poles of the closed-loop system are the values of s
such that 1 + K H(s) = 0.
If we write H(s) = b(s)/a(s), then this equation has the form
a(s) + K b(s) = 0
Let n = order of a(s) and m = order of b(s)
[the order of a polynomial is the highest power of s that
appears in it]. We will consider all positive values of K.
In the limit as K → 0, the poles of the closed-loop system
satisfy a(s) = 0, i.e. they are the poles of H(s).
In the limit as K → ∞, the poles of the closed-loop
system satisfy b(s) = 0, i.e. they are the zeros of H(s).
• No matter what we pick K to be, the closed-loop
system must always have n poles, where n is the
number of poles of H(s).
• The root locus must have n branches; each branch
starts at a pole of H(s) and goes to a zero of H(s).
• If H(s) has more poles than zeros (as is often the
case), m < n and we say that H(s) has zeros at
infinity. In this case, the limit of H(s) as s → ∞ is zero.
• The number of zeros at infinity is n − m, the number
of poles minus the number of zeros, and is the
number of branches of the root locus that go to
infinity (asymptotes).
• Since the root locus is actually the locations of all
possible closed-loop poles, from the root locus we
can select a gain such that our closed-loop system
will perform the way we want.
• If any of the selected poles are in the right half-
plane, the closed-loop system will be unstable.
• The poles that are closest to the imaginary axis
have the greatest influence on the closed-loop
response, so even though the system has three or
four poles, it may still act like a second- or even
first-order system depending on the location(s) of
the dominant pole(s).
Plotting the root locus of a transfer function
Consider an open-loop system which has the transfer function
H(s) = (s + 7) / [s(s + 5)(s + 15)(s + 20)]
How do we design a feedback controller for the system by using
the root locus method?
Say our design criteria are 5% overshoot and 1 second rise time.
Make a Matlab file called rl.m. Enter the transfer function, and
the command to plot the root locus:

num=[1 7];
den=conv(conv([1 0],[1 5]),conv([1 15],[1 20]));
rlocus(num,den)
axis([-22 3 -15 15])
Closed-Loop Poles
The closed-loop transfer function is
T(s) = K H(s) / [1 + K H(s)]
and thus the poles of the closed-loop system are the values of s such that
1 + K H(s) = 0, i.e.
1 + K(s + 7) / [s(s + 5)(s + 15)(s + 20)] = 0
or  s(s + 5)(s + 15)(s + 20) + K(s + 7) = 0
Choosing a value of K from the root locus
The plot shows all possible closed-loop pole locations for a
pure proportional controller. Obviously not all of those closed-
loop poles will satisfy our design criteria.

To determine what
part of the locus is
acceptable, we can use
the command
sgrid(Zeta,Wn) to plot
lines of constant
damping ratio and
natural frequency.
Choosing a value of K from the root locus
Its two arguments are the
damping ratio (Zeta) and
natural frequency (Wn)
[these may be vectors if you want to look at a range
of acceptable values].
In our problem, we need an overshoot of less than 5%
(which means a damping ratio Zeta > 0.7) and a rise
time of 1 second (which means a natural frequency
Wn > 1.8).
Enter in the Matlab command window:
zeta=0.7; Wn=1.8; sgrid(zeta, Wn)
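The key-command list above also mentions rlocfind and step; a hedged continuation of rl.m (a sketch only, the clicked gain is not a value from these notes) that picks a gain from the acceptable region and checks the closed-loop step response could be:
% Continue rl.m: pick a gain from the plot and verify the step response
num = [1 7];
den = conv(conv([1 0],[1 5]), conv([1 15],[1 20]));
sys = tf(num, den);
rlocus(sys), sgrid(0.7, 1.8), axis([-22 3 -15 15])
[K, poles] = rlocfind(sys)                  % click a point inside the acceptable region
sys_cl = feedback(K*sys, 1);                % unity-feedback closed loop (cloop is obsolete)
step(sys_cl)                                % check overshoot and rise time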
Rise Time: defined as the 10%-to-90% time from the
former setpoint to the new setpoint. For a second-order
step response with no zeros, both the quadratic
approximation for the normalized rise time and the exact
expression give the rise time as a function of the damping
ratio ζ and the natural frequency ωn of the network; a
widely used rule of thumb is tr ≈ 1.8/ωn, which is why the
1-second rise-time requirement above translates into
ωn > 1.8 rad/s.
ROOT-LOCUS ANALYSIS

• The root locus is the locus of the roots of the
characteristic equation of the closed-loop system
as the gain is varied from zero to infinity.

818
[Block diagram: R(s) → G(s) → C(s), with H(s) in the feedback path.]
For the system shown, the closed-loop transfer function is
C(s)/R(s) = G(s) / [1 + G(s)H(s)]
The characteristic equation of the closed-loop system is
1 + G(s)H(s) = 0
819
• i.e., G(s)H(s) = −1     (1)
• Here we assume that G(s)H(s) is a ratio of
polynomials in s.
• Since s is a complex quantity, G(s)H(s) is also
a complex quantity.
• Equation (1) can therefore be split into two conditions
by equating the angles and magnitudes of both sides:
• Angle condition: ∠G(s)H(s) = ±180°(2k + 1), k = 0, 1, 2, …
• Magnitude condition: |G(s)H(s)| = 1
820
• The values of s that fulfill both the angle and
magnitude conditions are the roots of the
characteristic equation.
• A plot of the points in the complex plane
satisfying the angle condition alone is the
root locus.

821
General Rules for constructing
ROOT LOCI
• 1. Locate the poles and zeros of G(s)H(s)
• on the s plane.
• The root locus branches start from
open-loop poles and terminate at zeros
• (finite zeros or zeros at infinity)
• The open-loop zeros are the zeros of
G(s)H(s)

822
• The number of branches of the root locus equals
the number of open-loop poles (n), since n is
generally greater than the number of
open-loop zeros (m).
• The number of individual root-locus
branches terminating at finite open-loop
zeros is equal to the number (m) of the
• open-loop zeros.
• The remaining n-m branches terminate at
• infinity along asymptotes.

823
• If we include poles and zeros at infinity, the
number of open-loop poles equals that of
open-loop zeros.

• Hence it is always true that , the root loci


start at the poles of G(s)H(s) and end at the
zeros of G(s)H(s), as K varies from zero to
infinity , where the poles & zeros include
both those at finite s plane and those at
infinity.

824
• 2. Determine the root loci on the real axis.
• Root loci on the real axis are determined
by poles and zeros lying on it.
• The complex-conjugate poles & zeros of
G(s)H(s) have no effect on the root loci on
the real axis.
• For constructing root loci on the real axis,
choose a test point on it.
• If the total number of real poles & real
zeros to the right of this test point is odd ,
• then this point lies on a root locus.

825
• 3. Determine the asymptotes of root loci
• If the test point s is located far from the
origin, then the angle of each complex
quantity may be considered the same.
• Therefore, the root loci for very large values
of s must be asymptotic to straight lines
whose angles are given by:
• Angles of asymptotes = ±180°(2k + 1) / (n − m)
• where k=0,1,2,…
• n = number of finite poles of G(s)H(s)
• m= number of finite zeros of G(s)H(s)
826
• The number of distinct asymptotes = n-m.

• k=0 corresponds to the asymptotes with


the smallest angle with the real axis.
• Although k assumes an infinite number of
values, as k is increased the angle repeats
itself.
• All the asymptotes intersect on the real axis.
• The point at which they do so is obtained as
follows.
• s = [(sum of poles) − (sum of zeros)] / (n − m)
827
4.Find the breakaway and break-in points.
Because of the conjugate symmetry of root
loci, the breakaway and break-in points
either lie on the real axis or occur in complex-conjugate
pairs.

If root locus lies between two open-loop poles


then there exists at least one breakaway point
between two poles.

Similarly if root locus lies between two open-loop zeros


then there exists at least one break-in point
between two zeros.

828
If root locus lies between an open-loop pole
and a zero (finite or infinite) on the real axis,
then there may exist no breakaway or break-in
points or there may exist both breakaway or
break-in points.

829
Suppose that the characteristic equation is
given by
B(s) + K A(s) =0
• Breakaway and break-in points correspond
• to multiple roots of the characteristic
• equation.
• Hence they can be determined from the
roots of
• dK/ds = 0     (1)

830
• It is important to note that Breakaway and
break-in points must be roots of
Equation(1), but not all roots of Equation(1)
are breakaway and break-in points .
• If a real root of Equation(1) lies on the root
locus portion of the real axis, then it is an
actual breakaway and break-in point.
• If a real root of Equation(1) is not on the
root locus portion of the real axis, then this
root is neither a breakaway nor a break-in
point.
831
• If two roots of Equation (1) are a complex-
conjugate pair and if it is not certain whether
they are on root loci, then it is necessary to
check the corresponding K value.
• If the value of K is positive, then that point is
an actual a breakaway or a break-in point.
• If the value of K is negative, then that point is
neither a breakaway nor a break-in point.

832
• 5. Determine the angle of departure (angle of
arrival) of the root locus from a complex pole
(at a complex zero).
• To sketch the root loci with accuracy, we
must find the direction of root loci near the
complex poles & zeros.
• Angle of departure from a complex pole =
1800 - (sum of the angles of vectors to the
complex pole in question from other poles)+
(sum of the angles of vectors to the complex
pole in question from zeros)

833
• Angle of arrival at a complex zero = 1800 -
(sum of the angles of vectors to the complex
zero in question from other zeros)+ (sum of
the angles of vectors to the complex zero in
question from poles).

834
• 6. Find the points where the root loci may
cross the imaginary axis
• The points where the root loci may cross the
jω axis can be found by
(a) Use of Routh’s stability criterion or
(b) Letting s= jω in characteristic equation,
equating both the real part and imaginary part to
zero, and solving for ω & K.
• The values of ω thus found give the
frequencies at which root loci cross the
imaginary axis.
• The K value gives the gain at the crossing
point.
835
• 7. Taking a series of test points in the
broad neighborhood of the origin of the s
plane, sketch the root loci.
• Determine root loci in the broad
neighborhood of the jω axis & the origin.
• The most important part of the root loci is
on neither the real axis nor the
asymptotes but the part in the broad
neighborhood of the jω axis & the origin.
• The shape of the root loci in this important
region in the s plane must be obtained
with sufficient accuracy.
836
1. Consider the unity-feedback system shown in the figure, with forward transfer function K/[s(s + 1)(s + 2)].
Draw the root locus and determine the range of K for stability.
[Block diagram: R(s) → K/(s(s+1)(s+2)) → C(s), with unity feedback.]
837
• For the given system, G(s) = K / [s(s + 1)(s + 2)] and H(s) = 1.
• The angle condition becomes
∠G(s)H(s) = −∠s − ∠(s + 1) − ∠(s + 2) = ±180°(2k + 1), (k = 0, 1, 2, …)
• The magnitude condition is
|G(s)H(s)| = K / (|s| |s + 1| |s + 2|) = 1
838
• Step 1. Locate the poles & zeros of G(s)H(s)
on the s plane.
• The poles are s=0, -1, -2.
• There are no zeros.

[Pole-zero plot: poles (×) at s = 0, −1 and −2 on the real axis of the s-plane; no zeros.]
• The location of poles are indicated by
crosses.
• (The location of zeros,if any, are indicated by
small circles)
• The number of branches of root loci = n-m
• =3-0
• =3
• Where,
• n = number of finite poles of G(s)H(s) =3
• m= number of finite zeros of G(s)H(s) =0

840
• Step2. Determine the root loci on the real
axis.
• To determine the root loci on the real axis,
we select test point s on the –ve real axis
between 0 and -1.
• The number of poles & zeros to the right of
the test point is 1(odd number).
• Hence this test point lies on the root loci.
• Thus, root loci exists on the –ve real axis
between 0 & -1

841
• If we select a test point between -1 & -2,
the number of poles to the right of the
test point is an even number (2).
• Hence this test point is not on the root
loci.
• Thus, no root locus exists on the negative real
axis between −1 and −2.
• If we select a test point to the left of −2,
the number of poles to the right of the
test point is an odd number (3).

842
• Hence this test point is on the root locus.
• Thus, the root locus exists on the negative real
axis between −2 and −∞.
• Step 3. Determine the asymptotes of the
root loci.
• Angles of asymptotes = ±180°(2k + 1) / (n − m)
• where k=0,1,2,…
• n=3
• m= 0

843
• Angles of asymptotes = ±180°(2k + 1) / 3
• Number of asymptotes = n − m = 3
• When k = 0, angles of asymptotes = +60°, −60°.
• When k = 1, angles of asymptotes = +180°, −180°.


• Point of intersection of asymptotes
= [(sum of poles) − (sum of zeros)] / (n − m)
= [(0 − 1 − 2) − 0] / 3 = −1
844
• The asymptotes are drawn as shown.
• Step 4: Determine the breakaway point.
• The breakaway points can be determined from the roots of dK/ds = 0.
• The characteristic equation is 1 + G(s)H(s) = 0, i.e.
1 + K / [s(s + 1)(s + 2)] = 0
or  K = −(s³ + 3s² + 2s)
• Setting dK/ds = 0, we obtain
dK/ds = −(3s² + 6s + 2) = 0
or  s = −0.4226,  s = −1.5774
845
• Since the break away point must lie on a
root locus between 0 & -1, it is clear that
s= -0.4226 corresponds to the actual
break away point.
• Point s= -1.5774 is not on the root locus.
• Hence it is not an actual break away point.
• In fact evaluation of values of K
corresponding to s= -0.4226, s= -1.5774
yields
• K=0.3849 (+ve) for s= -0.4226
• K=- 0.3849 (-ve) for s= -1.5774
846
• Step : 5. Determine the points where the
root loci cross the imaginary axis
• Method 1
• By use of Routh’s stability criterion
• The characteristic equation is
1 + K / [s(s + 1)(s + 2)] = 0
i.e.  s³ + 3s² + 2s + K = 0
• The Routh’s array becomes

847
s³ : 1          2
s² : 3          K
s¹ : (6 − K)/3
s⁰ : K

• The value of K that makes the s¹ term in
the first column zero is K = 6.
• The crossing points can be found by
solving the auxiliary equation obtained from the s²
row, i.e.
3s² + K = 3s² + 6 = 0, which yields
848
• s = ±j√2
• The frequencies at the crossing points on
the imaginary axis are thus ω = ±√2.
• The gain value corresponding to the
crossing points is K=6
• Method 2
• Let s= jω in the characteristic eqn, equate
both the real and imaginary part to zero,
and solve for ω & K.
• Now, the characteristic eqn is
• s3+3s2+2s+K=0
849
• Let s = jω:
(jω)³ + 3(jω)² + 2(jω) + K = 0
or  (K − 3ω²) + j(2ω − ω³) = 0
• Equating both the real and imaginary parts
to zero, we obtain
K − 3ω² = 0,  2ω − ω³ = 0
from which ω = ±√2 and K = 6.

850
• The frequencies at the crossing points on the
imaginary axis are thus ω = ±√2.
• The gain value corresponding to the crossing
points is K=6
• Step No 6: Choose a test point in the broad
neighborhood of the jω axis and the origin and
apply the angle condition.
• If a test point is on the root loci, then the
• sum of three angles, θ1+ θ2+ θ3, must be 1800
• If the test point does not satisfy the angle
condition, select another test point until the
angle condition is satisfied.

851
• Continue this process and locate sufficient
number of points satisfying the angle
condition.

[Figure: a test point s in the upper half of the s-plane with the vectors s, s + 1 and s + 2 drawn from the poles at 0, −1 and −2, making angles θ1, θ2 and θ3 with the real axis.]
[Root-locus sketch for G(s) = K/(s(s+1)(s+2)): three branches start at the poles 0, −1 and −2; one branch runs from −2 to −∞ along the real axis, while the other two break away between 0 and −1 and follow the ±60° asymptotes (centred at −1), crossing the jω axis at ±j√2 where K = 6; K → ∞ along the asymptotes.]
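A hedged MATLAB sketch to reproduce this root locus and confirm the K = 6 crossing numerically:
% Example 1: G(s) = K/(s(s+1)(s+2)), unity feedback
G = tf(1, conv([1 0], conv([1 1], [1 2])));
rlocus(G)                                   % root-locus plot
K = 6;                                      % gain at the jw-axis crossing found above
r = roots([1 3 2 K])                        % closed-loop poles: expected -3 and +-j*sqrt(2)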
• Problem 2: Sketch the root locus plot of the system having
G(s) = K(s + 2)/(s² + 2s + 3),  H(s) = 1.
854
• Step 1. Locate the poles & zeros of G(s)H(s)
on the s plane.
• The poles are s=-1+j√2, -1-j√2
• The zero is s= -2.
[Pole-zero plot: poles (×) at s = −1 ± j√2, zero (○) at s = −2.]
• Step2. Determine the root loci on the real
axis.
• To determine the root loci on the real axis,
we select a test point s on the negative real axis
between 0 and −2.
• The number of poles and zeros to the right of
the test point is zero.
• Hence this test point does not lie on the
root locus.
• Thus, no root locus exists on the negative real
axis between 0 and −2.
856
• Taking a test point to the left of s= -2, we
find that the number of real poles & zeros
to the right of test point is odd (=1).
• Hence the root loci exist on the –ve real
axis from s=-2 to -∞ as shown

857
[Pole-zero plot repeated, with the root-locus segment marked on the real axis from s = −2 to −∞.]
• Step 3. Determine the asymptotes of the
root loci.
• Angles of asymptotes = ±180°(2k + 1) / (n − m), where k = 0, 1, 2, …
• n = number of poles = 2, m = number of zeros = 1
• Number of asymptotes = n − m = 1
• When k = 0, angle of asymptote = 180°.
859
• Step 4: Determine the angle of departure
from the complex conjugate poles.
• The presence of a pair of complex
conjugate poles demands determination of
the angle of departure from these poles.
• Knowledge of this angle is important since
the root locus near a complex pole yields
information as to whether the locus
originating from the complex poles
migrates towards real axis or extends
toward the asymptotes.

860
• Determination of angle of departure.

[Figure: vectors drawn to the pole at s = −1 + j√2 from the zero at −2 (angle 55°) and from the conjugate pole at −1 − j√2 (angle 90°).]
• Angle of departure from a complex pole =
1800 - (sum of the angles of vectors to the
complex pole in question from other
poles)+ (sum of the angles of vectors to
the complex pole in question from zeros)
• Angle of departure from the complex pole
at s = −1 + j√2 = 180° − 90° + 55° = 145°.
• Since the root locus is symmetric about
the real axis, the angle of departure from the
complex pole at s = −1 − j√2 = −145°.
862
• Step 5: Determine the break-in point.
• The break-in point can be found from the roots of dK/ds = 0.
• Since the characteristic equation is 1 + G(s)H(s) = 0, we have
1 + K(s + 2)/(s² + 2s + 3) = 0
• Hence, K = −(s² + 2s + 3)/(s + 2)
• and dK/ds = −[(2s + 2)(s + 2) − (s² + 2s + 3)] / (s + 2)² = 0
• which gives s² + 4s + 1 = 0
• 863
• i.e., s= -3.7320 or s= -0.2680
• Note that point s= -3.7320 is on the root
locus.
• Hence this point is an actual break-in point.
• Since s= -0.2680 is not on the root locus it is
not an actual break-in point.

864
• Step 6: Sketch the root-locus plot based on the
information obtained in the foregoing steps.
• To determine root loci accurately, several
points must be found using angle condition.

865
• Root locus plot

[Root-locus plot: the branches leave the complex poles −1 ± j√2 at ±145°, curve round and break in on the real axis at s = −3.73, then one branch runs to the zero at −2 and the other to −∞ along the real axis.]
• It is seen that root locus in the complex s
plane is a part of a circle.
• We can prove that root locus is circular in
shape by deriving the eqn for the root locus,
using the angle condition.
• For the present system, the angle condition
is
∠(s + 2) − ∠(s + 1 − j√2) − ∠(s + 1 + j√2) = ±180°(2k + 1)
• If s = σ + jω is substituted into this last equation, we obtain
∠(σ + 2 + jω) − ∠(σ + 1 + j(ω − √2)) − ∠(σ + 1 + j(ω + √2)) = ±180°(2k + 1)
• which can be written as
tan⁻¹[ω/(σ + 2)] − tan⁻¹[(ω − √2)/(σ + 1)] − tan⁻¹[(ω + √2)/(σ + 1)] = ±180°(2k + 1)
• Taking tangents of both sides and simplifying yields (σ + 2)² + ω² = 3, i.e. the
complex part of the locus is a circle centred at the zero (−2, 0) with radius √3.
868
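A hedged MATLAB sketch for Problem 2 that draws the locus and overlays the circle derived above, to confirm the circular shape visually:
% Problem 2: G(s) = K(s+2)/(s^2+2s+3), H(s) = 1
G = tf([1 2], [1 2 3]);
rlocus(G), hold on
t = linspace(0, 2*pi, 200);                 % circle (sigma+2)^2 + w^2 = 3
plot(-2 + sqrt(3)*cos(t), sqrt(3)*sin(t), '--')
axis equal, hold off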
• Sketch the root locus plot for the system having the open-loop transfer function
G(s)H(s) = K / [s(s + 4)(s² + 4s + 13)]
869
• Step 1. Locate the poles & zeros of
G(s)H(s) on the s plane.
• The open-loop poles are s = 0, s = −4, s = −2 + j3, s = −2 − j3.
• The number of open loop poles = 4
• The number of open loop zeros = 0
• Hence number of branches of root locus
• =4–0=4

870
[Pole-zero plot: poles (×) at s = 0, −4, −2 + j3 and −2 − j3; no zeros.]
871
• Step 2. Determine the root loci on the real
axis.
• Root loci on the real axis is determined by
poles and zeros lying on it.
• If we select any test point between 0 & -4,
we find that number of poles to the right
of test point is = 1 (odd)
• Hence the segment of real axis between
0 & -4 is on the root loci.

872
[Pole-zero plot repeated, with the root-locus segment marked on the real axis between 0 and −4.]
873
• Step 3. Determine the asymptotes of the
root loci.
• Angles of asymptotes = ±180°(2k + 1) / (n − m), where k = 0, 1, 2, …
• n = 4, m = 0
• Number of asymptotes = n − m = 4
• Angles of asymptotes = +45°, −45°, +135°, −135°
874
• Point of intersection of asymptotes on the real axis (the centroid)
= [(sum of poles) − (sum of zeros)] / (n − m)
= [(0 − 4 − 2 + j3 − 2 − j3) − 0] / 4 = −2
875
• Step 4: Determine the breakaway points.
• The breakaway points can be determined from the roots of dK/ds = 0.
• The characteristic equation is 1 + G(s)H(s) = 0, i.e.
1 + K / [s(s + 4)(s² + 4s + 13)] = 0
i.e.  s(s + 4)(s² + 4s + 13) + K = 0
876
• K = −(s⁴ + 8s³ + 29s² + 52s)
• dK/ds = −(4s³ + 24s² + 58s + 52)
• Putting dK/ds = 0 gives (4s³ + 24s² + 58s + 52) = 0.
The break away points are given by
roots of the above equation. One of the
roots is s = -2 .
The other roots are found as follows:

877
• Since (s + 2) is a factor, (4s³ + 24s² + 58s + 52) / (s + 2) = 4s² + 16s + 26,
and thus (s + 2)(4s² + 16s + 26) = 0.
• The two complex breakaway points are given by the roots of 4s² + 16s + 26 = 0,
i.e. s = −2 + j1.58, −2 − j1.58.

878
• Step 5: Determination of the angle of departure.
• The presence of a pair of complex-conjugate poles demands determination of the
angle of departure from these poles.
• Angle of departure from the pole at −2 + j3
= 180° − (φ1 + φ2 + φ3)
where φ1 = 180° − tan⁻¹(3/2) (from the pole at 0),
φ2 = tan⁻¹(3/2) (from the pole at −4),
φ3 = 90° (from the pole at −2 − j3).
879
[Figure: vectors drawn to the pole at −2 + j3 from the poles at 0, −4 and −2 − j3, defining the angles φ1, φ2 and φ3.]
880
• Hence the angle of departure from the pole at −2 + j3
= 180° − (φ1 + φ2 + φ3)
= 180° − (180° − tan⁻¹(3/2) + tan⁻¹(3/2) + 90°)
= −90°
• and the angle of departure from the pole at −2 − j3 = +90°.

881
• Step 6: Determine the points where the root loci cross the imaginary axis.
• By use of Routh's stability criterion:
• The characteristic equation is 1 + G(s)H(s) = 0, i.e.
1 + K / [s(s + 4)(s² + 4s + 13)] = 0
i.e.  s(s + 4)(s² + 4s + 13) + K = 0
i.e.  s⁴ + 8s³ + 29s² + 52s + K = 0

882
• Routh's array:
s⁴ : 1        29        K
s³ : 8        52
s² : 22.5     K
s¹ : 52 − (8/22.5)K = 52 − 0.356K
s⁰ : K
• For stability, 52 − 0.356K ≥ 0, or K ≤ 146.25.
• When K = 146.25, the root loci intersect the jω axis.
883
• The corresponding values of s are found from the
auxiliary equation for the s² row,
i.e. 22.5s² + K = 0, or 22.5s² + 146.25 = 0
giving s = +j2.55, −j2.55.

884
[Root-locus sketch for G(s)H(s) = K/(s(s+4)(s²+4s+13)): four branches start at the poles 0, −4 and −2 ± j3 and tend to infinity (K = ∞) along the ±45° and ±135° asymptotes centred at −2; the real-axis branches between 0 and −4 break away at s = −2, and the locus crosses the jω axis at approximately ±j2.55.]
885
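A hedged MATLAB check of this example: plot the locus and verify the limiting gain obtained from the Routh array.
% Example: G(s)H(s) = K/(s(s+4)(s^2+4s+13))
den = conv(conv([1 0], [1 4]), [1 4 13]);
G = tf(1, den);
rlocus(G)
K = 146.25;                                 % limiting gain from the Routh array above
r = roots([1 8 29 52 K])                    % expect a pair of roots near +-j2.55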
• Home Work

• Draw the root locus plot for the system having the open-loop transfer function
G(s)H(s) = K / [(s² + 2s + 2)(s² + 2s + 5)]
886
