Introduction


What is Control?
When we use the word control in everyday life, we are referring to the act of producing a desired result.
By this broad definition, control is seen to cover all artificial processes.
The temperature inside a refrigerator is controlled by a thermostat.

The picture we see on the television is a result of a controlled beam of electrons made to scan the television screen in a selected pattern.

A compact-disc player focuses a fine laser beam at the desired spot on the
rotating compact-disc in order to produce the desired music.

While driving a car, the driver is controlling the speed and direction of the car so as to reach the destination quickly, without hitting anything on the way.
A system is a set of self-contained processes under study.

A control system consists of two parts: the system to be controlled, called the plant, and the system which exercises control over the plant, called the controller.

(Block diagram: Input → Controller → Control signal → Plant → Response)

A control system may be either open loop or closed loop.
A control system is said to be deterministic when the set of physical laws governing the system is such that if the state of the system at some time (called the initial conditions) and the input are specified, then one can precisely predict the state at a later time.
A stochastic (also called probabilistic) system has governing laws such that, although the initial conditions (i.e. the state of the system at some time) are known in every detail, it is impossible to precisely determine the system's state at a later time.

A system is called chaotic if even a small change in the initial conditions produces
an arbitrarily large change in the system's state at a later time.
When we analyze and design control systems, we try to express their governing
physical laws by differential equations.

Depending upon whether the differential equations used to describe a control system are linear or nonlinear in nature, we can call the system either linear or nonlinear.
A control system whose description requires partial differential equations is called a distributed parameter system, whereas a system requiring only ordinary differential equations is called a lumped parameter system.
For a general lumped-parameter SISO system (Figure 1.1) with input u(t) and output y(t), the governing ordinary differential equation can be written as

y^(n)(t) = f(y^(n-1)(t), ..., y^(1)(t), y(t), u^(m)(t), u^(m-1)(t), ..., u^(1)(t), u(t), t)

Let us assume that the initial conditions are zero, and we apply an input, u(t), which is a linear combination of two different inputs, u1(t) and u2(t), given by

u(t) = C1 u1(t) + C2 u2(t)

If the resulting output, y(t), can be written as

y(t) = C1 y1(t) + C2 y2(t)

where y1(t) and y2(t) are the outputs produced by u1(t) and u2(t) acting alone,

then the system is said to be linear, and its linear differential equation can be written as

a_n y^(n)(t) + a_(n-1) y^(n-1)(t) + ... + a_1 y^(1)(t) + a_0 y(t) = b_m u^(m)(t) + b_(m-1) u^(m-1)(t) + ... + b_1 u^(1)(t) + b_0 u(t)
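As a quick numerical illustration of this superposition property, the following sketch (assuming Python with NumPy and SciPy is available, and using an arbitrarily chosen first-order system y^(1)(t) + 2 y(t) = u(t) that is not from the text) simulates the responses to two inputs separately and to their linear combination, and confirms that the outputs combine in the same way:

# Superposition check for a linear system with zero initial conditions.
# The system 1/(s + 2), the inputs and the constants C1, C2 are examples only.
import numpy as np
from scipy.signal import lti, lsim

sys = lti([1.0], [1.0, 2.0])            # transfer function 1/(s + 2)
t = np.linspace(0.0, 5.0, 500)

u1 = np.sin(2.0 * t)                    # first input
u2 = np.ones_like(t)                    # second input (a unit step)
C1, C2 = 3.0, -1.5                      # arbitrary constants

_, y1, _ = lsim(sys, u1, t)             # response to u1 alone
_, y2, _ = lsim(sys, u2, t)             # response to u2 alone
_, y12, _ = lsim(sys, C1 * u1 + C2 * u2, t)   # response to the combined input

# For a linear system the two results coincide (up to numerical round-off).
print(np.max(np.abs(y12 - (C1 * y1 + C2 * y2))))

Repeating the same check on a nonlinear model, such as the pendulum of Example 1.2 below, would produce a clearly nonzero difference.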
Example 1.1 For the electrical network shown in Figure 1.2, the governing differential equations are the following:
Example 1.2
Consider a simple pendulum (Figure 1.3) consisting of a point mass, m, suspended from a hinge at point O by a rigid massless link of length L.

The equation of motion of the simple pendulum in the absence of an externally applied torque about point O, in terms of the angular displacement, θ(t), can be written as

L θ^(2)(t) + g sin(θ(t)) = 0    (1)

This governing equation indicates a second-order system. Due to the presence of the sin(θ) term, the system is nonlinear.
From our everyday experience with a simple pendulum, it is clear that it can be
brought to rest at only two positions, namely θ = 0 and θ = π rad. (180°). Therefore,
these two positions are the equilibrium points of the system.
Let us examine the behavior of the system near each of these equilibrium points.
Expanding sin(θ) about the equilibrium point θ = 0, we get the following Taylor series expansion:

sin(θ) = θ − θ³/3! + θ⁵/5! − θ⁷/7! + …
If we assume that the motion of the pendulum about θ = 0 consists of small angular displacements (say θ < 10°), then sin(θ) ≈ θ and Eq. (1) becomes

L θ^(2)(t) + g θ(t) = 0
Similarly, expanding sin(θ) about the other equilibrium point, θ = π, by assuming a small angular displacement, ϕ, such that θ = π − ϕ, and noting that sin(θ) = sin(ϕ) ≈ ϕ while θ^(2) = −ϕ^(2), we can write Eq. (1) as

L ϕ^(2)(t) − g ϕ(t) = 0
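To see how closely the small-angle model tracks the full nonlinear pendulum, the following sketch (assuming Python with NumPy and SciPy is available, and using the example values g = 9.81 m/s² and L = 1 m) integrates Eq. (1) and its linearization about θ = 0 from the same small initial displacement:

# Nonlinear pendulum of Eq. (1) versus its linearization about theta = 0.
# g and L are example values; the state is x = [theta, theta_dot].
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0

def nonlinear(t, x):
    return [x[1], -(g / L) * np.sin(x[0])]   # L*theta'' + g*sin(theta) = 0

def linearized(t, x):
    return [x[1], -(g / L) * x[0]]           # L*theta'' + g*theta = 0

x0 = [np.deg2rad(5.0), 0.0]                  # small initial angle, released from rest
t_eval = np.linspace(0.0, 10.0, 1000)

sol_nl = solve_ivp(nonlinear, (0.0, 10.0), x0, t_eval=t_eval)
sol_lin = solve_ivp(linearized, (0.0, 10.0), x0, t_eval=t_eval)

# Maximum angular error of the linear model over 10 s.
print(np.max(np.abs(sol_nl.y[0] - sol_lin.y[0])))

For a 5° initial displacement the two trajectories remain nearly indistinguishable; starting from a large angle (say 90°) makes the linear model drift visibly from the true motion.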
Laplace Transform and the Transfer Function

For a general input, u(t), the Laplace transform (denoted by ℒ) is defined as

U(s) = ℒ{u(t)} = ∫_0^∞ u(t) e^(−st) dt    (2)

where s denotes the Laplace variable (a complex number), and U(s) is called the
Laplace transform of u(t). The Laplace transform of a function u(t) is defined only if
the infinite integral in Eq. (2) exists, and converges to a functional form, U(s).
However, if U(s) exists, then it is unique.
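The defining integral in Eq. (2) can be checked symbolically. The sketch below (assuming Python with SymPy is available, and using the example signal u(t) = e^(−at) with a > 0, chosen only for illustration) evaluates the integral directly and compares it with SymPy's built-in transform:

# Laplace transform of u(t) = exp(-a*t): defining integral versus sympy.laplace_transform.
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

U_direct = sp.integrate(sp.exp(-a * t) * sp.exp(-s * t), (t, 0, sp.oo))   # Eq. (2) directly
U_builtin = sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)      # built-in transform

print(sp.simplify(U_direct))    # 1/(a + s)
print(sp.simplify(U_builtin))   # 1/(a + s)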

Some important properties of the Laplace transform are stated below:

(a) Linearity: If a is a constant (or independent of s and t) and ℒ{f(t)} = F(s), then

ℒ{a f(t)} = a F(s)    (3)

Also, if ℒ{f1(t)} = F1(s) and ℒ{f2(t)} = F2(s), then

ℒ{f1(t) + f2(t)} = F1(s) + F2(s)    (4)
(b) Complex differentiation: If ℒ{f(t)} = F(s), then

ℒ{t f(t)} = −dF(s)/ds    (5)

(c) Complex integration: If ℒ{f(t)} = F(s), and if the limit of f(t)/t exists as t = 0 is approached from the positive side, then

ℒ{f(t)/t} = ∫_s^∞ F(σ) dσ    (6)

(d) Translation in time: If ℒ{f(t)} = F(s), and a is a positive, real number such that f(t − a) = 0 for 0 < t < a, then

ℒ{f(t − a)} = e^(−as) F(s)    (7)

(e) Translation in Laplace domain: If ℒ{f(t)} = F(s), and a is a complex number, then

ℒ{e^(at) f(t)} = F(s − a)    (8)

(f) Real differentiation: If ℒ{f(t)} = F(s), and if the derivative f^(1)(t) is Laplace transformable, then

ℒ{f^(1)(t)} = s F(s) − f(0⁺)    (9)

where f(0⁺) denotes the value of f(t) in the limit t → 0, approaching t = 0 from the positive side.
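The real-differentiation property, Eq. (9), is the one used later to transform the governing differential equation, so a quick symbolic check is worthwhile. The sketch below (assuming Python with SymPy is available, and using the example f(t) = cos(ωt)) verifies that the transform of f^(1)(t) equals sF(s) − f(0⁺):

# Check of the real-differentiation property: L{f'(t)} = s*F(s) - f(0+).
# f(t) = cos(omega*t) is an example signal only.
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

f = sp.cos(w * t)
F = sp.laplace_transform(f, t, s, noconds=True)                  # s/(s**2 + omega**2)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)    # transform of f'(t)
rhs = s * F - f.subs(t, 0)                                       # s*F(s) - f(0+)

print(sp.simplify(lhs - rhs))   # 0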
(g) Real integration: If ℒ{f(t)} = F(s), and the indefinite integral ∫ f(t) dt is Laplace transformable, then

ℒ{∫ f(t) dt} = F(s)/s + (1/s) ∫_(−∞)^0 f(t) dt    (10)

Note that the integral term on the right-hand side of Eq. (10) is zero if f(t) = 0 for t < 0.

(h) Initial value theorem: If f^(1)(t) is Laplace transformable, and lim_(s→∞) sF(s) exists, then

f(0⁺) = lim_(s→∞) sF(s)    (11)

(i) Final value theorem: If ℒ{f(t)} = F(s) is Laplace transformable, and the limit of f(t) as t → ∞ exists, then

f(∞) = lim_(s→0) sF(s)    (12)
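The initial and final value theorems, Eqs. (11) and (12), can likewise be checked on a concrete example. The sketch below (assuming Python with SymPy is available, and using the illustrative signal f(t) = (1 − e^(−3t))/3, chosen only because its limits are easy to see) compares the theorem values with the time-domain limits:

# Initial and final value theorems on the example f(t) = (1 - exp(-3*t))/3.
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = (1 - sp.exp(-3 * t)) / 3
F = sp.laplace_transform(f, t, s, noconds=True)        # 1/(s*(s + 3))

print(sp.limit(s * F, s, sp.oo), f.subs(t, 0))         # initial values: 0 and 0
print(sp.limit(s * F, s, 0), sp.limit(f, t, sp.oo))    # final values: 1/3 and 1/3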
For simplicity, we assume that all initial conditions for the input, u(t), and its derivatives and for the output, y(t), and its derivatives are zero. Then, using Eq. (9) we can transform the governing equation of the system (a linear differential equation) to the Laplace domain as follows:

(a_n s^n + a_(n-1) s^(n-1) + ... + a_1 s + a_0) Y(s) = (b_m s^m + b_(m-1) s^(m-1) + ... + b_1 s + b_0) U(s)

The above equation brings us to one of the most important concepts in control theory, namely the transfer function, G(s), which is defined as the ratio of the Laplace transform of the output, Y(s), to that of the input, U(s):

G(s) = Y(s)/U(s)

G(s) = (b_m s^m + b_(m-1) s^(m-1) + ... + b_1 s + b_0)/(a_n s^n + a_(n-1) s^(n-1) + ... + a_1 s + a_0)
The roots of the numerator and denominator polynomials of the transfer function, G(s), represent the characteristics of the linear, time-invariant system. The denominator polynomial of the transfer function, G(s), equated to zero is called the characteristic equation of the system:

a_n s^n + a_(n-1) s^(n-1) + ... + a_1 s + a_0 = 0

The roots of the characteristic equation are called the poles of the system. The roots of the numerator polynomial of G(s) equated to zero are called the zeros of the transfer function:

b_m s^m + b_(m-1) s^(m-1) + ... + b_1 s + b_0 = 0
In terms of its poles and zeros, a transfer function can be represented as a ratio of
factorized numerator and denominator polynomials, given by the following rational
expression:

G(s) = K (s − z1)(s − z2) ... (s − zm) / [(s − p1)(s − p2) ... (s − pn)]
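Numerically, the poles and zeros follow directly from the coefficients of the numerator and denominator polynomials. The sketch below (assuming Python with NumPy and SciPy is available, and using the example transfer function G(s) = (s + 2)/(s² + 3s + 2), which is not from the text) extracts them with scipy.signal.tf2zpk:

# Poles, zeros and gain of the example G(s) = (s + 2)/(s^2 + 3*s + 2).
import numpy as np
from scipy.signal import tf2zpk

num = [1.0, 2.0]          # coefficients of b_m*s^m + ... + b_0  (here: s + 2)
den = [1.0, 3.0, 2.0]     # coefficients of a_n*s^n + ... + a_0  (here: s^2 + 3*s + 2)

zeros, poles, gain = tf2zpk(num, den)
print(zeros)              # [-2.]
print(poles)              # [-2. -1.]
print(gain)               # 1.0

# Equivalently, the roots of the numerator and of the characteristic polynomial:
print(np.roots(num), np.roots(den))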
