
Goal of Control Engineering

The goal of control engineering is to improve, or in some cases
enable, the performance of a system by the addition of sensors,
control processors, and actuators. The sensors measure or sense
various signals in the system and the operator's commands; the control
processors process the sensed signals and drive the actuators, which
affect the behavior of the system. A schematic diagram of a general
control system is shown in the figure below.
Because the sensor signals can affect the system to be controlled (via
the control processor and the actuators), the control system shown in
the figure is known as a closed-loop control system or feedback
system. In contrast, a control system that has no sensors, and therefore
generates actuator signals from command signals alone, is called an
open-loop system.
A robustness specification limits the changes in closed-loop system
performance that can be caused by changes in the system to be
controlled, or by differences between the system to be controlled and its
model.
A model of the plant/process can be developed in various ways, such as:
(a) Physical modeling
(b) Empirical modeling or identification
It must be understood, however, that there is never a perfect
mathematical model for the plant. The mathematical model is an
abstraction and hence cannot perfectly represent all possible
dynamics of any physical process.
Need for Intelligent Control
Conventional control methods like PID, the linear quadratic
regulator, optimal control, robust control, etc. require:
The mathematical model (often the differential equation, or
input-output data that helps in formulating the model of the
plant) of the process for designing the controller.
The techniques used to build conventional controllers put a
constraint on the model of the plant, i.e., the plant model has to
satisfy certain features like time invariance and linearity (or
only a very small amount of nonlinearity is allowed).
Hence, lower-order "design models" are also often
developed that may satisfy certain assumptions (e.g.,
linearity or the inclusion of only certain forms of
nonlinearities) yet still capture the essential plant behavior.
But for very complex real problems the task of
mathematical modeling becomes too difficult, and even if we
have the model it may not satisfy the linearity and
time-invariance constraints imposed by classical controller
design methods.
So the problems for which intelligent control is
particularly well suited, and where there is often very
good motivation to use intelligent control rather than
conventional control, are those where the plant has
complex nonlinear behavior, and where a model
is hard to derive due to inherent uncertainties.
Does that mean an intelligent controller doesn't require
a model of the plant?

Intelligent controllers like fuzzy and ANN controllers are designed to
control the process without requiring a model of the plant.
But it should be emphasized that if we have a formal
model (not necessarily the true model, as it is difficult to obtain
the true model of the system), then mathematical or
simulation analysis can help the operator/designer build
a rule base that is more detailed and hence more robust.
Intelligent Control
Intelligent control achieves automation via the emulation of biological intelligence. It either
seeks to replace a human who performs a control task (e.g., a chemical process operator) or
it borrows ideas from how biological systems solve problems and applies them to the
solution of control problems (e.g., the use of neural networks for control).

Intelligent control systems excel in areas that are highly nonlinear, where
classical control systems fail, or where a model of the system is difficult or
impossible to obtain.
Static and dynamical systems

Static Systems
• The output of a static system depends only upon the present
value of its input.
• A static system is also referred to as a memoryless system,
since its output response is not influenced by past values of the input.

Dynamical Systems
• Unlike a static system, the output of a dynamic system is
affected by the present input as well as past values of the input.
• Mathematically, a dynamic system can be expressed by the
so-called state-space description, which describes a system
by its state equation and output equation.
Dynamical systems continued….

• The state equation is often given as a first-order differential
equation:
x'(t) = f(t, x(t), u(t)),   x(t0) = x0
where x(t0) = x0 is the system state at the initial time t = t0,
• and the output equation is of the form
y(t) = g(t, x(t), u(t))
• If the functions f and g are nonlinear, then the system is
called a nonlinear dynamical system.
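The state-space description above can be simulated numerically. A minimal sketch using forward-Euler integration (the pendulum dynamics, step size, and initial condition are illustrative assumptions, not from the notes):

```python
import numpy as np

def simulate(f, g, x0, u, t0=0.0, dt=0.01, steps=1000):
    """Forward-Euler simulation of x'(t) = f(t, x(t), u(t)), y(t) = g(t, x(t), u(t))."""
    t, x = t0, np.asarray(x0, dtype=float)
    ys = []
    for _ in range(steps):
        ys.append(g(t, x, u(t)))
        x = x + dt * f(t, x, u(t))   # Euler step: x(t+dt) ~ x(t) + dt * x'(t)
        t += dt
    return np.array(ys)

# Illustrative nonlinear system: a pendulum with state x = [angle, angular velocity]
f = lambda t, x, u: np.array([x[1], -9.81 * np.sin(x[0]) + u])
g = lambda t, x, u: x[0]             # output y(t) = angle
y = simulate(f, g, x0=[0.5, 0.0], u=lambda t: 0.0)
```

Because sin(x) makes f nonlinear, this is a nonlinear dynamical system in the sense defined above.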
Why study nonlinear dynamical
systems
Linear control methods rely on the key assumption of small-range
operation for the linear model to be valid.
In nature all systems are basically nonlinear; they can be approximated
as linear only within a certain range of operation.
Another assumption of linear control is that the system model is indeed
linearizable. However, in control systems there are many nonlinearities
whose discontinuous nature does not allow linear approximation.
In designing linear controllers, it is usually necessary to assume that the
parameters of the system model are reasonably well known. However,
many control problems involve uncertainties in the model parameters.
Intelligent Control (IC)
Objective:
Mimic human (linguistic) reasoning
to make the system intelligent
Intelligence
Adaptability to situation
Main constituents:
- Fuzzy Controllers
- Artificial Neural Networks
- Hybrid Controllers
Intelligent Adaptive Control

What is Intelligent Control?
– Controls complex uncertain systems within
stringent (strict/precise) specifications.
Features
Ability to Learn: Ability to modify behavior when
condition changes
Ability to Adapt: Ability to handle uncertainty by
continuously estimating the relevant unknown
knowledge
Ability to deal with Complex Systems : Characterized by
nonlinear dynamics.
Autonomous in Nature: Ability to deal with uncertainty
all by itself without human intervention
Artificial Neural Networks
A new sort of computer

• What are (everyday) computer systems good
at... and not so good at?

Good at:
• Rule-based systems: doing what the programmer
wants them to do

Not so good at:
• Dealing with noisy data
• Dealing with unknown environment data
• Massive parallelism
• Fault tolerance
• Adapting to circumstances
Neural networks to the rescue

• Neural network: an information processing
paradigm inspired by biological nervous
systems, such as our brain
• Structure: a large number of highly
interconnected processing elements (neurons)
working together
• Like people, they learn from experience (by
example)
Neural networks to the rescue

• Neural networks are configured for a specific
application, such as pattern recognition or
data classification, through a learning process
• In a biological system, learning involves
adjustments to the synaptic connections
between neurons
Where can neural network systems help

• When we can't formulate an algorithmic
solution.
• When we can get lots of examples of the
behavior we require:
'learning from experience'
• When we need to pick out the structure from
existing data.
Characteristics of Neural Networks
(i) NNs exhibit mapping capabilities. They can map input patterns to their
associated output patterns.

(ii) NNs learn by examples. NN architectures can be 'trained' with known examples of a
problem before they are tested for their capability on unknown instances of the
problem.

(iii) NNs possess the capability to generalize. They can predict new outcomes from past
trends.

(iv) NNs are robust systems and are fault tolerant. They can, therefore, recall
patterns from incomplete, partial or noisy inputs.

(v) NNs can process information in parallel, at high speed, and in a distributed manner.
ADVANTAGES
1. ANNs are not programmed. They learn by example, with no requirement of
prior knowledge.

2. ANNs are very robust in nature, and can operate even if portions of the input data
are incorrect.

3. The network can see through noise and distortions to obtain the true essence of
the real-world environment being viewed.

4. ANNs are capable of making generalizations to reach new conclusions
based upon past experience.

5. ANNs can solve any problem involving the mapping of input-output data.
DISADVANTAGES
1. Learning takes time. Depending on the complexity of the ANN and the quantity of
classification data (i.e., training sets), the learning process can take hours, days or even
weeks.
2. Even though ANNs can see through noise and distortions most of the time, there
will be cases where the network is tricked or sees an "optical illusion".
3. Being good at making generalizations and reaching new conclusions
does not lead to being precise or logical. Consider the following problem:
2.14325 + 3.25617 = 5.39942
The ANN would conclude that adding the two numbers together will result in a
number that is probably very close to 5.4. Now look at a logical problem:
IF a = b AND b = c THEN a = c
The neural network would conclude that a is equal to a term that is probably very
close to c.
Where are NN used?

• Recognizing and matching complicated, vague, or
incomplete patterns
• Data is unreliable
• Problems with noisy data
– Prediction
– Classification
– Data association
– Data conceptualization
– Filtering
– Planning
– Modelling
– Control
Applications

• Prediction: learning from past experience
– pick the best stocks in the market
– predict weather
– identify people with cancer risk
• Classification
– Image processing
– Predict bankruptcy for credit card companies
– Risk assessment
Applications

• Recognition
– Pattern recognition: SNOOPE (bomb detector in
U.S. airports)
– Character recognition
– Handwriting: processing checks
• Data association
– Not only identify the characters that were scanned
but identify when the scanner is not working
properly
Applications

• Data Conceptualization
– infer grouping relationships
e.g. extract from a database the names of those most
likely to buy a particular product.
• Data Filtering
e.g. take the noise out of a telephone signal, signal
smoothing
• Planning
– Unknown environments
– Sensor data is noisy
– Fairly new approach to planning
Applications
• Modelling and Identification
• Direct and Indirect Adaptive Control
Strengths of a Neural Network

• Power: model complex functions; nonlinearity built
into the network
• Ease of use:
– Learn by example
– Very little user domain-specific expertise needed
• Intuitively appealing: based on a model of biology; will
it lead to genuinely intelligent computers/robots?

Neural networks cannot do anything that cannot be
done using traditional computing techniques, BUT
they can do some things which would otherwise be
very difficult.
Schematic view of the biological neuron
An individual neuron consists of the following
three parts:
● The dendrites are a receiving area for
information from other neurons;
● The cell body, called a soma, collects and
combines incoming information received from
other neurons; and
● The neuron transmits information to other
neurons through a single fiber called an axon.
Non Linear Model of A Neuron
Neural Network
Architectures
Single Layer Perceptron:

• First neural network with the ability to learn
• Made up of only input neurons and output neurons
• Input neurons typically have two states: ON and OFF
• Output neurons use a simple threshold activation function
• In basic form, can only solve linear problems
Neural Network
Architectures
Multi Layer Perceptron:
A network can have several layers. Each layer has a weight matrix W, a bias vector b,
and an output vector a. To distinguish between the weight matrices, output vectors, etc.,
for each of these layers in our figures, we append the number of the layer as a superscript
to the variable of interest. We can see the use of this layer notation in the three-layer
network shown below, and in the equations at the bottom of the figure.
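The layer notation above can be written directly in code. A small sketch of a three-layer forward pass (the layer sizes and the use of tanh in every layer are illustrative assumptions):

```python
import numpy as np

# Forward pass of a three-layer network: a^k = f^k(W^k a^(k-1) + b^k).
rng = np.random.default_rng(0)
sizes = [4, 5, 3, 2]                        # input dimension, then three layer widths (assumed)
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]
f = np.tanh                                 # one activation for all layers, for brevity

a = rng.standard_normal(sizes[0])           # network input p
for W, b in zip(Ws, bs):                    # layer k uses its own W^k and b^k
    a = f(W @ a + b)
print(a.shape)                              # the output vector of the last layer
```

Each layer's weight matrix W^k has one row per neuron in layer k and one column per output of layer k-1, which is exactly the superscript bookkeeping described above.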
Neural Network
Architectures

Multi Layer Perceptron
Neural Network
Architectures
Competitive Network:
The simplest competitive learning network consists of a single layer of output
units. Each output unit i in the network connects to all the input units via weights w_ij,
j = 1, 2, ..., n. Each output unit also connects to all other output units via inhibitory weights,
but has a self-feedback with an excitatory weight. As a result of competition, only the unit
i with the largest (or the smallest) net input becomes the winner. Only the weights of the
winner unit get updated. The effect of this learning rule is to move the stored pattern in
the winner unit (weights) a little bit closer to the input pattern.
Neural Network
Architectures
Kohonen’s SOM:
The self-organizing map (SOM) has the desirable property of topology
preservation, which captures an important aspect of the feature maps in the cortex of
highly developed animal brains. In a topology-preserving mapping, nearby input patterns
should activate nearby output units on the map. It basically consists of a two dimensional
array of units, each connected to all n input nodes. During competitive learning, all the
weight vectors associated with the winner and its neighboring units are updated.
Kohonen's SOM can be used for projection of multivariate data, density approximation,
and clustering. It has been successfully applied in the areas of speech recognition, image
processing, robotics, and process control.
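A minimal sketch of the SOM update described above (the map size, learning rate, and square neighborhood are illustrative assumptions): the winner and its neighboring units all move toward the input, which is what produces topology preservation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, rows, cols = 3, 5, 5                  # 2-D array of units, each connected to all inputs
W = rng.random((rows, cols, n_inputs))          # one weight vector per map unit

def som_step(W, x, lr=0.1, radius=1):
    d = np.linalg.norm(W - x, axis=2)           # distance of every unit's weights to the input
    wi, wj = np.unravel_index(np.argmin(d), d.shape)  # winner: closest weight vector
    for i in range(rows):
        for j in range(cols):
            if max(abs(i - wi), abs(j - wj)) <= radius:  # winner and its neighbors
                W[i, j] += lr * (x - W[i, j])   # move weights toward the input pattern
    return wi, wj

x = np.array([0.2, 0.7, 0.1])
before = np.linalg.norm(W - x, axis=2).min()
wi, wj = som_step(W, x)
after = np.linalg.norm(W[wi, wj] - x)
print(after < before)                           # True: the winner moved closer to the input
```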
Neural Network
Architectures
Hopfield Network:
Hopfield used a network energy function as a tool for designing recurrent
networks and for understanding their dynamic behavior. Hopfield's formulation made
explicit the principle of storing information as dynamically stable attractors and
popularized the use of recurrent networks for associative memory and for solving
combinatorial optimization problems.
A Hopfield network with n units has two versions: binary and continuously
valued. Let v_i be the state or output of the ith unit. For binary networks, v_i is either +1 or
-1, but for continuous networks, v_i can be any value between 0 and 1.
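A minimal sketch of a binary Hopfield network (the stored patterns and the outer-product weight rule are illustrative assumptions): stored patterns become dynamically stable attractors of the recall dynamics, as described above.

```python
import numpy as np

# Binary Hopfield network: store patterns with the outer-product rule, then
# recall by repeated thresholding of the net input until the state is stable.
patterns = np.array([[1, -1, 1, -1],
                     [-1, -1, 1, 1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                      # no self-connections

def recall(x, steps=10):
    x = np.asarray(x)
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):        # reached a stable attractor
            break
        x = x_new
    return x

# Each stored pattern is a fixed point of the dynamics:
print(recall(patterns[0]))                  # [ 1 -1  1 -1]
```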
Neural Network
Architectures
Adaptive Resonance Theory (ART) models:
How do we learn new things (plasticity) and yet retain the stability to ensure
that existing knowledge is not erased or corrupted? Carpenter and Grossberg's Adaptive
Resonance Theory models (ART1, ART2, and ARTMap) were developed in an attempt to
overcome this dilemma. The network has a sufficient supply of output units, but they are
not used until deemed necessary. A unit is said to be committed (uncommitted) if it is (is
not) being used. The learning algorithm updates the stored prototypes of a category only
if the input vector is sufficiently similar to them. An input vector and a stored prototype
are said to resonate when they are sufficiently similar.
Neural Network Learning
Process
Supervised Learning
Here, every input pattern that is used to train the network is associated with an
output pattern, which is the target or desired pattern. A teacher is assumed to be
present during the learning process. A comparison is made between the network's output
and the desired one to find the error.
An important issue concerning supervised learning is the problem of error
convergence, i.e., the minimization of error between the desired and computed unit
values. The aim is to determine a set of weights which minimizes the error. One
well-known method, which is common to many learning paradigms, is gradient descent.
Neural Network Learning
Process
Supervised Learning

1. Error Correction Learning:
The actual response y_k(n) of neuron k is different from the desired response
d_k(n). Hence, we may define an error signal as the difference between the target response
d_k(n) and the actual response y_k(n), as shown by
e_k(n) = d_k(n) - y_k(n)
The ultimate purpose of error-correction learning is to minimize a cost function
based on the error signal e_k(n), such that the actual response of each output neuron in
the network approaches the target response for that neuron in some statistical sense.
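A minimal sketch of error-correction learning for a single linear output neuron (the learning rate, input, and target are illustrative assumptions): the weights are nudged in proportion to the error signal e_k(n) until the response approaches the target.

```python
import numpy as np

eta = 0.1                                 # learning rate (assumed)
w = np.zeros(3)                           # weights of output neuron k

def step(w, x, d):
    y = w @ x                             # actual response y_k(n)
    e = d - y                             # error signal e_k(n) = d_k(n) - y_k(n)
    return w + eta * e * x                # error-correction weight update

x, d = np.array([1.0, 0.5, -1.0]), 2.0    # one training pair (assumed)
for _ in range(100):
    w = step(w, x, d)
print(w @ x)                              # converges toward the target 2.0
```

This update is exactly a gradient-descent step on the squared error (d - y)^2 / 2, connecting it to the gradient-descent remark above.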
Neural Network Learning
Process
Supervised Learning

2. Stochastic Learning:
In this method, weights are adjusted in a probabilistic fashion. An example is
evident in simulated annealing, the learning mechanism employed by Boltzmann and
Cauchy machines, which are a kind of NN system.
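A minimal sketch of probabilistic weight adjustment in the simulated-annealing style (the toy cost function, cooling schedule, and step size are illustrative assumptions): a candidate change is accepted with a temperature-dependent probability, so occasional uphill moves can escape local minima.

```python
import math
import random

random.seed(0)
w, T = 0.0, 1.0
cost = lambda w: (w - 2.0) ** 2            # toy cost with minimum at w = 2 (assumed)

for step in range(2000):
    w_new = w + random.gauss(0, 0.5)       # random candidate weight
    dE = cost(w_new) - cost(w)
    if dE < 0 or random.random() < math.exp(-dE / T):
        w = w_new                          # always accept downhill, sometimes uphill
    T *= 0.995                             # slowly cool the temperature

print(round(w, 2))                         # settles near the minimum at w = 2
```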
Neural Network Learning
Process
Unsupervised Learning
It uses no external teacher and is based upon only local information. It is also
referred to as self-organization, in the sense that it self-organizes data presented to the
network and detects their emergent collective properties. Paradigms of unsupervised
learning are Hebbian learning and competitive learning. A neural network learns off-line
if the learning phase and the operation phase are distinct. A neural network learns on-line
if it learns and operates at the same time. Usually, supervised learning is performed
off-line, whereas unsupervised learning is performed on-line.
Neural Network Learning
Process
Hebbian Learning Rule
For the Hebbian learning rule, the learning signal is simply equal to the
neuron's output. We have

r = f(w_iᵀx)

The increment Δw_i of the weight vector becomes

Δw_i = c f(w_iᵀx) x

The single weight w_ij is adapted using the following increment:

Δw_ij = c f(w_iᵀx) x_j

This can be written briefly as:

Δw_ij = c o_i x_j   for j = 1, 2, ..., n
Neural Network Learning
Process

[Figure: a single neuron with four inputs x1–x4, weights w1–w4, and output o; the
initial weight vector is w = [1, -1, 0, 0.5]ᵀ]
Neural Network Learning
Process
This example illustrates Hebbian learning with binary and continuous
activation functions of a very simple network. Assume the network shown in the figure above,
with the initial weight vector

w = [1, -1, 0, 0.5]ᵀ

needs to be trained using the set of three input vectors below:

x1 = [1, -2, 1.5, 0]ᵀ,  x2 = [1, -0.5, -2, -1.5]ᵀ,  x3 = [0, 1, -1, 1.5]ᵀ

for an arbitrary choice of learning constant c = 1. Since the initial weights are of nonzero
value, the network has apparently been trained beforehand. Assume that f(net) = sgn(net).
Neural Network Learning
Process
Step 1: Input x1 applied to the network results in activation net¹ as below:

net¹ = wᵀx1 = (1)(1) + (-1)(-2) + (0)(1.5) + (0.5)(0) = 3

The updated weights are
w_new = w_old + sgn(net¹) x1 = w + x1
and plugging in numerical values we obtain

w_new = [1, -1, 0, 0.5]ᵀ + [1, -2, 1.5, 0]ᵀ = [2, -3, 1.5, 0.5]ᵀ

Step 2: This learning step is with x2 as input:

net² = wᵀx2 = (2)(1) + (-3)(-0.5) + (1.5)(-2) + (0.5)(-1.5) = -0.25

The updated weights are
w_new = w_old + sgn(net²) x2 = w - x2
and plugging in numerical values we obtain

w_new = [2, -3, 1.5, 0.5]ᵀ - [1, -0.5, -2, -1.5]ᵀ = [1, -2.5, 3.5, 2]ᵀ

Step 3: This learning step is with x3 as input:

net³ = wᵀx3 = (1)(0) + (-2.5)(1) + (3.5)(-1) + (2)(1.5) = -3

The updated weights are
w_new = w_old + sgn(net³) x3 = w - x3
= [1, -2.5, 3.5, 2]ᵀ - [0, 1, -1, 1.5]ᵀ = [1, -3.5, 4.5, 0.5]ᵀ
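The three training steps of this worked example can be reproduced in a few lines (a sketch with c = 1 and f = sgn, as in the example):

```python
import numpy as np

c = 1.0
w = np.array([1.0, -1.0, 0.0, 0.5])            # initial weight vector
inputs = [np.array([1.0, -2.0, 1.5, 0.0]),     # x1
          np.array([1.0, -0.5, -2.0, -1.5]),   # x2
          np.array([0.0, 1.0, -1.0, 1.5])]     # x3

for x in inputs:
    net = w @ x                                # neuron activation net = w.T x
    w = w + c * np.sign(net) * x               # Hebbian increment: c * f(net) * x

print(w)                                       # [ 1.  -3.5  4.5  0.5]
```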
Neural Network Learning
Process
Competitive Learning
In this method, those neurons which respond strongly to the input stimuli have their
weights updated. When an input pattern is presented, all neurons in the layer compete
and the winning neuron undergoes weight adjustment. Hence, it is a winner-takes-all
strategy. Only a single output neuron is active at any one time. Let w_ji
denote the synaptic weight connecting input node i to neuron j.
Each neuron is allotted a fixed amount of weight, which is distributed among its input
nodes:

Σ_i w_ji = 1   for all j
Neural Network Learning
Process
A neuron learns by shifting synaptic weights from its inactive to its active input nodes. If a
neuron does not respond to a particular input pattern, no learning takes place in that
neuron. If a particular neuron wins the competition, then each input node of that neuron
relinquishes some proportion of its synaptic weight, and the weight relinquished is then
distributed equally among the active input nodes.
According to this rule:

Δw_ji = α(x_i / m − w_ji)   if neuron j wins the competition
Δw_ji = 0                   if neuron j loses the competition

where x_i is 1 if input node i is active (0 otherwise) and m is the number of active
input nodes.
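A winner-take-all sketch of competitive learning (the layer sizes, learning rate α, and random initial weights are illustrative assumptions): only the winning neuron shifts weight toward the active input nodes, and each neuron's total weight stays fixed.

```python
import numpy as np

alpha = 0.2
rng = np.random.default_rng(1)
W = rng.random((3, 6))                   # 3 output neurons, 6 input nodes
W /= W.sum(axis=1, keepdims=True)        # each neuron holds a fixed total weight of 1

def compete(W, x):
    j = np.argmax(W @ x)                 # winner: neuron responding most strongly
    m = x.sum()                          # number of active input nodes
    W[j] += alpha * (x / m - W[j])       # shift weight toward the active inputs
    return j

x = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])   # binary input pattern
winner = compete(W, x)
print(W.sum(axis=1))                     # each neuron's total weight is still 1
```

Note that the update preserves each row sum, since the amount relinquished by all input nodes equals the amount redistributed among the active ones.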
Pattern Recognition Example

A produce dealer has a warehouse that stores a variety of fruits
and vegetables. When fruit is brought to the warehouse, various
types of fruit may be mixed together. The dealer wants a machine
that will sort the fruit according to type. There is a conveyor belt
on which the fruit is loaded. This conveyor passes through a set of
sensors which measure three properties of the fruit: shape, texture
and weight.
The shape sensor will output 1 if the fruit is approximately round
and -1 if it is more elliptical.
The texture sensor will output 1 if the surface of the fruit is smooth and
-1 if it is rough.
The weight sensor will output 1 if the fruit weighs more than one pound and
-1 if it weighs less than one pound.
Single Layer Perceptron

Single-Neuron Perceptron

The three sensor outputs will be input to the neural network.
The purpose of the network is to decide which kind of fruit is on
the conveyor so that the fruit is directed to the correct storage bin.
Assume that there are only two kinds of fruit on the conveyor:
apples and oranges.

As each fruit passes through the sensors it can be represented by a
three-dimensional vector:

p = [shape, texture, weight]ᵀ

The prototype orange is represented by:

p1 = [1, -1, -1]ᵀ

The prototype apple is represented by:

p2 = [1, 1, -1]ᵀ

The neural network receives one three-dimensional input vector for
each fruit on the conveyor and must make a decision.
Solution

Using a single-neuron perceptron, the perceptron equation is:

a = hardlims(Wp + b)

We choose the bias b and the elements of the weight matrix W so that the
perceptron is able to distinguish between apples and oranges:
- Output is +1 when an apple is input
- Output is -1 when an orange is input
Let us take, for example:

W = [0 1 0],  b = 0

(the texture sensor alone separates the two prototypes).
Testing

a = hardlims([0 1 0][1, -1, -1]ᵀ + 0) = hardlims(-1) = -1   (Orange)

a = hardlims([0 1 0][1, 1, -1]ᵀ + 0) = hardlims(1) = +1   (Apple)
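The apple/orange perceptron can be checked directly in code. A sketch assuming the weights W = [0, 1, 0] and bias b = 0 (one choice that separates the two prototypes, since they differ only in texture):

```python
import numpy as np

W = np.array([0.0, 1.0, 0.0])            # weights for [shape, texture, weight]
b = 0.0

def hardlims(n):
    return 1 if n >= 0 else -1           # symmetric hard-limit activation

def classify(p):
    return "apple" if hardlims(W @ p + b) == 1 else "orange"

orange = np.array([1, -1, -1])           # round, rough, light
apple = np.array([1, 1, -1])             # round, smooth, light
print(classify(orange), classify(apple)) # orange apple
```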
Memory Networks
• These kinds of neural networks work on the basis of pattern
association, which means they can store different patterns
and, at the time of giving an output, produce one of
the stored patterns by matching it with the given input
pattern. These types of memories are also
called Content-Addressable Memory (CAM). Associative
memory makes a parallel search with the stored patterns as
data files.
The following are the two types of associative memories:
• Auto Associative Memory
• Hetero Associative Memory
Auto Associative Memory

• This is a single-layer neural network in which the
input training vector and the output target vectors
are the same. The weights are determined so that
the network stores a set of patterns.
• Example: Test the auto-associative network for the input pattern [-1 1 1 1]. Also test the
network for the same input vector with one missing entry and with two mistaken entries in the
test vector.

Solution: x = [-1 1 1 1]; since it is an auto-associative network, y = [-1 1 1 1].
Auto Associative Memory
1. Training Algorithm

w_ij(new) = w_ij(old) + x_i * y_j

These weights are fixed during testing.

2. Testing Algorithm

Calculate the net input to each output unit j = 1 to n:

y_inj = Σ_i x_i * w_ij

Apply the following activation function to calculate the output:

y_j = f(y_inj) = { +1 if y_inj > 0
                   -1 if y_inj ≤ 0 }
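The training and testing algorithms above can be sketched for the example pattern x = [-1, 1, 1, 1] (Hebbian outer-product training starting from zero weights is assumed):

```python
import numpy as np

x = np.array([-1, 1, 1, 1])
W = np.outer(x, x)                        # training: w_ij(new) = w_ij(old) + x_i * y_j, y = x

def recall(v):
    y_in = W @ np.asarray(v)              # net input y_inj = sum_i x_i * w_ij
    return np.where(y_in > 0, 1, -1)      # activation: +1 if y_inj > 0, else -1

print(recall([-1, 1, 1, 1]))              # [-1  1  1  1]: stored pattern recovered
print(recall([0, 1, 1, 1]))               # [-1  1  1  1]: recovered despite a missing entry
```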
Hetero-Associative Memory Networks

• Similar to the Auto Associative Memory network, this is also a
single-layer neural network. However, in this network the
input training vector and the output target vectors are not the
same. The weights are determined so that the network stores
a set of patterns.
