
University Of Khartoum
Department Of Electronics & Electrical Engineering
Software & Control Engineering

EEE52511: NEURAL NETWORK & FUZZY SYSTEMS
By: Dr. Hiba Hassan
Lecture 1
9/1/2023 U of K: Dr. Hiba Hassan 2

Course Objectives
To understand neural networks and fuzzy logic theory.
To gain knowledge of neural networks and fuzzy
system development.
To familiarize students with various concepts,
hardware and software used in neural and fuzzy
systems analysis and design.
To apply the techniques for solving real-life problems
using neural networks and fuzzy systems.
If time allows, to introduce hybrid systems such as
neuro-fuzzy systems.

Syllabus
• Neural Networks:
• definition, similarity with the human brain,
• classifications,
• input/output sets, learning,
• single-layer and multilayer perceptrons,
• forward and backward propagation,
• design of an ANN model,
• training set for ANN, test for ANN,
• application of ANN in engineering.

Syllabus ( cont.)
• Fuzzy Logic:
• Fuzzy set theory,
• set theoretic operations,
• law of contradiction and law of Excluded Middle,
• fuzzy operation,
• reasoning and implication,
• fuzzy logic system applications.

References
• Hagan, M. T., Demuth, H. B., Beale, M. H., &
De Jesús, O. Neural Network Design (2nd
Edition).
• Jang, J.-S. R., Sun, C.-T., & Mizutani, E.
(1997). Neuro-fuzzy and soft computing:
A computational approach to learning
and machine intelligence. Upper Saddle
River, NJ: Prentice Hall.

History of ANN Research


• Major Leaps in ANN Research:
• McCulloch and Pitts … 1943 (1st Neuron Model)
• Donald Hebb …. 1949 (1st Learning Rule)
• Marvin Minsky …. 1951 (1st Neural Machine)
• Rosenblatt …. 1958 (Perceptron)

Introduction
• The term neural network comes from the
biological term neuron.
• Hence, an artificial neural network is a complex
information processing model that tries to imitate
the way a human brain functions.
• Its main objective is to find a suitable function that
maps given inputs to expected outputs.
• Hence, it is generally described as a function
approximator.

A Look into our Brain!


• Neurons are the core components of our nervous
system, and that includes the brain, spinal cord &
nerve cells.
• A typical neuron possesses a cell body (often
called the soma), dendrites, and an axon.
• Dendrites are thin structures that carry electrical
signals into the neuron body.
• An axon is a single long nerve fiber that carries
the signal from the neuron body to other neurons.

Cont.
• Synapses are specialized structures where
neurotransmitter chemicals are released to
communicate with target neurons.
• The cell body of a neuron frequently gives rise to
multiple dendrites, but only one axon.
• The axon may branch hundreds of times before it
terminates.

Cont.
• At the majority of synapses, signals are sent from
the axon of one neuron to a dendrite of another.
• But sometimes, exceptions may take place, such
as:
• neurons that lack dendrites,
• neurons that have no axon,
• synapses that connect an axon to another axon
or
• a dendrite to another dendrite, etc.

How the brain works!


• Each neuron receives inputs from other neurons.
• The effect of each input line on the neuron is controlled
by a synaptic weight.
• The weights can be positive or negative.
• The synaptic weights adapt so that the whole network
learns to perform useful computations:
• recognizing objects, understanding language,
making plans, controlling the body.
• Our brain has about 10^11 neurons, each with
approximately 10^4 connections.

Linking Biological NN to ANN



Back to Artificial Neural Networks


• Neural networks employ a huge interconnection of
simple computing cells (neurons or processing
units).
• The network acquires knowledge from its
environment through a process of learning,
carried out by a learning algorithm.
• Learning adjusts the interneuron connection
strengths, known as synaptic weights.

When should we use it?


• When to Consider using Neural Networks:
• if the input is high-dimensional, discrete or
real-valued (e.g. raw sensor input).
• if the output is discrete or real-valued.
• if the output is a vector of values.
• for possibly noisy data.
• when the form of target function is unknown.
• when human readability of result is unimportant.

Characteristics of NN
1) Learns from experience.
2) Generalizes from examples: Can interpolate from
previous learning and gives the correct response to new
data.
3) Rapid applications development: NNs are generic
machines and quite independent from domain
knowledge.
4) Adaptability: Adapts to a changing environment, if
properly designed.
5) Computational efficiency: Although the training of a
neural network demands a lot of computer power, a
trained network consumes low power.
6) Non-linearity: Not based on linear assumptions about the
real world.

A Model Neuron: Node or Unit

• An artificial neuron model is also called a node
or a unit; a neural network node is represented
as shown in the figure.
• net_i defines the net input to unit i and is
given by: net_i = ∑_j w_ij · y_j
• w_ij refers to the weight from unit j to unit i.
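The net-input formula above can be sketched in a few lines of Python. The weight and output values here are hypothetical, chosen only to illustrate the sum:

```python
# Sketch of the net-input formula: net_i = sum_j w_ij * y_j.
# w_i holds the weights from sending units j = 0, 1, 2 into unit i;
# y holds the outputs y_j of those sending units (hypothetical values).
w_i = [0.5, -0.3, 0.8]
y = [1.0, 2.0, 0.5]

net_i = sum(w * yj for w, yj in zip(w_i, y))  # 0.5*1.0 - 0.3*2.0 + 0.8*0.5
```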

The analogy between the Human and the Artificial Neural Networks:

Human       Artificial
Neuron      Processing Element
Dendrites   Combining Function
Cell Body   Transfer Function
Axons       Element Output
Synapses    Weights

Training a neural network



Some Applications of Artificial Neural Networks
• Classification
Marketing: consumer spending pattern.
Defence: radar and sonar image.
Agriculture & fishing: fruit and catch grading.
Medicine: medical diagnosis from ultrasound,
ECG, etc.
• Recognition and Identification
General Computing & Telecommunications: speech,
vision and handwriting recognition.
Finance: signature verification and bank note
verification

Cont.
• Assessment
Engineering: product inspection monitoring and control.
Defence: target tracking.
Security: motion detection, surveillance image analysis
and fingerprint matching.
• Forecasting and Prediction
Finance: foreign exchange rate and stock market
forecasting.
Agriculture: crop yield forecasting.
Marketing: sales forecasting.
Meteorology: weather prediction.

NEURAL NETWORK
ARCHITECTURE
An Overview

Architecture
• Neural networks are designed in one of these two
types:
• Feedforward: information flows in the forward
direction only, i.e. from the input to the
output.
• Recurrent, or feedback: at least one path leads
back to an earlier neuron; such a path is
called a cycle.

Feed-forward Neural Network


• The neurons are arranged in separate layers:
the input layer, the hidden layer and the
output layer.
• The hidden part may be a single layer or several
layers, in which case the network is called a
multi-layer feed-forward or a deep neural net.
• There are no connections between the neurons of
the same layer.
• The neurons in one layer receive inputs from the
previous layer.
• The neurons in one layer deliver their outputs to
the next layer.
• The connections are unidirectional.
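The layered, unidirectional flow described above can be sketched as a simple forward pass. The sigmoid activation and all weight values are assumptions for illustration, not taken from the slides:

```python
import math

def forward(layer_weights, x):
    """Forward pass through a feed-forward net (sketch).
    layer_weights[k] is a list of weight rows, one row per neuron in
    layer k; each layer takes its inputs only from the previous layer.
    A sigmoid activation is assumed for every neuron."""
    a = x
    for W in layer_weights:
        a = [1.0 / (1.0 + math.exp(-sum(w * ai for w, ai in zip(row, a))))
             for row in W]
    return a

# Hypothetical 2-3-1 network: 3 hidden neurons (2 inputs each),
# then 1 output neuron (3 inputs).
hidden = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
output = [[0.5, -0.5, 0.25]]
out = forward([hidden, output], [1.0, 0.5])
```

Note that each list comprehension touches only the previous layer's activations, which mirrors the "no connections within a layer" rule.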

3-8-8-2 Neural Network



An example of a general feedforward neural net.

A Recurrent Neural Network Example



Symmetrically connected networks


• These are like recurrent networks, but the
connections between units are symmetrical (they
have the same weight in both directions).
• John Hopfield (and others) realized that
symmetric networks are much easier to analyze
than recurrent networks.
• Symmetrically connected nets without hidden
units are called “Hopfield nets”.

Symmetrically connected networks with hidden units

• These are called “Boltzmann machines”.


• They are much more powerful models than
Hopfield nets.
• They are less powerful than recurrent neural
networks.
• They have a simple learning algorithm.

Simple Artificial Neuron



Working with Simple Artificial Neuron


• The node receives input from some other units, or
perhaps from an external source.
• Each input’s associated weight w can be modified
so as to model synaptic learning. The unit
computes some function f of the weighted sum of
its inputs:

y_i = f(net_i) = f(∑_j w_ij · y_j)

• Its output, in turn, can serve as input to other units.



Simple Artificial Neuron

• The weighted sum is called the net input to
unit i, hence it is often written as net_i.
• The function f is called the unit's activation
function. In the simplest case, f is the identity
function, and the unit's output is just its net input.
This is called a linear unit.
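A small sketch of the activation-function idea above: the unit applies f to its net input, and a linear unit is simply the case where f is the identity. The weight and input values are hypothetical:

```python
import math

def unit_output(weights, inputs, f=None):
    """Compute a unit's output: f applied to the net input.
    With f = None (identity), this is the linear unit described above."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return net if f is None else f(net)

# Linear unit: the output is just the net input, 0.5*2.0 - 0.25*4.0 = 0.0.
lin = unit_output([0.5, -0.25], [2.0, 4.0])

# Same net input passed through a sigmoid activation instead.
sig = unit_output([0.5, -0.25], [2.0, 4.0],
                  f=lambda n: 1.0 / (1.0 + math.exp(-n)))
```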

Simple neuron models, with and without bias



Simple neuron models, with and without bias (cont.)

• The previous slide shows two neuron models, one
with bias, b, and one without.
• The bias is like a weight, except that it has a
constant input of 1.
• Here, the input p is a scalar and the weight w is a
scalar as well, hence the product wp is a scalar.
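The two scalar neuron models can be sketched with one function; omitting b gives the no-bias model. The identity activation and the numeric values are assumptions for illustration:

```python
def neuron(p, w, b=0.0, f=lambda n: n):
    """Single-input neuron: a = f(w*p + b).
    With the default b = 0 this is the model without bias;
    f defaults to the identity activation."""
    return f(w * p + b)

a_no_bias = neuron(2.0, 0.5)       # 0.5 * 2.0       = 1.0
a_bias = neuron(2.0, 0.5, b=0.3)   # 0.5 * 2.0 + 0.3 = 1.3
```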

Cont.
• Suppose that the target is called t. If the output a
is different from t, then the weights are changed
according to the following equation:
w_i ← w_i + η(t − a) · x_i
• where η is the learning rate (a small positive constant).
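One application of this update rule can be sketched directly. The learning rate and the sample values below are hypothetical:

```python
def update(w, x, t, a, eta=0.1):
    """One step of the rule above: w_i <- w_i + eta * (t - a) * x_i.
    eta is the learning rate; t the target, a the actual output."""
    return [wi + eta * (t - a) * xi for wi, xi in zip(w, x)]

# Target t = 1 but output a = 0, so every weight moves up by eta * x_i.
w_new = update([0.2, -0.4], [1.0, 0.5], t=1, a=0)
```

When a already equals t, the factor (t − a) is zero and the weights are left unchanged, which matches the "if the output a is different from t" condition on the slide.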

Example
• Assuming p is input and t is target, develop a
perceptron that can solve the following problem

• Ans:
1. Graphical representation to check if the problem
is linearly separable.

Cont.
2. Develop the network architecture and choose
initial weights.

Solution (cont.)
3. Apply the learning rule: compute the output
a = f(∑_i w_i p_i + b).
4. Calculate error: e = t − a.
5. Then apply the weight update:
w_i ← w_i + η · e · x_i.