
All rights on this document are reserved to P. Ramesh Babu (Asst. Professor)

Introduction to Neural Networks

Neural Networks:

 A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.
 Neural networks are also referred to as neurocomputers, connectionist networks, or parallel distributed processors.
 To achieve good performance, neural networks employ simple processing units, or cells, called "neurons".

Benefits of Neural Networks:

Non-Linearity:
 An artificial neuron can be linear or nonlinear. A neural network made up of an interconnection of nonlinear neurons is itself nonlinear.
 The nonlinearity is of a special kind, in the sense that it is distributed throughout the network.
 Nonlinearity is a highly important property, particularly if the underlying physical mechanism responsible for generation of the input signal is inherently nonlinear.
 An example is the speech signal.

I/O Mapping:
 The study of I/O mapping leads to nonparametric statistical inference, which is a branch of statistics dealing with model-free estimation or, from a biological viewpoint, tabula rasa learning.
 "Nonparametric" is used here to signify the fact that no prior assumptions are made on a statistical model for the input data.
 There is thus a close analogy between the I/O mapping performed by a neural network and nonparametric statistical inference.

Adaptivity:
 Neural networks have a natural capability to adapt their synaptic weights to changes in the surrounding environment, which makes them well suited to pattern classification, signal processing, and control applications.
 This adaptive capability makes the network a useful tool for adaptive pattern classification, adaptive signal processing, and adaptive control.
 Care must be taken to ensure that the system remains stable at all times.

Evidential Response:
 In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made.

Fault Tolerance:
 A neural network, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions.

VLSI Implementability:
 The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks.
 This same feature makes a neural network well suited for implementation using VLSI technology.

Uniformity of Analysis and Design:
This feature manifests itself in different ways:
 Neurons, in one form or another, represent an ingredient common to all neural networks.
 Modular networks can be built through a seamless integration of modules.
Neurobiological Analogy:
 The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful.
 Neurobiologists look to neural networks as a research tool for the interpretation of neurobiological phenomena; on the other hand, engineers look to neurobiology for new ideas to solve problems more complex than those based on conventional hard-wired design techniques.

General procedure to build neural networks:

1. Understand and specify your problem in terms of inputs and required outputs. Example: for classification, the outputs are usually represented as binary vectors.
2. Take the simplest form of network you think might be able to solve your problem. Example: a perceptron.
3. Try to find appropriate connection weights so that the network produces the right output for each input in its training data.
4. Make sure that the network works on its training data, and test its generalization by checking its performance on new testing data.
5. If the network still does not perform well enough, go back to step 3 and try harder.
6. If the network still does not perform well enough, go back to step 2 and try harder.
7. If the network still does not perform well enough, go back to step 1 and try harder.
8. Finally the problem will be solved. (A minimal sketch of this workflow follows.)
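A minimal sketch of this procedure in Python, using the perceptron from step 2; the toy OR-gate data, learning rate, and epoch count are illustrative assumptions, not part of the original notes.

```python
import numpy as np

# Step 1: specify the problem as inputs and required binary outputs (toy OR gate).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])

# Step 2: the simplest candidate network - a single perceptron.
w = np.zeros(X.shape[1])   # connection weights
b = 0.0                    # bias
lr = 0.1                   # learning rate (assumed)

# Step 3: find appropriate connection weights from the training data.
for epoch in range(100):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b >= 0 else 0   # threshold activation
        w += lr * (target - pred) * xi              # perceptron update rule
        b += lr * (target - pred)

# Step 4: verify on the training data; real use would also check
# generalization on separate, unseen testing data.
preds = [1 if np.dot(w, xi) + b >= 0 else 0 for xi in X]
print("predictions:", preds, "targets:", y.tolist())
```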

Advantages and Disadvantages of Neural Networks:-

Advantages:-

 Parallel processing.
 Distributed representation.
 Online algorithms (incremental algorithms).
 Simple computations.
 Robust with respect to noisy data.
 Robust with respect to missing data.
 Empirically shown to work well for many problem domains.

Disadvantages:-

 Slow training.
 Poor interpretability.
 Network topology layouts are ad hoc.
 Hard to debug, because distributed representations preclude context checking.
 May converge to a local rather than the global minimum of error.
 Not known how to model higher-level cognitive mechanisms.
 May be hard to describe a problem in terms of features with numerical values.

Human Brain:-
The human nervous system may be viewed as a three-stage system. Central to the system is the brain, represented by the "neural net", which continually receives information, perceives it, and makes appropriate decisions.

Stimulus → Receptors → Neural Net (Brain) → Effectors → Response

Fig:- Block diagram representation of the nervous system

As per the diagram, two sets of arrows are shown in the figure. Those pointing from left to right indicate the forward transmission of information-bearing signals through the system. The arrows from right to left indicate the presence of feedback in the system.
Receptors convert stimuli from the human body or the external environment into electrical impulses that convey information to the neural net. The effectors convert electrical impulses generated by the neural net into system responses.
The brain is the most significant part of the body, playing a vital role in everything we do. Human beings have a highly developed cerebrum among other processing units. The human brain is a most complicated thing and is not easy to understand. It is represented by the "neural net" or "nerves". Understanding the brain was made easier by the pioneering work of Ramón y Cajal (1911), who introduced the idea of neurons as structural constituents of the brain.

The human brain contains about 10 billion neurons. Each neuron connects to approximately 100 to 10,000 other neurons by transmitting electrochemical signals. There are over 100 different types of neurons, arranged in the functional and structural areas of the brain, with about 10% devoted to input and output and 90% to internal processing.

Properties of the Brain:-

 It can learn and reorganize itself from experience.
 It adapts to the environment.
 It is robust and fault tolerant.

Structural Organization Levels in the Brain:-

Central Nervous System
Inter-regional systems
Local circuits
Neurons
Dendritic trees
Neural microcircuits
Synapses
Molecules

Synapses:-
 These are elementary structural and functional units that mediate the interactions between neurons.
 A synapse depends on molecules and ions for its action.

Neural Microcircuits:-
 A neural microcircuit refers to an assembly of synapses organized into patterns of connectivity to produce a functional operation of interest.

Dendritic Trees:-
 The neural microcircuits are grouped to form dendritic subunits within the dendritic trees of individual neurons.

Neurons:-
 A neuron is an information-processing unit. The whole neuron, about 100 µm in size, contains several dendritic trees.

Local Circuits:-
 At the next level of complexity, neurons with similar or different properties perform operations characteristic of a localized region in the brain.

Inter-Regional Systems:-
 These are made up of pathways, columns, and topographic maps, which are located in different parts of the brain.

Central Nervous System:-
 At the final level of complexity, the inter-regional systems mediate all the behavior of the system.

Model of a Neuron:-
 A neuron is an information-processing unit that is fundamental to the operation of a neural network. Here we can identify three basic elements of the neuron model:
 A set of synapses, each of which is characterized by a weight of its own. Specifically, a signal x_j at the input of synapse j connected to neuron k is multiplied by the synaptic weight w_kj.
 An adder for summing the input signals, weighted by the respective synapses of the neuron.
 An activation function for limiting the amplitude of the output of a neuron.
 The activation function is also referred to as a "squashing function".

Fig:- Nonlinear model of a neuron: input signals x_1, x_2, ..., x_m are weighted by the synaptic weights w_k1, w_k2, ..., w_km and summed (Σ) to give u_k; the bias b_k is added to give the induced local field v_k, which passes through the activation function Ψ(·) to produce the output y_k.

In mathematical terms, we may describe a neuron k by writing the following equations:
u_k = Σ_{j=1}^{m} w_kj x_j
v_k = u_k + b_k
y_k = Ψ(v_k)
where x_1, x_2, ..., x_m are the input signals;
w_k1, w_k2, ..., w_km are the synaptic weights of neuron k;
u_k is the linear combiner output due to the input signals;
b_k is the bias;
Ψ(·) is the nonlinear activation function;
y_k is the output signal of the neuron;
v_k is the induced local field of the neuron.
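A small sketch of these three equations in Python; the input, weight, and bias values are illustrative assumptions, and the logistic sigmoid stands in for a generic activation function Ψ(·).

```python
import numpy as np

def neuron(x, w, b, psi):
    """Single neuron k: u_k = w.x, v_k = u_k + b_k, y_k = psi(v_k)."""
    u = np.dot(w, x)   # linear combiner output u_k
    v = u + b          # induced local field v_k (bias added)
    return psi(v)      # amplitude-limited output y_k

# Illustrative values (assumed, not from the notes).
x = np.array([0.5, -1.0, 2.0])    # input signals x_1..x_m
w = np.array([0.4, 0.3, -0.2])    # synaptic weights w_k1..w_km
b = 0.1                           # bias b_k

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
print(neuron(x, w, b, sigmoid))
```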

Types of Activation Function:-

 The activation function, denoted by Ψ(·), defines the output of the neuron in terms of the induced local field v. Here we identify three types of activation function.

Threshold Function:-
The equation for the threshold function is
Ψ(v) = 1 if v ≥ 0
       0 if v < 0

Piecewise-Linear Function:-
The equation for the piecewise-linear function is
Ψ(v) = 1 if v ≥ +1/2
       v if +1/2 > v > -1/2
       0 if v ≤ -1/2

Sigmoid Function:-
The sigmoid is an S-shaped, strictly increasing function. A common example is the logistic function
Ψ(v) = 1 / (1 + e^(-av))
where a is the slope parameter.
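The three activation functions sketched in Python, implemented exactly as defined above; the slope parameter a = 1 in the sigmoid and the sample inputs are illustrative assumptions.

```python
import numpy as np

def threshold(v):
    """Threshold function: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    """Piecewise-linear function: saturates at 1 and 0 outside (-1/2, +1/2)."""
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v   # linear region with unit amplification

def sigmoid(v, a=1.0):
    """Logistic sigmoid with slope parameter a."""
    return 1.0 / (1.0 + np.exp(-a * v))

for v in (-1.0, -0.25, 0.0, 0.25, 1.0):   # sample local-field values
    print(v, threshold(v), piecewise_linear(v), round(sigmoid(v), 3))
```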
Knowledge Representation:-

Knowledge refers to stored information used by a person or machine to interpret, predict, and appropriately respond to the outside world.
The primary characteristics of knowledge representation are two:
1. What information is actually made explicit.
2. How the information is physically encoded for subsequent use.
Knowledge representation is goal directed: in real-world applications of "intelligent machines", a good solution depends on a good representation of knowledge.
A major task for a neural network is to learn a model of the world in which it is embedded, and to maintain the model sufficiently consistent with the real world so as to achieve the specified goals of the application of interest. Knowledge of the world consists of two kinds of information:
1. The known world state, represented by facts about what is and what has been known; this form of knowledge is referred to as prior information.
2. Observations of the world, obtained by means of sensors designed to probe the environment in which the neural network is supposed to operate. Ordinarily these observations are inherently noisy, being subject to errors due to sensor noise and system imperfections. In any event, the observations so obtained provide the pool of information from which the examples used to train the neural network are drawn. The examples can be labeled or unlabeled.
The subject of knowledge representation inside an artificial network is very complicated; nevertheless, there are four rules for knowledge representation.

Rule 1:-

Similar inputs from similar classes should usually produce similar representations inside the network. There is a plethora of measures for determining the similarity between inputs.
A commonly used measure of similarity is based on the concept of Euclidean distance. To be specific, let x_i denote an m-by-1 vector
x_i = [x_i1, x_i2, ..., x_im]^T
where T denotes matrix transposition. The vector x_i represents a point in an m-dimensional space called Euclidean space, denoted by R^m. The Euclidean distance between a pair of m-by-1 vectors x_i and x_j is defined as
d(x_i, x_j) = ||x_i - x_j|| = [Σ_{k=1}^{m} (x_ik - x_jk)^2]^(1/2)
where x_ik and x_jk are the k-th elements of x_i and x_j respectively.
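A short sketch of this similarity measure in Python; the two sample vectors are illustrative assumptions.

```python
import numpy as np

def euclidean_distance(x_i, x_j):
    """d(x_i, x_j) = ||x_i - x_j|| = sqrt(sum_k (x_ik - x_jk)^2)."""
    return np.sqrt(np.sum((x_i - x_j) ** 2))

# Two illustrative m-by-1 vectors (m = 3 here); a small distance
# means the inputs are similar and should get similar representations.
x_i = np.array([1.0, 2.0, 3.0])
x_j = np.array([1.5, 2.0, 1.0])
print(euclidean_distance(x_i, x_j))
```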

Rule 2:-

Items to be categorized as separate classes should be given widely different representations in the network. The second rule is exactly the opposite of Rule 1.

Rule 3:-

If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network. Consider, for example, a radar application involving the detection of a target in the presence of clutter.
The detection performance of such a radar system is measured in terms of two probabilities:
1. probability of detection.
2. probability of false alarm.

Rule 4:-

Prior information and invariances should be built into the design of a neural network, thereby simplifying the network design by not having to learn them.
This rule is particularly important because proper adherence to it results in a neural network with a specialized (restricted) structure. This is highly desirable for several reasons:
1. Biological visual and auditory networks are known to be very specialized.
2. A neural network with a specialized structure usually has a smaller number of free parameters.
3. The rate of information transmission through a specialized network is accelerated.
4. The cost of building a specialized network is reduced because of its smaller size.

How to Build Prior Information into Neural Network Design:-

An important issue that has to be addressed, of course, is how to develop a specialized structure by building prior information into the network design.
We may use a combination of two techniques:
1. Restricting the network architecture through the use of local connections known as receptive fields.
2. Constraining the choice of synaptic weights through the use of weight sharing.
These two techniques, particularly the latter one, have a profitable side benefit: the number of free parameters in the network is reduced significantly.

Fig:- Combined use of receptive fields and weight sharing: a partially connected feedforward network with 10 source nodes x_1, ..., x_10, a hidden layer of four neurons (1-4), and an output layer of two neurons (5, 6) producing the outputs y_1 and y_2.

Consider the above partially connected feedforward network. This network has a restricted architecture by construction: the top six source nodes constitute the receptive field for hidden neuron 1, and so on for the other hidden neurons in the network. To satisfy the weight-sharing constraint, we merely have to use the same set of synaptic weights for each one of the neurons in the hidden layer of the network. As shown in the figure, with six local connections per hidden neuron and a total of four hidden neurons, we may express the induced local field of hidden neuron j as follows:
v_j = Σ_{i=1}^{6} w_i x_{i+j-1},   j = 1, 2, 3, 4

where {w_i, i = 1, ..., 6} constitute the same set of weights shared by all four hidden neurons, and x_k is the signal picked up from source node k. (A short sketch of this computation follows.)
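A minimal sketch of this receptive-field and weight-sharing computation in Python; the random source signals and shared weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)   # signals from the 10 source nodes (assumed)
w = rng.normal(size=6)    # one set of 6 weights shared by all hidden neurons

# v_j = sum_{i=1}^{6} w_i * x_{i+j-1} for j = 1..4 (1-based, as in the notes).
# Sliding the same weights across the input is effectively a 1-D convolution.
v = np.array([np.dot(w, x[j:j + 6]) for j in range(4)])
print(v)   # induced local fields of the 4 hidden neurons
```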

Note:-
The issue of building prior information into the design of neural networks pertains to one part of Rule 4; the remaining part of the rule involves the issue of invariance.

How to Build Invariance into Neural Network Design:-

There are three techniques to build invariance into a neural network design.
1. Invariance by Structure:-
Invariance may be imposed on a neural network by structuring its design appropriately. Specifically, synaptic connections between the neurons of the network are created so that transformed versions of the same input are forced to produce the same output.
Let w_ji be the synaptic weight of neuron j connected to pixel i in the input image. If w_ji = w_jk is enforced for all pixels i and k that lie at equal distances from the center of the image, the network output becomes invariant to in-plane rotations. A drawback is that the number of synaptic connections in the neural network becomes prohibitively large even for images of moderate size.
2. Invariance by Training:-
A neural network has a natural ability for pattern classification. This ability may be exploited directly to obtain transformation invariance as follows: the network is trained with a number of different examples of the same object, the examples being chosen to correspond to different transformations of the object.
3. Invariant Feature Space:-
The use of an invariant feature space, as described below, may offer the most suitable technique for a neural classifier.
To illustrate the idea of an invariant feature space, consider the example of a coherent radar system used for air surveillance, where the targets of interest include aircraft, weather systems, and ground objects. Experimental studies have shown that such radar signals can be modeled fairly closely as an autoregressive (AR) process of moderate order. An AR model is a special form of regressive model, defined for complex-valued data by
x(n) = Σ_{i=1}^{M} a_i x(n-i) + e(n)
where {a_i, i = 1, ..., M} are the AR coefficients, M is the model order, and e(n) is the error (noise). (A sketch of estimating these coefficients follows.)
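A small sketch of estimating the AR coefficients by least squares, so that the fitted {a_i} can serve as features for the classifier; the generated signal, the order M = 2, and the least-squares fit are illustrative assumptions.

```python
import numpy as np

def ar_coefficients(x, M):
    """Least-squares fit of x(n) = sum_{i=1}^{M} a_i * x(n-i) + e(n)."""
    A = np.array([x[n - M:n][::-1] for n in range(M, len(x))])  # lagged samples
    b = x[M:]                                                   # targets x(n)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a   # estimated AR coefficients {a_i}

# Illustrative signal: an AR(2) process with known coefficients.
rng = np.random.default_rng(0)
x = np.zeros(500)
for n in range(2, 500):
    x[n] = 0.6 * x[n - 1] - 0.2 * x[n - 2] + rng.normal(scale=0.1)

print(ar_coefficients(x, M=2))   # should be close to [0.6, -0.2]
```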

Input → Invariant feature extractor → Classifier → Class estimate

Fig:- Block diagram of an invariant-feature-space classification system



Architecture of Neural Networks:-

In general we may identify different classes of network architectures (structures). We have two types of networks:
1. Feedforward neural networks.
2. Feedback neural networks.

Feedforward Neural Networks:-

Feedforward networks are divided into two sub-types:
1. Simple feedforward neural networks.
2. Multilayer feedforward neural networks.
A feedforward neural network allows signals to travel one way only, from input to output; it contains no feedback loops. The output of any layer does not affect that same layer. Feedforward networks tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. (A forward-pass sketch follows the figures below.)

Fig:- Simple feedforward network (input layer → hidden layer → output layer).



Fig:- Multilayer feedforward network (input layer → hidden layers → output layer).
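A minimal forward pass through a multilayer feedforward network in Python; the layer sizes, random weights, and sigmoid activation are illustrative assumptions.

```python
import numpy as np

def forward(x, layers):
    """Signals travel one way only: input -> hidden -> output."""
    for W, b in layers:
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # sigmoid at each layer
    return x

rng = np.random.default_rng(0)
# Assumed sizes: 3 inputs -> 4 hidden neurons -> 2 outputs.
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
print(forward(np.array([0.5, -0.1, 0.3]), layers))
```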

Feedback Neural Networks:-

Feedback networks can have signals traveling in both directions by introducing loops into the network. Feedback networks are extremely powerful and can get extremely complicated. Feedback networks are dynamic: their state changes continuously until they reach an equilibrium point, and they remain at that equilibrium point until the input changes and a new equilibrium needs to be found. Feedback networks are also referred to as "recurrent networks", although the latter term is often used to denote feedback connections in single-layer organizations. (A minimal state-update sketch follows the figure below.)
Fig:- Feedback network (input layer, hidden layer, and output layer with feedback loops).
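A minimal sketch of a feedback network iterating its state toward an equilibrium point; the (deliberately small) recurrent weights, the constant input, and the convergence tolerance are illustrative assumptions, and convergence is not guaranteed for arbitrary weights.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.4 * rng.normal(size=(3, 3))   # recurrent (feedback) weights, kept small
u = np.array([0.2, -0.5, 0.1])      # constant external input

# Iterate the state until it stops changing - an equilibrium point.
s = np.zeros(3)
for step in range(1000):
    s_new = np.tanh(W @ s + u)      # state update through the feedback loops
    if np.max(np.abs(s_new - s)) < 1e-9:
        break
    s = s_new
print(step, s)   # equilibrium state; a new input would trigger a new search
```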



Artificial Intelligence and Neural Networks:-

The goal of artificial intelligence is the development of algorithms that enable machines to perform cognitive tasks at which humans are currently better. An artificial intelligence system must be capable of doing three things:
1. Store knowledge.
2. Apply the stored knowledge to solve problems.
3. Acquire new knowledge through experience.
An artificial intelligence system has three components:
1. Representation.
2. Learning.
3. Reasoning.


Representation:-
It is of two types:
1. Declarative representation.
2. Procedural representation.
1) Declarative Representation:-
The knowledge is represented as a static collection of facts, with a small set of general procedures used to manipulate the facts.
2) Procedural Representation:-
The knowledge is embedded in executable code that acts out the meaning of the knowledge. Both kinds of knowledge, declarative and procedural, are needed in most problem domains of interest.
Reasoning:-
Reasoning is the ability to solve problems. For a system to qualify as a reasoning system it must satisfy certain conditions:
1. The system must be able to express and solve a broad range of problems and problem types.
2. The system must be able to make explicit any implicit information known to it.
3. The system must have a control mechanism.
In many situations encountered in practice, the available knowledge is incomplete or inexact. In such situations, probabilistic reasoning procedures are used.

Learning:-
In a simple model of machine learning, the environment supplies information to a learning element. The learning element uses this information to make improvements in a knowledge base, and finally a performance element uses the knowledge base to perform its task.

Environment → Learning Element → Knowledge Base → Performance Element

Fig:- Simple model of machine learning

Neural Networks Viewed as Directed Graphs:-

In directed graphs, signal flow is what matters; we can simplify the description of a network by using signal-flow graphs. Signal-flow graphs, with a well-defined set of rules, were originally developed by Mason. A signal-flow graph is a network of directed links (branches) that are interconnected at certain points called nodes.
The flow of signals in the various parts of the graph obeys three basic rules.
Rule 1:-
A signal flows along a link only in the direction defined by the arrow on the link.
There are two types of links:
1. Synaptic Link:-
Its behavior is governed by a linear input-output relation: the node signal x_j is multiplied by the synaptic weight w_kj to produce the node signal y_k.

x_j --(w_kj)--> y_k = w_kj x_j
2. Activation Link:-
Its behavior is governed in general by a nonlinear input-output relation, where the nonlinear function is Ψ(·).

x_j --(Ψ(·))--> Ψ(x_j)

Rule 2:-
A node signal equals the algebraic sum of all signals entering the node via its incoming links. This is known as the fan-in rule. For example, two incoming signals y_i and y_j combine at a node as

y_k = y_i + y_j

Rule 3:-
The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links. This is known as the fan-out rule: a node signal x_j appears unchanged on each of its outgoing links. (A small sketch applying these rules follows.)
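A tiny sketch of applying these rules to evaluate part of a signal-flow graph in Python; the graph layout, weights, and tanh activation are illustrative assumptions.

```python
import numpy as np

# Rule 3 (fan-out): the node signal x_j feeds every outgoing link unchanged.
x_j = 0.8
weights = [0.5, -1.2, 2.0]            # synaptic weights on three links

# Rule 1 (synaptic links): each link scales its signal by its weight.
incoming = [w * x_j for w in weights]

# Rule 2 (fan-in): the receiving node sums all incoming signals.
v_k = sum(incoming)

# An activation link then applies the nonlinear function psi(.).
y_k = np.tanh(v_k)
print(v_k, y_k)
```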

A neural network is a directed graph consisting of nodes interconnected by synaptic and activation links, and is characterized by four properties:
1) Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly nonlinear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.
2) The synaptic links of a neuron weight their respective input signals.
3) The weighted sum of the input signals defines the induced local field of the neuron in question.
4) The activation link squashes the induced local field of the neuron to produce an output.
Directed graphs are of two types:
1. Complete directed graph:-
A directed graph is said to be complete if it describes not only the signal flow from neuron to neuron but also the signal flow inside each neuron.
2. Partially complete directed graph:-
When we use a reduced form of the graph by omitting the details of signal flow inside the individual neurons, the directed graph is said to be partially complete.

Fig:- Signal-flow graph of a neuron: inputs x_1, ..., x_m weighted by w_k1, ..., w_km, together with a fixed input x_0 = +1 weighted by w_k0 = b_k (the bias), are summed and passed through Ψ(·) to produce the output y_k.

Prepared by: V. Baltha Reddy
Roll No: 06A61A0557 (CSE)
Sri Prakash College of Engineering

Guidance by: P. Ramesh Babu (Assistant Professor)
Department of C.S.E.
Sri Prakash College of Engineering
E-Mail: rameshbabu_kb@yahoo.co.in
