UNIT 1
Neural Networks:
Nonlinearity:
An artificial neuron can be linear or nonlinear. A neural network made up of an
interconnection of nonlinear neurons is itself nonlinear.
The nonlinearity is special in the sense that it is distributed throughout the
network.
Nonlinearity is a highly important property, particularly if the underlying physical
mechanism responsible for generating the input signal is inherently nonlinear.
An example is the speech signal.
I/O Mapping:
The study of the I/O mapping performed by a neural network is closely related to
nonparametric statistical inference, a branch of statistics dealing with model-free
estimation or, from a biological viewpoint, tabula rasa learning.
"Nonparametric" is used here to signify the fact that no prior assumptions are made
on a statistical model for the input data.
The I/O mapping performed by a neural network can therefore be viewed as a form of
nonparametric statistical inference.
Adaptivity:
Neural networks have a built-in capability to adapt their synaptic weights to
changes in the surrounding environment, which makes them a natural architecture for
pattern classification, signal processing, and control applications.
This adaptive capability makes the network a useful tool in adaptive pattern
classification, adaptive signal processing, and adaptive control.
The adaptation must ensure that the system remains stable at all times.
All rights on this document are reserved to P.Ramesh Babu (Asst.Professor)
Evidential Response:
In the context of pattern classification, a neural network can be designed to
provide information not only about which particular pattern to select, but also
about the confidence in the decision made.
Fault Tolerance:
A neural network, implemented in hardware form, has the potential to be inherently
fault tolerant, or capable of robust computation, in the sense that its performance
degrades gracefully under adverse operating conditions.
VLSI Implementability:
The massively parallel nature of a neural network makes it potentially fast for the
computation of certain tasks.
This same feature makes a neural network well suited for implementation using
VLSI technology.
Uniformity of Analysis and Design:
This feature manifests itself in different ways:
Neurons, in one form or another, represent an ingredient common to all neural
networks.
Modular networks can be built through a seamless integration of modules.
Neurobiological Analogy:
The design of a neural network is motivated by analogy with the brain, which is
living proof that fault-tolerant parallel processing is not only physically
possible but also fast and powerful.
Neurobiologists look to (artificial) neural networks as a research tool for the
interpretation of neurobiological phenomena; on the other hand, engineers look
to neurobiology for new ideas to solve problems more complex than those based on
conventional hard-wired design techniques.
Advantages:-
Parallel processing.
Distributed representation.
Online algorithms (incremental algorithms).
Simple computations.
Robust with respect to noisy data.
Robust with respect to node failures.
Empirically shown to work well for many problem domains.
Disadvantages:-
Slow training.
Poor interpretability.
Network topology layouts are ad hoc.
Hard to debug, because distributed representations preclude context checking.
May converge to a local rather than the global minimum of error.
Not known how to model higher-level cognitive mechanisms.
May be hard to describe a problem in terms of features with numerical values.
Human Brain:-
The human nervous system may be viewed as a three-stage system. Central to the
system is the brain, represented by the "neural net", which continually receives
information, perceives it, and makes appropriate decisions.
Fig: Block diagram of the nervous system: Stimulus → Receptors → Neural Net →
Effectors → Response, with feedback paths from right to left.
Two sets of arrows are shown in the figure. Those pointing from left to right
indicate the forward transmission of signals through the system; the arrows from
right to left indicate the presence of feedback in the system.
Receptors convert stimuli from the human body into electrical impulses that
convey information to the neural net. The effectors convert electrical impulses
generated by the neural net into system output.
The brain is present inside the head. It is the most significant part of the body
and plays a vital role in everything we do. Human beings have a highly developed
cerebrum among other processing units. The human brain is the most complicated
organ and is not easy to analyze.
Properties of Brain:-
Local circuits
Neurons
Dendrite Trees
Synapses
Molecules
Synapses:-
These are the elementary structural and functional units that mediate the
interactions between neurons.
A synapse operates through molecules and ions that trigger the next action.
Neurons:-
A neuron is an information-processing unit. The whole neuron, about 100 µm in
size, contains several dendritic trees.
Local Circuits:-
At the next level of complexity, neurons with similar or different properties
perform the operations of a localized region in the brain.
Model of Neuron:-
A neuron is an information-processing unit that is fundamental to the operation of
a neural network. We can identify three basic elements of the neuron model:
1. A set of synapses, each of which is characterized by a weight of its own.
Specifically, a signal xj at the input of synapse j connected to neuron k is
multiplied by the synaptic weight wkj.
2. An adder for summing the input signals, weighted by the respective synapses
of the neuron.
3. An activation function for limiting the amplitude of the output of the neuron.
The activation function is also referred to as a "squashing function".
Fig: Nonlinear model of a neuron: input signals x1, x2, …, xm are weighted by
wk1, wk2, …, wkm and summed (Σ), the bias bk is added to give vk, and the
activation function Ψ(·) produces the output yk.
In mathematical terms, we may describe neuron k by writing the following equations:
uk = Σ(j=1 to m) wkj xj
vk = uk + bk
yk = Ψ(vk)
where x1, x2, ……, xm are the input signals;
wk1, wk2, ……, wkm are the synaptic weights of neuron k;
uk is the linear combiner output due to the input signals;
bk is the bias;
Ψ(·) is the nonlinear activation function;
yk is the output signal of the neuron;
vk is the induced local field of the neuron.
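The three equations above can be sketched in plain Python; the input values, the weights, and the choice of a threshold activation are illustrative, not from the text.

```python
# Sketch of the neuron model: u_k (linear combiner), v_k (induced local field),
# y_k (output).  A threshold activation is an assumed, illustrative choice.
def neuron_output(x, w, b):
    u = sum(wj * xj for wj, xj in zip(w, x))  # u_k = sum_j w_kj * x_j
    v = u + b                                 # v_k = u_k + b_k
    return 1 if v >= 0 else 0                 # y_k = psi(v_k), threshold psi

print(neuron_output([1.0, -2.0, 0.5], [0.4, 0.1, 0.8], 0.3))  # prints 1
```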
Threshold Function:-
The equation for the threshold function is
Ψ(v) = 1 if v ≥ 0
       0 if v < 0
Piecewise Linear Function:-
The equation for the piecewise linear function is
Ψ(v) = 1 if v ≥ +1/2
       v if +1/2 > v > -1/2
       0 if v ≤ -1/2
Sigmoid Function:-
The sigmoid is an S-shaped, strictly increasing function. A common example is the
logistic function, with slope parameter a:
Ψ(v) = 1 / (1 + e^(-av))
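The activation functions above can be written as short Python functions; the logistic form of the sigmoid, with slope parameter a defaulting to 1, is an assumed choice for illustration.

```python
import math

def threshold(v):
    # psi(v) = 1 if v >= 0, else 0
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    # psi(v) = 1 for v >= +1/2, v in between, 0 for v <= -1/2
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v

def sigmoid(v, a=1.0):
    # logistic function psi(v) = 1 / (1 + e^(-a v)), slope parameter a
    return 1.0 / (1.0 + math.exp(-a * v))
```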
Knowledge Representation:-
Rule 1:-
Similar inputs from similar classes should usually produce similar representations
inside the network. There is a plethora of measures for determining the similarity
between inputs.
A commonly used measure of similarity is based on the concept of Euclidean distance.
To be specific, let xi denote an m-by-1 vector
xi = [xi1, xi2, xi3, ………… xim]T
where T denotes matrix transposition.
The vector xi defines a point in an m-dimensional space, called Euclidean space
and denoted by Rm.
The Euclidean distance between a pair of m-by-1 vectors xi and xj is defined as
d(xi, xj) = ||xi - xj|| = [Σ(k=1 to m) (xik - xjk)²]^(1/2)
where xik and xjk are the kth elements of xi and xj respectively.
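The distance formula in Rule 1 can be sketched directly in Python; the sample vectors are illustrative.

```python
import math

# Euclidean distance d(xi, xj) = ||xi - xj|| between two m-by-1 vectors,
# computed as the square root of the sum of squared element differences.
def euclidean_distance(xi, xj):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

print(euclidean_distance([1.0, 2.0], [4.0, 6.0]))  # prints 5.0
```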
Rule 2:-
Items to be categorized as separate classes should be given widely different
representations in the network.
Rule 3:-
If a particular feature is important, then there should be a large number of
neurons involved in its representation in the network.
Rule 4:-
Prior information and invariances should be built into the design of a neural
network, thereby simplifying the network design by not having to learn them.
This rule is particularly important because proper adherence to it results in a
neural network with a specialized (restricted) structure. This is highly desirable
for several reasons:
1. Biological visual and auditory networks are known to be very specialized.
2. A neural network with a specialized structure usually has a smaller number of
free parameters.
3. The rate of information transmission through a specialized network is accelerated.
4. The cost of building a specialized network is reduced because of its smaller size.
Fig:- Combined use of receptive fields and weight sharing: 10 source nodes
x1, …, x10 feed 4 hidden-layer neurons (1-4), which in turn feed 2 output-layer
neurons (5, 6) producing the outputs y1 and y2.
Here {wi}, i = 1, …, 6, constitute the same set of weights shared by all 4 hidden
neurons, and xk is the signal picked up from source node k.
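The weight-sharing scheme can be sketched as a convolution sum, vj = Σ(i=1 to 6) wi x(i+j-1), in which every hidden neuron applies the same six weights to a shifted window of inputs; the numeric values below are illustrative.

```python
# Weight sharing: all hidden neurons use the SAME weight vector w, each on a
# different window of the input x (a convolution sum over receptive fields).
def hidden_local_fields(x, w, n_hidden):
    m = len(w)
    # v_j uses inputs x[j], x[j+1], ..., x[j+m-1]  (0-based indexing)
    return [sum(w[i] * x[i + j] for i in range(m)) for j in range(n_hidden)]

x = [float(k) for k in range(10)]    # 10 source-node signals
w = [1.0] * 6                        # the shared receptive-field weights
print(hidden_local_fields(x, w, 4))  # prints [15.0, 21.0, 27.0, 33.0]
```

Sharing one weight set across all windows is exactly what reduces the number of free parameters, as noted under Rule 4.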
Note:-
The issue of building prior information into the design of neural networks pertains
to one part of Rule 4. The remaining part of the rule involves the issue of invariance.
Fig: Layered networks, each consisting of an input layer, a hidden layer, and an
output layer.
Feedback networks can have signals travelling in both directions, introduced by
loops in the network. Feedback networks are extremely powerful and can get
extremely complicated. Feedback networks are dynamic: their state changes
continuously until they reach an equilibrium point, where they remain until the
input changes and a new equilibrium needs to be found. Feedback networks are also
referred to as "recurrent networks", although the latter term is often used to
denote feedback connections in single-layer organizations.
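The settling behaviour described above can be sketched with a single neuron whose output is fed back as input; the weights w_in and w_fb are invented for the example (chosen so the loop is contractive and an equilibrium exists).

```python
# A one-unit feedback loop iterated to equilibrium: the state changes
# continuously until successive updates agree to within a tolerance.
def settle(x, w_in=0.5, w_fb=0.3, tol=1e-9, max_iter=1000):
    y = 0.0
    for _ in range(max_iter):
        y_new = w_in * x + w_fb * y   # output re-enters as input (feedback)
        if abs(y_new - y) < tol:
            return y_new              # equilibrium: state no longer changing
        y = y_new
    return y

print(round(settle(1.4), 6))  # prints 1.0  (fixed point y = 0.5*1.4 / (1-0.3))
```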
1. Representation
2. Learning
3. Reasoning
Representation:-
It is of two types:
1. Declarative representation.
2. Procedural representation.
1) Declarative Representation:-
The knowledge is represented as a static collection of facts, with a small set
of general procedures used to manipulate the facts.
2) Procedural Representation:-
The knowledge is embedded in executable code that acts out the meaning of the
knowledge. Both kinds of knowledge, declarative and procedural, are needed in
most problem domains of interest.
Reasoning:-
Reasoning is the ability to solve problems. For a system to qualify as a
reasoning system it must satisfy certain conditions:
1. The system must be able to express and solve a broad range of problems and
problem types.
2. The system must be able to make explicit any implicit information known to it.
3. The system must have a control mechanism.
In many situations encountered in practice, the available knowledge is inexact.
In such situations, probabilistic reasoning procedures are used.
Learning:-
The environment supplies some information to the learning element. The learning
element uses this information to make improvements in the knowledge base, and
finally the performance element uses the knowledge base to perform its task.
Rule 1:-
A signal flows along a link only in the direction defined by the arrow on the
link. Two different types of links can be distinguished:
1. Synaptic Link:-
Its behavior is governed by a linear input-output relation: the node signal xj is
multiplied by the synaptic weight wkj to produce the node signal yk = wkj xj.
2. Activation Link:-
Its behavior is governed in general by a nonlinear input-output relation: the
node signal xj is transformed by the nonlinear activation function Ψ(·) to
produce the node signal Ψ(xj).
Rule 2:-
A node signal equals the algebraic sum of all signals entering the pertinent node
via the incoming links. This is known as the 'fan-in rule'.
For example, with two incoming signals yi and yj, the node signal is yk = yi + yj.
Rule 3:-
The signal at a node is transmitted to each outgoing link originating from that
node, with the transmission being entirely independent of the transfer functions
of the outgoing links. This is known as the 'fan-out rule'.
For example, the node signal xj is transmitted unchanged along each of its
outgoing links.
Fig: Signal-flow graph of a neuron: the fixed input x0 = +1 with weight wk0 = bk
(the bias), together with the inputs x1, x2, …, xm weighted by wk1, wk2, …, wkm,
are summed and passed through Ψ(·) to produce the output yk.