
Neural Networks

Neural networks are nonlinear systems built from a very large number of relatively simple elementary processors that operate in parallel. The processors interoperate through excitatory and inhibitory connections, each of which has an associated weight. Learning is done by modifying the weights according to a learning rule.

An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working together to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

Arguments for using neural networks in communications:

- Nonlinearity – neural networks are systems with multiple inputs and outputs which can learn a nonlinear relation between inputs and outputs.
- No need for a traffic model – neural networks have demonstrated their capacity for solving complex problems without exact knowledge and prior information. Thus a traffic model is not necessary, only a good representation of the problem.
- Generalization – neural networks (sometimes combined with fuzzy logic) are capable of approximating complicated input-output relations by selecting the significant inputs, thus obtaining characteristic parameters.
- Flexibility – every component of the neural network is a processor that operates independently of the others in the system, so for solving complex problems the system can be extended in a modular way, by adding new processors, without redesigning it.
- Tolerance to damage – similar to the human nervous system, the performance of a neural network degrades gradually as connections and neurons are damaged; neural networks have a damage threshold.
- Processing speed – because of their parallelism and hardware implementations, including optical implementations, neural networks achieve impressive speeds (tera-operations per second on a microchip of 1 cm²).

The processing potential is very high due to the nonlinear character and processing speed of neural networks, and the applicability domain is very large. Due to these characteristics neural networks can:

- learn traffic variations from experience
- adapt to the dynamic load of the network
- predict the future behavior of the traffic

Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach: the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that, because the network finds out how to solve the problem by itself, its operation can be unpredictable.

On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.

Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Even more, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
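As a minimal illustration of "learning by example" (a sketch added here, not taken from the source; the OR task, the variable names and the learning rate are assumptions), a single perceptron can be shown input/output pairs and adjust its weights from the error, instead of being programmed with explicit rules:

```python
# A single perceptron learns logical OR from examples alone:
# no algorithm for OR is coded, only a weight-update rule.

def step(v):
    return 1 if v >= 0 else 0   # hard-limit activation

# (input pattern, desired output) pairs for logical OR
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # synaptic weights
b = 0.0          # bias
eta = 0.1        # learning rate

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), d in samples:
        y = step(w[0] * x1 + w[1] * x2 + b)
        e = d - y                        # error signal
        w[0] += eta * e * x1             # perceptron weight update
        w[1] += eta * e * x2
        b += eta * e

print([step(w[0]*x1 + w[1]*x2 + b) for (x1, x2), _ in samples])  # [0, 1, 1, 1]
```

After a few passes the weights converge and the perceptron reproduces all four examples; the same loop with XOR targets would never converge, since a single neuron can only represent linearly separable functions.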

[Figure: model of an artificial neuron – input signals x1, x2, …, xm with synaptic weights w1, w2, …, wm and a fixed input x0 = +1 with weight w0, a summing function producing the local field v, and an activation function f(v) producing the output.]
Types of applications in which we can find neural networks:

- function approximation
- classification
- shape recognition
- prediction
- associative memory
- robot control

[Figure: structure of a multi-layer neural network]

Components of a neural network

The main parts of a neural network are:

- processing units (neurons), each with an activation state (current state) and one output per unit
- connections between units, which have weights
- a propagation rule for the connection network
- an activation function which combines the inputs with the current state of the unit, in order to generate a new activation state
- a learning rule to modify the connection weights by experience

Neuron model:

net_j = Σ_{i=1}^{n} w_ji x_i

o_j = f( Σ_{i=1}^{n} w_ji x_i − θ_j )

where θ_j is the threshold of neuron j.
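The neuron model above can be sketched in a few lines (an illustrative sketch; the sigmoid is chosen as the activation function f, which the source leaves generic):

```python
# Single-neuron model: net_j = sum_i w_ji * x_i, o_j = f(net_j - theta_j).
import math

def neuron_output(w_j, x, theta_j):
    # local field (weighted sum of the inputs)
    net_j = sum(w_ji * x_i for w_ji, x_i in zip(w_j, x))
    # sigmoid activation applied to the net input minus the threshold
    return 1.0 / (1.0 + math.exp(-(net_j - theta_j)))

print(neuron_output([0.5, -0.3], [1.0, 2.0], 0.0))  # ≈ 0.475
```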

Classification of neural networks by learning modality

Neural networks with supervised learning: a set of (input pattern, output pattern) pairs is given, with the help of which the error e(t) is calculated as the difference between the actual output value y(t) and the desired one d(t):

e(t) = d(t) − y(t)

Neural networks with unsupervised learning: the neural network extracts by itself the essential characteristics of the input patterns, forming distinct representations of them.

Neural networks with "critic" (reinforcement) learning, with reward and penalty: the network does not receive the desired signal, but only one which indicates how well the system is working.

Usual learning rules

Hebb rule:

Δw_ij = η y_i x_j

Perceptron rule:

Δw_ij = η [d_i − sgn(w_i^T x)] x_j

where x = [x_1, x_2, …, x_j, …, x_N] is the input vector of the neuron.

Delta rule (Widrow-Hoff rule):

Δw_ij = η (d_i − y_i) x_j

Generalized delta rule:

Δw_ij = η (d_i − y_i) f′(net_i) x_j

Correlation learning rule:

Δw_ij = η d_i x_j

Competitive learning rule (only the winning neuron m updates its weights):

Δw_mj = η (x_j − w_mj)

Outstar rule:

Δw_ij = η (d_i − w_ij)
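The learning rules above share the same shape: each produces a weight increment Δw for one connection. A minimal sketch (function and parameter names are assumptions made here; η is the learning rate):

```python
# Weight-increment functions for the usual learning rules.

def hebb(eta, y_i, x_j):
    return eta * y_i * x_j                          # Hebb: Δw_ij = η y_i x_j

def delta_rule(eta, d_i, y_i, x_j):
    return eta * (d_i - y_i) * x_j                  # Widrow-Hoff (delta) rule

def generalized_delta(eta, d_i, y_i, f_prime_net_i, x_j):
    return eta * (d_i - y_i) * f_prime_net_i * x_j  # delta rule with f'(net_i)

def correlation(eta, d_i, x_j):
    return eta * d_i * x_j                          # correlation rule

def competitive(eta, x_j, w_mj):
    return eta * (x_j - w_mj)                       # winning neuron m only

print(delta_rule(0.1, 1.0, 0.25, 2.0))              # η (d_i − y_i) x_j with η = 0.1
```

Note that the delta rule is the special case of the generalized delta rule with f′(net_i) = 1, i.e. a linear output unit.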

Retro-propagation error (RPE) algorithm

The RPE (backpropagation) algorithm was developed independently by several scientists from the fields of numerical analysis and statistics, as well as neural networks. It is a supervised learning algorithm working in two stages, and is also known as the generalized delta rule. The two stages are:

- in the first stage the information is propagated through the network layer by layer, from the input to the output;
- in the second stage the errors are propagated from the output to the input, producing the update of the network parameters.

First stage. Let:

- N be the number of inputs of the neural network
- Nh the number of neurons of the hidden layer
- Nout the number of neurons of the output layer

For an input pattern p, the hidden layer computes:

net_pj = Σ_{i=1}^{N} w_ji x_pi + θ_j ,  i = 1, 2, …, N,  j = 1, 2, …, Nh

o_pj = f( Σ_{i=1}^{N} w_ji x_pi + θ_j ) = f(net_pj)

and the output layer computes:

o_pk = f( Σ_{j=1}^{Nh} w_kj o_pj + θ_k ),  where net_pk = Σ_{j=1}^{Nh} w_kj o_pj + θ_k ,  k = 1, 2, …, Nout

so that, overall,

o_pk = f( Σ_{j=1}^{Nh} w_kj f( Σ_{i=1}^{N} w_ji x_pi + θ_j ) + θ_k )

Second stage. The errors are propagated from the output to the input, layer by layer, modifying the weights of the connections in order to minimize the error at the level of each neuron:

δ_pk = (d_pk − o_pk) f′(net_pk)

Δ_p w_kj = η δ_pk o_pj

δ_pj = f′(net_pj) Σ_{k=1}^{Nout} δ_pk w_kj

Δ_p w_ji = η δ_pj o_pi
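The two stages above can be sketched for a single training pattern p (an illustrative sketch; the sigmoid is chosen as f, for which f′(net) = o(1 − o), and all names are assumptions made here):

```python
# One RPE/backpropagation step for a network with one hidden layer.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train_step(x, d, W_hid, theta_hid, W_out, theta_out, eta=0.5):
    # Stage 1: forward propagation, input -> hidden -> output.
    net_h = [sum(w * xi for w, xi in zip(row, x)) + th
             for row, th in zip(W_hid, theta_hid)]
    o_h = [sigmoid(v) for v in net_h]                    # o_pj = f(net_pj)
    net_o = [sum(w * oh for w, oh in zip(row, o_h)) + th
             for row, th in zip(W_out, theta_out)]
    o_o = [sigmoid(v) for v in net_o]                    # o_pk = f(net_pk)

    # Stage 2: propagate errors output -> input and update the weights.
    # delta_pk = (d_pk - o_pk) f'(net_pk); for the sigmoid f' = o(1 - o).
    delta_o = [(dk - ok) * ok * (1 - ok) for dk, ok in zip(d, o_o)]
    # delta_pj = f'(net_pj) * sum_k delta_pk w_kj
    delta_h = [oh * (1 - oh) * sum(delta_o[k] * W_out[k][j]
               for k in range(len(delta_o)))
               for j, oh in enumerate(o_h)]
    for k, dk in enumerate(delta_o):
        for j in range(len(o_h)):
            W_out[k][j] += eta * dk * o_h[j]             # Δp w_kj = η δ_pk o_pj
        theta_out[k] += eta * dk
    for j, dj in enumerate(delta_h):
        for i in range(len(x)):
            W_hid[j][i] += eta * dj * x[i]               # Δp w_ji = η δ_pj x_pi
        theta_hid[j] += eta * dj
    return o_o                                           # output before the update
```

Calling `train_step` repeatedly over the training patterns decreases the output error; with at least two hidden neurons such a network can learn XOR, which a single neuron cannot.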

Conclusions

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task, and no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computation times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

Finally, I would like to state that even though neural networks have a huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.

References

1. Corina Botoca, Network Optimization, Politehnica University of Timisoara
2. www.wikipedia.com
