Artificial Neural Networks
Submitted by:
Name: Nusrat Nowaz Aditi.
ID: 2222310630.
Topic: Artificial Neural Networks - new era of intelligence.
Course Title: Computer Information System.
Course Code: 107.
Section: 10.
Submitted to:
Faculty: Yasir Arefin
Department of Management.
North South University.
Introduction
Artificial Neural Networks (ANNs) are being heralded as the “wave of the future” in
computing. They are self-learning mechanisms that don’t require the traditional programmer’s
skills. Unfortunately, there’s been a lot of hype: writers claim that these “neuron-inspired
processors” can do pretty much anything. That exaggeration has led to disappointment for users
who have tried and failed to solve their problems with neural networks, and application builders
have often concluded that neural nets are “complicated and confusing.” The industry itself has
contributed to this confusion with an avalanche of articles touting a vast array of different neural
networks, each with its own claims and examples. At the moment, only a few of these
neural-based paradigms are used commercially. By far the most popular structure is the
feedforward (backpropagation) network.
There are many variations of this fundamental type of neuron in humans, which makes it
even harder to replicate the process of thinking electrically. However, all natural neurons share
the same four fundamental components, referred to by their biological names: dendrites, soma,
axon, and synapse. The dendrites are extensions of the soma that act as input channels; they
receive input through the synapses of other neurons, and the soma processes these incoming
signals over time.
At present, however, the purpose of artificial neural networks isn’t to recreate the brain.
Instead, neural network researchers are trying to understand what nature can do that humans can
use to solve problems that haven’t been solved by traditional computing. To do this, the
fundamental unit of neural networks, the artificial neuron, mimics the four fundamental
functions of natural neurons.
In the upper left of the processing element in Fig. 4 above, the first step is to multiply
each input by its respective weighting factor, W(n). These modified inputs are then fed into the
summing function. The summing function usually just sums these products, but many other
operations are possible, each producing a different value to propagate forward: the average, the
largest value, the smallest value, the ORed value, the ANDed value, and so on. Most commercial
development products let software engineers define their own summing function routines, coded
in a higher-level language (typically C is supported). In some cases, the summing function can
be further complicated by the inclusion of an activation function, which makes the summing
function operate in a time-sensitive manner.
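The weighting and summing steps described above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: the function names, the example inputs, and the choice of summing modes are all ours, standing in for the sum/average/max/min variants the text mentions.

```python
def weighted_products(inputs, weights):
    """Multiply each input by its respective weighting factor W(n)."""
    return [x * w for x, w in zip(inputs, weights)]

def summing_function(products, mode="sum"):
    """Combine the weighted products. A plain sum is the usual choice,
    but other operations (average, largest, smallest) are possible."""
    if mode == "sum":
        return sum(products)
    if mode == "average":
        return sum(products) / len(products)
    if mode == "max":
        return max(products)
    if mode == "min":
        return min(products)
    raise ValueError(f"unknown mode: {mode}")

p = weighted_products([1.0, 2.0, 3.0], [0.5, -0.25, 0.25])
# p is [0.5, -0.5, 0.75]; summing_function(p) gives 0.75,
# summing_function(p, "min") gives -0.5
```

Each mode propagates a different single value forward, which is exactly the design freedom the commercial packages expose by letting engineers plug in their own summing routines.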
All artificial neural networks have the same basic structure or topology, as shown in
Fig. 5. In this structure, some neurons interface with the real world to accept its inputs, while
others provide the real world with the network’s outputs. The connections between neurons are
the building blocks of the system, and each delivers an input of variable strength to the next
neuron. There are two types of connections: the first causes the summing mechanism of the next
neuron to increase, and the second causes it to decrease. In more human terms, one stimulates,
and the other inhibits.
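The two connection types can be shown with a toy weighted sum, assuming the common convention that a positive weight is stimulating (excitatory) and a negative weight is inhibiting; the values here are purely illustrative.

```python
def neuron_sum(inputs, weights):
    """Weighted sum arriving at the next neuron."""
    return sum(x * w for x, w in zip(inputs, weights))

signal = [1.0, 1.0]
both_stimulating = neuron_sum(signal, [0.5, 0.25])   # both raise the sum: 0.75
one_inhibiting   = neuron_sum(signal, [0.5, -0.25])  # one lowers it: 0.25
```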
Component 4. Scaling and Limiting: After the transfer function of the processing element, the
result may be passed through additional scaling and limiting operations. The scaling operation
simply multiplies the transfer value by a scale factor and then adds an offset. The limiting
operation ensures that the scaled result doesn’t exceed an upper or lower bound.
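The scale-then-clamp behaviour just described can be sketched as follows; the parameter names and default bounds are our assumptions for illustration.

```python
def scale_and_limit(transfer_value, scale=1.0, offset=0.0,
                    lower=-1.0, upper=1.0):
    """Scale the transfer function result, add an offset,
    then clamp it to the allowed [lower, upper] range."""
    scaled = scale * transfer_value + offset
    return max(lower, min(upper, scaled))

scale_and_limit(0.9, scale=2.0)                # 1.8 is clamped to 1.0
scale_and_limit(0.25, scale=2.0, offset=0.25)  # 0.75, within the limits
```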
Component 5. Output Function (Competition): Each processing element is allowed one output
signal, which it may send to hundreds of other neurons. This mirrors the biological neuron,
which receives many inputs but produces only one output action.
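One common way to realize the “competition” in this component is a winner-take-all rule: only the processing element with the strongest result transmits, and the rest output zero. The text doesn’t specify the rule, so treat this as one plausible sketch.

```python
def winner_take_all(values):
    """Keep only the strongest element's output; zero the others."""
    best = max(range(len(values)), key=lambda i: values[i])
    return [v if i == best else 0.0 for i, v in enumerate(values)]

winner_take_all([0.2, 0.9, 0.4])  # -> [0.0, 0.9, 0.0]
```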
Component 6. Error Function and Back-Propagated Value: In most learning networks, the
error function compares the current output to the desired output. The raw error value is then
transformed by the error function to fit the specific network architecture.
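As a minimal sketch of this component, the raw error is the difference between desired and actual output, and a transformed error (here, the squared error as one common example of our choosing) is what a particular architecture might propagate back.

```python
def raw_error(desired, actual):
    """Difference between the desired and the current output."""
    return desired - actual

def squared_error(desired, actual):
    """One possible transformation of the raw error."""
    e = raw_error(desired, actual)
    return e * e

raw_error(1.0, 0.75)     # 0.25
squared_error(1.0, 0.5)  # 0.25
```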
Component 7. Learning Function: The goal of the learning function is to change the input
connection weights of each processing element according to a neural-based algorithm. This
process of changing the input connection weights to achieve a specific result is also referred to
as the adaptation function, or the learning mode.
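A concrete instance of such a learning function is the classic delta rule, which nudges each input connection weight in proportion to the error and that connection's input. The rule and the learning-rate value below are standard textbook choices, not something this document prescribes.

```python
def update_weights(weights, inputs, error, learning_rate=0.5):
    """Delta rule: w_i <- w_i + learning_rate * error * x_i.
    Each input connection weight moves to reduce the error."""
    return [w + learning_rate * error * x
            for w, x in zip(weights, inputs)]

new_w = update_weights([0.5, -0.25], [1.0, 2.0], error=0.5)
# new_w is [0.75, 0.25]: both weights moved in the direction
# that increases the output toward the desired value
```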
Development Systems: Good development systems let you prototype a network, train it, fine-
tune it, and run it. They run on the normal range of computers and don’t usually require special
hardware, although some vendors package fast RISC processors onto special neural processing
boards.
What the Next Developments Will Be: Vendors in the industry anticipate that the transition
from tools to applications will continue. In particular, the trend will be toward hybrid systems,
which combine neural networks with other types of processing such as fuzzy logic, expert
systems, or genetic algorithms. In fact, several manufacturers are working on fuzzy neurons.
Fuzzy logic incorporates the inexactness of life into mathematics: in life, most pieces of data
don’t fit into specific categories.
For example, a fuzzy neuron-based system may be initialized with an expert’s set of rules and
weights for a particular application. The neural network doesn’t care that those rules aren’t
exact, because it can learn and then validate the expert’s rules, and it can even add nodes for
concepts the expert may not have understood. In short, hybrid systems are the future.
Summary
To sum up, the promise of neural networks in computing lies in their ability to do things
that traditional processors can’t: they can identify patterns within large datasets and then
extrapolate those patterns into actionable insights. Neural networks learn; they aren’t
programmed.
But there’s even more to come. Neural networks need faster hardware, and they need to
be integrated into hybrid systems that also use fuzzy logic and expert systems. In the future,
neural networks will be capable of hearing speech, reading handwriting, and performing actions.
They’ll be the intelligence behind robots that never get bored or distracted. And in the age of
“intelligent” machines, they’ll be at the forefront.