
North South University

Submitted by:
Name: Nusrat Nowaz Aditi.
ID: 2222310630.
Topic: Artificial Neural Networks - new era of intelligence.
Course Title: Computer Information System.
Course Code: 107.
Section: 10.

Submitted to:
Faculty: Yasir Arefin

Department of Management.
North South University.

Introduction
Artificial Neural Networks (ANNs) are being heralded as the “wave of the future” in
computing. They are self-learning mechanisms that don’t require the traditional programmer’s
skills. Unfortunately, there has been a lot of hype. Writers claim that these “neuron-inspired
processors” can do pretty much anything. That is an exaggeration, and it has led to
disappointment for users who tried, and failed, to solve their problems with neural networks.
Application builders have often concluded that neural nets are “complicated and confusing.”
Unfortunately, the industry itself has fed this confusion with an avalanche of articles touting a
vast array of different neural networks, each with its own claims and examples. At the moment,
only a few of these neural-based paradigms are actually being used commercially. By far the
most popular structure is the feedforward (backpropagation) network.

Figure 1: Artificial Neural Network

Artificial Neurons and How They Work


A neuron is the basic processing unit in a neural network. This building block of human
intelligence performs a few general functions. A biological neuron takes inputs from other
neurons, combines them in a certain way, generally performs a non-linear operation on the
result, and then outputs the final result.
Figure 2: A Simple Neuron

There are many variations of this fundamental type of neuron within humans, making it
even more difficult for humans to electrically replicate the process of thinking. However, all
natural neurons contain the same four fundamental components. These components are referred
to by their biological names: Dendrites, Soma, Axon, and Synapse. The Dendrites are extensions
of the Soma that act like input channels. The input channels receive input through the synapses
of other neurons, and the Soma processes these incoming signals over time.
At present, however, the purpose of artificial neural networks isn’t to recreate the brain.
Instead, neural network researchers are trying to understand what nature can do that humans can
borrow to solve problems that traditional computing hasn’t solved. To do this, the
fundamental unit of neural networks – the artificial neuron – mimics the four fundamental
functions of natural neurons.

Figure 3: A Basic Artificial Neuron

Electronic Implementation of Artificial Neurons


These artificial neurons are referred to as “processing elements” in current software
packages and have many more functions than the basic artificial neuron mentioned above.
Figure 4: A Model of a "Processing Element"

In the upper left of the processing element in Fig. 4 above, the first step is to multiply
each input by its respective weighting factor (W(n)). These weighted inputs are then
fed into the summing function. The summing function usually just sums these products, but
many other operations are possible, each producing a different value to propagate forward: the
average value, the largest value, the smallest value, the ORed value, the ANDed value, and so
on. In most commercial development products, software engineers are allowed to define their
own summing function routines, coded in a higher-level language (typically C is supported). In
some cases, the summing function is further complicated by the inclusion of an activation
function, which makes it operate in a time-sensitive manner.
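To make this concrete, the following is a minimal sketch in Python (rather than the C routines the packages support). The class name, weights, and functions are invented for illustration and are not taken from any particular product; they simply show the flow described above: inputs multiplied by weighting factors, a replaceable summing function, and a transfer (activation) function.

import math

def weighted_sum(inputs, weights):
    # Default summing function: sum of input * weight products.
    return sum(i * w for i, w in zip(inputs, weights))

def weighted_max(inputs, weights):
    # Alternative summing function: the largest weighted input.
    return max(i * w for i, w in zip(inputs, weights))

class ProcessingElement:
    # Hypothetical processing element: weighting factors, a user-definable
    # summing function, and a transfer function, as described in the text.
    def __init__(self, weights, summing=weighted_sum,
                 transfer=lambda s: 1.0 / (1.0 + math.exp(-s))):
        self.weights = weights
        self.summing = summing    # could be swapped for weighted_max, etc.
        self.transfer = transfer  # e.g. sigmoid, tanh, or a hard limiter

    def output(self, inputs):
        return self.transfer(self.summing(inputs, self.weights))

# Example: three inputs, each scaled by its own weighting factor W(n)
pe = ProcessingElement(weights=[0.8, -0.5, 0.2])
print(pe.output([1.0, 0.3, 0.7]))   # a single output value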

Artificial Network Operations


The second part of the “art” of using a neural network centers on the many ways in
which individual neurons can be clustered together. This clustering occurs in the human brain
in a way that allows information to be processed dynamically, interactively, and autonomously.
In biology, neural networks are built in three dimensions from microscopic components. This is
not the case with any proposed or existing human-made network. Today’s integrated circuits are
two-dimensional devices with a finite number of layers for interconnection.
At present, neural networks are simply assemblies of these primitive artificial neurons. The
assembly is done by creating layers that are then connected to one another. How these layers are
connected is the other part of the “art” of engineering networks to solve real-world problems.

Figure 5: A Simple Neural Network Diagram.

Basically, all artificial neural networks share a similar structure or topology, as shown in Fig. 5.
In this structure, some neurons interface with the real world to receive its inputs, while others
provide the real world with the network’s outputs. The connections between neurons are the
building blocks of the system, and each provides a variable strength of input to the next neuron.
There are two types of connections: the first causes the summing mechanism of the next neuron
to increase, and the second causes it to decrease. In more human terms, one excites and the other
inhibits.
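A rough sketch of this layering idea follows, assuming nothing beyond what the text says: two layers of simple neurons connected by weight matrices, where positive weights excite (increase the next neuron’s sum) and negative weights inhibit (decrease it). The layer sizes and numbers are made up purely for illustration.

import math

def layer_output(inputs, weight_matrix):
    # Each output neuron sums its weighted inputs (positive weights excite,
    # negative weights inhibit) and applies a sigmoid transfer function.
    outputs = []
    for neuron_weights in weight_matrix:       # one row per output neuron
        s = sum(i * w for i, w in zip(inputs, neuron_weights))
        outputs.append(1.0 / (1.0 + math.exp(-s)))
    return outputs

# Two input neurons feeding three hidden neurons, feeding one output neuron
inputs = [0.9, 0.1]
hidden = layer_output(inputs, [[0.5, -0.4],    # mixed excitatory/inhibitory weights
                               [0.3,  0.8],
                               [-0.6, 0.2]])
output = layer_output(hidden, [[1.0, -0.7, 0.5]])
print(output)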

Major Components of an Artificial Neuron


In this section, we’ll look at the seven major components that make up an artificial neuron.
These components apply whether the neuron is used for input, for output, or sits in one of the
hidden layers in between.
Component 1. Weighting Factors: A neuron usually receives many inputs at the same time.
Each input carries its own relative weight, which determines that input’s effect on the processing
element’s summation function.
Component 2. Summation Function: The first step in the operation of a processing element is to
compute the weighted sum of all the inputs. The inputs and their respective weights are vectors,
which can be written as (i1, i2, …, in) and (w1, w2, …, wn); the summation function forms the
products i1·w1, i2·w2, …, in·wn and adds them together.
Component 3. Transfer Function: The result of the summation function, which is almost always
the weighted sum, is converted into a working output by an algorithm called the transfer function.
The transfer function compares the summation total to a threshold value to determine the
neural output.

Figure 6: Sample Transfer Functions
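The curves of Figure 6 are not reproduced here, but the common transfer functions it refers to can be sketched as simple Python functions. The exact set a given package offers will vary; these are illustrative only.

import math

def hard_limiter(s, threshold=0.0):
    # Step function: output 1 if the sum exceeds the threshold, else 0.
    return 1.0 if s > threshold else 0.0

def ramp(s, lower=0.0, upper=1.0):
    # Ramping function: linear between the limits, clipped outside them.
    return max(lower, min(upper, s))

def sigmoid(s):
    # Smooth S-shaped curve between 0 and 1.
    return 1.0 / (1.0 + math.exp(-s))

def hyperbolic_tangent(s):
    # Smooth S-shaped curve between -1 and 1.
    return math.tanh(s)

for f in (hard_limiter, ramp, sigmoid, hyperbolic_tangent):
    print(f.__name__, f(0.4))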

Component 4. Scaling and Limiting: After the processing element’s transfer function, the
result may be passed through additional scaling and limiting operations. The scaling operation
simply multiplies the transfer value by a scale factor and then adds an offset. The limiting
operation ensures that the scaled result doesn’t exceed an upper or lower bound.
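In code, that scale-and-limit step is just a multiply, an offset, and a clip; the factor, offset, and limits below are arbitrary examples.

def scale_and_limit(transfer_value, scale=2.0, offset=0.1, lower=-1.0, upper=1.0):
    # Multiply by a scale factor, add an offset, then clip to the limits.
    scaled = scale * transfer_value + offset
    return max(lower, min(upper, scaled))

print(scale_and_limit(0.7))   # 1.5 before clipping, limited to 1.0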
Component 5. Output Function (Competition): Each processing element is allowed one output
signal, which it may pass on to hundreds of other neurons. This mirrors the biological neuron,
which has many inputs but produces only a single output action.
Component 6. Error Function and Back-Propagated Value: In most learning networks, the
error function is used to compare the current output to the desired output. The raw error value is
then converted by the error function to fit a specific network architecture.
Component 7. Learning Function: The goal of the learning function is to modify the input
connection weights of each processing element according to some neural-based algorithm. This
process of changing the input connection weights to achieve a desired result can also be referred
to as the adaptation function, as well as the learning mode.
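As one concrete and very common instance of such a learning function, the sketch below uses the delta (Widrow-Hoff) rule: each input connection weight is nudged in proportion to its input and to the error fed back from the error function. The learning rate and numbers are invented for illustration.

def delta_rule_update(weights, inputs, error, learning_rate=0.1):
    # Adjust each input connection weight in proportion to its input
    # and to the back-propagated error value.
    return [w + learning_rate * error * i for w, i in zip(weights, inputs)]

weights = [0.2, -0.3, 0.5]
inputs = [1.0, 0.4, 0.6]
desired, actual = 1.0, 0.62
error = desired - actual                 # raw error from the error function
weights = delta_rule_update(weights, inputs, error)
print(weights)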

Networks for Data Conceptualization


In many applications, data is not simply classified, because not all data fits neatly into a
class. Not every application reads characters or identifies diseases; some need to group data that
may or may not be clearly definable. For example, suppose we are building a mailing-list
database of potential customers. Buyers may exist in all classifications, yet they may be
concentrated within a particular age group and income level. In real life, other factors may blur
the boundaries of the region that contains the vast majority of buyers. This process of data
conceptualization simply tries to identify that group as accurately as possible.
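A minimal sketch of that idea follows, using plain competitive learning on made-up (age, income) records rather than any particular commercial network: a few cluster centres are repeatedly pulled toward the customers nearest to them, so the dense region of likely buyers emerges on its own.

import random

def competitive_clustering(records, n_clusters=2, steps=200, rate=0.1):
    # Pull each cluster centre toward the records closest to it
    # (a crude form of the data conceptualization described above).
    centres = random.sample(records, n_clusters)
    for _ in range(steps):
        age, income = random.choice(records)
        # find the nearest centre (the "winning" unit)
        win = min(range(n_clusters),
                  key=lambda k: (centres[k][0] - age) ** 2 +
                                (centres[k][1] - income) ** 2)
        ca, ci = centres[win]
        centres[win] = (ca + rate * (age - ca), ci + rate * (income - ci))
    return centres

# invented (age, income-in-thousands) records for a mailing list
customers = [(25, 30), (27, 35), (45, 90), (47, 85), (26, 32), (44, 88)]
print(competitive_clustering(customers))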

Networks for Data Association


The previous class of network, classification, is closely related to networks for data association.
In data association, classifications are still performed; a character reader, for instance,
classifies each of its scanned inputs. But for most applications there is an additional element:
some of the data is simply wrong. The credit card application may have been rendered
unreadable by water stains. Maybe the scanner lost its light source, or maybe the form was filled
in by a five-year-old. Networks for data association accept these occurrences as just bad data,
and they accept that bad data can appear in any classification.

Networks for Data Filtering


Data filtering is the last main type of network. An early example of a data filtering
network was the MADALINE. This network removed the echo from a phone line using a
dynamic echo cancelling circuit. More recent work has made modems capable of working
reliably at 4800 or 9600 baud using dynamic equalization techniques. These two applications
also use neural networks which have been integrated into special purpose chips.
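A sketch of the idea behind such filtering networks follows, using the LMS (Widrow-Hoff) rule on which MADALINE-style adaptive filters are based. The signals here are synthetic, and the details of the original telephone echo-cancelling circuit are not reproduced.

import math

def lms_filter(reference, noisy, n_taps=4, rate=0.01):
    # Adapt filter weights so that the reference signal's contribution
    # (e.g. an echo) is predicted and subtracted from the noisy line.
    weights = [0.0] * n_taps
    cleaned = []
    for t in range(len(noisy)):
        window = [reference[t - k] if t - k >= 0 else 0.0 for k in range(n_taps)]
        estimate = sum(w * x for w, x in zip(weights, window))
        error = noisy[t] - estimate          # what remains after cancelling
        weights = [w + rate * error * x for w, x in zip(weights, window)]
        cleaned.append(error)
    return cleaned

# synthetic example: a desired tone plus a delayed "echo" of a reference signal
reference = [math.sin(0.3 * t) for t in range(200)]
noisy = [math.sin(0.05 * t) + 0.5 * reference[t - 2] if t >= 2
         else math.sin(0.05 * t) for t in range(200)]
print(lms_filter(reference, noisy)[-5:])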

New Technologies that are Emerging


What Currently Exists: At present, there are several vendors in the market, all trying to claim a
piece of the neural network business. Some neural network products are simply add-ons to
popular databases and spreadsheets; others are designed for specific operating systems on
specific machines. The most popular neural network development tools run on Apple’s
Macintosh and the IBM PC standard.

Development Systems: Good development systems let you prototype a network, train it, fine-
tune it, and run it. They run on the normal range of computers and don’t usually require special
hardware, although some vendors package fast RISC processors onto special neural processing
boards.

Hardware Accelerators: A dedicated neural processor is a processor with a specific set of
features that allow it to be used in a neural network. Some of the major chip manufacturers have
developed neural processors; some were designed specifically for development-system vendors,
some package multiple simple neurons onto a single chip, and some incorporate proprietary
ideas, such as a particular implementation of a fuzzy neuron. Neural processors are available in
several broad technologies, including analog processors, digital processors, hybrid processors,
and optical processors.

What the Next Developments Will Be: Vendors in the industry expect the transition from tools
to applications to continue. In particular, the trend will be toward hybrid systems, which
combine neural networks with other types of processing such as fuzzy logic, expert systems, or
genetic algorithms. In fact, several manufacturers are working on fuzzy neurons. Fuzzy logic
incorporates the inexactness of life into mathematics; in real life, most pieces of data don’t fit
into precise categories.
For example, a fuzzy-neuron-based system might be initialized with an expert’s set of rules and
weights for a particular application. The neural network doesn’t care that those rules aren’t
exact, because it can learn and then validate the expert’s rules. It can also add nodes for concepts
that the expert may not understand. In short, hybrid systems are the future.
Summary

To sum up, the promise of artificial neural networks lies in their ability to do things that
traditional processors can’t. Neural networks can identify patterns within large datasets and then
extrapolate those patterns into actionable insights. Neural networks learn; they are not
programmed.
But there is even more to come. Neural networks need faster hardware, and they need to
be integrated into hybrid systems that also use fuzzy logic and expert systems. In the future,
neural networks will be able to hear speech, read handwriting, and carry out actions. They’ll be
the intelligence behind robots that never get bored or distracted. And in the age of “intelligent”
machines, they’ll be at the forefront.

