


IEEE - ICSCN 2007, MIT Campus, Anna University, Chennai, India. Feb. 22-24, 2007. pp.442-445.

Novel Approach to Improve the Performance of Artificial Neural Networks


V. Devendran¹, Hemalatha Thiagarajan² and Amitabh Wahi³

¹ Department of Computer Applications, Bannari Amman Institute of Technology, Sathyamangalam, TamilNadu, India
² Department of Computer Applications, National Institute of Technology, Trichy, TamilNadu, India
³ Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam, TamilNadu, India
Email: svdevendran@yahoo.com, hema@nitt.edu, awahi_bhu@hotmail.com

Abstract: Artificial neural networks, inspired by the information-processing strategies of the brain, are proving to be useful in a variety of applications, including object classification problems and many other areas of interest, and can be updated continuously with new data to optimize their performance at any instant. The performance of neural classifiers depends on many criteria, i.e., the structure of the neural network, the initial weights, the feature data and the number of training samples used, all of which are still challenging issues among the research community. This paper discusses a novel approach to improve the performance of a neural classifier by changing the methodology of presenting the training samples to it. The results show that the network's performance also depends on the methodology of giving the samples to the classifier. This work is carried out using a real-world dataset.

Keywords: Artificial Neural Networks, Haar Feature Extraction.

I. INTRODUCTION

Artificial Neural Networks [1-5] have been applied in many fields including face recognition, speaker recognition, fingerprint recognition, texture classification and object classification. Artificial neural networks are able to perform clustering and classification tasks effectively using prior knowledge, which can be acquired through N exemplars including positive and negative samples. The performance of a neural classifier is limited by a number of constraints: normally by the structure of the neural network, the features extracted, the number of training samples used, the range of values of the initial weights and the initial weights actually assigned. There is not even a concrete prescription for the structure of a neural network, that is, for the number of hidden layers and the number of neurons in each hidden layer. More training time is consumed when the dimension of the input data is large, so feature extraction [6] methods are utilized to reduce the training time taken by the network; changing the feature extraction method for the same problem produces different output levels. Leandro Nunes de Castro [7] and Mercedes Fernandez-Redondo [8] discuss various methods used to calculate the range of values from which the initial weights are assigned, whose choice helps to minimize the learning time of the network. Hence much work is still needed along these lines to increase the level of response. This paper presents a novel methodology to improve the performance of neural classification irrespective of the constraints discussed above. The organization of the paper is as follows: Section II describes artificial neural networks, Section III deals with wavelet feature extraction, Section IV explains the proposed work, Section V gives the experimental results and Section VI concludes.

II. ARTIFICIAL NEURAL NETWORKS

Neural networks developed from theories of how the human brain works. Many modern scientists believe the human brain is a large collection of interconnected neurons, connected to both sensory and motor nerves. Neurons in the brain are believed to fire by emitting an electrical impulse across the synapse to other neurons, which then fire or do not, depending on certain conditions. The structure of a neuron is given in Figure 2.1: each input x_i is multiplied by a weight w_i, and the neuron outputs f(Σ w_i x_i).

Fig. 2.1. Structure and Functioning of a Single Neuron
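As a minimal sketch of the computation in Figure 2.1 (the paper does not fix the activation f at this point, so the logistic sigmoid below is an illustrative assumption):

```python
import numpy as np

def neuron_output(x, w):
    """Single neuron of Fig. 2.1: weighted sum of inputs passed through f."""
    z = np.dot(w, x)                   # sum of w_i * x_i
    return 1.0 / (1.0 + np.exp(-z))    # f(z); sigmoid assumed for illustration

# Example with three inputs and arbitrary weights
print(neuron_output(np.array([0.5, -1.2, 3.0]), np.array([0.4, 0.1, -0.7])))
```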


The artificial neural network basically has three layers, namely the input layer, the hidden layer and the output layer; there can be one or more hidden layers, depending upon the dimensionality of the training samples. The neural network structure used in our experiments consists of only three layers, with 50 neurons in the input layer, 10 neurons in the hidden layer and 1 neuron in the output layer, as shown in Figure 2.2.

Fig. 2.2. Simple Neural Network Structure: input layer (50 neurons), hidden layer (10 neurons) and output layer (1 neuron)
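A minimal sketch of a forward pass through this 50-10-1 structure, using the tan-sigmoid hidden layer and log-sigmoid output layer described below; the random weights and the 0.5 decision threshold are illustrative assumptions, not the paper's trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(10, 50))   # input -> hidden weights
b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(1, 10))    # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    """Forward pass: tan-sigmoid hidden layer, log-sigmoid output layer."""
    h = np.tanh(W1 @ x + b1)                    # tan-sigmoid
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # log-sigmoid, output in (0, 1)
    return y

x = rng.random(50)                  # one 50x1 feature vector
print(forward(x) > 0.5)             # assumed threshold for the car / non-car decision
```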
A learning problem with binary outputs (yes/no or 1/0) is referred to as a binary classification problem; its output layer has only one neuron. A learning problem with a finite number of outputs is referred to as a multi-class classification problem; its output layer has more than one neuron. The examples of the input data set (or sets) are referred to as the training data. The algorithm which takes the training data as input and produces its output by selecting the best among the hypothesis planes in the hypothesis space is referred to as the learning algorithm, and the approach of using examples to synthesize programs is known as the learning methodology. When the input data set is labeled with its class membership, the learning is called supervised learning; when the data carry no class membership, it is known as unsupervised learning. There are two different styles of training, i.e., incremental training and batch training. In incremental training the weights and biases of the network are updated each time an input is presented to the network; in batch training the weights and biases are updated only after all of the inputs have been presented. In this experimental work, the backpropagation algorithm is applied for learning the samples, tan-sigmoid and log-sigmoid transfer functions are applied in the hidden layer and output layer respectively, and gradient descent is used for adjusting the weights, as sketched below.
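A minimal sketch of this training setup, under stated assumptions: the squared-error loss, the learning rate of 0.1 and the fixed epoch count are illustrative choices the paper does not report. The code implements gradient-descent backpropagation through a tan-sigmoid hidden layer and a log-sigmoid output layer, in both batch and incremental styles:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.1, size=(10, 50)), np.zeros(10)
W2, b2 = rng.normal(scale=0.1, size=(1, 10)), np.zeros(1)
lr = 0.1  # learning rate (assumed; not reported in the paper)

def gradients(x, t):
    """Backpropagation gradients for one (input, target) pair under squared error."""
    h = np.tanh(W1 @ x + b1)                    # tan-sigmoid hidden activations
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # log-sigmoid output
    d2 = (y - t) * y * (1.0 - y)                # output delta: error * logsig'
    d1 = (W2.T @ d2) * (1.0 - h ** 2)           # hidden delta: backprop * tansig'
    return np.outer(d1, x), d1, np.outer(d2, h), d2

def train(samples, targets, batch=True, epochs=200):
    global W1, b1, W2, b2
    for _ in range(epochs):
        if batch:
            # batch training: accumulate over all inputs, then update once
            acc = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
            for x, t in zip(samples, targets):
                for a, g in zip(acc, gradients(x, t)):
                    a += g
            W1 -= lr * acc[0]; b1 -= lr * acc[1]; W2 -= lr * acc[2]; b2 -= lr * acc[3]
        else:
            # incremental training: update after every presented input
            for x, t in zip(samples, targets):
                gW1, gb1, gW2, gb2 = gradients(x, t)
                W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Here `samples` would be the 100 training feature vectors and `targets` their 1/0 (car/non-car) labels, presented in whichever order Input Type I, II or III prescribes.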
III. WAVELET FEATURE EXTRACTION

There are many motivations for using features rather than the pixels directly. The most common reason is that feature extraction reduces the dimension of the input data and in turn helps to minimize the training time of the network classifier. Image feature detection is also a fundamental issue in many intermediate-level vision problems such as stereo, motion correspondence, image registration and object recognition. The basic steps involved in classification are shown in Figure 3.1.

Fig. 3.1. Basic Steps Involved in Classification (image -> feature extraction -> feature vector -> classification -> class)

The Haar wavelet [9] is a widely used technique for feature extraction. Here a single-level one-dimensional wavelet decomposition is used, which gives both approximation and detail coefficients, as shown in Figure 3.2; the approximation coefficients, which are of size 50x1, are considered for training the neural classifier.

Fig. 3.2. Original, Approximation and Detailed Coefficients of Haar Wavelets
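A sketch of the single-level one-dimensional Haar decomposition: the scaled pairwise sums and differences below are the standard orthonormal Haar analysis filters, and the random length-100 signal is a placeholder for the data from which the 50x1 approximation vector is taken:

```python
import numpy as np

def haar_dwt_1d(x):
    """Single-level 1-D Haar decomposition of an even-length signal.
    Returns (approximation, detail), each half the input length."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass: scaled pairwise sums
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass: scaled pairwise differences
    return approx, detail

signal = np.random.default_rng(2).random(100)     # placeholder length-100 signal
cA, cD = haar_dwt_1d(signal)
print(cA.shape, cD.shape)                         # (50,) (50,) -- cA is the feature vector
```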


IV. PROPOSED WORK

Using the above feature vector representation, a neural classifier is trained and tested to classify an image as car or non-car. The training of the classifier is done in three different experiments by changing the methodology of presenting the input data from the same 100 labeled samples; each experiment is then tested with 1000 samples, and the results are given in Table 1. The experiments use 50 positive and 50 negative samples for training, and 500 positive and 500 negative samples for testing the classifier. The complete flow of the computer-based system is shown in Figure 4.1.

Fig. 4.1. Flowchart Diagram of the Complete System (grayscale image input -> noise removal -> histogram equalization -> BW conversion -> Haar feature extraction -> neural classification -> result)

Grayscale images (40x100) are received as input; noise is removed; pixel values are equalized using the histogram; the image is converted into a black-and-white image; and features are extracted using the Haar wavelet. The extracted features (50x1) are given for training the classifier, which is then tested. Input Type I, Input Type II and Input Type III are the three methodologies of presenting the samples for neural training; each input type is used to train the neural classifier in one experiment, which is then tested.
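A hedged sketch of this pipeline: the paper names the stages but not their parameters, so the 3x3 median filter for noise removal, the histogram-equalization recipe, the mid-gray threshold for BW conversion, and the column-averaging step that reduces the 40x100 image to a length-100 signal are all illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(gray):
    """gray: 40x100 grayscale image with values in 0..255."""
    denoised = median_filter(gray, size=3)             # noise removal (assumed 3x3 median)
    # histogram equalization via the cumulative distribution of pixel values
    hist, _ = np.histogram(denoised, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    equalized = cdf[denoised.astype(np.uint8)]
    return (equalized > 128).astype(float)             # BW conversion (assumed threshold)

def haar_features(bw):
    """50x1 feature vector: Haar approximation of a length-100 profile."""
    profile = bw.mean(axis=0)                          # 40x100 -> length 100 (assumption)
    return (profile[0::2] + profile[1::2]) / np.sqrt(2.0)

image = np.random.default_rng(3).integers(0, 256, size=(40, 100))
print(haar_features(preprocess(image)).shape)          # (50,)
```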
Input Type I: 50 positive samples followed by 50 negative samples are given for neural training, as shown in Figure 4.2. The same neural network is tested with a new set of 1000 samples, which results in a performance of 71.4%.

Fig. 4.2. Order of Training Samples in Input Type I (positions 1-50: positive; 51-100: negative)
Input Type II: The first 25 positive samples followed by the first 25 negative samples, and then the remaining 25 positive samples followed by the remaining 25 negative samples, are given for neural training, as shown in Figure 4.3. The same neural network is tested with a new set of 1000 samples, which results in a performance of 73.2%.

Fig. 4.3. Order of Training Samples in Input Type II (positions 1-25: positive; 26-50: negative; 51-75: positive; 76-100: negative)

Input Type III: The 100 samples are given in 10 splits, i.e., 10P, 10N, 10P, 10N, 10P, 10N, 10P, 10N, 10P, 10N (alternating positive and negative blocks of size 10 each), as shown in Figure 4.4. The same neural network is tested with a new set of 1000 samples, which results in a performance of 77.2%.

Fig. 4.4. Order of Training Samples in Input Type III
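The three presentation orders are easy to write down explicitly; in this sketch `pos` and `neg` are placeholder lists standing in for the 50 positive and 50 negative training samples:

```python
pos = [("P", i) for i in range(1, 51)]   # placeholder positive samples
neg = [("N", i) for i in range(1, 51)]   # placeholder negative samples

# Input Type I: 50 positives followed by 50 negatives
type_i = pos + neg

# Input Type II: first 25P, first 25N, then the remaining 25P, 25N
type_ii = pos[:25] + neg[:25] + pos[25:] + neg[25:]

# Input Type III: alternating blocks of 10 (10P, 10N, ..., 10P, 10N)
type_iii = []
for k in range(0, 50, 10):
    type_iii += pos[k:k + 10] + neg[k:k + 10]

print(type_i[0], type_ii[25], type_iii[10])   # ('P', 1) ('N', 1) ('N', 1)
```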


Figures 4.5, 4.6 and 4.7 show the training graphs obtained in Experiment I, Experiment II and Experiment III using Input Type I, Input Type II and Input Type III respectively. The performance of these three networks depends on the training samples and on the way they are presented to the classifier.

Fig. 4.5. Training Graph for Experiment I

Fig. 4.6. Training Graph for Experiment II

Fig. 4.7. Training Graph for Experiment III

V. EXPERIMENT RESULTS

The above experiments are carried out on standard real images taken from the UIUC Image database [10], and the results are shown in Table 1. Each experiment is tested with 500 positive and 500 negative samples separately.

Table 1: Overall Experiment Results (correctly classified test samples, out of 500 per class)

Category          Positive Samples    Negative Samples    Overall Performance
Experiment I      337 (67.4%)         377 (75.4%)         71.4%
Experiment II     341 (68.2%)         391 (78.2%)         73.2%
Experiment III    367 (73.4%)         405 (81.0%)         77.2%
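Each overall performance figure in Table 1 is simply the fraction of the 1000 test samples classified correctly; for Experiment III, for instance:

overall performance = (367 + 405) / 1000 = 0.772 = 77.2%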
Experiment I correctly classifies 337 positive samples and 377 negative samples, giving an overall performance of 71.4%. Experiment II correctly classifies 341 positive samples and 391 negative samples, giving an overall performance of 73.2%. Experiment III correctly classifies 367 positive samples and 405 negative samples, giving an overall performance of 77.2%.

VI. CONCLUSION

This paper presents a novel idea which helps to improve the performance of neural networks by changing the methodology of presenting the samples for training. The experimental results show that a neural network's performance also depends on the order in which the training samples are given to it. The experiments show comparatively good results when tested with 1000 samples consisting of 500 positive and 500 negative images. The paper firmly establishes that presenting the inputs in different orders changes the performance of the classifier. In future work, other feature extraction algorithms will be considered and the performance of the classifier will be tested.

REFERENCES

[1] Mehmet Engin, "ECG beat classification using neuro-fuzzy network", Pattern Recognition Letters 25 (2004) 1715-1722.
[2] Wenming Cao, Feng Hao, Shoujue Wang, "The application of DBF neural networks for object recognition", Information Sciences 160 (2004) 153-160.
[3] Aurelio Uncini, "Audio signal processing by neural networks", Neurocomputing 55 (2003) 593-625.
[4] Horacio M. Gonzalez-Velasco, Carlos J. Garcia-Orellana, Miguel Macias-Macias, F. Javier Lopez-Aligue, M. Isabel Acevedo-Sotaca, "Neural-networks-based edges selector for boundary extraction problems", Image and Vision Computing 22 (2004) 1129-1135.
[5] L. Ma, K. Khorasani, "New training strategies for constructive neural networks with application to regression problems", Neural Networks 17 (2004) 589-609.
[6] Richard Maclin, Jude W. Shavlik, "Combining the predictions of multiple classifiers: Using competitive learning to initialize neural networks", Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI-95).
[7] Leandro Nunes de Castro, Fernando Jose Von Zuben, "An immunological approach to initialize feedforward neural network weights", Proc. of the Intl. Conf. on Artificial Neural Networks and Genetic Algorithms, pp. 126-129, 2001.
[8] Mercedes Fernandez-Redondo, Carlos Hernandez-Espinosa, "Weight initialization methods for multilayer feedforward", Proc. of the European Symposium on Artificial Neural Networks (ESANN'2001), pp. 119-124.
[9] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features", Proc. of the Intl. Conf. on Computer Vision and Pattern Recognition (CVPR), Volume I (2001) 511-518.
[10] UIUC Image Database for Car Detection, http://l2r.cs.uiuc.edu/~cogcomp/Data/Car


