Novel Approach To Improve The Performance of Artificial Neural Networks
Abstract: Artificial neural networks, inspired by the information-processing strategies of the brain, are proving useful in a variety of applications, including object classification problems and many other areas of interest, and can be updated continuously with new data to optimize their performance at any instant. The performance of a neural classifier depends on many criteria, i.e., the structure of the neural network, the initial weights, the feature data and the number of training samples used, all of which are still challenging issues among the research community. This paper discusses a novel approach to improve the performance of a neural classifier by changing the methodology of presenting the training samples to the classifier. The results show that the network's performance also depends on the methodology of giving the samples to the classifier. This work is carried out using a real-world dataset.

Keywords: Artificial Neural Networks, Haar Feature Extraction.

1 Department of Computer Applications, Bannari Amman Institute of Technology, Sathyamangalam, TamilNadu, India
2 Department of Computer Applications, National Institute of Technology, Trichy, TamilNadu, India
3 Department of Information Technology, Bannari Amman Institute of Technology, Sathyamangalam, TamilNadu, India
Email: svdevendran@yahoo.com, hema@nitt.edu, awahi bhu k@hotmail.com

... consumed more when the number of dimensions of the input data is large. Feature extraction [6] methods are utilized to reduce the training time taken by the network. Changing the feature extraction method for the same problem produces different output levels. Leandro Nunes de Castro [7] and Mercedes Fernandez-Redondo [8] discuss various methods used to calculate the range of values to assign to the initial weights, whose values help to minimize the learning time of the network. Hence, much work is needed in this line to increase the level of response. This paper presents a novel methodology to improve the performance of neural classification irrespective of the constraints discussed above. The organization of the paper is as follows: Section II describes artificial neural networks, Section III deals with wavelet feature extraction, Section IV explains the proposed work, Section V gives the experiment results, and Section VI concludes.

II. ARTIFICIAL NEURAL NETWORKS

Fig. 2.1. Structure and Functioning of a Single Neuron (feature vector → classification → class)

The artificial neural network basically has three layers, namely an input layer, a hidden layer and an output layer. There may be one or more hidden layers, depending on the number of dimensions of the training samples. The neural network structure used in our experiment consists of only
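A three-layer feedforward network of the kind described can be sketched as below. The layer sizes (50 inputs to match the 50x1 Haar feature vector, 10 hidden units, one sigmoid output for the binary positive/negative decision) and the activation functions are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: 50 inputs (matching the 50x1 feature
# vector), 10 hidden units, 1 output. These are assumptions.
n_in, n_hidden, n_out = 50, 10, 1
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass: feature vector -> class probability."""
    h = np.tanh(W1 @ x + b1)      # hidden-layer activation
    return sigmoid(W2 @ h + b2)   # output probability in (0, 1)

x = rng.normal(size=n_in)         # stand-in for one 50x1 feature vector
p = forward(x)
```

The output can be thresholded at 0.5 to obtain the positive/negative class decision used in the experiments.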
Fig. 4.1. Flowchart Diagram of the Complete System (Grayscale Image Input → Noise Removal → Histogram Equalization → BW Conversion → Haar Feature Extraction → Neural Classification → Result)

Grayscale images (40x100) are received as input; noise is removed; pixel values are equalized using the histogram; the image is converted into a black-and-white image; features are extracted using the Haar wavelet. The extracted features (50x1) are given for training the classifier, which is then tested. Input Type I, Input Type II and Input Type III are three methodologies of presenting the samples for neural training. Each input type is used for training the neural classifier in a separate experiment, which is then tested.

Input Type I: 50 positive samples followed by 50 negative samples are given for neural training, as shown in figure 4.2. The same neural network is tested with a new set of 1000 samples, which results in a performance of 71.4%.

Fig. 4.2. Order of Training Samples in Input Type I (samples 1…50 positive, 51…100 negative)

Input Type II: the first 25 positive samples followed by the first 25 negative samples, and then the remaining 25 positive samples followed by the remaining 25 negative samples, are given for neural training, as shown in figure 4.3. The same neural network is tested with a new set of 1000 samples, which results in a performance of 73.2%.

Fig. 4.3. Order of Training Samples in Input Type II (samples 1…25, 26…50, 51…75, 76…100)

Input Type III: 100 samples are given in 10 split-ups, i.e., 10P, 10N, 10P, 10N, 10P, 10N, 10P, 10N, 10P, 10N, as alternating positive and negative samples of size 10 each, as shown in figure 4.4. The same neural network is tested with a new set of 1000 samples, which results in a performance of 77.2%.

Fig. 4.4. Order of Training Samples in Input Type III

Figures 4.5, 4.6 and 4.7 show the training graphs obtained in Experiment I, Experiment II and Experiment III using Input Type I, Input Type II and Input Type III respectively. The performance of these three networks depends on the training samples and the way they are presented to the classifier.

Fig. 4.5. Training Graph for Experiment I

Fig. 4.6. Training Graph for Experiment II
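The Fig. 4.1 preprocessing pipeline can be sketched as below. The paper does not specify the noise filter, equalization details or threshold, so simple stand-ins are assumed: a 3x3 mean filter for noise removal, cumulative-histogram equalization, a fixed threshold for BW conversion, and 8x10 block means as a crude stand-in for the Haar-wavelet approximation coefficients (which are scaled block averages at each decomposition level).

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_filter3(img):
    """3x3 mean filter: a simple stand-in for the noise-removal step."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    return out.astype(np.uint8)

def equalize(img):
    """Histogram equalization on an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def features_50(img):
    """40x100 image -> 50-element feature vector via 8x10 block means
    (a stand-in for the Haar approximation coefficients)."""
    assert img.shape == (40, 100)
    blocks = img.reshape(5, 8, 10, 10).mean(axis=(1, 3))
    return blocks.ravel()                      # shape (50,)

img = rng.integers(0, 256, size=(40, 100), dtype=np.uint8)  # stand-in input
img = mean_filter3(img)        # noise removal
img = equalize(img)            # histogram equalization
bw = (img > 127).astype(np.uint8)              # BW conversion
feat = features_50(bw * 255)                   # 50x1 feature vector
```

The resulting 50-element vector is what would be fed to the neural classifier in place of the raw 4000-pixel image.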
IEEE-ICSCN, Feb. 2007
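The three sample orderings (Input Types I, II and III) can be sketched as index sequences. The 'P'/'N' labels below are symbolic stand-ins for the actual positive and negative image samples; all three lists present the same 100 samples, differing only in order.

```python
# 50 positive and 50 negative training samples, represented symbolically.
pos = ["P"] * 50
neg = ["N"] * 50

# Input Type I: all 50 positives, then all 50 negatives.
type1 = pos + neg

# Input Type II: 25P, 25N, then the remaining 25P, 25N.
type2 = pos[:25] + neg[:25] + pos[25:] + neg[25:]

# Input Type III: alternating blocks of 10: 10P, 10N, ..., 10P, 10N.
type3 = []
for k in range(5):
    type3 += pos[k * 10:(k + 1) * 10] + neg[k * 10:(k + 1) * 10]
```

Training the same network on each ordering is what distinguishes Experiments I, II and III.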
V. EXPERIMENT RESULTS

The above experiments are carried out on standard real images taken from the UIUC image database [10], and the results are shown in Table 1. Each experiment is tested with 500 positive and 500 negative samples separately.

Table 1: Overall Experiment Results

Category          Positive Samples   Negative Samples   Overall Performance
Experiment I      337 (67.4%)        377 (75.4%)        71.4%
Experiment II     341 (68.2%)        391 (78.2%)        73.2%
Experiment III    367 (73.4%)        405 (81.0%)        77.2%

Experiment I correctly classifies 337 positive samples and 377 negative samples, giving an overall performance of 71.4%. Experiment II correctly classifies 341 positive samples and 391 negative samples, giving an overall performance of 73.2%. Experiment III correctly classifies 367 positive samples and 405 negative samples, giving an overall performance of 77.2%.

Fig. 4.7. Training Graph for Experiment III

VI. CONCLUSION

This paper presents a novel idea that helps to improve the performance of neural networks by changing the methodology of presenting the samples for training. The experimental results show that a neural network's performance also depends on the order of the training samples given to it. The experiments show comparatively good results and are tested with 1000 samples consisting of 500 positive and 500 negative images. The paper firmly establishes that presenting the inputs in various orders changes the performance of the classifier. In future work, other feature extraction algorithms will be considered and the performance of the classifier will be tested.

REFERENCES

[1] Mehmet Engin, "ECG beat classification using neuro-fuzzy network", Pattern Recognition Letters 25 (2004) 1715-1722.
[2] Wenming Cao, Feng Hao, Shoujue Wang, "The application of DBF neural networks for object recognition", Information Sciences 160 (2004) 153-160.
[3] Aurelio Uncini, "Audio signal processing by neural networks", Neurocomputing 55 (2003) 593-625.
[4] Horacio M. Gonzalez-Velasco, Carlos J. Garcia-Orellana, Miguel Macias-Macias, F. Javier Lopez-Aligue, M. Isabel Acevedo-Sotaca, "Neural-networks-based edges selector for boundary extraction problems", Image and Vision Computing 22 (2004) 1129-1135.
[5] L. Ma, K. Khorasani, "New training strategies for constructive neural networks with application to regression problems", Neural Networks 17 (2004) 589-609.
[6] Richard Maclin, Jude W. Shavlik, "Combining the predictions of multiple classifiers: Using competitive learning to initialize neural networks", Proc. of 14th International Joint Conf. on Artificial Intelligence (IJCAI-95).
[7] Leandro Nunes de Castro, Fernando Jose Von Zuben, "An immunological approach to initialize feedforward neural network weights", Proc. of Intl. Conf. of Artificial Neural Networks and Genetic Algorithms, pp. 126-129, 2001.
[8] Mercedes Fernandez-Redondo, Carlos Hernandez-Espinosa, "Weight initialization methods for multilayer feedforward", Proc. of European Symposium on Artificial Neural Networks (ESANN'2001), pp. 119-124.
[9] Viola, P., Jones, M., "Rapid object detection using a boosted cascade of simple features", Proc. of Intl. Conf. on Computer Vision and Pattern Recognition (CVPR), Volume I, (2001) 511-518.
[10] http://l2r.cs.uiuc.edu/~cogcomp/Data/Car