
ISSN: 2230-7109 (Online) | ISSN: 2230-9543 (Print)

IJECT Vol. 2, Issue 3, Sept. 2011

Object Classification through Perceptron Model using LabVIEW


1Sachin Sharma, 2Gaurav Kumar
1,2Dept. of Instrumentation & Control Engineering, NIT Jalandhar, Punjab, India

Abstract
Artificial neural networks are considered good alternatives to conventional statistical methods for classification and pattern-recognition problems. The perceptron model is the basic model of an artificial neural network (ANN) and is generally used for binary classification problems; it is a fast and reliable network for the class of problems that it can solve. In this paper we implement the perceptron model in LabVIEW and test it on a real-time object-classification problem, using a single-layer perceptron to separate two different types of objects.

Keywords
Perceptron Model, Artificial Neural Network, LabVIEW, Back Propagation

I. Introduction
Artificial intelligence offers an alternative way of processing data and information in the manner of the human brain. Scientists have studied brain function since the start of the 20th century, and artificial neural networks (ANNs), one area of expert systems, are essentially an attempt to simulate the way the human brain processes information. A variety of neural network architectures have been developed over the past few decades, ranging from feed-forward networks and the multi-layer perceptron (MLP) to highly dynamic recurrent ANNs, and different models are also distinguished by their activation function. The perceptron model [1, 3] is a neural network with a symmetrical hard-limit activation function. The perceptron is fast and reliable for the class of problems that it can solve, and an understanding of its operation provides a good basis for understanding more complex networks. Various algorithms have also been developed for training such networks to perform the necessary functions.
In this paper, a new approach is applied to build a perceptron using LabVIEW, the virtual instrumentation software developed by National Instruments, USA. LabVIEW is a powerful graphical tool for measurement, instrumentation and control problems and has been used successfully for data acquisition and control functions; however, its library does not provide neuron models, so here it is used to build the neural network itself. Neural networks are parallel processors, with data flowing along many lines simultaneously, and LabVIEW has the unique ability to express data-flow diagrams that are highly parallel in structure, which makes it a very effective environment for building neural nets. We have also implemented the supervised learning algorithm for training the perceptron in LabVIEW.

II. Perceptron Model
The perceptron model is the basic and simplest architecture of an artificial neural network. A perceptron may consist of a single neuron or of layers of neurons. A single neuron is mainly used to classify data which are linearly separable, while more than one neuron is needed to classify data which are not linearly separable. The architecture of the single-neuron perceptron with a hard-limit activation function is shown in Fig. 1. The working of this model is very simple: the weighted sum of the input signals is compared to a threshold to determine the neuron output. When the sum is greater than or equal to the threshold, the output is 1; when the sum is less than the threshold, the output is 0.

This architecture is limited to simple binary class problems which are linearly separable. For non-linearly separable class problems, we use a multi-layer perceptron model.
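The paper's implementation is a LabVIEW block diagram, so no textual code appears in the original; as a language-neutral illustration, the following Python sketch computes the same single-neuron, hard-limit output described above (the weight, bias and input values are arbitrary examples, not taken from the paper):

```python
import numpy as np

def hardlim(v):
    """Hard-limit activation: 1 when v >= 0, else 0."""
    return 1 if v >= 0 else 0

def perceptron_output(w, b, p):
    """Single-neuron perceptron: the weighted sum of the inputs
    (plus bias) is compared against the threshold built into hardlim."""
    return hardlim(np.dot(w, p) + b)

# Arbitrary illustrative values
w = np.array([1.0, -0.8])  # weight vector
b = 0.5                    # bias (equivalently, minus the threshold)
p = np.array([1, -1])      # input pattern
print(perceptron_output(w, b, p))  # -> 1, since 1.0 + 0.8 + 0.5 >= 0
```

Writing the threshold as a bias term b keeps the comparison against zero, which is the usual convention for the hard-limit perceptron.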

Fig. 1: Architecture of the perceptron

III. Perceptron Learning
Every class of neural network needs to learn the environment in which it has to perform, and it does so via some learning rule; being a neural network, the perceptron also has its learning rule [4]. By a learning rule we mean a procedure for modifying the weights and biases of a network (this procedure may also be referred to as a training algorithm), and its purpose is to train the network to perform some task. There are many types of neural network learning rules; they fall into three broad categories: supervised learning, unsupervised learning and reinforcement (or graded) learning.


A. Supervised Learning
In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behaviour: {p1, t1}, {p2, t2}, ..., {pQ, tQ}, where pq is an input to the network and tq is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets, and the learning rule is used to adjust the weights and biases of the network in order to move the network outputs closer to the targets. The perceptron learning rule falls into this supervised learning category. In this paper we implement supervised learning with the back-propagation algorithm to train the perceptron network.

B. Reinforcement Learning
Reinforcement learning is similar to supervised learning, except that, instead of being provided with the correct output for each network input, the algorithm is only given a grade. The grade (or score) is a measure of the network performance over some sequence of inputs. It appears to be most suited to control system applications [5-6].

C. Unsupervised Learning
In unsupervised learning, the weights and biases are modified in response to network inputs only; there are no target outputs available. Most of these algorithms perform some kind of clustering operation: they learn to categorize the input patterns into a finite number of classes. This is especially useful in such applications as vector quantization [7-8].

IV. Learning Algorithm: Back Propagation Learning
The back-propagation learning algorithm [9-11] propagates the error backward toward the input and updates the weight vector accordingly, based on the steepest-descent approach. Consider one output neuron j. The error at iteration n (i.e., for the nth pattern) is

e_j(n) = d_j(n) - y_j(n)

where d_j(n) is the target output and y_j(n) is the neuron output. The output of the neuron is given by

y_j(n) = φ(v_j(n)), with v_j(n) = Σ_i w_ji(n) x_i(n)

where x_i(n) is the input to the neuron, w_ji(n) are the weights and φ is the activation function. Taking the instantaneous error energy E(n) = (1/2) e_j^2(n) and using the chain rule of differentiation, we get the steepest-descent direction (the derivative value):

∂E(n)/∂w_ji(n) = -e_j(n) φ'(v_j(n)) x_i(n)

Hence the change in the weights is given by

Δw_ji(n) = η e_j(n) φ'(v_j(n)) x_i(n)

This equation is also called the update equation. The factor η is called the learning rate, and it is a very important parameter for the convergence of the algorithm. The learning rate should be well chosen: it should not be too high, because a high learning rate can lead to instability or oscillatory behaviour and may cause the system to miss the minimum; nor should it be very small, otherwise the system response becomes very slow and it takes considerable time to converge. Apart from tuning the learning rate, a momentum term can also be added to the weight-update equation.
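As an illustrative sketch only (the paper's actual implementation is a LabVIEW block diagram): for a hard-limit neuron the derivative φ'(v) is not defined at the threshold, and the classical perceptron rule simply drops that factor, using Δw_ji(n) = η e_j(n) x_i(n). A minimal Python version of this training loop:

```python
import numpy as np

def train_perceptron(patterns, targets, eta=0.5, epochs=100):
    """Supervised training of a single hard-limit neuron.
    For each pattern p with target t, the error e = t - y is formed
    and the weights are moved by delta_w = eta * e * p, the perceptron
    form of the steepest-descent update."""
    w = np.zeros(patterns.shape[1])
    b = 0.0
    for _ in range(epochs):
        for p, t in zip(patterns, targets):
            y = 1 if np.dot(w, p) + b >= 0 else 0  # hard-limit output
            e = t - y                              # e(n) = d(n) - y(n)
            w = w + eta * e * p                    # weight update
            b = b + eta * e                        # bias update
    return w, b
```

The defaults eta=0.5 and epochs=100 mirror the learning rate and epoch count reported in Section VI; both are ordinary tunable parameters, not values fixed by the algorithm.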

V. System Model for Classification
In this paper we take a simple problem: classifying two objects, say an apple and an orange. The objective of the system is to classify objects according to their features. The system has three main components: the sensors, the computer and the sorter. Fig. 2 shows the graphical view of the system model.

Fig. 2: System model

The two objects differ in shape, weight and texture, and three different sensors provide information about these three features. These sensors are somewhat primitive. The shape sensor outputs 1 if the object is approximately round and -1 if it is more elliptical; the texture sensor outputs 1 if the surface of the fruit is smooth and -1 if it is rough; and the weight sensor outputs 1 if the fruit weighs more than one pound and -1 if it weighs less. The sensors feed their outputs to the computer.
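To make the encoding concrete, here is a hedged sketch of how the three sensor readings could be assembled into ±1 feature vectors and classified with the train_perceptron sketch above. The particular readings assumed for each fruit are illustrative guesses, not values given in the paper:

```python
import numpy as np

# Assumed sensor readings [shape, texture, weight] for prototype fruit;
# these specific values are illustrative, not taken from the paper.
orange = np.array([1, -1, -1])  # round, rough surface, under one pound
apple  = np.array([1,  1, -1])  # round, smooth surface, under one pound

patterns = np.array([orange, apple])
targets  = np.array([0, 1])     # hypothetical labels: 0 = orange, 1 = apple

w, b = train_perceptron(patterns, targets)

for p, name in [(orange, "orange"), (apple, "apple")]:
    y = 1 if np.dot(w, p) + b >= 0 else 0
    print(name, "->", "apple" if y == 1 else "orange")
```

Because these two feature vectors are linearly separable, a single hard-limit neuron suffices, which is exactly the situation Section II describes.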

A. Software Architecture

Fig. 3: Software architecture

The computer is equipped with the LabVIEW software and an operating system. An NI DAQ card is connected to the computer for data acquisition; we set the DAQ card to digital input mode because the sensory information is digital in nature. Inside the computer, this information goes to the neural network model, which is implemented in LabVIEW.
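The paper performs this acquisition with NI-DAQ blocks inside LabVIEW. Purely as a rough Python analogue (assuming the nidaqmx package, a device named Dev1 and the three sensors wired to digital lines 0-2, all of which are assumptions), the read could look like:

```python
import nidaqmx
from nidaqmx.constants import LineGrouping

# Read the three digital sensor lines (device and line names are assumed)
with nidaqmx.Task() as task:
    task.di_channels.add_di_chan(
        "Dev1/port0/line0:2", line_grouping=LineGrouping.CHAN_PER_LINE)
    bits = task.read()  # one boolean per line

# Map the digital levels onto the +/-1 encoding used by the perceptron
features = [1 if bit else -1 for bit in bits]
```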

The neural network model classifies the objects and generates its output in binary form, either 0 or 1. These values act as commands to the sorter hardware: the neural network output is passed to an RS232 interface, which provides serial communication between the software and the external hardware. Fig. 3 shows the graphical representation of this software architecture. The sorter is a relay-based hardware structure which receives commands from the computer over the RS232 protocol; each command carries the desired decision for the incoming object, and on receiving it the sorter routes the object to the corresponding bin.
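In the paper this command is sent through LabVIEW's serial interface; a rough Python equivalent using the pyserial package (the port name, baud rate and one-byte command framing are all assumptions) might be:

```python
import serial

# Open the RS232 link to the sorter (port and baud rate are assumed)
with serial.Serial("COM1", baudrate=9600, timeout=1) as link:
    decision = 1                   # 0 or 1 from the perceptron output
    link.write(bytes([decision]))  # send one command byte per object
```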

VI. GUI Application
We developed GUIs in LabVIEW both for training the perceptron and for running the trained perceptron with the weights obtained from training. Fig. 4 shows the front panel for training the perceptron model.

Fig. 4: Front panel for training the perceptron

We used 16 different patterns corresponding to the outputs of the sensors, trained the network for 100 epochs for better results, and chose a learning rate of 0.5 for optimal performance. Fig. 5 shows the front panel of the trained perceptron. We have tested this model in a real-time environment for the classification of the objects.

Fig. 5: Front panel of the trained perceptron

VII. Conclusion
In this paper, a perceptron network architecture has been successfully developed and trained to produce appropriate results using LabVIEW as the application development environment (ADE). It has been shown that LabVIEW is a good platform for developing and testing computational intelligence algorithms. The neural network built in this work has a good GUI and is very user friendly in terms of presentation and end results. The end-user can change the variable parameters of the network: for example, the learning rate for updating the hidden and output weights, and the preset error value which serves as the stopping criterion. The user cannot, however, change the number of neurons in the hidden or output layer, because such changes would have to be made at various locations in the network.
References
[1] W. McCulloch, W. Pitts, "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, 1943.
[2] M. Minsky, S. Papert, "Perceptrons", Cambridge, MA: MIT Press, 1969.
[3] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain", Psychological Review, Vol. 65, pp. 386-408, 1958.
[4] S. I. Gallant, "Perceptron-based learning algorithms", IEEE Transactions on Neural Networks, Vol. 1, No. 2, pp. 179-191, 1990.
[5] A. Barto, R. Sutton, C. Anderson, "Neuronlike adaptive elements can solve difficult learning control problems", IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 5, pp. 834-846, 1983.
[6] D. White, D. Sofge (Eds.), "Handbook of Intelligent Control", New York: Van Nostrand Reinhold, 1992.
[7] Richard O. Duda, Peter E. Hart, David G. Stork, "Unsupervised Learning and Clustering", Ch. 10 in Pattern Classification (2nd edition), pp. 571, Wiley, New York, ISBN 0471056693, 2001.
[8] Ranjan Acharyya, "A New Approach for Blind Source Separation of Convolutive Sources", 2008.
[9] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning representations by back-propagating errors", Nature, Vol. 323, No. 6088, pp. 533-536, 1986.
[10] Paul J. Werbos, "The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting", New York, NY: John Wiley & Sons, Inc., 1994.
[11] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning representations by back propagation errors", Nature, Vol. 323, pp. 533-536, 1986.

Sachin Sharma was born at Dadri, Uttar Pradesh, India on 5 July 1987. He is pursuing his M.Tech degree in Instrumentation and Control Engineering at NIT Jalandhar. He received his B.Tech in Electronics and Communication Engineering from GLA Institute of Technology, Mathura. He has published several papers in international conferences. His areas of interest include signal processing, neural networks and system design.

Gaurav Kumar was born at Jalalpur village, Aligarh, U.P., India on 20 May 1989. He is pursuing his M.Tech degree in Instrumentation and Control Engineering at NIT Jalandhar. He received his B.Tech in Electronics and Instrumentation Engineering from Hindustan College of Science and Technology, Mathura. His areas of interest include control and automation.
