FACE RECOGNITION SYSTEM USING EIGENFACES
Seminar Report
VIHANG PATTIWAR
ELECTRONICS (BE)
V.I.T. PUNE - 411037
CERTIFICATE
This is to certify that the Seminar Report titled
___________________________________________________________________
ACKNOWLEDGMENT
I acknowledge a deep sense of gratitude towards PROF. S. B. BHISE, seminar guide and assistant professor in the Electronics Department, V.I.T. Pune, for providing valuable and useful information, guidance and co-operation in writing this report. I would like to thank all the other professors and my friends for their help and encouragement with the problems faced during the preparation of this paper. I would also like to thank all those who were indirectly involved in the preparation of this paper: my colleagues, friends and parents.
Vihang Pattiwar
(BE F)
CONTENTS
ABSTRACT
1. INTRODUCTION
1.1 Introduction
1.2 Overview
2.4 Difficulties
3.6 Application
REFERENCES
ABSTRACT
The rapid development of information technology has made the use of computers mandatory in every field.
Digital image processing is a rapidly evolving field with growing applications in science and engineering.
In this project an approach is made to detect and identify a human face, and the algorithm for a software implementation of a face recognition system using eigenfaces is described. In the eigenface method, a training set is prepared first, and then a person is recognized by comparing the weights of the input face image with those of the training set.
The technique operates on BMP and JPEG image formats. The software has been developed in MATLAB.
CHAPTER 1:
1.1 INTRODUCTION:
The face is our primary focus of interaction with society; the face communicates identity, emotion, race and age. It is also quite useful for judging gender, size and perhaps even the character of a person.
A] IMAGE:
The term image refers to a two-dimensional light-intensity function denoted by f(x,y), where the value or amplitude of f at spatial coordinates (x,y) gives the intensity of the image at that point. As light is a form of energy, f(x,y) must be non-zero and finite, that is 0 < f(x,y) < ∞.
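This definition can be illustrated with a small array. The report's software is written in MATLAB; the following Python/NumPy sketch (with a hypothetical 4×4 grayscale image) only illustrates the notion of an image as a finite, non-negative intensity function:

```python
import numpy as np

# A tiny hypothetical 4x4 grayscale image: f(x, y) is the
# intensity (0-255) at spatial coordinates (x, y).
f = np.array([
    [ 12,  40,  80, 120],
    [ 40,  90, 150, 200],
    [ 80, 150, 210, 255],
    [120, 200, 255, 255],
], dtype=np.uint8)

print(f[2, 1])               # intensity of the image at (x=2, y=1) -> 150
print(int(f.min()) >= 0)     # intensities are non-negative ...
print(bool(np.isfinite(f).all()))  # ... and finite
```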
B] IMAGE PROCESSING:
Image processing is the science of manipulating a picture. It covers a broad range of techniques that are present in numerous applications. These techniques can enhance or distort an image, or create a new image from one that has been degraded during or after image acquisition. Certainly the most powerful image processing system we see and use every day is the one composed of the human eye and brain. Image processing has grown enormously and forced its way into a number of fields where cameras and images are involved. Digital television and computer axial tomography (in medicine) are a few of the areas where digital image processing is applied.
Developing a computational model for face recognition is quite difficult, because faces are complex, multidimensional, and meaningful visual stimuli. They are a natural class of objects, and stand in stark contrast to sine-wave gratings, the "blocks world", and other artificial stimuli used in human and computer vision research. Thus, unlike most early visual functions, for which we may construct detailed models of retinal or striate activity, face recognition is a very high level task for which computational approaches can currently only suggest broad constraints on the corresponding neural activities.
The eigenface approach takes advantage of some of this structure by proposing a scheme for recognition in which automatically learning and later recognizing new faces is practical.
In today's networked world, the need to maintain the security of information or physical property is becoming increasingly difficult. From time to time we hear about crimes in which the criminals take advantage of a fundamental flaw in conventional access control systems.
These systems do not grant access by "who we are" but by "what we have", such as ID cards, keys and passwords. None of these means actually defines us; rather, they are merely means to authenticate us. It goes without saying that if someone steals, duplicates or acquires these identity means, he or she will be able to access our data or our personal property at any time and anywhere.
Behavioral IDs have the advantage of being non-intrusive. People are more comfortable signing their names or speaking into a microphone than placing their eyes before a scanner or giving a drop of blood for DNA sequencing. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness.
In the feature-based approach, measurements such as the distances between facial features are called second-order features. This approach is simple to implement and heuristics can be applied easily, but it is not a robust method for face recognition.
The eigenface-based approach is derived from information theory: extract the relevant information in a face image, encode it as efficiently as possible, and compare one encoding with a database of models encoded similarly. A simple approach to extracting the information contained in an image of a face is to capture the variation in a collection of face images, independent of any judgment of features, and use this information to encode and compare individual face images. In eigenface-based approaches, principal component analysis is used to encode the information.
The face recognition system passes through three main phases during the face recognition process. Three major functional units are involved in these phases, and they are depicted in the figure.
Face library formation phase:
In this phase, the acquisition and preprocessing of the face images that are going to be added to the face library are performed. Face images are stored in a face library in the system. The face database is called a "face library" because at the moment it does not have the properties of a relational database. Every action, such as training set or eigenface formation, is performed on the face library. In order to start the face recognition process, this initially empty face library has to be filled with face images. The face recognition system operates on 256×256 BMP formatted image files. In order to perform image size conversions and enhancement on face images, there exists a "pre-processing" module. This module automatically converts every face image to 256×256 and, on user request, it can modify the dynamic range of face images (histogram equalization) in order to improve face recognition performance. A "background removal" algorithm is also implemented in the preprocessing module. Each face is represented by one entry in the face library. The entry corresponds to the face image itself; for the sake of speed, no data compression is performed on the face images stored in the face library.
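The pre-processing steps described above (size conversion to 256×256 plus optional histogram equalization) can be sketched as follows. This is an illustrative Python/NumPy version, not the report's MATLAB code; the nearest-neighbour resizing and the CDF-based 8-bit equalization formula are standard techniques assumed here:

```python
import numpy as np

def resize_nearest(img, size=(256, 256)):
    """Resize a 2-D grayscale image by nearest-neighbour sampling."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[np.ix_(rows, cols)]

def equalize_histogram(img):
    """Spread the dynamic range of an 8-bit image via its cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each occurring gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A synthetic 64x64 "face" image stands in for an acquired photograph.
face = (np.arange(64 * 64).reshape(64, 64) % 200).astype(np.uint8)
face = resize_nearest(face)      # now 256x256
face = equalize_histogram(face)  # full 0-255 dynamic range
print(face.shape)  # (256, 256)
```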
Training phase:
After adding the face images to the face library, the system is ready to perform training set and eigenface formation. The face images that are going to be in the training set are chosen from the entire face library. Because the face library entries are normalized, no further preprocessing is necessary at this stage. After choosing the training set, eigenfaces are formed and stored for later use. Eigenfaces are calculated from the training set, keeping only the M eigenfaces that correspond to the highest eigenvalues. These M eigenfaces define the M-dimensional "face space". As new faces are experienced, the eigenfaces can be updated or recalculated. The corresponding distribution in the M-dimensional weight space is calculated for each face library member by projecting its face image onto the "face space" spanned by the eigenfaces. The weight vector of each face library member, initially empty, has now been updated, and the system is ready for the recognition process. Once a training set has been chosen, it is not possible to add a new member to the library with the conventional method presented in phase one: because the system does not know whether the face already exists in the face library or not, a library search must be performed.
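The training phase just described — mean subtraction, eigenface formation keeping the M largest eigenvalues, then projection of each library member into face space — can be sketched in Python/NumPy. The report's MATLAB implementation is not reproduced here; the image size, training-set size and M below are illustrative, and the small-matrix eigen-decomposition is the standard eigenface shortcut:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training set: 10 flattened "face" images of 32x32 pixels.
faces = rng.random((10, 32 * 32))

# 1. Subtract the mean face from every training image.
mean_face = faces.mean(axis=0)
A = faces - mean_face                      # shape (N, P)

# 2. Eigen-decompose the small N x N matrix A A^T instead of the
#    huge P x P covariance matrix (the standard eigenface trick).
eigvals, eigvecs = np.linalg.eigh(A @ A.T)

# 3. Keep the M eigenvectors with the largest eigenvalues and map
#    them back to pixel space: each column below is one eigenface.
M = 5
order = np.argsort(eigvals)[::-1][:M]
eigenfaces = A.T @ eigvecs[:, order]       # shape (P, M)
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# 4. Project every library member onto the face space to get its
#    M-dimensional weight vector.
weights = A @ eigenfaces                   # shape (N, M)
print(weights.shape)  # (10, 5)
```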
Recognition phase:
After choosing a training set and constructing the weight vectors of the face library members, the system is ready to perform the recognition process. The user initiates the recognition process by choosing a face image. Based on the user request and the acquired image size, pre-processing steps are applied to normalize the image, and its weight vector is constructed. If there exists at least one face library member that is similar to the acquired image within a threshold, the face image is classified as "known"; otherwise, a miss has occurred and the face is classified as "unknown".
These two different approaches are compared based on the following aspects of face recognition.
Face background
Backgrounds of the face images are extremely important in the eigenface approach. Eigenfaces and feature vectors are evaluated by image multiplication and addition. As a result, the entire information contained in the face image is used. If this information changes due to the face background, recognition performance can decrease significantly. In order to avoid this, a "background removal" algorithm is applied. Feature-based approaches are less sensitive to the face background, since they generally locate face contours in order to extract facial features.
2.4 DIFFICULTIES:
Due to the dynamic nature of face images, a face recognition system encounters various problems during the recognition process. It is possible to classify a face recognition system as either "robust" or "weak" based on its recognition performance under these circumstances. The objectives of a robust face recognition system are given below.
SHIFT INVARIANCE
The same face can be presented to the system at different perspectives and orientations. For instance, face images of the same person could be taken from frontal and profile views. Besides, head orientation may change due to translations and rotations.
SCALE INVARIANCE
The same face can be presented to the system at different scales. This may happen due to the focal distance between the face and the camera: as this distance gets smaller, the face image gets larger.
ILLUMINATION INVARIANCE
Face images of the same person can be taken under different illumination conditions, such as different positions and strengths of the light source.
This chapter deals with the complete explanation of face recognition using eigenfaces.
3.1 INTRODUCTION:
The eigenfaces that are created will appear as light and dark areas that are arranged in a specific
pattern. This pattern is how different features of a face are singled out to be evaluated and
scored. There will be a pattern to evaluate symmetry, whether there is any style of facial hair, where the hairline is, or the size of the nose or mouth. Other eigenfaces have patterns that are less simple to identify, and the image of the eigenface may look very little like a face.
The technique used in creating eigenfaces and using them for recognition is also used outside of
facial recognition. This technique is also used for handwriting analysis, lip reading, voice recognition, sign language / hand gesture interpretation and medical imaging analysis.
Therefore, some do not use the term eigenface, but prefer to use 'eigenimage'.
The task of facial recognition is discriminating input signals (image data) into several
classes (persons). The input signals are highly noisy (e.g. the noise is caused by differing lighting
conditions, pose etc.), yet the input images are not completely random and in spite of their
differences there are patterns which occur in any input signal. Such patterns, which can be
observed in all signals could be - in the domain of facial recognition - the presence of some
objects (eyes, nose, mouth) in any face as well as relative distances between these objects. These
characteristic features are called eigenfaces in the facial recognition domain (or principal
components generally). They can be extracted out of original image data by means of a
mathematical tool called Principal Component Analysis (PCA).
By means of PCA one can transform each original image of the training set into a
corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces. Remember that eigenfaces
are nothing less than characteristic features of the faces. Therefore one could say that the original
face image can be reconstructed from eigenfaces if one adds up all the eigenfaces (features) in
the right proportion. Each eigenface represents only certain features of the face, which may or
may not be present in the original image. If the feature is present in the original image to a higher
degree, the share of the corresponding eigenface in the ”sum” of the eigenfaces should be
greater. If, on the contrary, the particular feature is not (or almost not) present in the original image,
then the corresponding eigenface should contribute a smaller (or not at all) part to the sum of
eigenfaces. So, in order to reconstruct the original image from the eigenfaces, one has to build a
kind of weighted sum of all eigenfaces. That is, the reconstructed original image is equal to a
sum of all eigenfaces, with each eigenface having a certain weight. This weight specifies, to what
degree the specific feature (eigenface) is present in the original image.
If one uses all the eigenfaces extracted from original images, one can reconstruct the
original images from the eigenfaces exactly. But one can also use only a part of the eigenfaces.
Then the reconstructed image is an approximation of the original image. However, one can
ensure that losses due to omitting some of the eigenfaces can be minimized. This happens by
choosing only the most important features (eigenfaces). Omission of eigenfaces is necessary due
to scarcity of computational resources.
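The reconstruction argument above can be made concrete: projecting an image onto the top-M eigenfaces and forming the weighted sum gives an approximation whose loss shrinks as M grows. The following Python/NumPy sketch uses synthetic data (not the report's face images) purely to demonstrate this property:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 20 images of 16x16 pixels, flattened.
faces = rng.random((20, 256))
mean_face = faces.mean(axis=0)
A = faces - mean_face

# Full set of eigenfaces from the covariance of the training set,
# with columns sorted from most to least important.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
eigenfaces = eigvecs[:, ::-1]

def reconstruct(image, n_components):
    """Weighted sum of the first n_components eigenfaces."""
    basis = eigenfaces[:, :n_components]
    w = basis.T @ (image - mean_face)      # weights of the image
    return mean_face + basis @ w

target = faces[0]
err_few = np.linalg.norm(target - reconstruct(target, 3))
err_many = np.linalg.norm(target - reconstruct(target, 19))
print(err_many <= err_few)  # more eigenfaces -> smaller loss -> True
```

With all 19 informative components the training image is recovered essentially exactly; with only 3, the result is a coarse approximation.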
How does this relate to facial recognition? The clue is that it is possible not only to
extract the face from eigenfaces given a set of weights, but also to go the opposite way. This
opposite way would be to extract the weights from eigenfaces and the face to be recognized.
These weights tell nothing less than the amount by which the face in question differs from
"typical" faces represented by the eigenfaces. Therefore, using these weights one can determine
two important things:
1. Determine whether the image in question is a face at all. In case the weights of the image
differ too much from the weights of face images (i.e. images of which we know for
sure that they are faces), the image probably is not a face.
2. Similar faces (images) possess similar features (eigenfaces) to similar degrees (weights).
If one extracts weights from all the images available, the images could be grouped to
clusters. That is, all images having similar weights are likely to be similar faces.
The algorithm for facial recognition using eigenfaces is basically described in figure 1. First, the original images of the training set are transformed into a set of eigenfaces E. Afterwards, the weights are calculated for each image of the training set and stored in the set W.
Upon observing an unknown image X, the weights are calculated for that particular image and stored in the vector W_X. Afterwards, W_X is compared with the weights of images of which one knows for certain that they are faces (the weights of the training set W). One way to do this would be to regard each weight vector as a point in space and calculate an average distance D between the weight vectors from W and the weight vector of the unknown image W_X (the Euclidean distance described in appendix A would be a measure for that). If this average distance exceeds some threshold value ε, then the weight vector of the unknown image W_X lies too "far apart" from the weights of the faces. In this case, the unknown X is considered not to be a face. Otherwise (if X is actually a face), its weight vector W_X is stored for later classification. The optimal threshold value ε has to be determined empirically.
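The distance-and-threshold classification just described can be sketched as follows; the weight vectors, face-space dimension and threshold value here are purely hypothetical, and Euclidean distance to the nearest stored weight vector stands in for the comparison with W:

```python
import numpy as np

def classify(w_x, known_weights, eps):
    """Return the index of the nearest face library member, or None
    if the minimum Euclidean distance exceeds the threshold eps."""
    dists = np.linalg.norm(known_weights - w_x, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= eps else None

# Hypothetical weight vectors of three known faces in a 4-D face space.
W = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.5, 0.5, 2.0],
              [2.0, 2.0, 1.0, 0.0]])

w_known = np.array([0.1, 1.4, 0.6, 1.9])    # close to member 1 -> "known"
w_unknown = np.array([9.0, 9.0, 9.0, 9.0])  # far from all -> "unknown"

print(classify(w_known, W, eps=1.0))    # 1
print(classify(w_unknown, W, eps=1.0))  # None
```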
An eigenvector of a matrix is a vector such that, if multiplied by the matrix, the result is always a scalar multiple of that vector. This scalar is the corresponding eigenvalue of the eigenvector. The relationship can be described by the equation M × u = λ × u, where u is an eigenvector of the matrix M and λ is the corresponding eigenvalue.
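The defining relation M × u = λ × u can be checked numerically. The small symmetric matrix below is an illustrative choice (symmetry, which covariance matrices always have, guarantees real eigenpairs):

```python
import numpy as np

# A small symmetric matrix, as covariance matrices always are.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues in ascending order with matching eigenvectors.
eigvals, eigvecs = np.linalg.eigh(M)

# Verify M @ u == lambda * u for every eigenpair.
for lam, u in zip(eigvals, eigvecs.T):
    assert np.allclose(M @ u, lam * u)

print(eigvals)  # [1. 3.]
```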
Eigenvectors possess the following properties:
1. They can be determined only for square matrices.
2. Any non-zero scalar multiple of an eigenvector is itself an eigenvector, with the same eigenvalue.
3. The eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
In this section, the original scheme for the determination of the eigenfaces using PCA will be presented. The algorithm described in the scope of this paper is a variation of the one outlined here.
MERITS:
DEMERITS:
1. If the lighting and the position of the face with respect to the camera vary
greatly, then accuracy is affected.
2. Only grayscale images can be detected.
3. A noisy image or a partially occluded face causes recognition performance to
degrade.
3.8 APPLICATION:
Given a database of standard face images (say, criminal mug shots), determine whether or
not a new shot of a person is in the database.
Authorize users to allow login access.
Prepare a surveillance camera system residing at some public place which automatically
matches the input faces with criminal database and gives alert if the results are matched.
Match the person with his passport image, licence image etc.
Can be embedded and used for security purposes in devices like mobiles, laptops and palmtops.
Figure: Eigenfaces
Figure: Eigenface reconstruction
References:
T. M. Mitchell. Machine Learning. McGraw-Hill International Editions, 1997.
D. Pissarenko. Neural networks for financial time series prediction: Overview over recent
research. BSc thesis, 2002.
L. I. Smith. A tutorial on principal components analysis, February 2002.
URL http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf.
(URL accessed on November 27, 2002).
M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3 (1),
1991a. URL http://www.cs.ucsb.edu/ mturk/Papers/jcn.pdf. (URL accessed on November
27, 2002).
M. A. Turk and A. P. Pentland. Face recognition using eigenfaces. In Proc. of Computer Vision
and Pattern Recognition, pages 586-591. IEEE, June 1991b.
URL http://www.cs.wisc.edu/ dyer/cs540/handouts/mturk-CVPR91.pdf. (URL accessed on
November 27, 2002).