
PROJECT

ON
“ATTENDANCE SYSTEM USING FACIAL RECOGNITION”

SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENT


FOR THE AWARD OF THE DEGREE
OF
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
OF

Biju Patnaik University Of Technology, Odisha

TABLE OF CONTENTS
1) ABSTRACT
2) INTRODUCTION
3) REVIEW OF LITERATURE
4) PURPOSE
a) SCOPE
b) PROPOSED METHODOLOGY
5) SYSTEM ANALYSIS
a) ANALYSIS OF RELATED WORK
b) THEORETICAL IDEA OF PROPOSED WORK
6) SOFTWARE REQUIREMENT SPECIFICATION
a) HARDWARE REQUIREMENTS
b) SOFTWARE REQUIREMENTS
c) DEVICE PERMISSION REQUIREMENT
7) SYSTEM DESIGN
a) CODING
b) DATABASE DIAGRAMS
c) ER DIAGRAMS
d) SCREENSHOTS
8) INPUT AND OUTPUT
9) ARCHITECTURE AND ALGORITHM
10) DIGITAL IMAGE PROCESSING
11) FACE RECOGNITION
12) FACE DETECTION
13) APPLICATION AND IMPLEMENTATION
14) FUTURE SCOPE
15) CONCLUSION
16) REFERENCES

1.ABSTRACT
How to accurately and effectively identify people has always been an
interesting topic, both in research and in industry. With the rapid
development of artificial intelligence in recent years, facial
recognition has gained more and more attention. Compared with
traditional card recognition, fingerprint recognition and iris
recognition, face recognition has many advantages, including but not
limited to non-contact operation, high concurrency, and user
friendliness. It has high potential to be used in government, public
facilities, security, e-commerce, retailing, education and many other
fields.
Deep learning is one of the new and important branches of machine
learning. Deep learning refers to a set of algorithms that solve various
problems, such as images and texts, by using multi-layer neural
networks. Deep learning can be classified as a neural network in the
general category, but there are many changes in the concrete
realization. At the core of deep learning is feature learning, which is
designed to obtain hierarchical information through hierarchical
networks, solving the important problems that previously required
hand-designed features. Deep learning is a framework that contains
several important algorithms. For different applications (images,
voice, text), you need to use different network models to achieve
better results. With the development of deep learning and the
introduction of deep convolutional neural networks, the accuracy and
speed of face recognition have made great strides. However, as noted
above, the results from different networks and models are very
different. In this project, facial features are extracted by merging and
comparing multiple models, and then a deep neural network is
constructed to train and combine the features. In this way, the
advantages of multiple models can be combined to improve the
recognition accuracy. After obtaining a model with high accuracy, we
build a product model.

2.INTRODUCTION:
Ever since IBM introduced the first personal computer in 1981, through
the .com era in the early 2000s, the online shopping trend of the last
10 years, and the Internet of Things today, computers and information
technologies have been rapidly integrating into everyday human life. As
the digital world and the real world merge more and more, how to
accurately and effectively identify users and improve information
security has become an important research topic. Not only in the civil
area; in particular, since the 9/11 terrorist attacks, governments all
over the world have made urgent demands on this issue, prompting the
development of emerging identification methods. Traditional identity
recognition technologies mainly rely on the individual's own memory
(password, username, etc.) or foreign objects (ID card, key, etc.).
However, whether by virtue of foreign objects or one's own memory,
there are serious security risks. Not only is it difficult to regain the
original identity material, but the identity information is easily
acquired by others if the items that prove one's identity are stolen or
forgotten. As a result, if the identity is impersonated by others, there
can be serious consequences.
Different from traditional identity recognition technology, biometrics
is the use of the inherent characteristics of the body for
identification. In today's networked world, the need to maintain the
security of information or physical property is becoming both
increasingly important and increasingly difficult. From time to time we
hear about crimes of credit card fraud, computer break-ins by hackers,
or security breaches in a company or government building. In most of
these crimes, the criminals were taking advantage of a fundamental flaw
in conventional access control systems: the systems do not grant access
by "who we are" but by "what we have", such as ID cards, keys,
passwords, PIN numbers, or a mother's maiden name. None of these really
defines us. Recently, technology has become available to allow
verification of "true" individual identity. This technology is based on
a field called "biometrics". Biometric access control is an automated
method of verifying or recognizing the identity of a living person on
the basis of some physiological characteristic, such as fingerprints or
facial features, or some aspect of the person's behavior, like his/her
handwriting style or keystroke patterns. Since biometric systems
identify a person by biological characteristics, they are difficult to
forge. Face recognition is one of the few biometric methods that
possess the merits of both high accuracy and low intrusiveness. It has
the accuracy of a physiological approach without being intrusive. For
this reason, since the early 70's (Kelly, 1970), face recognition has
drawn the attention of researchers in fields from security, psychology,
and image processing to computer vision.

Compared with traditional identity recognition technology, biological
features have many advantages:
1. Non-reproducibility: biological characteristics are inborn and
cannot be changed, so it is practically impossible to copy other
people's biological characteristics.
2. Availability: biological features, as part of the human body, are
readily available and can never be forgotten.
3. Ease of use: many biological characteristics do not require
individuals to cooperate with the examination device.

3.REVIEW OF LITERATURE:

Face recognition is one of the few biometric methods that possess the
merits of both high accuracy and low intrusiveness. It has the accuracy
of a physiological approach without being intrusive. Over the past 30
years, many researchers have proposed different face recognition
techniques, motivated by the increased number of real-world
applications requiring the recognition of human faces. There are
several problems that make automatic face recognition a very difficult
task. Moreover, the face images of a person entered into the database
are usually acquired under different conditions.

An important challenge of automatic face recognition is to cope with
the numerous variations among images of the same face due to changes in
the following parameters:
1. Pose
2. Illumination
3. Expression
4. Motion
5. Facial hair
6. Glasses
7. Background
Face recognition technology is well advanced and can be applied in many
commercial applications such as personal identification, security
systems, image and film processing, psychology, human-computer
interaction, entertainment systems, smart cards, law enforcement,
surveillance and so on. Face recognition can be done on both still
images and video sequences, the latter having its origin in still-image
face recognition. Approaches to face recognition for still images can
be categorized into three main groups:
1. Holistic approach
2. Feature-based approach
3. Hybrid approach

1. Holistic approach :- In the holistic (global feature) approach, the
whole face region is taken as input to the face recognition system.
Examples of holistic methods are eigenfaces (the most widely used
method for face recognition), probabilistic eigenfaces, Fisherfaces,
support vector machines, nearest feature lines (NFL) and independent
component analysis approaches. Many of these are based on principal
component analysis (PCA) techniques, which can be used to reduce a
dataset to a lower dimension while retaining the characteristics of the
dataset.

2. Feature-based approach:- In feature-based (local feature)
approaches, features on the face, such as the nose and eyes, are
segmented and then used as input data for a structural classifier. Pure
geometry, dynamic link architecture (DLA), and hidden Markov model
methods belong to this category. One of the most successful of these
systems is the Elastic Bunch Graph Matching (EBGM) system, which is
based on DLA.
Wavelets, especially Gabor wavelets, play a building-block role for
facial representation in these graph matching methods. A typical local
feature representation consists of wavelet coefficients for different
scales and rotations based on fixed wavelet bases. These locally
estimated wavelet coefficients are robust to illumination change,
translation, distortion, rotation, and scaling. The grid is
appropriately positioned over the image and is stored with each grid
point's locally determined jet, as in figure 2(a), and serves to
represent the pattern classes.
Recognition of a new image takes place by transforming the image into
the grid of jets and matching all stored model graphs to the image.
Conformation of the DLA is done by establishing and dynamically
modifying links between vertices in the model domain.

3. Hybrid approach :- The idea of this method comes from how the human
vision system perceives both holistic and local features. The key
factors that influence the performance of the hybrid approach include
which features should be combined and how to combine them, so as to
preserve their advantages and avert their disadvantages at the same
time.
These problems have a close relationship with multiple classifier
systems (MCS) and ensemble learning in the field of machine learning.
Unfortunately, even in these fields, these problems remain unsolved. In
spite of this, the numerous efforts made in these fields do provide
some insights into solving these problems, and those lessons can be
used as guidelines in designing a hybrid face recognition system. A
hybrid approach that uses both holistic and local information for
recognition may be an effective way to reduce the complexity of
classifiers and improve their generalization capability.

4.PURPOSE:
Facial recognition is a biometric software application capable of
uniquely identifying or verifying a person by comparing and analyzing
patterns based on the person's facial contours. Facial recognition is
mostly used for security purposes, though there is increasing interest
in other areas of use.
(a) SCOPE :
A face recognition system is a computer application for automatically
identifying or verifying a person from a digital image or a video frame
from a video source. It is typically used in security systems and can
be compared to other biometrics such as fingerprint or iris recognition
systems.
• The information age is quickly revolutionizing the way transactions
are completed. Everyday actions are increasingly being handled
electronically, instead of with pencil and paper or face to face. This
growth in electronic transactions has resulted in a greater demand for
fast and accurate user identification and authentication.
• Access codes for buildings, bank accounts and computer systems often
use PINs for identification and security clearance. Using the proper
PIN gains access, but the user of the PIN is not verified. When credit
and ATM cards are lost or stolen, an unauthorized user can often come
up with the correct personal codes.
• Despite warnings, many people continue to choose easily guessed PINs
and passwords: birthdays, phone numbers and social security numbers.
Recent cases of identity theft have highlighted the need for methods to
prove that someone is truly who he/she claims to be.
• Face recognition technology may solve this problem, since a face is
undeniably connected to its owner, except in the case of identical
twins. It is non-transferable. The system can then compare scans to
records stored in a central or local database or even on a smart card.

(b) METHODOLOGY:
1.1 Structural Design
The architecture of our automated attendance system is shown in Fig. 1.
As far as hardware and software are concerned, we need a high
definition camera installed such that it covers the whole class. Then
we must have a PC installed with the very helpful and multi-functional
language MATLAB and MS Excel. To acquire the image or video, connect
the camera to the PC and make sure that the camera driver is properly
installed and compatible with MATLAB.
The working of our smart attendance system is simple and easy to
comprehend. To begin with, there should be a face database which
contains the images of the students, focused only on the face region.
As soon as we start the camera, it records video for a few seconds and
then stops. The system reads the frames of the video and sends each
frame for detection, where the face of each student sitting in the
class is detected and cropped. After detection, the cropped faces are
used for recognition: they are compared with the face database using
the proposed algorithm, and after acceptable recognition the system
marks the attendance in an Excel sheet.
Methodology
In this project, the system follows particular methodologies which are
processed in the following steps:
• Creating face database
• Video recording
• Face detection
• Face recognition
• Registering attendance
1. Creating face database
The database is the training set of our system and is created in such a
way that it contains images of enrolled students. These images are
cropped to get the region of interest, which is the face of the
student. To test the working of the system, it is trained with a
training set consisting of 5 images per student; here the number of
students is 10, so overall the system is trained with 50 images.

1.2 Video recording
As discussed, we must have a very good quality camera to get efficient
detection and recognition. It should be connected to the PC, and its
drivers have to be properly installed and compatible with MATLAB. As we
start the camera, the video is recorded for a few seconds and then
processed further for face detection, as sketched below.
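As an illustrative sketch only (the implementation in section 7 uses
Python with OpenCV rather than MATLAB; the function and folder names
here are hypothetical), capturing frames and saving cropped faces for
the training database might look like this:

import cv2, os

def build_face_database(student_id, out_dir="face_db", n_samples=50):
    # Hypothetical helper: grab webcam frames, detect faces with a Haar
    # cascade, and save the cropped face regions for the training set.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)
    os.makedirs(out_dir, exist_ok=True)
    saved = 0
    while saved < n_samples:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            cv2.imwrite(os.path.join(out_dir, "%s.%d.jpg" % (student_id, saved)),
                        gray[y:y+h, x:x+w])   # store the face region only
            saved += 1
    cam.release()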

5.SYSTEM ANALYSIS:

5.1 ANALYSIS OF RELATED WORK:


(a) FACE DETECTION OR TRACKING:
Face detection employs algorithms that decide whether an image is a
positive image (face image) or a negative image (non-face image); the
component that makes this decision is called a classifier. When a human
face is detected, the technology returns the coordinate locations of
those faces within the image or video as bounding boxes.
Face tracking is the means of identifying or verifying a human face
from a digital image or video frame. When used in online mode, a face
is tracked during a live feed while the video is being captured.
(b) FACE POSITIONING AND ALIGNMENT:
Face alignment is the process of applying a supervised learned model to
a face image and estimating the locations of a set of facial landmarks,
such as eye corners, mouth corners, etc. Face alignment is a key module
in the pipeline of most facial analysis algorithms, normally applied
after face detection.
Face alignment is the task of identifying the geometric structure of
faces in digital images and attempting to obtain a canonical alignment
of the face based on translation, scale, and rotation.
The purpose of facial feature point positioning is to further determine
the positions of facial feature points (eyes, mouth center points, eye
and mouth contour points, organ contour points, etc.) on the basis of
the face area found by face detection/tracking.
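A minimal sketch of one alignment step, assuming the two eye locations
are already known (the function name align_face is hypothetical):

import cv2
import numpy as np

def align_face(image, left_eye, right_eye):
    # Rotate the image so that the line joining the eyes becomes
    # horizontal, giving a canonical alignment with respect to rotation.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h))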

(c) FACE FEATURE EXTRACTION:
Facial feature extraction is the process of extracting face component
features such as the eyes, nose and mouth from a human face image.
Facial feature extraction is very important for the initialization of
processing techniques like face tracking, facial expression recognition
or face recognition.
5.2 THEORETICAL IDEA OF PROPOSED WORK :

Face recognition is essentially pattern recognition, and the purpose is
to abstract real things into numbers that computers can understand. If
a picture is a 256-level grayscale image, then each pixel of the image
is a value between 0 and 255, so we can convert an image into a matrix.
How do we identify the patterns in this matrix? One way is to use a
relatively small matrix to sweep from left to right and top to bottom
across this large matrix, as sketched below.
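A hedged numpy sketch of that sweep, scoring how well a small template
matrix matches each position of the larger image matrix (the names are
illustrative, not from the original system):

import numpy as np

def sliding_match(image, template):
    # Sweep the small matrix over the large one, recording at each
    # position how closely the covered patch resembles the template.
    H, W = image.shape
    h, w = template.shape
    scores = np.zeros((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = image[i:i+h, j:j+w]
            scores[i, j] = -np.abs(patch - template).sum()  # higher = closer
    return scores  # the argmax of scores is the best-matching position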

6.SOFTWARE REQUIREMENT SPECIFICATION:

6.1 HARDWARE REQUIREMENTS:


a. GPU : Intel HD Graphics
b. CPU : Intel Celeron
c. Camera : Minimum 2MP Camera
d. USB Port : 1X USB 2.0 or better port

6.2 SOFTWARE REQUIREMENTS:

a. Anaconda : It is an open-source distribution of the Python and R
programming languages for scientific computing that aims to simplify
package management and deployment. Package versions are managed by
the package management system conda.
b. Python : It is an interpreted, high-level, general purpose
programming language.
c. Windows 7/8.1/10 : It is a group of several graphical operating
system families, all of which are developed, marketed and sold by
Microsoft.
d. Sublime Text : It is a proprietary cross-platform source code editor
with a Python application programming interface. It natively supports
many programming languages and markup languages.
6.3 DEVICE PERMISSION REQUIREMENT:
a. Camera
b. USB
c. Storage
d. Voice

7.SYSTEM DESIGN:

7.1 CODING:

(A) FACE DETECTION:

clear all

clc

%Detect objects using Viola-Jones Algorithm

%To detect Face

FDetect = vision.CascadeObjectDetector;

%Read the input image

I = imread('HarryPotter.jpg');

%Returns Bounding Box values based on number of objects
BB = step(FDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-',
'EdgeColor','r');

end

title('Face Detection');

hold off;

step(FDetect,I) returns the bounding box values, one row
[x, y, width, height] per detected object of interest:

BB =

52 38 73 73
379 84 71 71
198 57 72 72

(B) NOSE DETECTION:

%To detect Nose

NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);

BB=step(NoseDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-',
'EdgeColor','b');

end

title('Nose Detection');

hold off;

EXPLANATION:

To denote the object of interest as 'nose', the argument 'Nose' is
passed.

vision.CascadeObjectDetector('Nose','MergeThreshold',16);

The default syntax for nose detection:

vision.CascadeObjectDetector('Nose');

Based on the input image, we can modify the default values of the
parameters passed to vision.CascadeObjectDetector. Here the default
value of 'MergeThreshold' is 4.

When the default value of 'MergeThreshold' is used, the result is not
correct: there is more than one detection on Hermione. To avoid
multiple detections around an object, the 'MergeThreshold' value can be
overridden.

(C) MOUTH DETECTION:

%To detect Mouth

MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',16);

BB=step(MouthDetect,I);

figure,

imshow(I); hold on

for i = 1:size(BB,1)

rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-',
'EdgeColor','r');

end

title('Mouth Detection');

hold off;

(D) EYE DETECTION:

%To detect Eyes

EyeDetect = vision.CascadeObjectDetector('EyePairBig');

%Read the input Image

I = imread('harry_potter.jpg');

BB=step(EyeDetect,I);

figure,imshow(I);

rectangle('Position',BB,'LineWidth',4,'LineStyle','-','EdgeColor','b');

title('Eyes Detection');

Eyes=imcrop(I,BB);

figure,imshow(Eyes);

FACE RECOGNITION

function closeness = recognition(input_image, U, R)
% By L.S. Balasuriya
% Returns the order of closeness of the known face vectors in R to the
% unknown image input_image. U contains the eigenfaces.

%********image to vector ***********************************************
vinput = reshape(input_image, [10000 1]);

%********recognition **************************************************
facespace = vinput' * U;

%********Euclidean distance *******************************************
[p, ignor] = size(R);
distance_vecs = R - repmat(facespace, [p 1]);
distance = sum(abs(distance_vecs)')';

%********order of closeness to unknown face ***************************
[ignor, closeness] = sort(distance);

7.2 DATABASE TABLES :

(a) LOGIN TABLE

S.NO  Field name  Data Type    Description
1.    User Name   Varchar(20)  Stores the user name for checking the correct user name (not null)
2.    Password    Varchar(20)  Stores the password corresponding to the username (not null)

(b) ATTENDANCE TABLE

S.NO  Field name  Data Type    Description
1.    Adate       Datetime     Stores the date and time of the period
2.    Week No.    Int          Stores the week number
3.    Period No.  Int          Stores the period number
4.    Fcode       Varchar(10)  Stores the faculty code
5.    Admno       Int          Admission form number
6.    Status      Char(1)      Status of the student: absent, present or on leave
7.    Remark      Varchar(50)  Remark on whether the student is on leave or not

(c) BATCHES TABLE

S.NO  Field name  Data Type    Description
1.    Bcode       Int          Stores the batch code
2.    Course      Varchar(50)  Stores the student's stream
3.    Sem No.     Int          Stores the semester of the student

(d) FACULTY TABLE

S.NO  Field name  Data Type    Description
1.    Fcode       Varchar(10)  Stores the faculty code
2.    Pwd         Varchar(10)  Stores the password of the faculty
3.    Fname       Varchar(50)  Stores the faculty name
4.    Dept        Varchar(50)  Stores the department of the faculty

(e) SCHEDULE TABLE

S.NO  Field name  Data Type    Description
1.    Week No.    Int          Stores the week number of the attendance of the student
2.    Bcode       Int          Stores the batch code
3.    Period      Int          Stores the period number
4.    Fcode       Varchar(10)  Stores the faculty code
5.    Scode       Varchar(10)  Stores the subject code

(f) STUDENT TABLE

S.NO  Field name  Data Type    Description
1.    Admno       Int          Stores the student admission number
2.    Bcode       Int          Stores the student's batch code
3.    Sname       Varchar(50)  Stores the student name

(g) SUBJECT TABLE

S.NO  Field name  Data Type    Description
1.    Scode       Varchar(10)  Stores the subject code
2.    Sname       Varchar(50)  Stores the subject name
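As an illustrative aside (not part of the original report), the tables
above could be created from Python with SQLite roughly as follows; the
SQLite types are approximations of the Varchar/Int/Char/Datetime types
listed:

import sqlite3

# Hedged sketch: one possible SQLite rendering of the 7.2 tables.
conn = sqlite3.connect("attendance.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS login      (username TEXT NOT NULL, password TEXT NOT NULL);
CREATE TABLE IF NOT EXISTS attendance (adate TEXT, week_no INTEGER, period_no INTEGER,
                                       fcode TEXT, admno INTEGER, status TEXT, remark TEXT);
CREATE TABLE IF NOT EXISTS batches    (bcode INTEGER, course TEXT, sem_no INTEGER);
CREATE TABLE IF NOT EXISTS faculty    (fcode TEXT, pwd TEXT, fname TEXT, dept TEXT);
CREATE TABLE IF NOT EXISTS schedule   (week_no INTEGER, bcode INTEGER, period INTEGER,
                                       fcode TEXT, scode TEXT);
CREATE TABLE IF NOT EXISTS student    (admno INTEGER, bcode INTEGER, sname TEXT);
CREATE TABLE IF NOT EXISTS subject    (scode TEXT, sname TEXT);
""")
conn.commit()
conn.close()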

7.3 ER DIAGRAM:

[Figure: ER diagram relating the Teacher (Teacher id, Name), Subject
(Scode, Sname), Student (Student id, Name, Semester, Course) and
Attendance (Month, Status) entities through the Teaches, Belongs and
Studied relationships.]

7.4 SCREENSHOTS:

[Screenshots 1-4: windows of the attendance application GUI.]
Coding related to screenshots :

from cx_Freeze import setup, Executable


import sys,os
PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__))
os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6')
os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6')

base = None

if sys.platform == 'win32':
base = None

executables = [Executable("train.py", base=base)]

packages = ["idna","os","sys","cx_Freeze","tkinter","cv2","setup",
"numpy","PIL","pandas","datetime","time"]
options = {
    'build_exe': {
        'packages': packages,
    },
}

setup(
    name = "ToolBox",
    options = options,
    version = "0.0.1",
    description = 'Vision ToolBox',
    executables = executables
)

# Build with: python setup.py build

# -*- coding: utf-8 -*-


"""
Created on Sun Jun 17 2020

@author: Ujjwal KumarSharma


"""

import tkinter as tk
from tkinter import Message ,Text
import cv2,os
import shutil
import csv
import numpy as np

from PIL import Image, ImageTk
import pandas as pd
import datetime
import time
import tkinter.ttk as ttk
import tkinter.font as font

window = tk.Tk()
#helv36 = tk.Font(family='Helvetica', size=36, weight='bold')
window.title("Face_Recogniser")

dialog_title = 'QUIT'
dialog_text = 'Are you sure?'
#answer = messagebox.askquestion(dialog_title, dialog_text)

#window.geometry('1280x720')
window.configure(background='blue')

#window.attributes('-fullscreen', True)

window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)

#path = "profile.jpg"

#Creates a Tkinter-compatible photo image, which can be used everywhere
#Tkinter expects an image object.
#img = ImageTk.PhotoImage(Image.open(path))

#The Label widget is a standard Tkinter widget used to display a text
#or image on the screen.
#panel = tk.Label(window, image = img)

#panel.pack(side = "left", fill = "y", expand = "no")

#cv_img = cv2.imread("img541.jpg")
#x, y, no_channels = cv_img.shape
#canvas = tk.Canvas(window, width = x, height =y)
#canvas.pack(side="left")
#photo = PIL.ImageTk.PhotoImage(image = PIL.Image.fromarray(cv_img))
# Add a PhotoImage to the Canvas
#canvas.create_image(0, 0, image=photo, anchor=tk.NW)

#msg = Message(window, text='Hello, world!')

# Font is a tuple of (font_family, size_in_points, style_modifier_string)

message = tk.Label(window, text="Face-Recognition-Based-Attendance-Management-System", bg="Green", fg="white", width=50, height=3, font=('times', 30, 'italic bold underline'))
message.place(x=200, y=20)

lbl = tk.Label(window, text="Enter ID", width=20, height=2, fg="red", bg="yellow", font=('times', 15, ' bold '))
lbl.place(x=400, y=200)

txt = tk.Entry(window, width=20, bg="yellow", fg="red", font=('times', 15, ' bold '))
txt.place(x=700, y=215)

lbl2 = tk.Label(window, text="Enter Name", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold '))
lbl2.place(x=400, y=300)

txt2 = tk.Entry(window, width=20, bg="yellow", fg="red", font=('times', 15, ' bold '))
txt2.place(x=700, y=315)

lbl3 = tk.Label(window, text="Notification : ", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold underline '))
lbl3.place(x=400, y=400)

message = tk.Label(window, text="", bg="yellow", fg="red", width=30, height=2, activebackground="yellow", font=('times', 15, ' bold '))
message.place(x=700, y=400)

lbl3 = tk.Label(window, text="Attendance : ", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold underline'))
lbl3.place(x=400, y=650)

message2 = tk.Label(window, text="", fg="red", bg="yellow", activeforeground="green", width=30, height=2, font=('times', 15, ' bold '))
message2.place(x=700, y=650)

def clear():
    txt.delete(0, 'end')
    res = ""
    message.configure(text=res)

def clear2():
    txt2.delete(0, 'end')
    res = ""
    message.configure(text=res)

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        pass

    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass

    return False

def TakeImages():
    Id = (txt.get())
    name = (txt2.get())
    if (is_number(Id) and name.isalpha()):
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while (True):
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder TrainingImage
                cv2.imwrite("TrainingImage\ " + name + "." + Id + '.' + str(sampleNum) + ".jpg", gray[y:y + h, x:x + w])
                # display the frame
                cv2.imshow('frame', img)
            # wait for 100 milliseconds
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 60
            elif sampleNum > 60:
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Saved for ID : " + Id + " Name : " + name
        row = [Id, name]
        with open('StudentDetails\StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        csvFile.close()
        message.configure(text=res)
    else:
        if (is_number(Id)):
            res = "Enter Alphabetical Name"
            message.configure(text=res)
        if (name.isalpha()):
            res = "Enter Numeric Id"
            message.configure(text=res)

def TrainImages():
    recognizer = cv2.face_LBPHFaceRecognizer.create()  # or cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, Id = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save("TrainingImageLabel\Trainner.yml")
    res = "Image Trained"  # + ",".join(str(f) for f in Id)
    message.configure(text=res)

def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # print(imagePaths)

    # create empty face list
    faces = []
    # create empty ID list
    Ids = []
    # now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        # loading the image and converting it to gray scale
        pilImage = Image.open(imagePath).convert('L')
        # Now we are converting the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # getting the Id from the image file name
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        # collect the face from the training image sample
        faces.append(imageNp)
        Ids.append(Id)
    return faces, Ids

def TrackImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()  # cv2.createLBPHFaceRecognizer()
    recognizer.read("TrainingImageLabel\Trainner.yml")
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv("StudentDetails\StudentDetails.csv")
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if (conf < 50):
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['Id'] == Id]['Name'].values
                tt = str(Id) + "-" + aa
                attendance.loc[len(attendance)] = [Id, aa, date, timeStamp]
            else:
                Id = 'Unknown'
                tt = str(Id)
            if (conf > 75):
                noOfFile = len(os.listdir("ImagesUnknown")) + 1
                cv2.imwrite("ImagesUnknown\Image" + str(noOfFile) + ".jpg", im[y:y + h, x:x + w])
            cv2.putText(im, str(tt), (x, y + h), font, 1, (255, 255, 255), 2)
        attendance = attendance.drop_duplicates(subset=['Id'], keep='first')
        cv2.imshow('im', im)
        if (cv2.waitKey(1) == ord('q')):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
    Hour, Minute, Second = timeStamp.split(":")
    fileName = "Attendance\Attendance_" + date + "_" + Hour + "-" + Minute + "-" + Second + ".csv"
    attendance.to_csv(fileName, index=False)
    cam.release()
    cv2.destroyAllWindows()
    # print(attendance)
    res = attendance
    message2.configure(text=res)

clearButton = tk.Button(window, text="Clear", command=clear, fg="red", bg="yellow", width=20, height=2, activebackground="Red", font=('times', 15, ' bold '))
clearButton.place(x=950, y=200)
clearButton2 = tk.Button(window, text="Clear", command=clear2, fg="red", bg="yellow", width=20, height=2, activebackground="Red", font=('times', 15, ' bold '))
clearButton2.place(x=950, y=300)
takeImg = tk.Button(window, text="Take Images", command=TakeImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
takeImg.place(x=200, y=500)
trainImg = tk.Button(window, text="Train Images", command=TrainImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
trainImg.place(x=500, y=500)
trackImg = tk.Button(window, text="Track Images", command=TrackImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
trackImg.place(x=800, y=500)
quitWindow = tk.Button(window, text="Quit", command=window.destroy, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
quitWindow.place(x=1100, y=500)
copyWrite = tk.Text(window, background=window.cget("background"), borderwidth=0, font=('times', 30, 'italic bold underline'))
copyWrite.tag_configure("superscript", offset=10)
copyWrite.insert("insert", "Developed by Ashish", "", "TEAM", "superscript")
copyWrite.configure(state="disabled", fg="red")
copyWrite.pack(side="left")
copyWrite.place(x=800, y=750)

window.mainloop()

8.INPUT AND OUTPUT :

The first input taken is the face pattern of a new student. The face
pattern is analyzed with the help of the camera, and an Arduino helps
to control the entire process. The patterns, once collected, can be
stored in the database along with the student's details. During the
entry of a student, the system checks each face pattern and searches
for a similar match in the connected database, as shown in figure 6. If
a pattern matches with at least 85% similarity, then the value 1 is
returned to the database, which marks the attendance status for the
corresponding student as "present". Otherwise 0 is returned in case of
a mismatch and attendance is marked "absent". SQL UPDATE queries can be
used to achieve this, as sketched below. The same values are repeated
periodically, and in case any student wants to go out, the face pattern
is identified by the camera; if the student returns within the
15-minute holding time, the same value as the previous hour is
repeated. If the student comes back late with the knowledge of the
concerned faculty, then the value 1 can be entered later as an
exception by the faculty. If neither of the two happens and the student
comes back late, then the value 0 will be returned for every hour until
the student's face is detected and updated again. Hence periodic
attendance can be achieved. Face recognition is possible even in
different environments, as shown in the figure.

Fig. Image pattern analysis
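As a hedged illustration only (the table and column names follow
section 7.2, and 'P'/'A' stand in for the present/absent status flags),
the update step might look like this in Python with sqlite3:

import sqlite3

def mark_attendance(db, admno, adate, period_no, matched):
    # matched is True when the face pattern matched with at least 85%
    # similarity, as described above; otherwise the student is absent.
    status = 'P' if matched else 'A'
    with sqlite3.connect(db) as conn:
        conn.execute(
            "UPDATE attendance SET status = ? "
            "WHERE admno = ? AND adate = ? AND period_no = ?",
            (status, admno, adate, period_no))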

The final output achieved is the internal marks for attendance for each
of the students enrolled in the system. Once the periodic attendance
has been stored for each student over a term, those values can be
collected to calculate the internal marks out of 5 for each student.
SQL SELECT queries can be used to retrieve the marks, as sketched
below. Sample output of the attendance marks retrieved from the
database is shown in the figure.

Sample output of an attendance
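A sketch of the retrieval step under the same assumptions (the 7.2
attendance table, with 'P' marking a present period), scaling the
fraction of periods attended to marks out of 5:

import sqlite3

def internal_marks(db, admno):
    # Fraction of periods marked present, scaled to marks out of 5.
    with sqlite3.connect(db) as conn:
        row = conn.execute(
            "SELECT AVG(CASE WHEN status = 'P' THEN 1.0 ELSE 0.0 END) "
            "FROM attendance WHERE admno = ?", (admno,)).fetchone()
    return round(5 * (row[0] or 0.0), 2)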

9.ARCHITECTURE AND ALGORITHM
ARCHITECTURE
The architecture of this face recognition based attendance system is
shown in the diagram mentioned above. The working of this smart
attendance system is very simple and easy to understand. To bring this
system into operation, we will need some hardware devices for our
project. Firstly, we will need a high definition camera which has to be
fixed in the classroom at a suitable location from where the whole
class can be covered.
When the camera takes the picture of all students, that picture is
enhanced for further processing. In the enhancement, the picture is
first transformed into a grayscale image, and then it is equalized
using the histogram technique (a Python sketch is given at the end of
this subsection).
After enhancement, the picture is passed to the face detection
algorithm, which detects the faces of the students.
Then, after detection of the faces, each student's face is cropped from
that image, and all those cropped faces are compared with the database
of faces. In that database, all students' information is maintained
along with their images. By comparing the faces one by one, the
attendance of the students is marked on the server.
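A minimal sketch of the enhancement and detection steps just described,
using OpenCV in Python (the file name is a placeholder):

import cv2

img = cv2.imread("classroom.jpg")              # picture of the whole class
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # transform to grayscale
enhanced = cv2.equalizeHist(gray)              # histogram equalization
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(enhanced, 1.3, 5)
crops = [enhanced[y:y+h, x:x+w] for (x, y, w, h) in faces]  # one face each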

ALGORITHM
The algorithm shows the step by step working of a system. For this
system, we will need to use the following algorithm.
ALGORITHM: FACE RECOGNITION FOR ATTENDANCE SYSTEM.
INPUT: Classroom image captured by the camera.
OUTPUT: Attendance marking.
PROBLEM DESCRIPTION: Identification of student
Step 1: Start
Step 2: Enroll the students' information in the face database
Step 3: Install a camera device in classroom.
Step 4: Input the image taken by camera.
Step 5: Enhancement of image.
I. Convert to grayscale image.
II. Generate histogram of grayscale image.
III. Equalize the image.
IV. Generate histogram of equalized image.
V. Remove noise from image
VI. Skin classification of image.
Step 6: Face Detection.
I. Crop the faces of students from the image.
II. Select the region of interest.
Step 7: Face Recognition.
I. Compare the cropped images with the face database images.
II. Mark the attendance on the attendance server.
a. If any other face remains, then go to II) of Step 6.
Step 8: End.

10.DIGITAL IMAGE PROCESSING :
10.1 DIGITAL IMAGE PROCESSING
Interest in digital image processing methods stems from two
principal application areas:

1. Improvement of pictorial information for human interpretation


2. Processing of scene data for autonomous machine perception

In this second application area, interest focuses on procedures for


extracting image information in a form suitable for computer
processing.

Examples include automatic character recognition, industrial machine
vision for product assembly and inspection, military reconnaissance,
automatic processing of fingerprints, etc.

Image:
An image refers to a 2D light intensity function f(x, y), where (x, y)
denotes spatial coordinates and the value of f at any point (x, y) is
proportional to the brightness or gray level of the image at that
point. A digital image is an image f(x, y) that has been discretized
both in spatial coordinates and in brightness. The elements of such a
digital array are called image elements or pixels.

A simple image model:

To be suitable for computer processing, an image f(x, y) must be
digitized both spatially and in amplitude. Digitization of the spatial
coordinates (x, y) is called image sampling. Amplitude digitization is
called gray-level quantization.

The storage and processing requirements increase rapidly with the


spatial resolution and the number of gray levels.

Example: a 256 gray-level image of size 256x256 occupies 64K bytes of
memory (256 x 256 pixels at 1 byte per pixel = 65,536 bytes).
Types of image processing
• Low level processing

• Medium level processing

• High level processing


Low level processing means performing basic operations on images such
as reading an image, resizing, rotating, RGB to gray level conversion,
histogram equalization, etc. The output image obtained after low level
processing is a raw image. Medium level processing means extracting
regions of interest from the output of low level processing; it deals
with the identification of boundaries, i.e. edges, a process called
segmentation. High level processing deals with adding artificial
intelligence to the medium level processed signal.

10.2 FUNDAMENTAL STEPS IN IMAGE PROCESSING

Fundamental steps in image processing are

1. Image acquisition: to acquire a digital image

2. Image pre-processing: to improve the image in ways that increase the
chances for success of the other processes.
3. Image segmentation: to partition an input image into its constituent
parts or objects.

4. Image representation: to convert the input data to a form suitable
for computer processing.

5. Image description: to extract features that result in some
quantitative information of interest, or features that are basic for
differentiating one class of objects from another.

6. Image recognition: to assign a label to an object based on the
information provided by its description.

10.3 ELEMENTS OF IMAGE PROCESSING
A digital image processing system contains the following blocks as shown in the
figure

The basic operations performed in a digital image processing system include

1. Acquisition
2. Storage
3. Processing

4. Communication
5. Display

A simple image formation model

Images are denoted by a two-dimensional function f(x, y). f(x, y) may
be characterized by two components:
1. The amount of source illumination i(x, y) incident on the scene
2. The amount of illumination reflected r(x, y) by the objects of the
scene
so that f(x, y) = i(x, y)r(x, y), where 0 < i(x, y) < infinity and
0 < r(x, y) < 1.
Typical values of reflectance r(x, y):
• 0.01 for black velvet
• 0.65 for stainless steel
• 0.8 for flat white wall paint
• 0.9 for silver-plated metal
• 0.93 for snow

Examples of typical ranges of illumination i(x, y) for visible light
(average values):
• Sun on a clear day: ~90,000 lm/m^2, down to 10,000 lm/m^2 on a cloudy day
• Full moon on a clear evening: ~0.1 lm/m^2
• Typical illumination level in a commercial office: ~1000 lm/m^2
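A tiny numpy illustration of the model, taking values from the lists
above (the arrays are hypothetical stand-ins for a real scene):

import numpy as np

# f(x, y) = i(x, y) * r(x, y): a 100x100 scene under office lighting
i = np.full((100, 100), 1000.0)   # illumination, ~1000 lm/m^2
r = np.full((100, 100), 0.65)     # reflectance of stainless steel
f = i * r                         # resulting image intensity, 650 everywhere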

11.FACE RECOGNITION

Over the last few decades many techniques have been proposed for face
recognition. Many of the techniques proposed during the early stages of
computer vision cannot be considered successful, but almost all of the
recent approaches to the face recognition problem have been creditable.
According to the research by Brunelli and Poggio (1993), all approaches
to human face recognition can be divided into two strategies:
(1) Geometrical features and
(2) Template matching.

11.1 FACE RECOGNITION USING GEOMETRICAL FEATURES

This technique involves computation of a set of geometrical features such as nose


width and length, mouth position and chin shape, etc. from the picture of the face
we want to recognize. This set of features is then matched with the features of
known individuals. A suitable metric such as Euclidean distance (finding the closest
vector) can be used to find the closest match. Most pioneering work in face
recognition was done using geometric features (Kanade, 1973), although Craw et
al. (1987) did relatively recent work in this area.

Figure : Geometrical features (white) which could be used for face recognition
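A hedged sketch of the matching step described above, comparing a
measured feature vector against stored ones by Euclidean distance (the
names and the dictionary layout are illustrative):

import numpy as np

def closest_match(query, known):
    # query: numpy vector of geometrical features (nose width, chin shape, ...)
    # known: dict mapping a person's name to their stored feature vector
    dists = {name: np.linalg.norm(query - vec) for name, vec in known.items()}
    return min(dists, key=dists.get)   # smallest Euclidean distance wins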

The advantage of using geometrical features as a basis for face
recognition is that recognition is possible even at very low
resolutions and with noisy images (images with many disorderly pixel
intensities). Although the face cannot be viewed in detail, its overall
geometrical configuration can be extracted for face recognition. The
technique's main disadvantage is that automated extraction of the
facial geometrical features is very hard. Automated geometrical feature
extraction based recognition is also very sensitive to the scaling and
rotation of a face in the image plane (Brunelli and Poggio, 1993). This
is apparent when we examine Kanade's (1973) results, where he reported
a recognition rate of between 45-75% with a database of only 20 people.
However, if these features are extracted manually, as in Goldstein et
al. (1971) and Kaya and Kobayashi (1972), satisfactory results may be
obtained.

11.1.1 Face recognition using template matching

This is similar to the template matching technique used in face
detection, except that here we are not trying to classify an image as a
'face' or 'non-face' but are trying to recognize a face.

Figure . Face recognition using template matching

Whole face, eye, nose and mouth regions can be used in a template
matching strategy. The basis of the template matching strategy is to
extract whole facial regions (matrices of pixels) and compare these
with the stored images of known individuals.

Once again Euclidean distance can be used to find the closest match. The simple
technique of comparing grey-scale intensity values for face recognition was used
by Baron (1981).
However there are far more sophisticated methods of template matching for face
recognition. These involve extensive pre-processing and transformation of the
extracted grey-level intensity values. For example, Turk and Pentland (1991a) used
Principal Component Analysis, sometimes known as the eigenfaces approach, to
pre-process the gray-levels and Wiskott et al. (1997) used Elastic Graphs encoded
using Gabor filters to pre-process the extracted regions. An investigation of
geometrical features versus template matching for face recognition by Brunelli and
Poggio (1993) came to the conclusion that although a feature based strategy may
offer higher recognition speed and smaller memory requirements, template based
techniques offer superior recognition accuracy.

11.2 PROBLEM SCOPE AND SYSTEM SPECIFICATION

The following problem scope for this project was arrived at after
reviewing the literature on face detection and face recognition, and
determining possible real-world situations where such systems would be
of use. The following system requirements were identified:
1. A system to detect frontal view faces in static images.
2. A system to recognize a given frontal view face.
3. Only expressionless, frontal view faces will be presented to the
face detection & recognition systems.
4. All implemented systems must display a high degree of lighting
invariance.
5. All systems must possess near real-time performance.
6. Both fully automated and manual face detection must be supported.
7. Frontal view face recognition will be realised using only a single
known image.
8. Automated face detection and recognition systems should be combined
into a fully automated face detection and recognition system. The face
recognition sub-system must display a slight degree of invariance to
scaling and rotation errors in the segmented image extracted by the
face detection sub-system.
9. The frontal view face recognition system should be extended to a
pose invariant face recognition system.

Unfortunately, although we may specify constraining conditions for our
problem domain, it may not be possible to strictly adhere to these
conditions when implementing a system in the real world.

11.3 BRIEF OUT LINE OF THE IMPLEMENTED SYSTEM

Fully automated face detection of frontal view faces is implemented using a


deformable template algorithm relying on the image invariants of human faces.
This was chosen because a similar neural-network based face detection model
would have needed far too much training data to be implemented and would have
used a great deal of computing time. The main difficulties in implementing a
deformable template based technique were the creation of the bright and dark
intensity sensitive templates and designing an efficient implementation of the
detection algorithm.

Figure : Implemented fully automated frontal view face detection model


A manual face detection system was realised by measuring the facial
proportions of the average face, calculated from 30 test subjects. To
detect a face, a human operator would identify the locations of the
subject's eyes in an image and, using the proportions of the average
face, the system would segment an area from the image.
A template matching based technique was implemented for face
recognition. This was because of its increased recognition accuracy
when compared to geometrical features based techniques and the fact
that an automated geometrical features based technique would have
required complex feature detection pre-processing.
Of the many possible template matching techniques, Principal Component
Analysis was chosen because it has proved to be highly robust in
pattern recognition tasks and because it is relatively simple to
implement. The author would also have liked to implement a technique
based on Elastic Graphs but could not find sufficient literature about
the model to implement such a system during the limited time available
for this project.

Figure : Principal Component Analysis transform from 'image space' to
'face space'.

Using Principal Component Analysis, the segmented frontal view face
image is transformed from what is sometimes called 'image space' to
'face space'. All faces in the face database are transformed into face
space. Then face recognition is achieved by transforming any given test
image into face space and comparing it with the training set vectors.
The closest matching training set vector should belong to the same
individual as the test image. Principal Component Analysis is of
special interest because the transformation to face space is based on
the variation of human faces (in the training set). The values of the
'face space' vector correspond to the amount certain 'variations' are
present in the test image. Face recognition and detection is a pattern
recognition approach for personal identification purposes, in addition
to other biometric approaches such as fingerprint recognition,
signature, retina and so forth. The face is the most common biometric
used by humans; applications range from static mug-shot verification to
identification in a cluttered background.

Fig : Face Recognition

11.4 FACE RECOGNITION DIFFICULTIES


1. Identify similar faces (inter-class similarity)
2. Accommodate intra-class variability
due to
2.1 head pose
2.2 illumination conditions
2.3 expressions
2.4 facial accessories
2.5 aging effects
3. Cartoon faces

11.4.1 Inter-class similarity:

Different persons may have very similar appearances.

Fig : Face recognition: twins, and father and son

Face recognition and detection is a pattern recognition approach for
personal identification, in addition to other biometric approaches such
as fingerprint recognition, signature, retina and so forth. To handle
the variability in faces, the images are processed before they are fed
into the network. All positive examples, that is the face images, are
obtained by cropping images with frontal faces to include only the
front view. All the cropped images are then corrected for lighting
through standard algorithms.

11.4.2 Intra-class variability

Faces of the same subject vary in pose, illumination, expression,
accessories, color, occlusions, and brightness.

11.5 PRINCIPAL COMPONENT ANALYSIS (PCA)

Principal Component Analysis (or Karhunen-Loeve expansion) is a
suitable strategy for face recognition because it identifies
variability between human faces, which may not be immediately obvious.
Principal Component Analysis (hereafter PCA) does not attempt to
categorise faces using familiar geometrical differences, such as nose
length or eyebrow width. Instead, a set of human faces is analysed
using PCA to determine which 'variables' account for the variance of
faces. In face recognition these variables are called eigenfaces,
because when plotted they display an eerie resemblance to human faces.
Although PCA is used extensively in statistical analysis, the pattern
recognition community started to use PCA for classification only
relatively recently. As described by Johnson and Wichern (1992),
'principal component analysis is concerned with explaining the
variance-covariance structure through a few linear combinations of the
original variables.' Perhaps PCA's greatest strengths are in its
ability for data reduction and interpretation. For example, a 100x100
pixel area containing a face can be very accurately represented by just
40 eigenvalues. Each eigenvalue describes the magnitude of each
eigenface in each image.
Furthermore, all interpretation (i.e. recognition) operations can now
be done using just the 40 eigenvalues to represent a face instead of
manipulating the 10000 values contained in a 100x100 image. Not only is
this computationally less demanding, but the recognition information of
several thousand pixel values is condensed into just 40 numbers.

11.6 UNDERSTANDING EIGENFACES

Any grey scale face image I(x, y) consisting of an NxN array of
intensity values may also be considered as a vector of dimension N^2.
For example, a typical 100x100 image used in this thesis has to be
transformed into a 10000-dimension vector!

Figure : 7x7 face image transformed into a 49-dimension vector

This vector can also be regarded as a point in 10000-dimension space.
Therefore, all the images of subjects whose faces are to be recognized
can be regarded as points in 10000-dimension space. Face recognition
using these images directly is doomed to failure, because all human
face images are quite similar to one another, so all the associated
vectors are very close to each other in the 10000-dimension space.
Fig :Eigenfaces

The transformation of a face from image space (I) to face space (f) involves just a
simple matrix multiplication. If the average face image is A and U contains the
(previously calculated) eigenfaces,

f = U * (I - A)

This is done to all the face images in the face database (database with known
faces) and to the image (face of the subject) which must be recognized. The
possible results when projecting a face into face space are given in the following
figure.
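Before considering those outcomes, here is a hedged numpy sketch of the
training, projection (f = U * (I - A)) and closest-match steps just
described; the function names are illustrative and not from the
original system:

import numpy as np

def train_eigenfaces(images, k=40):
    # images: (n, N*N) array, one flattened face per row
    A = images.mean(axis=0)                    # the average face
    X = images - A
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U = Vt[:k]                                 # top k eigenfaces, one per row
    return U, A

def project(face, U, A):
    return U @ (face - A)                      # f = U * (I - A)

def recognize(face, U, A, db_vectors, db_names):
    # db_vectors: face-space projections of the known faces
    # db_names: the corresponding individuals
    f = project(face, U, A)
    d = np.linalg.norm(db_vectors - f, axis=1)
    return db_names[int(np.argmin(d))]         # closest training set vector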

There are four possibilities:
1. The projected image is a face and is transformed near a face in the
face database.
2. The projected image is a face and is not transformed near a face in
the face database.
3. The projected image is not a face and is transformed near a face in
the face database.
4. The projected image is not a face and is not transformed near a face
in the face database.

While it is possible to find the closest known face to the transformed
image by calculating the Euclidean distance to the other vectors, how
does one know whether the image being transformed actually contains a
face? Since PCA is a many-to-one transform, several vectors in the
image space (images) will map to a point in face space (the problem is
that even non-face images may transform near a known face image's face
space vector). Turk and Pentland (1991a) described a simple way of
checking whether an image is actually of a face: transform the image
into face space and then transform it back (reconstruct it) into image
space. Using the previous notation,

I' = U^T * U * (I - A)

With these calculations it is possible to verify that an image is of a
face and recognise that face. O'Toole et al. (1993) did some
interesting work on the importance of eigenfaces with large and small
eigenvalues. They showed that the eigenvectors with larger eigenvalues
convey information relative to the basic shape and structure of the
faces. This kind of information is most useful in categorising faces
according to sex, race etc. Eigenvectors with smaller eigenvalues tend
to capture information that is specific to single or small subsets of
learned faces and are useful for distinguishing a particular face from
any other face. Turk and Pentland (1991a) showed that about 40
eigenfaces were sufficient for a very good description of human faces,
since the reconstructed images had only about 2% RMS pixel-by-pixel
error.
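A hedged sketch of that face/non-face check, assuming the rows of U are
orthonormal eigenfaces as in the previous sketch (the threshold is
application-dependent):

import numpy as np

def looks_like_face(image_vec, U, A, threshold):
    # Project into face space and back: I' = U^T * U * (I - A).
    phi = image_vec - A                # mean-subtracted image
    reconstruction = U.T @ (U @ phi)   # back-projection from face space
    error = np.linalg.norm(phi - reconstruction)
    return error < threshold           # small error -> probably a face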

Figure : Images and their reconstructions. The Euclidean distance
between a face image and its reconstruction will be lower than that of
a non-face image.

11.7 IMPROVING FACE DETECTION USING RECONSTRUCTION

Reconstruction cannot be used as a means of face detection in images in
near real-time, since it would involve resizing the face detection
window area and large matrix multiplications, both of which are
computationally expensive. However, reconstruction can be used to
verify whether potential face locations identified by the deformable
template algorithm actually contain a face. If the reconstructed image
differs greatly from the face detection window, then the window
probably does not contain a face. Instead of just identifying a single
potential face location, the face detection algorithm can be modified
to output many high 'faceness' locations, which can be verified using
reconstruction. This is especially useful because occasionally the best
'faceness' location found by the deformable template algorithm may not
contain the ideal frontal view face pixel area.
Potential face locations that have been identified by the face
detection system (the best face locations it found in its search) are
checked as to whether they contain a face. If the threshold level (the
maximum difference between the reconstruction and the original for the
original to be a face) is set correctly, this will be an efficient way
to detect a face. The deformable template algorithm is fast and can
reduce the search space of potential face locations to a handful of
positions. These are then checked using reconstruction. The number of
locations found by the face detection system can be changed by getting
it to output not just the best face locations it has found so far but
any location which has a 'faceness' value that is, for example, at
least 0.9 times the best heuristic value found so far. Then there will
be many more potential face locations to be checked using
reconstruction. This and similar speed versus accuracy trade-off
decisions have to be made keeping in mind the platform on which the
system is implemented.

Similarly, instead of using reconstruction to check the face detection system's
output, the output's correlation with the average face can be checked.
Segmented areas with a high correlation probably contain a face. Once again, a
threshold value has to be established to classify faces from non-faces. As with
reconstruction, resizing the segmented area and calculating its correlation
with the average face is far too expensive to be used alone for face detection,
but it is suitable for verifying the output of the face detection system.
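A sketch of this cheaper check, under the same assumptions as the earlier
sketches; the threshold of 0.6 is purely illustrative.

import numpy as np

def correlates_with_average(window, A, threshold=0.6):
    """window, A: equally sized vectorised window and average face."""
    w = (window - window.mean()) / window.std()
    a = (A - A.mean()) / A.std()
    correlation = (w * a).mean()                # normalised correlation in [-1, 1]
    return correlation > threshold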

11.8 POSE INVARIANT FACE RECOGNITION

Extending the frontal-view face recognition system to a pose-invariant
recognition system is quite simple if one of the proposed specifications of the
face recognition system is relaxed. Successful pose-invariant recognition is
possible if many images of each known individual are in the face database. Nine
images of each known individual can be taken, as shown below; if an image of
the same individual is then submitted within a 30° angle of the frontal view,
he or she can be identified. The figure shows nine database images of a single
known individual, together with an unknown image of the same individual to be
identified.

Fig: Pose invariant face recognition.

Pose-invariant face recognition highlights the generalisation ability of PCA:
for example, when an individual's frontal view and 30° left view are known,
even the individual's 15° left view can be recognised.
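The sketch below illustrates this relaxed-specification scheme: several posed
views per known individual are stored as face-space vectors, and a probe is
identified by its nearest neighbour overall. The gallery structure is an
assumption, and the vectors are presumed to have been computed with the same U
and A as the earlier sketches.

import numpy as np

def identify(unknown, gallery, U, A):
    """gallery: dict mapping a person's name to the face-space vectors of
    their stored views (e.g. nine posed views per person)."""
    probe = U @ (unknown - A)                   # project the probe into face space
    best_name, best_dist = None, float("inf")
    for name, vectors in gallery.items():
        for v in vectors:
            d = np.linalg.norm(probe - v)
            if d < best_dist:
                best_name, best_dist = name, d
    return best_name, best_dist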

12.FACE DETECTION :

The problem of face recognition is, to a large extent, a problem of face
detection. This is a fact that seems quite bizarre to new researchers in this
area. However, before face recognition is possible, one must be able to
reliably find a face and its landmarks. This is essentially a segmentation
problem, and in practical systems most of the effort goes into solving this
task. In fact, the actual recognition based on features extracted from these
facial landmarks is only a minor last step.
There are two types of face detection problems:
1) Face detection in images and
2) Real-time face detection

12.1 FACE DETECTION IN IMAGES

Fig: A successful face detection in an image with a frontal view of a human face.

Most face detection systems attempt to extract a fraction of the whole face,
thereby eliminating most of the background and other areas of an individual's
head, such as hair, that are not necessary for the face recognition task. With
static images, this is often done by running a window across the image. The
face detection system then judges whether a face is present inside the window
(Brunelli and Poggio, 1993). Unfortunately, with static images there is a very
large search space of possible locations of a face in an image.
Most face detection systems use an example-based learning approach to decide
whether or not a face is present in the window at a given instant (Sung and
Poggio, 1994; Sung, 1995). A neural network or some other classifier is trained
using supervised learning with 'face' and 'non-face' examples, thereby enabling
it to classify an image (the window in a face detection system) as 'face' or
'non-face'. Unfortunately, while it is relatively easy to find face examples,
how would one find a representative sample of images which represent non-faces
(Rowley et al., 1996)? Therefore, face detection systems using example-based
learning need thousands of 'face' and 'non-face' images for effective training.
Rowley, Baluja and Kanade (Rowley et al., 1996) used 1025 face images and 8000
non-face images (generated from 146,212,178 sub-images) for their training set!
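A minimal sketch of this window-scanning scheme follows; the window size, the
step, and the classifier (any trained face/non-face model with a scikit-learn
style predict method) are assumptions, not details from the cited systems.

import numpy as np

def scan_image(image, classifier, window=20, step=4):
    """Slide a window across a 2-D grey-scale image and collect face hits."""
    hits = []
    h, w = image.shape
    for y in range(0, h - window, step):
        for x in range(0, w - window, step):
            patch = image[y:y + window, x:x + window].ravel()
            if classifier.predict(patch[np.newaxis, :])[0] == 1:   # 1 => 'face'
                hits.append((x, y))
    return hits

In practice the scan is repeated at several image scales, which is part of why
the search space over a static image is so large.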

There is another technique for determining whether there is a face inside the
face detection system's window: template matching. The difference between a
fixed target pattern (a face) and the window is computed and thresholded; if
the window contains a pattern close to the target pattern, the window is judged
to contain a face. One implementation of template matching, called correlation
templates, uses a whole bank of fixed-size templates to detect facial features
in an image (Bichsel, 1991; Brunelli and Poggio, 1993). By using several
templates of different (fixed) sizes, faces of different scales are detected.
The other implementation of template matching uses a deformable template
(Yuille, 1992): instead of several fixed-size templates, a non-rigid deformable
template is used, and its size is varied in the hope of detecting a face in the
image.

A face detection scheme related to template matching is image invariants. Here,
the fact that the local ordinal structure of the brightness distribution of a
face remains largely unchanged under different illumination conditions (Sinha,
1994) is used to construct a spatial template of the face which closely
corresponds to facial features. In other words, the average grey-scale
intensities in human faces are used as a basis for face detection. For example,
an individual's eye region is almost always darker than his forehead or nose;
an image therefore matches the template if it satisfies these 'darker than' and
'brighter than' relationships (Sung and Poggio, 1994).
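To illustrate the ordinal idea, the sketch below tests a few 'darker than'
relations between crude facial regions; the region boundaries are illustrative
guesses, not values taken from the cited work.

def matches_ordinal_template(window):
    """window: 2-D grey-scale array covering a candidate face region."""
    h, w = window.shape
    forehead = window[: h // 4, :].mean()                     # top band
    eyes     = window[h // 4 : h // 2, :].mean()              # eye band
    nose     = window[h // 2 : 3 * h // 4, w // 3 : 2 * w // 3].mean()
    # The eye band should be darker than both the forehead and the nose area
    return eyes < forehead and eyes < nose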

12.2 REAL TIME FACE DETECTION

Real-time face detection involves detecting a face in a series of frames from a
video-capturing device. While the hardware requirements for such a system are
far more stringent, from a computer vision standpoint real-time face detection
is actually a far simpler process than detecting a face in a static image. This
is because, unlike most of our surrounding environment, people are continually
moving: we walk around, blink, fidget, wave our hands and so on.

Fig: Frame 1 from camera. Fig: Frame 2 from camera.

Fig: Spatio-temporally filtered image.

Since in real-time face detection the system is presented with a series of
frames in which to detect a face, spatio-temporal filtering (finding the
difference between subsequent frames) can be used to identify the area of the
frame that has changed and to detect the individual (Wang and Adelson, 1994;
Adelson and Bergen, 1986). Furthermore, as seen in the figure, exact face
locations can easily be identified using a few simple rules, such as:
1) the head is the small blob above a larger blob (the body);
2) head motion must be reasonably slow and contiguous; heads won't jump around
erratically (Turk and Pentland, 1991a, 1991b).
Real-time face detection has therefore become a relatively simple problem and
is possible even in unstructured and uncontrolled environments using these very
simple image-processing techniques and reasoning rules.
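A rough OpenCV sketch of such spatio-temporal filtering is shown below; the
camera index and the threshold value of 25 are assumptions.

import cv2

cap = cv2.VideoCapture(0)                       # assumed default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)            # difference between frames
    _, mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    # Blobs in `mask` mark moving regions; per the rules above, the head is
    # the small blob sitting above the larger body blob.
    prev = gray
    cv2.imshow("motion", mask)
    if cv2.waitKey(1) == 27:                    # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()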

12.3 FACE DETECTION PROCESS

Face detection is the process of identifying different parts of human faces,
such as the eyes, nose and mouth; this process can be achieved using MATLAB
code. In this project the author attempts to detect faces in still images by
using image invariants. To do this, it is useful to study the grey-scale
intensity distribution of an average human face. The following 'average human
face' was constructed from a sample of 30 frontal-view human faces, of which 12
were female and 18 male.
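A minimal NumPy sketch of building such an average face, assuming the sample
images are already aligned, equally sized grey-scale arrays (illustrative only;
the project itself describes MATLAB for this step):

import numpy as np

def average_face(face_arrays):
    """face_arrays: list of 2-D grey-scale face images of identical shape."""
    stack = np.stack(face_arrays).astype(np.float64)
    return stack.mean(axis=0)                   # per-pixel mean over the sample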

A suitably scaled colormap has been used to highlight grey-scale intensity
differences.

Figure: Average human face in grey-scale (shown with a scaled colormap and its
negative).


The grey-scale differences, which are invariant across all the sample faces,
are strikingly apparent. The eye-eyebrow area always seems to contain dark
(low-intensity) grey-levels, while the nose, forehead and cheeks contain bright
(high-intensity) grey-levels. After a great deal of experimentation, the
researcher found that the following areas of the human face were suitable for a
face detection system based on image invariants and a deformable template.

Figure: Area chosen for face detection (indicated on the average human face in
grey-scale; scaled colormap and negative).

The above facial area performs well as a basis for a face template, probably
because of the clear division of the bright-intensity invariant area by the
dark-intensity invariant regions. Once this pixel area is located by the face
detection system, any particular area required can be segmented based on the
proportions of the average human face. After studying the above images, the
author subjectively decided to use the following as a basis for
dark-intensity-sensitive and bright-intensity-sensitive templates. Once these
are located in a subject's face, the required pixel area lies 33.3% (of the
width of the square window) below them.

Figure: Basis for a bright-intensity invariant sensitive template.

Note the slight differences made to the bright-intensity invariant sensitive
template (compare Figures 3.4 and 3.2), which were needed because of the
pre-processing done by the system to overcome irregular lighting (chapter six).
Now that suitable dark and bright intensity invariant templates have been
decided on, it is necessary to find a way of using them to make two A-units for
a perceptron, i.e. a computational model is needed to assign neurons to the
distributions displayed.

Fig: Scanned image detection

12.4 FACE DETECTION ALGORITHM

Fig: Face detection algorithm

Fig: Mouth detection. Fig: Nose detection.

Fig: Eye detection.

13.APPLICATION:

The human face has a total of 80 “nodes” that can be used in facial
recognition, but it takes only 14-22 nodes to identify a person's face. The
mathematical algorithms of biometric facial recognition follow several stages
of image processing, as sketched in the example below:
1. Capture. The first step is for the system to collect physical or behavioral
samples in predetermined conditions and during a stated period of time.
2. Extraction. All of the gathered data is then extracted from the samples to
create templates based on them.
3. Comparison. After extraction, the collected data is compared with the
existing templates.
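A minimal Python sketch of these three stages follows; the camera object, the
embedding model, the enrolled-template store and the threshold are all
illustrative assumptions, not this project's exact pipeline.

import numpy as np

def capture(camera):
    """Stage 1: collect a sample (here, grab a frame from an assumed camera)."""
    return camera.read()

def extract(frame, model):
    """Stage 2: turn the sample into a template (an embedding vector)."""
    return model.embed(frame)

def compare(template, enrolled, threshold=0.6):
    """Stage 3: match the template against the enrolled templates."""
    distances = {name: np.linalg.norm(template - t) for name, t in enrolled.items()}
    name, dist = min(distances.items(), key=lambda kv: kv[1])
    return name if dist < threshold else None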
Facial recognition technology has traditionally been associated with the
security sector, but today it is actively expanding into other industries,
including retail, marketing and health. By 2022, the global facial recognition
technology market is projected to generate an estimated $9.6 billion in
revenue, with a compound annual growth rate (CAGR) of 21.3 percent.

Comparatively, the general IT market is projected to earn $2.65 trillion in
revenue by 2020, representing a 3.3 percent CAGR from 2015 to 2020. As market
demand increases and industry-specific needs arise, many companies are
exploring how AI can offer a competitive edge.

To gauge the growing impact of AI on facial recognition technology across
industries, we researched this sector in depth to help answer questions
business leaders are asking today, including:

 What types of AI applications are currently in use in the facial recognition technology
sector – and how are they being implemented in industries such as security and
healthcare?

 What tangible results have been reported on AI facial recognition applications that
are being implemented across industries?

 Are there any common trends among these innovation efforts – and how could these
trends affect the future of the facial recognition technology sector?

In this article we break down applications of artificial intelligence in the
facial recognition technology market to provide business leaders with an
understanding of current and emerging trends that may impact their sector.
We'll begin with a synopsis of the sectors we covered.

Facial Recognition Applications Overview

Based on our assessment of the applications in the field today, the majority of
facial recognition use-cases appear to fall into three major categories:

 Security: Companies are training deep learning algorithms to detect fraud,
reduce the need for traditional passwords, and improve the ability to
distinguish between a human face and a photograph.

 Healthcare: Machine learning is being combined with computer vision to more
accurately track patient medication consumption and support pain management
procedures.

 Marketing: Fraught with ethical considerations, marketing is a burgeoning
domain of facial recognition innovation, and one we can expect to see more of
as facial recognition becomes ubiquitous.

SECURITY

1. Fraud detection
Occasionally, facial recognition technology may not be able to distinguish
between a human face and a photograph, a flaw that can greatly compromise
security efforts. In an effort to address this challenge, the developers of a
facial recognition doorbell are using deep learning and facial recognition
technology to distinguish a human face from a photograph.

2. Account security

Passwords have become a burdensome cost of navigating the Internet. MasterCard
is one of the financial institutions looking to circumvent the need for
passwords through facial recognition. The MasterCard Identity Check Mobile app
reportedly verifies online payments through either fingerprint or facial
recognition: app users are able to verify their payments by using their
smartphone camera to capture a picture of their faces.

3. Facial recognition controversy

In 2016, speculation that Saks Fifth Avenue's new Toronto location would
utilize facial recognition made headlines. In response to an inquiry on the
retailer's Twitter page, Saks denied any use of facial recognition technology
at that particular location; however, during the Twitter exchange, an attempt
to clarify whether the technology is used in Saks New York locations went
unanswered.

4. Shoplifting prevention

In the retail sector, major retailers appear to be exploring facial recognition
technology for security purposes. However, some efforts have been met with
consumer privacy concerns.

In 2015, Walmart reportedly began testing facial recognition in some of its
stores in an effort to identify shoplifters, but subsequently ended its use.
The retail giant later publicly acknowledged the tests and claimed that the
technology did not provide an adequate ROI to justify continued use. It appears
that the unfavorable publicity may have been one of the main reasons behind
this decision.

HEALTH CARE

1. Medication adherence

Medication non-adherence or non-compliance occurs when a patient fails to take
their medication as prescribed by his or her physician. This is a major
challenge in the U.S., occurring among approximately 50 percent of patients who
receive medication prescriptions. It translates into avoidable hospital
admissions and a staggering $100 billion in estimated annual costs.
Founded in 2010, AiCure is an AI company using facial recognition
technology and computer vision to improve medication adherence
practices. The company’s algorithm-driven software is delivered through
an app and AiCure claims that it can be accessed on any mobile device.

The app reportedly performs three core functions: it identifies the patient,
identifies the prescribed drug, and can visually confirm whether the drug has
been ingested by the patient. The AiCure team has provided a simulated video
demonstration of how the app tracks medication adherence.

In a 2017 pilot study of 75 individuals with schizophrenia over a period of 24
weeks, AiCure reported an 89.7 percent drug adherence rate, compared to a 71.9
percent rate with a traditional drug adherence monitoring method. These results
provide a window into how facial recognition technology could impact healthcare
outcomes and the economy.

Based on its website, the company does not appear to be focused on a particular
disease or age demographic, and appears interested in broad adoption of its
technology across applicable medical conditions.

One research study has shown that higher medication adherence correlates with
patients who have two simultaneous conditions (comorbidity) and a higher number
of prescribed medications. The chronic conditions with the lowest rates of
adherence in this study were diabetes and asthma.

2. Pain management

An estimated 100 million Americans suffer from chronic pain, a figure
equivalent to roughly one-third of the U.S. population. The annual associated
medical costs have been estimated at up to $630 billion. Globally, an estimated
1.5 billion individuals suffer from some degree of chronic pain.

In the healthcare setting, performing an accurate assessment of a patient's
pain level is an imperfect science. Certain traditional pain assessment methods
rely significantly on a patient's description of his or her pain level. While
non-verbal pain assessment tools are available, inherent bias, reliability and
sensitivity are some of the challenges encountered regardless of the tool.

The facial recognition technology platform ePAT is a point-of-care app designed
to detect the facial expression nuances associated with pain. App users can
also reportedly enter data on “non-facial pain cues” such as “vocalisations,
movements and behaviours”, which are then aggregated to provide a pain severity
score.

MARKETING

For example, the app Facedeals aimed to target customers with special offers
from businesses they frequent by integrating facial recognition with their
Facebook profiles. Specifically, facial recognition cameras would be installed
at the business entrance and would recognize customers as they enter.
Simultaneously, the customer would receive a notification of a customized deal
on their smartphone, based on his or her Facebook “Like” history.

Consumers did not take to the technology as anticipated, which reportedly led
Facedeals to change its name to Taonii, a mobile app that essentially performs
the same functions without the facial recognition component. However, it is
unclear how Taonii has performed; the company's website provides limited
information, and its Facebook page has not been updated since its launch in
2014.

Other companies, such as Listerine and the Dutch coffee maker Douwe Egberts,
have integrated facial recognition technology into marketing campaigns or
publicity efforts, but applications that have produced hard data on customer
retention have yet to be reported.

Marketing is one of the business domains being disrupted most by artificial
intelligence. It is no longer only about tracking user behavior in an
eCommerce app and displaying relevant ads later, or text-mining the Web for
insights into your target audience's preferences. Facial recognition technology
has recently taken the process to an entirely new level. It makes it easier not
only to sell, but also to buy, in those instances when the buyer is spoilt for
choice or simply not aware of all the features of several similar products.

Talking of examples: according to a.list, the well-known tour provider Expedia
has partnered with a Hawaii-based tourist agency to offer face-recognition-
recommended tour options. The facial recognition solution used by the company
determines which of the Hawaii-based tourist activities presented to the viewer
on the Expedia website resonates with them most positively.

BANKING

Battling fraud in banking has probably never ceased since the trade was in its
infancy.

Nowadays, multi-factor authentication solutions, which provide two- or even
three-step authentication, are used to reduce the amount of fraud plaguing
banking institutions around the globe. These solutions generally succeed, but
they may sometimes affect the customer experience unfavorably.

Besides, in some contexts, for example in the case of ATM skimming,
multi-factor authentication is of no use. As you will probably have guessed, a
truly dependable solution can be provided by face recognition.

14.CONCLUSION :

We proposed to build a high-performance, scalable, agile and low-cost face
recognition based attendance system, and divided the proposed approach into
several small sub-projects. First, we studied neural networks and convolutional
neural networks.

Based on the theory of deep learning, we built a Siamese network, which trains
the neural network based on similarities. We then examined and compared the
available open-source datasets, chose the ORL dataset, and trained the model on
a GPU. The model takes a human face image and extracts it into a vector; the
distances between vectors are then compared to determine whether two faces in
different pictures belong to the same person. Finally, we studied, compared,
designed and built a system to work with the neural network model. The system
uses a client-server architecture.
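As a sketch of the verification step described above, comparing two embedding
vectors reduces to a distance threshold; the value 0.6 is an assumed tuning
parameter, not a figure from this project.

import numpy as np

def same_person(vec_a, vec_b, threshold=0.6):
    """vec_a, vec_b: embeddings produced by the trained Siamese network."""
    return np.linalg.norm(vec_a - vec_b) < threshold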

A GPU is used on the server side to provide high performance. We also decoupled
the main components of the system to make it flexible and scalable, and used
the non-blocking, asynchronous features of Node.js to increase the system's
concurrency. Since the entire system is modularized, it can be used in
different domains, which reduces development cost. The aim is to capture video
of the students, convert it into frames, and relate it to the database to
confirm their presence or absence, marking attendance for each student to
maintain the record. The Automated Classroom Attendance System helps increase
accuracy and speed, ultimately achieving the high-precision real-time
attendance needed for automatic classroom evaluation. In this approach, a face
recognition based automated student attendance system is thoroughly described.

The proposed approach provides a method to identify individuals by comparing
their input image, obtained from a recorded video frame, with the trained
images. The approach is able to detect and localize a face in an input facial
image obtained from the recorded video frame. Besides, it provides a method in
the pre-processing stage to enhance image contrast and reduce the illumination
effect. Extraction of features from the facial image is performed by applying
both LBP and PCA.

The algorithm designed to combine LBP and PCA is able to stabilize the system
by giving consistent results. The accuracy of the proposed approach is 100% for
high-quality images, 92.31% for low-quality images, and 95.76% on the Yale face
database when two images per person are trained. In the analysis we conclude
that the extraction of facial features can be challenging, especially under
different lighting. In the pre-processing stage, Contrast Limited Adaptive
Histogram Equalization (CLAHE) is able to reduce the illumination effect, and
performs better than plain histogram equalization in terms of contrast
improvement. Enhanced LBP with a larger radius, specifically radius two,
performs better than the original LBP operator, being less affected by
illumination and more consistent than other radius sizes.
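As an illustration of this pre-processing step, a minimal OpenCV sketch of
CLAHE follows; the clip limit and tile size are common defaults, not values
reported by this project.

import cv2

def preprocess(gray_image):
    """Apply contrast-limited adaptive histogram equalisation to a grey image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_image)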

There may be various lighting conditions, seating arrangements and environments
in different classrooms. Most of these conditions have been tested, and the
system has shown 100% accuracy in most cases. Students may also portray various
facial expressions, varying hair styles, beards, spectacles, etc. All of these
cases were considered and tested to obtain a high level of accuracy and
efficiency.

Thus, it can be concluded from the above discussion that a reliable, secure,
fast and efficient system has been developed to replace a manual and unreliable
system. This system can be implemented for better results regarding the
management of attendance and leaves. The system will save time, reduce the
amount of work the administration has to do, replace stationery material with
electronic apparatus, and reduce the human resources required for the purpose.
Hence a system with the expected results has been developed, although there is
still room for improvement. The Automated Attendance System has been envisioned
for the purpose of reducing the errors that occur in the traditional (manual)
attendance-taking system.

The aim is to automate attendance and build a system that is useful to an
organization such as an institute: an efficient and accurate method of
attendance in the office environment that can replace the old manual methods.
This method is secure, reliable and available for use. No specialized hardware
is needed to install the system in the office; it can be constructed using a
camera and a computer. Face recognition systems are part of facial image
processing applications, and their significance as a research area has
increased recently.

Implementations of the system include crime prevention, video surveillance,
person verification and similar security activities. The face recognition
system implementation will be part of the humanoid robot project at Atılım
University. The goal is reached by face detection and recognition methods.
Knowledge-based face detection methods are used to find, locate and extract
faces in acquired images; the implemented methods are skin color and facial
features.

A neural network is used for face recognition. The RGB color space is used to
specify skin color values, and segmentation decreases the search time over face
images. Facial components on face candidates are revealed by applying a LoG
(Laplacian of Gaussian) filter, which shows good performance in extracting
facial components under different illumination conditions. A feed-forward
neural network (FFNN) is used for classification, since face recognition is a
kind of pattern recognition problem. The classification results are accurate,
and classification remains flexible and correct even when the extracted face
image is slightly rotated, has closed eyes, or shows a small smile.

The proposed algorithm is capable of detecting multiple faces, and the
performance of the system shows acceptably good results. The face detection and
recognition algorithms were studied thoroughly, taking a number of tests on
images with different, varying conditions. For face detection, a combination of
the RGB and HSV color-model algorithms is used; for face recognition, principal
component analysis is used. Attendance is marked using the recognized face of
each individual student, and the data is stored in an attendance sheet. The
attendance of every student is marked automatically by matching their face with
the faces present in the database. In this system we have implemented an
attendance system for a lecture, section or laboratory by which the lecturer or
teaching assistant can record students' attendance. It saves time and effort,
especially in a lecture with a huge number of students. The Automated
Attendance System has been envisioned for the purpose of reducing the drawbacks
of the traditional (manual) system. This attendance system demonstrates the use
of image processing techniques in classrooms.

15.FUTURE SCOPE :

When building the neural network model, there are many parameters which can be
tuned to increase model performance, and we can keep tuning our models to
increase their accuracy. Also, a trained base model can be re-trained using a
specific dataset, so another way to increase the whole system's performance is
to capture a specific group of people's images and re-train the model on this
small dataset.

For example, if an organization with 3000 people uses this system, the model
can be trained to be very accurate on these 3000 people. We can employ and
automate this feature in the system.

* Identifiable online daters

* Better tools for law enforcement

* Full body recognition

* Face recognition as advertising

* Shattered glass

The future of facial recognition technology is bright. Forecasters opine that
this technology is expected to grow at a formidable rate and will generate huge
revenues in the coming years.

Security and surveillance are the major segments that will be most deeply
influenced. Other areas now welcoming it with open arms are private industries,
public buildings, and schools.

It is estimated that it will also be adopted by retailers and banking systems
in the coming years to curb fraud in debit/credit card purchases and payments,
especially those made online.

This technology would fill in the loopholes of the largely prevalent and
inadequate password system. In the long run, robots using facial recognition
technology may also come to the fore; they can be helpful in completing tasks
that are impractical or difficult for human beings.

The use of spherical canonical images allows matching to be performed in the
spherical harmonic transform domain, which does not require preliminary
alignment of the images. The errors introduced by embedding into an expression
space with some predefined geometry are avoided. In this facial expression
recognition setup, end-to-end processing comprises face surface acquisition and
reconstruction, smoothing, and sub-sampling to approximately 2500 points.

Facial surface cropping and the measurement of a large set of distances between
all the points, using a parallelized parametric version, is utilized. The
overall experimental evaluation of the facial expression system promises better
face recognition rates. Having examined techniques to cope with expression
variation, future work may investigate the face classification problem and the
optimal fusion of color and depth information in more depth. Further study can
be directed toward matching gene alleles to the geometric factors of facial
expressions. The genetic property evolution framework for a facial expression
system can be studied to suit the requirements of different security models,
such as criminal detection and governmental confidential security breaches.
Today, one of the fields that uses facial recognition the most is security.

Facial recognition is a very effective tool that can help law enforcers
recognize criminals, and software companies are leveraging the technology to
help users access their systems. The technology can be further developed for
use in other avenues such as ATMs, accessing confidential files, or other
sensitive materials. This could make other security measures such as passwords
and keys obsolete.

Another way that innovators are looking to implement facial recognition is
within subways and other transportation outlets. They are looking to leverage
this technology to use faces as credit cards to pay for transportation fares.
Instead of having to go to a booth to buy a ticket, face recognition would take
your face, run it through a system, and charge the account that you have
previously created. This could potentially streamline the process and optimize
the flow of traffic drastically. The future is here.

16.REFERENCES :

 https://opencv.org/

 https://www.kaggle.com/serkanpeldek/face-recognition-on-olivetti-dataset

 https://numpy.org/

 https://towardsdatascience.com/face-recognition-how-lbph-works-90ec258

 https://en.wikipedia.org/wiki/Facial_recognition_system
 Adelson, E. H., and Bergen, J. R. (1986). The Extraction of Spatio-Temporal
Energy in Human and Machine Vision. Proceedings of Workshop on Motion:
Representation and Analysis (pp. 151-155), Charleston, SC, May 7-9.

 AAFPRS (1997). A newsletter from the American Academy of Facial Plastic and
Reconstructive Surgery. Third Quarter 1997, Vol. 11, No. 3, p. 3.

 Baron, R. J. (1981). Mechanisms of human facial recognition. International
Journal of Man-Machine Studies, 15:137-178.

 Beymer, D. and Poggio, T. (1995). Face Recognition From One Example View.
A.I. Memo No. 1536, C.B.C.L. Paper No. 121, MIT.

 Bichsel, M. (1991). Strategies of Robust Object Recognition for Automatic
Identification of Human Faces. PhD thesis, Eidgenossische Technische
Hochschule, Zurich.
 Brennan, S. E. (1982). The caricature generator. M.S. Thesis, MIT.

 Brunelli, R. and Poggio, T. (1993). Face Recognition: Features versus
Templates. IEEE Transactions on Pattern Analysis and Machine Intelligence,
15(10):1042-1052.

 Craw, I., Ellis, H., and Lishman, J.R. (1987). Automatic extraction of face
features. Pattern Recognition Letters, 5:183-187, February.

 Deffenbacher, K.A., Johanson, J., and O'Toole, A.J. (1998). Facial ageing,
attractiveness, and distinctiveness. Perception, 27(10):1233-1243.

 Dunteman, G.H. (1989). Principal Component Analysis. Sage Publications.

 Frank, H. and Althoen, S. (1994). Statistics: Concepts and Applications.
Cambridge University Press, p. 110.

 Gauthier, I., Behrmann, M. and Tarr, M. (1999). Can face recognition really
be dissociated from object recognition? Journal of Cognitive Neuroscience, in
press.

 Goldstein, A.J., Harmon, L.D., and Lesk, A.B. (1971). Identification of
human faces. Proc. IEEE, Vol. 59, p. 748.

 de Haan, M., Johnson, M.H. and Maurer, D. (1998). Recognition of individual
faces and average face prototypes by 1- and 3-month-old infants. Centre for
Brain and Cognitive Development, Department of Psychology, Birkbeck College.

 Hadamard, J. (1923). Lectures on the Cauchy Problem in Linear Partial
Differential Equations. Yale University Press.
