

FACIAL DETECTION USING DEEP

LEARNING
P. SARANYADEVI¹ and R. DHARUNKUMAR²
¹II B.Sc (CS with DA), KPR College of Arts Science and Research, Coimbatore
²II B.E Automobile, Government College of Engineering, Erode
¹Pasupathinathan652@gmail.com and ²dharunkumar1104@gmail.com
Abstract:

In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their names; now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is roughly as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and we have surveillance cameras in public places for video capture as well as security purposes. The main advantages of this algorithm over others are uniqueness and approval, and we need both speed and accuracy for identification. Face detection is really a series of several related problems: first, look at a picture and find all the faces in it; second, focus on each face and understand that even if a face is turned in an odd direction or shot in bad lighting, it is still the same person; third, select features that can be used to identify each face uniquely, such as the size of the eyes or of the face; finally, compare these features to the data we already have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are very good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step of the process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video, and authentication for banking and security system access.

Keywords: face detection, OpenCV, NumPy, DBN, RBM connections.

1. INTRODUCTION

In the last few decades, a lot of research has been done on face detection, and we can now identify a person's face without the help of any human support. In this paper, a system is implemented to evaluate human face detection. Face detection is a model that can be used in many types of devices to detect faces in digital images. It is a specific case of object-class detection: it searches for the location and size of all the features belonging to a given class. The main priority of this model is frontal face detection. The face detection model first detects expected human eye areas by evaluating all the possible valley regions in a given gray-level frame or picture. The fitness value of a feature is determined based on its projection on the eigenfaces. After many iterations, the symmetry of the face is determined and the presence of various facial features is tested and confirmed. In other terminology, face detection is also known as face-priority AF (auto focus), a camera function that detects human faces so that focus is set along with appropriate exposure. OpenCV will be used for face detection.

FLOW CHART OF FACE DETECTION MODEL

1.1 OpenCV

OpenCV (Open Source Computer Vision) is a library started by Intel in 1999 that helps in processing images. Its main focus is image processing, and it implements many of the latest algorithms. It comes with a programming interface to Python and is released under a BSD license, so it can be used in academic projects. It provides the CascadeClassifier class used for face detection. To perform face detection, the following modules need to be imported:

1. cv2 – the OpenCV package, which provides the image-processing functions.
2. numpy – the image will be stored in NumPy arrays.
3. sys – to perform console-related functions such as taking input from the console.

The first step is detecting the face in each image using the Haar cascades provided by OpenCV. The Haar cascade files ship with OpenCV and can be found in the data/haarcascades directory of the OpenCV installation. haarcascade_frontalface_default.xml and haarcascade_eye.xml are used to detect the face and the eyes, respectively. The XML files are loaded using the cv2.CascadeClassifier function, which takes the path to the file.

1.2 haarcascade_frontalface_default

Using the Haar cascade approach, face detection is done by the following classifiers: haarcascade_eye and haarcascade_frontalface_default. To train such a classifier, the algorithm requires a number of images (positive and negative), from which the features are extracted. Figure 1 below shows the use of a Haar feature. Every feature considered by this model is a single value obtained as the difference between the aggregate of the pixels under the white rectangle and the aggregate of the pixels under the black rectangle.
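The white-minus-black rectangle difference described above is usually computed with an integral image, which lets any rectangle sum be read off in four lookups. The following is an illustrative NumPy sketch; the coordinates and the synthetic test image are made up for the example, not taken from the paper:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns of the image."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1], read from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect_feature(img, r0, c0, r1, c1):
    """Two-rectangle Haar feature: sum under the white (left) half
    minus sum under the black (right) half of the window."""
    ii = integral_image(img.astype(np.int64))
    mid = (c0 + c1) // 2
    white = rect_sum(ii, r0, c0, r1, mid)
    black = rect_sum(ii, r0, mid, r1, c1)
    return white - black

# Tiny synthetic 24x24 image: left half bright, right half dark,
# so the feature value over the full window is strongly positive.
img = np.zeros((24, 24), dtype=np.uint8)
img[:, :12] = 255
print(haar_two_rect_feature(img, 0, 0, 24, 24))  # prints 73440 (24*12*255)
```

Over a 24 × 24 window, sliding and rescaling such rectangles in all positions is what produces the roughly 160,000 candidate features mentioned in the literature survey.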

1.3 Deep Belief Network (DBN)

In machine learning, a DBN is described as a graphical model, or a type of deep learning architecture, that probabilistically reconstructs its inputs. In the schematic overview of a DBN, the arrows are the directed connections present in the graphical model. A DBN is a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) and autoencoders, where each subnetwork's hidden layer serves as the visible layer for the next.

2. Literature survey

Face detection is a widely studied topic in computer vision. In the early days, i.e. before 2000, many studies and practical systems for face detection were unsatisfactory, until Viola and Jones proposed their work. Viola and Jones [1][2] were the first to apply rectangular features to the face. However, their method has drawbacks, as its feature set is large: in a 24 × 24 image, the total number of Haar-like features is 160,000 [3], and it does not handle faces in the wild or non-frontal faces. After this problem was identified, much effort was put into introducing more complicated features (HOG, SIFT, SURF, and ACF) [4][5]. The work in [6] introduced the new NPD feature, which compares the intensities of two pixels. Another well-known method is Dlib [7], which uses a support vector machine as the classifier for face detection. Improving the robustness of detection is another heavily studied topic. One simple method is to combine multiple detectors that are trained separately on different views [8]. Zhu et al. [9] applied multiple deformable models to capture faces in different views. Shen et al. [10] gave a retrieval-based model combined with different learning methods. These models require training and testing that take more time, and their performance is lower. In 2002, Garcia et al. [11] introduced a neural network to find semi-frontal human faces in complex images. In 2005, Osadchy et al. [12] trained a convolutional neural network for face detection. Over the last few years, a lot of face detection and face recognition work has been done, since it is the best way to recognize a person without requiring any human effort, and many methods have been invented for face recognition and face detection.

3. Methodology

A DBN is an implementation of a deep neural net. A DBN builds a neural network layer by layer, starting from the input layer and ending with the output layer; between the input and output, a number of hidden layers are present, and every layer is trained as an RBM. An RBM extracts features in order to reconstruct its inputs. By combining RBMs we obtain a powerful new model that solves our problem: the DBN. Like an MLP, a DBN can also be seen as back propagation combined with RBMs to provide a powerful trained network. In terms of network structure, a DBN is identical to an MLP, but the training of the two is entirely different, and this training process is the key to developing powerful trained networks. A DBN can be described as a stack of RBMs, in which the hidden layer of one RBM serves as the visible layer of the one above it. After the first RBM is constructed and trained on the input, its hidden layer becomes the visible layer for the second RBM. The second RBM is then trained with the help of the first, and this process is repeated until every layer in the network finishes training. In the face detection process, the hidden layers find the edges of the face and the visible layer holds the features of the face. A DBN works globally, which should improve the model's performance, much like a camera lens slowly focusing on a picture; the precise reason a DBN works better is highly technical, but a stack of RBMs works as a single unit. After the initial training, these models create RBMs that detect hidden patterns in the data. To finish the training, we introduce labels to the patterns for supervised learning and need a sample set of features, which improves our results. For this we need a small data set that is reasonable for our real-world application. The main reason we choose a DBN is that the training process completes in a reasonable time and it provides very good results compared to other algorithms. In the DBN we use a recurrent feed-forward neural network (RFFN), because it can handle changes in the input/output size, whereas conventional networks handle only a fixed input/output size. In this method, the output layer forwards its output as input for the next pass after it finishes; if the training model is not complete, it comes back to the start and trains by considering the previous output as input for the present training.
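The layer-wise RBM training described above can be made concrete with a single binary RBM trained by one-step contrastive divergence (CD-1). This is a generic NumPy illustration of the standard RBM update, not the paper's implementation; the layer sizes, learning rate, and toy data are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One full-batch CD-1 update; returns reconstruction error."""
        ph0 = self.hidden_probs(v0)                       # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hiddens
        pv1 = self.visible_probs(h0)                      # reconstruction
        ph1 = self.hidden_probs(pv1)                      # negative phase
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))

# Toy data: two complementary binary prototypes.
data = np.zeros((200, 16))
data[:100, :8] = 1.0   # pattern A: left half on
data[100:, 8:] = 1.0   # pattern B: right half on

rbm = RBM(n_visible=16, n_hidden=8)
errors = [rbm.cd1_step(data) for _ in range(300)]
```

Stacking such RBMs, by feeding each trained layer's hidden activations to the next as its visible data, gives the greedy layer-by-layer construction the methodology describes.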
3.1 Algorithm

Step 1: Train the first layer as an RBM on the raw input x = h0.
Step 2: Using the first layer, obtain the visible pattern data for the second layer.
Step 3: Train the second layer, taking the output of the first layer as its visible layer, in combination with the second hidden layer.
Step 4: Repeat steps 2 and 3 until the total training process is complete.
Step 5: Tune all the parameters using a small labelled data set to increase the accuracy of the results, i.e., supervised fine-tuning.

4. Implementation and Results

First, we need to install the OpenCV package for Python. This package can be downloaded from the Python package index or installed with the pip install command. We then use the two cascade files haarcascade_frontalface_default.xml and haarcascade_eye.xml, which ship with OpenCV and can be used freely under its license. The whole code is implemented in Python and needs a working webcam to capture images or videos. The model is implemented successfully and is able to recognise faces in still images, videos, paintings, and webcam captures. For still images and paintings, the model is able to recognise eyes as well. Results for different images are shown below.
5. Conclusions

The proposed model is able to recognize faces correctly, but when tried on videos it takes more time for processing. The advantage of this model is that it can also recognize blurred images and side-face images, which other traditional models are incapable of recognizing. The only drawback is that it fails to recognize eyes behind glasses. In future, this work can be extended to recognize persons from video capture, which will be helpful in getting identities from CCTV cameras so that police can identify a person in no time. It can also be implemented in home security systems.

References:

[1] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, volume 1, pages I-511. IEEE, 2001.
[2] Paul Viola and Michael J Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
[3] T. Mita, T. Kaneko, and O. Hori. Joint Haar-like features for face detection. In Proceedings of the Tenth IEEE International Conference on Computer Vision. IEEE, 2005.
[4] Jianguo Li and Yimin Zhang. Learning SURF cascade for fast and accurate object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3468–3475, 2013.
[5] Bin Yang, Junjie Yan, Zhen Lei, and Stan Z Li. Aggregate channel features for multi-view face detection. In Biometrics (IJCB), 2014 IEEE International Joint Conference on, pages 1–8. IEEE, 2014.
[6] Shengcai Liao, Anil K Jain, and Stan Z Li. A fast and accurate unconstrained face detector. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):211–223, 2016.
[7] Davis E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.
[8] Michael Jones and Paul Viola. Fast multi-view face detection. Mitsubishi Electric Research Lab TR-20003-96, 3:14, 2003.
[9] Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE, 2012.
[10] Xiaohui Shen, Zhe Lin, Jonathan Brandt, and Ying Wu. Detecting and aligning faces by image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3460–3467, 2013.
[11] C. Garcia and M. Delakis. A neural architecture for fast and robust face detection. In Pattern Recognition, 2002. Proceedings. 16th International Conference on, 2002.
[12] M. Osadchy, Y. L. Cun, M. L. Miller, and P. Perona. Synergistic face detection and pose estimation with energy-based model. In Advances in Neural Information Processing Systems (NIPS), 2005.
