A Summer/Industrial Training Report On: Core and Advance Java


A

SUMMER/INDUSTRIAL
TRAINING REPORT ON
CORE AND
ADVANCE JAVA

Submitted in partial fulfillment of the requirements for the award of Degree of


BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE & ENGINEERING
UNDER THE SUPERVISION OF:
Mr. Manish Gupta

(1st April, 2020 – 31st May, 2020)

Submitted by:
PANDEY HIMANSHU
(Roll No.: 1900100109009)

Submitted to:
Mr. Manish Gupta
(Assistant Professor)

UNITED INSTITUTE OF TECHNOLOGY, NAINI, PRAYAGRAJ (U.P.)


Automated Multiple Face Recognition using Deep Learning for Security and
Surveillance Applications

Abstract. Face recognition remains a difficult problem because of the large number of unconstrained ("wild") datasets. Deep learning has been a major research focus in recent years; because of its superior performance it is widely used in pattern recognition, and most deep learning architectures are built from carefully designed CNNs. Deep learning has provided effective solutions to the recognition problem. In this paper our purpose is to study deep-learning-based face recognition under challenging conditions such as varying head poses, difficult illumination, and faulty facial-landmark localization, and to evaluate its accuracy. We use OpenCV with Haar cascades to detect faces, eyes, and smiles, the LBPH face recognizer to train on and recognize faces, and CNNs (Convolutional Neural Networks) for facial feature extraction with higher accuracy.

Keywords: CNN, face recognition, deep learning, LBPH, OpenCV, Haar cascade.

1 Introduction

Over the last two decades, face recognition has been studied extensively because of its many possible applications in the fields of biometrics and information security, and countless algorithms have been proposed to recognize human faces from images or videos. Since we live in a digital world and expect everything to be digital, face recognition has become a trending topic. Facial recognition has become standard in essentially every personal device, and almost every security organization relies on it in one way or another. As the number of face recognition applications increases, research in this area has grown rapidly. It is used all around the world: for security purposes in CCTV systems, for identifying people in cases of theft or murder, in online payments, in mobile applications such as face unlock, Facebook, and Snapchat, in healthcare centers for securing patient documents, for biometric attendance in schools, colleges, and offices, and in law enforcement, command administration, risk inspection, escape tracking, surveillance, fugitive hunts, smart cards, ATMs, driving licenses, and passports, to name a few.

Many researchers have carried out projects on this topic and used many different techniques to address the face recognition problem; the algorithms are designed so that the system trains on the given dataset on its own. Face recognition still suffers from several problems: limited accuracy, poor feature extraction, poor face detection, and poor recognition, especially in the case of twins or when the face is turned at different angles. These problems can be overcome with deep learning, because deep models are able to learn discriminative features on their own.

1.1 Face recognition using deep learning:

Deep learning is a branch of machine learning. It is an AI technique that imitates the way the human brain processes data for detecting objects, recognizing speech, translating languages, and making decisions. Deep learning is able to learn without human supervision, drawing from data that is both unstructured and unlabeled. As a form of machine learning, it can be used, among other functions, to help detect fraud and money laundering. Deep learning can make sense of huge amounts of unstructured data that would normally take humans decades to understand and process. It is a class of machine learning algorithms that uses several layers to successively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts meaningful to humans such as digits, letters, or faces.
Fig.1. Working of CNN in deep learning.
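To make the idea of successive feature extraction concrete, below is a minimal sketch of a small CNN written with the Keras API in Python. The 64x64 grayscale input size, the layer widths, and the 10-identity output are illustrative assumptions, not the network used in this work.

import tensorflow as tf
from tensorflow.keras import layers, models

# A toy CNN: early convolutional layers respond to edge-like patterns, deeper
# layers combine them into higher-level facial features, and the dense layers
# map the resulting embedding to identity scores.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),             # grayscale face crops (assumed size)
    layers.Conv2D(32, 3, activation="relu"),     # low-level features such as edges
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),     # higher-level parts of the face
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),        # compact face representation
    layers.Dense(10, activation="softmax"),      # 10 identities (placeholder)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()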

2 Literature survey

Early researchers worked on face recognition, but they identified a person's face only in single images by storing many images of that person; with more stored data they could detect the face in an image. Since that data must be stored in advance, this paper instead concentrates on recognizing a face in both images and videos. Previous studies worked on detecting individual parts such as the jaw line, nose, mouth, and eyes, and modelled the face by comparing the different detected features and the relationships among them. Early on, Bledsoe [1] and Kanade [2] worked on several automatic and semi-automatic face recognition schemes; they designed and grouped faces based on the distances between facial points and the ratios among them. Geoffrey Hinton used sparse restricted Boltzmann machines [3] and "deep belief networks" [4] for shared representation learning, cross-modal learning, and related tasks. After these works, P. Ekman and W. V. Friesen developed the Facial Action Coding System [5] and studied facial behavior, classifying it with two conceptual approaches.

Face recognition is a demanding task due to its elaborate nature. Face detection is a digital technique that determines the position and dimensions of a human face in an arbitrary image; with the same approach we can also detect other objects such as trees, cars, buildings, and bodies, which is why the technique is also called object class detection. The challenge in this process is to detect the position and dimensions of the object in an image. We also need face localization, which is the process of locating facial parts in a given image. "Face recognition from video using generalized mean deep learning neural network" is an algorithm by Poonam Sharma and R. N. Yadav [6]. They detect faces in video using deep neural networks, recognize them very accurately, and show that their algorithm is well suited to detecting faces with high accuracy. "Deep learning networks for face detection" is an algorithm by Xueyi Ye, Xueting Chen, Huahua Chen, Yafeng Gu, and Qiuyun Lv [7]. They propose face recognition using deep learning and present ways to handle the challenge of detecting faces accurately in unfavorable situations. "An incremental face recognition system based on deep learning", proposed by Lufan Li, Zhang Jun, Jiwaei Fei, and Shuohao Li [8], is an algorithm based on deep learning techniques implemented with the FaceNet network architecture; it is designed and executed to update and upgrade itself as new data arrives. "Facial detection with expression recognition using artificial neural networks", proposed by Michel Owayjan, Roger Achkar, and Moussa Iskandar [9], is an automated vision system implemented in MATLAB. Its Artificial Neural Network (ANN) uses a multilayer perceptron. The system can crop faces, handle non-white backgrounds, and categorize multiple faces placed in the same image.

Table 1. Literature survey of face recognition using deep learning

Sl. No. | Paper title                                                                       | Year | Methodology                                        | Training dataset
1       | Face recognition from video using generalized mean deep learning neural network  | 2016 | Generalized mean method                            | PaSC and YouTube datasets
2       | Deep learning network for face detection                                         | 2015 | Multi-layer non-linear mapping                     | -
3       | An incremental face recognition system based on deep learning                    | 2017 | S-DDL using SVM algorithm training                 | FaceNet dataset
4       | Face detection with expression recognition using artificial neural networks      | 2016 | MATLAB, ANN with back propagation                  | -
5       | Face recognition system in Android using neural networks                         | 2015 | Feed-forward algorithm, back-propagation algorithm | -
3 SYSTEM ARCHITECTURE

In this work, facial identification is carried out by the following sequence of steps in the algorithm. First, when a person's face is detected, a bounding frame is formed around it. Second, facial features are extracted with the help of a CNN, and the extracted facial features are analyzed by the deep learning architecture to improve accuracy. Finally, these derived features are stored in the system and classified accordingly.

Fig.2. Steps in face recognition

Fig.3. Proposed system architecture

When it comes to humans, how do we recognize people? When we meet a person for the first time, we usually observe their facial features: eyes, nose, mouth, eyebrows, jaw line, and so on. The next time we meet that person, we recognize them by remembering those unique facial features. It is very easy for humans to recognize faces. But what about a computer system? How does it recognize a face?
It is very difficult for a computer system to recognize a face, so we have to train it. Training and recognition have become easy using deep learning with CNNs, OpenCV, and the LBPH face recognizer.
There are three easy steps to recognize faces:
data gathering, training the recognizer, and recognition. They are explained below.
• Data gathering: carefully gather images of the faces of the persons to be identified, so that the facial features required for identification are captured.

• Train the recognizer: feed the face data and the corresponding name of each face to the recognizer so that it can learn.

• Recognition: feed new images of those people and check whether the face recognizer you just trained identifies them.

In data gathering we collect face data for every person you want to identify, for example as in the sketch below.
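A minimal data-gathering sketch in Python with OpenCV follows. The dataset folder name, the numeric person label, the 30-sample count, and the use of the bundled frontal-face Haar cascade are assumptions for illustration.

import os
import cv2

face_id = 1                                   # numeric label for this person (assumed)
os.makedirs("dataset", exist_ok=True)
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)                     # default webcam

count = 0
while count < 30:                             # collect 30 face samples
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        count += 1
        # save the cropped grayscale face, named by person label and sample number
        cv2.imwrite(f"dataset/user.{face_id}.{count}.jpg", gray[y:y + h, x:x + w])
cam.release()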

3.1 HAARCASCADE

Haar cascades are used to identify facial features easily. Haar features are the important parts of the classifier; with their help, the facial features in the provided images can be detected efficiently. The result of each feature is obtained in the following steps. First, the sum of pixel intensities under the black rectangle and under the white rectangle is computed. The difference between these sums gives a single numerical value, which also supports detecting faces at numerous angles and alignments. The detector scans for these features starting from the top-left corner and sweeps over the complete image to determine whether a face is present. This procedure is repeated many times to get the best possible result; in every pass, partial results are produced and compiled in subsequent rounds, and the complete result is assembled once all features have been evaluated. The results accumulated over a complete image reach the desired precision only when we work with a monochrome (grayscale) version of the image.
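As a sketch of how the cascade scan described above is used in practice, the Python snippet below detects a face and then searches for eyes and a smile inside the face region. The input file name and the detectMultiScale parameters are assumptions; the cascade files are the ones bundled with OpenCV.

import cv2

face_c = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_c = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
smile_c = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("face.jpg")                       # input image (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # Haar cascades work on grayscale

# the detector sweeps Haar windows over the whole image at multiple scales
for (x, y, w, h) in face_c.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi = gray[y:y + h, x:x + w]                   # search eyes/smile only inside the face
    for (ex, ey, ew, eh) in eye_c.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    for (sx, sy, sw, sh) in smile_c.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20):
        cv2.rectangle(img, (x + sx, y + sy), (x + sx + sw, y + sy + sh), (0, 0, 255), 2)

cv2.imwrite("face_annotated.jpg", img)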

Training the face recognizer:

When it comes to training the recognizer, there are three algorithms we can use, each with its corresponding OpenCV factory function:

• Local Binary Patterns Histograms (LBPH): cv2.face.createLBPHFaceRecognizer()

• FisherFaces: cv2.face.createFisherFaceRecognizer()

• EigenFaces: cv2.face.createEigenFaceRecognizer()
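Note that the factory names listed above come from older OpenCV releases; in recent opencv-contrib-python builds the equivalent calls are the *_create() functions shown in this small sketch (the exact names depend on the installed OpenCV version):

import cv2

lbph = cv2.face.LBPHFaceRecognizer_create()      # Local Binary Patterns Histograms
fisher = cv2.face.FisherFaceRecognizer_create()  # FisherFaces
eigen = cv2.face.EigenFaceRecognizer_create()    # EigenFaces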

The EigenFaces recognizer is based on the observation that, for face recognition, it is not necessary to consider the entire face. We humans recognize a person by his or her distinctive features, such as the eyes, mouth, nose, jaw line, and eyebrows.

This means we concentrate on the areas of the face that differ most from person to person. For instance, from the eyes to the nose there is a noticeable change, and similarly from the nose to the mouth. You can compare faces by looking at these areas, because by capturing the maximum variation among faces they help you distinguish one face from another.
Fig.4. EigenFaces recognition

The EigenFaces recognizer works as follows. All the training images of all the people are taken together and considered as a whole. The relevant and useful features are extracted from them and the rest is discarded. These important features are called principal components. Fig. 4 shows the variance extracted from a list of faces.

The FisherFaces recognizer is an improved version of EigenFaces. As we have seen, the EigenFaces recognizer considers all training images as a whole and extracts the principal components from them at once. By doing so, it fails to focus on the features that discriminate one individual's face from another's; it mainly captures the features that represent all faces in the training set as a whole.

Fig.5. FisherFaces recognition

FisherFaces adapts EigenFaces so that it extracts features from the faces of each person separately instead of from the whole set. By doing this, even if one individual's face images show large illumination changes, the feature extraction for other people is not affected. The FisherFaces recognizer extracts the principal components that discriminate one individual from another, ensuring that the features of one person do not dominate over the others. The following image shows how principal components produced by the FisherFaces algorithm look.
Fig.6. Working of the FisherFaces recognizer principal components

One thing to note here is that FisherFaces only prevents the features of one individual from becoming dominant; it still treats illumination changes as a useful feature. We know that lighting variation is not a useful feature to extract, because it is not part of the actual face.

Local Binary Patterns Histograms (LBPH) face recognizer

The LBPH face recognizer is an improved version of both of the above algorithms and avoids their main drawback:
• Both EigenFaces and FisherFaces are affected by the illumination of light in the environment.

Now let us see the LBPH face recognizer process and how it overcomes this problem.

The LBPH face recognizer process:

Let us see how the histogram is built. Take a 3x3 window and move it across the image. At each position, compare the center pixel with each of its neighbours: mark a neighbour 1 if its intensity is greater than or equal to the center pixel, and 0 otherwise. Read these 0/1 values inside the 3x3 window in a clockwise order to obtain a binary pattern. Applying this procedure over the entire image gives a map of local binary patterns.
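A small NumPy sketch of this 3x3 step is given below; the example window values are made up for illustration.

import numpy as np

def lbp_code(window):
    # window: 3x3 array of pixel intensities; returns the decimal LBP code
    center = window[1, 1]
    # eight neighbours read clockwise starting at the top-left pixel
    neighbours = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                  window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(bit << (7 - i) for i, bit in enumerate(bits))

example = np.array([[12, 15,  8],
                    [ 9, 10, 20],
                    [30,  7,  6]])
print(lbp_code(example))   # neighbours >= 10 become 1, giving 11010010 = 210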

Fig.7. Working of the LBPH face recognizer process

The LBP values are then converted from binary form; this is how local binary patterns are applied to face detection and recognition.
Once the list of local binary patterns is created, each pattern is converted into a decimal number using binary-to-decimal conversion, and then a histogram of these values is built for each image. A sample histogram of one image is shown below:

Fig.8. Histogram of each image

In the end, you will have one histogram for each face image in the training data set. That means that if you have 100 images in the training data set, LBPH will extract 100 histograms after training and store them for later recognition. The procedure also keeps track of which histogram belongs to which person.
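A minimal training sketch with OpenCV's LBPH recognizer follows. It assumes the dataset/ folder and the user.<label>.<n>.jpg naming from the data-gathering sketch earlier, and it requires the opencv-contrib-python package.

import os
import cv2
import numpy as np

faces, labels = [], []
for name in os.listdir("dataset"):
    label = int(name.split(".")[1])                        # numeric person id from the file name
    img = cv2.imread(os.path.join("dataset", name), cv2.IMREAD_GRAYSCALE)
    faces.append(img)
    labels.append(label)

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(labels))                  # one LBP histogram per training face
recognizer.write("trainer.yml")                            # persist the trained histograms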

Afterward, during recognition, the process is as follows: whenever the recognizer is given a new image, it generates a histogram for that image, compares it with the previously stored histograms, and returns the best match.

Below is a comparison of faces and their respective LBPH images. We can see that the local binary pattern faces are not affected by changes in lighting conditions:

Fig.9. Faces are recognized under any lighting condition

During training, the recognizer first reads the training images of each person along with their labels, detects the face in every image, and assigns each detected face the numbered label of the person it belongs to. For these reasons the LBPH face recognizer is better than the other two.

That is why we use OpenCV's LBPH face recognizer for training faces. The face recognition algorithm using deep learning works as follows: first it reads the training images of each person along with their identity, detects the face in each image, and assigns each detected face the numbered label of the person it belongs to. Then it trains OpenCV's LBPH recognizer by feeding it this prepared data. When new faces appear in front of the camera, the system preprocesses them, extracts facial landmarks such as the eyes, nose, mouth, eyebrows, and jaw line, and tries to match them against the given data; if there is a match it shows the person's name. We use Haar cascades for the detection of facial features such as the eyes and smile. A sketch of this recognition step is shown below.
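The following sketch illustrates the live recognition step. The names dictionary, the trainer.yml file, and the distance threshold of 80 are assumptions carried over from the earlier sketches.

import cv2

names = {1: "Person 1"}                                    # label -> display name (assumed)
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainer.yml")                             # histograms saved during training

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        # lower distance means a closer match; 80 is an assumed cut-off
        name = names.get(label, "Unknown") if distance < 80 else "Unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                  # press q to quit
        break
cam.release()
cv2.destroyAllWindows()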

4 RESULTS

When the algorithm runs, we first give it a new image captured through the webcam. Second, the recognizer creates a histogram of that new face. Third, it compares the new histogram with the stored histograms produced during training. Finally, it identifies the best match and tags the person's name just above the top-left corner of the detection box. Using a CNN we can extract facial features such as the eyes, nose, mouth, jaw line, and eyebrows more efficiently, and using Haar cascades we can detect faces more efficiently, so the algorithm performs well. By increasing the number of layers of the neural network and the amount of information about each person, the problem of recognizing twins can be reduced. As shown in the image below, the system detects a face, recognizes the person's name, detects the eyes, detects a smile, and so on.

Fig.10. The result we obtained

5 CONCLUSION

By using deep learning algorithms we obtain good results for face recognition. The system first detects faces in a given input image with the help of the Haar cascade; its output serves as input to the face-tagging stage, which then provides the tagged face with the corresponding name as output. When it comes to face detection, whenever a face is detected, the possibility that the face can be recognized is represented by the detection box. Once a face is detected, it is checked against the input images we provided; if it matches, the system shows the person's name, tagging it just above the top-left corner of the detection box. We extend the concept of face recognition using deep learning to detect faces automatically and tag the person's name accurately. If you want more accuracy, you should provide more input images of the person: as the number of images increases, the accuracy also increases.

6 FUTURE WORK
Governments across the world are ready to invest their resources in facial recognition technology; the leaders of the face recognition market are the US and China. The technology is set to advance and will attract heavy investment in the coming years. Surveillance and security are the major industries that will be most actively driven by it. Schools, colleges, and even healthcare institutions are also planning to implement facial recognition technology on their premises for better management, and the sophisticated technology used in facial recognition is also making its way into the robotics industry.
7 REFERENCES

1. Bledsoe, W. W., "The model method in facial recognition", Tech. Rep. PRI:15, Panoramic Research Inc., Palo Alto, CA, 1964.
2. Kanade, T., "Computer recognition of human faces", Birkhäuser, Basel, Switzerland, and Stuttgart, Germany, 1973.
3. Hinton, G., "A Practical Guide to Training Restricted Boltzmann Machines".
4. Hinton, G., "Deep belief networks", Scholarpedia, 4(5): 5947, 2009.
5. Ekman, P. and Friesen, W. V., "Facial Action Coding System", 1977.
6. Sharma, P., Yadav, R. N., Arya, K. V., "Face Recognition from Video using Generalized Mean Deep Learning Neural Network", 2016 4th International Symposium on Computational and Business Intelligence.
7. Ye, X., Chen, X., Chen, H., Gu, Y., Lv, Q., "Deep Learning Network for Face Detection", 978-1-4673-7005-9/2015, IEEE.
8. Li, L., Jun, Z., Fei, J., Li, S., "An incremental face recognition system based on deep learning", DOI: 10.23919/MVA.2017.7986845, IEEE.
9. Owayjan, M., Achkar, R., Iskandar, M., "Facial Detection with Expression Recognition using Artificial Neural Network", DOI: 10.1109/MECBME.2016.7745421, IEEE.
10. Stoimenov, S., Tsenov, G. T., Mladenov, V. M., "Face Recognition System in Android Using Neural Networks", DOI: 10.1109/NEUREL.2016.7800138, IEEE.
